Unnamed: 0 (int64, 0 to 16k) | text_prompt (stringlengths 110 to 62.1k) | code_prompt (stringlengths 37 to 152k)
---|---|---|
1,100 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This is the <a href="https
Step1: How do we define the direction of the Earth's magnetic field?
The Earth's magnetic field is a vector. To define a vector we need to choose a coordinate system. We use a right-handed system
Step2: Magnetic applet
Based on the prism that you made above, the magnetic applet below computes the magnetic field at the receiver locations and provides both a 2D map (left) and a profile line (right).
For the prism, you can alter | Python Code:
import numpy as np
from geoscilabs.mag import Mag, Simulator
%matplotlib inline
Explanation: This is the <a href="https://jupyter.org/">Jupyter Notebook</a>, an interactive coding and computation environment. For this lab, you do not have to write any code, you will only be running it.
To use the notebook:
- "Shift + Enter" runs the code within the cell (so does the forward arrow button near the top of the document)
- You can alter variables and re-run cells
- If you want to start with a clean slate, restart the Kernel either by going to the top, clicking on Kernel: Restart, or by "esc + 00" (if you do this, you will need to re-run the following block of code before running any other cells in the notebook)
This notebook uses code adapted from
SimPEG
- Cockett, R., S. Kang, L.J. Heagy, A. Pidlisecky, D.W. Oldenburg (2015, in review), SimPEG: An open source framework for simulation and gradient based parameter estimation in geophysical applications. Computers and Geosciences
End of explanation
#Input parameters
fileName = 'https://github.com/geoscixyz/geosci-labs/raw/main/assets/mag/data/DO27_TMI.dat'
xyzd = np.genfromtxt(fileName, skip_header=3)
B = np.r_[60308, 83.8, 25.4]
survey, dobj = Mag.createMagSurvey(xyzd, B)
# View the data and chose a profile
param = Simulator.ViewMagSurvey2D(survey, dobj)
display(param)
# Define the parametric model interactively
model = Simulator.ViewPrism(param.result)
display(model)
Explanation: How do we define the direction of the Earth's magnetic field?
The Earth's magnetic field is a vector. To define a vector we need to choose a coordinate system. We use a right-handed system:
- X (Easting),
- Y (Northing), and
- Z (Up).
Here we consider an earth magnetic field ($\vec{B}_0$) whose intensity is one. To define this unit vector, we use inclination and declination:
- Declination: An angle from geographic North (Ng) (positive clockwise)
- Inclination: Vertical angle from the N-E plane (positive down)
<img src="https://github.com/geoscixyz/geosci-labs/raw/main/images/mag/earthfield.png?raw=true" style="width: 60%; height: 60%"> </img>
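As a small sketch (not part of the original lab), the corresponding unit vector in this (X = Easting, Y = Northing, Z = Up) convention can be computed from a declination D and an inclination I, both in degrees, as:
D, I = np.deg2rad(25.4), np.deg2rad(83.8)  # assuming the B vector used in the code above is [intensity, inclination, declination]
b0_hat = np.array([np.cos(I)*np.sin(D), np.cos(I)*np.cos(D), -np.sin(I)])  # components in [Easting, Northing, Up]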
What is the data: total field anomaly
We consider a typical form of magnetic data. To illustrate this we consider a susceptible object embedded in the earth.
Based upon the earth magnetic field ($\vec{B}_0$), this object will generate an anomalous magnetic field ($\vec{B}_A$). We define a unit vector $\hat{B}_0$ for the earth field as
$$ \hat{B}_0 = \frac{\vec{B}_0}{|\vec{B}_0|}$$
We measure both the earth and the anomalous magnetic fields, such that
$$ \vec{B} = \vec{B}_0 + \vec{B}_A$$
The total field anomaly $\triangle \vec{B}$ can be defined as
$$ |\triangle \vec{B}| = |\vec{B}|-|\vec{B}_0| $$
If $|\vec{B}_A|\ll|\vec{B}_0|$, then the total field anomaly $\triangle \vec{B}$ is approximately the projection of the anomalous field onto the direction of the earth field:
$$ |\triangle \vec{B}| \simeq \vec{B}_A \cdot \hat{B}_0=|\vec{B}_A|\cos\theta$$
<img src="https://github.com/geoscixyz/geosci-labs/raw/main/images/mag/totalfieldanomaly.png?raw=true" style="width: 50%; height: 50%">
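A tiny numerical sketch of this projection (made-up numbers; np is the numpy module already imported at the top of the notebook, and b0_hat is the unit vector sketched above):
Ba = np.array([10., 5., -2.])  # some anomalous field in nT
dB_total = Ba.dot(b0_hat)      # approximately the total field anomaly: B_A projected onto the earth field direction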
Define a 3D prism
Our model is a rectangular prism. Parameters to define this prism are given below:
dx: length in Easting (x) direction (meter)
dy: length in Northing (y) direction (meter)
dz: length in Depth (z) direction (meter) below the receiver
depth: top boundary of the prism (meter)
pinc: inclination of the prism (reference is a unit northing vector; degree)
pdec: declination of the prism (reference is a unit northing vector; degree)
You can also change the height of the survey grid above the ground
- rx_h: height of the grid (meter)
Green dots show a plane where we measure data.
End of explanation
plotwidget = Simulator.PFSimulator(model, param)
display(plotwidget)
Explanation: Magnetic applet
Based on the prism that you made above, the magnetic applet below computes the magnetic field at the receiver locations and provides both a 2D map (left) and a profile line (right).
For the prism, you can alter:
- sus: susceptibility of the prism
Parameters for the earth field are:
- Einc: inclination of the earth field (degree)
- Edec: declination of the earth field (degree)
- Bigrf: intensity of the earth field (nT)
For data, you can view:
- tf: total field anomaly,
- bx: x-component,
- by: y-component,
- bz: z-component
You can simulate and view the effect of remanent magnetization with the following parameters (a small sketch of how they combine follows this list):
- irt: "induced", "remanent", or "total"
- Q: Koenigsberger ratio ($\frac{M_{rem}}{M_{ind}}$)
- rinc: inclination of the remanent magnetization (degree)
- rdec: declination of the remanent magnetization (degree)
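A minimal sketch of that bookkeeping (an assumption about the convention, not code taken from the applet; Einc, Edec, rinc, rdec and Q stand for the slider values above, and unit_vector() is a hypothetical helper that converts an inclination/declination pair to a unit vector as sketched earlier):
m_induced = unit_vector(Einc, Edec)       # induced magnetization, along the earth field direction
m_remanent = Q * unit_vector(rinc, rdec)  # remanent part, with |M_rem| = Q * |M_ind|
m_total = m_induced + m_remanent          # the 'total' option corresponds to the vector sum of the two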
End of explanation |
1,101 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How many tweets are about the 'wall'?
Step1: What is the average twitter tenure of people who tweeted about the wall?
Step2: There are a couple of users tweeting multiple times, but most tweets come from distinct twitter handles
Step3: Who are the 'top tweeters' + descriptions?
Step4: What is the reach of these tweets in terms of followers?
Step5: Location of the tweets? | Python Code:
# Lowercase the hashtags and tweet body
df['hashtags'] = df['hashtags'].str.lower()
df['text'] = df['text'].str.lower()
print("Total number of tweets containing hashtag 'wall' = {}".format(len(df[df['hashtags'].str.contains('wall')])))
print("Total number of tweets whose body contains 'wall' = {}".format(len(df[df['text'].str.contains('wall')])))
wall_tweets = df[(df['hashtags'].str.contains('wall')) | (df['text'].str.contains('wall'))].copy()
print("Total number of tweets about the 'wall' = {}".format(len(wall_tweets)))
Explanation: How many tweets are about the 'wall'?
End of explanation
def months_between(end, start):
return (end.year - start.year)*12 + end.month - start.month
wall_tweets['created'] = pd.to_datetime(wall_tweets['created'])
wall_tweets['user_created'] = pd.to_datetime(wall_tweets['user_created'])
wall_tweets['user_tenure'] = wall_tweets[['created', \
'user_created']].apply(lambda row: months_between(row[0], row[1]), axis=1)
tenure_grouping = wall_tweets.groupby('user_tenure').size() / len(wall_tweets) * 100
fig, ax = plt.subplots()
ax.plot(tenure_grouping.index, tenure_grouping.values)
ax.set_ylabel("% of tweets")
ax.set_xlabel("Acct tenure in months")
plt.show()
Explanation: What is the average twitter tenure of people who tweeted about the wall?
End of explanation
tweets_per_user = wall_tweets.groupby('user_name').size().sort_values(ascending=False)
fig, ax = plt.subplots()
ax.plot(tweets_per_user.values)
plt.show()
Explanation: There are a couple of users tweeting multiple times, but most tweets come from distinct twitter handles
End of explanation
wall_tweets.groupby(['user_name', 'user_description']).size().sort_values(ascending=False).head(20).to_frame()
Explanation: Who are the 'top tweeters' + descriptions?
End of explanation
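# Note: 'friends_count' counts the accounts a user follows; a 'followers_count' column
# (if present in this dataset) would measure reach in terms of followers more directly.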
plt.boxplot(wall_tweets['friends_count'].values, vert=False)
plt.show()
wall_tweets['friends_count'].describe()
Explanation: What is the reach of these tweets in terms of followers?
End of explanation
wall_tweets.groupby('user_location').size().sort_values(ascending=False)
Explanation: Location of the tweets?
End of explanation |
1,102 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We're wasting a bunch of time waiting for our iterators to produce minibatches when we're running epochs. Seems like we should probably precompute them while the minibatch is being run on the GPU. To do this involves using the multiprocessing module. Since I've never used it before, here are my dev notes for writing this into the dataset iterators.
Step1: These can't be run directly in the notebook (most likely because multiprocessing has to pickle the mapped function, and lambdas or interactively defined functions don't pickle cleanly), so they are run as separate processes with the %%python cell magic, like so
Step2: Now doing this asynchronously
Step3: Now trying to create an iterable that will precompute its output using multiprocessing.
Step4: Then we have to try and do a similar thing, but using the RandomAugment function. In the following two cells one uses multiprocessing and one doesn't. We test them by pretending to ask for a minibatch and then sleeping, applying the RandomAugment function each time. | Python Code:
import multiprocessing
import numpy as np
p = multiprocessing.Pool(4)
x = range(3)
f = lambda x: x*2
def f(x):
return x**2
print(x)
Explanation: We're wasting a bunch of time waiting for our iterators to produce minibatches when we're running epochs. Seems like we should probably precompute them while the minibatch is being run on the GPU. To do this involves using the multiprocessing module. Since I've never used it before, here are my dev notes for writing this into the dataset iterators.
End of explanation
%%python
from multiprocessing import Pool
def f(x):
return x*x
if __name__ == '__main__':
p = Pool(5)
print(p.map(f, [1, 2, 3]))
%%python
from multiprocessing import Pool
import numpy as np
def f(x):
return x*x
if __name__ == '__main__':
p = Pool(5)
print(p.map(f, np.array([1, 2, 3])))
Explanation: These can't be run directly in the notebook (most likely because multiprocessing has to pickle the mapped function, and lambdas or interactively defined functions don't pickle cleanly in an interactive session), so they are run as separate processes with the %%python cell magic, like so:
End of explanation
%%python
from multiprocessing import Pool
import numpy as np
def f(x):
return x**2
if __name__ == '__main__':
p = Pool(5)
r = p.map_async(f, np.array([0,1,2]))
print(dir(r))
print(r.get(timeout=1))
Explanation: Now doing this asynchronously:
End of explanation
%%python
from multiprocessing import Pool
import numpy as np
def f(x):
return x**2
class It(object):
def __init__(self,a):
# store an array (2D)
self.a = a
# initialise pool
self.p = Pool(4)
# initialise index
self.i = 0
# initialise pre-computed first batch
self.batch = self.p.map_async(f,self.a[self.i,:])
def get(self):
return self.batch.get(timeout=1)
def f(self,x):
return x**2
if __name__ == '__main__':
it = It(np.random.randn(4,4))
print(it.get())
%%python
from multiprocessing import Pool
import numpy as np
def f(x):
return x**2
class It(object):
def __init__(self,a):
# store an array (2D)
self.a = a
# initialise pool
self.p = Pool(4)
# initialise index
self.i = 0
# initialise pre-computed first batch
self.batch = self.p.map_async(f,self.a[self.i,:])
def __iter__(self):
return self
def next(self):
# check if we've got something pre-computed to return
if self.batch:
# get the output
output = self.batch.get(timeout=1)
#output = self.batch
# prepare next batch
self.i += 1
if self.i < self.a.shape[0]:
self.p = Pool(4)
self.batch = self.p.map_async(f,self.a[self.i,:])
#self.batch = map(self.f,self.a[self.i,:])
else:
self.batch = False
return output
else:
raise StopIteration
if __name__ == '__main__':
it = It(np.random.randn(4,4))
for a in it:
print a
Explanation: Now trying to create an iterable that will precompute its output using multiprocessing.
End of explanation
%%time
%%python
from multiprocessing import Pool
import numpy as np
import neukrill_net.augment
import time
class It(object):
def __init__(self,a,f):
# store an array (2D)
self.a = a
# store the function
self.f = f
# initialise pool
self.p = Pool(4)
# initialise indices
self.inds = range(self.a.shape[0])
# pop a batch from top
self.batch_inds = [self.inds.pop(0) for _ in range(100)]
# initialise pre-computed first batch
self.batch = map(self.f,self.a[self.batch_inds,:])
def __iter__(self):
return self
def next(self):
# check if we've got something pre-computed to return
if self.inds != []:
# get the output
output = self.batch
# prepare next batch
self.batch_inds = [self.inds.pop(0) for _ in range(100)]
self.p = Pool(4)
self.batch = map(self.f,self.a[self.batch_inds,:])
return output
else:
raise StopIteration
if __name__ == '__main__':
f = neukrill_net.augment.RandomAugment(rotate=[0,90,180,270])
it = It(np.random.randn(10000,48,48),f)
for a in it:
time.sleep(0.01)
pass
%%time
%%python
from multiprocessing import Pool
import numpy as np
import neukrill_net.augment
import time
class It(object):
def __init__(self,a,f):
# store an array (2D)
self.a = a
# store the function
self.f = f
# initialise pool
self.p = Pool(8)
# initialise indices
self.inds = range(self.a.shape[0])
# pop a batch from top
self.batch_inds = [self.inds.pop(0) for _ in range(100)]
# initialise pre-computed first batch
self.batch = self.p.map_async(f,self.a[self.batch_inds,:])
def __iter__(self):
return self
def next(self):
# check if we've got something pre-computed to return
if self.inds != []:
# get the output
output = self.batch.get(timeout=1)
# prepare next batch
self.batch_inds = [self.inds.pop(0) for _ in range(100)]
#self.p = Pool(4)
self.batch = self.p.map_async(f,self.a[self.batch_inds,:])
return output
else:
raise StopIteration
if __name__ == '__main__':
f = neukrill_net.augment.RandomAugment(rotate=[0,90,180,270])
it = It(np.random.randn(10000,48,48),f)
for a in it:
time.sleep(0.01)
pass
%%time
%%python
from multiprocessing import Pool
import numpy as np
import neukrill_net.augment
import time
class It(object):
def __init__(self,a,f):
# store an array (2D)
self.a = a
# store the function
self.f = f
# initialise pool
self.p = Pool(8)
# initialise indices
self.inds = range(self.a.shape[0])
# pop a batch from top
self.batch_inds = [self.inds.pop(0) for _ in range(100)]
# initialise pre-computed first batch
self.batch = self.p.map_async(f,self.a[self.batch_inds,:])
def __iter__(self):
return self
def next(self):
# check if we've got something pre-computed to return
if self.inds != []:
# get the output
output = self.batch.get(timeout=1)
# prepare next batch
self.batch_inds = [self.inds.pop(0) for _ in range(100)]
#self.p = Pool(4)
self.batch = self.p.map_async(f,self.a[self.batch_inds,:])
return output
else:
raise StopIteration
if __name__ == '__main__':
f = neukrill_net.augment.RandomAugment(rotate=[0,90,180,270])
it = It(np.random.randn(10000,48,48),f)
for a in it:
print np.array(a).shape
print np.array(a).reshape(100,48,48,1).shape
break
Explanation: Then we have to try and do a similar thing, but using the RandomAugment function. In the following two cells one uses multiprocessing and one doesn't. We test them by pretending to ask for a minibatch and then sleeping, applying the RandomAugment function each time.
End of explanation |
1,103 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Initialize a Spark instance-
Step1: Get number of RDD partitions-
Step2: We will define a square function for our map operation-
Step3: The above map function generated two types of key-value pairs- (even, 1) and (odd, 0)
Step4: Now define a lambda function to generate (key, value) pairs where key = even or odd depending upon the input and
value = input
Step5: There are two types of Reduce operations- reduceByKey() and reduce(). Check PySpark documentation for more details
Step6: In MLlib you can use Dense or Sparse matrices for computation.
Create a Sparse vector for MLlib using mat-
Step7: Labeled point
A labeled point is a local vector, either dense or sparse, associated with a label/response. In MLlib, labeled points are used in supervised learning algorithms. We use a double to store a label, so we can use labeled points in both regression and classification.
Step8: Local matrix
A local matrix has integer-typed row and column indices and double-typed values, stored on a single machine.
Step9: Distributed matrix
A distributed matrix has long-typed row and column indices and double-typed values, stored distributively in one or more RDDs. It is very important to choose the right format to store large and distributed matrices. Converting a distributed matrix to a different format may require a global shuffle, which is quite expensive.
Step10: Row matrix
Step11: BlockMatrix | Python Code:
sc = pyspark.SparkContext(appName="spark-notebook")
ss = SparkSession(sc)
myRDD = sc.textFile("file:///path/to/part3/numbers.txt", 10)
Explanation: Initialize a Spark instance-
End of explanation
myRDD.getNumPartitions()
print myRDD.take(20) # get first 20 values
Explanation: Get number of RDD partitions-
End of explanation
def square(value):
return int(value)**2
newRDD = myRDD.map(square)
print newRDD.collect() # get map results
subRDD = newRDD.map(lambda x: (x, 1) if x%2==0 else (x, 0)) # using lamda functions
Explanation: We will define a square function for our map operation-
End of explanation
print subRDD.collect()
Explanation: The above map function generated two types of key-value pairs- (even, 1) and (odd, 0)
End of explanation
# your code here - one possible solution matching the (key, value) scheme described in the explanation below:
testRDD = myRDD.map(lambda x: ('even', int(x)) if int(x) % 2 == 0 else ('odd', int(x)))
print testRDD.take(10)
Explanation: Now define a lambda function to generate (key, value) pairs where key = even or odd depending upon the input and
value = input
End of explanation
from operator import add  # 'add' is used as the reduce function below
reduced = testRDD.reduceByKey(add)
print reduced.collect()
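# For comparison, a plain reduce() collapses all values of an RDD to a single result on the driver.
# (Illustrative sketch reusing testRDD from above; not part of the original notebook.)
print testRDD.map(lambda kv: kv[1]).reduce(lambda a, b: a + b)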
sc.stop()
sc = pyspark.SparkContext(appName="spark-notebook")
mat = np.array([])
with open("./numbers.txt", "r") as file:
for line in file:
mat = np.hstack((mat, np.array(int(line))))
mymat = mat[:6]
print mymat
Explanation: There are two types of Reduce operations- reduceByKey() and reduce(). Check PySpark documentation for more details
End of explanation
from pyspark.mllib.linalg import Vectors
sv = Vectors.sparse(6,[0, 1, 3, 4],[1, 2, 4, 5])
print type(sv)
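# For comparison, the dense counterpart (illustrative sketch using the mymat array built above):
dense_vec = Vectors.dense(mymat)
print dense_vec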
Explanation: In MLlib you can use Dense or Sparse matrices for computation.
Create a Sparse vector for MLlib using mat-
End of explanation
from pyspark.mllib.regression import LabeledPoint
pos = LabeledPoint(1.0, mat) # label (Y) = 1 and data (X) = the full mat array
print pos
Explanation: Labeled point
A labeled point is a local vector, either dense or sparse, associated with a label/response. In MLlib, labeled points are used in supervised learning algorithms. We use a double to store a label, so we can use labeled points in both regression and classification.
End of explanation
from pyspark.mllib.linalg import Matrix, Matrices
dm = Matrices.dense(2,2,mat[7:11]) # 2x2 dense matrix
print dm
Explanation: Local matrix
A local matrix has integer-typed row and column indices and double-typed values, stored on a single machine.
End of explanation
newmat = np.reshape(mat[:6], (2,3)) # 2x3 matrix
print newmat
Explanation: Distributed matrix
A distributed matrix has long-typed row and column indices and double-typed values, stored distributively in one or more RDDs. It is very important to choose the right format to store large and distributed matrices. Converting a distributed matrix to a different format may require a global shuffle, which is quite expensive.
End of explanation
from pyspark.mllib.linalg.distributed import RowMatrix
# Create an RDD of newmat
rows = sc.parallelize(newmat)
rowmat = RowMatrix(rows)
print rowmat.numRows(), rowmat.numCols()
Explanation: Row matrix
End of explanation
from pyspark.mllib.linalg.distributed import BlockMatrix
# Create an RDD of sub-matrix blocks.
blocks = sc.parallelize([((0, 0), Matrices.dense(2,2,mat[7:11])),
((1, 0), Matrices.dense(2,2,mat[1:5]))])
# Create a BlockMatrix from an RDD of sub-matrix blocks.
mat = BlockMatrix(blocks, 2, 2) # two 2x2 blocks stacked vertically (block rows 0 and 1)
print mat
sc.stop()
Explanation: BlockMatrix
End of explanation |
1,104 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Observation and cleaning of the data
Step1: 1.1. Missing values
Step2: Where are the missing values ?
Step3: To clean the data, we will go step by step
Step4: Check how many incomplete dyads our data contains now
Step5: Let's check again the columns with missing values
Step6: Only 7.4% of the dyads are incomplete. Let's try to replace the NaN in heights and weights with the median
Step7: The incomplete rows only represent 7%.
We will check if 2 dyads with a referee of the same country exist, one with complete data and another one with missing IAT and Exp info. In that case it is easy to complete that information
Step8: Look at how many players don't have a position
Step9: For those players we will try to assign a new position called 'Joker'
Step10: The remaining NaN represent only 0.1% and concern the columns about country and the tests in those countries, which are hard to guess. So we decide to drop the remaining dyads with NaN
Step11: 1.2. Handling different ratings (rater1 vs rater2)
We check if there are players with only one rating
We also check if one rater always give the same rate to the same player
Now we need to take one single rating for each row. We'll need to investigate a bit more the different rating
Are there players with only one rating?
Step12: Are the raters consistent?
Step13: Investigation on the raters
Values of the rating
Step14: Are the raters always agree
Step15: Let's show some plost to visualize when the rater differe
Step16: We can see that rater1 and 2 disagree the most when they have to rate "white" people.
We can also see with graph2 that when there disagree it's ony of one category.
Now we will create a new columns
Step17: And now make the color_rating categorical
Step18: Aggregate the data
One solution is to group the data by player name. Then we need to find a solution to correctly group the remaining features
Step19: Then aggregate
Step21: Encode the data for the Random Forest
The Random Forest can only handle integer or float attributes, so we have to encode the string attributes as numbers. | Python Code:
print('Number of diad: ', len(data))
print('Number of players: ', len(data.playerShort.unique()))
print('Number of referees: ', len(data.refNum.unique()))
Explanation: 1. Observation and cleaning of the data
End of explanation
complete = len(data.dropna())
all_ = len(data_total)
print('Number of row with complete data: {} ({:.3f}%)'.format(complete, (complete/all_ ) * 100 ))
print('Number of row with missing data: {} ({:.3f}%)'.format(all_-complete, (all_ -complete)/all_ * 100 ))
Explanation: 1.1. Missing values
End of explanation
def find_col_nan(d):
col = []
for c in d.columns:
if d[c].isnull().any():
col = np.append(col, c)
return col
missing_col = find_col_nan(data)
missing_col
Explanation: Where are the missing values ?
End of explanation
data = data[ ~data.rater1.isnull() & ~data.rater2.isnull()]
print('Number of row with the 2 ratings {} ({:.3f}%)'.format(len(data), len(data)/len(data_total) * 100))
onlyOne = data[ ~data.rater1.isnull() ^ ~data.rater2.isnull()]
print('Number of row with only one ratings {} ({:.3f}%)'.format(len(onlyOne), len(onlyOne)/len(data_total) * 100))
Explanation: To clean the data, we will go step by step:
- First of all we have to clean all dyads that don't have any rating, because those dyads are useless for our problem.
- Then we will look again at which columns contain missing values and how to deal with them
End of explanation
complete = len(data.dropna())
all_ = len(data)
print("After removing data without rating:")
print("-----------------------------------")
print('Number of row with complete data: {} ({:.3f}%)'.format(complete, (complete/all_ ) * 100 ))
print('Number of row with missing data: {} ({:.3f}%)'.format(all_-complete, (all_ -complete)/all_ * 100 ))
Explanation: Check how many incomplete dyads our data contains now
End of explanation
missing_col = find_col_nan(data)
missing_col
Explanation: Let's check again the columns with missing values
End of explanation
# replace missing height and weight with the median value
median_height = np.median(data['height'].dropna())
median_weight = np.median(data['weight'].dropna())
data['height'] = data['height'].fillna(value=median_height)
data['weight'] = data['weight'].fillna(value=median_weight)
complete = len(data.dropna())
all_ = len(data)
print("After removing data without rating:")
print("-----------------------------------")
print('Number of row with complete data: {} ({:.3f}%)'.format(complete, (complete/all_ ) * 100 ))
print('Number of row with missing data: {} ({:.3f}%)'.format(all_-complete, (all_ -complete)/all_ * 100 ))
missing_col = find_col_nan(data)
missing_col
Explanation: Only 7.4% of the dyads are incomplete. Let's try to replace the NaN in heights and weights with the median
End of explanation
missing_col_test = ['meanIAT', 'nIAT', 'seIAT', 'meanExp',
'nExp', 'seExp']
exist = False
def checkMissingTest(df):
global exist  # update the module-level flag defined above (otherwise the assignment below stays local)
for col in missing_col_test:
nbr_dayads = len(df)
nbr_noNaN = len(df.dropna(subset=[col]))
if nbr_dayads > nbr_noNaN & nbr_noNaN > 0:
exist = True
print('There exist valid data for ', df.Alpha_3)
grouped = pd.groupby(data, by='refCountry').apply(checkMissingTest)
print('Does it exist 2 dayads of same country, one with info on test and one with missing values in test ?: ', exist)
Explanation: The incomplete rows only represent 7%.
We will check if 2 dyads with a referee of the same country exist, one with complete data and another one with missing IAT and Exp info. In that case it is easy to complete that information
End of explanation
complete = len(data.dropna(subset=['position']))
all_ = len(data)
print("After removing data without rating:")
print("-----------------------------------")
print('Number of row with complete data: {} ({:.3f}%)'.format(complete, (complete/all_ ) * 100 ))
print('Number of row with missing data: {} ({:.3f}%)'.format(all_-complete, (all_ -complete)/all_ * 100 ))
Explanation: Look at how many players don't have a position
End of explanation
data.position = data.position.fillna('Joker')
missing_col = find_col_nan(data)
missing_col
complete = len(data.dropna())
all_ = len(data)
print("After removing data without rating:")
print("-----------------------------------")
print('Number of row with complete data: {} ({:.3f}%)'.format(complete, (complete/all_ ) * 100 ))
print('Number of row with missing data: {} ({:.3f}%)'.format(all_-complete, (all_ -complete)/all_ * 100 ))
Explanation: For those players we will try to assign a new position called 'Joker'
End of explanation
data = data.dropna()
find_col_nan(data)
Explanation: The remaining NaN represent only 0.1% and concern the columns about country and the tests in those countries, which are hard to guess. So we decide to drop the remaining dyads with NaN
End of explanation
(data.rater1.isnull() | data.rater2.isnull()).any()
Explanation: 1.2. Handling different ratings (rater1 vs rater2)
We check if there are players with only one rating
We also check if one rater always gives the same rating to the same player
Now we need to take one single rating for each row. We'll need to investigate the different ratings a bit more
Are there players with only one rating?
End of explanation
def areRaterConsistent(d):
for playerID in d.playerShort.unique():
player = d[d.playerShort == playerID]
rater1 = player.rater1.unique()
rater2 = player.rater2.unique()
if len(rater1) >1 or len(rater2) > 1:
return False
return True
print("Are the rater consistent: ",areRaterConsistent(data))
Explanation: Are the raters consistent?
End of explanation
data.rater1.unique()
Explanation: Investigation on the raters
Values of the rating
End of explanation
print("percentage of players with different ratings: ", len(data[data['rater1'] != data['rater2']])*100 / len(data), "%")
Explanation: Do the raters always agree?
End of explanation
len(data)
fig, ax = plt.subplots(1, 4, figsize=(16, 4))
ax[0].hist([data['rater1'], data['rater2']], bins=5)
ax[0].set_title("1) Raters compared \n(blue: rater1, green: rater2)")
ax[1].hist(abs(data['rater1'] - data['rater2']), bins=3)
ax[1].set_title("2) Difference (Rater1 - Rater2)")
dissagree_data = data[data['rater1'] != data['rater2']]
ax[2].hist(dissagree_data['rater1'], bins=5)
ax[2].set_title("3) Dissagreed values Rater1")
ax[3].hist(dissagree_data['rater2'], bins=5, color='seagreen')
ax[3].set_title("4) Dissagreed values Rater2")
Explanation: Let's show some plots to visualize when the raters differ
End of explanation
data['color_rating'] = -1
def color_skin_rules(row):
#Rule 1
if row.rater1 == row.rater2:
return row.rater1
#Rule2
elif row.rater2 == 0:
return 0
#Rule 3
elif row.rater1 == 1:
return 1
#Rule 4
elif row.rater1 == 0.25:
return 0.25
else:
return np.random.choice([row.rater1, row.rater2])
data.color_rating = data.apply(color_skin_rules, axis=1)
Explanation: We can see that rater1 and 2 disagree the most when they have to rate "white" people.
We can also see from graph 2 that when they disagree it is only by one category.
Now we will create a new column (color_rating in the code below) that will be our label to guess. To convert the values of rater 1 and rater 2 into one rating, we need to follow some rules that come from the graphs:
1. if rater1 and rater2 are agree, take that value
2. We can see on graph 4 that when rater2 give 0, usually, rater1 agrees => so when rater2 give 0, we take that value as the color skin
3. In graph 3, when rater1 give 1, rater2 usually agrees => when rater1 give 1, take that value as the color_skin
4. In graph 3, we can see that when rater1 rate1 give 0.25, usually rater2 agrees => take rater1
5. choose at random between both values
Then we can drop the features rater1, rater2 since we don't need them anymore
End of explanation
categorical_dict = {0: 1, 0.25: 2, 0.5: 3, 0.75: 4, 1: 5 }
data['color_rating'] = data.apply(lambda row: categorical_dict[row['color_rating']]
, axis=1).astype('category')
data = data.drop(['rater1', 'rater2'], axis=1)
data_cleaned=data.copy()
data_cleaned.to_csv('CrowdstormingDataJuly1st_preprocessed.csv')
data_cleaned.head()
Explanation: And now make the color_rating categorical:
End of explanation
clubUnique = True
leagueUnique = True
positionUnique = True
def checkFunction(player):
global clubUnique, leagueUnique, positionUnique  # update the module-level flags defined above (otherwise the assignments below stay local)
#check if the club is unique for one player
if len(player.club.unique()) > 1:
clubUnique = False
print(player.player, 'plays for more than one team: ', player.club.unique())
#check if the leagueCountry is unique
if len(player.leagueCountry.unique()) > 1:
leagueUnique = False
print(player.player, 'plays for more than one league: ', player.leagueCountry.unique())
#check if the position is unique
if len(player.position.unique()) > 1:
positionUnique = False
print(player.player, 'plays for more than one position: ', player.position.unique())
data_group = pd.groupby(data_cleaned, by=data_cleaned.playerShort).apply(checkFunction)
print("Is the club for a player unique? ", clubUnique)
print("Is the league for a player unique? ", leagueUnique)
print("Is the position for a player unique? ", positionUnique)
Explanation: Aggregate the data
One solution is to group the data by player name. Then we need to find a solution to correctly group the remaining features:
- club: we have to check if a player appears in 2 different clubs (in case of a transfer during the winter mercato) or if transfers are not taken into account (-> one (or several) hot encodings, or the majority of dyads per club)
- leagueCountry: same as club
- position: test if the player has different positions -> take the position with the majority of games?
- photoID: drop that information, the id is unique -> not relevant for our classification problem
- refNum: replace with the total of unique refs
- refCountry: same as refNum
- Alpha_3: remove: redundant information since it correspond to the refCountry
- meanIAT: make new features
- take mean
- take weighted mean (weight with nIAT)
- take weighted mean (weight with game numbers)
- meanExp: same as IAT
- seIAT:
- seExp:
First do some checks
End of explanation
def meanCards(row, test):
total_cards = row.yellowCards + row.yellowReds + row.redCards
if total_cards == 0:
return 0
else:
if(test == 'IAT'):
return ((row.meanIAT * row.yellowCards) + (row.meanIAT * row.yellowReds) \
+ (row.meanIAT * row.redCards)) / total_cards
else:
return ((row.meanExp * row.yellowCards) + (row.meanExp * row.yellowReds) \
+ (row.meanExp * row.redCards)) / total_cards
def aggreagtion(df):
first_entry = df.head(1)
# new aggregation entry
new_entry = first_entry.copy()
#sum of the info about the games
new_entry.games = df.games.sum()
new_entry.victories = df.victories.sum()
new_entry.ties = df.ties.sum()
new_entry.defeats = df.defeats.sum()
new_entry.goals = df.goals.sum()
new_entry.yellowCards = df.yellowCards.sum()
new_entry.yellowReds = df.yellowReds.sum()
new_entry.redCards = df.redCards.sum()
#drop photoID and alpha_3
new_entry.drop('photoID', inplace = True, axis=1)
new_entry.drop('Alpha_3', inplace = True, axis=1)
#refNum: number of unique ref
new_entry = new_entry.rename(columns = {'refNum': 'refCount'})
new_entry.refCount = len(df.refNum.unique())
#refCountry: replace by number of unique country
new_entry = new_entry.rename(columns = {'refCountry': 'refCountryCount'})
new_entry.refCountryCount = len(df.refCountry.unique())
#==Mean of the test result ===
#- take mean
#- take weighted mean (weight with nIAT)
#- take weighted mean (weight with game numers)
new_entry.meanIAT = df.meanIAT.mean()
new_entry.meanExp = df.meanExp.mean()
new_entry['meanIAT_nIAT'] = (df.meanIAT * df.nIAT).sum() / df.nIAT.sum()
new_entry['meanExp_nExp'] = (df.meanExp * df.nExp).sum() / df.nExp.sum()
new_entry['meanIAT_GameNbr'] = (df.meanIAT * df.games).sum() / df.games.sum()
new_entry['meanExp_GameNbr'] = (df.meanExp * df.games).sum() / df.games.sum()
new_entry['meanIAT_cards'] = df.apply(lambda r : meanCards(r, test='IAT'), axis = 1)
new_entry['meanExp_cards'] = df.apply(lambda r: meanCards(r, test = 'Exp'), axis = 1)
#????????????????????? DROP nIART nExp or NOT ?????????????????????????????
new_entry.drop('nIAT', inplace = True, axis=1)
new_entry.drop('nExp', inplace = True, axis=1)
# standard error = standard deviation / sqrt(n)
#mean of the standard deviation: mean of the variance and then sqrt
varIAT = (df.seIAT * np.sqrt(df.nIAT)) ** 2
new_entry.seIAT = np.sqrt(np.mean(varIAT))/ np.sqrt(df.nIAT)
varExp = (df.seExp * np.sqrt(df.nExp)) ** 2
new_entry.seExp = np.sqrt(np.mean(varExp))/ np.sqrt(df.nExp)
return new_entry
data_agregated = pd.groupby(data_cleaned, by="playerShort").apply(aggreagtion)
data_agregated
data_agregated.to_csv('CrowdstormingDataJuly1st_aggregated.csv')
Explanation: Then aggregate
End of explanation
from sklearn import preprocessing as pp  # assumption: 'pp' below refers to sklearn.preprocessing (the import is not shown in the original cell)
def encode(data_frame, inplace=False):
"""Encode the non-numerical columns with a LabelEncoder.
Returns a new data_frame if inplace = False, otherwise changes the given data_frame."""
_df = data_frame if inplace else data_frame.copy()
le = pp.LabelEncoder() # encoder
for col_name in _df.columns:
if _df.dtypes[col_name] == 'O':
_df[col_name] = le.fit_transform(_df[col_name])
print("encoded", col_name)
return _df
data_cleaned_encoded = encode(data_cleaned, inplace=False)
data_cleaned_encoded.head(1)
data_cleaned_aggregated_encoded = encode(data_agregated, inplace=False)
data_cleaned_aggregated_encoded.head(1)
data_cleaned_encoded.to_csv('CrowdstormingDataJuly1st_preprocessed_encoded.csv')
data_cleaned_aggregated_encoded.set_index("playerShort").to_csv('CrowdstormingDataJuly1st_aggregated_encoded.csv')
data_cleaned_aggregated_encoded.set_index("playerShort").head()
Explanation: Encode the data for the Random Forest
The Random Forest can only handle integer or float attributes, so we have to encode the string attributes as numbers.
End of explanation |
1,105 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Primer Design
One of the first things anyone learns in a molecular biology lab is how to design primers. The exact strategies vary a lot and are sometimes polymerase-specific. coral uses the Klavins' lab approach of targeting a specific melting temperature (Tm) and nothing else, with the exact Tm targeted being between 65°C and 72°C, the choice being personal preference. coral currently defaults to 72°C on the Phusion (modified Breslauer 1986) Tm calculator.
coral.design_primer is a function that takes in a sequence.DNA object and rapidly finds the 5' subsequence that is closest to the desired Tm (within a user-definable error range). If the entire sequence would make a primer with too low of a Tm, a descriptive error is produced.
For this tutorial, let's design primers that will amplify the gene EYFP.
Step1: First we read in a plasmid from Havens et al. 2012 and isolate the EYFP sequence.
Step2: Designing primers is straightforward - you just call design.design_primer with a sequence.DNA object as the input.
Step3: Designing primers and getting a string output is just the first step in primer design - we want to know whether the primers actually work and write them out to a file. The point of programming DNA is that you never copy and paste!
To simulate a PCR using the rules of molecular biology, use coral.reaction.pcr. The output is a subsequence of the template DNA - the features may not match the plasmid exactly (due to being truncated by the PCR), but the sequences match. If a primer would bind in multiple places (exact matches to the template), the pcr function will fail and give a useful message.
You can check for identical sequences using python's built in == operator.
Step4: Now that we have verified that our primers should at least amplify the DNA that we want, let's write out our primers to file so they can be submitted to an oligo synthesis company.
Step5: The csv file can then be opened in a spreadsheet application like Excel or processed by a downstream program. This is the format of the csv | Python Code:
import coral as cor
Explanation: Primer Design
One of the first things anyone learns in a molecular biology lab is how to design primers. The exact strategies vary a lot and are sometimes polymerase-specific. coral uses the Klavins' lab approach of targeting a specific melting temperature (Tm) and nothing else, with the exact Tm targeted being between 65°C and 72°C, the choice being personal preference. coral currently defaults to 72°C on the Phusion (modified Breslauer 1986) Tm calculator.
coral.design_primer is a function that takes in a sequence.DNA object and rapidly finds the 5' subsequence that is closest to the desired Tm (within a user-definable error range). If the entire sequence would make a primer with too low of a Tm, a descriptive error is produced.
For this tutorial, let's design primers that will amplify the gene EYFP.
End of explanation
plasmid = cor.seqio.read_dna("../files_for_tutorial/maps/pGP4G-EYFP.ape")
eyfp_f = [f for f in plasmid.features if f.name == 'EYFP'][0]
eyfp = plasmid.extract(eyfp_f)
print len(eyfp)
eyfp
Explanation: First we read in a plasmid from Havens et al. 2012 and isolate the EYFP sequence.
End of explanation
# Forward and reverse, one at a time using design_primer()
forward = cor.design.primer(eyfp)
reverse = cor.design.primer(eyfp.reverse_complement())
# Both at once using design_primers()
forward, reverse = cor.design.primers(eyfp)
# design_primer has many options, including adding overhangs
custom_forward = cor.design.primer(eyfp, tm=65, min_len=12,
tm_undershoot=1, tm_overshoot=1,
end_gc=True, tm_parameters="santalucia98",
overhang=cor.DNA("GGGGGATCGAT"))
print forward
print
print custom_forward
Explanation: Designing primers is straightforward - you just call design.design_primer with a sequence.DNA object as the input.
End of explanation
amplicon = cor.reaction.pcr(plasmid, forward, reverse)
amplicon == eyfp
Explanation: Designing primers and getting a string output is just the first step in primer design - we want to know whether the primers actually work and write them out to a file. The point of programming DNA is that you never copy and paste!
To simulate a PCR using the rules of molecular biology, use coral.reaction.pcr. The output is a subsequence of the template DNA - the features may not match the plasmid exactly (due to being truncated by the PCR), but the sequences match. If a primer would bind in multiple places (exact matches to the template), the pcr function will fail and give a useful message.
You can check for identical sequences using python's built in == operator.
End of explanation
# First we give our primers names (the `.name` attribute is empty by default)
forward.name = "EYFP_forward"
reverse.name = "EYFP_reverse"
# Then we write to file - a csv (comma separated value file)
cor.seqio.write_primers([forward, reverse], "./designed_primers.csv", ["Forward EYFP primer", "Reverse EYFP primer"])
Explanation: Now that we have verified that our primers should at least amplify the DNA that we want, let's write out our primers to file so they can be submitted to an oligo synthesis company.
End of explanation
import csv
with open("./designed_primers.csv", "r") as csv_file:
reader = csv.reader(csv_file)
lines = [line for line in reader]
for line in lines:
print line
Explanation: The csv file can then be opened in a spreadsheet application like Excel or processed by a downstream program. This is the format of the csv:
End of explanation |
1,106 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Preprocess data
Step1: Conversion and cleaning
Surprise forces you to use schema ["user_id", "doc_id", "rating"]
CF models are often sensitive to NA values -> replace NaN with 0 (TBD
Step2: Descriptive statistics of ratings
Not meaningful <- randomly generated
Step3: Recommendation Engines
Model-based Collaborative Filtering
Step4: Matrix factorization-based CF
Step5: Model Evaluation
Don't expect accurate models <- they are trained with random noise
Step6: k-NN-based CF
Step7: Model Evaluation
Don't expect accurate models <- they are trained with random noise
Step8: Final model and predictions
kNN Basic looks most promising so far, so go for this one
Train & evaluate final model
Step9: Predict some document ratings | Python Code:
# Import data
path = "../data/petdata_1000_100.csv"
raw_data = pd.read_csv(path, index_col="doc_uri")
assert raw_data.shape == (1000,100), "Import error, df has false shape"
Explanation: Preprocess data
End of explanation
# Convert df
data = raw_data.unstack().to_frame().reset_index()
data.columns = ["user", "doc_uri", "rating"]
# Missing value handling
data.fillna(0, inplace=True)
assert data.shape == (raw_data.shape[0] * raw_data.shape[1], 3), "Conversion error, df has false shape"
assert data.rating.max() <= 5., "Value error, max rating over upper bound"
assert data.rating.min() >= 0., "Value error, min rating under lower bound"
data.head()
Explanation: Conversion and cleaning
Surprise forces you to use schema ["user_id", "doc_id", "rating"]
CF models are often sensitive to NA values -> replace NaN with 0 (TBD: originally 0 codes to non-rating, but doc already shown to user)
End of explanation
data.rating.describe().to_frame().T
# Plot distribution of (random) ratings
plt.rcParams['figure.figsize'] = [9, 3]
plt.subplot(1, 2, 1)
hist = data.rating.plot(kind="hist", grid=True,
bins=[-0.1,0.1,0.9,1.1,1.9,2.1,2.9,3.1,3.9,4.1,4.9,5.1])
hist.set(xlabel= "rating")
plt.subplot(1, 2, 2)
box = data.rating.plot(kind="box")
plt.tight_layout()
plt.savefig("plots/ratings.png", orientation="landscape", dpi=120)
Explanation: Descriptive statistics of ratings
Not meaningful <- randomly generated
End of explanation
from surprise import SVD, Dataset, Reader, NMF, accuracy
from surprise.model_selection import cross_validate
from surprise.prediction_algorithms.random_pred import NormalPredictor
reader = Reader(rating_scale=(1, 5))
ds = Dataset.load_from_df(data[["user", "doc_uri", "rating"]], reader)
baseline_model = NormalPredictor() # Baseline model, predicts labels based on distribution of ratings
Explanation: Recommendation Engines
Model-based Collaborative Filtering
End of explanation
# Models - tune parameters, if you'd like ;)
svd = SVD() # Singular Value Decomposition
nmf = NMF() # Non-negative Matrix factorization
Explanation: Matrix factorization-based CF
End of explanation
for algo in [baseline_model, svd, nmf]:
cross_validate(algo, ds, measures=["RMSE", "MAE"], cv=5, verbose=True)
Explanation: Model Evaluation
Don't expect accurate models <- they are trained with random noise
End of explanation
from surprise.prediction_algorithms.knns import KNNBasic
sim_options = {"name": "pearson", # pearson's r
"user_based": False # item-based
}
knn = KNNBasic(sim_options=sim_options)
Explanation: k-NN-based CF
End of explanation
cross_validate(knn, ds, measures=["RMSE", "MAE"], cv=5, verbose=True)
Explanation: Model Evaluation
Don't expect accurate models <- they are trained with random noise
End of explanation
# Train final model
trainset = ds.build_full_trainset()
knn.fit(trainset)
# RMSE of final model
testset = trainset.build_testset()
test_pred = knn.test(testset)
accuracy.rmse(test_pred, verbose=True) # should be very bad ;)
Explanation: Final model and predictions
kNN Basic looks most promising so far, so go for this one
Train & evaluate final model
End of explanation
combinations_to_predict = [("Aaron Keith III", "http://gregory.com/"),
("Abigail Wong", "http://hicks.com/"),
("Julie Bullock", "https://www.garcia.com/"),
("Victoria Perez", "http://lee-phillips.org/register/")]
# Predictions
for combination in combinations_to_predict:
user = combination[0]
doc = combination[1]
pred = knn.predict(user, doc)
print(pred[0], "should rate", pred[1], "with", int(round(pred[3])), "stars")
Explanation: Predict some document ratings
End of explanation |
1,107 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Tuning XGBoost Hyperparameters with Grid Search
| Python Code::
from sklearn.model_selection import GridSearchCV
import xgboost as xgb
# create a dictionary containing the hyperparameters
# to tune and the range of values to try
PARAMETERS = {"subsample":[0.75, 1],
"colsample_bytree":[0.75, 1],
"max_depth":[2, 6],
"min_child_weight":[1, 5],
"learning_rate":[0.1, 0.01]}
# create a validation set which will be used for early stopping
eval_set = [(X_val, y_val)]
# initialise an XGBoost classifier, set the number of estimators,
# evaluation metric & early stopping rounds
estimator = xgb.XGBClassifier(n_estimators=100,
n_jobs=-1,
eval_metric='logloss',
early_stopping_rounds=10)
# initialise GridSearchCV model by passing the XGB classifier we
# initialised in the last step along with the dictionary of parameters
# and values to try. We also set the number of folds to validate over
# along with the scoring metric to use
model = GridSearchCV(estimator=estimator,
param_grid=PARAMETERS,
cv=3,
scoring="neg_log_loss")
# fit model
model.fit(X_train,
y_train,
eval_set=eval_set,
verbose=0)
# print out the best hyperparameters
print(model.best_params_)
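# GridSearchCV refits the best parameter combination on the full training data by default (refit=True),
# so the fitted `model` can be used directly for scoring and prediction. Illustrative follow-up, reusing
# the validation set defined above:
print(model.best_score_)
val_preds = model.predict(X_val)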
|
1,108 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc" style="margin-top
Step1: Because the Seasons 2 and 3 together only use about 1.7 GB of RAM, no need of special on-disk techniques, I can just load in the whole file.
Step2: Image IDs
For a simple first task, let's get a list of unique image ids, to know how many objects have been published.
Step3: So, how many objects were online
Step4: Classification IDs
Now we need to find out how often each image_id has been looked at.
For that we have the groupby functionality.
Specifically, because we want to know how many citizens have submitted a classification for each image_id, we need to group by the image_id and count the unique classification_ids within each image_id group.
Uniqueness within Image_ID!
We need to constrain for uniqueness because each classified object is included with the same classification_id and we don't want to count them more than once, because we are interested in the overall submission only for now.
In other words
Step5: Percentages done.
By constraining the previous data series for the value it has (the counts) and look at the length of the remaining data, we can determine the status of the finished rate.
Step6: That's pretty disappointing, but alas, the cold hard truth.
This means, taking all submitted years into account in the data, we have currently only the following percentage done
Step7: Wishing to see higher values, I was for some moments contemplating if one maybe has to sum up the different counts to be correct, but I don't think that's it.
The way I see it, one has to decide in what 'phase-space' one works to determine the status of Planet4.
Either in the phase space of total subframes or in the total number of classifications. And I believe to determine the finished state of Planet4 it is sufficient and actually easier to focus on the available number of subframes and determine how often each of them has been looked at.
Separate for seasons
The different seasons of our south polar observations are separated by several counts of the thousands digit in the image_id column of the original HiRISE image id, in P4 called image_name.
Step8: As one can see, we have groups of [1..5, 11..13, 20..22].
Let's add another season column to the dataframe, first filled with zeros.
Step9: For the first season, we actually don't need to look at the thousands counter, as the first 3 letters of the image_names started all with PSP in the first season (for 'P_rimary S_cience P_hase').
Now let's set all rows with names starting with 'PSP' to season 1.
Step10: And for the later seasons, we actually need to group by the thousands counter
Step11: So, for all seasons, how many rows to we have in the overall data
Step12: Percentages done
Now I code a short function with the code I used above to create the counts of classification_ids per image_id. Note again the restriction to uniqueness of classification_ids.
Step13: In the following code I not only check for the different years, but also the influence on the demanded limit of counts to define a subframe as 'finished'.
To collect the data I create an empty dataframe with an index ranging through the different limits I want to check (i.e. range(30,101,10))
Step14: Problem ??
Group by user_name instead of classification_id
I realised that user_ids should provide just the same access to the performed counts, because each classification_id should have exactly one user_id, as they are created when that user clicks on Submit, right?
At least that's how I understood it.
So imagine my surprise when I found out it isn't the same answer. And unfortunately it looks like we have to reduce our dataset even further by apparent multiple submissions of the same classification, but let's see.
First, create the respective function to determine counts via the user_name instead of classification_id after grouping for image_id.
This first grouping by image_id is the essential step for the determination how often a particular image_id has been worked on, so that doesn't change.
Step15: Compare that again to the output for classifying per classification_id
Step16: So, not the same result! Let's dig deeper.
The subframe known as jp7
Focus on one image_id and study what is happening there. I first get a sub-table for the subframe 'jp7' and determine the user_names that worked on that subframe.
Then I loop over the names, filtering another sub-part of the table where the current user worked on jp7.
According to the hypothesis that a classification_id is created for a user at submisssion time and the idea that a user should not see an image twice, there should only be one classification_id in that sub-part.
I am testing that by checking if the unique list of classification_ids has a length $>1$. If it does, I print out the user_name.
Step17: Ok, so let's have a look at the data for the first user_name for the subframe jp7
Step18: First note that the creation time of these 2 different classifications is different, so it looks like this user has seen the jp7 subframe more than once.
But then when you scroll this html table to the right, you will notice that the submitted object has the exact same coordinates in both classifications?
How likely is it, that the user finds the exact same coordinates in less than 60 seconds?
So the question is, is this really a new classification and the user has done it twice? Or was the same thing submitted twice? Hopefully Meg knows the answer to that.
Some instructive plots
Plot over required constraint
I found it instructive to look at how the status of finished data depends on the limit we put on the reached counts per image_id (i.e. subframe).
Also, how does it change when looking for unique user_names per image_id instead of unique classification_ids.
Step19: Ok, so not that big a deal until we require more than 80 classifications to be done.
How do the different existing user counts distribute
The method 'value_counts()' basically delivers a histogram on the counts_by_user data series.
In other words, it shows how the frequency of classifications distribute over the dataset. It shows an to be expected peak close to 100, because that's what we are aiming now and the system does today not anymore show a subframe that has been seen 100 times.
But it also shows quite some wasted citizen power in all the classifications that went beyond 100 counts.
import planet4 as p4
import pandas as pd
from planet4 import io
db = io.DBManager()
db_fname = db.dbname
db.dbname
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc" style="margin-top: 1em;"><ul class="toc-item"><li><span><a href="#Task:-Define-status-of-Planet-4" data-toc-modified-id="Task:-Define-status-of-Planet-4-1"><span class="toc-item-num">1 </span>Task: Define status of Planet 4</a></span><ul class="toc-item"><li><span><a href="#Image-IDs" data-toc-modified-id="Image-IDs-1.1"><span class="toc-item-num">1.1 </span>Image IDs</a></span></li><li><span><a href="#Classification-IDs" data-toc-modified-id="Classification-IDs-1.2"><span class="toc-item-num">1.2 </span>Classification IDs</a></span><ul class="toc-item"><li><span><a href="#Uniqueness-within-Image_ID!" data-toc-modified-id="Uniqueness-within-Image_ID!-1.2.1"><span class="toc-item-num">1.2.1 </span>Uniqueness within Image_ID!</a></span></li></ul></li><li><span><a href="#Percentages-done." data-toc-modified-id="Percentages-done.-1.3"><span class="toc-item-num">1.3 </span>Percentages done.</a></span></li><li><span><a href="#Separate-for-seasons" data-toc-modified-id="Separate-for-seasons-1.4"><span class="toc-item-num">1.4 </span>Separate for seasons</a></span><ul class="toc-item"><li><span><a href="#Percentages-done" data-toc-modified-id="Percentages-done-1.4.1"><span class="toc-item-num">1.4.1 </span>Percentages done</a></span></li></ul></li></ul></li><li><span><a href="#Problem-??" data-toc-modified-id="Problem-??-2"><span class="toc-item-num">2 </span>Problem ??</a></span><ul class="toc-item"><li><span><a href="#Group-by-user_name-instead-of-classification_id" data-toc-modified-id="Group-by-user_name-instead-of-classification_id-2.1"><span class="toc-item-num">2.1 </span>Group by user_name instead of classification_id</a></span><ul class="toc-item"><li><span><a href="#The-subframe-known-as-jp7" data-toc-modified-id="The-subframe-known-as-jp7-2.1.1"><span class="toc-item-num">2.1.1 </span>The subframe known as jp7</a></span></li></ul></li><li><span><a href="#Some-instructive-plots" data-toc-modified-id="Some-instructive-plots-2.2"><span class="toc-item-num">2.2 </span>Some instructive plots</a></span><ul class="toc-item"><li><span><a href="#Plot-over-required-constraint" data-toc-modified-id="Plot-over-required-constraint-2.2.1"><span class="toc-item-num">2.2.1 </span>Plot over required constraint</a></span></li><li><span><a href="#How-do-the-different-existing-user-counts-distribute" data-toc-modified-id="How-do-the-different-existing-user-counts-distribute-2.2.2"><span class="toc-item-num">2.2.2 </span>How do the different existing user counts distribute</a></span></li></ul></li></ul></li></ul></div>
Task: Define status of Planet 4
First import the pandas data table analyis library and check which version I'm using (as I'm constantly changing that to keep up-to-date.)
End of explanation
df = db.get_all()
df.info()
df.user_name.nunique()
df[df.user_name.str.startswith('not-logged-in')].user_name.nunique()
df[df.marking=='fan'].shape
df[df.marking=='blotch'].shape
df[df.marking=='interesting'].shape
df.shape
Explanation: Because the Seasons 2 and 3 together only use about 1.7 GB of RAM, no need of special on-disk techniques, I can just load in the whole file.
End of explanation
img_ids = df.image_id.unique()
img_names = df.image_name.unique()
Explanation: Image IDs
For a simple first task, let's get a list of unique image ids, to know how many objects have been published.
End of explanation
no_all = len(img_ids)
print(no_all)
n_images = len(img_names)
print(n_images)
Explanation: So, how many objects were online:
End of explanation
from planet4 import stats
stats.define_season_column(df)
grp = df.groupby(['season', 'image_id'])
counts = grp.classification_id.nunique()
%matplotlib ipympl
import seaborn as sns
plt.close('all')
sns.set_palette('colorblind', 4)
pal = sns.color_palette("colorblind", 4)
bins = np.arange(25, 170, 5)
bins
n_my29 = counts.loc[2].size
n_my30 = counts.loc[3].size
sns.distplot?
fig, ax = plt.subplots()
kwargs = {'alpha':0.2}
axlabel="Number of classifications per Planet Four tile"
sns.distplot(counts.loc[2], label='MY29', ax=ax, kde=True, bins=bins, color=pal[0],
hist_kws=kwargs)
sns.distplot(counts.loc[3], label='MY30', ax=ax, kde=True, bins=bins, color=pal[2],
axlabel=axlabel, hist_kws=kwargs)
ax.set_ylabel('Density')
ax.legend()
ax.set_title("Distribution of Planet Four classification counts")
fig.savefig("/Users/klay6683/Dropbox/src/p4_paper1/figures/count_stats.png",
dpi=200)
plt.close('all')
sns.set_context('paper')
fig, ax = plt.subplots(figsize=(9,4))
pal = sns.color_palette('colorblind')
weights2 = np.ones_like(counts.loc[2].values)/float(len(counts.loc[2]))
weights3 = np.ones_like(counts.loc[3].values)/float(len(counts.loc[3]))
ax.hist([counts.loc[2], counts.loc[3]], normed=False, bins=bins, color=pal[:3:2],
label=['MY29', 'MY30'], weights=[weights2, weights3])
ax.set_xlabel("Number of classifications per Planet Four tile")
ax.set_ylabel("Fraction of number of classifications")
ax.set_title("Distribution of Planet Four classification counts.")
ax.legend()
fig.savefig("/Users/klay6683/Dropbox/src/p4_paper1/figures/count_stats.png",
dpi=200)
Explanation: Classification IDs
Now we need to find out how often each image_id has been looked at.
For that we have the groupby functionality.
Specifically, because we want to know how many citizens have submitted a classification for each image_id, we need to group by the image_id and count the unique classification_ids within each image_id group.
Uniqueness within Image_ID!
We need to constrain for uniqueness because each classified object is included with the same classification_id and we don't want to count them more than once, because we are interested in the overall submission only for now.
In other words: Because the different fans, blobs and interesting things for one image_id have all been submitted with the same classification_id, I need to constrain to unique classification_ids, otherwise images with a lot of submitted items would appear 'more completed' just for having a lot of fan-content, and not for being analyzed by a lot of citizens, which is what we want.
End of explanation
counts[counts >= 30].size
Explanation: Percentages done.
By constraining the previous data series for the value it has (the counts) and look at the length of the remaining data, we can determine the status of the finished rate.
End of explanation
counts[counts>= 30].size / float(no_all) * 100
Explanation: That's pretty disappointing, but alas, the cold hard truth.
This means, taking all submitted years into account in the data, we have currently only the following percentage done:
End of explanation
# str[5:7] is the 2-digit thousands count in, e.g., ESP_011234_0950, in this case 11.
df['thousands'] = df.image_name.str[5:7].astype('int')
thousands = df.thousands.value_counts().sort_index()
thousands
Explanation: Wishing to see higher values, I was for some moments contemplating if one maybe has to sum up the different counts to be correct, but I don't think that's it.
The way I see it, one has to decide in what 'phase-space' one works to determine the status of Planet4.
Either in the phase space of total subframes or in the total number of classifications. And I believe to determine the finished state of Planet4 it is sufficient and actually easier to focus on the available number of subframes and determine how often each of them has been looked at.
Separate for seasons
The different seasons of our south polar observations are separated by several counts of the thousands digit in the image_id column of the original HiRISE image id, in P4 called image_name.
End of explanation
df['season'] = 0
Explanation: As one can see, we have groups of [1..5, 11..13, 20..22].
Let's add another season column to the dataframe, first filled with zeros.
End of explanation
df.loc[df.image_name.str.startswith('PSP'), 'season'] = 1
Explanation: For the first season, we actually don't need to look at the thousands counter, as the first 3 letters of the image_names started all with PSP in the first season (for 'P_rimary S_cience P_hase').
Now let's set all rows with names starting with 'PSP' to season 1.
End of explanation
df.loc[(df.thousands > 10) & (df.thousands < 20), 'season'] = 2
df.loc[df.thousands > 19, 'season'] = 3
Explanation: And for the later seasons, we actually need to group by the thousands counter:
End of explanation
no_all = df.season.value_counts()
no_all
Explanation: So, for all seasons, how many rows to we have in the overall data:
End of explanation
def get_counts_per_classification_id(df, unique=True):
grouping = df.classification_id.groupby(df.image_id, sort=False)
# because I only grouped the classification_id column above, this function is only
# applied to it. First, reduce to a unique list, and then save the size of that list.
if unique:
return grouping.agg(lambda x: x.unique().size)
else:
return grouping.size()
df.image_name.groupby(df.season).agg(lambda x:x.unique().size)
no_all = df.image_id.groupby(df.season).agg(lambda x: x.unique().size)
no_all
def done_per_season(season, limit, unique=True, in_percent=True):
subdf = df[df.season == season]
counts_per_classid = get_counts_per_classification_id(subdf, unique)
no_done = counts_per_classid[counts_per_classid >= limit].size
if in_percent:
return 100.0 * no_done / no_all[season]
else:
return no_done
for season in [1,2,3]:
print(season)
print(done_per_season(season, 30, in_percent=True))
Explanation: Percentages done
Now I code a short function with the code I used above to create the counts of classification_ids per image_id. Note again the restriction to uniqueness of classification_ids.
End of explanation
import sys
from collections import OrderedDict
results = pd.DataFrame(index=range(30,101,10))
for season in [1,2,3]:
print(season)
sys.stdout.flush() # to force a print out of the std buffer
subdf = df[df.season == season]
counts = get_counts_per_classification_id(subdf)
values = OrderedDict()
for limit in results.index:
values[limit] = done_per_season(season, limit)
results[season] = values.values()
np.round(results)
Explanation: In the following code I not only check for the different years, but also the influence on the demanded limit of counts to define a subframe as 'finished'.
To collect the data I create an empty dataframe with an index ranging through the different limits I want to check (i.e. range(30,101,10))
End of explanation
def get_counts_per_user_name(df):
grouping = df.user_name.groupby(df.image_id, sort=False)
counts = grouping.agg(lambda x: x.unique().size)
# counts = counts.order(ascending=False)
return counts
counts_by_user = get_counts_per_user_name(df)
counts_by_user
Explanation: Problem ??
Group by user_name instead of classification_id
I realised that user_ids should provide just the same access to the performed counts, because each classification_id should have exactly one user_id, as they are created when that user clicks on Submit, right?
At least that's how I understood it.
So imagine my surprise when I found out it isn't the same answer. And unfortunately it looks like we have to reduce our dataset even further by apparent multiple submissions of the same classification, but let's see.
First, create the respective function to determine counts via the user_name instead of classification_id after grouping for image_id.
This first grouping by image_id is the essential step for the determination how often a particular image_id has been worked on, so that doesn't change.
End of explanation
counts_by_class = get_counts_per_classification_id(df)
counts_by_class
Explanation: Compare that again to the output for classifying per classification_id:
End of explanation
jp7 = df[df.image_id == 'APF0000jp7']
unique_users = jp7.user_name.unique()
# having the list of users that worked on jp7
for user in unique_users:
subdf = jp7[jp7.user_name == user]
if len(subdf.classification_id.unique()) > 1:
print(user, len(subdf))
Explanation: So, not the same result! Let's dig deeper.
The subframe known as jp7
Focus on one image_id and study what is happening there. I first get a sub-table for the subframe 'jp7' and determine the user_names that worked on that subframe.
Then I loop over the names, filtering another sub-part of the table where the current user worked on jp7.
According to the hypothesis that a classification_id is created for a user at submisssion time and the idea that a user should not see an image twice, there should only be one classification_id in that sub-part.
I am testing that by checking if the unique list of classification_ids has a length $>1$. If it does, I print out the user_name.
End of explanation
jp7[jp7.user_name == 'not-logged-in-8d495c463aeffd67c08b2dfc1141f33b']
Explanation: Ok, so let's have a look at the data for the first user_name for the subframe jp7
End of explanation
results[[2,3]].plot()
plt.xlabel('Required number of analyses submitted to be considered "done".')
plt.ylabel('Current percentage of dataset finished [%]')
plt.title("Season 2 and 3 status, depending on definition of 'done'.")
plt.savefig('Season2_3_status.png', dpi=200)
x = range(1,101)
per_class = []
per_user = []
for val in x:
per_class.append(100 * counts_by_class[counts_by_class >= val].size/float(no_all))
per_user.append(100 * counts_by_user[counts_by_user >= val].size/float(no_all))
plt.plot(x, per_class)
plt.plot(x, per_user)
plt.xlabel('Counts constraint for _finished_ criterion')
plt.ylabel('Current percent finished [%]')
Explanation: First note that the creation time of these 2 different classifications is different, so it looks like this user has seen the jp7 subframe more than once.
But when you scroll this HTML table to the right, you will notice that the submitted object has the exact same coordinates in both classifications.
How likely is it that the user found the exact same coordinates in less than 60 seconds?
So the question is, is this really a new classification and the user has done it twice? Or was the same thing submitted twice? Hopefully Meg knows the answer to that.
Some instructive plots
Plot over required constraint
I found it instructive to look at how the status of finished data depends on the limit we put on the reached counts per image_id (i.e. subframe).
Also, how does it change when looking for unique user_names per image_id instead of unique classification_ids.
End of explanation
counts_by_user.value_counts()
counts_by_user.value_counts().plot(style='*')
users_work = df.classification_id.groupby(df.user_name).agg(lambda x: x.unique().size)
users_work.order(ascending=False)[:10]
df[df.user_name=='gwyneth walker'].classification_id.value_counts()
import helper_functions as hf
reload(hf)
hf.classification_counts_for_user('Kitharode', df).hist?
hf.classification_counts_for_user('Paul Johnson', df)
np.isnan(df.marking)
df.marking
s = 'INVESTIGATION OF POLAR SEASONAL FAN DEPOSITS USING CROWDSOURCING'
s.title()
Explanation: Ok, so not that big a deal until we require more than 80 classifications to be done.
How do the different existing user counts distribute
The method 'value_counts()' basically delivers a histogram on the counts_by_user data series.
In other words, it shows how the frequency of classifications is distributed over the dataset. It shows the expected peak close to 100, because that is the count we are aiming for now, and the system today no longer shows a subframe that has already been seen 100 times.
But it also shows quite some waste of citizen power in all the classifications that pushed counts beyond 100.
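A rough way to put a number on that waste, sketched from the counts_by_user series computed earlier (every count above 100 is treated as surplus effort):
excess = counts_by_user[counts_by_user > 100] - 100
excess.sum()   # total classifications beyond the 100-count target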
End of explanation |
1,109 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
WMI Win32_Process Class and Create Method for Remote Execution
Metadata
| | |
|
Step1: Download & Process Mordor Dataset
Step2: Analytic I
Look for wmiprvse.exe spawning processes that are part of non-system account sessions.
| Data source | Event Provider | Relationship | Event |
|
Step3: Analytic II
Look for wmiprvse.exe spawning processes that are part of non-system account sessions.
| Data source | Event Provider | Relationship | Event |
|
Step4: Analytic III
Look for non-system accounts leveraging WMI over the network to execute code
| Data source | Event Provider | Relationship | Event |
| | Python Code:
from openhunt.mordorutils import *
spark = get_spark()
Explanation: WMI Win32_Process Class and Create Method for Remote Execution
Metadata
| | |
|:------------------|:---|
| collaborators | ['@Cyb3rWard0g', '@Cyb3rPandaH'] |
| creation date | 2019/08/10 |
| modification date | 2020/09/20 |
| playbook related | [] |
Hypothesis
Adversaries might be leveraging WMI Win32_Process class and method Create to execute code remotely across my environment
Technical Context
WMI is the Microsoft implementation of the Web-Based Enterprise Management (WBEM) and Common Information Model (CIM).
Both standards aim to provide an industry-agnostic means of collecting and transmitting information related to any managed component in an enterprise.
An example of a managed component in WMI would be a running process, registry key, installed service, file information, etc.
At a high level, Microsoft’s implementation of these standards can be summarized as follows > Managed Components Managed components are represented as WMI objects — class instances representing highly structured operating system data. Microsoft provides a wealth of WMI objects that communicate information related to the operating system. E.g. Win32_Process, Win32_Service, AntiVirusProduct, Win32_StartupCommand, etc.
Offensive Tradecraft
One well known lateral movement technique is performed via the WMI object — class Win32_Process and its method Create.
This is because the Create method allows a user to create a process either locally or remotely.
One thing to notice is that when the Create method is used on a remote system, the method is run under a host process named “Wmiprvse.exe”.
The process WmiprvSE.exe is what spawns the process defined in the CommandLine parameter of the Create method. Therefore, the new process created remotely will have Wmiprvse.exe as a parent. WmiprvSE.exe is a DCOM server and it is spawned underneath the DCOM service host svchost.exe with the following parameters C:\WINDOWS\system32\svchost.exe -k DcomLaunch -p.
From a logon session perspective, on the target, WmiprvSE.exe is spawned in a different logon session by the DCOM service host. However, whatever is executed by WmiprvSE.exe occurs on the new network type (3) logon session created by the user that authenticated from the network.
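For illustration of the technique itself (not part of this playbook's detection logic), the Create call can be reproduced from Python with the third-party wmi package. This is a sketch only; the hostname, credentials, and command line below are placeholders, and it should only ever be run against systems you are authorized to test.
import wmi  # third-party package wrapping the WMI COM API

# Placeholders only: illustrative sketch of Win32_Process.Create over the network
conn = wmi.WMI("TARGET-HOST", user="LAB\\analyst", password="placeholder")
process_id, return_value = conn.Win32_Process.Create(CommandLine="notepad.exe")
print(process_id, return_value)  # a return value of 0 means the call succeeded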
Additional Reading
* https://github.com/OTRF/ThreatHunter-Playbook/tree/master/docs/library/windows/logon_session.md
Mordor Test Data
| | |
|:----------|:----------|
| metadata | https://mordordatasets.com/notebooks/small/windows/08_lateral_movement/SDWIN-200921001437.html |
| link | https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/lateral_movement/host/empire_wmi_dcerpc_wmi_IWbemServices_ExecMethod.zip |
Analytics
Initialize Analytics Engine
End of explanation
mordor_file = "https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/lateral_movement/host/empire_wmi_dcerpc_wmi_IWbemServices_ExecMethod.zip"
registerMordorSQLTable(spark, mordor_file, "mordorTable")
Explanation: Download & Process Mordor Dataset
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, SubjectUserName, TargetUserName, NewProcessName, CommandLine
FROM mordorTable
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND lower(ParentProcessName) LIKE "%wmiprvse.exe"
AND NOT TargetLogonId = "0x3e7"
'''
)
df.show(10,False)
Explanation: Analytic I
Look for wmiprvse.exe spawning processes that are part of non-system account sessions.
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Process | Microsoft-Windows-Security-Auditing | Process created Process | 4688 |
| Process | Microsoft-Windows-Security-Auditing | User created Process | 4688 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, User, Image, CommandLine
FROM mordorTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND lower(ParentImage) LIKE "%wmiprvse.exe"
AND NOT LogonId = "0x3e7"
'''
)
df.show(10,False)
Explanation: Analytic II
Look for wmiprvse.exe spawning processes that are part of non-system account sessions.
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Process | Microsoft-Windows-Sysmon/Operational | Process created Process | 1 |
| Process | Microsoft-Windows-Sysmon/Operational | User created Process | 1 |
End of explanation
df = spark.sql(
'''
SELECT o.`@timestamp`, o.Hostname, o.SubjectUserName, o.TargetUserName, o.NewProcessName, o.CommandLine, a.IpAddress
FROM mordorTable o
INNER JOIN (
SELECT Hostname,TargetUserName,TargetLogonId,IpAddress
FROM mordorTable
WHERE LOWER(Channel) = "security"
AND EventID = 4624
AND LogonType = 3
AND NOT TargetUserName LIKE "%$"
) a
ON o.TargetLogonId = a.TargetLogonId
WHERE LOWER(o.Channel) = "security"
AND o.EventID = 4688
AND lower(o.ParentProcessName) LIKE "%wmiprvse.exe"
AND NOT o.TargetLogonId = "0x3e7"
'''
)
df.show(10,False)
Explanation: Analytic III
Look for non-system accounts leveraging WMI over the network to execute code
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Process | Microsoft-Windows-Security-Auditing | Process created Process | 4688 |
| Process | Microsoft-Windows-Security-Auditing | User created Process | 4688 |
| Authentication log | Microsoft-Windows-Security-Auditing | User authenticated Host | 4624 |
End of explanation |
1,110 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Structures
Data structures are a concrete implementation of the specification provided by one or more particular abstract data types (ADT), which specify the operations that can be performed on a data structure and the computational complexity of those operations.
Different kinds of data structures are suited for different kinds of applications, and some are highly specialized to specific tasks. For example, relational databases commonly use B-tree indexes for data retrieval, while compiler implementations usually use hash tables to look up identifiers.
Usually, efficient data structures are key to designing efficient algorithms.
Standard import statement
Step1: DataStructureBase is the base class for implementing data structures
DataStructureVisualization is the class that visualizes data structures in GUI
DataStructureBase class
Any data structure, which is to be implemented, has to be derived from this class. Now we shall see data members and member functions of this class
Step2: Now, this program can be executed as follows | Python Code:
from openanalysis.data_structures import DataStructureBase, DataStructureVisualization
import gi.repository.Gtk as gtk # for displaying GUI dialogs
Explanation: Data Structures
Data structures are a concrete implementation of the specification provided by one or more particular abstract data types (ADT), which specify the operations that can be performed on a data structure and the computational complexity of those operations.
Different kinds of data structures are suited for different kinds of applications, and some are highly specialized to specific tasks. For example, relational databases commonly use B-tree indexes for data retrieval, while compiler implementations usually use hash tables to look up identifiers.
Usually, efficient data structures are key to designing efficient algorithms.
Standard import statement
End of explanation
class BinarySearchTree(DataStructureBase): # Derived from DataStructureBase
class Node: # Class for creating a node
def __init__(self, data):
self.left = None
self.right = None
self.data = data
def __str__(self):
return str(self.data)
def __init__(self):
DataStructureBase.__init__(self, "Binary Search Tree", "t.png") # Initializing with name and path
self.root = None
self.count = 0
def get_root(self): # Returns root node of the tree
return self.root
def insert(self, item): # Inserts item into the tree
newNode = BinarySearchTree.Node(item)
insNode = self.root
parent = None
while insNode is not None:
parent = insNode
if insNode.data > newNode.data:
insNode = insNode.left
else:
insNode = insNode.right
if parent is None:
self.root = newNode
else:
if parent.data > newNode.data:
parent.left = newNode
else:
parent.right = newNode
self.count += 1
def find(self, item): # Finds if item is present in tree or not
node = self.root
while node is not None:
if item < node.data:
node = node.left
elif item > node.data:
node = node.right
else:
return True
return False
def min_value_node(self): # Returns the minimum value node
current = self.root
while current.left is not None:
current = current.left
return current
def delete(self, item): # Deletes item from tree if present
# else shows Value Error
if item not in self:
dialog = gtk.MessageDialog(None, 0, gtk.MessageType.ERROR,
gtk.ButtonsType.CANCEL, "Value not found ERROR")
dialog.format_secondary_text(
"Element not found in the %s" % self.name)
dialog.run()
dialog.destroy()
else:
self.count -= 1
if self.root.data == item and (self.root.left is None or self.root.right is None):
if self.root.left is None and self.root.right is None:
self.root = None
elif self.root.data == item and self.root.left is None:
self.root = self.root.right
elif self.root.data == item and self.root.right is None:
self.root = self.root.left
return self.root
if item < self.root.data:
temp = self.root
self.root = self.root.left
temp.left = self.delete(item)
self.root = temp
elif item > self.root.data:
temp = self.root
self.root = self.root.right
temp.right = self.delete(item)
self.root = temp
else:
if self.root.left is None:
return self.root.right
elif self.root.right is None:
return self.root.left
temp = self.root
self.root = self.root.right
min_node = self.min_value_node()
temp.data = min_node.data
temp.right = self.delete(min_node.data)
self.root = temp
return self.root
def get_graph(self, rt): # Populates self.graph with elements depending
# upon the parent-children relation
if rt is None:
return
self.graph[rt.data] = {}
if rt.left is not None:
self.graph[rt.data][rt.left.data] = {'child_status': 'left'}
self.get_graph(rt.left)
if rt.right is not None:
self.graph[rt.data][rt.right.data] = {'child_status': 'right'}
self.get_graph(rt.right)
Explanation: DataStructureBase is the base class for implementing data structures
DataStructureVisualization is the class that visualizes data structures in GUI
DataStructureBase class
Any data structure, which is to be implemented, has to be derived from this class. Now we shall see data members and member functions of this class:
Data Members
name - Name of the DS
file_path - Path to store output of DS operations
Member Functions
__init__(self, name, file_path) - Initializes DS with a name and a file_path to store the output
insert(self, item) - Inserts item into the DS
delete(self, item) - Deletes item from the DS, <br/>            if item is not present in the DS, throws a ValueError
find(self, item) - Finds the item in the DS
<br/>          returns True if found, else returns False<br/>          similar to __contains__(self, item)
get_root(self) - Returns the root (for graph and tree DS)
get_graph(self, rt) - Gets the dict representation between the parent and children (for graph and tree DS)
draw(self, nth=None) - Draws the output to visualize the operations performed on the DS<br/>             nth is used to pass an item to visualize a find operation
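For instance, a minimal subclass skeleton, here a purely hypothetical Stack sketched only from the members documented above and the self.graph dictionary used by the BinarySearchTree example in this notebook, would override the same operations:
class Stack(DataStructureBase):
    def __init__(self):
        DataStructureBase.__init__(self, "Stack", "stack.png")  # name and output path
        self.items = []
    def insert(self, item):            # push an item
        self.items.append(item)
    def delete(self, item):            # remove an item; ValueError if absent, per the contract above
        if item not in self.items:
            raise ValueError("Element not found in the %s" % self.name)
        self.items.remove(item)
    def find(self, item):              # membership test
        return item in self.items
    def get_root(self):                # top of the stack
        return self.items[-1] if self.items else None
    def get_graph(self, rt):           # chain the items so the visualizer can draw them
        for a, b in zip(self.items, self.items[1:]):
            self.graph.setdefault(a, {})[b] = {}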
DataStructureVisualization class
This class is used for visualizing data structures in a GUI (using GTK+ 3). Now we shall see data members and member functions of this class:
Data Members
ds - Any DS, which is an instance of DataStructureBase
Member Functions
__init__(self, ds) - Initializes ds with an instance of DS that is to be visualized
run(self) - Opens a GUI window to visualize the DS operations
An example ..... Binary Search Tree
Now we shall implement the class BinarySearchTree
End of explanation
DataStructureVisualization(BinarySearchTree).run()
import io
import base64
from IPython.display import HTML
video = io.open('../res/bst.mp4', 'r+b').read()
encoded = base64.b64encode(video)
HTML(data='''<video alt="test" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>'''.format(encoded.decode('ascii')))
Explanation: Now, this program can be executed as follows:
End of explanation |
1,111 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Processing a LexisNexus text export into CSV
Preparation
download the file
Step1: Show the number of characters in the text file
Step2: Downloading the text file directly from github using the <code>requests</code> module
Step3: The <code>get</code> function takes a URL and returns the content at that address.
Step4: Examine the first 20,000 characters to find string patterns that mark divisions between documents
Step5: The string <code>of 1000 DOCUMENTS</code> looks like a good candidate for splitting the the text file into the individual documents
Split the text string into n chunks using of 1000 DOCUMENTS
Step6: Python excurcus
Step7: Another, 'less Pythonic', way to do this is to create a loop that uses the indices of each item in the list, e.g.
Step8: But often you want to have both each item in the list and its index without having to do list[idx] to get the item. The enumerate function helps in such cases.
enumerate(list) returns a list of tuples, where each item in the list consists of a pair where the first item is the index and second the item itself.
Step9: Back to the LexisNexus task
We can use the enumerate and for loop idiom to check how good our proposed strings are for splitting the document up into components.
For example, it looks like the string ( END ) could be a good marker of the end of the body. So we can test which documents contain it using the count function and testing whether we get at least one instance
Step10: AH! - actually doesn't look like a great candidate for splitting a document in to body and post body sections.
A second look at some sample documents suggests we might be able to use LOAD-DATE instead.
Step11: We can use list interpolation to get a count of the number of documents that contain a feature
Step12: Now we have a set of string markers and a strategy for splitting documents up
Step13: Now we have that figured out we can set two variables
Step14: Then we can get the three parts of the document we want
Step15: Python excurcus
Step16: Back to the LexisNexus task
We are going to use a list of dictionaries approach to store the prebody, body, postbody components for each document
Step17: Python excursus
Step18: The final parts of the task!
Now we want to write out the documents split into three parts to a CSV file
Then just for fun we construction a frequency list of all the words in the documents
Step19: Frequency counts
The <code>Counter</code> function generates a dictionary-like object with words as keys and the number of times they occur in a sequence as values
It is a quick way to generate a word frequency list from a set of tokenized documents (i.e., where the text has been turned into a list of words)
Step20: Simple example of using <code>Counter</code>
<code>Counter</code> works by passing it a list of items, e.g.
Step21: and it returns a dictionary with a count for the number of times each item occurs
Step22: Just like a dictionary you can get the frequency for a specific item like this
Step23: <code>Counter</code> object has an <code>update</code> method that allows multiple lists to be counted.
Step24: First define a new <code>Counter</code>
Step25: Then update it with the words from text1
Step26: Then update it again with the words from text2
Step27: A simple example of looping of some texts and generating a frequency list using Counter
Step28: Finally lets make the formatting a bit more pretty by looping over the frequency list and producing a tab separated table
Step29: Counting words in the LexisNexus documents
For a single LexisNexus document loaded in from the CSV file (the list of dictionaries), we select the document by index and then the body component
Step30: Find the frequency of the word vaping
Step31: Finally we can create a frequency list for all the documents with a loop and using the update function on the Counter object. | Python Code:
text = open('data/LexisNexusVapingExample.txt', 'r').read()
Explanation: Processing a LexisNexus text export into CSV
Preparation
download the file: https://github.com/mbod/intro_python_for_comm/blob/master/data/LexisNexusVapingExample.txt
place it in the data folder of your IPython notebook
Task
Load the text file <code>LexisNexusVapingExample.txt</code> into a variable text
Examine the first 20000 chars to figure out how articles are separated
Create a list by splitting on the separator string
Seperate each article into prebody, body and postbody components
Save the output to a CSV file with three columns:
prebody, body and postbody
and a row for each article
Loading contents of the text file from the data folder
If you downloaded the text file and placed in the data folder you can read it into a variable like this:
End of explanation
len(text)
Explanation: Show the number of characters in the text file:
End of explanation
import requests # import the module
Explanation: Downloading the text file directly from github using the <code>requests</code> module
End of explanation
resp = requests.get('https://raw.githubusercontent.com/mbod/intro_python_for_comm/master/data/LexisNexusVapingExample.txt')
text2=resp.text # assign content of response to a variable
text2[:300] # show first 300 characters
print(text2[:300]) # use print to see formatting (spacing and newlines etc.)
Explanation: The <code>get</code> function takes a URL and returns the content at that address.
End of explanation
print(text[0:20000])
Explanation: Examine the first 20,000 characters to find string patterns that mark divisions between documents
End of explanation
chunks = text.split('of 1000 DOCUMENTS')
len(chunks) # see how many chunks this produces
docs = chunks[1:]
Explanation: The string <code>of 1000 DOCUMENTS</code> looks like a good candidate for splitting the text file into the individual documents
Split the text string into n chunks using of 1000 DOCUMENTS:
End of explanation
alist = [1,2,3,4,5]
slist = ['a','b','c','d']
for item in alist:
print(item)
for item in slist:
print('The current item is:',item)
Explanation: Python excursus: Using enumerate to loop over lists
When you have a list of items and what to process each in turn then using a for loop is a common approach, e.g.
End of explanation
for idx in range(0,len(alist)):
print('Index', idx, 'is item:', alist[idx])
Explanation: Another, 'less Pythonic', way to do this is to create a loop that uses the indices of each item in the list, e.g.
End of explanation
list(enumerate(slist))
result = list(enumerate(slist))
result[0]
for idx, item in enumerate(slist):
print(idx, item)
Explanation: But often you want to have both each item in the list and its index without having to do list[idx] to get the item. The enumerate function helps in such cases.
enumerate(list) returns a list of tuples, where each item in the list consists of a pair where the first item is the index and second the item itself.
End of explanation
for idx, doc in enumerate(docs):
print('Document', idx, 'has ( END )?', doc.count('( END )') > 0)
Explanation: Back to the LexisNexus task
We can use the enumerate and for loop idiom to check how good our proposed strings are for splitting the document up into components.
For example, it looks like the string ( END ) could be a good marker of the end of the body. So we can test which documents contain it using the count function and testing whether we get at least one instance:
End of explanation
for idx, doc in enumerate(docs):
print('Document', idx, 'has LOAD-DATE?', doc.count('LOAD-DATE:') > 0)
Explanation: Ah, so ( END ) actually doesn't look like such a great candidate for splitting a document into body and post-body sections.
A second look at some sample documents suggests we might be able to use LOAD-DATE instead.
End of explanation
has_end = sum([doc.count('( END )')>0 for doc in docs])
print(has_end, 'documents with ( END ) out of ', len(docs), 'docs')
has_load_date = sum([doc.count('LOAD-DATE:')>0 for doc in docs])
print(has_load_date, 'documents with LOAD-DATE out of ', len(docs), 'docs')
Explanation: We can use list interpolation to get a count of the number of documents that contain a feature
End of explanation
doc = docs[0]
doc.index('LENGTH') # find the character position for the start of string LENGTH
doc[224:274] # slice the string starting at this point plus 50 characters
doc.index('\n',224) # find the first newline character after the start of LENGTH
doc[241:280] # slice after this character
Explanation: Now we have a set of string markers and a strategy for splitting documents up:
for each document
split into three parts
prebody = text up to LENGTH xxx words
body = text from LENGTH to before LOAD-DATE
postbody = LOAD-DATE to the end
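The same strategy can also be collapsed into one small helper, sketched here purely from the markers identified above:
def split_document(doc):
    # split one LexisNexis document into (prebody, body, postbody)
    start_pos = doc.index('LENGTH')           # line that begins with LENGTH
    start_pos = doc.index('\n', start_pos)    # end of that LENGTH line
    end_pos = doc.index('LOAD-DATE:')         # start of the post-body section
    return doc[:start_pos], doc[start_pos:end_pos], doc[end_pos:]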
End of explanation
start_pos = doc.index('LENGTH')
start_pos = doc.index('\n', start_pos)
end_pos = doc.index('LOAD-DATE:')
Explanation: Now we have that figured out we can set two variables:
start_pos for the beginning of the body (the line after the one beginning with LENGTH)
end_pos the point were LOAD-DATE begins
End of explanation
pre_body=doc[:start_pos]
body = doc[start_pos:end_pos]
post_body = doc[end_pos:]
Explanation: Then we can get the three parts of the document we want
End of explanation
exdict = { 'item1': 123, 'item2': 'asdsad', 'item3': [1,2,3,'asd','dada'] } # define a dictionary
exdict
exdict['item2'] # get the value associated with the key 'item2'
# add a new value associated with the key 'item4' which is itself a dictionary
exdict['item4'] = {'a': 12323, 'b': [1,2,23]}
exdict
exdict['item4']['a'] # address this dictionary of dictionary structure
exdict['item3'][3]
Explanation: Python excursus: Dictionaries
Alongside lists one of the most useful structures in Python is a dictionary. It is an unordered set of key-value pairings.
Data is organized and identified by a unique key using the syntax 'key' : value
End of explanation
doc_dict = {'prebody': pre_body, 'body': body, 'postbody': post_body}
doc_dict['postbody']
rows = []
for idx, doc in enumerate(docs):
try:
start_pos = doc.index('LENGTH')
start_pos = doc.index('\n', start_pos)
end_pos = doc.index('LOAD-DATE:')
except:
print('ERROR with doc', idx)
continue
doc_dict = {
'prebody': doc[:start_pos],
'body': doc[start_pos: end_pos],
'postbody': doc[end_pos:]
}
rows.append(doc_dict)
print(docs[13])
Explanation: Back to the LexisNexus task
We are going to use a list of dictionaries approach to store the prebody, body, postbody components for each document
End of explanation
import this
Explanation: Python excursus: The Zen of Python
A little bit of poetry from the creators of Python explaining the design and suggestions for truly Pythonic coding!
End of explanation
import csv
with open('data/articles.csv','w') as out:
csvfile = csv.DictWriter(out, fieldnames=('prebody','body','postbody'))
csvfile.writeheader()
csvfile.writerows(rows)
docs2 = [r for r in csv.DictReader(open('data/articles.csv','r'))]
len(docs2)
print(docs2[0])
Explanation: The final parts of the task!
Now we want to write out the documents split into three parts to a CSV file
Then just for fun we construct a frequency list of all the words in the documents
End of explanation
from collections import Counter
Explanation: Frequency counts
The <code>Counter</code> function generates a dictionary-like object with words as keys and the number of times they occur in a sequence as values
It is a quick way to generate a word frequency list from a set of tokenized documents (i.e., where the text has been turned into a list of words)
End of explanation
count = Counter(['a','a','v','c','d','e','a','c'])
Explanation: Simple example of using <code>Counter</code>
<code>Counter</code> works by passing it a list of items, e.g.:
End of explanation
count.items()
Explanation: and it returns a dictionary with a count for the number of times each item occurs:
End of explanation
count['a'] # how many times does 'a' occur in the list ['a','a','v','c','d','e','a','c']
Explanation: Just like a dictionary you can get the frequency for a specific item like this:
End of explanation
text1 = 'This is a text with some words in it'
text2 = 'This is another text with more words that the other one has in it'
tokens1 = text1.lower().split()
tokens2 = text2.lower().split()
print('text1:', tokens1)
print('text2:', tokens2)
Explanation: <code>Counter</code> object has an <code>update</code> method that allows multiple lists to be counted.
End of explanation
freq = Counter()
Explanation: First define a new <code>Counter</code>
End of explanation
freq.update(tokens1)
freq.items()
Explanation: Then update it with the words from text1
End of explanation
freq.update(tokens2)
freq.items()
Explanation: Then update it again with the words from text2
End of explanation
texts = [
'This is the first text and it has words',
'This is the second text and it has some more words',
'Finally this one has the most words of all three examples words and words and words'
]
freq2 = Counter()
for text in texts:
# turn the text into lower case and split on whitespace
tokens = text.lower().split()
freq2.update(tokens)
# show the top 10 most frequent words
print(freq2.most_common(10))
Explanation: A simple example of looping of some texts and generating a frequency list using Counter
End of explanation
for item in freq2.most_common(7):
print("{}\t\t{}".format(item[0],item[1]))
Explanation: Finally lets make the formatting a bit more pretty by looping over the frequency list and producing a tab separated table:
End of explanation
freq_list = Counter(docs2[0]['body'].lower().split())
freq_list.most_common()
Explanation: Counting words in the LexisNexus documents
For a single LexisNexus document loaded in from the CSV file (the list of dictionaries), we select the document by index and then the body component:
End of explanation
freq_list['vaping']
Explanation: Find the frequency of the word vaping
End of explanation
freq_list_all = Counter()
for doc in docs2:
body_text = doc['body']
tokens = body_text.lower().split()
freq_list_all.update(tokens)
print(freq_list_all.most_common())
Explanation: Finally we can create a frequency list for all the documents with a loop and using the update function on the Counter object.
End of explanation |
1,112 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fixed and Random Effect Models
Step1: Exploring Within-Group Variation and Between-Group Variation
Multilevel models make sense in cases where we might expect there to be variation between groups (or in the time case, variation/differences between individuals across multible observations).
Within group variation
Step2: Predicting Math Achievement from SES with Linear Models
Another way to look at the group variation, let's compare two basic linear models, one of individual students, one with school means. In the following linear models, we see that there is there is indeed a relationship between SES and math achivement between school mean math achievement. However, just as the student-level model doesn't explain the group-level variation between schools, the school-level model doesn't explain the individual-level variation within schools. This is especially evident at high and low levels of SES or math achievement, where the group level model doesn't extend.
Step3: Fixed Effect Model
Step4: Plotting the Fixed Effects Model for Individual Schools
We can plot the fixed effects model on a school-level basis using the model that uses the school mean as a covariate and then plotting the model for prototypical values of individual schools.
It's important to note however that this model makes no claims about the population of all schools -- it just models the relationship between student math achievement and student SES, holding constant the variation in SES between schools.
Step5: Random Effects Model
A random effects model makes the assumption that the variance attributable to groups is normal in the population (and centered on 0), allowing us to make claims about groups, and not just individual observations. In the school example, this would allow us to make claims about schools as well as the individuals within them. In a timeseries example, this is equally important, because it's only in the random effects world that we would be able to make claims about the population of things that we're observing over time. Correspondingly, our population model for the random effects model, although it has the same terms of the fixed effects model, has an additional assumption about $u_{j}$
Step6: In the above results, the ses coefficient of 2.390 is
Step7: Calculate Within Group Variance, Between Group Variance, and Intraclass Correlation of a Random Effects Models
In a random effects model, The intraclass correlation can be interpreted as
Step8: In this case, we see that there is a low intraclass correlation, suggesting that most of the variation in math achievement scores is within schools, but that there is a significant difference between the math achievement of schools on average in the model as well (as indicated by the Z test).
Using Pseudo-$R^{2}$ To Describe Changes in Variance Components
If we are interested in differences between the proportion of between-group or within group variance accounted for by competing models, we can generate a pseudo-$R^{2}$ measure by comparing the between and within group variances to those of a baseline model.
Step9: Calculating Pseudo-$R^{2}$
To calculate pseudo-$R^{2}$, we use the following equations
Step10: In the above pseudo $R^{2}$ calculations, we see that our model of math achievement on SES accounts for 8.44% of the between-group variation and 46.43% of the within-group variation. This is consistent with our intraclass correlation, which shows that in the model there is much more within-group variation than betwen-group variation.
Level Two Predictors
Step11: Now add an interaction between sector and SES
Step12: Plotting the Random Effects Model with a Level 2 Interaction
Showing Predictions of Students in Prototypical Schools
Step13: Plot Predictions for Catholic and Public Schools | Python Code:
# THINGS TO IMPORT
# This is a baseline set of libraries I import by default if I'm rushed for time.
%matplotlib inline
import codecs # load UTF-8 Content
import json # load JSON files
import pandas as pd # Pandas handles dataframes
import numpy as np # Numpy handles lots of basic maths operations
import matplotlib.pyplot as plt # Matplotlib for plotting
import seaborn as sns # Seaborn for beautiful plots
from dateutil import * # I prefer dateutil for parsing dates
import math # transformations
import statsmodels.formula.api as smf # for doing statistical regression
import statsmodels.api as sm # access to the wider statsmodels library, including R datasets
from collections import Counter # Counter is useful for grouping and counting
import scipy
from patsy import dmatrices
# High School and Beyond Dataset
# https://nces.ed.gov/surveys/hsb/
import urllib2
import os.path
if(os.path.isfile("hsb.dta")!=True):
response = urllib2.urlopen("http://www.stata-press.com/data/mlmus3/hsb.dta")
if(response.getcode()==200):
f = open("hsb.dta","w")
f.write(response.read())
f.close()
hsb_df = pd.read_stata("hsb.dta")
print hsb_df[['mathach','ses']].describe()
print
print "CROSSTAB"
print pd.crosstab(hsb_df['sector'], [hsb_df['female'],hsb_df['minority']])
Explanation: Fixed and Random Effect Models: Identifying Relationships of Individuals Within and Between Groups
by J. Nathan Matias, April 21, 2015
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
About Random Effects Models
In random effects models, we are fitting a model that includes panel data, either multiple observations for an individual, or multiple observations like a group.
Why do we use a different approach in this case? Groups and individuals are great examples of cases where the linear regression assumption of "independence of observations" does not apply. Imagine if we have observations of students from several schools. Observations of students(level 1) are not all independent from each other; we can assume that some of the variation in the observations comes from unobserved variables that students share within the same school, and that school experiences differ from each other at the group level(level 2) in ways that aren't observed in our dataset. Multilevel models allow us to account for the variation between individuals and also the variation between groups.
Another use of Multilevel models is to model change, where we have observations from many individuals over time and we want to identify change over time. The individual observed events are grouped by the person in question.
Dataset
The dataset used here is a classic pedagogical dataset, from the High School and Beyond study by the National Center for Education Statistics, which followed high school students starting in 1980, continuing through 1982, 1984, 1986, and 1992. The High School and Beyond study has its own wikipedia page, which includes 48 published studies based on the data.
Research Question: Do Catholic Schools and Students in Catholic Schools have Different Math Achievement from Public Schools, when Controlling For SES?
This example is drawn from the work of Andrew Ho and John Willett, from class examples in the S-052 class at the Harvard Graduate School of Education. It also roughly follows the course of Chapters 3 and 4 of Singer, Judith D., Willett, John B. (2003) Applied Longitudinal Data Analysis: Modeling Change and Event Occurrence. Oxford University Press.
important note: The MixedLM library in statsmodels is relatively recent, so many of the methods outlined by the above authors are not yet possible in Python, notably analysis of variance components of the models and intra-class correlation. There is a Google Summer of Code proposal for 2015 to add variance components to MixedLM, but the announcement was 5 days away when I published this, so we shall have to see. Let's hope it works out. The approach taken here is the likelihood-based approach. Statsmodels MixedLM can also be used with a Generalized Estimating Equation (GEE) approach.
For the Bayesian approach to multilevel methods, Chris Fonnesbeck, assistant prof of biostatistics at Vanderbilt, has published a notebook showing how to do Bayesian multilevel modeling with pymc.*
In this study, we want to know if catholic schools and public schools (and individual students in those schools) differ in their math achievement, when controlling for SES. In order to answer this question, we turn to a random effects model, which assumes that:
* the basic assumptions of linear regression
* the individual residuals (error) are normally distributed in the population
* the group residuals (error) are normally distributed in the population
It's this final assumption that when satisfied allows us to make claims about the population of groups, and not just the groups represented in this dataset. The population model for a random effects model is:
$$y_{ij} = \beta_{0} + \beta_{1}X_{ij} + u_{j} + \epsilon_{ij}$$
$$u_{j} \space \widetilde\space \space i.i.d. N(0, \sigma^{2}_{u})$$
$$\epsilon_{ij} \space \widetilde\space \space i.i.d. N(0, \sigma^{2}_{\epsilon})$$
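To make the two error terms concrete, here is a tiny simulation sketch of data generated under this population model; every number below is an arbitrary illustration, not an estimate from the HSB data:
np.random.seed(0)
n_schools, n_students = 20, 30
u_j = np.random.normal(0, 2.0, n_schools)            # school effects drawn from N(0, sigma_u^2)
sim = pd.DataFrame({'school': np.repeat(range(n_schools), n_students)})
sim['x'] = np.random.normal(0, 1, len(sim))          # a student-level predictor
sim['y'] = 12 + 2.5 * sim['x'] + u_j[sim['school']] + np.random.normal(0, 6, len(sim))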
End of explanation
#generate de-meaned mathach
school_gp = hsb_df.groupby("schoolid").aggregate(np.mean)  # school-level means, defined here so this cell runs on its own
sgp = school_gp.to_dict()
def school_mathach(f):
return float(f.mathach) - sgp['mathach'][f.schoolid]
hsb_df['school_mathach'] = hsb_df.apply(school_mathach, 1)
#make the Side-by-Side Boxplot
fig = plt.figure(num=None, figsize=(8, 20), dpi=80, edgecolor='k')
ax = fig.add_subplot(121)
hsb_df.boxplot("mathach", by="schoolid", ax=ax, vert=False)
plt.title("School Math Achievement", fontsize="16")
ax2 = fig.add_subplot(122)
hsb_df.boxplot("school_mathach", by="schoolid", ax=ax2, vert=False)
plt.title("De-Meaned School Math Achievement", fontsize="16")
plt.show()
Explanation: Exploring Within-Group Variation and Between-Group Variation
Multilevel models make sense in cases where we might expect there to be variation between groups (or in the time case, variation/differences between individuals across multiple observations).
Within group variation: the amount of variation attributable to individuals within a group
Between group variation: the amount of variation attributable between groups
One way to explore within and between group variation is to do boxplots of the outcome by group. When looking at the first plot, we try to gain an intuitive sense of how much the outcome varies by group and how much it varies within groups. In this case, it's not obvious that there are many differences between groups, since so many of the error bars overlap, so we'll have to find another way to assert that difference.
In the second plot, we show the de-meaned math achievement, which allows us to look at the variation within schools, next to each other.
Note that the Pandas boxplot method only shows us the median line, which is why there's some jitter in the second plot. (Matplotlib apparently allows us to specify the mean with meanline=True, but I couldn't get the argument to pass through from Pandas.)
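For a rough numerical companion to the boxplots, one can compare the spread of the school means with the spread of the de-meaned scores (a sketch only; this is an informal decomposition, not the model-based variance components):
between_var = hsb_df.groupby('schoolid')['mathach'].mean().var()   # spread of the school means
within_var = hsb_df['school_mathach'].var()                        # spread of the de-meaned scores
between_var, within_var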
End of explanation
result = smf.ols(formula = "mathach ~ ses",
data = hsb_df).fit()
print "==========================================================="
print "MODEL 1: Regressing Student Math Achievement on Student SES"
print result.summary()
plt.figure(num=None, figsize=(12, 6), dpi=80, facecolor='w', edgecolor='k')
plt.scatter(hsb_df.ses, hsb_df.mathach, marker=".", color="c")
student_line, = plt.plot(hsb_df['ses'], result.predict(), "-", color="c")
#plt.title("Predicting Math Achievement from SES Across all 7185 students", fontsize="16")
school_gp = hsb_df.groupby("schoolid").aggregate(np.mean)
result = smf.ols(formula = "mathach ~ ses",
data = school_gp).fit()
print
print "==================================================================="
print "MODEL 2: Regressing Mean School Math Achievement on Mean School SES"
print result.summary()
#plt.figure(num=None, figsize=(12, 6), dpi=80, facecolor='w', edgecolor='k')
plt.scatter(school_gp.ses, school_gp.mathach, marker=".", color="r")
school_line, = plt.plot(school_gp.ses, result.predict(), "-", color="r")
plt.title("Predicting Math Achievement Scores from SES with Linear Regression", fontsize="16")
plt.legend([student_line, school_line], ['All Students', 'School Means'], fontsize="14")
plt.show()
Explanation: Predicting Math Achievement from SES with Linear Models
Another way to look at the group variation: let's compare two basic linear models, one of individual students and one of school means. In the following linear models, we see that there is indeed a relationship between SES and math achievement, both at the individual student level and between school means. However, just as the student-level model doesn't explain the group-level variation between schools, the school-level model doesn't explain the individual-level variation within schools. This is especially evident at high and low levels of SES or math achievement, where the group-level model doesn't extend.
End of explanation
# calculate the demeaned_ses for each student
def demeaned_ses(f):
return f.ses - school_gp.to_dict()['ses'][f['schoolid']]
# add the school mean SES to the dataframe for each student
def schoolmean_ses(f):
return school_gp.to_dict()['ses'][f['schoolid']]
hsb_df['demeaned_ses'] = hsb_df.apply(demeaned_ses, axis=1)
hsb_df['schoolmean_ses'] = hsb_df.apply(schoolmean_ses, axis=1)
result_school_covary = smf.ols(formula = "mathach ~ ses + schoolmean_ses",
data = hsb_df).fit()
print "MODEL: Regressing Student Math Achievement on De-meaned Student SES"
print result_school_covary.params
result = smf.ols(formula = "mathach ~ demeaned_ses",
data = hsb_df).fit()
print
print "MODEL: Regressing Student Math Achievement on De-meaned Student SES"
print result.params
print
print "Notice how the slope for *ses* is the same as the slope for *demeaned_ses* in the two models"
print
plt.figure(num=None, figsize=(12, 6), dpi=80, facecolor='w', edgecolor='k')
plt.scatter(hsb_df.demeaned_ses, hsb_df.mathach, marker=".", color="darkgrey")
student_line, = plt.plot(hsb_df['demeaned_ses'], result.predict(), "-", color="darkgrey")
plt.title("Predicting Math Achievement Scores from De-meaned SES", fontsize="16")
plt.xlabel("De-meaned Socio-Economic Status", fontsize="14")
plt.ylabel("Math Achievement", fontsize="14")
plt.show()
Explanation: Fixed Effect Model: Predicting Math Achievement with De-Meaned SES
In the fixed effects model, we add $u_{j}$ to the model, denoting the group level variance, absorbing all the variance between groups in order to better estimate the variance within groups.
$$y_{ij} = \beta_{0} + \beta_{1}X_{ij} + \mathbf{u_{j}} + \epsilon_{ij}$$
$$\epsilon_{ij} \space \widetilde\space \space i.i.d. N(0, \sigma^{2}_{\epsilon})$$
In practice, we can do this in several equivalent ways. I show two here:
* add the school mean SES as a predictor to the model
* replace SES with the "de-meaned" SES rather than the SES as our predictor. In the de-meaned model, instead of using the SES of the individual in the school, we are using amount by which that student differs from the mean SES in the school.
In both cases, we effectively remove the variation between groups from the model. The resulting SES or demeaned_ses predictor models the within-school variation between students.
In the following models, notice how the slope for ses is the same as the slope for demeaned_ses in the two models.
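A quick side-by-side check of that claim, as a sketch relying on the two fits above:
result_school_covary.params['ses'], result.params['demeaned_ses']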
End of explanation
# highlight the maximum, and minimum
max_school = school_gp[school_gp['ses'] == school_gp.ses.max()].index[0]
min_school = school_gp[school_gp['ses'] == school_gp.ses.min()].index[0]
hsb_df['fixed_preds'] = result_school_covary.predict()
plt.figure(num=None, figsize=(12, 6), dpi=80, edgecolor='k')
for schoolid in hsb_df.schoolid.unique():
if(schoolid!=max_school and schoolid!=min_school):
plt.scatter(hsb_df[hsb_df.schoolid == schoolid].ses, hsb_df[hsb_df.schoolid == schoolid].mathach, marker=".", color="lightgrey")
for schoolid in hsb_df.schoolid.unique():
if(schoolid == max_school):
plt.scatter(hsb_df[hsb_df.schoolid == schoolid].ses, hsb_df[hsb_df.schoolid == schoolid].mathach, marker=".", color="r")
maxline, = plt.plot(hsb_df[hsb_df.schoolid == schoolid].ses, hsb_df[hsb_df.schoolid == schoolid].fixed_preds, "-", color="r")
elif(schoolid == min_school):
plt.scatter(hsb_df[hsb_df.schoolid == schoolid].ses, hsb_df[hsb_df.schoolid == schoolid].mathach, marker=".", color="b")
minline, = plt.plot(hsb_df[hsb_df.schoolid == schoolid].ses, hsb_df[hsb_df.schoolid == schoolid].fixed_preds, "-", color="b")
plt.legend([maxline, minline], ['School with Max SES', 'School with Min SES'], fontsize="12")
plt.title("Fixed Effects Model Predicting Math Achievement Scores from SES & School Mean SES", fontsize="16")
plt.xlabel("Socio-Economic Status", fontsize="14")
plt.ylabel("Math Achivement", fontsize="14")
plt.show()
Explanation: Plotting the Fixed Effects Model for Individual Schools
We can plot the fixed effects model on a school-level basis using the model that uses the school mean as a covariate and then plotting the model for prototypical values of individual schools.
It's important to note however that this model makes no claims about the population of all schools -- it just models the relationship between student math achievement and student SES, holding constant the variation in SES between schools.
End of explanation
##http://statsmodels.sourceforge.net/devel/mixed_linear.html
md = smf.mixedlm("mathach ~ ses", data=hsb_df, groups=hsb_df["schoolid"])
result = md.fit()
print result.summary()
Explanation: Random Effects Model
A random effects model makes the assumption that the variance attributable to groups is normal in the population (and centered on 0), allowing us to make claims about groups, and not just individual observations. In the school example, this would allow us to make claims about schools as well as the individuals within them. In a timeseries example, this is equally important, because it's only in the random effects world that we would be able to make claims about the population of things that we're observing over time. Correspondingly, our population model for the random effects model, although it has the same terms of the fixed effects model, has an additional assumption about $u_{j}$:
$$y_{ij} = \beta_{0} + \beta_{1}X_{ij} + u_{j} + \epsilon_{ij}$$
$$\mathbf{u_{j} \sim \text{i.i.d. } N(0, \sigma^{2}_{u})}$$
$$\epsilon_{ij} \sim \text{i.i.d. } N(0, \sigma^{2}_{\epsilon})$$
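As a quick sketch of where those two variance components live in statsmodels, the fitted MixedLM result exposes them directly (cov_re is the estimated variance of the random intercept $u_{j}$, here a 1 x 1 matrix, and scale is the residual variance $\sigma^{2}_{\epsilon}$):
print result.cov_re    # estimated variance of the random intercept u_j
print result.scale     # estimated residual (within-group) variance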
End of explanation
plt.figure(num=None, figsize=(12, 6), dpi=80, facecolor='w', edgecolor='k')
result = smf.ols(formula = "mathach ~ ses",
data = hsb_df).fit()
print "MODEL 1: Regressing Student Math Achievement on Student SES"
plt.scatter(hsb_df.ses, hsb_df.mathach, marker=".", color="c")
student_line, = plt.plot(hsb_df['ses'], result.predict(), "-", color="c")
school_gp = hsb_df.groupby("schoolid").aggregate(np.mean)
result = smf.ols(formula = "mathach ~ ses",
data = school_gp).fit()
print result.summary()
print
print "MODEL 2: Regressing Mean School Math Achievement on Mean School SES"
print result.summary()
#plt.figure(num=None, figsize=(12, 6), dpi=80, facecolor='w', edgecolor='k')
plt.scatter(school_gp.ses, school_gp.mathach, marker=".", color="r")
school_line, = plt.plot(school_gp.ses, result.predict(), "-", color="r")
result = smf.ols(formula = "mathach ~ demeaned_ses",
data = hsb_df).fit()
print "MODEL 3: Regressing Student Math Achievement on De-meaned Student SES"
print result.summary()
#plt.figure(num=None, figsize=(12, 6), dpi=80, facecolor='w', edgecolor='k')
demeaned_line, = plt.plot(hsb_df['demeaned_ses'], result.predict(), "-", color="darkgrey")
print
print "MODEL 4: Regressing Student Math Achievement on Student SES Grouped by School in a Random Effects Model"
md = smf.mixedlm("mathach ~ ses", data=hsb_df, groups=hsb_df["schoolid"])
result = md.fit()
print result.summary()
def predict(x, key, result):
return result.params.Intercept + result.params['ses']*x
ses = np.linspace(hsb_df.ses.min(), hsb_df.ses.max(), 100)
preds = [predict(x, 'ses',result) for x in ses]
multi_line, = plt.plot(ses, preds, "-", color="m")
plt.title("Predicting Math Achievement Scores from SES (schools=160) (students=7185)", fontsize="16")
plt.legend([student_line, school_line, multi_line, demeaned_line], ['All Students (Total)', 'School Means (Between)', "Random Effects", "De-Meaned (within group, Fixed)"])
plt.show()
Explanation: In the above results, the ses coefficient of 2.390 is:
* the slope of the relationship between math achievement and ses among the population of students within a school
* also the slope of the relationship between math achievement and ses among the population of schools (In a timeseries situation, this becomes especially meaningful in cases where we add further covariates that explain differences between individuals over time)
Comparing Linear, Grouped, De-Meaned, and Mixed Effects Models
For fun, here is a plot that shows all of the models that we have fit so far on the same plot.
End of explanation
##http://statsmodels.sourceforge.net/devel/mixed_linear.html
md = smf.mixedlm("mathach ~ ses", data=hsb_df, groups=hsb_df["schoolid"])
result = md.fit()
print result.summary()
#store the model results to a variable
models = {}
m = "Model1"
models[m] = {}
models[m]['result'] = result
def individual_residuals(f):
observed_individual = f.mathach
predicted_individual = result.params.Intercept + result.params['ses']*f.ses
return observed_individual - predicted_individual
def group_residuals(f):
observed_group = school_gp.to_dict()['mathach'][f.schoolid]
predicted_group = result.params.Intercept + result.params['ses']*f.schoolmean_ses
return predicted_group - observed_group
group_count = school_gp.count()[0]
indiv_count = hsb_df.count()[0]
resid_u = hsb_df.apply(group_residuals, 1)
models[m]["sigma_u"] = np.std(resid_u)
models[m]["sigma_u_err"] = models[m]["sigma_u"]/math.sqrt(group_count)
resid_e = hsb_df.apply(individual_residuals, 1)
models[m]["sigma_e"] = np.std(resid_e)
models[m]["sigma_e_err"] = models[m]["sigma_e"]/math.sqrt(indiv_count)
models[m]["icc"] = math.pow(models[m]["sigma_u"],2)/(math.pow(models[m]["sigma_u"],2) + math.pow(models[m]["sigma_e"],2))
models[m]["icc_err"] = icc/math.sqrt(group_count)
print " stdev stderr"
print "sigma_u (between group variation): %(s).04f %(e).04f" % {'s':models[m]["sigma_u"],
'e':models[m]["sigma_u_err"]}
print "sigma_e (within group variation): %(s).04f %(e).04f" % {'s':models[m]["sigma_e"],
'e':models[m]["sigma_e_err"]}
print "intraclass correlation: %(i).04f %(e).04f" % {'i':models[m]["icc"],
'e':models[m]["icc_err"]}
print
print "Z-Test of intraclass correlation:"
print " H0: icc = 0 in the population"
print " test-statistic: z=icc/SE(icc)"
print " decision rule: z>z_crit"
print " critical value: 1.96"
print " z = %(z).04f" %{'z':models[m]["icc"] /models[m]["icc_err"]}
Explanation: Calculate Within Group Variance, Between Group Variance, and Intraclass Correlation of a Random Effects Model
In a random effects model, the intraclass correlation can be interpreted as:
* the group variation as a proportion of total variance
* the proportion of overall variation in math scores "accounted for by" the group
Intraclass correlation is apparently still being debated, as outlined in the Wikipedia page for intraclass correlation, and some people avoid this measure entirely.
The statsmodels MixedLM model doesn't include within it any analysis of residuals, so if we want to consider the intraclass correlation in the model, we have to do it ourselves. I've written a method to collect the individual and group residuals.
Note that I think the calculations presented here are correct, but I have only run them against a single test case, so you may want to doublecheck my work before lifting this code.
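For reference, the quantity computed in the code above is the usual variance-components form of the intraclass correlation:
$$\rho = \frac{\sigma^{2}_{u}}{\sigma^{2}_{u} + \sigma^{2}_{\epsilon}}$$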
End of explanation
# now generate the baseline model
md = smf.mixedlm("mathach ~ 1", data=hsb_df, groups=hsb_df["schoolid"])
result = md.fit()
print result.summary()
def individual_residuals(f):
observed_individual = f.mathach
predicted_individual = result.params.Intercept
return observed_individual - predicted_individual
def group_residuals(f):
observed_group = school_gp.to_dict()['mathach'][f.schoolid]
predicted_group = result.params.Intercept
return predicted_group - observed_group
group_count = school_gp.count()[0]
indiv_count = hsb_df.count()[0]
m = "Model0"
models[m] = {}
models[m]['result'] = result
resid_u = hsb_df.apply(group_residuals, 1)
models[m]["sigma_u"] = np.std(resid_u)
models[m]["sigma_u_err"] = models[m]["sigma_u"]/math.sqrt(group_count)
resid_e = hsb_df.apply(individual_residuals, 1)
models[m]["sigma_e"] = np.std(resid_e)
models[m]["sigma_e_err"] = models[m]["sigma_e"]/math.sqrt(indiv_count)
models[m]["icc"] = math.pow(models[m]["sigma_u"],2)/(math.pow(models[m]["sigma_u"],2) + math.pow(models[m]["sigma_e"],2))
models[m]["icc_err"] = icc/math.sqrt(group_count)
print " stdev stderr"
print "sigma_u (between group variation): %(s).04f %(e).04f" % {'s':models[m]["sigma_u"],
'e':models[m]["sigma_u_err"]}
print "sigma_e (within group variation): %(s).04f %(e).04f" % {'s':models[m]["sigma_e"],
'e':models[m]["sigma_e_err"]}
print "intraclass correlation: %(i).04f %(e).04f" % {'i':models[m]["icc"],
'e':models[m]["icc_err"]}
print
print "Z-Test of intraclass correlation:"
print " H0: icc = 0 in the population"
print " test-statistic: z=icc/SE(icc)"
print " decision rule: z>z_crit"
print " critical value: 1.96"
print " z = %(z).04f" %{'z':models[m]["icc"] /models[m]["icc_err"]}
Explanation: In this case, we see that there is a low intraclass correlation, suggesting that most of the variation in math achievement scores is within schools, but that there is a significant difference between the math achievement of schools on average in the model as well (as indicated by the Z test).
Using Pseudo-$R^{2}$ To Describe Changes in Variance Components
If we are interested in differences between the proportion of between-group or within group variance accounted for by competing models, we can generate a pseudo-$R^{2}$ measure by comparing the between and within group variances to those of a baseline model.
End of explanation
m0 = "Model0"
m1 = "Model1"
r2_u = (math.pow(models[m0]['sigma_u'], 2) - math.pow(models[m1]['sigma_u'], 2))/math.pow(models[m0]['sigma_u'], 2)
print "Pseudo R^2 for group variation: %(r).03f%%" % {'r':r2_u*100}
r2_e = (math.pow(models[m0]['sigma_e'], 2) - math.pow(models[m1]['sigma_e'], 2))/math.pow(models[m0]['sigma_e'], 2)
print "Pseudo R^2 for individual variation: %(r).03f%%" % {'r':r2_e*100}
Explanation: Calculating Pseudo-$R^{2}$
To calculate pseudo-$R^{2}$, we use the following equations:
Between group variation: $R^{2}_{u} = (\sigma_{u,0}^{2} - \sigma_{u,1}^{2})/\sigma_{u,0}^{2}$
Within group variation: $R^{2}_{e} = (\sigma_{e,0}^{2} - \sigma_{e,1}^{2})/\sigma_{e,0}^{2}$
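A small helper makes the same proportional-reduction-in-variance calculation reusable (a sketch mirroring the inline computation above):
def pseudo_r2(sigma_baseline, sigma_model):
    # proportion of a baseline variance component explained by the richer model
    return (sigma_baseline**2 - sigma_model**2)/sigma_baseline**2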
End of explanation
# in this dataset, sector refers to whether the school is catholic(1) or public(0)
from patsy import dmatrices
md = smf.mixedlm("mathach ~ ses + sector", data=hsb_df, groups=hsb_df["schoolid"])
result = md.fit()
print result.summary()
def individual_residuals(f):
observed_individual = f.mathach
predicted_individual = result.params.Intercept + result.params['ses']*f.ses + result.params['sector']*f.sector
return observed_individual - predicted_individual
def group_residuals(f):
observed_group = school_gp.to_dict()['mathach'][f.schoolid]
predicted_group = result.params.Intercept + result.params['ses']*f.schoolmean_ses + result.params['sector']*f.sector
return predicted_group - observed_group
group_count = school_gp.count()[0]
indiv_count = hsb_df.count()[0]
m = "Model2"
models[m] = {}
models[m]['result'] = result
resid_u = hsb_df.apply(group_residuals, 1)
models[m]["sigma_u"] = np.std(resid_u)
models[m]["sigma_u_err"] = models[m]["sigma_u"]/math.sqrt(group_count)
resid_e = hsb_df.apply(individual_residuals, 1)
models[m]["sigma_e"] = np.std(resid_e)
models[m]["sigma_e_err"] = models[m]["sigma_e"]/math.sqrt(indiv_count)
models[m]["icc"] = math.pow(models[m]["sigma_u"],2)/(math.pow(models[m]["sigma_u"],2) + math.pow(models[m]["sigma_e"],2))
models[m]["icc_err"] = icc/math.sqrt(group_count)
print " stdev stderr"
print "sigma_u (between group variation): %(s).04f %(e).04f" % {'s':models[m]["sigma_u"],
'e':models[m]["sigma_u_err"]}
print "sigma_e (within group variation): %(s).04f %(e).04f" % {'s':models[m]["sigma_e"],
'e':models[m]["sigma_e_err"]}
print "intraclass correlation: %(i).04f %(e).04f" % {'i':models[m]["icc"],
'e':models[m]["icc_err"]}
print
print "Z-Test of intraclass correlation:"
print " H0: icc = 0 in the population"
print " test-statistic: z=icc/SE(icc)"
print " decision rule: z>z_crit"
print " critical value: 1.96"
print " z = %(z).04f" %{'z':models[m]["icc"] /models[m]["icc_err"]}
print
m0 = "Model0"
m1 = "Model2"
r2_u = (math.pow(models[m0]['sigma_u'], 2) - math.pow(models[m1]['sigma_u'], 2))/math.pow(models[m0]['sigma_u'], 2)
print "Pseudo R^2 for group variation: %(r).03f%%" % {'r':r2_u*100}
r2_e = (math.pow(models[m0]['sigma_e'], 2) - math.pow(models[m1]['sigma_e'], 2))/math.pow(models[m0]['sigma_e'], 2)
print "Pseudo R^2 for individual variation: %(r).03f%%" % {'r':r2_e*100}
Explanation: In the above pseudo $R^{2}$ calculations, we see that our model of math achievement on SES accounts for 8.44% of the between-group variation and 46.43% of the within-group variation. This is consistent with our intraclass correlation, which shows that in the model there is much more within-group variation than between-group variation.
Level Two Predictors: Testing Our Hypothesis About Catholic Schools
At the beginning of this example, we asked if Catholic Schools and Students in Catholic Schools have different Math Achievement from Public Schools, when Controlling For SES? To answer this question, we add another predictor, a "level-2," group-level predictor that contains information about schools rather than individual students.
End of explanation
# in this dataset, sector refers to whether the school is catholic(1) or public(0)
from patsy import dmatrices
md = smf.mixedlm("mathach ~ ses + sector + sector:ses", data=hsb_df, groups=hsb_df["schoolid"])
result = md.fit()
print result.summary()
def individual_residuals(f):
observed_individual = f.mathach
predicted_individual = result.params.Intercept + result.params['ses']*f.ses + result.params['sector']*f.sector + result.params['sector:ses']*f.sector*f.ses
return observed_individual - predicted_individual
def group_residuals(f):
observed_group = school_gp.to_dict()['mathach'][f.schoolid]
predicted_group = result.params.Intercept + result.params['ses']*f.schoolmean_ses + result.params['sector']*f.sector + result.params['sector:ses']*f.sector*f.ses
return predicted_group - observed_group
group_count = school_gp.count()[0]
indiv_count = hsb_df.count()[0]
m = "Model3"
models[m] = {}
models[m]['result'] = result
resid_u = hsb_df.apply(group_residuals, 1)
models[m]["sigma_u"] = np.std(resid_u)
models[m]["sigma_u_err"] = models[m]["sigma_u"]/math.sqrt(group_count)
resid_e = hsb_df.apply(individual_residuals, 1)
models[m]["sigma_e"] = np.std(resid_e)
models[m]["sigma_e_err"] = models[m]["sigma_e"]/math.sqrt(indiv_count)
models[m]["icc"] = math.pow(models[m]["sigma_u"],2)/(math.pow(models[m]["sigma_u"],2) + math.pow(models[m]["sigma_e"],2))
models[m]["icc_err"] = icc/math.sqrt(group_count)
print " stdev stderr"
print "sigma_u (between group variation): %(s).04f %(e).04f" % {'s':models[m]["sigma_u"],
'e':models[m]["sigma_u_err"]}
print "sigma_e (within group variation): %(s).04f %(e).04f" % {'s':models[m]["sigma_e"],
'e':models[m]["sigma_e_err"]}
print "intraclass correlation: %(i).04f %(e).04f" % {'i':models[m]["icc"],
'e':models[m]["icc_err"]}
print
print "Z-Test of intraclass correlation:"
print " H0: icc = 0 in the population"
print " test-statistic: z=icc/SE(icc)"
print " decision rule: z>z_crit"
print " critical value: 1.96"
print " z = %(z).04f" %{'z':models[m]["icc"] /models[m]["icc_err"]}
print
m0 = "Model0"
m1 = "Model3"
r2_u = (math.pow(models[m0]['sigma_u'], 2) - math.pow(models[m1]['sigma_u'], 2))/math.pow(models[m0]['sigma_u'], 2)
print "Pseudo R^2 for group variation: %(r).02f%%" % {'r':r2_u*100}
r2_e = (math.pow(models[m0]['sigma_e'], 2) - math.pow(models[m1]['sigma_e'], 2))/math.pow(models[m0]['sigma_e'], 2)
print "Pseudo R^2 for individual variation: %(r).02f%%" % {'r':r2_e*100}
Explanation: Now add an interaction between sector and SES
End of explanation
#step one: find prototypical values of a catholic and a public school with an SES of 0.
school_gp['p_abs_ses']=school_gp[np.isclose(school_gp.sector, 0.)].ses.map(lambda x: abs(x))
school_gp['c_abs_ses']=school_gp[np.isclose(school_gp.sector, 1.)].ses.map(lambda x: abs(x))
#public school with SES closest to 0: 1946
print school_gp[(np.isclose(school_gp.p_abs_ses,school_gp.p_abs_ses.min())) & (np.isclose(school_gp.sector, 0.))].ses
#catholic school with SES closest to 0: 5650
print school_gp[(np.isclose(school_gp.c_abs_ses,school_gp.c_abs_ses.min())) & (np.isclose(school_gp.sector, 1.))].ses
p_school = 1946
c_school = 5650
def predict(f):
return result.params.Intercept + result.params['ses']*f.ses + result.params['sector']*f.sector + result.params['sector:ses']*f.sector*f.ses
hsb_df['interaction_preds'] = hsb_df.apply(predict, 1)
plt.figure(num=None, figsize=(12, 6), dpi=80, edgecolor='k')
# PLOT A PREDICTION OF INDIVIDUAL MATH ACHIEVEMENT SCORES
# FOR TWO SCHOOLS
for schoolid in hsb_df.schoolid.unique():
if(schoolid!=max_school and schoolid!=min_school):
plt.scatter(hsb_df[hsb_df.schoolid == schoolid].ses, hsb_df[hsb_df.schoolid == schoolid].mathach, marker=".", color="lightgrey")
for schoolid in hsb_df.schoolid.unique():
if(schoolid == p_school):
plt.scatter(hsb_df[hsb_df.schoolid == schoolid].ses, hsb_df[hsb_df.schoolid == schoolid].mathach, marker=".", color="r")
p_line, = plt.plot(hsb_df[hsb_df.schoolid == schoolid].ses, hsb_df[hsb_df.schoolid == schoolid].interaction_preds, "-", color="r")
elif(schoolid == c_school):
plt.scatter(hsb_df[hsb_df.schoolid == schoolid].ses, hsb_df[hsb_df.schoolid == schoolid].mathach, marker=".", color="b")
c_line, = plt.plot(hsb_df[hsb_df.schoolid == schoolid].ses, hsb_df[hsb_df.schoolid == schoolid].interaction_preds, "-", color="b")
plt.legend([c_line, p_line], ['Students in a Catholic School with Mean SES', 'Students in a Public School with Mean SES'], fontsize="12")
plt.suptitle("Predicting Individual Math Achievement Scores from SES & Sector", fontsize="16")
plt.title("in a Multi-Level Random Effects Model, where SES=0", fontsize="16")
plt.xlabel("Socio-Economic Status", fontsize="14")
plt.ylabel("Math Achivement", fontsize="14")
plt.show()
Explanation: Plotting the Random Effects Model with a Level 2 Interaction
Showing Predictions of Students in Prototypical Schools
End of explanation
# PLOT SCHOOL MEAN CATHOLIC AND PUBLIC SCHOOL MATH ACHIEVEMENT
plt.figure(num=None, figsize=(12, 6), dpi=80, edgecolor='k')
plt.scatter(hsb_df.ses, hsb_df.mathach, marker=".", color="lightgrey")
plt.scatter(school_gp[school_gp.sector==0.].ses, school_gp[school_gp.sector==0.].mathach, color="r")
plt.scatter(school_gp[school_gp.sector==1.].ses, school_gp[school_gp.sector==1.].mathach, color="b")
school_gp['interaction_preds'] = school_gp.apply(predict, 1)
c_line, = plt.plot(school_gp[np.isclose(school_gp.sector, 1.)].ses, school_gp[np.isclose(school_gp.sector, 1.)].interaction_preds, "-", color="b")
p_line, = plt.plot(school_gp[np.isclose(school_gp.sector, 0.)].ses, school_gp[np.isclose(school_gp.sector, 0.)].interaction_preds, "-", color="r")
plt.suptitle("Predicting School Math Achievement Scores from SES & Sector", fontsize="16")
plt.title("in a Multi-Level Random Effects Model", fontsize="16")
plt.legend([c_line, p_line], ['Catholic Schools', 'Public Schools'], fontsize="12")
plt.xlabel("Socio-Economic Status", fontsize="14")
plt.ylabel("Math Achivement", fontsize="14")
plt.show()
Explanation: Plot Predictions for Catholic and Public Schools
End of explanation |
1,113 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Since we announced our collaboration with the World Bank and more partners to create the Open Traffic platform, we’ve been busy. We’ve shared two technical previews of the OSMLR linear referencing system. Now we’re ready to share more about how we’re using Mapzen Map Matching to “snap” GPS-derived locations to OSMLR segments, and how we’re using a data-driven approach to evaluate and improve the algorithms.
A "data-driven" approach to improving map-matching - Part II
Step1: User vars
Step2: Load our old routes
Check out the last notebook in this series if you don't have routes yet. Or just read along!
Step3: 1. Parameter Definitions
The Open Traffic Reporter map-matching service is based on the Hidden Markov Model (HMM) design of Newson and Krumm (2009). HMM's define a class of models that utilize a directed graph structure to represent the probability of observing a particular sequence of events -- or in our case, a particular sequence of road segments that defines a route. For our purposes, it is enough to know that HMM's are, in general, governed by two probability distributions which we've parameterized using $\sigma_z$ and $\beta$, respectively. If, however, you are interested in a more detailed explanation of how HMM's work in the context of map-matching, please see the excellent map-matching primer here written by Mapzen's own routing experts.
<center>The graph structure of an HMM for an example route (from Mapzen's map-matching algorithms primer)</center>
$\sigma_z$
The parameter $\sigma_z$ appears in the equation below which expresses the likelihood of recording a GPS measurement $z_t$ from road segment $r_i$ as the following Gaussian probability density function (PDF)
Step4: b) Search and score
We've wrapped the entire grid search code into the grid_search_hmm_params() function since the internals are pretty messy. Behind the scenes, though, grid_search_hmm_params() proceeds as follows
Step5: 3. Get optimal params
a) Combine the scores for all routes
Because of the slow runtime, grid_search_hmm_params() saves its results incrementally in route-specific files. We must first recombine them into a single master dataframe so that we can query, plot, and otherwise manipulate the data all together
Step6: The first five rows of the combined dataframe
Step7: b) Visualize scores
Now we visually can assess the parameter scores to check for global optima or tease out any other patterns that might persist across the various dimensions of our grid search. The get_param_scores() function will also return a dictionary of optimal parameter values corresponding to each one of the subplots below
Step8: c) Making sense of the results
In our case there does appear to be a "global" trend of decreasing error toward the upper right quadrants of our plots, which correspond to higher $\sigma_z$'s and lower $\beta$'s. If we wanted to optimize our parameters across all sample rates and noise levels at once, the results suggest that our optimal parameters would lie at the maximum $\sigma_z$ and minimum $\beta$ values of our search space. This is an interesting finding, and not entirely satistfying, since it is typically unwise to pick parameter values at the limits of your search space -- who knows what may lie on the other side? Although this could be evidence that we didn't cast a wide enough net in our parameter sweep, there is reason to believe that it might not be worthwhile to search any further
Step9: b) Customizing our search space
Since we know that almost 90% of our GPS data will be generated from within an area dominated by skyscrapers and other forms of dense urban development, this is a case where we actually want to optimize for noisier data. As such, we will perform our parameter sweep over routes generated with 60 m of noise
Step10: Similarly, since A-OK only collects data every 20 seconds, we can make sure we are only optimizing for data generated at this frequency
Step11: We'll search for our optimal $\beta$ and $\sigma_z$ values over the same space as before
Step12: c) Retrieve optimal params
Step13: d) Get scores with optimal params
Generate new routes
Step14: And score the matches using our tuned parameter values
Step15: e) Compare to performance of default params
If we don't specify a value for $\sigma_z$ or $\beta$, get_route_metrics() will use the default values
Step16: Compare the before-and-after error distributions | Python Code:
import os
import sys; sys.path.insert(0, os.path.abspath('..'));
import validator.validator as val
import numpy as np
import glob
import pandas as pd
import pickle
import seaborn as sns
from matplotlib import pyplot as plt
from IPython.display import Image
from IPython.core.display import HTML
%matplotlib inline
Explanation: Since we announced our collaboration with the World Bank and more partners to create the Open Traffic platform, we’ve been busy. We’ve shared two technical previews of the OSMLR linear referencing system. Now we’re ready to share more about how we’re using Mapzen Map Matching to “snap” GPS-derived locations to OSMLR segments, and how we’re using a data-driven approach to evaluate and improve the algorithms.
A "data-driven" approach to improving map-matching - Part II:
PARAMETER TUNING
============================================================================================
In the last blog post on Mapzen's map-matching service, we showed you how Mapzen uses synthetic GPS data to validate the results of our map-matching algorithm. This time around, we'll dive a bit deeper into the internals of the algorithm itself to see how we can use our validation metrics to fine-tune the map-matching parameters.
0. Setup test environment
End of explanation
mapzenKey = os.environ.get('MAPZEN_API')
gmapsKey = os.environ.get('GOOGLE_MAPS')
cityName = 'San Francisco'
numRoutes = 200
Explanation: User vars
End of explanation
routeList = pickle.load(open('{0}_{1}_routes.pkl'.format(cityName, numRoutes), 'rb'))
Explanation: Load our old routes
Check out the last notebook in this series if you don't have routes yet. Or just read along!
End of explanation
noiseLevels = np.linspace(0, 100, 6) # in meters
sampleRates = [1, 5, 10, 20, 30] # in seconds
betas = np.linspace(0.5, 10, 20) # no units
sigmas = np.linspace(0.5, 10, 20) # no units
Explanation: 1. Parameter Definitions
The Open Traffic Reporter map-matching service is based on the Hidden Markov Model (HMM) design of Newson and Krumm (2009). HMM's define a class of models that utilize a directed graph structure to represent the probability of observing a particular sequence of events -- or in our case, a particular sequence of road segments that defines a route. For our purposes, it is enough to know that HMM's are, in general, governed by two probability distributions which we've parameterized using $\sigma_z$ and $\beta$, respectively. If, however, you are interested in a more detailed explanation of how HMM's work in the context of map-matching, please see the excellent map-matching primer here written by Mapzen's own routing experts.
<center>The graph structure of an HMM for an example route (from Mapzen's map-matching algorithms primer)</center>
$\sigma_z$
The parameter $\sigma_z$ appears in the equation below which expresses the likelihood of recording a GPS measurement $z_t$ from road segment $r_i$ as the following Gaussian probability density function (PDF):
$$ p(z_t|r_i) = \frac{1}{\sqrt{2 \pi} \sigma_z} e^{-0.5(\frac{||z_t - x_{t,i}||_{\text{great circle}}}{\sigma_z})^2}$$
where $x_{t,i}$ is the point on road segment $r_i$ nearest the measurement $z_t$ at time $t$.
In practice, $\sigma_z$ can be thought of as an estimate of the standard deviation of GPS noise. The more we trust our measurements, the lower the value of our $\sigma_z$ should be. Newson and Krumm (2009) derive $\sigma_z$ from the median absolute deviation over their dataset, arriving at a value of 4.07, which we have adopted as our default parameter value.
$\beta$
The $\beta$ parameter comes from the following exponential PDF:
$$p(d_t) = \frac{1}{\beta}e^{\frac{-d_t}{\beta}}$$
where $d_t$ is the difference between the great circle distance and route-traveled distance between time $t$ and $t+1$. We use an exponential PDF to define this probability because Newson and Krumm (2009) found that for two successive GPS measurements, the differences in length between the "great circle" distance (i.e. "as-the-crow-flies" distances) and the corresponding route-traveled distance are exponentially distributed.
<img src="newson_and_krumm_exp.png" alt="Drawing" width="40%" align="center"/>
<center><i>From Newson and Krumm (2009)</i></center>
In this context, $\beta$ can be thought of as the expected difference between great circle distances and route distance traveled, or in other words, the degree of circuitousness of GPS routes. We have adopted a $\beta$ of 3 as our default parameter value.
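To make the two densities concrete, here is a small illustrative numpy sketch (not part of the validator library; the defaults are the values quoted above):
import numpy as np

def emission_prob(gps_dist, sigma_z=4.07):
    # Gaussian density of the great-circle distance between a GPS point and a candidate segment
    return np.exp(-0.5 * (gps_dist / sigma_z)**2) / (np.sqrt(2 * np.pi) * sigma_z)

def transition_prob(route_dist, great_circle_dist, beta=3.0):
    # exponential density of the difference between route distance and great-circle distance
    d_t = abs(route_dist - great_circle_dist)
    return np.exp(-d_t / beta) / beta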
2. Grid search
Although we can empirically derive our parameter values using the equations above, we can also just search the space of feasible parameter values to find the ones that give the best results. This is a common machine learning approach for algorithm optimization, typically referred to as a grid search or parameter sweep. Grid search requires a reliable scoring mechanism in order to compare results. Lucky for us, we already implemented a number of these in our exploration of validation metrics.
Since we have to pick one to use in the grid search, we'll stick with Newson and Krumm's distance-based error metric for now, since it appears most frequently in the literature. Let's take a quick look back at what those errors looked like over our set of validation data:
<img src="newson_krumm_score.png" alt="Drawing" width="70%" align="center"/>
Clearly the quality of our data, by way of GPS noise and sample rate, has a huge impact on match error. Taking that into consideration, in addition to searching over the feasible space of $\beta$'s and $\sigma_z$'s, we'll want to iterate over different noise levels and sample rates so that we can optimize our map-matching regardless of the quality of the data we receive.
a) Define the search space
Because this grid search takes place in such high dimensional space - $\text{noise levels} \times \text{sample rates} \times \beta \ \text{values} \times \sigma\ \text{values} \times \text{# routes}$ - we use a smaller number of discrete noise levels for this section of the analysis:
End of explanation
val.grid_search_hmm_params(cityName, routeList, sampleRates, noiseLevels, betas, sigmas)
Explanation: b) Search and score
We've wrapped the entire grid search code into the grid_search_hmm_params() function since the internals are pretty messy. Behind the scenes, though, grid_search_hmm_params() proceeds as follows:
for route in routes:
for beta in betas:
for sigma in sigmas:
for noise in noiseLevels:
for rate in sampleRates:
i) simulate gps
ii) match routes
iii) score the route
write scores to a route-specific file
Depending on the dimensions of your search space, grid search can be a long and arduous process. In our case, on a 2015 MacBook Pro, it seems to take ~1 min. per route for a given noise level and sample rate. With the configuration specified above, that works out to roughly 30 minutes per route. While this runtime does indeed seem a bit absurd, the beauty of grid search is that if you properly define your search space, you should really only need to do it once.
End of explanation
frame = val.mergeCsvsToDataFrame(dataDir='../data/sf200/', matchString='*.csv')
Explanation: 3. Get optimal params
a) Combine the scores for all routes
Because of the slow runtime, grid_search_hmm_params() saves its results incrementally in route-specific files. We must first recombine them into a single master dataframe so that we can query, plot, and otherwise manipulate the data all together
End of explanation
frame.head()
Explanation: The first five rows of the combined dataframe:
End of explanation
paramDict = val.get_param_scores(frame, sampleRates, noiseLevels, betas, sigmas, plot=True)
Explanation: b) Visualize scores
Now we visually can assess the parameter scores to check for global optima or tease out any other patterns that might persist across the various dimensions of our grid search. The get_param_scores() function will also return a dictionary of optimal parameter values corresponding to each one of the subplots below:
End of explanation
cityName = 'London'
routes = val.get_POI_routes_by_length(cityName, 1, 5, 100, gmapsKey)
Explanation: c) Making sense of the results
In our case there does appear to be a "global" trend of decreasing error toward the upper right quadrants of our plots, which correspond to higher $\sigma_z$'s and lower $\beta$'s. If we wanted to optimize our parameters across all sample rates and noise levels at once, the results suggest that our optimal parameters would lie at the maximum $\sigma_z$ and minimum $\beta$ values of our search space. This is an interesting finding, and not entirely satistfying, since it is typically unwise to pick parameter values at the limits of your search space -- who knows what may lie on the other side? Although this could be evidence that we didn't cast a wide enough net in our parameter sweep, there is reason to believe that it might not be worthwhile to search any further:
Focusing specifically on the plots in the 0 - 40 m range of noise, it's pretty clear that expanding the range of feasible parameter values wouldn't do any good. You can't get much better than 0 error, and our plots in this range are already dominated by dark green. We can obtain optimal results by picking parameters anywhere in this dark green zone.
<div style="margin: 0 auto; width:100%;">
<img src="https://s3.amazonaws.com/mapzen-assets/images/data-driven-map-matching/noise_vs_sample_rate.gif">
</div>
We expect most of our data to be within this bottom half of the range of noise levels we tested. According to the Institute of Navigation, GPS-enabled smartphones can typically achieve an accuracy of ~5 m. Even taking into consideration the fact that most of our Open Traffic partners will be operating in dense urban environments where GPS is notoriously inaccurate, we still expect to be operating primarily in the realm sub-60 m of noise. It is likely not to our advantage to tune our algorithm parameters to optimize for the worst-case scenario.
In the range of noisier data (> 60 m), although there is very little dark green in the plots, there is also no evidence to suggest that we should expect to do any better by increasing our search space. In fact, our intuition about the limitations of map-matching with extremely noisy GPS data tells us that the opposite is probably true. There is only so much we can expect the algorithm to do with data once its positional accuracy is less than the length of a standard city block.
Performing a grid search over this many different parameters is extremely slow.
Taking these ideas into consideration, it is clear that expanding our search space would yield diminishing returns. Instead, we'll use the results we've already got to make a more informed decision about how we want to fine-tune our parameters. In the next section, we'll implement such an approach for one hypothetical transportation network company (TNC).
4. Case study: "A-OK Rides"
Mapzen's latest Open Traffic partner is a hypothetical TNC called A-OK Rides. A-OK Rides happens to be based in London, and nearly 90% of A-OK trips either start or end in the city's central business district. In order to conserve bandwidth and processing power, the A-OK app only collects GPS measurements from A-OK drivers every 20 seconds. In this case study, we'll see how Mapzen can use this contextual information about our partner's service territory and anticipated data quality in order to improve our match results.
a) Generate routes
End of explanation
noiseLevels = [60] # meters
Explanation: b) Customizing our search space
Since we know that almost 90% of our GPS data will be generated from within an area dominated by skyscrapers and other forms of dense urban development, this is a case where we actually want to optimize for noisier data. As such, we will perform our parameter sweep over routes generated with 60 m of noise:
End of explanation
sampleRates = [20] # seconds
Explanation: Similarly, since A-OK only collects data every 20 seconds, we can make sure we are only optimizing for data generated at this frequency:
End of explanation
betas = np.linspace(0.5, 10, 20)
sigmas = np.linspace(0.5, 10, 20)
val.grid_search_hmm_params(cityName, routes[24:], sampleRates, noiseLevels, betas, sigmas)
Explanation: We'll search for our optimal $\beta$ and $\sigma_z$ values over the same space as before:
End of explanation
paramDf = val.mergeCsvsToDataFrame(dataDir='../data/', matchString="London*.csv")
paramDict = val.get_param_scores(paramDf, sampleRates, noiseLevels, betas, sigmas, plot=False)
beta, sigma = val.get_optimal_params(paramDict, sampleRateStr='20', noiseLevelStr='60')
print('Sigma: {0} // Beta: {1}'.format(sigma, beta))
Explanation: c) Retrieve optimal params
End of explanation
newRoutes = val.get_POI_routes_by_length(cityName, minRouteLength=1, maxRouteLength=5, numResults=100, apiKey=gmapsKey)
Explanation: d) Get scores with optimal params
Generate new routes:
End of explanation
tunedDf, _, _ = val.get_route_metrics(cityName, newRoutes, sampleRates, noiseLevels,
sigmaZ=sigma, beta=beta, saveResults=False)
Explanation: And score the matches using our tuned parameter values:
End of explanation
defaultDf, _, _ = val.get_route_metrics(cityName, newRoutes, sampleRates, noiseLevels, saveResults=False)
Explanation: e) Compare to performance of default params
If we don't specify a value for $\sigma_z$ or $\beta$, get_route_metrics() will use the default values:
End of explanation
val.plot_err_before_after_optimization(defaultDf, tunedDf)
Explanation: Compare the before-and-after error distributions:
End of explanation |
1,114 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hodograph Inset
Layout a Skew-T plot with a hodograph inset into the plot.
Step1: Upper air data can be obtained using the siphon package, but for this example we will use
some of MetPy's sample data.
Step2: We will pull the data out of the example dataset into individual variables and
assign units. | Python Code:
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
import numpy as np
import pandas as pd
import metpy.calc as mpcalc
from metpy.cbook import get_test_data
from metpy.plots import add_metpy_logo, Hodograph, SkewT
from metpy.units import units
Explanation: Hodograph Inset
Layout a Skew-T plot with a hodograph inset into the plot.
End of explanation
col_names = ['pressure', 'height', 'temperature', 'dewpoint', 'direction', 'speed']
df = pd.read_fwf(get_test_data('may4_sounding.txt', as_file_obj=False),
skiprows=5, usecols=[0, 1, 2, 3, 6, 7], names=col_names)
df['u_wind'], df['v_wind'] = mpcalc.get_wind_components(df['speed'],
np.deg2rad(df['direction']))
# Drop any rows with all NaN values for T, Td, winds
df = df.dropna(subset=('temperature', 'dewpoint', 'direction', 'speed',
'u_wind', 'v_wind'), how='all').reset_index(drop=True)
Explanation: Upper air data can be obtained using the siphon package, but for this example we will use
some of MetPy's sample data.
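If you do want live data, a sketch of the siphon route looks like the following (the station and date here are illustrative choices, not part of this example):
from datetime import datetime
from siphon.simplewebservice.wyoming import WyomingUpperAir

# Request the 00Z 4 May 1999 sounding for Norman, OK ('OUN') from the Wyoming archive
df = WyomingUpperAir.request_data(datetime(1999, 5, 4, 0), 'OUN')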
End of explanation
p = df['pressure'].values * units.hPa
T = df['temperature'].values * units.degC
Td = df['dewpoint'].values * units.degC
wind_speed = df['speed'].values * units.knots
wind_dir = df['direction'].values * units.degrees
u, v = mpcalc.get_wind_components(wind_speed, wind_dir)
# Create a new figure. The dimensions here give a good aspect ratio
fig = plt.figure(figsize=(9, 9))
add_metpy_logo(fig, 115, 100)
# Grid for plots
skew = SkewT(fig, rotation=45)
# Plot the data using normal plotting functions, in this case using
# log scaling in Y, as dictated by the typical meteorological plot
skew.plot(p, T, 'r')
skew.plot(p, Td, 'g')
skew.plot_barbs(p, u, v)
skew.ax.set_ylim(1000, 100)
# Add the relevant special lines
skew.plot_dry_adiabats()
skew.plot_moist_adiabats()
skew.plot_mixing_lines()
# Good bounds for aspect ratio
skew.ax.set_xlim(-50, 60)
# Create a hodograph
ax_hod = inset_axes(skew.ax, '40%', '40%', loc=1)
h = Hodograph(ax_hod, component_range=80.)
h.add_grid(increment=20)
h.plot_colormapped(u, v, np.hypot(u, v))
# Show the plot
plt.show()
Explanation: We will pull the data out of the example dataset into individual variables and
assign units.
End of explanation |
1,115 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 4 - Inferences with Gaussians
4.1 Inferring a mean and standard deviation
Inferring the mean and variance of a Gaussian distribution.
$$ \mu \sim \text{Gaussian}(0, .001) $$
$$ \sigma \sim \text{Uniform} (0, 10) $$
$$ x_{i} \sim \text{Gaussian} (\mu, \frac{1}{\sigma^2}) $$
Step1: Note from Junpeng Lao
There might be divergence warnings (a Uniform prior on sigma is not a good idea in general), which you can further visualize below
Step2: 4.2 The seven scientists
The model
Step3: 4.3 Repeated measurement of IQ
The model | Python Code:
# Data
x = np.array([1.1, 1.9, 2.3, 1.8])
n = len(x)
with pm.Model() as model1:
# prior
mu = pm.Normal('mu', mu=0, tau=.001)
sigma = pm.Uniform('sigma', lower=0, upper=10)
# observed
xi = pm.Normal('xi',mu=mu, tau=1/(sigma**2), observed=x)
# inference
trace = pm.sample(1e3, njobs=2)
pm.traceplot(trace[50:]);
from matplotlib.ticker import NullFormatter
nullfmt = NullFormatter() # no labels
y = trace['mu'][50:]
x = trace['sigma'][50:]
# definitions for the axes
left, width = 0.1, 0.65
bottom, height = 0.1, 0.65
bottom_h = left_h = left + width + 0.02
rect_scatter = [left, bottom, width, height]
rect_histx = [left, bottom_h, width, 0.2]
rect_histy = [left_h, bottom, 0.2, height]
# start with a rectangular Figure
plt.figure(1, figsize=(8, 8))
axScatter = plt.axes(rect_scatter)
axHistx = plt.axes(rect_histx)
axHisty = plt.axes(rect_histy)
# no labels
axHistx.xaxis.set_major_formatter(nullfmt)
axHisty.yaxis.set_major_formatter(nullfmt)
# the scatter plot:
axScatter.scatter(x, y, c=[1, 1, 1], alpha=.5)
# now determine nice limits by hand:
binwidth1 = 0.25
axScatter.set_xlim((-.01, 10.5))
axScatter.set_ylim((-0, 5))
bins1 = np.linspace(-.01, 10.5, 20)
axHistx.hist(x, bins=bins1)
bins2 = np.linspace(-0, 5, 20)
axHisty.hist(y, bins=bins2, orientation='horizontal')
axHistx.set_xlim(axScatter.get_xlim())
axHisty.set_ylim(axScatter.get_ylim());
print('The mu estimation is: ', y.mean())
print('The sigma estimation is: ', x.mean())
Explanation: Chapter 4 - Inferences with Gaussians
4.1 Inferring a mean and standard deviation
Inferring the mean and variance of a Gaussian distribution.
$$ \mu \sim \text{Gaussian}(0, .001) $$
$$ \sigma \sim \text{Uniform} (0, 10) $$
$$ x_{i} \sim \text{Gaussian} (\mu, \frac{1}{\sigma^2}) $$
End of explanation
# display the total number and percentage of divergent
divergent = trace['diverging']
print('Number of Divergent %d' % divergent.nonzero()[0].size)
divperc = divergent.nonzero()[0].size/len(trace)
print('Percentage of Divergent %.5f' % divperc)
# scatter plot for the identification of the problematic neighborhoods in parameter space
plt.figure(figsize=(6, 6))
y = trace['mu']
x = trace['sigma']
plt.scatter(x[divergent == 0], y[divergent == 0], color='r', alpha=.05)
plt.scatter(x[divergent == 1], y[divergent == 1], color='g', alpha=.5);
Explanation: Note from Junpeng Lao
There might be divergence warnings (a Uniform prior on sigma is not a good idea in general), which you can further visualize below
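One common remedy -- a sketch, not part of the original chapter -- is to replace the Uniform prior with a weakly-informative HalfNormal prior on sigma:
data = np.array([1.1, 1.9, 2.3, 1.8])
with pm.Model() as model1b:
    mu = pm.Normal('mu', mu=0, tau=.001)
    # positive, weakly-informative prior on sigma instead of a hard Uniform(0, 10)
    sigma = pm.HalfNormal('sigma', sd=5)
    xi = pm.Normal('xi', mu=mu, sd=sigma, observed=data)
    trace1b = pm.sample(1000, njobs=2)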
End of explanation
# data
x = np.array([-27.020,3.570,8.191,9.898,9.603,9.945,10.056])
n = len(x)
with pm.Model() as model2:
# prior
mu = pm.Normal('mu', mu=0, tau=.001)
lambda1 = pm.Gamma('lambda1', alpha=.01, beta=.01, shape=(n))
# sigma = pm.Deterministic('sigma',1 / sqrt(lambda1))
# observed
xi = pm.Normal('xi',mu = mu, tau = lambda1, observed = x )
# inference
trace2 = pm.sample(5000, njobs=2)
burnin = 1000
pm.traceplot(trace2[burnin:]);
mu = trace2['mu'][burnin:]
lambda1 = trace2['lambda1'][burnin:]
print('The mu estimation is: ', mu.mean())
print('The sigma estimation is: ')
for i in np.mean(np.squeeze(lambda1),axis=0):
print(1 / np.sqrt(i))
Explanation: 4.2 The seven scientists
The model:
$$ \mu \sim \text{Gaussian}(0, .001) $$
$$ \lambda_{i} \sim \text{Gamma} (.001, .001) $$
$$ \sigma_{i} = \frac{1}{\sqrt{\lambda_{i}}} $$
$$ x_{i} \sim \text{Gaussian} (\mu, \lambda_{i}) $$
The mean is the same for all seven scientists, while the standard deviations are different
End of explanation
# Data
y = np.array([[90,95,100],[105,110,115],[150,155,160]])
ntest = 3
nsbj = 3
import sys
eps = sys.float_info.epsilon
with pm.Model() as model3:
# mu_i ~ Uniform(0, 300)
mui = pm.Uniform('mui', 0, 300, shape=(nsbj,1))
# sg ~ Uniform(0, 100)
# sg = pm.Uniform('sg', .0, 100)
# It is more stable to use a Gamma prior
lambda1 = pm.Gamma('lambda1', alpha=.01, beta=.01)
sg = pm.Deterministic('sg',1 / np.sqrt(lambda1))
# y ~ Normal(mu_i, sg)
yd = pm.Normal('y', mu=mui, sd=sg, observed=y)
trace3 = pm.sample(5e3, njobs=2)
burnin = 500
pm.traceplot(trace3[burnin:]);
mu = trace3['mui'][burnin:]
sigma = trace3['sg'][burnin:]
print('The mu estimation is: ', np.mean(mu, axis=0))
print('The sigma estimation is: ',sigma.mean())
Explanation: 4.3 Repeated measurement of IQ
The model:
$$ \mu_{i} \sim \text{Uniform}(0, 300) $$
$$ \sigma \sim \text{Uniform} (0, 100) $$
$$ x_{ij} \sim \text{Gaussian} (\mu_{i}, \frac{1}{\sigma^2}) $$
Data Come From Gaussians With Different Means But Common Precision
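For a quick numerical check, the posterior can also be summarized directly (a sketch; the varnames argument follows the PyMC3 version used in this notebook):
pm.summary(trace3[burnin:], varnames=['mui', 'sg'])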
End of explanation |
1,116 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Method 1
Step2: Method 2 | Python Code:
# Import modules
import pandas as pd
import numpy as np
# Create a dataframe
raw_data = {'first_name': ['Jason', 'Molly', np.nan, np.nan, np.nan],
'nationality': ['USA', 'USA', 'France', 'UK', 'UK'],
'age': [42, 52, 36, 24, 70]}
df = pd.DataFrame(raw_data, columns = ['first_name', 'nationality', 'age'])
df
Explanation: Title: Selecting Pandas DataFrame Rows Based On Conditions
Slug: pandas_selecting_rows_on_conditions
Summary: Selecting Pandas DataFrame Rows Based On Conditions
Date: 2016-05-01 12:00
Category: Python
Tags: Data Wrangling
Authors: Chris Albon
Preliminaries
End of explanation
# Create variable with TRUE if nationality is USA
american = df['nationality'] == "USA"
# Create variable with TRUE if age is greater than 50
elderly = df['age'] > 50
# Select all cases where nationality is USA and age is greater than 50
df[american & elderly]
Explanation: Method 1: Using Boolean Variables
End of explanation
# Select all cases where the first name is not missing and nationality is USA
df[df['first_name'].notnull() & (df['nationality'] == "USA")]
Explanation: Method 2: Using variable attributes
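A related option worth knowing (a sketch, not part of the original recipe) is DataFrame.query, which expresses the same kind of condition as a string:
# Select all cases where nationality is USA and age is greater than 50
df.query("nationality == 'USA' and age > 50")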
End of explanation |
1,117 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quick start
Open this page in an interactive mode via Google Colaboratory.
In this quick-start guide we show the basics of working with the t3f library. The main concept of the library is a TensorTrain object -- a compact (factorized) representation of a tensor (=multidimensional array). This is a generalization of the matrix low-rank decomposition.
To begin, let's import some libraries.
Step1: Converting to and from TT-format
Let's start with converting a dense (numpy) matrix into the TT-format, which in this case coincides with the low-rank format.
Step2: The same idea applies to tensors
Step3: Arithmetic operations
T3F is a library of different operations that can be applied to the tensors in the TT-format by working directly with the compact representation, i.e. without the need to materialize the tensors themselves.
Here are some basic examples
Step4: Working with TT-matrices
Recall that for 2-dimensional tensors the TT-format coincides with the matrix low-rank format. However, sometimes matrices can have full matrix rank, but some tensor structure (for example a kronecker product of matrices). In this case there is a special object called Matrix TT-format. You can think of it as a sum of kronecker products (although it's a bit more complicated than that).
Let's say that you have a matrix of size 8 x 27. You can convert it into the matrix TT-format of tensor shape (2, 2, 2) x (3, 3, 3) (in which case the matrix will be represented with 3 TT-cores) or, for example, into the matrix TT-format of tensor shape (4, 2) x (3, 9) (in which case the matrix will be represented with 2 TT-cores).
Step5: But, additionally, you can also compute matrix multiplication between TT-matrices | Python Code:
import numpy as np
# Import TF 2.
%tensorflow_version 2.x
import tensorflow as tf
# Fix seed so that the results are reproducible.
tf.random.set_seed(0)
np.random.seed(0)
try:
import t3f
except ImportError:
# Install T3F if it's not already installed.
!git clone https://github.com/Bihaqo/t3f.git
!cd t3f; pip install .
import t3f
Explanation: Quick start
Open this page in an interactive mode via Google Colaboratory.
In this quick-start guide we show the basics of working with the t3f library. The main concept of the library is a TensorTrain object -- a compact (factorized) representation of a tensor (=multidimensional array). This is a generalization of the matrix low-rank decomposition.
To begin, let's import some libraries.
End of explanation
# Generate a random dense matrix of size 3 x 4.
a_dense = np.random.randn(3, 4)
# Convert the matrix into the TT-format with TT-rank = 3 (the larger the TT-rank,
# the more exactly the tensor will be converted, but the more memory and time
# everything will take). For matrices, matrix rank coincides with TT-rank.
a_tt = t3f.to_tt_tensor(a_dense, max_tt_rank=3)
# a_tt stores the factorized representation of the matrix, namely it stores the matrix
# as a product of two smaller matrices which are called TT-cores. You can
# access the TT-cores directly.
print('factors of the matrix: ', a_tt.tt_cores)
# To check that the conversion into the TT-format didn't change the matrix too much,
# let's convert it back and compare to the original.
reconstructed_matrix = t3f.full(a_tt)
print('Original matrix: ')
print(a_dense)
print('Reconstructed matrix: ')
print(reconstructed_matrix)
Explanation: Converting to and from TT-format
Let's start with converting a dense (numpy) matrix into the TT-format, which in this case coincides with the low-rank format.
End of explanation
# Generate a random dense tensor of size 3 x 2 x 2.
a_dense = np.random.randn(3, 2, 2).astype(np.float32)
# Convert the tensor into the TT-format with TT-rank = 3.
a_tt = t3f.to_tt_tensor(a_dense, max_tt_rank=3)
# The 3 TT-cores are available in a_tt.tt_cores.
# To check that the conversion into the TT-format didn't change the tensor too much,
# let's convert it back and compare to the original.
reconstructed_tensor = t3f.full(a_tt)
print('The difference between the original tensor and the reconstructed '
'one is %f' % np.linalg.norm(reconstructed_tensor - a_dense))
Explanation: The same idea applies to tensors
End of explanation
# Create a random tensor of shape (3, 2, 2) directly in the TT-format
# (in contrast to generating a dense tensor and then converting it to TT).
b_tt = t3f.random_tensor((3, 2, 2), tt_rank=2)
# Compute the Frobenius norm of the tensor.
norm = t3f.frobenius_norm(b_tt)
print('Frobenius norm of the tensor is %f' % norm)
# Compute the TT-representation of the sum or elementwise product of two TT-tensors.
sum_tt = a_tt + b_tt
prod_tt = a_tt * b_tt
twice_a_tt = 2 * a_tt
# Most operations on TT-tensors increase the TT-rank. After applying a sequence of
# operations the TT-rank can increase by too much and we may want to reduce it.
# To do that there is a rounding operation, which finds the tensor that is of
# a smaller rank but is as close to the original one as possible.
rounded_prod_tt = t3f.round(prod_tt, max_tt_rank=3)
a_max_tt_rank = np.max(a_tt.get_tt_ranks())
b_max_tt_rank = np.max(b_tt.get_tt_ranks())
exact_prod_max_tt_rank = np.max(prod_tt.get_tt_ranks())
rounded_prod_max_tt_rank = np.max(rounded_prod_tt.get_tt_ranks())
difference = t3f.frobenius_norm(prod_tt - rounded_prod_tt)
print('The TT-ranks of a and b are %d and %d. The TT-rank '
'of their elementwise product is %d. The TT-rank of '
'their product after rounding is %d. The difference '
'between the exact and the rounded elementwise '
'product is %f.' % (a_max_tt_rank, b_max_tt_rank,
exact_prod_max_tt_rank,
rounded_prod_max_tt_rank,
difference))
Explanation: Arithmetic operations
T3F is a library of different operations that can be applied to the tensors in the TT-format by working directly with the compact representation, i.e. without the need to materialize the tensors themselves.
Here are some basic examples
End of explanation
a_dense = np.random.rand(8, 27).astype(np.float32)
a_matrix_tt = t3f.to_tt_matrix(a_dense, shape=((2, 2, 2), (3, 3, 3)), max_tt_rank=4)
# Now you can work with 'a_matrix_tt' like with any other TT-object, e.g.
print('Frobenius norm of the matrix is %f' % t3f.frobenius_norm(a_matrix_tt))
twice_a_matrix_tt = 2.0 * a_matrix_tt # multiplication by a number.
prod_tt = a_matrix_tt * a_matrix_tt # Elementwise product of two TT-matrices.
Explanation: Working with TT-matrices
Recall that for 2-dimensional tensors the TT-format coincides with the matrix low-rank format. However, sometimes matrices can have full matrix rank, but some tensor structure (for example a kronecker product of matrices). In this case there is a special object called Matrix TT-format. You can think of it as a sum of kronecker products (although it's a bit more complicated than that).
Let's say that you have a matrix of size 8 x 27. You can convert it into the matrix TT-format of tensor shape (2, 2, 2) x (3, 3, 3) (in which case the matrix will be represented with 3 TT-cores) or, for example, into the matrix TT-format of tensor shape (4, 2) x (3, 9) (in which case the matrix will be represented with 2 TT-cores).
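For example, the alternative reshaping mentioned above looks like this (a sketch reusing the same a_dense):
# Same 8 x 27 matrix, but factored as a (4, 2) x (3, 9) TT-matrix, i.e. with 2 TT-cores
a_matrix_tt_2cores = t3f.to_tt_matrix(a_dense, shape=((4, 2), (3, 9)), max_tt_rank=4)
print(len(a_matrix_tt_2cores.tt_cores))  # 2 cores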
End of explanation
vector_tt = t3f.random_matrix(((3, 3, 3), (1, 1, 1)), tt_rank=3)
matvec_tt = t3f.matmul(a_matrix_tt, vector_tt)
# Check that the result coincides with np.matmul.
matvec_expected = np.matmul(t3f.full(a_matrix_tt), t3f.full(vector_tt))
difference = np.linalg.norm(matvec_expected - t3f.full(matvec_tt))
print('Difference between multiplying matrix by vector in '
'the TT-format and then converting the result into '
'dense vector and multiplying dense matrix by '
'dense vector is %f.' % difference)
Explanation: But, additionally, you can also compute matrix multiplication between TT-matrices
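For example (a sketch; the second operand here is an assumed random TT-matrix with a compatible 27 x 8 shape):
b_matrix_tt = t3f.random_matrix(((3, 3, 3), (2, 2, 2)), tt_rank=3)
matmat_tt = t3f.matmul(a_matrix_tt, b_matrix_tt)  # a TT-matrix of shape 8 x 8
print(matmat_tt)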
End of explanation |
1,118 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Delaunay
Here, we'll perform various analyses by constructing graphs and measuring properties of those graphs to learn more about the data
Step1: We'll start with just looking at analysis in euclidian space, then thinking about weighing by synaptic density later. Since we hypothesize that our data will show that tissue varies as we move down the y-axis (z-axis in brain) through cortical layers, an interesting thing to do would be compare properties of the graphs on each layer (ie how does graph connectivity vary as we move through layers).
Let's start by triangulating our data. We'll use Delaunay on each y layer first. Putting our data in the right format for doing graph analysis...
Step2: Now that our data is in the right format, we'll create 52 delaunay graphs. Then we'll perform analyses on these graphs. A simple but useful metric would be to analyze edge length distributions in each layer.
Step3: We're going to need a method to get edge lengths from 2D centroid pairs
Step4: Realizing after all this that simply location is useless. We know the voxels are evenly spaced, which means our edge length data will be all the same. See that the "centroids" are no different
Step5: There is no distance between the two. Therefore it is perhaps more useful to consider a graph that considers node weights. Voronoi is dual to Delaunay, so that's not much of an option. We want something that considers both spacial location and density similarity.
Drawing Graphs
First we look at the default networkx graph plotting
Step6: This is using the spring layout, so we're losing positional information. We can improve the plot by adding position information.
Self Loops
Step7: Interesting. There are no self loops. Why would this be? Let's come back to this. In the meantime, I want to give some though to what it means to have a self loop, whether it should be theoretically possible given our data, and whether our graphs are formed properly.
The answer to this question is very simple. In a RAG, there are no self-loops by definition. Self loops are edges that form a connection between a node and itself.
<img src="../../docs/figures/selfloop.png" width="100">
To see whether the graphs are formed properly, let's look at an adjacency lists
Step8: Compare that to the test data
Step9: X-Layers
Step10: We can see here the number of edges is low in that area that does not have many synapses. It, as expected, mirrors the distribution of synapses. It appears to be approximately uniform at the top, with buffers of very few synapses on the sides. Remember from here | Python Code:
import csv
from scipy.stats import kurtosis
from scipy.stats import skew
from scipy.spatial import Delaunay
import numpy as np
import math
import skimage
import matplotlib.pyplot as plt
import seaborn as sns
from skimage import future
import networkx as nx
from ragGen import *
%matplotlib inline
sns.set_color_codes("pastel")
from scipy.signal import argrelextrema
# Read in the data
data = open('../../data/data.csv', 'r').readlines()
fieldnames = ['x', 'y', 'z', 'unmasked', 'synapses']
reader = csv.reader(data)
reader.next()
rows = [[int(col) for col in row] for row in reader]
# These will come in handy later
sorted_x = sorted(list(set([r[0] for r in rows])))
sorted_y = sorted(list(set([r[1] for r in rows])))
sorted_z = sorted(list(set([r[2] for r in rows])))
Explanation: Delaunay
Here, we'll perform various analyses by constructing graphs and measuring properties of those graphs to learn more about the data
End of explanation
a = np.array(rows)
b = np.delete(a, np.s_[3::],1)
# Separate layers - have to do some wonky stuff to get this to work
b = sorted(b, key=lambda e: e[1])
b = np.array([v.tolist() for v in b])
b = np.split(b, np.where(np.diff(b[:,1]))[0]+1)
Explanation: We'll start with just looking at analysis in Euclidean space, then think about weighting by synaptic density later. Since we hypothesize that our data will show that tissue varies as we move down the y-axis (z-axis in brain) through cortical layers, an interesting thing to do would be to compare properties of the graphs on each layer (i.e. how does graph connectivity vary as we move through layers).
Let's start by triangulating our data. We'll use Delaunay on each y layer first. Putting our data in the right format for doing graph analysis...
End of explanation
graphs = []
centroid_list = []
for layer in b:
centroids = np.array(layer)
# get rid of the y value - not relevant anymore
centroids = np.delete(centroids, 1, 1)
centroid_list.append(centroids)
graph = Delaunay(centroids)
graphs.append(graph)
Explanation: Now that our data is in the right format, we'll create 52 delaunay graphs. Then we'll perform analyses on these graphs. A simple but useful metric would be to analyze edge length distributions in each layer.
End of explanation
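For reference (a small helper sketch, not in the original notebook), the unique edges of one of these triangulations can be enumerated from its simplices:
def delaunay_edges(tri):
    # each 2D simplex (triangle) contributes three undirected edges
    edges = set()
    for i, j, k in tri.simplices:
        edges.update({tuple(sorted((i, j))), tuple(sorted((i, k))), tuple(sorted((j, k)))})
    return edges
print(len(delaunay_edges(graphs[0])))  # number of unique edges in the first layer's triangulation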
def get_d_edge_length(edge):
(x1, y1), (x2, y2) = edge
return math.sqrt((x2-x1)**2 + (y2-y1)**2)
edge_length_list = [[]]
tri_area_list = [[]]
for centroids, del_graph in zip(centroid_list, graphs):  # pair each triangulation with its own layer's centroids
tri_areas = []
edge_lengths = []
triangles = []
for t in centroids[del_graph.simplices]:
triangles.append(t)
p1, p2, p3 = [tuple(map(int, list(v))) for v in t]  # renamed to avoid shadowing the layer list `b` used later
edge_lengths.append(get_d_edge_length((p1, p2)))
edge_lengths.append(get_d_edge_length((p1, p3)))
edge_lengths.append(get_d_edge_length((p2, p3)))
try:
tri_areas.append(float(Triangle(p1, p2, p3).area))  # Triangle is not imported above - assuming a geometry helper such as sympy.geometry.Triangle
except:
continue
edge_length_list.append(edge_lengths)
tri_area_list.append(tri_areas)
Explanation: We're going to need a method to get edge lengths from 2D centroid pairs
End of explanation
np.subtract(centroid_list[0], centroid_list[1])
Explanation: Realizing after all this that simply location is useless. We know the voxels are evenly spaced, which means our edge length data will be all the same. See that the "centroids" are no different:
End of explanation
real_volume = np.zeros((len(sorted_x), len(sorted_y), len(sorted_z)))
for r in rows:
real_volume[sorted_x.index(r[0]), sorted_y.index(r[1]), sorted_z.index(r[2])] = r[-1]
nx_graphs = []
for layer in b:
G = nx.Graph(graph)
nx_graphs.append(G)
for graph in nx_graphs:
plt.figure()
nx.draw(graph, node_size=100)
Explanation: There is no distance between the two. Therefore it is perhaps more useful to consider a graph that incorporates node weights. Voronoi is dual to Delaunay, so that's not much of an option. We want something that considers both spatial location and density similarity.
Drawing Graphs
First we look at the default networkx graph plotting:
End of explanation
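One hedged sketch of such a graph (an addition, not the notebook's own code): weight each Delaunay edge by a blend of Euclidean distance and the difference in synapse counts at its endpoints. It assumes a per-layer array whose columns are (x, z, synapses).
def weighted_layer_graph(layer_xzs, alpha=0.5):
    pts = np.asarray(layer_xzs)[:, :2]
    density = np.asarray(layer_xzs)[:, 2]
    tri = Delaunay(pts)
    G = nx.Graph()
    for s in tri.simplices:
        for i, j in [(s[0], s[1]), (s[0], s[2]), (s[1], s[2])]:
            d_spatial = np.linalg.norm(pts[i] - pts[j])
            d_density = abs(density[i] - density[j])
            # combined weight: spatial separation plus density dissimilarity
            G.add_edge(int(i), int(j), weight=alpha * d_spatial + (1 - alpha) * d_density)
    return G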
num_self_loops = []
for rag in y_rags:
num_self_loops.append(rag.number_of_selfloops())
num_self_loops
Explanation: This is using the spring layout, so we're losing positional information. We can improve the plot by adding position information.
Self Loops
End of explanation
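For example, positions can be fixed from the centroids instead of a spring layout (illustrative sketch, assuming the graph's nodes index into a centroid array):
def draw_with_positions(G, centroids):
    pos = {node: (centroids[node][0], centroids[node][1]) for node in G.nodes()}
    nx.draw(G, pos=pos, node_size=20)
    plt.show()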
# y_rags[0].adjacency_list()
Explanation: Interesting. There are no self loops. Why would this be? Let's come back to this. In the meantime, I want to give some thought to what it means to have a self loop, whether it should be theoretically possible given our data, and whether our graphs are formed properly.
The answer to this question is very simple. In a RAG, there are no self-loops by definition. Self loops are edges that form a connection between a node and itself.
<img src="../../docs/figures/selfloop.png" width="100">
To see whether the graphs are formed properly, let's look at an adjacency list:
End of explanation
# Test Data
test = np.array([[1,2],[3,4]])
test_rag = skimage.future.graph.RAG(test)
test_rag.adjacency_list()
Explanation: Compare that to the test data:
End of explanation
real_volume_x = np.zeros((len(sorted_x), len(sorted_y), len(sorted_z)))
for r in rows:
real_volume_x[ sorted_x.index(r[0]), sorted_y.index(r[1]), sorted_z.index(r[2])] = r[-1]
x_rags = []
count = 0;
for layer in real_volume_x:
count = count + 1
x_rags.append(skimage.future.graph.RAG(layer))
num_edges_x = []
for rag in x_rags:
num_edges_x.append(rag.number_of_edges())
sns.barplot(x=range(len(num_edges_x)), y=num_edges_x)
sns.plt.show()
Explanation: X-Layers
End of explanation
plt.imshow(np.amax(real_volume, axis=2), interpolation='nearest')
plt.show()
# edge_length_list[3]
# tri_area_list[3]
# triangles
# Note for future
# del_features['d_edge_length_mean'] = np.mean(edge_lengths)
# del_features['d_edge_length_std'] = np.std(edge_lengths)
# del_features['d_edge_length_skew'] = scipy.stats.skew(edge_lengths)
# del_features['d_edge_length_kurtosis'] = scipy.stats.kurtosis(edge_lengths)
Explanation: We can see here the number of edges is low in that area that does not have many synapses. It, as expected, mirrors the distribution of synapses. It appears to be approximately uniform at the top, with buffers of very few synapses on the sides. Remember from here:
End of explanation |
1,119 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TensorFlow Authors.
Step1: Recommending movies
Step2: Preparing the dataset
Next, we need to prepare our dataset. We are going to leverage the data generation utility in this TensorFlow Lite On-device Recommendation reference app.
MovieLens 1M data contains ratings.dat (columns
Step3: Here is a sample of the generated dataset.
0
Step4: Now our train/test datasets include only a sequence of historical movie IDs and a label of the next movie ID. Note that we use [10] as the shape of the features during tf.Example parsing because we specify 10 as the length of context features in the example generation step.
We need one more thing before we can start building the model - the vocabulary for our movie IDs.
Step5: Implementing a sequential model
In our basic retrieval tutorial, we use one query tower for the user, and the candidate tower for the candidate movie. However, the two-tower architecture is generalizable and not limited to <user,item> pairs. You can also use it to do item-to-item recommendation as we note in the basic retrieval tutorial.
Here we are still going to use the two-tower architecture. Specifically, we use the query tower with a Gated Recurrent Unit (GRU) layer to encode the sequence of historical movies, and keep the same candidate tower for the candidate movie.
Step6: The metrics, task and full model are defined similar to the basic retrieval model.
Step7: Fitting and evaluating
We can now compile, train and evaluate our sequential retrieval model. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets
import os
import pprint
import tempfile
from typing import Dict, Text
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
Explanation: Recommending movies: retrieval using a sequential model
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/recommenders/examples/sequential_retrieval"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/recommenders/blob/main/docs/examples/sequential_retrieval.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/recommenders/blob/main/docs/examples/sequential_retrieval.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/recommenders/docs/examples/sequential_retrieval.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In this tutorial, we are going to build a sequential retrieval model. Sequential recommendation is a popular model that looks at a sequence of items that users have interacted with previously and then predicts the next item. Here the order of the items within each sequence matters, so we are going to use a recurrent neural network to model the sequential relationship. For more details, please refer to this GRU4Rec paper.
Imports
First let's get our dependencies and imports out of the way.
End of explanation
!wget -nc https://raw.githubusercontent.com/tensorflow/examples/master/lite/examples/recommendation/ml/data/example_generation_movielens.py
!python -m example_generation_movielens --data_dir=data/raw --output_dir=data/examples --min_timeline_length=3 --max_context_length=10 --max_context_movie_genre_length=10 --min_rating=2 --train_data_fraction=0.9 --build_vocabs=False
Explanation: Preparing the dataset
Next, we need to prepare our dataset. We are going to leverage the data generation utility in this TensorFlow Lite On-device Recommendation reference app.
MovieLens 1M data contains ratings.dat (columns: UserID, MovieID, Rating, Timestamp), and movies.dat (columns: MovieID, Title, Genres). The example generation script downloads the 1M dataset, takes both files, keeps only ratings higher than 2, forms user movie interaction timelines, and samples activities as labels with the 10 previous user activities as the context for prediction.
End of explanation
train_filename = "./data/examples/train_movielens_1m.tfrecord"
train = tf.data.TFRecordDataset(train_filename)
test_filename = "./data/examples/test_movielens_1m.tfrecord"
test = tf.data.TFRecordDataset(test_filename)
feature_description = {
'context_movie_id': tf.io.FixedLenFeature([10], tf.int64, default_value=np.repeat(0, 10)),
'context_movie_rating': tf.io.FixedLenFeature([10], tf.float32, default_value=np.repeat(0, 10)),
'context_movie_year': tf.io.FixedLenFeature([10], tf.int64, default_value=np.repeat(1980, 10)),
'context_movie_genre': tf.io.FixedLenFeature([10], tf.string, default_value=np.repeat("Drama", 10)),
'label_movie_id': tf.io.FixedLenFeature([1], tf.int64, default_value=0),
}
def _parse_function(example_proto):
return tf.io.parse_single_example(example_proto, feature_description)
train_ds = train.map(_parse_function).map(lambda x: {
"context_movie_id": tf.strings.as_string(x["context_movie_id"]),
"label_movie_id": tf.strings.as_string(x["label_movie_id"])
})
test_ds = test.map(_parse_function).map(lambda x: {
"context_movie_id": tf.strings.as_string(x["context_movie_id"]),
"label_movie_id": tf.strings.as_string(x["label_movie_id"])
})
for x in train_ds.take(1).as_numpy_iterator():
pprint.pprint(x)
Explanation: Here is a sample of the generated dataset.
0 : {
features: {
feature: {
key : "context_movie_id"
value: { int64_list: { value: [ 1124, 2240, 3251, ..., 1268 ] } }
}
feature: {
key : "context_movie_rating"
value: { float_list: {value: [ 3.0, 3.0, 4.0, ..., 3.0 ] } }
}
feature: {
key : "context_movie_year"
value: { int64_list: { value: [ 1981, 1980, 1985, ..., 1990 ] } }
}
feature: {
key : "context_movie_genre"
value: { bytes_list: { value: [ "Drama", "Drama", "Mystery", ..., "UNK" ] } }
}
feature: {
key : "label_movie_id"
value: { int64_list: { value: [ 3252 ] } }
}
}
}
You can see that it includes a sequence of context movie IDs, and a label movie ID (next movie), plus context features such as movie year, rating and genre.
In our case we will only be using the sequence of context movie IDs and the label movie ID. You can refer to the Leveraging context features tutorial to learn more about adding additional context features.
End of explanation
movies = tfds.load("movielens/1m-movies", split='train')
movies = movies.map(lambda x: x["movie_id"])
movie_ids = movies.batch(1_000)
unique_movie_ids = np.unique(np.concatenate(list(movie_ids)))
Explanation: Now our train/test datasets include only a sequence of historical movie IDs and a label of the next movie ID. Note that we use [10] as the shape of the features during tf.Example parsing because we specify 10 as the length of context features in the example generation step.
We need one more thing before we can start building the model - the vocabulary for our movie IDs.
End of explanation
embedding_dimension = 32
query_model = tf.keras.Sequential([
tf.keras.layers.StringLookup(
vocabulary=unique_movie_ids, mask_token=None),
tf.keras.layers.Embedding(len(unique_movie_ids) + 1, embedding_dimension),
tf.keras.layers.GRU(embedding_dimension),
])
candidate_model = tf.keras.Sequential([
tf.keras.layers.StringLookup(
vocabulary=unique_movie_ids, mask_token=None),
tf.keras.layers.Embedding(len(unique_movie_ids) + 1, embedding_dimension)
])
Explanation: Implementing a sequential model
In our basic retrieval tutorial, we use one query tower for the user, and the candidate tower for the candidate movie. However, the two-tower architecture is generalizable and not limited to <user,item> pairs. You can also use it to do item-to-item recommendation as we note in the basic retrieval tutorial.
Here we are still going to use the two-tower architecture. Specifically, we use the query tower with a Gated Recurrent Unit (GRU) layer to encode the sequence of historical movies, and keep the same candidate tower for the candidate movie.
End of explanation
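A quick sanity check of the query tower (an addition; the IDs below are arbitrary strings, not real data): a batch of one 10-step history should map to a single 32-dimensional embedding.
example_history = tf.constant([["1", "2", "3", "4", "5", "6", "7", "8", "9", "10"]])
print(query_model(example_history).shape)  # expected: (1, 32)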
metrics = tfrs.metrics.FactorizedTopK(
candidates=movies.batch(128).map(candidate_model)
)
task = tfrs.tasks.Retrieval(
metrics=metrics
)
class Model(tfrs.Model):
def __init__(self, query_model, candidate_model):
super().__init__()
self._query_model = query_model
self._candidate_model = candidate_model
self._task = task
def compute_loss(self, features, training=False):
watch_history = features["context_movie_id"]
watch_next_label = features["label_movie_id"]
query_embedding = self._query_model(watch_history)
candidate_embedding = self._candidate_model(watch_next_label)
return self._task(query_embedding, candidate_embedding, compute_metrics=not training)
Explanation: The metrics, task and full model are defined similar to the basic retrieval model.
End of explanation
model = Model(query_model, candidate_model)
model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1))
cached_train = train_ds.shuffle(10_000).batch(12800).cache()
cached_test = test_ds.batch(2560).cache()
model.fit(cached_train, epochs=3)
model.evaluate(cached_test, return_dict=True)
Explanation: Fitting and evaluating
We can now compile, train and evaluate our sequential retrieval model.
End of explanation |
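Not part of the original tutorial, but a minimal sketch of serving recommendations with the two trained towers: embed every candidate once, then rank candidates for a (hypothetical) watch history by dot-product score.
candidate_embeddings = tf.concat(
    [candidate_model(batch) for batch in movies.batch(1_000)], axis=0)
watch_history = tf.constant([["1", "2", "3", "4", "5", "6", "7", "8", "9", "10"]])
scores = tf.matmul(query_model(watch_history), candidate_embeddings, transpose_b=True)
_, top_indices = tf.math.top_k(scores, k=10)
print(top_indices)  # indices into the iteration order of the `movies` dataset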
1,120 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We read the polls obtained from Wikipedia (https
Step1: We plot the data. Each poll value is shown as a point. We also fit a Gaussian Process regression, which we extrapolate forward a few months. The polling companies report sampling errors of about 3% (on average). If we compute the difference between the polls and the GP fit, we see that the differences are on the order of 3% in absolute terms.
Step2: The forward extrapolation | Python Code:
book = xlrd.open_workbook("sondeos.xlsx")
sh = book.sheet_by_index(0)
PP = []
PSOE = []
IU = []
UPyD = []
Podemos = []
Ciudadanos = []
fecha = []
mesEsp = ['ene', 'feb', 'mar', 'abr', 'may', 'jun', 'jul', 'ago', 'sep', 'oct', 'nov', 'dic']
mesEng = ['jan', 'feb', 'mar', 'apr', 'may', 'jun', 'jul', 'aug', 'sep', 'oct', 'nov', 'dec']
for rx in range(2,sh.nrows):
row = sh.row_values(rx)
res = row[2]
out = bytes(res, 'utf-8')
res = out.replace(b'\xe2\x80\x93', b'-').decode('utf-8').lower()
for i in range(12):
res = res.replace(mesEsp[i], mesEng[i])
res = res.replace('.', '')
guion = res.find('-')
if (guion != -1):
res = res[guion+1:]
out = res.split(' ')
if ((len(out) > 1) and (res.find(')') == -1) ):
if (len(out) == 2):
res = '1 '+res
PP.append(row[3])
PSOE.append(row[4])
IU.append(row[5])
UPyD.append(row[6])
Podemos.append(row[16])
Ciudadanos.append(row[17])
fecha.append(dt.datetime.strptime(res, "%d %b %Y").date())
partidos = [PP, PSOE, IU, UPyD, Podemos, Ciudadanos]
nSondeos = len(PP)
# Now clean the lists to transform percentages to numbers
for partido in partidos:
for i in range(nSondeos):
if (not isinstance(partido[i], numbers.Number)):
findPct = partido[i].find('%')
if (findPct != -1):
res = 0.01*float(partido[i][0:findPct].replace(',', '.'))
partido[i] = res
else:
partido[i] = 0
PP = np.asarray(PP)[::-1]
PSOE = np.asarray(PSOE)[::-1]
IU = np.asarray(IU)[::-1]
UPyD = np.asarray(UPyD)[::-1]
Podemos = np.asarray(Podemos)[::-1]
Ciudadanos = np.asarray(Ciudadanos)[::-1]
fecha = np.asarray(fecha)[::-1]
delta = np.zeros(len(fecha))
for i in range(len(fecha)):
delta[i] = (fecha[i]-fecha[0]).days
Explanation: We read the polls obtained from Wikipedia (https://es.wikipedia.org/wiki/Anexo:Sondeos_de_intenci%C3%B3n_de_voto_para_las_elecciones_generales_de_Espa%C3%B1a_de_2015). We do a little data cleaning to keep one date per poll and convert the values to fractions. We also build a time axis.
End of explanation
partidos = [PP, PSOE, IU, UPyD, Podemos, Ciudadanos]
colors = ["blue", "red", "green", "magenta", "violet", "orange"]
nombres = ["PP", "PSOE", "IU", "UPyD", "Podemos", "Ciudadanos"]
f, ax = pl.subplots(nrows=1, ncols=1, figsize=(15,8))
ax.xaxis_date()
xu = np.unique(delta) # get unique x values
idx = [np.where(delta == x1)[0][0] for x1 in xu]  # X was undefined; the unique values come from delta
nObs = len(xu)
deltaNew = np.copy(delta)
fechaNew = np.copy(fecha)
for i in range(150):
deltaNew = np.append(deltaNew, delta[-1] + i+1)
fechaNew = np.append(fechaNew, fecha[-1]+dt.timedelta(days=i+1))
for i in range(6):
ax.plot(fecha, partidos[i], '.', color=sn.xkcd_rgb[colors[i]], linewidth=2)
y = partidos[i][idx]
nugget = (0.03 / y)**2
nugget[y==0] = 0.03
gp = gaussian_process.GaussianProcess(theta0=0.1, nugget=nugget)
gp.fit(xu[:,None], y)
predict, variance = gp.predict(deltaNew[:,None], eval_MSE=True)
stddev = np.sqrt(variance)
ax.plot(fechaNew, predict, color=sn.xkcd_rgb[colors[i]], linewidth=2)
ax.plot(fechaNew, predict+2.0*stddev, '--', color=sn.xkcd_rgb[colors[i]], linewidth=1)
ax.plot(fechaNew, predict-2.0*stddev, '--', color=sn.xkcd_rgb[colors[i]], linewidth=1)
f.autofmt_xdate()
ax.fmt_xdata = mdates.DateFormatter('%Y-%m-%d')
ax.xaxis.set_major_locator(mdates.MonthLocator())
y = np.zeros((nObs,6))
x = np.zeros((nObs,6))
xnew = np.zeros((len(deltaNew), 6))
for i in range(6):
x[:,i] = xu
y[:,i] = partidos[i][idx]
xnew[:,i] = deltaNew
gp = gaussian_process.GaussianProcess(theta0=0.1, nugget=nugget)
gp.fit(x, y)
predict, variance = gp.predict(xnew, eval_MSE=True)
stddev = np.sqrt(variance)
f, ax = pl.subplots(nrows=1, ncols=1, figsize=(15,8))
ax.xaxis_date()
for i in range(6):
ax.plot(fecha, partidos[i], '.', color=sn.xkcd_rgb[colors[i]], linewidth=2)
ax.plot(fechaNew, predict[:,i], color=sn.xkcd_rgb[colors[i]], linewidth=2)
ax.plot(fechaNew, predict[:,i]+2.0*stddev, '--', color=sn.xkcd_rgb[colors[i]], linewidth=1)
ax.plot(fechaNew, predict[:,i]-2.0*stddev, '--', color=sn.xkcd_rgb[colors[i]], linewidth=1)
gp.theta_
Explanation: We plot the data. Each poll value is shown as a point. We also fit a Gaussian Process regression, which we extrapolate forward a few months. The polling companies report sampling errors of about 3% (on average). If we compute the difference between the polls and the GP fit, we see that the differences are on the order of 3% in absolute terms.
End of explanation
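A note on the nugget above (an assumption about the legacy scikit-learn 0.17 GaussianProcess API, added for clarity): for heteroscedastic observation noise it expects per-sample values of the form
$$ \mathrm{nugget}_i = \left(\frac{\sigma_i}{y_i}\right)^2, $$
so an absolute polling error of roughly $\sigma_i \approx 0.03$ becomes the `(0.03 / y)**2` used in the code.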
f, ax = pl.subplots(nrows=1, ncols=1, figsize=(15,8))
ax.xaxis_date()
gp = ml.GaussianProcess(delta)
for i in range(6):
gp.fit(partidos[i], 0.05)
predict, covariance = gp.predict(delta)
ax.plot(fecha, partidos[i] - predict, '.', color=sn.xkcd_rgb[colors[i]], linewidth=2)
print("{0} -> {1:3.1f}%".format(nombres[i], 100.0*np.std(partidos[i] - predict)))
f.autofmt_xdate()
ax.fmt_xdata = mdates.DateFormatter('%Y-%m-%d')
ax.xaxis.set_major_locator(mdates.MonthLocator())
Explanation: The forward extrapolation
End of explanation |
1,121 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dev for Handling BAM Extraction Radii
Step1: NOTE that in the case below, r6.l6 is loaded while r5.l5 is the config file default. This is the desired behavior as there is no r5.l5 for this case. Instead, r6.l6 has been inferred from the simulation directory. | Python Code:
# Setup ipython environment
%load_ext autoreload
%autoreload 2
# %matplotlib auto
%matplotlib inline
# Import useful things
from nrutils import scsearch,gwylm
# Setup plotting backend
import matplotlib as mpl
from mpl_toolkits.mplot3d import axes3d
mpl.rcParams['lines.linewidth'] = 0.8
mpl.rcParams['font.family'] = 'serif'
mpl.rcParams['font.size'] = 12
mpl.rcParams['axes.labelsize'] = 20
mpl.rcParams['axes.titlesize'] = 20
from matplotlib.pyplot import *
from mpl_toolkits.mplot3d import Axes3D
#
from numpy import *
#
# A = scsearch(keyword='silures',verbose=True,nonspinning=True)
# A = scsearch(keyword='q1.2_base',verbose=True)
A = scsearch(q=[9.9,11],verbose=True,nonspinning=True)
Explanation: Dev for Handling BAM Extraction Radii
End of explanation
b = A[0]
y = gwylm(b,lm=[2,2],verbose=True)
y.plot()
Explanation: NOTE that in the case below, r6.l6 is loaded while r5.l5 is the config file default. This is the desired behavior as there is no r5.l5 for this case. Instead, r6.l6 has been inferred from the simulation directory.
End of explanation |
1,122 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have been trying to get the arithmetic result of a lognormal distribution using Scipy. I already have the Mu and Sigma, so I don't need to do any other prep work. If I need to be more specific (and I am trying to be with my limited knowledge of stats), I would say that I am looking for the expected value and median of the distribution. The problem is that I can't figure out how to do this with just the mean and standard deviation. I'm also not sure which method from dist, I should be using to get the answer. I've tried reading the documentation and looking through SO, but the relevant questions (like this and this) didn't seem to provide the answers I was looking for. | Problem:
import numpy as np
from scipy import stats
stddev = 2.0785
mu = 1.744
expected_value = np.exp(mu + stddev ** 2 / 2)
median = np.exp(mu) |
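Equivalently, the same two quantities can be read off a frozen scipy distribution (an added illustration; lognorm takes the shape s = stddev and scale = exp(mu)):
dist = stats.lognorm(s=stddev, scale=np.exp(mu))
print(dist.mean())    # same as np.exp(mu + stddev ** 2 / 2)
print(dist.median())  # same as np.exp(mu)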
1,123 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Definition(s)
The closest pair of points problem or closest pair problem is a problem of computational geometry
Step1: Naive implementation of closest_pair
Step2: Draw points (with closest-pair marked as red)
Step3: Run(s) | Python Code:
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from operator import itemgetter
%matplotlib inline
def euclid_distance(p, q):
return np.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)
def search(points, st, dr):
if st >= dr:
return np.inf, None, None
elif st + 1 == dr:
# sort on y
if points[st][1] > points[dr][1]:
points[st], points[dr] = points[dr], points[st]
return euclid_distance(points[st], points[dr]), points[st], points[dr]
m = (st + dr) // 2
median_x = points[m][0]
dleft, pleft, qleft = search(points, st, m)
dright, pright, qright = search(points, m + 1, dr)
if dleft < dright:
d, p, q = dleft, pleft, qleft
else:
d, p, q = dright, pright, qright
# merge the two halves on y
aux = []
i, j = st, m + 1
while i <= m and j <= dr:
if points[i][1] <= points[j][1]:
aux.append(points[i])
i += 1
else:
aux.append(points[j])
j += 1
while i <= m:
aux.append(points[i])
i += 1
while j <= dr:
aux.append(points[j])
j += 1
# copy back the points
points[st:dr+1] = aux
# select a set of points to be tested
good_points = []
for i in range(st, dr + 1):
if abs(points[i][0] - median_x) <= d:
good_points.append(points[i])
for i in range(len(good_points)):
j, cnt = i - 1, 8
# go for at most 8 steps
while j >= 0 and cnt > 0:
tmp_d = euclid_distance(good_points[i], good_points[j])  # compare points within the strip, not the whole merged list
if tmp_d < d:
d, p, q = tmp_d, good_points[i], good_points[j]
j -= 1
cnt -= 1
return d, p, q
def closest_pair(points):
points.sort(key = itemgetter(0))
return search(points, 0, len(points) - 1)
Explanation: Definition(s)
The closest pair of points problem or closest pair problem is a problem of computational geometry: given n points in metric space, find a pair of points with the smallest distance between them.
The closest pair problem for points in the Euclidean plane was among the first geometric problems that were treated at the origins of the systematic study of the computational complexity of geometric algorithms.
Algorithm(s)
End of explanation
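For clarity (added): the divide-and-conquer search above satisfies the recurrence
$$ T(n) = 2\,T(n/2) + O(n) \quad\Rightarrow\quad T(n) = O(n \log n), $$
compared with the $O(n^2)$ all-pairs check in the naive implementation below.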
def naive_closest_pair(points):
best, p, q = np.inf, None, None
n = len(points)
for i in range(n):
for j in range(i + 1, n):
d = euclid_distance(points[i], points[j])
if d < best:
best, p, q = d, points[i], points[j]
return best, p, q
Explanation: Naive implementation of closest_pair
End of explanation
def draw_points(points, p, q):
xs, ys = zip(*points)
plt.figure(figsize=(10,10))
plt.scatter(xs, ys)
plt.scatter([p[0], q[0]], [p[1], q[1]], s=100, c='red')
plt.plot([p[0], q[0]], [p[1], q[1]], 'k', c='red')
plt.show()
Explanation: Draw points (with closest-pair marked as red)
End of explanation
points = [(26, 77), (12, 37), (14, 18), (19, 96), (71, 95), (91, 9), (98, 43), (66, 77), (2, 75), (94, 91)]
xs, ys = zip(*points)
d, p, q = closest_pair(points)
assert d == naive_closest_pair(points)[0]
print("The closest pair of points is ({0}, {1}) at distance {2}".format(p, q, d))
draw_points(points, p, q)
N = 10
x = np.random.rand(N) * 100
y = np.random.rand(N) * 100
points = list(zip(x, y))
d, p, q = closest_pair(points)
assert d == naive_closest_pair(points)[0]
print("The closest pair of points is ({0}, {1}) at distance {2}".format(p, q, d))
draw_points(points, p, q)
N = 20
x = np.random.randint(100, size=N)
y = np.random.randint(100, size=N)
points = list(zip(x, y))
d, p, q = closest_pair(points)
assert d == naive_closest_pair(points)[0]
print("The closest pair of points is ({0}, {1}) at distance {2}".format(p, q, d))
draw_points(points, p, q)
N = 20
x = np.random.rand(N)
y = np.random.rand(N)
points = list(zip(x, y))
d, p, q = closest_pair(points)
assert d == naive_closest_pair(points)[0]
print("The closest pair of points is ({0}, {1}) at distance {2}".format(p, q, d))
draw_points(points, p, q)
Explanation: Run(s)
End of explanation |
1,124 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Instructions
Compute the sample statistics on the given data using numpy. Write the equation in LaTeX first and then complete the computation in Python second. You may refer to equations in other problems. For example, to compute sample variance you could refer to the sample mean computed in problem 1.
example
The sum of the data is
$$
\sum x_i = 16
$$
Note that your answer must appear in the Markdown cell and Python cell
Step1: Data
Step2: Problem 1
Compute the mean
The mean is given by
$$
\bar{x} = \frac{1}{N} \sum_i^N x_i = 11.82
$$
Step3: Problem 2
Compute the sample standard deviation
Sample standard deviation is given by
Step4: Problem 3
Compute the correlation of the original data with the given data below
The sample correlation is given by
$$
r = \frac{\sigma_{xy}}{\sigma_x\sigma_y}
$$
where $\sigma_x$ is the sample standard deviation defined above and the sample covariance, $\sigma_{xy}$ is
$$
\sigma_{xy}= \frac{1}{N - 1} \sum_i^N (x - \bar{x})(y - \bar{y})
$$
In this case $r = 0.865$ | Python Code:
#example
example_data_do_not_use = [4,3,6,3]
print(sum(example_data_do_not_use))
Explanation: Instructions
Compute the sample statistics on the given data using numpy. Write the equation in LaTeX first and then complete the computation in Python second. You may refer to equations in other problems. For example, to compute sample variance you could refer to the sample mean computed in problem 1.
example
The sum of the data is
$$
\sum x_i = 16
$$
Note that your answer must appear in the Markdown cell and Python cell
End of explanation
data=[13,13,11,11,12,10,14,14,8,11,14,10,16,11,11,15,12,13,12,11,13,12,14,10,9,12,13,14,14,10,15,13,12,12,13,10,12,10,13,13,14,8,14,11,9,13,10,11,9,9,15,12,14,10,16,14,9,10,12,13,8,11,16,13,10,10,13,10,11,11,14,7,12,14,13,13,9,9,13,10,12,12,13,12,10,10,13,11,15,13,13,17,9,12,12,9,12,9,10,12]
Explanation: Data
End of explanation
np.mean(data)
Explanation: Problem 1
Compute the mean
The mean is given by
$$
\bar{x} = \frac{1}{N} \sum_i^N x_i = 11.82
$$
End of explanation
np.std(data, ddof=1)
Explanation: Problem 2
Compute the sample standard deviation
Sample standard deviation is given by:
$$
\sigma_x = \sqrt{\frac{1}{N - 1} \sum_x (x - \bar{x})^2}
$$
and is $\sigma_x = 2.04$
End of explanation
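A manual check of the formula above (added for illustration; it should agree with np.std(data, ddof=1)):
xbar = np.mean(data)
manual_std = np.sqrt(sum((x - xbar) ** 2 for x in data) / (len(data) - 1))
print(manual_std)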
data2=[16,15,14,13,16,12,15,15,9,13,17,13,19,14,16,18,15,14,14,14,14,14,15,14,13,14,16,18,15,13,17,16,14,16,17,13,16,13,17,16,16,11,18,12,12,16,13,15,14,11,15,17,17,15,20,16,11,14,14,15,11,14,19,16,13,11,13,11,13,15,16,9,13,15,15,15,10,11,17,11,15,15,16,15,12,12,16,13,17,17,15,18,11,16,15,11,15,12,14,16]
### BEGIN SOLUTION
np.corrcoef(data, data2)
### END SOLUTION
Explanation: Problem 3
Compute the correlation of the original data with the given data below
The sample correlation is given by
$$
r = \frac{\sigma_{xy}}{\sigma_x\sigma_y}
$$
where $\sigma_x$ is the sample standard deviation defined above and the sample covariance, $\sigma_{xy}$ is
$$
\sigma_{xy}= \frac{1}{N - 1} \sum_i^N (x - \bar{x})(y - \bar{y})
$$
In this case $r = 0.865$
End of explanation |
1,125 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Timestepping framework of the aerosol model
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nerc', 'hadgem3-gc31-hh', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: NERC
Source ID: HADGEM3-GC31-HH
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:26
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
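For illustration only, a filled-in cell would call DOC.set_value with one of the valid choices listed above; the value here is an arbitrary example, not HADGEM3-GC31-HH's documented answer:
# DOC.set_value("whole atmosphere")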
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
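# For illustration only (a hypothetical number, not a real model value), a filled-in
# entry would look like:
# DOC.set_value(7.5)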
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
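# For illustration only (hypothetical selections, shown to make the call pattern clear
# for a 1.N ENUM; repeat DOC.set_value once per chosen process):
# DOC.set_value("Dry deposition")
# DOC.set_value("Sedimentation")
# DOC.set_value("Coagulation")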
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
1,126 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analysis of two oyster samples where Lotterhos did methylRAD
The M2 and M3 samples are here
Step1: Genome version
Step2: Products | Python Code:
bsmaploc="/Applications/bioinfo/BSMAP/bsmap-2.74/"
Explanation: Analysis of two oyster samples where Lotterhos did methylRAD
The M2 and M3 samples are here:
http://owl.fish.washington.edu/nightingales/C_gigas/9_GATCAG_L001_R1_001.fastq.gz
http://owl.fish.washington.edu/nightingales/C_gigas/10_TAGCTT_L001_R1_001.fastq.gz
End of explanation
!curl \
ftp://ftp.ensemblgenomes.org/pub/release-32/metazoa/fasta/crassostrea_gigas/dna/Crassostrea_gigas.GCA_000297895.1.dna_sm.toplevel.fa.gz \
> /Volumes/caviar/wd/data/Crassostrea_gigas.GCAz_000297895.1.dna_sm.toplevel.fa.gz
!curl ftp://ftp.ensemblgenomes.org/pub/release-32/metazoa/fasta/crassostrea_gigas/dna/CHECKSUMS
!ls /Volumes/caviar/wd/data/
!md5 /Volumes/caviar/wd/data/Crassostrea_gigas.GCAz_000297895.1.dna_sm.toplevel.fa.gz
cd /Volumes/caviar/wd/
mkdir $(date +%F)
ls
ls /Volumes/web/nightingales/C
!curl \
http://owl.fish.washington.edu/nightingales/C_gigas/9_GATCAG_L001_R1_001.fastq.gz \
> /Volumes/caviar/wd/2016-10-11/9_GATCAG_L001_R1_001.fastq.gz
!curl \
http://owl.fish.washington.edu/nightingales/C_gigas/10_TAGCTT_L001_R1_001.fastq.gz \
> /Volumes/caviar/wd/2016-10-11/10_TAGCTT_L001_R1_001.fastq.gz
cd 2016-10-11/
!cp 9_GATCAG_L001_R1_001.fastq.gz M2.fastq.gz
!cp 10_TAGCTT_L001_R1_001.fastq.gz M3.fastq.gz
for i in ("M2","M3"):
!{bsmaploc}bsmap \
-a {i}.fastq.gz \
-d ../data/Crassostrea_gigas.GCAz_000297895.1.dna_sm.toplevel.fa \
-o bsmap_out_{i}.sam \
-p 6
for i in ("M2","M3"):
!python {bsmaploc}methratio.py \
-d ../data/Crassostrea_gigas.GCAz_000297895.1.dna_sm.toplevel.fa \
-u -z -g \
-o methratio_out_{i}.txt \
-s {bsmaploc}samtools \
bsmap_out_{i}.sam \
!head /Volumes/caviar/wd/2016-10-11/methratio_out_M2.txt
!curl https://raw.githubusercontent.com/che625/olson-ms-nb/master/scripts/mr3x.awk \
> /Users/sr320/git-repos/sr320.github.io/jupyter/scripts/mr3x.awk
!curl https://raw.githubusercontent.com/che625/olson-ms-nb/master/scripts/mr_gg.awk.sh \
> /Users/sr320/git-repos/sr320.github.io/jupyter/scripts/mr_gg.awk.sh
#first methratio files are converted to filter for CG context, 3x coverage (mr3x.awk), and reformatting (mr_gg.awk.sh).
#due to issue passing variable to awk, simple scripts were used (included in repository)
for i in ("M2","M3"):
!echo {i}
!grep "[A-Z][A-Z]CG[A-Z]" <methratio_out_{i}.txt> methratio_out_{i}CG.txt
!awk -f /Users/sr320/git-repos/sr320.github.io/jupyter/scripts/mr3x.awk methratio_out_{i}CG.txt \
> mr3x.{i}.txt
!awk -f /Users/sr320/git-repos/sr320.github.io/jupyter/scripts/mr_gg.awk.sh \
mr3x.{i}.txt > mkfmt_{i}.txt
#first methratio files are converted to filter for CG context, 3x coverage (mr3x.awk), and reformatting (mr_gg.awk.sh).
#due to issue passing variable to awk, simple scripts were used (included in repository)
for i in ("M2","M3"):
!echo {i}
!grep -i "[A-Z][A-Z]CG[A-Z]" <methratio_out_{i}.txt> methratio_out_{i}CGi.txt
!awk -f /Users/sr320/git-repos/sr320.github.io/jupyter/scripts/mr3x.awk methratio_out_{i}CGi.txt \
> mr3xi.{i}.txt
!awk -f /Users/sr320/git-repos/sr320.github.io/jupyter/scripts/mr_gg.awk.sh \
mr3xi.{i}.txt > mkfmti_{i}.txt
#maybe we need to ignore case
!md5 mkfmt_M2.txt mkfmti_M2.txt | head
#nope
!head -100 mkfmt_M2.txt
Explanation: Genome version
End of explanation
cd /Users/sr320/git-repos/sr320.github.io/jupyter
mkdir analyses
mkdir analyses/$(date +%F)
for i in ("M2","M3"):
!cp /Volumes/caviar/wd/2016-10-11/mkfmt_{i}.txt analyses/$(date +%F)/mkfmt_{i}.txt
!head analyses/$(date +%F)/*
Explanation: Products
End of explanation |
1,127 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The Cirq Developers
Step1: Custom gates
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step3: Standard gates such as Pauli gates and CNOTs are defined in cirq.ops as described here. To use a unitary which is not a standard gate in a circuit, one can create a custom gate as described in this guide.
General pattern
Gates are classes in Cirq. To define custom gates, we inherit from a base gate class and define a few methods.
The general pattern is to
Step5: In this example, the _num_qubits_ method tells Cirq that this gate acts on a single-qubit, and the _unitary_ method defines the unitary of the gate. The _circuit_diagram_info_ method tells Cirq how to display the gate in a circuit, as we will see below.
Once this gate is defined, it can be used like any standard gate in Cirq.
Step7: When we print the circuit, we see the symbol we specified in the _circuit_diagram_info_ method.
Circuits with custom gates can be simulated in the same manner as circuits with standard gates.
Step9: An alternative to inheriting from cirq.Gate is to inherit from cirq.SingleQubitGate, in which case defining _num_qubits_ is unnecessary. An example of a defining a two-qubit gate is shown below.
Step11: Here, the _circuit_diagram_info_ method returns two symbols (one for each wire) since it is a two-qubit gate.
Step13: As above, this circuit can also be simulated in the expected way.
With parameters
Custom gates can be defined and used with parameters. For example, to define the gate
$$ R(\theta) = \left[ \begin{matrix} \cos \theta & \sin \theta \ \sin \theta & - \cos \theta \end{matrix} \right], $$
we can do the following.
Step15: This gate can be used in a circuit as shown below.
Step16: From a known decomposition
Custom gates can also be defined from a known decomposition (of gates). This is useful, for example, when groups of gates appear repeatedly in a circuit, or when a standard decomposition of a gate into primitive gates is known.
We show an example below of a custom swap gate defined from a known decomposition of three CNOT gates.
Step18: The _decompose_ method yields the operations which implement the custom gate. (One can also return a list of operations instead of a generator.)
When we use this gate in a circuit, the individual gates in the decomposition do not appear in the circuit. Instead, the _circuit_diagram_info_ appears in the circuit. As mentioned, this can be useful for interpreting circuits at a higher level than individual (primitive) gates.
Step20: We can simulate this circuit and verify it indeed swaps the qubits.
Step21: More on magic methods and protocols
As mentioned, methods such as _unitary_ which we have seen are known as "magic
methods." Much of Cirq relies on "magic methods", which are methods prefixed with one or
two underscores and used by Cirq's protocols or built-in Python methods.
For instance, Python translates cirq.Z**0.25 into
cirq.Z.__pow__(0.25). Other uses are specific to cirq and are found in the
protocols subdirectory. They are defined below.
At minimum, you will need to define either the _num_qubits_ or
_qid_shape_ magic method to define the number of qubits (or qudits) used
in the gate.
Standard Python magic methods
There are many standard magic methods in Python. Here are a few of the most
important ones used in Cirq | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The Cirq Developers
End of explanation
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq
print("installed cirq.")
import cirq
import numpy as np
Explanation: Custom gates
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/custom_gates"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/custom_gates.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/custom_gates.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/custom_gates.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
End of explanation
Define a custom single-qubit gate.
class MyGate(cirq.Gate):
def __init__(self):
super(MyGate, self)
def _num_qubits_(self):
return 1
def _unitary_(self):
return np.array([
[1.0, 1.0],
[-1.0, 1.0]
]) / np.sqrt(2)
def _circuit_diagram_info_(self, args):
return "G"
my_gate = MyGate()
Explanation: Standard gates such as Pauli gates and CNOTs are defined in cirq.ops as described here. To use a unitary which is not a standard gate in a circuit, one can create a custom gate as described in this guide.
General pattern
Gates are classes in Cirq. To define custom gates, we inherit from a base gate class and define a few methods.
The general pattern is to:
Inherit from cirq.Gate.
Define one of the _num_qubits_ or _qid_shape_ methods.
Define one of the _unitary_ or _decompose_ methods.
Note: Methods beginning and ending with one or more underscores are magic methods and are used by Cirq's protocols or built-in Python functions. More information about magic methods is included at the end of this guide.
We demonstrate these patterns via the following examples.
From a unitary
One can create a custom Cirq gate from a unitary matrix in the following manner. Here, we define a gate which corresponds to the unitary
$$ U = \frac{1}{\sqrt{2}} \left[ \begin{matrix} 1 & 1 \ -1 & 1 \end{matrix} \right] . $$
End of explanation
Use the custom gate in a circuit.
circ = cirq.Circuit(
my_gate.on(cirq.LineQubit(0))
)
print("Circuit with custom gates:")
print(circ)
Explanation: In this example, the _num_qubits_ method tells Cirq that this gate acts on a single-qubit, and the _unitary_ method defines the unitary of the gate. The _circuit_diagram_info_ method tells Cirq how to display the gate in a circuit, as we will see below.
Once this gate is defined, it can be used like any standard gate in Cirq.
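As a quick sanity check (an added snippet, not part of the original notebook), you can ask Cirq for the unitary it infers from the gate definition:
print(cirq.unitary(my_gate))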
End of explanation
Simulate a circuit with a custom gate.
sim = cirq.Simulator()
res = sim.simulate(circ)
print(res)
Explanation: When we print the circuit, we see the symbol we specified in the _circuit_diagram_info_ method.
Circuits with custom gates can be simulated in the same manner as circuits with standard gates.
End of explanation
Define a custom two-qubit gate.
class AnotherGate(cirq.Gate):
def __init__(self):
super(AnotherGate, self)
def _num_qubits_(self):
return 2
def _unitary_(self):
return np.array([
[1.0, -1.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 1.0],
[1.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 1.0, -1.0]
]) / np.sqrt(2)
def _circuit_diagram_info_(self, args):
return "Top wire symbol", "Bottom wire symbol"
this_gate = AnotherGate()
Explanation: An alternative to inheriting from cirq.Gate is to inherit from cirq.SingleQubitGate, in which case defining _num_qubits_ is unnecessary. An example of defining a two-qubit gate is shown below.
End of explanation
Use the custom two-qubit gate in a circuit.
circ = cirq.Circuit(
this_gate.on(*cirq.LineQubit.range(2))
)
print("Circuit with custom two-qubit gate:")
print(circ)
Explanation: Here, the _circuit_diagram_info_ method returns two symbols (one for each wire) since it is a two-qubit gate.
End of explanation
Define a custom gate with a parameter.
class RotationGate(cirq.Gate):
def __init__(self, theta):
super(RotationGate, self)
self.theta = theta
def _num_qubits_(self):
return 1
def _unitary_(self):
return np.array([
[np.cos(self.theta), np.sin(self.theta)],
[np.sin(self.theta), -np.cos(self.theta)]
])
def _circuit_diagram_info_(self, args):
return f"R({self.theta})"
Explanation: As above, this circuit can also be simulated in the expected way.
With parameters
Custom gates can be defined and used with parameters. For example, to define the gate
$$ R(\theta) = \left[ \begin{matrix} \cos \theta & \sin \theta \ \sin \theta & - \cos \theta \end{matrix} \right], $$
we can do the following.
End of explanation
Use the custom gate in a circuit.
circ = cirq.Circuit(
RotationGate(theta=0.1).on(cirq.LineQubit(0))
)
print("Circuit with a custom rotation gate:")
print(circ)
Explanation: This gate can be used in a circuit as shown below.
End of explanation
class MySwap(cirq.Gate):
def __init__(self):
super(MySwap, self)
def _num_qubits_(self):
return 2
def _decompose_(self, qubits):
a, b = qubits
yield cirq.CNOT(a, b)
yield cirq.CNOT(b, a)
yield cirq.CNOT(a, b)
def _circuit_diagram_info_(self, args):
return ["CustomSWAP"] * self.num_qubits()
my_swap = MySwap()
Explanation: From a known decomposition
Custom gates can also be defined from a known decomposition (of gates). This is useful, for example, when groups of gates appear repeatedly in a circuit, or when a standard decomposition of a gate into primitive gates is known.
We show an example below of a custom swap gate defined from a known decomposition of three CNOT gates.
End of explanation
Use the custom gate in a circuit.
qreg = cirq.LineQubit.range(2)
circ = cirq.Circuit(
cirq.X(qreg[0]),
my_swap.on(*qreg)
)
print("Circuit:")
print(circ)
Explanation: The _decompose_ method yields the operations which implement the custom gate. (One can also return a list of operations instead of a generator.)
When we use this gate in a circuit, the individual gates in the decomposition do not appear in the circuit. Instead, the _circuit_diagram_info_ appears in the circuit. As mentioned, this can be useful for interpreting circuits at a higher level than individual (primitive) gates.
End of explanation
Simulate the circuit.
sim.simulate(circ)
Explanation: We can simulate this circuit and verify it indeed swaps the qubits.
End of explanation
print(cirq.unitary(cirq.X))
# prints
# [[0.+0.j 1.+0.j]
# [1.+0.j 0.+0.j]]
sqrt_x = cirq.X**0.5
print(cirq.unitary(sqrt_x))
# prints
# [[0.5+0.5j 0.5-0.5j]
# [0.5-0.5j 0.5+0.5j]]
Explanation: More on magic methods and protocols
As mentioned, methods such as _unitary_ which we have seen are known as "magic
methods." Much of Cirq relies on "magic methods", which are methods prefixed with one or
two underscores and used by Cirq's protocols or built-in Python methods.
For instance, Python translates cirq.Z**0.25 into
cirq.Z.__pow__(0.25). Other uses are specific to cirq and are found in the
protocols subdirectory. They are defined below.
At minimum, you will need to define either the _num_qubits_ or
_qid_shape_ magic method to define the number of qubits (or qudits) used
in the gate.
Standard Python magic methods
There are many standard magic methods in Python. Here are a few of the most
important ones used in Cirq:
* __str__ for user-friendly string output and __repr__ is the Python-friendly string output, meaning that eval(repr(y))==y should always be true.
* __eq__ and __hash__ which define whether objects are equal or not. You
can also use cirq.value.value_equality for objects that have a small list
of sub-values that can be compared for equality.
* Arithmetic functions such as __pow__, __mul__, __add__ define the
action of **, *, and + respectively.
cirq.num_qubits and def _num_qubits_
A Gate must implement the _num_qubits_ (or _qid_shape_) method.
This method returns an integer and is used by cirq.num_qubits to determine
how many qubits this gate operates on.
cirq.qid_shape and def _qid_shape_
A qudit gate or operation must implement the _qid_shape_ method that returns a
tuple of integers. This method is used to determine how many qudits the gate or
operation operates on and what dimension each qudit must be. If only the
_num_qubits_ method is implemented, the object is assumed to operate only on
qubits. Callers can query the qid shape of the object by calling
cirq.qid_shape on it. See qudit documentation for more
information.
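A minimal sketch of a qudit gate (an added example with a hypothetical name, not from the original notebook): it declares a single qutrit via _qid_shape_ instead of _num_qubits_ and supplies a 3x3 permutation unitary.
class QutritShift(cirq.Gate):
    def _qid_shape_(self):
        return (3,)  # one qudit of dimension 3
    def _unitary_(self):
        # cyclic shift |0> -> |1> -> |2> -> |0>
        return np.array([[0, 0, 1],
                         [1, 0, 0],
                         [0, 1, 0]])
q = cirq.LineQid(0, dimension=3)
print(cirq.qid_shape(QutritShift()))  # prints (3,)
print(cirq.Circuit(QutritShift().on(q)))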
cirq.unitary and def _unitary_
When an object can be described by a unitary matrix, it can expose that unitary
matrix by implementing a _unitary_(self) -> np.ndarray method.
Callers can query whether or not an object has a unitary matrix by calling
cirq.unitary on it.
The _unitary_ method may also return NotImplemented, in which case
cirq.unitary behaves as if the method is not implemented.
cirq.decompose and def _decompose_
Operations and gates can be defined in terms of other operations by implementing
a _decompose_ method that returns those other operations. Operations implement
_decompose_(self) whereas gates implement _decompose_(self, qubits)
(since gates don't know their qubits ahead of time).
The main requirements on the output of _decompose_ methods are:
DO NOT CREATE CYCLES. The cirq.decompose method will iterative decompose until it finds values satisfying a keep predicate. Cycles cause it to enter an infinite loop.
Head towards operations defined by Cirq, because these operations have good decomposition methods that terminate in single-qubit and two qubit gates.
These gates can be understood by the simulator, optimizers, and other code.
All that matters is functional equivalence.
Don't worry about staying within or reaching a particular gate set; it's too hard to predict what the caller will want. Gate-set-aware decomposition is useful, but this is not the protocol that does that.
Instead, use features available in the transformer API.
For example, cirq.CCZ decomposes into a series of cirq.CNOT and cirq.T operations.
This allows code that doesn't understand three-qubit operation to work with cirq.CCZ; by decomposing it into operations they do understand.
As another example, cirq.TOFFOLI decomposes into a cirq.H followed by a cirq.CCZ followed by a cirq.H.
Although the output contains a three qubit operation (the CCZ), that operation can be decomposed into two qubit and one qubit operations.
So code that doesn't understand three qubit operations can deal with Toffolis by decomposing them, and then decomposing the CCZs that result from the initial decomposition.
In general, decomposition-aware code consuming operations is expected to recursively decompose unknown operations until the code either hits operations it understands or hits a dead end where no more decomposition is possible.
The cirq.decompose method implements logic for performing exactly this kind of recursive decomposition.
Callers specify a keep predicate, and optionally specify intercepting and fallback decomposers, and then cirq.decompose will repeatedly decompose whatever operations it was given until the operations satisfy the given keep.
If cirq.decompose hits a dead end, it raises an error.
Cirq doesn't make any guarantees about the "target gate set" decomposition is heading towards.
cirq.decompose is not a method
Decompositions within Cirq happen to converge towards X, Y, Z, CZ, PhasedX, specified-matrix gates, and others.
But this set will vary from release to release, and so it is important for consumers of decompositions to look for generic properties of gates,
such as "two qubit gate with a unitary matrix", instead of specific gate types such as CZ gates.
cirq.inverse and __pow__
Gates and operations are considered to be invertible when they implement a __pow__ method that returns a result besides NotImplemented for an exponent of -1.
This inverse can be accessed either directly as value**-1, or via the utility method cirq.inverse(value).
If you are sure that value has an inverse, saying value**-1 is more convenient than saying cirq.inverse(value).
cirq.inverse is for cases where you aren't sure if value is invertible, or where value might be a sequence of invertible operations.
cirq.inverse has a default parameter used as a fallback when value isn't invertible.
For example, cirq.inverse(value, default=None) returns the inverse of value, or else returns None if value isn't invertible.
(If no default is specified and value isn't invertible, a TypeError is raised.)
When you give cirq.inverse a list, or any other kind of iterable thing, it will return a sequence of operations that (if run in order) undoes the operations of the original sequence (if run in order).
Basically, the items of the list are individually inverted and returned in reverse order.
For example, the expression cirq.inverse([cirq.S(b), cirq.CNOT(a, b)]) will return the tuple (cirq.CNOT(a, b), cirq.S(b)**-1).
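This can be checked directly (an added snippet):
a, b = cirq.LineQubit.range(2)
print(cirq.inverse([cirq.S(b), cirq.CNOT(a, b)]))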
Gates and operations can also return values beside NotImplemented from their __pow__ method for exponents besides -1.
This pattern is used often by Cirq.
For example, the square root of X gate can be created by raising cirq.X to 0.5:
End of explanation |
1,128 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Channels
Channel
A Channel gives a means of asynchronous iteration over items coming from some upstream source.
A consumer of a Channel uses its next() method to iteratively receive items as the channel makes them available; when the channel is exhausted, subsequent invocations of next() result in a ChannelDone exception. Those calls to next(), however, would need to be made in the context of a tornado coroutine, which gets complex pretty quickly.
In practice, channels are, instead, set up declaratively and passed to an app.Flo object to run exhaustively through them via the run() method. That behavior will generally be used in this guide via functions like this (though more complex in future chapters)
Step1: Also, channels are meant to refer to other channels to build up a graph of dependencies. flowz provides the ability to "tee" a channel to create a second independent iterator that enables safe dependencies. That feature is one of the things that makes flowz useful in ways that other options (e.g., Python 3 async iterators) are not.
The plain Channel class is effectivley abstract, with many useful implementations via subclasses. Many of the important subclasses will be covered in this guide.
IterChannel
An IterChannel converts an iterable into a channel. This is particularly helpful in this guide for illustrative purposes
Step2: Those innocuous lines did the following things
Step3: A slight modification of that example gives a first chance to demonstrate the asynchronous nature of the iteration.
Step4: Note that the printing of the numbers interleaves between the two channels. That is a consequence of the logic running on a tornado loop with each channel yielding control back and forth to each other (and any other coroutines involved). That asynchrony becomes very important and useful as the channels get more complicated and involve accessing cloud-based storage and other sources that are best accessed asynchronously themselves.
It is important when constructing a graph of flowz channels to make sure that each channel and each tee of a channel is consumed by exactly one consumer. If two consumers iterate the same channel or tee, indeterminate behavior will result. And if no consumer iterates a channel or tee, it will cause all of the objects iterated in other tees of the same channel to be kept in memory until the unconsumed channel is garbage collected.
See the later chapter on channel managers for additional tools to make this easier.
MapChannel [.map(mapper)]
A MapChannel wraps another channel and applies a function to each of the elements in the underlying channel.
Step5: The same logic can be performed on any channel with the helper method .map(mapper)
Step6: Almost all of the useful Channel subclasses are accessible via helper methods on the Channel class, so the rest of this guide will generally demonstrate the helper methods only.
Indexing
An occasionally handy variant on mapping is to use the standard python indexing operator [] to perform the indexing operation on each element of the channel.
Step7: FlatMapChannel [.flat_map(mapper)]
A variant on the MapChannel is a FlatMapChannel. Its mapper can return an iterable, and the items will be emitted one by one by the channel. Note the difference in behavior here
Step8: FilterChannel [.filter(predicate)]
A FilterChannel wraps another channel and applies a function to each of the elements in the underlying channel, passing through the element only if the function returns true.
Step9: And since these examples are looking a lot like the standard map/filter examples in Python tutorials, we might as well string them together!
Step10: ZipChannel [.zip(*channels)]
A ZipChannel returns the items from multiple channels grouped together in a way akin to the built-in zip function. In the resulting ZipChannel, the items in all the channels specified will be zipped together on a per-item basis. The channel on which you're invoking zip will be the first, and items from the other channels will follow their order of specification in parameters.
Step11: ChainChannel [.chain(*channels)]
A ChainChannel simply chains together multiple channels into one channel, as though concatenating them.
Step12: ObserveChannel [.observe(observer)]
An ObserveChannel wraps another channel and passes along its items untouched, but also has the opportunity to run its observer function against them.
Step13: GroupChannel [.groupby(key_func)]
A GroupChannel wraps another channel and organizes its items into groups (tuples) based on the key returned for each when the key_func is applied.
Step14: For groupby() to work as expected, the channel must already be organized in the order (ascending or descending) of the desired group keys. Note how the alteration of the above example to group by "mod 5" rather than "div 5" is not pleasant
Step15: Rolling windows [.windowby(rolling(window_size))]
An alternate form of grouping that does not have a dedicated class, but instead relies on a helper function passed to the WindowChannel class (the superclass of GroupChannel), works as follows to produce rolling windows
Step16: Note that the objects are progressively added into rolling windows until they reach the target size, and then they taper off again at the very end. If you want to only deal with groups of the exact size, you can do this | Python Code:
def print_chans(*chans):
app.Flo([chan.map(print) for chan in chans]).run()
Explanation: Introduction to Channels
Channel
A Channel gives a means of asynchronous iteration over items coming from some upstream source.
A consumer of a Channel uses its next() method to iteratively receive items as the channel makes them available; when the channel is exhausted, subsequent invocations of next() result in a ChannelDone exception. Those calls to next(), however, would need to be made in the context of a tornado coroutine, which gets complex pretty quickly.
In practice, channels are, instead, set up declaratively and passed to an app.Flo object to run exhaustively through them via the run() method. That behavior will generally be used in this guide via functions like this (though more complex in future chapters):
End of explanation
chan = IterChannel(range(5))
print_chans(chan)
Explanation: Also, channels are meant to refer to other channels to build up a graph of dependencies. flowz provides the ability to "tee" a channel to create a second independent iterator that enables safe dependencies. That feature is one of the things that makes flowz useful in ways that other options (e.g., Python 3 async iterators) are not.
The plain Channel class is effectivley abstract, with many useful implementations via subclasses. Many of the important subclasses will be covered in this guide.
IterChannel
An IterChannel converts an iterable into a channel. This is particularly helpful in this guide for illustrative purposes:
End of explanation
chan1 = IterChannel(range(5))
chan2 = chan1.tee()
print_chans(chan1)
print('----')
print_chans(chan2)
Explanation: Those innocuous lines did the following things:
1. Wrapped a range iterator with an IterChannel.
2. Wrapped that channel with a MapChannel (see below) that will print each element in the wrapped channel.
3. Passed that MapChannel to an app.Flo object.
4. Fully iterated (asychronously) over that channel.
For now, that seems like a lot a hullabaloo for iterating over five numbers, but the value will become clearer as time goes on.
TeeChannel [.tee()]
A TeeChannel wraps another channel to provide a way to independently iterate over the wrapped channel starting from the same point forwards. It doesn't create a copy of the channel or the objects in it; it just presents the same objects in the same order via an independent iterator.
In practice, you would never use the TeeChannel class directly; you would, instead, just call .tee() on an existing channel.
End of explanation
chan1 = IterChannel(range(5))
chan2 = chan1.tee()
print_chans(chan1, chan2)
Explanation: A slight modification of that example gives a first chance to demonstrate the asynchronous nature of the iteration.
End of explanation
chan = MapChannel(IterChannel(range(5)), lambda x: x*2)
print_chans(chan)
Explanation: Note that the printing of the numbers interleaves between the two channels. That is a consequence of the logic running on a tornado loop with each channel yielding control back and forth to each other (and any other coroutines involved). That asynchrony becomes very important and useful as the channels get more complicated and involve accessing cloud-based storage and other sources that are best accessed asynchronously themselves.
It is important when constructing a graph of flowz channels to make sure that each channel and each tee of a channel is consumed by exactly one consumer. If two consumers iterate the same channel or tee, indeterminate behavior will result. And if no consumer iterates a channel or tee, it will cause all of the objects iterated in other tees of the same channel to be kept in memory until the unconsumed channel is garbage collected.
See the later chapter on channel managers for additional tools to make this easier.
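A small sketch of this rule (an added example, not from the original guide): give each consumer its own channel or tee, just as the later examples do.
base = IterChannel(range(3))
doubled = base.map(lambda x: x * 2)         # consumes base
squared = base.tee().map(lambda x: x ** 2)  # a separate tee for the second consumer
print_chans(doubled, squared)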
MapChannel [.map(mapper)]
A MapChannel wraps another channel and applies a function to each of the elements in the underlying channel.
End of explanation
chan = IterChannel(range(5)).map(lambda x: x*2)
print_chans(chan)
Explanation: The same logic can be performed on any channel with the helper method .map(mapper):
End of explanation
chan = IterChannel(({'first': 'John', 'last': 'Cleese'}, {'first': 'Eric', 'last': 'Idle'}))['last']
print_chans(chan)
Explanation: Almost all of the useful Channel subclasses are accessible via helper methods on the Channel class, so the rest of this guide will generally demonstrate the helper methods only.
Indexing
An occasionally handy variant on mapping is to use the standard python indexing operator [] to perform the indexing operation on each element of the channel.
End of explanation
chan = IterChannel(range(5))
map_chan = chan.map(lambda x: [x]*x)
flat_map_chan = chan.tee().flat_map(lambda x: [x]*x)
print_chans(map_chan)
print('----')
print_chans(flat_map_chan)
Explanation: FlatMapChannel [.flat_map(mapper)]
A variant on the MapChannel is a FlatMapChannel. Its mapper can return an iterable, and the items will be emitted one by one by the channel. Note the difference in behavior here:
End of explanation
chan = IterChannel(range(5)).filter(lambda x: x % 2 == 0)
print_chans(chan)
Explanation: FilterChannel [.filter(predicate)]
A FilterChannel wraps another channel and applies a function to each of the elements in the underlying channel, passing through the element only if the function returns true.
End of explanation
chan = IterChannel(range(5)).filter(lambda x: x % 2 == 0).map(lambda x: x*2)
print_chans(chan)
Explanation: And since these examples are looking a lot like the standard map/filter examples in Python tutorials, we might as well string them together!
End of explanation
chan1 = IterChannel(range(5))
chan2 = chan1.tee().map(lambda x: x * 2)
chan3 = chan1.tee().map(lambda x: x ** 2)
print_chans(chan1.zip(chan2, chan3))
Explanation: ZipChannel [.zip(*channels)]
A ZipChannel returns the items from multiple channels grouped together in a way akin to the built-in zip function. In the resulting ZipChannel, the items in all the channels specified will be zipped together on a per-item basis. The channel on which you're invoking zip will be the first, and items from the other channels will follow their order of specification in parameters.
End of explanation
print_chans(IterChannel(range(3)).chain(IterChannel(range(10,13)), IterChannel(range(100,103))))
Explanation: ChainChannel [.chain(*channels)]
A ChainChannel simply chains together multiple channels into one channel, as though concatenating them.
End of explanation
print_chans(IterChannel(range(3)).observe(lambda x: print('I saw %d' % x)))
Explanation: ObserveChannel [.observe(observer)]
An ObserveChannel wraps another channel and passes along its items untouched, but also has the opportunity to run its observer function against them.
End of explanation
print_chans(IterChannel(range(20)).groupby(lambda x: x // 5))
Explanation: GroupChannel [.groupby(key_func)]
A GroupChannel wraps another channel and organizes its items into groups (tuples) based on the key returned for each when the key_func is applied.
End of explanation
print_chans(IterChannel(range(20)).groupby(lambda x: x % 5))
Explanation: For groupby() to work as expected, the channel must already be organized in the order (ascending or descending) of the desired group keys. Note how the alteration of the above example to group by "mod 5" rather than "div 5" is not pleasant:
End of explanation
from flowz.channels import tools as chtools
print_chans(IterChannel(range(10)).windowby(chtools.rolling(5)))
Explanation: Rolling windows [.windowby(rolling(window_size))]
An alternate form of grouping that does not have a dedicated class, but instead relies on a helper function passed to the WindowChannel class (the superclass of GroupChannel), works as follows to produce rolling windows:
End of explanation
print_chans(IterChannel(range(10)).windowby(chtools.rolling(5)).filter(chtools.exact_group_size(5)))
Explanation: Note that the objects are progressively added into rolling windows until they reach the target size, and then they taper off again at the very end. If you want to only deal with groups of the exact size, you can do this:
End of explanation |
1,129 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
About resolution
A quick primer first: http
Step1: Two cases
Save the resized image to the same directory as the original image
http
Step2: Save the resized images into one directory
To collect the output in a single directory, we use
os.path.split(path)
which splits path into a (directory, filename) 2-tuple.
```
os.path.split('c | Python Code:
import os
import glob
from PIL import Image
def thumbnail_pic(path):
a = glob.glob(r'*.jpg')
for x in a:
name = os.path.join(path, x)
im = Image.open(name)
im.thumbnail((1136, 640))
print(im.format, im.size, im.mode)
im.save(name, 'JPEG')
print('Done!')
if __name__ == '__main__':
path = '.'
thumbnail_pic(path)
import glob
for filename in glob.glob('**/*.jpg', recursive=True):
print(filename)
import os
import glob
from PIL import Image
def thumbnail_pic(path):
    for x in glob.glob('**/*.jpg', recursive=True):
        print("x is:", x)
        name = os.path.join(path, x)
        print("the joined path is:", name)
        im = Image.open(name)
        print(im.format, im.size, im.mode)
        im.thumbnail((1136, 640))
        print(im.format, im.size, im.mode)
        # save the resized copy under ./output/ using the original file name
        im.save(os.path.join("./output", os.path.basename(name)), 'JPEG')
    print('Done!')

if __name__ == '__main__':
    path = '.'
    thumbnail_pic(path)
Explanation: About resolution
A quick primer first: http://www.jianshu.com/p/c3387bcc4f6e.
The "5.2 inches" we talk about is the diagonal length of the phone screen.
A resolution of 1920px*1080px means that on this 5.2-inch Huawei Honor 7 screen there are 1920 pixels along the vertical height and 1080 pixels along the horizontal width.
Screen pixel density can be understood as the number of pixels contained in a square whose diagonal is 1 inch long.
os
os.path.exists
Commonly used methods of Python's os.path module
os.path.exists(path)
Returns True if path exists, and False if it does not.
Usage of Python's os.walk, with examples
Link
os.walk(top, topdown=True, onerror=None, followlinks=False)
Each iteration yields a 3-tuple (dirpath, dirnames, filenames):
the first is the starting path, the second lists the folders under it, and the third lists the files under it.
dirpath is a string, the path of the directory;
dirnames is a list containing the names of all sub-directories of dirpath;
filenames is a list containing the names of the non-directory files.
These names contain no path information; to get the full path, use os.path.join(dirpath, name).
Absolute pathnames vs. relative pathnames in Linux
An absolute path is the real location of a file, starting from the root (drive) of the disk and descending directory by directory to the file.
A relative path points to the referenced resource level by level starting from the current file.
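A minimal illustration of the os.walk tuple (an added sketch; prints only the top level):
for dirpath, dirnames, filenames in os.walk('.'):
    print(dirpath, dirnames, filenames)
    break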
End of explanation
from PIL import Image
import glob, os
size = 100, 100
for infile in glob.glob("**/*.jpg"):
file, ext = os.path.splitext(infile)
print (file)
print (ext)
im = Image.open(infile)
im.thumbnail(size)
im.save( file + ".thum.jpg")
#print (im.format, im.size, im.mode)
Explanation: Two cases
Save the resized image to the same directory as the original image
http://pillow.readthedocs.io/en/3.1.x/reference/Image.html#create-thumbnails
The resize itself is done directly with the thumbnail() method.
In Python 3.5, glob.glob can search recursively for all matching .jpg files via **, which is much more convenient than in 2.7.
Also, to save next to the original image, we use os.path.splitext(path),
which separates the file name from its extension and returns a (fname, fextension) tuple that can be sliced.
In the file, ext pair below, file serves as the path, and we finish with im.save( file + ".thum.jpg"). The ".thum.jpg" suffix is only there so we can tell which files were regenerated; '.jpg' could be used directly as well.
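For instance (an added illustration):
print(os.path.splitext('photos/cat.jpg'))  # -> ('photos/cat', '.jpg')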
End of explanation
from PIL import Image
import glob, os
size = 100, 100
for infile in glob.glob("**/*.jpg"):
file, ext = os.path.split(infile)
print (file)
print (ext)
im = Image.open(infile)
im.thumbnail(size)
im.save( "./output/" + ext)
#print (im.format, im.size, im.mode)
Explanation: Save the resized images into one directory
This time, to collect the output in a single directory, we use
os.path.split(path)
which splits path into a (directory, filename) 2-tuple.
```
os.path.split('c:\csv\test.csv')
('c:\csv', 'test.csv')
```
In the file, ext pair below, file is the directory part, and we finish with im.save( "./output/" + ext), where "./output/" is the output directory we chose and ext is the image's original file name.
End of explanation |
1,130 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualize subject head movement
Show how subjects move as a function of time.
Step1: Visualize the subject head movements as traces
Step2: Or we can visualize them as a continuous field (with the vectors pointing
in the head-upward direction) | Python Code:
# Authors: Eric Larson <[email protected]>
#
# License: BSD (3-clause)
from os import path as op
import mne
print(__doc__)
data_path = op.join(mne.datasets.testing.data_path(verbose=True), 'SSS')
pos = mne.chpi.read_head_pos(op.join(data_path, 'test_move_anon_raw.pos'))
Explanation: Visualize subject head movement
Show how subjects move as a function of time.
End of explanation
mne.viz.plot_head_positions(pos, mode='traces')
Explanation: Visualize the subject head movements as traces:
End of explanation
mne.viz.plot_head_positions(pos, mode='field')
Explanation: Or we can visualize them as a continuous field (with the vectors pointing
in the head-upward direction):
End of explanation |
1,131 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gaussian Process Regression and Classification with Elliptical Slice Sampling
Elliptical slice sampling is a variant of slice sampling that allows sampling from distributions with multivariate Gaussian prior and arbitrary likelihood. It is generally about as fast as regular slice sampling, mixes well even when the prior covariance might otherwise induce a strong dependence between samples, and does not depend on any tuning parameters. It can be useful when working with Gaussian processes, in which a multivariate Gaussian prior is used to impose a covariance structure on some latent function.
This notebook provides examples of how to use PyMC3's elliptical slice sampler to perform Gaussian process regression and classification. Since the focus of these examples are to show how to of elliptical slice sampling to sample from the posterior rather than to show how to fit the covariance kernel parameters, we assume that the kernel parameters are known.
Step1: Gaussian Process Regression
In Gaussian process regression, the prior $f$ is a multivariate normal with mean zero and covariance matrix $K$, and the likelihood is a factored normal (or, equivalently, a multivariate normal with diagonal covariance) with mean $f$ and variance $\sigma^2_n$
Step2: Examine actual posterior distribution
The posterior is analytically tractable so we can compute the posterior mean explicitly. Rather than computing the inverse of the covariance matrix K, we use the numerically stable calculation described Algorithm 2.1 in the book "Gaussian Processes for Machine Learning" (2006) by Rasmussen and Williams, which is available online for free.
Step3: Sample from posterior distribution
Step4: Evaluate posterior fit
The posterior samples are consistent with the analytically derived posterior and behaves how one would expect–narrower near areas with lots of observations and wider in areas with more uncertainty.
Step5: Gaussian Process Classification
In Gaussian process classification, the likelihood is not normal and thus the posterior is not analytically tractable. The prior is again a multivariate normal with covariance matrix $K$, and the likelihood is the standard likelihood for logistic regression
Step6: Sample from posterior distribution
Step7: Evaluate posterior fit
The posterior looks good, though the fit is, unsurprisingly, erratic outside the range of the observed data. | Python Code:
import pymc3 as pm
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import theano.tensor as tt
sns.set(style='white', palette='deep', color_codes=True)
%matplotlib inline
Explanation: Gaussian Process Regression and Classification with Elliptical Slice Sampling
Elliptical slice sampling is a variant of slice sampling that allows sampling from distributions with multivariate Gaussian prior and arbitrary likelihood. It is generally about as fast as regular slice sampling, mixes well even when the prior covariance might otherwise induce a strong dependence between samples, and does not depend on any tuning parameters. It can be useful when working with Gaussian processes, in which a multivariate Gaussian prior is used to impose a covariance structure on some latent function.
This notebook provides examples of how to use PyMC3's elliptical slice sampler to perform Gaussian process regression and classification. Since the focus of these examples are to show how to of elliptical slice sampling to sample from the posterior rather than to show how to fit the covariance kernel parameters, we assume that the kernel parameters are known.
End of explanation
np.random.seed(1)
# Number of training points
n = 30
X0 = np.sort(3 * np.random.rand(n))[:, None]
# Number of points at which to interpolate
m = 100
X = np.linspace(0, 3, m)[:, None]
# Covariance kernel parameters
noise = 0.1
lengthscale = 0.3
f_scale = 1
cov = f_scale * pm.gp.cov.ExpQuad(1, lengthscale)
K = cov(X0)
K_s = cov(X0, X)
K_noise = K + noise * np.eye(n)
# Add very slight perturbation to the covariance matrix diagonal to improve numerical stability
K_stable = K + 1e-12 * np.eye(n)
# Observed data
f = np.random.multivariate_normal(mean=np.zeros(n), cov=K_noise.eval())
Explanation: Gaussian Process Regression
In Gaussian process regression, the prior $f$ is a multivariate normal with mean zero and covariance matrix $K$, and the likelihood is a factored normal (or, equivalently, a multivariate normal with diagonal covariance) with mean $f$ and variance $\sigma^2_n$:
\begin{equation}
f \sim N(\boldsymbol{0}, K) \
L(y | f, \sigma^2_n) = \Pi_n N(f_n, \sigma^2_n)
\end{equation}
Generate some example data
We generate some data from a Gaussian process at 30 random points in $[0, 3]$ and interpolate the function's value in this interval.
End of explanation
fig, ax = plt.subplots(figsize=(14, 6));
ax.scatter(X0, f, s=40, color='b', label='True points');
# Analytically compute posterior mean
L = np.linalg.cholesky(K_noise.eval())
alpha = np.linalg.solve(L.T, np.linalg.solve(L, f))
post_mean = np.dot(K_s.T.eval(), alpha)
ax.plot(X, post_mean, color='g', alpha=0.8, label='Posterior mean');
ax.set_xlim(0, 3);
ax.set_ylim(-2, 2);
ax.legend();
Explanation: Examine actual posterior distribution
The posterior is analytically tractable so we can compute the posterior mean explicitly. Rather than computing the inverse of the covariance matrix K, we use the numerically stable calculation described in Algorithm 2.1 of the book "Gaussian Processes for Machine Learning" (2006) by Rasmussen and Williams, which is available online for free.
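For reference, the quantity computed below is the standard GP posterior mean at the test inputs; with the Cholesky factorization $LL^\top = K + \sigma_n^2 I$ it reads
$$ \bar{f}_* = K_*^\top (K + \sigma_n^2 I)^{-1} y = K_*^\top \alpha, \qquad \alpha = L^\top \backslash \left( L \backslash y \right), $$
which is exactly what the Cholesky and triangular-solve steps in the code implement.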
End of explanation
with pm.Model() as model:
# The actual distribution of f_sample doesn't matter as long as the shape is right since it's only used
# as a dummy variable for slice sampling with the given prior
f_sample = pm.Flat('f_sample', shape=(n, ))
# Likelihood
y = pm.MvNormal('y', observed=f, mu=f_sample, cov=noise * tt.eye(n), shape=n)
# Interpolate function values using noisy covariance matrix
L = tt.slinalg.cholesky(K_noise)
f_pred = pm.Deterministic('f_pred', tt.dot(K_s.T, tt.slinalg.solve(L.T, tt.slinalg.solve(L, f_sample))))
# Use elliptical slice sampling
ess_step = pm.EllipticalSlice(vars=[f_sample], prior_cov=K_stable)
trace = pm.sample(5000, start=model.test_point, step=[ess_step], progressbar=False, random_seed=1)
Explanation: Sample from posterior distribution
End of explanation
fig, ax = plt.subplots(figsize=(14, 6));
for idx in np.random.randint(4000, 5000, 500):
ax.plot(X, trace['f_pred'][idx], alpha=0.02, color='navy')
ax.scatter(X0, f, s=40, color='k', label='True points');
ax.plot(X, post_mean, color='g', alpha=0.8, label='Posterior mean');
ax.legend();
ax.set_xlim(0, 3);
ax.set_ylim(-2, 2);
Explanation: Evaluate posterior fit
The posterior samples are consistent with the analytically derived posterior and behave as one would expect: narrower near areas with lots of observations and wider in areas with more uncertainty.
End of explanation
np.random.seed(5)
f = np.random.multivariate_normal(mean=np.zeros(n), cov=K_stable.eval())
# Separate data into positive and negative classes
f[f > 0] = 1
f[f <= 0] = 0
fig, ax = plt.subplots(figsize=(14, 6));
ax.scatter(X0, np.ma.masked_where(f == 0, f), color='b', label='Positive Observations');
ax.scatter(X0, np.ma.masked_where(f == 1, f), color='r', label='Negative Observations');
ax.legend(loc='lower right');
ax.set_xlim(-0.1, 3.1);
ax.set_ylim(-0.2, 1.2);
Explanation: Gaussian Process Classification
In Gaussian process classification, the likelihood is not normal and thus the posterior is not analytically tractable. The prior is again a multivariate normal with covariance matrix $K$, and the likelihood is the standard likelihood for logistic regression:
\begin{equation}
L(y | f) = \Pi_n \sigma(y_n, f_n)
\end{equation}
Generate some example data
We generate random samples from a Gaussian process, assign any points greater than zero to a "positive" class, and assign all other points to a "negative" class.
End of explanation
with pm.Model() as model:
# Again, f_sample is just a dummy variable
f_sample = pm.Flat('f_sample', shape=n)
f_transform = pm.invlogit(f_sample)
# Binomial likelihood
y = pm.Binomial('y', observed=f, n=np.ones(n), p=f_transform, shape=n)
# Interpolate function values using noiseless covariance matrix
L = tt.slinalg.cholesky(K_stable)
f_pred = pm.Deterministic('f_pred', tt.dot(K_s.T, tt.slinalg.solve(L.T, tt.slinalg.solve(L, f_transform))))
# Use elliptical slice sampling
ess_step = pm.EllipticalSlice(vars=[f_sample], prior_cov=K_stable)
trace = pm.sample(5000, start=model.test_point, step=[ess_step], progressbar=False, random_seed=1)
Explanation: Sample from posterior distribution
End of explanation
fig, ax = plt.subplots(figsize=(14, 6));
for idx in np.random.randint(4000, 5000, 500):
ax.plot(X, trace['f_pred'][idx], alpha=0.04, color='navy')
ax.scatter(X0, f, s=40, color='k');
ax.set_xlim(0, 3);
ax.set_ylim(-0.1, 1.1);
Explanation: Evaluate posterior fit
The posterior looks good, though the fit is, unsurprisingly, erratic outside the range of the observed data.
End of explanation |
1,132 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial 07 - Non linear Elliptic problem
Keywords
Step1: 3. Affine Decomposition
For this problem the affine decomposition is straightforward
Step2: 4. Main program
4.1. Read the mesh for this problem
The mesh was generated by the data/generate_mesh.ipynb notebook.
Step3: 4.2. Create Finite Element space (Lagrange P1)
Step4: 4.3. Allocate an object of the NonlinearElliptic class
Step5: 4.4. Prepare reduction with a POD-Galerkin method
Step6: 4.5. Perform the offline phase
Step7: 4.6. Perform an online solve
Step8: 4.7. Perform an error analysis
Step9: 4.8. Perform a speedup analysis | Python Code:
from dolfin import *
from rbnics import *
Explanation: Tutorial 07 - Non linear Elliptic problem
Keywords: DEIM, POD-Galerkin
1. Introduction
In this tutorial, we consider a non linear elliptic problem in a two-dimensional spatial domain $\Omega=(0,1)^2$. We impose a homogeneous Dirichlet condition on the boundary $\partial\Omega$. The source term is characterized by the following expression
$$
g(\boldsymbol{x}; \boldsymbol{\mu}) = 100\sin(2\pi x_0)\cos(2\pi x_1) \quad \forall \boldsymbol{x} = (x_0, x_1) \in \Omega.
$$
This problem is characterized by two parameters. The first parameter $\mu_0$ controls the strength of the sink term and the second parameter $\mu_1$ the strength of the nonlinearity. The range of the two parameters is the following:
$$
\mu_0,\mu_1\in[0.01,10.0]
$$
The parameter vector $\boldsymbol{\mu}$ is thus given by
$$
\boldsymbol{\mu} = (\mu_0,\mu_1)
$$
on the parameter domain
$$
\mathbb{P}=[0.01,10]^2.
$$
In order to obtain a faster approximation of the problem, we pursue a model reduction by means of a POD-Galerkin reduced order method. In order to preserve the affinity assumption the discrete empirical interpolation method will be used on the forcing term $g(\boldsymbol{x}; \boldsymbol{\mu})$.
2. Parametrized formulation
Let $u(\boldsymbol{\mu})$ be the solution in the domain $\Omega$.
The strong formulation of the parametrized problem is given by:
<center>for a given parameter $\boldsymbol{\mu}\in\mathbb{P}$, find $u(\boldsymbol{\mu})$ such that</center>
$$ -\nabla^2u(\boldsymbol{\mu})+\frac{\mu_0}{\mu_1}(\exp{\mu_1u(\boldsymbol{\mu})}-1)=g(\boldsymbol{x}; \boldsymbol{\mu})$$
<br>
The corresponding weak formulation reads:
<center>for a given parameter $\boldsymbol{\mu}\in\mathbb{P}$, find $u(\boldsymbol{\mu})\in\mathbb{V}$ such that</center>
$$a\left(u(\boldsymbol{\mu}),v;\boldsymbol{\mu}\right)+c\left(u(\boldsymbol{\mu}),v;\boldsymbol{\mu}\right)=f(v;\boldsymbol{\mu})\quad \forall v\in\mathbb{V}$$
where
the function space $\mathbb{V}$ is defined as
$$
\mathbb{V} = \{v\in H^1(\Omega) : v|_{\partial\Omega}=0\}
$$
the parametrized bilinear form $a(\cdot, \cdot; \boldsymbol{\mu}): \mathbb{V} \times \mathbb{V} \to \mathbb{R}$ is defined by
$$a(u, v;\boldsymbol{\mu})=\int_{\Omega} \nabla u\cdot \nabla v \ d\boldsymbol{x},$$
the parametrized bilinear form $c(\cdot, \cdot; \boldsymbol{\mu}): \mathbb{V} \times \mathbb{V} \to \mathbb{R}$ is defined by
$$c(u, v;\boldsymbol{\mu})=\mu_0\int_{\Omega} \frac{1}{\mu_1}\big(\exp{\mu_1u} - 1\big)v \ d\boldsymbol{x},$$
the parametrized linear form $f(\cdot; \boldsymbol{\mu}): \mathbb{V} \to \mathbb{R}$ is defined by
$$f(v; \boldsymbol{\mu})= \int_{\Omega}g(\boldsymbol{x}; \boldsymbol{\mu})v \ d\boldsymbol{x}.$$
The output of interest $s(\boldsymbol{\mu})$, given by
$$s(\boldsymbol{\mu}) = \int_{\Omega} v \ d\boldsymbol{x}$$
is computed for each $\boldsymbol{\mu}$.
End of explanation
@DEIM("online", basis_generation="Greedy")
@ExactParametrizedFunctions("offline")
class NonlinearElliptic(NonlinearEllipticProblem):
# Default initialization of members
def __init__(self, V, **kwargs):
# Call the standard initialization
NonlinearEllipticProblem.__init__(self, V, **kwargs)
# ... and also store FEniCS data structures for assembly
assert "subdomains" in kwargs
assert "boundaries" in kwargs
self.subdomains, self.boundaries = kwargs["subdomains"], kwargs["boundaries"]
self.du = TrialFunction(V)
self.u = self._solution
self.v = TestFunction(V)
self.dx = Measure("dx")(subdomain_data=self.subdomains)
self.ds = Measure("ds")(subdomain_data=self.boundaries)
# Store the forcing term expression
self.f = Expression("sin(2*pi*x[0])*sin(2*pi*x[1])", element=self.V.ufl_element())
# Customize nonlinear solver parameters
self._nonlinear_solver_parameters.update({
"linear_solver": "mumps",
"maximum_iterations": 20,
"report": True
})
# Return custom problem name
def name(self):
return "NonlinearEllipticDEIM"
# Return theta multiplicative terms of the affine expansion of the problem.
@compute_theta_for_derivatives
def compute_theta(self, term):
mu = self.mu
if term == "a":
theta_a0 = 1.
return (theta_a0,)
elif term == "c":
theta_c0 = mu[0]
return (theta_c0,)
elif term == "f":
theta_f0 = 100.
return (theta_f0,)
elif term == "s":
theta_s0 = 1.0
return (theta_s0,)
else:
raise ValueError("Invalid term for compute_theta().")
# Return forms resulting from the discretization of the affine expansion of the problem operators.
@assemble_operator_for_derivatives
def assemble_operator(self, term):
v = self.v
dx = self.dx
if term == "a":
du = self.du
a0 = inner(grad(du), grad(v)) * dx
return (a0,)
elif term == "c":
u = self.u
mu = self.mu
c0 = (exp(mu[1] * u) - 1) / mu[1] * v * dx
return (c0,)
elif term == "f":
f = self.f
f0 = f * v * dx
return (f0,)
elif term == "s":
s0 = v * dx
return (s0,)
elif term == "dirichlet_bc":
bc0 = [DirichletBC(self.V, Constant(0.0), self.boundaries, 1)]
return (bc0,)
elif term == "inner_product":
du = self.du
x0 = inner(grad(du), grad(v)) * dx
return (x0,)
else:
raise ValueError("Invalid term for assemble_operator().")
# Customize the resulting reduced problem
@CustomizeReducedProblemFor(NonlinearEllipticProblem)
def CustomizeReducedNonlinearElliptic(ReducedNonlinearElliptic_Base):
class ReducedNonlinearElliptic(ReducedNonlinearElliptic_Base):
def __init__(self, truth_problem, **kwargs):
ReducedNonlinearElliptic_Base.__init__(self, truth_problem, **kwargs)
self._nonlinear_solver_parameters.update({
"report": True,
"line_search": "wolfe"
})
return ReducedNonlinearElliptic
Explanation: 3. Affine Decomposition
For this problem the affine decomposition is straightforward:
$$a(u,v;\boldsymbol{\mu})=\underbrace{1}_{\Theta^{a}_0(\boldsymbol{\mu})}\underbrace{\int_{\Omega}\nabla u \cdot \nabla v \ d\boldsymbol{x}}_{a_0(u,v)},$$
$$c(u,v;\boldsymbol{\mu})=\underbrace{\mu_0}_{\Theta^{c}_0(\boldsymbol{\mu})}\underbrace{\int_{\Omega}\frac{1}{\mu_1}\big(\exp{\mu_1u} - 1\big)v \ d\boldsymbol{x}}_{c_0(u,v)},$$
$$f(v; \boldsymbol{\mu}) = \underbrace{100}_{\Theta^{f}_0(\boldsymbol{\mu})} \underbrace{\int_{\Omega}\sin(2\pi x_0)\cos(2\pi x_1)v \ d\boldsymbol{x}}_{f_0(v)}.$$
We will implement the numerical discretization of the problem in the class
class NonlinearElliptic(NonlinearEllipticProblem):
by specifying the coefficients $\Theta^{a}_*(\boldsymbol{\mu})$, $\Theta^{c}_*(\boldsymbol{\mu})$ and $\Theta^{f}_*(\boldsymbol{\mu})$ in the method
def compute_theta(self, term):
and the bilinear forms $a_*(u, v)$, $c_*(u, v)$ and linear forms $f_*(v)$ in
def assemble_operator(self, term):
End of explanation
mesh = Mesh("data/square.xml")
subdomains = MeshFunction("size_t", mesh, "data/square_physical_region.xml")
boundaries = MeshFunction("size_t", mesh, "data/square_facet_region.xml")
Explanation: 4. Main program
4.1. Read the mesh for this problem
The mesh was generated by the data/generate_mesh.ipynb notebook.
End of explanation
V = FunctionSpace(mesh, "Lagrange", 1)
Explanation: 4.2. Create Finite Element space (Lagrange P1)
End of explanation
problem = NonlinearElliptic(V, subdomains=subdomains, boundaries=boundaries)
mu_range = [(0.01, 10.0), (0.01, 10.0)]
problem.set_mu_range(mu_range)
Explanation: 4.3. Allocate an object of the NonlinearElliptic class
End of explanation
reduction_method = PODGalerkin(problem)
reduction_method.set_Nmax(20, DEIM=21)
reduction_method.set_tolerance(1e-8, DEIM=1e-4)
Explanation: 4.4. Prepare reduction with a POD-Galerkin method
End of explanation
reduction_method.initialize_training_set(50, DEIM=60)
reduced_problem = reduction_method.offline()
Explanation: 4.5. Perform the offline phase
End of explanation
online_mu = (0.3, 9.0)
reduced_problem.set_mu(online_mu)
reduced_solution = reduced_problem.solve()
plot(reduced_solution, reduced_problem=reduced_problem)
Explanation: 4.6. Perform an online solve
End of explanation
reduction_method.initialize_testing_set(50, DEIM=60)
reduction_method.error_analysis()
Explanation: 4.7. Perform an error analysis
End of explanation
reduction_method.speedup_analysis()
Explanation: 4.8. Perform a speedup analysis
End of explanation |
1,133 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 id="tocheading">Table of Contents</h1>
<div id="toc"></div>
Step1: DataFrame basics
Difficulty
Step2: Task
Step3: Task
Step4: Task
Step5: Task
Step6: Task
Step7: Task
Step8: Task
Step9: Task
Step10: Task
Step11: Task
Step12: Task
Step13: Task
Step14: Task
Step15: Task
Step16: Task
Step17: Task
Step18: Task
Step19: Task
Step20: Task
Step21: DataFrames
Step22: Task
Step23: Cleaning Data
Making a DataFrame easier to work with
Difficulty
Step24: Task
Step25: Apply
Task
Step26: Task
Step27: Task
Step28: Task
Step29: Task
Step30: The DataFrame should look much better now.
Rename, delete, rank and pivot
Step31: Task
Step32: Task
Step33: Task
Step34: Task
Step35: Task | Python Code:
%%javascript
$.getScript('misc/kmahelona_ipython_notebook_toc.js')
Explanation: <h1 id="tocheading">Table of Contents</h1>
<div id="toc"></div>
End of explanation
data = {'animal': ['cat', 'cat', 'snake', 'dog', 'dog', 'cat', 'snake', 'cat', 'dog', 'dog'],
'age': [2.5, 3, 0.5, np.nan, 5, 2, 4.5, np.nan, 7, 3],
'visits': [1, 3, 2, 3, 2, 3, 1, 1, 2, 1],
'priority': ['yes', 'yes', 'no', 'yes', 'no', 'no', 'no', 'yes', 'no', 'no']}
labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']
Explanation: DataFrame basics
Difficulty: easy
A few of the fundamental routines for selecting, sorting, adding and aggregating data in DataFrames
Consider the following Python dictionary data and Python list labels:
End of explanation
# Write your answer here
Explanation: Task: Create a DataFrame df from this dictionary data which has the index labels.
End of explanation
# Write your answer here
Explanation: Task: Display a summary of the basic information about this DataFrame and its data.
End of explanation
# Write your answer here
Explanation: Task: Return the first 3 rows of the DataFrame df.
End of explanation
# Write your answer here
Explanation: Task: Select just the 'animal' and 'age' columns from the DataFrame df.
End of explanation
# Write your answer here
Explanation: Task: Select the data in rows ["d", "e", "i"] and in columns ['animal', 'age'].
End of explanation
# Write your answer here
Explanation: Task: Select the data in rows [3, 4, 8] and in columns ['animal', 'age'].
End of explanation
# Write your answer here
Explanation: Task: Select only the rows where the number of visits is greater than 2.
End of explanation
# Write your answer here
Explanation: Task: Select the rows where the age is missing, i.e. is NaN.
End of explanation
# Write your answer here
Explanation: Task: Select the rows where the animal is a cat and the age is less than 3.
End of explanation
# Write your answer here
Explanation: Task: Change the age in row 'f' to 1.5.
End of explanation
# Write your answer here
Explanation: Task: Calculate the sum of all visits (the total number of visits).
End of explanation
# Write your answer here
Explanation: Task: Calculate the mean age for each different animal in df.
End of explanation
# Write your answer here
Explanation: Task: Append a new row 'k' to df with your choice of values for each column. Then delete that row to return the original DataFrame.
End of explanation
# Write your answer here
Explanation: Task: Count the number of each type of animal in df.
End of explanation
# Write your answer here
Explanation: Task: Sort df first by the values in the 'age' column in descending order, then by the values in the 'visits' column in ascending order.
End of explanation
# Write your answer here
Explanation: Task: The 'priority' column contains the values 'yes' and 'no'. Replace this column with a column of boolean values: 'yes' should be True and 'no' should be False.
End of explanation
# Write your answer here
Explanation: Task: In the 'animal' column, change the 'snake' entries to 'python'.
End of explanation
df = pd.DataFrame({'A': [1, 2, 2, 3, 4, 5, 5, 5, 6, 7, 7]})
# Write your answer here
Explanation: Task: For each animal type and each number of visits, find the mean age. In other words, each row is an animal, each column is a number of visits and the values are the mean ages (hint: use a pivot table).
DataFrames: beyond the basics
Slightly trickier: you may need to combine two or more methods to get the right answer
Difficulty: medium
The previous section was a tour through some basic but essential DataFrame operations. Below are some ways that you might need to cut your data, but for which there is no single "out of the box" method.
Task: How do you count how many unique rows a DataFrame has (i.e. ignore all rows that are duplicates)?
End of explanation
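If you get stuck, one possible solution sketch (several equivalent formulations exist):
# Sketch: count rows whose full set of values occurs exactly once
n_unique_rows = len(df) - df.duplicated(keep=False).sum()
# equivalently: len(df.drop_duplicates(keep=False))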
df = pd.DataFrame({'grps': list('aaabbcaabcccbbc'),
'vals': [12,345,3,1,45,14,4,52,54,23,235,21,57,3,87]})
# Write your answer here
Explanation: Task: A DataFrame has a column of groups 'grps' and a column of numbers 'vals'.
For each group, find the sum of the three greatest values.
End of explanation
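A possible sketch for this task:
# Sketch: for each group, keep the three largest values and sum them
df.groupby('grps')['vals'].apply(lambda g: g.nlargest(3).sum())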
df = pd.DataFrame.from_dict({'A': {1: 0, 2: 6, 3: 12, 4: 18, 5: 24},
'B': {1: 1, 2: 7, 3: 13, 4: 19, 5: 25},
'C': {1: 2, 2: 8, 3: 14, 4: 20, 5: 26},
'D': {1: 3, 2: 9, 3: 15, 4: 21, 5: 27},
'E': {1: 4, 2: 10, 3: 16, 4: 22, 5: 28},
'F': {1: 5, 2: 11, 3: 17, 4: 23, 5: 29}})
df.head()
# Write your answer here
Explanation: DataFrames: harder problems
These might require a bit of thinking outside the box...
...but all are solvable using just the usual pandas/NumPy methods (and so avoid using explicit for loops).
Difficulty: hard
Task: Consider a DataFrame consisting of purely numerical data. Create a list of the row-column indices of the 3 largest values.
End of explanation
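One hedged way to approach it:
# Sketch: unstack to a Series keyed by (column, row) pairs, then take the 3 largest
# (the resulting index tuples come back as (column, row), not (row, column))
df.unstack().sort_values()[-3:].index.tolist()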
import pandas as pd
df = pd.DataFrame([[1,1],[1,-1],[2,1],[2,2]], columns=["groups", "vals"])
df
# Write your answer here
Explanation: Task: Given a DataFrame with a column of group IDs, 'groups', and a column of corresponding integer values, 'vals', replace any negative values in 'vals' with the group mean.
End of explanation
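A hedged sketch, reading "group mean" as the mean of the group's non-negative values:
# Sketch: map each negative entry to the mean of its group's non-negative values
group_means = df.loc[df['vals'] >= 0].groupby('groups')['vals'].mean()
df.loc[df['vals'] < 0, 'vals'] = df.loc[df['vals'] < 0, 'groups'].map(group_means)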
df = pd.DataFrame({'From_To': ['LoNDon_paris', 'MAdrid_miLAN', 'londON_StockhOlm',
'Budapest_PaRis', 'Brussels_londOn'],
'FlightNumber': [10045, np.nan, 10065, np.nan, 10085],
'RecentDelays': [[23, 47], [], [24, 43, 87], [13], [67, 32]],
'Airline': ['KLM(!)', '<Air France> (12)', '(British Airways. )',
'12. Air France', '"Swiss Air"']})
Explanation: Cleaning Data
Making a DataFrame easier to work with
Difficulty: easy/medium
It happens all the time: someone gives you data containing malformed strings, Python, lists and missing data. How do you tidy it up so you can get on with the analysis?
End of explanation
# Write your answer here
Explanation: Task: Some values in the FlightNumber column are missing. These numbers are meant to increase by 10 with each row. Therefore the numbers 10055 and 10075 need to replace the missing values. Fill in these missing numbers and make the column an integer column (instead of a float column).
End of explanation
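A minimal sketch for this step:
# Sketch: the gaps are evenly spaced, so linear interpolation recovers the missing numbers
df['FlightNumber'] = df['FlightNumber'].interpolate().astype(int)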
# Write your answer here
Explanation: Apply
Task: The From_To column would be better as two separate columns! Split each string on the underscore delimiter _ to make two new columns with the correct values. Assign the correct column names to a new temporary DataFrame called temp.
End of explanation
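A possible sketch:
# Sketch: split on the underscore and name the resulting columns
temp = df['From_To'].str.split('_', expand=True)
temp.columns = ['From', 'To']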
# Write your answer here
Explanation: Task: Notice how the capitalisation of the city names is all mixed up in this temporary DataFrame. Standardise the strings so that only the first letter is uppercase (e.g. "londON" should become "London".)
End of explanation
# Write your answer here
Explanation: Task: Delete the From_To column from df and attach the temporary DataFrame from the previous questions.
End of explanation
# Write your answer here
Explanation: Task: In the Airline column, you can see some extra punctuation and symbols have appeared around the airline names. Pull out just the airline name. E.g. '(British Airways. )' should become 'British Airways'.
End of explanation
# Write your answer here
Explanation: Task: In the RecentDelays column, the values have been entered into the DataFrame as a list. We would like each first value in its own column, each second value in its own column, and so on. If there isn't an Nth value, the value should be NaN.
Expand the Series of lists into a DataFrame named delays, rename the columns delay_1, delay_2, etc. and replace the unwanted RecentDelays column in df with delays.
End of explanation
fn = "data/Saliva.txt"
df = pd.read_csv(fn, sep='\t')
df
Explanation: The DataFrame should look much better now.
Rename, delete, rank and pivot
End of explanation
# Write your answer here
Explanation: Task: Rename species to NCBI_TaxID
End of explanation
# Write your answer here
Explanation: Task: Delete the columns named frequency and rank
End of explanation
# Write your answer here
Explanation: Task: Select the top 2 most abundant taxa per sample category. Write a function that expects a DataFrame, a column-name, and top-n (an Integer indicating the top n most abundant things within the given column-name).
End of explanation
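A hedged sketch; the column names 'SampleCategory' and 'Abundance' are taken from the task text and are assumptions about Saliva.txt:
# Sketch only -- 'Abundance' and the grouping column name are assumed, not verified
def top_n(frame, column, n):
    return (frame.sort_values('Abundance', ascending=False)
                 .groupby(column)
                 .head(n))

top_n(df, 'SampleCategory', 2)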
# Write your answer here
Explanation: Task: Create a column named Rank that ranks the taxa by their abundance within each SampleCategory in descending order (most abundant taxa get the lowest rank).
End of explanation
# Write your answer here
Explanation: Task: Reshape the DataFrame so that you can compare the values of Rank and Abundance from the three sample categories by placing them next to each other in one row per taxon. In other words, reshape in a way that you get one row per taxon, placing Rank and Abundance values from the three sample categories next to each other (from "long" to "wide" format).
End of explanation |
1,134 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using side features
Step1: Please re-run the above cell if you are getting any incompatible warnings and errors.
Step2: There are a couple of key features here
Step3: The layer itself does not have a vocabulary yet, but we can build it using our data.
Step4: Once we have this we can use the layer to translate raw tokens to embedding ids
Step5: Note that the layer's vocabulary includes one (or more!) unknown (or "out of vocabulary", OOV) tokens. This is really handy
Step6: We can do the lookup as before without the need to build vocabularies
Step7: Defining the embeddings
Now that we have integer ids, we can use the Embedding layer to turn those into embeddings.
An embedding layer has two dimensions
Step8: We can put the two together into a single layer which takes raw text in and yields embeddings.
Step9: Just like that, we can directly get the embeddings for our movie titles
Step10: We can do the same with user embeddings
Step11: Normalizing continuous features
Continuous features also need normalization. For example, the timestamp feature is far too large to be used directly in a deep model
Step12: We need to process it before we can use it. While there are many ways in which we can do this, discretization and standardization are two common ones.
Standardization
Standardization rescales features to normalize their range by subtracting the feature's mean and dividing by its standard deviation. It is a common preprocessing transformation.
This can be easily accomplished using the tf.keras.layers.experimental.preprocessing.Normalization layer
Step13: Discretization
Another common transformation is to turn a continuous feature into a number of categorical features. This makes good sense if we have reasons to suspect that a feature's effect is non-continuous.
To do this, we first need to establish the boundaries of the buckets we will use for discretization. The easiest way is to identify the minimum and maximum value of the feature, and divide the resulting interval equally
Step14: Given the bucket boundaries we can transform timestamps into embeddings
Step15: Processing text features
We may also want to add text features to our model. Usually, things like product descriptions are free form text, and we can hope that our model can learn to use the information they contain to make better recommendations, especially in a cold-start or long tail scenario.
While the MovieLens dataset does not give us rich textual features, we can still use movie titles. This may help us capture the fact that movies with very similar titles are likely to belong to the same series.
The first transformation we need to apply to text is tokenization (splitting into constituent words or word-pieces), followed by vocabulary learning, followed by an embedding.
The Keras tf.keras.layers.experimental.preprocessing.TextVectorization layer can do the first two steps for us
Step16: Let's try it out
Step17: Each title is translated into a sequence of tokens, one for each piece we've tokenized.
We can check the learned vocabulary to verify that the layer is using the correct tokenization
Step18: This looks correct
Step19: Let's try it out
Step20: Movie model
We can do the same for the movie model
Step21: Let's try it out | Python Code:
!pip install -q --upgrade tensorflow-datasets
Explanation: Using side features: feature preprocessing
Learning Objectives
Turning categorical features into embeddings.
Normalizing continuous features.
Processing text features.
Build a User and Movie model.
Introduction
One of the great advantages of using a deep learning framework to build recommender models is the freedom to build rich, flexible feature representations.
The first step in doing so is preparing the features, as raw features will usually not be immediately usable in a model.
For example:
User and item ids may be strings (titles, usernames) or large, noncontiguous integers (database IDs).
Item descriptions could be raw text.
Interaction timestamps could be raw Unix timestamps.
These need to be appropriately transformed in order to be useful in building models:
User and item ids have to be translated into embedding vectors: high-dimensional numerical representations that are adjusted during training to help the model predict its objective better.
Raw text needs to be tokenized (split into smaller parts such as individual words) and translated into embeddings.
Numerical features need to be normalized so that their values lie in a small interval around 0.
Fortunately, by using TensorFlow we can make such preprocessing part of our model rather than a separate preprocessing step. This is not only convenient, but also ensures that our pre-processing is exactly the same during training and during serving. This makes it safe and easy to deploy models that include even very sophisticated pre-processing.
In this notebook, we are going to focus on recommenders and the preprocessing we need to do on the MovieLens dataset. If you're interested in a larger tutorial without a recommender system focus, have a look at the full Keras preprocessing guide.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook
The MovieLens dataset
Let's first have a look at what features we can use from the MovieLens dataset:
End of explanation
import pprint
import tensorflow_datasets as tfds
ratings = tfds.load("movielens/100k-ratings", split="train")
for x in ratings.take(1).as_numpy_iterator():
pprint.pprint(x)
Explanation: Please re-run the above cell if you are getting any incompatible warnings and errors.
End of explanation
import numpy as np
import tensorflow as tf
movie_title_lookup = tf.keras.layers.experimental.preprocessing.StringLookup()
Explanation: There are a couple of key features here:
Movie title is useful as a movie identifier.
User id is useful as a user identifier.
Timestamps will allow us to model the effect of time.
The first two are categorical features; timestamps are a continuous feature.
Turning categorical features into embeddings
A categorical feature is a feature that does not express a continuous quantity, but rather takes on one of a set of fixed values.
Most deep learning models express these features by turning them into high-dimensional vectors. During model training, the value of that vector is adjusted to help the model predict its objective better.
For example, suppose that our goal is to predict which user is going to watch which movie. To do that, we represent each user and each movie by an embedding vector. Initially, these embeddings will take on random values - but during training, we will adjust them so that embeddings of users and the movies they watch end up closer together.
Taking raw categorical features and turning them into embeddings is normally a two-step process:
Firstly, we need to translate the raw values into a range of contiguous integers, normally by building a mapping (called a "vocabulary") that maps raw values ("Star Wars") to integers (say, 15).
Secondly, we need to take these integers and turn them into embeddings.
Defining the vocabulary
The first step is to define a vocabulary. We can do this easily using Keras preprocessing layers.
End of explanation
movie_title_lookup.adapt(ratings.map(lambda x: x["movie_title"]))
print(f"Vocabulary: {movie_title_lookup.get_vocabulary()[:3]}")
Explanation: The layer itself does not have a vocabulary yet, but we can build it using our data.
End of explanation
movie_title_lookup(["Star Wars (1977)", "One Flew Over the Cuckoo's Nest (1975)"])
Explanation: Once we have this we can use the layer to translate raw tokens to embedding ids:
End of explanation
# We set up a large number of bins to reduce the chance of hash collisions.
num_hashing_bins = 200_000
movie_title_hashing = tf.keras.layers.experimental.preprocessing.Hashing(
num_bins=num_hashing_bins
)
Explanation: Note that the layer's vocabulary includes one (or more!) unknown (or "out of vocabulary", OOV) tokens. This is really handy: it means that the layer can handle categorical values that are not in the vocabulary. In practical terms, this means that the model can continue to learn about and make recommendations even using features that have not been seen during vocabulary construction.
Using feature hashing
In fact, the StringLookup layer allows us to configure multiple OOV indices. If we do that, any raw value that is not in the vocabulary will be deterministically hashed to one of the OOV indices. The more such indices we have, the less likely it is that two different raw feature values will hash to the same OOV index. Consequently, if we have enough such indices the model should be able to train about as well as a model with an explicit vocabulary, without the disadvantage of having to maintain the token list.
We can take this to its logical extreme and rely entirely on feature hashing, with no vocabulary at all. This is implemented in the tf.keras.layers.experimental.preprocessing.Hashing layer.
End of explanation
movie_title_hashing(["Star Wars (1977)", "One Flew Over the Cuckoo's Nest (1975)"])
Explanation: We can do the lookup as before without the need to build vocabularies:
End of explanation
# Turns positive integers (indexes) into dense vectors of fixed size.
movie_title_embedding = # TODO: Your code goes here
# Let's use the explicit vocabulary lookup.
input_dim=movie_title_lookup.vocab_size(),
output_dim=32
)
Explanation: Defining the embeddings
Now that we have integer ids, we can use the Embedding layer to turn those into embeddings.
An embedding layer has two dimensions: the first dimension tells us how many distinct categories we can embed; the second tells us how large the vector representing each of them can be.
When creating the embedding layer for movie titles, we are going to set the first value to the size of our title vocabulary (or the number of hashing bins). The second is up to us: the larger it is, the higher the capacity of the model, but the slower it is to fit and serve.
End of explanation
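One way the TODO above could be completed (a sketch; the official solution notebook may differ in details):
# Sketch: an Embedding layer sized to the title vocabulary
movie_title_embedding = tf.keras.layers.Embedding(input_dim=movie_title_lookup.vocab_size(), output_dim=32)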
movie_title_model = tf.keras.Sequential([movie_title_lookup, movie_title_embedding])
Explanation: We can put the two together into a single layer which takes raw text in and yields embeddings.
End of explanation
movie_title_model(["Star Wars (1977)"])
Explanation: Just like that, we can directly get the embeddings for our movie titles:
End of explanation
user_id_lookup = tf.keras.layers.experimental.preprocessing.StringLookup()
user_id_lookup.adapt(ratings.map(lambda x: x["user_id"]))
user_id_embedding = tf.keras.layers.Embedding(user_id_lookup.vocab_size(), 32)
user_id_model = tf.keras.Sequential([user_id_lookup, user_id_embedding])
Explanation: We can do the same with user embeddings:
End of explanation
for x in ratings.take(3).as_numpy_iterator():
print(f"Timestamp: {x['timestamp']}.")
Explanation: Normalizing continuous features
Continuous features also need normalization. For example, the timestamp feature is far too large to be used directly in a deep model:
End of explanation
# Feature-wise normalization of the data.
timestamp_normalization = # TODO: Your code goes here
timestamp_normalization.adapt(ratings.map(lambda x: x["timestamp"]).batch(1024))
for x in ratings.take(3).as_numpy_iterator():
print(f"Normalized timestamp: {timestamp_normalization(x['timestamp'])}.")
Explanation: We need to process it before we can use it. While there are many ways in which we can do this, discretization and standardization are two common ones.
Standardization
Standardization rescales features to normalize their range by subtracting the feature's mean and dividing by its standard deviation. It is a common preprocessing transformation.
This can be easily accomplished using the tf.keras.layers.experimental.preprocessing.Normalization layer:
End of explanation
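One possible completion of the TODO above (a sketch; the axis argument may need adjusting depending on the TensorFlow version):
# Sketch: a Normalization layer for the scalar timestamp feature
timestamp_normalization = tf.keras.layers.experimental.preprocessing.Normalization(axis=None)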
max_timestamp = ratings.map(lambda x: x["timestamp"]).reduce(
tf.cast(0, tf.int64), tf.maximum).numpy().max()
min_timestamp = ratings.map(lambda x: x["timestamp"]).reduce(
np.int64(1e9), tf.minimum).numpy().min()
timestamp_buckets = np.linspace(
min_timestamp, max_timestamp, num=1000)
print(f"Buckets: {timestamp_buckets[:3]}")
Explanation: Discretization
Another common transformation is to turn a continuous feature into a number of categorical features. This makes good sense if we have reasons to suspect that a feature's effect is non-continuous.
To do this, we first need to establish the boundaries of the buckets we will use for discretization. The easiest way is to identify the minimum and maximum value of the feature, and divide the resulting interval equally:
End of explanation
timestamp_embedding_model = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.Discretization(timestamp_buckets.tolist()),
tf.keras.layers.Embedding(len(timestamp_buckets) + 1, 32)
])
for timestamp in ratings.take(1).map(lambda x: x["timestamp"]).batch(1).as_numpy_iterator():
print(f"Timestamp embedding: {timestamp_embedding_model(timestamp)}.")
Explanation: Given the bucket boundaries we can transform timestamps into embeddings:
End of explanation
# Text vectorization layer.
title_text = # TODO: Your code goes here
title_text.adapt(ratings.map(lambda x: x["movie_title"]))
Explanation: Processing text features
We may also want to add text features to our model. Usually, things like product descriptions are free form text, and we can hope that our model can learn to use the information they contain to make better recommendations, especially in a cold-start or long tail scenario.
While the MovieLens dataset does not give us rich textual features, we can still use movie titles. This may help us capture the fact that movies with very similar titles are likely to belong to the same series.
The first transformation we need to apply to text is tokenization (splitting into constituent words or word-pieces), followed by vocabulary learning, followed by an embedding.
The Keras tf.keras.layers.experimental.preprocessing.TextVectorization layer can do the first two steps for us:
End of explanation
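One possible completion of the TODO above (sketch):
# Sketch: a TextVectorization layer with default tokenization; max_tokens could also be set
title_text = tf.keras.layers.experimental.preprocessing.TextVectorization()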
for row in ratings.batch(1).map(lambda x: x["movie_title"]).take(1):
print(title_text(row))
Explanation: Let's try it out:
End of explanation
title_text.get_vocabulary()[40:45]
Explanation: Each title is translated into a sequence of tokens, one for each piece we've tokenized.
We can check the learned vocabulary to verify that the layer is using the correct tokenization:
End of explanation
class UserModel(tf.keras.Model):
def __init__(self):
super().__init__()
self.user_embedding = tf.keras.Sequential([
user_id_lookup,
tf.keras.layers.Embedding(user_id_lookup.vocab_size(), 32),
])
self.timestamp_embedding = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.Discretization(timestamp_buckets.tolist()),
tf.keras.layers.Embedding(len(timestamp_buckets) + 2, 32)
])
self.normalized_timestamp = tf.keras.layers.experimental.preprocessing.Normalization()
def call(self, inputs):
# Take the input dictionary, pass it through each input layer,
# and concatenate the result.
return tf.concat([
self.user_embedding(inputs["user_id"]),
self.timestamp_embedding(inputs["timestamp"]),
self.normalized_timestamp(inputs["timestamp"])
], axis=1)
Explanation: This looks correct: the layer is tokenizing titles into individual words.
To finish the processing, we now need to embed the text. Because each title contains multiple words, we will get multiple embeddings for each title. For use in a downstream model these are usually compressed into a single embedding. Models like RNNs or Transformers are useful here, but averaging all the words' embeddings together is a good starting point.
Putting it all together
With these components in place, we can build a model that does all the preprocessing together.
User model
The full user model may look like the following:
End of explanation
user_model = # TODO: Your code goes here
user_model.normalized_timestamp.adapt(
ratings.map(lambda x: x["timestamp"]).batch(128))
for row in ratings.batch(1).take(1):
print(f"Computed representations: {user_model(row)[0, :3]}")
Explanation: Let's try it out:
End of explanation
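The TODO above presumably just instantiates the class defined earlier, for example:
# Sketch
user_model = UserModel()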
class MovieModel(tf.keras.Model):
def __init__(self):
super().__init__()
max_tokens = 10_000
self.title_embedding = tf.keras.Sequential([
movie_title_lookup,
tf.keras.layers.Embedding(movie_title_lookup.vocab_size(), 32)
])
self.title_text_embedding = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.TextVectorization(max_tokens=max_tokens),
tf.keras.layers.Embedding(max_tokens, 32, mask_zero=True),
# We average the embedding of individual words to get one embedding vector
# per title.
tf.keras.layers.GlobalAveragePooling1D(),
])
def call(self, inputs):
return tf.concat([
self.title_embedding(inputs["movie_title"]),
self.title_text_embedding(inputs["movie_title"]),
], axis=1)
Explanation: Movie model
We can do the same for the movie model:
End of explanation
movie_model = # TODO: Your code goes here
movie_model.title_text_embedding.layers[0].adapt(
ratings.map(lambda x: x["movie_title"]))
for row in ratings.batch(1).take(1):
print(f"Computed representations: {movie_model(row)[0, :3]}")
Explanation: Let's try it out:
End of explanation |
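Likewise, the movie-side TODO can presumably be completed by instantiating the class above, e.g.:
# Sketch
movie_model = MovieModel()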
1,135 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gradient Boosted Models
Gradient Boosting does not refer to one particular model, but a versatile framework to optimize many loss functions. It follows the strength in numbers principle by combining the predictions of multiple base learners to obtain a powerful overall model. The base learners are often very simple models that are only slightly better than random guessing, which is why they are also referred to as weak learners. The predictions are combined in an additive manner, where the addition of each base model improves (or "boosts") the overall model. Therefore, the overall model $f$ is an additive model of the form
Step1: To demonstrate its use we are going to use the breast cancer data, which contains the expression levels of 76 genes, age, estrogen receptor status (er), tumor size and grade for 198 individuals. The objective is to predict the time to distant metastasis.
First, we load the data and perform one-hot encoding of categorical variables er and grade.
Step2: Next, we are using gradient boosting on Cox's partial likelihood with regression trees base learners, which we restrict to using only a single split (so-called stumps).
Step3: This model achieves a concordance index of 0.756 on the test data. Let's see how the test performance changes with the ensemble size (n_estimators).
Step4: We can see that the performance quickly improves, but also that the performance starts to decrease if the ensemble becomes too big.
Let's repeat the analysis using component-wise least squares base learners.
Step5: The performance increase is much slower here and its maximum performance seems to be below that of the ensemble of tree-based learners. This is not surprising, because with component-wise least squares base learners the overall ensemble is a linear model, whereas with tree-based learners it will be a non-linear model.
The coefficients of the model can be retrieved as follows
Step6: Despite using hundreds of iterations, the resulting model is very parsimonious and easy to interpret.
Accelerated Failure Time Model
The Accelerated Failure Time (AFT) model is an alternative to Cox's proportional hazards model. The latter assumes that features only influence the hazard function via a constant multiplicative factor. In contrast, features in an AFT model can accelerate or decelerate the time to an event by a constant factor. The figure below depicts the predicted hazard functions of a proportional hazards model in blue and that of an AFT model in orange.
We can see that the hazard remains constant for the proportional hazards model and varies for the AFT model.
The objective function in an AFT model can be expressed as a weighted least squares problem with respect to the logarithm of the survival time
Step7: Regularization
The most important parameter in gradient boosting is the number of base learner to use (n_estimators argument). A higher number will lead to a more complex model. However, this can easily lead to overfitting on the training data. The easiest way would be to just use less base estimators, but there are three alternatives to combat overfitting
Step8: The plot reveals that using dropout or a learning rate are most effective in avoiding overfitting. Moreover, the learning rate and ensemble size are strongly connected, choosing smaller a learning rate suggests increasing n_estimators. Therefore, it is recommended to use a relatively small learning rate and select the number of estimators via early stopping. Note that we can also apply multiple types of regularization, such as regularization by learning rate and subsampling. Since not all training data is used, this allows using the left-out data to evaluate whether we should continue adding more base learners or stop training.
Step9: The monitor looks at the average improvement of the last 25 iterations, and if it was negative for the last 50 iterations, it will abort training. In this case, this occurred after 119 iterations. We can plot the improvement per base learner and the moving average. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
from sklearn.model_selection import train_test_split
from sksurv.datasets import load_breast_cancer
from sksurv.ensemble import ComponentwiseGradientBoostingSurvivalAnalysis
from sksurv.ensemble import GradientBoostingSurvivalAnalysis
from sksurv.preprocessing import OneHotEncoder
Explanation: Gradient Boosted Models
Gradient Boosting does not refer to one particular model, but a versatile framework to optimize many loss functions. It follows the strength in numbers principle by combining the predictions of multiple base learners to obtain a powerful overall model. The base learners are often very simple models that are only slightly better than random guessing, which is why they are also referred to as weak learners. The predictions are combined in an additive manner, where the addition of each base model improves (or "boosts") the overall model. Therefore, the overall model $f$ is an additive model of the form:
$$
\begin{equation}
f(\mathbf{x}) = \sum_{m=1}^M \beta_m g(\mathbf{x}; {\theta}_m),
\end{equation}
$$
where $M > 0$ denotes the number of base learners, and $\beta_m \in \mathbb{R}$ is a weighting term. The function $g$ refers to a base learner and is parameterized by the vector ${\theta}$. Individual base learners differ in the configuration of their parameters ${\theta}$, which is indicated by a subscript $m$.
A gradient boosted model is similar to a Random Survival Forest, in the sense that it relies on multiple base learners to produce an overall prediction, but differs in how those are combined. While a Random Survival Forest fits a set of Survival Trees independently and then averages their predictions, a gradient boosted model is constructed sequentially in a greedy stagewise fashion.
Base Learners
Depending on the loss function to be minimized and base learner used, different models arise.
sksurv.ensemble.GradientBoostingSurvivalAnalysis implements gradient boosting with regression tree base learner, and
sksurv.ensemble.ComponentwiseGradientBoostingSurvivalAnalysis uses component-wise least squares as base learner. The former is very versatile and can account for complicated non-linear relationships between features and time to survival. When using component-wise least squares as base learner, the final model will be a linear model, but only a small subset of features will be selected, similar to the LASSO penalized Cox model.
Losses
Cox's Partial Likelihood
The loss function can be specified via the loss argument; the default loss function is the partial likelihood loss of Cox's proportional hazards model (coxph). Therefore, the objective is to maximize the log partial likelihood function, but replacing the traditional linear model $\mathbf{x}^\top \beta$ with the additive model $f(\mathbf{x})$:
$$
\begin{equation}
\arg \min_{f} \quad \sum_{i=1}^n \delta_i \left[ f(\mathbf{x}_i)
- \log \left( \sum_{j \in \mathcal{R}_i} \exp(f(\mathbf{x}_j)) \right) \right] .
\end{equation}
$$
End of explanation
X, y = load_breast_cancer()
Xt = OneHotEncoder().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(Xt, y, test_size=0.25, random_state=0)
Explanation: To demonstrate its use we are going to use the breast cancer data, which contains the expression levels of 76 genes, age, estrogen receptor status (er), tumor size and grade for 198 individuals. The objective is to predict the time to distant metastasis.
First, we load the data and perform one-hot encoding of categorical variables er and grade.
End of explanation
est_cph_tree = GradientBoostingSurvivalAnalysis(
n_estimators=100, learning_rate=1.0, max_depth=1, random_state=0
)
est_cph_tree.fit(X_train, y_train)
cindex = est_cph_tree.score(X_test, y_test)
print(round(cindex, 3))
Explanation: Next, we are using gradient boosting on Cox's partial likelihood with regression trees base learners, which we restrict to using only a single split (so-called stumps).
End of explanation
scores_cph_tree = {}
est_cph_tree = GradientBoostingSurvivalAnalysis(
learning_rate=1.0, max_depth=1, random_state=0
)
for i in range(1, 31):
n_estimators = i * 5
est_cph_tree.set_params(n_estimators=n_estimators)
est_cph_tree.fit(X_train, y_train)
scores_cph_tree[n_estimators] = est_cph_tree.score(X_test, y_test)
x, y = zip(*scores_cph_tree.items())
plt.plot(x, y)
plt.xlabel("n_estimator")
plt.ylabel("concordance index")
plt.grid(True)
Explanation: This model achieves a concordance index of 0.756 on the test data. Let's see how the test performance changes with the ensemble size (n_estimators).
End of explanation
scores_cph_ls = {}
est_cph_ls = ComponentwiseGradientBoostingSurvivalAnalysis(
learning_rate=1.0, random_state=0
)
for i in range(1, 31):
n_estimators = i * 10
est_cph_ls.set_params(n_estimators=n_estimators)
est_cph_ls.fit(X_train, y_train)
scores_cph_ls[n_estimators] = est_cph_ls.score(X_test, y_test)
x, y = zip(*scores_cph_ls.items())
plt.plot(x, y)
plt.xlabel("n_estimator")
plt.ylabel("concordance index")
plt.grid(True)
Explanation: We can see that the performance quickly improves, but also that the performance starts to decrease if the ensemble becomes too big.
Let's repeat the analysis using component-wise least squares base learners.
End of explanation
coef = pd.Series(est_cph_ls.coef_, ["Intercept"] + Xt.columns.tolist())
print("Number of non-zero coefficients:", (coef != 0).sum())
coef_nz = coef[coef != 0]
coef_order = coef_nz.abs().sort_values(ascending=False).index
coef_nz.loc[coef_order]
Explanation: The performance increase is much slower here and its maximum performance seems to be below that of the ensemble of tree-based learners. This is not surprising, because with component-wise least squares base learners the overall ensemble is a linear model, whereas with tree-based learners it will be a non-linear model.
The coefficients of the model can be retrieved as follows:
End of explanation
est_aft_ls = ComponentwiseGradientBoostingSurvivalAnalysis(
loss="ipcwls", n_estimators=300, learning_rate=1.0, random_state=0
).fit(X_train, y_train)
cindex = est_aft_ls.score(X_test, y_test)
print(round(cindex, 3))
Explanation: Despite using hundreds of iterations, the resulting model is very parsimonious and easy to interpret.
Accelerated Failure Time Model
The Accelerated Failure Time (AFT) model is an alternative to Cox's proportional hazards model. The latter assumes that features only influence the hazard function via a constant multiplicative factor. In contrast, features in an AFT model can accelerate or decelerate the time to an event by a constant factor. The figure below depicts the predicted hazard functions of a proportional hazards model in blue and that of an AFT model in orange.
We can see that the hazard remains constant for the proportional hazards model and varies for the AFT model.
The objective function in an AFT model can be expressed as a weighted least squares problem with respect to the logarithm of the survival time:
$$
\begin{equation}
\arg \min_{f} \quad \frac{1}{n} \sum_{i=1}^n
\omega_i (\log y_i - f(\mathbf{x}_i))^2 .
\end{equation}
$$
The weight $\omega_i$ associated with the $i$-th sample is the inverse probability of being censored after time $y_i$:
$$
\begin{equation}
\omega_i = \frac{\delta_i}{\hat{G}(y_i)} ,
\end{equation}
$$
where $\hat{G}(\cdot)$ is an estimator of the censoring survivor function.
Such a model can be fit with sksurv.ensemble.GradientBoostingSurvivalAnalysis or
sksurv.ensemble.ComponentwiseGradientBoostingSurvivalAnalysis
by specifying the loss="ipcwls" argument.
End of explanation
n_estimators = [i * 5 for i in range(1, 21)]
estimators = {
"no regularization": GradientBoostingSurvivalAnalysis(
learning_rate=1.0, max_depth=1, random_state=0
),
"learning rate": GradientBoostingSurvivalAnalysis(
learning_rate=0.1, max_depth=1, random_state=0
),
"dropout": GradientBoostingSurvivalAnalysis(
learning_rate=1.0, dropout_rate=0.1, max_depth=1, random_state=0
),
"subsample": GradientBoostingSurvivalAnalysis(
learning_rate=1.0, subsample=0.5, max_depth=1, random_state=0
),
}
scores_reg = {k: [] for k in estimators.keys()}
for n in n_estimators:
for name, est in estimators.items():
est.set_params(n_estimators=n)
est.fit(X_train, y_train)
cindex = est.score(X_test, y_test)
scores_reg[name].append(cindex)
scores_reg = pd.DataFrame(scores_reg, index=n_estimators)
ax = scores_reg.plot(xlabel="n_estimators", ylabel="concordance index")
ax.grid(True)
Explanation: Regularization
The most important parameter in gradient boosting is the number of base learner to use (n_estimators argument). A higher number will lead to a more complex model. However, this can easily lead to overfitting on the training data. The easiest way would be to just use less base estimators, but there are three alternatives to combat overfitting:
Use a learning_rate less than 1 to restrict the influence of individual base learners, similar to the Ridge penalty.
Use a non-zero dropout_rate, which forces base learners to also account for some of the previously fitted base learners to be missing.
Use subsample less than 1 such that each iteration only a portion of the training data is used. This is also known as stochastic gradient boosting.
End of explanation
class EarlyStoppingMonitor:
def __init__(self, window_size, max_iter_without_improvement):
self.window_size = window_size
self.max_iter_without_improvement = max_iter_without_improvement
self._best_step = -1
def __call__(self, iteration, estimator, args):
# continue training for first self.window_size iterations
if iteration < self.window_size:
return False
# compute average improvement in last self.window_size iterations.
# oob_improvement_ is the different in negative log partial likelihood
# between the previous and current iteration.
start = iteration - self.window_size + 1
end = iteration + 1
improvement = np.mean(estimator.oob_improvement_[start:end])
if improvement > 1e-6:
self._best_step = iteration
return False # continue fitting
# stop fitting if there was no improvement
# in last max_iter_without_improvement iterations
diff = iteration - self._best_step
return diff >= self.max_iter_without_improvement
est_early_stopping = GradientBoostingSurvivalAnalysis(
n_estimators=1000, learning_rate=0.05, subsample=0.5,
max_depth=1, random_state=0
)
monitor = EarlyStoppingMonitor(25, 50)
est_early_stopping.fit(X_train, y_train, monitor=monitor)
print("Fitted base learners:", est_early_stopping.n_estimators_)
cindex = est_early_stopping.score(X_test, y_test)
print("Performance on test set", round(cindex, 3))
Explanation: The plot reveals that using dropout or a learning rate are most effective in avoiding overfitting. Moreover, the learning rate and ensemble size are strongly connected, choosing smaller a learning rate suggests increasing n_estimators. Therefore, it is recommended to use a relatively small learning rate and select the number of estimators via early stopping. Note that we can also apply multiple types of regularization, such as regularization by learning rate and subsampling. Since not all training data is used, this allows using the left-out data to evaluate whether we should continue adding more base learners or stop training.
End of explanation
improvement = pd.Series(
est_early_stopping.oob_improvement_,
index=np.arange(1, 1 + len(est_early_stopping.oob_improvement_))
)
ax = improvement.plot(xlabel="iteration", ylabel="oob improvement")
ax.axhline(0.0, linestyle="--", color="gray")
cutoff = len(improvement) - monitor.max_iter_without_improvement
ax.axvline(cutoff, linestyle="--", color="C3")
_ = improvement.rolling(monitor.window_size).mean().plot(ax=ax, linestyle=":")
Explanation: The monitor looks at the average improvement of the last 25 iterations, and if it was negative for the last 50 iterations, it will abort training. In this case, this occurred after 119 iterations. We can plot the improvement per base learner and the moving average.
End of explanation |
1,136 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intro
At the end of this lesson, you will be able to write TensorFlow and Keras code to use one of the best models in computer vision.
Lesson
Step1: Sample Code
Choose Images to Work With
Step2: Function to Read and Prep Images for Modeling
Step3: Create Model with Pre-Trained Weights File. Make Predictions
Step4: Visualize Predictions | Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo('sDG5tPtsbSA', width=800, height=450)
Explanation: Intro
At the end of this lesson, you will be able to write TensorFlow and Keras code to use one of the best models in computer vision.
Lesson
End of explanation
from os.path import join
image_dir = '../input/dog-breed-identification/train/'
img_paths = [join(image_dir, filename) for filename in
['0c8fe33bd89646b678f6b2891df8a1c6.jpg',
'0c3b282ecbed1ca9eb17de4cb1b6e326.jpg',
'04fb4d719e9fe2b6ffe32d9ae7be8a22.jpg',
'0e79be614f12deb4f7cae18614b7391b.jpg']]
Explanation: Sample Code
Choose Images to Work With
End of explanation
import numpy as np
from tensorflow.python.keras.applications.resnet50 import preprocess_input
from tensorflow.python.keras.preprocessing.image import load_img, img_to_array
image_size = 224
def read_and_prep_images(img_paths, img_height=image_size, img_width=image_size):
imgs = [load_img(img_path, target_size=(img_height, img_width)) for img_path in img_paths]
img_array = np.array([img_to_array(img) for img in imgs])
output = preprocess_input(img_array)
return(output)
Explanation: Function to Read and Prep Images for Modeling
End of explanation
from tensorflow.python.keras.applications import ResNet50
my_model = ResNet50(weights='../input/resnet50/resnet50_weights_tf_dim_ordering_tf_kernels.h5')
test_data = read_and_prep_images(img_paths)
preds = my_model.predict(test_data)
Explanation: Create Model with Pre-Trained Weights File. Make Predictions
End of explanation
from learntools.deep_learning.decode_predictions import decode_predictions
from IPython.display import Image, display
most_likely_labels = decode_predictions(preds, top=3, class_list_path='../input/resnet50/imagenet_class_index.json')
for i, img_path in enumerate(img_paths):
display(Image(img_path))
print(most_likely_labels[i])
Explanation: Visualize Predictions
End of explanation |
1,137 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Demonstration of MCE IRL code & environments
This is just tabular environments & vanilla MCE IRL.
Step1: IRL on a random MDP
Testing both linear reward models & MLP reward models.
Step2: Same thing, but on grid world
The true reward here is not linear in the reduced feature space (i.e. $(x,y)$ coordinates). Finding an appropriate linear reward is impossible (as I will demonstrate), but an MLP should Just Work(tm). | Python Code:
%matplotlib inline
%load_ext autoreload
%autoreload 2
import copy
import numpy as np
import seaborn as sns
import pandas as pd
import jax.experimental.optimizers as jaxopt
import matplotlib.pyplot as plt
import scipy
import imitation.tabular_irl as tirl
import imitation.examples.model_envs as menv
sns.set(context='notebook')
np.random.seed(42)
Explanation: Demonstration of MCE IRL code & environments
This is just tabular environments & vanilla MCE IRL.
End of explanation
mdp = menv.RandomMDP(
n_states=16,
n_actions=3,
branch_factor=2,
horizon=10,
random_obs=True,
obs_dim=5,
generator_seed=42)
V, Q, pi = tirl.mce_partition_fh(mdp)
Dt, D = tirl.mce_occupancy_measures(mdp, pi=pi)
demo_counts = D @ mdp.observation_matrix
obs_dim, = demo_counts.shape
rmodel = tirl.LinearRewardModel(obs_dim)
opt = jaxopt.sgd(0.1)
final_weights, D_fake = tirl.mce_irl(
mdp, opt, rmodel, D, linf_eps=1e-1)
rmodel = tirl.MLPRewardModel(obs_dim, [32, 32])
opt = jaxopt.sgd(0.1)
final_weights, D_fake = tirl.mce_irl(
mdp, opt, rmodel, D, linf_eps=1e-2)
Explanation: IRL on a random MDP
Testing both linear reward models & MLP reward models.
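A hedged note on why the code above only needs occupancy measures and observation features (this is the standard MaxEnt/MCE IRL reasoning, not a quote from the imitation library's documentation): for a linear reward $r_\theta(s) = \theta^\top \phi(s)$, the fitting signal is, up to sign convention, the difference between expert and learner feature expectations,
$$ \sum_s D_{\mathrm{expert}}(s)\,\phi(s) \;-\; \sum_s D_{\theta}(s)\,\phi(s), $$
where $D$ denotes state occupancy measures. demo_counts = D @ mdp.observation_matrix above is exactly the expert term; the MLP variant simply replaces $\theta^\top\phi(s)$ with a learned nonlinear function of the same observations.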
End of explanation
# Same experiments, but on grid world
mdp = menv.CliffWorld(
width=7,
height=4,
horizon=8,
use_xy_obs=True)
V, Q, pi = tirl.mce_partition_fh(mdp)
Dt, D = tirl.mce_occupancy_measures(mdp, pi=pi)
demo_counts = D @ mdp.observation_matrix
obs_dim, = demo_counts.shape
rmodel = tirl.LinearRewardModel(obs_dim)
opt = jaxopt.adam(1)
final_weights, D_fake = tirl.mce_irl(
mdp, opt, rmodel, D, linf_eps=0.1)
mdp.draw_value_vec(D)
plt.title("Cliff World $p(s)$")
plt.xlabel('x-coord')
plt.ylabel('y-coord')
plt.show()
mdp.draw_value_vec(D_fake)
plt.title("Occupancy for linear reward function")
plt.show()
plt.subplot(1, 2, 1)
mdp.draw_value_vec(rmodel.out(mdp.observation_matrix))
plt.title("Inferred reward")
plt.subplot(1, 2, 2)
mdp.draw_value_vec(mdp.reward_matrix)
plt.title("True reward")
plt.show()
rmodel = tirl.MLPRewardModel(obs_dim, [1024,], activation='Relu')
opt = jaxopt.adam(1e-3)
final_weights, D_fake_mlp = tirl.mce_irl(
mdp, opt, rmodel, D, linf_eps=3e-2, print_interval=250)
mdp.draw_value_vec(D_fake_mlp)
plt.title("Occupancy for MLP reward function")
plt.show()
plt.subplot(1, 2, 1)
mdp.draw_value_vec(rmodel.out(mdp.observation_matrix))
plt.title("Inferred reward")
plt.subplot(1, 2, 2)
mdp.draw_value_vec(mdp.reward_matrix)
plt.title("True reward")
plt.show()
Explanation: Same thing, but on grid world
The true reward here is not linear in the reduced feature space (i.e. $(x,y)$ coordinates). Finding an appropriate linear reward is impossible (as I will demonstrate), but an MLP should Just Work(tm).
End of explanation |
1,138 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inpe', 'sandbox-2', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: INPE
Source ID: SANDBOX-2
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:06
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
1,139 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PyMCEF Quickstart tutorial
<br>
<br>
Prerequisites
Install
Please install package PyMCEF through either conda or pip
Step1: Instead of smoothing, we directly exclude those stocks with extremely large jumps in price (possibly erroneous).
Here, a gain or loss more than four times the 99% or 1% quantile is considered as extreme.
Step2: After this filtering process, the number of remaining assets is
Step3: Next we use this return data as both training set and validation set to construct an efficient frontier with PyMCEF.
The risk measure is not specified and the default absolute semi-deviation is used.
Step4: As we can see, the full efficient frontier is obtained in less than one minute. Let's take a look at the efficient frontier
Step5: The performance of all portfolios is also visualized
Step6: And how the weights vary with different values of $\lambda$
Step7: All the weights are stored in the instance of SimpleEFp.
The first one is the most risk-seeking
Step8: Starting from the second portfolio, there is more than one asset
Step9: And the number of assets contained in the most risk-averse portfolio is | Python Code:
import pandas as pd
returns = pd.read_json('data/Russel3k_return.json')
Explanation: PyMCEF Quickstart tutorial
<br>
<br>
Prerequisites
Install
Please install package PyMCEF through either conda or pip:
<pre>
$ conda install -c hzzyyy pymcef
$ pip install pymcef
</pre>
conda packages are available on Anaconda Cloud for Python 2.7, 3.5 and 3.6 on macOS, Linux and Windows.
pip packages are available on PyPI for Python 2.7, 3.5 and 3.6 on macOS and Windows.
PyPI doesn't support Linux binary distributions; to install from pip on Linux, please download the wheel files from this repo and install locally.
<pre>
$ pip install your_download_path\pymcef-downloaded-filename.whl
</pre>
This package is only available for 64-bit operating systems; in addition, a C++11 runtime library is also required.
As a result, this package will NOT work on Redhat EL6 with its default configuration.
If you mainly work with Redhat you can either upgrade to EL7 or customize EL6 with C++11 support.
The implementation quality of PyMCEF is validated through several benchmarks.
Efficient frontier
This package helps to find the efficient frontier based on (jointly) simulated returns of all investible assets.
Alternatively speaking, this package tries to solve this problem numerically:
Given the joint distribution of the returns of all the investible assets, what is the
best choice of weights to assign to each asset, for every possible risk preference?
The efficient frontier is the set of all such 'best' choices of asset weights (a.k.a. portfolios) over all possible risk preferences.
Heuristically, the most risk-taking portfolio simply puts all weight on the asset with largest expected return.
The most risk-averse portfolio is very diversified, with many positions hedging each other.
Portfolio optimization
Each portfolio on the efficient frontier is obtained by solving the following optimization problem on the choice of weight vector $w$:
\begin{align}
\underset{w}{\mathrm{argmin}}\quad & \mathrm{Risk}\left(w\right)-\lambda\cdot\mathrm{Reward}\left(w\right),\\
\mathrm{subject\ to}\quad & \sum_{i}w_{i} = 1,\\
& \ \ \ \quad w_{i} > 0.
\end{align}
The Lagrangian multiplier $\lambda$ here uniquely defines a risk preference and its corresponding solution.
With $\lambda$ going to infinity the solution is the most risk-taking portfolio and with $\lambda$ being zero, the solution is the most risk-averse portfolio.
Risk measures
We haven't defined the formulas for the risk and reward functions in the above optimization problem. Not surprisingly, the reward function is just the expected return of the whole portfolio. Suppose the returns of all the assets form the random vector:
$$ Y = \{Y_i\}. $$
Then the reward function is simply:
\begin{eqnarray}
\mathrm{Reward}\left(w\right) & = & \mathbb{E}\left[\sum_{i}w_{i}Y_{i}\right]\\
& = & \mathbb{E}\left[X\right]\quad\mathrm{namely}
\end{eqnarray}
The risk function, formally known as a risk measure, is less obvious to define; it is a function of the portfolio return random variable $X$.
In the Markowitz mean-variance model, the risk measure is the variance of $X$, which is theoretically flawed (check this example). In PyMCEF, the following two more sophisticated risk measures are used:
Absolute Semideviation
\begin{eqnarray}
\mathrm{Risk}\left(w\right) & = & \mathbb{E}\left[\max\left(\mathbb{E}X-X,\ 0\right)\right]\\
& = & \mathbb{E}\left[\left(\mathbb{E}X-X\right)^{+}\right].
\end{eqnarray}
Fixed-target under-performance
\begin{eqnarray}
\mathrm{Risk}\left(w\right) & = & \mathbb{E}\left[\max\left(t-X,\ 0\right)\right]\\
& = & \mathbb{E}\left[\left(t-X\right)^{+}\right],
\end{eqnarray}
where $t$ is a given target. The riskless return is a sensible choice.
According to axiomatized portfolio theory, these two risk measures are better than the variance because they are more consistent with the stochastic dominance rule.
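As a quick illustration (a minimal sketch, not PyMCEF's internal code), both risk measures are straightforward to estimate from a sample of portfolio returns; here x is a hypothetical array of simulated portfolio returns and t a hypothetical fixed target such as the riskless return:
import numpy as np
x = np.random.normal(0.0005, 0.01, size=10000)  # hypothetical simulated portfolio returns
t = 0.0                                         # hypothetical fixed target
abs_semideviation = np.mean(np.maximum(x.mean() - x, 0.0))
fixed_target_underperformance = np.mean(np.maximum(t - x, 0.0))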
Stochastic programming
The (joint) distribution of the returns of assets $\{Y_i\}$ can be parametric. A convenient
choice is to assume a Normal or log-Normal distribution. More complicated
parameterizations certainly exist, and they lead to different optimization problems.
In addition, with $\lambda$ varying as a positive real number, we face a continuum of
optimization problems.
Alternatively, we can work with (finite) samples of $\{Y_i\}$.
In this case, we replace the expectation in the risk and reward functions with the sample mean and work with a stochastic programming problem.
PyMCEF is implemented in this way, and there is a huge advantage in terms of flexibility.
The input for PyMCEF is just Monte Carlo simulated returns, and no knowledge about the underlying distribution function is needed for this package to work.
In Bayesian inference, it is a common practice to directly simulate the posterior predictive
distribution with Markov Chain Monte Carlo, where the posterior density function is known only
up to a scaling factor.
In other words, in practice, samples of a distribution are often more readily available than its parametric form.
Another benefit comes with the linearization of the problem. With slack variables,
the constraint and objective function can be linearized such that the solution is valid for an interval of $\lambda$.
As a result, the (approximated) efficient frontier has only a finite number of portfolios.
With stochastic programming, we solve an approximated problem whose solution is much easier to describe.
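To make the slack-variable remark concrete, here is one standard linearization of the sample version of the absolute-semideviation problem (a sketch only; it is not necessarily the exact formulation used inside PyMCEF). With $n$ samples, $x_{j}=\sum_{i}w_{i}y_{ij}$ the portfolio return in sample $j$, and slack variables $u_{j}$:
\begin{align}
\underset{w,\,u}{\mathrm{argmin}}\quad & \frac{1}{n}\sum_{j}u_{j}-\lambda\cdot\frac{1}{n}\sum_{j}x_{j},\\
\mathrm{subject\ to}\quad & u_{j}\geq\frac{1}{n}\sum_{k}x_{k}-x_{j},\quad u_{j}\geq 0,\\
& \sum_{i}w_{i}=1,\quad w_{i}>0.
\end{align}
Every $x_{j}$ is linear in $w$, so this is a linear program, and parametric-LP arguments are what make each solution optimal over a whole interval of $\lambda$.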
One example
We take the daily returns of US stocks in Russell 3000 from 2015-01-02 to 2017-01-04.
Each asset has 505 returns, which are treated as simulated daily returns.
The direct use of historical data is not Monte Carlo; however, it is good enough to demonstrate how to use PyMCEF.
We can consider this predictive distribution as a largely overfitted model with not much predictive power.
However, this is a good starting point for better models.
We removed stocks with prices lower than $5 (pink sheet) to avoid liquidity problems. The data is stored in this git repository and can be loaded into a pandas data frame:
End of explanation
returns_filted = returns[(returns > returns.quantile(0.01) * 4) & \
(returns < returns.quantile(0.99) * 4)].dropna(axis=1)
Explanation: Instead of smoothing, we directly exclude those stocks with extremely large jumps in price (possibly erroneous).
Here, a gain or loss more than four times the 99% or 1% quantile is considered extreme.
End of explanation
len(returns_filted.columns)
Explanation: After this filtering process, the number of remaining assets is:
End of explanation
from time import time
import numpy as np
from pymcef import SimpleEFp
tic = time()
frt = SimpleEFp(training_set=np.transpose(returns_filted.values),\
validation_set=np.transpose(returns_filted.values),\
asset_names=returns_filted.columns)
print(time() - tic)
Explanation: Next we use this return data as both training set and validation set to construct an efficient frontier with PyMCEF.
The risk measure is not specified and the default absolute semi-deviation is used.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
fig_ef = frt.plot_ef()
Explanation: As we can see, the full efficient frontier is obtained in less than one minute. Let's take a look at the efficient frontier:
End of explanation
fig_pf = frt.plot_performance()
Explanation: The performance of all portfolios is also visualized:
End of explanation
fig_ws = frt.plot_weights()
Explanation: And how the weights vary with different values of $\lambda$:
End of explanation
from __future__ import print_function
prt0 = frt.frontier_in_sample[0]
for k, v in prt0['weight'].items():
print(frt.asset_names[k], v)
Explanation: All the weights are stored in the instance of SimpleEFp.
The first one is the most risk-seeking:
End of explanation
prt1 = frt.frontier_in_sample[1]
for k, v in prt1['weight'].items():
print(frt.asset_names[k], v)
prt3 = frt.frontier_in_sample[3]
for k, v in prt3['weight'].items():
print(frt.asset_names[k], v)
Explanation: Starting from the second portfolio, there is more than one asset:
End of explanation
print(len(frt.frontier_in_sample[-1]['weight']))
Explanation: And the number of assets contained in the most risk-averse portfolio is:
End of explanation |
1,140 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Optimization Exercise 1
Imports
Step1: Hat potential
The following potential is often used in Physics and other fields to describe symmetry breaking and is often known as the "hat potential"
Step2: Plot this function over the range $x\in\left[-3,3\right]$ with $b=1.0$ and $a=5.0$
Step3: Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$.
Use scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima.
Print the x values of the minima.
Plot the function as a blue line.
On the same axes, show the minima as red circles.
Customize your visualization to make it beautiful and effective. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt
Explanation: Optimization Exercise 1
Imports
End of explanation
# YOUR CODE HERE
def hat(x, a, b):
return (-a * x**2) + (b * x**4)
assert hat(0.0, 1.0, 1.0)==0.0
assert hat(0.0, 1.0, 1.0)==0.0
assert hat(1.0, 10.0, 1.0)==-9.0
Explanation: Hat potential
The following potential is often used in Physics and other fields to describe symmetry breaking and is often known as the "hat potential":
$$ V(x) = -a x^2 + b x^4 $$
Write a function hat(x,a,b) that returns the value of this function:
End of explanation
a = 5.0
b = 1.0
x = np.linspace(-3, 3, 100)
plt.plot(x, hat(x, a, b))
plt.xlabel("x")
plt.ylabel("V(x)")
assert True # leave this to grade the plot
Explanation: Plot this function over the range $x\in\left[-3,3\right]$ with $b=1.0$ and $a=5.0$:
End of explanation
# YOUR CODE HERE
#specifies a number of divisions which the function will look for a minimum on each.
#more divisions = more accurate
def minima(divisions, function, a, b):
critpoints = []
for n in range(0, divisions):
sectionx = np.linspace(n*(6/divisions) - 3, (n+1)*(6/divisions) - 3, 100)
sectiony = function(np.linspace(n*(6/divisions) - 3, (n+1)*(6/divisions) - 3, 100), a, b)
#make sure the minimum is not on the ends
if np.amin(sectiony) != sectiony[0] and np.amin(sectiony) != sectiony[-1]:
minpt = np.argmin(sectiony)
critpoints.append(sectionx[minpt])
return critpoints
minpts = minima(100, hat, a, b)
x = np.linspace(-3, 3, 100)
plt.plot(x, hat(x, a, b))
plt.scatter(minpts[0], hat(minpts[0], a, b), color = "r")
plt.scatter(minpts[1], hat(minpts[1], a, b), color = "r")
plt.xlabel("X")
plt.ylabel("V(x)")
print("Minimums: ", minpts)
assert True # leave this for grading the plot
Explanation: Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$.
Use scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima (a minimal sketch is given after this list).
Print the x values of the minima.
Plot the function as a blue line.
On the same axes, show the minima as red circles.
Customize your visualization to make it beautiful and effective.
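For reference, a minimal sketch of the intended scipy.optimize.minimize approach (an illustration, not the graded solution above): since minimize is a local optimizer, run it once from a starting point on each side of $x=0$. Analytically, $V'(x)=-2ax+4bx^{3}=0$ gives $x=\pm\sqrt{a/(2b)}\approx\pm1.581$ for $a=5$, $b=1$, which is a useful sanity check.
res_left = opt.minimize(lambda z: hat(z[0], a, b), x0=[-1.0])   # start left of the central hump
res_right = opt.minimize(lambda z: hat(z[0], a, b), x0=[1.0])   # start right of the central hump
print(res_left.x[0], res_right.x[0])                            # both should be near +/- 1.581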
End of explanation |
1,141 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Contents
Introduction
Block Using the Sorted Neighborhood Blocker
Block Tables to Produce a Candidate Set of Tuple Pairs
Handling Missing Values
Window Size
Stable Sort Order
Sorted Neighborhood Blocker Limitations
Introduction
<font color='red'>WARNING
Step1: Then, read the input tables from the datasets directory
Step2: Block Using the Sorted Neighborhood Blocker
Once the tables are read, we can do blocking using sorted neighborhood blocker.
With the sorted neighborhood blocker, you can only block between two tables to produce a candidate set of tuple pairs.
Block Tables to Produce a Candidate Set of Tuple Pairs
Step3: For the given two tables, we will assume that two persons born many years apart do not refer to the same real world person. So, we apply sorted neighborhood blocking on birth_year. That is, candidate tuple pairs are generated only from tuples that fall within the same sliding window after sorting on birth_year.
Step4: Note that the tuple pairs in the candidate set have close (window-adjacent) birth_year values.
The attributes included in the candidate set are based on l_output_attrs and r_output_attrs mentioned in the block_tables command (the key columns are included by default). Specifically, the list of attributes mentioned in l_output_attrs is picked from table A and the list of attributes mentioned in r_output_attrs is picked from table B. The attributes in the candidate set are prefixed based on the l_output_prefix and r_output_prefix parameter values mentioned in the block_tables command.
Step5: Note that the metadata of C1 includes key, foreign key to the left and right tables (i.e A and B) and pointers to left and right tables.
Handling Missing Values
If the input tuples have missing values in the blocking attribute, then they are ignored by default. This is because including all possible tuple pairs with missing values can significantly increase the size of the candidate set. But if you want to include them, then you can set the allow_missing parameter to True.
Step6: The candidate set C2 includes all possible tuple pairs with missing values.
Window Size
A tunable parameter to the Sorted Neighborhood Blocker is the window size. To produce the same kind of result as above with a larger window, pass the window_size argument. Note that it yields more results than C1.
Step7: Stable Sort Order
One final challenge for the Sorted Neighborhood Blocker is making the sort order stable. If the column being sorted on has multiple identical keys, and a run of identical keys is longer than the window size, then different results may occur between runs. To always guarantee the same results for every run, make sure the sorting column is unique. One method to do so is to append the id of the tuple onto the end of the sorting column. Here is an example. | Python Code:
# Import py_entitymatching package
import py_entitymatching as em
import os
import pandas as pd
Explanation: Contents
Introduction
Block Using the Sorted Neighborhood Blocker
Block Tables to Produce a Candidate Set of Tuple Pairs
Handling Missing Values
Window Size
Stable Sort Order
Sorted Neighborhood Blocker Limitations
Introduction
<font color='red'>WARNING: The sorted neighborhood blocker is still experimental and has not been fully tested yet. Use this blocker at your own risk.</font>
Blocking is typically done to reduce the number of tuple pairs considered for matching. There are several blocking methods proposed. The py_entitymatching package supports a subset of such blocking methods (#ref to what is supported). One such supported blocker is the sorted neighborhood blocker. This IPython notebook illustrates how to perform blocking using the sorted neighborhood blocker.
Note, often the sorted neighborhood blocking technique is used on a single table. In this case we have implemented sorted neighborhood blocking between two tables. We first enrich the tables with whether the table is the left table, or right table. Then we merge the tables. At this point we perform sorted neighborhood blocking, which is to pass a sliding window of window_size (default 2) across the merged dataset. Within the sliding window all tuple pairs that have one tuple from the left table and one tuple from the right table are returned.
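To make the sliding-window description concrete, here is a rough sketch of the idea in plain pandas (an illustration with hypothetical helper names, not the library's internal implementation), assuming the two input tables A and B loaded below, a blocking attribute such as birth_year, and a window size of 3:
merged = pd.concat([A.assign(_src='l'), B.assign(_src='r')]).sort_values('birth_year')
rows = merged.to_dict('records')
pairs = set()
for start in range(len(rows)):
    window = rows[start:start + 3]
    pairs.update((u['ID'], v['ID']) for u in window for v in window
                 if u['_src'] == 'l' and v['_src'] == 'r')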
First, we need to import py_entitymatching package and other libraries as follows:
End of explanation
# Get the datasets directory
datasets_dir = em.get_install_path() + os.sep + 'datasets'
# Get the paths of the input tables
path_A = datasets_dir + os.sep + 'person_table_A.csv'
path_B = datasets_dir + os.sep + 'person_table_B.csv'
# Read the CSV files and set 'ID' as the key attribute
A = em.read_csv_metadata(path_A, key='ID')
B = em.read_csv_metadata(path_B, key='ID')
A.head()
B.head()
Explanation: Then, read the input tables from the datasets directory
End of explanation
# Instantiate attribute equivalence blocker object
sn = em.SortedNeighborhoodBlocker()
Explanation: Block Using the Sorted Neighborhood Blocker
Once the tables are read, we can do blocking using the sorted neighborhood blocker.
With the sorted neighborhood blocker, you can only block between two tables to produce a candidate set of tuple pairs.
Block Tables to Produce a Candidate Set of Tuple Pairs
End of explanation
# Use block_tables to apply blocking over two input tables.
C1 = sn.block_tables(A, B,
l_block_attr='birth_year', r_block_attr='birth_year',
l_output_attrs=['name', 'birth_year', 'zipcode'],
r_output_attrs=['name', 'birth_year', 'zipcode'],
l_output_prefix='l_', r_output_prefix='r_', window_size=3)
# Display the candidate set of tuple pairs
C1.head()
Explanation: For the given two tables, we will assume that two persons born many years apart do not refer to the same real world person. So, we apply sorted neighborhood blocking on birth_year. That is, candidate tuple pairs are generated only from tuples that fall within the same sliding window after sorting on birth_year.
End of explanation
# Show the metadata of C1
em.show_properties(C1)
id(A), id(B)
Explanation: Note that the tuple pairs in the candidate set have close (window-adjacent) birth_year values.
The attributes included in the candidate set are based on l_output_attrs and r_output_attrs mentioned in the block_tables command (the key columns are included by default). Specifically, the list of attributes mentioned in l_output_attrs is picked from table A and the list of attributes mentioned in r_output_attrs is picked from table B. The attributes in the candidate set are prefixed based on the l_output_prefix and r_output_prefix parameter values mentioned in the block_tables command.
End of explanation
# Introduce some missing values
A1 = em.read_csv_metadata(path_A, key='ID')
A1.loc[0, 'zipcode'] = pd.np.NaN
A1.loc[0, 'birth_year'] = pd.np.NaN
A1
# Use block_tables to apply blocking over two input tables.
C2 = sn.block_tables(A1, B,
l_block_attr='zipcode', r_block_attr='zipcode',
l_output_attrs=['name', 'birth_year', 'zipcode'],
r_output_attrs=['name', 'birth_year', 'zipcode'],
l_output_prefix='l_', r_output_prefix='r_',
allow_missing=True) # setting allow_missing parameter to True
len(C1), len(C2)
C2
Explanation: Note that the metadata of C1 includes key, foreign key to the left and right tables (i.e A and B) and pointers to left and right tables.
Handling Missing Values
If the input tuples have missing values in the blocking attribute, then they are ignored by default. This is because including all possible tuple pairs with missing values can significantly increase the size of the candidate set. But if you want to include them, then you can set the allow_missing parameter to True.
End of explanation
C3 = sn.block_tables(A, B,
l_block_attr='birth_year', r_block_attr='birth_year',
l_output_attrs=['name', 'birth_year', 'zipcode'],
r_output_attrs=['name', 'birth_year', 'zipcode'],
l_output_prefix='l_', r_output_prefix='r_', window_size=5)
len(C1)
len(C3)
Explanation: The candidate set C2 includes all possible tuple pairs with missing values.
Window Size
A tunable parameter of the Sorted Neighborhood Blocker is the window size. To run the same blocking as above with a larger window, pass a larger window_size argument. Note that C3 contains more tuple pairs than C1.
End of explanation
A["birth_year_plus_id"]=A["birth_year"].map(str)+'-'+A["ID"].map(str)
B["birth_year_plus_id"]=B["birth_year"].map(str)+'-'+A["ID"].map(str)
C3 = sn.block_tables(A, B,
l_block_attr='birth_year_plus_id', r_block_attr='birth_year_plus_id',
l_output_attrs=['name', 'birth_year_plus_id', 'birth_year', 'zipcode'],
r_output_attrs=['name', 'birth_year_plus_id', 'birth_year', 'zipcode'],
l_output_prefix='l_', r_output_prefix='r_', window_size=5)
C3.head()
Explanation: Stable Sort Order
One final challenge for the Sorted Neighborhood Blocker is making the sort order stable. If the column being sorted on contains runs of identical keys that are longer than the window size, then different results may occur between runs. To guarantee the same results for every run, make the sorting column unique. One way to do so is to append the id of the tuple onto the end of the sorting column. Here is an example.
End of explanation |
1,142 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plot a univariate distribution along the x axis
Step1: Flip the plot by assigning the data variable to the y axis
Step2: Plot distributions for each column of a wide-form dataset
Step3: Use less smoothing
Step4: Use more smoothing, but don't smooth past the extreme data points
Step5: Plot conditional distributions with hue mapping of a second variable
Step6: "Stack" the conditional distributions
Step7: Normalize the stacked distribution at each value in the grid
Step8: Estimate the cumulative distribution function(s), normalizing each subset
Step9: Estimate distribution from aggregated data, using weights
Step10: Map the data variable with log scaling
Step11: Use numeric hue mapping
Step12: Modify the appearance of the plot
Step13: Plot a bivariate distribution
Step14: Map a third variable with a hue semantic to show conditional distributions
Step15: Show filled contours
Step16: Show fewer contour levels, covering less of the distribution
Step17: Fill the axes extent with a smooth distribution, using a different colormap | Python Code:
import seaborn as sns
tips = sns.load_dataset("tips")
sns.kdeplot(data=tips, x="total_bill")
Explanation: Plot a univariate distribution along the x axis:
End of explanation
sns.kdeplot(data=tips, y="total_bill")
Explanation: Flip the plot by assigning the data variable to the y axis:
End of explanation
iris = sns.load_dataset("iris")
sns.kdeplot(data=iris)
Explanation: Plot distributions for each column of a wide-form dataset:
End of explanation
sns.kdeplot(data=tips, x="total_bill", bw_adjust=.2)
Explanation: Use less smoothing:
End of explanation
ax = sns.kdeplot(data=tips, x="total_bill", bw_adjust=5, cut=0)
Explanation: Use more smoothing, but don't smooth past the extreme data points:
End of explanation
sns.kdeplot(data=tips, x="total_bill", hue="time")
Explanation: Plot conditional distributions with hue mapping of a second variable:
End of explanation
sns.kdeplot(data=tips, x="total_bill", hue="time", multiple="stack")
Explanation: "Stack" the conditional distributions:
End of explanation
sns.kdeplot(data=tips, x="total_bill", hue="time", multiple="fill")
Explanation: Normalize the stacked distribution at each value in the grid:
End of explanation
sns.kdeplot(
data=tips, x="total_bill", hue="time",
cumulative=True, common_norm=False, common_grid=True,
)
Explanation: Estimate the cumulative distribution function(s), normalizing each subset:
End of explanation
tips_agg = (tips
.groupby("size")
.agg(total_bill=("total_bill", "mean"), n=("total_bill", "count"))
)
sns.kdeplot(data=tips_agg, x="total_bill", weights="n")
Explanation: Estimate distribution from aggregated data, using weights:
End of explanation
diamonds = sns.load_dataset("diamonds")
sns.kdeplot(data=diamonds, x="price", log_scale=True)
Explanation: Map the data variable with log scaling:
End of explanation
sns.kdeplot(data=tips, x="total_bill", hue="size")
Explanation: Use numeric hue mapping:
End of explanation
sns.kdeplot(
data=tips, x="total_bill", hue="size",
fill=True, common_norm=False, palette="crest",
alpha=.5, linewidth=0,
)
Explanation: Modify the appearance of the plot:
End of explanation
geyser = sns.load_dataset("geyser")
sns.kdeplot(data=geyser, x="waiting", y="duration")
Explanation: Plot a bivariate distribution:
End of explanation
sns.kdeplot(data=geyser, x="waiting", y="duration", hue="kind")
Explanation: Map a third variable with a hue semantic to show conditional distributions:
End of explanation
sns.kdeplot(
data=geyser, x="waiting", y="duration", hue="kind", fill=True,
)
Explanation: Show filled contours:
End of explanation
sns.kdeplot(
data=geyser, x="waiting", y="duration", hue="kind",
levels=5, thresh=.2,
)
Explanation: Show fewer contour levels, covering less of the distribution:
End of explanation
sns.kdeplot(
data=geyser, x="waiting", y="duration",
fill=True, thresh=0, levels=100, cmap="mako",
)
Explanation: Fill the axes extent with a smooth distribution, using a different colormap:
End of explanation |
1,143 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A/B Testing with Hierarchical Models
Though A/B testing seems simple in that you're just comparing A against B and see which one performs better, but figuring out whether your results mean anything is actually quite complicated. During this explore and evaluate process, failing to correct for multiple comparisons is one of the most common A/B testing mistakes. Hierarchical models are one way to address this problem.
Background Information
Imagine the following scenario
Step1: Now, we can model each possible sign-up as a Bernoulli event. Recall the Bernoulli distribution reflects the outcome of a coin flip. With some probability $p$, the coin flips head and with probability $1-p$, the coin flips tails. The intuition behind this model is as follows
Step2: Notes on the code above
Step3: Notes on the code above We then use pymc to run a MCMC (Markov Chain Monte Carlo) to sample points from each website's posterior distribution.
stochastic variables have a keyword argument observed which accepts a boolean (False by default). The keyword observed has a very simple role
Step4: The black line is at x = 0, representing where the difference between the two distributions is 0. From inspection, we see that most of the distribution's mass is to the right of the black line. This means most of the points sampled from B's distribution are larger than those sampled from A's distribution, implying that site B's response is likely better than site A's response. To get more quantitative results, we can compute the probability that website B gets more sign-ups than website A by simply counting the number of samples less than 0, i.e. the area under the curve before 0,
represent the probability that site B is worse than site A.
Step5: Diagnosing Convergence
Due to the fact that MCMC is a sampling algorithm, we should also do a double check to make sure that these samples are stable and accurate representatives of the posterior distribution. The current best practice for this is to visually examine trajectory and the autocorrelation.
The pymc.Matplot module contains a poorly named function plot and it is preferred to import it as mcplot so there is no conflict with matplotlib's plot. The function takes an MCMC's trace object and will return posterior distributions, traces and auto-correlations for each variable.
Step8: Trajectory
Step9: What's Next?
For these two websites, we see that website B outperforms website A. This worked well for two websites, but if you're modeling an A/B test with several variants ( e.g. an A/B/C/D test ), you should consider using a hierarchical model to
Step10: As you can see, it's a simple distribution. It assigns equal probability weight to all points in the domain (0,1), also known as the support of the distribution. However, what if we want a distribution over (0,1) that isn't just flat everywhere?
This is where the Beta distribution comes in! The Beta distribution can be seen as a generalization of the Uniform(0,1) as it allows us to define more general probability density functions over the interval (0,1). Using two parameters a and b, the Beta(a,b) distribution is defined with the following probability density function
Step11: Notice in the above plot that the green line corresponding to the distribution Beta(1,1) is the same as that of Uniform(0,1), proving that the Beta distribution is indeed a generalization of the Uniform(0,1).
Now, many of you might be wondering what's the big takeaway from this section, so here they are
Step12: Side note on a handy general rule. The intuition behind $Binomial(n,p)$ is that if we flip a coin with probability p of landing heads n times, how likely is it that we see k heads for some k between 0 and n.
And given that information, If your prior is $p \sim Beta(a,b)$ and you observe $X=k$ for $X \sim Binomial(n,p)$, then your posterior is $(p∣X) \sim Beta(a+k,b+n−k)$. Beta is a "conjugate prior" for the Binomial, meaning that the posterior is also Beta.
Second, let's suppose, unrealistically, that we have an explicit prior distribution. We've flipped a lot of similar coins in the past, and we're pretty sure that the true bias of such coins follows a $Beta(51,51)$ distribution. Applying Bayes' rule with this prior, we would now model our observation of 60 out of 100 heads-up as $p \sim Beta(112,92)$.
Now our posterior distribution looks as follows. We keep the original for reference
Step13: Notice how much the distribution has shifted to the towards the prior! The preceding plot tells us that when we know an explicit prior, we should use it. Great. The problem with all of this is that for A/B tests, we often don't have an explicit prior. But when we have multiple test buckets, we can infer a prior.
To keep things concrete, let's say that we are designing a company's website, and we're testing five different layouts for the landing page. When a user clicks on our site, he/she sees one of the five landing page layouts. From there, the user can decide whether or not she wants to create an account on the site.
| Experiment | Clicks | Orders | True Rate | Empirical Rate |
|------------|--------|--------|------------|----------------|
| A | 1055 | 28 | 0.032 | 0.027 |
| B | 1057 | 45 | 0.041 | 0.043 |
| C | 1065 | 69 | 0.058 | 0.065 |
| D | 1039 | 58 | 0.047 | 0.056 |
| E | 1046 | 60 | 0.051 | 0.057 |
As a disclaimer, this is simulated data that is created to mimic a real A/B testing data. The number of Orders for each website was generated by generating a number from a Binomial distribution with n = Clicks and p = True Rate. The Empirical Rate represents the actual or so called observed Orders/Clicks
So now we have $\beta_1, ...,\beta_5$ tests and that for each test $\beta_i$ we observe $k_i$ successes out of $N$ trials. Let's further say that each bucket $\beta_i$ has some true success rate $p_i$; we don't know what $p_i$ is, but we're assuming that $k_i$ was drawn from a $Binomial( N, p_i )$ distribution. What we'd like is a prior for each $p_i$. The key idea is
Step14: We then model the true sign-up rates as Beta distribution and use our observed sign-up data to construct the Binomial distribution. Once again use MCMC to sample the data points and throw out the first few.
Step15: Let's see what our five posterior distributions look like.
Step16: Now that we have all five posterior distributions, we can easily computer the difference between any two of them. For example, let's revisit the difference of the posterior distribution of website B and website A.
Step17: we see most of the probability mass of this posterior distribution lies to the right of the line x = 0.00. We can quantify these results using the same method we used for the difference between website C and website A.
Step18: Again, we see results showing that website B has a higher sign-up rate than website A at a statistically significant level, same as the result when using the Bernoulli model.
Though it should be noted that It the hierarchical model cannot overcome the limitations of data. For example, let's consider website D and website E. While these two websites have a differing true sign-up rate (website E being better than website D), they have virtually identical click and sign-up data. As a result, our difference of the posterior yields a distribution centered about 0.0 (see plot below), and we cannot conclude that one website has a higher sign-up rate at a statistically significant level.
Step19: Comparing the Two Methods
We have gone through two different ways to do A/B testing
Step20: In this case, the mass of the Hierarchical Beta-Binomial model is closer to the true rate than that of the Bernoulli model. The posteriors of the hierarchical model gave us a closer estimate of the true rate.
Why does the Hierarchical Beta-Binomial model appear to be more accurate in estimating the true rate? Well this comes down to the prior distributions used in each method. In the classical Bernoulli method, we used the $Uniform(0,1)$ as our prior distribution. As I mentioned earlier, this is an uninformative prior as it assigns equal weight to all possible probabilities. On the other hand, the Beta prior creates a distribution that puts some of the probability mass towards the "truth" and thus we see a more a bit accurate estimate for the posterior distribution.
The other thing worth noticing is that remember Website A was observed to have a success rate of 0.027% and a true (unobserved) success rate of 0.032%. And our Hierarchical Beta-Binomial's posterior distribution's estimation was about
Step21: As we can see directly, that the hierarchical model gives a much better estimate than the empirical rate. Again, this is because hierarchical models shrink the individual posteriors towards the family-wise posterior. (You can think of it as "regression to the mean" in a special case.) | Python Code:
import numpy as np
import pymc as pm
import scipy.stats as stats
import matplotlib.pyplot as plt
from pymc.Matplot import plot as mcplot

# Website A had 1055 clicks and 28 sign-ups
# Website B had 1057 clicks and 45 sign-ups
values_A = np.hstack( ( [0] * (1055 - 28), [1] * 28 ) )
values_B = np.hstack( ( [0] * (1057 - 45), [1] * 45 ) )
print(values_A)
print(values_B)
Explanation: A/B Testing with Hierarchical Models
Though A/B testing seems simple, in that you're just comparing A against B and seeing which one performs better, figuring out whether your results mean anything is actually quite complicated. During this explore-and-evaluate process, failing to correct for multiple comparisons is one of the most common A/B testing mistakes. Hierarchical models are one way to address this problem.
Background Information
Imagine the following scenario: You work for a company that gets most of its online traffic through ads. Your current ads have a 3% click rate, and your boss decides that's not good enough. The marketing team comes up with 26 new ad designs, and as the company's data scientist, it's your job to determine if any of these new ads have a higher click rate than the current ad.
You set up an online experiment where internet users are shown one of the 27 possible ads (the current ad or one of the 26 new designs). After two weeks, you collect the data on each ad: How many users saw it, and how many times it was clicked on.
Time to run some statistical tests! New design A vs current design? No statistically significant difference. New design B vs current design? No statistically significant difference. You keep running tests and continue getting non-significant results. Just as you are about to lose hope, new design Z vs. current design.... Statistically significant difference at the alpha = 0.05 level!
You tell your boss you've found a design that has a higher click rate than the current design, and your company deploys it in production. However, after two months of collecting statistics on the new design, it seems the new design has a click rate of 3%. What went wrong?
When performing A/B testing, data scientists often fall into the common pitfall of failing to correct for multiple testing. Testing at alpha = 0.05 means that your statistical test yields a result as extreme or more extreme by random chance (assuming a given null hypothesis is true) with probability 0.05; in other words, your statistical test has an EXPECTED 5% false positive rate. If you run 26 statistical tests, then an upper bound on the expected number of false positives is 26*0.05 = 1.3. This means that in the scenario above, our data scientist should expect roughly one false positive result, and unfortunately, the false positive result is the one she reported to her boss.
Preliminary Statistics
There are two well-known branches of statistics: Frequentist statistics and Bayesian statistics. These two branches have plenty of differences, but we're going to focus on one key difference:
In frequentist statistics, we assume the parameter(s) of interest are fixed constants. We focus on computing the likelihood $p(Data \mid Parameter)$, the probability we see the observed set of data points given the parameter of interest.
In Bayesian statistics, we have uncertainty surrounding our parameter(s) of interest, and we mathematically capture our prior uncertainty about these parameters in a prior distribution, formally represented as $p(Parameter)$. We focus on computing the posterior distribution $p(Parameter \mid Data)$, representing our posterior uncertainty surrounding the parameter of interest after we have observed data.
Put another way, when using frequentist statistics, you base your decision on whether A beat B only on the data in the test; all other information is irrelevant, much like justice should be blind to outside beliefs.
On the other hand, the Bayesian approach lets you think a bit deeper about the problem. When you're testing A against B you actually do have some other information: you know what makes sense. And this is valuable information when making a decision. So, sure, justice may be blind - but sometimes we need her to peek a bit and make sure what's on the scale makes sense!
For A/B testing, what this means is that you, the marketer, have to come up with what conversion rate makes sense, known as the prior. That is, if you typically see a 10% conversion in A, you would not, during the test, expect to see it at 100%.
Then, instead of only finding the winner in the test itself, Bayesian analysis will incorporate your prior knowledge into the test. That is, you can tell the test what you believe the right answer to be - and then, using that prior knowledge, the test can tell you whether A beats B. And, because it uses more information than what's in the test itself, it can give you a defensible answer as to whether A beat B from a remarkably small sample size.
The Bernoulli Model
Let's first look at how we would perform A/B testing in the standard two-website case using Bayesian models, namely the Bernoulli model. Suppose website A had 1055 clicks and 28 sign-ups, and website B had 1057 clicks and 45 sign-ups.
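For contrast with the Bayesian treatment below, here is a rough sketch of the usual frequentist check on the same counts. It uses statsmodels' two-proportion z-test, which is not part of this notebook's analysis and is shown only for comparison.
from statsmodels.stats.proportion import proportions_ztest

# Two-proportion z-test on sign-ups: site A = 28/1055, site B = 45/1057
stat, p_value = proportions_ztest(count=[28, 45], nobs=[1055, 1057])
print(stat, p_value)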
End of explanation
# Create a uniform prior for the probabilities p_a and p_b
p_A = pm.Uniform( 'p_A', lower = 0, upper = 1 )
p_B = pm.Uniform( 'p_B', lower = 0, upper = 1 )
# Creates a posterior distribution of B - A
@pm.deterministic
def delta( p_A = p_A, p_B = p_B ):
return p_B - p_A
Explanation: Now, we can model each possible sign-up as a Bernoulli event. Recall the Bernoulli distribution reflects the outcome of a coin flip. With some probability $p$, the coin flips head and with probability $1-p$, the coin flips tails. The intuition behind this model is as follows: A user visits the website. The user flips a coin. If coin lands head, the user signs up.
Now, let's say each website has its own coin. Website A has a coin that lands heads with probability $p(A)$, and website B has a coin that lands heads with probability $p(B)$. We don't know either probability, but we want to determine if $p(A)$ < $p(B)$ or if the reverse is true (there is also the possibility that $p(A)$ = $p(B)$).
Since we have no information or bias about the true values of $p(A)$ or $p(B)$, we will draw these two parameters from a Uniform distribution. In addition, we will create a delta function to represent the posterior distribution for the difference of the two distributions. Remember the difference between the two probabilities is what we're interested in.
End of explanation
# Create the Bernoulli variables for the observation,
# value is the value that we know (observed)
obs_A = pm.Bernoulli( 'obs_A', p_A, value = values_A, observed = True )
obs_B = pm.Bernoulli( 'obs_B', p_B, value = values_B, observed = True )
# Create the model and run the sampling
# Sample 70,000 points and throw out the first 10,000
iteration = 70000
burn = 10000
model = pm.Model( [ p_A, p_B, delta, obs_A, obs_B ] )
mcmc = pm.MCMC(model)
pm.MAP(model).fit()
mcmc.sample( iteration, burn )
Explanation: Notes on the code above:
For the pm.Uniform() section: These are stochastic variables, variables that are random. Initializing a stochastic variable requires a name argument, plus additional parameters that are class specific. The first argument is the name, which is used to retrieve the posterior distribution later in the analysis, so it is best to use a descriptive name. Typically, you can use the Python variable's name as the name. Here, the latter two arguments are simply the lower and upper bounds for the uniform distribution.
Deterministic variables are variables that are not random if their parents' values are known: here, if we knew the values of delta's parents, p_A and p_B, then we could determine delta. We mark deterministic variables with the pm.deterministic decorator.
Next, we will create an observations variable for each website that incorporates the sign-up data for each website. Thus we create a Bernoulli stochastic variable with our prior and values. Recall that if $X \sim Bernoulli(p)$, then X is 1 with probability $p$ and 0 with probability $1−p$.
End of explanation
# use .trace to obtain the desired info
delta_samples = mcmc.trace("delta")[:]
plt.figure( figsize = ( 10, 5 ) )
plt.hist( delta_samples, histtype = 'stepfilled', bins = 30, alpha = 0.85,
label = "posterior of delta", color = "#7A68A6", normed = True )
plt.axvline( x = 0.0, color = "black", linestyle = "--" )
plt.legend(loc = "upper right")
plt.show()
Explanation: Notes on the code above We then use pymc to run a MCMC (Markov Chain Monte Carlo) to sample points from each website's posterior distribution.
stochastic variables have a keyword argument observed which accepts a boolean (False by default). The keyword observed has a very simple role: fix the variable's current value, i.e. make value immutable. We have to specify an initial value in the variable's creation, equal to the observations we wish to include, typically an array (and it should be an numpy array for speed).
Most often it is a good idea to prepend your call to the MCMC (Markov Chain Monte Carlo) with a call to .MAP(model).fit(). Recall that MCMC is a class of algorithms for sampling from a desired distribution by constructing an equilibrium distribution that has the properties of the desired distribution. And poor starting sampling points can prevent any convergence, or significantly slow it down. Thus, ideally, we would like to have the sampling process start at points where the posterior distributions truly exist. By calling .MAP(model).fit() we could avoid a lengthy burn-in period (where we discard the first few samples because they are still unrepresentative samples of the posterior distribution) and incorrect inference. Generally, we call this the maximum a posterior or, more simply, the MAP.
We can wrap all the created variables into a pm.Model class. With this Model class, we can analyze the variables as a single unit. This is an optional step, as the fitting algorithms can be sent an array of the variables rather than a Model class. So for the code above, you can do mcmc = pm.MCMC([p_A, p_B, delta, obs_A, obs_B]) instead of having to call pm.Model ... and then pm.MCMC.
Now, let's examine the posterior of the delta distribution (remember, this is the posterior of website B minus the posterior of website A).
End of explanation
print( "Probability site B is WORSE than site A: %.3f" % ( delta_samples < 0 ).mean() )
print( "Probability site B is BETTER than site A: %.3f" % ( delta_samples > 0 ).mean() )
Explanation: The black line is at x = 0, representing where the difference between the two distributions is 0. From inspection, we see that most of the distribution's mass is to the right of the black line. This means most of the points sampled from B's distribution are larger than those sampled from A's distribution, implying that site B's response is likely better than site A's response. To get more quantitative results, we can simply count samples: the fraction of samples greater than 0 is the probability that site B is better than site A, while the fraction of samples less than 0, i.e. the area under the curve to the left of 0,
represents the probability that site B is worse than site A.
End of explanation
mcplot( mcmc.trace("delta"), common_scale = False )
Explanation: Diagnosing Convergence
Due to the fact that MCMC is a sampling algorithm, we should also double-check that these samples are stable and accurate representatives of the posterior distribution. The current best practice for this is to visually examine the trajectory and the autocorrelation.
The pymc.Matplot module contains a poorly named function plot and it is preferred to import it as mcplot so there is no conflict with matplotlib's plot. The function takes an MCMC's trace object and will return posterior distributions, traces and auto-correlations for each variable.
End of explanation
def acf( x, lag = 100 ):
    """
    autocorrelation function that calculates the
    autocorrelation for series x from 1 to the user-specified lag

    Parameters
    ----------
    x : 1d-array, size ( iteration - burn-in iteration, )
        pymc's random sample
    lag : int, default 100
        maximum lagging number

    Returns
    -------
    corr : list, size (lag)
        autocorrelation for each lag

    Reference
    ---------
    http://stackoverflow.com/questions/643699/how-can-i-use-numpy-correlate-to-do-autocorrelation
    """
# np.corrcoef here returns a 2 * 2 matrix, either [ 0, 1 ] or [ 1, 0 ]
# will be the actual autocorrelation between the two series,
# the diagonal is just the autocorrelation between each series itself
# which is 1
corr = [ np.corrcoef( x[k:], x[:-k] )[ 0, 1 ] for k in range( 1, lag ) ]
return corr
def effective_sample( iteration, burn, corr ):
    """
    calculate the effective sample size of the mcmc,
    note that the calculation is based on the fact that
    the autocorrelation plot appears to have converged

    Parameters
    ----------
    iteration : int
        number of iterations of the mcmc
    burn : int
        burn-in iterations of the mcmc
    corr : list
        list that stores the autocorrelation for different lags

    Returns
    -------
    the effective sample size : int
    """
for index, c in enumerate(corr):
if c < 0.05:
i = index
break
return ( iteration - burn ) / ( 1 + 2 * np.sum( corr[:i + 1] ) )
lag = 100
corr = acf( x = delta_samples, lag = lag )
effective_sample( iteration, burn, corr ) # the iteration and burn is specified above
Explanation: Trajectory:
The top-left plot, often called the trace plot shows the trajectory of the samples. If the samples are representative, then they should look like they're zigzagging along the y-axis. Fortunately, our trace plot matches the description (The plot will most likely not look like this if you did not specify burn-in samples).
Autocorrelation:
Recall that a good trace plot will appear to be zigzagging along the y-axis. This is a sign that tells us the current position will exhibit some sort of correlation with previous positions. And too much of it means we are not exploring the space well.
The bottom-left plot depicts the autocorrelation, a measure of how related a series of numbers is with itself. A measurement of 1.0 is perfect positive autocorrelation, 0 no autocorrelation, and -1 is perfect negative correlation. If you are familiar with standard correlation, then autocorrelation is just how correlated a series $x_t$ at time $t$ is with the series at time $t-k$:
$$R(k) = Corr( x_t, x_{t-k} )$$
And there will be a different autocorrelation value for different $k$ (also referred to as lag). For our plot, the autocorrelation value starts to drop near zero for large values of $k$. This is a sign that our random samples are providing independent information about the posterior distribution, which is exactly what we wanted!!
Posterior Distribution:
The largest plot on the right-hand side is the histograms of the samples, which is basically the same plot as the ones we manually created, plus a few extra features. The thickest vertical line represents the posterior mean, which is a good summary of posterior distribution. The interval between the two dashed vertical lines in each the posterior distributions represent the 95% credible interval, which can be interpreted as "there is a 95% chance that our parameter of interest lies in this interval".
When communicating your results to others, it is incredibly important to state this interval. One of our purposes for studying Bayesian methods is to have a clear understanding of our uncertainty in unknowns. Combined with the posterior mean, the 95% credible interval provides a reliable interval to communicate the likely location of the unknown (provided by the mean) and the uncertainty (represented by the width of the interval).
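If you want the interval as numbers rather than reading it off the plot, one simple check is to take percentiles of the delta samples drawn earlier (this reuses the delta_samples array from the cell above):
import numpy as np

print(np.percentile(delta_samples, [2.5, 97.5]))  # 95% credible interval for delta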
Effective Sample Size
To figure out exactly how many samples we would have to draw, we can compute the effective sample size (ESS), a measure of how many independent samples our our samples are equivalent to. Formally, denote the actual number of steps in the chain as N. The ESS is:
$$ESS = \frac{N}{1 + 2 \sum_{k=1}^\infty ACF(k)}$$
where ACF(k) is the autocorrelation of the series at lag k. For practical computation, the infinite sum in the definition of ESS may be stopped when ACF(k) < 0.05. A rule of thumb for the ESS is that if you wish to have stable estimates of the 95% credible interval, an ESS around 10,000 is recommended.
End of explanation
support = np.linspace( 0, 1, num = 500 )
dunif = stats.uniform().pdf(support)
plt.figure( figsize = ( 10, 5 ) )
plt.plot( support, dunif, label = "Uniform(0,1)" )
plt.ylim( 0, 1.15 )
plt.legend( loc = "lower right" )
plt.title("Uniform Distribution")
plt.show()
# replace .show() with .savefig to save the pics and also shows the plot
# plt.savefig( "Uniform.png", format = "png" )
Explanation: What's Next?
For these two websites, we see that website B outperforms website A. This worked well for two websites, but if you're modeling an A/B test with several variants ( e.g. an A/B/C/D test ), you should consider using a hierarchical model to:
Protect yourself from a variety of multiple-comparison-type errors.
Get ahold of posterior distributions for your true conversion rates.
Let's first examine the sort of multiple comparison errors we're trying to avoid. Here's an exaggerated example:
Suppose that we have a single coin. We flip it 100 times, and it lands heads up on all 100 of them; how likely do you think it is that the coin is fair (i.e has a 50/50 chance of landing heads up)? Pretty slim; The probability of observing 100 heads out of 100 flips of a fair coin is:
$$1/2^{100} \approx 7.9×10^{−31}$$
Now imagine a new scenario: Instead of just one coin, we now have $2^{100}$ of them. We flip each 100 times. If we noticed that one of the $2^{100}$ coins has landed heads up on all 100 of its flips; how likely do you think it is that this coin is fair? A full answer will lead us into hierarchical modeling, but at this point it's already clear that we need to pay attention to the fact that there were another $2^{100} - 1$ coins: Even if all the $2^{100}$ coins were fair, the probability that at least one of them lands heads up on all 100 flips is:
$$1 − \left( 1 − \frac{1}{2^{100}} \right)^{2^{100}} \approx 1 − \frac{1}{e} \approx 63.2\%$$
Back to the website example, if we tried this for all pairs of our five websites, we run the risk of getting a "false positive problem" due to the multiple testing problem. There are 10 possible pairs, so assume we test all possible pairs independently at an alpha = 0.05 level. For each test, we have a 95% chance of not getting a false positive, so the probability that all the tests do not yield a false positive is $0.95^{10}$, which is roughly equal to 0.60. This means the probability of getting at least one false positive result is about 0.40 or 40%. If we had more websites and thus more pairs to test, the probability of getting a false positive would only increase.
As you can see, without correcting for multiple testing, we run a high risk of encountering a false positive result.
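As a quick check of that arithmetic (and assuming the ten tests are independent):
alpha, num_tests = 0.05, 10
print(1 - (1 - alpha) ** num_tests)  # roughly 0.40, matching the figure above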
Beta Distribution and Bayesian Priors
Before introducing the Beta-Binomial hierarchical model, let's discuss the theoretical motivation for the Beta distribution. Consider the Uniform Distribution over the interval (0,1).
End of explanation
a_vals = ( 0.5, 1, 2, 2 )
b_vals = ( 0.5, 1, 1, 2 )
plt.figure( figsize = ( 10, 5 ) )
for a, b in zip( a_vals, b_vals ):
plt.plot( support, stats.beta( a, b ).pdf(support), label = "Beta(%s,%s)" % ( a, b ) )
plt.legend()
plt.ylim( 0,4 )
plt.title("Beta Distribution Examples")
plt.show()
Explanation: As you can see, it's a simple distribution. It assigns equal probability weight to all points in the domain (0,1), also known as the support of the distribution. However, what if we want a distribution over (0,1) that isn't just flat everywhere?
This is where the Beta distribution comes in! The Beta distribution can be seen as a generalization of the Uniform(0,1) as it allows us to define more general probability density functions over the interval (0,1). Using two parameters a and b, the Beta(a,b) distribution is defined with the following probability density function:
$$pdf(x) = C x^{\alpha - 1} (1 - x)^{\beta - 1}, x \in (0, 1), \alpha, \beta > 0$$
Where C is a constant to normalize the integral of the function to 1 (all probability density functions must integrate to 1). This constant is formally known as the Beta Function.
But the important thing is that by changing the values of a and b, we can change the shape and the "steepness" of the distribution, thus allowing us to easily create a wide variety of functions over the interval (0,1).
End of explanation
plt.figure( figsize = ( 10, 5 ) )
plt.plot( support, stats.beta( 61, 41 ).pdf(support), label = "Beta(%s,%s)" % ( 61, 41 ) )
plt.legend()
plt.title("Coin Flips")
plt.show()
Explanation: Notice in the above plot that the green line corresponding to the distribution Beta(1,1) is the same as that of Uniform(0,1), proving that the Beta distribution is indeed a generalization of the Uniform(0,1).
Now, many of you might be wondering what's the big takeaway from this section, so here they are:
The Beta Distribution is a versatile family of probability distributions over (0,1).
This allows us to create prior distributions that incorporate some of our beliefs, i.e. informative priors. More concretely, when you have a k-successes-out-of-n-trials-type test, you can use the Beta distribution to model your posterior distribution. If you have a test with k successes out of n trials and start from a Uniform(0,1) prior, your posterior distribution is $Beta(k+1, n−k+1)$; a short numerical check of this rule follows below.
In the next section, we will discuss why this is important, and how the Beta Distribution can be used for A/B testing.
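As a small numerical check of the Beta posterior rule above, here is an illustration with scipy, using website B's counts from earlier (45 sign-ups out of 1057 clicks); this is not part of the original analysis.
from scipy import stats

k, n = 45, 1057                            # successes and trials
posterior = stats.beta(k + 1, n - k + 1)   # uniform prior -> Beta(k+1, n-k+1) posterior
print(posterior.mean())                    # close to the empirical rate 45/1057
print(posterior.interval(0.95))            # 95% credible interval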
Hierarchical Models
Hierarchical models are models that involve multiple parameters such that the credible values of some parameters depend on the values of other parameters. Thus hierarchical models will model all of the test buckets at once, rather than treating each in isolation. More specifically, they use the observed rates of each bucket to infer a prior distribution for the true rates; these priors then influence the predicted rates by "shrinking" the posterior distributions towards the prior.
Let's work our way up to this idea. First, let's say that we flip a coin 100 times and that it lands heads-up on 60 of them. We can then model this as $p \sim Beta(61,41)$, and our posterior distribution looks like this:
End of explanation
a_vals = ( 61, 112, 51 )
b_vals = ( 41, 92, 51 )
plt.figure( figsize = ( 10, 5 ) )
for a, b in zip( a_vals, b_vals ):
plt.plot( support, stats.beta( a, b ).pdf(support), label = "Beta(%s,%s)" % ( a, b ) )
plt.legend()
plt.title("Beta Distribution Examples")
plt.show()
Explanation: Side note on a handy general rule. The intuition behind $Binomial(n,p)$ is that if we flip a coin with probability p of landing heads n times, how likely is it that we see k heads for some k between 0 and n.
And given that information, If your prior is $p \sim Beta(a,b)$ and you observe $X=k$ for $X \sim Binomial(n,p)$, then your posterior is $(p∣X) \sim Beta(a+k,b+n−k)$. Beta is a "conjugate prior" for the Binomial, meaning that the posterior is also Beta.
Second, let's suppose, unrealistically, that we have an explicit prior distribution. We've flipped a lot of similar coins in the past, and we're pretty sure that the true bias of such coins follows a $Beta(51,51)$ distribution. Applying Bayes' rule with this prior, we would now model our observation of 60 out of 100 heads-up as $p \sim Beta(112,92)$.
Now our posterior distribution looks as follows. We keep the original for reference:
End of explanation
@pm.stochastic( dtype = np.float64 )
def beta_priors( value = [ 1.0, 1.0 ] ):
a, b = value
# outside of the support of the distribution
if a <= 0 or b <= 0:
return -np.inf
else:
return np.log( np.power( ( a + b ), -2.5 ) )
a = beta_priors[0]
b = beta_priors[1]
Explanation: Notice how much the distribution has shifted towards the prior! The preceding plot tells us that when we know an explicit prior, we should use it. Great. The problem with all of this is that for A/B tests, we often don't have an explicit prior. But when we have multiple test buckets, we can infer a prior.
To keep things concrete, let's say that we are designing a company's website, and we're testing five different layouts for the landing page. When a user clicks on our site, he/she sees one of the five landing page layouts. From there, the user can decide whether or not she wants to create an account on the site.
| Experiment | Clicks | Orders | True Rate | Empirical Rate |
|------------|--------|--------|------------|----------------|
| A | 1055 | 28 | 0.032 | 0.027 |
| B | 1057 | 45 | 0.041 | 0.043 |
| C | 1065 | 69 | 0.058 | 0.065 |
| D | 1039 | 58 | 0.047 | 0.056 |
| E | 1046 | 60 | 0.051 | 0.057 |
As a disclaimer, this is simulated data created to mimic real A/B testing data. The number of Orders for each website was generated by drawing from a Binomial distribution with n = Clicks and p = True Rate. The Empirical Rate is the observed rate, Orders/Clicks.
So now we have $\beta_1, ..., \beta_5$ tests, and for each test $\beta_i$ we observe $k_i$ successes out of $N_i$ trials. Let's further say that each bucket $\beta_i$ has some true success rate $p_i$; we don't know what $p_i$ is, but we're assuming that $k_i$ was drawn from a $Binomial( N_i, p_i )$ distribution. What we'd like is a prior for each $p_i$. The key idea is: Let's assume that all the $p_i$ are drawn from the same distribution, and let's use the empirically observed rates, i.e. the $k_i$s, to infer what this distribution is.
Here's what the whole setup looks like. We assume that the sign-ups $k_i$ are modeled as $k_i \sim Binomial(N_i, p_i)$, where $N_i$ is the number of clicks for website $i$. We then assume that every $p_i$, the true sign-up rate for each website, is drawn from the same $Beta(a,b)$ distribution for some parameters a and b; briefly, $p_i \sim Beta(a,b)$. We don't have any prior beliefs for a and b, so we'd like them to be drawn from an "uninformative prior".
Recall that in the Bernoulli method section, when we had no prior information about the true rates for each website, we used a uniform distribution as our "uninformative prior". In this section, we will assume each true sign-up rate is drawn from a Beta distribution.
Now, we've neglected one important question up until this point, How do we choose the a and b for the Beta distribution? Well, maybe we could assign a prior distribution to choose these hyper-parameters, but then our prior distribution has a prior distribution and it's priors all the way down.... A better alternative is to sample a and b from the distribution:
$$p(a, b) \propto \frac{1}{(a+b)^{5/2}}$$
where $\propto$ means "is proportional to". This may look like magic, but let's just assume that it is correct and keep going. Now that we've covered the theory, we can finally build our hierarchical model. The beta_priors function samples a and b for us from the distribution defined above.
Using the pymc module, we use the @pm.stochastic decorator to define a custom prior for a parameter in a model. The decorator requires that we return the log-likelihood, so we will be returning the log of $(a+b)^{-2.5}$.
End of explanation
# The hidden, true rate for each website, or simply
# this is what we don't know, but would like to find out
true_rates = pm.Beta( 'true_rates', a, b, size = 5 )
# The observed values, clicks and orders
trials = np.array([ 1055, 1057, 1065, 1039, 1046 ])
successes = np.array([ 28, 45, 69, 58, 60 ])
observed_values = pm.Binomial( 'observed_values', trials, true_rates,
value = successes, observed = True )
model1 = pm.Model([ a, b, true_rates, observed_values ])
mcmc1 = pm.MCMC(model1)
pm.MAP(model1).fit()
iteration1 = 70000
burn1 = 10000
mcmc1.sample( iteration1, burn1 )
Explanation: We then model the true sign-up rates with a Beta distribution and use our observed sign-up data to construct the Binomial likelihood. Once again we use MCMC to sample points and throw out the first few as burn-in.
End of explanation
traces = mcmc1.trace('true_rates')[:]
plt.figure( figsize = ( 10, 5 ) )
for i in range(5):
# our true rates was a size of 5, thus each column represents each true rate
plt.hist( traces[ :, i ],
histtype = 'stepfilled', bins = 30, alpha = 0.5,
label = "Website %s" % chr( 65 + i ), normed = True )
plt.legend(loc = "upper right")
plt.show()
Explanation: Let's see what our five posterior distributions look like.
End of explanation
diff_BA = traces[ :, 1 ] - traces[ :, 0 ]
plt.figure( figsize = ( 10, 5 ) )
plt.hist( diff_BA, histtype = 'stepfilled', bins = 30, alpha = 0.85,
label = "Difference site B - site A", normed = True )
plt.axvline( x = 0.0, color = "black", linestyle = "--" )
plt.legend(loc = "upper right")
plt.show()
Explanation: Now that we have all five posterior distributions, we can easily compute the difference between any two of them. For example, let's revisit the difference of the posterior distributions of website B and website A.
End of explanation
print( "Prob. that website A gets MORE sign-ups than website C: %0.3f" % (diff_BA < 0).mean() )
print( "Prob. that website A gets LESS sign-ups than website C: %0.3f" % (diff_BA > 0).mean() )
Explanation: we see most of the probability mass of this posterior distribution lies to the right of the line x = 0.00. We can quantify these results using the same method we used earlier, by counting the fraction of samples on either side of 0 for the difference between website B and website A.
End of explanation
diff_ED = traces[ :, 4 ] - traces[ :, 3 ]
plt.figure( figsize = ( 10, 5 ) )
plt.hist( diff_ED, histtype = 'stepfilled', bins = 30, alpha = 0.85,
label = "Difference site E - site D", normed = True )
plt.axvline( x = 0.0, color = "black", linestyle = "--" )
plt.legend(loc = "upper right")
plt.show()
print( "Probability that website D gets MORE sign-ups than website E: %0.3f" % (diff_ED < 0).mean() )
print( "Probability that website D gets LESS sign-ups than website E: %0.3f" % (diff_ED > 0).mean() )
Explanation: Again, we see results showing that website B has a higher sign-up rate than website A at a statistically significant level, same as the result when using the Bernoulli model.
It should be noted, though, that the hierarchical model cannot overcome the limitations of the data. For example, let's consider website D and website E. While these two websites have differing true sign-up rates (website E being better than website D), they have virtually identical click and sign-up data. As a result, our difference of the posteriors yields a distribution centered about 0.0 (see the plot below), and we cannot conclude that one website has a higher sign-up rate at a statistically significant level.
End of explanation
# trace of the bernoulli way
siteA_distribution = mcmc.trace("p_A")[:]
plt.figure( figsize = ( 10, 5 ) )
plt.hist( traces[ :, 0 ], histtype = 'stepfilled', bins = 30, alpha = 0.6,
label = "Hierachical Beta" )
plt.hist( siteA_distribution, histtype = 'stepfilled', bins = 30, alpha = 0.6,
label = "Bernoulli Model" )
plt.axvline( x = 0.032, color = "black", linestyle = "--" )
plt.legend(loc = "upper right")
plt.show()
Explanation: Comparing the Two Methods
We have gone through two different ways to do A/B testing: one using Bernoulli distributions, and another using Beta-Binomial distributions. The Beta-Binomial hierarchical model is motivated by the problem of multiple comparisons. Let's now compare the performance of the two methods by examining the posterior distribution generated for website A by each of the two methods. A black vertical line at x = 0.032, the true rate, is included.
End of explanation
# true rate : 0.032
# empirical rate : 0.027
# Hierarchical Beta-Binomial posterior's true rate
traces[ :, 0 ].mean()
Explanation: In this case, the mass of the Hierarchical Beta-Binomial model is closer to the true rate than that of the Bernoulli model. The posteriors of the hierarchical model gave us a closer estimate of the true rate.
Why does the Hierarchical Beta-Binomial model appear to be more accurate in estimating the true rate? This comes down to the prior distributions used in each method. In the classical Bernoulli method, we used the $Uniform(0,1)$ as our prior distribution. As mentioned earlier, this is an uninformative prior as it assigns equal weight to all possible probabilities. On the other hand, the Beta prior creates a distribution that puts some of the probability mass towards the "truth", and thus we see a somewhat more accurate estimate for the posterior distribution.
The other thing worth noticing: recall that website A was observed to have a success rate of 0.027 (2.7%) and a true (unobserved) success rate of 0.032 (3.2%). Our Hierarchical Beta-Binomial posterior's estimate of that rate is about:
End of explanation
# we can also plot the diagnosis that checks for convergence like we did above
mcplot( mcmc1.trace("true_rates"), common_scale = False )
Explanation: As we can see, the hierarchical model gives a much better estimate than the empirical rate. Again, this is because hierarchical models shrink the individual posteriors towards the family-wise posterior. (You can think of it as "regression to the mean" in a special case.)
End of explanation |
1,144 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
Run this cell to set everything up!
Step1: In the next two questions, you'll create a boosted hybrid for the Store Sales dataset by implementing a new Python class. Run this cell to create the initial class definition. You'll add fit and predict methods to give it a scikit-learn like interface.
Step2: 1) Define fit method for boosted hybrid
Complete the fit definition for the BoostedHybrid class. Refer back to steps 1 and 2 from the Hybrid Forecasting with Residuals section in the tutorial if you need.
Step3: 2) Define predict method for boosted hybrid
Now define the predict method for the BoostedHybrid class. Refer back to step 3 from the Hybrid Forecasting with Residuals section in the tutorial if you need.
Step4: Now you're ready to use your new BoostedHybrid class to create a model for the Store Sales data. Run the next cell to set up the data for training.
Step5: 3) Train boosted hybrid
Create the hybrid model by initializing a BoostedHybrid class with LinearRegression() and XGBRegressor() instances.
Step6: Depending on your problem, you might want to use other hybrid combinations than the linear regression + XGBoost hybrid you've created in the previous questions. Run the next cell to try other algorithms from scikit-learn.
Step7: These are just some suggestions. You might discover other algorithms you like in the scikit-learn User Guide.
Use the code in this cell to see the predictions your hybrid makes.
Step8: 4) Fit with different learning algorithms
Once you're ready to move on, run the next cell for credit on this question. | Python Code:
# Setup feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.time_series.ex5 import *
# Setup notebook
from pathlib import Path
from learntools.time_series.style import * # plot style settings
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import LabelEncoder
from statsmodels.tsa.deterministic import DeterministicProcess
from xgboost import XGBRegressor
comp_dir = Path('../input/store-sales-time-series-forecasting')
data_dir = Path("../input/ts-course-data")
store_sales = pd.read_csv(
comp_dir / 'train.csv',
usecols=['store_nbr', 'family', 'date', 'sales', 'onpromotion'],
dtype={
'store_nbr': 'category',
'family': 'category',
'sales': 'float32',
},
parse_dates=['date'],
infer_datetime_format=True,
)
store_sales['date'] = store_sales.date.dt.to_period('D')
store_sales = store_sales.set_index(['store_nbr', 'family', 'date']).sort_index()
family_sales = (
store_sales
.groupby(['family', 'date'])
.mean()
.unstack('family')
.loc['2017']
)
Explanation: Introduction
Run this cell to set everything up!
End of explanation
# You'll add fit and predict methods to this minimal class
class BoostedHybrid:
def __init__(self, model_1, model_2):
self.model_1 = model_1
self.model_2 = model_2
self.y_columns = None # store column names from fit method
Explanation: In the next two questions, you'll create a boosted hybrid for the Store Sales dataset by implementing a new Python class. Run this cell to create the initial class definition. You'll add fit and predict methods to give it a scikit-learn like interface.
End of explanation
def fit(self, X_1, X_2, y):
# YOUR CODE HERE: fit self.model_1
____
y_fit = pd.DataFrame(
# YOUR CODE HERE: make predictions with self.model_1
____,
index=X_1.index, columns=y.columns,
)
# YOUR CODE HERE: compute residuals
y_resid = ____
y_resid = y_resid.stack().squeeze() # wide to long
# YOUR CODE HERE: fit self.model_2 on residuals
self.model_2.fit(____, ____)
# Save column names for predict method
self.y_columns = y.columns
# Save data for question checking
self.y_fit = y_fit
self.y_resid = y_resid
# Add method to class
BoostedHybrid.fit = fit
# Check your answer
q_1.check()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_1.hint()
#_COMMENT_IF(PROD)_
q_1.solution()
#%%RM_IF(PROD)%%
def fit(self, X_1, X_2, y):
# Train model_1
self.model_1.fit(X_1, y)
# Make predictions
y_fit = pd.DataFrame(
self.model_1.predict(X_1), index=X_1.index, columns=y.columns,
)
# Compute residuals
y_resid = y - y_fit
y_resid = y_resid.stack().squeeze() # wide to long
# Train model_2 on residuals
self.model_2.fit(X_2, y.stack().squeeze())
# Save column names for predict method
self.y_columns = y.columns
# Save data for question checking
self.y_fit = y_fit
self.y_resid = y_resid
# Add method to class
BoostedHybrid.fit = fit
q_1.assert_check_failed()
#%%RM_IF(PROD)%%
def fit(self, X_1, X_2, y):
# Train model_1
self.model_1.fit(X_1, y)
# Make predictions
y_fit = pd.DataFrame(
self.model_1.predict(X_1), index=X_1.index, columns=y.columns,
)
# Compute residuals
y_resid = y
y_resid = y_resid.stack().squeeze() # wide to long
# Train model_2 on residuals
self.model_2.fit(X_2, y_resid)
# Save column names for predict method
self.y_columns = y.columns
# Save data for question checking
self.y_fit = y_fit
self.y_resid = y_resid
# Add method to class
BoostedHybrid.fit = fit
q_1.assert_check_failed()
#%%RM_IF(PROD)%%
def fit(self, X_1, X_2, y):
# Train model_1
self.model_1.fit(X_1, y)
# Make predictions
y_fit = pd.DataFrame(
self.model_1.predict(X_1), index=X_1.index, columns=y.columns,
)
# Compute residuals
y_resid = y - y_fit
y_resid = y_resid.stack().squeeze() # wide to long
# Train model_2 on residuals
self.model_2.fit(X_2, y_resid)
# Save column names for predict method
self.y_columns = y.columns
# Save data for question checking
self.y_fit = y_fit
self.y_resid = y_resid
# Add method to class
BoostedHybrid.fit = fit
q_1.assert_check_passed()
Explanation: 1) Define fit method for boosted hybrid
Complete the fit definition for the BoostedHybrid class. Refer back to steps 1 and 2 from the Hybrid Forecasting with Residuals section in the tutorial if you need.
End of explanation
def predict(self, X_1, X_2):
y_pred = pd.DataFrame(
# YOUR CODE HERE: predict with self.model_1
____,
index=X_1.index, columns=self.y_columns,
)
y_pred = y_pred.stack().squeeze() # wide to long
# YOUR CODE HERE: add self.model_2 predictions to y_pred
y_pred += ____
return y_pred.unstack() # long to wide
# Add method to class
BoostedHybrid.predict = predict
# Check your answer
q_2.check()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_2.hint()
#_COMMENT_IF(PROD)_
q_2.solution()
#%%RM_IF(PROD)%%
def predict(self, X_1, X_2):
# Predict with model_1
y_pred = pd.DataFrame(
self.model_1.predict(X_1), index=X_1.index, columns=self.y_columns,
)
y_pred = y_pred.stack().squeeze() # wide to long
# Add model_2 predictions to model_1 predictions
y_pred += y_pred
return y_pred.unstack()
# Add method to class
BoostedHybrid.predict = predict
q_2.assert_check_failed()
#%%RM_IF(PROD)%%
def predict(self, X_1, X_2):
# Predict with model_1
y_pred = pd.DataFrame(
self.model_1.predict(X_1), index=X_1.index, columns=self.y_columns,
)
y_pred = y_pred.stack().squeeze() # wide to long
# Add model_2 predictions to model_1 predictions
y_pred += self.model_2.predict(X_2)
return y_pred.unstack()
# Add method to class
BoostedHybrid.predict = predict
q_2.assert_check_passed()
Explanation: 2) Define predict method for boosted hybrid
Now define the predict method for the BoostedHybrid class. Refer back to step 3 from the Hybrid Forecasting with Residuals section in the tutorial if you need.
End of explanation
# Target series
y = family_sales.loc[:, 'sales']
# X_1: Features for Linear Regression
dp = DeterministicProcess(index=y.index, order=1)
X_1 = dp.in_sample()
# X_2: Features for XGBoost
X_2 = family_sales.drop('sales', axis=1).stack() # onpromotion feature
# Label encoding for 'family'
le = LabelEncoder() # from sklearn.preprocessing
X_2 = X_2.reset_index('family')
X_2['family'] = le.fit_transform(X_2['family'])
# Label encoding for seasonality
X_2["day"] = X_2.index.day # values are day of the month
Explanation: Now you're ready to use your new BoostedHybrid class to create a model for the Store Sales data. Run the next cell to set up the data for training.
End of explanation
# YOUR CODE HERE: Create LinearRegression + XGBRegressor hybrid with BoostedHybrid
model = ____
# YOUR CODE HERE: Fit and predict
#_UNCOMMENT_IF(PROD)_
#model.fit(____, ____, ____)
y_pred = ____
#_UNCOMMENT_IF(PROD)_
#y_pred = y_pred.clip(0.0)
# Check your answer
q_3.check()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_3.hint()
#_COMMENT_IF(PROD)_
q_3.solution()
#%%RM_IF(PROD)%%
# Create model
model = BoostedHybrid(
model_1=LinearRegression(),
model_2=LinearRegression(),
)
model.fit(X_1, X_2, y)
y_pred = model.predict(X_1, X_2)
y_pred = y_pred.clip(0.0)
q_3.assert_check_failed()
#%%RM_IF(PROD)%%
# Create model
model = BoostedHybrid(
model_1=LinearRegression,
model_2=XGBRegressor,
)
#model.fit(X_1, X_2, y)
#y_pred = model.predict(X_1, X_2)
#y_pred = y_pred.clip(0.0)
q_3.assert_check_failed()
#%%RM_IF(PROD)%%
# Create model
model = BoostedHybrid(
model_1=LinearRegression(),
model_2=XGBRegressor(),
)
model.fit(X_1, X_2, y)
y_pred = model.predict(X_1, X_2)
y_pred = y_pred.clip(0.0)
q_3.assert_check_passed()
Explanation: 3) Train boosted hybrid
Create the hybrid model by initializing a BoostedHybrid class with LinearRegression() and XGBRegressor() instances.
End of explanation
# Model 1 (trend)
from pyearth import Earth
from sklearn.linear_model import ElasticNet, Lasso, Ridge
# Model 2
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
# Boosted Hybrid
# YOUR CODE HERE: Try different combinations of the algorithms above
model = BoostedHybrid(
model_1=Ridge(),
model_2=KNeighborsRegressor(),
)
Explanation: Depending on your problem, you might want to use other hybrid combinations than the linear regression + XGBoost hybrid you've created in the previous questions. Run the next cell to try other algorithms from scikit-learn.
End of explanation
y_train, y_valid = y[:"2017-07-01"], y["2017-07-02":]
X1_train, X1_valid = X_1[: "2017-07-01"], X_1["2017-07-02" :]
X2_train, X2_valid = X_2.loc[:"2017-07-01"], X_2.loc["2017-07-02":]
# Some of the algorithms above do best with certain kinds of
# preprocessing on the features (like standardization), but this is
# just a demo.
model.fit(X1_train, X2_train, y_train)
y_fit = model.predict(X1_train, X2_train).clip(0.0)
y_pred = model.predict(X1_valid, X2_valid).clip(0.0)
families = y.columns[0:6]
axs = y.loc(axis=1)[families].plot(
subplots=True, sharex=True, figsize=(11, 9), **plot_params, alpha=0.5,
)
_ = y_fit.loc(axis=1)[families].plot(subplots=True, sharex=True, color='C0', ax=axs)
_ = y_pred.loc(axis=1)[families].plot(subplots=True, sharex=True, color='C3', ax=axs)
for ax, family in zip(axs, families):
ax.legend([])
ax.set_ylabel(family)
Explanation: These are just some suggestions. You might discover other algorithms you like in the scikit-learn User Guide.
Use the code in this cell to see the predictions your hybrid makes.
End of explanation
# View the solution (Run this cell to receive credit!)
q_4.check()
Explanation: 4) Fit with different learning algorithms
Once you're ready to move on, run the next cell for credit on this question.
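If you would first like a rough feel for how different model_2 choices compare, a sketch along these lines can help; it reuses the training/validation splits defined above, assumes XGBRegressor is already imported in this notebook, and is only a quick screen, not a tuned benchmark.
# Rough comparison sketch; assumes X1_train/X2_train/y_train and the validation splits exist.
from sklearn.metrics import mean_squared_error
candidates = {
    "XGBRegressor": XGBRegressor(),
    "ExtraTrees": ExtraTreesRegressor(),
    "KNeighbors": KNeighborsRegressor(),
}
for name, model_2 in candidates.items():
    hybrid = BoostedHybrid(model_1=Ridge(), model_2=model_2)
    hybrid.fit(X1_train, X2_train, y_train)
    preds = hybrid.predict(X1_valid, X2_valid).clip(0.0)
    rmse = mean_squared_error(y_valid, preds, squared=False)
    print(f"{name}: validation RMSE = {rmse:.4f}")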
End of explanation |
1,145 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: tf.summary の使用箇所を TF 2.0 に移行する
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step2: TensorFlow 2.0 では、TensorBoard での視覚化に使用する要約データを書き込む際の tf.summary API が大幅に変更されています。
変更点
tf.summary API を 2 つのサブ API として考えると良いでしょう。
個別の要約を記録するための一連の演算 - summary.scalar()、summary.histogram()、summary.image()、summary.audio()、および summary.text()。これらはモデルコードからインラインで呼び出されます。
上記の個別の要約を収集して特別にフォーマットされたログファイル(TensorBoard が読み取って視覚化を生成するファイル)に書き込む書き込みロジック。
TF 1.x の場合
上記の 2 つは、Session.run() でサマリー演算の出力をフェッチし、FileWriter.add_summary(output, step) で呼び出す、というように手動でつなぐ必要がありました。v1.summary.merge_all() 演算によって、グラフコレクションを使ってすべてのサマリー演算出力を集計するという方法で、この処理が簡単になっていますが、Eager execution と制御フローではあまりよく機能しなかったため、TF 2.0 には特に適していません。
TF 2.X の場合
上記の 2 つは密接に統合されており、個別の tf.summary 演算は実行時に直ちにデータを書き込むようになっています。モデルコードから API を使用する方法はあまり変わっていませんが、Eager execution との相性が改善されており、ほかのグラフモードとの互換性もそのままです。この 2 つの API を統合することで、summary.FileWriter は TensorFlow 実行コンテキストの一環となり、tf.summary 演算で直接アクセスできるため、ライターの構成が外見的な主な違いと言えます。
次は、TF 2.x のデフォルトのモードである Eager execution を使用した例です。
Step3: 次は、tf.function グラフ実行の使用例です。
Step4: 次は、レガシー TF 1.x グラフ実行の使用例です。
Step5: コードを変換する
既存の tf.summary の使用箇所を TF 2.x API に変換する作業を確実に変換することは困難であるため、tf_upgrade_v2 スクリプトは、すべてを tf.compat.v1.summary に書き換えることだけを行い、自動的に TF 2.x の動作を有効化することはありません。
部分移行
tf.compat.v1.summary.scalar() といった TF 1.x サマリー API のロギング演算に大きく依存しているモデルコードを使用するユーザーが TF 2.x により簡単に移行できるようにするには、先にライター API のみを移行して、後でモデルコード内の個別の TF 1.x サマリー演算を完全に移行することができます。
このような移行をサポートするために、<a href="https | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
import tensorflow as tf
Explanation: tf.summary の使用箇所を TF 2.0 に移行する
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/tensorboard/migrate"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png"> TensorFlow.org で表示</a>
</td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tensorboard/migrate.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab で実行</a>
</td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tensorboard/migrate.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> GitHub でソースを表示</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tensorboard/migrate.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a></td>
</table>
注意: このドキュメントは、TensorFlow 1.x TensorBoard に精通している方で大規模な TensorFlow コードベースを TensorFlow 1.x から 2.0 に移行したい方を対象としています。TensorBoard にまだ新しい方は、基礎ドキュメントをご覧ください。tf.keras を使用している場合は、TensorFlow 2.0 にアップグレードするための作業が必要ない場合があります。
End of explanation
writer = tf.summary.create_file_writer("/tmp/mylogs/eager")
with writer.as_default():
for step in range(100):
# other model code would go here
tf.summary.scalar("my_metric", 0.5, step=step)
writer.flush()
ls /tmp/mylogs/eager
Explanation: TensorFlow 2.0 では、TensorBoard での視覚化に使用する要約データを書き込む際の tf.summary API が大幅に変更されています。
変更点
tf.summary API を 2 つのサブ API として考えると良いでしょう。
個別の要約を記録するための一連の演算 - summary.scalar()、summary.histogram()、summary.image()、summary.audio()、および summary.text()。これらはモデルコードからインラインで呼び出されます。
上記の個別の要約を収集して特別にフォーマットされたログファイル(TensorBoard が読み取って視覚化を生成するファイル)に書き込む書き込みロジック。
TF 1.x の場合
上記の 2 つは、Session.run() でサマリー演算の出力をフェッチし、FileWriter.add_summary(output, step) で呼び出す、というように手動でつなぐ必要がありました。v1.summary.merge_all() 演算によって、グラフコレクションを使ってすべてのサマリー演算出力を集計するという方法で、この処理が簡単になっていますが、Eager execution と制御フローではあまりよく機能しなかったため、TF 2.0 には特に適していません。
TF 2.X の場合
上記の 2 つは密接に統合されており、個別の tf.summary 演算は実行時に直ちにデータを書き込むようになっています。モデルコードから API を使用する方法はあまり変わっていませんが、Eager execution との相性が改善されており、ほかのグラフモードとの互換性もそのままです。この 2 つの API を統合することで、summary.FileWriter は TensorFlow 実行コンテキストの一環となり、tf.summary 演算で直接アクセスできるため、ライターの構成が外見的な主な違いと言えます。
次は、TF 2.x のデフォルトのモードである Eager execution を使用した例です。
End of explanation
writer = tf.summary.create_file_writer("/tmp/mylogs/tf_function")
@tf.function
def my_func(step):
with writer.as_default():
# other model code would go here
tf.summary.scalar("my_metric", 0.5, step=step)
for step in tf.range(100, dtype=tf.int64):
my_func(step)
writer.flush()
ls /tmp/mylogs/tf_function
Explanation: 次は、tf.function グラフ実行の使用例です。
End of explanation
g = tf.compat.v1.Graph()
with g.as_default():
step = tf.Variable(0, dtype=tf.int64)
step_update = step.assign_add(1)
writer = tf.summary.create_file_writer("/tmp/mylogs/session")
with writer.as_default():
tf.summary.scalar("my_metric", 0.5, step=step)
all_summary_ops = tf.compat.v1.summary.all_v2_summary_ops()
writer_flush = writer.flush()
with tf.compat.v1.Session(graph=g) as sess:
sess.run([writer.init(), step.initializer])
for i in range(100):
sess.run(all_summary_ops)
sess.run(step_update)
sess.run(writer_flush)
ls /tmp/mylogs/session
Explanation: 次は、レガシー TF 1.x グラフ実行の使用例です。
End of explanation
# Enable eager execution.
tf.compat.v1.enable_v2_behavior()
# A default TF 2.x summary writer is available.
writer = tf.summary.create_file_writer("/tmp/mylogs/enable_v2_in_v1")
# A step is set for the writer.
with writer.as_default(step=0):
# Below invokes `tf.summary.scalar`, and the return value is an empty bytestring.
tf.compat.v1.summary.scalar('float', tf.constant(1.0), family="family")
Explanation: コードを変換する
既存の tf.summary の使用箇所を TF 2.x API に変換する作業を確実に変換することは困難であるため、tf_upgrade_v2 スクリプトは、すべてを tf.compat.v1.summary に書き換えることだけを行い、自動的に TF 2.x の動作を有効化することはありません。
部分移行
tf.compat.v1.summary.scalar() といった TF 1.x サマリー API のロギング演算に大きく依存しているモデルコードを使用するユーザーが TF 2.x により簡単に移行できるようにするには、先にライター API のみを移行して、後でモデルコード内の個別の TF 1.x サマリー演算を完全に移行することができます。
このような移行をサポートするために、<a href="https://www.tensorflow.org/api_docs/python/tf/compat/v1/summary"><code>tf.compat.v1.summary</code></a> は以下の条件で TF 2.x 相当に自動的に転送されます。
最も外側のコンテキストが Eager モードである
デフォルトの TF 2.x サマリーライターが設定されている
空でないステップの値がライターに設定済みである(<a href="https://www.tensorflow.org/api_docs/python/tf/summary/SummaryWriter#as_default"><code>tf.summary.SummaryWriter.as_default</code></a>、<a href="https://www.tensorflow.org/api_docs/python/tf/summary/experimental/set_step"><code>tf.summary.experimental.set_step</code></a>、または <a href="https://www.tensorflow.org/api_docs/python/tf/compat/v1/train/create_global_step"><code>tf.compat.v1.train.create_global_step</code></a> を使用)
TF 2.x サマリー実装が呼び出されると、戻り値は空のバイト文字列テンソルになり、サマリーの書き込みが重複しないようにされることに注意してください。また、入力引数のフォワーディングはベストエフォートであり、すべての引数が保持されるとは限りません(たとえば、family 引数はサポートされていますが、collections は取り除かれます)。
以下は、<a href="https://www.tensorflow.org/api_docs/python/tf/compat/v1/summary/scalar"><code>tf.compat.v1.summary.scalar</code></a> で <a href="https://www.tensorflow.org/api_docs/python/tf/summary/scalar"><code>tf.summary.scalar</code></a> の動作を呼び出す例です。
End of explanation |
1,146 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create The Data
The dataset used in this tutorial is the famous iris dataset. The Iris data contains 50 samples from each of three species of Iris (the target y) and four feature variables (X).
The dataset contains three categories (three species of Iris); however, for the sake of simplicity it is easier if the target data is binary. Therefore we will remove the data for the last species of Iris.
Step2: View The Data
Step3: Split The Data Into Training And Test Sets
Step4: Standardize Features
Because the regularization penalty is comprised of the sum of the absolute value of the coefficients, we need to scale the data so the coefficients are all based on the same scale.
Step5: Run Logistic Regression With A L1 Penalty With Various Regularization Strengths
The usefulness of L1 is that it can push feature coefficients to 0, creating a method for feature selection. In the code below we run a logistic regression with a L1 penalty four times, each time decreasing the value of C. We should expect that as C decreases, more coefficients become 0. | Python Code:
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
Explanation: Title: Logistic Regression With L1 Regularization
Slug: logistic_regression_with_l1_regularization
Summary: Logistic Regression With L1 Regularization using scikit-learn.
Date: 2016-12-01 12:00
Category: Machine Learning
Tags: Logistic Regression
Authors: Chris Albon
L1 regularization (also called least absolute deviations) is a powerful tool in data science. There are many tutorials out there explaining L1 regularization and I will not try to do that here. Instead, this tutorial is meant to show the effect of the regularization parameter C on the coefficients and model accuracy.
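For reference, scikit-learn's L1-penalized logistic regression minimizes (for labels $y_i \in \{-1, 1\}$) an objective in which C scales the data-fit term, so a smaller C means a stronger penalty:
$$\min_{w, c} \; \|w\|_1 + C \sum_{i=1}^{n} \log\left(1 + \exp\left(-y_i \left(x_i^T w + c\right)\right)\right)$$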
Preliminaries
End of explanation
# Load the iris dataset
iris = datasets.load_iris()
# Create X from the features
X = iris.data
# Create y from output
y = iris.target
# Remake the variable, keeping all data where the category is not 2.
X = X[y != 2]
y = y[y != 2]
Explanation: Create The Data
The dataset used in this tutorial is the famous iris dataset. The Iris data contains 50 samples from each of three species of Iris (the target y) and four feature variables (X).
The dataset contains three categories (three species of Iris); however, for the sake of simplicity it is easier if the target data is binary. Therefore we will remove the data for the last species of Iris.
End of explanation
# View the features
X[0:5]
# View the target data
y
Explanation: View The Data
End of explanation
# Split the data into test and training sets, with 30% of samples being put into the test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
Explanation: Split The Data Into Training And Test Sets
End of explanation
# Create a scaler object
sc = StandardScaler()
# Fit the scaler to the training data and transform
X_train_std = sc.fit_transform(X_train)
# Apply the scaler to the test data
X_test_std = sc.transform(X_test)
Explanation: Standardize Features
Because the regularization penalty is comprised of the sum of the absolute value of the coefficients, we need to scale the data so the coefficients are all based on the same scale.
End of explanation
C = [10, 1, .1, .001]
for c in C:
# penalty='l1' needs a compatible solver (liblinear or saga); fit on the standardized features from above
clf = LogisticRegression(penalty='l1', C=c, solver='liblinear')
clf.fit(X_train_std, y_train)
print('C:', c)
print('Coefficient of each feature:', clf.coef_)
print('Training accuracy:', clf.score(X_train_std, y_train))
print('Test accuracy:', clf.score(X_test_std, y_test))
print('')
Explanation: Run Logistic Regression With A L1 Penalty With Various Regularization Strengths
The usefulness of L1 is that it can push feature coefficients to 0, creating a method for feature selection. In the code below we run a logistic regression with a L1 penalty four times, each time decreasing the value of C. We should expect that as C decreases, more coefficients become 0.
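As a quick follow-up sketch (an addition reusing the same loop variables), counting the coefficients that are exactly zero makes the feature-selection effect explicit:
# Sketch: count zeroed-out coefficients at each regularization strength.
for c in C:
    clf = LogisticRegression(penalty='l1', C=c, solver='liblinear')
    clf.fit(X_train_std, y_train)
    print('C:', c, '-> zero coefficients:', int((clf.coef_ == 0).sum()))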
End of explanation |
1,147 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dirichlet distribution
https
Step1: Setting up the Code
Before we can plot our Dirichlet distributions, we need to do three things
Step2: Gamma | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.tri as tri
from functools import reduce
# import seaborn
from math import gamma
from operator import mul
corners = np.array([[0, 0], [1, 0], [0.5,0.75**0.5]])
print(corners)
triangle = tri.Triangulation(corners[:, 0], corners[:, 1])
refiner = tri.UniformTriRefiner(triangle)
trimesh = refiner.refine_triangulation(subdiv=4)
plt.figure(figsize=(10, 5))
for (i, shape) in enumerate((triangle, trimesh)):
plt.subplot(1, 2, i+ 1)
plt.triplot(shape)
plt.axis('off')
plt.axis('equal')
# Mid-points of triangle sides opposite of each corner
midpoints = []
for i in range(3):
point1 = corners[(i + 1) % 3]
point2 = corners[(i + 2) % 3]
mid = (point1 + point2) / 2.0
print(point1, '+', point2, '=', mid)
midpoints.append(mid)
print('\n')
print(midpoints)
Explanation: Dirichlet distribution
https://en.wikipedia.org/wiki/Dirichlet_distribution
$$
\text{Dir}\left(\boldsymbol{\alpha}\right)\rightarrow \mathrm{p}\left(\boldsymbol{\theta}\mid\boldsymbol{\alpha}\right)=\frac{\Gamma\left(\sum_{i=1}^{K}\boldsymbol{\alpha}_{i}\right)}{\prod_{i=1}^{K}\Gamma\left(\boldsymbol{\alpha}_{i}\right)}\prod_{i=1}^{K}\boldsymbol{\theta}_{i}^{\boldsymbol{\alpha}_{i}-1} \\
K\geq2\ \text{number of categories} \\
\alpha_{1},\ldots,\alpha_{K}\ \text{concentration parameters, where } \alpha_{i}>0
$$
Visualizing Dirichlet Distributions
http://blog.bogatron.net/blog/2014/02/02/visualizing-dirichlet-distributions/
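One property worth keeping in mind when reading the contour plots below (added here for context): the mean of each component is proportional to its concentration parameter,
$$\mathrm{E}\left[\boldsymbol{\theta}_{i}\right] = \frac{\boldsymbol{\alpha}_{i}}{\sum_{k=1}^{K}\boldsymbol{\alpha}_{k}}$$
so, for example, Dir([1, 2, 3]) pulls probability mass toward the corner associated with the largest α.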
End of explanation
def xy2bc(xy, tol=1.e-3):
'''Converts 2D Cartesian coordinates to barycentric.'''
s = [(corners[i] - midpoints[i]).dot(xy - midpoints[i]) / 0.75 for i in range(3)]
return np.clip(s, tol, 1.0 - tol)
Explanation: Setting up the Code
Before we can plot our Dirichlet distributions, we need to do three things:
Generate a set of x-y coordinates over our equilateral triangle
Map the x-y coordinates to the 2-simplex coordinate space
Compute Dir(α) for each point
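The xy2bc helper above handles the mapping to the simplex; a quick hypothetical sanity check is that the centroid of the triangle should map to roughly equal barycentric coordinates:
# Sanity-check sketch: the centroid should map to roughly [1/3, 1/3, 1/3].
print(xy2bc(corners.mean(axis=0)))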
End of explanation
class Dirichlet(object):
def __init__(self, alpha):
self._alpha = np.array(alpha)
self._coef = gamma(np.sum(self._alpha)) / reduce(mul, [gamma(a) for a in self._alpha])
def pdf(self, x):
'''Returns pdf value for `x`.'''
return self._coef * reduce(mul, [xx ** (aa - 1) for (xx, aa)in zip(x, self._alpha)])
def draw_pdf_contours(dist, nlevels=200, subdiv=8, **kwargs):
import math
refiner = tri.UniformTriRefiner(triangle)
trimesh = refiner.refine_triangulation(subdiv=subdiv)
pvals = [dist.pdf(xy2bc(xy)) for xy in zip(trimesh.x, trimesh.y)]
plt.tricontourf(trimesh, pvals, nlevels, **kwargs)
plt.axis('equal')
plt.xlim(0, 1)
plt.ylim(0, 0.75**0.5)
plt.axis('off')
draw_pdf_contours(Dirichlet([1, 1, 1]))
draw_pdf_contours(Dirichlet([0.999, 0.999, 0.999]))
draw_pdf_contours(Dirichlet([5, 5, 5]))
draw_pdf_contours(Dirichlet([1, 2, 3]))
draw_pdf_contours(Dirichlet([3, 2, 1]))
draw_pdf_contours(Dirichlet([2, 3, 1]))
Explanation: Gamma: $\Gamma \left( z \right) = \int\limits_0^\infty {x^{z - 1} e^{ - x} dx}$
$
\text{Dir}\left(\boldsymbol{\alpha}\right)\rightarrow \mathrm{p}\left(\boldsymbol{\theta}\mid\boldsymbol{\alpha}\right)=\frac{\Gamma\left(\sum_{i=1}^{K}\boldsymbol{\alpha}_{i}\right)}{\prod_{i=1}^{K}\Gamma\left(\boldsymbol{\alpha}_{i}\right)}\prod_{i=1}^{K}\boldsymbol{\theta}_{i}^{\boldsymbol{\alpha}_{i}-1} \\
K\geq2\ \text{number of categories} \\
\alpha_{1},\ldots,\alpha_{K}\ \text{concentration parameters, where } \alpha_{i}>0
$
End of explanation |
1,148 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Environment Preparation
Install Java 8
Run the cell on Google Colab to install JDK 1.8.
Note
Step2: Install BigDL Orca
Conda is needed to prepare the Python environment for running this example.
Note
Step3: You can install the latest pre-release version using pip install --pre --upgrade bigdl-orca.
Step4: Distributed PyTorch using Orca APIs
In this guide we will describe how to scale out PyTorch programs using Orca in 4 simple steps.
Step5: Step 1
Step6: This is the only place where you need to specify local or distributed mode. View Orca Context for more details.
Note
Step7: Step 3
Step8: Step 4
Step9: Next, fit and evaluate using the Estimator.
Step10: Finally, evaluate using the Estimator.
Step11: The accuracy of this model has reached 98%. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
Explanation: <a href="https://colab.research.google.com/github/intel-analytics/BigDL/blob/branch-2.0/python/orca/colab-notebook/quickstart/pytorch_lenet_mnist.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2016 The BigDL Authors.
End of explanation
# Install jdk8
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
import os
# Set environment variable JAVA_HOME.
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
!update-alternatives --set java /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java
!java -version
Explanation: Environment Preparation
Install Java 8
Run the cell on Google Colab to install JDK 1.8.
Note: if you run this notebook on your computer, root permission is required when running the cell to install Java 8. (You may ignore this cell if Java 8 has already been set up on your computer.)
End of explanation
import sys
# Set current python version
python_version = f"3.7.10"
# Install Miniconda
!wget https://repo.continuum.io/miniconda/Miniconda3-4.5.4-Linux-x86_64.sh
!chmod +x Miniconda3-4.5.4-Linux-x86_64.sh
!./Miniconda3-4.5.4-Linux-x86_64.sh -b -f -p /usr/local
# Update Conda
!conda install --channel defaults conda python=$python_version --yes
!conda update --channel defaults --all --yes
# Append to the sys.path
_ = (sys.path
.append(f"/usr/local/lib/python3.7/site-packages"))
os.environ['PYTHONHOME']="/usr/local"
Explanation: Install BigDL Orca
Conda is needed to prepare the Python environment for running this example.
Note: The following code cell is specific for setting up conda environment on Colab; for general conda installation, please refer to the install guide for more details.
End of explanation
# Install latest pre-release version of BigDL Orca
# Installing BigDL Orca from pip will automatically install pyspark, bigdl, and their dependencies.
!pip install --pre --upgrade bigdl-orca
# Install python dependencies
!pip install torch==1.7.1 torchvision==0.8.2
!pip install six cloudpickle
!pip install jep==3.9.0
Explanation: You can install the latest pre-release version using pip install --pre --upgrade bigdl-orca.
End of explanation
# import necessary libraries and modules
from __future__ import print_function
import os
import argparse
from bigdl.orca import init_orca_context, stop_orca_context
from bigdl.orca import OrcaContext
Explanation: Distributed PyTorch using Orca APIs
In this guide we will describe how to scale out PyTorch programs using Orca in 4 simple steps.
End of explanation
# recommended to set it to True when running BigDL in Jupyter notebook.
OrcaContext.log_output = True # (this will display terminal's stdout and stderr in the Jupyter notebook).
cluster_mode = "local"
if cluster_mode == "local":
init_orca_context(cores=1, memory="2g") # run in local mode
elif cluster_mode == "k8s":
init_orca_context(cluster_mode="k8s", num_nodes=2, cores=4) # run on K8s cluster
elif cluster_mode == "yarn":
init_orca_context(
cluster_mode="yarn-client", cores=4, num_nodes=2, memory="2g",
driver_memory="10g", driver_cores=1,
conf={"spark.rpc.message.maxSize": "1024",
"spark.task.maxFailures": "1",
"spark.driver.extraJavaOptions": "-Dbigdl.failure.retryTimes=1"}) # run on Hadoop YARN cluster
Explanation: Step 1: Init Orca Context
End of explanation
import torch
import torch.nn as nn
import torch.nn.functional as F
class LeNet(nn.Module):
def __init__(self):
super(LeNet, self).__init__()
self.conv1 = nn.Conv2d(1, 20, 5, 1)
self.conv2 = nn.Conv2d(20, 50, 5, 1)
self.fc1 = nn.Linear(4*4*50, 500)
self.fc2 = nn.Linear(500, 10)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.max_pool2d(x, 2, 2)
x = F.relu(self.conv2(x))
x = F.max_pool2d(x, 2, 2)
x = x.view(-1, 4*4*50)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return F.log_softmax(x, dim=1)
model = LeNet()
model.train()
criterion = nn.NLLLoss()
lr = 0.001
adam = torch.optim.Adam(model.parameters(), lr)
Explanation: This is the only place where you need to specify local or distributed mode. View Orca Context for more details.
Note: You should export HADOOP_CONF_DIR=/path/to/hadoop/conf/dir when you run on Hadoop YARN cluster.
Step 2: Define the Model
You may define your model, loss and optimizer in the same way as in any standard (single node) PyTorch program.
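Before handing the model to Orca, an optional local sanity check (an addition, not part of the original guide) is to push one dummy MNIST-shaped batch through it and confirm the output shape:
# Optional sanity check: one fake batch of 28x28 single-channel images.
dummy = torch.randn(4, 1, 28, 28)
with torch.no_grad():
    print(model(dummy).shape)  # expected: torch.Size([4, 10])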
End of explanation
import torch
from torchvision import datasets, transforms
torch.manual_seed(0)
dir='/tmp/dataset'
batch_size=320
test_batch_size=320
train_loader = torch.utils.data.DataLoader(
datasets.MNIST(dir, train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size= batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(
datasets.MNIST(dir, train=False,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=test_batch_size, shuffle=False)
Explanation: Step 3: Define Train Dataset
You can define the dataset using standard Pytorch DataLoader.
End of explanation
from bigdl.orca.learn.pytorch import Estimator
from bigdl.orca.learn.metrics import Accuracy
est = Estimator.from_torch(model=model, optimizer=adam, loss=criterion, metrics=[Accuracy()])
Explanation: Step 4: Fit with Orca Estimator
First, Create an Estimator.
End of explanation
from bigdl.orca.learn.trigger import EveryEpoch
est.fit(data=train_loader, epochs=1, validation_data=test_loader,
checkpoint_trigger=EveryEpoch())
Explanation: Next, fit and evaluate using the Estimator.
End of explanation
result = est.evaluate(data=test_loader)
for r in result:
print(r, ":", result[r])
Explanation: Finally, evaluate using the Estimator.
End of explanation
# stop orca context when program finishes
stop_orca_context()
Explanation: The accuracy of this model has reached 98%.
End of explanation |
1,149 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Configuring MNE python
This tutorial gives a short introduction to MNE configurations.
Step1: MNE-python stores configurations to a folder called .mne in the user's
home directory, or to AppData directory on Windows. The path to the config
file can be found out by calling
Step2: These configurations include information like sample data paths and plotter
window sizes. Files inside this folder should never be modified manually.
Let's see what the configurations contain.
Step3: We see fields like "MNE_DATASETS_SAMPLE_PATH". As the name suggests, this is
the path the sample data is downloaded to. All the fields in the
configuration file can be modified by calling
Step4: The default value is now set to INFO. This level will now be used by default
every time we call a function in MNE. We can set the global logging level for
only this session by calling
Step5: Notice how the value in the config file was not changed. Logging level of
WARNING only applies for this session. Let's see what logging level of
WARNING prints for
Step6: Nothing. This means that no warnings were emitted during the computation. If
you look at the documentation of
Step7: As you see there is some info about what the function is doing. The logging
level can be set to 'DEBUG', 'INFO', 'WARNING', 'ERROR' or 'CRITICAL'. It can
also be set to an integer or a boolean value. The correspondence to string
values can be seen in the table below. verbose=None uses the default
value from the configuration file.
+----------+---------+---------+
| String | Integer | Boolean |
+==========+=========+=========+
| DEBUG | 10 | |
+----------+---------+---------+
| INFO | 20 | True |
+----------+---------+---------+
| WARNING | 30 | False |
+----------+---------+---------+
| ERROR | 40 | |
+----------+---------+---------+
| CRITICAL | 50 | |
+----------+---------+---------+ | Python Code:
import os.path as op
import mne
from mne.datasets.sample import data_path
fname = op.join(data_path(), 'MEG', 'sample', 'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(fname).crop(0, 10)
original_level = mne.get_config('MNE_LOGGING_LEVEL', 'INFO')
Explanation: Configuring MNE python
This tutorial gives a short introduction to MNE configurations.
End of explanation
print(mne.get_config_path())
Explanation: MNE-python stores configurations to a folder called .mne in the user's
home directory, or to AppData directory on Windows. The path to the config
file can be found out by calling :func:mne.get_config_path.
End of explanation
print(mne.get_config())
Explanation: These configurations include information like sample data paths and plotter
window sizes. Files inside this folder should never be modified manually.
Let's see what the configurations contain.
End of explanation
mne.set_config('MNE_LOGGING_LEVEL', 'INFO')
print(mne.get_config(key='MNE_LOGGING_LEVEL'))
Explanation: We see fields like "MNE_DATASETS_SAMPLE_PATH". As the name suggests, this is
the path the sample data is downloaded to. All the fields in the
configuration file can be modified by calling :func:mne.set_config.
Logging
Configurations also include the default logging level for the functions. This
field is called "MNE_LOGGING_LEVEL".
End of explanation
mne.set_log_level('WARNING')
print(mne.get_config(key='MNE_LOGGING_LEVEL'))
Explanation: The default value is now set to INFO. This level will now be used by default
every time we call a function in MNE. We can set the global logging level for
only this session by calling :func:mne.set_log_level function.
End of explanation
cov = mne.compute_raw_covariance(raw)
Explanation: Notice how the value in the config file was not changed. Logging level of
WARNING only applies for this session. Let's see what logging level of
WARNING prints for :func:mne.compute_raw_covariance.
End of explanation
cov = mne.compute_raw_covariance(raw, verbose=True)
Explanation: Nothing. This means that no warnings were emitted during the computation. If
you look at the documentation of :func:mne.compute_raw_covariance, you
notice the verbose keyword. Setting this parameter does not touch the
configurations, but sets the logging level for just this one function call.
Let's see what happens with logging level of INFO.
End of explanation
mne.set_config('MNE_LOGGING_LEVEL', original_level)
Explanation: As you see there is some info about what the function is doing. The logging
level can be set to 'DEBUG', 'INFO', 'WARNING', 'ERROR' or 'CRITICAL'. It can
also be set to an integer or a boolean value. The correspondence to string
values can be seen in the table below. verbose=None uses the default
value from the configuration file.
+----------+---------+---------+
| String | Integer | Boolean |
+==========+=========+=========+
| DEBUG | 10 | |
+----------+---------+---------+
| INFO | 20 | True |
+----------+---------+---------+
| WARNING | 30 | False |
+----------+---------+---------+
| ERROR | 40 | |
+----------+---------+---------+
| CRITICAL | 50 | |
+----------+---------+---------+
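To make the table concrete, the following calls (a sketch reusing raw from above) all request the same WARNING-level behaviour for a single function call:
# Equivalent verbose values for one call (sketch).
cov_a = mne.compute_raw_covariance(raw, verbose='WARNING')
cov_b = mne.compute_raw_covariance(raw, verbose=30)
cov_c = mne.compute_raw_covariance(raw, verbose=False)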
End of explanation |
1,150 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A simple SEA model of two rooms with a dividing wall
In this notebook we create a simple SEA model of two rooms divided by a concrete wall.
We start by importing some of the modules that are needed.
Step1: Creating a SEA model
To create a SEA model we begin by creating an instance of System.
Step2: We are only interested in a limited frequency range, e.g. the octave bands ranging from 20 to 4000 Hz.
Materials
The rooms are filled with air, so we add air as material.
Step3: We don't know the shear modulus of concrete, so let's calculate it. With the function modulus we can calculate any elastic modulus of an isotropic material given two other moduli.
Step4: Just to be sure, we can list the properties of the concrete.
Step5: Rooms and wall
Now we add the two rooms.
Step6: Given the material type and the volume we can for example calculate the mass of the air in the room
Step7: or plot the modal density of the subsystem representing longitudinal waves
Step8: We now add the concrete wall.
Step9: Let's have a look at the modal densities of the subsystems.
Step10: The modal density of the subsystem representing bending waves in the wall seems to remain constant.
It's also possible to inspect objects further. For example, as was shown with the mass of the room, it is possible to request e.g. multiple parameters.
Step11: Shown is now a table, but what is returned is in fact a pandas DataFrame. Pandas is a data analysis toolkit and offers powerful tools to analyse data and to export data to e.g. spreadsheet formats like Excel.
Junction
The rooms and the wall form a junction and connect along a surface.
Step12: Now, when we call junction1.update_couplings it tries to determine all the couplings between the subsystems of the components that were added.
Step13: We can now for example see the coupling loss factors of all the couplings that were added.
Step14: Now that both the coupling loss factors and damping loss factors are known we can also list the total loss factor
Step15: The coupling loss factor of the coupling between the rooms is based on the non-resonant transmission coefficient.
Step16: Excitation
We have defined the subsystems and couplings. What's left is to add an excitation to the system.
Step17: The input power $P$ depends on the volume velocity $U$ of the source and the real part of the radiation impedance, i.e. the radiation resistance $R$.
Step18: The resistance increases with frequency and therefore the radiated power increases similarly.
Step19: Solving the system
Now we can solve the system.
Step20: We can have a look at the modal energy
Step21: but those values are generally hard to interpret. Instead, we could just request the sound pressure levels in the rooms
Step22: or plot them.
Step23: Let's consider the sound pressure level difference between the two rooms.
Step24: Obviously, we can also look at the modal energies
Step25: or see the level contributions of the individual subsystems.
Step26: Path analysis and graphs
All the objects in SeaPy remember to which other objects they're connected. For example, we can list the subsystems in a component.
Step27: As soon as a model gets a bit bigger it can be hard to track which objects are connected. One way to help with keeping an overview is by drawing graphs.
Step28: The following graph shows the relation between components and subsystems.
Step29: We can also show for example subsystems and couplings.
Step30: Path analysis
By creating graphs of subsystems and couplings it is also straightforward to check whether subsystems are connected in any way
Step31: and to determine the possible paths between any two subsystems.
Step32: We can also calculate the level difference due to a transmission path.
Step33: Saving and restoring a model
SEA models can be saved as YAML.
Step34: YAML is a human-readable file format. Models can be implemented or edited in the YAML file if desired.
Step35: Loading is done using the load method.
Step36: To verify whether the models are similar we check the modal energy.
Step37: That looks correct. To be really sure we just calculate the modal energies again in the second model, to verify that other parameters have also been restored.
Step38: Same results.
Source code and documentation
Source code of seapy can be found at GitHub and documentation right here. | Python Code:
import numpy as np
import pandas as pd
pd.set_option('float_format', '{:.2e}'.format)
import matplotlib
%matplotlib inline
Explanation: A simple SEA model of two rooms with a dividing wall
In this notebook we create a simple SEA model of two rooms divided by a concrete wall.
We start by importing some of the modules that are needed.
End of explanation
from seapy import System
from acoustics.signal import OctaveBand
f = OctaveBand(fstart=20.0, fstop=4000.0, fraction=1)
system1 = System(f)
Explanation: Creating a SEA model
To create a SEA model we begin by creating an instance of System.
End of explanation
air = system1.add_material('air',
'MaterialGas',
density = 1.296,
temperature = 293.0,
bulk = 1.01e5,
loss_factor=0.05)
concrete = system1.add_material('concrete',
'MaterialSolid',
young=3.0e10,
poisson=0.15,
density=2.3e3,
loss_factor=0.02)
Explanation: We are only interested in a limited frequency range, e.g. the octave bands ranging from 20 to 4000 Hz.
Materials
The rooms are filled with air, so we add air as material.
End of explanation
from seapy.materials.materialsolid import modulus
concrete.shear = modulus('shear', young=3.0e10, poisson=0.15)
Explanation: We don't know the shear modulus of concrete, so let's calculate it. With the function modulus we can calculate any elastic modulus of an isotropic material given two other moduli.
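As a cross-check on that call (a standard isotropic relation, added for clarity): the shear modulus follows from the Young's modulus and Poisson's ratio as $G = E / (2 (1 + \nu))$, so here we expect roughly 1.3e10 Pa.
# Quick check of the expected shear modulus value.
print(3.0e10 / (2 * (1 + 0.15)))  # ~1.30e10 Pa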
End of explanation
concrete.info(['density',
'poisson',
'young',
'shear',])
Explanation: Just to be sure, we can list the properties of the concrete.
End of explanation
room1 = system1.add_component('room1',
'Component3DAcoustical',
material='air',
length=4.0,
height=2.5,
width=5.0)
room2 = system1.add_component('room2',
'Component3DAcoustical',
material='air',
length=5.0,
height=2.5,
width=5.0)
Explanation: Rooms and wall
Now we add the two rooms.
End of explanation
room1.mass
Explanation: Given the material type and the volume we can for example calculate the mass of the air in the room
End of explanation
fig = room1.subsystem_long.plot("modal_density", yscale='log')
Explanation: or plot the modal density of the subsystem representing longitudinal waves
End of explanation
wall = system1.add_component('wall',
'Component2DPlate',
material='concrete',
length=3.0,
width=2.5,
height=0.05)
Explanation: We now add the concrete wall.
End of explanation
system1.info(system1.subsystems, 'modal_density')
Explanation: Let's have a look at the modal densities of the subsystems.
End of explanation
wall.subsystem_bend.info(['soundspeed_group',
'soundspeed_phase',
'modal_density',
'average_frequency_spacing',
'power_input',
'dlf',
'tlf',])
Explanation: The modal density of the subsystem representing bending waves in the wall seems to remain constant.
It's also possible to inspect objects further. For example, as was shown with the mass of the room, it is possible to request e.g. multiple parameters.
End of explanation
junction1 = system1.add_junction('junction1', 'Junction', shape='Surface', components=['room1',
'room2',
'wall'])
Explanation: Shown is now a table, but what is returned is in fact a pandas DataFrame. Pandas is a data analysis toolkit and offers powerful tools to analyse data and to export data to e.g. spreadsheet formats like Excel.
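For example, a hypothetical one-liner (assuming the returned DataFrame and pandas' to_excel with an installed Excel writer engine) would be:
# Sketch: export an info() table to a spreadsheet.
wall.subsystem_bend.info(['modal_density', 'tlf']).to_excel('wall_bend.xlsx')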
Junction
The rooms and the wall form a junction and connect along a surface.
End of explanation
junction1.update_couplings()
Explanation: Now, when we call junction1.update_couplings it tries to determine all the couplings between the subsystems of the components that were added.
End of explanation
system1.info(system1.couplings, 'clf')
Explanation: We can now for example see the coupling loss factors of all the couplings that were added.
End of explanation
system1.info(system1.subsystems, 'tlf')
Explanation: Now that both the coupling loss factors and damping loss factors are known we can also list the total loss factor
End of explanation
system1.get_object('room1_SubsystemLong_room2_SubsystemLong').info(['tau', 'sound_reduction_index'])
system1.get_object('wall_SubsystemBend_room1_SubsystemLong').info(['critical_frequency'])
Explanation: The coupling loss factor of the coupling between the rooms is based on the non-resonant transmission coefficient.
End of explanation
excitation1 = room1.subsystem_long.add_excitation('excitation1',
'ExcitationPointVolume',
velocity=0.001,
radius=0.05)
Explanation: Excitation
We have defined the subsystems and couplings. What's left is to add an excitation to the system.
End of explanation
excitation1.info(['resistance'])
Explanation: The input power $P$ depends on the volume velocity $U$ of the source and the real part of the radiation impedance, i.e. the radiation resistance $R$.
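Written out (a standard relation added for clarity, with $U$ as the harmonic volume velocity amplitude; for rms values the factor 1/2 is dropped):
$$ P = \tfrac{1}{2}\,\mathrm{Re}\{Z\}\,|U|^{2} = \tfrac{1}{2}\,R\,|U|^{2} $$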
End of explanation
fig = excitation1.plot('power_level')
Explanation: The resistance increases with frequency and therefore the radiated power increases similarly.
End of explanation
system1.solve()
Explanation: Solving the system
Now we can solve the system.
End of explanation
system1.info(system1.subsystems, 'modal_energy')
Explanation: We can have a look at the modal energy
End of explanation
system1.info(['room1', 'room2'], 'pressure_level')
Explanation: but those values are generally hard to interpret. Instead, we could just request the sound pressure levels in the rooms
End of explanation
fig = system1.plot(['room1', 'room2'], 'pressure_level')
Explanation: or plot them.
End of explanation
(room1.info(['pressure_level']) - room2.info(['pressure_level']))
fig = system1.get_object('room1_SubsystemLong_room2_SubsystemLong').plot('sound_reduction_index')
Explanation: Let's consider the sound pressure level difference between the two rooms.
End of explanation
system1.info(system1.subsystems, 'modal_energy')
Explanation: Obviously, we can also look at the modal energies
End of explanation
system1.info(system1.subsystems, 'velocity_level')
system1.info(system1.subsystems, 'pressure_level')
Explanation: or see the level contributions of the individual subsystems.
End of explanation
for obj in wall.linked_subsystems:
print(obj.name)
Explanation: Path analysis and graphs
All the objects in SeaPy remember to which other objects they're connected. For example, we can list the subsystems in a component.
End of explanation
import networkx as nx
Explanation: As soon as a model gets a bit bigger it can be hard to track which objects are connected. One way to help with keeping an overview is by drawing graphs.
End of explanation
G = system1.path_analysis.graph(['components', 'subsystems'])
nx.draw_networkx(G)
Explanation: The following graph shows the relation between components and subsystems.
End of explanation
G = system1.path_analysis.graph(['subsystems', 'couplings'])
fig = nx.draw_networkx(G)
from seapy.tools import graph_couplings
G = graph_couplings(system1)
fig = nx.draw_networkx(G)
Explanation: We can also show for example subsystems and couplings.
End of explanation
system1.path_analysis.has_path('room1_SubsystemLong', 'room2_SubsystemLong')
Explanation: Path analysis
By creating graphs of subsystems and couplings it is also straightforward to check whether subsystems are connected in any way
End of explanation
for path in system1.path_analysis.paths('room1_SubsystemLong', 'room2_SubsystemLong'):
print(path)
Explanation: and to determine the possible paths between any two subsystems.
End of explanation
for path in system1.path_analysis.paths('room1_SubsystemLong', 'room2_SubsystemLong'):
print(path.level_difference)
list(system1.path_analysis.paths('room1_SubsystemLong', 'room2_SubsystemLong'))[0].level_difference
Explanation: We can also calculate the level difference due to a transmission path.
End of explanation
system1.save("model.yaml")
Explanation: Saving and restoring a model
SEA models can be saved as YAML.
End of explanation
!head -n 20 model.yaml
Explanation: YAML is a human-readable file format. Models can be implemented or edited in the YAML file if desired.
End of explanation
system2 = System.load("model.yaml")
Explanation: Loading is done using the load method.
End of explanation
system1.info(system1.subsystems, 'modal_energy')
system2.info(system2.subsystems, 'modal_energy')
Explanation: To verify whether the models are similar we check the modal energy.
End of explanation
system2.solve()
system2.info(system2.subsystems, 'modal_energy')
Explanation: That looks correct. To be really sure we just calculate the modal energies again in the second model, to verify that other parameters have also been restored.
End of explanation
from IPython.display import IFrame
IFrame("https://seapy.readthedocs.io/en/latest/", width=800, height=600)
Explanation: Same results.
Source code and documentation
Source code of seapy can be found at GitHub and documentation right here.
End of explanation |
1,151 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: 理解语言的 Transformer 模型
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: 设置输入流水线(input pipeline)
使用 TFDS 来导入 葡萄牙语-英语翻译数据集,该数据集来自于 TED 演讲开放翻译项目.
该数据集包含来约 50000 条训练样本,1100 条验证样本,以及 2000 条测试样本。
Step3: 从训练数据集创建自定义子词分词器(subwords tokenizer)。
Step4: 如果单词不在词典中,则分词器(tokenizer)通过将单词分解为子词来对字符串进行编码。
Step5: 将开始和结束标记(token)添加到输入和目标。
Step6: Note:为了使本示例较小且相对较快,删除长度大于40个标记的样本。
Step7: .map() 内部的操作以图模式(graph mode)运行,.map() 接收一个不具有 numpy 属性的图张量(graph tensor)。该分词器(tokenizer)需要将一个字符串或 Unicode 符号,编码成整数。因此,您需要在 tf.py_function 内部运行编码过程,tf.py_function 接收一个 eager 张量,该 eager 张量有一个包含字符串值的 numpy 属性。
Step8: 位置编码(Positional encoding)
因为该模型并不包括任何的循环(recurrence)或卷积,所以模型添加了位置编码,为模型提供一些关于单词在句子中相对位置的信息。
位置编码向量被加到嵌入(embedding)向量中。嵌入表示一个 d 维空间的标记,在 d 维空间中有着相似含义的标记会离彼此更近。但是,嵌入并没有对在一句话中的词的相对位置进行编码。因此,当加上位置编码后,词将基于它们含义的相似度以及它们在句子中的位置,在 d 维空间中离彼此更近。
参看 位置编码 的 notebook 了解更多信息。计算位置编码的公式如下:
$$\Large{PE_{(pos, 2i)} = sin(pos / 10000^{2i / d_{model}})} $$
$$\Large{PE_{(pos, 2i+1)} = cos(pos / 10000^{2i / d_{model}})} $$
Step9: 遮挡(Masking)
遮挡一批序列中所有的填充标记(pad tokens)。这确保了模型不会将填充作为输入。该 mask 表明填充值 0 出现的位置:在这些位置 mask 输出 1,否则输出 0。
Step10: 前瞻遮挡(look-ahead mask)用于遮挡一个序列中的后续标记(future tokens)。换句话说,该 mask 表明了不应该使用的条目。
这意味着要预测第三个词,将仅使用第一个和第二个词。与此类似,预测第四个词,仅使用第一个,第二个和第三个词,依此类推。
Step12: 按比缩放的点积注意力(Scaled dot product attention)
<img src="https
Step13: 当 softmax 在 K 上进行归一化后,它的值决定了分配到 Q 的重要程度。
输出表示注意力权重和 V(数值)向量的乘积。这确保了要关注的词保持原样,而无关的词将被清除掉。
Step14: 将所有请求一起传递。
Step16: 多头注意力(Multi-head attention)
<img src="https
Step17: 创建一个 MultiHeadAttention 层进行尝试。在序列中的每个位置 y,MultiHeadAttention 在序列中的所有其他位置运行所有8个注意力头,在每个位置y,返回一个新的同样长度的向量。
Step18: 点式前馈网络(Point wise feed forward network)
点式前馈网络由两层全联接层组成,两层之间有一个 ReLU 激活函数。
Step19: 编码与解码(Encoder and decoder)
<img src="https
Step20: 解码器层(Decoder layer)
每个解码器层包括以下子层:
遮挡的多头注意力(前瞻遮挡和填充遮挡)
多头注意力(用填充遮挡)。V(数值)和 K(主键)接收编码器输出作为输入。Q(请求)接收遮挡的多头注意力子层的输出。
点式前馈网络
每个子层在其周围有一个残差连接,然后进行层归一化。每个子层的输出是 LayerNorm(x + Sublayer(x))。归一化是在 d_model(最后一个)维度完成的。
Transformer 中共有 N 个解码器层。
当 Q 接收到解码器的第一个注意力块的输出,并且 K 接收到编码器的输出时,注意力权重表示根据编码器的输出赋予解码器输入的重要性。换一种说法,解码器通过查看编码器输出和对其自身输出的自注意力,预测下一个词。参看按比缩放的点积注意力部分的演示。
Step21: 编码器(Encoder)
编码器 包括:
1. 输入嵌入(Input Embedding)
2. 位置编码(Positional Encoding)
3. N 个编码器层(encoder layers)
输入经过嵌入(embedding)后,该嵌入与位置编码相加。该加法结果的输出是编码器层的输入。编码器的输出是解码器的输入。
Step22: 解码器(Decoder)
解码器包括:
1. 输出嵌入(Output Embedding)
2. 位置编码(Positional Encoding)
3. N 个解码器层(decoder layers)
目标(target)经过一个嵌入后,该嵌入和位置编码相加。该加法结果是解码器层的输入。解码器的输出是最后的线性层的输入。
Step23: 创建 Transformer
Transformer 包括编码器,解码器和最后的线性层。解码器的输出是线性层的输入,返回线性层的输出。
Step24: 配置超参数(hyperparameters)
为了让本示例小且相对较快,已经减小了num_layers、 d_model 和 dff 的值。
Transformer 的基础模型使用的数值为:num_layers=6,d_model = 512,dff = 2048。关于所有其他版本的 Transformer,请查阅论文。
Note:通过改变以下数值,您可以获得在许多任务上达到最先进水平的模型。
Step25: 优化器(Optimizer)
根据论文中的公式,将 Adam 优化器与自定义的学习速率调度程序(scheduler)配合使用。
$$\Large{lrate = d_{model}^{-0.5} * min(step{_}num^{-0.5}, step{_}num * warmup{_}steps^{-1.5})}$$
Step26: 损失函数与指标(Loss and metrics)
由于目标序列是填充(padded)过的,因此在计算损失函数时,应用填充遮挡非常重要。
Step27: 训练与检查点(Training and checkpointing)
Step28: 创建检查点的路径和检查点管理器(manager)。这将用于在每 n 个周期(epochs)保存检查点。
Step29: 目标(target)被分成了 tar_inp 和 tar_real。tar_inp 作为输入传递到解码器。tar_real 是位移了 1 的同一个输入:在 tar_inp 中的每个位置,tar_real 包含了应该被预测到的下一个标记(token)。
例如,sentence = "SOS A lion in the jungle is sleeping EOS"
tar_inp = "SOS A lion in the jungle is sleeping"
tar_real = "A lion in the jungle is sleeping EOS"
Transformer 是一个自回归(auto-regressive)模型:它一次作一个部分的预测,然后使用到目前为止的自身的输出来决定下一步要做什么。
在训练过程中,本示例使用了 teacher-forcing 的方法(就像文本生成教程中一样)。无论模型在当前时间步骤下预测出什么,teacher-forcing 方法都会将真实的输出传递到下一个时间步骤上。
当 transformer 预测每个词时,自注意力(self-attention)功能使它能够查看输入序列中前面的单词,从而更好地预测下一个单词。
为了防止模型在期望的输出上达到峰值,模型使用了前瞻遮挡(look-ahead mask)。
Step30: 葡萄牙语作为输入语言,英语为目标语言。
Step31: 评估(Evaluate)
以下步骤用于评估:
用葡萄牙语分词器(tokenizer_pt)编码输入语句。此外,添加开始和结束标记,这样输入就与模型训练的内容相同。这是编码器输入。
解码器输入为 start token == tokenizer_en.vocab_size。
计算填充遮挡和前瞻遮挡。
解码器通过查看编码器输出和它自身的输出(自注意力)给出预测。
选择最后一个词并计算它的 argmax。
将预测的词连接到解码器输入,然后传递给解码器。
在这种方法中,解码器根据它预测的之前的词预测下一个。
Note:这里使用的模型具有较小的能力以保持相对较快,因此预测可能不太正确。要复现论文中的结果,请使用全部数据集,并通过修改上述超参数来使用基础 transformer 模型或者 transformer XL。
Step32: 您可以为 plot 参数传递不同的层和解码器的注意力模块。 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
import tensorflow_datasets as tfds
import tensorflow as tf
import time
import numpy as np
import matplotlib.pyplot as plt
Explanation: 理解语言的 Transformer 模型
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://tensorflow.google.cn/tutorials/text/transformer">
<img src="https://tensorflow.google.cn/images/tf_logo_32px.png" />
在 tensorflow.google.cn 上查看</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/text/transformer.ipynb">
<img src="https://tensorflow.google.cn/images/colab_logo_32px.png" />
在 Google Colab 运行</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/text/transformer.ipynb">
<img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" />
在 Github 上查看源代码</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/text/transformer.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png" />下载此 notebook</a>
</td>
</table>
Note: 我们的 TensorFlow 社区翻译了这些文档。因为社区翻译是尽力而为, 所以无法保证它们是最准确的,并且反映了最新的
官方英文文档。如果您有改进此翻译的建议, 请提交 pull request 到
tensorflow/docs GitHub 仓库。要志愿地撰写或者审核译文,请加入
[email protected] Google Group
本教程训练了一个 <a href="https://arxiv.org/abs/1706.03762" class="external">Transformer 模型</a> 用于将葡萄牙语翻译成英语。这是一个高级示例,假定您具备文本生成(text generation)和 注意力机制(attention) 的知识。
Transformer 模型的核心思想是自注意力机制(self-attention)——能注意输入序列的不同位置以计算该序列的表示的能力。Transformer 创建了多层自注意力层(self-attetion layers)组成的堆栈,下文的按比缩放的点积注意力(Scaled dot product attention)和多头注意力(Multi-head attention)部分对此进行了说明。
一个 transformer 模型用自注意力层而非 RNNs 或 CNNs 来处理变长的输入。这种通用架构有一系列的优势:
它不对数据间的时间/空间关系做任何假设。这是处理一组对象(objects)的理想选择(例如,星际争霸单位(StarCraft units))。
层输出可以并行计算,而非像 RNN 这样的序列计算。
远距离项可以影响彼此的输出,而无需经过许多 RNN 步骤或卷积层(例如,参见场景记忆 Transformer(Scene Memory Transformer))
它能学习长距离的依赖。在许多序列任务中,这是一项挑战。
该架构的缺点是:
对于时间序列,一个单位时间的输出是从整个历史记录计算的,而非仅从输入和当前的隐含状态计算得到。这可能效率较低。
如果输入确实有时间/空间的关系,像文本,则必须加入一些位置编码,否则模型将有效地看到一堆单词。
在此 notebook 中训练完模型后,您将能输入葡萄牙语句子,得到其英文翻译。
<img src="https://tensorflow.google.cn/images/tutorials/transformer/attention_map_portuguese.png" width="800" alt="Attention heatmap">
End of explanation
examples, metadata = tfds.load('ted_hrlr_translate/pt_to_en', with_info=True,
as_supervised=True)
train_examples, val_examples = examples['train'], examples['validation']
Explanation: 设置输入流水线(input pipeline)
使用 TFDS 来导入 葡萄牙语-英语翻译数据集,该数据集来自于 TED 演讲开放翻译项目.
该数据集包含来约 50000 条训练样本,1100 条验证样本,以及 2000 条测试样本。
End of explanation
tokenizer_en = tfds.features.text.SubwordTextEncoder.build_from_corpus(
(en.numpy() for pt, en in train_examples), target_vocab_size=2**13)
tokenizer_pt = tfds.features.text.SubwordTextEncoder.build_from_corpus(
(pt.numpy() for pt, en in train_examples), target_vocab_size=2**13)
sample_string = 'Transformer is awesome.'
tokenized_string = tokenizer_en.encode(sample_string)
print ('Tokenized string is {}'.format(tokenized_string))
original_string = tokenizer_en.decode(tokenized_string)
print ('The original string: {}'.format(original_string))
assert original_string == sample_string
Explanation: 从训练数据集创建自定义子词分词器(subwords tokenizer)。
End of explanation
for ts in tokenized_string:
print ('{} ----> {}'.format(ts, tokenizer_en.decode([ts])))
BUFFER_SIZE = 20000
BATCH_SIZE = 64
Explanation: 如果单词不在词典中,则分词器(tokenizer)通过将单词分解为子词来对字符串进行编码。
End of explanation
def encode(lang1, lang2):
lang1 = [tokenizer_pt.vocab_size] + tokenizer_pt.encode(
lang1.numpy()) + [tokenizer_pt.vocab_size+1]
lang2 = [tokenizer_en.vocab_size] + tokenizer_en.encode(
lang2.numpy()) + [tokenizer_en.vocab_size+1]
return lang1, lang2
Explanation: 将开始和结束标记(token)添加到输入和目标。
End of explanation
MAX_LENGTH = 40
def filter_max_length(x, y, max_length=MAX_LENGTH):
return tf.logical_and(tf.size(x) <= max_length,
tf.size(y) <= max_length)
Explanation: Note:为了使本示例较小且相对较快,删除长度大于40个标记的样本。
End of explanation
def tf_encode(pt, en):
result_pt, result_en = tf.py_function(encode, [pt, en], [tf.int64, tf.int64])
result_pt.set_shape([None])
result_en.set_shape([None])
return result_pt, result_en
train_dataset = train_examples.map(tf_encode)
train_dataset = train_dataset.filter(filter_max_length)
# 将数据集缓存到内存中以加快读取速度。
train_dataset = train_dataset.cache()
train_dataset = train_dataset.shuffle(BUFFER_SIZE).padded_batch(BATCH_SIZE)
train_dataset = train_dataset.prefetch(tf.data.experimental.AUTOTUNE)
val_dataset = val_examples.map(tf_encode)
val_dataset = val_dataset.filter(filter_max_length).padded_batch(BATCH_SIZE)
pt_batch, en_batch = next(iter(val_dataset))
pt_batch, en_batch
Explanation: .map() 内部的操作以图模式(graph mode)运行,.map() 接收一个不具有 numpy 属性的图张量(graph tensor)。该分词器(tokenizer)需要将一个字符串或 Unicode 符号,编码成整数。因此,您需要在 tf.py_function 内部运行编码过程,tf.py_function 接收一个 eager 张量,该 eager 张量有一个包含字符串值的 numpy 属性。
End of explanation
def get_angles(pos, i, d_model):
angle_rates = 1 / np.power(10000, (2 * (i//2)) / np.float32(d_model))
return pos * angle_rates
def positional_encoding(position, d_model):
angle_rads = get_angles(np.arange(position)[:, np.newaxis],
np.arange(d_model)[np.newaxis, :],
d_model)
# 将 sin 应用于数组中的偶数索引(indices);2i
angle_rads[:, 0::2] = np.sin(angle_rads[:, 0::2])
# 将 cos 应用于数组中的奇数索引;2i+1
angle_rads[:, 1::2] = np.cos(angle_rads[:, 1::2])
pos_encoding = angle_rads[np.newaxis, ...]
return tf.cast(pos_encoding, dtype=tf.float32)
pos_encoding = positional_encoding(50, 512)
print (pos_encoding.shape)
plt.pcolormesh(pos_encoding[0], cmap='RdBu')
plt.xlabel('Depth')
plt.xlim((0, 512))
plt.ylabel('Position')
plt.colorbar()
plt.show()
Explanation: 位置编码(Positional encoding)
因为该模型并不包括任何的循环(recurrence)或卷积,所以模型添加了位置编码,为模型提供一些关于单词在句子中相对位置的信息。
位置编码向量被加到嵌入(embedding)向量中。嵌入表示一个 d 维空间的标记,在 d 维空间中有着相似含义的标记会离彼此更近。但是,嵌入并没有对在一句话中的词的相对位置进行编码。因此,当加上位置编码后,词将基于它们含义的相似度以及它们在句子中的位置,在 d 维空间中离彼此更近。
参看 位置编码 的 notebook 了解更多信息。计算位置编码的公式如下:
$$\Large{PE_{(pos, 2i)} = sin(pos / 10000^{2i / d_{model}})} $$
$$\Large{PE_{(pos, 2i+1)} = cos(pos / 10000^{2i / d_{model}})} $$
End of explanation
def create_padding_mask(seq):
seq = tf.cast(tf.math.equal(seq, 0), tf.float32)
# 添加额外的维度来将填充加到
# 注意力对数(logits)。
return seq[:, tf.newaxis, tf.newaxis, :] # (batch_size, 1, 1, seq_len)
x = tf.constant([[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]])
create_padding_mask(x)
Explanation: 遮挡(Masking)
遮挡一批序列中所有的填充标记(pad tokens)。这确保了模型不会将填充作为输入。该 mask 表明填充值 0 出现的位置:在这些位置 mask 输出 1,否则输出 0。
End of explanation
def create_look_ahead_mask(size):
mask = 1 - tf.linalg.band_part(tf.ones((size, size)), -1, 0)
return mask # (seq_len, seq_len)
x = tf.random.uniform((1, 3))
temp = create_look_ahead_mask(x.shape[1])
temp
Explanation: 前瞻遮挡(look-ahead mask)用于遮挡一个序列中的后续标记(future tokens)。换句话说,该 mask 表明了不应该使用的条目。
这意味着要预测第三个词,将仅使用第一个和第二个词。与此类似,预测第四个词,仅使用第一个,第二个和第三个词,依此类推。
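In the decoder the look-ahead mask is usually combined with the padding mask of the same target batch; a small illustrative sketch (larger Transformer implementations typically wrap this kind of logic in a helper) is:
# Sketch: combine padding and look-ahead masks for the decoder's first attention block.
def combined_decoder_mask(tar):
    look_ahead = create_look_ahead_mask(tf.shape(tar)[1])
    padding = create_padding_mask(tar)
    return tf.maximum(padding, look_ahead)  # blocks a position if either mask does
combined_decoder_mask(tf.constant([[7, 6, 0, 0]]))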
End of explanation
def scaled_dot_product_attention(q, k, v, mask):
计算注意力权重。
q, k, v 必须具有匹配的前置维度。
k, v 必须有匹配的倒数第二个维度,例如:seq_len_k = seq_len_v。
虽然 mask 根据其类型(填充或前瞻)有不同的形状,
但是 mask 必须能进行广播转换以便求和。
参数:
q: 请求的形状 == (..., seq_len_q, depth)
k: 主键的形状 == (..., seq_len_k, depth)
v: 数值的形状 == (..., seq_len_v, depth_v)
mask: Float 张量,其形状能转换成
(..., seq_len_q, seq_len_k)。默认为None。
返回值:
输出,注意力权重
matmul_qk = tf.matmul(q, k, transpose_b=True) # (..., seq_len_q, seq_len_k)
# 缩放 matmul_qk
dk = tf.cast(tf.shape(k)[-1], tf.float32)
scaled_attention_logits = matmul_qk / tf.math.sqrt(dk)
# 将 mask 加入到缩放的张量上。
if mask is not None:
scaled_attention_logits += (mask * -1e9)
# softmax 在最后一个轴(seq_len_k)上归一化,因此分数
# 相加等于1。
attention_weights = tf.nn.softmax(scaled_attention_logits, axis=-1) # (..., seq_len_q, seq_len_k)
output = tf.matmul(attention_weights, v) # (..., seq_len_q, depth_v)
return output, attention_weights
Explanation: 按比缩放的点积注意力(Scaled dot product attention)
<img src="https://tensorflow.google.cn/images/tutorials/transformer/scaled_attention.png" width="500" alt="scaled_dot_product_attention">
Transformer 使用的注意力函数有三个输入:Q(请求(query))、K(主键(key))、V(数值(value))。用于计算注意力权重的等式为:
$$\Large{Attention(Q, K, V) = softmax_k(\frac{QK^T}{\sqrt{d_k}}) V} $$
点积注意力被缩小了深度的平方根倍。这样做是因为对于较大的深度值,点积的大小会增大,从而推动 softmax 函数往仅有很小的梯度的方向靠拢,导致了一种很硬的(hard)softmax。
例如,假设 Q 和 K 的均值为0,方差为1。它们的矩阵乘积将有均值为0,方差为 dk。因此,dk 的平方根被用于缩放(而非其他数值),因为,Q 和 K 的矩阵乘积的均值本应该为 0,方差本应该为1,这样会获得一个更平缓的 softmax。
遮挡(mask)与 -1e9(接近于负无穷)相乘。这样做是因为遮挡与缩放的 Q 和 K 的矩阵乘积相加,并在 softmax 之前立即应用。目标是将这些单元归零,因为 softmax 的较大负数输入在输出中接近于零。
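To see the effect of the $\sqrt{d_k}$ scaling numerically, here is a small illustrative check (an addition, not from the original tutorial): with a large depth, unscaled logits push softmax toward a near one-hot output, while the scaled logits give a smoother distribution.
# Illustrative check of scaling by sqrt(dk).
dk = 512
q_demo = tf.random.normal((1, dk))
k_demo = tf.random.normal((5, dk))
logits = tf.matmul(q_demo, k_demo, transpose_b=True)    # (1, 5)
print(tf.nn.softmax(logits, axis=-1))                   # typically close to one-hot
print(tf.nn.softmax(logits / np.sqrt(dk), axis=-1))     # noticeably smoother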
End of explanation
def print_out(q, k, v):
temp_out, temp_attn = scaled_dot_product_attention(
q, k, v, None)
print ('Attention weights are:')
print (temp_attn)
print ('Output is:')
print (temp_out)
np.set_printoptions(suppress=True)
temp_k = tf.constant([[10,0,0],
[0,10,0],
[0,0,10],
[0,0,10]], dtype=tf.float32) # (4, 3)
temp_v = tf.constant([[ 1,0],
[ 10,0],
[ 100,5],
[1000,6]], dtype=tf.float32) # (4, 2)
# This `query` aligns with the second `key`,
# so the second `value` is returned.
temp_q = tf.constant([[0, 10, 0]], dtype=tf.float32) # (1, 3)
print_out(temp_q, temp_k, temp_v)
# This query aligns with a repeated key (third and fourth),
# so all associated values get averaged.
temp_q = tf.constant([[0, 0, 10]], dtype=tf.float32) # (1, 3)
print_out(temp_q, temp_k, temp_v)
# This query aligns equally with the first and second key,
# so their values get averaged.
temp_q = tf.constant([[10, 10, 0]], dtype=tf.float32) # (1, 3)
print_out(temp_q, temp_k, temp_v)
Explanation: As the softmax normalization is done on K, its values decide the amount of importance given to Q.
The output represents the multiplication of the attention weights and the V (value) vector. This ensures that the words to focus on are kept as-is and the irrelevant words are flushed out.
End of explanation
temp_q = tf.constant([[0, 0, 10], [0, 10, 0], [10, 10, 0]], dtype=tf.float32) # (3, 3)
print_out(temp_q, temp_k, temp_v)
Explanation: Pass all the queries together.
End of explanation
class MultiHeadAttention(tf.keras.layers.Layer):
def __init__(self, d_model, num_heads):
super(MultiHeadAttention, self).__init__()
self.num_heads = num_heads
self.d_model = d_model
assert d_model % self.num_heads == 0
self.depth = d_model // self.num_heads
self.wq = tf.keras.layers.Dense(d_model)
self.wk = tf.keras.layers.Dense(d_model)
self.wv = tf.keras.layers.Dense(d_model)
self.dense = tf.keras.layers.Dense(d_model)
def split_heads(self, x, batch_size):
"""Split the last dimension into (num_heads, depth).
Transpose the result so that the shape is (batch_size, num_heads, seq_len, depth)."""
x = tf.reshape(x, (batch_size, -1, self.num_heads, self.depth))
return tf.transpose(x, perm=[0, 2, 1, 3])
def call(self, v, k, q, mask):
batch_size = tf.shape(q)[0]
q = self.wq(q) # (batch_size, seq_len, d_model)
k = self.wk(k) # (batch_size, seq_len, d_model)
v = self.wv(v) # (batch_size, seq_len, d_model)
q = self.split_heads(q, batch_size) # (batch_size, num_heads, seq_len_q, depth)
k = self.split_heads(k, batch_size) # (batch_size, num_heads, seq_len_k, depth)
v = self.split_heads(v, batch_size) # (batch_size, num_heads, seq_len_v, depth)
# scaled_attention.shape == (batch_size, num_heads, seq_len_q, depth)
# attention_weights.shape == (batch_size, num_heads, seq_len_q, seq_len_k)
scaled_attention, attention_weights = scaled_dot_product_attention(
q, k, v, mask)
scaled_attention = tf.transpose(scaled_attention, perm=[0, 2, 1, 3]) # (batch_size, seq_len_q, num_heads, depth)
concat_attention = tf.reshape(scaled_attention,
(batch_size, -1, self.d_model)) # (batch_size, seq_len_q, d_model)
output = self.dense(concat_attention) # (batch_size, seq_len_q, d_model)
return output, attention_weights
Explanation: Multi-head attention
<img src="https://tensorflow.google.cn/images/tutorials/transformer/multi_head_attention.png" width="500" alt="multi-head attention">
Multi-head attention consists of four parts:
* Linear layers and a split into heads.
* Scaled dot-product attention.
* Concatenation of the heads.
* A final linear layer.
Each multi-head attention block gets three inputs: Q (query), K (key), V (value). These are put through linear (Dense) layers and split up into multiple heads.
The scaled_dot_product_attention defined above is applied to each head (broadcasted for efficiency). An appropriate mask must be used in this attention step. The attention output for each head is then concatenated (with tf.transpose and tf.reshape) and put through a final Dense layer.
Instead of one single attention head, Q, K, and V are split into multiple heads because this allows the model to jointly attend to information from different representation subspaces at different positions. After the split, each head has a reduced dimensionality, so the total computation cost is the same as a single attention head with full dimensionality.
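To make the "same total cost" point concrete, here is a rough shape-arithmetic sketch (using d_model = 512 and num_heads = 8, the values assumed in the layer test below):
```python
d_model, num_heads = 512, 8
depth = d_model // num_heads            # 64 dimensions per head
# per position, the heads together cover num_heads * depth == d_model dimensions
print(depth, num_heads * depth == d_model)
```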
End of explanation
temp_mha = MultiHeadAttention(d_model=512, num_heads=8)
y = tf.random.uniform((1, 60, 512)) # (batch_size, encoder_sequence, d_model)
out, attn = temp_mha(y, k=y, q=y, mask=None)
out.shape, attn.shape
Explanation: Create a MultiHeadAttention layer to try out. At each location y in the sequence, MultiHeadAttention runs all 8 attention heads across all other locations in the sequence, returning a new vector of the same length at each location.
End of explanation
def point_wise_feed_forward_network(d_model, dff):
return tf.keras.Sequential([
tf.keras.layers.Dense(dff, activation='relu'), # (batch_size, seq_len, dff)
tf.keras.layers.Dense(d_model) # (batch_size, seq_len, d_model)
])
sample_ffn = point_wise_feed_forward_network(512, 2048)
sample_ffn(tf.random.uniform((64, 50, 512))).shape
Explanation: Point wise feed forward network
The point-wise feed-forward network consists of two fully connected layers with a ReLU activation in between.
End of explanation
class EncoderLayer(tf.keras.layers.Layer):
def __init__(self, d_model, num_heads, dff, rate=0.1):
super(EncoderLayer, self).__init__()
self.mha = MultiHeadAttention(d_model, num_heads)
self.ffn = point_wise_feed_forward_network(d_model, dff)
self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.dropout1 = tf.keras.layers.Dropout(rate)
self.dropout2 = tf.keras.layers.Dropout(rate)
def call(self, x, training, mask):
attn_output, _ = self.mha(x, x, x, mask) # (batch_size, input_seq_len, d_model)
attn_output = self.dropout1(attn_output, training=training)
out1 = self.layernorm1(x + attn_output) # (batch_size, input_seq_len, d_model)
ffn_output = self.ffn(out1) # (batch_size, input_seq_len, d_model)
ffn_output = self.dropout2(ffn_output, training=training)
out2 = self.layernorm2(out1 + ffn_output) # (batch_size, input_seq_len, d_model)
return out2
sample_encoder_layer = EncoderLayer(512, 8, 2048)
sample_encoder_layer_output = sample_encoder_layer(
tf.random.uniform((64, 43, 512)), False, None)
sample_encoder_layer_output.shape # (batch_size, input_seq_len, d_model)
Explanation: Encoder and decoder
<img src="https://tensorflow.google.cn/images/tutorials/transformer/transformer.png" width="600" alt="transformer">
The transformer model follows the same general pattern as a standard sequence-to-sequence with attention model.
The input sentence is passed through N encoder layers that generate an output for each word/token in the sequence.
The decoder attends to the encoder's output and its own input (self-attention) to predict the next word.
Encoder layer
Each encoder layer consists of the following sublayers:
Multi-head attention (with padding mask)
Point-wise feed-forward networks.
Each of these sublayers has a residual connection around it, followed by layer normalization. Residual connections help avoid the vanishing-gradient problem in deep networks.
The output of each sublayer is LayerNorm(x + Sublayer(x)). The normalization is done on the d_model (last) axis. There are N encoder layers in the transformer.
End of explanation
class DecoderLayer(tf.keras.layers.Layer):
def __init__(self, d_model, num_heads, dff, rate=0.1):
super(DecoderLayer, self).__init__()
self.mha1 = MultiHeadAttention(d_model, num_heads)
self.mha2 = MultiHeadAttention(d_model, num_heads)
self.ffn = point_wise_feed_forward_network(d_model, dff)
self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.layernorm3 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.dropout1 = tf.keras.layers.Dropout(rate)
self.dropout2 = tf.keras.layers.Dropout(rate)
self.dropout3 = tf.keras.layers.Dropout(rate)
def call(self, x, enc_output, training,
look_ahead_mask, padding_mask):
# enc_output.shape == (batch_size, input_seq_len, d_model)
attn1, attn_weights_block1 = self.mha1(x, x, x, look_ahead_mask) # (batch_size, target_seq_len, d_model)
attn1 = self.dropout1(attn1, training=training)
out1 = self.layernorm1(attn1 + x)
attn2, attn_weights_block2 = self.mha2(
enc_output, enc_output, out1, padding_mask) # (batch_size, target_seq_len, d_model)
attn2 = self.dropout2(attn2, training=training)
out2 = self.layernorm2(attn2 + out1) # (batch_size, target_seq_len, d_model)
ffn_output = self.ffn(out2) # (batch_size, target_seq_len, d_model)
ffn_output = self.dropout3(ffn_output, training=training)
out3 = self.layernorm3(ffn_output + out2) # (batch_size, target_seq_len, d_model)
return out3, attn_weights_block1, attn_weights_block2
sample_decoder_layer = DecoderLayer(512, 8, 2048)
sample_decoder_layer_output, _, _ = sample_decoder_layer(
tf.random.uniform((64, 50, 512)), sample_encoder_layer_output,
False, None, None)
sample_decoder_layer_output.shape # (batch_size, target_seq_len, d_model)
Explanation: Decoder layer
Each decoder layer consists of the following sublayers:
Masked multi-head attention (with look-ahead mask and padding mask)
Multi-head attention (with padding mask). V (value) and K (key) receive the encoder output as inputs. Q (query) receives the output from the masked multi-head attention sublayer.
Point-wise feed-forward networks
Each of these sublayers has a residual connection around it, followed by layer normalization. The output of each sublayer is LayerNorm(x + Sublayer(x)). The normalization is done on the d_model (last) axis.
There are N decoder layers in the transformer.
As Q receives the output from the decoder's first attention block, and K receives the encoder output, the attention weights represent the importance given to the decoder's input based on the encoder's output. In other words, the decoder predicts the next word by looking at the encoder output and self-attending to its own output. See the demonstration in the scaled dot-product attention section above.
End of explanation
class Encoder(tf.keras.layers.Layer):
def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size,
maximum_position_encoding, rate=0.1):
super(Encoder, self).__init__()
self.d_model = d_model
self.num_layers = num_layers
self.embedding = tf.keras.layers.Embedding(input_vocab_size, d_model)
self.pos_encoding = positional_encoding(maximum_position_encoding,
self.d_model)
self.enc_layers = [EncoderLayer(d_model, num_heads, dff, rate)
for _ in range(num_layers)]
self.dropout = tf.keras.layers.Dropout(rate)
def call(self, x, training, mask):
seq_len = tf.shape(x)[1]
# add the embedding and the position encoding.
x = self.embedding(x) # (batch_size, input_seq_len, d_model)
x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
x += self.pos_encoding[:, :seq_len, :]
x = self.dropout(x, training=training)
for i in range(self.num_layers):
x = self.enc_layers[i](x, training, mask)
return x # (batch_size, input_seq_len, d_model)
sample_encoder = Encoder(num_layers=2, d_model=512, num_heads=8,
dff=2048, input_vocab_size=8500,
maximum_position_encoding=10000)
sample_encoder_output = sample_encoder(tf.random.uniform((64, 62)),
training=False, mask=None)
print (sample_encoder_output.shape) # (batch_size, input_seq_len, d_model)
Explanation: Encoder
The encoder consists of:
1. Input embedding
2. Positional encoding
3. N encoder layers
The input is put through an embedding which is summed with the positional encoding. The output of this summation is the input to the encoder layers. The output of the encoder is the input to the decoder.
End of explanation
class Decoder(tf.keras.layers.Layer):
def __init__(self, num_layers, d_model, num_heads, dff, target_vocab_size,
maximum_position_encoding, rate=0.1):
super(Decoder, self).__init__()
self.d_model = d_model
self.num_layers = num_layers
self.embedding = tf.keras.layers.Embedding(target_vocab_size, d_model)
self.pos_encoding = positional_encoding(maximum_position_encoding, d_model)
self.dec_layers = [DecoderLayer(d_model, num_heads, dff, rate)
for _ in range(num_layers)]
self.dropout = tf.keras.layers.Dropout(rate)
def call(self, x, enc_output, training,
look_ahead_mask, padding_mask):
seq_len = tf.shape(x)[1]
attention_weights = {}
x = self.embedding(x) # (batch_size, target_seq_len, d_model)
x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
x += self.pos_encoding[:, :seq_len, :]
x = self.dropout(x, training=training)
for i in range(self.num_layers):
x, block1, block2 = self.dec_layers[i](x, enc_output, training,
look_ahead_mask, padding_mask)
attention_weights['decoder_layer{}_block1'.format(i+1)] = block1
attention_weights['decoder_layer{}_block2'.format(i+1)] = block2
# x.shape == (batch_size, target_seq_len, d_model)
return x, attention_weights
sample_decoder = Decoder(num_layers=2, d_model=512, num_heads=8,
dff=2048, target_vocab_size=8000,
maximum_position_encoding=5000)
output, attn = sample_decoder(tf.random.uniform((64, 26)),
enc_output=sample_encoder_output,
training=False, look_ahead_mask=None,
padding_mask=None)
output.shape, attn['decoder_layer2_block2'].shape
Explanation: Decoder
The decoder consists of:
1. Output embedding
2. Positional encoding
3. N decoder layers
The target is put through an embedding which is summed with the positional encoding. The output of this summation is the input to the decoder layers. The output of the decoder is the input to the final linear layer.
End of explanation
class Transformer(tf.keras.Model):
def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size,
target_vocab_size, pe_input, pe_target, rate=0.1):
super(Transformer, self).__init__()
self.encoder = Encoder(num_layers, d_model, num_heads, dff,
input_vocab_size, pe_input, rate)
self.decoder = Decoder(num_layers, d_model, num_heads, dff,
target_vocab_size, pe_target, rate)
self.final_layer = tf.keras.layers.Dense(target_vocab_size)
def call(self, inp, tar, training, enc_padding_mask,
look_ahead_mask, dec_padding_mask):
enc_output = self.encoder(inp, training, enc_padding_mask) # (batch_size, inp_seq_len, d_model)
# dec_output.shape == (batch_size, tar_seq_len, d_model)
dec_output, attention_weights = self.decoder(
tar, enc_output, training, look_ahead_mask, dec_padding_mask)
final_output = self.final_layer(dec_output) # (batch_size, tar_seq_len, target_vocab_size)
return final_output, attention_weights
sample_transformer = Transformer(
num_layers=2, d_model=512, num_heads=8, dff=2048,
input_vocab_size=8500, target_vocab_size=8000,
pe_input=10000, pe_target=6000)
temp_input = tf.random.uniform((64, 62))
temp_target = tf.random.uniform((64, 26))
fn_out, _ = sample_transformer(temp_input, temp_target, training=False,
enc_padding_mask=None,
look_ahead_mask=None,
dec_padding_mask=None)
fn_out.shape # (batch_size, tar_seq_len, target_vocab_size)
Explanation: Create the Transformer
The transformer consists of the encoder, the decoder and a final linear layer. The output of the decoder is the input to the linear layer, and the linear layer's output is returned.
End of explanation
num_layers = 4
d_model = 128
dff = 512
num_heads = 8
input_vocab_size = tokenizer_pt.vocab_size + 2
target_vocab_size = tokenizer_en.vocab_size + 2
dropout_rate = 0.1
Explanation: Set hyperparameters
To keep this example small and relatively fast, the values of num_layers, d_model, and dff have been reduced.
The values used in the base model of the transformer were: num_layers=6, d_model = 512, dff = 2048. See the paper for all the other versions of the transformer.
Note: by changing the values below, you can get a model that reaches state of the art on many tasks.
End of explanation
class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
def __init__(self, d_model, warmup_steps=4000):
super(CustomSchedule, self).__init__()
self.d_model = d_model
self.d_model = tf.cast(self.d_model, tf.float32)
self.warmup_steps = warmup_steps
def __call__(self, step):
arg1 = tf.math.rsqrt(step)
arg2 = step * (self.warmup_steps ** -1.5)
return tf.math.rsqrt(self.d_model) * tf.math.minimum(arg1, arg2)
learning_rate = CustomSchedule(d_model)
optimizer = tf.keras.optimizers.Adam(learning_rate, beta_1=0.9, beta_2=0.98,
epsilon=1e-9)
temp_learning_rate_schedule = CustomSchedule(d_model)
plt.plot(temp_learning_rate_schedule(tf.range(40000, dtype=tf.float32)))
plt.ylabel("Learning Rate")
plt.xlabel("Train Step")
Explanation: Optimizer
Use the Adam optimizer with a custom learning-rate scheduler, according to the formula in the paper.
$$\Large{lrate = d_{model}^{-0.5} * min(step{_}num^{-0.5}, step{_}num * warmup{_}steps^{-1.5})}$$
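A small standalone check of this schedule (assuming d_model = 128 and warmup_steps = 4000, the values used elsewhere in this notebook): the two arguments of min cross at step = warmup_steps, which is where the learning rate peaks.
```python
d_model, warmup_steps = 128.0, 4000.0

def lrate(step):
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

for step in (100.0, 4000.0, 40000.0):
    print(step, lrate(step))   # peaks near step == warmup_steps (~0.0014)
```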
End of explanation
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
def loss_function(real, pred):
mask = tf.math.logical_not(tf.math.equal(real, 0))
loss_ = loss_object(real, pred)
mask = tf.cast(mask, dtype=loss_.dtype)
loss_ *= mask
return tf.reduce_mean(loss_)
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
name='train_accuracy')
Explanation: Loss and metrics
Since the target sequences are padded, it is important to apply a padding mask when calculating the loss.
End of explanation
transformer = Transformer(num_layers, d_model, num_heads, dff,
input_vocab_size, target_vocab_size,
pe_input=input_vocab_size,
pe_target=target_vocab_size,
rate=dropout_rate)
def create_masks(inp, tar):
# Encoder padding mask
enc_padding_mask = create_padding_mask(inp)
# Used in the 2nd attention block in the decoder.
# This padding mask is used to mask the encoder outputs.
dec_padding_mask = create_padding_mask(inp)
# Used in the 1st attention block in the decoder.
# It is used to pad and mask future tokens in the input received by the decoder.
look_ahead_mask = create_look_ahead_mask(tf.shape(tar)[1])
dec_target_padding_mask = create_padding_mask(tar)
combined_mask = tf.maximum(dec_target_padding_mask, look_ahead_mask)
return enc_padding_mask, combined_mask, dec_padding_mask
Explanation: Training and checkpointing
End of explanation
checkpoint_path = "./checkpoints/train"
ckpt = tf.train.Checkpoint(transformer=transformer,
optimizer=optimizer)
ckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=5)
# if a checkpoint exists, restore the latest checkpoint.
if ckpt_manager.latest_checkpoint:
ckpt.restore(ckpt_manager.latest_checkpoint)
print ('Latest checkpoint restored!!')
Explanation: Create the checkpoint path and the checkpoint manager. This will be used to save checkpoints every n epochs.
End of explanation
EPOCHS = 20
# The @tf.function trace-compiles train_step into a TF graph for faster
# execution. The function specializes to the precise shape of the argument tensors.
# To avoid re-tracing due to variable sequence lengths or variable batch sizes
# (the last batch is smaller), use input_signature to specify more generic shapes.
train_step_signature = [
tf.TensorSpec(shape=(None, None), dtype=tf.int64),
tf.TensorSpec(shape=(None, None), dtype=tf.int64),
]
@tf.function(input_signature=train_step_signature)
def train_step(inp, tar):
tar_inp = tar[:, :-1]
tar_real = tar[:, 1:]
enc_padding_mask, combined_mask, dec_padding_mask = create_masks(inp, tar_inp)
with tf.GradientTape() as tape:
predictions, _ = transformer(inp, tar_inp,
True,
enc_padding_mask,
combined_mask,
dec_padding_mask)
loss = loss_function(tar_real, predictions)
gradients = tape.gradient(loss, transformer.trainable_variables)
optimizer.apply_gradients(zip(gradients, transformer.trainable_variables))
train_loss(loss)
train_accuracy(tar_real, predictions)
Explanation: The target is divided into tar_inp and tar_real. tar_inp is passed as an input to the decoder. tar_real is the same input shifted by 1: at each location in tar_inp, tar_real contains the next token that should be predicted.
For example, sentence = "SOS A lion in the jungle is sleeping EOS"
tar_inp = "SOS A lion in the jungle is sleeping"
tar_real = "A lion in the jungle is sleeping EOS"
The transformer is an auto-regressive model: it makes predictions one part at a time and uses its output so far to decide what to do next.
During training this example uses teacher forcing (as in the text generation tutorial). Teacher forcing passes the true output to the next time step regardless of what the model predicts at the current time step.
As the transformer predicts each word, self-attention allows it to look at the previous words in the input sequence to better predict the next word.
To prevent the model from peeking at the expected output, the model uses the look-ahead mask.
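A toy illustration of the tar_inp / tar_real split (the token ids here are made up for the example, not taken from the dataset):
```python
import tensorflow as tf

tar = tf.constant([[1, 5, 7, 9, 2]])   # e.g. [SOS, A, lion, sleeps, EOS]
tar_inp = tar[:, :-1]                  # [SOS, A, lion, sleeps]
tar_real = tar[:, 1:]                  # [A, lion, sleeps, EOS]
print(tar_inp.numpy(), tar_real.numpy())
```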
End of explanation
for epoch in range(EPOCHS):
start = time.time()
train_loss.reset_states()
train_accuracy.reset_states()
# inp -> portuguese, tar -> english
for (batch, (inp, tar)) in enumerate(train_dataset):
train_step(inp, tar)
if batch % 50 == 0:
print ('Epoch {} Batch {} Loss {:.4f} Accuracy {:.4f}'.format(
epoch + 1, batch, train_loss.result(), train_accuracy.result()))
if (epoch + 1) % 5 == 0:
ckpt_save_path = ckpt_manager.save()
print ('Saving checkpoint for epoch {} at {}'.format(epoch+1,
ckpt_save_path))
print ('Epoch {} Loss {:.4f} Accuracy {:.4f}'.format(epoch + 1,
train_loss.result(),
train_accuracy.result()))
print ('Time taken for 1 epoch: {} secs\n'.format(time.time() - start))
Explanation: Portuguese is used as the input language and English is the target language.
End of explanation
def evaluate(inp_sentence):
start_token = [tokenizer_pt.vocab_size]
end_token = [tokenizer_pt.vocab_size + 1]
# the input sentence is Portuguese; add the start and end tokens
inp_sentence = start_token + tokenizer_pt.encode(inp_sentence) + end_token
encoder_input = tf.expand_dims(inp_sentence, 0)
# as the target is English, the first word given to the transformer should be the
# English start token.
decoder_input = [tokenizer_en.vocab_size]
output = tf.expand_dims(decoder_input, 0)
for i in range(MAX_LENGTH):
enc_padding_mask, combined_mask, dec_padding_mask = create_masks(
encoder_input, output)
# predictions.shape == (batch_size, seq_len, vocab_size)
predictions, attention_weights = transformer(encoder_input,
output,
False,
enc_padding_mask,
combined_mask,
dec_padding_mask)
# select the last word from the seq_len dimension
predictions = predictions[: ,-1:, :] # (batch_size, 1, vocab_size)
predicted_id = tf.cast(tf.argmax(predictions, axis=-1), tf.int32)
# return the result if predicted_id is equal to the end token
if predicted_id == tokenizer_en.vocab_size+1:
return tf.squeeze(output, axis=0), attention_weights
# concatenate predicted_id to the output, which is given to the decoder as its input.
output = tf.concat([output, predicted_id], axis=-1)
return tf.squeeze(output, axis=0), attention_weights
def plot_attention_weights(attention, sentence, result, layer):
fig = plt.figure(figsize=(16, 8))
sentence = tokenizer_pt.encode(sentence)
attention = tf.squeeze(attention[layer], axis=0)
for head in range(attention.shape[0]):
ax = fig.add_subplot(2, 4, head+1)
# plot the attention weights
ax.matshow(attention[head][:-1, :], cmap='viridis')
fontdict = {'fontsize': 10}
ax.set_xticks(range(len(sentence)+2))
ax.set_yticks(range(len(result)))
ax.set_ylim(len(result)-1.5, -0.5)
ax.set_xticklabels(
['<start>']+[tokenizer_pt.decode([i]) for i in sentence]+['<end>'],
fontdict=fontdict, rotation=90)
ax.set_yticklabels([tokenizer_en.decode([i]) for i in result
if i < tokenizer_en.vocab_size],
fontdict=fontdict)
ax.set_xlabel('Head {}'.format(head+1))
plt.tight_layout()
plt.show()
def translate(sentence, plot=''):
result, attention_weights = evaluate(sentence)
predicted_sentence = tokenizer_en.decode([i for i in result
if i < tokenizer_en.vocab_size])
print('Input: {}'.format(sentence))
print('Predicted translation: {}'.format(predicted_sentence))
if plot:
plot_attention_weights(attention_weights, sentence, result, plot)
translate("este é um problema que temos que resolver.")
print ("Real translation: this is a problem we have to solve .")
translate("os meus vizinhos ouviram sobre esta ideia.")
print ("Real translation: and my neighboring homes heard about this idea .")
translate("vou então muito rapidamente partilhar convosco algumas histórias de algumas coisas mágicas que aconteceram.")
print ("Real translation: so i 'll just share with you some stories very quickly of some magical things that have happened .")
Explanation: Evaluate
The following steps are used for evaluation:
Encode the input sentence using the Portuguese tokenizer (tokenizer_pt), and add the start and end tokens so the input matches what the model was trained with. This is the encoder input.
The decoder input is the start token == tokenizer_en.vocab_size.
Calculate the padding masks and the look-ahead masks.
The decoder then outputs the predictions by looking at the encoder output and its own output (self-attention).
Select the last word and calculate its argmax.
Concatenate the predicted word to the decoder input and pass it back to the decoder.
In this approach, the decoder predicts the next word based on the previous words it has predicted.
Note: the model used here has reduced capacity to keep it relatively fast, so the predictions may be less accurate. To reproduce the results in the paper, use the entire dataset and the base transformer model or transformer XL by changing the hyperparameters above.
End of explanation
translate("este é o primeiro livro que eu fiz.", plot='decoder_layer4_block2')
print ("Real translation: this is the first book i've ever done.")
Explanation: You can pass different layers and attention blocks of the decoder to the plot parameter.
End of explanation |
1,152 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
데이터 파일 읽고 쓰기
특정 파일을 열어 저장된 데이터를 읽거나 특정 데이터를 특정 파일에 저장해야 하는 일이 매우 빈번하게 발생한다.
파일을 열어 저장된 데이터 읽기
읽은 데이터 다루기
Step1: open 함수는 지정된 파일명을 가진 파일을 생성하고 파일의 위치를 리턴한다.
'w' 인자는 쓰기 전용으로 파일을 생성한다는 의미이며 모드(mode)라 부른다.
이미 test.txt 파일이 존재하면 기존의 내용을 삭제하고 빈 파일로 만들어 파일을 연다.
모드에는 쓰기 모드 'w', 읽기 모드 'r', 추가 모드 'a' 세 종류가 있다.
생성된 객체는 file 자료형이다.
Step2: file 클래스에는 파일의 내용을 어떻게 읽을지, 다룰지 등에 대한 많은 메소드가 포함되어 있다.
여기서는 write, readlines, readline, read, close, seek 메소도의 활용법을 살펴본다.
Step3: write 메소드
write 메소드는 open 함수를 이용하여 w 모드 또는 a 모드로 열린 파일에 내용을 추가할 때 사용한다.
Step4: '\n'은 줄바꿈을 의미한다. 즉 Enter 키를 눌러 줄바꾸기를 한 것과 동일한 효과를 갖는다.
따라서 위 코드는 두 줄을 test.txt 파일에 입력하는 것을 나타낸다. 입력 내용은 다음과 같다.
first line
second line
close 메소드
close 메소드는 파일 내용을 더 이상 수정하거나 확인할 필요가 없어서 파일을 닫고자 할 때 반드시 사용해야 한다.
Step5: 파일을 닫으면 더 이상 파일 내용을 확인할 수 없다.
예를 들어 열려 있는 파일의 경우 read 메소드를 이용하여 내용을 확인할 수 있어야 하는데
이미 test.txt 파일을 닫았기 때문에 오류가 발생한다.
Step6: ipython이 실행중일 경우 터미널 명령어인 cat를 사용하여 파일내용을 확인할 수 있다.
Step7: 다시 한 번 ls 명령어를 이용하여 test.txt 파일이 생성되었음을 확인할 수 있다.
Step8: 터미널 명령어가 아닌 파이썬 명령어를 이용하여 파일 내용을 확인하고자 한다면 다시 열어야 한다.
이번에는 내용을 추가하기 위해 'a' 모드로 열어본다.
Step9: 'third line' 이란 문자열을 새 줄에 추가한다.
Step10: 새 줄이 추가되었음을 확인할 수 있다.
Step11: 파일에 저장된 내용 읽어들이기
파일에 저장된 내용을 읽기 위해서는 아래 메소드들을 이용한다.
readlines
readline
read
파일 내용을 읽기 위해서는 먼저 파일을 열어야 한다. 기본적으로 읽기전용 모드는 'r' 모드를 사용한다.
Step12: readlines 메소드
readlines 메소드는 파일에 저장된 각 줄을 항목으로 갖는 리스트를 리턴한다.
Step13: 그런데 readlines 메소드를 다시 실행하면 빈 리스트를 리턴한다.
Step14: 이유는 오프셋(offset)이라는 책갈피 역할을 하는 기능때문이다.
파일을 새롭게 열면 오프셋은 0으로 초기화된다.
readlines 메소드를 한 번 실행하면 오프셋은 파일에 저장된 내용에 해당하는 바이트 값을 갖는다.
현재 text.txt 파일에 저장된 문자열은 총 33개이다. 따라서 readlines를 한 번 실행한 현재 오프셋의 값은 33이다.
오프셋 관련 보다 자세한 설명은 다음 기회로 미룬다. 현재는 readlines를 한 번 이상 실행하면 아무 것도 읽지 못한다는 사실만 기억하면 된다.
파일을 다시 처음부터 읽고 싶다면 우선은 열린 파일을 닫고 다시 열면 된다. 오프셋을 이용한 방식이 있기는 하지만 여기서는 다루지 않는다.
Step15: read 메소드
read 메소드는 readlines 메소드 처럼 파일내용 전체를 읽는다. 하지만 각 줄의 내용으로 쪼개지 않고 전체 내용을 하나의 문자열로 리턴한다.
Step16: readlines 경우와 마찬가지로 read를 한 번 실행하면 오프셋이 파일의 끝을 가리킨다. 따라서 read를 한 번 실행하면 아무것도 읽지 못한다.
Step17: read 메소드는 readlines 메소드와 사실상 동일한 기능을 수행한다. 문자열 관련 메소드인 split을 사용하기만 하면 된다.
split(str)은 기존 문자열을 str을 기준으로 쪼개어 리스트로 리턴하는 기능을 수행한다.
인자를 넣지 않으면 줄바꾸기('\n') 또는 스페이스(' ')을 기본값으로 해서 줄바꾸기 또는 스페이스를 기준으로 쪼개어 리스트로 리턴한다.
Step18: readline 메소드
readline 메소드는 파일 내용을 한줄한줄 읽는다.
Step19: readline을 반복적으로 실행할 때마다 다음 줄이 읽힌다. 오프셋이 줄 단위로 이동하기 때문이다.
Step20: 오프셋이 파일 끝에 도달하면 더 이상 읽을 내용이 없어 빈 문자열을 리턴한다.
Step21: 파일을 더이상 다루지 않는다면 항상 닫도록 한다.
Step22: 확인된 파일 내용에서 원하는 자료 추출하기
저장된 파일 내용을 확인한 후 원하는 자료만 추출하는 기본적인 방법을 배운다.
예제
Step23: total_expense 함수 정의하기
이제 장볼 때 필요한 비용을 자동을 계산해주는 함수 total_expense 함수를 정의할 준비가 되었다.
먼저 장보기 목록 메모장을 만들어 보자. 새로운 장보기 목록임으로 파일을 새롭게 연다.
Step24: 장보기 목록을 입력한다.
Step25: 더 이상 구입할 목록이 없으면 메모장을 닫는다.
Step26: 내용을 확인하면 다음과 같다.
Step27: 이제 장을 볼 목록을 확인하기 위해 파일을 다시 연다.
readlines 메소드 이용하기
Step28: readline 메소드 이용하기
Step29: read 메소드 이용하기
Step30: 세 가지 방식 중에서 어떤 것도 사용할 수 있지만 지금 경우에 있어서는 read 메소드를 통해 좀 더 깔끔한 값을 얻었다.
이제 각 품목에 대해 필요한 경비를 계산할 수 있다. split으로 각 항목을 쪼갠 후 인덱싱을 이용하기만 하면 된다.
빵값 비용
Step31: 토마토 비용
Step32: 우유 비용
Step33: 빵값, 토마토, 우유 비용을 계산하는 코드를 자세히 살펴보면 buy_list 인덱싱에 사용되는 인덱스 값만 0, 1, 2 순으로 변한 것을 알 수 있다. 따라서 총비용 계산은 for 문을 이용하면 된다.
Step34: 따라서 total_expense 함수를 아래와 같이 코딩할 수 있다.
Step35: 이제 오늘 장을 보기 위해 필요한 총비용을 쉽게 확인할 수 있다. | Python Code:
ls
f = open('test.txt', 'w')
Explanation: Reading and writing data files
Opening a file to read stored data, or saving particular data to a particular file, is something that comes up very frequently.
Open a file and read the stored data
Work with the data that was read: computation, filtering, and so on
Save the processed result to a particular file
Scenario: grocery shopping at the supermarket
Before going shopping, we write down a list of items and check their prices.
Item       Quantity   Unit price
-----------------
Bread      1          1.39
Tomatoes   6          0.26
Milk       3          1.45
Now we have to work out how much money the shopping trip will take.
Usually you would use a calculator, multiplying quantity by unit price for each item and adding it all up.
Could we instead write a function that takes a shopping-list memo saved as a text file and computes the total cost?
Prerequisites
Call the function that takes the shopping-list memo and returns the total cost total_expense, for example.
To code the total_expense function, the following are needed.
Be able to save the memo contents to a particular file.
Be able to open the file and inspect its contents.
Be able to extract and use the data we want from the inspected contents.
Creating a file, appending content, and reading content
For example, to create a text file named test.txt, proceed as follows.
Note: make sure there is no test.txt file in the folder where Python is currently running before starting.
For example, in ipython you can run the terminal command ls to see the files in the current folder.
If it already exists, move the file or rename it.
Note:
Terminal commands are not Python functions; they are commands used in the Windows command window or in a Linux/macOS shell. Plain python does not support terminal commands, but ipython and spyder do.
End of explanation
type(f)
Explanation: The open function creates a file with the given file name and returns a handle to the file.
The 'w' argument means the file is created for writing only; this is called the mode.
If test.txt already exists, its contents are discarded and it is opened as an empty file.
There are three modes: write mode 'w', read mode 'r', and append mode 'a'.
The object that is created has the file type.
End of explanation
dir(f)
Explanation: The file class contains many methods for reading and handling the contents of a file.
Here we look at how to use the write, readlines, readline, read, close, and seek methods.
End of explanation
f.write("first line\nsecond line")
Explanation: The write method
The write method is used to add content to a file opened with the open function in 'w' mode or 'a' mode.
End of explanation
f.close()
Explanation: '\n' means a line break; it has the same effect as pressing the Enter key.
So the code above writes two lines into the test.txt file. The content written is as follows.
first line
second line
The close method
The close method must be used when you no longer need to modify or inspect the file contents and want to close the file.
End of explanation
f.read()
Explanation: Once a file is closed, its contents can no longer be accessed.
For example, an open file can be read with the read method,
but because test.txt has already been closed, an error occurs.
End of explanation
cat test.txt
Explanation: If ipython is running, the terminal command cat can be used to check the file contents.
End of explanation
ls
Explanation: Using the ls command once more, we can confirm that the test.txt file has been created.
End of explanation
f = open('test.txt', 'a')
Explanation: To inspect the file contents with Python commands rather than terminal commands, the file has to be opened again.
This time we open it in 'a' mode in order to append content.
End of explanation
f.write("\nthird line")
f.close()
Explanation: Append the string 'third line' on a new line.
End of explanation
cat test.txt
Explanation: We can confirm that a new line has been added.
End of explanation
f = open('test.txt', 'r')
Explanation: Reading the contents stored in a file
To read the contents stored in a file, the methods below are used.
readlines
readline
read
To read the file contents, the file must be opened first. The basic read-only mode is 'r'.
End of explanation
a = f.readlines()
a
Explanation: The readlines method
The readlines method returns a list whose items are the lines stored in the file.
End of explanation
b = f.readlines()
b
Explanation: However, running the readlines method again returns an empty list.
End of explanation
f.close()
f = open('test.txt', 'r')
a1 = f.readlines()
a1
f.close()
Explanation: The reason is the offset, which acts like a bookmark.
When a file is newly opened, the offset is initialized to 0.
After running readlines once, the offset holds the byte count corresponding to the contents stored in the file.
The test.txt file currently holds 33 characters in total, so after one call to readlines the offset is 33.
A more detailed discussion of offsets is deferred; for now, just remember that calling readlines more than once reads nothing.
To read the file from the beginning again, close the open file and reopen it. There is a way to do this with the offset, but it is not covered here.
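For reference, the offset-based alternative mentioned above is f.seek(0), which moves the offset back to the start of the file (a small sketch; it is not used in the rest of this notebook):
```python
f = open('test.txt', 'r')
print(f.readlines())   # the offset is now at the end of the file
f.seek(0)              # move the offset back to byte 0
print(f.readlines())   # the contents can be read again
f.close()
```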
End of explanation
f = open('test.txt', 'r')
a2 = f.read()
a2
Explanation: The read method
The read method reads the whole file contents like readlines, but instead of splitting them into lines it returns the entire contents as a single string.
End of explanation
a3 = f.read()
a3
f.close()
Explanation: As with readlines, after one call to read the offset points to the end of the file, so a second call to read reads nothing.
End of explanation
c = a2.split('\n')
c
Explanation: The read method can do essentially the same job as readlines; you only need to use the string method split.
split(str) splits the original string on str and returns the pieces as a list.
If no argument is given, the default is to split on line breaks ('\n') or spaces (' ') and return the pieces as a list.
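A short illustration of both forms of split (a standalone sketch):
```python
text = "first line\nsecond line\nthird line"
print(text.split('\n'))   # ['first line', 'second line', 'third line']
print(text.split())       # default: split on any whitespace, including '\n'
```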
End of explanation
f = open('test.txt', 'r')
b1 = f.readline()
b1
Explanation: The readline method
The readline method reads the file contents one line at a time.
End of explanation
b2 = f.readline()
b2
b3 = f.readline()
b3
Explanation: Each repeated call to readline reads the next line, because the offset moves line by line.
End of explanation
b4 = f.readline()
b4
Explanation: When the offset reaches the end of the file, there is nothing left to read and an empty string is returned.
End of explanation
f.close()
Explanation: Always close a file once you are done with it.
End of explanation
num_str = '1 2.0 14 3.3 5'
list_num_str = num_str.split()
sum = 0
for num in list_num_str:
sum = sum + float(num)
print("The total sum is {}!".format(sum))
Explanation: Extracting the data you want from the file contents
We learn the basic way to extract only the data we want after inspecting the stored file contents.
Example: suppose we have a string of numbers separated by spaces, as below.
"1 2.0 14 3.3 5"
How can we compute the sum of the space-separated numbers? That is,
1 + 2.0 + 14 + 3.3 + 5 = ?
First, use split to break the string on spaces. Then the strings consisting purely of digits must be converted to a numeric type (int or float).
The int() and float() functions do the job.
End of explanation
f = open('shopping_list', 'w')
Explanation: Defining the total_expense function
We are now ready to define total_expense, a function that automatically computes the cost of a shopping trip.
First, let us create the shopping-list memo. Since it is a new shopping list, open a new file.
End of explanation
f.write("bread 1 1.39\ntomatoes 6 0.26\nmilk 3 1.45")
Explanation: Write the shopping list.
End of explanation
f.close()
Explanation: When there is nothing more to buy, close the memo.
End of explanation
cat shopping_list
Explanation: Checking the contents gives the following.
End of explanation
f = open('shopping_list', 'r')
buy_list = f.readlines()
f.close()
buy_list
Explanation: Now reopen the file to check the shopping list.
Using the readlines method
End of explanation
f = open('shopping_list', 'r')
buy_list = []
while True:
line = f.readline()
if line != '':
buy_list.append(line)
else:
break
f.close()
buy_list
Explanation: Using the readline method: not exactly simple, but not hard either. Try to understand the code below.
End of explanation
f = open('shopping_list', 'r')
buy_list = f.read().split('\n')
f.close()
buy_list
Explanation: Using the read method
End of explanation
int(buy_list[0].split()[1]) * float(buy_list[0].split()[2])
Explanation: Any of the three approaches can be used, but in this case the read method gave a somewhat cleaner result.
Now the cost of each item can be computed. Just split each entry with split and use indexing.
Bread cost
End of explanation
int(buy_list[1].split()[1]) * float(buy_list[1].split()[2])
Explanation: Tomato cost
End of explanation
int(buy_list[2].split()[1]) * float(buy_list[2].split()[2])
Explanation: Milk cost
End of explanation
sum = 0
for item in buy_list:
d = item.split()
item_price = int(d[1]) * float(d[2])
print("The price of {} is ${}.".format(d[0], item_price))
sum = sum + item_price
print("The total expense is ${}.".format(sum))
Explanation: Looking closely at the code that computes the bread, tomato, and milk costs, only the index used for buy_list changes, in the order 0, 1, 2. So the total cost can be computed with a for loop.
End of explanation
def total_expense(memo):
f = open(memo,'r')
a = f.readlines()
sum = 0
for item in a:
d = item.split()
item_price = int(d[1]) * float(d[2])
print("The price of {} is ${}.".format(d[0], item_price))
sum = sum + item_price
print("The total expense is ${}.".format(sum))
Explanation: The total_expense function can therefore be coded as below.
End of explanation
total_expense('shopping_list')
Explanation: Now the total cost of today's shopping trip can be checked easily.
End of explanation |
1,153 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gaussian Process inference in PyMC3
This is the first step in modelling Species occurrence.
The good news is that MCMC works,
The bad one is that it's computationally intense.
Step1: Simulated data
Step2: Examine actual posterior distribution
The posterior is analytically tractable so we can compute the posterior mean explicitly. Rather than computing the inverse of the covariance matrix K, we use the numerically stable calculation described Algorithm 2.1 in the book “Gaussian Processes for Machine Learning” (2006) by Rasmussen and Williams, which is available online for free.
Step3: Ok, it's good to have the analitical solution but not always possible sooooo.
Let's do some computing.
Model in PyM3
Step4: Evaluate posterior fit
The posterior samples are consistent with the analytically derived posterior and behaves how one would expect–narrower near areas with lots of observations and wider in areas with more uncertainty.
Step5: Clasification
In Gaussian process classification, the likelihood is not normal and thus the posterior is not analytically tractable. The prior is again a multivariate normal with covariance matrix K, and the likelihood is the standard likelihood for logistic regression
Step6: Sample from posterior distribution
Step7: Evaluate posterior fit
The posterior looks good, though the fit is, unsurprisingly, erratic outside the range of the observed data. | Python Code:
# Load Biospytial modules and etc.
%matplotlib inline
import sys
sys.path.append('/apps/external_plugins/spystats/')
import django
django.setup()
import pandas as pd
import matplotlib.pyplot as plt
## Use the ggplot style
plt.style.use('ggplot')
import numpy as np
from spystats import tools
Explanation: Gaussian Process inference in PyMC3
This is the first step in modelling Species occurrence.
The good news is that MCMC works,
The bad news is that it is computationally intensive.
End of explanation
latent_field = tools.MaternVariogram(sill=1,range_a=0.13,kappa=3.0/2.0)
## Simulations with non squared grid
grid = tools.createGrid(grid_sizex=50,minx=-1,maxx=2,miny=-1,maxy=2,grid_sizey=70)
X,Y,Z = tools.simulatedGaussianFieldAsPcolorMesh(latent_field,grid_sizex=50,minx=0,maxx=1,miny=0,maxy=1,
grid_sizey=50)
plt.pcolormesh(X,Y,Z)
import pandas as pd
data = pd.DataFrame({'X':X.ravel(),'Y':Y.ravel(),'Z':Z.ravel()})
## Model Specification
import pymc3 as pm
X,Y,Z = tools.simulatedGaussianFieldAsPcolorMesh(latent_field,grid_sizex=50,minx=0,maxx=1,miny=0,maxy=1,
grid_sizey=50)
sill=1
range_a=0.13
kappa=3.0/2.0
ls = 0.2
tau = 2.0
cov = pm.gp.cov.Matern32(2, range_a,active_dims=[0,1])
K = cov(data[['X','Y']].values).eval()
plt.figure(figsize=(14,4))
dist = pm.MvNormal.dist(mu=np.zeros(K.shape[0]), cov=K).random(size=1)
plt.imshow(dist.reshape(50,50))
## Ok, let's try to make some inference
import theano.tensor as tt
np.random.seed(1)
# Number of training points
n = 30
##Vector column, [:,None] effect
X0 = np.sort(3 * np.random.rand(n))[:, None]
# Number of points at which to interpolate
## Creating the domain
m = 100
X = np.linspace(0, 3, m)[:, None]
# Covariance kernel parameters
## \tau nugget
noise = 0.1
## \phi = ?
lengthscale = 0.3
## \sigma^{2} ?
f_scale = 1
## Covariance function
## Cov = \sigma^{2} * \rho(\phi) (No nugget)
cov = f_scale * pm.gp.cov.ExpQuad(1, lengthscale)
## Evaluate Covariance at X0 (observations / simulations)
K = cov(X0)
## I dont understand this
K_s = cov(X0, X)
##, Compose the Covariance with nugget effect of noise = \ tau^{2}
## So this includes kernel
K_noise = K + noise * tt.eye(n)
# Add very slight perturbation to the covariance matrix diagonal to improve numerical stability
## Regularisation
K_stable = K + 1e-12 * tt.eye(n)
# Observed data / Simulate data
f = np.random.multivariate_normal(mean=np.zeros(n), cov=K_noise.eval())
## Plot this
plt.scatter(X0,f)
Explanation: Simulated data
End of explanation
fig, ax = plt.subplots(figsize=(14, 6));
ax.scatter(X0, f, s=40, color='b', label='True points');
# Analytically compute posterior mean
## This is the cholesky decomposition of the Covariance Matrix with kernel nugget
L = np.linalg.cholesky(K_noise.eval())
## Cholesky solve: first solve L z = f, then solve L.T alpha = z (so alpha = (K + noise*I)^-1 f)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, f))
## Project onto the test points (Algorithm 2.1 in Rasmussen and Williams)
## Using the "extended matrix" K_s
post_mean = np.dot(K_s.T.eval(), alpha)
ax.plot(X, post_mean, color='g', alpha=0.8, label='Posterior mean');
ax.set_xlim(0, 3);
ax.set_ylim(-2, 2);
ax.legend();
Explanation: Examine actual posterior distribution
The posterior is analytically tractable so we can compute the posterior mean explicitly. Rather than computing the inverse of the covariance matrix K, we use the numerically stable calculation described Algorithm 2.1 in the book “Gaussian Processes for Machine Learning” (2006) by Rasmussen and Williams, which is available online for free.
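For reference, the quantity being computed below is the standard GP posterior mean (my restatement, using the variable names from the code):
\begin{equation}
\bar{f}_* = K_s^{T}\,\left(K + \sigma_n^{2} I\right)^{-1} f
\end{equation}
Algorithm 2.1 evaluates this with the Cholesky factor L of $(K + \sigma_n^{2} I)$: solve $Lz = f$, then $L^{T}\alpha = z$, and finally take $K_s^{T}\alpha$, which is exactly what the code does.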
End of explanation
with pm.Model() as model:
# The actual distribution of f_sample doesn't matter as long as the shape is right since it's only used
# as a dummy variable for slice sampling with the given prior
### From doc:
###
f_sample = pm.Flat('f_sample', shape=(n, ))
# Likelihood
y = pm.MvNormal('y', observed=f, mu=f_sample, cov=noise * tt.eye(n), shape=n)
# Interpolate function values using noisy covariance matrix
## Deterministic allows to compose (do algebra) with RV in many different ways.
##While these transformations work seamlessly, its results are not stored automatically.
##Thus, if you want to keep track of a transformed variable, you have to use pm.Determinstic:
## from http://docs.pymc.io/notebooks/api_quickstart.html
L = tt.slinalg.cholesky(K_noise)
f_pred = pm.Deterministic('f_pred', tt.dot(K_s.T, tt.slinalg.solve(L.T, tt.slinalg.solve(L, f_sample))))
# Use elliptical slice sampling
ess_step = pm.EllipticalSlice(vars=[f_sample], prior_cov=K_stable)
trace = pm.sample(5000, start=model.test_point, step=[ess_step], progressbar=False, random_seed=1)
Explanation: OK, it is good to have the analytical solution, but that is not always possible, so
let's do some computing.
Model in PyMC3
End of explanation
fig, ax = plt.subplots(figsize=(14, 6));
for idx in np.random.randint(4000, 5000, 500):
ax.plot(X, trace['f_pred'][idx], alpha=0.02, color='navy')
ax.scatter(X0, f, s=40, color='k', label='True points');
ax.plot(X, post_mean, color='g', alpha=0.8, label='Posterior mean');
ax.legend();
ax.set_xlim(0, 3);
ax.set_ylim(-2, 2);
Explanation: Evaluate posterior fit
The posterior samples are consistent with the analytically derived posterior and behave as one would expect: narrower near areas with lots of observations and wider in areas with more uncertainty.
End of explanation
np.random.seed(5)
f = np.random.multivariate_normal(mean=np.zeros(n), cov=K_stable.eval())
# Separate data into positive and negative classes
f[f > 0] = 1
f[f <= 0] = 0
fig, ax = plt.subplots(figsize=(14, 6));
for idx in np.random.randint(4000, 5000, 500):
ax.plot(X, trace['f_pred'][idx], alpha=0.02, color='navy')
ax.scatter(X0, f, s=40, color='k', label='True points');
ax.plot(X, post_mean, color='g', alpha=0.8, label='Posterior mean');
ax.legend();
ax.set_xlim(0, 3);
ax.set_ylim(-2, 2);
Explanation: Classification
In Gaussian process classification, the likelihood is not normal and thus the posterior is not analytically tractable. The prior is again a multivariate normal with covariance matrix K, and the likelihood is the standard likelihood for logistic regression:
\begin{equation}
L(y \mid f) = \prod_n \sigma(y_n, f_n)
\end{equation}
Generate some example data
We generate random samples from a Gaussian process, assign any points greater than zero to a “positive” class, and assign all other points to a “negative” class.
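Here $\sigma$ is shorthand for the logistic (sigmoid) link; spelled out (my expansion of the notation above), each Bernoulli observation contributes
$$\sigma(z) = \frac{1}{1+e^{-z}}, \qquad p(y_n \mid f_n) = \sigma(f_n)^{y_n}\,\big(1-\sigma(f_n)\big)^{1-y_n}$$
which is what the invlogit / Binomial pair in the model below implements.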
End of explanation
with pm.Model() as model:
# Again, f_sample is just a dummy variable
f_sample = pm.Flat('f_sample', shape=n)
f_transform = pm.invlogit(f_sample)
# Binomial likelihood
y = pm.Binomial('y', observed=f, n=np.ones(n), p=f_transform, shape=n)
# Interpolate function values using noiseless covariance matrix
L = tt.slinalg.cholesky(K_stable)
f_pred = pm.Deterministic('f_pred', tt.dot(K_s.T, tt.slinalg.solve(L.T, tt.slinalg.solve(L, f_transform))))
# Use elliptical slice sampling
ess_step = pm.EllipticalSlice(vars=[f_sample], prior_cov=K_stable)
trace = pm.sample(5000, start=model.test_point, step=[ess_step], progressbar=False, random_seed=1)
Explanation: Sample from posterior distribution
End of explanation
fig, ax = plt.subplots(figsize=(14, 6));
for idx in np.random.randint(4000, 5000, 500):
ax.plot(X, trace['f_pred'][idx], alpha=0.04, color='navy')
ax.scatter(X0, f, s=40, color='k');
ax.set_xlim(0, 3);
ax.set_ylim(-0.1, 1.1);
Explanation: Evaluate posterior fit
The posterior looks good, though the fit is, unsurprisingly, erratic outside the range of the observed data.
End of explanation |
1,154 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dead time corrections
Daniel FY, Jeffrey AF. Mean and variance of single photon counting with deadtime. Physics in Medicine & Biology. 2000;45(7)
Step1: Equation 2 gives the expectation and variance of dead time processes.
$E[\tilde{Y}] \sim \frac{\lambda t}{1+\lambda t}$
$Var[\tilde{Y}] \sim \frac{\lambda t}{(1+\lambda \tau)^3}$
$\lambda \sim \frac{E[\tilde{Y}]}{t-E[\tilde{Y}] \tau}$
Lets explore the size of these error bars as compared to no dead time.
Step2: Why in the world is the error smaller on the uncorrected data?
Generate Poisson process data and generate exponential
For each interval choose $n$ events from a Poisson. Then draw from a uniform the location in the interval for each of the events.
Step3: This is consistent with a Poisson of parameter 20! But there seems to be an under prediction going on, wonder why?
Go through Posterior Predictive Checks (http
Step4: We are reprodicting well.
Given the data we generated that will be treated as truth, what would we measure with various deadtime and does teh corection match what we think it should?
Correction should look like $n_1 = \frac{R_1}{1-R_1 \tau}$ where $n_1$ is real rate, $R_1$ is observed rate, and $\tau$ is the dead time.
Take edata from above and strep through from beginning to end only keeping points that are dead time away from the previous point.
Step5: And plot the rates per unit time
Step6: Can we use $n_1 = \frac{R_1}{1-R_1 \tau}$ to derive the relation and spread in the dist of R?
Algerbra changes math to
Step7: Use the large dead time
Step8: But this is totally broken!!!
Output data files for each | Python Code:
%matplotlib inline
from pprint import pprint
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc3 as mc
import spacepy.toolbox as tb
import spacepy.plot as spp
import tqdm
from scipy import stats
import seaborn as sns
sns.set(font_scale=1.5)
# matplotlib.pyplot.rc('figure', figsize=(10,10))
# matplotlib.pyplot.rc('lines', lw=3)
# matplotlib.pyplot.rc('font', size=20)
%matplotlib inline
Explanation: Dead time corrections
Daniel FY, Jeffrey AF. Mean and variance of single photon counting with deadtime. Physics in Medicine & Biology. 2000;45(7):2043.
End of explanation
np.random.seed(8675309)
rate = 40
df = pd.DataFrame({'Poisson':np.random.poisson(rate, 100)})
df['var'] = rate # property of Poisson
dt = 0.05
df['obs'] = df['Poisson']/(1+df['Poisson']*dt)
df['obsvar'] = (df['Poisson'])/(1+df['Poisson']*dt)**3
df['ratio'] = df['Poisson']/df['obs']
df.mean()
df.head()
plt.figure(figsize=(8,5))
plt.errorbar(range(len(df)), df['Poisson'], yerr=np.sqrt(df['var']), ecolor='k', c='k',label='Poisson')
plt.errorbar(range(len(df)), df['obs'], yerr=np.sqrt(df['obsvar']), ecolor='g', c='g',label='Obs')
plt.legend(bbox_to_anchor=(1, 1))
plt.tight_layout()
plt.figure(figsize=(8,5))
plt.errorbar(range(len(df)), df['Poisson'], yerr=np.sqrt(df['var']), ecolor='k', c='k',label='Poisson')
plt.errorbar(range(len(df)), df['ratio']*df['obs'], yerr=np.sqrt(df['ratio']*df['obsvar']), ecolor='g', c='g',label='Obs')
plt.legend(bbox_to_anchor=(1, 1))
plt.tight_layout()
plt.figure(figsize=(8,5))
plt.plot(range(len(df)), np.sqrt(df['var'])/df['Poisson'], c='k',label='Poisson')
plt.plot(range(len(df)),np.sqrt(df['ratio']*df['obsvar'])/(df['ratio']*df['obs']), c='g',label='Obs')
plt.legend(bbox_to_anchor=(1, 1))
plt.tight_layout()
Explanation: Equation 2 gives the expectation and variance of dead time processes.
$E[\tilde{Y}] \sim \frac{\lambda t}{1+\lambda \tau}$
$Var[\tilde{Y}] \sim \frac{\lambda t}{(1+\lambda \tau)^3}$
$\lambda \sim \frac{E[\tilde{Y}]}{t-E[\tilde{Y}] \tau}$
Let's explore the size of these error bars as compared to no dead time.
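A quick numerical reading of these formulas (a standalone sketch using the same values as the code below: an underlying count of $\lambda t = 40$ and $\tau = 0.05$):
```python
lam_t, tau = 40.0, 0.05
E_obs = lam_t / (1 + lam_t * tau)         # ~13.3 observed counts
Var_obs = lam_t / (1 + lam_t * tau) ** 3  # ~1.48, far smaller than the Poisson variance of 40
print(E_obs, Var_obs)
```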
End of explanation
np.random.seed(8675309)
nT = 400
cts = np.random.poisson(20, size=nT)
edata = []
for i in range(nT):
edata.extend(i + np.sort(np.random.uniform(low=0, high=1, size=cts[i])))
edata = np.asarray(edata)
edata.shape
plt.plot(edata, np.arange(len(edata)))
plt.xlabel('Time of event')
plt.ylabel('Event number')
plt.title("Modeled underlying data")
with mc.Model() as model:
lam = mc.Uniform('lambda', 0, 1000) # this is the exponential parameter
meas = mc.Exponential('meas', lam, observed=np.diff(edata))
lam2 = mc.Uniform('lam2', 0, 1000)
poi = mc.Poisson('Poisson', lam2, observed=cts)
start = mc.find_MAP()
trace = mc.sample(10000, start=start, njobs=8)
mc.traceplot(trace, combined=True, lines={'lambda':20, 'lam2':20})
mc.summary(trace)
fig, ax = plt.subplots(ncols=1, nrows=2, sharex=True)
sns.distplot(trace['lambda'], ax=ax[0])
sns.distplot(trace['lam2'], ax=ax[1])
plt.xlabel('Lambda')
ax[0].set_ylabel('Exp')
ax[1].set_ylabel('Poisson')
ax[0].axvline(20, c='r', lw=1)
ax[1].axvline(20, c='r', lw=1)
plt.tight_layout()
Explanation: Why in the world is the error smaller on the uncorrected data?
Generate Poisson process data and generate exponential
For each interval choose $n$ events from a Poisson. Then draw from a uniform the location in the interval for each of the events.
End of explanation
ppc = mc.sample_ppc(trace, samples=500, model=model, size=100)
ax = plt.subplot()
sns.distplot([n.mean() for n in ppc['Poisson']], kde=False, ax=ax)
ax.axvline(cts.mean())
ax.set(title='Posterior predictive of the mean (Poisson)', xlabel='mean(x)', ylabel='Frequency');
ax = plt.subplot()
sns.distplot([n.var() for n in ppc['Poisson']], kde=False, ax=ax)
ax.axvline(cts.var())
ax.set(title='Posterior predictive of the variance (Poisson)', xlabel='var(x)', ylabel='Frequency');
ax = plt.subplot()
sns.distplot([n.mean() for n in ppc['meas']], kde=False, ax=ax)
ax.axvline(np.diff(edata).mean())
ax.set(title='Posterior predictive of the mean (Exponential)', xlabel='mean(x)', ylabel='Frequency');
ax = plt.subplot()
sns.distplot([n.var() for n in ppc['meas']], kde=False, ax=ax)
ax.axvline(np.diff(edata).var())
ax.set(title='Posterior predictive of the variance (Exponential)', xlabel='var(x)', ylabel='Frequency');
Explanation: This is consistent with a Poisson of parameter 20! But there seems to be an under prediction going on, wonder why?
Go through Posterior Predictive Checks (http://docs.pymc.io/notebooks/posterior_predictive.html) and see if we are reproducing the mean and variance.
End of explanation
deadtime1 = 0.005 # small dead time
deadtime2 = 0.1 # large dead time
edata_td1 = []
edata_td1.append(edata[0])
edata_td2 = []
edata_td2.append(edata[0])
for ii, v in enumerate(edata[1:], 1): # stop one shy to not run over the end, start enumerate at 1
if v - edata_td1[-1] >= deadtime1:
edata_td1.append(v)
if v - edata_td2[-1] >= deadtime2:
edata_td2.append(v)
edata_td1 = np.asarray(edata_td1)
edata_td2 = np.asarray(edata_td2)
plt.figure(figsize=(8,6))
plt.plot(edata, np.arange(len(edata)), label='Real data')
plt.plot(edata_td1, np.arange(len(edata_td1)), label='Small dead time')
plt.plot(edata_td2, np.arange(len(edata_td2)), label='Large dead time')
plt.xlabel('Time of event')
plt.ylabel('Event number')
plt.title("Modeled underlying data")
plt.legend(bbox_to_anchor=(1, 1))
Explanation: We are reproducing well.
Given the data we generated, which will be treated as truth, what would we measure with various dead times, and does the correction match what we think it should?
Correction should look like $n_1 = \frac{R_1}{1-R_1 \tau}$ where $n_1$ is the real rate, $R_1$ is the observed rate, and $\tau$ is the dead time.
Take edata from above and step through from beginning to end, only keeping points that are at least a dead time away from the previous point.
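A minimal sketch of the correction as a helper function (my own wrapper for illustration, not part of the original analysis):
```python
def deadtime_correct(R1, tau):
    """Estimate the true rate n1 from an observed rate R1 and dead time tau."""
    return R1 / (1.0 - R1 * tau)

print(deadtime_correct(13.3, 0.05))   # ~40, recovering the underlying rate
```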
End of explanation
plt.figure(figsize=(8,6))
h1, b1 = np.histogram(edata, np.arange(1000))
plt.plot(tb.bin_edges_to_center(b1), h1, label='Real data', c='k')
h2, b2 = np.histogram(edata_td1, np.arange(1000))
plt.plot(tb.bin_edges_to_center(b2), h2, label='Small dead time', c='r')
h3, b3 = np.histogram(edata_td2, np.arange(1000))
plt.plot(tb.bin_edges_to_center(b3), h3, label='Large dead time')
plt.legend(bbox_to_anchor=(1, 1))
plt.xlim((0,400))
plt.ylabel('Rate')
plt.xlabel('Time')
Explanation: And plot the rates per unit time
End of explanation
# assume R1 is Poisson
with mc.Model() as model:
tau = deadtime1
obsRate = mc.Uniform('obsRate', 0, 1000, shape=1)
obsData = mc.Poisson('obsData', obsRate, observed=h2[:400], shape=1)
realRate = mc.Deterministic('realRate', obsData/(1-obsData*tau))
start = mc.find_MAP()
trace = mc.sample(10000, start=start, njobs=8)
mc.traceplot(trace, combined=True, varnames=('obsRate', ))
mc.summary(trace, varnames=('obsRate', ))
sns.distplot(trace['realRate'].mean(axis=0), bins=10)
plt.xlabel('realRate')
plt.ylabel('Density')
dt1_bounds = np.percentile(trace['realRate'], (2.5, 50, 97.5))
print('The estimate of the real rate given that we know the dead time is:', dt1_bounds,
(dt1_bounds[2]-dt1_bounds[0])/dt1_bounds[1])
dat_bounds = np.percentile(h1[:400], (2.5, 50, 97.5))
print("This compares with if we measured without dead time as:", dat_bounds,
(dat_bounds[2]-dat_bounds[0])/dat_bounds[1])
Explanation: Can we use $n_1 = \frac{R_1}{1-R_1 \tau}$ to derive the relation and spread in the dist of R?
Algebra changes the math to: $R_1=\frac{n_1}{1+n_1\tau}$
Use the small dead time
End of explanation
# assume R1 is Poisson
with mc.Model() as model:
tau = deadtime2
obsRate = mc.Uniform('obsRate', 0, 1000)
obsData = mc.Poisson('obsData', obsRate, observed=h3[:400])
realRate = mc.Deterministic('realRate', obsData/(1-obsData*tau))
start = mc.find_MAP()
trace = mc.sample(10000, start=start, njobs=8)
mc.traceplot(trace, combined=True, varnames=('obsRate', ))
mc.summary(trace, varnames=('obsRate', ))
sns.distplot(trace['realRate'].mean(axis=0))
plt.xlabel('realRate')
plt.ylabel('Density')
dt2_bounds = np.percentile(trace['realRate'], (2.5, 50, 97.5))
print('The estimate of the real rate given that we know the dead time is:', dt1_bounds,
(dt2_bounds[2]-dt2_bounds[0])/dt2_bounds[1])
dat_bounds = np.percentile(h1[:400], (2.5, 50, 97.5))
print("This compares with if we measured without dead time as:", dat_bounds,
(dat_bounds[2]-dat_bounds[0])/dat_bounds[1])
Explanation: Use the large dead time
End of explanation
real = pd.Series(edata)
td1 = pd.Series(edata_td1)
td2 = pd.Series(edata_td2)
real.to_csv('no_deadtime_times.csv')
td1.to_csv('small_deadtime_times.csv')
td2.to_csv('large_deadtime_times.csv')
real = pd.Series(h1[h1>0])
td1 = pd.Series(h2[h2>0])
td2 = pd.Series(h3[h3>0])
real.to_csv('no_deadtime_rates.csv')
td1.to_csv('small_deadtime_rates.csv')
td2.to_csv('large_deadtime_rates.csv')
Explanation: But this is totally broken!!!
Output data files for each
End of explanation |
1,155 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multi-armed bandit as a Markov decision process
Let's model the Bernouilli multi-armed bandit. The Bernoulli MBA is an $N$-armed bandit where each arm gives binary rewards according to some probability
Step1: For the 2-armed, 2-trial Bernoulli bandit, the strategy is simple
Step2: Note that the utility of the root node is 1.08 - what does that mean? If we get rewarded in the initial trial, that means that the posterior for the mean of that arm is .67. OTOH, when we fail on the first trial, we can still pick the other arm, which still has a posterior mean of .5. Thus, we have rewards
Step3: And that's what utility means in this context.
Let's see about the 3-trial 2-armed bandit
Step4: The optimal strategy goes
Step5: What's interesting here is that value iteration always converges in M_trials + 1 iterations - information only travels backwards through time - much as in Viterbi in the context of HMMs. If we're only interested in the next best action given the current state, it might be possible to iterate backwards through time, starting from the terminal states, throwing away the latest data as we go along -- but let's not get into this premature optimization just yet. Let's see how for how many trials we can solve this without crashing my 5 year-old laptop.
Step6: It seems like my laptop can look ahead at least sixteen steps into the future without dying - pretty good!
Optimal versus UCB
Let's try and figure out how the optimal strategy relates to the upper confidence bound (UCB) heuristic. Let's train a logistic regression model with the same inputs as a UCB strategy - mean, standard deviation, time - and see how well it can approximate the optimal strategy.
Step7: Let's train three supervised networks | Python Code:
import itertools
import numpy as np
from pprint import pprint
def sorted_values(dict_):
return [dict_[x] for x in sorted(dict_)]
def solve_bmab_value_iteration(N_arms, M_trials, gamma=1,
max_iter=10, conv_crit = .01):
util = {}
# Initialize every state to utility 0.
state_ranges = [range(M_trials+1) for x in range(N_arms*2)]
# The reward state
state_ranges.append(range(2))
for state in itertools.product(*state_ranges):
# Some states are impossible to reach.
if sum(state[:-1]) > M_trials:
# A state with the total of alphas and betas greater than
# the number of trials.
continue
if sum(state[:-1:2]) == 0 and state[-1] == 1:
# A state with a reward but alphas all equal to 0.
continue
if sum(state[:-1:2]) == M_trials and state[-1] == 0:
# A state with no reward but alphas adding up to M_trials.
continue
if sum(state[:-1]) == 1 and sum(state[:-1:2]) == 1 and state[-1] == 0:
# A state with an initial reward according to alphas but not according
# the reward index
continue
util[state] = 0
# Main loop.
converged = False
new_util = util.copy()
opt_actions = {}
for j in range(max_iter):
# Line 5 of value iteration
for state in util.keys():
reward = state[-1]
# Terminal state.
if sum(state[:-1]) == M_trials:
new_util[state] = reward
continue
values = np.zeros(N_arms)
# Consider every action
for i in range(N_arms):
# Successes and failure for this state.
alpha = state[i*2]
beta = state[i*2+1]
# Two possible outcomes: either that arm gets rewarded,
# or not.
# Transition to unrewarded state:
state0 = list(state)
state0[-1] = 0
state0[2*i+1] += 1
state0 = tuple(state0)
# The probability that we'll transition to this unrewarded state.
p_state0 = (beta + 1) / float(alpha + beta + 2)
# Rewarded state.
state1 = list(state)
state1[-1] = 1
state1[2*i] += 1
state1 = tuple(state1)
p_state1 = 1 - p_state0
try:
value = gamma*(util[state0]*p_state0 +
util[state1]*p_state1)
except KeyError,e:
print state
print state0
print state1
raise e
#print state0, util[state0], p_state0
#print state1, util[state1], p_state1
values[i] = value
#print state, values, reward
new_util[state] = reward + np.max(values)
opt_actions[state] = np.argmax(values)
# Consider the difference between the new util
# and the old util.
max_diff = np.max(abs(np.array(sorted_values(util)) - np.array(sorted_values(new_util))))
util = new_util.copy()
print "Iteration %d, max diff = %.5f" % (j, max_diff)
if max_diff < conv_crit:
converged = True
break
#pprint(util)
if converged:
print "Converged after %d iterations" % j
else:
print "Not converged after %d iterations" % max_iter
return util, opt_actions
util, opt_actions = solve_bmab_value_iteration(2, 2, max_iter=5)
opt_actions
Explanation: Multi-armed bandit as a Markov decision process
Let's model the Bernoulli multi-armed bandit. The Bernoulli MAB is an $N$-armed bandit where each arm gives binary rewards according to some probability:
$r_i \sim Bernoulli(\mu_i)$
Here $i$ is the index of the arm. Let's model this as a Markov decision process. The state is going to be defined as:
$s(t) = (\alpha_1, \beta_1, \ldots, \alpha_N, \beta_N, r_t)$
$\alpha_i$ is the number of successes encountered so far when pulling arm $i$. $\beta_i$ is, similarly, the number of failures encountered when pulling that arm. $r_t$ is the reward, either 0 or 1, from the last trial.
Assuming a uniform prior on $\mu_i$, the posterior distribution of the $\mu_i$ in a given state are:
$p(\mu_i|s(t)) = Beta(\alpha_i+1,\beta_i+1)$
When we're in a given state, we have the choice of performing one of $N$ actions, corresponding to pulling each of the arms. Let's call pulling the $i$'th arm $a_i$. This will put us in a new state, with a certain probability. The new state will be same for arms not equal to i. For the $i$'th arm, we have:
$s(t+1) = (\ldots \alpha_i + 1, \beta_i \ldots 1)$ with probability $(\alpha_i+1)/(\alpha_i+\beta_i+2)$
$s(t+1) = (\ldots \alpha_i, \beta_i + 1 \ldots 0)$ with probability $(\beta_i+1)/(\alpha_i+\beta_i+2)$
We can solve this MDP exactly, e.g. using value iteration, for a small enough state space. For $M$ trials, the state space has cardinality $M^{2N}$ - it's possible to solve the 2-armed bandit for 10-20 trials this way, but it grows exponentially fast.
Nevertheless, we can use this optimal solution to compare it with commonly used heuristics like $\epsilon$-greedy and UCB and determine how often these pick the optimal moves. Then we'll get some intuitions about what $\epsilon$-greedy and UCB get right and wrong. Let's do it!
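To get a feel for the exponential growth, here is a rough back-of-the-envelope count (each $\alpha_i$ and $\beta_i$ can take M+1 values, plus the reward bit, before discarding unreachable states):
```python
N_arms, M_trials = 2, 16
upper_bound = (M_trials + 1) ** (2 * N_arms) * 2   # ~167k raw states for 2 arms, 16 trials
print(upper_bound)
```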
End of explanation
util
Explanation: For the 2-armed, 2-trial Bernoulli bandit, the strategy is simple: pick the first arm. If it rewards, then pick it again. If not, pick the other. Note that this is the same as most sensible strategies, for instance $\epsilon$- greedy or UCB.
End of explanation
2*.5*2.0/3.0 + .5/3.0 + .5*.5
Explanation: Note that the utility of the root node is 1.08 - what does that mean? If we get rewarded in the initial trial, that means that the posterior for the mean of that arm is .67. OTOH, when we fail on the first trial, we can still pick the other arm, which still has a posterior mean of .5. Thus, we have rewards:
+2 with probability .5*2/3
+1 with prob .5*1/3
+1 with prob .5*.5
+0 with prob .5*.5
That means the expected total reward is:
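Writing that expectation out explicitly (a quick check of the 1.08 figure): $E[\text{total reward}] = 2\cdot\tfrac{1}{2}\cdot\tfrac{2}{3} + 1\cdot\tfrac{1}{2}\cdot\tfrac{1}{3} + 1\cdot\tfrac{1}{2}\cdot\tfrac{1}{2} + 0\cdot\tfrac{1}{2}\cdot\tfrac{1}{2} = \tfrac{2}{3} + \tfrac{1}{6} + \tfrac{1}{4} = \tfrac{13}{12} \approx 1.083$, which matches the value computed in the cell above.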
End of explanation
util, opt_actions = solve_bmab_value_iteration(2, 3, max_iter=5)
opt_actions
Explanation: And that's what utility means in this context.
Let's see about the 3-trial 2-armed bandit:
End of explanation
util, opt_actions = solve_bmab_value_iteration(2, 4, max_iter=6)
Explanation: The optimal strategy goes: pick arm 0. If it rewards, pick it again for the next 2 trials.
If it doesn't reward, then pick arm 1. If that rewards, keep that one. If it doesn't, pick 0 again.
Let's see with 4:
End of explanation
M_trials = 16
%time util, opt_actions = solve_bmab_value_iteration(2, M_trials, max_iter=M_trials+2)
Explanation: What's interesting here is that value iteration always converges in M_trials + 1 iterations - information only travels backwards through time - much as in Viterbi in the context of HMMs. If we're only interested in the next best action given the current state, it might be possible to iterate backwards through time, starting from the terminal states, throwing away the latest data as we go along -- but let's not get into this premature optimization just yet. Let's see for how many trials we can solve this without crashing my 5-year-old laptop.
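A quick, illustrative way to gauge how the table grows, reusing the solver defined above (the particular call is just an example):
# Illustrative: count how many states the solver enumerates for a smaller number of trials.
util_small, _ = solve_bmab_value_iteration(2, 8, max_iter=10)
print("number of states enumerated: %d" % len(util_small))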
End of explanation
# Create a design matrix related to the optimal strategies.
X = []
y = []
seen_keys = {}
for key, val in opt_actions.iteritems():
if key[:-1] in seen_keys:
# We've already seen this, continue.
continue
alpha0 = float(key[0] + 1)
beta0 = float(key[1] + 1)
alpha1 = float(key[2] + 1)
beta1 = float(key[3] + 1)
    if alpha0 == alpha1 and beta0 == beta1:
        # We're in a perfectly symmetric situation, skip this then.
        continue
    seen_keys[key[:-1]] = True
# Standard results for the Beta distribution.
# https://en.wikipedia.org/wiki/Beta_distribution
mean0 = alpha0/(alpha0 + beta0)
mean1 = alpha1/(alpha1 + beta1)
std0 = np.sqrt(alpha0*beta0 / (alpha0 + beta0 + 1)) / (alpha0 + beta0)
std1 = np.sqrt(alpha1*beta1 / (alpha1 + beta1 + 1)) / (alpha1 + beta1)
t = alpha0 + beta0 + alpha1 + beta1
X.append([mean0,mean1,std0,std1,t,1,alpha0 - 1,beta0 - 1,alpha1 - 1,beta1 - 1])
y.append(val)
X = np.array(X)
y = np.array(y)
Explanation: It seems like my laptop can look ahead at least sixteen steps into the future without dying - pretty good!
Optimal versus UCB
Let's try and figure out how the optimal strategy relates to the upper confidence bound (UCB) heuristic. Let's train a logistic regression model with the same inputs as a UCB strategy - mean, standard deviation, time - and see how well it can approximate the optimal strategy.
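For reference, a sketch of the standard UCB1 index that these features feed into (this is the textbook formula, not code taken from the model above):
import numpy as np
def ucb1_choice(means, counts, t):
    # means: empirical mean reward per arm, counts: number of pulls per arm, t: total pulls so far
    return np.argmax(means + np.sqrt(2.0 * np.log(t) / counts))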
End of explanation
from sklearn.linear_model import LogisticRegression
the_model = LogisticRegression(C=100.0)
X_ = X[:,:2]
the_model.fit(X_,y)
y_pred = the_model.predict(X_)
print ("Greedy: %.4f%% of moves are incorrect" % ((np.mean(abs(y_pred-y)))*100))
print the_model.coef_
the_model = LogisticRegression(C=100.0)
X_ = X[:,:4]
the_model.fit(X_,y)
y_pred = the_model.predict(X_)
print ("UCB: %.4f%% of moves are incorrect" % ((np.mean(abs(y_pred-y)))*100))
print the_model.coef_
the_model = LogisticRegression(C=100000.0)
X_ = X[:,:4]
X_ = np.hstack((X_,(X[:,4]).reshape((-1,1))*X[:,2:4]))
the_model.fit(X_,y)
y_pred = the_model.predict(X_)
print ("UCB X time: %.4f%% of moves are incorrect" % ((np.mean(abs(y_pred-y)))*100))
print the_model.coef_
Explanation: Let's train three supervised models:
a purely myopic, greedy strategy
one which uses the uncertainty in the estimates
one which uses both uncertainty and number of trials left to hedge its bets
End of explanation |
1,156 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Load Iris Flower Dataset
Step2: Standardize Features
Step3: Create Logistic Regression
Step4: Train Logistic Regression
Step5: Create Previously Unseen Observation
Step6: Predict Class Of Observation
Step7: View Predicted Probabilities | Python Code:
# Load libraries
from sklearn.linear_model import LogisticRegression
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
Explanation: Title: Logistic Regression
Slug: logistic_regression
Summary: How to train a logistic regression in scikit-learn.
Date: 2017-09-21 12:00
Category: Machine Learning
Tags: Logistic Regression
Authors: Chris Albon
Despite having "regression" in its name, a logistic regression is actually a widely used binary classifier (i.e. the target vector can only take two values). In a logistic regression, a linear model (e.g. $\beta_{0}+\beta_{1}x$) is included in a logistic (also called sigmoid) function, ${\frac {1}{1+e^{-z}}}$, such that:
$$P(y_i=1 \mid X)={\frac {1}{1+e^{-(\beta_{0}+\beta_{1}x)}}}$$
where $P(y_i=1 \mid X)$ is the probability of the $i$th observation's target value, $y_i$, being class 1, $X$ is the training data, $\beta_0$ and $\beta_1$ are the parameters to be learned, and $e$ is Euler's number.
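As a quick numeric illustration of that formula (with made-up coefficient values, not ones learned below):
import numpy as np
beta_0, beta_1, x = -1.0, 2.0, 0.5   # hypothetical values for illustration only
p = 1.0 / (1.0 + np.exp(-(beta_0 + beta_1 * x)))
print(p)   # 0.5 here, since beta_0 + beta_1 * x = 0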
Preliminaries
End of explanation
# Load data with only two classes
iris = datasets.load_iris()
X = iris.data[:100,:]
y = iris.target[:100]
Explanation: Load Iris Flower Dataset
End of explanation
# Standardize features
scaler = StandardScaler()
X_std = scaler.fit_transform(X)
Explanation: Standardize Features
End of explanation
# Create logistic regression object
clf = LogisticRegression(random_state=0)
Explanation: Create Logistic Regression
End of explanation
# Train model
model = clf.fit(X_std, y)
Explanation: Train Logistic Regression
End of explanation
# Create new observation
new_observation = [[.5, .5, .5, .5]]
Explanation: Create Previously Unseen Observation
End of explanation
# Predict class
model.predict(new_observation)
Explanation: Predict Class Of Observation
End of explanation
# View predicted probabilities
model.predict_proba(new_observation)
Explanation: View Predicted Probabilities
End of explanation |
1,157 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: 첫 번째 신경망 훈련하기
Step2: 패션 MNIST 데이터셋 임포트하기
10개의 범주(category)와 70,000개의 흑백 이미지로 구성된 패션 MNIST 데이터셋을 사용하겠습니다. 이미지는 해상도(28x28 픽셀)가 낮고 다음처럼 개별 옷 품목을 나타냅니다
Step3: load_data() 함수를 호출하면 네 개의 넘파이(NumPy) 배열이 반환됩니다
Step4: 데이터 탐색
모델을 훈련하기 전에 데이터셋 구조를 살펴보죠. 다음 코드는 훈련 세트에 60,000개의 이미지가 있다는 것을 보여줍니다. 각 이미지는 28x28 픽셀로 표현됩니다
Step5: 비슷하게 훈련 세트에는 60,000개의 레이블이 있습니다
Step6: 각 레이블은 0과 9사이의 정수입니다
Step7: 테스트 세트에는 10,000개의 이미지가 있습니다. 이 이미지도 28x28 픽셀로 표현됩니다
Step8: 테스트 세트는 10,000개의 이미지에 대한 레이블을 가지고 있습니다
Step9: 데이터 전처리
네트워크를 훈련하기 전에 데이터를 전처리해야 합니다. 훈련 세트에 있는 첫 번째 이미지를 보면 픽셀 값의 범위가 0~255 사이라는 것을 알 수 있습니다
Step10: 신경망 모델에 주입하기 전에 이 값의 범위를 0~1 사이로 조정하겠습니다. 이렇게 하려면 255로 나누어야 합니다. 훈련 세트와 테스트 세트를 동일한 방식으로 전처리하는 것이 중요합니다
Step11: 훈련 세트에서 처음 25개 이미지와 그 아래 클래스 이름을 출력해 보죠. 데이터 포맷이 올바른지 확인하고 네트워크 구성과 훈련할 준비를 마칩니다.
Step12: 모델 구성
신경망 모델을 만들려면 모델의 층을 구성한 다음 모델을 컴파일합니다.
층 설정
신경망의 기본 구성 요소는 층(layer)입니다. 층은 주입된 데이터에서 표현을 추출합니다. 아마도 문제를 해결하는데 더 의미있는 표현이 추출될 것입니다.
대부분 딥러닝은 간단한 층을 연결하여 구성됩니다. tf.keras.layers.Dense와 같은 층들의 가중치(parameter)는 훈련하는 동안 학습됩니다.
Step13: 이 네트워크의 첫 번째 층인 tf.keras.layers.Flatten은 2차원 배열(28 x 28 픽셀)의 이미지 포맷을 28 * 28 = 784 픽셀의 1차원 배열로 변환합니다. 이 층은 이미지에 있는 픽셀의 행을 펼쳐서 일렬로 늘립니다. 이 층에는 학습되는 가중치가 없고 데이터를 변환하기만 합니다.
픽셀을 펼친 후에는 두 개의 tf.keras.layers.Dense 층이 연속되어 연결됩니다. 이 층을 밀집 연결(densely-connected) 또는 완전 연결(fully-connected) 층이라고 부릅니다. 첫 번째 Dense 층은 128개의 노드(또는 뉴런)를 가집니다. 두 번째 (마지막) 층은 10개의 노드의 소프트맥스(softmax) 층입니다. 이 층은 10개의 확률을 반환하고 반환된 값의 전체 합은 1입니다. 각 노드는 현재 이미지가 10개 클래스 중 하나에 속할 확률을 출력합니다.
모델 컴파일
모델을 훈련하기 전에 필요한 몇 가지 설정이 모델 컴파일 단계에서 추가됩니다
Step14: 모델 훈련
신경망 모델을 훈련하는 단계는 다음과 같습니다
Step15: 모델이 훈련되면서 손실과 정확도 지표가 출력됩니다. 이 모델은 훈련 세트에서 약 0.88(88%) 정도의 정확도를 달성합니다.
정확도 평가
그다음 테스트 세트에서 모델의 성능을 비교합니다
Step16: 테스트 세트의 정확도가 훈련 세트의 정확도보다 조금 낮습니다. 훈련 세트의 정확도와 테스트 세트의 정확도 사이의 차이는 과대적합(overfitting) 때문입니다. 과대적합은 머신러닝 모델이 훈련 데이터보다 새로운 데이터에서 성능이 낮아지는 현상을 말합니다.
예측 만들기
훈련된 모델을 사용하여 이미지에 대한 예측을 만들 수 있습니다.
Step17: 여기서는 테스트 세트에 있는 각 이미지의 레이블을 예측했습니다. 첫 번째 예측을 확인해 보죠
Step18: 이 예측은 10개의 숫자 배열로 나타납니다. 이 값은 10개의 옷 품목에 상응하는 모델의 신뢰도(confidence)를 나타냅니다. 가장 높은 신뢰도를 가진 레이블을 찾아보죠
Step19: 모델은 이 이미지가 앵클 부츠(class_name[9])라고 가장 확신하고 있습니다. 이 값이 맞는지 테스트 레이블을 확인해 보죠
Step20: 10개의 신뢰도를 모두 그래프로 표현해 보겠습니다
Step21: 0번째 원소의 이미지, 예측, 신뢰도 점수 배열을 확인해 보겠습니다.
Step22: 몇 개의 이미지의 예측을 출력해 보죠. 올바르게 예측된 레이블은 파란색이고 잘못 예측된 레이블은 빨강색입니다. 숫자는 예측 레이블의 신뢰도 퍼센트(100점 만점)입니다. 신뢰도 점수가 높을 때도 잘못 예측할 수 있습니다.
Step23: 마지막으로 훈련된 모델을 사용하여 한 이미지에 대한 예측을 만듭니다.
Step24: tf.keras 모델은 한 번에 샘플의 묶음 또는 배치(batch)로 예측을 만드는데 최적화되어 있습니다. 하나의 이미지를 사용할 때에도 2차원 배열로 만들어야 합니다
Step25: 이제 이 이미지의 예측을 만듭니다
Step26: model.predict는 2차원 넘파이 배열을 반환하므로 첫 번째 이미지의 예측을 선택합니다 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
# tensorflow와 tf.keras를 임포트합니다
import tensorflow.compat.v1 as tf
from tensorflow import keras
# 헬퍼(helper) 라이브러리를 임포트합니다
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
Explanation: 첫 번째 신경망 훈련하기: 기초적인 분류 문제
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/r1/tutorials/keras/basic_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />구글 코랩(Colab)에서 실행하기</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/r1/tutorials/keras/basic_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />깃허브(GitHub) 소스 보기</a>
</td>
</table>
Note: 이 문서는 텐서플로 커뮤니티에서 번역했습니다. 커뮤니티 번역 활동의 특성상 정확한 번역과 최신 내용을 반영하기 위해 노력함에도
불구하고 공식 영문 문서의 내용과 일치하지 않을 수 있습니다.
이 번역에 개선할 부분이 있다면
tensorflow/docs 깃헙 저장소로 풀 리퀘스트를 보내주시기 바랍니다.
문서 번역이나 리뷰에 참여하려면
[email protected]로
메일을 보내주시기 바랍니다.
이 튜토리얼에서는 운동화나 셔츠 같은 옷 이미지를 분류하는 신경망 모델을 훈련합니다. 상세 내용을 모두 이해하지 못해도 괜찮습니다. 여기서는 완전한 텐서플로(TensorFlow) 프로그램을 빠르게 살펴 보겠습니다. 자세한 내용은 앞으로 배우면서 더 설명합니다.
여기에서는 텐서플로 모델을 만들고 훈련할 수 있는 고수준 API인 tf.keras를 사용합니다.
End of explanation
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
Explanation: 패션 MNIST 데이터셋 임포트하기
10개의 범주(category)와 70,000개의 흑백 이미지로 구성된 패션 MNIST 데이터셋을 사용하겠습니다. 이미지는 해상도(28x28 픽셀)가 낮고 다음처럼 개별 옷 품목을 나타냅니다:
<table>
<tr><td>
<img src="https://tensorflow.org/images/fashion-mnist-sprite.png"
alt="Fashion MNIST sprite" width="600">
</td></tr>
<tr><td align="center">
<b>그림 1.</b> <a href="https://github.com/zalandoresearch/fashion-mnist">패션-MNIST 샘플</a> (Zalando, MIT License).<br/>
</td></tr>
</table>
패션 MNIST는 컴퓨터 비전 분야의 "Hello, World" 프로그램격인 고전 MNIST 데이터셋을 대신해서 자주 사용됩니다. MNIST 데이터셋은 손글씨 숫자(0, 1, 2 등)의 이미지로 이루어져 있습니다. 여기서 사용하려는 옷 이미지와 동일한 포맷입니다.
패션 MNIST는 일반적인 MNIST 보다 조금 더 어려운 문제이고 다양한 예제를 만들기 위해 선택했습니다. 두 데이터셋은 비교적 작기 때문에 알고리즘의 작동 여부를 확인하기 위해 사용되곤 합니다. 코드를 테스트하고 디버깅하는 용도로 좋습니다.
네트워크를 훈련하는데 60,000개의 이미지를 사용합니다. 그다음 네트워크가 얼마나 정확하게 이미지를 분류하는지 10,000개의 이미지로 평가하겠습니다. 패션 MNIST 데이터셋은 텐서플로에서 바로 임포트하여 적재할 수 있습니다:
End of explanation
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
Explanation: load_data() 함수를 호출하면 네 개의 넘파이(NumPy) 배열이 반환됩니다:
train_images와 train_labels 배열은 모델 학습에 사용되는 훈련 세트입니다.
test_images와 test_labels 배열은 모델 테스트에 사용되는 테스트 세트입니다.
이미지는 28x28 크기의 넘파이 배열이고 픽셀 값은 0과 255 사이입니다. 레이블(label)은 0에서 9까지의 정수 배열입니다. 이 값은 이미지에 있는 옷의 클래스(class)를 나타냅니다:
<table>
<tr>
<th>레이블</th>
<th>클래스</th>
</tr>
<tr>
<td>0</td>
<td>T-shirt/top</td>
</tr>
<tr>
<td>1</td>
<td>Trouser</td>
</tr>
<tr>
<td>2</td>
<td>Pullover</td>
</tr>
<tr>
<td>3</td>
<td>Dress</td>
</tr>
<tr>
<td>4</td>
<td>Coat</td>
</tr>
<tr>
<td>5</td>
<td>Sandal</td>
</tr>
<tr>
<td>6</td>
<td>Shirt</td>
</tr>
<tr>
<td>7</td>
<td>Sneaker</td>
</tr>
<tr>
<td>8</td>
<td>Bag</td>
</tr>
<tr>
<td>9</td>
<td>Ankle boot</td>
</tr>
</table>
각 이미지는 하나의 레이블에 매핑되어 있습니다. 데이터셋에 클래스 이름이 들어있지 않기 때문에 나중에 이미지를 출력할 때 사용하기 위해 별도의 변수를 만들어 저장합니다:
End of explanation
train_images.shape
Explanation: 데이터 탐색
모델을 훈련하기 전에 데이터셋 구조를 살펴보죠. 다음 코드는 훈련 세트에 60,000개의 이미지가 있다는 것을 보여줍니다. 각 이미지는 28x28 픽셀로 표현됩니다:
End of explanation
len(train_labels)
Explanation: 비슷하게 훈련 세트에는 60,000개의 레이블이 있습니다:
End of explanation
train_labels
Explanation: 각 레이블은 0과 9사이의 정수입니다:
End of explanation
test_images.shape
Explanation: 테스트 세트에는 10,000개의 이미지가 있습니다. 이 이미지도 28x28 픽셀로 표현됩니다:
End of explanation
len(test_labels)
Explanation: 테스트 세트는 10,000개의 이미지에 대한 레이블을 가지고 있습니다:
End of explanation
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
Explanation: 데이터 전처리
네트워크를 훈련하기 전에 데이터를 전처리해야 합니다. 훈련 세트에 있는 첫 번째 이미지를 보면 픽셀 값의 범위가 0~255 사이라는 것을 알 수 있습니다:
End of explanation
train_images = train_images / 255.0
test_images = test_images / 255.0
Explanation: 신경망 모델에 주입하기 전에 이 값의 범위를 0~1 사이로 조정하겠습니다. 이렇게 하려면 255로 나누어야 합니다. 훈련 세트와 테스트 세트를 동일한 방식으로 전처리하는 것이 중요합니다:
End of explanation
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
Explanation: 훈련 세트에서 처음 25개 이미지와 그 아래 클래스 이름을 출력해 보죠. 데이터 포맷이 올바른지 확인하고 네트워크 구성과 훈련할 준비를 마칩니다.
End of explanation
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation=tf.nn.relu),
keras.layers.Dense(10, activation=tf.nn.softmax)
])
Explanation: 모델 구성
신경망 모델을 만들려면 모델의 층을 구성한 다음 모델을 컴파일합니다.
층 설정
신경망의 기본 구성 요소는 층(layer)입니다. 층은 주입된 데이터에서 표현을 추출합니다. 아마도 문제를 해결하는데 더 의미있는 표현이 추출될 것입니다.
대부분 딥러닝은 간단한 층을 연결하여 구성됩니다. tf.keras.layers.Dense와 같은 층들의 가중치(parameter)는 훈련하는 동안 학습됩니다.
End of explanation
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
Explanation: 이 네트워크의 첫 번째 층인 tf.keras.layers.Flatten은 2차원 배열(28 x 28 픽셀)의 이미지 포맷을 28 * 28 = 784 픽셀의 1차원 배열로 변환합니다. 이 층은 이미지에 있는 픽셀의 행을 펼쳐서 일렬로 늘립니다. 이 층에는 학습되는 가중치가 없고 데이터를 변환하기만 합니다.
픽셀을 펼친 후에는 두 개의 tf.keras.layers.Dense 층이 연속되어 연결됩니다. 이 층을 밀집 연결(densely-connected) 또는 완전 연결(fully-connected) 층이라고 부릅니다. 첫 번째 Dense 층은 128개의 노드(또는 뉴런)를 가집니다. 두 번째 (마지막) 층은 10개의 노드의 소프트맥스(softmax) 층입니다. 이 층은 10개의 확률을 반환하고 반환된 값의 전체 합은 1입니다. 각 노드는 현재 이미지가 10개 클래스 중 하나에 속할 확률을 출력합니다.
모델 컴파일
모델을 훈련하기 전에 필요한 몇 가지 설정이 모델 컴파일 단계에서 추가됩니다:
손실 함수(Loss function)-훈련 하는 동안 모델의 오차를 측정합니다. 모델의 학습이 올바른 방향으로 향하도록 이 함수를 최소화해야 합니다.
옵티마이저(Optimizer)-데이터와 손실 함수를 바탕으로 모델의 업데이트 방법을 결정합니다.
지표(Metrics)-훈련 단계와 테스트 단계를 모니터링하기 위해 사용합니다. 다음 예에서는 올바르게 분류된 이미지의 비율인 정확도를 사용합니다.
End of explanation
model.fit(train_images, train_labels, epochs=5)
Explanation: 모델 훈련
신경망 모델을 훈련하는 단계는 다음과 같습니다:
훈련 데이터를 모델에 주입합니다-이 예에서는 train_images와 train_labels 배열입니다.
모델이 이미지와 레이블을 매핑하는 방법을 배웁니다.
테스트 세트에 대한 모델의 예측을 만듭니다-이 예에서는 test_images 배열입니다. 이 예측이 test_labels 배열의 레이블과 맞는지 확인합니다.
훈련을 시작하기 위해 model.fit 메서드를 호출하면 모델이 훈련 데이터를 학습합니다:
End of explanation
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('테스트 정확도:', test_acc)
Explanation: 모델이 훈련되면서 손실과 정확도 지표가 출력됩니다. 이 모델은 훈련 세트에서 약 0.88(88%) 정도의 정확도를 달성합니다.
정확도 평가
그다음 테스트 세트에서 모델의 성능을 비교합니다:
End of explanation
predictions = model.predict(test_images)
Explanation: 테스트 세트의 정확도가 훈련 세트의 정확도보다 조금 낮습니다. 훈련 세트의 정확도와 테스트 세트의 정확도 사이의 차이는 과대적합(overfitting) 때문입니다. 과대적합은 머신러닝 모델이 훈련 데이터보다 새로운 데이터에서 성능이 낮아지는 현상을 말합니다.
예측 만들기
훈련된 모델을 사용하여 이미지에 대한 예측을 만들 수 있습니다.
End of explanation
predictions[0]
Explanation: 여기서는 테스트 세트에 있는 각 이미지의 레이블을 예측했습니다. 첫 번째 예측을 확인해 보죠:
End of explanation
np.argmax(predictions[0])
Explanation: 이 예측은 10개의 숫자 배열로 나타납니다. 이 값은 10개의 옷 품목에 상응하는 모델의 신뢰도(confidence)를 나타냅니다. 가장 높은 신뢰도를 가진 레이블을 찾아보죠:
End of explanation
test_labels[0]
Explanation: 모델은 이 이미지가 앵클 부츠(class_name[9])라고 가장 확신하고 있습니다. 이 값이 맞는지 테스트 레이블을 확인해 보죠:
End of explanation
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array[i], true_label[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
Explanation: 10개의 신뢰도를 모두 그래프로 표현해 보겠습니다:
End of explanation
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
Explanation: 0번째 원소의 이미지, 예측, 신뢰도 점수 배열을 확인해 보겠습니다.
End of explanation
# 처음 X 개의 테스트 이미지와 예측 레이블, 진짜 레이블을 출력합니다
# 올바른 예측은 파랑색으로 잘못된 예측은 빨강색으로 나타냅니다
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions, test_labels)
plt.show()
Explanation: 몇 개의 이미지의 예측을 출력해 보죠. 올바르게 예측된 레이블은 파란색이고 잘못 예측된 레이블은 빨강색입니다. 숫자는 예측 레이블의 신뢰도 퍼센트(100점 만점)입니다. 신뢰도 점수가 높을 때도 잘못 예측할 수 있습니다.
End of explanation
# 테스트 세트에서 이미지 하나를 선택합니다
img = test_images[0]
print(img.shape)
Explanation: 마지막으로 훈련된 모델을 사용하여 한 이미지에 대한 예측을 만듭니다.
End of explanation
# 이미지 하나만 사용할 때도 배치에 추가합니다
img = (np.expand_dims(img,0))
print(img.shape)
Explanation: tf.keras 모델은 한 번에 샘플의 묶음 또는 배치(batch)로 예측을 만드는데 최적화되어 있습니다. 하나의 이미지를 사용할 때에도 2차원 배열로 만들어야 합니다:
End of explanation
predictions_single = model.predict(img)
print(predictions_single)
plot_value_array(0, predictions_single, test_labels)
plt.xticks(range(10), class_names, rotation=45)
plt.show()
Explanation: 이제 이 이미지의 예측을 만듭니다:
End of explanation
prediction_result = np.argmax(predictions_single[0])
print(prediction_result)
Explanation: model.predict는 2차원 넘파이 배열을 반환하므로 첫 번째 이미지의 예측을 선택합니다:
End of explanation |
1,158 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Maskbits QA in dr8c
Step1: Check the masking
Step2: All DUPs should be in an LSLGA blob.
Step3: 1) Find all bright Gaia stars.
2) Make sure the magnitude limits are correct.
3) Make sure the masking behavior around them is correct.
Step4: Make sure the MASKBITS values are set correctly. | Python Code:
import os, time
import numpy as np
import fitsio
from glob import glob
import matplotlib.pyplot as plt
from astropy.table import vstack, Table, hstack
Explanation: Maskbits QA in dr8c
End of explanation
MASKBITS = dict(
NPRIMARY = 0x1, # not PRIMARY
BRIGHT = 0x2,
SATUR_G = 0x4,
SATUR_R = 0x8,
SATUR_Z = 0x10,
ALLMASK_G = 0x20,
ALLMASK_R = 0x40,
ALLMASK_Z = 0x80,
WISEM1 = 0x100, # WISE masked
WISEM2 = 0x200,
BAILOUT = 0x400, # bailed out of processing
MEDIUM = 0x800, # medium-bright star
GALAXY = 0x1000, # LSLGA large galaxy
CLUSTER = 0x2000, # Cluster catalog source
)
# Bits in the "brightblob" bitmask
IN_BLOB = dict(
BRIGHT = 0x1,
MEDIUM = 0x2,
CLUSTER = 0x4,
GALAXY = 0x8,
)
def gather_gaia(camera='decam'):
#dr8dir = '/global/project/projectdirs/cosmo/work/legacysurvey/dr8b'
dr8dir = '/Users/ioannis/work/legacysurvey/dr8c'
#outdir = os.getenv('HOME')
outdir = dr8dir
for cam in np.atleast_1d(camera):
outfile = os.path.join(outdir, 'check-gaia-{}.fits'.format(cam))
if os.path.isfile(outfile):
gaia = Table.read(outfile)
else:
out = []
catfile = glob(os.path.join(dr8dir, cam, 'tractor', '???', 'tractor*.fits'))
for ii, ff in enumerate(catfile[1:]):
if ii % 100 == 0:
print('{} / {}'.format(ii, len(catfile)))
cc = Table(fitsio.read(ff, upper=True, columns=['BRICK_PRIMARY', 'BRICKNAME', 'BX', 'BY',
'REF_CAT', 'REF_ID', 'RA', 'DEC', 'TYPE',
'FLUX_G', 'FLUX_R', 'FLUX_Z',
'FLUX_IVAR_G', 'FLUX_IVAR_R', 'FLUX_IVAR_Z',
'BRIGHTBLOB', 'MASKBITS', 'GAIA_PHOT_G_MEAN_MAG']))
cc = cc[cc['BRICK_PRIMARY']]
out.append(cc)
            gaia = vstack(out)
            gaia.write(outfile, overwrite=True)
    return gaia
%time gaia = gather_gaia(camera='decam')
Explanation: Check the masking
End of explanation
idup = gaia['TYPE'] == 'DUP'
assert(np.all(gaia[idup]['MASKBITS'] & MASKBITS['GALAXY'] != 0))
assert(np.all(gaia[idup]['FLUX_G'] == 0))
for band in ('G', 'R', 'Z'):
assert(np.all(gaia[idup]['FLUX_{}'.format(band)] == 0))
assert(np.all(gaia[idup]['FLUX_IVAR_{}'.format(band)] == 0))
gaia[idup]
Explanation: All DUPs should be in an LSLGA blob.
End of explanation
ibright = np.where(((gaia['MASKBITS'] & MASKBITS['BRIGHT']) != 0) * (gaia['REF_CAT'] == 'G2') * (gaia['TYPE'] != 'DUP'))[0]
#bb = (gaia['BRIGHTBLOB'][ibright] & IN_BLOB['BRIGHT'] != 0) == False
#gaia[ibright][bb]
#gaia[ibright]
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 6))
_ = ax1.hist(gaia[ibright]['GAIA_PHOT_G_MEAN_MAG'], bins=100)
ax1.set_xlabel('Gaia G')
ax1.set_title('MASKBITS & BRIGHT, REF_CAT==G2, TYPE!=DUP', fontsize=14)
isb = np.where(gaia[ibright]['GAIA_PHOT_G_MEAN_MAG'] < 13.0)[0]
isf = np.where(gaia[ibright]['GAIA_PHOT_G_MEAN_MAG'] >= 13.0)[0]
print(len(isb), len(isf))
ax2.scatter(gaia['RA'][ibright][isb], gaia['DEC'][ibright][isb], s=10, color='green', label='G<13')
ax2.scatter(gaia['RA'][ibright][isf], gaia['DEC'][ibright][isf], s=10, color='red', alpha=0.5, label='G>=13')
ax2.legend(fontsize=14, frameon=True)
ax2.set_title('MASKBITS & BRIGHT, REF_CAT==G2, TYPE!=DUP', fontsize=14)
#ax.set_xlim(136.8, 137.2)
#ax.set_ylim(32.4, 32.8)
print(np.sum(gaia['BRIGHTBLOB'][ibright][isf] & IN_BLOB['BRIGHT'] != 0))
check = np.where(gaia['BRIGHTBLOB'][ibright][isf] & IN_BLOB['BRIGHT'] == 0)[0] # no bright targeting bit set
for key in MASKBITS.keys():
print(key, np.sum(gaia['MASKBITS'][ibright][isf][check] & MASKBITS[key] != 0))
gaia[ibright][isf][check]
Explanation: 1) Find all bright Gaia stars.
2) Make sure the magnitude limits are correct.
3) Make sure the masking behavior around them is correct.
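A small helper can make these bit checks easier to read (a sketch, not part of the original QA code), using the MASKBITS dictionary defined above:
def decode_maskbits(value):
    '''Return the names of all MASKBITS flags set in an integer maskbits value.'''
    return [name for name, bit in MASKBITS.items() if value & bit]
print(decode_maskbits(MASKBITS['BRIGHT'] | MASKBITS['MEDIUM']))   # ['BRIGHT', 'MEDIUM'] (order may vary)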
End of explanation
mask = fitsio.read('decam/coadd/132/1325p325/legacysurvey-1325p325-maskbits.fits.fz')
#print(mask.max())
c = plt.imshow(mask > 0, origin='lower')
#plt.colorbar(c)
ww = gaia['BRICKNAME'] == '1325p325'
eq = []
for obj in gaia[ww]:
eq.append(mask[int(obj['BY']), int(obj['BX'])] == obj['MASKBITS'])
assert(np.all(eq))
Explanation: Make sure the MASKBITS values are set correctly.
End of explanation |
1,159 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
whatever-forever
Create reusable, higher-order functions using declarative syntaxes in Python.
Installation
pip install whatever-forever
Basic Usage
Chaining <small>in Python</small>
Step1: A random list
Step2: Syntactic Sugar | Python Code:
from whatever import *
__my_chain = __x(5).range.map(lambda x: x+3).list
__my_chain
Explanation: whatever-forever
Create reusable, higher-order functions using declarative syntaxes in Python.
Installation
pip install whatever-forever
Basic Usage
Chaining <small>in Python</small>
End of explanation
from random import random
__random_list = __x(5).range.map(lambda x: random()).list.value()
str(__random_list)
Explanation: A random list
End of explanation
from random import random
__x(__random_list.__()) * (lambda s: '%3.2f' % s) | list
from random import random
((__x(__random_list.__()) + (lambda x: x >.5) )
* (lambda s: '%3.2f' % s)
| list
)
Explanation: Syntactic Sugar
End of explanation |
1,160 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plot multiple volcanic data sets from the FITS (FIeld Time Series) database
In this notebook we will plot data of multiple types from volcano observatory instruments using data from the FITS (FIeld Time Series) database. This notebook assumes that the reader has either read the previous FITS data access and plotting Jupyter Notebook or has a basic understanding of Python. Some of the code from the previous notebook has been brought over in the form of package imports and a function in the first code segment.
To begin, run the following code segment
Step1: Next we specify the sites and corresponding data types we want to view. Volcanic data has many types, and FITS database TypeID may not be obvious, so this may be a useful resource if the query isn't working.
To discover what geodetic sites exist, browse the GeoNet Network Maps. Volcanic sites can be found by data type in the FITS GUI.
At the Ruapehu Crater Lake (RU001) lake temperature and Mg<sup>2+</sup> concentration are used (in combination with the lake level and wind speed) to model the amount of energy that enters the lake from below. In the next code segment we will gather this data.
Step2: The only difference between this code segment and the corresponding segment of the previous notebook is that here we store DataFrame objects in a list and generate them using a for loop. Again we can plot the data in a simple plot
Step3: While the above plot may succeed in showing the two data series on the same figure, it doesn't do it in a very useful way. We can use a few features of matplotlib to improve the readability of the figure
Step4: This figure is much easier to compare the data series in, but it is also very cluttered. The next code segment plots each data series in separate subplots within the same figure to maximise readability without reducing the operator's ability to compare the two data series.
Step5: Which of these two plot styles is best is data-dependent and a matter of preference. When only two data series are being plotted it is fairly easy to overlay the two, but when more than two are used subplotting quickly becomes favourable.
Another useful dataset used for volcanic activity observation is the CO<sub>2</sub>/SO<sub>2</sub> ratio, as high values of this ratio can indicate a fresh batch of magma beneath a volcano. We will look now at the dataset for monthly airborne measurements of the two gases at White Island. As multiple collection methods exist for these data types, we will need to expand the build_query function to allow methodID specification.
Step6: Now we will gather the data, do a few specific modifications, and then present it. If you want to change the data types used here the code will fail. This is because there are hard-coded variable redefinitions that require this particular type of data. Consider this code segment an example, and not a modifiable script like the other segments of this notebook. | Python Code:
# Import packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# Define functions
def build_query(site, data_type):
'''
Take site code and data type and generate a FITS API query for an observations csv file
'''
# Ensure parameters are in the correct format for use with the FITS API
site = str.upper(site) # ensure site code is upper case
# Build a FITS API query by combining parameter:value pairs in the query format
query_suffix = 'siteID=%s&typeID=%s' % (site, data_type)
# Combine the query parameter=value string with the FITS observation data URL
URL = 'https://fits.geonet.org.nz/observation?' + query_suffix
return URL
Explanation: Plot multiple volcanic data sets from the FITS (FIeld Time Series) database
In this notebook we will plot data of multiple types from volcano observatory instruments using data from the FITS (FIeld Time Series) database. This notebook assumes that the reader has either read the previous FITS data access and plotting Jupyter Notebook or has a basic understanding of Python. Some of the code from the previous notebook has been brought over in the form of package imports and a function in the first code segment.
To begin, run the following code segment:
End of explanation
# Set sites and respective data types in lists.
sites = ['RU001', 'RU001']
data_types = ['t', 'Mg-w']
# Ensure input is in list format
if type(sites) != list:
site = sites
sites = []
sites.append(site)
if type(data_types) != list:
temp_data_types = data_types
data_types = []
data_types.append(temp_data_types)
# Check that each site has a corresponding data type and vice versa
if len(sites) != len(data_types):
print('Number of sites and data types are not equal!')
# Create a list to store DataFrame objects in
data = [[] for i in range(len(sites))]
# Parse csv data from the FITS database into the data list
for i in range(len(sites)):
URL = build_query(sites[i], data_types[i]) # FITS API query building function
try:
data[i] = pd.read_csv(URL, names=['date-time', data_types[i], 'error'], header=0, parse_dates = [0], index_col = 0)
except:
print('Site or data type does not exist')
Explanation: Next we specify the sites and corresponding data types we want to view. Volcanic data has many types, and FITS database TypeID may not be obvious, so this may be a useful resource if the query isn't working.
To discover what geodetic sites exist, browse the GeoNet Network Maps. Volcanic sites can be found by data type in the FITS GUI.
At the Ruapehu Crater Lake (RU001) lake temperature and Mg<sup>2+</sup> concentration are used (in combination with the lake level and wind speed) to model the amount of energy that enters the lake from below. In the next code segment we will gather this data.
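For example, the query built for the Ruapehu lake temperature series looks like the following (using the build_query function defined above):
print(build_query('RU001', 't'))
# https://fits.geonet.org.nz/observation?siteID=RU001&typeID=t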
End of explanation
# Plot the data on one figure
colors = ['red', 'blue']
for i in range(len(data)):
data[i].loc[:, data_types[i]].plot(marker='o', linestyle=':', color = colors[i])
plt.xlabel('Time', fontsize = 12)
plt.ylabel('')
plt.show()
Explanation: The only difference between this code segment and the corresponding segment of the previous notebook is that here we store DataFrame objects in a list and generate them using a for loop. Again we can plot the data in a simple plot:
End of explanation
# Generate blank figure to plot onto
fig, ax1 = plt.subplots()
# Plot the first data series onto the figure
data[0].loc[:, data_types[0]].plot(marker='o', linestyle=':', ax = ax1, color = colors[0], label = data_types[0])
# Plot the second data series onto the figure
ax2 = ax1.twinx() # Share x axis between two y axes
data[1].loc[:, data_types[1]].plot(marker='o', linestyle=':', ax = ax2, color = colors[1], label = data_types[1])
# Make a legend for both plots
plot1, labels1 = ax1.get_legend_handles_labels()
plot2, labels2 = ax2.get_legend_handles_labels()
ax1.legend(plot1 + plot2, labels1 + labels2, loc = 0)
# Tidy up plot
ax1.set_xlabel('Time', rotation = 0, labelpad = 15, fontsize = 12)
ax1.set_ylabel(data_types[0], rotation = 0, labelpad = 35, fontsize = 12)
ax2.set_ylabel(data_types[1], rotation = 0, labelpad = 35, fontsize = 12)
plt.title(data_types[0] + ' and ' + data_types[1] + ' data for ' + sites[0] + ' and ' + sites[1], fontsize = 12)
plt.show()
Explanation: While the above plot may succeed in showing the two data series on the same figure, it doesn't do it in a very useful way. We can use a few features of matplotlib to improve the readability of the figure:
End of explanation
# New figure
plt.figure()
# Plot first data series onto subplot
ax1 = plt.subplot(211) # Generate first subplot
data[0].loc[:, data_types[0]].plot(marker='o', linestyle=':', ax = ax1, color = colors[0], label = data_types[0])
plt.title(data_types[0] + ' and ' + data_types[1] + ' data for ' + sites[0] + ' and ' + sites[1], fontsize = 12)
# Plot second data series onto second subplot
ax2 = plt.subplot(212, sharex = ax1)
data[1].loc[:, data_types[1]].plot(marker='o', linestyle=':', color = colors[1], label = data_types[1])
# Tidy up plot
ax2.set_xlabel('Time', rotation = 0, labelpad = 15, fontsize = 12)
ax1.set_ylabel(data_types[0], rotation = 0, labelpad = 35, fontsize = 12)
ax2.set_ylabel(data_types[1], rotation = 0, labelpad = 35, fontsize = 12)
# Remove messy minor x ticks
ax1.tick_params(axis = 'x', which = 'minor', size = 0)
ax2.tick_params(axis = 'x', which = 'minor', size = 0)
plt.show()
Explanation: This figure is much easier to compare the data series in, but it is also very cluttered. The next code segment plots each data series in separate subplots within the same figure to maximise readability without reducing the operator's ability to compare the two data series.
End of explanation
# Define functions
def build_query(site, data_type, method_type):
'''
Take site code and data type and generate a FITS API query for an observations csv file
'''
# Ensure parameters are in the correct format for use with the FITS API
site = str.upper(site) # ensure site code is upper case
# Build a FITS API query by combining parameter:value pairs in the query format
query_suffix = 'siteID=%s&typeID=%s&methodID=%s' % (site, data_type, method_type)
# Combine the query parameter=value string with the FITS observation data URL
URL = 'https://fits.geonet.org.nz/observation?' + query_suffix
return URL
Explanation: Which of these two plot styles is best is data-dependent and a matter of preference. When only two data series are being plotted it is fairly easy to overlay the two, but when more than two are used subplotting quickly becomes favourable.
Another useful dataset used for volcanic activity observation is the CO<sub>2</sub>/SO<sub>2</sub> ratio, as high values of this ratio can indicate a fresh batch of magma beneath a volcano. We will look now at the dataset for monthly airborne measurements of the two gases at White Island. As multiple collection methods exist for these data types, we will need to expand the build_query function to allow methodID specification.
End of explanation
# Redefine variables
sites = ['WI000','WI000']
data_types = ['SO2-flux-a', 'CO2-flux-a']
method_types = ['cont', 'cont']
# Check that each site has a corresponding data type and vice versa
if (len(sites) != len(data_types)) or (len(sites) != len(method_types)) or (len(data_types) != len(method_types)):
print('Number of sites, data types, and collection methods are not all equal!')
# Create a list to store DataFrame objects in
data = [[] for i in range(len(sites))]
# Parse csv data from the FITS database into the data list
for i in range(len(sites)):
URL = build_query(sites[i], data_types[i], method_types[i]) # FITS API query building function
try:
data[i] = pd.read_csv(URL, names=['date-time', data_types[i], 'error'], header=0, parse_dates = [0], index_col = 0)
except:
print('Site or data type does not exist')
# Remove non-synchronous measurements in the two data series
data[0] = data[0][data[0].index.isin(data[1].index)]
data[1] = data[1][data[1].index.isin(data[0].index)]
# Hard-code in the ratio calculation
ratio = pd.DataFrame() # make an empty DataFrame
ratio['value'] = data[1]['CO2-flux-a'] / data[0]['SO2-flux-a'] # calculate the ratio between observations and call it value
ratio.index = data[1].index # the ratio index is the CO2 flux index (observation times)
# Plot the dataset
ratio.loc[:,'value'].plot(marker='o', linestyle=':', color='blue')
# Add functional aspects to plot
plt.xlabel('Time', fontsize = 12)
plt.ylabel('Ratio', fontsize = 12)
plt.title('Ratio of ' + data_types[0] + ' and ' + data_types[1] + ' for site ' + sites[0], fontsize = 12)
plt.show()
Explanation: Now we will gather the data, do a few specific modifications, and then present it. If you want to change the data types used here the code will fail. This is because there are hard-coded variable redefinitions that require this particular type of data. Consider this code segment an example, and not a modifiable script like the other segments of this notebook.
End of explanation |
1,161 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Feature importances with forests of trees
This examples shows the use of forests of trees to evaluate the importance of
features on an artificial classification task. The red bars are the feature
importances of the forest, along with their inter-trees variability.
Inspired from sciki-tutorials.
Step1: Read dataset with Pandas.
As previous examples, we use pandas to read datasets, and standarize the data.
Step2: Forest trees to compute feature importances.
Here we build a forest to compute feature importances.
Choosing feature importances is a complicated task and is well explained elsewhere. Please read | Python Code:
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
Explanation: Feature importances with forests of trees
This example shows the use of forests of trees to evaluate the importance of
features on an artificial classification task. The red bars are the feature
importances of the forest, along with their inter-trees variability.
Inspired by the scikit-learn tutorials.
End of explanation
#I use this dataset because this has clearly separated cathegories,
#Read the database using pandas,
#Note that bad lines are omitted with error_bad_lines=False
df = pd.read_csv('https://archive.ics.uci.edu/ml/'
'machine-learning-databases/00236/seeds_dataset.txt', header=None, sep="\t", error_bad_lines=False)
#The headers are not given in the dataset, so we give them afterwords:
#1. area A,
#2. perimeter P,
#3. compactness C = 4*pi*A/P^2,
#4. length of kernel,
#5. width of kernel,
#6. asymmetry coefficient
#7. length of kernel groove.
#8. Class: 1=Kama, 2=Rosa, 3=Canadian
column_header= ["area","perimeter","compactness","kernel-length","kernel-width",
"asymmetry","kernel-groove-length","class"]
df.columns = column_header
#This shows the header of the database:
df.head()
#Extract the class labels (1=Kama, 2=Rosa, 3=Canadian):
y = df.loc[:,'class']
#Extract the feature columns:
X=df.iloc[:,0:7]
#This is to convert the csv dictionary into a numpy matrix to later standarize:
X=X.as_matrix()
nfeature=X.shape[1]
# standardize features
X_std = np.copy(X)
for ifeat in range(0,nfeature):
X_std[:,ifeat] = (X[:,ifeat] - X[:,ifeat].mean()) / X[:,ifeat].std()
Explanation: Read dataset with Pandas.
As in previous examples, we use pandas to read the dataset and standardize the data.
End of explanation
# Build a forest and compute the feature importances
forest = ExtraTreesClassifier(n_estimators=250,
random_state=0)
forest.fit(X, y)
importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_],
axis=0)
indices = np.argsort(importances)[::-1]
# Print the feature ranking
print("Feature ranking:")
for f in range(X.shape[1]):
print("%d. feature %d (%20s): %f" % (f + 1, indices[f],column_header[indices[f]], importances[f]))
# Plot the feature importances of the forest
plt.figure()
plt.title("Feature importances")
plt.bar(range(X.shape[1]), importances[indices],
color="r", yerr=std[indices], align="center")
plt.xticks(range(X.shape[1]), indices)
plt.xlim([-1, X.shape[1]])
plt.show()
Explanation: Forest trees to compute feature importances.
Here we build a forest to compute feature importances.
Choosing feature importances is a complicated task and is well explained elsewhere. Please read:
https://en.wikipedia.org/wiki/Feature_selection
http://alexperrier.github.io/jekyll/update/2015/08/27/feature-importance-random-forests-gini-accuracy.html
Feature selection can result in more cost-effective models by reducing the number of features when the feature count is large.
"Regularized trees penalize using a variable similar to the variables selected at previous tree nodes for splitting the current node. Regularized trees only need build one tree model (or one tree ensemble model) and thus are computationally efficient." [Wiki]
End of explanation |
1,162 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Traveling Salesman problem
Names of group members
// put your names here!
Goals of this assignment
The main goal of this assignment is to use Monte Carlo methods to find the shortest path between several cities - the "Traveling Salesman" problem. This is an example of how randomization can be used to optimize problems that would be incredibly computationally expensive (and sometimes impossible) to solve exactly.
The Traveling Salesman problem
The Traveling Salesman Problem is a classic problem in computer science where the focus is on optimization. The problem is as follows
Step1: This code sets up everything we need
Given a number of cities, set up random x and y positions and calculate a table of distances between pairs of cities (used for calculating the total trip distance). Then set up an array that controls the order that the salesman travels between cities, and plots out the initial path.
Step2: Put your code below this!
Your code should take some number of steps, doing the following at each step | Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from IPython.display import display, clear_output
def calc_total_distance(table_of_distances, city_order):
'''
Calculates distances between a sequence of cities.
Inputs: N x N table containing distances between each pair of the N
cities, as well as an array of length N+1 containing the city order,
which starts and ends with the same city (ensuring that the path is
closed)
Returns: total path length for the closed loop.
'''
total_distance = 0.0
# loop over cities and sum up the path length between successive pairs
for i in range(city_order.size-1):
total_distance += table_of_distances[city_order[i]][city_order[i+1]]
return total_distance
def plot_cities(city_order,city_x,city_y):
'''
Plots cities and the path between them.
Inputs: ordering of cities, x and y coordinates of each city.
Returns: a plot showing the cities and the path between them.
'''
# first make x,y arrays
x = []
y = []
# put together arrays of x and y positions that show the order that the
# salesman traverses the cities
for i in range(0, city_order.size):
x.append(city_x[city_order[i]])
y.append(city_y[city_order[i]])
# append the first city onto the end so the loop is closed
x.append(city_x[city_order[0]])
y.append(city_y[city_order[0]])
#time.sleep(0.1)
clear_output(wait=True)
display(fig) # Reset display
fig.clear() # clear output for animation
plt.xlim(-0.2, 20.2) # give a little space around the edges of the plot
plt.ylim(-0.2, 20.2)
# plot city positions in blue, and path in red.
plt.plot(city_x,city_y, 'bo', x, y, 'r-')
Explanation: The Traveling Salesman problem
Names of group members
// put your names here!
Goals of this assignment
The main goal of this assignment is to use Monte Carlo methods to find the shortest path between several cities - the "Traveling Salesman" problem. This is an example of how randomization can be used to optimize problems that would be incredibly computationally expensive (and sometimes impossible) to solve exactly.
The Traveling Salesman problem
The Traveling Salesman Problem is a classic problem in computer science where the focus is on optimization. The problem is as follows: Imagine there is a salesman who has to travel to N cities. The order is unimportant, as long as he only visits each city once on each trip, and finishes where he started. The salesman wants to keep the distance traveled (and thus travel costs) as low as possible. This problem is interesting for a variety of reasons - it applies to transportation (finding the most efficient bus routes), logistics (finding the best UPS or FedEx delivery routes for some number of packages), or in optimizing manufacturing processes to reduce cost.
The Traveling Salesman Problem is extremely difficult to solve for large numbers of cities - testing every possible combination of cities would take N! (N factorial) individual tests. For 10 cities, this would require 3,628,800 separate tests. For 20 cities, this would require 2,432,902,008,176,640,000 (approximately $2.4 \times 10^{18}$) tests - if you could test one combination per microsecond ($10^{-6}$ s) it would take approximately 76,000 years! For 30 cities, at the same rate testing every combination would take more than one billion times the age of the Universe. As a result, this is the kind of problem where a "good enough" answer is sufficient, and where randomization comes in.
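These counts are easy to check with Python's math module (a short aside, not part of the assignment code):
import math
for n in (10, 20, 30):
    print(n, math.factorial(n))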
A good local example of a solution to the Traveling Salesman Problem is an optimized Michigan road trip calculated by a former MSU graduate student (and one across the US). There's also a widely-used software library for solving the Traveling Salesman Problem; the website has some interesting applications of the problem!
End of explanation
# number of cities we'll use.
number_of_cities = 30
# seed for random number generator so we get the same value every time!
np.random.seed(2024561414)
# create random x,y positions for our current number of cities. (Distance scaling is arbitrary.)
city_x = np.random.random(size=number_of_cities)*20.0
city_y = np.random.random(size=number_of_cities)*20.0
# table of city distances - empty for the moment
city_distances = np.zeros((number_of_cities,number_of_cities))
# calculate distnace between each pair of cities and store it in the table.
# technically we're calculating 2x as many things as we need (as well as the
# diagonal, which should all be zeros), but whatever, it's cheap.
for a in range(number_of_cities):
for b in range(number_of_cities):
city_distances[a][b] = ((city_x[a]-city_x[b])**2 + (city_y[a]-city_y[b])**2 )**0.5
# create the array of cities in the order we're going to go through them
city_order = np.arange(city_distances.shape[0])
# tack on the first city to the end of the array, since that ensures a closed loop
city_order = np.append(city_order, city_order[0])
Explanation: This code sets up everything we need
Given a number of cities, set up random x and y positions and calculate a table of distances between pairs of cities (used for calculating the total trip distance). Then set up an array that controls the order that the salesman travels between cities, and plots out the initial path.
End of explanation
fig = plt.figure()
# Put your code here!
Explanation: Put your code below this!
Your code should take some number of steps, doing the following at each step:
Randomly swap two cities in the array of cities (except for the first/last city)
Check the total distance traversed by the salesman
If the new ordering results in a shorter path, keep it. If not, throw it away.
Plot the shorter of the two paths (the original one or the new one)
Also, keep track of the steps and the minimum distance traveled as a function of number of steps and plot out the minimum distance as a function of step!
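One possible sketch of such a loop, using the functions and arrays defined above (many variations would work equally well):
# Sketch only - one of many valid approaches to the exercise.
N_steps = 2000
best_distance = calc_total_distance(city_distances, city_order)
step_list = []
distance_list = []
for step in range(N_steps):
    trial_order = np.copy(city_order)
    # pick two positions to swap, never touching the first/last (closing) city
    i, j = np.random.randint(1, number_of_cities, size=2)
    trial_order[i], trial_order[j] = trial_order[j], trial_order[i]
    trial_distance = calc_total_distance(city_distances, trial_order)
    if trial_distance < best_distance:
        city_order = trial_order
        best_distance = trial_distance
    plot_cities(city_order, city_x, city_y)
    step_list.append(step)
    distance_list.append(best_distance)
plt.figure()
plt.plot(step_list, distance_list)
plt.xlabel('step')
plt.ylabel('minimum distance so far')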
End of explanation |
1,163 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Following the theano Tutorial
Step1: Baby Steps - Algebra
Adding two Scalars
Step2: "Prefer constructors like matrix, vector and scalar to dmatrix, dvector and dscalar because the former will give you float32 variables when floatX=float32." - cf. Using the GPU Theano tutorial
Step3: Adding exercise 1, cf. http
Step4: "At this point it would be wise to begin familiarizing yourself more systematically with Theano’s fundamental objects and operations by browsing this section of the library
Step5: Converting from Python Objects
Step6: Back to More Examples... http
Step7: \begin{gathered}
s(x) = \frac{1}{1+\exp{-x} } = \frac{1+\tanh{(x/2) } }{2}
\end{gathered}
Step8: Computing More than one Thing at the Same Time (!!!)
Step9: Setting a Default Value for an Argument
Step10: "Inputs with default values must follow inputs without default values (like Python’s functions). There can be multiple inputs with default values. These parameters can be set positionally or by name, as in standard Python
Step11: Using Shared Variables
Step12: "It is possible to reset the state. Just use the .set_value() method
Step13: "Also, Theano has more control over where and how shared variables are allocated, which is one of the important elements of getting good performance on the GPU."
Step14: Copying functions
Step15: "We can use copy() to create a similar accumulator but with its own internal state using the swap parameter, which is a dictionary of shared variables to exchange
Step16: Using Random Numbers
Step17: "The RandomStream only work on the CPU, MRG31k3p work on the CPU and GPU. CURAND only work on the GPU." cf. http
Step18: "When we add the extra argument no_default_updates=True to function (as in g), then the random number generator state is not affected by calling the returned function. So, for example, calling g multiple times will return the same numbers."
Step19: "An important remark is that a random variable is drawn at most once during any single function execution. So the nearly_zeros function is guaranteed to return approximately 0 (except for rounding error) even though the rv_u random variable appears three times in the output expression."
Seeding Streams
Step20: Sharing Streams Between Functions
Step21: Copying Random State Between Theano Graphs
Step22: Other Random Distributions
are found here at other distributions implemented
A Real Example
Step23: Derivatives in Theano
Computing Gradients
Step24: For this
$
\begin{gathered}
\frac{d (x^2) }{ dx} = 2 \cdot x
\end{gathered}
$
Step25: fill((x ** 2), 1.0) means to make a matrix of the same shape as x ** 2 and fill it with 1.0
Step26: A plot of the gradient of the logistic function, with x on the x-axis and $ds(x)/dx$ on the y-axis
Step27: Computing the Jacobian
Step28: Computing the Hessian
Step29: R-operator
Step30: L-operator
Step31: Hessian times a Vector
Step32: or, making use of the R-operator
Step33: Conditions
Step34: logistic regression on the GPU
cf. Using the GPU - Theano documentation
Solution for the GPU implementation
Step35: scan - Looping in Theano
Step36: Simple loop with accumulation
Step37: From this link, Loop, which for some reason a Google search overlooks most of the time, I then found this
cf. good ipython notebook with explanation and more examples, a scan tutorial written by Pierre Luc Carrier
Example 1
Step38: The parameter fn receives a function or lambda expression that expresses computation to do at every iteration.
Since we wish to iterate over both X1 and X2 simultaneously, provide them as sequences. This means that every iteration will operate on 2 inputs; an element from X1 and the corresponding element from X2.
Step39: output contains outputs of fn from every timestep concatenated into a tensor. In our case, the output of a single timestep is a scalar so output is a vector where output[i] is the output of the ith iteration.
updates details if and how the execution of scan updates any shared variable in the graph. It should be provided as an argument when compiling the Theano function.
Step40: If updates is omitted, the state of any shared variables modified by Scan will not be updated properly. Random number sampling, for instance, relies on shared variables. If updates is not provided, the state of the random number generator won't be updated properly and the same numbers might be sampled repeatedly. Always provide updates when compiling Theano functions.
Step41: An interesting thing is that we never explicitl told Scan how many iterations to run. It was automatically inferred. When given sequences, Scan will run as many iterations as length of the shortest sequence.
Step42: So we can do the following with scan
Step43: What about reduce?
Step44: What about matrices, tensors?
Step45: So this is the mathematical equivalent to
$ t=0,1,\dots T-1$, $t\in \mathbb{Z}^+$,
$X\in \mathbb{R}^T$
$$ \forall \, t = 0, 1, \dots T-1, \
f
Step46: Example 2
Step47: For the sake of variety (and so lambda is the same as defining a Python function), define computation to be done at every iteration of the loop using step(), instead of lambda expression.
To have $W$ and $b$ be available at every iteration, use the argument non_sequences. Contrary to sequences, non-sequences are not iterated upon by Scan.
This means step() function will need to operate on 3 symbolic inputs; 1 for our sequences $X$, 1 for each non-sequences $W$ and $b$.
The inputs that correspond to the non-sequences are always last and in same order at the non-sequences provided to Scan. This means correspondence between inputs of the step() function and arguments to scan() is the following
Step48: Notice how scan is on the first dimension (or, counting from 0, the 0th dimension), always. So 1 way to think about it is discretized time $t\in \mathbb{R} \xrightarrow{ \text{ discretize } } t\in\mathbb{Z}^+$
$$
X
Step49: Example 3
Step50: The trick part is informing Scan that our step function expects as input the output of a previous iteration. A new parameter, outputs_info, achieves this. This parameter is used to tell Scan how we intend to use each of the ouputs that are computer at each iteration.
This parameter can be omitted (like we have done so far) when the step function doesn't depend on any output of a previous iteration.
outputs_info takes a sequence with 1 element for every output of the step() function
Step51: For input $\begin{aligned} & X
Step52: This is classically what (parallel) scan should do.
Indeed,
Step53: In summary, the dictionary between the mathematics and the Python theano code for scan seems to be the following
Step54: Defining the value of outputs_info
Step56: EY
Step57: Exercises
Exercise 1 - Computing a polynomial
Step58: Solution
cf. scan_ex1_solution.py
Step61: Exercise 2 - Sampling without replacement
takes as input a vector of probabilities and a scalar
Step63: EY
Step64: Solution from author (Pierre Le Duc?)
cf. scan_ex2_solution.py
Step65: Using theano's crossentropy
Step66: making an example for categorical crossentropy | Python Code:
%matplotlib inline
from theano import *
import theano.tensor as T
Explanation: Following the theano Tutorial
End of explanation
import numpy
from theano import function
x = T.dscalar('x')
y = T.dscalar('y')
z = x+y
f = function([x,y],z)
print(type(x), type(y), type(z), type(f)) # good to know what these
# new classes are in theano
f(2,3)
numpy.allclose(f(16.3,12.1),28.4)
x.type
T.dscalar
x.type is T.dscalar
Explanation: Baby Steps - Algebra
Adding two Scalars
End of explanation
xf = T.scalar('xf')
yf = T.scalar('yf')
zf = xf + yf
ff = function([xf,yf],zf)
from theano import pp
print(pp(z))
print(pp(zf))
x = T.dmatrix('x')
y = T.dmatrix('y')
z = x + y
f = function([x,y],z)
f([[1,2],[3,4]], [[10,20],[30,40]])
f(numpy.array([[1,2],[3,4]]),numpy.array([[10,20],[30,40]]))
xf = T.matrix('xf')
yf = T.matrix('yf')
zf = xf + yf
ff = function([xf,yf],zf)
ff([[1,2],[3,4]], [[10,20],[30,40]])
Explanation: "Prefer constructors like matrix, vector and scalar to dmatrix, dvector and dscalar because the former will give you float32 variables when floatX=float32." - cf. Using the GPU Theano tutorial
End of explanation
a = theano.tensor.vector()
b = theano.tensor.vector()
out = a**2 + b**2 + 2 * a * b
f = theano.function([a,b],out)
print(f([1,2],[4,5]))
Explanation: Adding exercise 1, cf. http://deeplearning.net/software/theano/tutorial/adding.html
End of explanation
dtensor5 = T.TensorType('float64', (False,)*5)
x = dtensor5()
z = dtensor5('z')
my_dmatrix = T.TensorType('float64', (False,)*2)
x = my_dmatrix()
my_dmatrix == T.dmatrix
Explanation: "At this point it would be wise to begin familiarizing yourself more systematically with Theano’s fundamental objects and operations by browsing this section of the library: Basic Tensor Functionality." cf. More Examples
Custom tensor types
End of explanation
x = shared(numpy.random.randn(3,4))
Explanation: Converting from Python Objects
End of explanation
x = T.dmatrix('x')
s = 1 / ( 1 + T.exp(-x))
logistic = theano.function([x],s)
logistic([[0,1],[-1,-2]])
Explanation: Back to More Examples... http://deeplearning.net/software/theano/tutorial/examples.html
End of explanation
s2 = (1 + T.tanh(x/2))/2
logistic2 = theano.function([x],s2)
logistic2([[0,1],[-1,-2]])
Explanation: \begin{gathered}
s(x) = \frac{1}{1+e^{-x}} = \frac{1+\tanh(x/2)}{2}
\end{gathered}
End of explanation
a,b = T.dmatrices('a','b')
diff = a-b
abs_diff = abs(diff)
diff_squared = diff**2
f = theano.function([a,b],[diff,abs_diff,diff_squared])
f([[1,1],[1,1]], [[0,1],[2,3]])
Explanation: Computing More than one Thing at the Same Time (!!!)
End of explanation
from theano import In
from theano import function
x,y = T.dscalars('x','y')
z = x + y
f= function([x,In(y,value=1)],z)
f(33)
f(33,2)
Explanation: Setting a Default Value for an Argument
End of explanation
x,y,w = T.dscalars('x', 'y', 'w')
z = (x+y)*w
f = function([x,In(y,value=1),In(w,value=2,name='w_by_name')],z)
f(33)
f(33,2)
f(33,0,1)
f(33,w_by_name=1)
f(33,w_by_name=1,y=0)
Explanation: "Inputs with default values must follow inputs without default values (like Python’s functions). There can be multiple inputs with default values. These parameters can be set positionally or by name, as in standard Python:"
End of explanation
from theano import shared
state = shared(0)
inc = T.iscalar('inc')
accumulator = function([inc],state,updates=[(state,state+inc)])
print(state.get_value())
accumulator(1)
print(state.get_value())
accumulator(300)
print(state.get_value())
Explanation: Using Shared Variables
End of explanation
state.set_value(-1)
accumulator(3)
print(state.get_value())
decrementor = function([inc],state, updates=[(state,state-inc)])
decrementor(2)
print(state.get_value())
Explanation: "It is possible to reset the state. Just use the .set_value() method:"
End of explanation
fn_of_state = state * 2 + inc
# The type of foo must match the shared variable we are replacing
# with the "givens"
foo = T.scalar(dtype=state.dtype)
skip_shared = function([inc,foo], fn_of_state, givens=[(state,foo)])
skip_shared(1,3)
print(state.get_value())
Explanation: "Also, Theano has more control over where and how shared variables are allocated, which is one of the important elements of getting good performance on the GPU."
End of explanation
inc = T.iscalar('inc')
accumulator = theano.function([inc],state, updates=[(state,state+inc)])
accumulator(10)
print(state.get_value())
Explanation: Copying functions
End of explanation
new_state = theano.shared(0)
new_accumulator = accumulator.copy(swap={state:new_state})
new_accumulator(100)
print(new_state.get_value())
print(state.get_value())
null_accumulator = accumulator.copy(delete_updates=True)
null_accumulator(9000)
print(state.get_value())
Explanation: "We can use copy() to create a similar accumulator but with its own internal state using the swap parameter, which is a dictionary of shared variables to exchange:"
End of explanation
from theano.tensor.shared_randomstreams import RandomStreams
from theano import function
srng = RandomStreams(seed=234)
rv_u = srng.uniform((2,2)) # represents a random stream of 2x2 matrices
rv_n = srng.normal((2,2))
f = function([], rv_u)
g = function([], rv_n, no_default_updates=True) # Not updating rv_n.rng
nearly_zeros = function([],rv_u+rv_u - 2 * rv_u)
Explanation: Using Random Numbers
End of explanation
from theano.sandbox.rng_mrg import MRG_RandomStreams
from theano.sandbox.cuda import CURAND_RandomStreams
f_val0 = f()
f_val1 = f()
Explanation: "The RandomStream only work on the CPU, MRG31k3p work on the CPU and GPU. CURAND only work on the GPU." cf. http://deeplearning.net/software/theano/tutorial/examples.html#other-implementations
End of explanation
g_val0 = g() # different numbers from f_val0 and f_val1
g_val1 = g()
Explanation: "When we add the extra argument no_default_updates=True to function (as in g), then the random number generator state is not affected by calling the returned function. So, for example, calling g multiple times will return the same numbers."
End of explanation
rng_val = rv_u.rng.get_value(borrow=True) # Get the rng for rv_u
rng_val.seed(89234) # seeds the generator
rv_u.rng.set_value(rng_val, borrow=True) # Assign back seeded rng
srng.seed(902340) # seeds rv_u and rv_n with different seeds each
Explanation: "An important remark is that a random variable is drawn at most once during any single function execution. So the nearly_zeros function is guaranteed to return approximately 0 (except for rounding error) even though the rv_u random variable appears three times in the output expression."
Seeding Streams
End of explanation
state_after_v0 = rv_u.rng.get_value().get_state()
nearly_zeros() # this affects rv_u's generator
v1 = f()
rng = rv_u.rng.get_value(borrow=True)
rng.set_state(state_after_v0)
rv_u.rng.set_value(rng,borrow=True)
v2 =f() # v2 != v1
v3=f() # v3 == v1
v2.view()
v1.view()
v3.view()
Explanation: Sharing Streams Between Functions
End of explanation
from __future__ import print_function
from theano.sandbox.rng_mrg import MRG_RandomStreams
from theano.tensor.shared_randomstreams import RandomStreams
class Graph():
def __init__(self, seed=123):
self.rng = RandomStreams(seed)
self.y = self.rng.uniform(size=(1,))
g1 = Graph(seed=123)
f1 = theano.function([], g1.y)
g2 = Graph(seed=987)
f2 = theano.function([], g2.y)
# By default, the two functions are out of sync.
f1()
f2()
def copy_random_state(g1,g2):
if isinstance(g1.rng, MRG_RandomStreams):
g2.rng.rstate = g1.rng.rstate
for (su1, su2) in zip(g1.rng.state_updates, g2.rng.state_updates):
su2[0].set_value(su1[0].get_value())
# We now copy the state of the theano random number generators.
copy_random_state(g1, g2)
f1()
f2()
Explanation: Copying Random State Between Theano Graphs
End of explanation
import numpy
import theano
import theano.tensor as T
rng = numpy.random
N = 400 # training sample size
feats = 784 # number of input variables
# generate a dataset: D = (input_values, target_class)
D = (rng.randn(N, feats), rng.randint(size=N, low=0, high=2))
training_steps = 10000
# Declare Theano symbolic variables
x = T.dmatrix("x")
y = T.dvector("y")
# initialize the weight vector w randomly
#
# this and the following bias variable b
# are shared so they keep their values
# between training iterations (updates)
w = theano.shared(rng.randn(feats), name="w")
# initialize the bias term
b = theano.shared(0., name="b")
print("Initial model:")
print(w.get_value())
print(b.get_value())
# Construct Theano expression graph
p_1 = 1 / (1 + T.exp(-T.dot(x, w) - b)) # Probability that target = 1
prediction = p_1 > 0.5 # The prediction thresholded
xent = -y * T.log(p_1) - (1-y) * T.log(1-p_1) # Cross-entropy loss function
cost = xent.mean() + 0.01 * (w ** 2).sum() # The cost to minimize
gw, gb = T.grad(cost, [w,b]) # Compute the gradient of the cost
# w.r.t weight vector w and
# bias term b
# (we shall return to this in a
# following section of this tutorial)
# Compile
train = theano.function(
inputs=[x,y],
outputs=[prediction, xent],
updates=((w,w-0.1 *gw), (b,b-0.1 * gb)))
predict = theano.function(inputs=[x], outputs=prediction)
# Train
for i in range(training_steps):
pred, err = train(D[0], D[1])
print("Final model:")
print(w.get_value())
print(b.get_value())
print("target values for D:")
print(D[1])
print("prediction on D:")
print(predict(D[0]))
Explanation: Other Random Distributions
are documented in the Theano reference ("other distributions implemented")
A Real Example: Logistic Regression
End of explanation
import numpy
import theano
import theano.tensor as T
from theano import pp
Explanation: Derivatives in Theano
Computing Gradients
End of explanation
x = T.dscalar('x')
y = x ** 2
gy = T.grad(y,x)
pp(gy) # print out the gradient prior to optimization
Explanation: For this
$
\begin{gathered}
\frac{d (x^2) }{ dx} = 2 \cdot x
\end{gathered}
$
End of explanation
f = theano.function([x],gy)
f(4)
numpy.allclose(f(94.2), 188.4)
Explanation: fill((x ** 2), 1.0) means to make a matrix of the same shape as x ** 2 and fill it with 1.0
End of explanation
x = T.dmatrix('x')
s = T.sum(1 / (1 + T.exp(-x)))
gs = T.grad(s, x)
dlogistic = theano.function([x], gs)
dlogistic([[0, 1], [-1, -2]])
Explanation: A plot of the gradient of the logistic function, with x on the x-axis and $ds(x)/dx$ on the y-axis
End of explanation
x = T.dvector('x')
y = x ** 2
J, updates = theano.scan(lambda i, y, x: T.grad(y[i], x), sequences=T.arange(y.shape[0]), non_sequences=[y,x] )
f = theano.function([x], J, updates=updates)
f([4, 4])
Explanation: Computing the Jacobian
End of explanation
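# A minimal added sanity check (not part of the original tutorial): for y = x**2 elementwise,
# the Jacobian is diag(2*x), so f([4, 4]) above should equal numpy.diag([8., 8.]).
print(numpy.diag(2 * numpy.asarray([4., 4.])))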
x = T.dvector('x')
y = x ** 2
cost = y.sum()
gy = T.grad(cost, x)
H, updates = theano.scan(lambda i, gy, x : T.grad(gy[i], x), sequences=T.arange(gy.shape[0]), non_sequences=[gy, x] )
f = theano.function([x], H, updates=updates)
f([4,4])
Explanation: Computing the Hessian
End of explanation
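# A minimal added sanity check: the Hessian of sum(x**2) is the constant matrix 2*I,
# independent of x, so f([4, 4]) above should equal 2*numpy.eye(2).
print(2 * numpy.eye(2))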
W = T.dmatrix('W')
V = T.dmatrix('V')
x = T.dvector('x')
y = T.dot(x,W)
JV = T.Rop(y, W, V)
f = theano.function([W,V,x],JV)
f([[1,1], [1,1]],[[2,2],[2,2]],[0,1])
Explanation: R-operator
End of explanation
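# A minimal added sanity check: for y = dot(x, W), the Jacobian contracted with V,
# i.e. Rop(y, W, V), evaluates to dot(x, V), so the call above should match this NumPy result.
print(numpy.dot([0, 1], [[2, 2], [2, 2]]))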
W = T.dmatrix('W')
v = T.dvector('v')
x = T.dvector('x')
y = T.dot(x,W)
VJ = T.Lop(y,W,v)
f = theano.function([v,x],VJ)
f([2,2],[0,1])
Explanation: L-operator
End of explanation
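# A minimal added sanity check: for y = dot(x, W), the vector-times-Jacobian product
# Lop(y, W, v) reduces to the outer product of x and v, so the call above should match
# numpy.outer(x, v).
print(numpy.outer([0, 1], [2, 2]))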
x = T.dvector('x')
v = T.dvector('v')
y = T.sum(x ** 2)
gy = T.grad(y, x)
vH = T.grad(T.sum( gy * v), x)
f= theano.function([x,v], vH)
f([4,4], [2,2])
Explanation: Hessian times a Vector
End of explanation
x = T.dvector('x')
v = T.dvector('v')
y = T.sum( x ** 2)
gy = T.grad(y,x)
Hv = T.Rop(gy, x, v)
f = theano.function([x,v], Hv)
f([4,4],[2,2])
Explanation: or, making use of the R-operator:
End of explanation
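# A minimal added sanity check: the Hessian of sum(x**2) is 2*I, so both
# Hessian-times-vector versions above should simply return 2*v = [4., 4.].
print(2 * numpy.asarray([2., 2.]))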
from theano import tensor as T
from theano.ifelse import ifelse
import theano, time, numpy
a,b = T.scalars('a','b')
x,y = T.matrices('x','y')
z_switch = T.switch(T.lt(a,b), T.mean(x), T.mean(y))
z_lazy = ifelse(T.lt(a, b), T.mean(x), T.mean(y))
Explanation: Conditions
End of explanation
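# A minimal follow-up sketch (mirroring the usage in the Theano docs; the names
# f_switch/f_lazy and the test values are ours): compile and call both graphs.
# switch evaluates both branches element-wise, while ifelse lazily evaluates only the
# needed branch; both return the same value here.
val1 = numpy.ones((2, 2), dtype=theano.config.floatX)
val2 = 2 * numpy.ones((2, 2), dtype=theano.config.floatX)
f_switch = theano.function([a, b, x, y], z_switch, mode=theano.Mode(linker='vm'))
f_lazy = theano.function([a, b, x, y], z_lazy, mode=theano.Mode(linker='vm'))
print(f_switch(0., 1., val1, val2), f_lazy(0., 1., val1, val2))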
rng = numpy.random
N=400
feats=784
D = (rng.randn(N, feats).astype(theano.config.floatX), rng.randint(size=N,low=0, high=2).astype(theano.config.floatX))
print(D[0].shape); D[0]
print(D[1].shape); D[1]
training_steps = 10000
# Declare Theano symbolic variables
x = theano.shared(D[0], name="x")
y = theano.shared(D[1], name="y")
w = theano.shared(rng.randn(feats).astype(theano.config.floatX),name="w")
b = theano.shared(numpy.asarray(0., dtype=theano.config.floatX),name="b")
# Setting the tag.test_value attribute gives the variable its test value, i.e.
# provide Theano with a default test-value
x.tag.test_value = D[0]
y.tag.test_value = D[1]
# Construct Theano expression graph
p_1 = 1 / ( 1+ T.exp( - T.dot( x,w) - b)) # Probabilty of having a 1
prediction = p_1 > 0.5 # the prediction that is done: 0 or 1
xent = -y * T.log(p_1) - (1-y) * T.log(1-p_1) # Cross-entropy
cost = T.cast( xent.mean(), theano.config.floatX) + 0.01 * (w ** 2).sum() # the cost to optimize
gw, gb = T.grad(cost, [w,b])
# compile exprression to functions
train = theano.function(
inputs=[],
outputs=[prediction, xent],
updates=[(w, w - 0.01 * gw), (b, b - 0.01 * gb)],
name="train")
predict = theano.function(inputs=[], outputs=prediction, name="predict")
if any([n.op.__class__.__name__ in ['Gemv', 'CGemv', 'Gemm', 'CGemm'] for n in train.maker.fgraph.toposort()]):
print('Used the cpu')
# elif
if any([n.op.__class__.__name__ in ['GpuGemm', 'GpuGemv'] for n in train.maker.fgraph.toposort()]):
print('Used the gpu')
for i in range(training_steps):
pred, err = train()
print("target values for D")
print(D[1])
print("prediction on D")
print(predict().astype(theano.config.floatX))
Explanation: logistic regression on the GPU
cf. Using the GPU - Theano documentation
Solution for the GPU implementation
End of explanation
print(y.get_value().shape)
y.get_value()
Explanation: scan - Looping in Theano
End of explanation
import numpy as np
k = theano.shared(np.int32(1),"k")
A = theano.shared(np.array(range(10)),"A")
result, updates = theano.scan(fn=lambda prior_result, A: prior_result * A,
outputs_info=T.ones_like(A),
non_sequences=A,
n_steps=k)
result, updates = theano.scan(fn=lambda prior_result, A: prior_result * A,
outputs_info=T.ones_like(A),
non_sequences=A,
n_steps=2)
final_result = result[-1]
power=theano.function(inputs=[A,k], outputs=final_result, updates=updates)
power=theano.function(inputs=[],outputs=final_result, updates=updates)
power()
power()
A.get_value()
result, updates = theano.scan(fn=lambda prior_result, A: prior_result * A,
outputs_info=T.ones_like(A),
non_sequences=A,
n_steps=k)
power=theano.function(inputs=[],outputs=final_result, updates=updates)
power()
k.get_value()
k.set_value( np.int32(3) )
power()
Explanation: Simple loop with accumulation: computing $A^k$
End of explanation
import numpy as np
from theano import sandbox
X1 = T.vector('vector1')
X2 = T.vector('vector2')
Explanation: From this link, Loop, which for some reason a Google search overlooks most of the time, I then found this
cf. good ipython notebook with explanation and more examples, a scan tutorial written by Pierre Luc Carrier
Example 1 : As simple as it gets
End of explanation
output, updates = theano.scan(fn=lambda a, b : a * b, sequences=[X1,X2])
Explanation: The parameter fn receives a function or lambda expression that expresses computation to do at every iteration.
Since we wish to iterate over both X1 and X2 simultaneously, provide them as sequences. This means that every iteration will operate on 2 inputs; an element from X1 and the corresponding element from X2.
End of explanation
f = theano.function(inputs=[X1,X2], outputs=output, updates=updates)
Explanation: output contains outputs of fn from every timestep concatenated into a tensor. In our case, the output of a single timestep is a scalar so output is a vector where output[i] is the output of the ith iteration.
updates details if and how the execution of scan updates any shared variable in the graph. It should be provided as an argument when compiling the Theano function.
End of explanation
X1_value = np.arange(0,5).astype(theano.config.floatX) # [0,1,2,3,4]
X2_value = np.arange(1,6).astype(theano.config.floatX) # [1,2,3,4,5]
f(X1_value,X2_value)
f.maker.fgraph.toposort()
output, updates = theano.scan(fn=lambda a, b : sandbox.cuda.basic_ops.gpu_from_host( a * b ), sequences=[X1,X2])
f = theano.function(inputs=[X1,X2], outputs=output, updates=updates)
f(X1_value,X2_value)
f.maker.fgraph.toposort()
Explanation: If updates is omitted, the state of any shared variables modified by Scan will not be updated properly. Random number sampling, for instance, relies on shared variables. If updates is not provided, the state of the random number generator won't be updated properly and the same numbers might be sampled repeatedly. Always provide updates when compiling Theano functions.
End of explanation
f(X1_value, X2_value[:4])
def vec_addition(a,b):
return sandbox.cuda.basic_ops.gpu_from_host( a + b )
output, updates = theano.scan(fn=vec_addition, sequences=[X1,X2])
f = theano.function(inputs=[X1,X2], outputs=output, updates=updates)
f(X1_value,X2_value)
f.maker.fgraph.toposort()
# Note: X1 is a purely symbolic variable, so it has no get_value() -- only shared variables do.
Explanation: An interesting thing is that we never explicitly told Scan how many iterations to run. It was automatically inferred. When given sequences, Scan will run as many iterations as the length of the shortest sequence.
End of explanation
alpha_expn = T.scalar('alpha') # \alpha
Xs = T.vector('Xs') # Xs \equiv X[i]
# The exponent is a scalar and cannot be iterated over, so it must be passed via non_sequences;
# the lambda then receives the current element of Xs followed by the non-sequence k.
output, updates = theano.scan(fn=lambda x, k: x**k, sequences=[Xs], non_sequences=[alpha_expn])
Explanation: So we can do the following with scan:
$\forall \, X_1,X_2 \in \mathbb{R}^d$,
$$ + : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d \
\verb|output|[i] = X_1[i] + X_2[i], \qquad \, \forall \, i = 0,1, \dots d-1
$$
$$ \odot : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d \
\verb|output|[i] = X_1[i] * X_2[i], \qquad \, \forall \, i = 0,1, \dots d-1
$$
End of explanation
def vec_addition(a,b):
return sandbox.cuda.basic_ops.gpu_from_host( a + b )
output, updates = theano.reduce(fn=vec_addition, sequences=[X1,X2],outputs_info=[None,])
f = theano.function(inputs=[X1,X2], outputs=output, updates=updates)
f(X1_value, X2_value)
def vec_elem_mult(a,b):
return sandbox.cuda.basic_ops.gpu_from_host( a*b)
output, updates = theano.reduce(fn=vec_elem_mult, sequences=[X1,X2],outputs_info=[None,])
f = theano.function(inputs=[X1,X2], outputs=output, updates=updates)
f(X1_value, X2_value)
Explanation: What about reduce?
End of explanation
X1 = T.matrix('matrix1')
X2 = T.matrix('matrix2')
output, updates = theano.scan(fn=lambda a, b : sandbox.cuda.basic_ops.gpu_from_host( a * b ), sequences=[X1,X2])
f = theano.function(inputs=[X1,X2], outputs=output, updates=updates)
X1_value = np.arange(0,6).reshape(2,3).astype(theano.config.floatX); print(X1_value)
X2_value = np.arange(1,7).reshape(2,3).astype(theano.config.floatX); print(X2_value)
f(X1_value,X2_value)
X1 = T.tensor3('tensor1')
X2 = T.tensor3('tensor2')
output, updates = theano.scan(fn=vec_addition, sequences=[X1,X2])
f = theano.function(inputs=[X1,X2], outputs=output, updates=updates)
X1_value = np.arange(0,24).reshape(2,3,4).astype(theano.config.floatX); print(X1_value)
X2_value = np.arange(1,25).reshape(2,3,4).astype(theano.config.floatX); print(X2_value)
f(X1_value,X2_value)
X_t = T.vector('X_t')
output,updates=theano.scan(fn=lambda x:x**3,sequences=[X_t,])
f=theano.function(inputs=[X_t,],outputs=output,updates=updates)
f.maker.fgraph.toposort()
f=theano.function(inputs=[X_t,],outputs=sandbox.cuda.basic_ops.gpu_from_host(output),updates=updates)
f.maker.fgraph.toposort()
X1_value = np.arange(0,5).astype(theano.config.floatX) # [0,1,2,3,4]
print(X1_value)
f(X1_value)
Explanation: What about matrices, tensors?
End of explanation
output,updates=theano.reduce(fn=lambda x:x**4,sequences=[X_t,],outputs_info=[None,])
f=theano.function(inputs=[X_t,],outputs=output,updates=updates)
f(X1_value)
Explanation: So this is the mathematical equivalent to
$ t=0,1,\dots T-1$, $t\in \mathbb{Z}^+$,
$X\in \mathbb{R}^T$
$$ \forall \, t = 0, 1, \dots T-1, \
f:\mathbb{R} \to \mathbb{R} \
X(t) \mapsto f(X(t)) = (X(t))^3 \qquad \, (\text{for example}) \
$$
End of explanation
X = T.matrix('X')
W = T.matrix('W')
b = T.vector('b')
Explanation: Example 2: Non-sequences
We need some variables to be available "as is" at every iteration of the loop. We don't want scan to iterate over them and give only part of them at every iteration.
End of explanation
def step(v,W,b):
return T.dot(v,W) + b
output,updates=theano.scan(fn=step,
sequences=[X],
non_sequences=[W,b])
print(updates)
f=theano.function(inputs=[X,W,b],
outputs=sandbox.cuda.basic_ops.gpu_from_host( output),
updates=updates)
X_value = np.arange(-3,3).reshape(3,2).astype(theano.config.floatX)
W_value = np.eye(2).astype(theano.config.floatX)
b_value=np.arange(2).astype(theano.config.floatX)
print(X_value); print(W_value); print(b_value)
f(X_value,W_value,b_value)
Explanation: For the sake of variety (and to show that a lambda expression is equivalent to defining a Python function), define the computation to be done at every iteration of the loop using step() instead of a lambda expression.
To have $W$ and $b$ be available at every iteration, use the argument non_sequences. Contrary to sequences, non-sequences are not iterated upon by Scan.
This means the step() function will need to operate on 3 symbolic inputs: one for our sequence $X$, and one for each of the non-sequences $W$ and $b$.
The inputs that correspond to the non-sequences are always last, and in the same order as the non-sequences provided to Scan. This means the correspondence between the inputs of the step() function and the arguments to scan() is the following:
* $v$ : individual element of the sequence $X$
* $W,b$ : non-sequences $W,b$, respectively
End of explanation
X = T.tensor3('X')
W = T.matrix('W')
b = T.vector('b')
#def step_left(v,W,b):
def step_left(v,W):
return T.dot(W,v) # + b
#output, updates=theano.scan(fn=step_left,sequences=[X],non_sequences=[W,b])
#f=theano.function(inputs=[X,W,b],outputs=output,updates=updates)
output, updates=theano.scan(fn=step_left,sequences=[X],non_sequences=[W])
f=theano.function(inputs=[X,W],outputs=output,updates=updates)
X_value = np.arange(-3,3).reshape(3,2,1).astype(theano.config.floatX)
W_value = np.arange(1,5).reshape(2,2).astype(theano.config.floatX)
#b_value=np.arange(2).reshape(2,1).astype(theano.config.floatX)
test_result_left =f(X_value,W_value) #,b_value)
test_result_left
test_result_left[0].shape
def step_left(v,W,b):
return T.dot(W,v) + b
output, updates=theano.scan(fn=step_left,sequences=[X],non_sequences=[W,b])
f=theano.function(inputs=[X,W,b],outputs=output,updates=updates)
b_value=np.arange(2).reshape(2).astype(theano.config.floatX)
test_result_left =f(X_value,W_value, b_value)
test_result_left
def step_left(v,W,b):
return ( T.dot(W,v) + b )
output, updates=theano.scan(fn=step_left,sequences=[X],outputs_info=[None],non_sequences=[W,b])
test_result_left =f(X_value,W_value, b_value)
test_result_left
np.dot(W_value,X_value)
W_value
np.dot(W_value,X_value[0])
X_value
b_value
T.zeros_like
Explanation: Notice how scan is on the first dimension (or, counting from 0, the 0th dimension), always. So 1 way to think about it is discretized time $t\in \mathbb{R} \xrightarrow{ \text{ discretize } } t\in\mathbb{Z}^+$
$$
X:\mathbb{Z}^+ \to \mathbb{R}^d \text{ or } \text{Mat}_{\mathbb{R}}(N_1,N_2) \text{ or } \tau^r_s(V), \text{ the space of tensors of type } (r,s), \ \tau_s^r(V) = \lbrace (r+s)\text{-linear maps } \underbrace{V\times \dots \times V}_{r} \times \underbrace{V^*\times \dots \times V^*}_{s} \to \mathbb{F} \rbrace $$
$$ \forall \, t = 0 , 1, \dots T-1, \
\text{ e.g. } \theta \in \text{Mat}_{\mathbb{R}}(d,N_2) \
\qquad b \in \mathbb{R}^{N_2} $$
$$ f_{\theta,b}:\mathbb{R}^d \to \mathbb{R}^{N_2} \
X(t) \mapsto f(X(t)) = X(t) \cdot \theta + b $$
and so nonsequences help to return the functional
$$ f:\text{Mat}_{\mathbb{R}}(d,N_2) \times \mathbb{R}^{N_2} \to \text{Hom}(\mathbb{R}^d, \mathbb{R}^{N_2} ) \
f:(\theta,b) \mapsto f_{\theta,b} $$
Notice the right action of $\theta$ onto $X(t)$, as necessitated by how the size dimensions are defined. This should be duly noted, as scan only iterates across the first dimension. So to write the equivalent step, with the usual left action,
End of explanation
def step(m_row, cumulative_sum):
return m_row + cumulative_sum
Explanation: Example 3: Reusing outputs from the previous iterations
End of explanation
M=T.matrix('X')
s=T.vector('s') # initial value for the cumulative sum
output, updates = theano.scan(fn=step,
sequences=[M],
outputs_info=[s])
f=theano.function(inputs=[M, s],
outputs=output,
updates=updates)
M_value=np.arange(9).reshape(3,3).astype(theano.config.floatX)
s_value=np.zeros((3,),dtype=theano.config.floatX)
f(M_value,s_value)
M_value
print(M_value[0])
print(M_value[1])
Explanation: The tricky part is informing Scan that our step function expects as input the output of a previous iteration. A new parameter, outputs_info, achieves this. This parameter is used to tell Scan how we intend to use each of the outputs that are computed at each iteration.
This parameter can be omitted (like we have done so far) when the step function doesn't depend on any output of a previous iteration.
outputs_info takes a sequence with 1 element for every output of the step() function:
For a non-recurrent output, element should be None.
For simple recurrent output, (iteration $t$ depends on value of iteration at $t-1$, say), the element must be a tensor. Scan will interpret it as being an initial state for a recurrent output, and give it as input to the first iteration, pretending it's the output value from a previous iteration.
The step() expects 1 additional input for each simple recurrent output: these inputs correspond to outputs from previous iteration and are always after inputs that correspond to sequences, but before those that correspond to non-sequences.
End of explanation
# Further example
X=T.vector('X')
x_0=T.scalar('x_0') # initial value for the cumulative sum
output, updates = theano.scan(fn=step, sequences=[X],outputs_info=[x_0])
f=theano.function(inputs=[X,x_0],outputs=output,updates=updates)
X_value =np.arange(9).astype(theano.config.floatX); print(X_value)
x_0_val = np.cast[theano.config.floatX](0.)
f(X_value,x_0_val)
Explanation: For input $\begin{aligned} & X:\mathbb{Z}^+ \to R-\text{Module} \
& t\mapsto X(t) \end{aligned} $, e.g. $R$-Module such as $\mathbb{R}^d, V, \text{Mat}_{\mathbb{R}}(N_1,N_2), \tau_s^r(V)$, and a function $f$ that acts at each iteration, i.e. $\forall \, t=0,1,\dots T$,
$$
\begin{aligned}
& f: R-\text{Module} \times R-\text{Module} \to R-\text{Module} \
& (X_1,X_0) \mapsto X_1 + X_0
\end{aligned}
$$, then we want to express
$$
f(X(t),X(t-1)) = X(t)+X(t-1) \qquad \, \forall \, t=0,1\dots T-1,
$$
In the end, we should get
$$
X\in (R-\text{Module})^T \xrightarrow{ \text{ scan } } Y \in (R-\text{Module})^T
$$
$Y \equiv $ output
End of explanation
output, updates = theano.scan(fn=lambda x_1,x_0 : x_1*x_0, sequences=[X],outputs_info=[x_0])
f=theano.function(inputs=[X,x_0],outputs=output,updates=updates)
X_value =np.arange(1,11).astype(theano.config.floatX); print(X_value)
x_0_val = np.cast[theano.config.floatX](1.)
f(X_value,x_0_val)
Explanation: This is classically what (parallel) scan should do.
Indeed,
End of explanation
def step(f_minus2,f_minus1):
new_f = f_minus2+f_minus1
ratio=new_f/f_minus1
return new_f,ratio
Explanation: In summary, the dictionary between the mathematics and the Python theano code for scan seems to be the following:
If $k=1$, $\forall \, t = 0,1,\dots T-1$,
\begin{equation}
\begin{gathered}
\begin{aligned}
& F:R-\text{Module}\times R-\text{Module} \to R-\text{Module} \
& F(X(t),X(t-1)) \mapsto X(t) \end{aligned} \Longleftrightarrow \text{Python function (object) or Python lambda expression } \mapsto \verb|scan(fn= )| \
(X(0),X(1),\dots X(T-1)) \in (R-\text{Module})^T \Longleftrightarrow \verb|scan(sequences= )| \
X(-1) \in R-\text{Module} \Longleftrightarrow \verb|scan(outputs_info=[ ] )|
\end{gathered}
\end{equation}
Example 4: Reusing outputs from multiple past iterations
"...Since every example so far had only 1 output at every iteration of the loop, we will also compute, at each time step, the ratio between the new term of the Fibonacci sequence and the previous term." I think what Pierre means is that we're going to try to implement for the output, ratio, as a non-recurrent term, just for the sake of this pedagogical example.
End of explanation
f_init = T.vector()
outputs_info = [dict(initial=f_init, taps=[-2,-1]),None]
output, updates=theano.scan(fn=step,outputs_info=outputs_info,n_steps=10)
next_fibonacci_terms=output[0]
ratios_between_terms=output[1]
f=theano.function(inputs=[f_init],
outputs=[next_fibonacci_terms,ratios_between_terms],updates=updates)
out=f([1,1])
print(len(out))
print(out[0])
print(out[1])
Explanation: Defining the value of outputs_info:
Recall that, for non-recurrent outputs, the value is None, and, for simple recurrent outputs, the value is a single initial state. For general recurrent outputs, where iteration $t$ may depend on multiple past values, the value is a dictionary.
That dictionary has 2 values:
taps : list declaring which previous values of that output every iteration will need, e.g. [-2,-1]
initial : tensor of initial values. If every initial value has $n$ dimensions, initial will be a single tensor of $n+1$ dimensions with as many initial values as the oldest requested tap. In the case of Fibonacci sequence, individual initial values are scalars, so initial will be a vector.
End of explanation
def F(Xtm2,Xtm1):
    """cf. https://www.math.cmu.edu/~af1p/Teaching/Combinatorics/Slides/Generating-Functions.pdf
    How many ways to spend n dollars, where every day you can only spend it on 1 dollar for a bun
    OR 2 dollars for an ice cream
    OR 2 dollars for a pastry?
    """
    new_f = Xtm2 * 2 + Xtm1
    return new_f
X_init = T.vector()
outputs_info=[dict(initial=X_init, taps=[-2,-1])]
output,updates = theano.scan(fn=F, outputs_info=outputs_info,n_steps=20)
f=theano.function(inputs=[X_init],outputs=output,updates=updates)
f([1,1])
Explanation: EY : 20170324 note. Notice the n_steps parameter that's utilized now in scan. I'll try to explain it.
$\forall \, t = 0, 1, \dots T-1$, $T\Longleftrightarrow $ n_steps$=T$,
Consider
\begin{equation}
\begin{aligned}
& F:(R-\text{Module})^k \to R-\text{Module} \
& F(X(t-k),X(t-(k-1)),\dots X(t-1)) = X(t) \end{aligned} \Longleftrightarrow \verb|fn| \in \text{Python function (object) }
\end{equation}
If $k=1$, we'll need to be given $X(0) \in R-\text{Module}$. Perhaps consider $\forall \, t=-k,-(k-1), \dots -1,0,1\dots T-1$ (``in full'').
For $k>1$, we'll need to be given (or declare) $\lbrace X(-k),X(-(k-1)),\dots X(-1)\rbrace$.
So for $k=1$, $X(-1) \in R-\text{Module}$ needed $\Longleftrightarrow $ e.g. T.scalar() if $R$-Module $=\mathbb{R}$.
for $k>1$, $(X(-k),X(-(k-1)),\dots X(-1)) \in (R-\text{Module})^k \Longleftrightarrow $ e.g. T.vector(), into ''initial'' of a Python dict, if $R$-Module $=\mathbb{R}$.
Also, for $k>1$,
$$
(-k,-(k-1), \dots -1) \Longleftrightarrow \verb|taps| = [-k,-(k-1),\dots -1] (\text{a Python}\verb| list|)
$$
scan, essentially, does this:
\begin{equation}
\begin{gathered}
(X(-k),X(-(k-1)),\dots X(-1)) \mapsto (X(0),X(1)\dots X(T-1)) \
F(X(t-k), X(t-(k-1))\dots X(t-1)) = X(t), \qquad \, \forall \, t=0,1,\dots T-1
\end{gathered}
\end{equation}
given $F:(R-\text{Module})^k \to R-\text{Module}$, with $T=$ n_steps.
End of explanation
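# A minimal added illustration of the k=3 case described above (the names X_init3/f3
# are only for this sketch): three initial values are packed along the first dimension
# of `initial`, and taps=[-3, -2, -1] hands X(t-3), X(t-2), X(t-1) to the step function
# in that order.
X_init3 = T.vector()
outputs3, updates3 = theano.scan(fn=lambda xm3, xm2, xm1: xm3 + xm2 + xm1,
                                 outputs_info=[dict(initial=X_init3, taps=[-3, -2, -1])],
                                 n_steps=10)
f3 = theano.function(inputs=[X_init3], outputs=outputs3, updates=updates3)
print(f3(np.asarray([0., 0., 1.], dtype=theano.config.floatX)))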
coefficients = T.vector("coefficients")
x=T.scalar("x")
def step(coeff,power,free_var):
return coeff * free_var ** power
# Generate the components of the polynomial
max_coefficients_supported = 10000
full_range=T.arange(max_coefficients_supported)
components, updates = theano.scan(fn=step,
outputs_info=None,
sequences=[coefficients, full_range],
non_sequences=x)
polynomial = components.sum()
calculate_polynomial = theano.function(inputs=[coefficients,x],outputs=polynomial,updates=updates)
test_coeff=np.asarray([1,0,2],dtype=theano.config.floatX)
calculate_polynomial(test_coeff,3)
Explanation: Exercises
Exercise 1 - Computing a polynomial
End of explanation
coefficients = T.vector("coefficients")
x=T.scalar("x")
max_coefficients_supported = 10000
def step(coeff,power,prior_value,free_var):
return prior_value + (coeff * (free_var ** power))
# Generate the components of the polynomial
full_range = T.arange(max_coefficients_supported,dtype=theano.config.floatX)
outputs_info = np.zeros((),dtype=theano.config.floatX)
print(outputs_info)
components, updates = theano.scan(fn=step,
sequences=[coefficients,full_range],
outputs_info=outputs_info,
non_sequences=x)
polynomial=components[-1]
calculate_polynomial=theano.function(inputs=[coefficients,x], outputs=polynomial, updates=updates)
test_coeff=np.asarray([1,0,2],dtype=theano.config.floatX)
calculate_polynomial(test_coeff,3)
Explanation: Solution
cf. scan_ex1_solution.py
End of explanation
probabilities = T.vector()
nb_samples = T.iscalar()
rng = T.shared_randomstreams.RandomStreams(1234)
def sample_from_pvect(pvect):
    """Provided utility function: given a symbolic vector of
    probabilities (which MUST sum to 1), sample one element
    and return its index.
    """
    onehot_sample = rng.multinomial(n=1,pvals=pvect)
    sample = onehot_sample.argmax()
    return sample # sample \in \mathbb{Z}^+, i.e. sample = 0,1,...K-1, with K= total number of possible outcomes

def set_p_to_zero(pvect, i):
    """Provided utility function: given a symbolic vector of
    probabilities and an index 'i', set the probability of the
    i-th element to 0 and renormalize the probabilities so they
    sum to 1.
    """
    new_pvect = T.set_subtensor(pvect[i], 0.)
    new_pvect = new_pvect / new_pvect.sum()
    return new_pvect
Explanation: Exercise 2 - Sampling without replacement
takes as input a vector of probabilities and a scalar
End of explanation
def sample_step(pvect):
    """sample_step - sample without replacement, at 1 given iteration"""
    sample = sample_from_pvect(pvect) # \in \mathbb{Z}^+, i.e. sample=0,1,...K-1, with K=total number of possible outcomes
    new_pvect = set_p_to_zero(pvect,sample) # we had to remove the drawn sample out of the equation
    return new_pvect
# this line is not needed, since we want the inputted "hard", numerical values to be the initial values
#pvect_0 = T.vector() # the initial set of probabilities for all $K$ possibilities (outcomes)
output,update=theano.scan(fn=sample_step,outputs_info=[probabilities],n_steps=nb_samples)
# Compiling the function
f = theano.function(inputs=[probabilities, nb_samples], outputs=output,updates=update)
# Testing the function
test_probs = np.asarray([0.6,0.3,0.1], dtype=theano.config.floatX)
for i in range(10):
print(f(test_probs, 2))
Explanation: EY:20170325 notes: raw_random (low-level random numbers): multinomial samples $n$ times from a multinomial distribution defined by the probabilities pvals. For a single outcome of probability $p$, it is this formula:
$$
\binom{N}{k} p^k(1-p)^{N-k}
$$ for a binomial distribution, probability of picking out the outcome corresponding to probability $p$, $k$ times out of $N$.
My solution, without looking at the author's beforehand
End of explanation
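# A small added NumPy illustration of the one-hot draw described above: a single multinomial
# draw (n=1) over pvals returns a one-hot vector, and argmax recovers the sampled index.
one_hot = np.random.multinomial(n=1, pvals=[0.6, 0.3, 0.1])
print(one_hot, one_hot.argmax())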
def step(p):
sample=sample_from_pvect(p)
new_p=set_p_to_zero(p,sample)
return new_p,sample
output,updates = theano.scan(fn=step,outputs_info=[probabilities,None],n_steps=nb_samples)
modified_probabilities,samples=output
f=theano.function(inputs=[probabilities,nb_samples],outputs=[samples],updates=updates)
# Testing the function
test_probs=np.asarray([0.6,0.3,0.1],dtype=theano.config.floatX)
for i in range(10):
print(f(test_probs,2))
Explanation: Solution from author (Pierre Le Duc?)
cf. scan_ex2_solution.py
End of explanation
import numpy as np
import matplotlib.pyplot as plt
import sympy
from sympy import Symbol
from sympy.plotting import plot
plot(sympy.log( Symbol('x')) )
m_test = 2 # total number of examples, m
y_test = np.random.randint(2, size=m_test)
y_predicted_test = np.ones(m_test)*0.8
print(y_test)
print(y_predicted_test)
-(y_test*np.log(y_predicted_test)+(1.-y_test)*np.log(1.-y_predicted_test)).mean()
y_test = theano.shared(y_test.astype(theano.config.floatX))
y_predicted_test = theano.shared(y_predicted_test.astype(theano.config.floatX))
J_binary = T.nnet.binary_crossentropy(y_predicted_test, y_test).mean()
J_categorical = T.nnet.categorical_crossentropy(y_predicted_test, y_test).mean()
print( theano.function( inputs=[], outputs=J_categorical)() )
print( theano.function( inputs=[], outputs=J_binary)() )
Explanation: Using theano's crossentropy
End of explanation
y3=np.zeros((m_test,3))
y3_predicted_cls = np.random.randint(3,size=m_test)
print(y3_predicted_cls)
for i in range(m_test):
y3[i][y3_predicted_cls[i]] = 1.
print(y3)
Kclses = 3
y3_predicted = np.array([np.random.dirichlet(np.ones(Kclses),size=1).flatten() for i in range(m_test)])
print(y3_predicted)
y3_predicted[:,0].sum()
y3[0]
-(y3[0][0]*np.log(y3_predicted[0][0])+(1-y3[0][0])*np.log(y3_predicted[0][np.arange(Kclses)!=0].sum()))
y3_predicted[0][np.arange(Kclses)!=0].sum()
categorical_results=[]
for i in range(m_test):
example_row=[]
for cls in range(Kclses):
entropy= \
(y3[i][cls]*np.log(y3_predicted[i][cls])+(1-y3[i][cls])*np.log(y3_predicted[i][np.arange(Kclses)!=cls].sum()))
example_row.append(-entropy)
categorical_results.append(example_row)
categorical_results = np.array( categorical_results )
categorical_results
categorical_results.sum(1)
categorical_results.sum(1).mean()
y3_test=theano.shared(y3.astype(theano.config.floatX))
y3_predicted_test=theano.shared(y3_predicted.astype(theano.config.floatX))
J_categorical=T.nnet.categorical_crossentropy(y3_predicted_test,y3_test).mean()
print( theano.function(inputs=[],outputs=J_categorical)() )
Explanation: making an example for categorical crossentropy
End of explanation |
1,164 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Iterative Maximum Likelihood Estimation (iMLE)
Shahnawaz Ahmed, Chalmers University of Technology, Sweden
Email
Step1: Displacement operation
The measurements for determining optical quantum states usually rely on finding the probability of observing a certain number of photons in the cavity, i.e., measuring the photon number operator (occupation) $|n \rangle \langle n|$ after displacing the state by applying the displacement operator $$ D(\beta) = e^{\beta a^{\dagger} - \beta^*a}.$$
Let us look at the effect of a displacement on a coherent state by applying a displacement $2 + 2i$ on the vacuum state. We can use the displace function in QuTiP to construct our displacement operator and apply it to the state vector for vacuum using the function coherent.
Step2: Optical quantum states in the fock basis
In the fock basis, we can describe optical quantum states as $|\psi \rangle = \sum_n^{N_{cut}}c_n |n \rangle$, where $N_{cut}$ is the photon number cutoff which truncates the Hilbert space and $|{c_n}|^2$ is the probability of observing $n$ photons. The coefficients $c_n$ can be complex-valued. The vacuum state $\psi_{vac} = |0 \rangle$ or a superposition of fock states containing two and three photons $\psi_{fock} = \frac{1}{\sqrt{2}}(|2 \rangle + |3 \rangle)$ are some examples of simple optical quantum states. The coherent state is a displaced fock state $\psi_{\texttt{coherent}(\alpha)} = D(\alpha) |0 \rangle$ and a superposition of two such coherent states is defined as a CAT state,
$$\psi_{\texttt{CAT}(\alpha)} = \frac{1}{N}[\psi_{\texttt{coherent}}(\alpha) + \psi_{\texttt{coherent}}(-\alpha)]$$ where $N$ is the normalization constant.
A superposition of three coherent states
Let us construct a quantum state by taking the superposition of three coherent states. We first consider the state vector for the superposition of three coherent states, $\psi$ with $\alpha = (2, -2 - i, -2 + i)$. Note that the method .unit() in QuTiP gives us the normalized state. Then we find the density matrix of the state as $$ \rho = | \psi \rangle \langle \psi |$$
Step4: Displace and measure - the generalized Q function
The expectation value of the photon number operators after applying a displacement $D(\beta_i)$ to the state density matrix $\rho = |\psi \rangle \langle \psi|$ forms the so-called generalised $Q$ function
Step6: Iterative Maximum Likelihood Estimation
The measurement statistics from different measurement settings, i.e., values of displacements could be made informationally complete such that they contain the full information required to reconstruct the state, (see Ref [1]). Once we have the data, the iterative Maximum Likelihood Estimation (iMLE) method [2] can be used to start from a random guess of the density matrix and determine the full density matrix by repeatedly applying an operator $R$ to a randomly initialized density matrix.
$R$ is a sum of projection operators into the measured basis - the displaced photon number operator in this case, $M_i = D(-\beta_i) |n \rangle \langle n| D^{\dagger}(-\beta_i)$. Each such operator is weighted by the ratio of observed frequency of the measurement (empirical probability from experimental data, $d_i$) and the estimate of the same from the density matrix, $Tr[M_i \rho]$
$$R = \sum_i \frac{d_i}{Tr[M_i \rho]} M_i$$
Then, we iteratively apply $R$ as $$\rho_{k+1} = R \rho_{k} R$$ until convergence (with trace normalization after each iteration) to get the density matrix from the measured data.
Constructing the R operator from current estimate of the density matrix
Step7: Reconstruction of the quantum state density matrix from (ideal) generalized $Q$ function measurements
Step8: Let us plot the Husimi $Q$ function - Fig 1(d) of Ref~[1]
Step9: We can also look at the density matrix of the states using Hinton plots
The Hinton plot for the complex-valued density matrix visualizes the density matrix elements. The size and shading of a blob in the plot is proportional to the magnitude of the density matrix and the color red (blue) is determined by whether the real part of the density matrix element is positive (negative). Let us look at the first 16 elements only of the reconstructed density matrix in comparison to the target.
Step10: Discussion
In this tutorial, we have considered ideal measurements and have not introduced any noise in the data. In a real experiment, there will be noise both due to experimental errors as well as repetitions of measurements. A simple way to test the effects of noise is to include some Gaussian noise (with zero mean) in the data as follows
Step12: Let us construct an iMLE function that we can reuse
Step13: Visualizing the state reconstructed from noisy data
We find that adding a small amount of Gaussian noise to the data leads to a poorer reconstruction using 200 iterations. The general features of the state are present in the reconstruction, however the fidelity is very low.
Step14: More iterations
Let us now use 1000 iterations and run iMLE again. We find that the fidelity improves slightly but the increase in fidelity is very small compared to the number of iterations. We can consider other maximum likelihood methods for faster convergence such as the "diluted" MLE (Ref [3]) or the "superfast" MLE (Ref [4]). We can also consider neural-network based tomography using generative adversarial neural networks (Ref [5]).
Step15: QuTiP details | Python Code:
# imports
import numpy as np
from qutip import Qobj, rand_dm, fidelity, displace, qdiags, qeye, expect
from qutip.states import coherent, coherent_dm, thermal_dm, fock_dm
from qutip.random_objects import rand_dm
from qutip.visualization import plot_wigner, hinton, plot_wigner_fock_distribution
from qutip.wigner import qfunc
import qutip
import matplotlib.pyplot as plt
from matplotlib import animation, colors
from IPython.display import clear_output
Explanation: Iterative Maximum Likelihood Estimation (iMLE)
Shahnawaz Ahmed, Chalmers University of Technology, Sweden
Email: shahnawaz.ahmed95gmail.com
GitHub: quantshah
Introduction
Quantum State Tomography (QST) is the process of determining an unknown quantum state by making measurements on the system and using the measurement data to reconstruct the density matrix of the state. In this notebook, we will use QuTiP for the tomography of a cavity by counting photon number statistics (see [1, 2]). The data is from a "displace-and-measure" method on which we apply a statistical inference technique - iterative Maximum Likelihood Estimation (iMLE) [2] to reconstruct the full density matrix of an optical quantum state.
We consider a superposition of three coherent states (a three headed cat). We will use QuTiP to reproduce Fig 1(d) of Ref~[1] and reconstruct the Husimi Q function of a quantum state from measurements at the 5 marked points in the phase space.
This notebook only shows an implementation of the basic iMLE which could be slow due to noise in the data (see below). We can consider other maximum likelihood methods for faster convergence such as the "diluted" MLE (Ref [3]) or the "superfast" MLE (Ref [4]). We can also consider neural-network based tomography using generative adversarial neural networks (Ref [5]).
In https://github.com/quantshah/qst-cgan/blob/master/examples/qst-cgan-vs-imle.ipynb we can find an example where iMLE fails to reconstruct a state even without noise while the neural-network-based approach works successfully. This failure of iMLE seems to be due to not selecting the correct measurements for iMLE explored in other notebooks in the repository https://github.com/quantshah/qst-cgan. Note that the Husimi Q function is considered as the data in the notebooks in Ref [5] while here we consider the generalized Q function.
References
[1] Shen, Chao, et al. "Optimized tomography of continuous variable systems using excitation counting." Physical Review A 94.5 (2016): 052327.
Link: https://arxiv.org/abs/1606.07554
[2] Řeháček, J., Z. Hradil, and M. Ježek. "Iterative algorithm for reconstruction of entangled states." Physical Review A 63.4 (2001): 040303.
Link: https://arxiv.org/abs/quant-ph/0009093
[3] Řeháček, Jaroslav, et al. "Diluted maximum-likelihood algorithm for quantum tomography." Physical Review A 75.4 (2007): 042108. Link: https://arxiv.org/abs/quant-ph/0611244
[4] Shang, Jiangwei, Zhengyun Zhang, and Hui Khoon Ng. "Superfast maximum-likelihood reconstruction for quantum tomography." Physical Review A 95.6 (2017): 062336. Link: https://arxiv.org/abs/1609.07881
[5] Ahmed, Shahnawaz, et al. "Quantum state tomography with conditional generative adversarial networks." arXiv preprint arXiv:2008.03240 (2020). Link: https://arxiv.org/abs/2008.03240
End of explanation
hilbert_size = 32
psi = coherent(hilbert_size, 0)
d = displace(hilbert_size, 2+2j)
fig, ax = plt.subplots(1, 4, figsize=(19, 4))
plot_wigner_fock_distribution(psi, fig=fig, axes=[ax[0], ax[1]])
plot_wigner_fock_distribution(d*psi, fig=fig, axes=[ax[2], ax[3]])
ax[0].set_title(r"Initial state, $\psi_{vac} = |0 \rangle$")
ax[2].set_title(r"Displaced state, $D(\alpha=2+2i )\psi_{vac}$")
plt.show()
Explanation: Displacement operation
The measurements for determining optical quantum states usually rely on finding the probability of observing a certain number of photons in the cavity, i.e., measuring the photon number operator (occupation) $|n \rangle \langle n|$ after displacing the state by applying the displacement operator $$ D(\beta) = e^{\beta a^{\dagger} - \beta^*a}.$$
Let us look at the effect of a displacement on a coherent state by applying a displacement $2 + 2i$ on the vacuum state. We can use the displace function in QuTiP to construct our displacement operator and apply it to the state vector for vacuum using the function coherent.
End of explanation
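# A minimal added check, using the variables defined above: displacing the vacuum by alpha
# should reproduce the coherent state |alpha>, up to truncation error in the finite Hilbert
# space, so the norm of the difference should be close to zero.
alpha = 2 + 2j
print((displace(hilbert_size, alpha) * coherent(hilbert_size, 0) - coherent(hilbert_size, alpha)).norm())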
alpha_range = 2
alphas = np.array([alpha_range, -alpha_range - 1j*alpha_range,
-alpha_range + 1j*alpha_range])
psi = sum([coherent(hilbert_size, a) for a in alphas])
psi = psi.unit()
rho = psi*psi.dag()
fig, ax = plot_wigner_fock_distribution(rho, figsize=(9, 4))
ax[0].set_title("Superposition of three coherent states")
plt.show()
Explanation: Optical quantum states in the fock basis
In the fock basis, we can describe optical quantum states as $|\psi \rangle = \sum_n^{N_{cut}}c_n |n \rangle$, where $N_{cut}$ is the photon number cutoff which truncates the Hilbert space and $|{c_n}|^2$ is the probability of observing $n$ photons. The coefficients $c_n$ can be complex-valued. The vacuum state $\psi_{vac} = |0 \rangle$ or a superposition of fock states containing two and three photons $\psi_{fock} = \frac{1}{\sqrt{2}}(|2 \rangle + |3 \rangle)$ are some examples of simple optical quantum states. The coherent state is a displaced fock state $\psi_{\texttt{coherent}(\alpha)} = D(\alpha) |0 \rangle$ and a superposition of two such coherent states is defined as a CAT state,
$$\psi_{\texttt{CAT}(\alpha)} = \frac{1}{N}[\psi_{\texttt{coherent}}(\alpha) + \psi_{\texttt{coherent}}(-\alpha)]$$ where $N$ is the normalization constant.
A superposition of three coherent states
Let us construct a quantum state by taking the superposition of three coherent states. We first consider the state vector for the superposition of three coherent states, $\psi$ with $\alpha = (2, -2 - i, -2 + i)$. Note that the method .unit() in QuTiP gives us the normalized state. Then we find the density matrix of the state as $$ \rho = | \psi \rangle \langle \psi |$$
End of explanation
def measure_q(beta, rho):
    """Measures the generalized q function values for the state density matrix.

    Parameters
    ----------
    beta: np.complex
        A complex displacement.

    rho:
        The density matrix as a QuTiP Qobj (`qutip.Qobj`)

    Returns
    -------
    population: ndarray
        A 1D array for the probabilities for populations.
    """
hilbertsize = rho.shape[0]
# Apply a displacement to the state and then measure the diagonals.
D = displace(hilbertsize, -beta)
rho_disp = D*rho*D.dag()
# measure all the elements in the diagonal
populations = np.real(np.diagonal(rho_disp.full()))
return populations
betas = [1.7, -2, 2.5j, -2.1 - 2.1j, -2 + 2j]
generalized_Q = [measure_q(b, rho) for b in betas]
fig, ax = plt.subplots(1, 3, figsize=(15, 4))
indices = np.arange(hilbert_size)
plot_wigner(rho, fig, ax[0])
ax[0].scatter(np.real(betas), np.imag(betas), marker="x")
ax[0].set_title(r"Measurement $\beta$ values")
for i in range(len(betas)):
ax[1].bar(indices, generalized_Q[i],
label = r"$beta = {:.2f}$".format(betas[i]))
ax[1].set_title("Population measurement statistics")
ax[1].set_xlabel("n")
ax[1].set_ylabel("Photon number probability")
hinton(rho, ax=ax[2])
ax[2].set_xlabel("Hinton plot of density matrix")
ax[1].legend()
plt.show()
Explanation: Displace and measure - the generalized Q function
The expectation value of the photon number operators after applying a displacement $D(\beta_i)$ to the state density matrix $\rho = |\psi \rangle \langle \psi|$ forms the so-called generalised $Q$ function:
$$Q_n^{\beta} = Tr[|n \rangle \langle n|D(-\beta) \rho D^\dagger(-\beta)]$$
These "displace-and-measure" techniques can be seen as a generalization of some of the known observables in quantum optics, e.g., the Husimi Q function $\frac{1}{\pi}Q_0^{\beta}$, which measures the vacuum state probability, or the Wigner function, which measures photon parity, $W(\beta) = (2/\pi)\sum_n (-1)^n Q_n^{\beta}$.
Data
We apply displacements $\beta_i$ (left) marked as x on our state and measure expectation values of the photon number operators $|n \rangle \langle n|$. These are simply the diagonal elements of the density matrix. Note that alternatively we can just construct displaced photon number operators and measure their expectation values.
$$D(-\beta_i)|n \rangle \langle n| D^{\dagger}(-\beta_i)$$
We can see $Q_n^{\beta_i}$ for various values of $\beta_i$ (colored bar plots), which forms the data, $d_i$. The density matrix itself is shown as a Hinton plot on the right. The size of the boxes in the Hinton plot represents the magnitude of the density matrix elements and the color reflects whether they are positive or negative.
End of explanation
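# A minimal added check of the parity relation quoted above, evaluated directly on the
# measured populations: W(beta) = (2/pi) * sum_n (-1)^n Q_n^beta.
parity = (-1) ** np.arange(hilbert_size)
print((2 / np.pi) * np.sum(parity * measure_q(betas[0], rho)))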
def construct_R(hilbert_size, betas, ops=None):
    """Calculates the set of operators R in a displace and measure method.

    Parameters
    ----------
    hilbert_size (int):
        The hilbert space size

    beta: list_like
        A list of the displacements that were applied to the state before
        measurement.

    op: list of :class:`qutip.Qobj`
        Measurement operators such as photon occupation |n><n| that should
        be used to construct the R operator for various displacements.
        default: |n><n| constructed using `qutip.fock_dm(hilbert_size, n)`
        for all n.

    Returns
    -------
    R (list of `qutip.Qobj`) :
        The list of iterative operators applied for state reconstruction.
    """
    if ops is None:
ops = []
for n in range(hilbert_size):
ops.append(fock_dm(hilbert_size, n))
R_ops = []
for beta in betas:
D = displace(hilbert_size, -beta)
displaced_operators = []
for operator in ops:
displaced_D = D.dag()*operator*D
displaced_operators.append(displaced_D)
R_ops.append(displaced_operators)
return R_ops
r_ops = construct_R(hilbert_size, betas)
expectation_values = [expect(r_ops[i], rho) for i in range(len(betas))]
# test if the expectation values calculated from the R operators match
# the previous calculations with the measure_q function
generalized_Q = [measure_q(b, rho) for b in betas]
np.allclose(expectation_values, generalized_Q)
Explanation: Iterative Maximum Likelihood Estimation
The measurement statistics from different measurement settings, i.e., values of displacements could be made informationally complete such that they contain the full information required to reconstruct the state, (see Ref [1]). Once we have the data, the iterative Maximum Likelihood Estimation (iMLE) method [2] can be used to start from a random guess of the density matrix and determine the full density matrix by repeatedly applying an operator $R$ to a randomly initialized density matrix.
$R$ is a sum of projection operators into the measured basis - the displaced photon number operator in this case, $M_i = D(-\beta_i) |n \rangle \langle n| D^{\dagger}(-\beta_i)$. Each such operator is weighted by the ratio of observed frequency of the measurement (empirical probability from experimental data, $d_i$) and the estimate of the same from the density matrix, $Tr[M_i \rho]$
$$R = \sum_i \frac{d_i}{Tr[M_i \rho]} M_i$$
Then, we iteratively apply $R$ as $$\rho_{k+1} = R \rho_{k} R$$ until convergence (with trace normalization after each iteration) to get the density matrix from the measured data.
Constructing the R operator from current estimate of the density matrix
End of explanation
r_ops = construct_R(hilbert_size, betas)
data = [expect(r_ops[i], rho) for i in range(len(betas))]
max_iter = 200
rho_reconstructed = qeye(hilbert_size)/hilbert_size # initial dm
rho_t = []
rho_t.append(rho_reconstructed)
fidelities = [fidelity(rho_reconstructed, rho)]
for iterations in range(max_iter):
R = 0*qeye(hilbert_size)
for i in range(len(betas)):
# for all the n photons
for n in range(hilbert_size):
r = r_ops[i][n]
R += (data[i][n]/(expect(r, rho_reconstructed) + 1e-20))*r
rho_reconstructed = R*rho_reconstructed*R
# Trace renorm
rho_reconstructed = rho_reconstructed/rho_reconstructed.tr()
rho_t.append(rho_reconstructed)
# Compute fidelity
f = fidelity(rho_reconstructed, rho)
fidelities.append(f)
print(r"Iteration {}; Fidelity: {}".format(iterations, f))
clear_output(wait=True)
Explanation: Reconstruction of the quantum state density matrix from (ideal) generalized $Q$ function measurements
End of explanation
xvec = np.linspace(-7.5, 7.5, 100)
yvec = np.linspace(-7.5, 7.5, 100)
q_state = qfunc(rho, xvec, yvec)
q_reconstruction = qfunc(rho_reconstructed, xvec, yvec)
fig, ax = plt.subplots(1, 2, figsize=(8, 3))
norm = colors.TwoSlopeNorm(vmin=-1e-9, vcenter=0, vmax=np.max(q_state))
ax[0].pcolor(xvec, yvec, q_state, norm=norm, cmap="RdBu_r", shading='auto')
im = ax[1].pcolor(xvec, yvec, q_reconstruction, norm=norm, cmap="RdBu_r", shading='auto')
ax[0].scatter(np.real(betas), np.imag(betas), marker="x", s=20)
ax[0].set_title(r"Target state ($Q$ function)")
ax[1].set_title("Reconstructed state ($Q$ function)")
ax[0].set_xlabel(r"Re($\beta$)", fontsize=13)
ax[0].set_ylabel(r"Im($\beta$)", fontsize=13)
ax[1].set_xlabel(r"Re($\beta$)", fontsize=13)
plt.colorbar(im, ax=[ax[0], ax[1]])
plt.show()
Explanation: Let us plot the Husimi $Q$ function - Fig 1(d) of Ref~[1]
End of explanation
fig, ax = hinton(Qobj(rho[:16, :16]))
ax.set_title("Target state")
plt.show()
fig, ax = hinton(Qobj(rho_t[-1][:16, :16]))
ax.set_title("Reconstructed state")
plt.show()
Explanation: We can also look at the density matrix of the states using Hinton plots
The Hinton plot for the complex-valued density matrix visualizes the density matrix elements. The size and shading of a blob in the plot is proportional to the magnitude of the density matrix and the color red (blue) is determined by whether the real part of the density matrix element is positive (negative). Let us look at the first 16 elements only of the reconstructed density matrix in comparison to the target.
End of explanation
generalized_Q_noisy = generalized_Q + np.abs(np.random.normal(loc=0, scale=0.05, size = [len(betas), hilbert_size]))
plt.figure(figsize=(5,3))
for i in range(1):
plt.bar(indices, generalized_Q_noisy[i],
label = "noisy")
plt.bar(indices, generalized_Q[i], fill=False,
label = "ideal")
plt.xlabel("n")
plt.ylabel("p(n)")
plt.legend()
plt.show()
Explanation: Discussion
In this tutorial, we have considered ideal measurements and have not introduced any noise in the data. In a real experiment, there will be noise both due to experimental errors as well as repetitions of measurements. A simple way to test the effects of noise is to include some Gaussian noise (with zero mean) in the data as follows:
End of explanation
def imle(data, r_ops, initial_rho=None, max_iter=200):
Implements the iterative maximum likelihood estimation algorithm.
Args:
data (array): An array representing measured data for a set of operators.
r_ops (list of `qutip.Qobj`): The list of iterative operators applied
for state reconstruction computed using the
set of measurement operators.
initial_rho (`qutip.Qobj`): Initial density matrix estimate
default: maximally mixed state (I/n).
max_iter (int): The number of iterations to run.
Returns:
rho_t (list of `qutip.Qobj`): The density-matrix estimate after each iteration.
if initial_rho is not None:
rho_reconstructed = initial_rho
else:
rho_reconstructed = qeye(hilbert_size)/hilbert_size
rho_t = []
rho_t.append(rho_reconstructed)
for iterations in range(max_iter):
R = 0*qeye(hilbert_size)
for i in range(len(r_ops)):
# for all the n photons
for n in range(hilbert_size):
r = r_ops[i][n]
R += (data[i][n]/(expect(r, rho_reconstructed) + 1e-20))*r
rho_reconstructed = R*rho_reconstructed*R
# Trace renorm
rho_reconstructed = rho_reconstructed/rho_reconstructed.tr()
rho_t.append(rho_reconstructed)
# Compute fidelity
f = fidelity(rho_reconstructed, rho)
print(r"Iteration {}; Fidelity: {}".format(iterations, f))
clear_output(wait=True)
return rho_t
rho_t_noisy = imle(generalized_Q_noisy, r_ops)
Explanation: Let us construct an iMLE function that we can reuse
End of explanation
q_reconstruction_noisy = qfunc(rho_t_noisy[-1], xvec, yvec)
fig, ax = plt.subplots(1, 2, figsize=(8, 3))
norm = colors.TwoSlopeNorm(vmin=-1e-9, vcenter=0, vmax=np.max(q_state))
ax[0].pcolor(xvec, yvec, q_state, norm=norm, cmap="RdBu_r", shading='auto')
im = ax[1].pcolor(xvec, yvec, q_reconstruction_noisy, norm=norm, cmap="RdBu_r", shading='auto')
ax[0].scatter(np.real(betas), np.imag(betas), marker="x", s=20)
ax[0].set_title(r"Target state ($Q$ function)")
ax[1].set_title("Reconstructed state ($Q$ function)")
ax[0].set_xlabel(r"Re($\beta$)", fontsize=13)
ax[0].set_ylabel(r"Im($\beta$)", fontsize=13)
ax[1].set_xlabel(r"Re($\beta$)", fontsize=13)
plt.colorbar(im, ax=[ax[0], ax[1]])
plt.show()
Explanation: Visualizing the state reconstructed from noisy data
We find that adding a small amount of Gaussian noise to the data leads to a poorer reconstruction after the same 200 iterations. The general features of the state are present in the reconstruction; however, the fidelity is very low.
End of explanation
rho_t_noisy = imle(generalized_Q_noisy, r_ops, max_iter=1000)
Explanation: More iterations
Let us now use 1000 iterations and run iMLE again. We find that the fidelity improves slightly but the increase in fidelity is very small compared to the number of iterations. We can consider other maximum likelihood methods for faster convergence such as the "diluted" MLE (Ref [3]) or the "superfast" MLE (Ref [4]). We can also consider neural-network based tomography using generative adversarial neural networks (Ref [5]).
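As a rough illustration of the first alternative, the diluted iteration of Ref [3] damps the update $\rho \rightarrow R\rho R$. Inside the loop of imle above, the rho update could be replaced by something like the minimal sketch below (the step size epsilon is an assumed value, not taken from the reference; see Ref [3] for the full prescription):
epsilon = 0.05  # assumed damping parameter; smaller values are more stable but slower
identity = qeye(hilbert_size)
R_diluted = (identity + epsilon * R) / (1 + epsilon)
rho_reconstructed = R_diluted * rho_reconstructed * R_diluted
rho_reconstructed = rho_reconstructed / rho_reconstructed.tr()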
End of explanation
qutip.about()
Explanation: QuTiP details
End of explanation |
1,165 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concat the GO ID to the begining of each batch.
Step21: Encoding
Implement encoding_layer() to create a Encoder RNN layer
Step24: Decoding - Training
Create a training decoding layer
Step27: Decoding - Inference
Create inference decoder
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step40: Batch and pad the source and target sequences
Step43: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step45: Save Parameters
Save the batch_size and save_path parameters for inference.
Step47: Checkpoint
Step50: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
Step52: Translate
This will translate translate_sentence from English to French. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
source_split, target_split = source_text.split('\n'), target_text.split('\n')
source_to_int, target_to_int = [], []
for source, target in zip(source_split, target_split):
source_to_int.append([source_vocab_to_int[word] for word in source.split()])
targets = [target_vocab_to_int[word] for word in target.split()]
targets.append((target_vocab_to_int['<EOS>']))
target_to_int.append(targets)
#print(source_to_int, target_to_int)
return source_to_int, target_to_int
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
import problem_unittests as tests
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
# TODO: Implement Function
#max_tar_seq_len = np.max([len(sentence) for sentence in target_int_text])
#max_sour_seq_len = np.max([len(sentence) for sentence in source_int_text])
#max_source_len = np.max([max_tar_seq_len, max_sour_seq_len])
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None])
learning_rate = tf.placeholder(tf.float32)
keep_probability = tf.placeholder(tf.float32, name='keep_prob')
target_seq_len = tf.placeholder(tf.int32, [None], name='target_sequence_length')
max_target_seq_len = tf.reduce_max(target_seq_len, name='max_target_len')  # named per the spec above, distinct from the placeholder name
source_seq_len = tf.placeholder(tf.int32, [None], name='source_sequence_length')
return inputs, targets, learning_rate, keep_probability, target_seq_len, max_target_seq_len, source_seq_len
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoder_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Target sequence length placeholder named "target_sequence_length" with rank 1
Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
Source sequence length placeholder named "source_sequence_length" with rank 1
Return the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
End of explanation
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for encoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return dec_input
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_encoding_input(process_decoder_input)
Explanation: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.
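Concretely, for a single row of a target batch the transformation looks like this (the word ids below are made up purely for illustration):
# target row before:  [14, 27, 3]      # 3 standing in for the <EOS> id
# drop last column:   [14, 27]
# prepend <GO> id:    [1, 14, 27]      # 1 standing in for the <GO> id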
End of explanation
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
# TODO: Implement Function
embed_seq = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)
def lstm_cell():
return tf.contrib.rnn.LSTMCell(rnn_size)
rnn = tf.contrib.rnn.MultiRNNCell([lstm_cell() for i in range(num_layers)])
rnn = tf.contrib.rnn.DropoutWrapper(rnn, output_keep_prob=keep_prob)
output, state = tf.nn.dynamic_rnn(rnn, embed_seq, dtype=tf.float32)
return output, state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create a Encoder RNN layer:
* Embed the encoder input using tf.contrib.layers.embed_sequence
* Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper
* Pass cell and embedded input to tf.nn.dynamic_rnn()
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
# TODO: Implement Function
training_helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length)
train_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, encoder_state, output_layer)
output, _ = tf.contrib.seq2seq.dynamic_decode(train_decoder, impute_finished=False, maximum_iterations=max_summary_length)
return output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create a training decoding layer:
* Create a tf.contrib.seq2seq.TrainingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
# TODO: Implement Function
start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens')
inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id)
inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper, encoder_state, output_layer)
output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,
impute_finished=True, maximum_iterations=max_target_sequence_length)
return output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference decoder:
* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
#embed_seq = tf.contrib.layers.embed_sequence(dec_input, target_vocab_size, decoding_embedding_size)
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
def lstm_cell():
return tf.contrib.rnn.LSTMCell(rnn_size)
rnn = tf.contrib.rnn.MultiRNNCell([lstm_cell() for i in range(num_layers)])
rnn = tf.contrib.rnn.DropoutWrapper(rnn, output_keep_prob=keep_prob)
output_layer = Dense(target_vocab_size,
kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))
with tf.variable_scope("decode"):
training_output = decoding_layer_train(encoder_state, rnn, dec_embed_input,
target_sequence_length, max_target_sequence_length, output_layer, keep_prob)
with tf.variable_scope("decode", reuse=True):
inference_output = decoding_layer_infer(encoder_state, rnn, dec_embeddings, target_vocab_to_int['<GO>'],
target_vocab_to_int['<EOS>'], max_target_sequence_length, target_vocab_size,
output_layer, batch_size, keep_prob)
return training_output, inference_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param max_target_sentence_length: Maximum target sequence length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
_, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
enc_embedding_size)
dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
training_output, inference_output = decoding_layer(dec_input, enc_state, target_sequence_length,
max_target_sentence_length, rnn_size, num_layers,
target_vocab_to_int, target_vocab_size, batch_size,
keep_prob, dec_embedding_size)
return training_output, inference_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).
Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.
End of explanation
# Number of Epochs
epochs = 10
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 254
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 200
decoding_embedding_size = 200
# Learning Rate
learning_rate = 0.01
# Dropout Keep Probability
keep_probability = 0.5
display_step = 10
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
Set display_step to state how many steps between each debug output statement
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
def pad_sentence_batch(sentence_batch, pad_int):
Pad sentences with <PAD> so that each sentence of a batch has the same length
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
Batch targets, sources, and the lengths of their sentences together
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
Explanation: Batch and pad the source and target sequences
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
sentence = sentence.lower()
sentence_to_id = [vocab_to_int[word] if word in vocab_to_int.keys() else vocab_to_int['<UNK>'] for word in sentence.split(' ')]
return sentence_to_id
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
End of explanation
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation |
1,166 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advent of Code 2017
December 1st
[Given] a sequence of digits (your puzzle input) and find the sum of all digits that match the next digit in the list. The list is circular, so the digit after the last digit is the first digit in the list.
For example
Step1: I'll assume the input is a Joy sequence of integers (as opposed to a string or something else.)
We might proceed by creating a word that makes a copy of the sequence with the first item moved to the last, and zips it with the original to make a list of pairs, and a another word that adds (one of) each pair to a total if the pair matches.
AoC2017.1 == pair_up total_matches
Let's derive pair_up
Step2: Now we need to derive total_matches. It will be a step function
Step3: Now we can define our main program and evaluate it on the examples.
Step4: pair_up == dup uncons swap unit concat zip
total_matches == 0 swap [i [=] [pop +] [popop] ifte] step
AoC2017.1 == pair_up total_matches
Now the paired digit is "halfway" round.
[a b c d] dup size 2 / [drop] [take reverse] cleave concat zip
Step5: I realized that each pair is repeated... | Python Code:
from notebook_preamble import J, V, define
Explanation: Advent of Code 2017
December 1st
[Given] a sequence of digits (your puzzle input) and find the sum of all digits that match the next digit in the list. The list is circular, so the digit after the last digit is the first digit in the list.
For example:
1122 produces a sum of 3 (1 + 2) because the first digit (1) matches the second digit and the third digit (2) matches the fourth digit.
1111 produces 4 because each digit (all 1) matches the next.
1234 produces 0 because no digit matches the next.
91212129 produces 9 because the only digit that matches the next one is the last digit, 9.
End of explanation
define('pair_up == dup uncons swap unit concat zip')
J('[1 2 3] pair_up')
J('[1 2 2 3] pair_up')
Explanation: I'll assume the input is a Joy sequence of integers (as opposed to a string or something else.)
We might proceed by creating a word that makes a copy of the sequence with the first item moved to the last, and zips it with the original to make a list of pairs, and another word that adds (one of) each pair to a total if the pair matches.
AoC2017.1 == pair_up total_matches
Let's derive pair_up:
[a b c] pair_up
-------------------------
[[a b] [b c] [c a]]
Straightforward (although the order of each pair is reversed, due to the way zip works, but it doesn't matter for this program):
[a b c] dup
[a b c] [a b c] uncons swap
[a b c] [b c] a unit concat
[a b c] [b c a] zip
[[b a] [c b] [a c]]
End of explanation
define('total_matches == 0 swap [i [=] [pop +] [popop] ifte] step')
J('[1 2 3] pair_up total_matches')
J('[1 2 2 3] pair_up total_matches')
Explanation: Now we need to derive total_matches. It will be a step function:
total_matches == 0 swap [F] step
Where F will have the pair to work with, and it will basically be a branch or ifte.
total [n m] F
It will probably be easier to write if we dequote the pair:
total [n m] i F′
----------------------
total n m F′
Now F′ becomes just:
total n m [=] [pop +] [popop] ifte
So:
F == i [=] [pop +] [popop] ifte
And thus:
total_matches == 0 swap [i [=] [pop +] [popop] ifte] step
End of explanation
define('AoC2017.1 == pair_up total_matches')
J('[1 1 2 2] AoC2017.1')
J('[1 1 1 1] AoC2017.1')
J('[1 2 3 4] AoC2017.1')
J('[9 1 2 1 2 1 2 9] AoC2017.1')
Explanation: Now we can define our main program and evaluate it on the examples.
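For readers more comfortable in plain Python, here is a small cross-check of the same captcha sum (an aside, not part of the Joy derivation):
def captcha_sum(digits):
    # compare each digit with the next one, wrapping around at the end
    return sum(a for a, b in zip(digits, digits[1:] + digits[:1]) if a == b)

assert captcha_sum([1, 1, 2, 2]) == 3
assert captcha_sum([1, 1, 1, 1]) == 4
assert captcha_sum([1, 2, 3, 4]) == 0
assert captcha_sum([9, 1, 2, 1, 2, 1, 2, 9]) == 9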
End of explanation
J('[1 2 3 4] dup size 2 / [drop] [take reverse] cleave concat zip')
Explanation: pair_up == dup uncons swap unit concat zip
total_matches == 0 swap [i [=] [pop +] [popop] ifte] step
AoC2017.1 == pair_up total_matches
Now the paired digit is "halfway" round.
[a b c d] dup size 2 / [drop] [take reverse] cleave concat zip
End of explanation
J('[1 2 3 4] dup size 2 / [drop] [take reverse] cleave zip')
define('AoC2017.1.extra == dup size 2 / [drop] [take reverse] cleave zip swap pop total_matches 2 *')
J('[1 2 1 2] AoC2017.1.extra')
J('[1 2 2 1] AoC2017.1.extra')
J('[1 2 3 4 2 5] AoC2017.1.extra')
Explanation: I realized that each pair is repeated...
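Again as a plain-Python aside, the "halfway around" variant can be checked directly; each matching pair is counted twice, which is why the Joy definition above works on half the pairs and multiplies by 2:
def captcha_sum_half(digits):
    n = len(digits)
    return sum(d for i, d in enumerate(digits) if d == digits[(i + n // 2) % n])

assert captcha_sum_half([1, 2, 1, 2]) == 6
assert captcha_sum_half([1, 2, 2, 1]) == 0
assert captcha_sum_half([1, 2, 3, 4, 2, 5]) == 4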
End of explanation |
1,167 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
Step2: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following
Step5: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
Step8: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint
Step10: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step12: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step17: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note
Step20: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling
Step23: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option
Step26: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step29: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step32: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model
Step35: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following
Step37: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Step38: Hyperparameters
Tune the following parameters
Step40: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Step42: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Step45: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
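One way to answer those questions for a single batch, independent of the helper module, is to read the raw pickle directly. This snippet is an optional aside; it assumes the standard CIFAR-10 python batch format with 'data' and 'labels' keys:
import pickle

with open(cifar10_dataset_folder_path + '/data_batch_1', mode='rb') as f:
    batch = pickle.load(f, encoding='latin1')

print('Label ids present:', sorted(set(batch['labels'])))   # expect 0..9
print('Pixel value range: {} to {}'.format(batch['data'].min(),
                                           batch['data'].max()))  # raw uint8 pixels, expect 0..255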
End of explanation
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
# TODO: Implement Function
a = 0
b = 1
greyscale_min = 0
greyscale_max = 255
return a + (((x - greyscale_min)*(b - a))/(greyscale_max - greyscale_min))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
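A one-line sanity check of the function above (illustrative only; the expected output follows directly from the min-max formula used):
sample = np.array([0, 127.5, 255])
print(normalize(sample))  # expected: [0.0, 0.5, 1.0]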
End of explanation
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
# TODO: Implement Function
encoded_labels = np.zeros((len(x), 10))
for i in range(len(x)):
encoded_labels[i][x[i]] = 1
return encoded_labels
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
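One possible "wheel", shown purely as an alternative to the manual loop implemented above (and assuming scikit-learn is available in the environment), is scikit-learn's LabelBinarizer:
from sklearn.preprocessing import LabelBinarizer

lb = LabelBinarizer()
lb.fit(range(10))                   # the ten CIFAR-10 class ids
example = lb.transform([0, 3, 9])   # each row is a one-hot vector of length 10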
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
import tensorflow as tf
import numpy as np
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
# TODO: Implement Function
image_input = tf.placeholder(tf.float32, shape=(None, *image_shape), name="x")
return image_input
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
# TODO: Implement Function
label_input = tf.placeholder(tf.float32, shape=(None, n_classes), name="y")
return label_input
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
# TODO: Implement Function
return tf.placeholder(tf.float32, name="keep_prob")
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernal size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# TODO: Implement Function
height = conv_ksize[0]
width = conv_ksize[1]
input_depth = x_tensor.get_shape().as_list()[3]
filter_weights = tf.Variable(tf.truncated_normal((height, width, input_depth, conv_num_outputs), stddev=0.1)) # (height, width, input_depth, output_depth)
filter_bias = tf.Variable(tf.zeros(conv_num_outputs))
conv = tf.nn.conv2d(x_tensor, filter_weights, (1, *conv_strides, 1), 'SAME') + filter_bias
conv_layer = tf.nn.relu(conv)
conv_pool = tf.nn.max_pool(conv_layer, (1, *pool_ksize, 1), (1, *pool_strides, 1), 'SAME')
return conv_pool
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
# TODO: Implement Function
shape_list = x_tensor.get_shape().as_list()[1:4]
#final_size = shape_list[1]*shape_list[2]*shape_list[3]
final_size = np.prod(np.array(shape_list))
return tf.reshape(x_tensor, [-1, final_size])
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
weights = tf.Variable(tf.truncated_normal([x_tensor.get_shape().as_list()[1], num_outputs], stddev=0.1))
bias = tf.Variable(tf.zeros([num_outputs]))
fc = tf.add(tf.matmul(x_tensor, weights), bias)
fc = tf.nn.relu(fc)
# fc = tf.nn.dropout(fc, neural_net_keep_prob_input())
# fc = tf.nn.dropout(fc, keep_prob = keep_prob)
return fc
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
weights = tf.Variable(tf.truncated_normal([x_tensor.get_shape().as_list()[1], num_outputs], stddev=0.1))
bias = tf.Variable(tf.zeros([num_outputs]))
out = tf.add(tf.matmul(x_tensor, weights), bias)
return out
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_num_outputs = 128
conv_ksize = (2, 2)
conv_strides = (1, 1)
pool_ksize = (1, 1)
pool_strides = (1, 1)
fc_num_outputs = 1024
num_outputs = 10
conv = conv2d_maxpool(x, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
flat = flatten(conv)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
fc = fully_conn(flat, fc_num_outputs)
fc_after_drop_out = tf.nn.dropout(fc, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
out = output(fc_after_drop_out, num_outputs)
# TODO: return output
return out
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# TODO: Implement Function
session.run(optimizer, feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: keep_probability})
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
loss = session.run(cost, feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: 1.0})
valid_acc = session.run(accuracy, feed_dict={
x: valid_features,
y: valid_labels,
keep_prob: 1.0})
print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(
loss,
valid_acc))
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
# TODO: Tune Parameters
epochs = 10
batch_size = 512
keep_probability = 0.75
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation |
1,168 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
======================================================================
Compute source power spectral density (PSD) of VectorView and OPM data
======================================================================
Here we compute the resting state from raw for data recorded using
a Neuromag VectorView system and a custom OPM system.
The pipeline is meant to mostly follow the Brainstorm [1]
OMEGA resting tutorial pipeline.
The steps we use are
Step1: Load data, resample. We will store the raw objects in dicts with entries
"vv" and "opm" to simplify housekeeping and simplify looping later.
Step2: Do some minimal artifact rejection just for VectorView data
Step3: Explore data
Step4: Alignment and forward
Step5: Compute and apply inverse to PSD estimated using multitaper + Welch.
Group into frequency bands, then normalize each source point and sensor
independently. This makes the value of each sensor point and source location
in each frequency band the percentage of the PSD accounted for by that band.
Step7: Now we can make some plots of each frequency band. Note that the OPM head
coverage is only over right motor cortex, so only localization
of beta is likely to be worthwhile.
Theta
Step8: Alpha
Step9: Beta
Here we also show OPM data, which shows a profile similar to the VectorView
data beneath the sensors.
Step10: Gamma | Python Code:
# Authors: Denis Engemann <[email protected]>
# Luke Bloy <[email protected]>
# Eric Larson <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
from mne.filter import next_fast_len
import mne
print(__doc__)
data_path = mne.datasets.opm.data_path()
subject = 'OPM_sample'
subjects_dir = op.join(data_path, 'subjects')
bem_dir = op.join(subjects_dir, subject, 'bem')
bem_fname = op.join(subjects_dir, subject, 'bem',
subject + '-5120-5120-5120-bem-sol.fif')
src_fname = op.join(bem_dir, '%s-oct6-src.fif' % subject)
vv_fname = data_path + '/MEG/SQUID/SQUID_resting_state.fif'
vv_erm_fname = data_path + '/MEG/SQUID/SQUID_empty_room.fif'
vv_trans_fname = data_path + '/MEG/SQUID/SQUID-trans.fif'
opm_fname = data_path + '/MEG/OPM/OPM_resting_state_raw.fif'
opm_erm_fname = data_path + '/MEG/OPM/OPM_empty_room_raw.fif'
opm_trans_fname = None
opm_coil_def_fname = op.join(data_path, 'MEG', 'OPM', 'coil_def.dat')
Explanation: ======================================================================
Compute source power spectral density (PSD) of VectorView and OPM data
======================================================================
Here we compute the resting state from raw for data recorded using
a Neuromag VectorView system and a custom OPM system.
The pipeline is meant to mostly follow the Brainstorm [1]
OMEGA resting tutorial pipeline.
The steps we use are:
Filtering: downsample heavily.
Artifact detection: use SSP for EOG and ECG.
Source localization: dSPM, depth weighting, cortically constrained.
Frequency: power spectral density (Welch), 4 sec window, 50% overlap.
Standardize: normalize by relative power for each source.
Preprocessing
End of explanation
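As a rough illustration of the "normalize by relative power" step listed above (a toy numpy sketch, not MNE's internal code; the array shape and band indices below are made up):
import numpy as np

psd = np.abs(np.random.randn(5, 100))      # hypothetical PSD: 5 sources x 100 frequency bins
band = slice(20, 40)                       # hypothetical band as a range of bin indices
total_power = psd.sum(axis=1, keepdims=True)
band_percent = 100 * psd[:, band].sum(axis=1, keepdims=True) / total_power
print(band_percent.shape)                  # (5, 1): percentage of total power in the band per source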
raws = dict()
raw_erms = dict()
new_sfreq = 90. # Nyquist frequency (45 Hz) < line noise freq (50 Hz)
raws['vv'] = mne.io.read_raw_fif(vv_fname, verbose='error') # ignore naming
raws['vv'].load_data().resample(new_sfreq)
raws['vv'].info['bads'] = ['MEG2233', 'MEG1842']
raw_erms['vv'] = mne.io.read_raw_fif(vv_erm_fname, verbose='error')
raw_erms['vv'].load_data().resample(new_sfreq)
raw_erms['vv'].info['bads'] = ['MEG2233', 'MEG1842']
raws['opm'] = mne.io.read_raw_fif(opm_fname)
raws['opm'].load_data().resample(new_sfreq)
raw_erms['opm'] = mne.io.read_raw_fif(opm_erm_fname)
raw_erms['opm'].load_data().resample(new_sfreq)
# Make sure our assumptions later hold
assert raws['opm'].info['sfreq'] == raws['vv'].info['sfreq']
Explanation: Load data, resample. We will store the raw objects in dicts with entries
"vv" and "opm" to simplify housekeeping and simplify looping later.
End of explanation
titles = dict(vv='VectorView', opm='OPM')
ssp_ecg, _ = mne.preprocessing.compute_proj_ecg(
raws['vv'], tmin=-0.1, tmax=0.1, n_grad=1, n_mag=1)
raws['vv'].add_proj(ssp_ecg, remove_existing=True)
# due to how compute_proj_eog works, it keeps the old projectors, so
# the output contains both projector types (and also the original empty-room
# projectors)
ssp_ecg_eog, _ = mne.preprocessing.compute_proj_eog(
raws['vv'], n_grad=1, n_mag=1, ch_name='MEG0112')
raws['vv'].add_proj(ssp_ecg_eog, remove_existing=True)
raw_erms['vv'].add_proj(ssp_ecg_eog)
fig = mne.viz.plot_projs_topomap(raws['vv'].info['projs'][-4:],
info=raws['vv'].info)
fig.suptitle(titles['vv'])
fig.subplots_adjust(0.05, 0.05, 0.95, 0.85)
Explanation: Do some minimal artifact rejection just for VectorView data
End of explanation
kinds = ('vv', 'opm')
n_fft = next_fast_len(int(round(4 * new_sfreq)))
print('Using n_fft=%d (%0.1f sec)' % (n_fft, n_fft / raws['vv'].info['sfreq']))
for kind in kinds:
fig = raws[kind].plot_psd(n_fft=n_fft, proj=True)
fig.suptitle(titles[kind])
fig.subplots_adjust(0.1, 0.1, 0.95, 0.85)
Explanation: Explore data
End of explanation
src = mne.read_source_spaces(src_fname)
# This line removes source-to-source distances that we will not need.
# We only do it here to save a bit of memory, in general this is not required.
del src[0]['dist'], src[1]['dist']
bem = mne.read_bem_solution(bem_fname)
fwd = dict()
trans = dict(vv=vv_trans_fname, opm=opm_trans_fname)
# check alignment and generate forward
with mne.use_coil_def(opm_coil_def_fname):
for kind in kinds:
dig = True if kind == 'vv' else False
fig = mne.viz.plot_alignment(
raws[kind].info, trans=trans[kind], subject=subject,
subjects_dir=subjects_dir, dig=dig, coord_frame='mri',
surfaces=('head', 'white'))
mne.viz.set_3d_view(figure=fig, azimuth=0, elevation=90,
distance=0.6, focalpoint=(0., 0., 0.))
fwd[kind] = mne.make_forward_solution(
raws[kind].info, trans[kind], src, bem, eeg=False, verbose=True)
del trans, src, bem
Explanation: Alignment and forward
End of explanation
freq_bands = dict(
delta=(2, 4), theta=(5, 7), alpha=(8, 12), beta=(15, 29), gamma=(30, 45))
topos = dict(vv=dict(), opm=dict())
stcs = dict(vv=dict(), opm=dict())
snr = 3.
lambda2 = 1. / snr ** 2
for kind in kinds:
noise_cov = mne.compute_raw_covariance(raw_erms[kind])
inverse_operator = mne.minimum_norm.make_inverse_operator(
raws[kind].info, forward=fwd[kind], noise_cov=noise_cov, verbose=True)
stc_psd, sensor_psd = mne.minimum_norm.compute_source_psd(
raws[kind], inverse_operator, lambda2=lambda2,
n_fft=n_fft, dB=False, return_sensor=True, verbose=True)
topo_norm = sensor_psd.data.sum(axis=1, keepdims=True)
stc_norm = stc_psd.sum() # same operation on MNE object, sum across freqs
# Normalize each source point by the total power across freqs
for band, limits in freq_bands.items():
data = sensor_psd.copy().crop(*limits).data.sum(axis=1, keepdims=True)
topos[kind][band] = mne.EvokedArray(
100 * data / topo_norm, sensor_psd.info)
stcs[kind][band] = \
100 * stc_psd.copy().crop(*limits).sum() / stc_norm.data
del inverse_operator
del fwd, raws, raw_erms
Explanation: Compute and apply inverse to PSD estimated using multitaper + Welch.
Group into frequency bands, then normalize each source point and sensor
independently. This makes the value of each sensor point and source location
in each frequency band the percentage of the PSD accounted for by that band.
End of explanation
def plot_band(kind, band):
Plot activity within a frequency band on the subject's brain.
title = "%s %s\n(%d-%d Hz)" % ((titles[kind], band,) + freq_bands[band])
fig = topos[kind][band].plot_topomap(
times=0., scalings=1., cbar_fmt='%0.1f', vmin=0, cmap='inferno',
time_format=title)
brain = stcs[kind][band].plot(
subject=subject, subjects_dir=subjects_dir, views='cau', hemi='both',
time_label=title, title=title, colormap='inferno',
clim=dict(kind='percent', lims=(70, 85, 99)))
brain.show_view(dict(azimuth=0, elevation=0), roll=0)
return fig, brain
fig_theta, brain_theta = plot_band('vv', 'theta')
Explanation: Now we can make some plots of each frequency band. Note that the OPM head
coverage is only over right motor cortex, so only localization
of beta is likely to be worthwhile.
Theta
End of explanation
fig_alpha, brain_alpha = plot_band('vv', 'alpha')
Explanation: Alpha
End of explanation
fig_beta, brain_beta = plot_band('vv', 'beta')
fig_beta_opm, brain_beta_opm = plot_band('opm', 'beta')
Explanation: Beta
Here we also show OPM data, which shows a profile similar to the VectorView
data beneath the sensors.
End of explanation
fig_gamma, brain_gamma = plot_band('vv', 'gamma')
Explanation: Gamma
End of explanation |
1,169 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Evaluating Services
Sentiment analysis plugins can also be evaluated on a series of pre-defined datasets.
This can be done in three ways
Step1: Programmatically (expert)
A third option is to evaluate plugins manually without launching the server.
This option is particularly interesting for advanced users that want faster iterations and evaluation results, and for automation.
We would first need an instance of a plugin.
In this example we will use the Sentiment140 plugin that is included in every senpy installation
Step2: Then, we need to know what datasets are available.
We can list all datasets and basic stats (e.g., number of instances and labels used) like this
Step3: Now, we will evaluate our plugin in one of the smallest datasets, sts | Python Code:
import requests
from IPython.display import Code
endpoint = 'http://senpy.gsi.upm.es/api'
res = requests.get(f'{endpoint}/evaluate',
params={"algo": "sentiment-vader",
"dataset": "vader,sts",
'outformat': 'json-ld'
})
Code(res.text, language='json')
Explanation: Evaluating Services
Sentiment analysis plugins can also be evaluated on a series of pre-defined datasets.
This can be done in three ways: through the Web UI (playground), through the web API and programmatically.
Regardless of the way you perform the evaluation, you will need to specify a plugin (service) that you want to evaluate, and a series of datasets on which it should be evaluated.
To evaluate a plugin on a dataset, senpy uses the plugin to predict the sentiment of each entry in the dataset.
These predictions are compared with the expected values to produce several metrics, such as accuracy, precision and F1-score.
note: the evaluation process might take long for plugins that use external services, such as sentiment140.
note: plugins are assumed to be pre-trained and invariant, i.e., the prediction for an entry should not change regardless of which other entries are evaluated alongside it.
Web UI (Playground)
The playground should contain a tab for Evaluation, where you can select any plugin that can be evaluated, and the set of datasets that you want to test the plugin on.
For example, the image below shows the results of the sentiment-vader plugin on the vader and sts datasets:
Web API
The API exposes an endpoint (/evaluate), which accepts the plugin and the set of datasets on which it should be evaluated.
The following code is not necessary, but it will display the results better:
Here is a simple call using the requests library:
End of explanation
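As a rough illustration of the metrics mentioned above (this is not senpy's internal code; the labels below are made up):
from sklearn.metrics import accuracy_score, precision_score, f1_score

expected = ['positive', 'negative', 'positive', 'neutral']
predicted = ['positive', 'negative', 'neutral', 'neutral']
print(accuracy_score(expected, predicted))
print(precision_score(expected, predicted, average='macro'))
print(f1_score(expected, predicted, average='macro'))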
from senpy.plugins.sentiment import sentiment140_plugin
s140 = sentiment140_plugin.Sentiment140()
Explanation: Programmatically (expert)
A third option is to evaluate plugins manually without launching the server.
This option is particularly interesting for advanced users that want faster iterations and evaluation results, and for automation.
We would first need an instance of a plugin.
In this example we will use the Sentiment140 plugin that is included in every senpy installation:
End of explanation
from senpy.gsitk_compat import datasets
for k, d in datasets.items():
print(k, d['stats'])
Explanation: Then, we need to know what datasets are available.
We can list all datasets and basic stats (e.g., number of instances and labels used) like this:
End of explanation
s140.evaluate(['sts', ])
Explanation: Now, we will evaluate our plugin in one of the smallest datasets, sts:
End of explanation |
1,170 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
In pandas, how do I replace '&AMP;' with '&' from all columns where '&AMP;' could be in any position in a string? Then please evaluate the resulting expression. | Problem:
import pandas as pd
df = pd.DataFrame({'A': ['1 &AMP; 1', 'BB', 'CC', 'DD', '1 &AMP; 0'], 'B': range(5), 'C': ['0 &AMP; 0'] * 5})
def g(df):
for i in df.index:
for col in list(df):
if type(df.loc[i, col]) == str:
if '&AMP;' in df.loc[i, col]:
df.loc[i, col] = df.loc[i, col].replace('&AMP;', '&')
df.loc[i, col] = df.loc[i, col]+' = '+str(eval(df.loc[i, col]))
df.replace('&AMP;', '&', regex=True)
return df
df = g(df.copy()) |
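A possible vectorized alternative to the loop-based reference solution above (a sketch only; it assumes every affected cell is a string and produces the same '<expression> = <value>' format):
def g_vectorized(df):
    def fix(cell):
        if isinstance(cell, str) and '&AMP;' in cell:
            expr = cell.replace('&AMP;', '&')
            return expr + ' = ' + str(eval(expr))
        return cell
    # applymap applies the function element-wise to every cell of the frame.
    return df.applymap(fix)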
1,171 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements; and to You under the Apache License, Version 2.0.
RNN for Character Level Language Modeling
Dataset pre-processing
sample data
Step1: Prepare the dataset. Download all works of Shakespeare concatenated. Other plain text files can also be used.
Create the network
Step2: Conduct SGD
Step3: Checkpoint
Step4: Sample | Python Code:
from __future__ import division
from __future__ import print_function
from future import standard_library
standard_library.install_aliases()
from builtins import zip
from builtins import range
from builtins import object
from past.utils import old_div
import pickle as pickle
import numpy as np
import argparse
import sys
from tqdm import tnrange, tqdm_notebook
# sys.path.append(os.path.join(os.path.dirname(__file__), '../../build/python'))
from singa import layer
from singa import loss
from singa import device
from singa import tensor
from singa import optimizer
from singa import initializer
from singa.proto import model_pb2
from singa import utils
class Data(object):
def __init__(self, fpath, batch_size=32, seq_length=100, train_ratio=0.8):
'''Data object for loading a plain text file.
Args:
fpath, path to the text file.
train_ratio, split the text file into train and test sets, where
train_ratio of the characters are in the train set.
'''
self.raw_data = open(fpath, 'r').read() # read text file
chars = list(set(self.raw_data))
self.vocab_size = len(chars)
self.char_to_idx = {ch: i for i, ch in enumerate(chars)}
self.idx_to_char = {i: ch for i, ch in enumerate(chars)}
data = [self.char_to_idx[c] for c in self.raw_data]
# seq_length + 1 for the data + label
nsamples = old_div(len(data), (1 + seq_length))
data = data[0:nsamples * (1 + seq_length)]
data = np.asarray(data, dtype=np.int32)
data = np.reshape(data, (-1, seq_length + 1))
# shuffle all sequences
np.random.shuffle(data)
self.train_dat = data[0:int(data.shape[0]*train_ratio)]
self.num_train_batch = old_div(self.train_dat.shape[0], batch_size)
self.val_dat = data[self.train_dat.shape[0]:]
self.num_test_batch = old_div(self.val_dat.shape[0], batch_size)
self.batch_size = batch_size
self.seq_length = seq_length
print('train dat', self.train_dat.shape)
print('val dat', self.val_dat.shape)
def numpy2tensors(npx, npy, dev):
'''batch, seq, dim -- > seq, batch, dim'''
tmpx = np.swapaxes(npx, 0, 1)
tmpy = np.swapaxes(npy, 0, 1)
inputs = []
labels = []
for t in range(tmpx.shape[0]):
x = tensor.from_numpy(tmpx[t])
y = tensor.from_numpy(tmpy[t])
x.to_device(dev)
y.to_device(dev)
inputs.append(x)
labels.append(y)
return inputs, labels
def convert(batch, batch_size, seq_length, vocab_size, dev):
'''convert a batch of data into a sequence of input tensors'''
y = batch[:, 1:]
x1 = batch[:, :seq_length]
x = np.zeros((batch_size, seq_length, vocab_size), dtype=np.float32)
for b in range(batch_size):
for t in range(seq_length):
c = x1[b, t]
x[b, t, c] = 1
return numpy2tensors(x, y, dev)
Explanation: Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements; and to You under the Apache License, Version 2.0.
RNN for Character Level Language Modeling
Dataset pre-processing
sample data
End of explanation
def get_lr(epoch):
return old_div(0.001, float(1 << (old_div(epoch, 50))))
hidden_size=32
num_stacks=1
dropout=0.5
data = Data('static/shakespeare_input.txt')
# SGD with L2 gradient normalization
opt = optimizer.RMSProp(constraint=optimizer.L2Constraint(5))
cuda = device.create_cuda_gpu()
rnn = layer.LSTM(name='lstm', hidden_size=hidden_size, num_stacks=num_stacks, dropout=dropout, input_sample_shape=(data.vocab_size,))
rnn.to_device(cuda)
rnn_w = rnn.param_values()[0]
rnn_w.uniform(-0.08, 0.08)
dense = layer.Dense('dense', data.vocab_size, input_sample_shape=(hidden_size,))
dense.to_device(cuda)
dense_w = dense.param_values()[0]
dense_b = dense.param_values()[1]
print('dense w ', dense_w.shape)
print('dense b ', dense_b.shape)
initializer.uniform(dense_w, dense_w.shape[0], 0)
print('dense weight l1 = %f' % (dense_w.l1()))
dense_b.set_value(0)
print('dense b l1 = %f' % (dense_b.l1()))
g_dense_w = tensor.Tensor(dense_w.shape, cuda)
g_dense_b = tensor.Tensor(dense_b.shape, cuda)
Explanation: Prepare the dataset. Download all works of Shakespeare concatenated. Other plain text files can also be used.
Create the network
End of explanation
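One possible way to fetch such a text file (the URL below is only an assumption; any plain-text corpus saved as static/shakespeare_input.txt will work):
import os
import urllib.request

url = 'https://raw.githubusercontent.com/karpathy/char-rnn/master/data/tinyshakespeare/input.txt'
dest = 'static/shakespeare_input.txt'
if not os.path.exists(dest):
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    urllib.request.urlretrieve(url, dest)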
lossfun = loss.SoftmaxCrossEntropy()
train_loss = 0
for epoch in range(3):
bar = tnrange(data.num_train_batch, desc='Epoch %d' % 0)
for b in bar:
batch = data.train_dat[b * data.batch_size: (b + 1) * data.batch_size]
inputs, labels = convert(batch, data.batch_size, data.seq_length, data.vocab_size, cuda)
inputs.append(tensor.Tensor())
inputs.append(tensor.Tensor())
outputs = rnn.forward(model_pb2.kTrain, inputs)[0:-2]
grads = []
batch_loss = 0
g_dense_w.set_value(0.0)
g_dense_b.set_value(0.0)
for output, label in zip(outputs, labels):
act = dense.forward(model_pb2.kTrain, output)
lvalue = lossfun.forward(model_pb2.kTrain, act, label)
batch_loss += lvalue.l1()
grad = lossfun.backward()
grad /= data.batch_size
grad, gwb = dense.backward(model_pb2.kTrain, grad)
grads.append(grad)
g_dense_w += gwb[0]
g_dense_b += gwb[1]
# print output.l1(), act.l1()
bar.set_postfix(train_loss=old_div(batch_loss, data.seq_length))
train_loss += batch_loss
grads.append(tensor.Tensor())
grads.append(tensor.Tensor())
g_rnn_w = rnn.backward(model_pb2.kTrain, grads)[1][0]
dense_w, dense_b = dense.param_values()
opt.apply_with_lr(epoch, get_lr(epoch), g_rnn_w, rnn_w, 'rnnw')
opt.apply_with_lr(epoch, get_lr(epoch), g_dense_w, dense_w, 'dense_w')
opt.apply_with_lr(epoch, get_lr(epoch), g_dense_b, dense_b, 'dense_b')
print('\nEpoch %d, train loss is %f' % (epoch, train_loss / data.num_train_batch / data.seq_length))
Explanation: Conduct SGD
End of explanation
model_path= 'static/model_' + str(epoch) + '.bin'
with open(model_path, 'wb') as fd:
print('saving model to %s' % model_path)
d = {}
for name, w in zip(['rnn_w', 'dense_w', 'dense_b'],[rnn_w, dense_w, dense_b]):
d[name] = tensor.to_numpy(w)
d['idx_to_char'] = data.idx_to_char
d['char_to_idx'] = data.char_to_idx
d['hidden_size'] = hidden_size
d['num_stacks'] = num_stacks
d['dropout'] = dropout
pickle.dump(d, fd)
fd.close()
Explanation: Checkpoint
End of explanation
nsamples = 300
seed_text = "Before we proceed any further, hear me speak."
do_sample = True
with open(model_path, 'rb') as fd:
d = pickle.load(fd)
rnn_w = tensor.from_numpy(d['rnn_w'])
idx_to_char = d['idx_to_char']
char_to_idx = d['char_to_idx']
vocab_size = len(idx_to_char)
dense_w = tensor.from_numpy(d['dense_w'])
dense_b = tensor.from_numpy(d['dense_b'])
hidden_size = d['hidden_size']
num_stacks = d['num_stacks']
dropout = d['dropout']
rnn = layer.LSTM(name='lstm', hidden_size=hidden_size,
num_stacks=num_stacks, dropout=dropout,
input_sample_shape=(len(idx_to_char),))
rnn.to_device(cuda)
rnn.param_values()[0].copy_data(rnn_w)
dense = layer.Dense('dense', vocab_size, input_sample_shape=(hidden_size,))
dense.to_device(cuda)
dense.param_values()[0].copy_data(dense_w)
dense.param_values()[1].copy_data(dense_b)
hx = tensor.Tensor((num_stacks, 1, hidden_size), cuda)
cx = tensor.Tensor((num_stacks, 1, hidden_size), cuda)
hx.set_value(0.0)
cx.set_value(0.0)
if len(seed_text) > 0:
for c in seed_text:
x = np.zeros((1, vocab_size), dtype=np.float32)
x[0, char_to_idx[c]] = 1
tx = tensor.from_numpy(x)
tx.to_device(cuda)
inputs = [tx, hx, cx]
outputs = rnn.forward(model_pb2.kEval, inputs)
y = dense.forward(model_pb2.kEval, outputs[0])
y = tensor.softmax(y)
hx = outputs[1]
cx = outputs[2]
sys.stdout.write(seed_text)
else:
y = tensor.Tensor((1, vocab_size), cuda)
y.set_value(old_div(1.0, vocab_size))
for i in range(nsamples):
y.to_host()
prob = tensor.to_numpy(y)[0]
if do_sample:
cur = np.random.choice(vocab_size, 1, p=prob)[0]
else:
cur = np.argmax(prob)
sys.stdout.write(idx_to_char[cur])
x = np.zeros((1, vocab_size), dtype=np.float32)
x[0, cur] = 1
tx = tensor.from_numpy(x)
tx.to_device(cuda)
inputs = [tx, hx, cx]
outputs = rnn.forward(model_pb2.kEval, inputs)
y = dense.forward(model_pb2.kEval, outputs[0])
y = tensor.softmax(y)
hx = outputs[1]
cx = outputs[2]
print('')
Explanation: Sample
End of explanation |
1,172 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data preprocessing and logistic regression for a binary classification problem
Programming assignment
In this assignment you will get acquainted with the main data preprocessing techniques and apply them to train a logistic regression model. The answers have to be uploaded to the corresponding form as 6 text files.
The assignment requires Python 2.7 and up-to-date versions of the following libraries
Step1: Dataset description
The task
Step2: Let us extract the target variable Grant.Status from the dataset and denote it by y
Now X denotes the training set and y the answers on it
Step3: Theory of logistic regression
Once you understand exactly which problem has to be solved on these data, the next step in a real analysis would be choosing a suitable method. In this assignment the method has been chosen for you: logistic regression. Let us briefly recall the model.
Logistic regression predicts the probability that an object belongs to each class. The sum of its outputs over all classes for a single object equals one.
$$ \sum_{k=1}^K \pi_{ik} = 1, \quad \pi_k \equiv P\,(y_i = k \mid x_i, \theta), $$
where
Step4: We can see that the dataset contains both numeric and categorical features. Let us get the lists of their names
Step5: It also contains missing values. An obvious solution would be to drop every row with at least one missing value. Let us do that
Step6: Clearly, we would then throw away almost all the data, so this approach will not work here.
Missing values can also be interpreted; there are several ways to do so, and they differ for categorical and real-valued features.
For real-valued features
Step7: Encoding categorical features.
In the previous cell we split our dataset into two more parts
Step8: As you can see, the first three columns encode the country and the next two the gender. For identical elements of the sample the rows coincide completely. The example also shows that encoding greatly increases the number of features, but fully preserves the information, including the presence of missing values (their presence simply becomes one of the binary features in the transformed data).
Now let us apply one-hot encoding to the categorical features of the original dataset. Note the interface that is common to all preprocessing methods. The function
encoder.fit_transform(X)
computes the parameters of the transformation; afterwards new data can be transformed with
encoder.transform(X)
It is very important to apply the same transformation to both the training and the test data; otherwise you will get unpredictable and most likely poor results. In particular, if you encode the training and test sets separately, you will in general get different codes for the same features, and your solution will not work.
Also, the parameters of many transformations (for example the scaling considered below) must not be computed on the training and test data together, because the quality metrics computed on the test set would then give biased estimates of the algorithm's performance. Categorical encoding does not compute any parameters on the training set, so it can be applied to the whole dataset at once.
Step9: To build a quality metric for the trained model we need to split the original dataset into training and test sets.
Note the fixed parameter of the random number generator
Step10: Class descriptions
So we have obtained the first datasets that satisfy both of the logistic regression constraints on the input data. Let us train a regression on them, using the hyperparameter search functionality available in the sklearn library
optimizer = GridSearchCV(estimator, param_grid)
where
Step11: Scaling of real-valued features.
Let us try to improve the classification quality somehow. To do that, let us look at the data themselves
Step12: As the plots show, the features differ very strongly in magnitude (note the ranges of the x and y axes). For ordinary regression this does not affect the quality of the model at all, because features that are small in magnitude simply get large weights, but with regularization, which penalizes the model for large weights, regression usually starts to work worse.
In such cases it is always recommended to standardize (scale) the features so that they differ less in magnitude while no other properties of the feature space are violated. Even if the final test quality decreases, this improves the interpretability of the model, because the new weights reflect the "importance" of a feature for the final classification.
Standardization is performed by subtracting the mean of each feature and dividing by the sample standard deviation
Step13: Comparing the feature spaces.
Let us build the same plots for the transformed data
Step14: As the plots show, we have not changed the properties of the feature space
Step24: Class balancing.
Classification algorithms can be very sensitive to imbalanced classes. Consider an example with samples drawn from two Gaussians. Their means and covariance matrices are chosen so that the true separating surface should run parallel to the x axis. We put 20 objects sampled from the 1st Gaussian and 10 objects from the 2nd into the training set. Then we train a linear regression on them and plot the objects and the classification regions.
Step25: As you can see, in the second case the classifier finds a separating surface that is closer to the true one, i.e. it overfits less. So class balance in the training set should always be kept in mind.
Let us check whether the classes are balanced in our training set
Step26: Clearly, they are not.
This can be fixed in different ways; we will consider two of them
Step27: Stratification of the samples.
Consider once again the example with samples from normal distributions. Let us look once more at the classifier quality obtained on the test sets
Step30: How well do these numbers really reflect the quality of the algorithm, given that the test set is just as imbalanced as the training set? We already know that logistic regression is sensitive to class balance in the training set, so on this test set it will give deliberately underestimated results. The test metric would be far more meaningful if the objects were split evenly between the samples
Step31: As you can see, after this procedure the classifier's predictions changed only slightly, while the quality increased. Depending on how you initially split the data into training and test, the final test metric after a balanced split may either increase or decrease, but it can be trusted much more, because it is built with the specifics of the classifier in mind. This approach is a special case of the so-called stratification method.
Task 4. Stratification of the sample.
By analogy with what was done at the beginning of the assignment, split X_real_zeros and X_cat_oh into training and test sets, passing to the function
train_test_split(...)
the additional parameter
stratify=y
Also make sure to pass random_state=0 to the function.
Scale the new real-valued samples, train the classifier and its hyperparameters by cross-validation, correcting for the imbalanced classes with class weights. Make sure you have found the accuracy optimum over the hyperparameters.
Evaluate the classifier with the AUC ROC metric on the test set.
Pass the obtained answer to the function write_answer_4
Step35: You have now gone through the main stages of data preprocessing for linear classifiers.
Let us recall the main stages
Step36: As you can see, this data transformation already allows building non-linear separating surfaces, which can adapt to the data more finely and capture more complex dependencies. The number of features in the new model
Step37: At the same time this method makes the model much more prone to overfitting because of the rapid growth of the number of features with the degree $p$. Consider an example with $p=11$
Step38: The number of features in this model
Step41: Task 5. Transformation of real-valued features.
By analogy with the example, implement the transformation of the model's real-valued features using polynomial features of degree 2
Build a logistic regression on the new data while choosing the optimal hyperparameters. Note that the transformed features already contain a column whose values are all equal to 1, so there is no need to train the intercept $b$ separately; its role is played by one of the weights $w$. Therefore, to avoid linear dependence in the dataset, pass the parameter fit_intercept=False to the logistic regression constructor. Use the stratified samples with class balancing via weights for training; the transformed features have to be re-scaled.
Compute AUC ROC on the test set and compare this result with the one obtained with the ordinary features.
Pass the obtained answer to the function write_answer_5.
Step42: Lasso regression.
L1-regularization (Lasso) can also be applied to logistic regression instead of L2-regularization; it leads to feature selection. You are asked to apply L1-regularization to the original features and interpret the results (feature selection can also be successfully applied to the polynomial features, but then the interpretation component disappears, because the meaning of the original features is known, while that of the polynomial ones can be rather non-trivial). To call logistic regression with L1-regularization it is enough to pass penalty='l1' to the class constructor.
Task 6. Feature selection with Lasso regression.
Train a Lasso regression on the stratified scaled samples, using class balancing via weights.
Compute the ROC AUC of the regression and compare it with the previous results.
Find the indices of the real-valued features that have zero weights in the final model.
Pass their list to the function write_answer_6. | Python Code:
import pandas as pd
import numpy as np
import matplotlib
from matplotlib import pyplot as plt
matplotlib.style.use('ggplot')
%matplotlib inline
Explanation: Data preprocessing and logistic regression for a binary classification problem
Programming assignment
In this assignment you will get acquainted with the main data preprocessing techniques and apply them to train a logistic regression model. The answers have to be uploaded to the corresponding form as 6 text files.
The assignment requires Python 2.7 and up-to-date versions of the libraries:
- NumPy: 1.10.4 or higher
- Pandas: 0.17.1 or higher
- Scikit-learn: 0.17 or higher
End of explanation
data = pd.read_csv('data.csv')
data.shape
Explanation: Dataset description
Task: using 38 features related to a grant application (the applicants' research area, information about their academic background, the grant size, the field in which it is awarded), predict whether the application will be accepted. The dataset contains information on 6000 grant applications submitted to the University of Melbourne between 2004 and 2008.
The full version of the data with more features can be found at https://www.kaggle.com/c/unimelb.
End of explanation
X = data.drop('Grant.Status', 1)
y = data['Grant.Status']
Explanation: Let us extract the target variable Grant.Status from the dataset and denote it by y
Now X denotes the training set and y the answers on it
End of explanation
data.head()
Explanation: Theory of logistic regression
Once you understand exactly which problem has to be solved on these data, the next step in a real analysis would be choosing a suitable method. In this assignment the method has been chosen for you: logistic regression. Let us briefly recall the model.
Logistic regression predicts the probability that an object belongs to each class. The sum of its outputs over all classes for a single object equals one.
$$ \sum_{k=1}^K \pi_{ik} = 1, \quad \pi_k \equiv P\,(y_i = k \mid x_i, \theta), $$
where:
- $\pi_{ik}$ - the probability that object $x_i$ from the sample $X$ belongs to class $k$
- $\theta$ - the internal parameters of the algorithm that are tuned during training; for logistic regression these are $w, b$
From this property of the model it follows that for binary classification we only need to compute the probability of one of the classes (the other follows from the normalization condition). This probability is computed with the logistic function:
$$ P\,(y_i = 1 \mid x_i, \theta) = \frac{1}{1 + \exp(-w^T x_i-b)} $$
The parameters $w$ and $b$ are found as solutions of the following optimization problem (the functionals with L1 and L2 regularization, which you met in the previous assignments, are given):
L2-regularization:
$$ Q(X, y, \theta) = \frac{1}{2} w^T w + C \sum_{i=1}^l \log ( 1 + \exp(-y_i (w^T x_i + b ) ) ) \longrightarrow \min\limits_{w,b} $$
L1-regularization:
$$ Q(X, y, \theta) = \sum_{d=1}^D |w_d| + C \sum_{i=1}^l \log ( 1 + \exp(-y_i (w^T x_i + b ) ) ) \longrightarrow \min\limits_{w,b} $$
$C$ is the standard hyperparameter of the model that controls how strongly we allow the model to fit the data.
Data preprocessing
The properties of this model imply that:
- all $X$ must be numeric (any categorical features among them have to be converted to real numbers in some way)
- there must be no missing values among $X$ (i.e. all missing values have to be filled in somehow before applying the model)
Therefore the basic preprocessing steps for any dataset used with logistic regression are encoding the categorical features and removing or interpreting the missing values (if either is present).
End of explanation
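A tiny numpy sketch of the logistic function above (illustrative only; the weights, bias and object below are made-up numbers):
import numpy as np

w = np.array([0.5, -1.2])    # hypothetical weights
b = 0.1                      # hypothetical bias
x = np.array([1.0, 2.0])     # hypothetical object
p_one = 1.0 / (1.0 + np.exp(-(w.dot(x) + b)))
print(p_one, 1.0 - p_one)    # the two class probabilities sum to one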
numeric_cols = ['RFCD.Percentage.1', 'RFCD.Percentage.2', 'RFCD.Percentage.3',
'RFCD.Percentage.4', 'RFCD.Percentage.5',
'SEO.Percentage.1', 'SEO.Percentage.2', 'SEO.Percentage.3',
'SEO.Percentage.4', 'SEO.Percentage.5',
'Year.of.Birth.1', 'Number.of.Successful.Grant.1', 'Number.of.Unsuccessful.Grant.1']
categorical_cols = list(set(X.columns.values.tolist()) - set(numeric_cols))
len(numeric_cols)
Explanation: We can see that the dataset contains both numeric and categorical features. Let us get the lists of their names:
End of explanation
data.dropna().shape
Explanation: It also contains missing values. An obvious solution would be to drop every row with at least one missing value. Let us do that:
End of explanation
def calculate_means(numeric_data):
means = np.zeros(numeric_data.shape[1])
for j in range(numeric_data.shape[1]):
to_sum = numeric_data.iloc[:,j]
indices = np.nonzero(~numeric_data.iloc[:,j].isnull())[0]
correction = np.amax(to_sum[indices])
to_sum /= correction
for i in indices:
means[j] += to_sum[i]
means[j] /= indices.size
means[j] *= correction
return pd.Series(means, numeric_data.columns)
means = calculate_means(X[numeric_cols])
X_real_zeros = X[numeric_cols].copy()
X_real_mean = X[numeric_cols].copy()
for idx, column in enumerate(numeric_cols):
X_real_zeros[column].fillna(0, inplace=True)
X_real_mean[column].fillna(means.iloc[idx], inplace=True)
X_cat = X[categorical_cols].copy()
for column in categorical_cols:
X_cat[column] = X_cat[column].apply(lambda x: str(x))
X_cat[column] = X_cat[column].fillna('NA')
X_cat.info()
Explanation: Clearly, we would then throw away almost all the data, so this approach will not work here.
Missing values can also be interpreted; there are several ways to do so, and they differ for categorical and real-valued features.
For real-valued features:
- replace with 0 (the feature will then contribute nothing to the prediction for this object)
- replace with the mean (each missing feature will contribute the same as the mean value of the feature over the dataset)
For categorical features:
- interpret a missing value as yet another category (this is the most natural approach, because for categories we have the unique opportunity not to lose the information about the presence of missing values; note that for real-valued features this information is inevitably lost)
Task 0. Handling missing values.
Fill the missing real values in X with zeros and with the column means, and name the resulting dataframes X_real_zeros and X_real_mean respectively. To compute the means use the function calculate_means described below, which takes the real-valued features of the original dataframe as input.
Convert all categorical features in X to strings; missing values also have to be converted to some strings that are not categories (for example 'NA'). Name the resulting dataframe X_cat.
To combine the samples here and throughout the assignment it is recommended to use the functions
np.hstack(...)
np.vstack(...)
End of explanation
from sklearn.linear_model import LogisticRegression as LR
from sklearn.feature_extraction import DictVectorizer as DV
categorial_data = pd.DataFrame({'sex': ['male', 'female', 'male', 'female'],
'nationality': ['American', 'European', 'Asian', 'European']})
print('Original data:\n')
print(categorial_data)
encoder = DV(sparse = False)
encoded_data = encoder.fit_transform(categorial_data.T.to_dict().values())
print('\nEncoded data:\n')
print(encoded_data)
Explanation: Encoding categorical features.
In the previous cell we split our dataset into two more parts: one containing only the real-valued features, the other only the categorical ones. We will need this to process these data separately later on, as well as to compare the quality of different methods.
To use the regression model, the categorical features have to be converted to real numbers. Let us look at the main way of doing this: one-hot encoding. The idea is to encode a categorical feature with a binary code: each category is mapped to a set of zeros and ones.
Let us see how this method works on a simple dataset.
End of explanation
encoder = DV(sparse = False)
X_cat_oh = encoder.fit_transform(X_cat.T.to_dict().values())
X_cat_oh.shape
Explanation: As you can see, the first three columns encode the country and the next two the gender. For identical elements of the sample the rows coincide completely. The example also shows that encoding greatly increases the number of features, but fully preserves the information, including the presence of missing values (their presence simply becomes one of the binary features in the transformed data).
Now let us apply one-hot encoding to the categorical features of the original dataset. Note the interface that is common to all preprocessing methods. The function
encoder.fit_transform(X)
computes the necessary parameters of the transformation; afterwards new data can already be transformed with
encoder.transform(X)
It is very important to apply the same transformation to both the training and the test data; otherwise you will get unpredictable and most likely poor results. In particular, if you encode the training and test sets separately, you will in general get different codes for the same features, and your solution will not work.
Also, the parameters of many transformations (for example the scaling considered below) must not be computed on the training and test data together, because the quality metrics computed on the test set would then give biased estimates of the algorithm's performance. Categorical encoding does not compute any parameters on the training set, so it can be applied to the whole dataset at once.
End of explanation
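A self-contained sketch of the fit_transform/transform contract described above (toy data, separate from the grant dataset):
from sklearn.feature_extraction import DictVectorizer

toy_encoder = DictVectorizer(sparse=False)
train_rows = [{'colour': 'red'}, {'colour': 'blue'}]
test_rows = [{'colour': 'blue'}]
toy_encoder.fit_transform(train_rows)    # learns the category-to-column mapping
print(toy_encoder.transform(test_rows))  # reuses the same mapping, so the columns stay consistent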
from sklearn.cross_validation import train_test_split
(X_train_real_zeros,
X_test_real_zeros,
y_train, y_test) = train_test_split(X_real_zeros, y,
test_size=0.3,
random_state=0)
(X_train_real_mean,
X_test_real_mean) = train_test_split(X_real_mean,
test_size=0.3,
random_state=0)
(X_train_cat_oh,
X_test_cat_oh) = train_test_split(X_cat_oh,
test_size=0.3,
random_state=0)
Explanation: To build a quality metric for the trained model, the original dataset has to be split into training and test sets.
Note the fixed parameter of the random number generator: random_state. Since the results on training and test depend on how exactly you split the objects, you are asked to use a predefined value in order to get results consistent with the answers in the grading system.
End of explanation
from sklearn.linear_model import LogisticRegression
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import roc_auc_score
def plot_scores(optimizer):
scores = [[item[0]['C'],
item[1],
(np.sum((item[2]-item[1])**2)/(item[2].size-1))**0.5] for item in optimizer.grid_scores_]
scores = np.array(scores)
plt.semilogx(scores[:,0], scores[:,1])
plt.fill_between(scores[:,0], scores[:,1]-scores[:,2],
scores[:,1]+scores[:,2], alpha=0.3)
plt.show()
def write_answer_1(auc_1, auc_2):
auc = (auc_1 + auc_2)/2
with open("preprocessing_lr_answer1.txt", "w") as fout:
fout.write(str(auc))
param_grid = {'C': [0.01, 0.05, 0.1, 0.5, 1, 5, 10]}
cv = 3
# place your code here
def make_thinks(X_train, X_test):
train = X_train[:]
test = X_test[:]
train = np.hstack((X_train_cat_oh, train))
test = np.hstack((X_test_cat_oh, test))
estimator = LogisticRegression()
optimizer = GridSearchCV(estimator, param_grid, cv=cv)
optimizer.fit(train, y_train)
plot_scores(optimizer)
print(roc_auc_score(y_test, optimizer.predict_proba(test)[:, 1]))
make_thinks(X_train_real_zeros, X_test_real_zeros)
make_thinks(X_train_real_mean, X_test_real_mean)
write_answer_1(0.884413308899, 0.887894094049)
train = X_train_real_mean[:]
test = X_test_real_mean[:]
train = np.hstack((X_train_cat_oh, train))
test = np.hstack((X_test_cat_oh, test))
Explanation: Class descriptions
So we have obtained the first datasets that satisfy both of the logistic regression constraints on the input data. Let us train a regression on them, using the hyperparameter search functionality available in the sklearn library
optimizer = GridSearchCV(estimator, param_grid)
where:
- estimator - the learning algorithm whose parameters will be tuned
- param_grid - a dictionary of parameters whose keys are the string names passed to the estimator and whose values are the sets of parameter values to try
This class runs cross-validation on the training set for every parameter combination and finds the one on which the algorithm works best. This lets you tune the hyperparameters on the training set while avoiding overfitting. Some optional parameters of this class that we will need:
- scoring - the quality functional whose maximum is sought by cross-validation; by default the score() function of the estimator class is used
- n_jobs - speeds up cross-validation by running it in parallel; the number sets how many jobs run at once
- cv - the number of folds into which the sample is split during cross-validation
After the GridSearchCV class is initialized, the parameter search is started with the method
optimizer.fit(X, y)
For predictions you can then use
optimizer.predict(X)
for labels or
optimizer.predict_proba(X)
for probabilities (in the case of logistic regression).
You can also directly get the optimal estimator and the optimal parameters, since they are attributes of the GridSearchCV class:
- best_estimator_ - the best algorithm
- best_params_ - the best set of parameters
The logistic regression class looks as follows:
estimator = LogisticRegression(penalty)
where penalty takes either the value 'l2' or 'l1'. By default 'l2' is used, and everywhere in the assignment, unless stated otherwise, logistic regression with L2-regularization is assumed.
Task 1. Comparing the ways of filling missing real values.
Compose two training sets from the real-valued and categorical features: one where the missing real values are filled with zeros, the other where they are filled with the means. It is recommended to put the real-valued features first and the categorical ones after them.
Train a logistic regression on them, choosing the parameters from the given grid param_grid by cross-validation with cv=3 folds. Use the default function as the optimized functional.
Plot two graphs of the accuracy estimates +- their standard deviation as a function of the hyperparameter and make sure you have actually found its maximum. Also note the large variance of the estimates (it can be reduced by increasing the number of folds cv).
Compute the two AUC ROC quality metrics on the test set and compare them. Which way of filling the missing real values works better? For the rest of the assignment, use as real-valued features the sample that gives the better test quality.
Pass the two AUC ROC values (first for the sample filled with means, then for the sample filled with zeros) to the function write_answer_1 and run it. The resulting file is the answer to task 1.
Note for the curious: strictly speaking, it is not entirely logical to optimize the default accuracy functional of the logistic regression class on cross-validation while measuring AUC ROC on the test set, but this, like the restriction on the sample size, is done to speed up the cross-validation process.
End of explanation
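A short sketch of reading those attributes from a fitted GridSearchCV (it reuses the train and y_train arrays built in the cell above; purely illustrative):
example_optimizer = GridSearchCV(LogisticRegression(), param_grid, cv=cv)
example_optimizer.fit(train, y_train)
print(example_optimizer.best_params_)           # e.g. the best value of C
best_clf = example_optimizer.best_estimator_    # the refitted estimator with those parameters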
from pandas.tools.plotting import scatter_matrix
data_numeric = pd.DataFrame(X_train_real_zeros, columns=numeric_cols)
list_cols = ['Number.of.Successful.Grant.1', 'SEO.Percentage.2', 'Year.of.Birth.1']
scatter_matrix(data_numeric[list_cols], alpha=0.5, figsize=(10, 10))
plt.show()
Explanation: Scaling of real-valued features.
Let us try to improve the classification quality somehow. To do that, let us look at the data themselves:
End of explanation
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler(with_mean=True, with_std=True)
X_train_real_scaled = scaler.fit_transform(X_train_real_mean)
X_test_real_scaled = scaler.transform(X_test_real_mean)
Explanation: As the plots show, the features differ very strongly from each other in magnitude (note the ranges of the x and y axes). For ordinary regression this does not affect the quality of the trained model at all, because features that are small in magnitude simply get large weights, but with regularization, which penalizes the model for large weights, regression usually starts to work worse.
In such cases it is always recommended to standardize (scale) the features so that they differ less in magnitude while no other properties of the feature space are violated. Even if the final test quality of the model decreases, this improves its interpretability, because the new weights reflect the "importance" of a feature for the final classification.
Standardization is performed by subtracting the mean of each feature and dividing by the sample standard deviation:
$$ x^{scaled}_{id} = \dfrac{x_{id} - \mu_d}{\sigma_d}, \quad \mu_d = \frac{1}{N} \sum_{i=1}^l x_{id}, \quad \sigma_d = \sqrt{\frac{1}{N-1} \sum_{i=1}^l (x_{id} - \mu_d)^2} $$
Task 1.5. Scaling of real-valued features.
By analogy with the one-hot encoder call, apply scaling of the real-valued features to the training and test samples X_train_real_zeros and X_test_real_zeros, using the class StandardScaler
and the methods
StandardScaler.fit_transform(...)
StandardScaler.transform(...)
Store the result in the variables X_train_real_scaled and X_test_real_scaled respectively
End of explanation
data_numeric_scaled = pd.DataFrame(X_train_real_scaled, columns=numeric_cols)
list_cols = ['Number.of.Successful.Grant.1', 'SEO.Percentage.2', 'Year.of.Birth.1']
scatter_matrix(data_numeric_scaled[list_cols], alpha=0.5, figsize=(10, 10))
plt.show()
Explanation: Comparing the feature spaces.
Let's build the same plots for the transformed data:
End of explanation
def write_answer_2(auc):
with open("preprocessing_lr_answer2.txt", "w") as fout:
fout.write(str(auc))
make_thinks(X_train_real_scaled, X_test_real_scaled)
write_answer_2(0.887175168997)
Explanation: As the plots show, we have not changed the properties of the feature space: the histograms of the feature value distributions, as well as their scatter plots, look the same as before the scaling, but now all values lie in roughly the same range, which makes the results easier to interpret and fits better with the idea of regularization.
Task 2. Comparing classification quality before and after scaling the real-valued features.
Train the regression and its hyperparameters once more on the new features, combining them with the encoded categorical ones.
Check whether the accuracy optimum over the hyperparameters was actually found during cross-validation.
Compute the ROC AUC value on the test set and compare it with the best result obtained earlier.
Write the obtained answer to a file with the function write_answer_2.
End of explanation
np.random.seed(0)
# Sample data from the first Gaussian
data_0 = np.random.multivariate_normal([0,0], [[0.5,0],[0,0.5]], size=40)
# And from the second one
data_1 = np.random.multivariate_normal([0,1], [[0.5,0],[0,0.5]], size=40)
# For training we take 20 objects from the first class and 10 from the second
example_data_train = np.vstack([data_0[:20,:], data_1[:10,:]])
example_labels_train = np.concatenate([np.zeros((20)), np.ones((10))])
# For testing - 20 from the first and 30 from the second
example_data_test = np.vstack([data_0[20:,:], data_1[10:,:]])
example_labels_test = np.concatenate([np.zeros((20)), np.ones((30))])
# Define the coordinate grid on which the classification regions will be computed
xx, yy = np.meshgrid(np.arange(-3, 3, 0.02), np.arange(-3, 3, 0.02))
# Train the regression without class balancing
optimizer = GridSearchCV(LogisticRegression(), param_grid, cv=cv, n_jobs=-1)
optimizer.fit(example_data_train, example_labels_train)
# Compute the regression predictions on the grid
Z = optimizer.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Pastel2)
plt.scatter(data_0[:,0], data_0[:,1], color='red')
plt.scatter(data_1[:,0], data_1[:,1], color='blue')
# Compute the AUC
auc_wo_class_weights = roc_auc_score(example_labels_test, optimizer.predict_proba(example_data_test)[:,1])
plt.title('Without class weights')
plt.show()
print('AUC: %f'%auc_wo_class_weights)
# For the second regression, pass the parameter class_weight='balanced' to LogisticRegression
optimizer = GridSearchCV(LogisticRegression(class_weight='balanced'), param_grid, cv=cv, n_jobs=-1)
optimizer.fit(example_data_train, example_labels_train)
Z = optimizer.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Pastel2)
plt.scatter(data_0[:,0], data_0[:,1], color='red')
plt.scatter(data_1[:,0], data_1[:,1], color='blue')
auc_w_class_weights = roc_auc_score(example_labels_test, optimizer.predict_proba(example_data_test)[:,1])
plt.title('With class weights')
plt.show()
print('AUC: %f'%auc_w_class_weights)
Explanation: Class balancing.
Classification algorithms can be very sensitive to imbalanced classes. Consider an example with samples drawn from two Gaussians. Their means and covariance matrices are chosen so that the true separating surface should run parallel to the x axis. We put 20 objects sampled from the 1st Gaussian and 10 objects from the 2nd into the training set. We then train a logistic regression on them and plot the objects together with the classification regions.
End of explanation
print(np.sum(y_train==0))
print(np.sum(y_train==1))
Explanation: As you can see, in the second case the classifier finds a separating surface that is closer to the true one, i.e. it overfits less. So you should always pay attention to how balanced the classes in the training set are.
Let's check whether the classes in our training set are balanced:
End of explanation
def write_answer_3(auc_1, auc_2):
auc = (auc_1 + auc_2) / 2
with open("preprocessing_lr_answer3.txt", "w") as fout:
fout.write(str(auc))
# place your code here
def make_thinks_balanced(X_train, X_test):
train = X_train[:]
test = X_test[:]
train = np.hstack((X_train_cat_oh, train))
test = np.hstack((X_test_cat_oh, test))
estimator = LogisticRegression(class_weight='balanced')
optimizer = GridSearchCV(estimator, param_grid, cv=cv)
optimizer.fit(train, y_train)
plot_scores(optimizer)
print(roc_auc_score(y_test, optimizer.predict_proba(test)[:, 1]))
make_thinks_balanced(X_train_real_scaled, X_test_real_scaled)
np.random.seed(0)
indices_to_add = np.random.randint(low = 0, high = np.sum((y_train.as_matrix() == 1)), size = np.sum(y_train==0) - np.sum(y_train==1))
def make_thinks_stratification(X_train, X_test):
train = X_train[:]
test = X_test[:]
train = np.hstack((X_train_cat_oh, train))
test = np.hstack((X_test_cat_oh, test))
    train = np.vstack((train, train[y_train.as_matrix() == 1, :][indices_to_add, :]))  # oversample rows of the minority class only
y_train_new = np.append(y_train.as_matrix(), np.ones_like(indices_to_add))
estimator = LogisticRegression()
optimizer = GridSearchCV(estimator, param_grid, cv=cv)
optimizer.fit(train, y_train_new)
plot_scores(optimizer)
print(roc_auc_score(y_test, optimizer.predict_proba(test)[:, 1]))
make_thinks_stratification(X_train_real_scaled, X_test_real_scaled)
write_answer_3(0.887506790191, 0.887093501091)
auc_wo_class_weights = 0.887175168997
auc_w_class_weights = 0.887506790191
Explanation: Clearly, they are not.
This can be fixed in several ways; we will consider two of them:
- give the objects of the minority class a larger weight when training the classifier (shown in the example above)
- oversample objects of the minority class until the number of objects in both classes becomes equal
Task 3. Class balancing.
Train the logistic regression and its hyperparameters with class balancing via weights (the class_weight='balanced' parameter of the regression) on the scaled sets obtained in the previous task. Make sure that you have found the accuracy maximum over the hyperparameters.
Compute the ROC AUC metric on the test set.
Balance the training set by oversampling objects of the smaller class. To obtain the indices of the objects that have to be added to the training set, use the following combination of function calls:
np.random.seed(0)
indices_to_add = np.random.randint(...)
X_train_to_add = X_train[y_train.as_matrix() == 1,:][indices_to_add,:]
After that, append these objects to the beginning or the end of the training set. Extend the vector of answers accordingly.
Compute the ROC AUC metric on the test set and compare it with the previous result.
Write the answers to the output file with the function write_answer_3, passing it first the ROC AUC for balancing with weights and then for manual oversampling of the set.
End of explanation
print('AUC ROC for classifier without weighted classes', auc_wo_class_weights)
print('AUC ROC for classifier with weighted classes: ', auc_w_class_weights)
Explanation: Stratifying the samples.
Let's return once more to the example with samples from normal distributions and look again at the classifier quality obtained on the test sets:
End of explanation
# Split the data of each class evenly between the training and test sets
example_data_train = np.vstack([data_0[:20,:], data_1[:20,:]])
example_labels_train = np.concatenate([np.zeros((20)), np.ones((20))])
example_data_test = np.vstack([data_0[20:,:], data_1[20:,:]])
example_labels_test = np.concatenate([np.zeros((20)), np.ones((20))])
# Train the classifier
optimizer = GridSearchCV(LogisticRegression(class_weight='balanced'), param_grid, cv=cv, n_jobs=-1)
optimizer.fit(example_data_train, example_labels_train)
Z = optimizer.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Pastel2)
plt.scatter(data_0[:,0], data_0[:,1], color='red')
plt.scatter(data_1[:,0], data_1[:,1], color='blue')
auc_stratified = roc_auc_score(example_labels_test, optimizer.predict_proba(example_data_test)[:,1])
plt.title('With class weights')
plt.show()
print('AUC ROC for stratified samples: ', auc_stratified)
Explanation: How well do these numbers really reflect the quality of the algorithm, given that the test set is just as imbalanced as the training set? We already know that the logistic regression algorithm is sensitive to the class balance in the training set, so here it will give deliberately understated results on the test. The test metric of the classifier would make much more sense if the objects were split evenly between the sets: 20 of each class for training and 20 for testing. Let's re-form the sets and compute the new errors:
End of explanation
def write_answer_4(auc):
with open("preprocessing_lr_answer4.txt", "w") as fout:
fout.write(str(auc))
# place your code here
data_X = np.vstack((X_train_real_scaled, X_test_real_scaled))
print(data_X.shape)
data_Y = y
(data_train_X_real,
data_test_X_real,
y_train, y_test) = train_test_split(X_real_zeros, data_Y,
stratify=data_Y,
random_state=0)
(X_train_cat_oh,
X_test_cat_oh, _, _) = train_test_split(X_cat_oh, data_Y,
stratify=data_Y,
random_state=0)
data_train_X = np.hstack((data_train_X_real, X_train_cat_oh))
data_test_X = np.hstack((data_test_X_real, X_test_cat_oh))
train = data_train_X[:]
test = data_test_X[:]
estimator = LogisticRegression(class_weight='balanced')
optimizer = GridSearchCV(estimator, param_grid, cv=cv)
optimizer.fit(train, y_train)
plot_scores(optimizer)
print(roc_auc_score(y_test, optimizer.predict_proba(test)[:, 1]))
write_answer_4(0.876087053871)
Explanation: As you can see, after this procedure the classifier's output changed only slightly, while the quality increased. Depending on how you originally split the data into train and test, the final test metric after a balanced split may either increase or decrease, but it can be trusted much more, because it is built with the specifics of the classifier in mind. This approach is a special case of the so-called stratification method.
Task 4. Stratifying the sample.
By analogy with what was done at the beginning of the assignment, split the sets X_real_zeros and X_cat_oh into train and test, passing to the function
train_test_split(...)
the additional parameter
stratify=y
Also be sure to pass the variable random_state=0 to the function.
Scale the new real-valued sets, train the classifier and its hyperparameters with cross-validation, correcting for the imbalanced classes with weights. Make sure that you found the accuracy optimum over the hyperparameters.
Evaluate the quality of the classifier with the AUC ROC metric on the test set.
Pass the obtained answer to the function write_answer_4
End of explanation
from sklearn.preprocessing import PolynomialFeatures
# Initialize the class that performs the transformation
transform = PolynomialFeatures(2)
# Fit the transformation on the training set, then apply it to the test set
example_data_train_poly = transform.fit_transform(example_data_train)
example_data_test_poly = transform.transform(example_data_test)
# Note the fit_intercept=False parameter
optimizer = GridSearchCV(LogisticRegression(class_weight='balanced', fit_intercept=False), param_grid, cv=cv, n_jobs=-1)
optimizer.fit(example_data_train_poly, example_labels_train)
Z = optimizer.predict(transform.transform(np.c_[xx.ravel(), yy.ravel()])).reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Pastel2)
plt.scatter(data_0[:,0], data_0[:,1], color='red')
plt.scatter(data_1[:,0], data_1[:,1], color='blue')
plt.title('With class weights')
plt.show()
Explanation: You have now worked through the main stages of data preprocessing for linear classifiers.
Let's recap the main steps:
- handling missing values
- handling categorical features
- stratification
- class balancing
- scaling
It is recommended to perform these operations on the data every time you plan to use linear methods. The recommendation to carry out many of these steps is valid for other machine learning methods as well.
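For orientation, a compact sketch chaining those steps on placeholder data (the toy frame, column names and grid below are illustrative, not the grant dataset):
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
# Placeholder frame: one numeric column with gaps, one categorical column, a binary target
toy = pd.DataFrame({'num': [1.0, np.nan, 3.0, 4.0] * 25,
                    'cat': ['a', 'b', 'a', None] * 25,
                    'y':   [0, 0, 0, 1] * 25})
X_num = toy[['num']].fillna(0)                        # missing values
X_cat = toy[['cat']].astype(str)                      # categorical features as strings
encoder = DictVectorizer(sparse=False)
X_cat_oh = encoder.fit_transform(X_cat.to_dict('records'))  # one-hot encoding
(Xn_tr, Xn_te, Xc_tr, Xc_te,
 y_tr, y_te) = train_test_split(X_num, X_cat_oh, toy['y'],
                                stratify=toy['y'], random_state=0)  # stratification
scaler = StandardScaler()
Xn_tr = scaler.fit_transform(Xn_tr)                   # scaling (fit on train only)
Xn_te = scaler.transform(Xn_te)
clf = GridSearchCV(LogisticRegression(class_weight='balanced'),     # class balancing
                   {'C': [0.1, 1.0, 10.0]}, cv=3)
clf.fit(np.hstack([Xn_tr, Xc_tr]), y_tr)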
Feature transformation.
Now let's look at ways of transforming the features. There are quite a few different feature transformations that allow linear methods to produce more complex separating surfaces. The most basic one is the polynomial feature transformation. The idea is that, in addition to the features themselves, you also include the set of all polynomials of degree $p$ that can be built from them. For the case $p=2$ the transformation looks as follows:
$$ \phi(x_i) = [x_{i,1}^2, ..., x_{i,D}^2, x_{i,1}x_{i,2}, ..., x_{i,D} x_{i,D-1}, x_{i,1}, ..., x_{i,D}, 1] $$
Let's see how these features behave on data sampled from the Gaussians:
End of explanation
print(example_data_train_poly.shape)
Explanation: As you can see, this data transformation already makes it possible to build non-linear separating surfaces, which can adapt to the data more finely and capture more complex dependencies. The number of features in the new model:
End of explanation
transform = PolynomialFeatures(11)
example_data_train_poly = transform.fit_transform(example_data_train)
example_data_test_poly = transform.transform(example_data_test)
optimizer = GridSearchCV(LogisticRegression(class_weight='balanced', fit_intercept=False), param_grid, cv=cv, n_jobs=-1)
optimizer.fit(example_data_train_poly, example_labels_train)
Z = optimizer.predict(transform.transform(np.c_[xx.ravel(), yy.ravel()])).reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Pastel2)
plt.scatter(data_0[:,0], data_0[:,1], color='red')
plt.scatter(data_1[:,0], data_1[:,1], color='blue')
plt.title('Corrected class weights')
plt.show()
Explanation: At the same time, this method also makes the model much more prone to overfitting, because the number of features grows quickly as the degree $p$ increases. Consider an example with $p=11$:
End of explanation
print(example_data_train_poly.shape)
Explanation: The number of features in this model:
End of explanation
def write_answer_5(auc):
with open("preprocessing_lr_answer5.txt", "w") as fout:
fout.write(str(auc))
# place your code here
means = calculate_means(X[numeric_cols])
# Work on explicit copies so the two fill strategies do not share data
X_real_zeros = X[numeric_cols].copy()
X_real_mean = X[numeric_cols].copy()
for idx, column in enumerate(numeric_cols):
    X_real_zeros[column] = X_real_zeros[column].fillna(0)
    X_real_mean[column] = X_real_mean[column].fillna(means.iloc[idx])
X_cat = X[categorical_cols]
for column in categorical_cols:
X_cat[column] = X_cat[column].apply(lambda x: str(x))
X_cat[column] = X_cat[column].fillna('NA')
(data_train_X_real,
data_test_X_real,
y_train, y_test) = train_test_split(X_real_zeros, data_Y,
stratify=data_Y,
random_state=0)
(X_train_cat_oh,
X_test_cat_oh, _, _) = train_test_split(X_cat_oh, data_Y,
stratify=data_Y,
random_state=0)
# Initialize the class that performs the transformation
transform = PolynomialFeatures(2)
# Fit the transformation on the training set, then apply it to the test set
example_data_train_poly = transform.fit_transform(data_train_X_real)
example_data_test_poly = transform.transform(data_test_X_real)
scaler = StandardScaler(with_mean=True, with_std=True)
X_train_real_scaled = scaler.fit_transform(example_data_train_poly)
X_test_real_scaled = scaler.transform(example_data_test_poly)
data_train_X = np.hstack((X_train_real_scaled, X_train_cat_oh))
data_test_X = np.hstack((X_test_real_scaled, X_test_cat_oh))
estimator = LogisticRegression(class_weight='balanced', fit_intercept=False)
optimizer = GridSearchCV(estimator, param_grid, cv=cv)
optimizer.fit(data_train_X, y_train)
plot_scores(optimizer)
print(roc_auc_score(y_test, optimizer.predict_proba(data_test_X)[:, 1]))
write_answer_5(0.883526040034)
Explanation: Task 5. Transforming the real-valued features.
By analogy with the example, transform the real-valued features of the model using polynomial features of degree 2
Build a logistic regression on the new data, selecting the optimal hyperparameters at the same time. Note that the transformed features already contain a column all of whose values are equal to 1, so there is no need to additionally fit the intercept $b$; its role is played by one of the weights $w$. Therefore, to avoid a linear dependence in the dataset, the parameter fit_intercept=False has to be passed when constructing the logistic regression class. For training, use the stratified sets with class balancing via weights; the transformed features have to be scaled anew.
Compute the AUC ROC on the test set and compare this result with the one obtained with the ordinary features.
Pass the obtained answer to the function write_answer_5.
End of explanation
def write_answer_6(features):
with open("preprocessing_lr_answer6.txt", "w") as fout:
fout.write(" ".join([str(num) for num in features]))
# place your code here
means = calculate_means(X[numeric_cols])
# Work on explicit copies so the two fill strategies do not share data
X_real_zeros = X[numeric_cols].copy()
X_real_mean = X[numeric_cols].copy()
for idx, column in enumerate(numeric_cols):
    X_real_zeros[column] = X_real_zeros[column].fillna(0)
    X_real_mean[column] = X_real_mean[column].fillna(means.iloc[idx])
X_cat = X[categorical_cols]
for column in categorical_cols:
X_cat[column] = X_cat[column].apply(lambda x: str(x))
X_cat[column] = X_cat[column].fillna('NA')
(data_train_X_real,
data_test_X_real,
y_train, y_test) = train_test_split(X_real_zeros, data_Y,
stratify=data_Y,
random_state=0,
test_size=0.3)
(X_train_cat_oh,
X_test_cat_oh, _, _) = train_test_split(X_cat_oh, data_Y,
stratify=data_Y,
random_state=0,
test_size=0.3)
scaler = StandardScaler(with_mean=True, with_std=True)
X_train_real_scaled = scaler.fit_transform(data_train_X_real)
X_test_real_scaled = scaler.transform(data_test_X_real)
data_train_X = np.hstack((X_train_real_scaled, X_train_cat_oh))
data_test_X = np.hstack((X_test_real_scaled, X_test_cat_oh))
estimator = LogisticRegression(penalty='l1', class_weight='balanced')
optimizer = GridSearchCV(estimator, param_grid, cv=cv)
optimizer.fit(data_train_X, y_train)
t = optimizer.best_estimator_.coef_
ans = []
for idx, value in enumerate(t[0, :len(numeric_cols)]):
if abs(value) == 0:
ans.append(idx)
write_answer_6(ans)
ans
t[0, :len(numeric_cols)]
Explanation: Lasso regression.
L1 regularization (Lasso) can also be applied to logistic regression instead of L2 regularization, and it leads to feature selection. You are asked to apply L1 regularization to the original features and to interpret the results (feature selection can be applied just as successfully to the polynomial features, but there the interpretation component is lost, because the meaning of the original features is known, while that of the polynomial ones can be quite non-trivial). To call logistic regression with L1 regularization it is enough to pass the parameter penalty='l1' when initializing the class.
Task 6. Feature selection with Lasso regression.
Train the Lasso regression on the stratified, scaled sets, using class balancing via weights.
Compute the ROC AUC of this regression and compare it with the previous results.
Find the indices of the real-valued features that have zero weights in the final model.
Pass their list to the function write_answer_6.
End of explanation |
1,173 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How data scientists use BigQuery
This notebook accompanies the presentation
"Machine Learning and Bayesian Statistics in minutes
Step1: But is it right, though? What's with the weird hump for early departures (departure_delay less than zero)?
First, we should verify that we can apply Bayes Law. Grouping by the departure delay is incorrect if the departure delay is a chaotic input variable. We have to do exploratory analysis to validate that
Step2: Note the crazy non-linearity for the top half of the flights that leave more than 20 minutes early. Most likely, these are planes that try to beat some weather situation. About half of such flights succeed (the linear bottom) and the other half don't (the non-linear top). The average is what we saw as the weird hump in the probability plot. So yes, the hump is real. The rest of the distribution is clear-cut and the Bayes probabilities are quite valid.
Solving the flights problem using GCP tools end-to-end (from ingest to machine learning) is covered in this book
Step3: BigQuery and TensorFlow
Batch predictions of a TensorFlow model from BigQuery! | Python Code:
%%bigquery df
WITH rawnumbers AS (
SELECT
departure_delay,
COUNT(1) AS num_flights,
COUNTIF(arrival_delay < 15) AS num_ontime
FROM
`bigquery-samples.airline_ontime_data.flights`
GROUP BY
departure_delay
HAVING
num_flights > 100
),
totals AS (
SELECT
SUM(num_flights) AS tot_flights,
SUM(num_ontime) AS tot_ontime
FROM rawnumbers
),
bayes AS (
SELECT
departure_delay,
num_flights / tot_flights AS prob_D,
num_ontime / tot_ontime AS prob_D_theta,
tot_ontime / tot_flights AS prob_theta
FROM
rawnumbers, totals
WHERE
num_ontime > 0
)
SELECT
*, (prob_theta * prob_D_theta / prob_D) AS prob_ontime
FROM
bayes
ORDER BY
departure_delay ASC
df.plot(x='departure_delay', y='prob_ontime');
Explanation: How data scientists use BigQuery
This notebook accompanies the presentation
"Machine Learning and Bayesian Statistics in minutes: How data scientists use BigQuery"
Bayesian Statistics in minutes
Let's say that we want to find the probability of a flight being late $\theta$ given a specific departure delay $\textbf{D}$
Bayes' Law tells us that it can be obtained for any specific departure delay using the formula:
<center><font size="+5">
$P(\theta|\textbf{D}) = P(\theta ) \frac{P(\textbf{D} |\theta)}{P(\textbf{D})} $
</font></center>
Once you have large datasets, the probabilities above are just exercises in counting and so, applying Bayesian statistics is super-easy in BigQuery.
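As a sanity check, substituting the three ratios computed in the rawnumbers/totals/bayes query (prob_theta, prob_D_theta and prob_D) into Bayes' Law collapses to plain counting within each departure-delay bucket:
$$P(\theta|\textbf{D}) = \frac{\dfrac{\text{tot\_ontime}}{\text{tot\_flights}} \cdot \dfrac{\text{num\_ontime}}{\text{tot\_ontime}}}{\dfrac{\text{num\_flights}}{\text{tot\_flights}}} = \frac{\text{num\_ontime}}{\text{num\_flights}}$$
so the prob_ontime column is simply the on-time fraction among the flights with that particular departure delay.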
For example, let's find the probability that a flight arrives on time (less than 15 minutes late) given its departure delay:
End of explanation
%%bigquery df
SELECT
departure_delay,
COUNT(1) AS num_flights,
APPROX_QUANTILES(arrival_delay, 10) AS arrival_delay_deciles
FROM
`bigquery-samples.airline_ontime_data.flights`
GROUP BY
departure_delay
HAVING
num_flights > 100
ORDER BY
departure_delay ASC
import pandas as pd
percentiles = df['arrival_delay_deciles'].apply(pd.Series)
percentiles = percentiles.rename(columns = lambda x : str(x*10) + "%")
df = pd.concat([df['departure_delay'], percentiles], axis=1)
df.head()
without_extremes = df.drop(columns=['0%', '100%'])
without_extremes.plot(x='departure_delay', xlim=(-30,50), ylim=(-50,50));
Explanation: But is it right, though? What's with the weird hump for early departures (departure_delay less than zero)?
First, we should verify that we can apply Bayes Law. Grouping by the departure delay is incorrect if the departure delay is a chaotic input variable. We have to do exploratory analysis to validate that:
If a flight departs late, will it arrive late?
Is the relationship between the two variables non-chaotic?
Does the linearity hold even for extreme values of departure delays?
This, too, is straightforward in BigQuery
End of explanation
%%bigquery
CREATE OR REPLACE MODEL ch09eu.bicycle_model_dnn
OPTIONS(input_label_cols=['duration'],
model_type='dnn_regressor', hidden_units=[32, 4])
TRANSFORM(
duration
, start_station_name
, CAST(EXTRACT(dayofweek from start_date) AS STRING)
as dayofweek
, CAST(EXTRACT(hour from start_date) AS STRING)
as hourofday
)
AS
SELECT
duration, start_station_name, start_date
FROM
`bigquery-public-data`.london_bicycles.cycle_hire
%%bigquery
SELECT * FROM ML.EVALUATE(MODEL ch09eu.bicycle_model_dnn)
%%bigquery
SELECT * FROM ML.PREDICT(MODEL ch09eu.bicycle_model_dnn,(
SELECT
'Park Street, Bankside' AS start_station_name
,CURRENT_TIMESTAMP() AS start_date
))
Explanation: Note the crazy non-linearity for the top half of the flights that leave more than 20 minutes early. Most likely, these are planes that try to beat some weather situation. About half of such flights succeed (the linear bottom) and the other half don't (the non-linear top). The average is what we saw as the weird hump in the probability plot. So yes, the hump is real. The rest of the distribution is clear-cut and the Bayes probabilities are quite valid.
Solving the flights problem using GCP tools end-to-end (from ingest to machine learning) is covered in this book:
<img src="https://aisoftwarellc.weebly.com/uploads/5/1/0/0/51003227/published/data-science-on-gcp_2.jpg?1563508887"></img>
Machine Learning in BigQuery
Here, we will use BigQuery ML to create a deep neural network that predicts the duration of bicycle rentals in London.
End of explanation
%%bigquery
CREATE OR REPLACE MODEL advdata.txtclass_tf
OPTIONS (model_type='tensorflow',
model_path='gs://cloud-training-demos/txtclass/export/exporter/1549825580/*')
%%bigquery
SELECT
input,
(SELECT AS STRUCT(p, ['github', 'nytimes', 'techcrunch'][ORDINAL(s)]) prediction FROM
(SELECT p, ROW_NUMBER() OVER() AS s FROM
(SELECT * FROM UNNEST(dense_1) AS p))
ORDER BY p DESC LIMIT 1).*
FROM ML.PREDICT(MODEL advdata.txtclass_tf,
(
SELECT 'Unlikely Partnership in House Gives Lawmakers Hope for Border Deal' AS input
UNION ALL SELECT "Fitbit\'s newest fitness tracker is just for employees and health insurance members"
UNION ALL SELECT "Show HN: Hello, a CLI tool for managing social media"
))
Explanation: BigQuery and TensorFlow
Batch predictions of a TensorFlow model from BigQuery!
End of explanation |
1,174 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analyze and Report Current Plate
Analyze and report a Cell Painting screening plate in 384 format
Step1: Report Current Plate with Existing Data
Report a Cell Painting screening plate in 384 format with pre-generated data.
Step2: Reference Plates | Python Code:
DATE = "170530" # "170704", "170530"
PLATE = "SI0012"
CONF = "conf170511mpc" # "conf170623mpc", "conf170511mpc"
QUADRANTS = [1] # [1, 2, 3, 4]
WRITE_PKL = False
UPDATE_SIMILAR = False
UPDATE_DATASTORE = False
for quadrant in QUADRANTS:
SRC_DIR = "/home/pahl/comas/projects/painting/{}-{}-{}_{}".format(DATE, PLATE, quadrant, CONF)
REPORTNAME = "report_{}-{}".format(PLATE, quadrant)
# REPORTNAME = "report"
keep = ["Compound_Id", "Container_Id", "Producer", "Conc_uM", "Activity", "Toxic", "Pure_Flag", "Rel_Cell_Count",
'Act_Profile', "Metadata_Well", "Plate", 'Smiles']
data_keep = keep.copy()
cpt.create_dirs(op.join(REPORTNAME, "details"))
print("\nProcessing plate {}_{}-{}_{} ...".format(DATE, PLATE, quadrant, CONF))
ds_plate = cpp.load(op.join(SRC_DIR, "Results.tsv"))
ds_plate = ds_plate.group_on_well()
ds_plate = ds_plate.remove_skipped_echo_direct_transfer(op.join(SRC_DIR, "*_print.xml"))
ds_plate = ds_plate.well_type_from_position()
ds_plate = ds_plate.flag_toxic()
ds_plate = ds_plate.activity_profile()
ds_plate = ds_plate.join_layout_1536(PLATE, quadrant)
ds_plate.data["Plate"] = "{}-{}-{}".format(DATE, PLATE, quadrant)
ds_plate = ds_plate.join_smiles()
ds_profile = ds_plate[keep]
if UPDATE_SIMILAR:
ds_profile.update_similar_refs(write=False)
if WRITE_PKL:
ds_profile.write_pkl("{}-{}-{}_profile.pkl".format(DATE, PLATE, quadrant))
# ds_profile = cpp.load_pkl("{}-{}-{}_profile.pkl".format(DATE, PLATE, quadrant))
ds_report = ds_profile.sort_values(["Toxic", "Activity"], ascending=[True, False])
# ds_report = ds_profile.remove_toxic()[0].sort_values("Activity", ascending = False)
# ds_report.data = ds_report.data.head(10)
cpr.full_report(ds_report, SRC_DIR, report_name=REPORTNAME,
plate="{}-{}".format(PLATE, quadrant), highlight=True)
if UPDATE_DATASTORE:
ds_profile.update_datastore(mode="cpd", write=False)
if UPDATE_SIMILAR:
cpp.write_sim_refs()
if UPDATE_DATASTORE:
cpp.write_datastore()
cpp.write_datastore()
cpp.clear_resources()
Explanation: Analyze and Report Current Plate
Analyze and report a Cell Painting screening plate in 384 format
End of explanation
DATE = "170530" # "170704", "170530"
PLATE = "SI0012"
CONF = "conf170511mpc" # "conf170623mpc", "conf170511mpc"
QUADRANTS = [1] # [1, 2, 3, 4]
for quadrant in QUADRANTS:
SRC_DIR = "/home/pahl/comas/projects/painting/{}-{}-{}_{}".format(DATE, PLATE, quadrant, CONF)
REPORTNAME = "report_{}-{}".format(PLATE, quadrant)
# REPORTNAME = "report"
cpt.create_dirs(op.join(REPORTNAME, "details"))
print("\nProcessing plate {}_{}-{}_{} ...".format(DATE, PLATE, quadrant, CONF))
ds_profile = cpp.load_pkl("{}-{}-{}_profile.pkl".format(DATE, PLATE, quadrant))
ds_report = ds_profile.sort_values(["Toxic", "Activity"], ascending=[True, False])
# ds_report = ds_profile.remove_toxic()[0].sort_values("Activity", ascending = False)
# ds_report.data = ds_report.data.head(10)
cpr.full_report(ds_report, SRC_DIR, report_name=REPORTNAME,
plate="{}-{}".format(PLATE, quadrant), highlight=True)
Explanation: Report Current Plate with Existing Data
Report a Cell Painting screening plate in 384 format with pre-generated data.
End of explanation
REF_DIR = "/home/pahl/comas/projects/painting/references"
PLATE_NAMES = ["S0195", "S0198", "S0203"] # "S0195", "S0198", "S0203"
DATES = {"S0195": "170523", "S0198": "170516", "S0203": "170512"}
REPORTNAME = "references"
cpt.create_dirs(op.join(REPORTNAME, "details"))
pb = nbt.ProgressbarJS()
ds_ref = cpp.load("references_act_prof.tsv")
num_steps = 4 * len(PLATE_NAMES)
step = 0
for plate in PLATE_NAMES:
for idx in range(1, 5):
step += 1
pb.update(100 * step / num_steps)
SRC_DIR = "{}/{}-{}".format(REF_DIR, plate, idx)
print("\nProcessing plate {}-{} ...".format(plate, idx))
ds_profile = ds_ref[ds_ref["Plate"] == "{}-{}-{}".format(DATES[plate], plate, idx)].copy()
ds_report = ds_profile.sort_values(["Toxic", "Activity"], ascending=[True, False])
cpr.full_report(ds_profile, SRC_DIR, report_name=REPORTNAME,
plate="{}-{}".format(plate, idx), highlight=True, mode="ref")
pb.done()
Explanation: Reference Plates
End of explanation |
1,175 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Symbulate Lab 1 - Probability Spaces
This Jupyter notebook provides a template for you to fill in. Complete the parts as indicated. To run a cell, hold down SHIFT and hit ENTER.
In this lab you will use the Python package Symbulate. You should have already completed Section 1 of the "Getting Started with Symbulate Tutorial" ADD LINK and read sections 1 through 3 of the Symbulate documentation. A few specific links to the documentation are provided below, but it will probably make more sense if you read the documentation from start to finish. Whenever possible, you should use Symbulate commands, not general Python code.
To use Symbulate, you must first run (SHIFT-ENTER) the following commands.
Step1: Part I. Introduction to Symbulate, and conditional versus unconditional probability
A deck of 16 cards contains 4 cards in each of four suits ['clubs', 'diamonds', 'hearts', 'spades']. The deck is shuffled and two cards are drawn in sequence. We are interested in the following questions.
What is the probability that the first card drawn is a heart?
What is the probability that the second card drawn is a heart?
If the first card drawn is a heart, what is the probability that the second card drawn is a heart?
Before proceeding, give your best guess of each of these probabilities.
We'll use simulation to obtain approximations to the probabilities in the questions above. First we define the deck of cards (we only care about the suits for this exercise).
Step2: Now we define a BoxModel probability space corresponding to drawing two cards (size=2) from the deck at random. We'll assume that the cards are drawn without replacement (replace=False). We also want to keep track of which card was drawn first and which second (order_matters=True).
Step3: The .draw() method simulates a single outcome from the probability space. Note that each outcome is an ordered pair of cards.
Step4: Many outcomes can be simulated using .sim(). The following simulates 10000 draws and stores the results in the variable sims.
Step5: We can summarize the simulation results with .tabulate(). Note that ('heart', 'club') is counted as a separate outcome than ('club', 'heart') because the order matters.
Step6: The above table could be used to estimate the probabilities in question. Instead, we will illustrate several other tools available in Symbulate to summarize simulation output.
First, we use a filter function to create a subset of the simulated outcomes for which the first card is a heart. We define a function first_is_heart that takes as an input a pair of values (x) and returns True if the first value in the pair (x[0]) is equal to 'heart', and False otherwise. (Python indexing starts at 0
Step7: Now we filter the simulated outcomes to create the subset of outcomes for which first_is_heart returns True.
Step8: Returning to question 1, we can estimate the probability that the first card is a heart by dividing the number of simulated draws for which the first card is a heart divided by the total number of simulated draws (using the length function len to count.)
Step9: The true probability is 4/16 = 0.25. Your simulated probability should be close to 0.25, but there will be some natural variability due to the randomness in the simulation. Very roughly, the margin of error of a probability estimate based on $N$ simulated repetitions is about $1/\sqrt{N}$, so about 0.01 for 10000 repetitions. The interval constructed by adding $\pm 0.01$ to your estimate will likely contain 0.25.
a)
Recall question 2
Step10: b)
Many people confuse the probabilities in (2) and (3). The probability in (2) is an unconditional probability
Step11: c)
Given that the first card is a heart, there are 15 cards left in the deck, each of which is equally likely to be the second card, of which 3 are hearts. So the conditional probability that the second card is a heart given that the first card is a heart is 3/15 = 0.20. Verify that your simulated value is consistent with the true value.
Now you will do a few calculations by hand.
Compute, by hand, the conditional probability that the second card is a heart given that the first card is NOT a heart.
Construct, by hand, a hypothetical two-way table representing the results of 10000 draws.
Use the hypothetical table to compute the probability that the second card is a heart.
What is the relationship between the probability that the second card is a heart and the two conditional probabilities?
(Nothing to respond here, just make sure you understand the answers.)
d)
How would the answers to the previous questions change if the draws were made with replacement (so that the first card is replaced and the deck reshuffled before the second draw is drawn?) In this case, what can we say about the events "the first card is a heart" and "the second card is a heart"?
Type your response here.
Part II. Collector's problem
Each box of a certain type of cereal contains one of $n$ distinct prizes and you want to obtain a complete set. Suppose
that each box of cereal is equally likely to contain any one of the $n$ prizes, and the particular prize
that appears in one box has no bearing on the prize that appears in another box. You purchase
cereal boxes one box at a time until you
have the complete set of $n$ prizes. What is the probability that you buy more than $k$ boxes? In this problem you will use simulation to estimate this probability for different values of $n$ and $k$.
Here is a little Python code you can use to label the $n$ prizes from 0 to $n-1$. (Remember
Step12: And here is a function that returns the number of distinct prizes collected among a set of prizes.
Step13: Aside from the above functions, you should use Symbulate commands exclusively for Part II.
Problem 1.
We'll assume that there are 3 prizes, $n=3$, a situation in which exact probabilities can easily be computed by enumerating the possible outcomes.
Step14: a)
Define a probability space for the sequence of prizes obtained after buying $3$ boxes (first box, second box, third box), and simulate a single outcome. (Hint
Step15: b)
Now simulate many outcomes and summarize the results. Does it appear that each sequence of prizes is equally likely? (Hint
Step16: c)
Count the number of distinct prizes collected for each of the simulated outcomes using the number_collected function. (Hint
Step17: d)
Use the simulation results to estimate the probability the more than $k=3$ boxes are needed to complete a set of $n=3$ prizes. (Hint
Step18: Problem 2.
Use simulation to estimate the probability that more than $k=100$ boxes are need to complete a set of $n=20$ prizes, a situation for which it is extremely difficult to compute the probability analytically.
Step19: Problem 3.
How large of a group of people is needed to have a probability of greater than 0.5 that on every day of the year someone in the group has a birthday? Greater than 0.9? Greater than 0.99? (Assume 365 equally likely birthdays, no multiples, etc.) Before coding, I encourage you to make some guesses for the answers first.
Formulate this scenario as a collector's problem and experimemt with values of $n$ or $k$ until you are satisfied. (You don't have to get any fancier than guess-and-check, but you can if you want.)
Step20: Problem 4.
Now suppose that some prizes are harder to find than others. In particular, suppose that the prizes are labeled 1, 2, 3, 4, 5. Assume that prize 2 is twice as likely as prize 1, prize 3 is three times as likely as prize 1, prize 4 is four times as likely as prize 1, and prize 5 is five times as likely as prize 1.
Estimate the probability that you'll need to buy more than 20 prizes to obtain a complete set. How does this probability compare to the probability in the equally likely situation?
Hint | Python Code:
from symbulate import *
%matplotlib inline
Explanation: Symbulate Lab 1 - Probability Spaces
This Jupyter notebook provides a template for you to fill in. Complete the parts as indicated. To run a cell, hold down SHIFT and hit ENTER.
In this lab you will use the Python package Symbulate. You should have already completed Section 1 of the "Getting Started with Symbulate Tutorial" ADD LINK and read sections 1 through 3 of the Symbulate documentation. A few specific links to the documentation are provided below, but it will probably make more sense if you read the documentation from start to finish. Whenever possible, you should use Symbulate commands, not general Python code.
To use Symbulate, you must first run (SHIFT-ENTER) the following commands.
End of explanation
cards = ['club', 'diamond', 'heart', 'spade'] * 4 # 4 cards of each suit
len(cards)
Explanation: Part I. Introduction to Symbulate, and conditional versus unconditional probability
A deck of 16 cards contains 4 cards in each of four suits ['clubs', 'diamonds', 'hearts', 'spades']. The deck is shuffled and two cards are drawn in sequence. We are interested in the following questions.
What is the probability that the first card drawn is a heart?
What is the probability that the second card drawn is a heart?
If the first card drawn is a heart, what is the probability that the second card drawn is a heart?
Before proceeding, give your best guess of each of these probabilities.
We'll use simulation to obtain approximations to the probabilities in the questions above. First we define the deck of cards (we only care about the suits for this exercise).
End of explanation
P = BoxModel(cards, size=2, replace=False, order_matters=True)
Explanation: Now we define a BoxModel probability space corresponding to drawing two cards (size=2) from the deck at random. We'll assume that the cards are drawn without replacement (replace=False). We also want to keep track of which card was drawn first and which second (order_matters=True).
End of explanation
P.draw()
Explanation: The .draw() method simulates a single outcome from the probability space. Note that each outcome is an ordered pair of cards.
End of explanation
sims = P.sim(10000)
sims
Explanation: Many outcomes can be simulated using .sim(). The following simulates 10000 draws and stores the results in the variable sims.
End of explanation
sims = P.sim(10000)
sims.tabulate()
Explanation: We can summarize the simulation results with .tabulate(). Note that ('heart', 'club') is counted as a separate outcome from ('club', 'heart') because the order matters.
End of explanation
def first_is_heart(x):
return (x[0] == 'heart')
first_is_heart(('heart', 'club'))
first_is_heart(('club', 'heart'))
Explanation: The above table could be used to estimate the probabilities in question. Instead, we will illustrate several other tools available in Symbulate to summarize simulation output.
First, we use a filter function to create a subset of the simulated outcomes for which the first card is a heart. We define a function first_is_heart that takes as an input a pair of values (x) and returns True if the first value in the pair (x[0]) is equal to 'heart', and False otherwise. (Python indexing starts at 0: 0 is the first entry, 1 is the second, and so on.)
End of explanation
sims_first_is_heart = sims.filter(first_is_heart)
sims_first_is_heart.tabulate()
Explanation: Now we filter the simulated outcomes to create the subset of outcomes for which first_is_heart returns True.
End of explanation
len(sims_first_is_heart) / len(sims)
Explanation: Returning to question 1, we can estimate the probability that the first card is a heart by dividing the number of simulated draws for which the first card is a heart divided by the total number of simulated draws (using the length function len to count.)
End of explanation
# Type your Symbulate commands in this cell.
Explanation: The true probability is 4/16 = 0.25. Your simulated probability should be close to 0.25, but there will be some natural variability due to the randomness in the simulation. Very roughly, the margin of error of a probability estimate based on $N$ simulated repetitions is about $1/\sqrt{N}$, so about 0.01 for 10000 repetitions. The interval constructed by adding $\pm 0.01$ to your estimate will likely contain 0.25.
a)
Recall question 2: What is the probability that the second card drawn is a heart? Use an analysis similar to the above — including defining an appropriate function to use with filter — to estimate the probability. (Is your simulated value close to your initial guess?)
Type your commands in the following code cell. Aside from defining a second_is_heart function and using len, you should use Symbulate commands exclusively.
End of explanation
# Type your Symbulate commands in this cell.
Explanation: b)
Many people confuse the probabilities in (2) and (3). The probability in (2) is an unconditional probability: we do not know whether or not the first card is a heart so we need to account for both possibilities. All we know is that each of the 16 cards in the deck is equally likely to be shuffled into the second position, so the probability that the second card is a heart (without knowing what the first card is) is 4/16 = 0.25.
In contrast, the probability in question 3 is a conditional probability: given that the first card drawn is a heart, what is the probability that the second card drawn is a heart? Again, aside from maybe defining a new is_heart function and using len, you should use Symbulate commands exclusively.
End of explanation
n = 10
prizes = list(range(n))
prizes
Explanation: c)
Given that the first card is a heart, there are 15 cards left in the deck, each of which is equally likely to be the second card, of which 3 are hearts. So the conditional probability that the second card is a heart given that the first card is a heart is 3/15 = 0.20. Verify that your simulated value is consistent with the true value.
Now you will do a few calculations by hand.
Compute, by hand, the conditional probability that the second card is a heart given that the first card is NOT a heart.
Construct, by hand, a hypothetical two-way table representing the results of 10000 draws.
Use the hypothetical table to compute the probability that the second card is a heart.
What is the relationship between the probability that the second card is a heart and the two conditional probabilities?
(Nothing to respond here, just make sure you understand the answers.)
d)
How would the answers to the previous questions change if the draws were made with replacement (so that the first card is replaced and the deck reshuffled before the second draw is drawn?) In this case, what can we say about the events "the first card is a heart" and "the second card is a heart"?
Type your response here.
Part II. Collector's problem
Each box of a certain type of cereal contains one of $n$ distinct prizes and you want to obtain a complete set. Suppose
that each box of cereal is equally likely to contain any one of the $n$ prizes, and the particular prize
that appears in one box has no bearing on the prize that appears in another box. You purchase
cereal boxes one box at a time until you
have the complete set of $n$ prizes. What is the probability that you buy more than $k$ boxes? In this problem you will use simulation to estimate this probability for different values of $n$ and $k$.
Here is a little Python code you can use to label the $n$ prizes from 0 to $n-1$. (Remember: Python starts indexing at 0.)
End of explanation
def number_collected(x):
return len(set(x))
# For example
number_collected([2, 1, 2, 0, 2, 2, 0])
Explanation: And here is a function that returns the number of distinct prizes collected among a set of prizes.
End of explanation
n = 3
prizes = list(range(n))
prizes
Explanation: Aside from the above functions, you should use Symbulate commands exclusively for Part II.
Problem 1.
We'll assume that there are 3 prizes, $n=3$, a situation in which exact probabilities can easily be computed by enumerating the possible outcomes.
End of explanation
# Type your Symbulate commands in this cell.
Explanation: a)
Define a probability space for the sequence of prizes obtained after buying $3$ boxes (first box, second box, third box), and simulate a single outcome. (Hint: try BoxModel.)
End of explanation
# Type your Symbulate commands in this cell.
Explanation: b)
Now simulate many outcomes and summarize the results. Does it appear that each sequence of prizes is equally likely? (Hint: try the various Simulation tools like .sim() and .tabulate().)
End of explanation
# Type your Symbulate commands in this cell.
Explanation: c)
Count the number of distinct prizes collected for each of the simulated outcomes using the number_collected function. (Hint: try .apply().)
End of explanation
# Type your Symbulate commands in this cell.
Explanation: d)
Use the simulation results to estimate the probability the more than $k=3$ boxes are needed to complete a set of $n=3$ prizes. (Hint: see this summary of the simulation tools section for a few suggestions.)
End of explanation
# Type your Symbulate commands in this cell.
Explanation: Problem 2.
Use simulation to estimate the probability that more than $k=100$ boxes are need to complete a set of $n=20$ prizes, a situation for which it is extremely difficult to compute the probability analytically.
End of explanation
# Type your relevant code in this cell for 0.5
# Type your relevant code in this cell for 0.9
# Type your relevant code in this cell for 0.99
Explanation: Problem 3.
How large of a group of people is needed to have a probability of greater than 0.5 that on every day of the year someone in the group has a birthday? Greater than 0.9? Greater than 0.99? (Assume 365 equally likely birthdays, no multiples, etc.) Before coding, I encourage you to make some guesses for the answers first.
Formulate this scenario as a collector's problem and experimemt with values of $n$ or $k$ until you are satisfied. (You don't have to get any fancier than guess-and-check, but you can if you want.)
End of explanation
# Type your Symbulate commands in this cell.
Explanation: Problem 4.
Now suppose that some prizes are harder to find than others. In particular, suppose that the prizes are labeled 1, 2, 3, 4, 5. Assume that prize 2 is twice as likely as prize 1, prize 3 is three times as likely as prize 1, prize 4 is four times as likely as prize 1, and prize 5 is five times as likely as prize 1.
Estimate the probability that you'll need to buy more than 20 prizes to obtain a complete set. How does this probability compare to the probability in the equally likely situation?
Hint: define a BoxModel with a dictionary-like input.
End of explanation |
1,176 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generate synthetic training data
The goal for this file is to generate some synthetic data for our sample model to train on.
Step1: Power ups
These are the power ups that are available to users.
Step2: Generate random users
We start by generating users with random values for a few attributes.
Each user will have 2 favorite power-ups, which will be different based on their user profile.
Step3: With the user profiles generated, we will define each user's top 2 favorite power-ups based on their profile, with some randomness.
Step5: Show distribution of user's favorite powerups | Python Code:
import pandas as pd
import numpy as np
import random
Explanation: Generate synthetic training data
The goal for this file is to generate some synthetic data for our sample model to train on.
End of explanation
power_ups = ['time_machine', 'coin_magnet', 'coin_multiplier', 'sparky_armor', 'extra_life', 'head_start', 'parachute', 'nuclear_missle']
len(power_ups)
Explanation: Power ups
These are the power ups that are available to users.
End of explanation
def random_distance_avg():
return int(max(0, np.random.normal(100, 40)))
def random_coins_spent():
return int(max(0, np.random.normal(1000, 600)))
def random_game_day():
return int(max(0, np.random.normal(100, 100)))
def random_geo_country():
return random.choice(['US', 'Canada', 'China', 'Japan', 'Germany', 'India', 'France', 'UK', 'Italy', 'Russia', 'South Korea'])
def random_device_os():
return random.choice(['iOS', 'Android'])
mock_data = ((i, random_distance_avg(), random_coins_spent(), random_game_day(), random_geo_country(), random_device_os()) for i in range(0,100000))
users_df = pd.DataFrame(mock_data, columns = ['id', 'distance_avg', 'coins_spent', 'game_day', 'geo_country', 'device_os'])
users_df
Explanation: Generate random users
We start by generating users with random values for a few attributes.
Each user will have 2 favorite power-ups, which will be different based on their user profile.
End of explanation
def get_random_powerup():
return random.choice(power_ups)
def get_favorite_by_game_day(game_day):
if game_day > 100:
return random.choice(['extra_life', 'sparky_armor'])
return get_random_powerup()
def get_favorite_by_distance_avg(distance):
if distance > 100:
return random.choice(['parachute', 'nuclear_missle'])
return get_random_powerup()
def get_favorite_by_coins_spent(coins_spent):
if coins_spent > 1000:
return random.choice(['coin_magnet', 'coin_multiplier'])
return get_random_powerup()
def get_random_favorite_action(user):
if random.uniform(0, 1) < 0.5:
return get_favorite_by_game_day(user['game_day'])
if random.uniform(0, 1) < 0.5:
return get_favorite_by_distance_avg(user['distance_avg'])
if random.uniform(0, 1) < 0.5:
return get_favorite_by_coins_spent(user['coins_spent'])
return get_random_powerup()
users_df['favorite_powerup_1'] = users_df.apply(get_random_favorite_action, axis=1)
users_df['favorite_powerup_2'] = users_df.apply(get_random_favorite_action, axis=1)
users_df
Explanation: With the user profiles generated, we will define each user's top 2 favorite power-ups based on their profile, with some randomness.
End of explanation
users_df.groupby('favorite_powerup_1').agg('count')['id'].plot(kind='bar')
def get_last_run_end_reason():
return random.choice(['wall', 'laser'])
def get_is_powerup_clicked(row):
if row['last_run_end_reason'] == 'laser' and 'sparky_armor' == row['presented_powerup'] and random.uniform(0, 1) < 0.5:
return True
if row['favorite_powerup_1'] == row['presented_powerup'] and random.uniform(0, 1) < 0.8:
return True
if row['favorite_powerup_2'] == row['presented_powerup'] and random.uniform(0, 1) < 0.6:
return True
return False
def generate_data(n):
game_df = users_df.sample(n=1)
game_df['last_run_end_reason'] = game_df.apply(lambda _ : get_last_run_end_reason(), axis = 1)
game_df['presented_powerup'] = game_df.apply(lambda _ : get_random_powerup(), axis = 1)
game_df['is_powerup_clicked'] = game_df.apply(get_is_powerup_clicked, axis=1)
for i in range(n//10000):
tmp = users_df.sample(n=10000)
tmp['last_run_end_reason'] = tmp.apply(lambda _ : get_last_run_end_reason(), axis = 1)
tmp['presented_powerup'] = tmp.apply(lambda _ : get_random_powerup(), axis = 1)
tmp['is_powerup_clicked'] = tmp.apply(get_is_powerup_clicked, axis=1)
game_df = game_df.append(tmp)
game_df = game_df.drop(['id', 'favorite_powerup_1', 'favorite_powerup_2'], axis=1)
return game_df
training = generate_data(7000000)
validation = generate_data(2000000)
test = generate_data(1000000)
training.to_csv('training.csv', index=False)
validation.to_csv('validation.csv', index=False)
test.to_csv('test.csv', index=False)
training
from google.cloud import storage
def upload_blob(bucket_name, source_file_name, destination_blob_name):
    """Uploads a file to the bucket."""
storage_client = storage.Client()
bucket = storage_client.bucket(bucket_name)
blob = bucket.blob(destination_blob_name)
blob.upload_from_filename(source_file_name)
print(
"File {} uploaded to {}.".format(
source_file_name, destination_blob_name
)
)
upload_blob("iap-optimization-codelab", "./training.csv", "training-data/training.csv")
upload_blob("iap-optimization-codelab", "./validation.csv", "validation-data/validation.csv")
upload_blob("iap-optimization-codelab", "./test.csv", "test-data/test.csv")
Explanation: Show distribution of user's favorite powerups
End of explanation |
1,177 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Impedance Matching
Introduction
The general problem is illustrated by the figure below
Step1: Matching with Lumped Elements
To begin, let's assume that the matching network is lossless and the feeding line characteristic impedance is $Z_0$
Step2: Let's define the Frequency and load Network
Step3: We are searching for a L-C Network corresponding to the first configuration above
Step4: Finding the set of inductance $L$ and the capacitance $C$ which matches the load is an optimization problem. The scipy package provides the necessary optimization function(s) for that
Step6: Single-Stub Matching
Matching can be made with a piece of open-ended or shorted transmission line ( stub ), connected either in parallel ( shunt ) or in series. In the example below, a matching network is realized from a shorted transmission line of length ($\theta_{stub}$) connected in parallel, in association with a series transmission line ($\theta_{line}$). Let's assume a load impedance $Z_L=60 - 80j$ connected to a 50 Ohm transmission line.
<img src="figures/Impedance_matching_stub1.svg">
Let's match this load at 2 GHz
Step7: Optimize the matching network variables theta_delay and theta_stub to match the resulting 1-port network ($|S|=0$) | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import skrf as rf
rf.stylely()
Explanation: Impedance Matching
Introduction
The general problem is illustrated by the figure below: a generator with an internal impedance $Z_S$ delivers power to a passive load $Z_L$ through a 2-port matching network. This problem is commonly known as "the double matching problem". Impedance matching is important for the following reasons:
maximizing the power transfer: maximum power is delivered to the load when the generator and the load are matched to the line and the power loss in the line is minimized
improving signal-to-noise ratio of the system
reducing amplitude and phase errors
reducing reflected power toward generator
<img src="figures/Impedance_matching_general.svg">
As long as the load impedance $Z_L$ has a positive real part, a matching network can always be found. Many choices are available and the examples below describe only a few. The examples are taken from D. Pozar's book "Microwave Engineering", 4th edition.
End of explanation
Z_L = 200 - 100j
Z_0 = 100
f_0_str = '500MHz'
Explanation: Matching with Lumped Elements
To begin, let's assume that the matching network is lossless and the feeding line characteristic impedance is $Z_0$:
<img src="figures/Impedance_matching_lumped1.svg">
The simplest type of matching network is the "L" network, which uses two reactive elements to match an arbitrary load impedance. Two possible configurations exist and are illustrated by the figures below. In either configuration, the reactive elements can be inductive or capacitive, depending on the load impedance.
<img src="figures/Impedance_matching_lumped2.svg">
<img src="figures/Impedance_matching_lumped3.svg">
Let's assume the load is $Z_L = 200 - 100j \Omega$ for a line $Z_0=100\Omega$ at the frequency of 500 MHz.
End of explanation
# frequency band centered on the frequency of interest
frequency = rf.Frequency(start=300, stop=700, npoints=401, unit='MHz')
# transmission line Media
line = rf.DefinedGammaZ0(frequency=frequency, z0=Z_0)
# load Network
load = line.load(rf.zl_2_Gamma0(Z_0, Z_L))
Explanation: Let's define the Frequency and load Network:
End of explanation
def matching_network_LC_1(L, C):
' L and C in nH and pF'
return line.inductor(L*1e-9)**line.shunt_capacitor(C*1e-12)**load
def matching_network_LC_2(L, C):
' L and C in nH and pF'
return line.capacitor(C*1e-12)**line.shunt_inductor(L*1e-9)**load
Explanation: We are searching for an L-C network corresponding to the first configuration above:
<img src="figures/Impedance_matching_lumped4.svg">
End of explanation
from scipy.optimize import minimize
# initial guess values
L0 = 10 # nH
C0 = 1 # pF
x0 = (L0, C0)
# bounds
L_minmax = (1, 100) #nH
C_minmax = (0.1, 10) # pF
# the objective functions minimize the return loss at the target frequency f_0
def optim_fun_1(x, f0=f_0_str):
_ntw = matching_network_LC_1(*x)
return np.abs(_ntw[f0].s).ravel()
def optim_fun_2(x, f0=f_0_str):
_ntw = matching_network_LC_2(*x)
return np.abs(_ntw[f0].s).ravel()
res1 = minimize(optim_fun_1, x0, bounds=(L_minmax, C_minmax))
print(f'Optimum found for LC network 1: L={res1.x[0]} nH and C={res1.x[1]} pF')
res2 = minimize(optim_fun_2, x0, bounds=(L_minmax, C_minmax))
print(f'Optimum found for LC network 2: L={res2.x[0]} nH and C={res2.x[1]} pF')
ntw1 = matching_network_LC_1(*res1.x)
ntw2 = matching_network_LC_2(*res2.x)
ntw1.plot_s_mag(lw=2, label='LC network 1')
ntw2.plot_s_mag(lw=2, label='LC network 2')
plt.ylim(bottom=0)
Explanation: Finding the set of inductance $L$ and the capacitance $C$ which matches the load is an optimization problem. The scipy package provides the necessary optimization function(s) for that:
End of explanation
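As a quick sanity check (a sketch that assumes the ntw1, ntw2 and f_0_str objects defined above), the input impedance of each matched 1-port should come out close to Z_0 = 100 Ohm at 500 MHz:
# verification sketch -- reuses ntw1/ntw2/f_0_str from the cells above
print(ntw1[f_0_str].z.squeeze())  # input impedance of matched network 1 at 500 MHz
print(ntw2[f_0_str].z.squeeze())  # input impedance of matched network 2 at 500 MHz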
Z_L = 60 - 80j
Z_0 = 50
f_0_str = '2GHz'
# Frequency, wavenumber and transmission line media
freq = rf.Frequency(start=1, stop=3, npoints=301, unit='GHz')
beta = freq.w/rf.c
line = rf.DefinedGammaZ0(freq, gamma=1j*beta, z0=Z_0)
def resulting_network(theta_delay, theta_stub):
"""Return a loaded single-stub matching network.
NB: theta_delay and theta_stub lengths are in deg."""
delay_load = line.delay_load(rf.zl_2_Gamma0(Z_0, Z_L), theta_delay)
shunted_stub = line.shunt_delay_short(theta_stub)
return shunted_stub ** delay_load
Explanation: Single-Stub Matching
Matching can be made with a piece of open-ended or shorted transmission line (stub), connected either in parallel (shunt) or in series. In the example below, a matching network is realized from a shorted transmission line of length ($\theta_{stub}$) connected in parallel, in association with a series transmission line ($\theta_{line}$). Let's assume a load impedance $Z_L=60 - 80j$ connected to a 50 Ohm transmission line.
<img src="figures/Impedance_matching_stub1.svg">
Let's match this load at 2 GHz:
End of explanation
from scipy.optimize import minimize
def optim_fun(x):
return resulting_network(*x)[f_0_str].s_mag.ravel()
x0 = (50, 50)
bnd = (0, 180)
res = minimize(optim_fun, x0, bounds=(bnd, bnd))
print(f'Optimum found for: theta_delay={res.x[0]:.1f} deg and theta_stub={res.x[1]:.1f} deg')
# Optimized network at f0
ntw = resulting_network(*res.x)
ntw.plot_s_db(lw=2)
Explanation: Optimize the matching network variables theta_delay and theta_stub to match the resulting 1-port network ($|S|=0$)
End of explanation |
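As a follow-up (a sketch that assumes the optimized ntw object from the cell above), the depth of the match at the design frequency and the Smith-chart trajectory can be inspected directly:
print(ntw['2GHz'].s_db)  # residual reflection at 2 GHz, in dB; a deep match gives a large negative value
ntw.plot_s_smith()       # the locus should pass close to the centre of the chart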
1,178 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Atmospheres & Passbands
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: And we'll add a single light curve dataset to expose all the passband-dependent options.
Step3: Relevant Parameters
An 'atm' parameter exists for each of the components in the system (for each set of compute options) and defines which atmosphere table should be used.
By default, these are set to 'ck2004' (Castelli-Kurucz) but can be set to 'blackbody' as well as 'extern_atmx' and 'extern_planckint' (which are included primarily for direct comparison with PHOEBE legacy).
Step4: Note that if you change the value of 'atm' to anything other than 'ck2004', the corresponding 'ld_func' will need to be changed to something other than 'interp' (warnings and errors will be raised to remind you of this).
Step5: A 'passband' parameter exists for each passband-dependent-dataset (i.e. not meshes or orbits, but light curves and radial velocities). This parameter dictates which passband should be used for the computation of all intensities.
Step6: The available choices will include both locally installed passbands as well as passbands currently available from the online PHOEBE repository. If you choose an online-passband, it will be downloaded and installed locally as soon as required by b.run_compute.
Step7: To see your current locally-installed passbands, call phoebe.list_installed_passbands().
Step8: These installed passbands can be in any of a number of directories, which can be accessed via phoebe.list_passband_directories().
The first entry is the global location - this is where passbands can be stored by a server-admin to be available to all PHOEBE-users on that machine.
The second entry is the local location - this is where individual users can store passbands and where PHOEBE will download and install passbands (by default).
Step9: To see the passbands available from the online repository, call phoebe.list_online_passbands().
Step10: Lastly, to manually download and install one of these online passbands, you can do so explicitly via phoebe.download_passband.
Note that this isn't necessary unless you want to explicitly download passbands before needed by run_compute (perhaps if you're expecting to have unreliable network connection in the future and want to ensure you have all needed passbands). | Python Code:
!pip install -I "phoebe>=2.1,<2.2"
Explanation: Atmospheres & Passbands
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')
Explanation: And we'll add a single light curve dataset to expose all the passband-dependent options.
End of explanation
b['atm']
b['atm@primary']
b['atm@primary'].description
b['atm@primary'].choices
Explanation: Relevant Parameters
An 'atm' parameter exists for each of the components in the system (for each set of compute options) and defines which atmosphere table should be used.
By default, these are set to 'ck2004' (Castelli-Kurucz) but can be set to 'blackbody' as well as 'extern_atmx' and 'extern_planckint' (which are included primarily for direct comparison with PHOEBE legacy).
End of explanation
b['ld_func@primary']
b['atm@primary'] = 'blackbody'
print(b.run_checks())
b['ld_func@primary'] = 'logarithmic'
print(b.run_checks())
Explanation: Note that if you change the value of 'atm' to anything other than 'ck2004', the corresponding 'ld_func' will need to be changed to something other than 'interp' (warnings and errors will be raised to remind you of this).
End of explanation
b['passband']
Explanation: A 'passband' parameter exists for each passband-dependent-dataset (i.e. not meshes or orbits, but light curves and radial velocities). This parameter dictates which passband should be used for the computation of all intensities.
End of explanation
print(b['passband'].choices)
Explanation: The available choices will include both locally installed passbands as well as passbands currently available from the online PHOEBE repository. If you choose an online-passband, it will be downloaded and installed locally as soon as required by b.run_compute.
End of explanation
print(phoebe.list_installed_passbands())
Explanation: To see your current locally-installed passbands, call phoebe.list_installed_passbands().
End of explanation
print(phoebe.list_passband_directories())
Explanation: These installed passbands can be in any of a number of directories, which can be accessed via phoebe.list_passband_directories().
The first entry is the global location - this is where passbands can be stored by a server-admin to be available to all PHOEBE-users on that machine.
The second entry is the local location - this is where individual users can store passbands and where PHOEBE will download and install passbands (by default).
End of explanation
print(phoebe.list_online_passbands())
Explanation: To see the passbands available from the online repository, call phoebe.list_online_passbands().
End of explanation
phoebe.download_passband('Cousins:R')
print(phoebe.list_installed_passbands())
Explanation: Lastly, to manually download and install one of these online passbands, you can do so explicitly via phoebe.download_passband.
Note that this isn't necessary unless you want to explicitly download passbands before needed by run_compute (perhaps if you're expecting to have unreliable network connection in the future and want to ensure you have all needed passbands).
End of explanation |
1,179 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'sandbox-3', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: MOHC
Source ID: SANDBOX-3
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:15
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
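For example (the name and address below are purely illustrative placeholders, not real author details):
# DOC.set_author("Jane Doe", "jane.doe@example.org")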
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
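As an illustration only (hypothetical placeholder tracer names, not a description of the MOHC model; this assumes a 1.N property is filled in with one DOC.set_value call per list entry):
# DOC.set_value("dissolved inorganic carbon")
# DOC.set_value("total alkalinity")
# DOC.set_value("dissolved oxygen")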
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the transport scheme if different from that of the ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from an explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic levels are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particules
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
1,180 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: One nice feature of ipython notebooks is it's easy to make small changes to code and
then re-execute quickly, to see how things change. For example, printing the first 5 lines
of the labels dataframe (which is the default) isn't really ideal here, since there's a label
("functional needs repair") which doesn't appear in the first five lines. Type 20 in the
parentheses labels_df.head(), so it now reads labels_df.head(20), and press shift-enter to
rerun the code. See the difference?
Now take a quick look at the features, again by calling .head() (set up for you in the code box
below, or add your own code to the code box above). You can print as many or as few
rows as you like. Take a quick look at the data--approximately how many features are there?
Are they all numeric, or will you have to do work to transform non-numeric features into
numbers?
Step2: Transforming string labels into integers
The machine learning algorithms downstream are not going to handle it well if the class labels
used for training are strings; instead, we'll want to use integers. The mapping that we'll use
is that "non functional" will be transformed to 0, "functional needs repair" will be 1, and
"functional" becomes 2.
There are a number of ways to do this; the framework below uses applymap() in pandas.
Here's
the documentation for applymap(); in the code below, you should fill in the function body for
label_map(y) so that if y is "functional", label_map returns 2; if y is "functional needs
repair" then it should return 1, and "non functional" is 0.
There's a print statement there to help you confirm that the label transformation is working
properly.
As an aside, you could also use apply() here if you like. The difference between apply()
and applymap() is that applymap() operates on a whole dataframe while apply() operates on a series
(or you can think of it as operating on one column of your dataframe). Since labels_df only has
one column (aside from the index column), either one will work here.
Step3: Transforming string features into integers
Now that the labels are ready, we'll turn our attention to the features. Many of the features
are categorical, where a feature can take on one of a few discrete values, which are not ordered.
Fill in the function body of transform_feature( df, column ) below so that it takes our features_df and
the name of a column in that dataframe, and returns the same dataframe but with the indicated
feature encoded with integers rather than strings.
We've provided code to wrap your transformer function in a loop iterating through all the columns that should
be transformed.
Last, add a line of code at the bottom of the block below that removes the date_recorded column from features_df. Time-series information like dates and times need special treatment, which we won't be going into today.
Step4: Ok, a couple last steps to get everything ready for sklearn. The features and labels are taken out of their dataframes and put into a numpy.ndarray and list, respectively.
Step5: Predicting well failures with logistic regression
The cheapest and easiest way to train on one portion of your dataset and test on another, and to get a measure of model quality at the same time, is to use sklearn.cross_validation.cross_val_score(). This splits your data into 3 equal portions, trains on two of them, and tests on the third. This process repeats 3 times. That's why 3 numbers get printed in the code block below.
You don't have to add anything to the code block, it's ready to go already. However, use it for reference in the next part of the tutorial, where you will be looking at other sklearn algorithms.
Step6: Comparing logistic regression to tree-based methods
We have a baseline logistic regression model for well failures. Let's compare to a couple of other classifiers, a decision tree classifier and a random forest classifier, to see which one seems to do the best.
Code this up on your own. You can use the code in the box above as a kind of template, and just drop in the new classifiers. The sklearn documentation might also be helpful
Step7: Congratulations! You have a working data science setup, in which you have
Step8: Now we'll take the to_transform list that you populated above with categorical variables, and use that to loop through columns that will be one-hot encoded.
One note before you code that up
Step9: Now that the features are a little fixed up, I'd invite you to rerun the models, and see if the cross_val_score goes up as a result. It is also a great chance to take some of the theory discussion from the workshop and play around with the parameters of your models, and see if you can increase their scores that way. There's a blank code box below where you can play around.
Step10: End-to-end workflows using Pipeline and GridSearchCV
So far we have made a nice workflow using a few ideas assembled in a script-like workflow. A few spots remain where we can tighten things up though
Step11: Pipeline
After selecting the 100 best features, the natural next step would be to run our random forest again to see if it does a little better with fewer features. So we would have SelectKBest doing selection, with the output of that process going straight into a classifier. A Pipeline packages the transformation step of SelectKBest with the estimation step of RandomForestClassifier into a coherent workflow.
Why might you want to use Pipeline instead of keeping the steps separate?
makes code more readable
don't have to worry about keeping track data during intermediate steps, for example between transforming and estimating
makes it trivial to move ordering of the pipeline pieces, or to swap pieces in and out
Allows you to do GridSearchCV on your workflow
This last point is, in my opinion, the most important. We will get to it very soon, but first let's get a pipeline up and running that does SelectKBest followed by RandomForestClassifier.
In the code box below, I've also set up a slightly better training/testing structure, where I am explicitly splitting the data into training and testing sets which we'll use below. The training/testing split before was handled automatically in cross_val_score, but we'll be using a different evaluation metric from here forward, the classification report, which requires us to handle the train/test split ourselves. | Python Code:
import pandas as pd
features_df = pd.DataFrame.from_csv("well_data.csv")
labels_df = pd.DataFrame.from_csv("well_labels.csv")
print( labels_df.head() )
Explanation: <style>
@font-face {
font-family: CharisSILW;
src: url(files/CharisSIL-R.woff);
}
@font-face {
font-family: CharisSILW;
font-style: italic;
src: url(files/CharisSIL-I.woff);
}
@font-face {
font-family: CharisSILW;
font-weight: bold;
src: url(files/CharisSIL-B.woff);
}
@font-face {
font-family: CharisSILW;
font-weight: bold;
font-style: italic;
src: url(files/CharisSIL-BI.woff);
}
div.cell, div.text_cell_render{
max-width:1000px;
}
h1 {
text-align:center;
font-family: Charis SIL, CharisSILW, serif;
}
.rendered_html {
font-size: 130%;
line-height: 1.3;
}
.rendered_html li {
line-height: 2;
}
.rendered_html h1{
line-height: 1.3;
}
.rendered_html h2{
line-height: 1.2;
}
.rendered_html h3{
line-height: 1.0;
}
.text_cell_render {
font-family: Charis SIL, CharisSILW, serif;
line-height: 145%;
}
li li {
font-size: 85%;
}
</style>
End-to-End Data Science in Python
<img src="scikit-learn.png" />
Introduction
This is the workbook for the "End-to-End Data Analysis in Python" workshop
at the Open Data Science Conference 2015, in beautiful San Francisco.
This notebook contains starter code only; the goal is that we will fill in the
gaps together as we progress through the workshop. If, however, you're doing this
asynchronously or you get stuck, you can reference the solutions workbook.
The objective is to complete the "Pump it Up: Mining the Water Table" challenge
on drivendata.org; the objective here is to predict
African wells that are non-functional or in need of repair. Per the rules of the
competition, you should register for an account with drivendata.org, at which point you
can download the training set values and labels. We will be working with those datasets
during this workshop. You should download those files to the directory in which this
notebook lives, and name them well_data.csv and well_labels.csv (to be consistent
with our nomenclature). You are also encouraged to continue developing your solution
after this workshop, and/or to enter your solution in the competition on the drivendata
website!
### Code requirements
Here's the environment you'll need to work with this code base:
python 3 (2.x may work with minor changes, but no guarantees)
pandas
scikit-learn
numpy
First Draft of an Analysis
End of explanation
print( features_df.head() )
Explanation: One nice feature of ipython notebooks is it's easy to make small changes to code and
then re-execute quickly, to see how things change. For example, printing the first 5 lines
of the labels dataframe (which is the default) isn't really ideal here, since there's a label
("functional needs repair") which doesn't appear in the first five lines. Type 20 in the
parentheses labels_df.head(), so it now reads labels_df.head(20), and press shift-enter to
rerun the code. See the difference?
Now take a quick look at the features, again by calling .head() (set up for you in the code box
below, or add your own code to the code box above). You can print as many or as few
rows as you like. Take a quick look at the data--approximately how many features are there?
Are they all numeric, or will you have to do work to transform non-numeric features into
numbers?
End of explanation
def label_map(y):
### your code goes here
labels_df = labels_df.applymap(label_map)
print(labels_df.head())
Explanation: Transforming string labels into integers
The machine learning algorithms downstream are not going to handle it well if the class labels
used for training are strings; instead, we'll want to use integers. The mapping that we'll use
is that "non functional" will be transformed to 0, "functional needs repair" will be 1, and
"functional" becomes 2.
There are a number of ways to do this; the framework below uses applymap() in pandas.
Here's
the documentation for applymap(); in the code below, you should fill in the function body for
label_map(y) so that if y is "functional", label_map returns 2; if y is "functional needs
repair" then it should return 1, and "non functional" is 0.
There's a print statement there to help you confirm that the label transformation is working
properly.
As an aside, you could also use apply() here if you like. The difference between apply()
and applymap() is that applymap() operates on a whole dataframe while apply() operates on a series
(or you can think of it as operating on one column of your dataframe). Since labels_df only has
one column (aside from the index column), either one will work here.
End of explanation
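If you want to check your work, one minimal sketch consistent with the mapping described above (the stub in the code box is still yours to fill in) is:
def label_map(y):
    # 2 = functional, 1 = functional needs repair, 0 = non functional
    if y == "functional":
        return 2
    elif y == "functional needs repair":
        return 1
    else:
        return 0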
def transform_feature( df, column_name ):
### your code goes here
return df
### list of column names indicating which columns to transform;
### this is just a start! Use some of the print( labels_df.head() )
### output upstream to help you decide which columns get the
### transformation
names_of_columns_to_transform = ["funder", "installer", "wpt_name"]
for column in names_of_columns_to_transform:
features_df = transform_feature( features_df, column )
### remove the "date_recorded" column--we're not going to make use
### of time-series data today
Explanation: Transforming string features into integers
Now that the labels are ready, we'll turn our attention to the features. Many of the features
are categorical, where a feature can take on one of a few discrete values, which are not ordered.
Fill in the function body of transform_feature( df, column ) below so that it takes our features_df and
the name of a column in that dataframe, and returns the same dataframe but with the indicated
feature encoded with integers rather than strings.
We've provided code to wrap your transformer function in a loop iterating through all the columns that should
be transformed.
Last, add a line of code at the bottom of the block below that removes the date_recorded column from features_df. Time-series information like dates and times need special treatment, which we won't be going into today.
End of explanation
X = features_df.as_matrix()
y = labels_df["status_group"].tolist()
Explanation: Ok, a couple last steps to get everything ready for sklearn. The features and labels are taken out of their dataframes and put into a numpy.ndarray and list, respectively.
End of explanation
import sklearn.linear_model
import sklearn.cross_validation
clf = sklearn.linear_model.LogisticRegression()
score = sklearn.cross_validation.cross_val_score( clf, X, y )
print( score )
Explanation: Predicting well failures with logistic regression
The cheapest and easiest way to train on one portion of your dataset and test on another, and to get a measure of model quality at the same time, is to use sklearn.cross_validation.cross_val_score(). This splits your data into 3 equal portions, trains on two of them, and tests on the third. This process repeats 3 times. That's why 3 numbers get printed in the code block below.
You don't have to add anything to the code block, it's ready to go already. However, use it for reference in the next part of the tutorial, where you will be looking at other sklearn algorithms.
End of explanation
### your code here
Explanation: Comparing logistic regression to tree-based methods
We have a baseline logistic regression model for well failures. Let's compare to a couple of other classifiers, a decision tree classifier and a random forest classifier, to see which one seems to do the best.
Code this up on your own. You can use the code in the box above as a kind of template, and just drop in the new classifiers. The sklearn documentation might also be helpful:
* Decision tree classifier
* Random forest classifier
We will talk about all three of these models more in the next part of the tutorial.
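If you want something to compare against, a sketch that mirrors the logistic regression block above might look like this:
import sklearn.tree
import sklearn.ensemble

tree_clf = sklearn.tree.DecisionTreeClassifier()
print( sklearn.cross_validation.cross_val_score( tree_clf, X, y ) )

forest_clf = sklearn.ensemble.RandomForestClassifier()
print( sklearn.cross_validation.cross_val_score( forest_clf, X, y ) )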
End of explanation
def hot_encoder(df, column_name):
### your code goes here
return df
Explanation: Congratulations! You have a working data science setup, in which you have:
* read in data
* transformed features and labels to make the data amenable to machine learning
* made a train/test split (this was done implicitly when you called cross_val_score)
* evaluated several models for identifying wells that are failed or in danger of failing
Paying down technical debt and tuning the models
We got things running really fast, which is great, but at the cost of being a little quick-and-dirty about some details. First, we got the features encoded as integers, but they really should be dummy variables. Second, it's worth going through the models a little more thoughtfully, to try to understand their performance and if there's any more juice we can get out of them.
One-hot encoding to make dummy variables
A problem with representing categorical variables as integers is that integers are ordered, while categories are not. The standard way to deal with this is to use dummy variables; one-hot encoding is a very common way of dummying. Each possible category becomes a new boolean feature. For example, if our dataframe looked like this:
index country
1 "United States"
2 "Mexico"
3 "Mexico"
4 "Canada"
5 "United States"
6 "Canada"
then after dummying it will look something like this:
index country_UnitedStates country_Mexico country_Canada
1 1 0 0
2 0 1 0
3 0 1 0
4 0 0 1
5 1 0 0
6 0 0 1
Hopefully the origin of the name is clear--each variable is now encoded over several boolean columns, one of which is true (hot) and the others are false.
Now we'll write a hot-encoder function that takes the data frame and the title of a column, and returns the same data frame but one-hot encoding performed on the indicated feature.
Protip: sklearn has a one-hot encoder function available that will be your friend here.
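One possible sketch uses pandas' get_dummies (sklearn's OneHotEncoder is the other common route); treat it as one way to do it rather than the reference answer:
import pandas as pd

def hot_encoder(df, column_name):
    dummies = pd.get_dummies( df[column_name], prefix=column_name )
    df = df.drop( column_name, axis=1 )
    df = df.join( dummies )
    return df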
End of explanation
features_df.drop( "funder", axis=1, inplace=True )
features_df.drop( "installer", axis=1, inplace=True )
features_df.drop( "wpt_name", axis=1, inplace=True )
features_df.drop( "subvillage", axis=1, inplace=True )
features_df.drop( "ward", axis=1, inplace=True )
for column in names_of_columns_to_transform:
    if column in features_df.columns:  # skip the high-cardinality columns dropped above
        features_df = hot_encoder( features_df, column )
Explanation: Now we'll take the names_of_columns_to_transform list that you populated above with categorical variables, and use that to loop through columns that will be one-hot encoded.
One note before you code that up: one-hot encoding comes with the baggage that it makes your dataset bigger--sometimes a lot bigger. In the countries example above, one column that encoded the country has now been expanded out to three columns. You can imagine that this can sometimes get really, really big (imagine a column encoding all the counties in the United States, for example).
There are some columns in this example that will really blow up the dataset, so we'll remove them before proceeding with the one-hot encoding.
End of explanation
### your code here
Explanation: Now that the features are a little fixed up, I'd invite you to rerun the models, and see if the cross_val_score goes up as a result. It is also a great chance to take some of the theory discussion from the workshop and play around with the parameters of your models, and see if you can increase their scores that way. There's a blank code box below where you can play around.
End of explanation
import sklearn.feature_selection
### your code goes here
Explanation: End-to-end workflows using Pipeline and GridSearchCV
So far we have made a nice workflow using a few ideas assembled in a script-like workflow. A few spots remain where we can tighten things up though:
the best model, the random forest, has a lot of parameters that we'd have to work through if we really wanted to tune it
after dummying, we have lots of features, probably only a subset of which are really offering any discriminatory power (this is a version of the bias-variance tradeoff)
maybe there's a way to make the code more streamlined (hint: there is)
We will solve all these with two related and lovely tools in sklearn: Pipeline and GridSearchCV.
Pipeline in sklearn is a tool for chaining together multiple pieces of a workflow into a single coherent analysis. In our example, we will chain together a tool for feature selection, which addresses the second point, and then feeds our optimized feature set into the random forest model, all in a few lines of code (which addresses the third point).
To get to the first point, about finding the best parameters--that's where the magic of GridSearchCV comes in. But first we need to get the feature selector and pipeline up and running, so let's do that now.
In sklearn.feature_selection there is a useful tool, SelectKBest [link](http://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html), that you should use. By default, this will select the 10 best features; that seems like it might be too few features to do well on this problem, so change the number of features to 100.
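A minimal sketch, assuming the default scoring function is acceptable:
select = sklearn.feature_selection.SelectKBest(k=100)
X_new = select.fit_transform(X, y)
print( X_new.shape )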
End of explanation
import sklearn.pipeline
select = ### fill this in--SelectKBest, k=100
clf = ### fill this in--RandomForestClassifier
pipeline = ### fill this in, using sklearn docs as a guide
X_train, X_test, y_train, y_test = sklearn.cross_validation.train_test_split(X, y, test_size=0.33, random_state=42)
### fit your pipeline on X_train and y_train
### call pipeline.predict() on your X_test data to make a set of test predictions
### test your predictions using sklearn.metrics.classification_report()
### and print the report
Explanation: Pipeline
After selecting the 100 best features, the natural next step would be to run our random forest again to see if it does a little better with fewer features. So we would have SelectKBest doing selection, with the output of that process going straight into a classifier. A Pipeline packages the transformation step of SelectKBest with the estimation step of RandomForestClassifier into a coherent workflow.
Why might you want to use Pipeline instead of keeping the steps separate?
makes code more readable
don't have to worry about keeping track data during intermediate steps, for example between transforming and estimating
makes it trivial to move ordering of the pipeline pieces, or to swap pieces in and out
Allows you to do GridSearchCV on your workflow
This last point is, in my opinion, the most important. We will get to it very soon, but first let's get a pipeline up and running that does SelectKBest followed by RandomForestClassifier.
In the code box below, I've also set up a slightly better training/testing structure, where I am explicitly splitting the data into training and testing sets which we'll use below. The training/testing split before was handled automatically in cross_val_score, but we'll be using a different evaluation metric from here forward, the classification report, which requires us to handle the train/test split ourselves.
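In case it helps, here is one way the blanks could be filled in--a sketch rather than the only correct answer (note that classification_report lives in sklearn.metrics):
import sklearn.ensemble
import sklearn.metrics

select = sklearn.feature_selection.SelectKBest(k=100)
clf = sklearn.ensemble.RandomForestClassifier()
pipeline = sklearn.pipeline.Pipeline([ ("select", select), ("forest", clf) ])

pipeline.fit( X_train, y_train )
predictions = pipeline.predict( X_test )
print( sklearn.metrics.classification_report( y_test, predictions ) )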
End of explanation |
1,181 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mengukur Downside Risk dengan VaR dan CVaR
Seperti kita bahas dalam studi sebelumnya, distribusi dari keuntungan biasanya bukanlah normal, sehingga pemakaian standar deviasi kurang tepat karena standar deviasi mengasumsikan distibusi yang simetris dan kurtosis yang tipis. Sementara dalam manajemen portfolio, investor biasanya lebih menguatirkan resiko untuk mengalami kerugian yang besar, atau probabilitas dari return negatif yang besar, daripada volatilitas dari laba (tentu kita tidak keberatan kalau suatu saat labanya melonjak tinggi).
Atas pemikiran ini maka diciptakan perhitungan-perhitungan untuk lebih spesifik mengukur resiko atas kerugian (downside risk), yang akan kita bahas dalam studi ini.
Semi-Deviation
Semi-Deviation adalah ukuran volatilitas dari return yang di bawah rata-rata. Semi-deviation dihitung dengan menghitung standard deviasi dari return yang di bawah rata-ratanya
Step1: Berikut untuk menghitung semi-deviation. Catatan
Step2: Value at Risk (VaR)
Value at Risk (VaR) mengukur maksimum kerugian yang bisa dialami dari suatu investasi setelah kasus kerugian ekstrim dihilangkan. Jadi 95% Value at Risk berarti maksimum kerugian yang bisa dialami setelah 5% kerugian yang paling ekstrim dihilangkan.
Angka 95% ini, disebut confidence level, dan bisa diganti dengan yang lain sesuai kebutuhan, misalnya 99% atau 99.9%.
VaR bisa dipakai oleh manajer investasi untuk menyisihkan cadangan modal yang dipakai untuk menutup kemungkinan kerugian terburuk yang bisa dialami.
Untuk menghitung VaR, pertama tentukan batas confidence level-nya, misalkan 95%. Karena batasnya 95%, maka buang 5% data return yang terendah. Lalu ambil return terendah dari data return yang tersisa, lalu dijadikan positif. Itulah nilai VaR-nya.
Untuk menghitung 95% VaR dari data di atas, pertama kita hilangkan 5% data return terburuk, atau dalam hal ini berarti kita hilangkan lima sampel. Nilai VaR-nya adalah nilai terburuk yang tersisa (-0.16138978) lalu kita positifkan (menjadi 0.16138978).
Artinya, dalam 95% kemungkinan, kerugian terburuk investasi kita adalah minus 0.16138978 atau minus 16%.
Cara lain untuk menghitung VaR adalah dengan memanggil fungsi np.percentile() seperti di bawah ini.
Step3: Nilai di atas sedikit berbeda dengan perhitungan manual kita karena np.percentile() mengandung interpolasi. Dan jangan lupa bahwa nilai VaR adalah positif (jadi nilai di atas harus dipositifkan).
Conditional Value at Risk (CVaR)
Dalam kondisi lain, mungkin kita ingin melihat karakteristik dari kerugian terburuk di luar VaR. Seperti contoh di atas, kita sudah hitung bahwa VaR-nya adalah 16%. Bagaimana kalau kasus terburuk (yaitu kasus yg 5% itu) benar-benar terjadi? Berapa kerugiannya? Jangan-jangan kerugiannya sangat ekstrim yang membuat kita bangkrut!
Karakteristik inilah yang diukur oleh Conditional Value at Risk (CVaR).
Perhitungan CVaR sederhana saja. Cukup kita hitung nilai rata-rata dari return terburuk yang kita buang ketika menghitung VaR tadi, lalu kita positifkan.
Dengan contoh di atas, CVaR berarti nilai absolut dari rata-rata 5 kerugian terburuk, yang bisa kita hitung seperti ini
Step4: Cara Perhitungan VaR yang Lain
Metoda penghitungan VaR yang kita pakai di atas memakai metoda historis. Keuntungan dari perhitungan secara historis adalah perhitungan ini tidak membutuhkan asumsi apapun, sedangkan kerugiannya adalah sensitivitas terhadap data (kalau datanya berubah, mungkin nilai VaR akan berubah secara signifikan).
Dengan kata lain, metoda itu mempunyai resiko akibat model (model risk) yang kecil, namun resiko akibat sampelnya (sample risk) besar.
Ada beberapa cara lain untuk menghitung VaR.
Metoda Gaussian
Dengan mengasumsikan bahwa distribusi return-nya adalah normal, maka kita dapat menghitung berapa nilai VaR untuk suatu probabilitas atau confidence level.
Misalkan kita mau menghitung 95% VaR, maka $ \alpha $ adalah 0.05 (=100% - 95%), seperti dalam gambar berikut.
Dari tabel z-table kita bisa tahu bahwa nilai $ z $ yang sesuai adalah -1.645.
Maka nilai VaR dapat dihitung dengan
Step5: Seperti kita lihat, nilainya dekat dengan perhitungan-perhitungan sebelumnya. Namun harap diingat, bahwa untuk contoh ini, nilai returns-nya berasal dari angka random dengan distribusi normal. Kalau distribusi nilai returns nya bukan normal, maka hasil perhitungan bisa berbeda jauh. Dan memang inilah kelemahan dari perhitungan dengan metoda ini, yaitu mengasumsikan distribusi normal, padahal kita telah belajar sebelumnya bahwa kemungkinan besar bukan.
Dengan kata lain, metoda Gaussian mempunyai model risk yang besar, dan sample risk yang kecil.
Di sisi lain, perhitungan ini hanya membutuhkan nilai mean dan standard deviasi, jadi sangat simpel.
Secara umum, cara ini kurang tepat dipakai untuk menghitung VaR, karena VaR menghitung nilai pada ekor distribusi, sedangkan justru bagian ekor dari distribusi return yang karakteristiknya berbeda dengan distribusi normal (tebal vs tipis).
Metoda Cornish-Fisher
Metoda Cornish-Fisher juga mengasumsikan bahwa return mengikuti suatu distribusi, namun distribusinya bisa disesuaikan untuk nilai skew dan kurtosis tertentu. Penyesuaian ini disebut Cornish-Fisher expansion.
Cornish-Fisher pada dasarnya adalah nilai $ z_\alpha $ yang disesuaikan, dimana penyesuaiannya melibatkan skewness $ S $ dan kurtosis $ K $. Jika skewness nol dan kurtosis tiga, maka nilai $ \widetilde{z_\alpha} $ akan sama dengan nilai $ z_\alpha $.
Berikut adalah perhitungan VaR dengan koreksi Cornish-Fisher dalam Python. | Python Code:
import numpy as np
import pandas as pd
np.random.seed(0)
returns = pd.Series(np.random.normal(0, 0.10, 100)).sort_values()
returns.values
Explanation: Measuring Downside Risk with VaR and CVaR
As discussed in the previous study, return distributions are usually not normal, so the standard deviation is not an ideal risk measure because it assumes a symmetric distribution with thin tails. In portfolio management, investors are usually more worried about the risk of suffering large losses, i.e. the probability of large negative returns, than about the volatility of profits (we certainly do not mind if profits occasionally spike upward).
For this reason, measures have been developed that specifically quantify the risk of losses (downside risk), which we discuss in this study.
Semi-Deviation
Semi-deviation is a measure of the volatility of returns that fall below the mean. It is computed as the standard deviation of the returns that are below their average:
$$ \sigma_{semi} = \sqrt{ \frac{1}{N} \sum_{R_t < \overline{R}} (R_t - \overline{R})^2 }$$
(note: N is the number of return samples that are below the mean)
In practice, for a set of returns, we compute the mean, discard the samples that are at or above the mean, and then compute the standard deviation of the remaining samples.
Semi-deviation measures the volatility of below-average returns, but it does not tell the investor how large the losses can get. For that we can use the other measures discussed below.
To demonstrate the semi-deviation calculation, let us create some random data as follows.
End of explanation
semi_dev = returns[ returns < 0 ].std(ddof=0)
semi_dev
Explanation: The following computes the semi-deviation. Note: the semi-deviation is sometimes computed from the returns below zero rather than below the mean. That is the variant we use here.
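For comparison, a minimal sketch of the below-the-mean variant given by the formula above:
below_mean = returns[ returns < returns.mean() ]
semi_dev_mean = below_mean.std(ddof=0)
semi_dev_mean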
End of explanation
np.percentile(returns, 5)
Explanation: Value at Risk (VaR)
Value at Risk (VaR) measures the maximum loss an investment can suffer once the most extreme losses have been removed. A 95% Value at Risk is therefore the maximum loss that can occur after the worst 5% of losses have been discarded.
The 95% figure is called the confidence level and can be replaced by another value as needed, for example 99% or 99.9%.
VaR can be used by investment managers to set aside a capital reserve that covers the worst losses they might plausibly face.
To compute VaR, first choose the confidence level, say 95%. Because the level is 95%, discard the worst 5% of the return data. Then take the lowest return among the remaining data and make it positive. That is the VaR.
To compute the 95% VaR of the data above, we first remove the worst 5% of returns, which here means removing five samples. The VaR is the worst remaining value (-0.16138978) made positive (0.16138978).
In other words, with 95% confidence, the worst loss on our investment is minus 0.16138978, or roughly minus 16%.
Another way to compute VaR is to call the np.percentile() function, as shown below.
End of explanation
cvar = np.abs(returns[:5].mean())
cvar
Explanation: The value above differs slightly from our manual calculation because np.percentile() interpolates. Also remember that VaR is reported as a positive number (so the value above must be made positive).
Conditional Value at Risk (CVaR)
In other situations we may want to characterise the losses beyond the VaR. In the example above we computed a VaR of 16%. What if the worst case (the 5% tail) actually happens? How large is the loss then? It might be so extreme that it bankrupts us!
This is exactly what Conditional Value at Risk (CVaR) measures.
Computing CVaR is simple: take the average of the worst returns that were discarded when computing VaR, then make it positive.
With the example above, CVaR is the absolute value of the average of the 5 worst losses, which we can compute like this:
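The same idea can also be written without relying on the series being sorted; a small sketch, assuming a 95% confidence level:
var_cutoff = np.percentile(returns, 5)
cvar_alt = np.abs(returns[ returns <= var_cutoff ].mean())
cvar_alt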
End of explanation
from scipy.stats import norm
def get_gaussian_var(returns, alpha):
mean = returns.mean()
std = returns.std(ddof=0)
z = norm.ppf(alpha)
var = -(mean + z * std)
return var
get_gaussian_var(returns, 0.05)
Explanation: Other Ways to Compute VaR
The VaR computation used above is the historical method. Its advantage is that it requires no assumptions; its drawback is its sensitivity to the data (if the data change, the VaR may change significantly).
In other words, that method has small model risk but large sample risk.
There are several other ways to compute VaR.
The Gaussian Method
By assuming that the return distribution is normal, we can compute the VaR for a given probability or confidence level.
Suppose we want the 95% VaR; then $ \alpha $ is 0.05 (= 100% - 95%), as in the following figure.
From the z-table we know that the corresponding $ z $ value is -1.645.
The VaR can then be computed as:
$$ VaR = -(\mu - 1.645\ \sigma) $$
The general formula is:
$$ VaR_{\alpha} = -(\mu + z_{\alpha} \ \sigma) $$
where:
- $ VaR_{\alpha} $ : VaR for quantile $ \alpha $
- $ \mu $ : mean
- $ z_{\alpha} $ : z-value for quantile $ \alpha $ (see the z-table)
- $ \sigma $ : standard deviation
The implementation in Python is as follows:
End of explanation
from scipy.stats import skew, kurtosis
def get_cornish_fisher_var(returns, alpha):
z = norm.ppf(alpha)
s = skew(returns)
k = kurtosis(returns, fisher=False)
z = (z +
(z**2 - 1)*s/6 +
(z**3 -3*z)*(k-3)/24 -
(2*z**3 - 5*z)*(s**2)/36)
var = -(returns.mean() + z * returns.std(ddof=0))
return var
get_cornish_fisher_var(returns, 0.05)
Explanation: As we can see, the value is close to the previous calculations. Keep in mind, however, that in this example the returns were generated as normally distributed random numbers. If the return distribution is not normal, the result can be far off, and that is precisely the weakness of this method: it assumes normality, while we learned earlier that returns most likely are not normal.
In other words, the Gaussian method has large model risk and small sample risk.
On the other hand, it only needs the mean and the standard deviation, so it is very simple.
In general this approach is not well suited for VaR, because VaR is about the tail of the distribution, and it is precisely the tails of return distributions that behave differently from the normal distribution (fat vs thin).
The Cornish-Fisher Method
The Cornish-Fisher method also assumes that returns follow a distribution, but the distribution can be adjusted for a given skewness and kurtosis. This adjustment is called the Cornish-Fisher expansion.
Cornish-Fisher is essentially an adjusted $ z_\alpha $, where the adjustment involves the skewness $ S $ and the kurtosis $ K $. If the skewness is zero and the kurtosis is three, $ \widetilde{z_\alpha} $ equals $ z_\alpha $.
Below is the VaR calculation with the Cornish-Fisher correction in Python.
End of explanation |
1,182 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
E2.1 Shortest Paths and Cycles
Step1: a) How many nodes and edges does G have?
Step2: b) How many cycles does the cycle basis of the graph contain? How Many
edges does the longest cycle in the cycle basis have?
Step3: c) Create a small graph using G.add nodes from() and G.add edges from()
containing about 5-10 nodes and edges. Check if the graph has a planar
embedding using the following check for planarity
Step4: d) Select two (random) nodes in the graph and calculate the length of the
shortest path between them.
Step5: e) What is the greatest distance between any pair of vertices? (Longest
shortest path/diameter)
Step6: f) Select one node in the graph. Create and plot a histogram of the shortest
paths from this node to every other node.
Step7: E2.2 Edge and Node Attributes
a) Which node/edge attributes does the graph have?
Step8: b) Using the node attributes calculate the total length of the graph.
Step9: c) Using the lengths calculated in c) create a new edge attribute called
“length” for each edge. Calculate the length of the graph again using
the new edge attribute.
Step10: d) Create and plot a histogram of edge lengths. | Python Code:
G=nx.read_graphml("../data/visualization/small_graph.xml", node_type=int)#Load the graph
Explanation: E2.1 Shortest Paths and Cycles
End of explanation
nodes = G.number_of_nodes()
edges = G.number_of_edges()
print("b) The graph has %d nodes and %d edges\n"%(nodes,edges))
Explanation: a) How many nodes and edges does G have?
End of explanation
basis = nx.cycle_basis(G)
entry_length = [len(entry) for entry in basis]
print("c) The cycle basis of the graph contains %d entries, the longest entry contains %d edges.\n"\
%(len(basis),np.asarray(entry_length).max()))
Explanation: b) How many cycles does the cycle basis of the graph contain? How Many
edges does the longest cycle in the cycle basis have?
End of explanation
# function checks if graph G has K (5) or K (3 ,3) as minors ,
# returns True / False on planarity
import itertools as it
from networkx.algorithms import bipartite
def is_planar ( G ):
result = True
n = len ( G . nodes ())
if n > 5:
for subnodes in it . combinations ( G . nodes () ,6):
subG = G . subgraph ( subnodes )
# check if the graph G has a subgraph K (3 ,3)
if bipartite.is_bipartite ( subG ):
X , Y = bipartite . sets ( subG )
if len ( X )==3:
result = False
if n > 4 and result :
for subnodes in it . combinations ( G . nodes () ,5):
subG = G . subgraph ( subnodes )
# check if the graph G has a subgraph K (5)
if len ( subG . edges ())==10:
result = False
return result
H = nx.Graph()
H.add_nodes_from([1,2,3,4,5])
H.add_edges_from([(1,2),(1,4),(2,3),(3,4),(3,5),(4,5)])
is_planar(H)
Explanation: c) Create a small graph using G.add nodes from() and G.add edges from()
containing about 5-10 nodes and edges. Check if the graph has a planar
embedding using the following check for planarity:
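As a sanity check, newer versions of networkx ship a built-in planarity test; a minimal sketch, assuming networkx 2.x or later is installed:
is_planar_builtin, certificate = nx.check_planarity(H)
print(is_planar_builtin)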
End of explanation
node1 = np.random.random_integers(0,nodes)
node2 = np.random.random_integers(0,nodes)
path = nx.shortest_path(G, node1, node2)
print("e) The shortest path between node %d and node %d contains %d edges.\n"\
%(node1, node2, len(path)))
Explanation: d) Select two (random) nodes in the graph and calculate the length of the
shortest path between them.
End of explanation
diameter = nx.diameter(G)
print("f) The diameter of the graph is %d.\n"%diameter)
Explanation: e) What is the greatest distance between any pair of vertices? (Longest
shortest path/diameter)
End of explanation
path_lengths = []
start = np.random.random_integers(0,nodes)
path_lengths = [len(nx.shortest_path(G,start,end))-1 for end in G.nodes()]
plt.title("Histogram of shortest paths from node %d to all other nodes"%start)
plt.xlabel("Path length / [number of edges]")
plt.ylabel("Counts")
plt.hist(path_lengths)
plt.show()
Explanation: f) Select one node in the graph. Create and plot a histogram of the shortest
paths from this node to every other node.
End of explanation
node_attribute_dict = G.node[0]
edge_attribute_dict = G.edge[0][16]
print("a) The graph has the node attributes",node_attribute_dict.keys())
print("a) The graph has the edge attributes",edge_attribute_dict.keys())
Explanation: E2.2 Edge and Node Attributes
a) Which node/edge attributes does the graph have?
End of explanation
length = 0
for e in G.edges():
x1 = G.node[e[0]]['x']
x2 = G.node[e[1]]['x']
y1 = G.node[e[0]]['y']
y2 = G.node[e[1]]['y']
length += np.sqrt((x1 - x2)**2 + (y1 - y2)**2)
print("\nb) The total length of the graph is %d.\n"%length)
Explanation: b) Using the node attributes calculate the total length of the graph.
End of explanation
for e in G.edges():
x1 = G.node[e[0]]['x']
x2 = G.node[e[1]]['x']
y1 = G.node[e[0]]['y']
y2 = G.node[e[1]]['y']
length = np.sqrt((x1 - x2)**2 + (y1 - y2)**2)
G.edge[e[0]][e[1]].update({'length':length})
print("c) The graph has the edge attributes",edge_attribute_dict.keys())
length = 0
for e in G.edges(data=True):
length += e[2]['length']
edge_attribute_dict = G.edge[0][16]
print("c) The total length of the graph is %d.\n"%length)
Explanation: c) Using the lengths calculated in c) create a new edge attribute called
“length” for each edge. Calculate the length of the graph again using
the new edge attribute.
End of explanation
edge_lengths = [e[2]['length'] for e in G.edges(data=True)]
plt.clf()
plt.xlabel("Edge length")
plt.ylabel("Counts")
plt.hist(edge_lengths,bins=20)
plt.show()
Explanation: d) Create and plot a histogram of edge lengths.
End of explanation |
1,183 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Class 11
Step1: 2. Using Pandas to download closing-price data
Once the packages are loaded, we need to define the tickers of the stocks to be used, the download source (Yahoo in this case, although Google is also possible) and the dates of interest. With this, the DataReader function of the pandas_datareader package will download the requested prices.
Note
Step2: 4. Portfolio optimization
#import the packages that will be used
import pandas as pd
import numpy as np
import datetime
from datetime import datetime
import scipy.stats as stats
import scipy as sp
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn.covariance as skcov
import cvxopt as opt
from cvxopt import blas, solvers
solvers.options['show_progress'] = False
%matplotlib inline
pd.set_option('display.notebook_repr_html', True)
pd.set_option('display.max_columns', 6)
pd.set_option('display.max_rows', 10)
pd.set_option('display.width', 78)
pd.set_option('precision', 3)
#Funciones para portafolios
import portfolio_func
from pyomo.environ import *
infinity = float('inf')
import statsmodels.api as sm
assets = ['AAPL','MSFT','AA','AMZN','KO','QAI']
closes = portfolio_func.get_historical_closes(assets, '2016-01-01', '2017-09-22')
daily_returns=portfolio_func.calc_daily_returns(closes)
huber = sm.robust.scale.Huber()
#Mean and standard deviation of returns
returns_av, scale = huber(daily_returns)
model = AbstractModel()
model.assets = Set()
model.T = Set(initialize = range(1994, 2014))
model.max_risk = Param(mutable = True, initialize = .00305)
model.R = Param(model.T, model.assets)
def mean_init(model, j):
return sum(model.R[i, j] for i in model.T)/len(model.T)
model.mean = Param(model.assets, initialize = mean_init)
def Q_init(model, i, j):
return sum((model.R[k, i] - model.mean[i])*(model.R[k, j] - model.mean[j]) for k in model.T)
model.Q = Param(model.assets, model.assets, initialize = Q_init)
model.alloc = Var(model.assets, within=NonNegativeReals)
def risk_bound_rule(model):
return (sum(sum(model.Q[i, j] * model.alloc[i] * model.alloc[j] for i in model.assets)for j in model.assets) <= model.max_risk)
model.risk_bound = Constraint(rule=risk_bound_rule)
def tot_mass_rule(model):
return (sum(model.alloc[j] for j in model.assets) == 1)
model.tot_mass = Constraint(rule=tot_mass_rule)
def objective_rule(model):
return summation(model.mean, model.alloc)
model.objective = Objective(sense=maximize, rule=objective_rule)
solver = SolverFactory('glpk')
!type dietdata.dat
!pyomo solve --solver=glpk diet.py dietdata.dat
!type results.yml
Explanation: Class 11: Some improvements to the code for simulating and optimizing portfolios
Juan Diego Sánchez Torres,
Professor, MAF ITESO
Department of Mathematics and Physics
[email protected]
Tel. 3669-34-34 Ext. 3069
Office: Cubicle 4, Building J, 2nd floor
1. Motivation
First of all, in order to download prices and option data from Yahoo, some Python packages must be loaded. In this case, the main package is Pandas. Scipy and Numpy are also used for the required mathematics, and Matplotlib and Seaborn for plotting the data series. Finally, the cvxopt package is used for convex optimization; to install it, run the following instruction in a terminal: conda install -c anaconda cvxopt
End of explanation
r=0.0001
results_frame = portfolio_func.sim_mont_portfolio(daily_returns,100000,r)
#Sharpe Ratio
max_sharpe_port = results_frame.iloc[results_frame['Sharpe'].idxmax()]
#Menor SD
min_vol_port = results_frame.iloc[results_frame['SD'].idxmin()]
plt.scatter(results_frame.SD,results_frame.Returns,c=results_frame.Sharpe,cmap='RdYlBu')
plt.xlabel('Volatility')
plt.ylabel('Returns')
plt.colorbar()
#Sharpe Ratio
plt.scatter(max_sharpe_port[1],max_sharpe_port[0],marker=(5,1,0),color='r',s=1000);
#Menor SD
plt.scatter(min_vol_port[1],min_vol_port[0],marker=(5,1,0),color='g',s=1000);
pd.DataFrame(max_sharpe_port)
pd.DataFrame(min_vol_port)
Explanation: 2. Using Pandas to download closing-price data
Once the packages are loaded, we need to define the tickers of the stocks to be used, the download source (Yahoo in this case, although Google is also possible) and the dates of interest. With this, the DataReader function of the pandas_datareader package will download the requested prices.
Note: Python distributions usually do not include the pandas_datareader package by default, so it has to be installed separately. The following command installs the package in Anaconda:
*conda install -c conda-forge pandas-datareader *
3. Formulating portfolio risk and Monte Carlo simulation
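For intuition, the core of such a Monte Carlo simulation can be sketched directly with numpy (this is only an illustration of the idea, not the code inside portfolio_func):
n_port = 1000
mu = returns_av                      # robust mean of daily returns
cov = daily_returns.cov().values     # sample covariance of daily returns
sims = []
for _ in range(n_port):
    w = np.random.dirichlet(np.ones(len(assets)))   # random weights that sum to 1
    ret = np.dot(w, mu)
    vol = np.sqrt(np.dot(w, np.dot(cov, w)))
    sims.append((ret, vol, (ret - r) / vol))        # return, volatility, Sharpe ratio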
End of explanation
N=5000
results_frame_optim = portfolio_func.optimal_portfolio(daily_returns,N,r)
#Montecarlo
plt.scatter(results_frame.SD,results_frame.Returns,c=results_frame.Sharpe,cmap='RdYlBu')
plt.xlabel('Volatility')
plt.ylabel('Returns')
plt.colorbar()
#Markowitz
plt.plot(results_frame_optim.SD, results_frame_optim.Returns, 'b-o');
#Sharpe Ratio
max_sharpe_port_optim = results_frame_optim.iloc[results_frame_optim['Sharpe'].idxmax()]
#Menor SD
min_vol_port_optim = results_frame_optim.iloc[results_frame_optim['SD'].idxmin()]
#Markowitz
plt.scatter(results_frame_optim.SD,results_frame_optim.Returns,c=results_frame_optim.Sharpe,cmap='RdYlBu');
plt.xlabel('Volatility')
plt.ylabel('Returns')
plt.colorbar()
#Sharpe Ratio
plt.scatter(max_sharpe_port_optim[1],max_sharpe_port_optim[0],marker=(5,1,0),color='r',s=1000);
#SD
plt.scatter(min_vol_port_optim[1],min_vol_port_optim[0],marker=(5,1,0),color='g',s=1000);
pd.DataFrame(max_sharpe_port_optim)
pd.DataFrame(min_vol_port_optim)
Explanation: 4. Portfolio optimization
End of explanation |
1,184 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Find distribution of local maxima in a Gaussian Random Field
In this notebook, I evaluate different known distributions for local maxima in a Gaussian Random Field. I followed several steps to (a) look at the distribution through simulations, (b) apply different possible approaches to find the distribution.
It seems that none of these are applicable for local maxima without applying a filtering threshold.
1. Simulate random fields (without activation) and extract local maxima
In a first step, I
- Simulate a guassian random field using nipype. This field is huge (500x500x500) and therefore memory-intensive to ensure we have lots of local maxima.
- Look at the GRF.
- Export the GRF to a nifti file. Before saving the data, I make sure all values are positive, because step (d) only extracts local maxima that are above 0.
- Extract all local maxima using nipype and fsl cluster.
- Look at the table of local maxima and print the total number of peaks.
- Look at the distribution of these maxima.
Import libraries
Step1: Simulate very large RF
Step2: Show part of the RF (20x20x1)
Step3: Save RF
Step4: Run fsl cluster to extract local maxima
Step5: Read and print top of file with peaks , print total number of peaks.
Step6: Plot histogram local maxima
Step7: 2. First approach
Step8: Function for pdf of peaks given a certain threshold, RFT
Step9: Compute density function over a range with different excursion thresholds
Step10: In this figure, we see the observed tail distribution of the local maxima (light colored) in our simulated data. The thick line represents the theoretical distribution of the local maxima above a certain threshold. It is only a good approximation for really high thresholds. RFT works better for lower thresholds.
3. Second approach
Step11: Now we'll look at the number of peaks and the Euler Characteristic against the threshold.
Step12: In this plot we can indeed see that the Euler Characteristic gives the number of peaks, but only above a certain threshold that is high enough. Below these higher thresholds, it gives # peaks - # holes. Is there a way to estimate the number of holes in the presence of peaks using the EC? I don't think so; it's the exact same problem as the number of peaks in the presence of holes. Therefore the Euler Characteristic cannot give us information for lower thresholds.
4. Third approach
Step13: Look at the distribution of the voxels?
We simulated the random fields to be a GRF, from a normal distribution. Here you can see that indeed the distribution is normal independent from the smoothness.
Step14: So how about the distribution of the maximum of a sample of these distributions?
Step15: As expected, from a certain smoothness (3 x voxel size), the distribution remains the same.
First
Step16: Now can we just state that the peaks in our unsmoothed random field are the maxima of a sample?
First
Step17: Show histogram of peaks of uncorrected field with the distribution of maximum of a sample of size n.
Step18: Ok, I'm stuck.
The stuff below is the maximum of a sample with correlation. We have the distribution, but if it doesn't work for uncorrelated values, it's not going to work for correlated values. So I leave it here but it's non-informative. | Python Code:
% matplotlib inline
import os
import numpy as np
import nibabel as nib
from nipy.labs.utils.simul_multisubject_fmri_dataset import surrogate_3d_dataset
import nipy.algorithms.statistics.rft as rft
from __future__ import print_function, division
import math
import matplotlib.pyplot as plt
import palettable.colorbrewer as cb
from nipype.interfaces import fsl
import pandas as pd
import nipy.algorithms.statistics.intvol as intvol
from matplotlib import colors
import scipy.stats as stats
Explanation: Find distribution of local maxima in a Gaussian Random Field
In this notebook, I evaluate different known distributions for local maxima in a Gaussian Random Field. I followed several steps to (a) look at the distribution through simulations, (b) apply different possible approaches to find the distribution.
It seems that none of these are applicable for local maxima without applying a filtering threshold.
1. Simulate random fields (without activation) and extract local maxima
In a first step, I
- Simulate a Gaussian random field using nipype. This field is huge (500x500x500) and therefore memory-intensive to ensure we have lots of local maxima.
- Look at the GRF.
- Export the GRF to a nifti file. Before saving the data, I make sure all values are positive, because step (d) only extracts local maxima that are above 0.
- Extract all local maxima using nipype and fsl cluster.
- Look at the table of local maxima and print the total number of peaks.
- Look at the distribution of these maxima.
Import libraries
End of explanation
smooth_FWHM = 3
smooth_sd = smooth_FWHM/(2*math.sqrt(2*math.log(2)))
data = surrogate_3d_dataset(n_subj=1,sk=smooth_sd,shape=(500,500,500),noise_level=1)
Explanation: Simulate very large RF
End of explanation
plt.figure(figsize=(6,4))
plt.imshow(data[1:20,1:20,1])
plt.colorbar()
plt.show()
Explanation: Show part of the RF (20x20x1)
End of explanation
minimum = data.min()
newdata = data - minimum #little trick because fsl.model.Cluster ignores negative values
img=nib.Nifti1Image(newdata,np.eye(4))
img.to_filename("files/RF.nii.gz")
Explanation: Save RF
End of explanation
cl=fsl.model.Cluster()
cl.inputs.threshold = 0
cl.inputs.in_file="files/RF.nii.gz"
cl.inputs.out_localmax_txt_file="files/locmax.txt"
cl.inputs.num_maxima=1000000
cl.inputs.connectivity=26
cl.inputs.terminal_output='none'
cl.run()
Explanation: Run fsl cluster to extract local maxima
End of explanation
peaks = pd.read_csv("files/locmax.txt",sep="\t").drop('Unnamed: 5',1)
peaks.Value = peaks.Value + minimum
peaks[:5]
len(peaks)
Explanation: Read and print top of file with peaks , print total number of peaks.
End of explanation
col=cb.qualitative.Set1_8.mpl_colors
plt.figure(figsize=(6,3))
ax=plt.subplot(111)
ax.hist(peaks.Value,40,normed=1,facecolor=col[0],alpha=0.75,lw=0)
ax.set_xlim([-1,5])
plt.show()
Explanation: Plot histogram local maxima
End of explanation
def nulprobdens(exc,peaks):
v = exc
u = peaks - v
f0 = (2+(u+v)**2)*(u+v)*np.exp(-(u+v)**2/2)/(v**2*np.exp(-v**2/2))
return f0
Explanation: 2. First approach: analytically derived distribution of local maxima above u
a. (Cheng & Schwartzman, 2015)
b. RFT
Cheng and Schwartzman recently published a paper in which they derive a distribution of local maxima over a certain threshold. They make (like RFT) the assumption that the field is a GRF whose interior is non-empty... A consequence is that we can only compute this when the threshold is high enough to ensure there are only blobs and no holes. We'll take a look how the theoretical distribution performs for lower thresholds. This is their derived distribution:
For all local maxima above threshold $v$, extending $u$ above this threshold,
For each $t_0 \in T$ and each fixed $u>0$, as $v \rightarrow \infty$,
\begin{equation}
F_t(u,v) = \frac{(u+v)^{N-1}e^{-(u+v)^2/2}}{v^{N-1}e^{-v^2/2}}
\end{equation}
Below you can see that the approximation indeed only works well on very high thresholds, and therefore cannot be used to obtain the full distribution of all peaks.
We also compare with the random field theory approximation, with u the threshold:
\begin{equation}
F_t(u,t_0) = u e^{-u(t_0 - u)}
\end{equation}
Function for pdf of peaks given a certain threshold
End of explanation
def nulprobdensRFT(exc,peaks):
f0 = exc*np.exp(-exc*(peaks-exc))
return f0
Explanation: Function for pdf of peaks given a certain threshold, RFT
End of explanation
fig,axs=plt.subplots(1,5,figsize=(13,3))
fig.subplots_adjust(hspace = .5, wspace=0.3)
axs=axs.ravel()
thresholds=[2,2.5,3,3.5,4]
bins=np.arange(2,5,0.5)
x=np.arange(2,10,0.0001)
twocol=cb.qualitative.Paired_10.mpl_colors
for i in range(5):
thr=thresholds[i]
axs[i].hist(peaks.Value[peaks.Value>thr],lw=0,facecolor=twocol[i*2-2],normed=True,bins=np.arange(thr,5,0.1))
axs[i].set_xlim([thr,5])
axs[i].set_ylim([0,3])
xn = x[x>thr]
yn = nulprobdens(thr,xn)
ynb = nulprobdensRFT(thr,xn)
axs[i].plot(xn,yn,color=twocol[i*2-1],lw=3,label="C&S")
axs[i].plot(xn,ynb,color=twocol[i*2-1],lw=3,linestyle="--",label="RFT")
axs[i].set_title("threshold:"+str(thr))
axs[i].set_xticks(np.arange(thr,5,0.5))
axs[i].set_yticks([1,2])
axs[i].legend(loc="upper right",frameon=False)
axs[i].set_xlabel("peak height")
axs[i].set_ylabel("density")
plt.show()
Explanation: Compute density function over a range with different excursion thresholds
End of explanation
fig,axs=plt.subplots(1,4,figsize=(13,7))
fig.subplots_adjust(hspace = .1, wspace=0.1)
axs=axs.ravel()
thresholds=np.arange(0,4,1)
cmap = colors.ListedColormap(['white', 'black'])
bounds=[0,0.5,1]
norm = colors.BoundaryNorm(bounds, cmap.N)
for t in range(len(thresholds)):
mask = np.zeros(shape=data.shape,dtype=np.intp)
mask[data>thresholds[t]]=1
axs[t].imshow(mask[1:200,1:200,20],cmap=cmap,norm=norm)
axs[t].set_title("threshold:"+str(thresholds[t]))
axs[t].patch.set_visible(False)
axs[t].axis('off')
Explanation: In this figure, we see the observed tail distribution of the local maxima (light colored) in our simulated data. The thick line represents the theoretical distribution of the local maxima above a certain threshold. It is only a good approximation for really high thresholds. RFT works better for lower thresholds.
3. Second approach: what can we use the Euler Characteristic for?
The Euler Characteristic is a topological invariant that represents for a certain threshold applied to a certain random field #peaks - #holes. Therefore, again, if the threshold is high enough (there are no holes), it computes the number of peaks.
If we compute the EC for a range of thresholds (above a certain threshold), we can construct a pdf of peak heights. It should be noted that the number of peaks depends on the smoothness, but this dependency can be taken out by dividing the smoothness by the total volume as a correcting factor.
This principle has led to a theoretical approximation of the pdf of peaks, which is closely related to the approach of Cheng & Schwartzman.
Let's first look at the masks that result from our random field with certain thresholds
End of explanation
EulerDens = []
EulerDensInv = []
urange = np.arange(-4,4,0.3)
for t in urange:
mask = np.zeros(shape=data.shape,dtype=np.intp)
mask[data>t]=1
EulerDens.append(intvol.EC3d(mask))
mask2 = 1-mask
EulerDensInv.append(intvol.EC3d(mask2))
sumpeak = []
for t in urange:
sumpeak.append(sum(peaks.Value>t))
plt.figure(figsize=(7,5))
plt.plot(urange,EulerDens,color=col[1],lw=3,label="observed Euler Characteristic")
plt.plot(urange,EulerDensInv,color=col[2],lw=3,label="observed inverse Euler Characteristic")
plt.plot(urange,sumpeak,color=col[3],lw=3,label="Number of peaks")
plt.legend(loc="upper right",frameon=False)
plt.ylim([-600000,1200000])
plt.show()
Explanation: Now we'll look at the number of peaks and the Euler Characteristic against the threshold.
End of explanation
smoothnesses = [0,3,6,9]
minima = []
for sm in range(len(smoothnesses)):
smooth_FWHM = smoothnesses[sm]
smooth_sd = smooth_FWHM/(2*math.sqrt(2*math.log(2)))
data = surrogate_3d_dataset(n_subj=1,sk=smooth_sd,shape=(500,500,500),noise_level=1)
minimum = data.min()
newdata = data - minimum #little trick because fsl.model.Cluster ignores negative values
minima.append(minimum)
img=nib.Nifti1Image(newdata,np.eye(4))
img.to_filename(os.path.join("files/RF_"+str(sm)+".nii.gz"))
cl=fsl.model.Cluster()
cl.inputs.threshold = 0
cl.inputs.in_file=os.path.join("files/RF_"+str(sm)+".nii.gz")
cl.inputs.out_localmax_txt_file=os.path.join("files/locmax_"+str(sm)+".txt")
cl.inputs.num_maxima=10000000
cl.inputs.connectivity=26
cl.inputs.terminal_output='none'
cl.run()
Explanation: In this plot we can indeed see that the Euler Characteristic gives the number of peaks, but only above a certain threshold that is high enough. Below these higher thresholds, it gives # peaks - # holes. Is there a way to estimate the number of holes in the presence of peaks using the EC? I don't think so; it's the exact same problem as the number of peaks in the presence of holes. Therefore the Euler Characteristic cannot give us information for lower thresholds.
4. Third approach: peaks as the maximum of something of which we know the distribution?
The above procedures use the fact that these fields are (should be) continuous fields. However, what comes out of the scanner is not continuous at all. We make it more continuous by applying a smoothing kernel, which allows us to use these processes. However, I wonder, is it possible to look at the peaks simply as the maximum of a sample of a known distribution?
Can we look at the random fields as multivariate normal with a correlation structure dependent on the smoothness?
Here the smoothness comes into play! How does smoothness affect the distribution of the peaks?
First simulate random fields for different smoothnesses.
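As a rough illustration of how the kernel width translates into spatial correlation (assuming Gaussian smoothing of white noise, for which the autocorrelation at distance d is exp(-d**2/(4*sd**2))):
for FWHM in [3, 6, 9]:
    sd = FWHM/(2*math.sqrt(2*math.log(2)))
    rho_neighbour = np.exp(-1.0**2/(4*sd**2))   # correlation between adjacent voxels
    print(FWHM, rho_neighbour)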
End of explanation
col=cb.qualitative.Set1_8.mpl_colors+cb.qualitative.Set2_8.mpl_colors
plt.figure(figsize=(10,5))
ax=plt.subplot(111)
for sm in range(len(smoothnesses)):
file = os.path.join("files/RF_"+str(sm)+".nii.gz")
tvals = nib.load(file).get_data().astype('float64')+minima[sm]
values, base = np.histogram(tvals,100,normed=1)
ax.plot(base[:-1],values,label="smoothness: "+str(smoothnesses[sm]),color=col[sm],lw=2)
ax.set_xlim([-4,4])
ax.set_ylim([0,0.5])
ax.legend(loc="lower right",frameon=False)
ax.set_title("distribution of peak heights for different smoothing kernels (FWHM)")
plt.show()
Explanation: Look at the distribution of the voxels?
We simulated the random fields to be a GRF, from a normal distribution. Here you can see that indeed the distribution is normal independent from the smoothness.
End of explanation
all = []
for sm in range(len(smoothnesses)):
peaks = pd.read_csv(os.path.join("files/locmax_"+str(sm)+".txt"),sep="\t").drop('Unnamed: 5',1).Value
peaks = peaks + minima[sm]
all.append(peaks)
col=cb.qualitative.Set1_8.mpl_colors+cb.qualitative.Set2_8.mpl_colors
plt.figure(figsize=(10,5))
ax=plt.subplot(111)
for sm in range(len(smoothnesses)):
values, base = np.histogram(all[sm],30,normed=1)
ax.plot(base[:-1],values,label="smoothness: "+str(smoothnesses[sm]),color=col[sm],lw=2)
ax.set_xlim([-1,5])
ax.set_ylim([0,1.2])
ax.legend(loc="lower right",frameon=False)
ax.set_title("distribution of peak heights for different smoothing kernels (FWHM)")
plt.show()
Explanation: So how about the distribution of the maximum of a sample of these distributions?
End of explanation
# random sample
smplm = []
for i in range(100000):
smpl = np.random.standard_normal((n,))
smplm.append(max(smpl))
# distribution
xm = np.arange(-1,5,0.001)
ym = n*stats.norm.cdf(xm)**(n-1)*stats.norm.pdf(xm)
# histogram
twocol=cb.qualitative.Paired_10.mpl_colors
plt.figure(figsize=(6,3))
ax=plt.subplot(111)
ax.hist(smplm,100,normed=1,facecolor=twocol[0],alpha=0.75,lw=0)
ax.plot(xm,ym,color=twocol[1],lw=3)
ax.set_xlim([-1,5])
plt.show()
Explanation: As expected, from a certain smoothness (3 x voxel size), the distribution remains the same.
First: the distribution without smoothness!
The RF with smoothness 0 should be just normally distributed values without correlation. The peaks with smoothness 0 should be just maxima of a sample of normally distributed values. We know the distribution of the maximum of a sample of normally distributed values from order statistics:
$f_{max}(x) = nF(x)^{n-1}f(x)$
This is to show that the distribution of the maximum of the sample is the right one.
Simulate n normally distributed values
Take the maximum
Compute for a range of values the distribution of the maximum
Show histogram of simulated maxima with the pdf on top
End of explanation
n = (500**3)/len(all[1])
n
Explanation: Now can we just state that the peaks in our unsmoothed random field are the maxima of a sample?
First: the sample size is variable! Let's try it on the average sample size, i.e. the average number of voxels per peak.
End of explanation
# distribution of a maximum
xm = np.arange(-1,5,0.001)
ym = n*stats.norm.cdf(xm)**(n-1)*stats.norm.pdf(xm)
# histogram
twocol=cb.qualitative.Paired_10.mpl_colors
plt.figure(figsize=(6,3))
ax=plt.subplot(111)
ax.hist(all[1],100,normed=1,facecolor=twocol[0],alpha=0.75,lw=0)
ax.plot(xm,ym,color=twocol[1],lw=3)
ax.set_xlim([-1,5])
plt.show()
Explanation: Show histogram of peaks of uncorrected field with the distribution of maximum of a sample of size n.
End of explanation
# random sample
smplmc = []
n = 2
mean = [0,0]
r = 0.2
cov = [[1,r],[r,1]]
for i in range(100000):
smpl = np.random.multivariate_normal(mean,cov,int(n/n))
smplmc.append(np.max(smpl))
# distribution
xmc = np.arange(-2,3,0.001)
corf = (1-r)/np.sqrt(1-r**2)
ymc = n*stats.norm.cdf(corf*xmc)**(n-1)*stats.norm.pdf(xmc)
# histogram
twocol=cb.qualitative.Paired_10.mpl_colors
plt.figure(figsize=(6,3))
ax=plt.subplot(111)
ax.hist(smplmc,100,normed=1,facecolor=twocol[2],alpha=0.75,lw=0)
ax.plot(xmc,ymc,color=twocol[3],lw=3)
ax.set_xlim([-1,5])
plt.show()
# random sample
smplmc = []
n = 10
mean = np.array([0,0,0,0,0,0,0,0,0,0])
r = 0.5
cov = np.array([[1,r,r,r,r,r,r,r,r,r],
[r,1,r,r,r,r,r,r,r,r],
[r,r,1,r,r,r,r,r,r,r],
[r,r,r,1,r,r,r,r,r,r],
[r,r,r,r,1,r,r,r,r,r],
[r,r,r,r,r,1,r,r,r,r],
[r,r,r,r,r,r,1,r,r,r],
[r,r,r,r,r,r,r,1,r,r],
[r,r,r,r,r,r,r,r,1,r],
[r,r,r,r,r,r,r,r,r,1]
])
for i in range(100):
smpl = np.random.multivariate_normal(mean,cov,int(n/n))
smplmc.append(np.max(smpl))
# distribution (just max of gaussian normal)
xm = np.arange(-1,5,0.001)
corf = (1-r)/np.sqrt(1-r**2)
ym = n*stats.norm.cdf(xm)**(n-1)*stats.norm.pdf(xm)
# histogram
twocol=cb.qualitative.Paired_10.mpl_colors
plt.figure(figsize=(6,3))
ax=plt.subplot(111)
ax.hist(smplm,100,normed=1,facecolor=twocol[0],alpha=0.75,lw=0)
ax.plot(xm,ym,color=twocol[1],lw=3)
ax.set_xlim([-1,5])
plt.show()
Explanation: Ok, I'm stuck.
The stuff below is the maximum of a sample with correlation. We have the distribution, but if it doesn't work for uncorrelated values, it's not going to work for correlated values. So I leave it here but it's non-informative.
End of explanation |
1,185 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Input example: | Problem:
import numpy as np
a = np.array([[0, 1], [2, 1], [4, 8]])
mask = (a.min(axis=1,keepdims=1) == a) |
1,186 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Non-Linear Time History Analysis (NLTHA) on Single Degree of Freedom (SDOF) Oscillators
In this method, a single degree of freedom (SDOF) model of each structure is subjected to non-linear time history analysis (NLTHA) using a suite of ground motion records. The displacements of the SDOF due to each ground motion record are used as input to determine the distribution of buildings in each damage state for each level of ground motion intensity. A regression algorithm is then applied to derive the fragility model.
The figure below illustrates a fragility model developed using this method.
<img src="../../../../figures/NLTHA_SDOF.png" width="400" align="middle">
Note
Step1: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
Step2: Load ground motion records
Please indicate the path to the folder containing the ground motion records to be used in the analysis through the parameter gmrs_folder.
Note
Step3: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below.
The damage types currently supported are
Step4: Obtain the damage probability matrix
The following parameters need to be defined in the cell below in order to calculate the damage probability matrix
Step5: Find an adequate intensity measure
This sections allows users to find an intensity measure (PGA or Spectral Acceleration) that correlates well with damage. To do so, it is necessary to establish a range of periods of vibration and step (minT, maxT and stepT).
Step6: Fit lognormal CDF fragility curves
The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above
Step7: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above
Step8: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above
Step9: Obtain vulnerability function
A vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level.
The following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions
Step10: Plot vulnerability function
Step11: Save vulnerability function
The derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above | Python Code:
from rmtk.vulnerability.common import utils
from rmtk.vulnerability.derivation_fragility.NLTHA_on_SDOF import NLTHA_on_SDOF
from rmtk.vulnerability.derivation_fragility.NLTHA_on_SDOF.read_pinching_parameters import read_parameters
%matplotlib inline
Explanation: Non-Linear Time History Analysis (NLTHA) on Single Degree of Freedom (SDOF) Oscillators
In this method, a single degree of freedom (SDOF) model of each structure is subjected to non-linear time history analysis (NLTHA) using a suite of ground motion records. The displacements of the SDOF due to each ground motion record are used as input to determine the distribution of buildings in each damage state for each level of ground motion intensity. A regression algorithm is then applied to derive the fragility model.
The figure below illustrates a fragility model developed using this method.
<img src="../../../../figures/NLTHA_SDOF.png" width="400" align="middle">
Note: To run the code in a cell:
Click on the cell to select it.
Press SHIFT+ENTER on your keyboard or press the play button (<button class='fa fa-play icon-play btn btn-xs btn-default'></button>) in the toolbar above.
End of explanation
capacity_curves_file = "../../../../../rmtk_data/capacity_curves_point.csv"
sdof_hysteresis = "Default"
#sdof_hysteresis = "../../../../../rmtk_data/pinching_parameters.csv"
capacity_curves = utils.read_capacity_curves(capacity_curves_file)
capacity_curves = utils.check_SDOF_curves(capacity_curves)
utils.plot_capacity_curves(capacity_curves)
hysteresis = read_parameters(sdof_hysteresis)
Explanation: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
End of explanation
gmrs_folder = '../../../../../rmtk_data/accelerograms'
minT, maxT = 0.0, 2.0
gmrs = utils.read_gmrs(gmrs_folder)
#utils.plot_response_spectra(gmrs, minT, maxT)
Explanation: Load ground motion records
Please indicate the path to the folder containing the ground motion records to be used in the analysis through the parameter gmrs_folder.
Note: Each accelerogram needs to be in a separate CSV file as described in the RMTK manual.
The parameters minT and maxT are used to define the period bounds when plotting the spectra for the provided ground motion fields.
End of explanation
damage_model_file = "../../../../../rmtk_data/damage_model.csv"
damage_model = utils.read_damage_model(damage_model_file)
Explanation: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below.
The damage types currently supported are: capacity curve dependent, spectral displacement and interstorey drift. If the damage model type is interstorey drift, the user can provide the pushover curve in terms of Vb-dfloor so that interstorey drift limit states can be converted to roof displacements and spectral displacements; otherwise, a linear relationship is assumed.
End of explanation
damping_ratio = 0.05
degradation = True
EDPs = NLTHA_on_SDOF.calculate_response(capacity_curves, hysteresis, gmrs, damage_model, damping_ratio, degradation)
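# NOTE: the call above returns the EDPs (engineering demand parameters). The damage
# probability matrix 'PDM' saved below is assumed to be built from these EDPs and the
# damage model in the original workflow; that step is not shown in this cell.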
utils.save_result(PDM,'../../../../../rmtk_data/PDM.csv')
Explanation: Obtain the damage probability matrix
The following parameters need to be defined in the cell below in order to calculate the damage probability matrix:
1. damping_ratio: This parameter defines the damping ratio for the structure.
2. degradation: This boolean parameter should be set to True or False to specify whether structural degradation should be considered in the analysis or not.
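As a purely illustrative aid (hypothetical numbers, not RMTK output), a damage probability matrix can be pictured as one row per ground motion record (or intensity level) and one column per damage state:
```python
import numpy as np

# Hypothetical damage probability matrix: rows are ground motion records (or intensity
# levels), columns are damage states from 'none' to 'collapse'; each row sums to 1.
PDM_example = np.array([
    [0.60, 0.25, 0.10, 0.05],
    [0.35, 0.30, 0.20, 0.15],
])
```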
End of explanation
minT, maxT, stepT = 0.0, 2.0, 0.1
regression_method = 'cloud analysis'
utils.evaluate_optimal_IM(gmrs, EDPs, minT, maxT, stepT, damage_model, damping_ratio, regression_method)
Explanation: Find an adequate intensity measure
This section allows users to find an intensity measure (PGA or Spectral Acceleration) that correlates well with damage. To do so, it is necessary to establish a range of periods of vibration and a step (minT, maxT and stepT).
End of explanation
IMT = "Sa"
T = 0.7
utils.export_IMLs_PDM(gmrs,T,PDM,damping_ratio,damage_model,'../../../../../rmtk_data/IMLs_PDM.csv')
fragility_model = utils.calculate_mean_fragility(gmrs, PDM, T, damping_ratio,IMT, damage_model, regression_method)
Explanation: Fit lognormal CDF fragility curves
The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above:
1. IMT: This parameter specifies the intensity measure type to be used. Currently supported options are "PGA", "Sa" and "Sd".
2. T: This parameter defines the time period of the fundamental mode of vibration of the structure.
3. regression_method: This parameter defines the regression method to be used for estimating the parameters of the fragility functions. The valid options are "least squares" and "max likelihood".
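For reference, the parametric form fitted here is the standard lognormal CDF fragility function (the notation is generic, not taken from the RMTK documentation):
$$P(DS \geq ds \mid IM) = \Phi\left(\frac{\ln(IM/\theta_{ds})}{\beta_{ds}}\right)$$
where $\theta_{ds}$ is the median intensity causing damage state $ds$ and $\beta_{ds}$ is the logarithmic standard deviation, both estimated with the regression method selected above.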
End of explanation
minIML, maxIML = 0.0, 1.2
utils.plot_fragility_model(fragility_model, minIML, maxIML)
utils.plot_fragility_scatter(fragility_model, minIML, maxIML, PDM, gmrs, IMT, T, damping_ratio)
Explanation: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above:
* minIML and maxIML: These parameters define the limits of the intensity measure level for plotting the functions
End of explanation
taxonomy = "RC"
minIML, maxIML = 0.01, 2.00
output_type = "csv"
output_path = "../../../../../rmtk_data/output/"
utils.save_mean_fragility(taxonomy, fragility_model, minIML, maxIML, output_type, output_path)
Explanation: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the fragility functions.
2. minIML and maxIML: These parameters define the bounds of applicability of the functions.
3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation
cons_model_file = "../../../../../rmtk_data/cons_model.csv"
imls = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50,
0.60, 0.70, 0.80, 0.90, 1.00, 1.20, 1.40, 1.60, 1.80, 2.00,
2.20, 2.40, 2.60, 2.80, 3.00, 3.20, 3.40, 3.60, 3.80, 4.00]
distribution_type = "lognormal"
cons_model = utils.read_consequence_model(cons_model_file)
vulnerability_model = utils.convert_fragility_vulnerability(fragility_model, cons_model,
imls, distribution_type)
Explanation: Obtain vulnerability function
A vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level.
The following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions:
1. cons_model_file: This parameter specifies the path of the consequence model file.
2. imls: This parameter specifies a list of intensity measure levels in increasing order at which the distribution of loss ratios are required to be calculated.
3. distribution_type: This parameter specifies the type of distribution to be used for calculating the vulnerability function. The distribution types currently supported are "lognormal", "beta", and "PMF".
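As a minimal sketch of that combination (hypothetical numbers, not the RMTK implementation), the mean loss ratio at one intensity level is just the damage-state fractions weighted by the consequence model's damage ratios:
```python
import numpy as np

# fractions of buildings in each damage state at one intensity measure level (hypothetical)
fractions = np.array([0.50, 0.25, 0.15, 0.07, 0.03])      # none ... collapse
# damage ratios taken from a consequence model (hypothetical)
damage_ratios = np.array([0.00, 0.10, 0.30, 0.60, 1.00])

mean_loss_ratio = float(np.sum(fractions * damage_ratios))  # 0.142
```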
End of explanation
utils.plot_vulnerability_model(vulnerability_model)
Explanation: Plot vulnerability function
End of explanation
taxonomy = "RC"
output_type = "csv"
output_path = "../../../../../rmtk_data/output/"
utils.save_vulnerability(taxonomy, vulnerability_model, output_type, output_path)
Explanation: Save vulnerability function
The derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the vulnerability function obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the vulnerability function.
2. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation |
1,187 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
HTML Introduction
Unit 17, Lecture 2
Numerical Methods and Statistics
Prof. Andrew White, April 21, 2016
Websites are made using three core technologies
Step1: Links
Step2: Content Arrangement
Spans, divs and breaks are ways to arrange content. They do not do much alone. Instead you use javascript and CSS to manipulate them. For example, divs are how jupyter notebooks separate input cells, from the notebook, from the top, etc.
Inspecting Tools
The best way to learn HTML is to look at different webpages. Try right-clicking and inspecting something!
Connecting CSS with HTML
Step3: Wow
Notice it made all headers red
In order to finely tune the connection between CSS, JS and HTML, there is something called selectors.
Step4: There is an established philosophy that HTML = content, CSS = style. Therefore, it's incorrect to create a class called "blue-class" because now you've spilled over some style into your HTML.
<span class="attention-getter"> Testing out your own stuff</span>
So with spans and divs, we're able to attach classes to HTML elements. If you want to learn more, I would recommend jsfiddle, which is a wonderful place for testing things out. If you want to have a structured lesson, check out w3schools and codeacademy
Javascript
JS is the programming language of the web. It allows you to manipulate elements.
Step5: There's a lot of stuff going on in there! Here are some differences with Python
Step6: I can also change headings to be something different
Step7: Example 2 — Changing the background
Using the inspector, I've found that the dull gray background is from the .notebook_app.
Step8: Example 3 — Creating a nice results report
If you just web-search for HTML table, you'll see the syntax. Basically you have this | Python Code:
%%HTML
<h3> A level-3 (smaller) heading</h3>
<p> This is a paragraph about HTML. HTML surprisingly only has about 5 elements you need to know: </p>
<ul>
<li> Paragraphs</li>
<li> breaks </li>
<li> lists </li>
<li> links </li>
<li> images </li>
<li> divs </li>
<li> spans</li>
</ul>
Explanation: HTML Introduction
Unit 17, Lecture 2
Numerical Methods and Statistics
Prof. Andrew White, April 21, 2016
Websites are made using three core technologies: HTML, CSS, and Javascript. From HTML, there also evolved SVGs which is the same format but describes 2D graphics. Similarly, WebGL is a 3D graphics technology.
End of explanation
%%HTML
<a href="http://beyond.io"> This link will take you on a journey to the beyond!</a>
%%HTML
<a href="http://unsplash.com">
<img width=400px src="https://images.unsplash.com/photo-1460400355256-e87506dcec4f?ixlib=rb-0.3.5&q=80&fm=jpg&crop=entropy&s=f35997a68b394f08caf7d46cf2d27791"/>
<p> Here I've made this entire section a link.</p>
</a>
Explanation: Links
End of explanation
%%HTML
<h1> This is a header</h1>
<style>
h1 {
color:red;
}
</style>
Explanation: Content Arrangement
Spans, divs and breaks are ways to arrange content. They do not do much alone. Instead you use javascript and CSS to manipulate them. For example, divs are how jupyter notebooks separate input cells, from the notebook, from the top, etc.
Inspecting Tools
The best way to learn HTML is to look at different webpages. Try right-clicking and inspecting something!
Connecting CSS with HTML
End of explanation
%%HTML
<h1 class="attention-getter"> This is an important header</h1>
<style>
.attention-getter {
color:blue;
}
</style>
Explanation: Wow
Notice it made all headers red
In order to finely tune the connection between CSS, JS and HTML, there is something called selectors.
End of explanation
%%HTML
<h3 class="grab-me"> This header will change soon</h3>
%%javascript
var grabme = document.querySelector('.grab-me');
grabme.textContent = 'Hoopla';
%%HTML
<ul class="fill-me"> </ul>
%%javascript
var fruits = ['strawberry', 'mango', 'banana'];
fruits.forEach(function(i) {
document.querySelector('.fill-me').innerHTML += '<li>' + i + '</li>';
});
Explanation: There is an established philosophy that HTML = content, CSS = style. Therefore, it's incorrect to create a class called "blue-class" because now you've spilled over some style into your HTML.
<span class="attention-getter"> Testing out your own stuff</span>
So with spans and divs, we're able to attach classes to HTML elements. If you want to learn more, I would recommend jsfiddle, which is a wonderful place for testing things out. If you want to have a structured lesson, check out w3schools and codeacademy
Javascript
JS is the programming language of the web. It allows you to manipulate elements.
End of explanation
%%HTML
<link href='https://fonts.googleapis.com/css?family=Roboto+Condensed:300|Oswald:400,700' rel='stylesheet' type='text/css'>
%%HTML
<style>
.rendered_html {
font-family: 'Roboto Condensed';
font-size: 125%;
}
</style>
Explanation: There's a lot of stuff going on in there! Here are some differences with Python:
You need semicolons at the end of lines
The for-loop syntax is different. Rather than have code below, we call a forEach function and give it a function we define right there.
We also used the innerHTML instead of textContent this time around
Using HTML, CSS, and JS in your notebook
Example 1 — Changing font
Using the inspector, I've discovered that all the paragraphs in rendered cells can be selected by .rendered_html. I'll now change their fonts
This cell below grabs the fonts from google's font collection
End of explanation
%%HTML
<style>
.rendered_html > h1,h2,h3,h4,h5,h6 {
font-family: 'Oswald';
}
</style>
Explanation: I can also change headings to be something different
End of explanation
%%HTML
<style>
.notebook_app {
background-image: url('https://images.unsplash.com/photo-1453106037972-08fbfe790762?ixlib=rb-0.3.5&q=80&fm=jpg&crop=entropy&s=cbaaa89f2c5394ff276ac2ccbfffd4a4');
background-repeat: no-repeat;
background-size: cover;
}
</style>
Explanation: Example 2 — Changing the background
Using the inspector, I've found that the dull gray background is from the .notebook_app.
End of explanation
import IPython.display as display
def make_pretty_table(x, y):
html = '''
<table>
<tr> <th> x</th> <th> y </th> </tr>
'''
for xi,yi in zip(x,y):
html += '<tr> <td> {} </td> <td> {} </td> </tr> \n'.format(xi,yi)
html += '</table>'
d = display.HTML(html)
display.display(d)
x = range(10)
y = range(10,20)
make_pretty_table(x,y)
def make_really_pretty_table(x, y):
html = '''
<table class='pretty-table'>
<tr> <th> x</th> <th> y </th> </tr>
'''
for xi,yi in zip(x,y):
html += '<tr> <td> {} </td> <td> {} </td> </tr> \n'.format(xi,yi)
html += '</table>'
html += '''
<style>
.pretty-table > tbody > tr:nth-child(odd) {
background-color: #ccc;
}
.pretty-table > tbody > tr > td, th {
text-align: center !important;
}
</style>
'''
d = display.HTML(html)
display.display(d)
x = range(10)
y = range(10,20)
make_really_pretty_table(x,y)
Explanation: Example 3 — Creating a nice results report
If you just web-search for HTML table, you'll see the syntax. Basically you have this:
```
<table>
<tr> each row </tr>
</table>
```
with more than one column:
```
<table>
<tr>
<td> row 1 column 1</td>
<td> column 2</td>
</tr>
</table>
```
End of explanation |
1,188 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<!--BOOK_INFORMATION-->
<img align="left" style="padding-right
Step1: Suppose we want to access three different elements. We could do it like this
Step2: Alternatively, we can pass a single list or array of indices to obtain the same result
Step3: When using fancy indexing, the shape of the result reflects the shape of the index arrays rather than the shape of the array being indexed
Step4: Fancy indexing also works in multiple dimensions. Consider the following array
Step5: Like with standard indexing, the first index refers to the row, and the second to the column
Step6: Notice that the first value in the result is X[0, 2], the second is X[1, 1], and the third is X[2, 3].
The pairing of indices in fancy indexing follows all the broadcasting rules that were mentioned in Computation on Arrays
Step7: Here, each row value is matched with each column vector, exactly as we saw in broadcasting of arithmetic operations.
For example
Step8: It is always important to remember with fancy indexing that the return value reflects the broadcasted shape of the indices, rather than the shape of the array being indexed.
Combined Indexing
For even more powerful operations, fancy indexing can be combined with the other indexing schemes we've seen
Step9: We can combine fancy and simple indices
Step10: We can also combine fancy indexing with slicing
Step11: And we can combine fancy indexing with masking
Step12: All of these indexing options combined lead to a very flexible set of operations for accessing and modifying array values.
Example
Step13: Using the plotting tools we will discuss in Introduction to Matplotlib, we can visualize these points as a scatter-plot
Step14: Let's use fancy indexing to select 20 random points. We'll do this by first choosing 20 random indices with no repeats, and use these indices to select a portion of the original array
Step15: Now to see which points were selected, let's over-plot large circles at the locations of the selected points
Step16: This sort of strategy is often used to quickly partition datasets, as is often needed in train/test splitting for validation of statistical models (see Hyperparameters and Model Validation), and in sampling approaches to answering statistical questions.
Modifying Values with Fancy Indexing
Just as fancy indexing can be used to access parts of an array, it can also be used to modify parts of an array.
For example, imagine we have an array of indices and we'd like to set the corresponding items in an array to some value
Step17: We can use any assignment-type operator for this. For example
Step18: Notice, though, that repeated indices with these operations can cause some potentially unexpected results. Consider the following
Step19: Where did the 4 go? The result of this operation is to first assign x[0] = 4, followed by x[0] = 6.
The result, of course, is that x[0] contains the value 6.
Fair enough, but consider this operation
Step20: You might expect that x[3] would contain the value 2, and x[4] would contain the value 3, as this is how many times each index is repeated. Why is this not the case?
Conceptually, this is because x[i] += 1 is meant as a shorthand of x[i] = x[i] + 1. x[i] + 1 is evaluated, and then the result is assigned to the indices in x.
With this in mind, it is not the augmentation that happens multiple times, but the assignment, which leads to the rather nonintuitive results.
So what if you want the other behavior where the operation is repeated? For this, you can use the at() method of ufuncs (available since NumPy 1.8), and do the following
Step21: The at() method does an in-place application of the given operator at the specified indices (here, i) with the specified value (here, 1).
Another method that is similar in spirit is the reduceat() method of ufuncs, which you can read about in the NumPy documentation.
Example
Step22: The counts now reflect the number of points within each bin–in other words, a histogram
Step23: Of course, it would be silly to have to do this each time you want to plot a histogram.
This is why Matplotlib provides the plt.hist() routine, which does the same in a single line
Step24: Our own one-line algorithm is several times faster than the optimized algorithm in NumPy! How can this be?
If you dig into the np.histogram source code (you can do this in IPython by typing np.histogram??), you'll see that it's quite a bit more involved than the simple search-and-count that we've done; this is because NumPy's algorithm is more flexible, and particularly is designed for better performance when the number of data points becomes large | Python Code:
import numpy as np
rand = np.random.RandomState(42)
x = rand.randint(100, size=10)
print(x)
Explanation: <!--BOOK_INFORMATION-->
<img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png">
This notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub.
The text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book!
No changes were made to the contents of this notebook from the original.
<!--NAVIGATION-->
< Comparisons, Masks, and Boolean Logic | Contents | Sorting Arrays >
Fancy Indexing
In the previous sections, we saw how to access and modify portions of arrays using simple indices (e.g., arr[0]), slices (e.g., arr[:5]), and Boolean masks (e.g., arr[arr > 0]).
In this section, we'll look at another style of array indexing, known as fancy indexing.
Fancy indexing is like the simple indexing we've already seen, but we pass arrays of indices in place of single scalars.
This allows us to very quickly access and modify complicated subsets of an array's values.
Exploring Fancy Indexing
Fancy indexing is conceptually simple: it means passing an array of indices to access multiple array elements at once.
For example, consider the following array:
End of explanation
[x[3], x[7], x[2]]
Explanation: Suppose we want to access three different elements. We could do it like this:
End of explanation
ind = [3, 7, 4]
x[ind]
Explanation: Alternatively, we can pass a single list or array of indices to obtain the same result:
End of explanation
ind = np.array([[3, 7],
[4, 5]])
x[ind]
Explanation: When using fancy indexing, the shape of the result reflects the shape of the index arrays rather than the shape of the array being indexed:
End of explanation
X = np.arange(12).reshape((3, 4))
X
Explanation: Fancy indexing also works in multiple dimensions. Consider the following array:
End of explanation
row = np.array([0, 1, 2])
col = np.array([2, 1, 3])
X[row, col]
Explanation: Like with standard indexing, the first index refers to the row, and the second to the column:
End of explanation
X[row[:, np.newaxis], col]
Explanation: Notice that the first value in the result is X[0, 2], the second is X[1, 1], and the third is X[2, 3].
The pairing of indices in fancy indexing follows all the broadcasting rules that were mentioned in Computation on Arrays: Broadcasting.
So, for example, if we combine a column vector and a row vector within the indices, we get a two-dimensional result:
End of explanation
row[:, np.newaxis] * col
Explanation: Here, each row value is matched with each column vector, exactly as we saw in broadcasting of arithmetic operations.
For example:
End of explanation
print(X)
Explanation: It is always important to remember with fancy indexing that the return value reflects the broadcasted shape of the indices, rather than the shape of the array being indexed.
Combined Indexing
For even more powerful operations, fancy indexing can be combined with the other indexing schemes we've seen:
End of explanation
X[2, [2, 0, 1]]
Explanation: We can combine fancy and simple indices:
End of explanation
X[1:, [2, 0, 1]]
Explanation: We can also combine fancy indexing with slicing:
End of explanation
mask = np.array([1, 0, 1, 0], dtype=bool)
X[row[:, np.newaxis], mask]
Explanation: And we can combine fancy indexing with masking:
End of explanation
mean = [0, 0]
cov = [[1, 2],
[2, 5]]
X = rand.multivariate_normal(mean, cov, 100)
X.shape
Explanation: All of these indexing options combined lead to a very flexible set of operations for accessing and modifying array values.
Example: Selecting Random Points
One common use of fancy indexing is the selection of subsets of rows from a matrix.
For example, we might have an $N$ by $D$ matrix representing $N$ points in $D$ dimensions, such as the following points drawn from a two-dimensional normal distribution:
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn; seaborn.set() # for plot styling
plt.scatter(X[:, 0], X[:, 1]);
Explanation: Using the plotting tools we will discuss in Introduction to Matplotlib, we can visualize these points as a scatter-plot:
End of explanation
indices = np.random.choice(X.shape[0], 20, replace=False)
indices
selection = X[indices] # fancy indexing here
selection.shape
Explanation: Let's use fancy indexing to select 20 random points. We'll do this by first choosing 20 random indices with no repeats, and use these indices to select a portion of the original array:
End of explanation
plt.scatter(X[:, 0], X[:, 1], alpha=0.3)
plt.scatter(selection[:, 0], selection[:, 1],
facecolor='none', s=200);
Explanation: Now to see which points were selected, let's over-plot large circles at the locations of the selected points:
End of explanation
x = np.arange(10)
i = np.array([2, 1, 8, 4])
x[i] = 99
print(x)
Explanation: This sort of strategy is often used to quickly partition datasets, as is often needed in train/test splitting for validation of statistical models (see Hyperparameters and Model Validation), and in sampling approaches to answering statistical questions.
Modifying Values with Fancy Indexing
Just as fancy indexing can be used to access parts of an array, it can also be used to modify parts of an array.
For example, imagine we have an array of indices and we'd like to set the corresponding items in an array to some value:
End of explanation
x[i] -= 10
print(x)
Explanation: We can use any assignment-type operator for this. For example:
End of explanation
x = np.zeros(10)
x[[0, 0]] = [4, 6]
print(x)
Explanation: Notice, though, that repeated indices with these operations can cause some potentially unexpected results. Consider the following:
End of explanation
i = [2, 3, 3, 4, 4, 4]
x[i] += 1
x
Explanation: Where did the 4 go? The result of this operation is to first assign x[0] = 4, followed by x[0] = 6.
The result, of course, is that x[0] contains the value 6.
Fair enough, but consider this operation:
End of explanation
x = np.zeros(10)
np.add.at(x, i, 1)
print(x)
Explanation: You might expect that x[3] would contain the value 2, and x[4] would contain the value 3, as this is how many times each index is repeated. Why is this not the case?
Conceptually, this is because x[i] += 1 is meant as a shorthand of x[i] = x[i] + 1. x[i] + 1 is evaluated, and then the result is assigned to the indices in x.
With this in mind, it is not the augmentation that happens multiple times, but the assignment, which leads to the rather nonintuitive results.
So what if you want the other behavior where the operation is repeated? For this, you can use the at() method of ufuncs (available since NumPy 1.8), and do the following:
End of explanation
np.random.seed(42)
x = np.random.randn(100)
# compute a histogram by hand
bins = np.linspace(-5, 5, 20)
counts = np.zeros_like(bins)
# find the appropriate bin for each x
i = np.searchsorted(bins, x)
# add 1 to each of these bins
np.add.at(counts, i, 1)
Explanation: The at() method does an in-place application of the given operator at the specified indices (here, i) with the specified value (here, 1).
Another method that is similar in spirit is the reduceat() method of ufuncs, which you can read about in the NumPy documentation.
Example: Binning Data
You can use these ideas to efficiently bin data to create a histogram by hand.
For example, imagine we have 1,000 values and would like to quickly find where they fall within an array of bins.
We could compute it using ufunc.at like this:
End of explanation
# plot the results
plt.plot(bins, counts, linestyle='steps');
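# Note: recent Matplotlib releases no longer accept linestyle='steps';
# plt.step(bins, counts) (or drawstyle='steps-pre') should produce the same plot.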
Explanation: The counts now reflect the number of points within each bin–in other words, a histogram:
End of explanation
print("NumPy routine:")
%timeit counts, edges = np.histogram(x, bins)
print("Custom routine:")
%timeit np.add.at(counts, np.searchsorted(bins, x), 1)
Explanation: Of course, it would be silly to have to do this each time you want to plot a histogram.
This is why Matplotlib provides the plt.hist() routine, which does the same in a single line:
```python
plt.hist(x, bins, histtype='step');
```
This function will create a nearly identical plot to the one seen here.
To compute the binning, matplotlib uses the np.histogram function, which does a very similar computation to what we did before. Let's compare the two here:
End of explanation
x = np.random.randn(1000000)
print("NumPy routine:")
%timeit counts, edges = np.histogram(x, bins)
print("Custom routine:")
%timeit np.add.at(counts, np.searchsorted(bins, x), 1)
Explanation: Our own one-line algorithm is several times faster than the optimized algorithm in NumPy! How can this be?
If you dig into the np.histogram source code (you can do this in IPython by typing np.histogram??), you'll see that it's quite a bit more involved than the simple search-and-count that we've done; this is because NumPy's algorithm is more flexible, and particularly is designed for better performance when the number of data points becomes large:
End of explanation |
1,189 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Contrast Effects
Authors
Ndèye Gagnessiry Ndiaye and Christin Seifert
License
This work is licensed under the Creative Commons Attribution 3.0 Unported License https
Step1: The following image shows a gray square on different backgrounds. The inner square always has the same color (84% gray), and is successively shown on 0%, 50%, 100%, and 150% gray background patches. Note, how the inner squares are perceived differently (square on the right looks considerably darker than the square on the left).
Suggestion
Step2: Chevreul Illusion
The following images visualizes the Chevreul illusion. We use a sequence of gray bands (200%, 150%, 100%, 75% and 50% gray). One band has a uniform gray value. When putting the bands next to each other, the gray values seem to be darker at the edges. This is due to lateral inhibition, a feature of our visual system that increases edge contrasts and helps us to better detect outlines of shapes.
Step3: Contrast Crispening
The following images show the gray strips on a gray-scale background. Left image | Python Code:
import numpy as np
import matplotlib.pyplot as plt
Explanation: Contrast Effects
Authors
Ndèye Gagnessiry Ndiaye and Christin Seifert
License
This work is licensed under the Creative Commons Attribution 3.0 Unported License https://creativecommons.org/licenses/by/3.0/
This notebook illustrates 3 contrast effects:
- Simultaneous Brightness Contrast
- Chevreul Illusion
- Contrast Crispening
Simultaneous Brightness Contrast
Simultaneous Brightness Contrast is the general effect where a gray patch placed on a dark background looks lighter than the same gray patch on a light background (foreground and background affect each other). The effect is based on lateral inhibition.
Also see the following video as an example:
https://www.youtube.com/watch?v=ZYh4SxE7Xp8
End of explanation
# defining the inner square as 3x3 array with an initial gray value
inner_gray_value = 120
inner_square = np.full((3,3), inner_gray_value, np.double)
# defining the outer squares and overlaying the inner square
a = np.zeros((5,5), np.double)
a[1:4, 1:4] = inner_square
b = np.full((5,5), 50, np.double)
b[1:4, 1:4] = inner_square
c = np.full((5,5), 100, np.double)
c[1:4, 1:4] = inner_square
d = np.full((5,5), 150, np.double)
d[1:4, 1:4] = inner_square
simultaneous=np.hstack((a,b,c,d))
im=plt.imshow(simultaneous, cmap='gray',interpolation='nearest',vmin=0, vmax=255)
#plt.rcParams["figure.figsize"] = (70,10)
plt.axis('off')
plt.colorbar(im, orientation='horizontal')
plt.show()
Explanation: The following image shows a gray square on different backgrounds. The inner square always has the same color (84% gray), and is successively shown on 0%, 50%, 100%, and 150% gray background patches. Note, how the inner squares are perceived differently (square on the right looks considerably darker than the square on the left).
Suggestion: Change the gray values of the inner and outer squares and see what happens.
End of explanation
e = np.full((9,5), 200, np.double)
f = np.full((9,5), 150, np.double)
g = np.full((9,5), 100, np.double)
h = np.full((9,5), 75, np.double)
i = np.full((9,5), 50, np.double)
image1= np.hstack((e,f,g,h,i))
e[:,4] = 255
f[:,4] = 255
g[:,4] = 255
h[:,4] = 255
i[:,4] = 255
image2=np.hstack((e,f,g,h,i))
plt.subplot(1,2,1)
plt.imshow(image1, cmap='gray',vmin=0, vmax=255,interpolation='nearest',aspect=4)
plt.title('Bands')
plt.axis('off')
plt.subplot(1,2,2)
plt.imshow(image2, cmap='gray',vmin=0, vmax=255,interpolation='nearest',aspect=4)
plt.title('Bands with white breaks')
plt.axis('off')
plt.show()
Explanation: Chevreul Illusion
The following images visualizes the Chevreul illusion. We use a sequence of gray bands (200%, 150%, 100%, 75% and 50% gray). One band has a uniform gray value. When putting the bands next to each other, the gray values seem to be darker at the edges. This is due to lateral inhibition, a feature of our visual system that increases edge contrasts and helps us to better detect outlines of shapes.
End of explanation
strips = np.linspace( 0, 255, 10, np.double)
strips = strips.reshape((-1, 1))
M = np.linspace( 255, 0, 10, np.double)
n = np.ones((20, 10), np.double)
background = n[:,:]*M
background[5:15,::2] = strips
without_background = np.full((20,10), 255, np.double)
without_background[5:15,::2] = strips
plt.subplot(1,2,1)
plt.imshow(background, cmap='gray',vmin=0, vmax=255,interpolation='nearest')
plt.tick_params(axis='both', left='off', top='off', right='off', bottom='off', labelleft='off', labeltop='off', labelright='off', labelbottom='off')
plt.subplot(1,2,2)
plt.imshow(without_background, cmap='gray',vmin=0, vmax=255,interpolation='nearest')
plt.tick_params(axis='both', left='off', top='off', right='off', bottom='off', labelleft='off', labeltop='off', labelright='off', labelbottom='off')
plt.show()
Explanation: Contrast Crispening
The following images show the gray strips on a gray-scale background. Left image: All vertical gray bands are the same. Note how different parts of the vertical gray bands are enhanced (i.e., difference better perceivable) depending on the gray value of the background. In fact, differences are enhanced when the gray value in the foreground is closer to the gray value in the background. On the right, the same vertical bands are shown but without the background. In this image you can (perceptually) verify that all vertical gray bands are indeed the same.
End of explanation |
1,190 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simulation of a Noddy history and visualisation of output
This example shows how the module pynoddy.history can be used to compute the model, and how simple visualisations can be generated with pynoddy.output.
Step1: Compute the model
The simplest way to perform the Noddy simulation through Python is simply to call the executable. One way that should be fairly platform independent is to use Python's own subprocess module
Step2: For convenience, the model computation is wrapped into a Python function in pynoddy
Step3: Note
Step4: The object contains the calculated geology blocks and some additional information on grid spacing, model extent, etc. For example
Step5: Plotting sections through the model
The NoddyOutput class has some basic methods for the visualisation of the generated models. To plot sections through the model
Step6: Export model to VTK
A simple possibility to visualise the modeled results in 3-D is to export the model to a VTK file and then to visualise it with a VTK viewer, for example Paraview. To export the model, simply use | Python Code:
from IPython.core.display import HTML
css_file = 'pynoddy.css'
HTML(open(css_file, "r").read())
%matplotlib inline
# Basic settings
import sys, os
import subprocess
sys.path.append("../..")
# Now import pynoddy
import pynoddy
import importlib
importlib.reload(pynoddy)
import pynoddy.output
import pynoddy.history
# determine path of repository to set paths corretly below
repo_path = os.path.realpath('../..')
Explanation: Simulation of a Noddy history and visualisation of output
This example shows how the module pynoddy.history can be used to compute the model, and how simple visualisations can be generated with pynoddy.output.
End of explanation
# Change to sandbox directory to store results
os.chdir(os.path.join(repo_path, 'sandbox'))
# Path to exmaple directory in this repository
example_directory = os.path.join(repo_path,'examples')
# Compute noddy model for history file
history_file = 'simple_two_faults.his'
history = os.path.join(example_directory, history_file)
output_name = 'noddy_out'
# call Noddy
# NOTE: Make sure that the noddy executable is accessible in the system!!
print(subprocess.Popen(['noddy.exe', history, output_name, 'BLOCK'],
shell=False, stderr=subprocess.PIPE,
stdout=subprocess.PIPE).stdout.read())
#
Explanation: Compute the model
The simplest way to perform the Noddy simulation through Python is simply to call the executable. One way that should be fairly platform independent is to use Python's own subprocess module:
End of explanation
pynoddy.compute_model(history, output_name)
Explanation: For convenience, the model computation is wrapped into a Python function in pynoddy:
End of explanation
N1 = pynoddy.output.NoddyOutput(output_name)
Explanation: Note: The Noddy call from Python is, to date, calling Noddy through the subprocess function. In a future implementation, this call could be substituted with a full wrapper for the C-functions written in Python. Therefore, using the member function compute_model is not only easier, but also the more "future-proof" way to compute the Noddy model.
Loading Noddy output files
Noddy simulations produce a variety of different output files, depending on the type of simulation. The basic output is the geological model. Additional output files can contain geophysical responses, etc.
Loading the output files is simplified with a class class container that reads all relevant information and provides simple methods for plotting, model analysis, and export. To load the output information into a Python object:
End of explanation
print("The model has an extent of %.0f m in x-direction, with %d cells of width %.0f m" %
(N1.extent_x, N1.nx, N1.delx))
Explanation: The object contains the calculated geology blocks and some additional information on grid spacing, model extent, etc. For example:
End of explanation
N1.plot_section('y', figsize = (5,3))
Explanation: Plotting sections through the model
The NoddyOutput class has some basic methods for the visualisation of the generated models. To plot sections through the model:
End of explanation
N1.export_to_vtk()
Explanation: Export model to VTK
A simple possibility to visualise the modeled results in 3-D is to export the model to a VTK file and then to visualise it with a VTK viewer, for example Paraview. To export the model, simply use:
End of explanation |
1,191 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Single Star with Spots
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
Step1: As always, let's do imports and initialize a logger and a new bundle.
Step2: Adding Spots
Let's add one spot to our star. Since there is only one star, the spot will automatically attach without needing to provide component (as is needed in the binary with spots example
Step3: Spot Parameters
A spot is defined by the colatitude and longitude of its center, its angular radius, and the ratio of temperature of the spot to the local intrinsic value.
Step4: The 'colat' parameter defines the colatitude on the star measured from its North (spin) Pole. The 'long' parameter measures the longitude of the spot - with longitude = 0 being defined as pointing towards the observer at t0 for a single star. See the spots tutorial for more details.
Step5: If we set t0 to 5 instead of zero, then the spot will cross the line-of-sight at t=5 (since the spot's longitude is 0).
Step6: And if we change the inclination to 0, we'll be looking at the north pole of the star. This clearly illustrates the right-handed rotation of the star. At time=t0=5 the spot will now be pointing in the negative y-direction. | Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
Explanation: Single Star with Spots
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_star()
Explanation: As always, let's do imports and initialize a logger and a new bundle.
End of explanation
b.add_spot(radius=30, colat=80, long=0, relteff=0.9)
Explanation: Adding Spots
Let's add one spot to our star. Since there is only one star, the spot will automatically attach without needing to provide the component (as is needed in the binary with spots example).
End of explanation
print(b['spot'])
Explanation: Spot Parameters
A spot is defined by the colatitude and longitude of its center, its angular radius, and the ratio of temperature of the spot to the local intrinsic value.
End of explanation
times = np.linspace(0, 10, 11)
b.set_value('period', 10)
b.add_dataset('mesh', times=times, columns=['teffs'])
b.run_compute(distortion_method='rotstar', irrad_method='none')
afig, mplfig = b.plot(x='us', y='vs', fc='teffs',
animate=True, save='single_spots_1.gif', save_kwargs={'writer': 'imagemagick'})
Explanation: The 'colat' parameter defines the colatitude on the star measured from its North (spin) Pole. The 'long' parameter measures the longitude of the spot - with longitude = 0 being defined as pointing towards the observer at t0 for a single star. See the spots tutorial for more details.
End of explanation
b.set_value('t0', 5)
b.run_compute(distortion_method='rotstar', irrad_method='none')
afig, mplfig = b.plot(x='us', y='vs', fc='teffs',
animate=True, save='single_spots_2.gif', save_kwargs={'writer': 'imagemagick'})
Explanation: If we set t0 to 5 instead of zero, then the spot will cross the line-of-sight at t=5 (since the spot's longitude is 0).
End of explanation
b.set_value('incl', 0)
b.run_compute(distortion_method='rotstar', irrad_method='none')
afig, mplfig = b.plot(x='us', y='vs', fc='teffs',
animate=True, save='single_spots_3.gif', save_kwargs={'writer': 'imagemagick'})
Explanation: And if we change the inclination to 0, we'll be looking at the north pole of the star. This clearly illustrates the right-handed rotation of the star. At time=t0=5 the spot will now be pointing in the negative y-direction.
End of explanation |
1,192 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to train a Keras model on TFRecord files
Author
Step1: We want a bigger batch size as our data is not balanced.
Step2: Load the data
Step3: Decoding the data
The images have to be converted to tensors so that they will be valid inputs to our model.
As images utilize an RGB scale, we specify 3 channels.
We also reshape our data so that all of the images will be the same shape.
Step4: As we load in our data, we need both our X and our Y. The X is our image; the model
will find features and patterns in our image dataset. We want to predict Y, the
probability that the lesion in the image is malignant. We will go through our TFRecords
and parse out the image and the target values.
Step5: Define loading methods
Our dataset is not ordered in any meaningful way, so the order can be ignored when
loading our dataset. By ignoring the order and reading files as soon as they come in, it
will take a shorter time to load the data.
Step6: We define the following function to get our different datasets.
Step7: Visualize input images
Step8: Building our model
Define callbacks
The following function allows for the model to change the learning rate as it runs each
epoch.
We can use callbacks to stop training when there are no improvements in the model. At the
end of the training process, the model will restore the weights of its best iteration.
Step9: Build our base model
Transfer learning is a great way to reap the benefits of a well-trained model without
having the train the model ourselves. For this notebook, we want to import the Xception
model. A more in-depth analysis of transfer learning can be found
here.
We do not want our metric to be accuracy because our data is imbalanced. For our
example, we will be looking at the area under a ROC curve.
Step10: Train the model
Step11: Predict results
We'll use our model to predict results for our test dataset images. Values closer to 0
are more likely to be benign and values closer to 1 are more likely to be malignant. | Python Code:
import tensorflow as tf
from functools import partial
import matplotlib.pyplot as plt
try:
tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect()
print("Device:", tpu.master())
strategy = tf.distribute.TPUStrategy(tpu)
except:
strategy = tf.distribute.get_strategy()
print("Number of replicas:", strategy.num_replicas_in_sync)
Explanation: How to train a Keras model on TFRecord files
Author: Amy MiHyun Jang<br>
Date created: 2020/07/29<br>
Last modified: 2020/08/07<br>
Description: Loading TFRecords for computer vision models.
Introduction + Set Up
TFRecords store a sequence of binary records, read linearly. They are useful format for
storing data because they can be read efficiently. Learn more about TFRecords
here.
We'll explore how we can easily load in TFRecords for our melanoma classifier.
End of explanation
AUTOTUNE = tf.data.AUTOTUNE
GCS_PATH = "gs://kds-b38ce1b823c3ae623f5691483dbaa0f0363f04b0d6a90b63cf69946e"
BATCH_SIZE = 64
IMAGE_SIZE = [1024, 1024]
Explanation: We want a bigger batch size as our data is not balanced.
End of explanation
FILENAMES = tf.io.gfile.glob(GCS_PATH + "/tfrecords/train*.tfrec")
split_ind = int(0.9 * len(FILENAMES))
TRAINING_FILENAMES, VALID_FILENAMES = FILENAMES[:split_ind], FILENAMES[split_ind:]
TEST_FILENAMES = tf.io.gfile.glob(GCS_PATH + "/tfrecords/test*.tfrec")
print("Train TFRecord Files:", len(TRAINING_FILENAMES))
print("Validation TFRecord Files:", len(VALID_FILENAMES))
print("Test TFRecord Files:", len(TEST_FILENAMES))
Explanation: Load the data
End of explanation
def decode_image(image):
image = tf.image.decode_jpeg(image, channels=3)
image = tf.cast(image, tf.float32)
image = tf.reshape(image, [*IMAGE_SIZE, 3])
return image
Explanation: Decoding the data
The images have to be converted to tensors so that they will be valid inputs to our model.
As images utilize an RGB scale, we specify 3 channels.
We also reshape our data so that all of the images will be the same shape.
End of explanation
def read_tfrecord(example, labeled):
tfrecord_format = (
{
"image": tf.io.FixedLenFeature([], tf.string),
"target": tf.io.FixedLenFeature([], tf.int64),
}
if labeled
else {"image": tf.io.FixedLenFeature([], tf.string),}
)
example = tf.io.parse_single_example(example, tfrecord_format)
image = decode_image(example["image"])
if labeled:
label = tf.cast(example["target"], tf.int32)
return image, label
return image
Explanation: As we load in our data, we need both our X and our Y. The X is our image; the model
will find features and patterns in our image dataset. We want to predict Y, the
probability that the lesion in the image is malignant. We will go through our TFRecords
and parse out the image and the target values.
End of explanation
def load_dataset(filenames, labeled=True):
ignore_order = tf.data.Options()
ignore_order.experimental_deterministic = False # disable order, increase speed
dataset = tf.data.TFRecordDataset(
filenames
) # automatically interleaves reads from multiple files
dataset = dataset.with_options(
ignore_order
) # uses data as soon as it streams in, rather than in its original order
dataset = dataset.map(
partial(read_tfrecord, labeled=labeled), num_parallel_calls=AUTOTUNE
)
# returns a dataset of (image, label) pairs if labeled=True or just images if labeled=False
return dataset
Explanation: Define loading methods
Our dataset is not ordered in any meaningful way, so the order can be ignored when
loading our dataset. By ignoring the order and reading files as soon as they come in, it
will take a shorter time to load the data.
End of explanation
def get_dataset(filenames, labeled=True):
dataset = load_dataset(filenames, labeled=labeled)
dataset = dataset.shuffle(2048)
dataset = dataset.prefetch(buffer_size=AUTOTUNE)
dataset = dataset.batch(BATCH_SIZE)
return dataset
Explanation: We define the following function to get our different datasets.
End of explanation
train_dataset = get_dataset(TRAINING_FILENAMES)
valid_dataset = get_dataset(VALID_FILENAMES)
test_dataset = get_dataset(TEST_FILENAMES, labeled=False)
image_batch, label_batch = next(iter(train_dataset))
def show_batch(image_batch, label_batch):
plt.figure(figsize=(10, 10))
for n in range(25):
ax = plt.subplot(5, 5, n + 1)
plt.imshow(image_batch[n] / 255.0)
if label_batch[n]:
plt.title("MALIGNANT")
else:
plt.title("BENIGN")
plt.axis("off")
show_batch(image_batch.numpy(), label_batch.numpy())
Explanation: Visualize input images
End of explanation
initial_learning_rate = 0.01
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate, decay_steps=20, decay_rate=0.96, staircase=True
)
checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
"melanoma_model.h5", save_best_only=True
)
early_stopping_cb = tf.keras.callbacks.EarlyStopping(
patience=10, restore_best_weights=True
)
Explanation: Building our model
Define callbacks
The following function allows for the model to change the learning rate as it runs each
epoch.
We can use callbacks to stop training when there are no improvements in the model. At the
end of the training process, the model will restore the weights of its best iteration.
End of explanation
def make_model():
base_model = tf.keras.applications.Xception(
input_shape=(*IMAGE_SIZE, 3), include_top=False, weights="imagenet"
)
base_model.trainable = False
inputs = tf.keras.layers.Input([*IMAGE_SIZE, 3])
x = tf.keras.applications.xception.preprocess_input(inputs)
x = base_model(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dense(8, activation="relu")(x)
x = tf.keras.layers.Dropout(0.7)(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule),
loss="binary_crossentropy",
metrics=tf.keras.metrics.AUC(name="auc"),
)
return model
Explanation: Build our base model
Transfer learning is a great way to reap the benefits of a well-trained model without
having to train the model ourselves. For this notebook, we want to import the Xception
model. A more in-depth analysis of transfer learning can be found
here.
We do not want our metric to be accuracy because our data is imbalanced. For our
example, we will be looking at the area under a ROC curve.
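As an optional sanity check (not part of the original notebook), the same metric can be recomputed on the validation split with scikit-learn after the model defined above has been trained in the next cell:
```python
import numpy as np
from sklearn.metrics import roc_auc_score

y_true, y_score = [], []
for images, labels in valid_dataset:
    y_true.append(labels.numpy())
    y_score.append(model.predict(images).ravel())
print("validation ROC AUC:", roc_auc_score(np.concatenate(y_true), np.concatenate(y_score)))
```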
End of explanation
with strategy.scope():
model = make_model()
history = model.fit(
train_dataset,
epochs=2,
validation_data=valid_dataset,
callbacks=[checkpoint_cb, early_stopping_cb],
)
Explanation: Train the model
End of explanation
def show_batch_predictions(image_batch):
plt.figure(figsize=(10, 10))
for n in range(25):
ax = plt.subplot(5, 5, n + 1)
plt.imshow(image_batch[n] / 255.0)
img_array = tf.expand_dims(image_batch[n], axis=0)
plt.title(model.predict(img_array)[0])
plt.axis("off")
image_batch = next(iter(test_dataset))
show_batch_predictions(image_batch)
Explanation: Predict results
We'll use our model to predict results for our test dataset images. Values closer to 0
are more likely to be benign and values closer to 1 are more likely to be malignant.
End of explanation |
1,193 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
In this notebook, we will show you how to evaluate the Levenshtein ratio between the training phrases of two intents.
Prerequisites
Ensure you have a GCP Service Account key with the Dialogflow API Admin privileges assigned to it.
Step1: Imports
Step2: User Inputs
In the next section, we will collect runtime variables needed to execute this notebook.
This should be the only cell of the notebook you need to edit in order for this notebook to run.
Getting an Intent from your existing DFCX agent requires the following information
Step3: Extract Intents from Agent
First, we will instantiate an Intents class object using creds_path.
We can use the UUID or human-readable name to find and
reference the intents you wish to analyze.
In this example, we are extracting all intents
associated with our agent, using the pre-defined Intent human-readable names to
pinpoint the specific Intents we wish to compare.
Step4: Invoke Analysis Tool
Next, we will run our analysis tool which will start calculating the distances between Training Phrases in the Intents.
The calculation is based on Levenshtein Ratio and Distance, which you can read more about in the link provided.
The difference between intent_key and intent_comparator is in the structure of the output.
- intent_key will serve as a unique key in the object that is returned.
- intent_comparator may appear multiple times, as each key can reference every comparator with a similarity ratio over the designated threshold.
- In other words, there is a one-to-many relationship between intent_key and intent_comparator.
NOTE - This process may take a while (~ 5-15 minutes) especially for larger intents. | Python Code:
# If you haven't already, make sure you install the `dfcx-scrapi` library
!pip install dfcx-scrapi
Explanation: Introduction
In this notebook, we will show you how to evaluate the Levenshtein ratio between the training phrases of two intents.
Prerequisites
Ensure you have a GCP Service Account key with the Dialogflow API Admin privileges assigned to it.
End of explanation
from dfcx_scrapi.core.intents import Intents
from dfcx_scrapi.tools.levenshtein import Levenshtein
Explanation: Imports
End of explanation
creds_path = '<YOUR_CREDS_FILE>'
agent_id = '<YOUR_AGENT_ID>'
intent_1 = 'My Key Intent 1'
intent_2 = 'My Comparator Intent 2'
threshold = 0.75
Explanation: User Inputs
In the next section, we will collect runtime variables needed to execute this notebook.
This should be the only cell of the notebook you need to edit in order for this notebook to run.
Getting an Intent from your existing DFCX agent requires the following information:
- agent_id, which is your GCP agent ID.
- creds_path, path to your service account credentials file.
- intent_1, the Display Name of the Intent to use as your key
- intent_2, the Display Name of the Intent to use as your comparator
- threshold, determines the level of similarity required in order to be included in the output. Default is .75, or 75% similar.
End of explanation
i = Intents(creds_path)
intentsMap = i.get_intents_map(agent_id=agent_id, reverse=True)
intent1 = i.get_intent(intentsMap[intent_1])
intent2 = i.get_intent(intentsMap[intent_2])
Explanation: Extract Intents from Agent
First, we will instantiate an Intents class object using creds_path.
We can use the UUID or human-readable name to find and
reference the intents you wish to analyze.
In this example, we are extracting all intents
associated with our agent, using the pre-defined Intent human-readable names to
pinpoint the specific Intents we wish to compare.
End of explanation
th = threshold
result = Levenshtein.calc_tp_distances(intent_key=intent1, intent_comparator=intent2, threshold=th, silent=False)
Explanation: Invoke Analysis Tool
Next, we will run our analysis tool which will start calculating the distances between Training Phrases in the Intents.
The calculation is based on Levenshtein Ratio and Distance, which you can read more about in the link provided.
The difference between intent_key and intent_comparator is in the structure of the output.
- intent_key will serve as a unique key in the object that is returned.
- intent_comparator may appear multiple times, as each key can reference every comparator with a similarity ratio over the designated threshold.
- In other words, there is a one-to-many relationship between intent_key and intent_comparator.
NOTE - This process may take a while (~ 5-15 minutes) especially for larger intents.
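To build intuition for what a similarity threshold like 0.75 means, here is a rough stand-in using Python's difflib. This is illustrative only; it is not the exact ratio computed by the dfcx-scrapi Levenshtein utilities, and the phrases are made up:
```python
from difflib import SequenceMatcher

a = "i want to check my order status"
b = "i want to check on my order"
print(SequenceMatcher(None, a, b).ratio())  # a value between 0 and 1; higher means more similar
```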
End of explanation |
1,194 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PyBroMo 4. Two-state dynamics - Static smFRET simulation
<small><i>
This notebook is part of <a href="http
Step1: Define populations
We assume a $\gamma = 0.7$ and two populations, one with $E_{PR}=0.75$ and the other $E_{PR}=0.4$
Step2: The corrected $E$ for the two populations are
Step3: The simulation takes the uncorrected $E_{PR}$ as input.
We want to simulate a second population that has measured brightness scaling as if the difference was
only due to the $\gamma$ factor. Using the definitions $\Lambda$ and $\Lambda_\gamma$ from
(Ingargiola 2017),
we can use the relation
Step4: Create smFRET data-files
Create a file for storing timestamps
Here we load a diffusion simulation opening a file to save
timstamps in write mode. Use 'a' (i.e. append) to keep
previously simulated timestamps for the given diffusion.
Step5: Simulate timestamps of smFRET
We want to simulate two separate smFRET files representing two static populations.
We start definint the simulation parameters for population 1 with the following syntax
Step6: We can now define population 2
Step7: Finally, we also define a static mixture of the two populations
Step8: Simulate static population 1
Population 1
Step9: Run the simualtion
Step10: Save simulation to a smFRET Photon-HDF5 file
Step11: Simulate static population 2
Population 2
Step12: Simulate static mixture
Static mixture
Step13: Burst analysis
The generated Photon-HDF5 files can be analyzed by any smFRET burst
analysis program. Here we show an example using the opensource
FRETBursts program | Python Code:
%matplotlib inline
from pathlib import Path
import numpy as np
import tables
import matplotlib.pyplot as plt
import seaborn as sns
import pybromo as pbm
import phconvert as phc
print('Numpy version:', np.__version__)
print('PyTables version:', tables.__version__)
print('PyBroMo version:', pbm.__version__)
print('phconvert version:', phc.__version__)
SIM_PATH = 'data/'
Explanation: PyBroMo 4. Two-state dynamics - Static smFRET simulation
<small><i>
This notebook is part of <a href="http://tritemio.github.io/PyBroMo" target="_blank">PyBroMo</a> a
python-based single-molecule Brownian motion diffusion simulator
that simulates confocal smFRET
experiments.
</i></small>
Overview
In this notebook we generate single-population static smFRET data files
from the same diffusion trajectories. These files are needed by the next notebook
to generate smFRET data with 2-state dynamics.
Loading the software
Import all the relevant libraries:
End of explanation
Epr1 = 0.75
Epr2 = 0.4
Explanation: Define populations
We assume a $\gamma = 0.7$ and two populations, one with $E_{PR}=0.75$ and the other $E_{PR}=0.4$
End of explanation
gamma = 0.7
E1 = Epr1 /(Epr1 * (1 - gamma) + gamma)
E2 = Epr2 /(Epr2 * (1 - gamma) + gamma)
E1, E2
Explanation: The corrected $E$ for the two populations are:
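As a quick sanity check (a sketch that simply inverts the relation used above), the uncorrected values can be recovered from the corrected ones:
# E = Epr / (Epr*(1 - gamma) + gamma)  =>  Epr = gamma*E / (1 - E*(1 - gamma))
Epr1_check = gamma * E1 / (1 - E1 * (1 - gamma))
Epr2_check = gamma * E2 / (1 - E2 * (1 - gamma))
assert np.allclose([Epr1_check, Epr2_check], [Epr1, Epr2])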
End of explanation
Λ1 = 200e3 # kcps, the detected (i.e. uncorrected) peak emission rate for population 1
Λγ = Λ1 * Epr1 / E1
Λ2 = Λγ * E2 / Epr2
Λ2
Λ2 = np.round(Λ2, -3)
Λ2
Explanation: The simulation takes the uncorrected $E_{PR}$ as input.
We want to simulate a second population that has measured brightness scaling as if the difference was
only due to the $\gamma$ factor. Using the definitions $\Lambda$ and $\Lambda_\gamma$ from
(Ingargiola 2017),
we can use the relation:
$$\frac{E}{E_{PR}} = \frac{\Lambda}{\Lambda_\gamma}$$
Solving for $\Lambda_\gamma$ or $\Lambda$ we get:
$$ \Lambda_\gamma = \Lambda\frac{E_{PR}}{E}$$
$${\Lambda} = {\Lambda_\gamma}\frac{E}{E_{PR}} $$
Since $\Lambda_\gamma$ is gamma-corrected, it does not depend on $E$. We can compute it from
the parameters of population 1 and then use it for finding $\Lambda$ for population 2:
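A hedged consistency check, using only quantities already defined above: both populations should share the same gamma-corrected peak rate $\Lambda_\gamma$.
# After rounding Λ2 to the nearest kcps the agreement is only approximate
assert np.isclose(Λγ, Λ2 * Epr2 / E2, rtol=1e-2)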
End of explanation
S = pbm.ParticlesSimulation.from_datafile('0eb9', mode='a', path=SIM_PATH)
S.particles.diffusion_coeff_counts
#S = pbm.ParticlesSimulation.from_datafile('44dc', mode='a', path=SIM_PATH)
Explanation: Create smFRET data-files
Create a file for storing timestamps
Here we load a diffusion simulation opening a file to save
timestamps in write mode. Use 'a' (i.e. append) to keep
previously simulated timestamps for the given diffusion.
End of explanation
params1 = dict(
em_rates = (Λ1,), # Peak emission rates (cps) for each population (D+A)
E_values = (Epr1,), # FRET efficiency for each population
num_particles = (35,), # Number of particles in each population
bg_rate_d = 900, # Poisson background rate (cps) Donor channel
bg_rate_a = 600, # Poisson background rate (cps) Acceptor channel
)
Explanation: Simulate timestamps of smFRET
We want to simulate two separate smFRET files representing two static populations.
We start defining the simulation parameters for population 1 with the following syntax:
End of explanation
params2 = dict(
em_rates = (Λ2,), # Peak emission rates (cps) for each population (D+A)
E_values = (Epr2,), # FRET efficiency for each population
num_particles = (35,), # Number of particles in each population
bg_rate_d = 900, # Poisson background rate (cps) Donor channel
bg_rate_a = 600, # Poisson background rate (cps) Acceptor channel
)
Explanation: We can now define population 2:
End of explanation
params_mix = dict(
em_rates = (Λ1, Λ2), # Peak emission rates (cps) for each population (D+A)
E_values = (Epr1, Epr2), # FRET efficiency for each population
num_particles = (20, 15), # Number of particles in each population
bg_rate_d = 900, # Poisson background rate (cps) Donor channel
bg_rate_a = 600, # Poisson background rate (cps) Acceptor channel
)
Explanation: Finally, we also define a static mixture of the two populations:
End of explanation
mix_sim = pbm.TimestapSimulation(S, **params1)
mix_sim.summarize()
Explanation: Simulate static population 1
Population 1: Create the object that will run the simulation and print a summary:
End of explanation
rs = np.random.RandomState(1234)
mix_sim.run(rs=rs, overwrite=False, skip_existing=True)
Explanation: Run the simulation:
End of explanation
mix_sim.save_photon_hdf5(identity=dict(author='Antonino Ingargiola',
author_affiliation='UCLA'))
Explanation: Save simulation to a smFRET Photon-HDF5 file:
End of explanation
mix_sim = pbm.TimestapSimulation(S, **params2)
mix_sim.summarize()
rs = np.random.RandomState(1234)
mix_sim.run(rs=rs, overwrite=False, skip_existing=True)
mix_sim.save_photon_hdf5(identity=dict(author='Antonino Ingargiola',
author_affiliation='UCLA'))
Explanation: Simulate static population 2
Population 2: Create the object that will run the simulation and print a summary:
End of explanation
mix_sim = pbm.TimestapSimulation(S, **params_mix)
mix_sim.summarize()
rs = np.random.RandomState(1234)
mix_sim.run(rs=rs, overwrite=False, skip_existing=True)
mix_sim.save_photon_hdf5(identity=dict(author='Antonino Ingargiola',
author_affiliation='UCLA'))
!rsync -av --exclude 'pybromo_*.hdf5' /mnt/archive/Antonio/pybromo /mnt/wAntonio/dd
Explanation: Simulate static mixture
Static mixture: Create the object that will run the simulation and print a summary:
End of explanation
import fretbursts as fb
filepath = list(Path(SIM_PATH).glob('smFRET_*'))
filepath
d = fb.loader.photon_hdf5(str(filepath[0]))
d
d.A_em
fb.dplot(d, fb.timetrace);
d.calc_bg(fun=fb.bg.exp_fit, tail_min_us='auto', F_bg=1.7, time_s=5)
fb.dplot(d, fb.timetrace_bg)
d.burst_search(F=7)
d.num_bursts
ds = d.select_bursts(fb.select_bursts.size, th1=20)
ds.num_bursts
with plt.rc_context({#'font.size': 10,
#'savefig.dpi': 200,
'figure.dpi': 150}):
for i in range(3):
fig, ax = plt.subplots(figsize=(100, 3))
fb.dplot(d, fb.timetrace, binwidth=0.5e-3, tmin=i*10, tmax=(i+1)*10, bursts=True,
plot_style=dict(lw=1), ax=ax);
ax.set_xlim(i*10, (i+1)*10);
display(fig)
plt.close(fig)
fb.dplot(ds, fb.hist_fret, pdf=False)
plt.axvline(0.4, color='k', ls='--');
fb.bext.burst_data(ds)
fb.dplot(d, fb.hist_size)
fb.dplot(d, fb.hist_width)
Explanation: Burst analysis
The generated Photon-HDF5 files can be analyzed by any smFRET burst
analysis program. Here we show an example using the opensource
FRETBursts program:
End of explanation |
1,195 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1 toc-item"><a href="#Summary" data-toc-modified-id="Summary-1"><span class="toc-item-num">1 </span>Summary</a></div><div class="lev1 toc-item"><a href="#Version-Control" data-toc-modified-id="Version-Control-2"><span class="toc-item-num">2 </span>Version Control</a></div><div class="lev1 toc-item"><a href="#Change-Log" data-toc-modified-id="Change-Log-3"><span class="toc-item-num">3 </span>Change Log</a></div><div class="lev1 toc-item"><a href="#Setup" data-toc-modified-id="Setup-4"><span class="toc-item-num">4 </span>Setup</a></div><div class="lev1 toc-item"><a href="#Secure-Credentials-File" data-toc-modified-id="Secure-Credentials-File-5"><span class="toc-item-num">5 </span>Secure Credentials File</a></div><div class="lev1 toc-item"><a href="#Inspect-the-XML-returned" data-toc-modified-id="Inspect-the-XML-returned-6"><span class="toc-item-num">6 </span>Inspect the XML returned</a></div><div class="lev3 toc-item"><a href="#Data-inspection-(root)" data-toc-modified-id="Data-inspection-(root)-601"><span class="toc-item-num">6.0.1 </span>Data inspection (root)</a></div><div class="lev3 toc-item"><a href="#Get-data--(token)" data-toc-modified-id="Get-data--(token)-602"><span class="toc-item-num">6.0.2 </span>Get data (token)</a></div><div class="lev1 toc-item"><a href="#Client" data-toc-modified-id="Client-7"><span class="toc-item-num">7 </span>Client</a></div>
# Summary
* Master post for the blog series that holds all the links related to making web service calls to Eoddata.com. Overview of the web service can be found [here](http
Step1: Change Log
Date Created
Step2: [Top]
Secure Credentials File
Create credentials file for later usage. The file will have permissions created so only the current user can access the file. The following SO post was followed.
The following directory will be created if it doesn't exist
Step3: [Top]
Inspect the XML returned
Step4: Data inspection (root)
Step5: Get data (token)
Step6: [Top]
Client | Python Code:
%run ../../code/version_check.py
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Summary" data-toc-modified-id="Summary-1"><span class="toc-item-num">1 </span>Summary</a></div><div class="lev1 toc-item"><a href="#Version-Control" data-toc-modified-id="Version-Control-2"><span class="toc-item-num">2 </span>Version Control</a></div><div class="lev1 toc-item"><a href="#Change-Log" data-toc-modified-id="Change-Log-3"><span class="toc-item-num">3 </span>Change Log</a></div><div class="lev1 toc-item"><a href="#Setup" data-toc-modified-id="Setup-4"><span class="toc-item-num">4 </span>Setup</a></div><div class="lev1 toc-item"><a href="#Secure-Credentials-File" data-toc-modified-id="Secure-Credentials-File-5"><span class="toc-item-num">5 </span>Secure Credentials File</a></div><div class="lev1 toc-item"><a href="#Inspect-the-XML-returned" data-toc-modified-id="Inspect-the-XML-returned-6"><span class="toc-item-num">6 </span>Inspect the XML returned</a></div><div class="lev3 toc-item"><a href="#Data-inspection-(root)" data-toc-modified-id="Data-inspection-(root)-601"><span class="toc-item-num">6.0.1 </span>Data inspection (root)</a></div><div class="lev3 toc-item"><a href="#Get-data--(token)" data-toc-modified-id="Get-data--(token)-602"><span class="toc-item-num">6.0.2 </span>Get data (token)</a></div><div class="lev1 toc-item"><a href="#Client" data-toc-modified-id="Client-7"><span class="toc-item-num">7 </span>Client</a></div>
# Summary
* Master post for the blog series that holds all the links related to making web service calls to Eoddata.com. Overview of the web service can be found [here](http://ws.eoddata.com/data.asmx)
* Download the [class definition file](https://adriantorrie.github.io/downloads/code/eoddata.py) for an easy to use client, which is demonstrated below
* This post shows you how to create a secure credentials file to hold the username and password so you don't have to keep entering it, and will allow for automation later.
* A quick overview is given below of establishing a session using the `requests` module, and parsing the xml response using `xml.etree.cElementTree`. Then a quick inspection of the objects created follows.
The following links were used to help get these things working.
* http://stackoverflow.com/a/17378332/893766
* http://stackoverflow.com/a/1912483/893766
* hidden password entry: https://docs.python.org/2/library/getpass.html
# Version Control
End of explanation
%run ../../code/eoddata.py
from getpass import getpass
import json
import os
import os.path
import requests as r
import stat
import xml.etree.cElementTree as etree
ws = 'http://ws.eoddata.com/data.asmx'
ns='http://ws.eoddata.com/Data'
session = r.Session()
username = getpass()
password = getpass()
Explanation: Change Log
Date Created: 2017-03-25
Date of Change Change Notes
-------------- ----------------------------------------------------------------
2017-03-25 Initial draft
2017-04-02 Added "file saved: <location>" output
[Top]
Setup
End of explanation
# gather credentials
credentials = {'username': username, 'password': password}
# set filename variables
credentials_dir = os.path.join(os.path.expanduser("~"), '.eoddata')
credentials_file_name = 'credentials'
credentials_path = os.path.join(credentials_dir, credentials_file_name)
# set security variables
flags = os.O_WRONLY | os.O_CREAT | os.O_EXCL # Refer to "man 2 open".
mode = stat.S_IRUSR | stat.S_IWUSR # This is 0o600 in octal and 384 in decimal.
# create directory for file if not exists
if not os.path.exists(credentials_dir):
os.makedirs(credentials_dir)
# for security, remove file with potentially elevated mode
try:
os.remove(credentials_path)
except OSError:
pass
# open file descriptor
umask_original = os.umask(0)
try:
fdesc = os.open(credentials_path, flags, mode)
finally:
os.umask(umask_original)
# save credentials in secure file
with os.fdopen(fdesc, 'w') as f:
json.dump(credentials, f)
f.write("\n")
print("file saved: {}".format(credentials_path))
Explanation: [Top]
Secure Credentials File
Create credentials file for later usage. The file will have permissions created so only the current user can access the file. The following SO post was followed.
The following directory will be created if it doesn't exist:
* Windows: %USERPROFILE%/.eoddata
* Linux: ~/.eoddata
End of explanation
call = 'Login'
url = '/'.join((ws, call))
payload = {'Username': username, 'Password': password}
response = session.get(url, params=payload, stream=True)
if response.status_code == 200:
root = etree.parse(response.raw).getroot()
Explanation: [Top]
Inspect the XML returned
End of explanation
dir(root)
for child in root.getchildren():
print (child.tag, child.attrib)
for item in root.items():
print (item)
for key in root.keys():
print (key)
print (root.get('Message'))
print (root.get('Token'))
print (root.get('DataFormat'))
print (root.get('Header'))
print (root.get('Suffix'))
Explanation: Data inspection (root)
End of explanation
token = root.get('Token')
Explanation: Get data (token)
End of explanation
# client can be opened using a with statement
with (Client()) as eoddata:
print('token: {}'.format(eoddata.get_token()))
# initialise using secure credentials file
eoddata = Client()
# client field accessors
ws = eoddata.get_web_service()
ns = eoddata.get_namespace()
token = eoddata.get_token()
session = eoddata.get_session()
print('ws: {}'.format(ws))
print('ns: {}'.format(ns))
print('token: {}'.format(token))
print(session)
# the client has a list of exchange codes available once initialised
eoddata.get_exchange_codes()
# client must be closed if opened outside a with block
session.close()
eoddata.close_session()
Explanation: [Top]
Client
End of explanation |
1,196 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Graphing catalog numbers vs rank
Are collection codes typically sequential? Let's graph only numeric codes (can do work to convert alpha-numeric codes to base 10 numeric) vs their rank. If codes are sequential we should have a straight line with slope = 1.
Step1: To test our code, find one collection that seems to have numeric ids. Go to the search API and find the most common catalog number
Step2: Is there anything that is not numeric?
Step3: Well there certainly is and there are repeated catalog numbers too. We certainly can't assume that 'r-70049' is the same as '70049' and if we could we certainly can't guess at the catalog number practices across collections.
Graphing
So let's just throw this on a graph.
Step4: Let's zoom in on some interesting parts. | Python Code:
import pyspark.sql.functions as sql
import pyspark.sql.types as types
idb_df_version = "20170130"
idb_df = sqlContext.read.parquet("/guoda/data/idigbio-{0}.parquet".format(idb_df_version))
idb_df.count()
Explanation: Graphing catalog numbers vs rank
Are collection codes typically sequential? Let's graph only numeric codes (can do work to convert alpha-numeric codes to base 10 numeric) vs their rank. If codes are sequential we should have a straight line with slope = 1.
End of explanation
subset = (idb_df
.select(idb_df.catalognumber)
.where(idb_df.recordset == "271a9ce9-c6d3-4b63-a722-cb0adc48863f")
)
subset.cache()
subset.count()
Explanation: To test our code, find one collection that seems to have numeric ids. Go to the search API and find the most common catalog number:
http://search.idigbio.org/v2/summary/top/records?top_fields=catalognumber&count=100
(It is possible that some collections use "good" catalog numbers that are UUIDs or other forms of GUIDs. This work can never apply to them.)
And then look for collection codes that use something popular like "100":
http://search.idigbio.org/v2/summary/top/records?top_fields=recordset&rq={%22catalognumber%22:%22100%22}
Looks like the recordset a6eee223-cf3b-4079-8bb2-b77dad8cae9d has 4 records with this number and 6M records so it sounds interesting but let's start with 271a9ce9-c6d3-4b63-a722-cb0adc48863f since it has 1.8M records and will be a bit easier to work with.
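As a side note, the same lookups can be scripted; a minimal sketch (assuming the summary endpoints above return JSON) might look like:
import requests
top_url = "http://search.idigbio.org/v2/summary/top/records"
resp = requests.get(top_url, params={"top_fields": "catalognumber", "count": 100})
resp.raise_for_status()
top_catalognumbers = resp.json()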
End of explanation
print(subset.where(subset.catalognumber == "0").count())
def to_int(s):
try:
return int(s)
except:
# 0 is a terrible flag value but it is graphable so we can see
# how bad things are
return 0
to_int_udf = sql.udf(to_int, types.IntegerType())
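# Alternative sketch: map non-numeric catalog numbers to None instead of 0,
# so they become nulls that can be counted or filtered separately.
to_int_or_null_udf = sql.udf(lambda s: int(s) if s is not None and s.isdigit() else None,
                             types.IntegerType())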
catalognumbers = (subset
.withColumn("number", to_int_udf(subset.catalognumber))
)
print(catalognumbers.where(catalognumbers.number == 0).count())
catalognumbers.where(catalognumbers.number == 0).head(10)
catalognumbers.where("catalognumber='r-70049' OR catalognumber='70049'").head(10)
Explanation: Is there anything that is not numeric?
End of explanation
catalognumbers_pd = (catalognumbers
.select(catalognumbers.number)
.sort(catalognumbers.number)
.toPandas()
)
catalognumbers_pd.describe()
catalognumbers_pd[-10:]
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(catalognumbers_pd.index.values[:-3],
catalognumbers_pd[:-3]["number"])
plt.axis([0, 2000000, 0, 2000000])
plt.xlabel("Rank of specimen record")
plt.ylabel("Numeric catalog number")
plt.title("Catalog numbers from Museum of Comparative Zoology,\nHarvard University (271a9ce9-c6d3-4b63-a722-cb0adc48863f)")
Explanation: Well there certainly is and there are repeated catalog numbers too. We certainly can't assume that 'r-70049' is the same as '70049' and if we could we certainly can't guess at the catalog number practices across collections.
Graphing
So let's just throw this on a graph.
End of explanation
x_start = 1418000
x_end = 1419000
y_start = 199000
y_end = 201000
plt.plot(catalognumbers_pd.index.values[x_start:x_end],
catalognumbers_pd[x_start:x_end]["number"])
plt.axis([x_start, x_end, y_start, y_end])
x_start = 1850000
x_end = 1875000
y_start = 600000
y_end = 700000
plt.plot(catalognumbers_pd.index.values[x_start:x_end],
catalognumbers_pd[x_start:x_end]["number"])
plt.axis([x_start, x_end, y_start, y_end])
Explanation: Let's zoom in on some interesting parts.
End of explanation |
1,197 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time Course Transcriptomics for Cu-Induced transition in 5GB1
Step1: Aside, how to keep only columns with FM40, and FM34
Step2: identifying the column index in order to remove unnecessary columns from data
Step3: Reordering certain columns alphabetically, only after the initial 0-8 columns.
Step4: Aside to remove all columns containing the string "QC"
Step5: Taking an aside to understanding iloc, loc, ix | Python Code:
import pandas as pd
import natsort as ns #3rd party package for natural sorting
import re
data = pd.read_csv("5G_counts.tsv", sep = "\t")
columns_list = list(range(0,9)) + list(range(20,42)) #creating a list of columns that I care about (see below)
data_1 = data.iloc[:, columns_list] #taking only 0-8 and 20-42 columns removing old FM runs
#Want to sort the data columns (20 - 42) in their timely order. Will split into 2 dataframes, sort, then put together.
first_8 = data_1.iloc[:, 0:9] #new data frame with first 9 columns
remaining_data = data_1.iloc[:,9:] # new data frame with remaining columns (to be sorted)
cols = list(ns.natsorted(remaining_data.columns)) #using natural sort package
newdf=remaining_data[cols]
data_2 = pd.concat([first_8, newdf], axis = 1) #ok so now combined first 8 columns with FM34 and FM40 columns
#the columns still contain many QC runs lets get rid of them (see aside for removing columins with "QC" in them)
list(data_2.columns)
data_3 = data_2.select(lambda x: not re.search("QC", x), axis = 1)
data_3
#OK now need to create a TPM counts for all the columns
data_3["gene_length"] = (data_3["end_coord"]-data_3["start_coord"] + 1)/1000 #gene length in kilo base pair
data_3
#before moving on, I want to see the stats for the gene length column (min, max, mean, etc.)
data_3.gene_length.describe()
#lets find the loc range of the columns I want to divide and my gene length column
print(data_3.columns.get_loc("gene_length")) # need to devide all FM40 columns by this column
print(data_3.columns.get_loc("5GB1_FM40_T0m_TR2")) # this is where my range starts. so columns [9-16]/[17]
RPK = data_3.iloc[:,9:17].div(data_3.gene_length, axis=0) #it is 9-17 because the last value is not inclusive.
data_4 = pd.concat([first_8, RPK], axis = 1)
data_4
data_4.iloc[:,9:17].sum() #the sum of reads normalized to gene length is different between samples.
norm_sum = data_4.iloc[:,9:17].sum(axis=0)/1000000 #creating a series with the sums of each FM40 column / 1,000,000
norm_sum = pd.Series.to_frame(norm_sum) #converting this series into a dataframe
norm_sum = norm_sum.T #transposing the dataframe so that there is one value per column
norm_sum
TPM = data_4.iloc[:,9:17].div(norm_sum.ix[0]) #dividing FM40 columns by the the total transcript counts in each repicate
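# .ix is deprecated in newer pandas; the same division can be written with .iloc (sketch):
TPM_alt = data_4.iloc[:, 9:17].div(norm_sum.iloc[0])
TPM_alt.equals(TPM)  # both approaches should agree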
data_5 = pd.concat([first_8, TPM], axis = 1) #this is the TPM!
data_5.iloc[:,9:17].sum() # can check that the sum total of each colum is identical. Now can do stats!
data_5
Explanation: Time Course Transcriptomics for Cu-Induced transition in 5GB1
End of explanation
#can individualy identify 1 column into a new dataframe
df = data["locus_tag"]
df
# for multiple column selection must use the __getitem__ syntax []
df1 = data[["locus_tag","type"]]
df1
#Dont want to manually enter all the column names
Explanation: Aside, how to keep only columns with FM40, and FM34
End of explanation
list(data.columns)
print(data.columns.get_loc("translation"))
print(data.columns.get_loc("5GB1_FM40_T0m_TR2"))
print(data.columns.get_loc("5GB1_FM40_T90m_TR2_QC"))
columns_list = list(range(0,9)) + list(range(20,42))
columns_list
data_1 = data.iloc[:, columns_list] #slicing the column index the way I want.
Explanation: identifying the column index in order to remove unnecessary columns from data
End of explanation
first_8 = data_1.iloc[:, 0:9] #new data frame with first 9 columns
remaining_data = data_1.iloc[:,9:] # new data frame with remaining columns (to be sorted)
remaining_data
sorted(remaining_data.columns) #sorted is a python function that sorts your input (dont know by what criteria)
list(sorted(remaining_data.columns,key=str)) # will list sorted columns, but doesnt sort this naturally (150 before 40)
import natsort as ns #3rd party package for natural sorting
list(ns.natsorted(remaining_data.columns)) #this works!
cols=list(ns.natsorted(remaining_data.columns)) #this works!
remaining_data[cols].head()
newdf=remaining_data[cols]
newdf.head()
ns.natsorted(remaining_data)#the problem with this package is that passing the object as an argument
#returns a list, and I cant use that list for the dataframe, I need index.
remaining_data.loc("5GB1_Cu_transition_tim")
remaining_data.columns = ns.natsorted(remaining_data.columns)
list(remaining_data.columns)
remaining_data["5GB1_FM40_T150m_TR1_remake"]
data["5GB1_FM40_T150m_TR1_remake"]
Explanation: Reordering certain columns alphabetically, only after the initial 0-8 columns.
End of explanation
#many columns with QC runs. Gotta filter those out.
list(data_2.columns)
data_2.select(lambda x: not re.search("QC", x), axis = 1) #ok this is what I need, not lets break it down.
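# A shorter built-in alternative (sketch): boolean-mask the columns with str.contains,
# which avoids DataFrame.select (deprecated in newer pandas).
data_2.loc[:, ~data_2.columns.str.contains("QC")]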
# re is a regex (regular expression) module. It is useful in selecting strings or parts of strings. Here is an example.
str_1 = 'an example word:cat12, word:cat!!, word:cattt165'
match = re.search(r'word:cat\d+', str_1) #the r ignores slashes (google education has a nice tutorial with re module)
if match:
print("found", match.group())
else:
print("did not find")
#lets find multiple words, the previous example only found 1
#say you have a text with many email addresses
str_2 = "purple [email protected], and many other like [email protected] and also a dishwasher"
emails = re.findall(r"\w+@\w+\.\w+", str_2)
for email in emails:
print (email)
print(emails)
Explanation: Aside to remove all columns containing the string "QC"
End of explanation
data.ix[1:3] #supports mixed integer and label based access. It is
#primarily label based, but will fall back to integer positional
#access unless the corresponding axis is of integer type.
data.loc[1] #returns the first row, Note: "1" is
#interpreted as a *label* of the index, and **never** as an
# integer position along the index).
data.loc[0:2,"5GB1_FM34_T0_TR1_QC"] #the integers are interpreted as names, not positions.
data.iloc[1:3] #returns rows at integer positions 1 and 2 (the end of the slice is exclusive)
data.iloc[3] #returns the row at integer position 3 (the fourth row)
data.iloc[[1,2,5]] #returns data for rows 1,2,5
type(data.iloc[4])
Explanation: Taking an aside to understanding iloc, loc, ix
End of explanation |
1,198 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Discretizations
Here we show how different discretizations work within MasterMSM. An important note is that not all discretizations will be sensible for all systems, but as usual the alanine dipeptide is a good testbed.
We start downloading the data from the following link and importing a number of libraries for plotting and analysis that will be useful for our work.
Step1: Next we import the traj module and read the molecular simulation trajectory in the xtc compressed format from Gromacs.
Step2: Core Ramachandran angle regions
Following previous work we can use core regions in the Ramachandran map to define our states. We use utilities from the MDtraj package to compute the Phi and Psi dihedrals.
Step3: Then we run the actual discretization, using only two states for the alpha and extended conformations.
Step4: Finally we derive the MSM using the tools from the msm module. In particular, we use the SuperMSM class that will help build MSMs at various lag times.
Step5: Next we gather results from all these MSMs and plot the relaxation time corresponding to the A$\leftrightarrow$E transition.
Step6: Fine grid on the Ramachandran map
Alternatively we can make a grid on the Ramachandran map with many more states.
Step7: Then we repeat the same steps as before, but with this fine grained MSM.
Step8: First we take a look at the dependence of the slowest relaxation time with the lag time, $\Delta t$ for the construction of the Markov model as a minimal quality control.
Step9: As a surprise we find that in the fine-grained MSM the slowest relaxation time is slower than in the one where we consider only the $\alpha$-helical and extended basins. We can look at the whole spectrum of eigenvalues to understand why.
Step10: So there is an eigenvalue (the second slowest, $\lambda_2$) whose corresponding relaxation time ($\tau_2$) approximately matches that of the coarse-grained MSM.
We can understand which dynamical processes the eigenvectors are associated to by looking at the corresponding eigenvectors. For this we recalculate the transition matrix but now recovering the eigenvectors.
Step11: Here we are plotting the values of the eigenvectors so that the state indexes correspond to the position in the Ramachandran map. On the left, we show the stationary eigenvector, which is directly proportional to the equilibrium population. The center plot corresponds to the slowest dynamical mode, which in fact corresponds to the $\alpha_L$ and the $\alpha_R$ transition. Finally, on the right, we find that the eigenvector corresponding to the timescale $\tau_2$ in fact corresponds to the exchange between the A and E regions. This explains the mapping between the timescales in the coarse and fine grained MSMs.
Clustering
So it seems two states only may not be a very good clustering for this particular system. Maybe we need one more. In order to do the clustering systematically we use the fewsm module from MasterMSM. From the eigenvectors we are immediately able to produce a sensible, albeit still imperfect, partitioning in three states. | Python Code:
%load_ext autoreload
%matplotlib inline
import math
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="ticks", color_codes=True, font_scale=1.5)
sns.set_style({"xtick.direction": "in", "ytick.direction": "in"})
Explanation: Discretizations
Here we show how different discretizations work within MasterMSM. An important note is that not all discretizations will be sensible for all systems, but as usual the alanine dipeptide is a good testbed.
We start downloading the data from the following link and importing a number of libraries for plotting and analysis that will be useful for our work.
End of explanation
from mastermsm.trajectory import traj
tr = traj.TimeSeries(top='data/alaTB.gro', traj=['data/protein_only.xtc'])
print (tr.mdt)
Explanation: Next we import the traj module and read the molecular simulation trajectory in the xtc compressed format from Gromacs.
End of explanation
import mdtraj as md
phi = md.compute_phi(tr.mdt)
psi = md.compute_psi(tr.mdt)
res = [x for x in tr.mdt.topology.residues]
Explanation: Core Ramachandran angle regions
Following previous work we can use core regions in the Ramachandran map to define our states. We use utilities from the MDtraj package to compute the Phi and Psi dihedrals.
End of explanation
tr.discretize(states=['A', 'E'])
tr.find_keys()
fig, ax = plt.subplots(figsize=(10,3))
ax.plot(tr.mdt.time, [x for x in tr.distraj], lw=2)
ax.set_xlim(0,5000)
ax.set_ylim(0.8,2.2)
ax.set_xlabel('Time (ps)', fontsize=20)
ax.set_ylabel('state', fontsize=20)
Explanation: Then we run the actual discretization, using only two states for the alpha and extended conformations.
End of explanation
from mastermsm.msm import msm
msm_alaTB = msm.SuperMSM([tr])
for i in [1, 2, 5, 10, 20, 50, 100]:
msm_alaTB.do_msm(i)
msm_alaTB.msms[i].do_trans()
msm_alaTB.msms[i].boots()
Explanation: Finally we derive the MSM using the tools from the msm module. In particular, we use the SuperMSM class that will help build MSMs at various lag times.
End of explanation
tau_vs_lagt = np.array([[x,msm_alaTB.msms[x].tauT[0],msm_alaTB.msms[x].tau_std[0]] \
for x in sorted(msm_alaTB.msms.keys())])
fig, ax = plt.subplots()
ax.errorbar(tau_vs_lagt[:,0],tau_vs_lagt[:,1],fmt='o-', yerr=tau_vs_lagt[:,2], markersize=10)
ax.fill_between(10**np.arange(-0.2,3,0.2), 1e-1, 10**np.arange(-0.2,3,0.2), facecolor='lightgray')
ax.set_xlabel(r'$\Delta$t [ps]', fontsize=16)
ax.set_ylabel(r'$\tau$ [ps]', fontsize=16)
ax.set_xlim(0.8,200)
ax.set_ylim(0,60)
_ = ax.set_xscale('log')
#ax.set_yscale('log')
Explanation: Next we gather results from all these MSMs and plot the relaxation time corresponding to the A$\leftrightarrow$E transition.
End of explanation
tr.discretize(method="ramagrid", nbins=20)
tr.find_keys()
fig, ax = plt.subplots(figsize=(10,3))
ax.plot(tr.mdt.time, [x for x in tr.distraj], '.', ms=1)
ax.set_xlim(0, 1e4)
ax.set_ylim(-1, 400)
ax.set_xlabel('Time (ps)', fontsize=20)
ax.set_ylabel('state', fontsize=20)
Explanation: Fine grid on the Ramachandran map
Alternatively we can make a grid on the Ramachandran map with many more states.
End of explanation
from mastermsm.msm import msm
msm_alaTB_grid = msm.SuperMSM([tr])
for i in [1, 2, 5, 10, 20, 50, 100]:
msm_alaTB_grid.do_msm(i)
msm_alaTB_grid.msms[i].do_trans()
msm_alaTB_grid.msms[i].boots()
Explanation: Then we repeat the same steps as before, but with this fine grained MSM.
End of explanation
tau1_vs_lagt = np.array([[x, msm_alaTB_grid.msms[x].tauT[0], \
msm_alaTB_grid.msms[x].tau_std[0]] \
for x in sorted(msm_alaTB_grid.msms.keys())])
tau2_vs_lagt = np.array([[x, msm_alaTB_grid.msms[x].tauT[1], \
msm_alaTB_grid.msms[x].tau_std[1]] \
for x in sorted(msm_alaTB_grid.msms.keys())])
tau3_vs_lagt = np.array([[x,msm_alaTB_grid.msms[x].tauT[2], \
msm_alaTB_grid.msms[x].tau_std[2]] \
for x in sorted(msm_alaTB_grid.msms.keys())])
fig, ax = plt.subplots(figsize=(8,5))
ax.errorbar(tau1_vs_lagt[:,0],tau1_vs_lagt[:,1], tau1_vs_lagt[:,2], fmt='o-', markersize=10)
ax.errorbar(tau2_vs_lagt[:,0],tau2_vs_lagt[:,1], tau2_vs_lagt[:,2], fmt='o-', markersize=10)
ax.errorbar(tau3_vs_lagt[:,0],tau3_vs_lagt[:,1], tau3_vs_lagt[:,2], fmt='o-', markersize=10)
ax.fill_between(10**np.arange(-0.2,3,0.2), 1e-1, 10**np.arange(-0.2,3,0.2), facecolor='lightgray', alpha=0.5)
ax.set_xlabel(r'$\Delta$t [ps]', fontsize=16)
ax.set_ylabel(r'$\tau_i$ [ps]', fontsize=16)
ax.set_xlim(0.8,200)
ax.set_ylim(0,100)
_ = ax.set_xscale('log')
#_ = ax.set_yscale('log')
Explanation: First we take a look at the dependence of the slowest relaxation time with the lag time, $\Delta t$ for the construction of the Markov model as a minimal quality control.
End of explanation
fig, ax = plt.subplots()
ax.errorbar(range(1,16),msm_alaTB_grid.msms[10].tauT[0:15], fmt='o-', \
yerr= msm_alaTB_grid.msms[10].tau_std[0:15], ms=10)
ax.set_xlabel('Eigenvalue index')
ax.set_ylabel(r'$\tau_i$ (ns)')
#ax.set_yscale('log')
Explanation: As a surprise we find that in the fine-grained MSM the slowest relaxation time is slower than in the one where we consider only the $\alpha$-helical and extended basins. We can look at the whole spectrum of eigenvalues to understand why.
End of explanation
msm_alaTB_grid.msms[10].do_trans(evecs=True)
fig, ax = plt.subplots(1,3, figsize=(12,4))
mat = np.zeros((20,20), float)
for i in [x for x in zip(msm_alaTB_grid.msms[10].keep_keys, \
msm_alaTB_grid.msms[10].rvecsT[:,0])]:
#print i, i[0]%20, int(i[0]/20), -i[1]
mat[i[0]%20, int(i[0]/20)] = i[1]
ax[0].imshow(mat.transpose(), interpolation="none", origin='lower', \
cmap='hot')
mat = np.zeros((20,20), float)
for i in [x for x in zip(msm_alaTB_grid.msms[10].keep_keys, \
msm_alaTB_grid.msms[10].rvecsT[:,1])]:
#print i, i[0]%20, int(i[0]/20), -i[1]
mat[i[0]%20, int(i[0]/20)] = -i[1]
ax[1].imshow(mat.transpose(), interpolation="none", origin='lower', \
cmap='hot')
mat = np.zeros((20,20), float)
for i in [x for x in zip(msm_alaTB_grid.msms[10].keep_keys, \
msm_alaTB_grid.msms[10].rvecsT[:,2])]:
#print i, i[0]%20, int(i[0]/20), -i[1]
mat[i[0]%20, int(i[0]/20)] = -i[1]
_ = ax[2].imshow(mat.transpose(), interpolation="none", origin='lower', \
cmap='hot')
Explanation: So there is an eigenvalue (the second slowest, $\lambda_2$) whose corresponding relaxation time ($\tau_2$) approximately matches that of the coarse-grained MSM.
We can understand which dynamical processes the eigenvectors are associated to by looking at the corresponding eigenvectors. For this we recalculate the transition matrix but now recovering the eigenvectors.
End of explanation
from mastermsm.fewsm import fewsm
fewsm3 = fewsm.FEWSM(msm_alaTB_grid.msms[2], N=3)
import matplotlib.cm as cm
fig, ax = plt.subplots(figsize=(5,5))
mat = np.zeros((20,20), float)
for i in msm_alaTB_grid.msms[2].keep_keys:
j = msm_alaTB_grid.msms[2].keep_keys.index(i)
if j in fewsm3.macros[0]:
mat[i%20, int(i/20)] = 1
elif j in fewsm3.macros[1]:
mat[i%20, int(i/20)] = 2
else:
mat[i%20, int(i/20)] = 3
mat#print i, i[0]%20, int(i[0]/20), -i[1]
my_cmap = cm.get_cmap('viridis')
my_cmap.set_under('w')
ax.imshow(mat.transpose(), interpolation="none", origin='lower', \
cmap=my_cmap, vmin = 0.5)
Explanation: Here we are plotting the values of the eigenvectors so that the state indexes correspond to the position in the Ramachandran map. On the left, we show the stationary eigenvector, which is directly proportional to the equilibrium population. The center plot corresponds to the slowest dynamical mode, which in fact corresponds to the $\alpha_L$ and the $\alpha_R$ transition. Finally, on the right, we find that the eigenvector corresponding to the timescale $\tau_2$ in fact corresponds to the exchange between the A and E regions. This explains the mapping between the timescales in the coarse and fine grained MSMs.
Clustering
So it seems two states only may not be a very good clustering for this particular system. Maybe we need one more. In order to do the clustering systematically we use the fewsm module from MasterMSM. From the eigenvectors we are immediately able to produce a sensible, albeit still imperfect, partitioning in three states.
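For intuition, a rough sign-based split on the slowest mode (a sketch of the idea behind such lumping, not the fewsm algorithm itself) could look like:
split = {key: int(v > 0) for key, v in zip(msm_alaTB_grid.msms[10].keep_keys,
                                           msm_alaTB_grid.msms[10].rvecsT[:, 1])}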
End of explanation |
1,199 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intro to tensor flow
Basic models over MNIST dataset
Linear model
NN one layer node
Convolutional model
Tensoboard example
Save & load models
Step1: Get the MNIST data
Step2: Fist model
Step3: Model 2
Step4: Model 3
Step6: Use tensorboard to show the net & the training process.
- The same previous convolutional model with the commands that need tensorboard
Based on https
Step7: At the end execute tensorboar with
Step8: Load the model and evaluate it
Step9: Recurent neural networks example | Python Code:
# Header
# Basic libraries & options
from __future__ import print_function
#Basic libraries
import numpy as np
import tensorflow as tf
print('Tensorflow version: ', tf.__version__)
#Show images
import matplotlib.pyplot as plt
%matplotlib inline
# plt configuration
plt.rcParams['figure.figsize'] = (10, 10) # size of images
plt.rcParams['image.interpolation'] = 'nearest' # show exact image
plt.rcParams['image.cmap'] = 'gray' # use grayscale
import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"]="0"
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
Explanation: Intro to tensor flow
Basic models over MNIST dataset
Linear model
NN one layer node
Convolutional model
Tensoboard example
Save & load models
End of explanation
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('/home/ubuntu/data/training/image/mnist', one_hot=True)
#Examine the data
print('Train shape: ', mnist.train.images.shape)
print('Valid shape: ', mnist.validation.images.shape)
print('Test shape: ', mnist.test.images.shape)
fig = plt.figure()
for i in range(25):
a = fig.add_subplot(5,5,i+1)
a.set_title('Target: ' + str(np.argmax(mnist.train.labels[i])))
fig.tight_layout()
plt.imshow( np.reshape(mnist.train.images[i],(28,28)) )
Explanation: Get the MNIST data
End of explanation
# Start an interactive session
gpu_options = tf.GPUOptions(allow_growth = True)
sess = tf.InteractiveSession(config=tf.ConfigProto(gpu_options=gpu_options, log_device_placement=True))
### Define the graph
# - Placeholders
# - Model
# - Loss
# - Trainer
# Inputs
x = tf.placeholder(tf.float32, shape=[None, 784])
y = tf.placeholder(tf.float32, shape=[None, 10])
#------------------------------------
#----------- MODEL ---------------
#------------------------------------
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
y_pred = tf.nn.softmax(tf.matmul(x,W) + b)
#------------------------------------
# Loss
cross_entropy = -tf.reduce_sum(y * tf.log(y_pred))
# Trainer
train_step = tf.train.GradientDescentOptimizer(0.001).minimize(cross_entropy)
### Train the graph
# Intialize vars
sess.run(tf.global_variables_initializer())
# Iterate running the trainer
batch_size = 128
num_epoch = 50
for epoch in range(num_epoch):
for i in range(430): # 430 * batch_size is approx the train size (55000)
batch = mnist.train.next_batch(batch_size)
train_step.run(feed_dict={x: batch[0], y: batch[1]})
# Predict and evaluate
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_pred,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print('Test accuracy: ', accuracy.eval(feed_dict={x: mnist.test.images, y: mnist.test.labels}))
#When finish, close the interactive session
sess.close()
# Reset the graph to the next experiments
tf.reset_default_graph()
Explanation: First model: Linear model
Start an interactive session
Define the graph
Train the graph
Initialize variables
Loop over the data running the trainer
Validate model with test accuracy
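One caveat worth noting: the hand-written -tf.reduce_sum(y * tf.log(y_pred)) loss can produce NaNs when y_pred contains exact zeros. A safer sketch uses the fused op that the later models in this notebook already rely on:
logits = tf.matmul(x, W) + b
cross_entropy = tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))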
End of explanation
def dense_layer(x, input_dim=10, output_dim=10, name='dense'):
'''
Dense layer function
Inputs:
x: Input tensor
input_dim: Dimmension of the input tensor.
output_dim: dimmension of the output tensor
name: Layer name
'''
W = tf.Variable(tf.truncated_normal([input_dim, output_dim], stddev=0.1), name='W_'+name)
b = tf.Variable(tf.constant(0.1, shape=[output_dim]), name='b_'+name)
dense_output = tf.nn.relu(tf.matmul(x, W) + b)
return dense_output
# Start an interactive session
gpu_options = tf.GPUOptions(allow_growth = True)
sess = tf.InteractiveSession(config=tf.ConfigProto(gpu_options=gpu_options, log_device_placement=True))
### Define the graph
# Inputs
x = tf.placeholder(tf.float32, shape=[None, 784])
y = tf.placeholder(tf.float32, shape=[None, 10])
#------------------------------------
#----------- MODEL ---------------
#------------------------------------
# First layer
dense_1 = dense_layer(x, input_dim=784, output_dim=500, name='dense1')
# Final layer
dense_2 = dense_layer(dense_1, input_dim=500, output_dim=10, name='dense2')
#------------------------------------
# Loss function
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=dense_2, labels=y)
#Optimizer
train_step = tf.train.AdamOptimizer(0.01).minimize(cross_entropy)
# Predict and evaluate
y_pred = tf.nn.softmax(dense_2)
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_pred,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
### Train the graph
# Intialize vars
sess.run(tf.global_variables_initializer())
# Iterate running the trainer
batch_size = 128
num_epoch = 30
for epoch in range(num_epoch):
for i in range(430): # 430 * batch_size is aprox the train size (55000)
batch = mnist.train.next_batch(batch_size)
train_step.run(feed_dict={x: batch[0], y: batch[1]})
print('Epoch: ',epoch,' - Accuracy: ', accuracy.eval(feed_dict={x: mnist.test.images, y: mnist.test.labels}))
print('Test accuracy: ', accuracy.eval(feed_dict={x: mnist.test.images, y: mnist.test.labels}))
#When finish, close the interactive session
sess.close()
tf.reset_default_graph()
Explanation: Model 2: Neural network model
Add a dense layer between the inputs & the output
End of explanation
def conv_layer(x, size=2, input_channels=1, output_channels=32, name='conv'):
'''
Function to configure a convolution layer
Inputs:
x: Input tensor
size: Convolution filter size x size
input_channels: Num of input channels
output_channels: Num of output channels
'''
W_conv = tf.Variable(tf.truncated_normal([size, size, input_channels, output_channels], stddev=0.1), name='W_'+name)
b_conv = tf.Variable(tf.constant(0.1, shape=[output_channels]), name='b_'+name)
conv_out = tf.nn.relu(tf.nn.conv2d(x, W_conv, strides=[1, 1, 1, 1], padding='SAME') + b_conv)
return conv_out
# Start an interactive session
gpu_options = tf.GPUOptions(allow_growth = True)
sess = tf.InteractiveSession(config=tf.ConfigProto(gpu_options=gpu_options, log_device_placement=True))
#Create the net
# Inputs
x = tf.placeholder(tf.float32, shape=[None, 784])
y = tf.placeholder(tf.float32, shape=[None, 10])
#------------------------------------
#----------- MODEL ---------------
#------------------------------------
#Reshape input data to the original image shape
x_image = tf.reshape(x, [-1,28,28,1])
# First convolution
h_conv1 = conv_layer(x_image, size=5, input_channels=1, output_channels=20, name='conv1')
h_pool1 = tf.nn.max_pool(h_conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
print('Conv - pool 1: ', h_pool1)
#Second convolution
h_conv2 = conv_layer(h_pool1, size=5, input_channels=20, output_channels=50, name='conv2')
h_pool2 = tf.nn.max_pool(h_conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
print('Conv - pool 2: ', h_pool2)
#First dense layer
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*50])
h_fc1 = dense_layer(h_pool2_flat, input_dim=7*7*50, output_dim=500, name='dense1')
print('Dense 1: ', h_fc1)
#Dropout
dropout_keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, dropout_keep_prob)
#Second dense layer
h_fc2 = dense_layer(h_fc1_drop, input_dim=500, output_dim=10)
print('Dense 2: ', h_fc2)
#Prediction
y_pred = tf.nn.softmax(h_fc2)
#------------------------------------
# Loss function
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=h_fc2, labels=y, name='cross_entropy')
#Optimizer
train_step = tf.train.AdamOptimizer(1e-3).minimize(cross_entropy)
#Accuracy
correct_prediction = tf.equal(tf.argmax(y_pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
#Inicialization.
sess.run(tf.global_variables_initializer())
# Train proccess
for i in range(3000):
batch = mnist.train.next_batch(128)
train_step.run(feed_dict={x: batch[0], y: batch[1], dropout_keep_prob: 0.5})
if i%500 == 0:
train_accuracy = accuracy.eval(feed_dict={x: batch[0], y: batch[1], dropout_keep_prob: 1})
print("step %d, training accuracy %g"%(i, train_accuracy))
acc_test = 0
for i in range(200):
batch = mnist.test.next_batch(50)
acc_test += accuracy.eval(feed_dict = {x:batch[0], y: batch[1], dropout_keep_prob: 1.0})
print("test accuracy: ", acc_test/200.)
sess.close()
tf.reset_default_graph()
Explanation: Model 3: Convolutional model
Add 2 convolution and max pooling layers previous to the dense layers
Add a dropout regularization layer in the first dense layer
End of explanation
def variable_summaries(var, name):
Attach a lot of summaries to a Tensor.
with tf.name_scope('summaries'):
mean = tf.reduce_mean(var)
tf.summary.scalar('mean/' + name, mean)
tf.summary.scalar('sttdev/' + name, tf.sqrt(tf.reduce_mean(tf.square(var - mean))))
tf.summary.scalar('max/' + name, tf.reduce_max(var))
tf.summary.scalar('min/' + name, tf.reduce_min(var))
tf.summary.histogram(name, var)
def conv_layer(x, size=2, input_channels=1, output_channels=32, name='conv'):
W_conv = tf.Variable(tf.truncated_normal([size, size, input_channels, output_channels], stddev=0.1), name='W_'+name)
b_conv = tf.Variable(tf.constant(0.1, shape=[output_channels]), name='b_'+name)
conv_out = tf.nn.relu(tf.nn.conv2d(x, W_conv, strides=[1, 1, 1, 1], padding='SAME') + b_conv)
# Add summary ops to collect data
variable_summaries(W_conv, "weights_"+name) #TENSORBOARD
variable_summaries(b_conv, "biases_"+name) #TENSORBOARD
return conv_out
def dense_layer(x, input_dim=10, output_dim=10, name='dense'):
W = tf.Variable(tf.truncated_normal([input_dim, output_dim], stddev=0.1), name='W_'+name)
b = tf.Variable(tf.constant(0.1, shape=[output_dim]), name='b_'+name)
dense_output = tf.nn.relu(tf.matmul(x, W) + b)
variable_summaries(W, "weights_"+name) #TENSORBOARD
variable_summaries(b, "biases_"+name) #TENSORBOARD
return dense_output
# Start an interactive session
gpu_options = tf.GPUOptions(allow_growth = True)
sess = tf.InteractiveSession(config=tf.ConfigProto(gpu_options=gpu_options, log_device_placement=True))
#Create the net
# Inputs
x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
y = tf.placeholder(tf.float32, shape=[None, 10] , name='y')
#Reshape input data to the original image shape
x_image = tf.reshape(x, [-1,28,28,1])
# First convolution
# use a name scope to organize nodes in the graph visualizer
with tf.name_scope("conv1") as scope:
h_conv1 = conv_layer(x_image, size=5, input_channels=1, output_channels=20, name='conv1')
h_pool1 = tf.nn.max_pool(h_conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
print('Conv - pool 1: ', h_pool1)
# Add summary ops to collect data
variable_summaries(h_pool1, "h_pool1_summary") #TENSORBOARD
#Second convolution
with tf.name_scope("conv2") as scope:
h_conv2 = conv_layer(h_pool1, size=5, input_channels=20, output_channels=50, name='conv2')
h_pool2 = tf.nn.max_pool(h_conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
print('Conv - pool 2: ', h_pool2)
variable_summaries(h_pool2, "h_pool2_summary") #TENSORBOARD
#First dense layer
with tf.name_scope("dense1") as scope:
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*50])
h_fc1 = dense_layer(h_pool2_flat, input_dim=7*7*50, output_dim=500, name='dense1')
variable_summaries(h_fc1, "dense1_summary") #TENSORBOARD
#Dropout over
dropout_keep_prob = tf.placeholder(tf.float32, name='dropout')
h_fc1_drop = tf.nn.dropout(h_fc1, dropout_keep_prob)
#Second dense layer
with tf.name_scope("dense2") as scope:
h_fc2 = dense_layer(h_fc1_drop, input_dim=500, output_dim=10, name='dense2')
print('Dense 2: ', h_fc2)
variable_summaries(h_fc2, "dense2_summary") #TENSORBOARD
#Prediction
y_pred = tf.nn.softmax(h_fc2)
# Loss function
with tf.name_scope("xent") as scope:
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=h_fc2, labels=y, name='cross_entropy')
ce_summ = tf.summary.histogram("cross_entropy", cross_entropy) #TENSORBOARD
#Optimizer
with tf.name_scope("train") as scope:
train_step = tf.train.AdamOptimizer(1e-3).minimize(cross_entropy)
#Accuracy
with tf.name_scope("test") as scope:
correct_prediction = tf.equal(tf.argmax(y_pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
accuracy_summary = tf.summary.scalar("accuracy", accuracy) #TENSORBOARD
# Merge all the summaries and write it into /tmp/tensorboard/mnist_logs
summaries_dir = '/tmp/tensorboard/mnist_conv'
with tf.name_scope('summaries') as scope:
merged = tf.summary.merge_all()
train_writer = tf.summary.FileWriter(summaries_dir + '/train', sess.graph)
test_writer = tf.summary.FileWriter(summaries_dir + '/test')
#Inicialization.
sess.run(tf.global_variables_initializer())
# Train proccess
for i in range(3000):
batch = mnist.train.next_batch(128)
train_step.run(feed_dict={x: batch[0], y: batch[1], dropout_keep_prob: 0.5})
if i % 50 == 0: # Record summary data for one batch
summary_str = merged.eval(feed_dict={x: batch[0], y: batch[1], dropout_keep_prob: 1.})
train_writer.add_summary(summary_str, i) #TENSORBOARD
batch_test = mnist.test.next_batch(128)
summary_str = merged.eval(feed_dict={x: batch_test[0], y: batch_test[1], dropout_keep_prob: 1.})
test_writer.add_summary(summary_str, i) #TENSORBOARD
sess.close()
tf.reset_default_graph()
Explanation: Use tensorboard to show the net & the training process.
- The same previous convolutional model with the commands that need tensorboard
Based on https://www.tensorflow.org/how_tos/summaries_and_tensorboard/index.html
End of explanation
# Start interactive session
gpu_options = tf.GPUOptions(allow_growth = True)
sess = tf.InteractiveSession(config=tf.ConfigProto(gpu_options=gpu_options, log_device_placement=True))
# Define graph
x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
y = tf.placeholder(tf.float32, shape=[None, 10], name='y')
with tf.name_scope("model") as scope:
W = tf.Variable(tf.zeros([784,10]), name='W1')
b = tf.Variable(tf.zeros([10]), name='b1')
#Prediction
y_pred = tf.nn.softmax(tf.matmul(x,W) + b, name='y_pred')
#Loss
cross_entropy = -tf.reduce_sum(y*tf.log(y_pred), name='cross_entropy')
# Train graph
train_step = tf.train.GradientDescentOptimizer(0.01, name='train_step').minimize(cross_entropy)
# Inicialize graph vars
sess.run(tf.global_variables_initializer())
for i in range(100):
batch = mnist.train.next_batch(50)
train_step.run(feed_dict={x: batch[0], y: batch[1]})
# Predict and evaluate
correct_prediction = tf.equal(tf.argmax(y_pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32), name='accuracy')
print('cross_entropy test', cross_entropy.eval(feed_dict={x: mnist.test.images, y: mnist.test.labels}))
print('Accuracy test', accuracy.eval(feed_dict={x: mnist.test.images, y: mnist.test.labels}))
# Create a saver and save weigths.
tf.add_to_collection('cross_entropy', cross_entropy)
tf.add_to_collection('x', x)
tf.add_to_collection('y', y)
tf.add_to_collection('accuracy', accuracy)
saver = tf.train.Saver(max_to_keep=0)
saver.save(sess, '/tmp/my-model',)
#Close session
sess.close()
tf.reset_default_graph()
Explanation: At the end execute tensorboard with:
cd /tmp
tensorboard --logdir=./tensorboard
And access it at:
http://localhost:6006
Save and load models
Create and save model
End of explanation
# Load pretrained model and evaluate it
# Start interactive session
gpu_options = tf.GPUOptions(allow_growth = True)
sess = tf.InteractiveSession(config=tf.ConfigProto(gpu_options=gpu_options, log_device_placement=True))
#Load model
new_saver = tf.train.import_meta_graph('/tmp/my-model.meta')
new_saver.restore(sess, '/tmp/my-model')
cross_entropy = tf.get_collection('cross_entropy')[0]
x = tf.get_collection('x')[0]
y = tf.get_collection('y')[0]
accuracy = tf.get_collection('accuracy')[0]
# Evaluate over the test data
print('cross_entropy test', cross_entropy.eval(feed_dict={x: mnist.test.images, y: mnist.test.labels}))
print('Accuracy test', accuracy.eval(feed_dict={x: mnist.test.images, y: mnist.test.labels}))
sess.close()
tf.reset_default_graph()
Explanation: Load the model and evaluate it
End of explanation
import tensorflow as tf
# Start interactive session
gpu_options = tf.GPUOptions(allow_growth = True)
sess = tf.InteractiveSession(config=tf.ConfigProto(gpu_options=gpu_options, log_device_placement=True))
# Define graph
x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
y = tf.placeholder(tf.float32, shape=[None, 10], name='y')
# Reshape X to a sequnce of columns of dim (batch, time, features)
x_seq = tf.reshape(x, [-1,28,28])
### RNN layer
lstm_size= 128
#lstm = tf.nn.rnn_cell.LSTMCell(lstm_size)
lstm = tf.contrib.rnn.LSTMCell(lstm_size)
rnn_out, _ = tf.nn.dynamic_rnn(lstm, x_seq, dtype=tf.float32)
#Prediction
W = tf.Variable(tf.zeros([lstm_size,10]))
b = tf.Variable(tf.zeros([10]))
y_pred = tf.nn.softmax(tf.matmul(rnn_out[:,-1,:],W) + b, name='y_pred')
#Loss
cross_entropy = -tf.reduce_sum(y*tf.log(y_pred), name='cross_entropy')
# Train graph
train_step = tf.train.GradientDescentOptimizer(0.01, name='train_step').minimize(cross_entropy)
# Predict and evaluate
correct_prediction = tf.equal(tf.argmax(y_pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32), name='accuracy')
# Inicialize graph vars
sess.run(tf.global_variables_initializer())
batch_size = 128
num_epoch = 10
for epoch in range(num_epoch):
for i in range(430): # 430 * batch_size is aprox the train size (55000)
batch = mnist.train.next_batch(batch_size)
train_step.run(feed_dict={x: batch[0], y: batch[1]})
print('Epoch: ',epoch,' - Accuracy: ', accuracy.eval(feed_dict={x: mnist.test.images, y: mnist.test.labels}))
print('Test accuracy: ', accuracy.eval(feed_dict={x: mnist.test.images, y: mnist.test.labels}))
#Close session
sess.close()
tf.reset_default_graph()
Explanation: Recurrent neural networks example
End of explanation |