Unnamed: 0
int64 0
16k
| text_prompt
stringlengths 110
62.1k
| code_prompt
stringlengths 37
152k
|
---|---|---|
800 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Imports
Step1: Algorithm
Pseudocode
Step2: Actual Code
Step3: Algorithm Conditions
We believe nonMaxSupression will perform well if and only if the histogram of the data is capable of producing z-scores - i.e. there is variance in the brightness.
Data on which nonMaxSupression will perform well and poorly
The data set on which nonMaxSupression will perform well is a gradient image. We are trying to extract anything with a z-score above 7, and this should clearly extract that.
The data set on which nonMaxSupression will perform poorly is a linear image. It will perform poorly because the data does not follow a normal curve.
Raw Data Plot
Good Data
Step4: Prediction on Good Data
I predict that nonMaxSupression will pick up the higher range of clusters.
Challenging Data
Step5: Prediction on Challenging Data
I predict that nonMaxSupression will will not be able to calculate z scores, and thus will fail.
Simulation
Generate Toy Simulation Data
Easy Data
I believe this data will look la grid of different-colored squares.
Step6: The easy data looks exactly as I expected. The histogram is has deviation, meaning nonMaxSupression will be able to extract maxima.
Difficult Data
I expect that the difficult data will look like a constant image.
Step7: The difficult data looks exactly as I expected. The histogram is a single value, which is the kind of data nonMaxSupression fails on.
Toy Data Analysis
For the good data, I predict that otsuVox will select the voxels in the rightmost normal curve.
For the bad data, I predict that otsuVox will randomly select one normal curve.
Step8: As expected, otsuVox picked up just the brightest clusters.
Step9: As expected, otsuVox failed to pick out bright things because there was no deviation in the image.
Real Data
Step10: As we can see, the real data has a mean and a standard deviation. This means that nonMaximaSupression should be able to extract the bright spots.
Step11: Precision/Recall/F1 before nonMaximaSupression
Step12: Precision/Recall/F1 after nonMaximaSupression | Python Code:
import sys
import scipy.io as sio
import glob
import numpy as np
import matplotlib.pyplot as plt
from skimage.filters import threshold_otsu
sys.path.append('../code/functions')
import qaLib as qLib
sys.path.append('../../pipeline_1/code/functions')
import connectLib as cLib
from IPython.display import Image
import random
from connectLib import otsuVox
Explanation: Imports
End of explanation
Image(filename = "images/nonMaxima.png")
Explanation: Algorithm
Pseudocode
End of explanation
def nonMaximaSupression(clusterList, image, z):
randClusterDist = []
for i in range(100000):
point = [int(random.random()*image.shape[0]), int(random.random()*image.shape[1]), int(random.random()*image.shape[2])]
randClusterDist.append(image[point[0]][point[1]][point[2]])
mu = np.average(randClusterDist)
sigma = np.std(randClusterDist)
aveList = []
for cluster in clusterList:
curClusterDist = []
for member in cluster.members:
curClusterDist.append(image[member[0]][member[1]][member[2]])
aveList.append(np.mean(curClusterDist))
finalClusters = []
for i in range(len(aveList)): #this is bad and i should feel bad
if (aveList[i] - mu)/float(sigma) > z:
finalClusters.append(clusterList[i])
return finalClusters
Explanation: Actual Code
End of explanation
simEasyGrid = np.zeros((100, 100, 100))
for i in range(4):
for j in range(4):
for k in range(4):
simEasyGrid[20*(2*j): 20*(2*j + 1), 20*(2*i): 20*(2*i + 1), 20*(2*k): 20*(2*k + 1)] = i + j + k + 1
plt.imshow(simEasyGrid[5])
plt.axis('off')
plt.title('Easy Data Raw Plot at z=0')
plt.show()
plt.hist(simEasyGrid[0])
plt.title("Histogram of Easy Data")
plt.show()
Explanation: Algorithm Conditions
We believe nonMaxSupression will perform well if and only if the histogram of the data is capable of producing z-scores - i.e. there is variance in the brightness.
Data on which nonMaxSupression will perform well and poorly
The data set on which nonMaxSupression will perform well is a gradient image. We are trying to extract anything with a z-score above 7, and this should clearly extract that.
The data set on which nonMaxSupression will perform poorly is a linear image. It will perform poorly because the data does not follow a normal curve.
Raw Data Plot
Good Data
End of explanation
simDiff = np.zeros((100, 100, 100))
for i in range(100):
for j in range(100):
for k in range(100):
simDiff[i][j][k] = 100
plt.imshow(simDiff[5])
plt.axis('off')
plt.title('Challenging Data Raw Plot at z=0')
plt.show()
plt.hist(simDiff[0], bins=20)
plt.title("Histogram of Challenging Data")
plt.show()
Explanation: Prediction on Good Data
I predict that nonMaxSupression will pick up the higher range of clusters.
Challenging Data
End of explanation
simEasyGrid = np.zeros((100, 100, 100))
for i in range(4):
for j in range(4):
for k in range(4):
simEasyGrid[20*(2*j): 20*(2*j + 1), 20*(2*i): 20*(2*i + 1), 20*(2*k): 20*(2*k + 1)] = i + j + k + 1
plt.imshow(simEasyGrid[5])
plt.axis('off')
plt.title('Easy Data Raw Plot at z=0')
plt.show()
plt.hist(simEasyGrid[0])
plt.title("Histogram of Easy Data")
plt.show()
Explanation: Prediction on Challenging Data
I predict that nonMaxSupression will will not be able to calculate z scores, and thus will fail.
Simulation
Generate Toy Simulation Data
Easy Data
I believe this data will look la grid of different-colored squares.
End of explanation
simDiff = np.zeros((100, 100, 100))
for i in range(100):
for j in range(100):
for k in range(100):
simDiff[i][j][k] = 100
plt.imshow(simDiff[5])
plt.axis('off')
plt.title('Challenging Data Raw Plot at z=0')
plt.show()
plt.hist(simDiff[0], bins=20)
plt.title("Histogram of Challenging Data")
plt.show()
Explanation: The easy data looks exactly as I expected. The histogram is has deviation, meaning nonMaxSupression will be able to extract maxima.
Difficult Data
I expect that the difficult data will look like a constant image.
End of explanation
otsuOutEasy = otsuVox(simEasyGrid)
otsuClustersEasy = cLib.clusterThresh(otsuOutEasy, 0, 1000000)
nonMaxClusters = nonMaximaSupression(otsuClustersEasy, simEasyGrid, 1)
nonMaxEasy = np.zeros_like(simEasy)
for cluster in nonMaxClusters:
for member in cluster.members:
nonMaxEasy[member[0]][member[1]][member[2]] = 1
plt.imshow(nonMaxEasy[5])
plt.axis('off')
plt.title('Non Max Supression Output for Easy Data Slice at z=5')
plt.show()
Explanation: The difficult data looks exactly as I expected. The histogram is a single value, which is the kind of data nonMaxSupression fails on.
Toy Data Analysis
For the good data, I predict that otsuVox will select the voxels in the rightmost normal curve.
For the bad data, I predict that otsuVox will randomly select one normal curve.
End of explanation
otsuOutDiff = otsuVox(simDiff)
otsuClustersDiff = cLib.clusterThresh(otsuOutDiff, 0, 1000000)
nonMaxClusters = nonMaximaSupression(otsuClustersDiff, simDiff, 0)
nonMaxDiff = np.zeros_like(simDiff)
for cluster in nonMaxClusters:
for member in cluster.members:
nonMaxDiff[member[0]][member[1]][member[2]] = 1
plt.imshow(nonMaxDiff[5])
plt.axis('off')
plt.title('Non Max Supression Output for Difficult Data Slice at z=5')
plt.show()
Explanation: As expected, otsuVox picked up just the brightest clusters.
End of explanation
procData = []
for mat in glob.glob('../../data/matlabData/collman15v2/*_p1.mat'):
name = mat[34:-7]
rawData = sio.loadmat(mat)
npData = np.rollaxis(rawData[name], 2, 0)
procData.append([name, npData])
realData = procData[12][1]
otsuOutReal = otsuVox(realData)
plt.imshow(otsuOutReal[0], cmap='gray')
plt.title('Real Data otsuVox Output At Slice 0')
plt.axis('off')
plt.show()
plt.hist(otsuOutReal[0])
plt.title("Histogram of Post-Otsu Data")
plt.show()
Explanation: As expected, otsuVox failed to pick out bright things because there was no deviation in the image.
Real Data
End of explanation
otsuClusters = cLib.clusterThresh(otsuOutReal, 0, 10000000)
nonMaxClusters = nonMaximaSupression(otsuClusters, realData, 6)
nonMaxImg = np.zeros_like(realData)
for cluster in nonMaxClusters:
for member in cluster.members:
nonMaxImg[member[0]][member[1]][member[2]] = 1
plt.imshow(nonMaxImg[0], cmap='gray')
plt.title('NonMaximaSupression Output At Slice 0')
plt.axis('off')
plt.show()
Explanation: As we can see, the real data has a mean and a standard deviation. This means that nonMaximaSupression should be able to extract the bright spots.
End of explanation
labelClusters = cLib.clusterThresh(procData[0][1], 0, 10000000)
otsuClusters = cLib.clusterThresh(otsuOutReal, 0, 10000000)
precision, recall, F1 = qLib.precision_recall_f1(labelClusters, otsuClusters)
print 'Precision: ' + str(precision)
print 'Recall: ' + str(recall)
print 'F1: ' + str(F1)
Explanation: Precision/Recall/F1 before nonMaximaSupression
End of explanation
precision, recall, F1 = qLib.precision_recall_f1(labelClusters, nonMaxClusters)
print 'Precision: ' + str(precision)
print 'Recall: ' + str(recall)
print 'F1: ' + str(F1)
Explanation: Precision/Recall/F1 after nonMaximaSupression
End of explanation |
801 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Performing scripts with python-fmrest
This is a short example on how to perform scripts with python-fmrest.
Import the module
Step1: Create the server instance
Step2: Login
The login method obtains the access token.
Step3: Setup scripts
You can setup scripts to run prerequest, presort, and after the action and sorting are executed. The script setup is passed to a python-fmrest method as an object that contains the types of script executions, followed by a list containing the script name and parameter.
Step4: You only need to specify the scripts you actually want to execute. So if you only have an after action, just build a scripts object with only the 'after' key.
Call a standard method
Scripts are always executed as part of a standard request to the server. These requests are the usual find(), create_record(), delete_record(), edit_record(), get_record() methods the Server class exposes to you.
Let's make a find and then execute a script. The script being called contains an error on purpose, so that we can later read out the error number.
Step5: Get the last script error and result
Via the last_script_result property, you can access both last error and script result for all scripts that were called.
Step6: We see that we had 3 as last error, and our script result was '1'. The FMS Data API only returns strings, but error numbers are automatically converted to integers, for convenience. The script result, however, will always be a string or None, even if you exit your script in FM with a number or boolean.
Another example
Let's do another call, this time with a script that takes a parameter and does not have any errors.
It will exit with Exit Script[ Get(ScriptParameter) ], so essentially give us back what we feed in.
Step7: ... and here is the result (error 0 means no error) | Python Code:
import fmrest
Explanation: Performing scripts with python-fmrest
This is a short example on how to perform scripts with python-fmrest.
Import the module
End of explanation
fms = fmrest.Server('https://10.211.55.15',
user='admin',
password='admin',
database='Contacts',
layout='Demo',
verify_ssl=False
)
Explanation: Create the server instance
End of explanation
fms.login()
Explanation: Login
The login method obtains the access token.
End of explanation
scripts={
'prerequest': ['name_of_script_to_run_prerequest', 'script_parameter'],
'presort': ['name_of_script_to_run_presort', None], # parameter can also be None
'after': ['name_of_script_to_run_after_actions', '1234'], #FMSDAPI expects all parameters to be string
}
Explanation: Setup scripts
You can setup scripts to run prerequest, presort, and after the action and sorting are executed. The script setup is passed to a python-fmrest method as an object that contains the types of script executions, followed by a list containing the script name and parameter.
End of explanation
fms.find(
query=[{'name': 'David'}],
scripts={
'after': ['testScriptWithError', None],
}
)
Explanation: You only need to specify the scripts you actually want to execute. So if you only have an after action, just build a scripts object with only the 'after' key.
Call a standard method
Scripts are always executed as part of a standard request to the server. These requests are the usual find(), create_record(), delete_record(), edit_record(), get_record() methods the Server class exposes to you.
Let's make a find and then execute a script. The script being called contains an error on purpose, so that we can later read out the error number.
End of explanation
fms.last_script_result
Explanation: Get the last script error and result
Via the last_script_result property, you can access both last error and script result for all scripts that were called.
End of explanation
fms.find(
query=[{'name': 'David'}],
scripts={
'prerequest': ['demoScript (id)', 'abc-1234'],
}
)
Explanation: We see that we had 3 as last error, and our script result was '1'. The FMS Data API only returns strings, but error numbers are automatically converted to integers, for convenience. The script result, however, will always be a string or None, even if you exit your script in FM with a number or boolean.
Another example
Let's do another call, this time with a script that takes a parameter and does not have any errors.
It will exit with Exit Script[ Get(ScriptParameter) ], so essentially give us back what we feed in.
End of explanation
fms.last_script_result
Explanation: ... and here is the result (error 0 means no error):
End of explanation |
802 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploratory data analysis
Whenever you collect a new dataset, a good first step is to explore it. This means different things for different kinds of datasets, but if it's a timeseries then there are common techniques that will likely prove useful. This tutorial covers the basics of exploratory data analysis with timeseries. It focuses on raster plots and PSTHs recorded from spiking neurons, though the principles extend to most timeseries data.
Data and Task
The dataset was recorded from M1 neurons in a monkey performing a center-out reaching task. The monkey had to reach towards different targets, each at a different angle. On each trial, the target angle and onset was recorded.
Step1: Load the data
First we'll load the data - an important first step in any analysis is determining the structure of the data. This is particularly important if you've never analyzed the data before. Let's take a look at our data.
Step2: It looks like our data is a dictionary, which means it has keys and values
Step3: These are all fields contained within data, corresponding to the data contained within it. The fields beginning with __ correspond to the fact that this dataset was saved with matlab. These are automatically inserted and we can probably ignore them.
Let's take a look at our spiking data
Step4: Judging from the size of each axis, it looks like we have an array of shape (n_neurons, n_times). We can determine the time step for each bin by looking at the timeBase variable. We can use this to create the time of our data
Step5: A good first step with raster plots is to calculate summary statistics of the activity. First we'll take a look at the mean activity across time (averaging across neurons), then we'll look at the distribution of spikes across neurons (aggregating across time).
Step6: Now let's pull out the activity of a single neuron and see what it looks like
Step7: These vertical lines represent the spikes of a single neuron. We can visualize all of the neurons at once (but more on that later).
Binning by events
It's generally true that the neural activity we record can be split up in to distinct "events" that occur at a moment in time. Let's slice up our data based on the event times in startBins and see how this looks. We've also got information about event types in the field targetNumbers.
Step8: At this point, we'd generally need to clean up the data. This might mean throwing away neurons that had a bad signal, or events where there was clearly noise being recorded. In this case, the data is relatively clean already (thanks to Konrad for this).
We'll create a dictionary that lets us map each condition type onto times corresponding to events of that condition.
Step9: Visualizing event-related activity
Now that we know when each event occurs, let's visualize the activity of our focus neuron. To do this, we'll need to pull a window of time around each event. Then we can see the activity during that window.
Step10: Visualizing with a Peri-Stimulus Time Histogram (PSTH)
It is helpful to summarize the spiking activity across repetitions of one condition. For this, we create the peri-stimulus time histogram (PSTH). This shows us the general pattern of spiking activity in response to a stimulus.
Step11: To create teh PSTH we'll need to smooth the spikes in time. This effectively converts the spikes from bins to a continuously-varying spike rate. We'll smooth using a gaussian distribution...if we want a smoother spike rate, we should increase the standard deviation of the gaussian. | Python Code:
import numpy as np
from scipy import io as si
from matplotlib import pyplot as plt
import h5py
%matplotlib inline
Explanation: Exploratory data analysis
Whenever you collect a new dataset, a good first step is to explore it. This means different things for different kinds of datasets, but if it's a timeseries then there are common techniques that will likely prove useful. This tutorial covers the basics of exploratory data analysis with timeseries. It focuses on raster plots and PSTHs recorded from spiking neurons, though the principles extend to most timeseries data.
Data and Task
The dataset was recorded from M1 neurons in a monkey performing a center-out reaching task. The monkey had to reach towards different targets, each at a different angle. On each trial, the target angle and onset was recorded.
End of explanation
data = si.loadmat('../../data/StevensonV4.mat')
type(data)
Explanation: Load the data
First we'll load the data - an important first step in any analysis is determining the structure of the data. This is particularly important if you've never analyzed the data before. Let's take a look at our data.
End of explanation
for key in data.keys():
print(key)
Explanation: It looks like our data is a dictionary, which means it has keys and values:
End of explanation
# Load in the spiking data
spikes = data['spikes']
spikes.shape
Explanation: These are all fields contained within data, corresponding to the data contained within it. The fields beginning with __ correspond to the fact that this dataset was saved with matlab. These are automatically inserted and we can probably ignore them.
Let's take a look at our spiking data
End of explanation
time_step = data['timeBase'].squeeze()
times = np.arange(spikes.shape[-1]) * time_step
print(time_step)
print(times[:10])
Explanation: Judging from the size of each axis, it looks like we have an array of shape (n_neurons, n_times). We can determine the time step for each bin by looking at the timeBase variable. We can use this to create the time of our data
End of explanation
# Calculate the mean across neurons and plot it for a quick viz
mean_spikes = np.mean(spikes, 0)
fig, ax = plt.subplots()
ax.plot(mean_spikes)
total_spikes = np.sum(spikes, -1)
fig, ax = plt.subplots()
_ = ax.hist(total_spikes)
Explanation: A good first step with raster plots is to calculate summary statistics of the activity. First we'll take a look at the mean activity across time (averaging across neurons), then we'll look at the distribution of spikes across neurons (aggregating across time).
End of explanation
neuron_ix = 192 # Which neuron are we looking at?
neuron = spikes[neuron_ix]
ixs_spikes = np.where(neuron == 1)[0]
fig, ax = plt.subplots()
ax.vlines(times[ixs_spikes[:100]], 0, 1)
Explanation: Now let's pull out the activity of a single neuron and see what it looks like:
End of explanation
# Only process what constitutes valid trials - identify malformed ones
start_bins = data['startBins'][0]
target_numbers = data['targetNumbers'][:, 0]
print(start_bins.shape)
# We'll only keep the trials that occur before a pre-specified time
end_ix = 676790
mask_keep = start_bins < end_ix
start_bins = start_bins[mask_keep]
target_numbers = target_numbers[mask_keep]
print(start_bins.shape)
Explanation: These vertical lines represent the spikes of a single neuron. We can visualize all of the neurons at once (but more on that later).
Binning by events
It's generally true that the neural activity we record can be split up in to distinct "events" that occur at a moment in time. Let's slice up our data based on the event times in startBins and see how this looks. We've also got information about event types in the field targetNumbers.
End of explanation
n_conditions = len(np.unique(target_numbers))
print('Number of conditions: {}'.format(n_conditions))
condition_dict = {}
for ii in range(1, n_conditions + 1):
condition_dict[ii] = np.where(target_numbers == ii)[0]
Explanation: At this point, we'd generally need to clean up the data. This might mean throwing away neurons that had a bad signal, or events where there was clearly noise being recorded. In this case, the data is relatively clean already (thanks to Konrad for this).
We'll create a dictionary that lets us map each condition type onto times corresponding to events of that condition.
End of explanation
# We infer the sfreq from the time step
sfreq = 1. / time_step
# Define how we'll take a window around each event
tmin, tmax = -.5, 10
ixmin = int(tmin * sfreq)
ixmax = int(tmax * sfreq)
# Now loop through conditions
cond_data = {}
for cond in range(1, n_conditions + 1):
# For each condition, we'll take a window of time around each onset
indices = condition_dict[cond]
this_onsets = start_bins[indices]
# Loop through each event for this event
epochs = []
for onset in this_onsets:
if (onset + ixmax) > spikes.shape[-1]:
# If the window extends beyond the data, skip it
continue
epochs.append(spikes[:, onset + ixmin : onset + ixmax])
epochs = np.array(epochs)
cond_data[cond] = epochs
# Now create time (in seconds) around each window
time_epochs = np.linspace(tmin, tmax, num=epochs.shape[-1])
# Now, we can plot the spiking activity (rasters) in response to each condition
n_row = 3
n_col = int(np.ceil(n_conditions / float(n_row)))
fig, axs = plt.subplots(n_row, n_col, sharex=True, sharey=True,
figsize=(5 * n_col, 5 * n_row))
for ax, (cond, i_data) in zip(axs.ravel(), cond_data.items()):
this_epoch = i_data[:, neuron_ix, :]
for ii, i_ep in enumerate(this_epoch):
mask_spikes = i_ep == 1
ixs_spikes = np.where(mask_spikes)[0]
times_spikes = time_epochs[ixs_spikes]
if len(times_spikes) > 0:
ax.vlines(times_spikes, ii, ii + 1, color='k')
ax.set_title('Condition {}'.format(cond))
plt.autoscale(tight=True)
Explanation: Visualizing event-related activity
Now that we know when each event occurs, let's visualize the activity of our focus neuron. To do this, we'll need to pull a window of time around each event. Then we can see the activity during that window.
End of explanation
# We'll use this to smooth in time, which is important when using spikes
from scipy.ndimage.filters import gaussian_filter1d
Explanation: Visualizing with a Peri-Stimulus Time Histogram (PSTH)
It is helpful to summarize the spiking activity across repetitions of one condition. For this, we create the peri-stimulus time histogram (PSTH). This shows us the general pattern of spiking activity in response to a stimulus.
End of explanation
# Smooth the spiking activity, then take every "Nth" sample to reduce size
gaussian_sd = 10
n_decimate = 5
binned_dict = {}
for i_cond, i_data in cond_data.items():
i_data = gaussian_filter1d(i_data.astype(float), gaussian_sd, axis=-1)
# We'll take every Nth sample to speed up plotting
i_data = i_data[..., ::n_decimate]
binned_dict[i_cond] = i_data
# Compare this plot with the raster images above
n_row = 3
n_col = int(np.ceil(n_conditions / float(n_row)))
fig, axs = plt.subplots(n_row, n_col, sharex=True, sharey=True,
figsize=(5 * n_col, 5 * n_row))
for ax, (i_cond, i_data) in zip(axs.ravel(), binned_dict.items()):
ax.plot(time_epochs[::n_decimate], i_data.mean(0)[192], 'k')
ax.set_title('Condition: {}'.format(i_cond))
ax.axvline(0, color='r', ls='--')
plt.autoscale(tight=True)
Explanation: To create teh PSTH we'll need to smooth the spikes in time. This effectively converts the spikes from bins to a continuously-varying spike rate. We'll smooth using a gaussian distribution...if we want a smoother spike rate, we should increase the standard deviation of the gaussian.
End of explanation |
803 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<figure>
<IMG SRC="https
Step1: Simultaneously plot three graphs
This shows a way to read data from the current directory and then plot these in a single figure.
The data file need to be in your directory, so copy them first from the corresponding directory of the notebooks by Mark Bakker (notebook 1)
Step2: Use more than one axis, i.e. using everal subplots( )
Step3: Gallery of graphs
The plotting package matplotlib allows you to make very fancy graphs. Check out the <A href="http
Step5: <a name="ex6"></a> Exercise 6, Fill between
Load the air and sea temperature, as used in Exercise 4, but this time make one plot of temperature vs the number of the month and use the plt.fill_between command to fill the space between the curve and the $x$-axis. Specify the alpha keyword, which defines the transparancy. Some experimentation will give you a good value for alpha (stay between 0 and 1). Note that you need to specify the color using the color keyword argument. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
aList = ['a', 'quark', 'flies', 'in', 'this', 'room', 'at', 3, 'oclock']
# print("{} {} {} {} {} {} {} {} {}".format(1,2,'three',4,5,6,6, 7, 8, 9)) # (aList))
print("{} {} {} {} {} {} {} {} {}".format(*aList))
plt.legend?
import numpy as np # functionality to use numeric arrays
import matplotlib.pyplot as plt # functionality to plot
a = -6
b = -5
c = -2
d = 1
e = 4
f = 6
x = np.linspace(-6, 6, 1000) # 1000 x-values between -6 and 6
# Compute the polynomials for 1000 points at once
y1 = a * x**2 + b * x + c
y2 = (x - a) * (x - b )* (x - c ) * (x - d) * (x - e) * (x -f)
# to put the equations in the graph, put them in strings between $ $
eq1 = '$a x^2 + b x + c$'
eq2 = '$(x - a) (x - b ) (x - c ) (x - d) (x - e) (x -f)$'
plt.plot(x, y1, label=eq1) # use these equations as label
plt.plot(x, y2, label=eq2)
plt.title('Title of the graph, just two equations')
plt.xlabel('x [m]')
plt.ylabel('y [whatever]')
plt.grid(True)
plt.legend(loc='best', fontsize='small') # this plots the legend with the equation labels
plt.show() # need this to actually show the plot
Explanation: <figure>
<IMG SRC="https://raw.githubusercontent.com/mbakker7/exploratory_computing_with_python/master/tudelft_logo.png" WIDTH=250 ALIGN="right">
</figure>
Exploratory Computing with Python
Borrowed from Mark Bakker in extra-curricular Python course at UNESCO-IHE
On Feb 21, We started working with this first jupyter notebook developed by Prof. Mark Bakker of TU-Delft.
We finally didn't have time to go through all of it.
Just for your memory and inspiraction this is a fast rap up of what we did and did not completely finish.
TO
Notebook 1: Basics and Plotting
First Python steps
Portable, powerful, and a breeze to use, Python is a popular, open-source programming language used for both scripting applications and standalone programs. Python can be used to do pretty much anything.
<a name="ex1"></a> Exercise 1, First Python code
Compute the value of the polynomial
$y_1=ax^2+bx+c$ at a large number of $x$ values=-2$ using
$a=-6$, $b=-4$, $c=-2$, $d=1$, $e=4$, $f=6$
We also add a 5th degree polynomial:
$y_2 = (x - a) (x - b ) (x - c ) (x - d) (x - e) (x -f)$
End of explanation
holland = np.loadtxt('holland_temperature.dat')
newyork= np.loadtxt('newyork_temperature.dat')
beijing = np.loadtxt('beijing_temperature.dat')
plt.plot(np.linspace(1, 12, 12), holland)
plt.plot(np.linspace(1, 12, 12), newyork)
plt.plot(np.linspace(1, 12, 12), beijing)
plt.xlabel('Number of the month')
plt.ylabel('Mean monthly temperature (Celcius)')
plt.xlim(1, 12)
# the labels are given in legend, instead of with each plot like we did before
plt.legend(['Holland','New York','Beijing'], loc='best');
plt.show()
Explanation: Simultaneously plot three graphs
This shows a way to read data from the current directory and then plot these in a single figure.
The data file need to be in your directory, so copy them first from the corresponding directory of the notebooks by Mark Bakker (notebook 1)
End of explanation
# read the data from the current directory
air = np.loadtxt('holland_temperature.dat')
sea = np.loadtxt('holland_seawater.dat')
# specifiy two plots vertically 2 rows 1 column
# and generate first axis of them
plt.subplot(211) # plt.subplot(2, 1, 1) is the same
# plot the actual two lines and use a label for each of them
plt.plot(air, 'b', label='air temp')
plt.plot(sea, 'r', label='sea temp')
plt.legend(loc='best') # show legend
plt.ylabel('temp (Celcius)')
plt.xlim(0, 11) # set the limits of the x-axis of the graph
plt.xticks([]) # don't plot ticks along the x-axis
plt.subplot(212) # generate second subplot
plt.plot(air-sea, 'ko')
# generate the tick labels explicitly
plt.xticks(np.linspace(0, 11, 12),
['jan','feb','mar','apr','may','jun','jul','aug','sep','oct','nov','dec'])
plt.xlim(0, 11)
plt.ylabel('air - sea temp (Celcius)');
plt.show()
Explanation: Use more than one axis, i.e. using everal subplots( )
End of explanation
gold = [46, 38, 29, 24, 13, 11, 11, 8, 8, 7, 107]
countries = ['USA', 'CHN', 'GBR', 'RUS', 'KOR', 'GER', 'FRA', 'ITA', 'HUN', 'AUS', 'OTHER']
# use pie graph this time
plt.pie(gold, labels = countries, colors = ['Gold', 'MediumBlue', 'SpringGreen', 'BlueViolet'])
plt.axis('equal');
plt.show()
Explanation: Gallery of graphs
The plotting package matplotlib allows you to make very fancy graphs. Check out the <A href="http://matplotlib.org/gallery.html" target=_blank>matplotlib gallery</A> to get an overview of many of the options. The following exercises use several of the matplotlib options.
<a name="ex5"></a> Exercise 5, Pie Chart
At the 2012 London Olympics, the top ten countries (plus the rest) receiving gold medals were ['USA', 'CHN', 'GBR', 'RUS', 'KOR', 'GER', 'FRA', 'ITA', 'HUN', 'AUS', 'OTHER']. They received [46, 38, 29, 24, 13, 11, 11, 8, 8, 7, 107] gold medals, respectively. Make a pie chart (type plt.pie? or go to the pie charts in the matplotlib gallery) of the top 10 gold medal winners plus the others at the London Olympics. Try some of the keyword arguments to make the plot look nice. You may want to give the command plt.axis('equal') to make the scales along the horizontal and vertical axes equal so that the pie actually looks like a circle rather than an ellipse. There are four different ways to specify colors in matplotlib plotting; you may read about it here. The coolest way is to use the html color names. Use the colors keyword in your pie chart to specify a sequence of colors. The sequence must be between square brackets, each color must be between quotes preserving upper and lower cases, and they must be separated by comma's like ['MediumBlue','SpringGreen','BlueViolet']; the sequence is repeated if it is not long enough. The html names for the colors may be found, for example, here.
End of explanation
air = np.loadtxt('holland_temperature.dat')
sea = np.loadtxt('holland_seawater.dat')
# use fill_between graph this time
# range(12) generates values 0, 1, 2, 3, ... 11 (used for months, 0=jan)
plt.fill_between(range(12), air, color='b', alpha=0.3, label='air') # alpha is degree of transparency
plt.fill_between(range(12), sea, color='r', alpha=0.3, label='sea')
# this is quite sophisticated: the plot
# the \ after 'apr' is line continuation
plt.xticks(np.linspace(0, 11, 12), ['jan', 'feb', 'mar', 'apr',\
'may', 'jun', 'jul', 'aug', 'sep', ' oct', 'nov', 'dec'])
plt.xlim(0, 11)
plt.ylim(0, 20)
plt.xlabel('Month')
plt.ylabel('Temperature (Celcius)')
plt.legend(loc='best', fontsize='x-small')
plt.show()
Demo of spines using custom bounds to limit the extent of the spine.
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 2*np.pi, 50)
y = np.sin(x)
y2 = y + 0.1 * np.random.normal(size=x.shape)
fig, ax = plt.subplots()
ax.plot(x, y, 'k--')
ax.plot(x, y2, 'ro')
# set ticks and tick labels
ax.set_xlim((0, 2*np.pi))
ax.set_xticks([0, np.pi, 2*np.pi])
ax.set_xticklabels(['0', '$\pi$', '2$\pi$'])
ax.set_ylim((-1.5, 1.5))
ax.set_yticks([-1, 0, 1])
# Only draw spine between the y-ticks
ax.spines['left'].set_bounds(-1, 1)
# Hide the right and top spines
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
# Only show ticks on the left and bottom spines
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
plt.show()
Explanation: <a name="ex6"></a> Exercise 6, Fill between
Load the air and sea temperature, as used in Exercise 4, but this time make one plot of temperature vs the number of the month and use the plt.fill_between command to fill the space between the curve and the $x$-axis. Specify the alpha keyword, which defines the transparancy. Some experimentation will give you a good value for alpha (stay between 0 and 1). Note that you need to specify the color using the color keyword argument.
End of explanation |
804 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Training DeepMind's Atari DQN with Chimp
Load Chimp modules
Step1: Load Python packages
Step2: Set training parameters
Step3: You may want to set a smaller number of iterations (like 100000) - for illustration purposes. We set the GPU option to True, turn it off if your machine does not support it. Be sure to have the requested rom in the indicated directory.
Step4: Now we initialize the simulator first, as we need to use some information it provides - e.g., number of actions.
Step5: Here we define the convolutional network, in a format required by Chainer - the deep learning library we use.
Step6: We then initialize the learner + chainer backend, replay memory, and agent modules.
Step7: Now let the agent train.
Step8: Visualizing results
First, let's visualize the training and evaluation results.
Step9: Evaluating the best policy
Let's load the network that collected the highest reward per game episode | Python Code:
from chimp.memories import ReplayMemoryHDF5
from chimp.learners.dqn_learner import DQNLearner
from chimp.learners.chainer_backend import ChainerBackend
from chimp.simulators.atari import AtariSimulator
from chimp.agents import DQNAgent
Explanation: Training DeepMind's Atari DQN with Chimp
Load Chimp modules
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import random
import chainer
import chainer.functions as F
import chainer.links as L
from chainer import Chain
import os
import pandas as ps
Explanation: Load Python packages
End of explanation
settings = {
# agent settings
'batch_size' : 32,
'print_every' : 5000,
'save_dir' : './results_atari',
'iterations' : 5000000,
'eval_iterations' : 5000,
'eval_every' : 50000,
'save_every' : 50000,
'initial_exploration' : 50000,
'epsilon_decay' : 0.000005, # subtract from epsilon every step
'eval_epsilon' : 0.05, # epsilon used in evaluation, 0 means no random actions
'epsilon' : 1.0, # Initial exploratoin rate
'learn_freq' : 4,
'history_sizes' : (4, 0, 0), # sizes of histories to use as nn inputs (o, a, r)
'model_dims' : (84,84),
# Atari settings
'rom' : "Breakout.bin",
'rom_dir' : './roms',
'pad' : 15, # padding parameter - for image cropping - only along the length of the image, to obtain a square
'action_history' : True,
# simulator settings
'viz' : True,
'viz_cropped' : False,
# replay memory settings
'memory_size' : 1000000, # size of replay memory
'frame_skip' : 4, # number of frames to skip
# learner settings
'learning_rate' : 0.00025,
'decay_rate' : 0.95, # decay rate for RMSprop, otherwise not used
'discount' : 0.99, # discount rate for RL
'clip_err' : False, # value to clip loss gradients to
'clip_reward' : 1, # value to clip reward values to
'target_net_update' : 10000, # update the update-generating target net every fixed number of iterations
'optim_name' : 'RMSprop', # currently supports "RMSprop", "ADADELTA", "ADAM" and "SGD"'
'gpu' : True,
'reward_rescale': False,
# general
'seed_general' : 1723,
'seed_simulator' : 5632,
'seed_agent' : 9826,
'seed_memory' : 7563
}
Explanation: Set training parameters
End of explanation
# set random seed
np.random.seed(settings["seed_general"])
random.seed(settings["seed_general"])
Explanation: You may want to set a smaller number of iterations (like 100000) - for illustration purposes. We set the GPU option to True, turn it off if your machine does not support it. Be sure to have the requested rom in the indicated directory.
End of explanation
simulator = AtariSimulator(settings)
Explanation: Now we initialize the simulator first, as we need to use some information it provides - e.g., number of actions.
End of explanation
#Define the network
class Convolution(Chain):
def __init__(self):
super(Convolution, self).__init__(
l1=F.Convolution2D(settings['history_sizes'][0], 32, ksize=8, stride=4, nobias=False, wscale=np.sqrt(2)),
l2=F.Convolution2D(32, 64, ksize=4, stride=2, nobias=False, wscale=np.sqrt(2)),
l3=F.Convolution2D(64, 64, ksize=3, stride=1, nobias=False, wscale=np.sqrt(2)),
l4=F.Linear(3136, 512, wscale = np.sqrt(2)),
l5=F.Linear(512, simulator.n_actions, wscale = np.sqrt(2)),
)
def __call__(self, ohist, ahist):
if len(ohist.data.shape) < 4:
ohist = F.reshape(ohist,(1,4,84,84))
h1 = F.relu(self.l1(ohist/255.0))
h2 = F.relu(self.l2(h1))
h3 = F.relu(self.l3(h2))
h4 = F.relu(self.l4(h3))
output = self.l5(h4)
return output
net = Convolution()
Explanation: Here we define the convolutional network, in a format required by Chainer - the deep learning library we use.
End of explanation
backend = ChainerBackend(settings)
backend.set_net(net)
learner = DQNLearner(settings, backend)
memory = ReplayMemoryHDF5(settings)
agent = DQNAgent(learner, memory, simulator, settings)
Explanation: We then initialize the learner + chainer backend, replay memory, and agent modules.
End of explanation
agent.train()
Explanation: Now let the agent train.
End of explanation
train_stats = ps.read_csv('%s/training_history.csv' % settings['save_dir'],delimiter=' ',header=None)
train_stats.columns = ['Iteration','MSE Loss','Average Q-Value']
eval_stats = ps.read_csv('%s/evaluation_history.csv' % settings['save_dir'],delimiter=' ',header=None)
eval_stats.columns = ['Iteration','Total Reward','Reward per Episode']
plt.plot(eval_stats['Iteration'], eval_stats['Reward per Episode'])
plt.xlabel("Iteration")
plt.ylabel("Avg. Reward per Episode")
plt.grid(True)
#plt.savefig(settings['save_dir'] + '_' + "evaluation_reward.svg", bbox_inches='tight')
plt.show()
plt.close()
plt.plot(train_stats['Iteration'], train_stats['Average Q-Value'])
plt.xlabel("Iteration")
plt.ylabel("Avg. Q-Values")
plt.grid(True)
#plt.savefig(settings['save_dir'] + '_' + "training_q_values.svg", bbox_inches='tight')
plt.show()
plt.close()
Explanation: Visualizing results
First, let's visualize the training and evaluation results.
End of explanation
best_iteration_index = np.argmax(eval_stats['Reward per Episode'])
best_iteration = str(int(eval_stats['Iteration'][best_iteration_index]))
best_iteration
agent.learner.load_net(settings['save_dir']+'/net_' + best_iteration + '.p')
r_tot, r_per_episode, runtime = agent.simulate(10000, epsilon=0.05, viz=True)
r_per_episode
Explanation: Evaluating the best policy
Let's load the network that collected the highest reward per game episode
End of explanation |
805 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Image Recognition on MNIST using PyTorch Lightning
Demonstrating the elements of machine learning
Step1: Pytorch Lightning Module
PyTorch Lightning Module has a PyTorch ResNet18 Model. It is a subclass of LightningModule. The model part is subclassed to support a single channel input. We replaced the input convolutional layer to support single channel inputs. The Lightning Module is also a container for the model, the optimizer, the loss function, the metrics, and the data loaders.
ResNet class can be found here.
By using PyTorch Lightning, we simplify the training and testing processes since we do not need to write boiler plate code blocks. These include automatic transfer to chosen device (i.e. gpu or cpu), model eval and train modes, and backpropagation routines.
Step2: PyTorch Lightning Callback
We can instantiate a callback object to perform certain tasks during training. In this case, we log sample images, ground truth labels, and predicted labels from the test dataset.
We can also ModelCheckpoint callback to save the model after each epoch.
Step3: Program Arguments
When running on command line, we can pass arguments to the program. For the jupyter notebook, we can pass arguments using the %run magic command.
```
Step4: Training and Evaluation using Trainer
Get command line arguments. Instatiate a Pytorch Lightning Model. Train the model. Evaluate the model. | Python Code:
%pip install pytorch-lightning --upgrade
%pip install torchmetrics --upgrade
import torch
import torchvision
import wandb
from argparse import ArgumentParser
from pytorch_lightning import LightningModule, Trainer, Callback
from pytorch_lightning.loggers import WandbLogger
from torchmetrics.functional import accuracy
Explanation: Image Recognition on MNIST using PyTorch Lightning
Demonstrating the elements of machine learning:
1) Experience (Datasets and Dataloaders)<br>
2) Task (Classifier Model)<br>
3) Performance (Accuracy)<br>
Experience: <br>
We use MNIST dataset for this demo. MNIST is made of 28x28 images of handwritten digits, 0 to 9. The train split has 60,000 images and the test split has 10,000 images. Images are all gray-scale.
Task:<br>
Our task is to classify the images into 10 classes. We use ResNet18 model from torchvision.models. The ResNet18 first convolutional layer (conv1) is modified to accept a single channel input. The number of classes is set to 10.
Performance:<br>
We use accuracy metric to evaluate the performance of our model on the test split. torchmetrics.functional.accuracy calculates the accuracy.
Pytorch Lightning:<br>
Our demo uses Pytorch Lightning to simplify the process of training and testing. Pytorch Lightning Trainer trains and evaluates our model. The default configurations are for a GPU-enabled system with 48 CPU cores. Please change the configurations if you have a different system.
Weights and Biases:<br>
wandb is used by PyTorch Lightining Module to log train and evaluations results. Use --no-wandb to disable wandb.
Let us install pytorch-lightning and torchmetrics.
End of explanation
class LitMNISTModel(LightningModule):
def __init__(self, num_classes=10, lr=0.001, batch_size=32):
super().__init__()
self.save_hyperparameters()
self.model = torchvision.models.resnet18(num_classes=num_classes)
self.model.conv1 = torch.nn.Conv2d(1, 64, kernel_size=7,
stride=2, padding=3, bias=False)
self.loss = torch.nn.CrossEntropyLoss()
def forward(self, x):
return self.model(x)
# this is called during fit()
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self.forward(x)
loss = self.loss(y_hat, y)
return {"loss": loss}
# calls to self.log() are recorded in wandb
def training_epoch_end(self, outputs):
avg_loss = torch.stack([x["loss"] for x in outputs]).mean()
self.log("train_loss", avg_loss, on_epoch=True)
# this is called at the end of an epoch
def test_step(self, batch, batch_idx):
x, y = batch
y_hat = self.forward(x)
loss = self.loss(y_hat, y)
acc = accuracy(y_hat, y) * 100.
# we use y_hat to display predictions during callback
return {"y_hat": y_hat, "test_loss": loss, "test_acc": acc}
# this is called at the end of all epochs
def test_epoch_end(self, outputs):
avg_loss = torch.stack([x["test_loss"] for x in outputs]).mean()
avg_acc = torch.stack([x["test_acc"] for x in outputs]).mean()
self.log("test_loss", avg_loss, on_epoch=True, prog_bar=True)
self.log("test_acc", avg_acc, on_epoch=True, prog_bar=True)
# validation is the same as test
def validation_step(self, batch, batch_idx):
return self.test_step(batch, batch_idx)
def validation_epoch_end(self, outputs):
return self.test_epoch_end(outputs)
# we use Adam optimizer
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=self.hparams.lr)
# this is called after model instatiation to initiliaze the datasets and dataloaders
def setup(self, stage=None):
self.train_dataloader()
self.test_dataloader()
# build train and test dataloaders using MNIST dataset
# we use simple ToTensor transform
def train_dataloader(self):
return torch.utils.data.DataLoader(
torchvision.datasets.MNIST(
"./data", train=True, download=True,
transform=torchvision.transforms.ToTensor()
),
batch_size=self.hparams.batch_size,
shuffle=True,
num_workers=48,
pin_memory=True,
)
def test_dataloader(self):
return torch.utils.data.DataLoader(
torchvision.datasets.MNIST(
"./data", train=False, download=True,
transform=torchvision.transforms.ToTensor()
),
batch_size=self.hparams.batch_size,
shuffle=False,
num_workers=48,
pin_memory=True,
)
def val_dataloader(self):
return self.test_dataloader()
Explanation: Pytorch Lightning Module
PyTorch Lightning Module has a PyTorch ResNet18 Model. It is a subclass of LightningModule. The model part is subclassed to support a single channel input. We replaced the input convolutional layer to support single channel inputs. The Lightning Module is also a container for the model, the optimizer, the loss function, the metrics, and the data loaders.
ResNet class can be found here.
By using PyTorch Lightning, we simplify the training and testing processes since we do not need to write boiler plate code blocks. These include automatic transfer to chosen device (i.e. gpu or cpu), model eval and train modes, and backpropagation routines.
End of explanation
class WandbCallback(Callback):
def on_validation_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx):
# process first 10 images of the first batch
if batch_idx == 0:
n = 10
x, y = batch
outputs = outputs["y_hat"]
outputs = torch.argmax(outputs, dim=1)
# log image, ground truth and prediction on wandb table
columns = ['image', 'ground truth', 'prediction']
data = [[wandb.Image(x_i), y_i, y_pred] for x_i, y_i, y_pred in list(
zip(x[:n], y[:n], outputs[:n]))]
wandb_logger.log_table(
key='ResNet18 on MNIST Predictions',
columns=columns,
data=data)
Explanation: PyTorch Lightning Callback
We can instantiate a callback object to perform certain tasks during training. In this case, we log sample images, ground truth labels, and predicted labels from the test dataset.
We can also ModelCheckpoint callback to save the model after each epoch.
End of explanation
def get_args():
parser = ArgumentParser(description="PyTorch Lightning MNIST Example")
parser.add_argument("--max-epochs", type=int, default=5, help="num epochs")
parser.add_argument("--batch-size", type=int, default=32, help="batch size")
parser.add_argument("--lr", type=float, default=0.001, help="learning rate")
parser.add_argument("--num-classes", type=int, default=10, help="num classes")
parser.add_argument("--devices", default=1)
parser.add_argument("--accelerator", default='gpu')
parser.add_argument("--num-workers", type=int, default=48, help="num workers")
parser.add_argument("--no-wandb", default=False, action='store_true')
args = parser.parse_args("")
return args
Explanation: Program Arguments
When running on command line, we can pass arguments to the program. For the jupyter notebook, we can pass arguments using the %run magic command.
```
End of explanation
if __name__ == "__main__":
args = get_args()
model = LitMNISTModel(num_classes=args.num_classes,
lr=args.lr, batch_size=args.batch_size)
model.setup()
# printing the model is useful for debugging
print(model)
# wandb is a great way to debug and visualize this model
wandb_logger = WandbLogger(project="pl-mnist")
trainer = Trainer(accelerator=args.accelerator,
devices=args.devices,
max_epochs=args.max_epochs,
logger=wandb_logger if not args.no_wandb else None,
callbacks=[WandbCallback() if not args.no_wandb else None])
trainer.fit(model)
trainer.test(model)
wandb.finish()
Explanation: Training and Evaluation using Trainer
Get command line arguments. Instatiate a Pytorch Lightning Model. Train the model. Evaluate the model.
End of explanation |
806 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bnu', 'sandbox-2', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: BNU
Source ID: SANDBOX-2
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensionsal forcing variables, e.g. U, V, W, T, Q, P, conventive mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensionsal forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. &Document the relative weight given to climate performance metrics versus process oriented metrics, &and on the possible conflicts with parameterization level tuning. In particular describe any struggle &with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixinrg rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
807 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Stock analysis
Step1: The volatility calculation is made using the return for each day, given the close price of any stock. As we are interested in the annual volatility, this needs to be scaled by a factor $\sqrt{n}$, where $n$ is the number of working days in a year (I will assume 260)
Step2: Using the scripts on utils it's possible to download the data from Yahoo! Finance for the following stocks (by default)
Step3: I forgot to check if in the time in which those stocks are defined there was any splitting (that's why it's important explorative analysis).
AAPL splitted on Febbruary 28 2005 by 2
Step4: Now let's build a data structure with only daily return for every stock. The model for returns is given by a compounding of the daily return, like
$$ S_{n+1} = S_{n}e^{r} $$
Step5: Too bad there arent negative correlations, after all, they all play more or less on the same areas. So, given those stocks, what combination yields the best portfolio (bigger return, lower risk) with a given ammount of investment capital? The optimization problem isn't so hard numericaly, but its possible to derive an analytical expression. What follows is know as modern portfolio theory.
First, let's explore what I think is a pretty beautifull result (because I didn't expected it). We will allocate randomly those stocks in order to generate a lot of random portfolios with the same initial value.
Step6: It's easy to see that there is a hard superior limit on the points for a given volatility. That curve is called efficient frontier and it represents the best portfolio allocation (less risk for an expected return or bigger returns for a choice of risk). It seems somewhat to a parabolla. As said before, it's possible to derive some analytical expressions for that curve.
Let $w_i$ be a weight vector that represents the ammount of stock $i$ and let its sum be one (scale isn't important here, so it can be put as the initial investment, but the number one is easy and I don't carry more symbols).
The expected return of the i-th stock is $r_i$, so the total return for a portfolio is
$$ r = w_i r^i $$
(summation of same indices is implied). In the same way, the total volatility (standar deviation) of the portfolio is
$$ \sigma^2 = K_i^j w_j w^i $$
where $K$ is the covariance matrix, and the condition on the weights is expressed as
$$ w_i 1^i = 1 $$
where $1^i$ is a vector of ones. If we choice an expected return, we can build an optimal portfolio by minimizing the standar deviation. So the problem becomes
$$ min\left( K_i^j w_j w^i \,\,\,|\,\,\, w_i 1^i = 1,\,\,\, r = w_i r^i \right) $$
the right side I think may bring some confusion
Step7: For example, at this day (24 Jannuary 2017) the market closed with the following prices for the stocks in this list
Step8: This time the algorithm ask to buy a lot of AMZN, some of GOOG and sell a little of the others. With the same $10000 the portfolio distribution would be | Python Code:
import numpy as np
import pandas as pd
import datetime as dt
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(12,12))
Explanation: Stock analysis: returns and volatility
This notebook aims to explore the Markowitz theory on modern portfolios with a little of code and a little of maths. The modern portfolio theory seeks to build a portfolio of different assets in such way that increases the returns and reduces the risk of holding the portfolio. In almost any treatment of risk I have read, the risk is repressented by the standar deviation of the returns and is called volatility. Let's assume that the random walk for stocks is of the form
$$ S_{t+1} = S_t \mu \Delta t + S_t \sigma \epsilon \sqrt{\Delta t} $$
where $S$ is the stock price at time $t$, $\mu$ and $\sigma$ are mean and standar deviation, $\Delta t$ the time step and $\epsilon$ a normal distributed random variable with mean zero and variance one. Under this assumption, $\sigma$ indicates how scattered are the returns in the future and under certain conditions, there is a probability that it can lead to a loss due to this scattering. So, an investor seeks to maximize returns, but keeping volatility at bay, so to reduce the chance of lossing money.
For this analysis we will use only basic numerical libraries, to show how some algorithms work without much black-box and we will work with real stock data.
End of explanation
# beware! it takes a numpy array
def calculate_volatility(df, t=10): # default timespan for volatility
rets = np.diff(np.log(df))
vol = np.zeros(rets.size - t)
for i in range(vol.size):
vol[i] = np.std(rets[i:(t+i)])
return np.sqrt(260)*vol
Explanation: The volatility calculation is made using the return for each day, given the close price of any stock. As we are interested in the annual volatility, this needs to be scaled by a factor $\sqrt{n}$, where $n$ is the number of working days in a year (I will assume 260)
End of explanation
symbols = np.loadtxt("../utils/symbols", dtype=str)
n_sym = symbols.size
data = {}
for s in map(lambda x: x[2:-1], symbols):
data[str(s)] = pd.read_csv("../data/{}.csv".format(str(s)))
data[str(s)].sort_values(by="Date", ascending=True, inplace=True)
t = data["AAPL"].Date[64:]
t = [dt.datetime.strptime(d,'%Y-%m-%d').date() for d in t]
plt.subplot(411)
plt.plot(t, calculate_volatility(data["AAPL"].Close.values, 63))
plt.subplot(412)
plt.plot(t, calculate_volatility(data["AMZN"].Close.values, 63))
plt.subplot(413)
plt.plot(t, calculate_volatility(data["GOOG"].Close.values, 63))
plt.subplot(414)
plt.plot(t, calculate_volatility(data["MSFT"].Close.values, 63))
plt.tight_layout()
Explanation: Using the scripts on utils it's possible to download the data from Yahoo! Finance for the following stocks (by default):
AAPL
AMZN
GOOG
MSFT
We will use the function defined above to plot the running volatility of those stocks on a 63 days window.
End of explanation
def repair_split(df, info): # info is a list of tuples [(date, split-ratio)]
temp = df.Close.values.copy()
for i in info:
date, ratio = i
mask = np.array(df.Date >= date)
temp[mask] = temp[mask]*ratio
return temp
aapl_info = [("2005-02-28", 2), ("2014-06-09", 7)]
plt.figure()
plt.subplot(411)
plt.plot(t, calculate_volatility(repair_split(data["AAPL"], aapl_info), 63))
plt.subplot(412)
plt.plot(t, calculate_volatility(data["AMZN"].Close.values, 63))
plt.subplot(413)
plt.plot(t, calculate_volatility(repair_split(data["GOOG"], [("2014-03-27", 2)]), 63))
plt.subplot(414)
plt.plot(t, calculate_volatility(data["MSFT"].Close.values, 63))
plt.tight_layout()
Explanation: I forgot to check if in the time in which those stocks are defined there was any splitting (that's why it's important explorative analysis).
AAPL splitted on Febbruary 28 2005 by 2:1 and June 9 2014 by 7:1, while GOOG splitted on April 2 2014 (not a real split, it generated another kind of stock, splitting by 2:1)
End of explanation
rets = {key:np.diff(np.log(df.Close.values)) for key, df in data.items()}
corr = np.corrcoef(list(rets.values()))
rets.keys(), corr
plt.xticks(range(4), rets.keys(), rotation=45)
plt.yticks(range(4), rets.keys())
plt.imshow(corr, interpolation='nearest')
Explanation: Now let's build a data structure with only daily return for every stock. The model for returns is given by a compounding of the daily return, like
$$ S_{n+1} = S_{n}e^{r} $$
End of explanation
def normalize(v):
return v/v.sum()
portfolios = np.random.uniform(0, 1, size=(1000, 4)) # 1000 random portfolios
portfolios = np.apply_along_axis(normalize, 1, portfolios) # normalize so that they sum 1
# total returns per dollar per portfolio
total_returns = np.dot(portfolios, list(rets.values()))
mean = 260*total_returns.mean(axis=1)
std = np.sqrt(260)*total_returns.std(axis=1)
plt.scatter(std, mean)
plt.xlabel("Annual volatility")
plt.ylabel("Annual returns")
Explanation: Too bad there arent negative correlations, after all, they all play more or less on the same areas. So, given those stocks, what combination yields the best portfolio (bigger return, lower risk) with a given ammount of investment capital? The optimization problem isn't so hard numericaly, but its possible to derive an analytical expression. What follows is know as modern portfolio theory.
First, let's explore what I think is a pretty beautifull result (because I didn't expected it). We will allocate randomly those stocks in order to generate a lot of random portfolios with the same initial value.
End of explanation
K = 260*np.cov(list(rets.values())) # annual covariance
R = np.array([np.ones(4), 260*np.mean(list(rets.values()), axis=1)])
x = np.array([1, 0.15]) # I will select a 15% of annual return
M = np.dot(R, np.dot(np.linalg.inv(K), R.transpose()))
variance = np.dot(x, np.dot(np.linalg.inv(M), x.transpose()))
volatility = np.sqrt(variance)
weigths = np.dot(np.linalg.inv(K), np.dot(R.transpose(), np.dot(np.linalg.inv(M), x.transpose())))
volatility, weigths
Explanation: It's easy to see that there is a hard superior limit on the points for a given volatility. That curve is called efficient frontier and it represents the best portfolio allocation (less risk for an expected return or bigger returns for a choice of risk). It seems somewhat to a parabolla. As said before, it's possible to derive some analytical expressions for that curve.
Let $w_i$ be a weight vector that represents the ammount of stock $i$ and let its sum be one (scale isn't important here, so it can be put as the initial investment, but the number one is easy and I don't carry more symbols).
The expected return of the i-th stock is $r_i$, so the total return for a portfolio is
$$ r = w_i r^i $$
(summation of same indices is implied). In the same way, the total volatility (standar deviation) of the portfolio is
$$ \sigma^2 = K_i^j w_j w^i $$
where $K$ is the covariance matrix, and the condition on the weights is expressed as
$$ w_i 1^i = 1 $$
where $1^i$ is a vector of ones. If we choice an expected return, we can build an optimal portfolio by minimizing the standar deviation. So the problem becomes
$$ min\left( K_i^j w_j w^i \,\,\,|\,\,\, w_i 1^i = 1,\,\,\, r = w_i r^i \right) $$
the right side I think may bring some confusion: the $w_i$ isn't bounded, only the $r$. In fact, if $r^i$ is a n-dimentional vector, for a given $r$ there is a full subspace of dimension $n-1$ of weights. The Lagrange multiplier problem can be solved by minimizing
$$ \Lambda(w, \lambda) = K_j^i w_i w^j + \lambda_1 \left( w_i 1^i - 1 \right) + \lambda_2 \left( w_i r^i - r \right) $$
$$ \frac{\partial\Lambda}{\partial w_i} = 2 K_j^i w^j + \lambda_1 1^i + \lambda_2 r^i = 0 $$
and solving for $w^j$ yields
$$ w^j = -\frac{1}{2} (K_j^i)^{-1} \left( \lambda_1 1^i + \lambda_2 r^i \right) $$
the term between parentesis can be put in a concise way as
$$ (\lambda \cdot R)^T $$
where $\lambda$ is a 2-dimensional row vector and R a $2 \times q$ matrix (with q the number of stocks)
$$ \lambda = (\lambda_1 \,\,\,\,\,\lambda_2) $$
$$ R = (1^i\,\,\,\,\,r^i)^T $$
this way, the bounding conditions can be put also as
$$ R w^j = (1\,\,\,\,\,r)^T $$
In this last expression, the weight can be changed with the solution above, returning
$$ -\frac{1}{2}\lambda \cdot \left[ R (K_j^i)^{-1} R^T \right] = (1\,\,\,\,\,r) $$
calling $M$ that messy $2\times 2$ matrix in brackets, it's possible to solve $\lambda$ as
$$ \lambda = -(2\,\,\,\,\,2r) \cdot M^{-1} $$
It's easy to check that the matrix $M$, and hence also it's inverse, are symmetric. And with this, the variance can be (finaly) solved:
$$ \sigma^2 = K_i^j w_j w^i = \frac{1}{4}\lambda R K^{-1} K K^{-1} R^T \lambda^T $$
$$ = \frac{1}{4}\lambda R K^{-1} R^T \lambda^T = \frac{1}{4}\lambda M \lambda^T $$
$$ = (1\,\,\,\,\,r) M^{-1} (1\,\,\,\,\,r)^T $$
That will be a very long calculation. I will just put the final result (remember, that formula is a scalar). The elements of M are
$$ M_{00} = 1_i (K_j^i)^{-1} 1^i $$
$$ M_{11} = r_i (K_j^i)^{-1} r^i $$
$$ M_{10} = M_{01} = 1_i (K_j^i)^{-1} r^i $$
and the minimal variance, in function of the desidered return is
$$ \sigma^2(r) = \frac{M_{00} r^2 - 2M_{01} r + M_{11}}{M_{00}M_{11} - M_{01}^2} $$
and the weights are
$$ w^j = (K_j^i)^{-1} R^T M^{-1} (1\,\,\,\,\,r)^T $$
I was wrong, the plot of variance-mean is a parabola, but it seems that volatility-mean is a hyperbolla.
So, returning to code:
End of explanation
volatility = np.sqrt(M[1,1]/(M[0,1]*M[0,1]))
returns = M[1,1]/M[1,0]
x = np.array([1, returns])
weigths = np.dot(np.linalg.inv(K), np.dot(R.transpose(), np.dot(np.linalg.inv(M), x.transpose())))
returns, volatility, weigths
Explanation: For example, at this day (24 Jannuary 2017) the market closed with the following prices for the stocks in this list:
GOOG: 823.87
MSFT: 63.52
AMZN: 822.44
AAPL: 119.97
with the assets allocation suggested by the optimum, if I have $10000 to invest, I will need to:
Buy 2 stocks of GOOG (2.5)
Buy 66 stocks of MSFT (66.5)
Buy 4 stocks of AMZN (4.4)
Don't buy AAPL (0.4)
Put the remaining $870.18 to take LIBOR rate? or to rebalance the portfolio? options?
Another kind of optimization that it's possible is to maximize the Sharpe ratio, defined as the ration of the expected return and the volatility. One can think of it as the returns for unit of risk, so maximizing it yields an optimization indeed. We know that any optimal portfolio is in the efficient frontier, so having an expression of this curve we only need to maximize
$$ S = \frac{r}{\sigma} $$
The expression for the volatility in function of the desidered return can be put as
$$ \sigma^2 = ar^2 + br + c $$
As we are interested only on the optimal curve, we will consider only the right side of this parabolla. This way, we have an additional advantage of having an invertible function on its domain. The return then have solution
$$ r = \frac{-b + \sqrt{b^2 - 4a(c - \sigma^2)}}{2a} $$
(seems like cheating, I know...), so the Sharpe ratio becomes
$$ S = \frac{-b + \sqrt{b^2 - 4a(c - \sigma^2)}}{2a \sigma} $$
do note that the part beyond square root is always positive thanks to the Bessel inequality, at least for this problem, so the Sharpe ratio will always defined positive.
Doing the derivative of S with respect to $\sigma$ and solving the problem for the maximum the solutions are
$$ \sigma = \pm \sqrt{\frac{4ac^2 - b^2 c}{b^2}} $$
and we take the positive value. With the real values back we obtain a very simple expression for the volatility:
$$ \sigma = \sqrt{\frac{M_{11}}{M_{01}^2}} $$
and the return:
$$ r = \frac{M_{00}}{M_{10}} $$
With the volatility the other quantities can be calculated as well using the formulas, so let's return again to code:
End of explanation
sig = np.linspace(0.28, 0.6, 100)
sharpe = (M[1,0] + np.sqrt(np.linalg.det(M))*np.sqrt(sig*sig*M[0,0] - 1))/(sig*M[0,0])
plt.plot(sig, sharpe)
Explanation: This time the algorithm ask to buy a lot of AMZN, some of GOOG and sell a little of the others. With the same $10000 the portfolio distribution would be:
Buy 2 stocks of GOOG (2.5)
Sell 9 stocks of MSFT (8.9)
Buy 11 stocks of AMZN (11.3)
Sell 6 stocks of AAPL (6.5)
With remaining $596.92 to play
It's possible to visualize the Sharpe factor for differents volatilities, rewriting the equation of the returns in function of the volatility as
$$ r = \frac{M_{10} + \sqrt{det(M)}\sqrt{M_{00}\sigma^2 - 1}}{M_{00}} $$
and hence the Sharpe ratio as
$$ S = \frac{M_{10} + \sqrt{det(M)}\sqrt{M_{00}\sigma^2 - 1}}{\sigma M_{00}} $$
End of explanation |
808 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Answering Descriptive and Exploratory Questions About my Project
Step1: Descriptive <br />
- What is N<sub>i</sub> for all i?
Step2: What is |V|?
Step3: Do the graphs G<sub>n<sub>i</sub></sub> contain any values of A that cannot be processed traditionally (i.e. inf, NaN)?
Step4: How sparse, |E<sub>n<sub>i</sub></sub>| / |V<sub>n<sub>i</sub></sub>|x|V<sub>n<sub>i</sub></sub>|, are the graphs?
Step5: Exploratory <br />
- What is mean(|E|) for each dataset i?
Step6: What is the average graph, where average here means the graph consiting of edges and weights corresponding to the average weight of a given potential edge across the all the datasets?
Step7: What is the distribution of max(A)-min(A) (i.e. dynamic range) for each dataset i? | Python Code:
# Import packages
import igraph as ig
import numpy as np
import math
import os
from subprocess import Popen, PIPE
# Initializing dataset names
dnames = list(['../data/desikan/MRN114', '../data/desikan/KKI2009', '../data/desikan/SWU4'])
print "Datasets: " + ", ".join(dnames)
print "D = " + str(len(dnames))
# Getting graph names
fs = list()
for dd in dnames:
fs.extend([root+'/'+file for root, dir, files in os.walk(dd) for file in files])
# fs
Explanation: Answering Descriptive and Exploratory Questions About my Project
End of explanation
# Get lengths of sublists and total list
print "N_i for each dataset (same order as above): " +\
", ".join([str(len(filter(lambda x: dd in x, fs))) for dd in dnames])
print "Total N = " + str(len(fs))
Explanation: Descriptive <br />
- What is N<sub>i</sub> for all i?
End of explanation
# We know that |V| is the same for all graphs, so here we really only need to load in 1
graph = ig.Graph.Read_GraphML(fs[0])
V = graph.vcount()
print "|V| = " + str(V)
Explanation: What is |V|?
End of explanation
# We actually need the graphs in memory now, it seems. I'll make a janky function for this
# in case I want to do it again later for some reason.
def loadGraphs(filenames, rois, printer=False):
A = np.zeros((rois, rois, len(filenames)))
for idx, files in enumerate(filenames):
if printer:
print "Loading: " + files
g = ig.Graph.Read_GraphML(files)
tempg = g.get_adjacency(attribute='weight')
A[:,:,idx] = np.asarray(tempg.data)
return A
A = loadGraphs(fs, V)
# Parallel index for datasets
c = 0
d_idx = []
for dd in dnames:
d_idx.append([c for root, dir, files in os.walk(dd) for file in files])
c += 1
d_idx = np.concatenate(d_idx)
A.shape
# Now that my graphs are here, let's count NaNs and Infs in the set of them
nans= np.count_nonzero(np.isnan(A))
infs= np.count_nonzero(np.isinf(A))
print "Our data contains " + str(nans) + " NaN values"
print "Our data contains " + str(infs) + " Inf values"
Explanation: Do the graphs G<sub>n<sub>i</sub></sub> contain any values of A that cannot be processed traditionally (i.e. inf, NaN)?
End of explanation
# First I'll want to binarize my adjacency matrix, then I can do population
# sparsity by summing all edges and diving by total number of possible edges.
# Alternatively, I could've done this per graph and averaged, or per dataset
# and averaged. I chose this one because I did.
bin_graph = 1.0*(A > 0)
sparsity = np.sum(bin_graph) / (V*V*len(fs))
print "The fraction of possible edges that exist in our data is: " + str(sparsity)
Explanation: How sparse, |E<sub>n<sub>i</sub></sub>| / |V<sub>n<sub>i</sub></sub>|x|V<sub>n<sub>i</sub></sub>|, are the graphs?
End of explanation
# This was computed across all graphs for each data set
bin_graph = 1.0*(A > 0)
for idx in np.unique(d_idx):
print 'Mean edge degree for dataset: ' + dnames[idx] + ' is: ' + \
str(np.sum((bin_graph[:,:,d_idx == idx]))/np.sum(d_idx == idx))
Explanation: Exploratory <br />
- What is mean(|E|) for each dataset i?
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
font = {'weight' : 'bold',
'size' : 14}
import matplotlib
matplotlib.rc('font', **font)
A_bar = (np.mean(A,axis=2))
plt.figure(figsize=(6, 6))
plt.imshow(A_bar)
plt.xticks((0, 34, 69), ('1', '35', '70'))
plt.yticks((0, 34, 69), ('1', '35', '70'))
plt.xlabel('Node')
plt.ylabel('Node')
plt.title('Mean Connectome')
plt.savefig('../figs/mean_connectome.png')
plt.show()
Explanation: What is the average graph, where average here means the graph consiting of edges and weights corresponding to the average weight of a given potential edge across the all the datasets?
End of explanation
# first find min(A), max(A) for all data
for idx in np.unique(d_idx):
A_ds = A[:,:,d_idx == idx]
fig = plt.figure()
ax = fig.add_subplot(111)
plt.hist(np.log(np.ravel(A_ds)+1), bins=30) #adding 1 to prevent divide by 0
plt.title('Edge Weights of ' + dnames[idx].split('/')[-1] + ' Dataset')
plt.xlabel("Value (log_e)")
plt.ylabel("Frequency")
ax.set_yscale('log')
plt.savefig('../figs/'+dnames[idx].split('/')[-1]+'_ew_initial.png')
plt.show()
Explanation: What is the distribution of max(A)-min(A) (i.e. dynamic range) for each dataset i?
End of explanation |
809 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'sandbox-1', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: MESSY-CONSORTIUM
Source ID: SANDBOX-1
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:10
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensionsal forcing variables, e.g. U, V, W, T, Q, P, conventive mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensionsal forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. &Document the relative weight given to climate performance metrics versus process oriented metrics, &and on the possible conflicts with parameterization level tuning. In particular describe any struggle &with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixinrg rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
810 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
机器学习工程师纳米学位
入门
项目 0
Step1: 从泰坦尼克号的数据样本中,我们可以看到船上每位旅客的特征
Survived:是否存活(0代表否,1代表是)
Pclass:社会阶级(1代表上层阶级,2代表中层阶级,3代表底层阶级)
Name:船上乘客的名字
Sex:船上乘客的性别
Age
Step3: 这个例子展示了如何将泰坦尼克号的 Survived 数据从 DataFrame 移除。注意到 data(乘客数据)和 outcomes (是否存活)现在已经匹配好。这意味着对于任何乘客的 data.loc[i] 都有对应的存活的结果 outcome[i]。
为了验证我们预测的结果,我们需要一个标准来给我们的预测打分。因为我们最感兴趣的是我们预测的准确率,既正确预测乘客存活的比例。运行下面的代码来创建我们的 accuracy_score 函数以对前五名乘客的预测来做测试。
思考题:从第六个乘客算起,如果我们预测他们全部都存活,你觉得我们预测的准确率是多少?
Step5: 提示:如果你保存 iPython Notebook,代码运行的输出也将被保存。但是,一旦你重新打开项目,你的工作区将会被重置。请确保每次都从上次离开的地方运行代码来重新生成变量和函数。
预测
如果我们要预测泰坦尼克号上的乘客是否存活,但是我们又对他们一无所知,那么最好的预测就是船上的人无一幸免。这是因为,我们可以假定当船沉没的时候大多数乘客都遇难了。下面的 predictions_0 函数就预测船上的乘客全部遇难。
Step6: 问题1
对比真实的泰坦尼克号的数据,如果我们做一个所有乘客都没有存活的预测,你认为这个预测的准确率能达到多少?
提示:运行下面的代码来查看预测的准确率。
Step7: 回答
Step9: 观察泰坦尼克号上乘客存活的数据统计,我们可以发现大部分男性乘客在船沉没的时候都遇难了。相反的,大部分女性乘客都在事故中生还。让我们在先前推断的基础上继续创建:如果乘客是男性,那么我们就预测他们遇难;如果乘客是女性,那么我们预测他们在事故中活了下来。
将下面的代码补充完整,让函数可以进行正确预测。
提示:您可以用访问 dictionary(字典)的方法来访问船上乘客的每个特征对应的值。例如, passenger['Sex'] 返回乘客的性别。
Step10: 问题2
当我们预测船上女性乘客全部存活,而剩下的人全部遇难,那么我们预测的准确率会达到多少?
提示:运行下面的代码来查看我们预测的准确率。
Step11: 回答
Step13: 仔细观察泰坦尼克号存活的数据统计,在船沉没的时候,大部分小于10岁的男孩都活着,而大多数10岁以上的男性都随着船的沉没而遇难。让我们继续在先前预测的基础上构建:如果乘客是女性,那么我们就预测她们全部存活;如果乘客是男性并且小于10岁,我们也会预测他们全部存活;所有其它我们就预测他们都没有幸存。
将下面缺失的代码补充完整,让我们的函数可以实现预测。
提示
Step14: 问题3
当预测所有女性以及小于10岁的男性都存活的时候,预测的准确率会达到多少?
提示:运行下面的代码来查看预测的准确率。
Step15: 回答
Step17: 当查看和研究了图形化的泰坦尼克号上乘客的数据统计后,请补全下面这段代码中缺失的部分,使得函数可以返回你的预测。
在到达最终的预测模型前请确保记录你尝试过的各种特征和条件。
提示
Step18: 结论
请描述你实现80%准确度的预测模型所经历的步骤。您观察过哪些特征?某些特性是否比其他特征更有帮助?你用了什么条件来预测生还结果?你最终的预测的准确率是多少?
提示 | Python Code:
import numpy as np
import pandas as pd
# RMS Titanic data visualization code
# 数据可视化代码
from titanic_visualizations import survival_stats
from IPython.display import display
%matplotlib inline
# Load the dataset
# 加载数据集
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
# 显示数据列表中的前几项乘客数据
display(full_data.head())
Explanation: 机器学习工程师纳米学位
入门
项目 0: 预测泰坦尼克号乘客生还率
1912年,泰坦尼克号在第一次航行中就与冰山相撞沉没,导致了大部分乘客和船员身亡。在这个入门项目中,我们将探索部分泰坦尼克号旅客名单,来确定哪些特征可以最好地预测一个人是否会生还。为了完成这个项目,你将需要实现几个基于条件的预测并回答下面的问题。我们将根据代码的完成度和对问题的解答来对你提交的项目的进行评估。
提示:这样的文字将会指导你如何使用 iPython Notebook 来完成项目。
点击这里查看本文件的英文版本。
开始
当我们开始处理泰坦尼克号乘客数据时,会先导入我们需要的功能模块以及将数据加载到 pandas DataFrame。运行下面区域中的代码加载数据,并使用 .head() 函数显示前几项乘客数据。
提示:你可以通过单击代码区域,然后使用键盘快捷键 Shift+Enter 或 Shift+ Return 来运行代码。或者在选择代码后使用播放(run cell)按钮执行代码。像这样的 MarkDown 文本可以通过双击编辑,并使用这些相同的快捷键保存。Markdown 允许你编写易读的纯文本并且可以转换为 HTML。
End of explanation
# Store the 'Survived' feature in a new variable and remove it from the dataset
# 从数据集中移除 'Survived' 这个特征,并将它存储在一个新的变量中。
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
# 显示已移除 'Survived' 特征的数据集
display(data.head())
Explanation: 从泰坦尼克号的数据样本中,我们可以看到船上每位旅客的特征
Survived:是否存活(0代表否,1代表是)
Pclass:社会阶级(1代表上层阶级,2代表中层阶级,3代表底层阶级)
Name:船上乘客的名字
Sex:船上乘客的性别
Age:船上乘客的年龄(可能存在 NaN)
SibSp:乘客在船上的兄弟姐妹和配偶的数量
Parch:乘客在船上的父母以及小孩的数量
Ticket:乘客船票的编号
Fare:乘客为船票支付的费用
Cabin:乘客所在船舱的编号(可能存在 NaN)
Embarked:乘客上船的港口(C 代表从 Cherbourg 登船,Q 代表从 Queenstown 登船,S 代表从 Southampton 登船)
因为我们感兴趣的是每个乘客或船员是否在事故中活了下来。可以将 Survived 这一特征从这个数据集移除,并且用一个单独的变量 outcomes 来存储。它也做为我们要预测的目标。
运行该代码,从数据集中移除 Survived 这个特征,并将它存储在变量 outcomes 中。
End of explanation
def accuracy_score(truth, pred):
Returns accuracy score for input truth and predictions.
# Ensure that the number of predictions matches number of outcomes
# 确保预测的数量与结果的数量一致
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
# 计算预测准确率(百分比)
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
# 测试 'accuracy_score' 函数
predictions = pd.Series(np.ones(5, dtype = int))
print accuracy_score(outcomes[:5], predictions)
Explanation: 这个例子展示了如何将泰坦尼克号的 Survived 数据从 DataFrame 移除。注意到 data(乘客数据)和 outcomes (是否存活)现在已经匹配好。这意味着对于任何乘客的 data.loc[i] 都有对应的存活的结果 outcome[i]。
为了验证我们预测的结果,我们需要一个标准来给我们的预测打分。因为我们最感兴趣的是我们预测的准确率,既正确预测乘客存活的比例。运行下面的代码来创建我们的 accuracy_score 函数以对前五名乘客的预测来做测试。
思考题:从第六个乘客算起,如果我们预测他们全部都存活,你觉得我们预测的准确率是多少?
End of explanation
def predictions_0(data):
Model with no features. Always predicts a passenger did not survive.
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
# 预测 'passenger' 的生还率
predictions.append(0)
# Return our predictions
# 返回预测结果
return pd.Series(predictions)
# Make the predictions
# 进行预测
predictions = predictions_0(data)
Explanation: 提示:如果你保存 iPython Notebook,代码运行的输出也将被保存。但是,一旦你重新打开项目,你的工作区将会被重置。请确保每次都从上次离开的地方运行代码来重新生成变量和函数。
预测
如果我们要预测泰坦尼克号上的乘客是否存活,但是我们又对他们一无所知,那么最好的预测就是船上的人无一幸免。这是因为,我们可以假定当船沉没的时候大多数乘客都遇难了。下面的 predictions_0 函数就预测船上的乘客全部遇难。
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: 问题1
对比真实的泰坦尼克号的数据,如果我们做一个所有乘客都没有存活的预测,你认为这个预测的准确率能达到多少?
提示:运行下面的代码来查看预测的准确率。
End of explanation
survival_stats(data, outcomes, 'Sex')
Explanation: 回答: Predictions have an accuracy of 61.62%.
我们可以使用 survival_stats 函数来看看 Sex 这一特征对乘客的存活率有多大影响。这个函数定义在名为 titanic_visualizations.py 的 Python 脚本文件中,我们的项目提供了这个文件。传递给函数的前两个参数分别是泰坦尼克号的乘客数据和乘客的 生还结果。第三个参数表明我们会依据哪个特征来绘制图形。
运行下面的代码绘制出依据乘客性别计算存活率的柱形图。
End of explanation
def predictions_1(data):
Model with one feature:
- Predict a passenger survived if they are female.
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# 移除下方的 'pass' 声明
# and write your prediction conditions here
# 输入你自己的预测条件
if passenger['Sex'] == 'male':
predictions.append(0)
else:
predictions.append(1)
# Return our predictions
# 返回预测结果
return pd.Series(predictions)
# Make the predictions
# 进行预测
predictions = predictions_1(data)
Explanation: 观察泰坦尼克号上乘客存活的数据统计,我们可以发现大部分男性乘客在船沉没的时候都遇难了。相反的,大部分女性乘客都在事故中生还。让我们在先前推断的基础上继续创建:如果乘客是男性,那么我们就预测他们遇难;如果乘客是女性,那么我们预测他们在事故中活了下来。
将下面的代码补充完整,让函数可以进行正确预测。
提示:您可以用访问 dictionary(字典)的方法来访问船上乘客的每个特征对应的值。例如, passenger['Sex'] 返回乘客的性别。
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: 问题2
当我们预测船上女性乘客全部存活,而剩下的人全部遇难,那么我们预测的准确率会达到多少?
提示:运行下面的代码来查看我们预测的准确率。
End of explanation
survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
Explanation: 回答: Predictions have an accuracy of 78.68%.
仅仅使用乘客性别(Sex)这一特征,我们预测的准确性就有了明显的提高。现在再看一下使用额外的特征能否更进一步提升我们的预测准确度。例如,综合考虑所有在泰坦尼克号上的男性乘客:我们是否找到这些乘客中的一个子集,他们的存活概率较高。让我们再次使用 survival_stats 函数来看看每位男性乘客的年龄(Age)。这一次,我们将使用第四个参数来限定柱形图中只有男性乘客。
运行下面这段代码,把男性基于年龄的生存结果绘制出来。
End of explanation
def predictions_2(data):
Model with two features:
- Predict a passenger survived if they are female.
- Predict a passenger survived if they are male and younger than 10.
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# 移除下方的 'pass' 声明
# and write your prediction conditions here
# 输入你自己的预测条件
if passenger['Sex'] == 'male':
if passenger['Age'] > 10:
predictions.append(0)
else:
predictions.append(1)
else:
predictions.append(1)
# Return our predictions
# 返回预测结果
return pd.Series(predictions)
# Make the predictions
# 进行预测
predictions = predictions_2(data)
Explanation: 仔细观察泰坦尼克号存活的数据统计,在船沉没的时候,大部分小于10岁的男孩都活着,而大多数10岁以上的男性都随着船的沉没而遇难。让我们继续在先前预测的基础上构建:如果乘客是女性,那么我们就预测她们全部存活;如果乘客是男性并且小于10岁,我们也会预测他们全部存活;所有其它我们就预测他们都没有幸存。
将下面缺失的代码补充完整,让我们的函数可以实现预测。
提示: 您可以用之前 predictions_1 的代码作为开始来修改代码,实现新的预测函数。
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: 问题3
当预测所有女性以及小于10岁的男性都存活的时候,预测的准确率会达到多少?
提示:运行下面的代码来查看预测的准确率。
End of explanation
survival_stats(data, outcomes, 'Age', ["Sex == 'male'", "Age < 18"])
Explanation: 回答: Predictions have an accuracy of 68.91%.
添加年龄(Age)特征与性别(Sex)的结合比单独使用性别(Sex)也提高了不少准确度。现在该你来做预测了:找到一系列的特征和条件来对数据进行划分,使得预测结果提高到80%以上。这可能需要多个特性和多个层次的条件语句才会成功。你可以在不同的条件下多次使用相同的特征。Pclass,Sex,Age,SibSp 和 Parch 是建议尝试使用的特征。
使用 survival_stats 函数来观测泰坦尼克号上乘客存活的数据统计。
提示: 要使用多个过滤条件,把每一个条件放在一个列表里作为最后一个参数传递进去。例如: ["Sex == 'male'", "Age < 18"]
End of explanation
def predictions_3(data):
Model with multiple features. Makes a prediction with an accuracy of at least 80%.
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if passenger['Sex'] == 'male':
if passenger['Pclass'] > 2:
predictions.append(0)
elif passenger['Age'] > 10:
predictions.append(0)
elif passenger['Parch'] < 1:
predictions.append(0)
else:
predictions.append(1)
elif passenger['Parch'] > 3:
predictions.append(0)
else:
predictions.append(1)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
Explanation: 当查看和研究了图形化的泰坦尼克号上乘客的数据统计后,请补全下面这段代码中缺失的部分,使得函数可以返回你的预测。
在到达最终的预测模型前请确保记录你尝试过的各种特征和条件。
提示: 您可以用之前 predictions_2 的代码作为开始来修改代码,实现新的预测函数。
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: 结论
请描述你实现80%准确度的预测模型所经历的步骤。您观察过哪些特征?某些特性是否比其他特征更有帮助?你用了什么条件来预测生还结果?你最终的预测的准确率是多少?
提示:运行下面的代码来查看你的预测准确度。
End of explanation |
811 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example of DOV search methods for soil data (bodemgegevens)
Use cases explained below
Introduction to the bodem-objects
Get bodemsites in a bounding box
Get bodemlocaties with specific properties
Get all direct and indirect bodemobservaties linked to a bodemlocatie
Get all bodemobservaties in a bodemmonster
Find all bodemlocaties where observations exist for organic carbon percentage in East-Flanders between 0 and 30 cm deep
Calculate carbon stock in Ghent in the layer 0 - 23 cm
Step1: Get information about the datatype 'Bodemlocatie'
Other datatypes are also possible
Step2: A description is provided for the 'Bodemlocatie' datatype
Step3: The different fields that are available for objects of the 'Bodemlocatie' datatype can be requested with the get_fields() method
Step4: You can get more information of a field by requesting it from the fields dictionary
Step5: Optionally, if the values of the field have a specific domain the possible values are listed as values
Step6: Example use cases
Get bodemsites in a bounding box
Get data for all the bodemsites that are geographically located completely within the bounds of the specified box.
The coordinates are in the Belgian Lambert72 (EPSG
Step7: The dataframe contains a list of bodemsites. The available data are flattened to represent unique attributes per row of the dataframe.
Using the pkey_bodemsite field one can request the details of this bodemsite in a webbrowser
Step8: Get bodemlocaties with specific properties
Next to querying bodem objects based on their geographic location within a bounding box, we can also search for bodem objects matching a specific set of properties.
The same methods can be used for all bodem objects.
For this we can build a query using a combination of the 'Bodemlocatie' fields and operators provided by the WFS protocol.
A list of possible operators can be found below
Step9: In this example we build a query using the PropertyIsEqualTo operator to find all bodemlocaties with bodemstreek 'zandstreek'.
We use max_features=10 to limit the results to 10.
Step10: Once again we can use the pkey_bodemlocatie as a permanent link to the information of these bodemlocaties
Step11: Get all direct and indirect bodemobservaties in bodemlocatie
Get all bodemobservaties in a specific bodemlocatie.
Direct means bodemobservaties directly linked with a bodemlocatie.
Indirect means bodemobservaties linked with child-objects of the bodemlocatie, like bodemmonsters.
Step12: Get all bodemobservaties in a bodemmonster
Get all bodemobservaties linked with a bodemmonster
Step13: Find all soil locations with a given soil classification
Get all soil locations with a given soil classification
Step14: We can also get their observations
Step15: Get all depth intervals and observations from a soil location
Step16: And get their observations
Step17: Find all bodemlocaties where observations exist for organic carbon percentage in East-Flanders between 0 and 30 cm deep
Get boundaries of East-Flanders by using a WFS
Step18: Get bodemobservaties in East-Flanders with the requested properties
Step19: Now we have all observations with the requested properties.
Next we need to link them with the bodemlocatie
Step20: To export the results to CSV, you can use for example
Step21: Calculate carbon stock in Ghent in the layer 0 - 23 cm
At the moment, there are no bulkdensities available. As soon as there are observations with bulkdensities, this example can be used to calculate a carbon stock in a layer.
Get boundaries of Ghent using WFS
Step22: First get all observations in Ghent for organisch C percentage in requested layer
Step23: Then get all observations in Ghent for bulkdensity in requested layer
Step24: Merge results together based on their bodemlocatie. Only remains the records where both parameters exists
Step25: Filter Aardewerk soil locations
Since we know that Aardewerk soil locations make use of a specific suffix, a query could be built filtering these out.
Since we only need to match a partial string in the name, we will build a query using the PropertyIsLike operator to find all Aardewerk bodemlocaties.
We use max_features=10 to limit the results to 10.
Step26: As seen in the soil data example, we can use the pkey_bodemlocatie as a permanent link to the information of these bodemlocaties | Python Code:
%matplotlib inline
import inspect, sys
import warnings; warnings.simplefilter('ignore')
# check pydov path
import pydov
Explanation: Example of DOV search methods for soil data (bodemgegevens)
Use cases explained below
Introduction to the bodem-objects
Get bodemsites in a bounding box
Get bodemlocaties with specific properties
Get all direct and indirect bodemobservaties linked to a bodemlocatie
Get all bodemobservaties in a bodemmonster
Find all bodemlocaties where observations exist for organic carbon percentage in East-Flanders between 0 and 30 cm deep
Calculate carbon stock in Ghent in the layer 0 - 23 cm
End of explanation
from pydov.search.bodemlocatie import BodemlocatieSearch
bodemlocatie = BodemlocatieSearch()
Explanation: Get information about the datatype 'Bodemlocatie'
Other datatypes are also possible:
* Bodemsite: BodemsiteSearch
* Bodemmonster: BodemmonsterSearch
* Bodemobservatie: BodemobservatieSearch
End of explanation
bodemlocatie.get_description()
Explanation: A description is provided for the 'Bodemlocatie' datatype:
End of explanation
fields = bodemlocatie.get_fields()
# print available fields
for f in fields.values():
print(f['name'])
Explanation: The different fields that are available for objects of the 'Bodemlocatie' datatype can be requested with the get_fields() method:
End of explanation
fields['type']
Explanation: You can get more information of a field by requesting it from the fields dictionary:
* name: name of the field
* definition: definition of this field
* cost: currently this is either 1 or 10, depending on the datasource of the field. It is an indication of the expected time it will take to retrieve this field in the output dataframe.
* notnull: whether the field is mandatory or not
* type: datatype of the values of this field
End of explanation
fields['type']['values']
Explanation: Optionally, if the values of the field have a specific domain the possible values are listed as values:
End of explanation
from pydov.search.bodemsite import BodemsiteSearch
bodemsite = BodemsiteSearch()
from pydov.util.location import Within, Box
df = bodemsite.search(location=Within(Box(148000, 160800, 160000, 169500)))
df.head()
Explanation: Example use cases
Get bodemsites in a bounding box
Get data for all the bodemsites that are geographically located completely within the bounds of the specified box.
The coordinates are in the Belgian Lambert72 (EPSG:31370) coordinate system and are given in the order of lower left x, lower left y, upper right x, upper right y.
The same methods can be used for other bodem objects.
End of explanation
for pkey_bodemsite in set(df.pkey_bodemsite):
print(pkey_bodemsite)
Explanation: The dataframe contains a list of bodemsites. The available data are flattened to represent unique attributes per row of the dataframe.
Using the pkey_bodemsite field one can request the details of this bodemsite in a webbrowser:
End of explanation
[i for i,j in inspect.getmembers(sys.modules['owslib.fes'], inspect.isclass) if 'Property' in i]
Explanation: Get bodemlocaties with specific properties
Next to querying bodem objects based on their geographic location within a bounding box, we can also search for bodem objects matching a specific set of properties.
The same methods can be used for all bodem objects.
For this we can build a query using a combination of the 'Bodemlocatie' fields and operators provided by the WFS protocol.
A list of possible operators can be found below:
End of explanation
from owslib.fes import PropertyIsEqualTo
query = PropertyIsEqualTo(propertyname='bodemstreek',
literal='Zandstreek')
df = bodemlocatie.search(query=query, max_features=10)
df.head()
Explanation: In this example we build a query using the PropertyIsEqualTo operator to find all bodemlocaties with bodemstreek 'zandstreek'.
We use max_features=10 to limit the results to 10.
End of explanation
for pkey_bodemlocatie in set(df.pkey_bodemlocatie):
print(pkey_bodemlocatie)
Explanation: Once again we can use the pkey_bodemlocatie as a permanent link to the information of these bodemlocaties:
End of explanation
from pydov.search.bodemobservatie import BodemobservatieSearch
from pydov.search.bodemlocatie import BodemlocatieSearch
bodemobservatie = BodemobservatieSearch()
bodemlocatie = BodemlocatieSearch()
from owslib.fes import PropertyIsEqualTo
from pydov.util.query import Join
bodemlocaties = bodemlocatie.search(query=PropertyIsEqualTo(propertyname='naam', literal='VMM_INF_52'),
return_fields=('pkey_bodemlocatie',))
bodemobservaties = bodemobservatie.search(query=Join(bodemlocaties, 'pkey_bodemlocatie'))
bodemobservaties.head()
Explanation: Get all direct and indirect bodemobservaties in bodemlocatie
Get all bodemobservaties in a specific bodemlocatie.
Direct means bodemobservaties directly linked with a bodemlocatie.
Indirect means bodemobservaties linked with child-objects of the bodemlocatie, like bodemmonsters.
End of explanation
from pydov.search.bodemmonster import BodemmonsterSearch
bodemmonster = BodemmonsterSearch()
bodemmonsters = bodemmonster.search(query=PropertyIsEqualTo(propertyname = 'identificatie', literal='A0057359'),
return_fields=('pkey_bodemmonster',))
bodemobservaties = bodemobservatie.search(query=Join(bodemmonsters, on = 'pkey_parent', using='pkey_bodemmonster'))
bodemobservaties.head()
Explanation: Get all bodemobservaties in a bodemmonster
Get all bodemobservaties linked with a bodemmonster
End of explanation
from owslib.fes import PropertyIsEqualTo
from pydov.util.query import Join
from pydov.search.bodemclassificatie import BodemclassificatieSearch
from pydov.search.bodemlocatie import BodemlocatieSearch
bodemclassificatie = BodemclassificatieSearch()
bl_Scbz = bodemclassificatie.search(query=PropertyIsEqualTo('bodemtype', 'Scbz'), return_fields=['pkey_bodemlocatie'])
bodemlocatie = BodemlocatieSearch()
bl = bodemlocatie.search(query=Join(bl_Scbz, 'pkey_bodemlocatie'))
bl.head()
Explanation: Find all soil locations with a given soil classification
Get all soil locations with a given soil classification:
End of explanation
from pydov.search.bodemobservatie import BodemobservatieSearch
bodemobservatie = BodemobservatieSearch()
obs = bodemobservatie.search(query=Join(bl_Scbz, 'pkey_bodemlocatie'), max_features=10)
obs.head()
Explanation: We can also get their observations:
End of explanation
from pydov.search.bodemlocatie import BodemlocatieSearch
from pydov.search.bodemdiepteinterval import BodemdiepteintervalSearch
from pydov.util.query import Join
from owslib.fes import PropertyIsEqualTo
bodemlocatie = BodemlocatieSearch()
bodemdiepteinterval = BodemdiepteintervalSearch()
bodemlocaties = bodemlocatie.search(query=PropertyIsEqualTo(propertyname='naam', literal='VMM_INF_52'),
return_fields=('pkey_bodemlocatie',))
bodemdiepteintervallen = bodemdiepteinterval.search(
query=Join(bodemlocaties, on='pkey_bodemlocatie'))
bodemdiepteintervallen
Explanation: Get all depth intervals and observations from a soil location
End of explanation
from pydov.search.bodemobservatie import BodemobservatieSearch
bodemobservatie = BodemobservatieSearch()
bodemobservaties = bodemobservatie.search(query=Join(
bodemdiepteintervallen, on='pkey_parent', using='pkey_diepteinterval'))
bodemobservaties.head()
Explanation: And get their observations:
End of explanation
from owslib.etree import etree
from owslib.wfs import WebFeatureService
from pydov.util.location import (
GmlFilter,
Within,
)
provinciegrenzen = WebFeatureService(
'https://geoservices.informatievlaanderen.be/overdrachtdiensten/VRBG/wfs',
version='1.1.0')
provincie_filter = PropertyIsEqualTo(propertyname='NAAM', literal='Oost-Vlaanderen')
provincie_poly = provinciegrenzen.getfeature(
typename='VRBG:Refprv',
filter=etree.tostring(provincie_filter.toXML()).decode("utf8")).read()
Explanation: Find all bodemlocaties where observations exist for organic carbon percentage in East-Flanders between 0 and 30 cm deep
Get boundaries of East-Flanders by using a WFS
End of explanation
from owslib.fes import PropertyIsEqualTo
from owslib.fes import And
from pydov.search.bodemobservatie import BodemobservatieSearch
bodemobservatie = BodemobservatieSearch()
# Select only layers with the boundaries 10-30
bodemobservaties = bodemobservatie.search(
location=GmlFilter(provincie_poly, Within),
query=And([
PropertyIsEqualTo(propertyname="parameter", literal="Organische C - percentage"),
PropertyIsEqualTo(propertyname="diepte_tot_cm", literal = '30'),
PropertyIsEqualTo(propertyname="diepte_van_cm", literal = '0')
]))
bodemobservaties.head()
Explanation: Get bodemobservaties in East-Flanders with the requested properties
End of explanation
from pydov.search.bodemlocatie import BodemlocatieSearch
from pydov.util.query import Join
import pandas as pd
# Find bodemlocatie information for all observations
bodemlocatie = BodemlocatieSearch()
bodemlocaties = bodemlocatie.search(query=Join(bodemobservaties, on = 'pkey_bodemlocatie', using='pkey_bodemlocatie'))
# remove x, y, mv_mtaw from observatie dataframe to prevent duplicates while merging
bodemobservaties = bodemobservaties.drop(['x', 'y', 'mv_mtaw'], axis=1)
# Merge the bodemlocatie information together with the observation information
merged = pd.merge(bodemobservaties, bodemlocaties, on="pkey_bodemlocatie", how='left')
merged.head()
Explanation: Now we have all observations with the requested properties.
Next we need to link them with the bodemlocatie
End of explanation
import folium
from folium.plugins import MarkerCluster
from pyproj import Transformer
# convert the coordinates to lat/lon for folium
def convert_latlon(x1, y1):
transformer = Transformer.from_crs("epsg:31370", "epsg:4326", always_xy=True)
x2,y2 = transformer.transform(x1, y1)
return x2, y2
#convert coordinates to wgs84
merged['lon'], merged['lat'] = zip(*map(convert_latlon, merged['x'], merged['y']))
# Get only location and value
loclist = merged[['lat', 'lon']].values.tolist()
# initialize the Folium map on the centre of the selected locations, play with the zoom until ok
fmap = folium.Map(location=[merged['lat'].mean(), merged['lon'].mean()], zoom_start=10)
marker_cluster = MarkerCluster().add_to(fmap)
for loc in range(0, len(loclist)):
popup = 'Bodemlocatie: ' + merged['pkey_bodemlocatie'][loc]
popup = popup + '<br> Bodemobservatie: ' + merged['pkey_bodemobservatie'][loc]
popup = popup + '<br> Value: ' + merged['waarde'][loc] + "%"
folium.Marker(loclist[loc], popup=popup).add_to(marker_cluster)
fmap
Explanation: To export the results to CSV, you can use for example:
python
merged.to_csv("test.csv")
We can plot also the results on a map
This can take some time!
End of explanation
from owslib.etree import etree
from owslib.fes import PropertyIsEqualTo
from owslib.wfs import WebFeatureService
from pydov.util.location import (
GmlFilter,
Within,
)
stadsgrenzen = WebFeatureService(
'https://geoservices.informatievlaanderen.be/overdrachtdiensten/VRBG/wfs',
version='1.1.0')
gent_filter = PropertyIsEqualTo(propertyname='NAAM', literal='Gent')
gent_poly = stadsgrenzen.getfeature(
typename='VRBG:Refgem',
filter=etree.tostring(gent_filter.toXML()).decode("utf8")).read()
Explanation: Calculate carbon stock in Ghent in the layer 0 - 23 cm
At the moment, there are no bulkdensities available. As soon as there are observations with bulkdensities, this example can be used to calculate a carbon stock in a layer.
Get boundaries of Ghent using WFS
End of explanation
from owslib.fes import PropertyIsEqualTo, PropertyIsGreaterThan, PropertyIsLessThan
from owslib.fes import And
from pydov.search.bodemobservatie import BodemobservatieSearch
bodemobservatie = BodemobservatieSearch()
# all layers intersect the layer 0-23cm
carbon_observaties = bodemobservatie.search(
location=GmlFilter(gent_poly, Within),
query=And([
PropertyIsEqualTo(propertyname="parameter", literal="Organische C - percentage"),
PropertyIsGreaterThan(propertyname="diepte_tot_cm", literal = '0'),
PropertyIsLessThan(propertyname="diepte_van_cm", literal = '23')
]),
return_fields=('pkey_bodemlocatie', 'waarde'))
carbon_observaties = carbon_observaties.rename(columns={"waarde": "organic_c_percentage"})
carbon_observaties.head()
Explanation: First get all observations in Ghent for organisch C percentage in requested layer
End of explanation
density_observaties = bodemobservatie.search(
location=GmlFilter(gent_poly, Within),
query=And([
PropertyIsEqualTo(propertyname="parameter", literal="Bulkdensiteit - gemeten"),
PropertyIsGreaterThan(propertyname="diepte_tot_cm", literal = '0'),
PropertyIsLessThan(propertyname="diepte_van_cm", literal = '23')
]),
return_fields=('pkey_bodemlocatie', 'waarde'))
density_observaties = density_observaties.rename(columns={"waarde": "bulkdensity"})
density_observaties.head()
Explanation: Then get all observations in Ghent for bulkdensity in requested layer
End of explanation
import pandas as pd
merged = pd.merge(carbon_observaties, density_observaties, on="pkey_bodemlocatie")
merged.head()
Explanation: Merge results together based on their bodemlocatie. Only remains the records where both parameters exists
End of explanation
from owslib.fes import PropertyIsLike
query = PropertyIsLike(propertyname='naam',
literal='KART_PROF_%', wildCard='%')
df = bodemlocatie.search(query=query, max_features=10)
df.head()
Explanation: Filter Aardewerk soil locations
Since we know that Aardewerk soil locations make use of a specific suffix, a query could be built filtering these out.
Since we only need to match a partial string in the name, we will build a query using the PropertyIsLike operator to find all Aardewerk bodemlocaties.
We use max_features=10 to limit the results to 10.
End of explanation
for pkey_bodemlocatie in set(df.pkey_bodemlocatie):
print(pkey_bodemlocatie)
Explanation: As seen in the soil data example, we can use the pkey_bodemlocatie as a permanent link to the information of these bodemlocaties:
End of explanation |
812 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
For this to work you are going to need a collection of engines to connect to. You can probably create a local collection by just running ipcluster -n 8; for more sophisticated setups read the ipyparallel docs. You set up "profiles" and can start and connect to different engine setups by specifying profiles.
Step1: Parallel execution! You can turn the delay up to confirm that it's really running in parallel.
Step2: Just checking that exceptions are correctly propagated - that is, sent back from the engines and attached to the Future, to be handled however the Future's creator thinks appropriate. map just cancels all outstanding Futures (running Futures cannot be interrupted) and re-raises the exception. So the below should just take a second, not a hundred seconds.
Step3: You need to make sure the objects you care about are available on the engines. A "direct view" lets you push them into the engine namespace.
Step4: Tracebacks are slightly wonky since this is interactive code but at least you can see the remote stack.
Step5: Let's make sure the executor finishes all its jobs even after shutdown is called. | Python Code:
c = Client()
c.ids
Explanation: For this to work you are going to need a collection of engines to connect to. You can probably create a local collection by just running ipcluster -n 8; for more sophisticated setups read the ipyparallel docs. You set up "profiles" and can start and connect to different engine setups by specifying profiles.
End of explanation
def f(x):
import time
time.sleep(1)
return x**2
with ipy_executor.IpyExecutor(c) as ex:
print(list(ex.map(f,range(3*len(c.ids)))))
Explanation: Parallel execution! You can turn the delay up to confirm that it's really running in parallel.
End of explanation
def g(x):
import time
time.sleep(1)
if x==3:
raise ValueError("Oops!")
return x**2
with ipy_executor.IpyExecutor(c) as ex:
list(ex.map(g,range(100*len(c.ids))))
Explanation: Just checking that exceptions are correctly propagated - that is, sent back from the engines and attached to the Future, to be handled however the Future's creator thinks appropriate. map just cancels all outstanding Futures (running Futures cannot be interrupted) and re-raises the exception. So the below should just take a second, not a hundred seconds.
End of explanation
dview = c[:]
def h(x):
return h_internal(x)
exponent = 2
def h_internal(x):
return x**exponent
dview.push(dict(h_internal=h_internal,
exponent=exponent))
with ipy_executor.IpyExecutor(c) as ex:
print(list(ex.map(h,range(30))))
Explanation: You need to make sure the objects you care about are available on the engines. A "direct view" lets you push them into the engine namespace.
End of explanation
def k(x):
return k_internal(x)
def k_internal(x):
if x==7:
raise ValueError("blarg")
return x**2
dview.push(dict(k_internal=k_internal))
with ipy_executor.IpyExecutor(c) as ex:
print(list(ex.map(k,range(30))))
Explanation: Tracebacks are slightly wonky since this is interactive code but at least you can see the remote stack.
End of explanation
def l(x):
import time
time.sleep(0.1)
return 2*x
ex = ipy_executor.IpyExecutor(c)
fs = [ex.submit(l,i) for i in range(100)]
ex.shutdown(wait=False)
del ex
for f in fs:
print(f.result(), end=' ')
Explanation: Let's make sure the executor finishes all its jobs even after shutdown is called.
End of explanation |
813 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kaggle San Francisco Crime Classification
Berkeley MIDS W207 Final Project
Step1: Local, individual load of updated data set (with weather data integrated) into training, development, and test subsets.
Step2: The Best RF Classifier
Step3: Distribution of Posterior Probabilities
Step5: Error Analysis
Step7: The fact that the classifier accuracy is higher for predictions with a higher posterior probability shows that our model is strongly calibrated. However, the distribution of these posterior probabilities shows that our classifier rarely has a 'confident' prediction.
Error Analysis
Step9: The classification report shows that the model still has issues of every sort with regards to accuracy-- both false positives and false negatives are an issue across many classes.
The relatively high recall scores for larceny/theft and prostitution are noticeable, showing that our model had fewer false negatives for these two classes. However, their accuracies are still low.
Error Analysis | Python Code:
# Additional Libraries
%matplotlib inline
import matplotlib.pyplot as plt
# Import relevant libraries:
import time
import numpy as np
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier
from sklearn import preprocessing
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
from sklearn.naive_bayes import BernoulliNB
from sklearn.naive_bayes import MultinomialNB
from sklearn.naive_bayes import GaussianNB
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import log_loss
from sklearn.linear_model import LogisticRegression
from sklearn import svm
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
# Import Meta-estimators
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.ensemble import GradientBoostingClassifier
# Import Calibration tools
from sklearn.calibration import CalibratedClassifierCV
# Set random seed and format print output:
np.random.seed(0)
np.set_printoptions(precision=3)
Explanation: Kaggle San Francisco Crime Classification
Berkeley MIDS W207 Final Project: Sam Goodgame, Sarah Cha, Kalvin Kao, Bryan Moore
Environment and Data
End of explanation
# Data path to your local copy of Kalvin's "x_data.csv", which was produced by the negated cell above
data_path = "./data/x_data_3.csv"
df = pd.read_csv(data_path, header=0)
x_data = df.drop('category', 1)
y = df.category.as_matrix()
# Impute missing values with mean values:
#x_complete = df.fillna(df.mean())
x_complete = x_data.fillna(x_data.mean())
X_raw = x_complete.as_matrix()
# Scale the data between 0 and 1:
X = MinMaxScaler().fit_transform(X_raw)
####
#X = np.around(X, decimals=2)
####
# Shuffle data to remove any underlying pattern that may exist. Must re-run random seed step each time:
np.random.seed(0)
shuffle = np.random.permutation(np.arange(X.shape[0]))
X, y = X[shuffle], y[shuffle]
# Due to difficulties with log loss and set(y_pred) needing to match set(labels), we will remove the extremely rare
# crimes from the data for quality issues.
X_minus_trea = X[np.where(y != 'TREA')]
y_minus_trea = y[np.where(y != 'TREA')]
X_final = X_minus_trea[np.where(y_minus_trea != 'PORNOGRAPHY/OBSCENE MAT')]
y_final = y_minus_trea[np.where(y_minus_trea != 'PORNOGRAPHY/OBSCENE MAT')]
# Separate training, dev, and test data:
test_data, test_labels = X_final[800000:], y_final[800000:]
dev_data, dev_labels = X_final[700000:800000], y_final[700000:800000]
train_data, train_labels = X_final[100000:700000], y_final[100000:700000]
calibrate_data, calibrate_labels = X_final[:100000], y_final[:100000]
# Create mini versions of the above sets
mini_train_data, mini_train_labels = X_final[:20000], y_final[:20000]
mini_calibrate_data, mini_calibrate_labels = X_final[19000:28000], y_final[19000:28000]
mini_dev_data, mini_dev_labels = X_final[49000:60000], y_final[49000:60000]
# Create list of the crime type labels. This will act as the "labels" parameter for the log loss functions that follow
crime_labels = list(set(y_final))
crime_labels_mini_train = list(set(mini_train_labels))
crime_labels_mini_dev = list(set(mini_dev_labels))
crime_labels_mini_calibrate = list(set(mini_calibrate_labels))
print(len(crime_labels), len(crime_labels_mini_train), len(crime_labels_mini_dev),len(crime_labels_mini_calibrate))
#print(len(train_data),len(train_labels))
#print(len(dev_data),len(dev_labels))
print(len(mini_train_data),len(mini_train_labels))
print(len(mini_dev_data),len(mini_dev_labels))
#print(len(test_data),len(test_labels))
print(len(mini_calibrate_data),len(mini_calibrate_labels))
#print(len(calibrate_data),len(calibrate_labels))
Explanation: Local, individual load of updated data set (with weather data integrated) into training, development, and test subsets.
End of explanation
tuned_DT_calibrate_isotonic = RandomForestClassifier(min_impurity_split=1,
n_estimators=100,
bootstrap= True,
max_features=15,
criterion='entropy',
min_samples_leaf=10,
max_depth=None
).fit(train_data, train_labels)
ccv_isotonic = CalibratedClassifierCV(tuned_DT_calibrate_isotonic, method = 'isotonic', cv = 'prefit')
ccv_isotonic.fit(calibrate_data, calibrate_labels)
ccv_predictions = ccv_isotonic.predict(dev_data)
ccv_prediction_probabilities_isotonic = ccv_isotonic.predict_proba(dev_data)
working_log_loss_isotonic = log_loss(y_true = dev_labels, y_pred = ccv_prediction_probabilities_isotonic, labels = crime_labels)
print("Multi-class Log Loss with RF and calibration with isotonic is:", working_log_loss_isotonic)
Explanation: The Best RF Classifier
End of explanation
pd.DataFrame(np.amax(ccv_prediction_probabilities_isotonic, axis=1)).hist()
Explanation: Distribution of Posterior Probabilities
End of explanation
#clf_probabilities, clf_predictions, labels
def error_analysis_calibration(buckets, clf_probabilities, clf_predictions, labels):
inputs:
clf_probabilities = clf.predict_proba(dev_data)
clf_predictions = clf.predict(dev_data)
labels = dev_labels
#buckets = [0.05, 0.15, 0.3, 0.5, 0.8]
#buckets = [0.15, 0.25, 0.3, 1.0]
correct = [0 for i in buckets]
total = [0 for i in buckets]
lLimit = 0
uLimit = 0
for i in range(len(buckets)):
uLimit = buckets[i]
for j in range(clf_probabilities.shape[0]):
if (np.amax(clf_probabilities[j]) > lLimit) and (np.amax(clf_probabilities[j]) <= uLimit):
if clf_predictions[j] == labels[j]:
correct[i] += 1
total[i] += 1
lLimit = uLimit
#here we report the classifier accuracy for each posterior probability bucket
accuracies = []
for k in range(len(buckets)):
print(1.0*correct[k]/total[k])
accuracies.append(1.0*correct[k]/total[k])
print('p(pred) <= %.13f total = %3d correct = %3d accuracy = %.3f' \
%(buckets[k], total[k], correct[k], 1.0*correct[k]/total[k]))
f = plt.figure(figsize=(15,8))
plt.plot(buckets,accuracies)
plt.title("Calibration Analysis")
plt.xlabel("Posterior Probability")
plt.ylabel("Classifier Accuracy")
return buckets, accuracies
buckets = [0.2, 0.25, 0.3, 0.4, 0.5, 0.7, 0.9, 1.0]
calibration_buckets, calibration_accuracies = \
error_analysis_calibration(buckets, \
clf_probabilities=ccv_prediction_probabilities_isotonic, \
clf_predictions=ccv_predictions, \
labels=dev_labels)
Explanation: Error Analysis: Calibration
End of explanation
def error_analysis_classification_report(clf_predictions, labels):
inputs:
clf_predictions = clf.predict(dev_data)
labels = dev_labels
print('Classification Report:')
report = classification_report(labels, clf_predictions)
print(report)
return report
classificationReport = error_analysis_classification_report(clf_predictions=ccv_predictions, \
labels=dev_labels)
Explanation: The fact that the classifier accuracy is higher for predictions with a higher posterior probability shows that our model is strongly calibrated. However, the distribution of these posterior probabilities shows that our classifier rarely has a 'confident' prediction.
Error Analysis: Classification Report
End of explanation
def error_analysis_confusion_matrix(label_names, clf_predictions, labels):
inputs:
clf_predictions = clf.predict(dev_data)
labels = dev_labels
cm = pd.DataFrame(confusion_matrix(labels, clf_predictions, labels=label_names))
cm.columns=label_names
cm.index=label_names
cm.to_csv(path_or_buf="./confusion_matrix.csv")
#print(cm)
return cm
error_analysis_confusion_matrix(label_names=crime_labels, clf_predictions=ccv_predictions, \
labels=dev_labels)
Explanation: The classification report shows that the model still has issues of every sort with regards to accuracy-- both false positives and false negatives are an issue across many classes.
The relatively high recall scores for larceny/theft and prostitution are noticeable, showing that our model had fewer false negatives for these two classes. However, their accuracies are still low.
Error Analysis: Confusion Matrix
End of explanation |
814 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Optymalizacja i propagacja wsteczna (backprop)
Zaczniemy od prostego przykładu. Funkcji kwadratowej
Step1: Funkcja ta ma swoje minimum w punkcie $x = 0$. Jak widać na powyższym rysunku, gdy pochodna jest dodatnia (co oznacza, że funkcja jest rosnąca) lub gdy pochodna jest ujemna (gdy funkcja jest malejąca), żeby zminimalizować wartość funkcji potrzebujemy wykonywać krok optymalizacji w kierunku przeciwnym do tego wyznaczanego przez gradient.
Przykładowo gdybyśmy byli w punkcie $x = 2$, gradient wynosiłby $4$. Ponieważ jest dodatni, żeby zbliżyć się do minimum potrzebujemy przesunąć naszą pozycje w kierunku przeciwnym czyli w stonę ujemną.
Ponieważ gradient nie mówi nam dokładnie jaki krok powinniśmy wykonać, żeby dotrzeć do minimum a raczej wskazuje kierunek. Żeby nie "przeskoczyć" minimum zwykle skaluje się krok o pewną wartość $\alpha$ nazywaną krokiem uczenia (ang. learning rate).
Prosty przykład optymalizacji $f(x) = x^2$ przy użyciu gradient descent.
Sprawdź różne wartości learning_step, w szczególności [0.1, 1.0, 1.1].
Step2: Backprop - propagacja wsteczna - metoda liczenia gradientów przy pomocy reguły łańcuchowej (ang. chain rule)
Rozpatrzymy przykład minimalizacji troche bardziej skomplikowanej jednowymiarowej funkcji
$$f(x) = \frac{x \cdot \sigma(x)}{x^2 + 1}$$
Do optymalizacji potrzebujemy gradientu funkcji. Do jego wyliczenia skorzystamy z chain rule oraz grafu obliczeniowego.
Chain rule mówi, że
Step3: Jeśli do węzła przychodzi więcej niż jedna krawędź (np. węzeł x), gradienty sumujemy.
Step4: Posiadając gradient możemy próbować optymalizować funkcję, podobnie jak poprzenio.
Sprawdź różne wartości parametru x_, który oznacza punkt startowy optymalizacji. W szczególności zwróć uwagę na wartości [-5.0, 1.3306, 1.3307, 1.330696146306314]. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import seaborn
%matplotlib inline
x = np.linspace(-3, 3, 100)
plt.plot(x, x**2, label='f(x)') # optymalizowana funkcja
plt.plot(x, 2 * x, label='pochodna -- f\'(x)') # pochodna
plt.legend()
plt.show()
Explanation: Optymalizacja i propagacja wsteczna (backprop)
Zaczniemy od prostego przykładu. Funkcji kwadratowej: $f(x) = x^2$
Spróbujemy ją zoptymalizować (znaleźć minimum) przy pomocy metody zwanej gradient descent. Polega ona na wykorzystaniu gradientu (wartości pierwszej pochodnej) przy wykonywaniu kroku optymalizacji.
End of explanation
learning_rate = 0.1 # przykładowa wartość
nb_steps = 10
x_ = 1
steps = [x_]
for _ in range(nb_steps):
x_ -= learning_rate * (2 * x_) # learning_rate * pochodna
steps += [x_]
plt.plot(x, x**2, alpha=0.7)
plt.plot(steps, np.array(steps)**2, 'r-', alpha=0.7)
plt.xlim(-3, 3)
plt.ylim(-1, 10)
plt.show()
Explanation: Funkcja ta ma swoje minimum w punkcie $x = 0$. Jak widać na powyższym rysunku, gdy pochodna jest dodatnia (co oznacza, że funkcja jest rosnąca) lub gdy pochodna jest ujemna (gdy funkcja jest malejąca), żeby zminimalizować wartość funkcji potrzebujemy wykonywać krok optymalizacji w kierunku przeciwnym do tego wyznaczanego przez gradient.
Przykładowo gdybyśmy byli w punkcie $x = 2$, gradient wynosiłby $4$. Ponieważ jest dodatni, żeby zbliżyć się do minimum potrzebujemy przesunąć naszą pozycje w kierunku przeciwnym czyli w stonę ujemną.
Ponieważ gradient nie mówi nam dokładnie jaki krok powinniśmy wykonać, żeby dotrzeć do minimum a raczej wskazuje kierunek. Żeby nie "przeskoczyć" minimum zwykle skaluje się krok o pewną wartość $\alpha$ nazywaną krokiem uczenia (ang. learning rate).
Prosty przykład optymalizacji $f(x) = x^2$ przy użyciu gradient descent.
Sprawdź różne wartości learning_step, w szczególności [0.1, 1.0, 1.1].
End of explanation
def sigmoid(x):
return 1. / (1. + np.exp(-x))
def forward_pass(x):
t1 = sigmoid(x) # 1
t2 = t1 * x # 2
b1 = x**2 # 3
b2 = b1 + 1. # 4
b3 = 1. / b2 # 5
y = t2 * b3 # 6
return y
Explanation: Backprop - propagacja wsteczna - metoda liczenia gradientów przy pomocy reguły łańcuchowej (ang. chain rule)
Rozpatrzymy przykład minimalizacji troche bardziej skomplikowanej jednowymiarowej funkcji
$$f(x) = \frac{x \cdot \sigma(x)}{x^2 + 1}$$
Do optymalizacji potrzebujemy gradientu funkcji. Do jego wyliczenia skorzystamy z chain rule oraz grafu obliczeniowego.
Chain rule mówi, że:
$$ \frac{\partial f}{\partial x} = \frac{\partial f}{\partial y} \cdot \frac{\partial y}{\partial x}$$
Żeby łatwiej zastosować chain rule stworzymy z funkcji graf obliczeniowy, którego wykonanie zwróci nam wynik funkcji.
End of explanation
def backward_pass(x):
# kopia z forward_pass, ponieważ potrzebujemy wartości
# pośrednich by wyliczyć pochodne
# >>>
t1 = sigmoid(x) # 1
t2 = t1 * x # 2
b1 = x**2 # 3
b2 = b1 + 1. # 4
b3 = 1. / b2 # 5
y = t2 * b3 # 6
# <<<
# backprop: y = t2 * b3
dt2 = b3 # 6
db3 = t2 # 6
# backprop: b3 = 1. / b2
db2 = (-1. / b2**2) * db3 # 5
# backprop: b2 = b1 + 1.
db1 = 1. * db2 # 4
# backprop: b1 = x**2
dx = 2 * x * db1 # 3
# backprop: t2 = t1 * x
dt1 = x * dt2 # 2
dx += t1 * dt2 # 2 -- uwaga! pochodna dx zależy od kilku węzłów dlatego sumujemy jej gradienty
# backprop: t1 = sigmoid(x)
dx += t1 * (1. - t1) * dt1 # 1
return dx
x = np.linspace(-10, 10, 200)
plt.plot(x, forward_pass(x), label='f(x)')
plt.plot(x, backward_pass(x), label='poochodna -- f\'(x)')
plt.legend()
plt.show()
Explanation: Jeśli do węzła przychodzi więcej niż jedna krawędź (np. węzeł x), gradienty sumujemy.
End of explanation
learning_rate = 1
nb_steps = 100
x_ = 1. # przykładowa wartość
steps = [x_]
for _ in range(nb_steps):
x_ -= learning_rate * backward_pass(x_)
steps += [x_]
plt.plot(x, forward_pass(x), alpha=0.7)
plt.plot(steps, forward_pass(np.array(steps)), 'r-', alpha=0.7)
plt.show()
Explanation: Posiadając gradient możemy próbować optymalizować funkcję, podobnie jak poprzenio.
Sprawdź różne wartości parametru x_, który oznacza punkt startowy optymalizacji. W szczególności zwróć uwagę na wartości [-5.0, 1.3306, 1.3307, 1.330696146306314].
End of explanation |
815 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Gaussian Mixture Models and Expectation Maximisation in Shogun
By Heiko Strathmann - <a href="mailto
Step2: Set up the model in Shogun
Step3: Sampling from mixture models
Sampling is extremely easy since every instance of the <a href="http
Step4: Evaluating densities in mixture Models
Next, let us visualise the density of the joint model (which is a convex sum of the densities of the individual distributions). Note the similarity between the calls since all distributions implement the <a href="http
Step5: Density estimating with mixture models
Now let us draw samples from the mixture model itself rather than from individual components. This is the situation that usually occurs in practice
Step6: Imagine you did not know the true generating process of this data. What would you think just looking at it? There are clearly at least two components (or clusters) that might have generated this data, but three also looks reasonable. So let us try to learn a Gaussian mixture model on those.
Step7: So far so good, now lets plot the density of this GMM using the code from above
Step8: It is also possible to access the individual components of the mixture distribution. In our case, we can for example draw 95% ellipses for each of the Gaussians using the method from above. We will do this (and more) below.
On local minima of EM
It seems that three comonents give a density that is closest to the original one. While two components also do a reasonable job here, it might sometimes happen (<a href="http
Step9: Clustering with mixture models
Recall that our initial goal was not to visualise mixture models (although that is already pretty cool) but to find clusters in a given set of points. All we need to do for this is to evaluate the log-likelihood of every point under every learned component and then pick the largest one. Shogun can do both. Below, we will illustrate both cases, obtaining a cluster index, and evaluating the log-likelihood for every point under each component.
Step10: These are clusterings obtained via the true mixture model and the one learned via EM. There is a slight subtlety here
Step11: Note how the lower left and middle cluster are overlapping in the sense that points at their intersection have similar likelihoods. If you do not care at all about this and are just interested in a partitioning of the space, simply choose the maximum.
Below we plot the space partitioning for a hard clustering. | Python Code:
%pylab inline
%matplotlib inline
# import all Shogun classes
from modshogun import *
from matplotlib.patches import Ellipse
# a tool for visualisation
def get_gaussian_ellipse_artist(mean, cov, nstd=1.96, color="red", linewidth=3):
Returns an ellipse artist for nstd times the standard deviation of this
Gaussian, specified by mean and covariance
# compute eigenvalues (ordered)
vals, vecs = eigh(cov)
order = vals.argsort()[::-1]
vals, vecs = vals[order], vecs[:, order]
theta = numpy.degrees(arctan2(*vecs[:, 0][::-1]))
# width and height are "full" widths, not radius
width, height = 2 * nstd * sqrt(vals)
e = Ellipse(xy=mean, width=width, height=height, angle=theta, \
edgecolor=color, fill=False, linewidth=linewidth)
return e
Explanation: Gaussian Mixture Models and Expectation Maximisation in Shogun
By Heiko Strathmann - <a href="mailto:[email protected]">[email protected]</a> - <a href="github.com/karlnapf">github.com/karlnapf</a> - <a href="herrstrathmann.de">herrstrathmann.de</a>. Based on the GMM framework of the <a href="https://www.google-melange.com/gsoc/project/google/gsoc2011/alesis_novik/11001">Google summer of code 2011 project</a> of Alesis Novik - <a href="https://github.com/alesis">https://github.com/alesis</a>
This notebook is about learning and using Gaussian <a href="https://en.wikipedia.org/wiki/Mixture_model">Mixture Models</a> (GMM) in Shogun. Below, we demonstrate how to use them for sampling, for density estimation via <a href="https://en.wikipedia.org/wiki/Expectation-maximization_algorithm">Expectation Maximisation (EM)</a>, and for <a href="https://en.wikipedia.org/wiki/Data_clustering">clustering</a>.
Note that Shogun's interfaces for mixture models are deprecated and are soon to be replace by more intuitive and efficient ones. This notebook contains some python magic at some places to compensate for this. However, all computations are done within Shogun itself.
Finite Mixture Models (skip if you just want code examples)
We begin by giving some intuition about mixture models. Consider an unobserved (or latent) discrete random variable taking $k$ states $s$ with probabilities $\text{Pr}(s=i)=\pi_i$ for $1\leq i \leq k$, and $k$ random variables $x_i|s_i$ with arbritary densities or distributions, which are conditionally independent of each other given the state of $s$. In the finite mixture model, we model the probability or density for a single point $x$ begin generated by the weighted mixture of the $x_i|s_i$
$$
p(x)=\sum_{i=1}^k\text{Pr}(s=i)p(x)=\sum_{i=1}^k \pi_i p(x|s)
$$
which is simply the marginalisation over the latent variable $s$. Note that $\sum_{i=1}^k\pi_i=1$.
For example, for the Gaussian mixture model (GMM), we get (adding a collection of parameters $\theta:={\boldsymbol{\mu}i, \Sigma_i}{i=1}^k$ that contains $k$ mean and covariance parameters of single Gaussian distributions)
$$
p(x|\theta)=\sum_{i=1}^k \pi_i \mathcal{N}(\boldsymbol{\mu}_i,\Sigma_i)
$$
Note that any set of probability distributions on the same domain can be combined to such a mixture model. Note again that $s$ is an unobserved discrete random variable, i.e. we model data being generated from some weighted combination of baseline distributions. Interesting problems now are
Learning the weights $\text{Pr}(s=i)=\pi_i$ from data
Learning the parameters $\theta$ from data for a fixed family of $x_i|s_i$, for example for the GMM
Using the learned model (which is a density estimate) for clustering or classification
All of these problems are in the context of unsupervised learning since the algorithm only sees the plain data and no information on its structure.
Expectation Maximisation
<a href="https://en.wikipedia.org/wiki/Expectation-maximization_algorithm">Expectation Maximisation (EM)</a> is a powerful method to learn any form of latent models and can be applied to the Gaussian mixture model case. Standard methods such as Maximum Likelihood are not straightforward for latent models in general, while EM can almost always be applied. However, it might converge to local optima and does not guarantee globally optimal solutions (this can be dealt with with some tricks as we will see later). While the general idea in EM stays the same for all models it can be used on, the individual steps depend on the particular model that is being used.
The basic idea in EM is to maximise a lower bound, typically called the free energy, on the log-likelihood of the model. It does so by repeatedly performing two steps
The E-step optimises the free energy with respect to the latent variables $s_i$, holding the parameters $\theta$ fixed. This is done via setting the distribution over $s$ to the posterior given the used observations.
The M-step optimises the free energy with respect to the paramters $\theta$, holding the distribution over the $s_i$ fixed. This is done via maximum likelihood.
It can be shown that this procedure never decreases the likelihood and that stationary points (i.e. neither E-step nor M-step produce changes) of it corresponds to local maxima in the model's likelihood. See references for more details on the procedure, and how to obtain a lower bound on the log-likelihood. There exist many different flavours of EM, including variants where only subsets of the model are iterated over at a time. There is no learning rate such as step size or similar, which is good and bad since convergence can be slow.
Mixtures of Gaussians in Shogun
The main class for GMM in Shogun is <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGMM.html">CGMM</a>, which contains an interface for setting up a model and sampling from it, but also to learn the model (the $\pi_i$ and parameters $\theta$) via EM. It inherits from the base class for distributions in Shogun, <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDistribution.html">CDistribution</a>, and combines multiple single distribution instances to a mixture.
We start by creating a GMM instance, sampling from it, and computing the log-likelihood of the model for some points, and the log-likelihood of each individual component for some points. All these things are done in two dimensions to be able to plot them, but they generalise to higher (or lower) dimensions easily.
Let's sample, and illustrate the difference of knowing the latent variable indicating the component or not.
End of explanation
# create mixture of three Gaussians
num_components=3
num_max_samples=100
gmm=GMM(num_components)
dimension=2
# set means (TODO interface should be to construct mixture from individuals with set parameters)
means=zeros((num_components, dimension))
means[0]=[-5.0, -4.0]
means[1]=[7.0, 3.0]
means[2]=[0, 0.]
[gmm.set_nth_mean(means[i], i) for i in range(num_components)]
# set covariances
covs=zeros((num_components, dimension, dimension))
covs[0]=array([[2, 1.3],[.6, 3]])
covs[1]=array([[1.3, -0.8],[-0.8, 1.3]])
covs[2]=array([[2.5, .8],[0.8, 2.5]])
[gmm.set_nth_cov(covs[i],i) for i in range(num_components)]
# set mixture coefficients, these have to sum to one (TODO these should be initialised automatically)
weights=array([0.5, 0.3, 0.2])
gmm.set_coef(weights)
Explanation: Set up the model in Shogun
End of explanation
# now sample from each component seperately first, the from the joint model
hold(True)
colors=["red", "green", "blue"]
for i in range(num_components):
# draw a number of samples from current component and plot
num_samples=int(rand()*num_max_samples)+1
# emulate sampling from one component (TODO fix interface of GMM to handle this)
w=zeros(num_components)
w[i]=1.
gmm.set_coef(w)
# sample and plot (TODO fix interface to have loop within)
X=array([gmm.sample() for _ in range(num_samples)])
plot(X[:,0], X[:,1], "o", color=colors[i])
# draw 95% elipsoid for current component
gca().add_artist(get_gaussian_ellipse_artist(means[i], covs[i], color=colors[i]))
hold(False)
_=title("%dD Gaussian Mixture Model with %d components" % (dimension, num_components))
# since we used a hack to sample from each component
gmm.set_coef(weights)
Explanation: Sampling from mixture models
Sampling is extremely easy since every instance of the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDistribution.html">CDistribution</a> class in Shogun allows to sample from it (if implemented)
End of explanation
# generate a grid over the full space and evaluate components PDF
resolution=100
Xs=linspace(-10,10, resolution)
Ys=linspace(-8,6, resolution)
pairs=asarray([(x,y) for x in Xs for y in Ys])
D=asarray([gmm.cluster(pairs[i])[3] for i in range(len(pairs))]).reshape(resolution,resolution)
figure(figsize=(18,5))
subplot(1,2,1)
pcolor(Xs,Ys,D)
xlim([-10,10])
ylim([-8,6])
title("Log-Likelihood of GMM")
subplot(1,2,2)
pcolor(Xs,Ys,exp(D))
xlim([-10,10])
ylim([-8,6])
_=title("Likelihood of GMM")
Explanation: Evaluating densities in mixture Models
Next, let us visualise the density of the joint model (which is a convex sum of the densities of the individual distributions). Note the similarity between the calls since all distributions implement the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDistribution.html">CDistribution</a> interface, including the mixture.
End of explanation
# sample and plot (TODO fix interface to have loop within)
X=array([gmm.sample() for _ in range(num_max_samples)])
plot(X[:,0], X[:,1], "o")
_=title("Samples from GMM")
Explanation: Density estimating with mixture models
Now let us draw samples from the mixture model itself rather than from individual components. This is the situation that usually occurs in practice: Someone gives you a bunch of data with no labels attached to it all all. Our job is now to find structure in the data, which we will do with a GMM.
End of explanation
def estimate_gmm(X, num_components):
# bring data into shogun representation (note that Shogun data is in column vector form, so transpose)
features=RealFeatures(X.T)
gmm_est=GMM(num_components)
gmm_est.set_features(features)
# learn GMM
gmm_est.train_em()
return gmm_est
Explanation: Imagine you did not know the true generating process of this data. What would you think just looking at it? There are clearly at least two components (or clusters) that might have generated this data, but three also looks reasonable. So let us try to learn a Gaussian mixture model on those.
End of explanation
component_numbers=[2,3]
# plot true likelihood
D_true=asarray([gmm.cluster(pairs[i])[num_components] for i in range(len(pairs))]).reshape(resolution,resolution)
figure(figsize=(18,5))
subplot(1,len(component_numbers)+1,1)
pcolor(Xs,Ys,exp(D_true))
xlim([-10,10])
ylim([-8,6])
title("True likelihood")
for n in range(len(component_numbers)):
# TODO get rid of these hacks and offer nice interface from Shogun
# learn GMM with EM
gmm_est=estimate_gmm(X, component_numbers[n])
# evaluate at a grid of points
D_est=asarray([gmm_est.cluster(pairs[i])[component_numbers[n]] for i in range(len(pairs))]).reshape(resolution,resolution)
# visualise densities
subplot(1,len(component_numbers)+1,n+2)
pcolor(Xs,Ys,exp(D_est))
xlim([-10,10])
ylim([-8,6])
_=title("Estimated likelihood for EM with %d components"%component_numbers[n])
Explanation: So far so good, now lets plot the density of this GMM using the code from above
End of explanation
# function to draw ellipses for all components of a GMM
def visualise_gmm(gmm, color="blue"):
for i in range(gmm.get_num_components()):
component=Gaussian.obtain_from_generic(gmm.get_component(i))
gca().add_artist(get_gaussian_ellipse_artist(component.get_mean(), component.get_cov(), color=color))
# multiple runs to illustrate random initialisation matters
for _ in range(3):
figure(figsize=(18,5))
subplot(1, len(component_numbers)+1, 1)
plot(X[:,0],X[:,1], 'o')
visualise_gmm(gmm_est, color="blue")
title("True components")
for i in range(len(component_numbers)):
gmm_est=estimate_gmm(X, component_numbers[i])
subplot(1, len(component_numbers)+1, i+2)
plot(X[:,0],X[:,1], 'o')
visualise_gmm(gmm_est, color=colors[i])
# TODO add a method to get likelihood of full model, retraining is inefficient
likelihood=gmm_est.train_em()
_=title("Estimated likelihood: %.2f (%d components)"%(likelihood,component_numbers[i]))
Explanation: It is also possible to access the individual components of the mixture distribution. In our case, we can for example draw 95% ellipses for each of the Gaussians using the method from above. We will do this (and more) below.
On local minima of EM
It seems that three comonents give a density that is closest to the original one. While two components also do a reasonable job here, it might sometimes happen (<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CKMeans.html">KMeans</a> is used to initialise the cluster centres if not done by hand, using a random cluster initialisation) that the upper two Gaussians are grouped, re-run for a couple of times to see this. This illustrates how EM might get stuck in a local minimum. We will do this below, where it might well happen that all runs produce the same or different results - no guarantees.
Note that it is easily possible to initialise EM via specifying the parameters of the mixture components as did to create the original model above.
One way to decide which of multiple convergenced EM instances to use is to simply compute many of them (with different initialisations) and then choose the one with the largest likelihood. WARNING Do not select the number of components like this as the model will overfit.
End of explanation
def cluster_and_visualise(gmm_est):
# obtain cluster index for each point of the training data
# TODO another hack here: Shogun should allow to pass multiple points and only return the index
# as the likelihood can be done via the individual components
# In addition, argmax should be computed for us, although log-pdf for all components should also be possible
clusters=asarray([argmax(gmm_est.cluster(x)[:gmm.get_num_components()]) for x in X])
# visualise points by cluster
hold(True)
for i in range(gmm.get_num_components()):
indices=clusters==i
plot(X[indices,0],X[indices,1], 'o', color=colors[i])
hold(False)
# learn gmm again
gmm_est=estimate_gmm(X, num_components)
figure(figsize=(18,5))
subplot(121)
cluster_and_visualise(gmm)
title("Clustering under true GMM")
subplot(122)
cluster_and_visualise(gmm_est)
_=title("Clustering under estimated GMM")
Explanation: Clustering with mixture models
Recall that our initial goal was not to visualise mixture models (although that is already pretty cool) but to find clusters in a given set of points. All we need to do for this is to evaluate the log-likelihood of every point under every learned component and then pick the largest one. Shogun can do both. Below, we will illustrate both cases, obtaining a cluster index, and evaluating the log-likelihood for every point under each component.
End of explanation
figure(figsize=(18,5))
for comp_idx in range(num_components):
subplot(1,num_components,comp_idx+1)
# evaluated likelihood under current component
# TODO Shogun should do the loop and allow to specify component indices to evaluate pdf for
# TODO distribution interface should be the same everywhere
component=Gaussian.obtain_from_generic(gmm.get_component(comp_idx))
cluster_likelihoods=asarray([component.compute_PDF(X[i]) for i in range(len(X))])
# normalise
cluster_likelihoods-=cluster_likelihoods.min()
cluster_likelihoods/=cluster_likelihoods.max()
# plot, coloured by likelihood value
cm=get_cmap("jet")
hold(True)
for j in range(len(X)):
color = cm(cluster_likelihoods[j])
plot(X[j,0], X[j,1] ,"o", color=color)
hold(False)
title("Data coloured by likelihood for component %d" % comp_idx)
Explanation: These are clusterings obtained via the true mixture model and the one learned via EM. There is a slight subtlety here: even the model under which the data was generated will not cluster the data correctly if the data is overlapping. This is due to the fact that the cluster with the largest probability is chosen. This doesn't allow for any ambiguity. If you are interested in cases where data overlaps, you should always look at the log-likelihood of the point for each cluster and consider taking into acount "draws" in the decision, i.e. probabilities for two different clusters are equally large.
Below we plot all points, coloured by their likelihood under each component.
End of explanation
# compute cluster index for every point in space
D_est=asarray([gmm_est.cluster(pairs[i])[:num_components].argmax() for i in range(len(pairs))]).reshape(resolution,resolution)
# visualise clustering
cluster_and_visualise(gmm_est)
# visualise space partitioning
hold(True)
pcolor(Xs,Ys,D_est)
hold(False)
Explanation: Note how the lower left and middle cluster are overlapping in the sense that points at their intersection have similar likelihoods. If you do not care at all about this and are just interested in a partitioning of the space, simply choose the maximum.
Below we plot the space partitioning for a hard clustering.
End of explanation |
816 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright (c)2015 DiGangi, C.
Managing Epidemics Through Mathematical Modeling
This lesson will examine the spread of an epidemic over time using Euler's method. The model is a system of non-linear ODEs which is based on the classic Susceptible, Infected, Recovered (SIR) model. This model introduces a new parameter to include vaccinations. We will examine the various paremeters of the model and define conditions necessary to erradicate the epidemic.
In this module we will also introduce ipywigets, an IPython library that allows you to add widgets to your notebooks and make them interactive! We will be using widgets to vary our parameters and see how changing different parameters affects the results of the model. This is a great technique for making quick and easy comparisons because you don't have to re-run your cell for the widget to make changes to the graph.
Introducing Model Parameters
The most important part of understanding any model is understanding the nomenclature that is associated with it. Please review the below terms carefully and make sure you understand what each parameter represents.
$S$
Step2: Let us first define our function $f(u)$ that will calculate the right hand side of our model. We will pass in the array $u$ which contains our different populations and set them individually in the function
Step4: Next we will define the euler solution as a function so that we can call it as we iterate through time.
Step5: Now we are ready to set up our initial conditions and solve! We will use a simplified population to start with.
Step6: Now we will implement our discretization using a for loop to iterate over time. We create a numpy array $u$ that will hold all of our values at each time step for each component (SVIR). We will use dt of 1 to represent 1 day and iterate over 365 days.
Step7: Now we use python's pyplot library to plot all of our results on the same graph
Step8: The graph is interesting because it exhibits some oscillating behavior. You can see that under the given parameters, the number of infected people drops within the first few days. Notice that the susceptible individuals grow until about 180 days. The return of infection is a result of too many susceptible people in the population. The number of infected looks like it goes to zero but it never quite reaches zero. Therfore, when we have $\beta IS$, when $S$ gets large enough the infection will start to be reintroduced into the population.
If we want to examine how the population changes under new conditions, we could re-run the below cell with new parameters
Step9: However, every time we want to examine new parameters we have to go back and change the values within the cell and re run our code. This is very cumbersome if we want to examine how different parameters affect our outcome. If only there were some solution we could implement that would allow us to change parameters on the fly without having to re-run our code...
ipywidgets!
Well there is a solution we can implement! Using a python library called ipywidgets we can build interactive widgets into our notebook that allow for user interaction. If you do not have ipywidets installed, you can install it using conda by simply going to the terminal and typing
Step10: The below cell is a quick view of a few different interactive widgets that are available. Notice that we must define a function (in this case $z$) where we call the function $z$ and parameter $x$, where $x$ is passed into the function $z$.
Step12: Redefining the Model to Accept Parameters
In order to use ipywidgets and pass parameters in our functions we have to slightly redefine our functions to accept these changing parameters. This will ensure that we don't have to re-run any code and our graph will update as we change parameters!
We will start with our function $f$. This function uses our initial parameters $p$, $e$, $\mu$, $\beta$, and $\gamma$. Previously, we used the global definition of these variables so we didn't include them inside the function. Now we will be passing in both our array $u$ (which holds the different populations) and a new array called $init$ (which holds our initial parameters).
Step13: Now we will change our $euler step$ function which calls our function $f$ to include the new $init$ array that we are passing.
Step15: In order to make changes to our parameters, we will use slider widgets. Now that we have our functions set up, we will build another function which we will use to update the graph as we move our slider parameters. First we must build the sliders for each parameter. Using the FloatSlider method from ipywidgets, we can specify the min and max for our sliders and a step to increment.
Next we build the update function which will take in the values of the sliders as they change and re-plot the graph. The function follows the same logic as before with the only difference being the changing parameters.
Finally we specify the behavior of the sliders as they change values and call our update function.
Step16: Notice that the graph starts with all parameters equal to zero. Unfortunately we cannot set the initial value of the slider. We can work around this using conditional statements to see if the slider values are equal to zero, then use different parameters.
Notice that as you change the parameters the graph starts to come alive! This allows you to quickly compare how different parameters affect the results of our model!
Dig deeper?
Using the ipywidget library, create a new function that allows for user input. Using the python array of objects below, which contains various diseases and their initial parameters, have the user type in one of the disease names and return the graph corresponding to that disease! You can use the ipywidget text box to take in the value from the user and then pass that value to a function that will call out that disease from the object below!
Step17: References
Scherer, A. and McLean, A. "Mathematical Models of Vaccination", British Medical Bulletin Volume 62 Issue 1, 2015 Oxford University Press. Online
Barba, L., "Practical Numerical Methods with Python" George Washington University
For a good explanation of some of the simpler models and overview of parameters, visit this Wiki Page
Slider tutorial posted on github | Python Code:
%matplotlib inline
import numpy
from matplotlib import pyplot
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
Explanation: Copyright (c)2015 DiGangi, C.
Managing Epidemics Through Mathematical Modeling
This lesson will examine the spread of an epidemic over time using Euler's method. The model is a system of non-linear ODEs which is based on the classic Susceptible, Infected, Recovered (SIR) model. This model introduces a new parameter to include vaccinations. We will examine the various paremeters of the model and define conditions necessary to erradicate the epidemic.
In this module we will also introduce ipywigets, an IPython library that allows you to add widgets to your notebooks and make them interactive! We will be using widgets to vary our parameters and see how changing different parameters affects the results of the model. This is a great technique for making quick and easy comparisons because you don't have to re-run your cell for the widget to make changes to the graph.
Introducing Model Parameters
The most important part of understanding any model is understanding the nomenclature that is associated with it. Please review the below terms carefully and make sure you understand what each parameter represents.
$S$: Susceptible Individuals
$V$: Vaccinated Individuals
$I$: Infected Individuals
$R$: Recovered Individuals with Immunity (Cannot get infected again)
$p$: Fraction of individuals who are vaccinated at birth
$e$: Fraction of the vaccinated individuals that are successfully vaccinated
$\mu$: Average Death Rate
$\beta$: Contact Rate (Rate at which Susceptibles come into contact with Infected)
$\gamma$: Recovery Rate
$R_0$: Basic Reporoduction Number
$N$: Total Population ($S + V + I + R$)
Basic SVIR Model
Model Assumptions
The model will make the following assumptions:
The population N is held constant
The birth rate and death rate are equal
The death rate is the same across all individuals (Infected do not have higher death rate)
A susceptible individual that comes in contact with an infected automatically becomes infected
Once an individual has recovered they are forever immune and not reintroduced into the susceptible population
Vaccination does not wear off (vaccinated cannot become infected)
Susceptible Equation
Let's examine the model by component. First we will breakdown the equation for susceptible individuals. In order to find the rate of change of susceptible individuals we must calculate the number of newborns that are not vaccinated:
$$(1-ep) \mu N$$
The number of Susceptible Individuals that become infected:
$$ \beta IS_{infections}$$
and finally the number of Susceptibles that die:
$$ \mu S_{deaths}$$
Therefore the change in Susceptible Indivduals becomes:
$$\frac{dS}{dt} = (1-ep) \mu N - \beta IS - \mu S$$
Vaccinated Equation
Now examining the vaccinated individuals we start with the newborns that are vaccinated:
$$ep \mu N$$
And the number of vaccinated individuals that die:
$$\mu V$$
The change in vaccinated individuals becomes:
$$\frac{dV}{dt} = ep \mu N - \mu V$$
Infected Equation
For the infected individuals we start with the number of Susceptible individuals that are exposed and become infected:
$$\beta IS_{infections}$$
Next we need the number of Infected individuals that recovered:
$$\gamma I_{recoveries}$$
Finally we examine the infected who die:
$$\mu I_{deaths}$$
Putting this all together we get the following equation:
$$\frac{dI}{dt} = \beta IS - \gamma I - \mu I$$
Recovered Equation
The number of recovered individuals first relies on the infected who recover:
$$\gamma I$$
Next it depeds on the recovered individuals who die:
$$\mu R$$
Putting this together yeilds the equation:
$$\frac{dR}{dt} = \gamma I - \mu R$$
Model Summary
The complete model is as follows:
$$\frac{dS}{dt} = (1-ep) \mu N - \beta IS - \mu S$$
$$\frac{dV}{dt} = ep \mu N - \mu V$$
$$\frac{dI}{dt} = \beta IS - \gamma I - \mu I$$
$$\frac{dR}{dt} = \gamma I - \mu R$$
This is a very simplified model because of the complexities of infectious diseases.
Implementing Numerical Solution with Euler!
For the numerical solution we will be using Euler's method since we are only dealing with time derivatives. Just to review, for Euler's method we replace the time derivative by the following:
$$\frac{dS}{dt} = \frac{S^{n+1} - S^n}{\Delta t}$$
where n represents the discretized time.
Therefore after we discretize our model we have:
$$\frac{S^{n+1} - S^n}{\Delta t} = (1-ep) \mu N - \beta IS^n - \mu S^n$$
$$\frac{V^{n+1} - V^n}{\Delta t} = ep \mu N - \mu V^n$$
$$\frac{I^{n+1} - I^n}{\Delta t} = \beta I^nS^n - \gamma I^n - \mu I^n$$
$$\frac{R^{n+1} - R^n}{\Delta t} = \gamma I^n - \mu R^n$$
And now solving for the value at the next time step yeilds:
$$S^{n+1} = S^n + \Delta t \left((1-ep) \mu N - \beta IS^n - \mu S^n \right)$$
$$V^{n+1} = V^n + \Delta t ( ep \mu N - \mu V^n)$$
$$I^{n+1} = I^n + \Delta t (\beta I^nS^n - \gamma I^n - \mu I^n)$$
$$R^{n+1} = R^n + \Delta t ( \gamma I^n - \mu R^n)$$
If we want to implement this into our code we can build arrays to hold our system of equations. Assuming u is our solution matrix and f(u) is our right hand side:
\begin{align}
u & = \begin{pmatrix} S \ V \ I \ R \end{pmatrix} & f(u) & = \begin{pmatrix} S^n + \Delta t \left((1-ep) \mu N - \beta IS^n - \mu S^n \right) \ V^n + \Delta t ( ep \mu N - \mu V^n) \ I^n + \Delta t (\beta I^nS^n - \gamma I^n - \mu I^n) \ R^n + \Delta t ( \gamma I^n - \mu R^n) \end{pmatrix}.
\end{align}
Solve!
Now we will implement this solution below. First we will import the necessary python libraries
End of explanation
def f(u):
Returns the right-hand side of the epidemic model equations.
Parameters
----------
u : array of float
array containing the solution at time n.
u is passed in and distributed to the different components by calling the individual value in u[i]
Returns
-------
du/dt : array of float
array containing the RHS given u.
S = u[0]
V = u[1]
I = u[2]
R = u[3]
return numpy.array([(1-e*p)*mu*N - beta*I*S - mu*S,
e*p*mu*N - mu*V,
beta*I*S - gamma*I - mu*I,
gamma*I - mu*R])
Explanation: Let us first define our function $f(u)$ that will calculate the right hand side of our model. We will pass in the array $u$ which contains our different populations and set them individually in the function:
End of explanation
def euler_step(u, f, dt):
Returns the solution at the next time-step using Euler's method.
Parameters
----------
u : array of float
solution at the previous time-step.
f : function
function to compute the right hand-side of the system of equation.
dt : float
time-increment.
Returns
-------
approximate solution at the next time step.
return u + dt * f(u)
Explanation: Next we will define the euler solution as a function so that we can call it as we iterate through time.
End of explanation
e = .1 #vaccination success rate
p = .75 # newborn vaccination rate
mu = .02 # death rate
beta = .002 # contact rate
gamma = .5 # Recovery rate
S0 = 100 # Initial Susceptibles
V0 = 50 # Initial Vaccinated
I0 = 75 # Initial Infected
R0 = 10 # Initial Recovered
N = S0 + I0 + R0 + V0 #Total population (remains constant)
Explanation: Now we are ready to set up our initial conditions and solve! We will use a simplified population to start with.
End of explanation
T = 365 # Iterate over 1 year
dt = 1 # 1 day
N = int(T/dt)+1 # Total number of iterations
t = numpy.linspace(0, T, N) # Time discretization
u = numpy.zeros((N,4)) # Initialize the solution array with zero values
u[0] = [S0, V0, I0, R0] # Set the initial conditions in the solution array
for n in range(N-1): # Loop through time steps
u[n+1] = euler_step(u[n], f, dt) # Get the value for the next time step using our euler_step function
Explanation: Now we will implement our discretization using a for loop to iterate over time. We create a numpy array $u$ that will hold all of our values at each time step for each component (SVIR). We will use dt of 1 to represent 1 day and iterate over 365 days.
End of explanation
pyplot.figure(figsize=(15,5))
pyplot.grid(True)
pyplot.xlabel(r'time', fontsize=18)
pyplot.ylabel(r'population', fontsize=18)
pyplot.xlim(0, 500)
pyplot.title('Population of SVIR model over time', fontsize=18)
pyplot.plot(t,u[:,0], color= 'red', lw=2, label = 'Susceptible');
pyplot.plot(t,u[:,1], color='green', lw=2, label = 'Vaccinated');
pyplot.plot(t,u[:,2], color='black', lw=2, label = 'Infected');
pyplot.plot(t,u[:,3], color='blue', lw=2, label = 'Recovered');
pyplot.legend();
Explanation: Now we use python's pyplot library to plot all of our results on the same graph:
End of explanation
#Changing the following parameters
e = .5 #vaccination success rate
gamma = .1 # Recovery rate
S0 = 100 # Initial Susceptibles
V0 = 50 # Initial Vaccinated
I0 = 75 # Initial Infected
R0 = 10 # Initial Recovered
N = S0 + I0 + R0 + V0 #Total population (remains constant)
T = 365 # Iterate over 1 year
dt = 1 # 1 day
N = int(T/dt)+1 # Total number of iterations
t = numpy.linspace(0, T, N) # Time discretization
u = numpy.zeros((N,4)) # Initialize the solution array with zero values
u[0] = [S0, V0, I0, R0] # Set the initial conditions in the solution array
for n in range(N-1): # Loop through time steps
u[n+1] = euler_step(u[n], f, dt) # Get the value for the next time step using our euler_step function
pyplot.figure(figsize=(15,5))
pyplot.grid(True)
pyplot.xlabel(r'time', fontsize=18)
pyplot.ylabel(r'population', fontsize=18)
pyplot.xlim(0, 500)
pyplot.title('Population of SVIR model over time', fontsize=18)
pyplot.plot(t,u[:,0], color= 'red', lw=2, label = 'Susceptible');
pyplot.plot(t,u[:,1], color='green', lw=2, label = 'Vaccinated');
pyplot.plot(t,u[:,2], color='black', lw=2, label = 'Infected');
pyplot.plot(t,u[:,3], color='blue', lw=2, label = 'Recovered');
pyplot.legend();
Explanation: The graph is interesting because it exhibits some oscillating behavior. You can see that under the given parameters, the number of infected people drops within the first few days. Notice that the susceptible individuals grow until about 180 days. The return of infection is a result of too many susceptible people in the population. The number of infected looks like it goes to zero but it never quite reaches zero. Therfore, when we have $\beta IS$, when $S$ gets large enough the infection will start to be reintroduced into the population.
If we want to examine how the population changes under new conditions, we could re-run the below cell with new parameters:
End of explanation
from ipywidgets import interact, HTML, FloatSlider
from IPython.display import clear_output, display
Explanation: However, every time we want to examine new parameters we have to go back and change the values within the cell and re run our code. This is very cumbersome if we want to examine how different parameters affect our outcome. If only there were some solution we could implement that would allow us to change parameters on the fly without having to re-run our code...
ipywidgets!
Well there is a solution we can implement! Using a python library called ipywidgets we can build interactive widgets into our notebook that allow for user interaction. If you do not have ipywidets installed, you can install it using conda by simply going to the terminal and typing:
conda install ipywidgets
Now we will import our desired libraries
End of explanation
def z(x):
print(x)
interact(z, x=True) # Checkbox
interact(z, x=10) # Slider
interact(z, x='text') # Text entry
Explanation: The below cell is a quick view of a few different interactive widgets that are available. Notice that we must define a function (in this case $z$) where we call the function $z$ and parameter $x$, where $x$ is passed into the function $z$.
End of explanation
def f(u, init):
Returns the right-hand side of the epidemic model equations.
Parameters
----------
u : array of float
array containing the solution at time n.
u is passed in and distributed to the different components by calling the individual value in u[i]
init : array of float
array containing the parameters for the model
Returns
-------
du/dt : array of float
array containing the RHS given u.
S = u[0]
V = u[1]
I = u[2]
R = u[3]
p = init[0]
e = init[1]
mu = init[2]
beta = init[3]
gamma = init[4]
return numpy.array([(1-e*p)*mu*N - beta*I*S - mu*S,
e*p*mu*N - mu*V,
beta*I*S - gamma*I - mu*I,
gamma*I - mu*R])
Explanation: Redefining the Model to Accept Parameters
In order to use ipywidgets and pass parameters in our functions we have to slightly redefine our functions to accept these changing parameters. This will ensure that we don't have to re-run any code and our graph will update as we change parameters!
We will start with our function $f$. This function uses our initial parameters $p$, $e$, $\mu$, $\beta$, and $\gamma$. Previously, we used the global definition of these variables so we didn't include them inside the function. Now we will be passing in both our array $u$ (which holds the different populations) and a new array called $init$ (which holds our initial parameters).
End of explanation
def euler_step(u, f, dt, init):
return u + dt * f(u, init)
Explanation: Now we will change our $euler step$ function which calls our function $f$ to include the new $init$ array that we are passing.
End of explanation
#Build slider for each parameter desired
pSlider = FloatSlider(description='p', min=0, max=1, step=0.1)
eSlider = FloatSlider(description='e', min=0, max=1, step=0.1)
muSlider = FloatSlider(description='mu', min=0, max=1, step=0.005)
betaSlider = FloatSlider(description='beta', min=0, max=.01, step=0.0005)
gammaSlider = FloatSlider(description='gamma', min=0, max=1, step=0.05)
#Update function will update the plotted graph every time a slider is changed
def update():
Returns a graph of the new results for a given slider parameter change.
Parameters
----------
p : float value of slider widget
e : float value of slider widget
mu : float value of slider widget
beta : float value of slider widget
gamma : float value of slider widget
Returns
-------
Graph representing new populations
#the following parameters use slider.value to get the value of the given slider
p = pSlider.value
e = eSlider.value
mu = muSlider.value
beta = betaSlider.value
gamma = gammaSlider.value
#inital population
S0 = 100
V0 = 50
I0 = 75
R0 = 10
N = S0 + I0 + R0 + V0
#Iteration parameters
T = 365
dt = 1
N = int(T/dt)+1
t = numpy.linspace(0, T, N)
u = numpy.zeros((N,4))
u[0] = [S0, V0, I0, R0]
#Array of parameters
init = numpy.array([p,e,mu,beta,gamma])
for n in range(N-1):
u[n+1] = euler_step(u[n], f, dt, init)
#Plot of population with gicen slider parameters
pyplot.figure(figsize=(15,5))
pyplot.grid(True)
pyplot.xlabel(r'time', fontsize=18)
pyplot.ylabel(r'population', fontsize=18)
pyplot.xlim(0, 500)
pyplot.title('Population of SVIR model over time', fontsize=18)
pyplot.plot(t,u[:,0], color= 'red', lw=2, label = 'Susceptible');
pyplot.plot(t,u[:,1], color='green', lw=2, label = 'Vaccinated');
pyplot.plot(t,u[:,2], color='black', lw=2, label = 'Infected');
pyplot.plot(t,u[:,3], color='blue', lw=2, label = 'Recovered');
pyplot.legend();
#Clear the output otherwise it will create a new graph every time so you will end up with multiple graphs
clear_output(True) #This ensures it recreates the data on the initial graph
#Run the update function on slider values change
pSlider.on_trait_change(update, 'value')
eSlider.on_trait_change(update, 'value')
muSlider.on_trait_change(update, 'value')
betaSlider.on_trait_change(update, 'value')
gammaSlider.on_trait_change(update, 'value')
display(pSlider, eSlider, muSlider, betaSlider, gammaSlider) #Display sliders
update() # Run initial function
Explanation: In order to make changes to our parameters, we will use slider widgets. Now that we have our functions set up, we will build another function which we will use to update the graph as we move our slider parameters. First we must build the sliders for each parameter. Using the FloatSlider method from ipywidgets, we can specify the min and max for our sliders and a step to increment.
Next we build the update function which will take in the values of the sliders as they change and re-plot the graph. The function follows the same logic as before with the only difference being the changing parameters.
Finally we specify the behavior of the sliders as they change values and call our update function.
End of explanation
Disease = [{'name': "Ebola", 'p': 0, 'e': 0, 'mu': .04, 'beta': .005, 'gamma': 0}, \
{'name': "Measles", 'p': .9, 'e': .9, 'mu': .02, 'beta': .002, 'gamma': .9}, \
{'name': "Tuberculosis", 'p': .5, 'e': .2, 'mu': .06, 'beta': .001, 'gamma': .3}]
#Example
def z(x):
print(x)
interact(z, x = 'Text')
Explanation: Notice that the graph starts with all parameters equal to zero. Unfortunately we cannot set the initial value of the slider. We can work around this using conditional statements to see if the slider values are equal to zero, then use different parameters.
Notice that as you change the parameters the graph starts to come alive! This allows you to quickly compare how different parameters affect the results of our model!
Dig deeper?
Using the ipywidget library, create a new function that allows for user input. Using the python array of objects below, which contains various diseases and their initial parameters, have the user type in one of the disease names and return the graph corresponding to that disease! You can use the ipywidget text box to take in the value from the user and then pass that value to a function that will call out that disease from the object below!
End of explanation
from IPython.core.display import HTML
css_file = 'numericalmoocstyle.css'
HTML(open(css_file, "r").read())
Explanation: References
Scherer, A. and McLean, A. "Mathematical Models of Vaccination", British Medical Bulletin Volume 62 Issue 1, 2015 Oxford University Press. Online
Barba, L., "Practical Numerical Methods with Python" George Washington University
For a good explanation of some of the simpler models and overview of parameters, visit this Wiki Page
Slider tutorial posted on github
End of explanation |
817 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import data
Step1: Analyse Emails
Step2: The columns ExtractedBodyText is supposed to be the content of the mail but some of the mail have a ExtractedBodyText = NaN but the Rawtext seems to contains something
Step3: We could also use the subject since it is usually a summary of the mail
Step4: Now let's try to combine the subject and the body and drop the mail that have both subject= NaN and body = Nan
Step5: Well, that number is small enough to drop all email where both Extracted subject and Extracted body is NaN.
Let's drop them and create a new columns subjectBody that is the concatenation of the 2 columns ExtractedSubject and ExtractedBody. From now we will work with that columns
Step6: Last check to be sur that our columns of interest don't have anymore NaN
Step7: Keep only mail that mentions a country
Structure of a country in pycountry.countres
Step8: First we create a dataframe with one line by countries and we count for each countries its occurences in the mail.
Since a country can be reference in many way (Switzerland, switzerland, CH), we need to consider all the possible form.
We may have a problem with word that have many meaning like US(country) and us (pronoun) so we can't just take all the country name in loer case and all the mail in lower case and just compare.
We first try to use that technique
Step9: Tokenize and remove stopwords
Step10: Sentiment analysis
We explain before our precessing. Now we will do the sentiment analysis only on the subject and the body
So we will only consider the subject and the body
Step11: Analysis
We will do a sentiment analysis on each sentense and then compute a socre for each country
We will compare different module
Step12: Liuhu
Step13: Aggregate by countries
We groupe by country and compute the mean of each score
Step14: Drop all country that have a score of -999 (they never appear in the mails)
Step15: It's a lot of country. We will also use a thresold for the appearance. We only keep mails that are mentioned in a minimum number of emails
Step16: Plot
We plot the 2 analysis. The first plot show an historgram with the vador score and in color the appearance in the mail.
In the second plot the histogram shows the liuhu score and in color the appearance in the mail
we only consider countries that are at least mention 15 times. Otherwise we end up with to much countries | Python Code:
folder = 'hillary-clinton-emails/'
emails = pd.read_csv(folder + 'Emails.csv', index_col='Id')
emails.head(5)
Explanation: Import data
End of explanation
emails.head()
Explanation: Analyse Emails
End of explanation
emails.columns
print('Number of emails: ', len(emails))
bodyNaN = emails.ExtractedBodyText.isnull().sum()
print('Number of emails with ExtractedBodyText=NaN: {}, ({:.2f}%)'.format(emails.ExtractedBodyText.isnull().sum(), bodyNaN/ len(emails)))
Explanation: The columns ExtractedBodyText is supposed to be the content of the mail but some of the mail have a ExtractedBodyText = NaN but the Rawtext seems to contains something
End of explanation
bodyNaN = emails.ExtractedSubject.isnull().sum()
print('Number of emails with ExtractedSubject=NaN: {}, ({:.2f}%)'.format(emails.ExtractedBodyText.isnull().sum(), bodyNaN/ len(emails)))
Explanation: We could also use the subject since it is usually a summary of the mail
End of explanation
subBodyNan = emails[np.logical_and(emails.ExtractedBodyText.isnull(),emails.ExtractedSubject.isnull())]
print('Number of email where both subject and body is NaN: {}({:.2f})'.format(len(subBodyNan), len(subBodyNan)/ len(emails)))
Explanation: Now let's try to combine the subject and the body and drop the mail that have both subject= NaN and body = Nan
End of explanation
emails = emails[~ np.logical_and(emails.ExtractedBodyText.isnull(), emails.ExtractedSubject.isnull())]
len(emails)
emails.ExtractedBodyText.fillna('',inplace=True)
emails.ExtractedSubject.fillna('',inplace=True)
emails['SubjectBody'] = emails.ExtractedBodyText + emails.ExtractedSubject
emails.SubjectBody.head()
Explanation: Well, that number is small enough to drop all email where both Extracted subject and Extracted body is NaN.
Let's drop them and create a new columns subjectBody that is the concatenation of the 2 columns ExtractedSubject and ExtractedBody. From now we will work with that columns
End of explanation
print('Number of NaN in columns SubjectBody: ' ,emails.SubjectBody.isnull().sum())
Explanation: Last check to be sur that our columns of interest don't have anymore NaN
End of explanation
list(pycountry.countries)[0]
Explanation: Keep only mail that mentions a country
Structure of a country in pycountry.countres
End of explanation
emails.SubjectBody.head(100).apply(print)
Explanation: First we create a dataframe with one line by countries and we count for each countries its occurences in the mail.
Since a country can be reference in many way (Switzerland, switzerland, CH), we need to consider all the possible form.
We may have a problem with word that have many meaning like US(country) and us (pronoun) so we can't just take all the country name in loer case and all the mail in lower case and just compare.
We first try to use that technique:
1. the country name can appear either in lower case, with the first letter in uppercase or all in uppercase
2. alpha_2 and alpha_3 are always used in uppercase
But we still have a lot of problem. Indeed a lot of mail contain sentences all in upper cas (see below):
- SUBJECT TO AGREEMENT ON SENSITIVE INFORMATION & REDACTIONS. NO FOIA WAIVER. STATE-SCB0045012
For example this sentence will match for Togo because of the TO and also for norway because of the NO. An other example is Andorra that appears in 55 mails thanks to AND
At first we also wanted to keep the upper case since it can be helpfull to do the sentiment analysis. Look at those 2 sentence and their corresponding score:
- VADER is very smart, handsome, and funny.: compound: 0.8545, neg: 0.0, neu: 0.299, pos: 0.701,
- VADER is VERY SMART, handsome, and FUNNY.: compound: 0.9227, neg: 0.0, neu: 0.246, pos: 0.754,
The score is not the same. But since we have a lot of information in Upper case and it nothing to do with sentiment, we will put all mails in lower case. And we will also remove the stopwords.
We know that we remove the occurance of USA under 'us' but it will also remove the 'and', 'can', 'it',...
End of explanation
from gensim import corpora, models, utils
from nltk.corpus import stopwords
sw = stopwords.words('english') + ['re', 'fw', 'fvv', 'fwd']
sw = sw + ['pm', "a", "about", "above", "above", "across", "after", "afterwards", "again", "against", "all", "almost", "alone", "along", "already", "also","although","always","am","among", "amongst", "amoungst", "amount", "an", "and", "another", "any","anyhow","anyone","anything","anyway", "anywhere", "are", "around", "as", "at", "back","be","became", "because","become","becomes", "becoming", "been", "before", "beforehand", "behind", "being", "below", "beside", "besides", "between", "beyond", "bill", "both", "bottom","but", "by", "call", "can", "cannot", "cant", "co", "con", "could", "couldnt", "cry", "de", "describe", "detail", "do", "done", "down", "due", "during", "each", "eg", "eight", "either", "eleven","else", "elsewhere", "empty", "enough", "etc", "even", "ever", "every", "everyone", "everything", "everywhere", "except", "few", "fifteen", "fify", "fill", "find", "fire", "first", "five", "for", "former", "formerly", "forty", "found", "four", "from", "front", "full", "further", "get", "give", "go", "had", "has", "hasnt", "have", "he", "hence", "her", "here", "hereafter", "hereby", "herein", "hereupon", "hers", "herself", "him", "himself", "his", "how", "however", "hundred", "ie", "if", "in", "inc", "indeed", "interest", "into", "is", "it", "its", "itself", "keep", "last", "latter", "latterly", "least", "less", "ltd", "made", "many", "may", "me", "meanwhile", "might", "mill", "mine", "more", "moreover", "most", "mostly", "move", "much", "must", "my", "myself", "name", "namely", "neither", "never", "nevertheless", "next", "nine", "no", "nobody", "none", "noone", "nor", "not", "nothing", "now", "nowhere", "of", "off", "often", "on", "once", "one", "only", "onto", "or", "other", "others", "otherwise", "our", "ours", "ourselves", "out", "over", "own","part", "per", "perhaps", "please", "put", "rather", "re", "same", "see", "seem", "seemed", "seeming", "seems", "serious", "several", "she", "should", "show", "side", "since", "sincere", "six", "sixty", "so", "some", "somehow", "someone", "something", "sometime", "sometimes", "somewhere", "still", "such", "system", "take", "ten", "than", "that", "the", "their", "them", "themselves", "then", "thence", "there", "thereafter", "thereby", "therefore", "therein", "thereupon", "these", "they", "thickv", "thin", "third", "this", "those", "though", "three", "through", "throughout", "thru", "thus", "to", "together", "too", "top", "toward", "towards", "twelve", "twenty", "two", "un", "under", "until", "up", "upon", "us", "very", "via", "was", "we", "well", "were", "what", "whatever", "when", "whence", "whenever", "where", "whereafter", "whereas", "whereby", "wherein", "whereupon", "wherever", "whether", "which", "while", "whither", "who", "whoever", "whole", "whom", "whose", "why", "will", "with", "within", "without", "would", "yet", "you", "your", "yours", "yourself", "yourselves", "the"]
def fil(row):
t = utils.simple_preprocess(row.SubjectBody)
filt = list(filter(lambda x: x not in sw, t))
return ' '.join(filt)
emails['SubjectBody'] = emails.apply(fil, axis=1)
emails.head(10)
countries = np.array([[country.name.lower(), country.alpha_2.lower(), country.alpha_3.lower()] for country in list(pycountry.countries)])
countries[:5]
countries.shape
countries = pd.DataFrame(countries, columns=['Name', 'Alpha_2', 'Alpha_3'])
countries.head()
countries.shape
countries.isin(['aruba']).any().any()
def check_country(row):
return countries.isin(row.SubjectBody.split()).any().any()
emails_country = emails[emails.apply(check_country, axis=1)]
len(emails_country)
Explanation: Tokenize and remove stopwords
End of explanation
sentiments = pd.DataFrame(emails_country.SubjectBody)
sentiments.head()
sentiments.shape
Explanation: Sentiment analysis
We explain before our precessing. Now we will do the sentiment analysis only on the subject and the body
So we will only consider the subject and the body
End of explanation
sentiments.head()
from nltk.sentiment.vader import SentimentIntensityAnalyzer
sid = SentimentIntensityAnalyzer()
def sentiment_analysis(row):
score = sid.polarity_scores(row)
return pd.Series({'Pos': score['pos'], 'Neg': score['neg'], 'Compound_':score['compound'] })
sentiments = pd.concat([sentiments, sentiments.SubjectBody.apply(sentiment_analysis)], axis=1)
sentiments.to_csv('mailScore.csv')
sentiments.head()
Explanation: Analysis
We will do a sentiment analysis on each sentense and then compute a socre for each country
We will compare different module:
- nltk.sentiment.vader that attribute a score to each sentence
- liuhu that has a set of positive word and one of neg word. We count the positif word and neg word in each sentence and compute the mean
Vader (That part takes time)
End of explanation
from nltk.corpus import opinion_lexicon
sentimentsLihuh = pd.read_csv('mailScore.csv', index_col='Id')
#transform the array of positiv and negatif word in dict
dicPosNeg = dict()
for word in opinion_lexicon.positive():
dicPosNeg[word] = 1
for word in opinion_lexicon.negative():
dicPosNeg[word] = -1
def sentiment_liuhu(sentence):
counter = []
for word in sentence.split():
value = dicPosNeg.get(word, -999)
if value != -999:
counter.append(value)
if len(counter) == 0 :
return pd.Series({'Sum_': int(0), 'Mean_': int(0) })
return pd.Series({'Sum_': np.sum(counter), 'Mean_': np.mean(counter) })
sentimentsLihuh = pd.concat([sentimentsLihuh, sentimentsLihuh.SubjectBody.apply(sentiment_liuhu)], axis=1)
sentimentsLihuh.to_csv('mailScore2.csv')
sentimentsLihuh
Explanation: Liuhu
End of explanation
sentiments = pd.read_csv('mailScore2.csv', index_col='Id')
sentiments.head()
def aggScoreByCountry(country):
condition = sentiments.apply(lambda x: np.any(country.isin(x.SubjectBody.split())), axis=1)
sent = sentiments[condition]
if len(sent) == 0:
print(country.Name, -999)
return pd.Series({'Compound_':-999, 'Mean_':-999, 'Appearance': int(len(sent))})
compound_ = np.mean(sent.Compound_)
mean_ = np.mean(sent.Mean_)
print(country.Name, compound_)
return pd.Series({'Compound_': compound_, 'Mean_': mean_, 'Appearance': int(len(sent))})
countries = pd.concat([countries, countries.apply(lambda x: aggScoreByCountry(x), axis=1)],axis=1)
countries.to_csv('countriesScore.csv')
Explanation: Aggregate by countries
We groupe by country and compute the mean of each score
End of explanation
countries = countries[countries.Compound_ != -999]
len(countries)
Explanation: Drop all country that have a score of -999 (they never appear in the mails)
End of explanation
minimum_appearance = 15
countries_min = countries[countries.Appearance >= minimum_appearance]
len(countries_min)
Explanation: It's a lot of country. We will also use a thresold for the appearance. We only keep mails that are mentioned in a minimum number of emails
End of explanation
# Set up colors : red to green
countries_sorted = countries_min.sort(columns=['Compound_'])
plt.figure(figsize=(16, 6), dpi=80)
appearance = np.array(countries_sorted.Appearance)
colors = cm.RdYlGn((appearance / float(max(y))))
plot = plt.scatter(appearance, appearance, c=appearance, cmap = 'RdYlGn')
plt.clf()
colorBar = plt.colorbar(plot)
colorBar.ax.set_title("Appearance")
index = np.arange(len(countries_sorted))
bar_width = 0.95
plt.bar(range(countries_sorted.shape[0]), countries_sorted.Compound_, align='center', tick_label=countries_sorted.Name, color=colors)
plt.xticks(rotation=90, ha='center')
plt.title('Using Vader')
plt.xlabel('Countries')
plt.ylabel('Vador Score')
countries_sorted = countries_min.sort(columns=['Mean_'])
plt.figure(figsize=(16, 6), dpi=80)
appearance = np.array(countries_sorted.Appearance)
colors = cm.RdYlGn((appearance / float(max(y))))
plot = plt.scatter(appearance, appearance, c=appearance, cmap = 'RdYlGn')
plt.clf()
colorBar = plt.colorbar(plot)
colorBar.ax.set_title("Appearance")
index = np.arange(len(countries_sorted))
bar_width = 0.95
plt.bar(range(countries_sorted.shape[0]), countries_sorted.Mean_, align='center', tick_label=countries_sorted.Name, color=colors)
plt.xticks(rotation=90, ha='center')
plt.title('Liuhu Score')
plt.xlabel('Countries')
plt.ylabel('Liuhu Score')
Explanation: Plot
We plot the 2 analysis. The first plot show an historgram with the vador score and in color the appearance in the mail.
In the second plot the histogram shows the liuhu score and in color the appearance in the mail
we only consider countries that are at least mention 15 times. Otherwise we end up with to much countries
End of explanation |
818 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convolutional Neural Networks
Step2: 2 - Outline of the Assignment
You will be implementing the building blocks of a convolutional neural network! Each function you will implement will have detailed instructions that will walk you through the steps needed
Step4: Expected Output
Step6: Expected Output
Step8: Expected Output
Step10: Expected Output
Step12: Expected Output
Step14: Expected Output
Step16: Expected Output | Python Code:
import numpy as np
import h5py
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
Explanation: Convolutional Neural Networks: Step by Step
Welcome to Course 4's first assignment! In this assignment, you will implement convolutional (CONV) and pooling (POOL) layers in numpy, including both forward propagation and (optionally) backward propagation.
Notation:
- Superscript $[l]$ denotes an object of the $l^{th}$ layer.
- Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters.
Superscript $(i)$ denotes an object from the $i^{th}$ example.
Example: $x^{(i)}$ is the $i^{th}$ training example input.
Lowerscript $i$ denotes the $i^{th}$ entry of a vector.
Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$, assuming this is a fully connected (FC) layer.
$n_H$, $n_W$ and $n_C$ denote respectively the height, width and number of channels of a given layer. If you want to reference a specific layer $l$, you can also write $n_H^{[l]}$, $n_W^{[l]}$, $n_C^{[l]}$.
$n_{H_{prev}}$, $n_{W_{prev}}$ and $n_{C_{prev}}$ denote respectively the height, width and number of channels of the previous layer. If referencing a specific layer $l$, this could also be denoted $n_H^{[l-1]}$, $n_W^{[l-1]}$, $n_C^{[l-1]}$.
We assume that you are already familiar with numpy and/or have completed the previous courses of the specialization. Let's get started!
1 - Packages
Let's first import all the packages that you will need during this assignment.
- numpy is the fundamental package for scientific computing with Python.
- matplotlib is a library to plot graphs in Python.
- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work.
End of explanation
# GRADED FUNCTION: zero_pad
def zero_pad(X, pad):
Pad with zeros all images of the dataset X. The padding is applied to the height and width of an image,
as illustrated in Figure 1.
Argument:
X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images
pad -- integer, amount of padding around each image on vertical and horizontal dimensions
Returns:
X_pad -- padded image of shape (m, n_H + 2*pad, n_W + 2*pad, n_C)
### START CODE HERE ### (≈ 1 line)
X_pad = np.pad(X, ((0,0), (pad, pad), (pad, pad), (0,0)), 'constant', constant_values = 0)
### END CODE HERE ###
return X_pad
np.random.seed(1)
x = np.random.randn(4, 3, 3, 2)
x_pad = zero_pad(x, 2)
print ("x.shape =", x.shape)
print ("x_pad.shape =", x_pad.shape)
print ("x[1,1] =", x[1,1])
print ("x_pad[1,1] =", x_pad[1,1])
fig, axarr = plt.subplots(1, 2)
axarr[0].set_title('x')
axarr[0].imshow(x[0,:,:,0])
axarr[1].set_title('x_pad')
axarr[1].imshow(x_pad[0,:,:,0])
Explanation: 2 - Outline of the Assignment
You will be implementing the building blocks of a convolutional neural network! Each function you will implement will have detailed instructions that will walk you through the steps needed:
Convolution functions, including:
Zero Padding
Convolve window
Convolution forward
Convolution backward (optional)
Pooling functions, including:
Pooling forward
Create mask
Distribute value
Pooling backward (optional)
This notebook will ask you to implement these functions from scratch in numpy. In the next notebook, you will use the TensorFlow equivalents of these functions to build the following model:
<img src="images/model.png" style="width:800px;height:300px;">
Note that for every forward function, there is its corresponding backward equivalent. Hence, at every step of your forward module you will store some parameters in a cache. These parameters are used to compute gradients during backpropagation.
3 - Convolutional Neural Networks
Although programming frameworks make convolutions easy to use, they remain one of the hardest concepts to understand in Deep Learning. A convolution layer transforms an input volume into an output volume of different size, as shown below.
<img src="images/conv_nn.png" style="width:350px;height:200px;">
In this part, you will build every step of the convolution layer. You will first implement two helper functions: one for zero padding and the other for computing the convolution function itself.
3.1 - Zero-Padding
Zero-padding adds zeros around the border of an image:
<img src="images/PAD.png" style="width:600px;height:400px;">
<caption><center> <u> <font color='purple'> Figure 1 </u><font color='purple'> : Zero-Padding<br> Image (3 channels, RGB) with a padding of 2. </center></caption>
The main benefits of padding are the following:
It allows you to use a CONV layer without necessarily shrinking the height and width of the volumes. This is important for building deeper networks, since otherwise the height/width would shrink as you go to deeper layers. An important special case is the "same" convolution, in which the height/width is exactly preserved after one layer.
It helps us keep more of the information at the border of an image. Without padding, very few values at the next layer would be affected by pixels as the edges of an image.
Exercise: Implement the following function, which pads all the images of a batch of examples X with zeros. Use np.pad. Note if you want to pad the array "a" of shape $(5,5,5,5,5)$ with pad = 1 for the 2nd dimension, pad = 3 for the 4th dimension and pad = 0 for the rest, you would do:
python
a = np.pad(a, ((0,0), (1,1), (0,0), (3,3), (0,0)), 'constant', constant_values = (..,..))
End of explanation
# GRADED FUNCTION: conv_single_step
def conv_single_step(a_slice_prev, W, b):
Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation
of the previous layer.
Arguments:
a_slice_prev -- slice of input data of shape (f, f, n_C_prev)
W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev)
b -- Bias parameters contained in a window - matrix of shape (1, 1, 1)
Returns:
Z -- a scalar value, result of convolving the sliding window (W, b) on a slice x of the input data
### START CODE HERE ### (≈ 2 lines of code)
# Element-wise product between a_slice and W. Do not add the bias yet.
s = np.multiply(a_slice_prev, W)
# Sum over all entries of the volume s.
Z = np.sum(s)
# Add bias b to Z. Cast b to a float() so that Z results in a scalar value.
Z = Z + float(b)
### END CODE HERE ###
return Z
np.random.seed(1)
a_slice_prev = np.random.randn(4, 4, 3)
W = np.random.randn(4, 4, 3)
b = np.random.randn(1, 1, 1)
Z = conv_single_step(a_slice_prev, W, b)
print("Z =", Z)
Explanation: Expected Output:
<table>
<tr>
<td>
**x.shape**:
</td>
<td>
(4, 3, 3, 2)
</td>
</tr>
<tr>
<td>
**x_pad.shape**:
</td>
<td>
(4, 7, 7, 2)
</td>
</tr>
<tr>
<td>
**x[1,1]**:
</td>
<td>
[[ 0.90085595 -0.68372786]
[-0.12289023 -0.93576943]
[-0.26788808 0.53035547]]
</td>
</tr>
<tr>
<td>
**x_pad[1,1]**:
</td>
<td>
[[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]]
</td>
</tr>
</table>
3.2 - Single step of convolution
In this part, implement a single step of convolution, in which you apply the filter to a single position of the input. This will be used to build a convolutional unit, which:
Takes an input volume
Applies a filter at every position of the input
Outputs another volume (usually of different size)
<img src="images/Convolution_schematic.gif" style="width:500px;height:300px;">
<caption><center> <u> <font color='purple'> Figure 2 </u><font color='purple'> : Convolution operation<br> with a filter of 2x2 and a stride of 1 (stride = amount you move the window each time you slide) </center></caption>
In a computer vision application, each value in the matrix on the left corresponds to a single pixel value, and we convolve a 3x3 filter with the image by multiplying its values element-wise with the original matrix, then summing them up and adding a bias. In this first step of the exercise, you will implement a single step of convolution, corresponding to applying a filter to just one of the positions to get a single real-valued output.
Later in this notebook, you'll apply this function to multiple positions of the input to implement the full convolutional operation.
Exercise: Implement conv_single_step(). Hint.
End of explanation
# GRADED FUNCTION: conv_forward
def conv_forward(A_prev, W, b, hparameters):
Implements the forward propagation for a convolution function
Arguments:
A_prev -- output activations of the previous layer, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
W -- Weights, numpy array of shape (f, f, n_C_prev, n_C)
b -- Biases, numpy array of shape (1, 1, 1, n_C)
hparameters -- python dictionary containing "stride" and "pad"
Returns:
Z -- conv output, numpy array of shape (m, n_H, n_W, n_C)
cache -- cache of values needed for the conv_backward() function
### START CODE HERE ###
# Retrieve dimensions from A_prev's shape (≈1 line)
(m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
# Retrieve dimensions from W's shape (≈1 line)
(f, f, n_C_prev, n_C) = W.shape
# Retrieve information from "hparameters" (≈2 lines)
stride = hparameters['stride']
pad = hparameters['pad']
# Compute the dimensions of the CONV output volume using the formula given above. Hint: use int() to floor. (≈2 lines)
n_H = int((n_H_prev - f + 2*pad)/stride) + 1
n_W = int((n_W_prev - f + 2*pad)/stride) + 1
# Initialize the output volume Z with zeros. (≈1 line)
Z = np.zeros((m, n_H, n_W, n_C))
# Create A_prev_pad by padding A_prev
A_prev_pad = zero_pad(A_prev, pad)
for i in range(m): # loop over the batch of training examples
a_prev_pad = A_prev_pad[i] # Select ith training example's padded activation
for h in range(n_H): # loop over vertical axis of the output volume
for w in range(n_W): # loop over horizontal axis of the output volume
for c in range(n_C): # loop over channels (= #filters) of the output volume
# Find the corners of the current "slice" (≈4 lines)
vert_start = h*stride
vert_end = vert_start+f
horiz_start = w*stride
horiz_end = horiz_start+f
# Use the corners to define the (3D) slice of a_prev_pad (See Hint above the cell). (≈1 line)
a_slice_prev = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end,:]
# Convolve the (3D) slice with the correct filter W and bias b, to get back one output neuron. (≈1 line)
Z[i, h, w, c] = conv_single_step(a_slice_prev, W[:,:,:,c], b[:,:,:,c])
### END CODE HERE ###
# Making sure your output shape is correct
assert(Z.shape == (m, n_H, n_W, n_C))
# Save information in "cache" for the backprop
cache = (A_prev, W, b, hparameters)
return Z, cache
np.random.seed(1)
A_prev = np.random.randn(10,4,4,3)
W = np.random.randn(2,2,3,8)
b = np.random.randn(1,1,1,8)
hparameters = {"pad" : 2,
"stride": 2}
Z, cache_conv = conv_forward(A_prev, W, b, hparameters)
print("Z's mean =", np.mean(Z))
print("Z[3,2,1] =", Z[3,2,1])
print("cache_conv[0][1][2][3] =", cache_conv[0][1][2][3])
Explanation: Expected Output:
<table>
<tr>
<td>
**Z**
</td>
<td>
-6.99908945068
</td>
</tr>
</table>
3.3 - Convolutional Neural Networks - Forward pass
In the forward pass, you will take many filters and convolve them on the input. Each 'convolution' gives you a 2D matrix output. You will then stack these outputs to get a 3D volume:
<center>
<video width="620" height="440" src="images/conv_kiank.mp4" type="video/mp4" controls>
</video>
</center>
Exercise: Implement the function below to convolve the filters W on an input activation A_prev. This function takes as input A_prev, the activations output by the previous layer (for a batch of m inputs), F filters/weights denoted by W, and a bias vector denoted by b, where each filter has its own (single) bias. Finally you also have access to the hyperparameters dictionary which contains the stride and the padding.
Hint:
1. To select a 2x2 slice at the upper left corner of a matrix "a_prev" (shape (5,5,3)), you would do:
python
a_slice_prev = a_prev[0:2,0:2,:]
This will be useful when you will define a_slice_prev below, using the start/end indexes you will define.
2. To define a_slice you will need to first define its corners vert_start, vert_end, horiz_start and horiz_end. This figure may be helpful for you to find how each of the corner can be defined using h, w, f and s in the code below.
<img src="images/vert_horiz_kiank.png" style="width:400px;height:300px;">
<caption><center> <u> <font color='purple'> Figure 3 </u><font color='purple'> : Definition of a slice using vertical and horizontal start/end (with a 2x2 filter) <br> This figure shows only a single channel. </center></caption>
Reminder:
The formulas relating the output shape of the convolution to the input shape is:
$$ n_H = \lfloor \frac{n_{H_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$
$$ n_W = \lfloor \frac{n_{W_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$
$$ n_C = \text{number of filters used in the convolution}$$
For this exercise, we won't worry about vectorization, and will just implement everything with for-loops.
End of explanation
# GRADED FUNCTION: pool_forward
def pool_forward(A_prev, hparameters, mode = "max"):
Implements the forward pass of the pooling layer
Arguments:
A_prev -- Input data, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
hparameters -- python dictionary containing "f" and "stride"
mode -- the pooling mode you would like to use, defined as a string ("max" or "average")
Returns:
A -- output of the pool layer, a numpy array of shape (m, n_H, n_W, n_C)
cache -- cache used in the backward pass of the pooling layer, contains the input and hparameters
# Retrieve dimensions from the input shape
(m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
# Retrieve hyperparameters from "hparameters"
f = hparameters["f"]
stride = hparameters["stride"]
# Define the dimensions of the output
n_H = int(1 + (n_H_prev - f) / stride)
n_W = int(1 + (n_W_prev - f) / stride)
n_C = n_C_prev
# Initialize output matrix A
A = np.zeros((m, n_H, n_W, n_C))
### START CODE HERE ###
for i in range(m): # loop over the training examples
for h in range(n_H): # loop on the vertical axis of the output volume
for w in range(n_W): # loop on the horizontal axis of the output volume
for c in range (n_C): # loop over the channels of the output volume
# Find the corners of the current "slice" (≈4 lines)
vert_start = h*stride
vert_end = vert_start+f
horiz_start = w*stride
horiz_end = horiz_start+f
# Use the corners to define the current slice on the ith training example of A_prev, channel c. (≈1 line)
a_prev_slice = A_prev[i, vert_start:vert_end, horiz_start:horiz_end, c]
# Compute the pooling operation on the slice. Use an if statment to differentiate the modes. Use np.max/np.mean.
if mode == "max":
A[i, h, w, c] = np.max(a_prev_slice)
elif mode == "average":
A[i, h, w, c] = np.mean(a_prev_slice)
### END CODE HERE ###
# Store the input and hparameters in "cache" for pool_backward()
cache = (A_prev, hparameters)
# Making sure your output shape is correct
assert(A.shape == (m, n_H, n_W, n_C))
return A, cache
np.random.seed(1)
A_prev = np.random.randn(2, 4, 4, 3)
hparameters = {"stride" : 2, "f": 3}
A, cache = pool_forward(A_prev, hparameters)
print("mode = max")
print("A =", A)
print()
A, cache = pool_forward(A_prev, hparameters, mode = "average")
print("mode = average")
print("A =", A)
Explanation: Expected Output:
<table>
<tr>
<td>
**Z's mean**
</td>
<td>
0.0489952035289
</td>
</tr>
<tr>
<td>
**Z[3,2,1]**
</td>
<td>
[-0.61490741 -6.7439236 -2.55153897 1.75698377 3.56208902 0.53036437
5.18531798 8.75898442]
</td>
</tr>
<tr>
<td>
**cache_conv[0][1][2][3]**
</td>
<td>
[-0.20075807 0.18656139 0.41005165]
</td>
</tr>
</table>
Finally, CONV layer should also contain an activation, in which case we would add the following line of code:
```python
Convolve the window to get back one output neuron
Z[i, h, w, c] = ...
Apply activation
A[i, h, w, c] = activation(Z[i, h, w, c])
```
You don't need to do it here.
4 - Pooling layer
The pooling (POOL) layer reduces the height and width of the input. It helps reduce computation, as well as helps make feature detectors more invariant to its position in the input. The two types of pooling layers are:
Max-pooling layer: slides an ($f, f$) window over the input and stores the max value of the window in the output.
Average-pooling layer: slides an ($f, f$) window over the input and stores the average value of the window in the output.
<table>
<td>
<img src="images/max_pool1.png" style="width:500px;height:300px;">
<td>
<td>
<img src="images/a_pool.png" style="width:500px;height:300px;">
<td>
</table>
These pooling layers have no parameters for backpropagation to train. However, they have hyperparameters such as the window size $f$. This specifies the height and width of the fxf window you would compute a max or average over.
4.1 - Forward Pooling
Now, you are going to implement MAX-POOL and AVG-POOL, in the same function.
Exercise: Implement the forward pass of the pooling layer. Follow the hints in the comments below.
Reminder:
As there's no padding, the formulas binding the output shape of the pooling to the input shape is:
$$ n_H = \lfloor \frac{n_{H_{prev}} - f}{stride} \rfloor +1 $$
$$ n_W = \lfloor \frac{n_{W_{prev}} - f}{stride} \rfloor +1 $$
$$ n_C = n_{C_{prev}}$$
End of explanation
def conv_backward(dZ, cache):
Implement the backward propagation for a convolution function
Arguments:
dZ -- gradient of the cost with respect to the output of the conv layer (Z), numpy array of shape (m, n_H, n_W, n_C)
cache -- cache of values needed for the conv_backward(), output of conv_forward()
Returns:
dA_prev -- gradient of the cost with respect to the input of the conv layer (A_prev),
numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
dW -- gradient of the cost with respect to the weights of the conv layer (W)
numpy array of shape (f, f, n_C_prev, n_C)
db -- gradient of the cost with respect to the biases of the conv layer (b)
numpy array of shape (1, 1, 1, n_C)
### START CODE HERE ###
# Retrieve information from "cache"
(A_prev, W, b, hparameters) = None
# Retrieve dimensions from A_prev's shape
(m, n_H_prev, n_W_prev, n_C_prev) = None
# Retrieve dimensions from W's shape
(f, f, n_C_prev, n_C) = None
# Retrieve information from "hparameters"
stride = None
pad = None
# Retrieve dimensions from dZ's shape
(m, n_H, n_W, n_C) = None
# Initialize dA_prev, dW, db with the correct shapes
dA_prev = None
dW = None
db = None
# Pad A_prev and dA_prev
A_prev_pad = None
dA_prev_pad = None
for i in range(None): # loop over the training examples
# select ith training example from A_prev_pad and dA_prev_pad
a_prev_pad = None
da_prev_pad = None
for h in range(None): # loop over vertical axis of the output volume
for w in range(None): # loop over horizontal axis of the output volume
for c in range(None): # loop over the channels of the output volume
# Find the corners of the current "slice"
vert_start = None
vert_end = None
horiz_start = None
horiz_end = None
# Use the corners to define the slice from a_prev_pad
a_slice = None
# Update gradients for the window and the filter's parameters using the code formulas given above
da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += None
dW[:,:,:,c] += None
db[:,:,:,c] += None
# Set the ith training example's dA_prev to the unpaded da_prev_pad (Hint: use X[pad:-pad, pad:-pad, :])
dA_prev[i, :, :, :] = None
### END CODE HERE ###
# Making sure your output shape is correct
assert(dA_prev.shape == (m, n_H_prev, n_W_prev, n_C_prev))
return dA_prev, dW, db
np.random.seed(1)
dA, dW, db = conv_backward(Z, cache_conv)
print("dA_mean =", np.mean(dA))
print("dW_mean =", np.mean(dW))
print("db_mean =", np.mean(db))
Explanation: Expected Output:
<table>
<tr>
<td>
A =
</td>
<td>
[[[[ 1.74481176 0.86540763 1.13376944]]]
[[[ 1.13162939 1.51981682 2.18557541]]]]
</td>
</tr>
<tr>
<td>
A =
</td>
<td>
[[[[ 0.02105773 -0.20328806 -0.40389855]]]
[[[-0.22154621 0.51716526 0.48155844]]]]
</td>
</tr>
</table>
Congratulations! You have now implemented the forward passes of all the layers of a convolutional network.
The remainer of this notebook is optional, and will not be graded.
5 - Backpropagation in convolutional neural networks (OPTIONAL / UNGRADED)
In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers don't need to bother with the details of the backward pass. The backward pass for convolutional networks is complicated. If you wish however, you can work through this optional portion of the notebook to get a sense of what backprop in a convolutional network looks like.
When in an earlier course you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives with respect to the cost to update the parameters. Similarly, in convolutional neural networks you can to calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are not trivial and we did not derive them in lecture, but we briefly presented them below.
5.1 - Convolutional layer backward pass
Let's start by implementing the backward pass for a CONV layer.
5.1.1 - Computing dA:
This is the formula for computing $dA$ with respect to the cost for a certain filter $W_c$ and a given training example:
$$ dA += \sum {h=0} ^{n_H} \sum{w=0} ^{n_W} W_c \times dZ_{hw} \tag{1}$$
Where $W_c$ is a filter and $dZ_{hw}$ is a scalar corresponding to the gradient of the cost with respect to the output of the conv layer Z at the hth row and wth column (corresponding to the dot product taken at the ith stride left and jth stride down). Note that at each time, we multiply the the same filter $W_c$ by a different dZ when updating dA. We do so mainly because when computing the forward propagation, each filter is dotted and summed by a different a_slice. Therefore when computing the backprop for dA, we are just adding the gradients of all the a_slices.
In code, inside the appropriate for-loops, this formula translates into:
python
da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c]
5.1.2 - Computing dW:
This is the formula for computing $dW_c$ ($dW_c$ is the derivative of one filter) with respect to the loss:
$$ dW_c += \sum {h=0} ^{n_H} \sum{w=0} ^ {n_W} a_{slice} \times dZ_{hw} \tag{2}$$
Where $a_{slice}$ corresponds to the slice which was used to generate the acitivation $Z_{ij}$. Hence, this ends up giving us the gradient for $W$ with respect to that slice. Since it is the same $W$, we will just add up all such gradients to get $dW$.
In code, inside the appropriate for-loops, this formula translates into:
python
dW[:,:,:,c] += a_slice * dZ[i, h, w, c]
5.1.3 - Computing db:
This is the formula for computing $db$ with respect to the cost for a certain filter $W_c$:
$$ db = \sum_h \sum_w dZ_{hw} \tag{3}$$
As you have previously seen in basic neural networks, db is computed by summing $dZ$. In this case, you are just summing over all the gradients of the conv output (Z) with respect to the cost.
In code, inside the appropriate for-loops, this formula translates into:
python
db[:,:,:,c] += dZ[i, h, w, c]
Exercise: Implement the conv_backward function below. You should sum over all the training examples, filters, heights, and widths. You should then compute the derivatives using formulas 1, 2 and 3 above.
End of explanation
def create_mask_from_window(x):
Creates a mask from an input matrix x, to identify the max entry of x.
Arguments:
x -- Array of shape (f, f)
Returns:
mask -- Array of the same shape as window, contains a True at the position corresponding to the max entry of x.
### START CODE HERE ### (≈1 line)
mask = None
### END CODE HERE ###
return mask
np.random.seed(1)
x = np.random.randn(2,3)
mask = create_mask_from_window(x)
print('x = ', x)
print("mask = ", mask)
Explanation: Expected Output:
<table>
<tr>
<td>
**dA_mean**
</td>
<td>
1.45243777754
</td>
</tr>
<tr>
<td>
**dW_mean**
</td>
<td>
1.72699145831
</td>
</tr>
<tr>
<td>
**db_mean**
</td>
<td>
7.83923256462
</td>
</tr>
</table>
5.2 Pooling layer - backward pass
Next, let's implement the backward pass for the pooling layer, starting with the MAX-POOL layer. Even though a pooling layer has no parameters for backprop to update, you still need to backpropagation the gradient through the pooling layer in order to compute gradients for layers that came before the pooling layer.
5.2.1 Max pooling - backward pass
Before jumping into the backpropagation of the pooling layer, you are going to build a helper function called create_mask_from_window() which does the following:
$$ X = \begin{bmatrix}
1 && 3 \
4 && 2
\end{bmatrix} \quad \rightarrow \quad M =\begin{bmatrix}
0 && 0 \
1 && 0
\end{bmatrix}\tag{4}$$
As you can see, this function creates a "mask" matrix which keeps track of where the maximum of the matrix is. True (1) indicates the position of the maximum in X, the other entries are False (0). You'll see later that the backward pass for average pooling will be similar to this but using a different mask.
Exercise: Implement create_mask_from_window(). This function will be helpful for pooling backward.
Hints:
- np.max() may be helpful. It computes the maximum of an array.
- If you have a matrix X and a scalar x: A = (X == x) will return a matrix A of the same size as X such that:
A[i,j] = True if X[i,j] = x
A[i,j] = False if X[i,j] != x
- Here, you don't need to consider cases where there are several maxima in a matrix.
End of explanation
def distribute_value(dz, shape):
Distributes the input value in the matrix of dimension shape
Arguments:
dz -- input scalar
shape -- the shape (n_H, n_W) of the output matrix for which we want to distribute the value of dz
Returns:
a -- Array of size (n_H, n_W) for which we distributed the value of dz
### START CODE HERE ###
# Retrieve dimensions from shape (≈1 line)
(n_H, n_W) = None
# Compute the value to distribute on the matrix (≈1 line)
average = None
# Create a matrix where every entry is the "average" value (≈1 line)
a = None
### END CODE HERE ###
return a
a = distribute_value(2, (2,2))
print('distributed value =', a)
Explanation: Expected Output:
<table>
<tr>
<td>
**x =**
</td>
<td>
[[ 1.62434536 -0.61175641 -0.52817175] <br>
[-1.07296862 0.86540763 -2.3015387 ]]
</td>
</tr>
<tr>
<td>
**mask =**
</td>
<td>
[[ True False False] <br>
[False False False]]
</td>
</tr>
</table>
Why do we keep track of the position of the max? It's because this is the input value that ultimately influenced the output, and therefore the cost. Backprop is computing gradients with respect to the cost, so anything that influences the ultimate cost should have a non-zero gradient. So, backprop will "propagate" the gradient back to this particular input value that had influenced the cost.
5.2.2 - Average pooling - backward pass
In max pooling, for each input window, all the "influence" on the output came from a single input value--the max. In average pooling, every element of the input window has equal influence on the output. So to implement backprop, you will now implement a helper function that reflects this.
For example if we did average pooling in the forward pass using a 2x2 filter, then the mask you'll use for the backward pass will look like:
$$ dZ = 1 \quad \rightarrow \quad dZ =\begin{bmatrix}
1/4 && 1/4 \
1/4 && 1/4
\end{bmatrix}\tag{5}$$
This implies that each position in the $dZ$ matrix contributes equally to output because in the forward pass, we took an average.
Exercise: Implement the function below to equally distribute a value dz through a matrix of dimension shape. Hint
End of explanation
def pool_backward(dA, cache, mode = "max"):
Implements the backward pass of the pooling layer
Arguments:
dA -- gradient of cost with respect to the output of the pooling layer, same shape as A
cache -- cache output from the forward pass of the pooling layer, contains the layer's input and hparameters
mode -- the pooling mode you would like to use, defined as a string ("max" or "average")
Returns:
dA_prev -- gradient of cost with respect to the input of the pooling layer, same shape as A_prev
### START CODE HERE ###
# Retrieve information from cache (≈1 line)
(A_prev, hparameters) = None
# Retrieve hyperparameters from "hparameters" (≈2 lines)
stride = None
f = None
# Retrieve dimensions from A_prev's shape and dA's shape (≈2 lines)
m, n_H_prev, n_W_prev, n_C_prev = None
m, n_H, n_W, n_C = None
# Initialize dA_prev with zeros (≈1 line)
dA_prev = None
for i in range(None): # loop over the training examples
# select training example from A_prev (≈1 line)
a_prev = None
for h in range(None): # loop on the vertical axis
for w in range(None): # loop on the horizontal axis
for c in range(None): # loop over the channels (depth)
# Find the corners of the current "slice" (≈4 lines)
vert_start = None
vert_end = None
horiz_start = None
horiz_end = None
# Compute the backward propagation in both modes.
if mode == "max":
# Use the corners and "c" to define the current slice from a_prev (≈1 line)
a_prev_slice = None
# Create the mask from a_prev_slice (≈1 line)
mask = None
# Set dA_prev to be dA_prev + (the mask multiplied by the correct entry of dA) (≈1 line)
dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += None
elif mode == "average":
# Get the value a from dA (≈1 line)
da = None
# Define the shape of the filter as fxf (≈1 line)
shape = None
# Distribute it to get the correct slice of dA_prev. i.e. Add the distributed value of da. (≈1 line)
dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += None
### END CODE ###
# Making sure your output shape is correct
assert(dA_prev.shape == A_prev.shape)
return dA_prev
np.random.seed(1)
A_prev = np.random.randn(5, 5, 3, 2)
hparameters = {"stride" : 1, "f": 2}
A, cache = pool_forward(A_prev, hparameters)
dA = np.random.randn(5, 4, 2, 2)
dA_prev = pool_backward(dA, cache, mode = "max")
print("mode = max")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
print()
dA_prev = pool_backward(dA, cache, mode = "average")
print("mode = average")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
Explanation: Expected Output:
<table>
<tr>
<td>
distributed_value =
</td>
<td>
[[ 0.5 0.5]
<br\>
[ 0.5 0.5]]
</td>
</tr>
</table>
5.2.3 Putting it together: Pooling backward
You now have everything you need to compute backward propagation on a pooling layer.
Exercise: Implement the pool_backward function in both modes ("max" and "average"). You will once again use 4 for-loops (iterating over training examples, height, width, and channels). You should use an if/elif statement to see if the mode is equal to 'max' or 'average'. If it is equal to 'average' you should use the distribute_value() function you implemented above to create a matrix of the same shape as a_slice. Otherwise, the mode is equal to 'max', and you will create a mask with create_mask_from_window() and multiply it by the corresponding value of dZ.
End of explanation |
819 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create Function To Visualize Classification Regions
You can ignore the code below. It is used to visualize the the decision regions of the classifier. However it is unimportant to this tutorial to understand how the function works.
Step2: Generate Data
Here we are generating some non-linearly separable data that we will train our classifier on. This data would be akin to your training dataset. There are two classes in our y vector
Step3: Classify Using a Linear Kernel
The most basic way to use a SVC is with a linear kernel, which means the decision boundary is a straight line (or hyperplane in higher dimensions). Linear kernels are rarely used in practice, however I wanted to show it here since it is the most basic version of SVC. As can been seen below, it is not very good at classifying (which can be seen by all the blue X's in the red region) because the data is not linear.
Step4: Classify Using a RBF Kernel
Radial Basis Function is a commonly used kernel in SVC
Step5: Gamma = 1.0
You can see a big difference when we increase the gamma to 1. Now the decision boundary is starting to better cover the spread of the data.
Step6: Gamma = 10.0
At gamma = 10 the spread of the kernel is less pronounced. The decision boundary starts to be highly effected by individual data points (i.e. variance).
Step7: Gamma = 100.0
With high gamma, the decision boundary is almost entirely dependent on individual data points, creating "islands". This data is clearly overfitted.
Step8: C - The Penalty Parameter
Now we will repeat the process for C
Step9: C = 10
At C = 10, the classifier is less tolerant to misclassified data points and therefore the decision boundary is more severe.
Step10: C = 1000
When C = 1000, the classifier starts to become very intolerant to misclassified data points and thus the decision boundary becomes less biased and has more variance (i.e. more dependent on the individual data points).
Step11: C = 10000
At C = 10000, the classifier "works really hard" to not misclassify data points and we see signs of overfitting.
Step12: C = 100000
At C = 100000, the classifier is heavily penalized for any misclassified data points and therefore the margins are small. | Python Code:
# Import packages to visualize the classifer
from matplotlib.colors import ListedColormap
import matplotlib.pyplot as plt
import warnings
# Import packages to do the classifying
import numpy as np
from sklearn.svm import SVC
Explanation: Title: SVC Parameters When Using RBF Kernel
Slug: svc_parameters_using_rbf_kernel
Summary: SVC Parameters When Using RBF Kernel
Date: 2016-11-22 12:00
Category: Machine Learning
Tags: Support Vector Machines
Authors: Chris Albon
In this tutorial we will visually explore the effects of the two parameters from the support vector classifier (SVC) when using the radial basis function kernel (RBF). This tutorial draws heavily on the code used in Sebastian Raschka's book Python Machine Learning.
Preliminaries
End of explanation
def versiontuple(v):
return tuple(map(int, (v.split("."))))
def plot_decision_regions(X, y, classifier, test_idx=None, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1],
alpha=0.8, c=cmap(idx),
marker=markers[idx], label=cl)
# highlight test samples
if test_idx:
# plot all samples
if not versiontuple(np.__version__) >= versiontuple('1.9.0'):
X_test, y_test = X[list(test_idx), :], y[list(test_idx)]
warnings.warn('Please update to NumPy 1.9.0 or newer')
else:
X_test, y_test = X[test_idx, :], y[test_idx]
plt.scatter(X_test[:, 0],
X_test[:, 1],
c='',
alpha=1.0,
linewidths=1,
marker='o',
s=55, label='test set')
Explanation: Create Function To Visualize Classification Regions
You can ignore the code below. It is used to visualize the the decision regions of the classifier. However it is unimportant to this tutorial to understand how the function works.
End of explanation
np.random.seed(0)
X_xor = np.random.randn(200, 2)
y_xor = np.logical_xor(X_xor[:, 0] > 0,
X_xor[:, 1] > 0)
y_xor = np.where(y_xor, 1, -1)
plt.scatter(X_xor[y_xor == 1, 0],
X_xor[y_xor == 1, 1],
c='b', marker='x',
label='1')
plt.scatter(X_xor[y_xor == -1, 0],
X_xor[y_xor == -1, 1],
c='r',
marker='s',
label='-1')
plt.xlim([-3, 3])
plt.ylim([-3, 3])
plt.legend(loc='best')
plt.tight_layout()
plt.show()
Explanation: Generate Data
Here we are generating some non-linearly separable data that we will train our classifier on. This data would be akin to your training dataset. There are two classes in our y vector: blue x's and red squares.
End of explanation
# Create a SVC classifier using a linear kernel
svm = SVC(kernel='linear', C=1, random_state=0)
# Train the classifier
svm.fit(X_xor, y_xor)
# Visualize the decision boundaries
plot_decision_regions(X_xor, y_xor, classifier=svm)
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
Explanation: Classify Using a Linear Kernel
The most basic way to use a SVC is with a linear kernel, which means the decision boundary is a straight line (or hyperplane in higher dimensions). Linear kernels are rarely used in practice, however I wanted to show it here since it is the most basic version of SVC. As can been seen below, it is not very good at classifying (which can be seen by all the blue X's in the red region) because the data is not linear.
End of explanation
# Create a SVC classifier using an RBF kernel
svm = SVC(kernel='rbf', random_state=0, gamma=.01, C=1)
# Train the classifier
svm.fit(X_xor, y_xor)
# Visualize the decision boundaries
plot_decision_regions(X_xor, y_xor, classifier=svm)
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
Explanation: Classify Using a RBF Kernel
Radial Basis Function is a commonly used kernel in SVC:
$$K(\mathbf {x} ,\mathbf {x'} )=\exp \left(-{\frac {||\mathbf {x} -\mathbf {x'} ||^{2}}{2\sigma ^{2}}}\right)$$
where $||\mathbf {x} -\mathbf {x'} ||^{2}$ is the squared Euclidean distance between two data points $\mathbf {x}$ and $\mathbf {x'}$. If this doesn't make sense, Sebastian's book has a full description. However, for this tutorial, it is only important to know that an SVC classifier using an RBF kernel has two parameters: gamma and C.
Gamma
gamma is a parameter of the RBF kernel and can be thought of as the 'spread' of the kernel and therefore the decision region. When gamma is low, the 'curve' of the decision boundary is very low and thus the decision region is very broad. When gamma is high, the 'curve' of the decision boundary is high, which creates islands of decision-boundaries around data points. We will see this very clearly below.
C
C is a parameter of the SVC learner and is the penalty for misclassifying a data point. When C is small, the classifier is okay with misclassified data points (high bias, low variance). When C is large, the classifier is heavily penalized for misclassified data and therefore bends over backwards avoid any misclassified data points (low bias, high variance).
Gamma
In the four charts below, we apply the same SVC-RBF classifier to the same data while holding C constant. The only difference between each chart is that each time we will increase the value of gamma. By doing so, we can visually see the effect of gamma on the decision boundary.
Gamma = 0.01
In the case of our SVC classifier and data, when using a low gamma like 0.01, the decision boundary is not very 'curvy', rather it is just one big sweeping arch.
End of explanation
# Create a SVC classifier using an RBF kernel
svm = SVC(kernel='rbf', random_state=0, gamma=1, C=1)
# Train the classifier
svm.fit(X_xor, y_xor)
# Visualize the decision boundaries
plot_decision_regions(X_xor, y_xor, classifier=svm)
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
Explanation: Gamma = 1.0
You can see a big difference when we increase the gamma to 1. Now the decision boundary is starting to better cover the spread of the data.
End of explanation
# Create a SVC classifier using an RBF kernel
svm = SVC(kernel='rbf', random_state=0, gamma=10, C=1)
# Train the classifier
svm.fit(X_xor, y_xor)
# Visualize the decision boundaries
plot_decision_regions(X_xor, y_xor, classifier=svm)
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
Explanation: Gamma = 10.0
At gamma = 10 the spread of the kernel is less pronounced. The decision boundary starts to be highly effected by individual data points (i.e. variance).
End of explanation
# Create a SVC classifier using an RBF kernel
svm = SVC(kernel='rbf', random_state=0, gamma=100, C=1)
# Train the classifier
svm.fit(X_xor, y_xor)
# Visualize the decision boundaries
plot_decision_regions(X_xor, y_xor, classifier=svm)
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
Explanation: Gamma = 100.0
With high gamma, the decision boundary is almost entirely dependent on individual data points, creating "islands". This data is clearly overfitted.
End of explanation
# Create a SVC classifier using an RBF kernel
svm = SVC(kernel='rbf', random_state=0, gamma=.01, C=1)
# Train the classifier
svm.fit(X_xor, y_xor)
# Visualize the decision boundaries
plot_decision_regions(X_xor, y_xor, classifier=svm)
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
Explanation: C - The Penalty Parameter
Now we will repeat the process for C: we will use the same classifier, same data, and hold gamma constant. The only thing we will change is the C, the penalty for misclassification.
C = 1
With C = 1, the classifier is clearly tolerant of misclassified data point. There are many red points in the blue region and blue points in the red region.
End of explanation
# Create a SVC classifier using an RBF kernel
svm = SVC(kernel='rbf', random_state=0, gamma=.01, C=10)
# Train the classifier
svm.fit(X_xor, y_xor)
# Visualize the decision boundaries
plot_decision_regions(X_xor, y_xor, classifier=svm)
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
Explanation: C = 10
At C = 10, the classifier is less tolerant to misclassified data points and therefore the decision boundary is more severe.
End of explanation
# Create a SVC classifier using an RBF kernel
svm = SVC(kernel='rbf', random_state=0, gamma=.01, C=1000)
# Train the classifier
svm.fit(X_xor, y_xor)
# Visualize the decision boundaries
plot_decision_regions(X_xor, y_xor, classifier=svm)
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
Explanation: C = 1000
When C = 1000, the classifier starts to become very intolerant to misclassified data points and thus the decision boundary becomes less biased and has more variance (i.e. more dependent on the individual data points).
End of explanation
# Create a SVC classifier using an RBF kernel
svm = SVC(kernel='rbf', random_state=0, gamma=.01, C=10000)
# Train the classifier
svm.fit(X_xor, y_xor)
# Visualize the decision boundaries
plot_decision_regions(X_xor, y_xor, classifier=svm)
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
Explanation: C = 10000
At C = 10000, the classifier "works really hard" to not misclassify data points and we see signs of overfitting.
End of explanation
# Create a SVC classifier using an RBF kernel
svm = SVC(kernel='rbf', random_state=0, gamma=.01, C=100000)
# Train the classifier
svm.fit(X_xor, y_xor)
# Visualize the decision boundaries
plot_decision_regions(X_xor, y_xor, classifier=svm)
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
Explanation: C = 100000
At C = 100000, the classifier is heavily penalized for any misclassified data points and therefore the margins are small.
End of explanation |
820 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1 Simple Octave/MATLAB Function
As a quick warm up, create a function to return a 5x5 identity matrix.
Step1: 2 Linear Regression with One Variable
In this part of this exercise, you will implement linear regression with one variable to predict profits for a food truck. Suppose you are the CEO of a restaurant franchise and are considering different cities for opening a new outlet. The chain already has trucks in various cities and you have data for profits and populations from the cities.
You would like to use this data to help you select which city to expand to next. The file ex1data1.txt contains the dataset for our linear regression prob- lem. The first column is the population of a city and the second column is
the profit of a food truck in that city. A negative value for profit indicates a loss.
2.1 Plotting the Data
Before starting on any task, it is often useful to understand the data by visualizing it. For this dataset, you can use a scatter plot to visualize the data, since it has only two properties to plot (profit and population). (Many other problems that you will encounter in real life are multi-dimensional and can't be plotted on a 2-d plot.)
Step2: 2.2 Gradient Descent
In this part, you will fit the linear regression parameters $\theta$ to our dataset using gradient descent.
2.2.1 Update Equations
The objective of linear regression is to minimize the cost function
$$
J\left( \theta \right) = \frac{1}{2m} \sum_{i=1}^m \left( h_\theta \left( x^{\left( i\right)} \right) - y^{\left( i \right)} \right)^2
$$
where $h_\theta\left( x \right)$ is the hypothesis given by the linear model
$$
h_\theta\left( x \right) = \theta^\intercal x = \theta_0 + \theta_1 x_1
$$
Recall that the parameters of your model are the $\theta_j$ values. These are the values you will adjust to minimize cost $J(\theta)$. One way to do this is to use the batch gradient descent algorithm. In batch gradient descent, each iteration performs the update
$$
\theta_j
Step3: Let's make the (totally random) guess that $\theta_0$ = 0 and $\theta_1$ = 0. In that case, we have the following output from the hypothesis function.
Step6: 2.2.3 Computing the Cost $J(\theta)$
Now, we can define our actual hypothesis function for linear regression with a single variable.
Step8: Gradient Descent
Now we'll actually implement the gradient descent algorithm. Keep in mind that the cost $J(\theta)$ is parameterized by the vector $\theta$, not $X$ and $y$. That is, we minimize $J(\theta)$ by changing $\theta$. We initialize the initial parameters to 0 and the learning rate alpha to 0.01.
Step9: After running the batch gradient descent algorithm, we can plot the convergence of $J(\theta)$ over the number of iterations.
Step10: 2.4 Visualizing $J(\theta)$ | Python Code:
A = np.eye(5)
print(A)
Explanation: 1 Simple Octave/MATLAB Function
As a quick warm up, create a function to return a 5x5 identity matrix.
End of explanation
datafile = 'ex1\\ex1data1.txt'
df = pd.read_csv(datafile, header=None, names=['Population', 'Profit'])
def plot_data(x, y):
plt.figure(figsize=(10, 6))
plt.plot(x, y, '.', label='Training Data')
plt.xlabel("Population of City in 10,000s")
plt.ylabel("Profit in $10,000s")
plot_data(df['Population'], df['Profit'])
Explanation: 2 Linear Regression with One Variable
In this part of this exercise, you will implement linear regression with one variable to predict profits for a food truck. Suppose you are the CEO of a restaurant franchise and are considering different cities for opening a new outlet. The chain already has trucks in various cities and you have data for profits and populations from the cities.
You would like to use this data to help you select which city to expand to next. The file ex1data1.txt contains the dataset for our linear regression prob- lem. The first column is the population of a city and the second column is
the profit of a food truck in that city. A negative value for profit indicates a loss.
2.1 Plotting the Data
Before starting on any task, it is often useful to understand the data by visualizing it. For this dataset, you can use a scatter plot to visualize the data, since it has only two properties to plot (profit and population). (Many other problems that you will encounter in real life are multi-dimensional and can't be plotted on a 2-d plot.)
End of explanation
# set the number of training examples
m = len(df['Population'])
# create an array from the dataframe (missing column for x_0 values)
X = df['Population'].values
# add in the first column of the array for x_0 values
X = X[:, np.newaxis]
X = np.insert(X, 0, 1, axis=1)
y = df['Profit'].values
y = y[:, np.newaxis]
Explanation: 2.2 Gradient Descent
In this part, you will fit the linear regression parameters $\theta$ to our dataset using gradient descent.
2.2.1 Update Equations
The objective of linear regression is to minimize the cost function
$$
J\left( \theta \right) = \frac{1}{2m} \sum_{i=1}^m \left( h_\theta \left( x^{\left( i\right)} \right) - y^{\left( i \right)} \right)^2
$$
where $h_\theta\left( x \right)$ is the hypothesis given by the linear model
$$
h_\theta\left( x \right) = \theta^\intercal x = \theta_0 + \theta_1 x_1
$$
Recall that the parameters of your model are the $\theta_j$ values. These are the values you will adjust to minimize cost $J(\theta)$. One way to do this is to use the batch gradient descent algorithm. In batch gradient descent, each iteration performs the update
$$
\theta_j := \theta_j - \alpha\frac{1}{m}\sum_{i=1}^m \left( h_\theta\left( x^{\left( i\right)} \right) - y^{\left(i\right)}\right) x_j^{\left(i\right)} \;\;\;\;\;\;\;\;\;\; \text{simultaneously update } \theta_j \text{ for all } j \text{.}
$$
With each step of gradient descent, your parameters $\theta_j$ come closer to the optimal values that will achieve the lowest cost $J(\theta)$.
2.2.2 Implementation
In the following lines, we add another dimension to our data to accommodate the $\theta_0$ intercept term.
End of explanation
theta_values = np.array([[0.], [0]])
print(theta_values.shape)
print(X.shape, end='\n\n')
_ = np.dot(X, theta_values)
print(_.shape)
Explanation: Let's make the (totally random) guess that $\theta_0$ = 0 and $\theta_1$ = 0. In that case, we have the following output from the hypothesis function.
End of explanation
# define the hypothesis
def h(theta, X):
Takes the dot product of the matrix X and the vector theta,
yielding a predicted result.
return np.dot(X, theta)
def compute_cost(X, y, theta):
Takes the design matrix X and output vector y, and computes the cost of
the parameters stored in the vector theta.
The dimensions must be as follows:
- X must be m x n
- y must be m x 1
- theta must be n x 1
m = len(y)
J = 1 / (2*m) * np.dot((np.dot(X, theta) - y).T, (np.dot(X, theta) - y))
return J
# define column vector theta = [[0], [0]]
theta = np.zeros((2, 1))
# compute the cost function for our existing X and y, with our new theta vector
# verify that the cost for our theta of zeros is 32.07
compute_cost(X, y, theta)
Explanation: 2.2.3 Computing the Cost $J(\theta)$
Now, we can define our actual hypothesis function for linear regression with a single variable.
End of explanation
def gradient_descent(X, y, theta, alpha, num_iters):
m = len(y)
J_history = []
theta_history = []
for i in range(num_iters):
J_history.append(float(compute_cost(X, y, theta)))
theta_history.append(theta)
theta = theta - (alpha / m) * np.dot(X.T, (np.dot(X, theta) - y))
return theta, J_history, theta_history
# set up some initial parameters for gradient descent
theta_initial = np.zeros((2, 1))
iterations = 1500
alpha = 0.01
theta_final, J_hist, theta_hist = gradient_descent(X, y,
theta_initial,
alpha, iterations)
Explanation: Gradient Descent
Now we'll actually implement the gradient descent algorithm. Keep in mind that the cost $J(\theta)$ is parameterized by the vector $\theta$, not $X$ and $y$. That is, we minimize $J(\theta)$ by changing $\theta$. We initialize the initial parameters to 0 and the learning rate alpha to 0.01.
End of explanation
def plot_cost_convergence(J_history):
abscissa = list(range(len(J_history)))
ordinate = J_history
plt.figure(figsize=(10, 6))
plt.plot(abscissa, ordinate, '.')
plt.title('Convergence of the Cost Function', fontsize=18)
plt.xlabel('Iteration Number', fontsize=14)
plt.ylabel('Cost Function', fontsize=14)
plt.xlim(min(abscissa) - max(abscissa) * 0.05, 1.05 * max(abscissa))
plot_cost_convergence(J_hist)
plt.ylim(4.3, 6.9)
plot_data(df['Population'], df['Profit'])
x_min = min(df.Population)
x_max = max(df.Population)
abscissa = np.linspace(x_min, x_max, 50)
hypot = lambda x: theta_final[0] + theta_final[1] * x
ordinate = [hypot(x) for x in abscissa]
plt.plot(abscissa, ordinate, label='Hypothesis h(x) = {:.2f} + {:.2f}x'.format(
float(theta_final[0]), float(theta_final[1])), color='indianred')
plt.legend(loc=4, frameon=True)
Explanation: After running the batch gradient descent algorithm, we can plot the convergence of $J(\theta)$ over the number of iterations.
End of explanation
from mpl_toolkits.mplot3d import axes3d, Axes3D
from matplotlib import cm
fig = plt.figure(figsize=(12, 12))
ax = fig.gca(projection='3d')
theta_0_vals = np.linspace(-10, 10, 100)
theta_1_vals = np.linspace(-1, 4, 100)
theta1, theta2, cost = [], [], []
for t0 in theta_0_vals:
for t1 in theta_1_vals:
theta1.append(t0)
theta2.append(t1)
theta_array = np.array([[t0], [t1]])
cost.append(compute_cost(X, y, theta_array))
scat = ax.scatter(theta1, theta2, cost,
c=np.abs(cost), cmap=plt.get_cmap('rainbow'))
plt.xlabel(r'$\theta_0$', fontsize=24)
plt.ylabel(r'$\theta_1$', fontsize=24)
plt.title(r'Cost Function by $\theta_0$ and $\theta_1$', fontsize=24)
theta_0_hist = [x[0] for x in theta_hist]
theta_1_hist = [x[1] for x in theta_hist]
theta_hist_end = len(theta_0_hist) - 1
fig = plt.figure(figsize=(12, 12))
ax = fig.gca(projection='3d')
theta_0_vals = np.linspace(-10, 10, 100)
theta_1_vals = np.linspace(-1, 4, 100)
theta1, theta2, cost = [], [], []
for t0 in theta_0_vals:
for t1 in theta_1_vals:
theta1.append(t0)
theta2.append(t1)
theta_array = np.array([[t0], [t1]])
cost.append(compute_cost(X, y, theta_array))
scat = ax.scatter(theta1, theta2, cost,
c=np.abs(cost), cmap=plt.get_cmap('rainbow'))
plt.plot(theta_0_hist, theta_1_hist, J_hist, 'r',
label='Cost Minimization Path')
plt.plot(theta_0_hist[0], theta_1_hist[0], J_hist[0], 'ro',
label='Cost Minimization Start')
plt.plot(theta_0_hist[theta_hist_end],
theta_1_hist[theta_hist_end],
J_hist[theta_hist_end], 'co', label='Cost Minimization Finish')
plt.xlabel(r'$\theta_0$', fontsize=24)
plt.ylabel(r'$\theta_1$', fontsize=24)
plt.title(r'Cost Function Minimization', fontsize=24)
plt.legend()
Explanation: 2.4 Visualizing $J(\theta)$
End of explanation |
821 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy elegance and curve smoothing
I recently came across a blog where smoothing of data was discussed. The following function was used to smooth a data set with a Gaussian kernel. Read the blog to see what it is doing. Our previous implementation of data smoothing was using a tophat kernet with a window of 3 points. See here for more details.
Step1: This function is syntactically correct and it works. Let's test it with a data set. The same one used in the scipy cookbook (http
Step3: Despite working, this code has several shortcomming. Our task here will be to improve the code for readability and efficiency. Because premature optimization is the source of all evil we will first focus on just making the code more clear and elegant. A first look at the code shows
Step4: Now we check if it works and it gives the same result
Step6: For clarity, we can use Numpy functions ones, and zeros to create some of the arrays. pad will also be useful to pad the array instead of using hstack. Sometimes it's enough to know English to guess the existence of certain functions, such as pad.
Step8: Modifying the value of the loop varible i is also rather ugly. We could directly define frac correctly.
Looking closer at this loop, we see that weightGauss and weight are, after all, the same thing. Why don't we directly fill the values of weight? We do not need weightGauss.
Step10: At this point, we see that the values of the weight array only depend on i, not on previous values. So that we can create them with an array funcion. The i values go from -degree+1 to degree. That is, if our degree is 3 we want a range from -2 to 2. Then, we will have degree*2-1 = 5 windows centered around 0. That's all we need to know to remove that loop.
Step11: Ooops! But we're not getting the same results any more! Let's plot the difference to see what is going on.
Step12: OK! So the results are not exactly the same but almost, due to floating point errors. We could plot the results each time, but we cal also use the numpy function allclose to check for correctness
Step14: Our last step of beautifying the code will be to check whether Numpy has a convolution function so that we do not need to do it in a for loop. Here, one cas to check the documentation to make sure the treatment of the boundaries is the same. We also needed to pad the initial array with one less element, to get the same final number of elements in smoothed.
Step15: Just by making the code more elegant, we probably have improved its performance
Step17: Sucess! We also see that the big change came from removing the loop that was performing the convolution.
Of course, if our real purpose was to Gaussian filter the data, we could have done some research to find scipy.signal.gaussian which would have directly created our window.
We could also check which convolution is more efficient. For such a small data this really makes no sense. But if you have to work with n-dimensional large arrays, the convolutions can become slow. Only then it makes sense to spend time optimizing. Googling for that, you can find the Scipy Cookbook with valuable information on the subject. From that, we see that the apparently fastest convoution is ndimage.convolve1d. In our case, it reduces the CPU time by half, which is not bad. But I insist that one has to spend time with this when the actual time to be saved is macroscopic, not a few microseconds!
Step18: Of course, if we wanted to perform a gaussian filtering we could directly call ndimage.filters.gaussian_filter1d. Remark that the results are not exactly the same, because here we do not determine the window size. However they are approximately equal.
Step20: The take home message from this exercise is | Python Code:
def smoothListGaussian(list,degree=5):
list =[list[0]]*(degree-1) + list + [list[-1]]*degree
window=degree*2-1
weight=np.array([1.0]*window)
weightGauss=[]
for i in range(window):
i=i-degree+1
frac=i/float(window)
gauss=1/(np.exp((4*(frac))**2))
weightGauss.append(gauss)
weight=np.array(weightGauss)*weight
smoothed=[0.0]*(len(list)-window)
for i in range(len(smoothed)):
smoothed[i]=sum(np.array(list[i:i+window])*weight)/sum(weight)
return smoothed
Explanation: Numpy elegance and curve smoothing
I recently came across a blog where smoothing of data was discussed. The following function was used to smooth a data set with a Gaussian kernel. Read the blog to see what it is doing. Our previous implementation of data smoothing was using a tophat kernet with a window of 3 points. See here for more details.
End of explanation
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Generate a noisy signal to be filtered.
t = np.linspace(-1, 1, 201)
x = np.sin(2 * np.pi * t)
xn = x + np.random.normal(size=len(t)) * 0.08
# Make the plot.
plt.figure(figsize=(10,5))
plt.plot(t, xn, 'b', linewidth=1.75, alpha=0.75)
list_xn = list(xn)
original = smoothListGaussian(list_xn)
plt.plot(t, original, 'r');
plt.plot(t, x, 'g');
# Generate a noisy signal to be filtered.
t = np.linspace(-1, 1, 201)
x = (np.sin(2 * np.pi * 0.75 * t*(1-t) + 2.1) + 0.1*np.sin(2 * np.pi * 1.25 * t + 1) +
0.18*np.cos(2 * np.pi * 3.85 * t))
xn = x + np.random.randn(len(t)) * 0.08
# Make the plot.
plt.figure(figsize=(10,5))
plt.plot(t, xn, 'b', linewidth=1.75, alpha=0.75)
list_xn = list(xn)
original = smoothListGaussian(list_xn)
plt.plot(t, original, 'r');
plt.plot(t, x, 'g');
Explanation: This function is syntactically correct and it works. Let's test it with a data set. The same one used in the scipy cookbook (http://wiki.scipy.org/Cookbook/FiltFilt)
End of explanation
def smoothListGaussian2(myarray, degree=5):
Given a 1D array myarray, the code returns a Gaussian smoothed version of the array.
# Pad the array so that the final convolution uses the end values of myarray and returns an
# array of the same size
myarray = np.hstack([ [myarray[0]]*(degree-1),myarray,[myarray[-1]]*degree])
window=degree*2-1
# Build the weights filter
weight=np.array([1.0]*window)
weightGauss=[]
for i in range(window):
i=i-degree+1
frac=i/float(window)
gauss=np.exp(-(4*frac)**2)
weightGauss.append(gauss)
weight=np.array(weightGauss)*weight
# create the smoothed array with a convolution with the window
smoothed=np.array([0.0]*(len(myarray)-window))
for i in range(len(smoothed)):
smoothed[i]=sum(myarray[i:i+window]*weight)/sum(weight)
return smoothed
Explanation: Despite working, this code has several shortcomming. Our task here will be to improve the code for readability and efficiency. Because premature optimization is the source of all evil we will first focus on just making the code more clear and elegant. A first look at the code shows:
* It is not documented.
* It uses lists instead of numpy arrays in many places.
* It uses list which is a python built-in function to name a variable.
* It modifies the value of the loop variable i.
Let's create our new function to solve the first 3 issues.
End of explanation
np.all(original-smoothListGaussian2(xn)==0)
Explanation: Now we check if it works and it gives the same result:
End of explanation
def smoothListGaussian3(myarray, degree=5):
Given a 1D array myarray, the code returns a Gaussian smoothed version of the array.
# Pad the array so that the final convolution uses the end values of myarray and returns an
# array of the same size
myarray = np.pad(myarray, (degree-1,degree), mode='edge')
window=degree*2-1
# Build the weights filter
weight=np.ones(window)
weightGauss=[]
for i in range(window):
i=i-degree+1
frac=i/float(window)
gauss=np.exp(-(4*frac)**2)
weightGauss.append(gauss)
weight=np.array(weightGauss)*weight
# create the smoothed array with a convolution with the window
smoothed=np.zeros((len(myarray)-window))
for i in range(len(smoothed)):
smoothed[i]=sum(myarray[i:i+window]*weight)/sum(weight)
return smoothed
#Checking...
print("Still getting the same results...? ",np.all(original-smoothListGaussian3(xn)==0))
Explanation: For clarity, we can use Numpy functions ones, and zeros to create some of the arrays. pad will also be useful to pad the array instead of using hstack. Sometimes it's enough to know English to guess the existence of certain functions, such as pad.
End of explanation
def smoothListGaussian4(myarray, degree=5):
Given a 1D array myarray, the code returns a Gaussian smoothed version of the array.
# Pad the array so that the final convolution uses the end values of myarray and returns an
# array of the same size
myarray = np.pad(myarray, (degree-1,degree), mode='edge')
window=degree*2-1
# Build the weights filter
weight=np.ones(window)
for i in range(window):
frac=(i-degree+1)/float(window)
weight[i] = np.exp(-(4*frac)**2)
# create the smoothed array with a convolution with the window
smoothed=np.zeros((len(myarray)-window))
for i in range(len(smoothed)):
smoothed[i]=sum(myarray[i:i+window]*weight)/sum(weight)
return smoothed
#Checking...
print("Still getting the same results...? ",np.all(original-smoothListGaussian4(xn)==0))
Explanation: Modifying the value of the loop varible i is also rather ugly. We could directly define frac correctly.
Looking closer at this loop, we see that weightGauss and weight are, after all, the same thing. Why don't we directly fill the values of weight? We do not need weightGauss.
End of explanation
def smoothListGaussian5(myarray, degree=5):
Given a 1D array myarray, the code returns a Gaussian smoothed version of the array.
# Pad the array so that the final convolution uses the end values of myarray and returns an
# array of the same size
myarray = np.pad(myarray, (degree-1,degree), mode='edge')
window=degree*2-1
# Build the weights filter
weight=np.arange(-degree+1, degree)/window
weight = np.exp(-(16*weight**2))
weight /= weight.sum()
# create the smoothed array with a convolution with the window
smoothed=np.zeros((len(myarray)-window))
for i in range(len(smoothed)):
smoothed[i]=sum(myarray[i:i+window]*weight)/sum(weight)
return smoothed
#Checking...
print("Still getting the same results...? ",np.all(original-smoothListGaussian5(xn)==0))
Explanation: At this point, we see that the values of the weight array only depend on i, not on previous values. So that we can create them with an array funcion. The i values go from -degree+1 to degree. That is, if our degree is 3 we want a range from -2 to 2. Then, we will have degree*2-1 = 5 windows centered around 0. That's all we need to know to remove that loop.
End of explanation
plt.plot(original-smoothListGaussian4(xn));
Explanation: Ooops! But we're not getting the same results any more! Let's plot the difference to see what is going on.
End of explanation
print("Still getting the same results...? ",np.allclose(original, smoothListGaussian5(xn)))
Explanation: OK! So the results are not exactly the same but almost, due to floating point errors. We could plot the results each time, but we cal also use the numpy function allclose to check for correctness:
End of explanation
def smoothListGaussian6(myarray, degree=5):
Given a 1D array myarray, the code returns a Gaussian smoothed version of the array.
# Pad the array so that the final convolution uses the end values of myarray and returns an
# array of the same size
myarray = np.pad(myarray, (degree-1,degree-1), mode='edge')
window=degree*2-1
# Build the weights filter
weight=np.arange(-degree+1, degree)/window
weight = np.exp(-(16*weight**2))
weight /= weight.sum()
# create the smoothed array with a convolution with the window
smoothed = np.convolve(myarray, weight, mode='valid')
return smoothed
#Checking...
print("Still getting the same results...? ",np.allclose(original, smoothListGaussian6(xn)))
Explanation: Our last step of beautifying the code will be to check whether Numpy has a convolution function so that we do not need to do it in a for loop. Here, one cas to check the documentation to make sure the treatment of the boundaries is the same. We also needed to pad the initial array with one less element, to get the same final number of elements in smoothed.
End of explanation
%timeit smoothListGaussian(list_xn)
%timeit smoothListGaussian2(xn)
%timeit smoothListGaussian3(xn)
%timeit smoothListGaussian4(xn)
%timeit smoothListGaussian5(xn)
%timeit smoothListGaussian6(xn)
Explanation: Just by making the code more elegant, we probably have improved its performance:
End of explanation
from scipy import signal, ndimage
# Here we check that the Gaussian window in the signal module is producing the same window
# that we were manually doing
degree = 5
window=degree*2-1
# Build the weights filter
weight=np.arange(-degree+1, degree)/window
weight = np.exp(-(16*weight**2))
print(weight)
print(signal.gaussian(window, std=window/np.sqrt(32)))
def smoothListGaussian7(myarray, degree=5):
Given a 1D array myarray, the code returns a Gaussian smoothed version of the array.
# Pad the array so that the final convolution uses the end values of myarray and returns an
# array of the same size
window=degree*2-1
# Build the weights filter
weight = signal.gaussian(window, std=window/np.sqrt(32))
weight /= weight.sum()
# create the smoothed array with a convolution with the window
smoothed = ndimage.convolve1d(myarray, weight, mode='nearest')
return smoothed
#Checking...
print("Still getting the same results...? ",np.allclose(original, smoothListGaussian7(xn)))
%timeit smoothListGaussian7(xn)
Explanation: Sucess! We also see that the big change came from removing the loop that was performing the convolution.
Of course, if our real purpose was to Gaussian filter the data, we could have done some research to find scipy.signal.gaussian which would have directly created our window.
We could also check which convolution is more efficient. For such a small data this really makes no sense. But if you have to work with n-dimensional large arrays, the convolutions can become slow. Only then it makes sense to spend time optimizing. Googling for that, you can find the Scipy Cookbook with valuable information on the subject. From that, we see that the apparently fastest convoution is ndimage.convolve1d. In our case, it reduces the CPU time by half, which is not bad. But I insist that one has to spend time with this when the actual time to be saved is macroscopic, not a few microseconds!
End of explanation
%timeit ndimage.filters.gaussian_filter1d(xn, sigma=window/np.sqrt(32), mode='nearest')
plt.figure(figsize=(10,5))
plt.plot(t, original-ndimage.filters.gaussian_filter1d(xn, sigma=window/np.sqrt(32), mode='nearest'));
plt.title('original - gaussian_filter1d')
plt.figure(figsize=(10,5));
plt.plot(t, xn, 'b', linewidth=1.75, alpha=0.75);
plt.plot(t, original, 'r-', linewidth=1.75, alpha=0.75);
plt.plot(t, ndimage.filters.gaussian_filter1d(xn, sigma=window/np.sqrt(32), mode='nearest'),
'g--',linewidth=1.75, alpha=0.75);
Explanation: Of course, if we wanted to perform a gaussian filtering we could directly call ndimage.filters.gaussian_filter1d. Remark that the results are not exactly the same, because here we do not determine the window size. However they are approximately equal.
End of explanation
from numba import jit
@jit
def smoothListGaussian_numba(myarray):
Given a 1D array myarray, the code returns a Gaussian smoothed version of the array.
# Pad the array so that the final convolution uses the end values of myarray and returns an
# array of the same size
degree = 5
myarray = np.pad(myarray, (degree-1,degree), mode='edge')
window = degree*2-1
# Build the weights filter
weight=np.zeros(window)
for i in range(window):
frac=(i-degree+1)/window
weight[i]=np.exp(-(4*frac)**2)
weight /= weight.sum()
# create the smoothed array with a convolution with the window
smoothed=np.zeros(myarray.shape[0]-window)
for i in range(smoothed.shape[0]):
for j in range(window):
smoothed[i] += myarray[i+j]*weight[j]
return smoothed
print("Still getting the same results...? ",np.allclose(original, smoothListGaussian_numba(xn)))
%timeit smoothListGaussian_numba(xn)
np.random.normal?
Explanation: The take home message from this exercise is:
Writing clear code usually results in efficient code.
Numpy and scipy have lots of functions. Knowing some of them is pretty necessary, but not knowing all of them. Sometimes it is simpler to code something than to look for it.
However, if the task you want to perform is crucial (for speed, for reliability, etc) it is worth spending some time looking for it. If it is a common scientific operation you will probably find it implemented in scipy.
Using an Scipy/Numpy function is not only recommendable for the sake of speed. Those functions have been thoroughly tested by a large community and therefore are probably much more error-free that the one you code.
Therefore if you code things it is work contributing to Scipy, Numpy or the project you are using. You will not only help others, but others can help you spot bugs in your code, especially for situations that you did not consider (in your test cases), but that may appear during your research, when you rely on your code full-heartedly.
Appendix: numba
Later in this course we will learn about numba. For now, suffice it to say that numba compiles a python function using LLVM compiler infrastructure. In that case, python loops can be greatly accelerated and we do not need to try to substitute them with numpy array functions. It's use is very simple, just use the @jit decorator. In this notebook by Jake Vanderplas, you can find more comparions about performace results. Visit his blog for amazing stories about science and python.
End of explanation |
822 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
we can check the output of this cell to kind of see how much new structural data we acquire with more properties
Step1: so some keras version stuff. 1.0 uses keras.losses to store its loss functions. 2.0 uses objectives. we'll just have to be consistent
Step2: Here I've adapted the exact architecture used in the paper
Step3: encoded_input looks like a dummy layer here
Step4: create a separate autoencoder model that combines the encoder and decoder (I guess the former cells are for accessing those separate parts of the model)
Step5: we compile and fit | Python Code:
properties = ['density', 'cpt', 'viscosity', 'thermal_conductivity',
'melting_point']
for i in range(len(properties)):
props = properties[:i+1]
devmodel = salty.aggregate_data(props, merge='Union')
devmodel.Data['smiles_string'] = devmodel.Data['smiles-cation'] + "." + devmodel.Data['smiles-anion']
values = devmodel.Data['smiles_string'].drop_duplicates()
print(values.shape)
properties = ['density', 'cpt', 'viscosity', 'thermal_conductivity',
'melting_point']
props = properties
devmodel = salty.aggregate_data(props, merge='Union')
devmodel.Data['smiles_string'] = devmodel.Data['smiles-cation'] + "." + devmodel.Data['smiles-anion']
values = devmodel.Data['smiles_string'].drop_duplicates()
print(values.shape)
smile_max_length = values.map(len).max()
print(smile_max_length)
def pad_smiles(smiles_string, smile_max_length):
if len(smiles_string) < smile_max_length:
return smiles_string + " " * (smile_max_length - len(smiles_string))
padded_smiles = [pad_smiles(i, smile_max_length) for i in values if pad_smiles(i, smile_max_length)]
shuffle(padded_smiles)
def create_char_list(char_set, smile_series):
for smile in smile_series:
char_set.update(set(smile))
return char_set
char_set = set()
char_set = create_char_list(char_set, padded_smiles)
print(len(char_set))
char_set
char_list = list(char_set)
chars_in_dict = len(char_list)
char_to_index = dict((c, i) for i, c in enumerate(char_list))
index_to_char = dict((i, c) for i, c in enumerate(char_list))
char_to_index
X_train = np.zeros((len(padded_smiles), smile_max_length, chars_in_dict), dtype=np.float32)
X_train.shape
for i, smile in enumerate(padded_smiles):
for j, char in enumerate(smile):
X_train[i, j, char_to_index[char]] = 1
X_train, X_test = train_test_split(X_train, test_size=0.33, random_state=42)
X_train.shape
# need to build RNN to encode. some issues include what the 'embedded dimension' is (vector length of embedded sequence)
Explanation: we can check the output of this cell to kind of see how much new structural data we acquire with more properties:
End of explanation
from keras import backend as K
from keras.objectives import binary_crossentropy #objs or losses
from keras.models import Model
from keras.layers import Input, Dense, Lambda
from keras.layers.core import Dense, Activation, Flatten, RepeatVector
from keras.layers.wrappers import TimeDistributed
from keras.layers.recurrent import GRU
from keras.layers.convolutional import Convolution1D
Explanation: so some keras version stuff. 1.0 uses keras.losses to store its loss functions. 2.0 uses objectives. we'll just have to be consistent
End of explanation
def Encoder(x, latent_rep_size, smile_max_length, epsilon_std = 0.01):
h = Convolution1D(9, 9, activation = 'relu', name='conv_1')(x)
h = Convolution1D(9, 9, activation = 'relu', name='conv_2')(h)
h = Convolution1D(10, 11, activation = 'relu', name='conv_3')(h)
h = Flatten(name = 'flatten_1')(h)
h = Dense(435, activation = 'relu', name = 'dense_1')(h)
def sampling(args):
z_mean_, z_log_var_ = args
batch_size = K.shape(z_mean_)[0]
epsilon = K.random_normal(shape=(batch_size, latent_rep_size),
mean=0., stddev = epsilon_std)
return z_mean_ + K.exp(z_log_var_ / 2) * epsilon
z_mean = Dense(latent_rep_size, name='z_mean', activation = 'linear')(h)
z_log_var = Dense(latent_rep_size, name='z_log_var', activation = 'linear')(h)
def vae_loss(x, x_decoded_mean):
x = K.flatten(x)
x_decoded_mean = K.flatten(x_decoded_mean)
xent_loss = smile_max_length * binary_crossentropy(x, x_decoded_mean)
kl_loss = - 0.5 * K.mean(1 + z_log_var - K.square(z_mean) - \
K.exp(z_log_var), axis = -1)
return xent_loss + kl_loss
return (vae_loss, Lambda(sampling, output_shape=(latent_rep_size,),
name='lambda')([z_mean, z_log_var]))
def Decoder(z, latent_rep_size, smile_max_length, charset_length):
h = Dense(latent_rep_size, name='latent_input', activation = 'relu')(z)
h = RepeatVector(smile_max_length, name='repeat_vector')(h)
h = GRU(501, return_sequences = True, name='gru_1')(h)
h = GRU(501, return_sequences = True, name='gru_2')(h)
h = GRU(501, return_sequences = True, name='gru_3')(h)
return TimeDistributed(Dense(charset_length, activation='softmax'),
name='decoded_mean')(h)
x = Input(shape=(smile_max_length, len(char_set)))
_, z = Encoder(x, latent_rep_size=292, smile_max_length=smile_max_length)
encoder = Model(x, z)
Explanation: Here I've adapted the exact architecture used in the paper
End of explanation
encoded_input = Input(shape=(292,))
decoder = Model(encoded_input, Decoder(encoded_input, latent_rep_size=292,
smile_max_length=smile_max_length,
charset_length=len(char_set)))
Explanation: encoded_input looks like a dummy layer here:
End of explanation
x1 = Input(shape=(smile_max_length, len(char_set)), name='input_1')
vae_loss, z1 = Encoder(x1, latent_rep_size=292, smile_max_length=smile_max_length)
autoencoder = Model(x1, Decoder(z1, latent_rep_size=292,
smile_max_length=smile_max_length,
charset_length=len(char_set)))
Explanation: create a separate autoencoder model that combines the encoder and decoder (I guess the former cells are for accessing those separate parts of the model)
End of explanation
autoencoder.compile(optimizer='Adam', loss=vae_loss, metrics =['accuracy'])
autoencoder.fit(X_train, X_train, shuffle = True, validation_data=(X_test, X_test))
def sample(a, temperature=1.0):
# helper function to sample an index from a probability array
a = np.log(a) / temperature
a = np.exp(a) / np.sum(np.exp(a))
return np.argmax(np.random.multinomial(1, a, 1))
test_smi = values[0]
test_smi = pad_smiles(test_smi, smile_max_length)
Z = np.zeros((1, smile_max_length, len(char_list)), dtype=np.bool)
for t, char in enumerate(test_smi):
Z[0, t, char_to_index[char]] = 1
# autoencoder.
string = ""
for i in autoencoder.predict(Z):
for j in i:
index = sample(j)
string += index_to_char[index]
print("\n callback guess: " + string)
values[0]
X_train.shape
Explanation: we compile and fit
End of explanation |
823 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GA4GH 1000 Genome Sequence Annotation Example
This example illustrates how to access the sequence annotations for a given set of ....
Initialize Client
In this step we create a client object which will be used to communicate with the server. It is initialized using the URL
Step1: Search featuresets method
--- Description --- dataset_id obtained from 1kg_metadata_service notebook.
Step2: Get featureset by id request
By knowing the id of the featureset_id we can also obtain the set in a get request. Also note that in the following reuest we simply set feature_set_id to feature_sets.id which was obtained in the previous search request.
Step3: Search features method
-- Description ~~
Step4: Note | Python Code:
import ga4gh_client.client as client
c = client.HttpClient("http://1kgenomes.ga4gh.org")
Explanation: GA4GH 1000 Genome Sequence Annotation Example
This example illustrates how to access the sequence annotations for a given set of ....
Initialize Client
In this step we create a client object which will be used to communicate with the server. It is initialized using the URL
End of explanation
for feature_sets in c.search_feature_sets(dataset_id="WyIxa2dlbm9tZXMiXQ"):
print feature_sets
Explanation: Search featuresets method
--- Description --- dataset_id obtained from 1kg_metadata_service notebook.
End of explanation
feature_set = c.get_feature_set(feature_set_id=feature_sets.id)
print feature_set
Explanation: Get featureset by id request
By knowing the id of the featureset_id we can also obtain the set in a get request. Also note that in the following reuest we simply set feature_set_id to feature_sets.id which was obtained in the previous search request.
End of explanation
counter = 0
for features in c.search_features(feature_set_id=feature_set.id):
if counter > 5:
break
counter += 1
print"Id: {},".format(features.id)
print" Name: {},".format(features.name)
print" Gene Symbol: {},".format(features.gene_symbol)
print" Parent Id: {},".format(features.parent_id)
if features.child_ids:
for i in features.child_ids:
print" Child Ids: {}".format(i)
print" Feature Set Id: {},".format(features.feature_set_id)
print" Reference Name: {},".format(features.reference_name)
print" Start: {},\tEnd: {},".format(features.start, features.end)
print" Strand: {},".format(features.strand)
print" Feature Type Id: {},".format(features.feature_type.id)
print" Feature Type Term: {},".format(features.feature_type.term)
print" Feature Type Sorce Name: {},".format(features.feature_type.source_name)
print" Feature Type Source Version: {}\n".format(features.feature_type.source_version)
Explanation: Search features method
-- Description ~~
End of explanation
feature = c.get_feature(feature_id=features.id)
print"Id: {},".format(feature.id)
print" Name: {},".format(feature.name)
print" Gene Symbol: {},".format(feature.gene_symbol)
print" Parent Id: {},".format(feature.parent_id)
if feature.child_ids:
for i in feature.child_ids:
print" Child Ids: {}".format(i)
print" Feature Set Id: {},".format(feature.feature_set_id)
print" Reference Name: {},".format(feature.reference_name)
print" Start: {},\tEnd: {},".format(feature.start, feature.end)
print" Strand: {},".format(feature.strand)
print" Feature Type Id: {},".format(feature.feature_type.id)
print" Feature Type Term: {},".format(feature.feature_type.term)
print" Feature Type Sorce Name: {},".format(feature.feature_type.source_name)
print" Feature Type Source Version: {}\n".format(feature.feature_type.source_version)
for vals in feature.attributes.vals:
print"{}: {}".format(vals, feature.attributes.vals[vals].values[0].string_value)
Explanation: Note: Not all of the elements returned in the response are present in the example. All of the parameters will be shown in the get by id method.
End of explanation |
824 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The HRT for mixed data types (Python implementation)
In my last post I showed how the holdout random test (HRT) could be used to obtain valid p-values for any machine learning model by sampling from the conditional distribution of the design matrix. Like the permutation-type approaches used to assess variable importance for decision trees, this method sees whether a measure of performance accuracy declines when a column of the data has its values shuffled. However these ad-hoc permutation approaches lack statistical rigor and will not obtain a valid inferential assessment, even asymptotically, as the non-permuted columns of the data are not conditioned on. For example, if two features are correlated with the data, but only one has a statistical relationship with the response, then naive permutation approaches will often find the correlated noise column to be significant simply by it riding on the statistical coattails of the true variable. The HRT avoids this issue by fully conditioning on the data.
One simple way of learning the conditional distribution of the design matrix is to assume a multivariate Gaussian distribution but simply estimating the precision matrix. However when the columns of the data are not Gaussian or not continuous then this learned distribution will prove a poor estimate of the conditional relationship of the data. The goal is this post is two-fold. First, show how to fit a marginal regression model to each column of the data (regularized Gaussian and Binomial regressions are used). Second, a python implementation will be used to complement the R code used previously. While this post will use an un-tuned random forest classifier, any machine learning model can be used for the training set of the data.
Step1: Split a dataset into a tranining and a test folder
In the code blocks below we load a real and synthetic dataset to highlight the HRT at the bottom of the script.
Option 1
Step2: Note that the column types of each data need to be defined in the cn_type variable.
Step3: Option 2
Step4: Function support
The code block below provides a wrapper to implement the HRT algorithm for a binary outcome using a single training and test split. See my previous post for generalizations of this method for cross-validation. The function also requires a cn_type argument to specify whether the column is continuous or Bernoulli. The glm_l2 function implements an L2-regularized generalized regression model for Gaussian and Binomial data using an iteratively re-weighted least squares method. This can generalized for elastic-net regularization as well as different generalized linear model classes. The dgp_fun function takes a model with with glm_l2 and will generate a new vector of the data conditional on the rest of the design matrix.
Step5: Get the p-values for the different datasets
Now that the hrt_bin_fun has been defined, we can perform inference on the columns of the two datasets created above.
Step6: The results below show that the sbp, tobacco, ldl, adiposity, and age are statistically significant features for the South African Heart Dataset. As expected, the first two variables, var1, and var2 from the non-linear decision boundary dataset are important as these are the two variables which define the decision boundary with the rest of the variables being noise variables. | Python Code:
# import the necessary modules
import pandas as pd
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
import seaborn as sns
Explanation: The HRT for mixed data types (Python implementation)
In my last post I showed how the holdout random test (HRT) could be used to obtain valid p-values for any machine learning model by sampling from the conditional distribution of the design matrix. Like the permutation-type approaches used to assess variable importance for decision trees, this method sees whether a measure of performance accuracy declines when a column of the data has its values shuffled. However these ad-hoc permutation approaches lack statistical rigor and will not obtain a valid inferential assessment, even asymptotically, as the non-permuted columns of the data are not conditioned on. For example, if two features are correlated with the data, but only one has a statistical relationship with the response, then naive permutation approaches will often find the correlated noise column to be significant simply by it riding on the statistical coattails of the true variable. The HRT avoids this issue by fully conditioning on the data.
One simple way of learning the conditional distribution of the design matrix is to assume a multivariate Gaussian distribution but simply estimating the precision matrix. However when the columns of the data are not Gaussian or not continuous then this learned distribution will prove a poor estimate of the conditional relationship of the data. The goal is this post is two-fold. First, show how to fit a marginal regression model to each column of the data (regularized Gaussian and Binomial regressions are used). Second, a python implementation will be used to complement the R code used previously. While this post will use an un-tuned random forest classifier, any machine learning model can be used for the training set of the data.
End of explanation
link_data = "https://web.stanford.edu/~hastie/ElemStatLearn/datasets/SAheart.data"
dat_sah = pd.read_csv(link_data)
# Extract the binary response and then drop
y_sah = dat_sah['chd']
dat_sah.drop(columns=['row.names','chd'],inplace=True)
# one-hot encode famhist
dat_sah['famhist'] = pd.get_dummies(dat_sah['famhist'])['Present']
# Convert the X matrix to a numpy array
X_sah = np.array(dat_sah)
Explanation: Split a dataset into a tranining and a test folder
In the code blocks below we load a real and synthetic dataset to highlight the HRT at the bottom of the script.
Option 1: South African Heart Dataset
End of explanation
cn_type_sah = np.where(dat_sah.columns=='famhist','binomial','gaussian')
# Do a train/test split
np.random.seed(1234)
idx = np.arange(len(y_sah))
np.random.shuffle(idx)
idx_test = np.where((idx % 5) == 0)[0]
idx_train = np.where((idx % 5) != 0)[0]
X_train_sah = X_sah[idx_train]
X_test_sah = X_sah[idx_test]
y_train_sah = y_sah[idx_train]
y_test_sah = y_sah[idx_test]
Explanation: Note that the column types of each data need to be defined in the cn_type variable.
End of explanation
# ---- Random circle data ---- #
np.random.seed(1234)
n_circ = 1000
X_circ = np.random.randn(n_circ,5)
X_circ = X_circ + np.random.randn(n_circ,1)
y_circ = np.where(np.apply_along_axis(arr=X_circ[:,0:2],axis=1,func1d= lambda x: np.sqrt(np.sum(x**2)) ) > 1.2,1,0)
cn_type_circ = np.repeat('gaussian',X_circ.shape[1])
idx = np.arange(n_circ)
np.random.shuffle(idx)
idx_test = np.where((idx % 5) == 0)[0]
idx_train = np.where((idx % 5) != 0)[0]
X_train_circ = X_circ[idx_train]
X_test_circ = X_circ[idx_test]
y_train_circ = y_circ[idx_train]
y_test_circ = y_circ[idx_test]
sns.scatterplot(x='var1',y='var2',hue='y',
data=pd.DataFrame({'y':y_circ,'var1':X_circ[:,0],'var2':X_circ[:,1]}))
Explanation: Option 2: Non-linear decision boundary dataset
End of explanation
# ---- FUNCTION SUPPORT FOR SCRIPT ---- #
def hrt_bin_fun(X_train,y_train,X_test,y_test,cn_type):
# ---- INTERNAL FUNCTION SUPPORT ---- #
# Sigmoid function
def sigmoid(x):
return( 1/(1+np.exp(-x)) )
# Sigmoid weightin
def sigmoid_w(x):
return( sigmoid(x)*(1-sigmoid(x)) )
def glm_l2(resp,x,standardize,family='binomial',lam=0,add_int=True,tol=1e-4,max_iter=100):
y = np.array(resp.copy())
X = x.copy()
n = X.shape[0]
# Make sure all the response values are zeros or ones
check1 = (~np.all(np.isin(np.array(resp),[0,1]))) & (family=='binomial')
if check1:
print('Error! Response variable is not all binary'); #return()
# Make sure the family type is correct
check2 = ~pd.Series(family).isin(['gaussian','binomial'])[0]
if check2:
print('Error! Family must be either gaussian or binoimal')
# Normalize if requested
if standardize:
mu_X = X.mean(axis=0).reshape(1,X.shape[1])
std_X = X.std(axis=0).reshape(1,X.shape[1])
else:
mu_X = np.repeat(0,p).reshape(1,X.shape[1])
std_X = np.repeat(1,p).reshape(1,X.shape[1])
X = (X - mu_X)/std_X
# Add intercept
if add_int:
X = np.append(X,np.repeat(1,n).reshape(n,1),axis=1)
# Calculate dimensions
y = y.reshape(n,1)
p = X.shape[1]
# l2-regularization
Lambda = n * np.diag(np.repeat(lam,p))
bhat = np.repeat(0,X.shape[1])
if family=='binomial':
bb = np.log(np.mean(y)/(1-np.mean(y)))
else:
bb = np.mean(y)
if add_int:
bhat = np.append(bhat[1:p],bb).reshape(p,1)
if family=='binomial':
ii = 0
diff = 1
while( (ii < max_iter) & (diff > tol) ):
ii += 1
# Predicted probabilities
eta = X.dot(bhat)
phat = sigmoid(eta)
res = y - phat
what = phat*(1-phat)
# Adjusted response
z = eta + res/what
# Weighted-least squares
bhat_new = np.dot( np.linalg.inv( np.dot((X * what).T,X) + Lambda), np.dot((X * what).T, z) )
diff = np.mean((bhat_new - bhat)**2)
bhat = bhat_new.copy()
sig2 = 0
else:
bhat = np.dot( np.linalg.inv( np.dot(X.T,X) + Lambda ), np.dot(X.T, y) )
# Calculate the standard error of the residuals
res = y - np.dot(X,bhat)
sig2 = np.sum(res**2) / (n - (p - add_int))
# Separate the intercept
if add_int:
b0 = bhat[p-1][0]
bhat2 = bhat[0:(p-1)].copy() / std_X.T # Extract intercept
b0 = b0 - np.sum(bhat2 * mu_X.T)
else:
bhat2 = bhat.copy() / std_X.T
b0 = 0
# Create a dictionary to store the results
ret_dict = {'b0':b0, 'bvec':bhat2, 'family':family, 'sig2':sig2, 'n':n}
return ret_dict
# mdl=mdl_lst[4].copy(); x = tmp_X.copy()
# Function to generate data from a fitted model
def dgp_fun(mdl,x):
tmp_n = mdl['n']
tmp_family = mdl['family']
tmp_sig2 = mdl['sig2']
tmp_b0 = mdl['b0']
tmp_bvec = mdl['bvec']
# Fitted value
fitted = np.squeeze(np.dot(x, tmp_bvec) + tmp_b0)
if tmp_family=='gaussian':
# Generate some noise
noise = np.random.randn(tmp_n)*np.sqrt(tmp_sig2) + tmp_b0
y_ret = fitted + noise
else:
y_ret = np.random.binomial(n=1,p=sigmoid(fitted),size=tmp_n)
# Return
return(y_ret)
# Logistic loss function
def loss_binomial(y,yhat):
ll = -1*np.mean(y*np.log(yhat) + (1-y)*np.log(1-yhat))
return(ll)
# Loop through and fit a model to each column
mdl_lst = []
for cc in np.arange(len(cn_type)):
tmp_y = X_test[:,cc]
tmp_X = np.delete(X_test, cc, 1)
tmp_family = cn_type[cc]
mdl_lst.append(glm_l2(resp=tmp_y,x=tmp_X,family=tmp_family,lam=0,standardize=True))
# ---- FIT SOME MACHINE LEARNING MODEL HERE ---- #
# Fit random forest
clf = RandomForestClassifier(n_estimators=100, max_depth=2, random_state=0)
clf.fit(X_train, y_train)
# Baseline predicted probabilities and logistic loss
phat_baseline = clf.predict_proba(X_test)[:,1]
loss_baseline = loss_binomial(y=y_test,yhat=phat_baseline)
# ---- CALCULATE P-VALUES FOR EACH MODEL ---- #
pval_lst = []
nsim = 250
for cc in np.arange(len(cn_type)):
print('Variable %i of %i' % (cc+1, len(cn_type)))
mdl_cc = mdl_lst[cc]
X_test_not_cc = np.delete(X_test, cc, 1)
X_test_cc = X_test.copy()
loss_lst = []
for ii in range(nsim):
np.random.seed(ii)
xx_draw_test = dgp_fun(mdl=mdl_cc,x=X_test_not_cc)
X_test_cc[:,cc] = xx_draw_test
phat_ii = clf.predict_proba(X_test_cc)[:,1]
loss_ii = loss_binomial(y=y_test,yhat=phat_ii)
loss_lst.append(loss_ii)
pval_cc = np.mean(np.array(loss_lst) <= loss_baseline)
pval_lst.append(pval_cc)
# Return p-values
return(pval_lst)
Explanation: Function support
The code block below provides a wrapper to implement the HRT algorithm for a binary outcome using a single training and test split. See my previous post for generalizations of this method for cross-validation. The function also requires a cn_type argument to specify whether the column is continuous or Bernoulli. The glm_l2 function implements an L2-regularized generalized regression model for Gaussian and Binomial data using an iteratively re-weighted least squares method. This can generalized for elastic-net regularization as well as different generalized linear model classes. The dgp_fun function takes a model with with glm_l2 and will generate a new vector of the data conditional on the rest of the design matrix.
End of explanation
pval_circ = hrt_bin_fun(X_train=X_train_circ,y_train=y_train_circ,X_test=X_test_circ,y_test=y_test_circ,cn_type=cn_type_circ)
pval_sah = hrt_bin_fun(X_train=X_train_sah,y_train=y_train_sah,X_test=X_test_sah,y_test=y_test_sah,cn_type=cn_type_sah)
Explanation: Get the p-values for the different datasets
Now that the hrt_bin_fun has been defined, we can perform inference on the columns of the two datasets created above.
End of explanation
pd.concat([pd.DataFrame({'vars':dat_sah.columns, 'pval':pval_sah, 'dataset':'SAH'}),
pd.DataFrame({'vars':['var'+str(x) for x in np.arange(5)+1],'pval':pval_circ,'dataset':'NLP'})])
Explanation: The results below show that the sbp, tobacco, ldl, adiposity, and age are statistically significant features for the South African Heart Dataset. As expected, the first two variables, var1, and var2 from the non-linear decision boundary dataset are important as these are the two variables which define the decision boundary with the rest of the variables being noise variables.
End of explanation |
825 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Stationarity and detrending (ADF/KPSS)
Stationarity means that the statistical properties of a time series i.e. mean, variance and covariance do not change over time. Many statistical models require the series to be stationary to make effective and precise predictions.
Two statistical tests would be used to check the stationarity of a time series – Augmented Dickey Fuller (“ADF”) test and Kwiatkowski-Phillips-Schmidt-Shin (“KPSS”) test. A method to convert a non-stationary time series into stationary series shall also be used.
This first cell imports standard packages and sets plots to appear inline.
Step1: Sunspots dataset is used. It contains yearly (1700-2008) data on sunspots from the National Geophysical Data Center.
Step2: Some preprocessing is carried out on the data. The "YEAR" column is used in creating index.
Step3: The data is plotted now.
Step4: ADF test
ADF test is used to determine the presence of unit root in the series, and hence helps in understand if the series is stationary or not. The null and alternate hypothesis of this test are
Step5: KPSS test
KPSS is another test for checking the stationarity of a time series. The null and alternate hypothesis for the KPSS test are opposite that of the ADF test.
Null Hypothesis
Step6: The ADF tests gives the following results – test statistic, p value and the critical value at 1%, 5% , and 10% confidence intervals.
ADF test is now applied on the data.
Step7: Based upon the significance level of 0.05 and the p-value of ADF test, the null hypothesis can not be rejected. Hence, the series is non-stationary.
The KPSS tests gives the following results – test statistic, p value and the critical value at 1%, 5% , and 10% confidence intervals.
KPSS test is now applied on the data.
Step8: Based upon the significance level of 0.05 and the p-value of the KPSS test, the null hypothesis can not be rejected. Hence, the series is stationary as per the KPSS test.
It is always better to apply both the tests, so that it can be ensured that the series is truly stationary. Possible outcomes of applying these stationary tests are as follows
Step9: ADF test is now applied on these detrended values and stationarity is checked.
Step10: Based upon the p-value of ADF test, there is evidence for rejecting the null hypothesis in favor of the alternative. Hence, the series is strict stationary now.
KPSS test is now applied on these detrended values and stationarity is checked. | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
Explanation: Stationarity and detrending (ADF/KPSS)
Stationarity means that the statistical properties of a time series i.e. mean, variance and covariance do not change over time. Many statistical models require the series to be stationary to make effective and precise predictions.
Two statistical tests would be used to check the stationarity of a time series – Augmented Dickey Fuller (“ADF”) test and Kwiatkowski-Phillips-Schmidt-Shin (“KPSS”) test. A method to convert a non-stationary time series into stationary series shall also be used.
This first cell imports standard packages and sets plots to appear inline.
End of explanation
sunspots = sm.datasets.sunspots.load_pandas().data
Explanation: Sunspots dataset is used. It contains yearly (1700-2008) data on sunspots from the National Geophysical Data Center.
End of explanation
sunspots.index = pd.Index(sm.tsa.datetools.dates_from_range('1700', '2008'))
del sunspots["YEAR"]
Explanation: Some preprocessing is carried out on the data. The "YEAR" column is used in creating index.
End of explanation
sunspots.plot(figsize=(12,8))
Explanation: The data is plotted now.
End of explanation
from statsmodels.tsa.stattools import adfuller
def adf_test(timeseries):
print ('Results of Dickey-Fuller Test:')
dftest = adfuller(timeseries, autolag='AIC')
dfoutput = pd.Series(dftest[0:4], index=['Test Statistic','p-value','#Lags Used','Number of Observations Used'])
for key,value in dftest[4].items():
dfoutput['Critical Value (%s)'%key] = value
print (dfoutput)
Explanation: ADF test
ADF test is used to determine the presence of unit root in the series, and hence helps in understand if the series is stationary or not. The null and alternate hypothesis of this test are:
Null Hypothesis: The series has a unit root.
Alternate Hypothesis: The series has no unit root.
If the null hypothesis in failed to be rejected, this test may provide evidence that the series is non-stationary.
A function is created to carry out the ADF test on a time series.
End of explanation
from statsmodels.tsa.stattools import kpss
def kpss_test(timeseries):
print ('Results of KPSS Test:')
kpsstest = kpss(timeseries, regression='c', nlags="auto")
kpss_output = pd.Series(kpsstest[0:3], index=['Test Statistic','p-value','Lags Used'])
for key,value in kpsstest[3].items():
kpss_output['Critical Value (%s)'%key] = value
print (kpss_output)
Explanation: KPSS test
KPSS is another test for checking the stationarity of a time series. The null and alternate hypothesis for the KPSS test are opposite that of the ADF test.
Null Hypothesis: The process is trend stationary.
Alternate Hypothesis: The series has a unit root (series is not stationary).
A function is created to carry out the KPSS test on a time series.
End of explanation
adf_test(sunspots['SUNACTIVITY'])
Explanation: The ADF tests gives the following results – test statistic, p value and the critical value at 1%, 5% , and 10% confidence intervals.
ADF test is now applied on the data.
End of explanation
kpss_test(sunspots['SUNACTIVITY'])
Explanation: Based upon the significance level of 0.05 and the p-value of ADF test, the null hypothesis can not be rejected. Hence, the series is non-stationary.
The KPSS tests gives the following results – test statistic, p value and the critical value at 1%, 5% , and 10% confidence intervals.
KPSS test is now applied on the data.
End of explanation
sunspots['SUNACTIVITY_diff'] = sunspots['SUNACTIVITY'] - sunspots['SUNACTIVITY'].shift(1)
sunspots['SUNACTIVITY_diff'].dropna().plot(figsize=(12,8))
Explanation: Based upon the significance level of 0.05 and the p-value of the KPSS test, the null hypothesis can not be rejected. Hence, the series is stationary as per the KPSS test.
It is always better to apply both the tests, so that it can be ensured that the series is truly stationary. Possible outcomes of applying these stationary tests are as follows:
Case 1: Both tests conclude that the series is not stationary - The series is not stationary
Case 2: Both tests conclude that the series is stationary - The series is stationary
Case 3: KPSS indicates stationarity and ADF indicates non-stationarity - The series is trend stationary. Trend needs to be removed to make series strict stationary. The detrended series is checked for stationarity.
Case 4: KPSS indicates non-stationarity and ADF indicates stationarity - The series is difference stationary. Differencing is to be used to make series stationary. The differenced series is checked for stationarity.
Here, due to the difference in the results from ADF test and KPSS test, it can be inferred that the series is trend stationary and not strict stationary. The series can be detrended by differencing or by model fitting.
Detrending by Differencing
It is one of the simplest methods for detrending a time series. A new series is constructed where the value at the current time step is calculated as the difference between the original observation and the observation at the previous time step.
Differencing is applied on the data and the result is plotted.
End of explanation
adf_test(sunspots['SUNACTIVITY_diff'].dropna())
Explanation: ADF test is now applied on these detrended values and stationarity is checked.
End of explanation
kpss_test(sunspots['SUNACTIVITY_diff'].dropna())
Explanation: Based upon the p-value of ADF test, there is evidence for rejecting the null hypothesis in favor of the alternative. Hence, the series is strict stationary now.
KPSS test is now applied on these detrended values and stationarity is checked.
End of explanation |
826 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Index - Back
Asynchronous Widgets
This notebook covers two scenarios where we'd like widget-related code to run without blocking the kernel from acting on other execution requests
Step1: We define a new function that returns a future for when a widget attribute changes.
Step2: And we finally get to our function where we will wait for widget changes. We'll do 10 units of work, and pause after each one until we observe a change in the widget. Notice that the widget's value is available to us, since it is what the wait_for_change future has as a result.
Run this function, and change the slider 10 times.
Step4: Generator approach
If you can't take advantage of the async/await syntax, or you don't want to modify the event loop, you can also do this with generator functions.
First, we define a decorator which hooks a generator function up to widget change events.
Step5: Then we set up our generator.
Step6: Modifications
The above two approaches both waited on widget change events, but can be modified to wait for other things, such as button event messages (as in a "Continue" button), etc.
Updating a widget in the background
Sometimes you'd like to update a widget in the background, allowing the kernel to also process other execute requests. We can do this with threads. In the example below, the progress bar will update in the background and will allow the main kernel to do other computations. | Python Code:
%gui asyncio
Explanation: Index - Back
Asynchronous Widgets
This notebook covers two scenarios where we'd like widget-related code to run without blocking the kernel from acting on other execution requests:
Pausing code to wait for user interaction with a widget in the frontend
Updating a widget in the background
Waiting for user interaction
You may want to pause your Python code to wait for some user interaction with a widget from the frontend. Typically this would be hard to do since running Python code blocks any widget messages from the frontend until the Python code is done.
We'll do this in two approaches: using the event loop integration, and using plain generator functions.
Event loop integration
If we take advantage of the event loop integration IPython offers, we can have a nice solution using the async/await syntax in Python 3.
First we invoke our asyncio event loop. This requires ipykernel 4.7 or later.
End of explanation
import asyncio
def wait_for_change(widget, value):
future = asyncio.Future()
def getvalue(change):
# make the new value available
future.set_result(change.new)
widget.unobserve(getvalue, value)
widget.observe(getvalue, value)
return future
Explanation: We define a new function that returns a future for when a widget attribute changes.
End of explanation
from ipywidgets import IntSlider
slider = IntSlider()
async def f():
for i in range(10):
print('did work %s'%i)
x = await wait_for_change(slider, 'value')
print('async function continued with value %s'%x)
asyncio.ensure_future(f())
slider
Explanation: And we finally get to our function where we will wait for widget changes. We'll do 10 units of work, and pause after each one until we observe a change in the widget. Notice that the widget's value is available to us, since it is what the wait_for_change future has as a result.
Run this function, and change the slider 10 times.
End of explanation
from functools import wraps
def yield_for_change(widget, attribute):
Pause a generator to wait for a widget change event.
This is a decorator for a generator function which pauses the generator on yield
until the given widget attribute changes. The new value of the attribute is
sent to the generator and is the value of the yield.
def f(iterator):
@wraps(iterator)
def inner():
i = iterator()
def next_i(change):
try:
i.send(change.new)
except StopIteration as e:
widget.unobserve(next_i, attribute)
widget.observe(next_i, attribute)
# start the generator
next(i)
return inner
return f
Explanation: Generator approach
If you can't take advantage of the async/await syntax, or you don't want to modify the event loop, you can also do this with generator functions.
First, we define a decorator which hooks a generator function up to widget change events.
End of explanation
from ipywidgets import IntSlider, VBox, HTML
slider2=IntSlider()
@yield_for_change(slider2, 'value')
def f():
for i in range(10):
print('did work %s'%i)
x = yield
print('generator function continued with value %s'%x)
f()
slider2
Explanation: Then we set up our generator.
End of explanation
import threading
from IPython.display import display
import ipywidgets as widgets
import time
progress = widgets.FloatProgress(value=0.0, min=0.0, max=1.0)
def work(progress):
total = 100
for i in range(total):
time.sleep(0.2)
progress.value = float(i+1)/total
thread = threading.Thread(target=work, args=(progress,))
display(progress)
thread.start()
Explanation: Modifications
The above two approaches both waited on widget change events, but can be modified to wait for other things, such as button event messages (as in a "Continue" button), etc.
Updating a widget in the background
Sometimes you'd like to update a widget in the background, allowing the kernel to also process other execute requests. We can do this with threads. In the example below, the progress bar will update in the background and will allow the main kernel to do other computations.
End of explanation |
827 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Initialize set up
Amplifyer is fet -15V and = +85V
Calibrate strain gauge to zero with the kinesis software
Pull fiber back
Set nicard to -3.75
Step1: Move close with fiber | Python Code:
cavitylogic._ni.cavity_set_voltage(0.0)
cavitylogic._current_filepath = r'C:\BittorrentSyncDrive\Personal - Rasmus\Rasmus notes\Measurements\171001_position15_2'
cavitylogic.last_sweep = None
cavitylogic.get_nth_full_sweep(sweep_number=1, save=True)
Explanation: Initialize set up
Amplifyer is fet -15V and = +85V
Calibrate strain gauge to zero with the kinesis software
Pull fiber back
Set nicard to -3.75
End of explanation
cavitylogic.ramp_popt = cavitylogic._fit_ramp(xdata=cavitylogic.time_trim[::9], ydata=cavitylogic.volts_trim[cavitylogic.ramp_channel,::9])
Modes = cavitylogic._ni.sweep_function(cavitylogic.RampUp_time[cavitylogic.first_corrected_resonances],*cavitylogic.ramp_popt)
cavitylogic.current_mode_number= len(cavitylogic.first_corrected_resonances)-2
len(cavitylogic.first_corrected_resonances)
cavitylogic.linewidth_measurement(Modes,target_mode = cavitylogic.current_mode_number, repeat=10, freq=40)
high_mode=len(cavitylogic.first_corrected_resonances)-2
low_mode=0
for i in range(15):
cavitylogic.current_mode_number -=1
ret_val = cavitylogic.get_nth_full_sweep(sweep_number=2+i)
target_mode = cavitylogic.get_target_mode(cavitylogic.current_resonances, low_mode=low_mode, high_mode=high_mode, plot=True)
if target_mode == None:
print('Moved more that 5 modes')
cavitylogic.ramp_popt = cavitylogic._fit_ramp(xdata=cavitylogic.time_trim[::9], ydata=cavitylogic.volts_trim[cavitylogic.ramp_channel,::9])
Modes = cavitylogic._ni.sweep_function(cavitylogic.RampUp_time[cavitylogic.current_resonances],*cavitylogic.ramp_popt)
cavitylogic.linewidth_measurement(Modes,target_mode = target_mode, repeat=10, freq=40)
cavitylogic.current_mode_number
target_mode
cavitylogic.mode_shift_list
Explanation: Move close with fiber
End of explanation |
828 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Train tensorflow or keras model on GCP or Kubeflow from Notebooks
This notebook introduces you to using Kubeflow Fairing to train the model to Kubeflow on Google Kubernetes Engine (GKE), and Google Cloud AI Platform training. This notebook demonstrate how to
Step1: Define the model logic
Step2: Train an Keras model in a notebook
Step3: Spicify a image registry that will hold the image built by fairing
Step4: Deploy the training job to kubeflow cluster
Step5: Deploy distributed training job to kubeflow cluster
Step6: Deploy the training job as CMLE training job
Doesn’t support CMLE distributed training
Step7: Inspect training process with tensorboard
Step8: Deploy the trained model to Kubeflow for predictions | Python Code:
import os
import logging
import tensorflow as tf
import fairing
import numpy as np
from datetime import datetime
from fairing.cloud import gcp
# Setting up google container repositories (GCR) for storing output containers
# You can use any docker container registry istead of GCR
# For local notebook, GCP_PROJECT should be set explicitly
GCP_PROJECT = fairing.cloud.gcp.guess_project_name()
GCP_Bucket = os.environ['GCP_BUCKET'] # e.g., 'gs://kubeflow-demo-g/'
# This is for local notebook instead of that in kubeflow cluster
# os.environ['GOOGLE_APPLICATION_CREDENTIALS']=
Explanation: Train tensorflow or keras model on GCP or Kubeflow from Notebooks
This notebook introduces you to using Kubeflow Fairing to train the model to Kubeflow on Google Kubernetes Engine (GKE), and Google Cloud AI Platform training. This notebook demonstrate how to:
Train an Keras model in a local notebook,
Use Kubeflow Fairing to train an Keras model remotely on Kubeflow cluster,
Use Kubeflow Fairing to train an Keras model remotely on AI Platform training,
Use Kubeflow Fairing to deploy a trained model to Kubeflow, and Call the deployed endpoint for predictions.
You need Python 3.6 to use Kubeflow Fairing.
Setups
Pre-conditions
Deployed a kubeflow cluster through https://deploy.kubeflow.cloud/
Have the following environment variable ready:
PROJECT_ID # project host the kubeflow cluster or for running AI platform training
DEPLOYMENT_NAME # kubeflow deployment name, the same the cluster name after delpoyed
GCP_BUCKET # google cloud storage bucket
Create service account
bash
export SA_NAME = [service account name]
gcloud iam service-accounts create ${SA_NAME}
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
--member serviceAccount:${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com \
--role 'roles/editor'
gcloud iam service-accounts keys create ~/key.json \
--iam-account ${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com
Authorize for Source Repository
bash
gcloud auth configure-docker
Update local kubeconfig (for submiting job to kubeflow cluster)
bash
export CLUSTER_NAME=${DEPLOYMENT_NAME} # this is the deployment name or the kubenete cluster name
export ZONE=us-central1-c
gcloud container clusters get-credentials ${CLUSTER_NAME} --region ${ZONE}
Set the environmental variable: GOOGLE_APPLICATION_CREDENTIALS
bash
export GOOGLE_APPLICATION_CREDENTIALS = ....
python
os.environ['GOOGLE_APPLICATION_CREDENTIALS']=...
Install the lastest version of fairing
python
pip install git+https://github.com/kubeflow/fairing@master
Please not that the above configuration is required for notebook service running outside Kubeflow environment. And the examples demonstrated in the notebook is fully tested on notebook service outside Kubeflow cluster also.
The environemt variables, e.g. service account, projects and etc, should have been pre-configured while setting up the cluster.
End of explanation
def gcs_copy(src_path, dst_path):
import subprocess
print(subprocess.run(['gsutil', 'cp', src_path, dst_path], stdout=subprocess.PIPE).stdout[:-1].decode('utf-8'))
def gcs_download(src_path, file_name):
import subprocess
print(subprocess.run(['gsutil', 'cp', src_path, file_name], stdout=subprocess.PIPE).stdout[:-1].decode('utf-8'))
class TensorflowModel(object):
def __init__(self):
self.model_file = "mnist_model.h5"
self.model = None
def build(self):
self.model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation=tf.nn.relu),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
self.model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
print(self.model.summary())
def save_model(self):
self.model.save(self.model_file)
gcs_copy(self.model_file, GCP_Bucket + self.model_file)
def train(self):
self.build()
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
callbacks = [
# Interrupt training if `val_loss` stops improving for over 2 epochs
tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
# Write TensorBoard logs to `./logs` directory
tf.keras.callbacks.TensorBoard(log_dir=GCP_Bucket + 'logs/'
+ datetime.now().date().__str__())
]
self.model.fit(x_train, y_train, batch_size=32, epochs=5, callbacks=callbacks,
validation_data=(x_test, y_test))
self.save_model()
def predict(self, X):
if not self.model:
self.model = tf.keras.models.load_model(self.model_file)
# Do any preprocessing
prediction = self.model.predict(data=X)
Explanation: Define the model logic
End of explanation
TensorflowModel().train()
Explanation: Train an Keras model in a notebook
End of explanation
# In this demo, I use gsutil, therefore i compile a special image to install GoogleCloudSDK as based image
base_image = 'gcr.io/{}/fairing-predict-example:latest'.format(GCP_PROJECT)
!docker build --build-arg PY_VERSION=3.6.4 . -t {base_image}
!docker push {base_image}
GCP_PROJECT = fairing.cloud.gcp.guess_project_name()
BASE_IMAGE = 'gcr.io/{}/fairing-predict-example:latest'.format(GCP_PROJECT)
DOCKER_REGISTRY = 'gcr.io/{}/fairing-job-tf'.format(GCP_PROJECT)
Explanation: Spicify a image registry that will hold the image built by fairing
End of explanation
from fairing import TrainJob
from fairing.backends import GKEBackend
train_job = TrainJob(TensorflowModel, BASE_IMAGE, input_files=["requirements.txt"],
docker_registry=DOCKER_REGISTRY, backend=GKEBackend())
train_job.submit()
Explanation: Deploy the training job to kubeflow cluster
End of explanation
fairing.config.set_builder(name='docker', registry=DOCKER_REGISTRY,
base_image=BASE_IMAGE, push=True)
fairing.config.set_deployer(name='tfjob', worker_count=1, ps_count=1)
run_fn = fairing.config.fn(TensorflowModel)
run_fn()
Explanation: Deploy distributed training job to kubeflow cluster
End of explanation
from fairing import TrainJob
from fairing.backends import GCPManagedBackend
train_job = TrainJob(TensorflowModel, BASE_IMAGE, input_files=["requirements.txt"],
docker_registry=DOCKER_REGISTRY, backend=GCPManagedBackend())
train_job.submit()
Explanation: Deploy the training job as CMLE training job
Doesn’t support CMLE distributed training
End of explanation
# ! tensorboard --logdir=gs://kubeflow-demo-g/logs --host=localhost --port=8777
Explanation: Inspect training process with tensorboard
End of explanation
from fairing import PredictionEndpoint
from fairing.backends import KubeflowGKEBackend
# The trained_ames_model.joblib is exported during the above local training
endpoint = PredictionEndpoint(TensorflowModel, BASE_IMAGE, input_files=['mnist_model.h5', "requirements.txt"],
docker_registry=DOCKER_REGISTRY, backend=KubeflowGKEBackend())
endpoint.create()
endpoint.delete()
Explanation: Deploy the trained model to Kubeflow for predictions
End of explanation |
829 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Graphs
Step1: Variables
Step2: lazy evaluation
Step3: This is what happens when we operate on two variables that are from different graphs
Step4: Variable initialization
all tf.Variables need to be initalized
tf.constant need NOT to be initialized
Initialization options
Manual initialization
~~~~
x1.initializer.run()
~~~~
~~~~
sess.run(x.initializer)
~~~~
Automatic initializer with tf.global_variables_initializer()
~~~~
init = tf.global_variables_initializer()
with tf.Session() as sess
Step5: This is what happens if you forget to initialize the variable
Step6: Using tf.global_variables_initializer()
Step7: Life cycle of a node value
Step8: A more efficient evaluation of the TensorFlow Graph | Python Code:
tf.get_default_graph()
graph = tf.Graph()
graph
tf.get_default_graph()
Explanation: Graphs
End of explanation
x1 = tf.Variable(3.0, name='x')
y1 = tf.Variable(4.0)
x1, y1
x1.name, y1.name
x1.graph is tf.get_default_graph()
y1.graph is tf.get_default_graph()
with graph.as_default():
x2 = tf.Variable(4.1, name='x2')
x2.graph is tf.get_default_graph()
x2.graph is graph
f1 = 2*x1 + y1
Explanation: Variables
End of explanation
f1
Explanation: lazy evaluation
End of explanation
f2 = 2*x2 + y1
Explanation: This is what happens when we operate on two variables that are from different graphs
End of explanation
with tf.Session() as sess:
x1.initializer.run()
y1.initializer.run()
f1_value = f1.eval()
print("f1: {}".format(f1_value))
Explanation: Variable initialization
all tf.Variables need to be initalized
tf.constant need NOT to be initialized
Initialization options
Manual initialization
~~~~
x1.initializer.run()
~~~~
~~~~
sess.run(x.initializer)
~~~~
Automatic initializer with tf.global_variables_initializer()
~~~~
init = tf.global_variables_initializer()
with tf.Session() as sess:
init.run()
~~~~
End of explanation
with tf.Session() as sess:
x1.initializer.run()
f1_value = f1.eval()
print("f1: {}".format(f1_value))
Explanation: This is what happens if you forget to initialize the variable
End of explanation
init = tf.global_variables_initializer()
with tf.Session() as sess:
init.run()
f1_value_with_global_variables_initializer = f1.eval()
print("f1_value_with_global_variables_initializer: {}".format(f1_value_with_global_variables_initializer))
Explanation: Using tf.global_variables_initializer()
End of explanation
w = tf.constant(3)
x = w + 2
y = x + 5
z = x * 3
x
with tf.Session() as sess:
print(y.eval())
print(z.eval())
x
Explanation: Life cycle of a node value
End of explanation
with tf.Session() as sess:
y_val, z_val = sess.run([y,z])
print(y_val)
print(z_val)
Explanation: A more efficient evaluation of the TensorFlow Graph
End of explanation |
830 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced Python Objects Test
Advanced Numbers
Problem 1
Step1: Problem 2
Step2: Advanced Strings
Problem 3
Step3: Problem 4
Step4: Advanced Sets
Problem 5
Step5: Problem 6
Step6: Advanced Dictionaries
Problem 7
Step7: Advanced Lists
Problem 8
Step8: Problem 9 | Python Code:
print hex(1024)
Explanation: Advanced Python Objects Test
Advanced Numbers
Problem 1: Convert 1024 to binary and hexadecimal representation:
End of explanation
print round(5.23222,2)
Explanation: Problem 2: Round 5.23222 to two decimal places
End of explanation
s = 'hello how are you Mary, are you feeling okay?'
retVal = 1
for word in s.split():
print word
for item in word:
# print item
if not item.islower():
# print item
print 'The string has Uppercase characters'
retVal = 0
break
print retVal
s.islower()
Explanation: Advanced Strings
Problem 3: Check if every letter in the string s is lower case
End of explanation
s = 'twywywtwywbwhsjhwuwshshwuwwwjdjdid'
s.count('w')
Explanation: Problem 4: How many times does the letter 'w' show up in the string below?
End of explanation
set1 = {2,3,1,5,6,8}
set2 = {3,1,7,5,6,8}
set1.difference(set2)
Explanation: Advanced Sets
Problem 5: Find the elements in set1 that are not in set2:
End of explanation
set1.intersection(set2)
Explanation: Problem 6: Find all elements that are in either set:
End of explanation
{ val:val**3 for val in xrange(0,5)}
Explanation: Advanced Dictionaries
Problem 7: Create this dictionary:
{0: 0, 1: 1, 2: 8, 3: 27, 4: 64}
using dictionary comprehension.
End of explanation
l = [1,2,3,4]
l[::-1]
Explanation: Advanced Lists
Problem 8: Reverse the list below:
End of explanation
l = [3,4,2,5,1]
sorted(l)
Explanation: Problem 9: Sort the list below
End of explanation |
831 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Данные
Зайдите на https
Step1: Предобработка
<div class="panel panel-warning">
<div class="panel-heading">
<h3 class="panel-title">Обратите внимание</h3>
</div>
</div>
Предобработка - опциональный блок, и у себя подготовить данные вы можете полностью по-своему.
Единственное замечание
Step2: Визуализация
Step3: Визуализируем качество воды в разных регионах
Step4: Визуализируем геоданные о текущем статусе водяных насосов
Step5: Модели
Step6: <div class="panel panel-warning">
<div class="panel-heading">
<h3 class="panel-title">Обратите внимание</h3>
</div>
</div>
Вот эта функция ниже - опять мои штуки-дрюки, и можно кодировать данные по-своему.
Step7: <div class="panel panel-info" style="margin
Step8: <div class="panel panel-info" style="margin
Step9: Опробуем catboost
Step10: Опробуем H2O | Python Code:
train_X, train_y = pd.read_csv( # путь к вашему файлу train.csv
'data/WaterTable/train.csv'
), pd.read_csv( # путь к вашему файлу trainLabels.csv
'data/WaterTable/trainLabels.csv'
)
df = pd.merge(train_X, train_y, how='left')
df_test = pd.read_csv( # путь к вашему файлу test.csv
'data/WaterTable/test.csv'
)
df.head()
Explanation: Данные
Зайдите на https://www.drivendata.org/ и зарегистрируйтесь. Для сегодняшней домашки будем данные брать именно отсюда.
Нас интересует конкурс https://www.drivendata.org/competitions/7/pump-it-up-data-mining-the-water-table/page/23/ .
В нем представлены данные, собранные Taarifa и Танзанийским Министерством Воды и Ирригации.
Постановка задачи следующая:
На территории Танзании установлено множество водяных насосов, которые спасают местное население от жажды. В зависимости от того, кем и когда установлен насос, а также зная, как им распоряжаются, можно попытаться предположить, какие из них функционируют, какие нуждаются в ремонте и какие не работают вовсе.
Этим мы и займемся, а заодно и прокачаемся в подборе гиперпараметров алгоритмов.
End of explanation
def reduce_factor_levels(df, column_name, limit=None, top=None, name=None):
assert(limit is not None or top is not None), 'Specify limit ot top'
if top is None:
top = df[column_name].value_counts()[:limit].index
if name is None:
name = '%s_OTHER' % column_name
df.loc[~df[column_name].isin(top), column_name] = name
return top
top = reduce_factor_levels(df, 'funder', 10)
reduce_factor_levels(df_test, 'funder', top=top);
top = reduce_factor_levels(df, 'installer', 10)
reduce_factor_levels(df_test, 'installer', top=top);
#drop = ['wpt_name', 'num_private', 'subvillage', 'region_code', 'district_code', 'lga', 'ward', 'recorded_by', 'scheme_name']
drop = ['wpt_name', 'num_private', 'district_code', 'region_code', 'subvillage'] #
df.drop(drop, axis=1, inplace=True)
df_test.drop(drop, axis=1, inplace=True)
df.loc[df.scheme_management == 'None', 'scheme_management'] = ''
df.loc[df.scheme_management.isnull(), 'scheme_management'] = ''
df_test.loc[df_test.scheme_management.isnull(), 'scheme_management'] = ''
df['construction_date_known'] = (df.construction_year > 0).astype(np.int32)
df_test['construction_date_known'] = (df_test.construction_year > 0).astype(np.int32)
min_year = df[df.construction_year > 0].construction_year.min() // 10 - 1
df['construction_decade'] = df.construction_year // 10 - min_year
df_test['construction_decade'] = df_test.construction_year // 10 - min_year
df.loc[df.construction_decade < 0, 'construction_decade'] = 0
df_test.loc[df_test.construction_decade < 0, 'construction_decade'] = 0
top = reduce_factor_levels(df, 'construction_year', 20)
reduce_factor_levels(df_test, 'construction_year', top=top);
df.loc[df.extraction_type == 'other - mkulima/shinyanga', 'extraction_type'] = 'other'
heights = np.arange(-1, df.gps_height.max()+500, 500)
height_labels = list(range(len(heights)-1))
df['gps_height_rounded'] = pd.cut(df.gps_height, bins=heights, labels=height_labels)
df_test['gps_height_rounded'] = pd.cut(df_test.gps_height, bins=heights, labels=height_labels)
#df.drop(['gps_height'], axis=1, inplace=True)
#df_test.drop(['gps_height'], axis=1, inplace=True)
#pops = np.arange(-1, df.population.max()+500, 500)
#pops_labels = list(range(len(pops)-1))
#df['pop_rounded'] = pd.cut(df.population, bins=pops, labels=pops_labels)
#df_test['pop_rounded'] = pd.cut(df_test.population, bins=pops, labels=pops_labels)
#df.drop(['population'], axis=1, inplace=True)
#df_test.drop(['population'], axis=1, inplace=True)
#df.drop(['date_recorded'], axis=1, inplace=True)
#df_test.drop(['date_recorded'], axis=1, inplace=True)
df.public_meeting.fillna(True, inplace=True)
df_test.public_meeting.fillna(True, inplace=True)
df.permit.fillna(True, inplace=True)
df_test.permit.fillna(True, inplace=True)
df.gps_height_rounded.fillna(0, inplace=True)
df_test.gps_height_rounded.fillna(0, inplace=True)
Explanation: Предобработка
<div class="panel panel-warning">
<div class="panel-heading">
<h3 class="panel-title">Обратите внимание</h3>
</div>
</div>
Предобработка - опциональный блок, и у себя подготовить данные вы можете полностью по-своему.
Единственное замечание: если решите подготавливать данные самостоятельно, замените странную строку "other - mkulima/shinyanga" на просто "other", так как в тесте только "other".
python
df.loc[df.extraction_type == 'other - mkulima/shinyanga', 'extraction_type'] = 'other'
End of explanation
df.head()
Explanation: Визуализация
End of explanation
df.quality_group.value_counts()
quality_groupInRegions = df.groupby('region')['quality_group'].value_counts().to_dict()
results = pd.DataFrame(data = quality_groupInRegions, index=[0]).stack().fillna(0).transpose()
results.columns = pd.Index(['good', 'salty', 'unknown', 'milky', 'colored', 'fluoride'])
results['total'] = results.good + results.salty + results.unknown + results.milky + results.colored + results.fluoride
results.sort_values(by='good', ascending=False, inplace=True)
results[['good', 'salty', 'unknown', 'milky', 'colored', 'fluoride']].plot(kind='bar', stacked=True, rot=45);
Explanation: Визуализируем качество воды в разных регионах
End of explanation
from folium import Map, CircleMarker
import colorsys
#Просто карта
tanzania_map = Map(location=(-2.147466, 34.698766), tiles='Mapbox Bright', zoom_start=6)
#tanzania_map
#в качестве target сделаем текущий статус
df.status_group.value_counts()
df['target'] = 0 #non functional
df.loc[df.status_group == 'functional needs repair', 'target'] = 1
df.loc[df.status_group == 'functional', 'target'] = 2
df.head()
# Карта с отмеченными источниками
get_radius = lambda x: (x - min_)/(max_ - min_)*7 + 3
rgbhex = lambda rgb: '#'+"".join("%02X" % i for i in rgb)
get_fill_color = lambda x: rgbhex(tuple(int(i * 255) for i in \
colorsys.hsv_to_rgb(x/max_*120.0/360.0, 0.56, 0.84)))
get_border_color = lambda x: rgbhex(tuple(int(i * 255) for i in \
colorsys.hsv_to_rgb(x/max_*120.0/360.0, 0.78, 0.36)))
add_marker = lambda lat, lon, target: \
CircleMarker((lat, lon),
radius = get_radius(target),
color = get_border_color(target),
fill_color = get_fill_color(target),
popup='Lat: %.3f; Lon: %.3f' % (lat, lon),
)\
.add_to(tanzania_map)
min_, max_ = df[['target']].describe().loc['min'][0], df[['target']].describe().loc['max'][0]
df.sample(n=1000).apply(lambda row: add_marker(row['latitude'], row['longitude'], row['target']), axis=1);
#tanzania_map
df = df.drop(['target'], axis=1)
df.head()
Explanation: Визуализируем геоданные о текущем статусе водяных насосов
End of explanation
X, y, X_test = df.drop(['id', 'status_group'], axis=1), \
df.status_group, \
df_test.drop(['id'], axis=1)
X.head(1)
Explanation: Модели
End of explanation
def prepare(X_train, X_test):
from sklearn.preprocessing import StandardScaler
from sklearn.feature_extraction import DictVectorizer
objects = X_train.select_dtypes(include=['O']).columns.values
numeric = X_train.select_dtypes(exclude=['O']).columns.values
dv = DictVectorizer(sparse=False)
data_encoded_tr = dv.fit_transform(X_train[objects].to_dict(orient='records'))
data_encoded_ts = dv.transform(X_test[objects].to_dict(orient='records'))
ss = StandardScaler()
data_scaled_tr = ss.fit_transform(X_train[numeric])
data_scaled_ts = ss.transform(X_test[numeric])
train = np.hstack((data_encoded_tr, data_scaled_tr))
test = np.hstack((data_encoded_ts, data_scaled_ts))
return train, test
x_train, x_test = prepare(X, X_test)
from sklearn.preprocessing import LabelEncoder
y_encoder = LabelEncoder()
y = y_encoder.fit_transform(y)
x_train.shape
x_test.shape
Explanation: <div class="panel panel-warning">
<div class="panel-heading">
<h3 class="panel-title">Обратите внимание</h3>
</div>
</div>
Вот эта функция ниже - опять мои штуки-дрюки, и можно кодировать данные по-своему.
End of explanation
SEED = 1234
np.random.seed = SEED
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
clf = GradientBoostingClassifier(random_state=SEED, n_estimators=10, learning_rate=0.01,subsample=0.8, max_depth=4)
scores = cross_val_score(clf, x_train, y)
np.mean(scores), 2*np.std(scores)
clf = clf.fit(x_train, y)
print('Mean score:', scores.mean())
y_te = clf.predict(x_test)
y_te
ans_nn2 = pd.DataFrame({'id': df_test['id'], 'status_group': y_encoder.inverse_transform(y_te)})
ans_nn2.head()
ans_nn.to_csv('ans_gbm.csv', index=False)
Explanation: <div class="panel panel-info" style="margin: 50px 0 0 0">
<div class="panel-heading">
<h3 class="panel-title">Задание 1.</h3>
</div>
</div>
Возьмите тетрадку с сегодняшнего занятия и, руководствуясь советами по настройке, заделайте лучший GBM в мире! Не забудьте отправлять результаты на drivendata и хвастаться в чате о результатах.
End of explanation
from lightgbm import LGBMClassifier
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import VotingClassifier
from sklearn.ensemble import RandomForestClassifier
#clf2 = LGBMClassifier(max_bin=475,learning_rate=0.13,n_estimators=140,num_leaves=131)
#clf2 = LGBMClassifier(max_bin=400,learning_rate=0.13,n_estimators=140,num_leaves=131)
clf2 = LGBMClassifier(max_bin=400,learning_rate=0.134,n_estimators=151,num_leaves=131)
scores = cross_val_score(clf2, x_train, y)
np.mean(scores), 2*np.std(scores)
clf2 = clf2.fit(x_train, y)
y_te = clf2.predict(x_test)
y_te
ans_nn = pd.DataFrame({'id': df_test['id'], 'status_group': y_encoder.inverse_transform(y_te)})
ans_nn.head()
ans_nn.to_csv('ans_lightgbm.csv', index=False)
Explanation: <div class="panel panel-info" style="margin: 50px 0 0 0">
<div class="panel-heading">
<h3 class="panel-title">Задание 2.</h3>
</div>
</div>
Выберите любой из сторонних фреймворков по своему усмотрению:
* XGBoost
* LightGBM
* H2O
* CatBoost
Установите, прокачайте его, побейте GBM от sklearn.
Опробуем lightgbm
End of explanation
from catboost import Pool, CatBoostClassifier
clf3 = CatBoostClassifier(random_seed=SEED, iterations=500, learning_rate=0.03, depth=6)
scores = cross_val_score(clf3, x_train, y, n_jobs=-1)
np.mean(scores), 2*np.std(scores)
clf3 = clf3.fit(x_train, y)
y_te = clf3.predict(x_test)
y_te
arr = []
i = 0
while i < len(y_te):
arr.append(int(y_te[i]))
i += 1
#arr
ans_nn = pd.DataFrame({'id': df_test['id'], 'status_group': y_encoder.inverse_transform(arr)})
ans_nn.head()
ans_nn.to_csv('ans_catboost.csv', index=False)
Explanation: Опробуем catboost
End of explanation
import h2o
from h2o.estimators.gbm import H2OGradientBoostingEstimator
from sklearn.model_selection import cross_val_score
h2o.init()
Explanation: Опробуем H2O
End of explanation |
832 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python 基本語法與科學計算套件的使用: Python科學計算套件(一)
四堂課程大綱
第一堂-Python 基礎(一):Python 簡介及環境建立、Python程式的編寫及執行、資料型態、基本輸入輸出、流程控制
第二堂-Python 基礎(二):檔案讀寫、例外處理、函數、模組、物件導向
第三堂-Python科學計算套件(一):Numpy、Matplotlib
第四堂-Python科學計算套件(二):Scipy、Astropy
Numpy
Python 內建的list資料結構不適合數值運算
Python 內建的range()函數無法以浮點數來設定序列間距
Step1: 引入Numpy 套件
Step2: 建立Numpy陣列
Step3: Numpy陣列的屬性
Step4: Numpy陣列的操作
Step5: 亂數產生
Step6: 檔案輸出輸入
Step7: Matplotlib
引入Matplotlib 套件
Step8: 基本樣式控制及文字標示
Step9: 把多張圖合併成一張圖
Step10: 畫error bar | Python Code:
period = [2.4, 5, 6.3, 4.1]
print(60 * period)
bh_mass = [4.3, 5.8, 9.5, 7.6]
MASS_SUN = 1.99 * 10 ** 30
print(MASS_SUN * bh_mass)
time = list(range(1,10, 0.1))
Explanation: Python 基本語法與科學計算套件的使用: Python科學計算套件(一)
四堂課程大綱
第一堂-Python 基礎(一):Python 簡介及環境建立、Python程式的編寫及執行、資料型態、基本輸入輸出、流程控制
第二堂-Python 基礎(二):檔案讀寫、例外處理、函數、模組、物件導向
第三堂-Python科學計算套件(一):Numpy、Matplotlib
第四堂-Python科學計算套件(二):Scipy、Astropy
Numpy
Python 內建的list資料結構不適合數值運算
Python 內建的range()函數無法以浮點數來設定序列間距
End of explanation
import numpy as np
Explanation: 引入Numpy 套件
End of explanation
period = np.array([2.4, 5, 6.3, 4.1])
print(60 * period)
bh_mass = np.array([4.3, 5.8, 9.5, 7.6])
MASS_SUN = 1.99 * 10 ** 30
print(MASS_SUN * bh_mass)
time = np.arange(1, 10, 0.1)
time2 = np.linspace(1, 10, 5)
init_value = np.zeros(10)
init_value2= np.ones(10)
print(time)
print(time2)
print(init_value)
print(init_value2)
Explanation: 建立Numpy陣列
End of explanation
period2 = np.array([[2.4, 5, 6.3, 4.1], [4.2, 5.3, 1.2, 7.1]])
print(period2.ndim)
print(period2.shape)
print(period2.dtype)
Explanation: Numpy陣列的屬性
End of explanation
print(period2[0])
print(period2[1][1:-1])
print(np.sort(period2[0]))
print(np.argsort(period2[0]))
print(np.argmax(period2[0]))
print(np.where(period2[0] > 4.5))
print(np.extract(period2[0] > 4.5, period2[0]))
counts = 100 * np.sin(2 * np.pi * 1. / period2[0][1] * time) + 500
print(counts)
Explanation: Numpy陣列的操作
End of explanation
import numpy.random as npr
print(npr.rand(5))
print(npr.randn(10))
print(npr.normal(5., 1., 10))
Explanation: 亂數產生
End of explanation
lc = np.array([time, counts])
np.savetxt('test.out', lc)
input_time, input_counts = np.loadtxt('test.out')
print(input_time)
print(input_counts)
Explanation: 檔案輸出輸入
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Matplotlib
引入Matplotlib 套件
End of explanation
plt.plot(input_time, input_counts)
plt.xlabel('Time')
plt.ylabel('Counts')
plt.title('Light curve', fontsize=18)
plt.axis([1, 15, 350, 700])
plt.xticks([1, 5, 10])
plt.yticks([350, 500, 700])
#plt.show() # 開啟圖片編輯視窗,並將圖顯示在該視窗中,透過該視窗可修改圖片、存檔
#plt.hold(True) # 保留目前的圖,以便重疊繪圖
plt.plot(input_time, input_counts-100, marker='o', color='green', linewidth=1)
plt.plot(input_time, input_counts+100, 'r*')
#plt.legend(('lc1', 'lc2', 'lc3'))
#plt.hold(False)
Explanation: 基本樣式控制及文字標示
End of explanation
plt.subplot(211)
plt.plot(input_time, input_counts)
plt.subplot(212)
plt.plot(input_time, input_counts, '.m')
#plt.savefig('subplot.png')
Explanation: 把多張圖合併成一張圖
End of explanation
y_errors = 10 * npr.rand(len(input_time)) + 5
plt.plot(input_time, input_counts, 'k.')
plt.errorbar(input_time, input_counts, yerr=y_errors, fmt='r')
Explanation: 畫error bar
End of explanation |
833 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced
Step1: As always, let's do imports and initialize a logger and a new Bundle.
Step2: Passband Options
Passband options follow the exact same rules as dataset columns.
Sending a single value to the argument will apply it to each component in which the time array is attached (either based on the list of components sent or the defaults from the dataset method).
Note that for light curves, in particular, this rule gets slightly bent. The dataset arrays for light curves are attached at the system level, always. The passband-dependent options, however, exist for each star in the system. So, that value will get passed to each star if the component is not explicitly provided.
Step3: As you might expect, if you want to pass different values to different components, simply provide them in a dictionary.
Step4: Note here that we didn't explicitly override the defaults for '_default', so they used the phoebe-wide defaults. If you wanted to set a value for the ld_coeffs of any star added in the future, you would have to provide a value for '_default' in the dictionary as well.
Step5: This syntax may seem a bit bulky - but alternatively you can add the dataset without providing values and then change the values individually using dictionary access or set_value.
Adding a Dataset from a File
Manually from Arrays
For now, the only way to load data from a file is to do the parsing externally and pass the arrays on (as in the previous section).
Here we'll load times, fluxes, and errors of a light curve from an external file and then pass them on to a newly created dataset. Since this is a light curve, it will automatically know that you want the summed light from all copmonents in the hierarchy.
Step6: Enabling and Disabling Datasets
See the Compute Tutorial
Dealing with Phases
Datasets will no longer accept phases. It is the user's responsibility to convert
phased data into times given an ephemeris. But it's still useful to be able to
convert times to phases (and vice versa) and be able to plot in phase.
Those conversions can be handled via b.get_ephemeris, b.to_phase, and b.to_time.
Step7: All of these by default use the period in the top-level of the current hierarchy,
but accept a component keyword argument if you'd like the ephemeris of an
inner-orbit or the rotational ephemeris of a star in the system.
We'll see how plotting works later, but if you manually wanted to plot the dataset
with phases, all you'd need to do is
Step8: or
Step9: Although it isn't possible to attach data in phase-space, it is possible to tell PHOEBE at which phases to compute the model by setting compute_phases. Note that this overrides the value of times when the model is computed.
Step10: The usage of compute_phases (as well as compute_times) will be discussed in further detail in the compute tutorial and the advanced
Step11: Removing Datasets
Removing a dataset will remove matching parameters in either the dataset, model, or constraint contexts. This action is permanent and not undo-able via Undo/Redo.
Step12: The simplest way to remove a dataset is by its dataset tag
Step13: But remove_dataset also takes any other tag(s) that could be sent to filter. | Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
Explanation: Advanced: Datasets
Datasets tell PHOEBE how and at what times to compute the model. In some cases these will include the actual observational data, and in other cases may only include the times at which you want to compute a synthetic model.
If you're not already familiar with the basic functionality of adding datasets, make sure to read the datasets tutorial first.
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new Bundle.
End of explanation
b.add_dataset('lc',
times=[0,1],
dataset='lc01',
overwrite=True)
print(b.get_parameter(qualifier='times', dataset='lc01'))
print(b.filter(qualifier='ld_mode', dataset='lc01'))
Explanation: Passband Options
Passband options follow the exact same rules as dataset columns.
Sending a single value to the argument will apply it to each component in which the time array is attached (either based on the list of components sent or the defaults from the dataset method).
Note that for light curves, in particular, this rule gets slightly bent. The dataset arrays for light curves are attached at the system level, always. The passband-dependent options, however, exist for each star in the system. So, that value will get passed to each star if the component is not explicitly provided.
End of explanation
b.add_dataset('lc',
times=[0,1],
ld_mode='manual',
ld_func={'primary': 'logarithmic', 'secondary': 'quadratic'},
dataset='lc01',
overwrite=True)
print(b.filter(qualifier='ld_func', dataset='lc01'))
Explanation: As you might expect, if you want to pass different values to different components, simply provide them in a dictionary.
End of explanation
print(b.filter(qualifier'ld_func@lc01', check_default=False))
Explanation: Note here that we didn't explicitly override the defaults for '_default', so they used the phoebe-wide defaults. If you wanted to set a value for the ld_coeffs of any star added in the future, you would have to provide a value for '_default' in the dictionary as well.
End of explanation
times, fluxes, sigmas = np.loadtxt('test.lc.in', unpack=True)
b.add_dataset('lc',
times=times,
fluxes=fluxes,
sigmas=sigmas,
dataset='lc01',
overwrite=True)
Explanation: This syntax may seem a bit bulky - but alternatively you can add the dataset without providing values and then change the values individually using dictionary access or set_value.
Adding a Dataset from a File
Manually from Arrays
For now, the only way to load data from a file is to do the parsing externally and pass the arrays on (as in the previous section).
Here we'll load times, fluxes, and errors of a light curve from an external file and then pass them on to a newly created dataset. Since this is a light curve, it will automatically know that you want the summed light from all copmonents in the hierarchy.
End of explanation
print(b.get_ephemeris())
print(b.to_phase(0.0))
print(b.to_time(-0.25))
Explanation: Enabling and Disabling Datasets
See the Compute Tutorial
Dealing with Phases
Datasets will no longer accept phases. It is the user's responsibility to convert
phased data into times given an ephemeris. But it's still useful to be able to
convert times to phases (and vice versa) and be able to plot in phase.
Those conversions can be handled via b.get_ephemeris, b.to_phase, and b.to_time.
End of explanation
print(b.to_phase(b.get_value(qualifier='times')))
Explanation: All of these by default use the period in the top-level of the current hierarchy,
but accept a component keyword argument if you'd like the ephemeris of an
inner-orbit or the rotational ephemeris of a star in the system.
We'll see how plotting works later, but if you manually wanted to plot the dataset
with phases, all you'd need to do is:
End of explanation
print(b.to_phase('times@lc01'))
Explanation: or
End of explanation
b.add_dataset('lc',
compute_phases=np.linspace(0,1,11),
dataset='lc01',
overwrite=True)
Explanation: Although it isn't possible to attach data in phase-space, it is possible to tell PHOEBE at which phases to compute the model by setting compute_phases. Note that this overrides the value of times when the model is computed.
End of explanation
b.add_dataset('lc',
times=[0],
dataset='lc01',
overwrite=True)
print(b['compute_phases@lc01'])
b.flip_constraint('compute_phases', dataset='lc01', solve_for='compute_times')
b.set_value('compute_phases', dataset='lc01', value=np.linspace(0,1,101))
Explanation: The usage of compute_phases (as well as compute_times) will be discussed in further detail in the compute tutorial and the advanced: compute times & phases tutorial.
Note also that although you can pass compute_phases directly to add_dataset, if you do not, it will be constrained by compute_times by default. In this case, you would need to flip the constraint before setting compute_phases. See the constraints tutorial and the flip_constraint API docs for more details on flipping constraints.
End of explanation
print(b.datasets)
Explanation: Removing Datasets
Removing a dataset will remove matching parameters in either the dataset, model, or constraint contexts. This action is permanent and not undo-able via Undo/Redo.
End of explanation
b.remove_dataset('lc01')
print(b.datasets)
Explanation: The simplest way to remove a dataset is by its dataset tag:
End of explanation
b.remove_dataset(kind='rv')
print(b.datasets)
Explanation: But remove_dataset also takes any other tag(s) that could be sent to filter.
End of explanation |
834 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Understanding Tree SHAP for Simple Models
The SHAP value for a feature is the average change in model output by conditioning on that feature when introducing features one at a time over all feature orderings. While this is easy to state, it is challenging to compute. So this notebook is meant to give a few simple examples where we can see how this plays out for very small trees. For arbitrary large trees it is very hard to intuitively guess these values by looking at the tree.
Step1: Single split example
Step2: Explain the model
Note that the bias term is the expected output of the model over the training dataset (0.5). The SHAP value for features not used in the model is always 0, while for $x_0$ it is just the difference between the expected value and the output of the model.
Step3: Two feature AND example
Step4: Explain the model
Note that the bias term is the expected output of the model over the training dataset (0.25). The SHAP value for features not used in the model is always 0, while for $x_0$ and $x_1$ it is just the difference between the expected value and the output of the model split equally between them (since they equally contribute to the AND function).
Step5: Two feature OR example
Step6: Explain the model
Note that the bias term is the expected output of the model over the training dataset (0.75). The SHAP value for features not used in the model is always 0, while for $x_0$ and $x_1$ it is just the difference between the expected value and the output of the model split equally between them (since they equally contribute to the OR function).
Step7: Two feature XOR example
Step8: Explain the model
Note that the bias term is the expected output of the model over the training dataset (0.5). The SHAP value for features not used in the model is always 0, while for $x_0$ and $x_1$ it is just the difference between the expected value and the output of the model split equally between them (since they equally contribute to the XOR function).
Step9: Two feature AND + feature boost example
Step10: Explain the model
Note that the bias term is the expected output of the model over the training dataset (0.75). The SHAP value for features not used in the model is always 0, while for $x_0$ and $x_1$ it is just the difference between the expected value and the output of the model split equally between them (since they equally contribute to the AND function), plus an extra 0.5 impact for $x_0$ since it has an effect of $1.0$ all by itself (+0.5 if it is on and -0.5 if it is off). | Python Code:
import sklearn
import shap
import numpy as np
import graphviz
Explanation: Understanding Tree SHAP for Simple Models
The SHAP value for a feature is the average change in model output by conditioning on that feature when introducing features one at a time over all feature orderings. While this is easy to state, it is challenging to compute. So this notebook is meant to give a few simple examples where we can see how this plays out for very small trees. For arbitrary large trees it is very hard to intuitively guess these values by looking at the tree.
End of explanation
# build data
N = 100
M = 4
X = np.zeros((N,M))
X.shape
y = np.zeros(N)
X[:N//2, 0] = 1
y[:N//2] = 1
# fit model
single_split_model = sklearn.tree.DecisionTreeRegressor(max_depth=1)
single_split_model.fit(X, y)
# draw model
dot_data = sklearn.tree.export_graphviz(single_split_model, out_file=None, filled=True, rounded=True, special_characters=True)
graph = graphviz.Source(dot_data)
graph
Explanation: Single split example
End of explanation
xs = [np.ones(M), np.zeros(M)]
for x in xs:
print()
print(" x =", x)
print("shap_values =", shap.TreeExplainer(single_split_model).shap_values(x))
Explanation: Explain the model
Note that the bias term is the expected output of the model over the training dataset (0.5). The SHAP value for features not used in the model is always 0, while for $x_0$ it is just the difference between the expected value and the output of the model.
End of explanation
# build data
N = 100
M = 4
X = np.zeros((N,M))
X.shape
y = np.zeros(N)
X[:1 * N//4, 1] = 1
X[:N//2, 0] = 1
X[N//2:3 * N//4, 1] = 1
y[:1 * N//4] = 1
# fit model
and_model = sklearn.tree.DecisionTreeRegressor(max_depth=2)
and_model.fit(X, y)
# draw model
dot_data = sklearn.tree.export_graphviz(and_model, out_file=None, filled=True, rounded=True, special_characters=True)
graph = graphviz.Source(dot_data)
graph
Explanation: Two feature AND example
End of explanation
xs = [np.ones(M), np.zeros(M)]
for x in xs:
print()
print(" x =", x)
print("shap_values =", shap.TreeExplainer(and_model).shap_values(x))
Explanation: Explain the model
Note that the bias term is the expected output of the model over the training dataset (0.25). The SHAP value for features not used in the model is always 0, while for $x_0$ and $x_1$ it is just the difference between the expected value and the output of the model split equally between them (since they equally contribute to the AND function).
End of explanation
# build data
N = 100
M = 4
X = np.zeros((N,M))
X.shape
y = np.zeros(N)
X[:N//2, 0] = 1
X[:1 * N//4, 1] = 1
X[N//2:3 * N//4, 1] = 1
y[:N//2] = 1
y[N//2:3 * N//4] = 1
# fit model
or_model = sklearn.tree.DecisionTreeRegressor(max_depth=2)
or_model.fit(X, y)
# draw model
dot_data = sklearn.tree.export_graphviz(or_model, out_file=None, filled=True, rounded=True, special_characters=True)
graph = graphviz.Source(dot_data)
graph
Explanation: Two feature OR example
End of explanation
xs = [np.ones(M), np.zeros(M)]
for x in xs:
print()
print(" x =", x)
print("shap_values =", shap.TreeExplainer(or_model).shap_values(x))
Explanation: Explain the model
Note that the bias term is the expected output of the model over the training dataset (0.75). The SHAP value for features not used in the model is always 0, while for $x_0$ and $x_1$ it is just the difference between the expected value and the output of the model split equally between them (since they equally contribute to the OR function).
End of explanation
# build data
N = 100
M = 4
X = np.zeros((N,M))
X.shape
y = np.zeros(N)
X[:N//2, 0] = 1
X[:1 * N//4, 1] = 1
X[N//2:3 * N//4, 1] = 1
y[1 * N//4:N//2] = 1
y[N//2:3 * N//4] = 1
# fit model
xor_model = sklearn.tree.DecisionTreeRegressor(max_depth=2)
xor_model.fit(X, y)
# draw model
dot_data = sklearn.tree.export_graphviz(xor_model, out_file=None, filled=True, rounded=True, special_characters=True)
graph = graphviz.Source(dot_data)
graph
Explanation: Two feature XOR example
End of explanation
xs = [np.ones(M), np.zeros(M)]
for x in xs:
print()
print(" x =", x)
print("shap_values =", shap.TreeExplainer(xor_model).shap_values(x))
Explanation: Explain the model
Note that the bias term is the expected output of the model over the training dataset (0.5). The SHAP value for features not used in the model is always 0, while for $x_0$ and $x_1$ it is just the difference between the expected value and the output of the model split equally between them (since they equally contribute to the XOR function).
End of explanation
# build data
N = 100
M = 4
X = np.zeros((N,M))
X.shape
y = np.zeros(N)
X[:N//2, 0] = 1
X[:1 * N//4, 1] = 1
X[N//2:3 * N//4, 1] = 1
y[:1 * N//4] = 1
y[:N//2] += 1
# fit model
and_fb_model = sklearn.tree.DecisionTreeRegressor(max_depth=2)
and_fb_model.fit(X, y)
# draw model
dot_data = sklearn.tree.export_graphviz(and_fb_model, out_file=None, filled=True, rounded=True, special_characters=True)
graph = graphviz.Source(dot_data)
graph
Explanation: Two feature AND + feature boost example
End of explanation
xs = [np.ones(M), np.zeros(M)]
for x in xs:
print()
print(" x =", x)
print("shap_values =", shap.TreeExplainer(and_fb_model).shap_values(x))
Explanation: Explain the model
Note that the bias term is the expected output of the model over the training dataset (0.75). The SHAP value for features not used in the model is always 0, while for $x_0$ and $x_1$ it is just the difference between the expected value and the output of the model split equally between them (since they equally contribute to the AND function), plus an extra 0.5 impact for $x_0$ since it has an effect of $1.0$ all by itself (+0.5 if it is on and -0.5 if it is off).
End of explanation |
835 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Jupyter Notebooks (libros de notas o cuadernos Jupyter)
Puedes ejecutar un Cell (celda) pulsando [shift] + [Enter] o presionando el botón Play en la barra de herramientas.
Puedes obtener ayuda sobre una función u objeto presionando [shift] + [tab] después de los paréntesis de apertura function(
También puedes obtener la ayuda ejecutando function?
Matrices de Numpy
Manipular matrices de numpy es un parte muy importante del aprendizaje automático en Python (en realidad, de cualquier tipo de computación científica). Esto será un repaso para la mayoría. En cualquier caso, repasemos las características más importantes.
Step1: (tener en cuenta que los arrays en numpy se indexan desde el 0, al igual que la mayoría de estructuras en Python)
Step2: $$\begin{bmatrix}
1 & 2 & 3 & 4 \
5 & 6 & 7 & 8
\end{bmatrix}^T
=
\begin{bmatrix}
1 & 5 \
2 & 6 \
3 & 7 \
4 & 8
\end{bmatrix}
$$
Step3: Hay mucho más que aprender, pero esto cubre algunas de las cosas fundamentales que se tratarán en este curso.
Matrices dispersas de SciPy
No utilizaremos demasiado las matrices dispersas, pero son muy útiles en múltiples situaciones. En algunas tareas de aprendizaje automático, especialmente en aquellas asociadas con análisis de textos, los datos son casi siempre ceros. Guardar todos estos ceros es muy poco eficiente, mientras que representar estas matrices de forma que solo almacenemos lo qué no es cero es mucho más eficiente. Podemos crear y manipular matrices dispersas de la siguiente forma
Step4: (puede que encuentres otra forma alternativa para convertir matrices dispersas a densas
Step5: A menudo, una vez creamos la matriz LIL, es útil convertirla al formato CSR (muchos algoritmos de scikit-learn requieren formatos CSR)
Step6: Los formatos dispersos disponibles que pueden ser útiles para distintos problemas son
Step7: Hay muchísimos tipos de gráficos disponibles. Una forma útila de explorarlos es mirar la galería de matplotlib.
Puedes probar estos ejemplos fácilmente en el libro de notas | Python Code:
import numpy as np
# Semilla de números aleatorios (para reproducibilidad)
rnd = np.random.RandomState(seed=123)
# Generar una matriz aleatoria
X = rnd.uniform(low=0.0, high=1.0, size=(3, 5)) # dimensiones 3x5
print(X)
Explanation: Jupyter Notebooks (libros de notas o cuadernos Jupyter)
Puedes ejecutar un Cell (celda) pulsando [shift] + [Enter] o presionando el botón Play en la barra de herramientas.
Puedes obtener ayuda sobre una función u objeto presionando [shift] + [tab] después de los paréntesis de apertura function(
También puedes obtener la ayuda ejecutando function?
Matrices de Numpy
Manipular matrices de numpy es un parte muy importante del aprendizaje automático en Python (en realidad, de cualquier tipo de computación científica). Esto será un repaso para la mayoría. En cualquier caso, repasemos las características más importantes.
End of explanation
# Acceder a los elementos
# Obtener un único elemento
# (primera fila, primera columna)
print(X[0, 0])
# Obtener una fila
# (segunda fila)
print(X[1])
# Obtener una columna
# (segunda columna)
print(X[:, 1])
# Obtener la traspuesta
print(X.T)
Explanation: (tener en cuenta que los arrays en numpy se indexan desde el 0, al igual que la mayoría de estructuras en Python)
End of explanation
# Crear un vector fila de números con la misma separación
# sobre un intervalo prefijado
y = np.linspace(0, 12, 5)
print(y)
# Transformar el vector fila en un vector columna
print(y[:, np.newaxis])
# Obtener la forma de un array y cambiarla
# Generar un array aleatorio
rnd = np.random.RandomState(seed=123)
X = rnd.uniform(low=0.0, high=1.0, size=(3, 5)) # a 3 x 5 array
print(X)
print(X.shape)
print(X.reshape(5, 3))
# Indexar según un conjunto de números enteros
indices = np.array([3, 1, 0])
print(indices)
X[:, indices]
Explanation: $$\begin{bmatrix}
1 & 2 & 3 & 4 \
5 & 6 & 7 & 8
\end{bmatrix}^T
=
\begin{bmatrix}
1 & 5 \
2 & 6 \
3 & 7 \
4 & 8
\end{bmatrix}
$$
End of explanation
from scipy import sparse
# Crear una matriz de aleatorios entre 0 y 1
rnd = np.random.RandomState(seed=123)
X = rnd.uniform(low=0.0, high=1.0, size=(10, 5))
print(X)
# Poner a cero la mayoría de elementos
X[X < 0.7] = 0
print(X)
# Transformar X en una matriz CSR (Compressed-Sparse-Row)
X_csr = sparse.csr_matrix(X)
print(X_csr)
# Convertir la matriz CSR de nuevo a una matriz densa
print(X_csr.toarray())
Explanation: Hay mucho más que aprender, pero esto cubre algunas de las cosas fundamentales que se tratarán en este curso.
Matrices dispersas de SciPy
No utilizaremos demasiado las matrices dispersas, pero son muy útiles en múltiples situaciones. En algunas tareas de aprendizaje automático, especialmente en aquellas asociadas con análisis de textos, los datos son casi siempre ceros. Guardar todos estos ceros es muy poco eficiente, mientras que representar estas matrices de forma que solo almacenemos lo qué no es cero es mucho más eficiente. Podemos crear y manipular matrices dispersas de la siguiente forma:
End of explanation
# Crear una matriz LIL vacía y añadir algunos elementos
X_lil = sparse.lil_matrix((5, 5))
for i, j in np.random.randint(0, 5, (15, 2)):
X_lil[i, j] = i + j
print(X_lil)
print(type(X_lil))
X_dense = X_lil.toarray()
print(X_dense)
print(type(X_dense))
Explanation: (puede que encuentres otra forma alternativa para convertir matrices dispersas a densas: numpy.todense; toarray devuelve un array numpy, mientras que todense devuelve una matriz numpy. En este tutorial trabajaremos con arrays numpy, no con matrices, ya que estas últimas no son soportadas por scikit-learn.
La representación CSR puede ser muy eficiente para hacer cómputo, pero no tanto para añadir elementos. Para ello, la representación LIL (List-In-List) es mejor:
End of explanation
X_csr = X_lil.tocsr()
print(X_csr)
print(type(X_csr))
Explanation: A menudo, una vez creamos la matriz LIL, es útil convertirla al formato CSR (muchos algoritmos de scikit-learn requieren formatos CSR)
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
# Dibujar una línea
x = np.linspace(0, 10, 100)
plt.plot(x, np.sin(x));
# Dibujar un scatter
x = np.random.normal(size=500)
y = np.random.normal(size=500)
plt.scatter(x, y);
# Mostrar imágenes usando imshow
# - Tener en cuenta que el origen por defecto está arriba a la izquierda
x = np.linspace(1, 12, 100)
y = x[:, np.newaxis]
im = y * np.sin(x) * np.cos(y)
print(im.shape)
plt.imshow(im);
# Hacer un diagrama de curvas de nivel (contour plot)
# - El origen aquí está abajo a la izquierda
plt.contour(im);
# El modo "widget" en lugar de inline permite que los plots sean interactivos
%matplotlib widget
# Plot en 3D
from mpl_toolkits.mplot3d import Axes3D
ax = plt.axes(projection='3d')
xgrid, ygrid = np.meshgrid(x, y.ravel())
ax.plot_surface(xgrid, ygrid, im, cmap=plt.cm.viridis, cstride=2, rstride=2, linewidth=0);
Explanation: Los formatos dispersos disponibles que pueden ser útiles para distintos problemas son:
- CSR (compressed sparse row).
- CSC (compressed sparse column).
- BSR (block sparse row).
- COO (coordinate).
- DIA (diagonal).
- DOK (dictionary of keys).
- LIL (list in list).
El paquete scipy.sparse tienen bastantes funciones para matrices dispersas, incluyendo álgebra lineal, algoritmos de grafos y mucho más.
matplotlib
Otra parte muy importante del aprendizaje automático es la visualización de datos. La herramienta más habitual para esto en Python es matplotlib. Es un paquete extremadamente flexible y ahora veremos algunos elementos básicos.
Ya que estamos usando los libros (notebooks) Jupyter, vamos a usar una de las funciones mágicas que vienen incluidas en IPython, el modo "matoplotlib inline", que dibujará los plots directamente en el libro.
End of explanation
# %load http://matplotlib.org/mpl_examples/pylab_examples/ellipse_collection.py
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.collections import EllipseCollection
x = np.arange(10)
y = np.arange(15)
X, Y = np.meshgrid(x, y)
XY = np.hstack((X.ravel()[:, np.newaxis], Y.ravel()[:, np.newaxis]))
ww = X/10.0
hh = Y/15.0
aa = X*9
fig, ax = plt.subplots()
ec = EllipseCollection(ww, hh, aa, units='x', offsets=XY,
transOffset=ax.transData)
ec.set_array((X + Y).ravel())
ax.add_collection(ec)
ax.autoscale_view()
ax.set_xlabel('X')
ax.set_ylabel('y')
cbar = plt.colorbar(ec)
cbar.set_label('X+Y')
plt.show()
Explanation: Hay muchísimos tipos de gráficos disponibles. Una forma útila de explorarlos es mirar la galería de matplotlib.
Puedes probar estos ejemplos fácilmente en el libro de notas: simplemente copia el enlace Source Code de cada página y pégalo en el libro usando el comando mágico %load.
Por ejemplo:
End of explanation |
836 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
k-Nearest Neighbor (kNN) exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
The kNN classifier consists of two stages
Step1: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps
Step2: Inline Question #1
Step3: You should expect to see approximately 27% accuracy. Now lets try out a larger k, say k = 5
Step5: You should expect to see a slightly better performance than with k = 1.
Step6: Cross-validation
We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation. | Python Code:
# Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print('Training data shape: ', X_train.shape)
print('Training labels shape: ', y_train.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print(X_train.shape, X_test.shape)
from cs231n.classifiers import KNearestNeighbor
# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
Explanation: k-Nearest Neighbor (kNN) exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
The kNN classifier consists of two stages:
During training, the classifier takes the training data and simply remembers it
During testing, kNN classifies every test image by comparing to all training images and transfering the labels of the k most similar training examples
The value of k is cross-validated
In this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code.
End of explanation
# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print(dists.shape)
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
Explanation: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps:
First we must compute the distances between all test examples and all train examples.
Given these distances, for each test example we find the k nearest examples and have them vote for the label
Lets begin with computing the distance matrix between all training and test examples. For example, if there are Ntr training examples and Nte test examples, this stage should result in a Nte x Ntr matrix where each element (i,j) is the distance between the i-th test and j-th train example.
First, open cs231n/classifiers/k_nearest_neighbor.py and implement the function compute_distances_two_loops that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.
End of explanation
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
Explanation: Inline Question #1: Notice the structured patterns in the distance matrix, where some rows or columns are visible brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)
What in the data is the cause behind the distinctly bright rows?
What causes the columns?
Your Answer: fill this in.
End of explanation
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
Explanation: You should expect to see approximately 27% accuracy. Now lets try out a larger k, say k = 5:
End of explanation
# Now lets speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print('Difference was: %f' % (difference, ))
if difference < 0.001:
print('Good! The distance matrices are the same')
else:
print('Uh-oh! The distance matrices are different')
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print('Difference was: %f' % (difference, ))
if difference < 0.001:
print('Good! The distance matrices are the same')
else:
print('Uh-oh! The distance matrices are different')
# Let's compare how fast the implementations are
def time_function(f, *args):
Call a function f with args and return the time (in seconds) that it took to execute.
import time
tic = time.time()
f(*args)
toc = time.time()
return toc - tic
two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print('Two loop version took %f seconds' % two_loop_time)
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print('One loop version took %f seconds' % one_loop_time)
no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print('No loop version took %f seconds' % no_loop_time)
# you should see significantly faster performance with the fully vectorized implementation
Explanation: You should expect to see a slightly better performance than with k = 1.
End of explanation
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
################################################################################
# TODO: #
# Split up the training data into folds. After splitting, X_train_folds and #
# y_train_folds should each be lists of length num_folds, where #
# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #
# Hint: Look up the numpy array_split function. #
################################################################################
X_train_folds = np.array(np.array_split(X_train, num_folds, axis=0))
y_train_folds = np.array(np.array_split(y_train, num_folds))
################################################################################
# END OF YOUR CODE #
################################################################################
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}
################################################################################
# TODO: #
# Perform k-fold cross validation to find the best value of k. For each #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all #
# values of k in the k_to_accuracies dictionary. #
################################################################################
for k in k_choices:
for n in range(num_folds):
combine_id = [x for x in range(num_folds) if x != n]
x_training_dat = np.concatenate(X_train_folds[combine_id])
y_training_dat = np.concatenate(y_train_folds[combine_id])
classifier_k = KNearestNeighbor()
classifier_k.train(x_training_dat, y_training_dat)
y_cross_validation_pred = classifier_k.predict_labels(X_train_folds[n], k)
num_correct = np.sum(y_cross_validation_pred == y_train_folds[n])
accuracy = float(num_correct) / num_test
k_to_accuracies.setdefault(k, []).append(accuracy)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
for accuracy in k_to_accuracies[k]:
print('k = %d, accuracy = %f' % (k, accuracy))
print('>>> k = %d, mean_acc = %f' % (k, np.array(k_to_accuracies[k]).mean()))
# plot the raw observations
for k in k_choices:
accuracies = k_to_accuracies[k]
plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 10
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
Explanation: Cross-validation
We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.
End of explanation |
837 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of claving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'sandbox-2', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: CNRM-CERFACS
Source ID: SANDBOX-2
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:52
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adative grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calulated. Include the temporal coupling frequeny from the atmosphere, whether or not a seperate SMB model is used, and if so details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of claving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description if ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation |
838 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
solarposition.py tutorial
This tutorial needs your help to make it better!
Table of contents
Step1: SPA output
Step2: Speed tests
Step3: This numba test will only work properly if you have installed numba.
Step4: The numba calculation takes a long time the first time that it's run because it uses LLVM to compile the Python code to machine code. After that it's about 4-10 times faster depending on your machine. You can pass a numthreads argument to this function. The optimum numthreads depends on your machine and is equal to 4 by default. | Python Code:
import datetime
# scientific python add-ons
import numpy as np
import pandas as pd
# plotting stuff
# first line makes the plots appear in the notebook
%matplotlib inline
import matplotlib.pyplot as plt
# finally, we import the pvlib library
import pvlib
import pvlib
from pvlib.location import Location
Explanation: solarposition.py tutorial
This tutorial needs your help to make it better!
Table of contents:
1. Setup
2. SPA output
2. Speed tests
This tutorial has been tested against the following package versions:
* pvlib 0.3.0
* Python 3.5.1
* IPython 4.1
* Pandas 0.18.0
It should work with other Python and Pandas versions. It requires pvlib > 0.3.0 and IPython > 3.0.
Authors:
* Will Holmgren (@wholmgren), University of Arizona. July 2014, July 2015, March 2016
Setup
End of explanation
tus = Location(32.2, -111, 'US/Arizona', 700, 'Tucson')
print(tus)
golden = Location(39.742476, -105.1786, 'America/Denver', 1830, 'Golden')
print(golden)
golden_mst = Location(39.742476, -105.1786, 'MST', 1830, 'Golden MST')
print(golden_mst)
berlin = Location(52.5167, 13.3833, 'Europe/Berlin', 34, 'Berlin')
print(berlin)
times = pd.date_range(start=datetime.datetime(2014,6,23), end=datetime.datetime(2014,6,24), freq='1Min')
times_loc = times.tz_localize(tus.pytz)
times
pyephemout = pvlib.solarposition.pyephem(times_loc, tus.latitude, tus.longitude)
spaout = pvlib.solarposition.spa_python(times_loc, tus.latitude, tus.longitude)
pyephemout['elevation'].plot(label='pyephem')
pyephemout['apparent_elevation'].plot(label='pyephem apparent')
spaout['elevation'].plot(label='spa')
plt.legend(ncol=2)
plt.title('elevation')
print('pyephem')
print(pyephemout.head())
print('spa')
print(spaout.head())
plt.figure()
pyephemout['elevation'].plot(label='pyephem')
spaout['elevation'].plot(label='spa')
(pyephemout['elevation'] - spaout['elevation']).plot(label='diff')
plt.legend(ncol=3)
plt.title('elevation')
plt.figure()
pyephemout['apparent_elevation'].plot(label='pyephem apparent')
spaout['elevation'].plot(label='spa')
(pyephemout['apparent_elevation'] - spaout['elevation']).plot(label='diff')
plt.legend(ncol=3)
plt.title('elevation')
plt.figure()
pyephemout['apparent_zenith'].plot(label='pyephem apparent')
spaout['zenith'].plot(label='spa')
(pyephemout['apparent_zenith'] - spaout['zenith']).plot(label='diff')
plt.legend(ncol=3)
plt.title('zenith')
plt.figure()
pyephemout['apparent_azimuth'].plot(label='pyephem apparent')
spaout['azimuth'].plot(label='spa')
(pyephemout['apparent_azimuth'] - spaout['azimuth']).plot(label='diff')
plt.legend(ncol=3)
plt.title('azimuth')
pyephemout = pvlib.solarposition.pyephem(times.tz_localize(golden.tz), golden.latitude, golden.longitude)
spaout = pvlib.solarposition.spa_python(times.tz_localize(golden.tz), golden.latitude, golden.longitude)
pyephemout['elevation'].plot(label='pyephem')
pyephemout['apparent_elevation'].plot(label='pyephem apparent')
spaout['elevation'].plot(label='spa')
plt.legend(ncol=2)
plt.title('elevation')
print('pyephem')
print(pyephemout.head())
print('spa')
print(spaout.head())
pyephemout = pvlib.solarposition.pyephem(times.tz_localize(golden.tz), golden.latitude, golden.longitude)
ephemout = pvlib.solarposition.ephemeris(times.tz_localize(golden.tz), golden.latitude, golden.longitude)
pyephemout['elevation'].plot(label='pyephem')
pyephemout['apparent_elevation'].plot(label='pyephem apparent')
ephemout['elevation'].plot(label='ephem')
plt.legend(ncol=2)
plt.title('elevation')
print('pyephem')
print(pyephemout.head())
print('ephem')
print(ephemout.head())
loc = berlin
pyephemout = pvlib.solarposition.pyephem(times.tz_localize(loc.tz), loc.latitude, loc.longitude)
ephemout = pvlib.solarposition.ephemeris(times.tz_localize(loc.tz), loc.latitude, loc.longitude)
pyephemout['elevation'].plot(label='pyephem')
pyephemout['apparent_elevation'].plot(label='pyephem apparent')
ephemout['elevation'].plot(label='ephem')
ephemout['apparent_elevation'].plot(label='ephem apparent')
plt.legend(ncol=2)
plt.title('elevation')
print('pyephem')
print(pyephemout.head())
print('ephem')
print(ephemout.head())
loc = berlin
times = pd.DatetimeIndex(start=datetime.date(2015,3,28), end=datetime.date(2015,3,29), freq='5min')
pyephemout = pvlib.solarposition.pyephem(times.tz_localize(loc.tz), loc.latitude, loc.longitude)
ephemout = pvlib.solarposition.ephemeris(times.tz_localize(loc.tz), loc.latitude, loc.longitude)
pyephemout['elevation'].plot(label='pyephem')
pyephemout['apparent_elevation'].plot(label='pyephem apparent')
ephemout['elevation'].plot(label='ephem')
plt.legend(ncol=2)
plt.title('elevation')
plt.figure()
pyephemout['azimuth'].plot(label='pyephem')
ephemout['azimuth'].plot(label='ephem')
plt.legend(ncol=2)
plt.title('azimuth')
print('pyephem')
print(pyephemout.head())
print('ephem')
print(ephemout.head())
loc = berlin
times = pd.DatetimeIndex(start=datetime.date(2015,3,30), end=datetime.date(2015,3,31), freq='5min')
pyephemout = pvlib.solarposition.pyephem(times.tz_localize(loc.tz), loc.latitude, loc.longitude)
ephemout = pvlib.solarposition.ephemeris(times.tz_localize(loc.tz), loc.latitude, loc.longitude)
pyephemout['elevation'].plot(label='pyephem')
pyephemout['apparent_elevation'].plot(label='pyephem apparent')
ephemout['elevation'].plot(label='ephem')
plt.legend(ncol=2)
plt.title('elevation')
plt.figure()
pyephemout['azimuth'].plot(label='pyephem')
ephemout['azimuth'].plot(label='ephem')
plt.legend(ncol=2)
plt.title('azimuth')
print('pyephem')
print(pyephemout.head())
print('ephem')
print(ephemout.head())
loc = berlin
times = pd.DatetimeIndex(start=datetime.date(2015,6,28), end=datetime.date(2015,6,29), freq='5min')
pyephemout = pvlib.solarposition.pyephem(times.tz_localize(loc.tz), loc.latitude, loc.longitude)
ephemout = pvlib.solarposition.ephemeris(times.tz_localize(loc.tz), loc.latitude, loc.longitude)
pyephemout['elevation'].plot(label='pyephem')
pyephemout['apparent_elevation'].plot(label='pyephem apparent')
ephemout['elevation'].plot(label='ephem')
plt.legend(ncol=2)
plt.title('elevation')
plt.figure()
pyephemout['azimuth'].plot(label='pyephem')
ephemout['azimuth'].plot(label='ephem')
plt.legend(ncol=2)
plt.title('azimuth')
print('pyephem')
print(pyephemout.head())
print('ephem')
print(ephemout.head())
pyephemout['elevation'].plot(label='pyephem')
pyephemout['apparent_elevation'].plot(label='pyephem apparent')
ephemout['elevation'].plot(label='ephem')
ephemout['apparent_elevation'].plot(label='ephem apparent')
plt.legend(ncol=2)
plt.title('elevation')
plt.xlim(pd.Timestamp('2015-06-28 02:00:00+02:00'), pd.Timestamp('2015-06-28 06:00:00+02:00'))
plt.ylim(-10,10)
Explanation: SPA output
End of explanation
times_loc = times.tz_localize(loc.tz)
%%timeit
pyephemout = pvlib.solarposition.pyephem(times_loc, loc.latitude, loc.longitude)
#ephemout = pvlib.solarposition.ephemeris(times, loc)
%%timeit
#pyephemout = pvlib.solarposition.pyephem(times, loc)
ephemout = pvlib.solarposition.ephemeris(times_loc, loc.latitude, loc.longitude)
%%timeit
#pyephemout = pvlib.solarposition.pyephem(times, loc)
ephemout = pvlib.solarposition.get_solarposition(times_loc, loc.latitude, loc.longitude,
method='nrel_numpy')
Explanation: Speed tests
End of explanation
%%timeit
#pyephemout = pvlib.solarposition.pyephem(times, loc)
ephemout = pvlib.solarposition.get_solarposition(times_loc, loc.latitude, loc.longitude,
method='nrel_numba')
Explanation: This numba test will only work properly if you have installed numba.
End of explanation
%%timeit
#pyephemout = pvlib.solarposition.pyephem(times, loc)
ephemout = pvlib.solarposition.get_solarposition(times_loc, loc.latitude, loc.longitude,
method='nrel_numba', numthreads=16)
%%timeit
ephemout = pvlib.solarposition.spa_python(times_loc, loc.latitude, loc.longitude,
how='numba', numthreads=16)
Explanation: The numba calculation takes a long time the first time that it's run because it uses LLVM to compile the Python code to machine code. After that it's about 4-10 times faster depending on your machine. You can pass a numthreads argument to this function. The optimum numthreads depends on your machine and is equal to 4 by default.
End of explanation |
839 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step 0 - hyperparams
Step1: Step 1 - collect data
Step2: (689, 682, 7)
(689, 682, 6)
(689,)
(689, 682)
Step3: Step 2 - Build model
Step4: targets
Tensor("data/strided_slice
Step5: Step 3 training the network
Step6: Conclusion
The autoencoder is not able to represent in a visibly obvious way our price history time series
TS in Two Dimensions
Step7: K-means clustering
Step8: Conclusion
Visually it SEEMS meaningful to have six(6) clusters. But this is a very big if. However this will help us segment our dataset a little bit and actual have the overhead that we are training six(6) different models!
If we want to split them further then 13 groups seems good enough
Storing Groups of SKU ids | Python Code:
factors(689)
max_seq_len = 682
data_path = '../../../../Dropbox/data'
phae_path = data_path + '/price_hist_autoencoder'
npz_dates = phae_path + '/price_history_full_seqs_dates.npz'
assert path.isfile(npz_dates)
npz_train = phae_path + '/price_history_seqs_dates_normed_train.npz'
assert path.isfile(npz_train)
npz_test = phae_path + '/price_history_seqs_dates_normed_test.npz'
assert path.isfile(npz_test)
npz_path = npz_train[:-len('_train.npz')]
for key, val in np.load(npz_train).iteritems():
print key, ",", val.shape
Explanation: Step 0 - hyperparams
End of explanation
# dp = PriceHistoryAutoEncDataProvider(npz_path=npz_path, batch_size=53, with_EOS=False)
# for data in dp.datalist:
# print data.shape
Explanation: Step 1 - collect data
End of explanation
# for item in dp.next():
# print item.shape
Explanation: (689, 682, 7)
(689, 682, 6)
(689,)
(689, 682)
End of explanation
# model = PriceHistoryAutoencoder(rng=random_state, dtype=dtype, config=config)
# graph = model.getGraph(batch_size=53,
# #the way we have it these two must be equal for now
# enc_num_units = 10,
# hidden_enc_num_units = 10,
# hidden_enc_dim = 12,
# hidden_dec_dim = 13,
# #the way we have it these two must be equal for now
# hidden_dec_num_units = 14,
# dec_num_units = 14,
# ts_len=max_seq_len)
Explanation: Step 2 - Build model
End of explanation
#show_graph(graph)
Explanation: targets
Tensor("data/strided_slice:0", shape=(53, 682), dtype=float32)
Tensor("encoder_rnn_layer/rnn/transpose:0", shape=(53, 682, 10), dtype=float32)
Tensor("encoder_rnn_layer/rnn/while/Exit_2:0", shape=(?, 10), dtype=float32)
Tensor("hidden_encoder_rnn_layer/rnn/transpose:0", shape=(53, 682, 10), dtype=float32)
Tensor("hidden_encoder_rnn_layer/rnn/while/Exit_2:0", shape=(?, 10), dtype=float32)
Tensor("encoder_state_out_hidden_process/Elu:0", shape=(?, 12), dtype=float32)
Tensor("encoder_state_out_process/Elu:0", shape=(?, 2), dtype=float32)
Tensor("decoder_state_in_hidden_process/Elu:0", shape=(?, 13), dtype=float32)
Tensor("decoder_state_in_process/Elu:0", shape=(?, 14), dtype=float32)
Tensor("hidden_decoder_rnn_layer/rnn/transpose:0", shape=(53, 682, 14), dtype=float32)
Tensor("hidden_decoder_rnn_layer/rnn/while/Exit_2:0", shape=(?, 14), dtype=float32)
Tensor("decoder_rnn_layer/rnn/transpose:0", shape=(53, 682, 14), dtype=float32)
Tensor("decoder_rnn_layer/rnn/while/Exit_2:0", shape=(?, 14), dtype=float32)
Tensor("decoder_outs/Reshape:0", shape=(36146, 14), dtype=float32)
Tensor("readout_affine/Identity:0", shape=(36146, 1), dtype=float32)
Tensor("readout_affine/Reshape:0", shape=(53, 682), dtype=float32)
Tensor("error/mul_1:0", shape=(53, 682), dtype=float32)
Tensor("error/Mean:0", shape=(), dtype=float32)
Tensor("error/Mean:0", shape=(), dtype=float32)
End of explanation
nn_runs_folder = data_path + '/nn_runs'
filepath = nn_runs_folder + '/034_autoencoder_000.npz'
assert path.isfile(filepath), "we are not training, we are only loading the experiment here"
dyn_stats, preds_dict, targets, twods = get_or_run_nn(callback=None, filename='034_autoencoder_000',
nn_runs_folder = data_path + "/nn_runs")
dyn_stats.plotStats()
plt.show()
r2_scores = [r2_score(y_true=targets[ind], y_pred=preds_dict[ind])
for ind in range(len(targets))]
ind = np.argmin(r2_scores)
ind
reals = targets[ind]
preds = preds_dict[ind]
r2_score(y_true=reals, y_pred=preds)
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
%%time
dtw_scores = [fastdtw(targets[ind], preds_dict[ind])[0]
for ind in range(len(targets))]
np.mean(dtw_scores)
coint(preds, reals)
cur_ind = np.random.randint(len(targets))
reals = targets[cur_ind]
preds = preds_dict[cur_ind]
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b', label='reals')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
Explanation: Step 3 training the network
End of explanation
twod_arr = np.array(twods.values())
twod_arr.shape
plt.figure(figsize=(16,7))
plt.plot(twod_arr[:, 0], twod_arr[:, 1], 'r.')
plt.title('two dimensional representation of our time series after dimensionality reduction')
plt.xlabel('first dimension')
plt.ylabel('second dimension')
plt.show()
Explanation: Conclusion
The autoencoder is not able to represent in a visibly obvious way our price history time series
TS in Two Dimensions
End of explanation
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
km = KMeans(init='random', #init='k-means++',
n_clusters=3, n_init=10)
km.fit(twod_arr)
km.inertia_
silhouette_score(twod_arr, km.labels_)
def getKmeansScores():
for kk in range(2, 20):
km = KMeans(init='random', #init='k-means++',
n_clusters=kk, n_init=100).fit(twod_arr)
score = silhouette_score(twod_arr, km.labels_)
yield kk, score
%%time
dic = OrderedDict(list(getKmeansScores()))
plt.bar(dic.keys(), dic.values())
plt.xticks(dic.keys())
plt.show()
from mylibs.kde_2d_label import kde_2d_label
km = KMeans(init='random', #init='k-means++',
n_clusters=2, n_init=100).fit(twod_arr)
kde_2d_label(twod_arr, km.labels_)
kde_2d_label(twod_arr, KMeans(init='random', n_clusters=3, n_init=100).fit(twod_arr).labels_)
kde_2d_label(twod_arr, KMeans(init='random', n_clusters=4, n_init=100).fit(twod_arr).labels_)
kde_2d_label(twod_arr, KMeans(init='random', n_clusters=5, n_init=100).fit(twod_arr).labels_)
kde_2d_label(twod_arr, KMeans(init='random', n_clusters=6, n_init=100).fit(twod_arr).labels_)
kde_2d_label(twod_arr, KMeans(init='random', n_clusters=7, n_init=100).fit(twod_arr).labels_)
kde_2d_label(twod_arr, KMeans(init='random', n_clusters=8, n_init=100).fit(twod_arr).labels_)
kde_2d_label(twod_arr, KMeans(init='random', n_clusters=13, n_init=100).fit(twod_arr).labels_)
Explanation: K-means clustering
End of explanation
chosen_kk = 10
labels = KMeans(init='random', n_clusters=chosen_kk, n_init=100).fit(twod_arr).labels_
labels.shape
assert np.all(preds_dict.keys() == np.arange(len(preds_dict))), \
"just make sure that the order of the predictions and other outputs are in order"
#since order is not shuffled this simplifies things
assert path.isfile(npz_test)
sku_ids = np.load(npz_test)['sku_ids']
sku_ids.shape
sku_dic = OrderedDict()
for ii in range(chosen_kk):
sku_dic[str(ii)] = sku_ids[labels == ii]
assert np.sum([len(val) for val in sku_dic.values()]) == len(sku_ids), "weak check to ensure every item is assigned"
sku_ids_groups_path = data_path + '/sku_ids_groups'
assert path.isdir(sku_ids_groups_path)
path.abspath(sku_ids_groups_path)
sku_dic.keys()
for key, val in sku_dic.iteritems():
print key, ",", val.shape
npz_sku_ids_group_kmeans = sku_ids_groups_path + '/sku_ids_kmeans_{:02d}.npz'.format(chosen_kk)
np.savez(npz_sku_ids_group_kmeans, **sku_dic)
Explanation: Conclusion
Visually it SEEMS meaningful to have six(6) clusters. But this is a very big if. However this will help us segment our dataset a little bit and actual have the overhead that we are training six(6) different models!
If we want to split them further then 13 groups seems good enough
Storing Groups of SKU ids
End of explanation |
840 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Probability calibration of classifiers
When performing classification you often want to predict not only
the class label, but also the associated probability. This probability
gives you some kind of confidence on the prediction. However, not all
classifiers provide well-calibrated probabilities, some being over-confident
while others being under-confident. Thus, a separate calibration of predicted
probabilities is often desirable as a postprocessing. This example illustrates
two different methods for this calibration and evaluates the quality of the
returned probabilities using Brier's score
(see https
Step1: Plot the data and the predicted probabilities | Python Code:
print(__doc__)
# Author: Mathieu Blondel <[email protected]>
# Alexandre Gramfort <[email protected]>
# Balazs Kegl <[email protected]>
# Jan Hendrik Metzen <[email protected]>
# License: BSD Style.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from sklearn.datasets import make_blobs
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import brier_score_loss
from sklearn.calibration import CalibratedClassifierCV
from sklearn.model_selection import train_test_split
n_samples = 50000
n_bins = 3 # use 3 bins for calibration_curve as we have 3 clusters here
# Generate 3 blobs with 2 classes where the second blob contains
# half positive samples and half negative samples. Probability in this
# blob is therefore 0.5.
centers = [(-5, -5), (0, 0), (5, 5)]
X, y = make_blobs(n_samples=n_samples, n_features=2, cluster_std=1.0,
centers=centers, shuffle=False, random_state=42)
y[:n_samples // 2] = 0
y[n_samples // 2:] = 1
print(y[24990:25010])
print(y.shape[0])
sample_weight = np.random.RandomState(42).rand(y.shape[0])
# split train, test for calibration
X_train, X_test, y_train, y_test, sw_train, sw_test = \
train_test_split(X, y, sample_weight, test_size=0.9, random_state=42)
# Gaussian Naive-Bayes with no calibration
clf = GaussianNB()
clf.fit(X_train, y_train) # GaussianNB itself does not support sample-weights
prob_pos_clf = clf.predict_proba(X_test)[:, 1]
# Gaussian Naive-Bayes with isotonic calibration
clf_isotonic = CalibratedClassifierCV(clf, cv=2, method='isotonic')
clf_isotonic.fit(X_train, y_train, sw_train)
prob_pos_isotonic = clf_isotonic.predict_proba(X_test)[:, 1]
# Gaussian Naive-Bayes with sigmoid calibration
clf_sigmoid = CalibratedClassifierCV(clf, cv=2, method='sigmoid')
clf_sigmoid.fit(X_train, y_train, sw_train)
prob_pos_sigmoid = clf_sigmoid.predict_proba(X_test)[:, 1]
print("Brier scores: (the smaller the better)")
clf_score = brier_score_loss(y_test, prob_pos_clf, sw_test)
print("No calibration: %1.3f" % clf_score)
clf_isotonic_score = brier_score_loss(y_test, prob_pos_isotonic, sw_test)
print("With isotonic calibration: %1.3f" % clf_isotonic_score)
clf_sigmoid_score = brier_score_loss(y_test, prob_pos_sigmoid, sw_test)
print("With sigmoid calibration: %1.3f" % clf_sigmoid_score)
Explanation: Probability calibration of classifiers
When performing classification you often want to predict not only
the class label, but also the associated probability. This probability
gives you some kind of confidence on the prediction. However, not all
classifiers provide well-calibrated probabilities, some being over-confident
while others being under-confident. Thus, a separate calibration of predicted
probabilities is often desirable as a postprocessing. This example illustrates
two different methods for this calibration and evaluates the quality of the
returned probabilities using Brier's score
(see https://en.wikipedia.org/wiki/Brier_score).
Compared are the estimated probability using a Gaussian naive Bayes classifier
without calibration, with a sigmoid calibration, and with a non-parametric
isotonic calibration. One can observe that only the non-parametric model is able
to provide a probability calibration that returns probabilities close to the
expected 0.5 for most of the samples belonging to the middle cluster with
heterogeneous labels. This results in a significantly improved Brier score.
End of explanation
plt.figure()
y_unique = np.unique(y)
colors = cm.rainbow(np.linspace(0.0, 1.0, y_unique.size))
for this_y, color in zip(y_unique, colors):
this_X = X_train[y_train == this_y]
this_sw = sw_train[y_train == this_y]
plt.scatter(this_X[:, 0], this_X[:, 1], s=this_sw * 50, c=color, alpha=0.5,
label="Class %s" % this_y)
plt.legend(loc="best")
plt.title("Data")
plt.show()
plt.figure()
order = np.lexsort((prob_pos_clf, ))
plt.plot(prob_pos_clf[order], 'r', label='No calibration (%1.3f)' % clf_score)
plt.plot(prob_pos_isotonic[order], 'g', linewidth=3,
label='Isotonic calibration (%1.3f)' % clf_isotonic_score)
plt.plot(prob_pos_sigmoid[order], 'b', linewidth=3,
label='Sigmoid calibration (%1.3f)' % clf_sigmoid_score)
plt.plot(np.linspace(0, y_test.size, 51)[1::2],
y_test[order].reshape(25, -1).mean(1),
'k', linewidth=3, label=r'Empirical')
plt.ylim([-0.05, 1.05])
plt.xlabel("Instances sorted according to predicted probability "
"(uncalibrated GNB)")
plt.ylabel("P(y=1)")
plt.legend(loc="upper left")
plt.title("Gaussian naive Bayes probabilities")
plt.show()
Explanation: Plot the data and the predicted probabilities
End of explanation |
841 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: Tensorflow Lite Gesture Classification Example Conversion Script
This guide shows how you can go about converting the model trained with TensorFlowJS to TensorFlow Lite FlatBuffers.
Run all steps in-order. At the end, model.tflite file will be downloaded.
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Cleanup any existing models if necessary
Step3: Upload your Tensorflow.js Artifacts Here
i.e., The weights manifest model.json and the binary weights file model-weights.bin
Step4: Export Configuration
Step8: Model Converter
The following class converts a TensorFlow.js model to a TFLite FlatBuffer | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
!pip install -q tensorflowjs
import traceback
import logging
import tensorflow as tf
import os
from google.colab import files
from tensorflow import keras
from tensorflowjs.converters import load_keras_model
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
Explanation: Tensorflow Lite Gesture Classification Example Conversion Script
This guide shows how you can go about converting the model trained with TensorFlowJS to TensorFlow Lite FlatBuffers.
Run all steps in-order. At the end, model.tflite file will be downloaded.
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/mobile/examples/gesture_classification/ml/tensorflowjs_to_tflite_colab_notebook.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/examples/blob/master/mobile/examples/gesture_classification/ml/tensorflowjs_to_tflite_colab_notebook.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
</table>
Install Dependencies
End of explanation
!rm -rf *.h5 *.tflite *.json *.bin
Explanation: Cleanup any existing models if necessary
End of explanation
files.upload()
Explanation: Upload your Tensorflow.js Artifacts Here
i.e., The weights manifest model.json and the binary weights file model-weights.bin
End of explanation
#@title Export Configuration
# TensorFlow.js arguments
config_json = "model.json" #@param {type:"string"}
weights_path_prefix = None #@param {type:"raw"}
model_tflite = "model.tflite" #@param {type:"string"}
Explanation: Export Configuration
End of explanation
class ModelConverter:
Creates a ModelConverter class from a TensorFlow.js model file.
Args:
:param config_json_path: Full filepath of weights manifest file containing the model architecture.
:param weights_path_prefix: Full filepath to the directory in which the weights binaries exist.
:param tflite_model_file: Name of the TFLite FlatBuffer file to be exported.
:return:
ModelConverter class.
def __init__(self,
config_json_path,
weights_path_prefix,
tflite_model_file
):
self.config_json_path = config_json_path
self.weights_path_prefix = weights_path_prefix
self.tflite_model_file = tflite_model_file
self.keras_model_file = 'merged.h5'
# MobileNet Options
self.input_node_name = 'the_input'
self.image_size = 224
self.alpha = 0.25
self.depth_multiplier = 1
self._input_shape = (1, self.image_size, self.image_size, 3)
self.depthwise_conv_layer = 'conv_pw_13_relu'
def convert(self):
self.save_keras_model()
self._deserialize_tflite_from_keras()
logger.info('The TFLite model has been generated')
def save_keras_model(self):
top_model = load_keras_model(self.config_json_path, self.weights_path_prefix,
weights_data_buffers=None,
load_weights=True,
)
base_model = self.get_base_model()
self._merged_model = self.merge(base_model, top_model)
logger.info("The merged Keras has been generated.")
def merge(self, base_model, top_model):
Merges base model with the classification block
:return: Returns the merged Keras model
logger.info("Initializing model...")
layer = base_model.get_layer(self.depthwise_conv_layer)
model = keras.Model(inputs=base_model.input, outputs=top_model(layer.output))
logger.info("Model created.")
return model
def get_base_model(self):
Builds MobileNet with the default parameters
:return: Returns the base MobileNet model
input_tensor = keras.Input(shape=self._input_shape[1:], name=self.input_node_name)
base_model = keras.applications.MobileNet(input_shape=self._input_shape[1:],
alpha=self.alpha,
depth_multiplier=self.depth_multiplier,
input_tensor=input_tensor,
include_top=False)
return base_model
def _deserialize_tflite_from_keras(self):
converter = tf.lite.TFLiteConverter.from_keras_model(self._merged_model)
tflite_model = converter.convert()
with open(self.tflite_model_file, "wb") as file:
file.write(tflite_model)
try:
converter = ModelConverter(config_json,
weights_path_prefix,
model_tflite)
converter.convert()
except ValueError as e:
print(traceback.format_exc())
print("Error occurred while converting.")
files.download(model_tflite)
Explanation: Model Converter
The following class converts a TensorFlow.js model to a TFLite FlatBuffer
End of explanation |
842 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Loopless FBA
The goal of this procedure is identification of a thermodynamically consistent flux state without loops, as implied by the name.
Usually, the model has the following constraints.
$$ S \cdot v = 0 $$
$$ lb \le v \le ub $$
However, this will allow for thermodynamically infeasible loops (referred to as type 3 loops) to occur, where flux flows around a cycle without any net change of metabolites. For most cases, this is not a major issue, as solutions with these loops can usually be converted to equivalent solutions without them. However, if a flux state is desired which does not exhibit any of these loops, loopless FBA can be used. The formulation used here is modified from Schellenberger et al.
We can make the model irreversible, so that all reactions will satisfy
$$ 0 \le lb \le v \le ub \le \max(ub) $$
We will add in boolean indicators as well, such that
$$ \max(ub) i \ge v $$
$$ i \in {0, 1} $$
We also want to ensure that an entry in the row space of S also exists with negative values wherever v is nonzero. In this expression, $1-i$ acts as a not to indicate inactivity of a reaction.
$$ S^\mathsf T x - (1 - i) (\max(ub) + 1) \le -1 $$
We will construct an LP integrating both constraints.
$$ \left(
\begin{matrix}
S & 0 & 0\
-I & \max(ub)I & 0 \
0 & (\max(ub) + 1)I & S^\mathsf T
\end{matrix}
\right)
\cdot
\left(
\begin{matrix}
v \
i \
x
\end{matrix}
\right)
\begin{matrix}
&=& 0 \
&\ge& 0 \
&\le& \max(ub)
\end{matrix}$$
Note that these extra constraints are not applied to boundary reactions which bring metabolites in and out of the system.
Step1: We will demonstrate with a toy model which has a simple loop cycling A -> B -> C -> A, with A allowed to enter the system and C allowed to leave. A graphical view of the system is drawn below
Step2: While this model contains a loop, a flux state exists which has no flux through reaction v3, and is identified by loopless FBA.
Step3: However, if flux is forced through v3, then there is no longer a feasible loopless solution.
Step4: Loopless FBA is also possible on genome scale models, but it requires a capable MILP solver. | Python Code:
from matplotlib.pylab import *
%matplotlib inline
import cobra.test
from cobra import Reaction, Metabolite, Model
from cobra.flux_analysis.loopless import construct_loopless_model
from cobra.solvers import get_solver_name
Explanation: Loopless FBA
The goal of this procedure is identification of a thermodynamically consistent flux state without loops, as implied by the name.
Usually, the model has the following constraints.
$$ S \cdot v = 0 $$
$$ lb \le v \le ub $$
However, this will allow for thermodynamically infeasible loops (referred to as type 3 loops) to occur, where flux flows around a cycle without any net change of metabolites. For most cases, this is not a major issue, as solutions with these loops can usually be converted to equivalent solutions without them. However, if a flux state is desired which does not exhibit any of these loops, loopless FBA can be used. The formulation used here is modified from Schellenberger et al.
We can make the model irreversible, so that all reactions will satisfy
$$ 0 \le lb \le v \le ub \le \max(ub) $$
We will add in boolean indicators as well, such that
$$ \max(ub) i \ge v $$
$$ i \in {0, 1} $$
We also want to ensure that an entry in the row space of S also exists with negative values wherever v is nonzero. In this expression, $1-i$ acts as a not to indicate inactivity of a reaction.
$$ S^\mathsf T x - (1 - i) (\max(ub) + 1) \le -1 $$
We will construct an LP integrating both constraints.
$$ \left(
\begin{matrix}
S & 0 & 0\
-I & \max(ub)I & 0 \
0 & (\max(ub) + 1)I & S^\mathsf T
\end{matrix}
\right)
\cdot
\left(
\begin{matrix}
v \
i \
x
\end{matrix}
\right)
\begin{matrix}
&=& 0 \
&\ge& 0 \
&\le& \max(ub)
\end{matrix}$$
Note that these extra constraints are not applied to boundary reactions which bring metabolites in and out of the system.
End of explanation
figure(figsize=(10.5, 4.5), frameon=False)
gca().axis("off")
xlim(0.5, 3.5)
ylim(0.7, 2.2)
arrow_params = {"head_length": 0.08, "head_width": 0.1, "ec": "k", "fc": "k"}
text_params = {"fontsize": 25, "horizontalalignment": "center", "verticalalignment": "center"}
arrow(0.5, 1, 0.85, 0, **arrow_params) # EX_A
arrow(1.5, 1, 0.425, 0.736, **arrow_params) # v1
arrow(2.04, 1.82, 0.42, -0.72, **arrow_params) # v2
arrow(2.4, 1, -0.75, 0, **arrow_params) # v3
arrow(2.6, 1, 0.75, 0, **arrow_params)
# reaction labels
text(0.9, 1.15, "EX_A", **text_params)
text(1.6, 1.5, r"v$_1$", **text_params)
text(2.4, 1.5, r"v$_2$", **text_params)
text(2, 0.85, r"v$_3$", **text_params)
text(2.9, 1.15, "DM_C", **text_params)
# metabolite labels
scatter(1.5, 1, s=250, color='#c994c7')
text(1.5, 0.9, "A", **text_params)
scatter(2, 1.84, s=250, color='#c994c7')
text(2, 1.95, "B", **text_params)
scatter(2.5, 1, s=250, color='#c994c7')
text(2.5, 0.9, "C", **text_params);
test_model = Model()
test_model.add_metabolites(Metabolite("A"))
test_model.add_metabolites(Metabolite("B"))
test_model.add_metabolites(Metabolite("C"))
EX_A = Reaction("EX_A")
EX_A.add_metabolites({test_model.metabolites.A: 1})
DM_C = Reaction("DM_C")
DM_C.add_metabolites({test_model.metabolites.C: -1})
v1 = Reaction("v1")
v1.add_metabolites({test_model.metabolites.A: -1, test_model.metabolites.B: 1})
v2 = Reaction("v2")
v2.add_metabolites({test_model.metabolites.B: -1, test_model.metabolites.C: 1})
v3 = Reaction("v3")
v3.add_metabolites({test_model.metabolites.C: -1, test_model.metabolites.A: 1})
DM_C.objective_coefficient = 1
test_model.add_reactions([EX_A, DM_C, v1, v2, v3])
Explanation: We will demonstrate with a toy model which has a simple loop cycling A -> B -> C -> A, with A allowed to enter the system and C allowed to leave. A graphical view of the system is drawn below:
End of explanation
construct_loopless_model(test_model).optimize()
Explanation: While this model contains a loop, a flux state exists which has no flux through reaction v3, and is identified by loopless FBA.
End of explanation
v3.lower_bound = 1
construct_loopless_model(test_model).optimize()
Explanation: However, if flux is forced through v3, then there is no longer a feasible loopless solution.
End of explanation
salmonella = cobra.test.create_test_model("salmonella")
construct_loopless_model(salmonella).optimize(solver=get_solver_name(mip=True))
ecoli = cobra.test.create_test_model("ecoli")
construct_loopless_model(ecoli).optimize(solver=get_solver_name(mip=True))
Explanation: Loopless FBA is also possible on genome scale models, but it requires a capable MILP solver.
End of explanation |
843 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classification
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License
Step1: Classification might be the most well-known application of Bayesian methods, made famous in the 1990s as the basis of the first generation of spam filters.
In this chapter, I'll demonstrate Bayesian classification using data collected and made available by Dr. Kristen Gorman at the Palmer Long-Term Ecological Research Station in Antarctica (see Gorman, Williams, and Fraser, "Ecological Sexual Dimorphism and Environmental Variability within a Community of Antarctic Penguins (Genus Pygoscelis)", March 2014).
We'll use this data to classify penguins by species.
The following cell downloads the raw data.
Step2: Penguin Data
I'll use Pandas to load the data into a DataFrame.
Step3: The dataset contains one row for each penguin and one column for each variable.
Step4: For convenience, I'll create a new column called Species2 that contains a shorter version of the species names.
Step6: Three species of penguins are represented in the dataset
Step8: The following function plots a Cdf of the values in the given column for each species
Step9: Here's what the distributions look like for culmen length.
Step10: It looks like we can use culmen length to identify Adélie penguins, but the distributions for the other two species almost entirely overlap.
Here are the distributions for flipper length.
Step11: Using flipper length, we can distinguish Gentoo penguins from the other two species. So with just these two features, it seems like we should be able to classify penguins with some accuracy.
All of these CDFs show the sigmoid shape characteristic of the normal distribution; I will take advantage of that observation in the next section.
Here are the distributions for culmen depth.
Step12: And here are the distributions of body mass.
Step14: Culmen depth and body mass distinguish Gentoo penguins from the other two species, but these features might not add a lot of additional information, beyond what we get from flipper length and culmen length.
Normal Models
Let's use these features to classify penguins. We'll proceed in the usual Bayesian way
Step15: For example, here's the dictionary of norm objects for flipper length
Step16: Now suppose we measure a penguin and find that its flipper is 193 cm. What is the probability of that measurement under each hypothesis?
The norm object provides pdf, which computes the probability density function (PDF) of the normal distribution. We can use it to compute the likelihood of the observed data in a given distribution.
Step17: The result is a probability density, so we can't interpret it as a probability. But it is proportional to the likelihood of the data, so we can use it to update the prior.
Here's how we compute the likelihood of the data in each distribution.
Step18: Now we're ready to do the update.
The Update
As usual I'll use a Pmf to represent the prior distribution. For simplicity, let's assume that the three species are equally likely.
Step19: Now we can do the update in the usual way.
Step21: A penguin with a 193 mm flipper is unlikely to be a Gentoo, but might be either an Adélie or Chinstrap (assuming that the three species were equally likely before the measurement).
The following function encapsulates the steps we just ran.
It takes a Pmf representing the prior distribution, the observed data, and a map from each hypothesis to the distribution of the feature.
Step22: The return value is the posterior distribution.
Here's the previous example again, using update_penguin
Step23: As we saw in the CDFs, flipper length does not distinguish strongly between Adélie and Chinstrap penguins.
But culmen length can make this distinction, so let's use it to do a second round of classification.
First we estimate distributions of culmen length for each species like this
Step24: Now suppose we see a penguin with culmen length 48 mm.
We can use this data to update the prior.
Step26: A penguin with culmen length 48 mm is about equally likely to be a Chinstrap or Gentoo.
Using one feature at a time, we can often rule out one species or another, but we generally can't identify species with confidence.
We can do better using multiple features.
Naive Bayesian Classification
To make it easier to do multiple updates, I'll use the following function, which takes a prior Pmf, a sequence of measurements and a corresponding sequence of dictionaries containing estimated distributions.
Step27: It performs a series of updates, using one variable at a time, and returns the posterior Pmf.
To test it, I'll use the same features we looked at in the previous section
Step28: Now suppose we find a penguin with flipper length 193 mm and culmen length 48.
Here's the update
Step29: It is almost certain to be a Chinstrap.
Step30: We can loop through the dataset and classify each penguin with these two features.
Step31: This loop adds a column called Classification to the DataFrame; it contains the species with the maximum posterior probability for each penguin.
So let's see how many we got right.
Step32: There are 344 penguins in the dataset, but two of them are missing measurements, so we have 342 valid cases.
Of those, 324 are classified correctly, which is almost 95%.
Step34: The following function encapsulates these steps.
Step36: The classifier we used in this section is called "naive" because it ignores correlations between the features. To see why that matters, I'll make a less naive classifier
Step37: Here's a scatter plot of culmen length and flipper length for the three species.
Step39: Within each species, the joint distribution of these measurements forms an oval shape, at least roughly. The orientation of the ovals is along a diagonal, which indicates that there is a correlation between culmen length and flipper length.
If we ignore these correlations, we are assuming that the features are independent. To see what that looks like, I'll make a joint distribution for each species assuming independence.
The following function makes a discrete Pmf that approximates a normal distribution.
Step40: We can use it, along with make_joint, to make a joint distribution of culmen length and flipper length for each species.
Step41: The following figure compares a scatter plot of the data to the contours of the joint distributions, assuming independence.
Step42: The contours of a joint normal distribution form ellipses.
In this example, because the features are uncorrelated, the ellipses are aligned with the axes.
But they are not well aligned with the data.
We can make a better model of the data, and use it to compute better likelihoods, with a multivariate normal distribution.
Multivariate Normal Distribution
As we have seen, a univariate normal distribution is characterized by its mean and standard deviation.
A multivariate normal distribution is characterized by the means of the features and the covariance matrix, which contains variances, which quantify the spread of the features, and the covariances, which quantify the relationships among them.
We can use the data to estimate the means and covariance matrix for the population of penguins.
First I'll select the columns we want.
Step43: And compute the means.
Step44: We can also compute the covariance matrix
Step45: The result is a DataFrame with one row and one column for each feature. The elements on the diagonal are the variances; the elements off the diagonal are covariances.
By themselves, variances and covariances are hard to interpret. We can use them to compute standard deviations and correlation coefficients, which are easier to interpret, but the details of that calculation are not important right now.
Instead, we'll pass the covariance matrix to multivariate_normal, which is a SciPy function that creates an object that represents a multivariate normal distribution.
As arguments it takes a sequence of means and a covariance matrix
Step47: The following function makes a multivariate_normal object for each species.
Step48: Here's how we make this map for the first two features, flipper length and culmen length.
Step49: Visualizing a Multivariate Normal Distribution
This section uses some NumPy magic to generate contour plots for multivariate normal distributions. If that's interesting for you, great! Otherwise, feel free to skip to the results. In the next section we'll do the actual classification, which turns out to be easier than the visualization.
I'll start by making a contour map for the distribution of features among Adélie penguins.
Here are the univariate distributions for the two features we'll use and the multivariate distribution we just computed.
Step50: I'll make a discrete Pmf approximation for each of the univariate distributions.
Step51: And use them to make a mesh grid that contains all pairs of values.
Step52: The mesh is represented by two arrays
Step53: The result is a 3-D array that you can think of as a 2-D array of pairs. When we pass this array to multinorm.pdf, it evaluates the probability density function of the distribution for each pair of values.
Step54: The result is an array of probability densities. If we put them in a DataFrame and normalize them, the result is a discrete approximation of the joint distribution of the two features.
Step55: Here's what the result looks like.
Step57: The contours of a multivariate normal distribution are still ellipses, but now that we have taken into account the correlation between the features, the ellipses are no longer aligned with the axes.
The following function encapsulate the steps we just did.
Step58: The following figure shows a scatter plot of the data along with the contours of the multivariate normal distribution for each species.
Step60: Because the multivariate normal distribution takes into account the correlations between features, it is a better model for the data. And there is less overlap in the contours of the three distributions, which suggests that they should yield better classifications.
A Less Naive Classifier
In a previous section we used update_penguin to update a prior Pmf based on observed data and a collection of norm objects that model the distribution of observations under each hypothesis. Here it is again
Step61: Last time we used this function, the values in norm_map were norm objects, but it also works if they are multivariate_normal objects.
We can use it to classify a penguin with flipper length 193 and culmen length 48
Step62: A penguin with those measurements is almost certainly a Chinstrap.
Now let's see if this classifier does any better than the naive Bayesian classifier.
I'll apply it to each penguin in the dataset
Step63: And compute the accuracy
Step64: It turns out to be only a little better
Step65: Exercise
Step68: OK, you can finish it off from here. | Python Code:
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print('Downloaded ' + local)
download('https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py')
from utils import set_pyplot_params
set_pyplot_params()
from utils import Or70, Pu50, Gr30
color_list3 = [Or70, Pu50, Gr30]
import matplotlib.pyplot as plt
from cycler import cycler
marker_cycle = cycler(marker=['s', 'o', '^'])
color_cycle = cycler(color=color_list3)
line_cycle = cycler(linestyle=['-', '--', ':'])
plt.rcParams['axes.prop_cycle'] = (color_cycle +
marker_cycle +
line_cycle)
Explanation: Classification
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
End of explanation
# Load the data files from
# https://github.com/allisonhorst/palmerpenguins
# With gratitude to Allison Horst (@allison_horst)
download('https://github.com/allisonhorst/palmerpenguins/raw/master/inst/extdata/penguins_raw.csv')
Explanation: Classification might be the most well-known application of Bayesian methods, made famous in the 1990s as the basis of the first generation of spam filters.
In this chapter, I'll demonstrate Bayesian classification using data collected and made available by Dr. Kristen Gorman at the Palmer Long-Term Ecological Research Station in Antarctica (see Gorman, Williams, and Fraser, "Ecological Sexual Dimorphism and Environmental Variability within a Community of Antarctic Penguins (Genus Pygoscelis)", March 2014).
We'll use this data to classify penguins by species.
The following cell downloads the raw data.
End of explanation
import pandas as pd
df = pd.read_csv('penguins_raw.csv')
df.shape
Explanation: Penguin Data
I'll use Pandas to load the data into a DataFrame.
End of explanation
df.head()
Explanation: The dataset contains one row for each penguin and one column for each variable.
End of explanation
def shorten(species):
return species.split()[0]
df['Species2'] = df['Species'].apply(shorten)
Explanation: For convenience, I'll create a new column called Species2 that contains a shorter version of the species names.
End of explanation
def make_cdf_map(df, colname, by='Species2'):
Make a CDF for each species.
cdf_map = {}
grouped = df.groupby(by)[colname]
for species, group in grouped:
cdf_map[species] = Cdf.from_seq(group, name=species)
return cdf_map
Explanation: Three species of penguins are represented in the dataset: Adélie, Chinstrap and Gentoo.
These species are shown in this illustration (by Allison Horst, available under the CC-BY license):
<img width="400" src="https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/images/EaAWkZ0U4AA1CQf.jpeg" alt="Drawing of three penguin species">
The measurements we'll use are:
Body Mass in grams (g).
Flipper Length in millimeters (mm).
Culmen Length in millimeters.
Culmen Depth in millimeters.
If you are not familiar with the word "culmen", it refers to the top margin of the beak.
The culmen is shown in the following illustration (also by Allison Horst):
<img width="300" src="https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/images/EaAXQn8U4AAoKUj.jpeg">
These measurements will be most useful for classification if there are substantial differences between species and small variation within species. To see whether that is true, and to what degree, I'll plot cumulative distribution functions (CDFs) of each measurement for each species.
The following function takes the DataFrame and a column name.
It returns a dictionary that maps from each species name to a Cdf of the values in the column named colname.
End of explanation
from empiricaldist import Cdf
from utils import decorate
def plot_cdfs(df, colname, by='Species2'):
Make a CDF for each species.
df: DataFrame
colname: string column name
by: string column name
returns: dictionary from species name to Cdf
cdf_map = make_cdf_map(df, colname, by)
for species, cdf in cdf_map.items():
cdf.plot(label=species, marker='')
decorate(xlabel=colname,
ylabel='CDF')
Explanation: The following function plots a Cdf of the values in the given column for each species:
End of explanation
colname = 'Culmen Length (mm)'
plot_cdfs(df, colname)
Explanation: Here's what the distributions look like for culmen length.
End of explanation
colname = 'Flipper Length (mm)'
plot_cdfs(df, colname)
Explanation: It looks like we can use culmen length to identify Adélie penguins, but the distributions for the other two species almost entirely overlap.
Here are the distributions for flipper length.
End of explanation
colname = 'Culmen Depth (mm)'
plot_cdfs(df, colname)
Explanation: Using flipper length, we can distinguish Gentoo penguins from the other two species. So with just these two features, it seems like we should be able to classify penguins with some accuracy.
All of these CDFs show the sigmoid shape characteristic of the normal distribution; I will take advantage of that observation in the next section.
Here are the distributions for culmen depth.
End of explanation
colname = 'Body Mass (g)'
plot_cdfs(df, colname)
Explanation: And here are the distributions of body mass.
End of explanation
from scipy.stats import norm
def make_norm_map(df, colname, by='Species2'):
Make a map from species to norm object.
norm_map = {}
grouped = df.groupby(by)[colname]
for species, group in grouped:
mean = group.mean()
std = group.std()
norm_map[species] = norm(mean, std)
return norm_map
Explanation: Culmen depth and body mass distinguish Gentoo penguins from the other two species, but these features might not add a lot of additional information, beyond what we get from flipper length and culmen length.
Normal Models
Let's use these features to classify penguins. We'll proceed in the usual Bayesian way:
Define a prior distribution with the three possible species and a prior probability for each,
Compute the likelihood of the data for each hypothetical species, and then
Compute the posterior probability of each hypothesis.
To compute the likelihood of the data under each hypothesis, I'll use the data to estimate the parameters of a normal distribution for each species.
The following function takes a DataFrame and a column name; it returns a dictionary that maps from each species name to a norm object.
norm is defined in SciPy; it represents a normal distribution with a given mean and standard deviation.
End of explanation
flipper_map = make_norm_map(df, 'Flipper Length (mm)')
flipper_map.keys()
Explanation: For example, here's the dictionary of norm objects for flipper length:
End of explanation
data = 193
flipper_map['Adelie'].pdf(data)
Explanation: Now suppose we measure a penguin and find that its flipper is 193 cm. What is the probability of that measurement under each hypothesis?
The norm object provides pdf, which computes the probability density function (PDF) of the normal distribution. We can use it to compute the likelihood of the observed data in a given distribution.
End of explanation
hypos = flipper_map.keys()
likelihood = [flipper_map[hypo].pdf(data) for hypo in hypos]
likelihood
Explanation: The result is a probability density, so we can't interpret it as a probability. But it is proportional to the likelihood of the data, so we can use it to update the prior.
Here's how we compute the likelihood of the data in each distribution.
End of explanation
from empiricaldist import Pmf
prior = Pmf(1/3, hypos)
prior
Explanation: Now we're ready to do the update.
The Update
As usual I'll use a Pmf to represent the prior distribution. For simplicity, let's assume that the three species are equally likely.
End of explanation
posterior = prior * likelihood
posterior.normalize()
posterior
Explanation: Now we can do the update in the usual way.
End of explanation
def update_penguin(prior, data, norm_map):
Update hypothetical species.
hypos = prior.qs
likelihood = [norm_map[hypo].pdf(data) for hypo in hypos]
posterior = prior * likelihood
posterior.normalize()
return posterior
Explanation: A penguin with a 193 mm flipper is unlikely to be a Gentoo, but might be either an Adélie or Chinstrap (assuming that the three species were equally likely before the measurement).
The following function encapsulates the steps we just ran.
It takes a Pmf representing the prior distribution, the observed data, and a map from each hypothesis to the distribution of the feature.
End of explanation
posterior1 = update_penguin(prior, 193, flipper_map)
posterior1
Explanation: The return value is the posterior distribution.
Here's the previous example again, using update_penguin:
End of explanation
culmen_map = make_norm_map(df, 'Culmen Length (mm)')
Explanation: As we saw in the CDFs, flipper length does not distinguish strongly between Adélie and Chinstrap penguins.
But culmen length can make this distinction, so let's use it to do a second round of classification.
First we estimate distributions of culmen length for each species like this:
End of explanation
posterior2 = update_penguin(prior, 48, culmen_map)
posterior2
Explanation: Now suppose we see a penguin with culmen length 48 mm.
We can use this data to update the prior.
End of explanation
def update_naive(prior, data_seq, norm_maps):
Naive Bayesian classifier
prior: Pmf
data_seq: sequence of measurements
norm_maps: sequence of maps from species to distribution
returns: Pmf representing the posterior distribution
posterior = prior.copy()
for data, norm_map in zip(data_seq, norm_maps):
posterior = update_penguin(posterior, data, norm_map)
return posterior
Explanation: A penguin with culmen length 48 mm is about equally likely to be a Chinstrap or Gentoo.
Using one feature at a time, we can often rule out one species or another, but we generally can't identify species with confidence.
We can do better using multiple features.
Naive Bayesian Classification
To make it easier to do multiple updates, I'll use the following function, which takes a prior Pmf, a sequence of measurements and a corresponding sequence of dictionaries containing estimated distributions.
End of explanation
colnames = ['Flipper Length (mm)', 'Culmen Length (mm)']
norm_maps = [flipper_map, culmen_map]
Explanation: It performs a series of updates, using one variable at a time, and returns the posterior Pmf.
To test it, I'll use the same features we looked at in the previous section: culmen length and flipper length.
End of explanation
data_seq = 193, 48
posterior = update_naive(prior, data_seq, norm_maps)
posterior
Explanation: Now suppose we find a penguin with flipper length 193 mm and culmen length 48.
Here's the update:
End of explanation
posterior.max_prob()
Explanation: It is almost certain to be a Chinstrap.
End of explanation
import numpy as np
df['Classification'] = np.nan
for i, row in df.iterrows():
data_seq = row[colnames]
posterior = update_naive(prior, data_seq, norm_maps)
df.loc[i, 'Classification'] = posterior.max_prob()
Explanation: We can loop through the dataset and classify each penguin with these two features.
End of explanation
len(df)
valid = df['Classification'].notna()
valid.sum()
same = df['Species2'] == df['Classification']
same.sum()
Explanation: This loop adds a column called Classification to the DataFrame; it contains the species with the maximum posterior probability for each penguin.
So let's see how many we got right.
End of explanation
same.sum() / valid.sum()
Explanation: There are 344 penguins in the dataset, but two of them are missing measurements, so we have 342 valid cases.
Of those, 324 are classified correctly, which is almost 95%.
End of explanation
def accuracy(df):
Compute the accuracy of classification.
valid = df['Classification'].notna()
same = df['Species2'] == df['Classification']
return same.sum() / valid.sum()
Explanation: The following function encapsulates these steps.
End of explanation
import matplotlib.pyplot as plt
def scatterplot(df, var1, var2):
Make a scatter plot.
grouped = df.groupby('Species2')
for species, group in grouped:
plt.plot(group[var1], group[var2],
label=species, lw=0, alpha=0.3)
decorate(xlabel=var1, ylabel=var2)
Explanation: The classifier we used in this section is called "naive" because it ignores correlations between the features. To see why that matters, I'll make a less naive classifier: one that takes into account the joint distribution of the features.
Joint Distributions
I'll start by making a scatter plot of the data.
End of explanation
var1 = 'Flipper Length (mm)'
var2 = 'Culmen Length (mm)'
scatterplot(df, var1, var2)
Explanation: Here's a scatter plot of culmen length and flipper length for the three species.
End of explanation
def make_pmf_norm(dist, sigmas=3, n=101):
Make a Pmf approximation to a normal distribution.
mean, std = dist.mean(), dist.std()
low = mean - sigmas * std
high = mean + sigmas * std
qs = np.linspace(low, high, n)
ps = dist.pdf(qs)
pmf = Pmf(ps, qs)
pmf.normalize()
return pmf
Explanation: Within each species, the joint distribution of these measurements forms an oval shape, at least roughly. The orientation of the ovals is along a diagonal, which indicates that there is a correlation between culmen length and flipper length.
If we ignore these correlations, we are assuming that the features are independent. To see what that looks like, I'll make a joint distribution for each species assuming independence.
The following function makes a discrete Pmf that approximates a normal distribution.
End of explanation
from utils import make_joint
joint_map = {}
for species in hypos:
pmf1 = make_pmf_norm(flipper_map[species])
pmf2 = make_pmf_norm(culmen_map[species])
joint_map[species] = make_joint(pmf1, pmf2)
Explanation: We can use it, along with make_joint, to make a joint distribution of culmen length and flipper length for each species.
End of explanation
from utils import plot_contour
scatterplot(df, var1, var2)
for species in hypos:
plot_contour(joint_map[species], alpha=0.5)
Explanation: The following figure compares a scatter plot of the data to the contours of the joint distributions, assuming independence.
End of explanation
features = df[[var1, var2]]
Explanation: The contours of a joint normal distribution form ellipses.
In this example, because the features are uncorrelated, the ellipses are aligned with the axes.
But they are not well aligned with the data.
We can make a better model of the data, and use it to compute better likelihoods, with a multivariate normal distribution.
Multivariate Normal Distribution
As we have seen, a univariate normal distribution is characterized by its mean and standard deviation.
A multivariate normal distribution is characterized by the means of the features and the covariance matrix, which contains variances, which quantify the spread of the features, and the covariances, which quantify the relationships among them.
We can use the data to estimate the means and covariance matrix for the population of penguins.
First I'll select the columns we want.
End of explanation
mean = features.mean()
mean
Explanation: And compute the means.
End of explanation
cov = features.cov()
cov
Explanation: We can also compute the covariance matrix:
End of explanation
from scipy.stats import multivariate_normal
multinorm = multivariate_normal(mean, cov)
Explanation: The result is a DataFrame with one row and one column for each feature. The elements on the diagonal are the variances; the elements off the diagonal are covariances.
By themselves, variances and covariances are hard to interpret. We can use them to compute standard deviations and correlation coefficients, which are easier to interpret, but the details of that calculation are not important right now.
Instead, we'll pass the covariance matrix to multivariate_normal, which is a SciPy function that creates an object that represents a multivariate normal distribution.
As arguments it takes a sequence of means and a covariance matrix:
End of explanation
def make_multinorm_map(df, colnames):
Make a map from each species to a multivariate normal.
multinorm_map = {}
grouped = df.groupby('Species2')
for species, group in grouped:
features = group[colnames]
mean = features.mean()
cov = features.cov()
multinorm_map[species] = multivariate_normal(mean, cov)
return multinorm_map
Explanation: The following function makes a multivariate_normal object for each species.
End of explanation
multinorm_map = make_multinorm_map(df, [var1, var2])
Explanation: Here's how we make this map for the first two features, flipper length and culmen length.
End of explanation
norm1 = flipper_map['Adelie']
norm2 = culmen_map['Adelie']
multinorm = multinorm_map['Adelie']
Explanation: Visualizing a Multivariate Normal Distribution
This section uses some NumPy magic to generate contour plots for multivariate normal distributions. If that's interesting for you, great! Otherwise, feel free to skip to the results. In the next section we'll do the actual classification, which turns out to be easier than the visualization.
I'll start by making a contour map for the distribution of features among Adélie penguins.
Here are the univariate distributions for the two features we'll use and the multivariate distribution we just computed.
End of explanation
pmf1 = make_pmf_norm(norm1)
pmf2 = make_pmf_norm(norm2)
Explanation: I'll make a discrete Pmf approximation for each of the univariate distributions.
End of explanation
X, Y = np.meshgrid(pmf1.qs, pmf2.qs)
X.shape
Explanation: And use them to make a mesh grid that contains all pairs of values.
End of explanation
pos = np.dstack((X, Y))
pos.shape
Explanation: The mesh is represented by two arrays: the first contains the quantities from pmf1 along the x axis; the second contains the quantities from pmf2 along the y axis.
In order to evaluate the multivariate distribution for each pair of values, we have to "stack" the arrays.
End of explanation
densities = multinorm.pdf(pos)
densities.shape
Explanation: The result is a 3-D array that you can think of as a 2-D array of pairs. When we pass this array to multinorm.pdf, it evaluates the probability density function of the distribution for each pair of values.
End of explanation
from utils import normalize
joint = pd.DataFrame(densities, columns=pmf1.qs, index=pmf2.qs)
normalize(joint)
Explanation: The result is an array of probability densities. If we put them in a DataFrame and normalize them, the result is a discrete approximation of the joint distribution of the two features.
End of explanation
plot_contour(joint)
decorate(xlabel=var1,
ylabel=var2)
Explanation: Here's what the result looks like.
End of explanation
def make_joint(norm1, norm2, multinorm):
Make a joint distribution.
norm1: `norm` object representing the distribution of the first feature
norm2: `norm` object representing the distribution of the second feature
multinorm: `multivariate_normal` object representing the joint distribution
pmf1 = make_pmf_norm(norm1)
pmf2 = make_pmf_norm(norm2)
X, Y = np.meshgrid(pmf1.qs, pmf2.qs)
pos = np.dstack((X, Y))
densities = multinorm.pdf(pos)
joint = pd.DataFrame(densities, columns=pmf1.qs, index=pmf2.qs)
return joint
Explanation: The contours of a multivariate normal distribution are still ellipses, but now that we have taken into account the correlation between the features, the ellipses are no longer aligned with the axes.
The following function encapsulate the steps we just did.
End of explanation
scatterplot(df, var1, var2)
for species in hypos:
norm1 = flipper_map[species]
norm2 = culmen_map[species]
multinorm = multinorm_map[species]
joint = make_joint(norm1, norm2, multinorm)
plot_contour(joint, alpha=0.5)
Explanation: The following figure shows a scatter plot of the data along with the contours of the multivariate normal distribution for each species.
End of explanation
def update_penguin(prior, data, norm_map):
Update hypothetical species.
hypos = prior.qs
likelihood = [norm_map[hypo].pdf(data) for hypo in hypos]
posterior = prior * likelihood
posterior.normalize()
return posterior
Explanation: Because the multivariate normal distribution takes into account the correlations between features, it is a better model for the data. And there is less overlap in the contours of the three distributions, which suggests that they should yield better classifications.
A Less Naive Classifier
In a previous section we used update_penguin to update a prior Pmf based on observed data and a collection of norm objects that model the distribution of observations under each hypothesis. Here it is again:
End of explanation
data = 193, 48
update_penguin(prior, data, multinorm_map)
Explanation: Last time we used this function, the values in norm_map were norm objects, but it also works if they are multivariate_normal objects.
We can use it to classify a penguin with flipper length 193 and culmen length 48:
End of explanation
df['Classification'] = np.nan
for i, row in df.iterrows():
data = row[colnames]
posterior = update_penguin(prior, data, multinorm_map)
df.loc[i, 'Classification'] = posterior.idxmax()
Explanation: A penguin with those measurements is almost certainly a Chinstrap.
Now let's see if this classifier does any better than the naive Bayesian classifier.
I'll apply it to each penguin in the dataset:
End of explanation
accuracy(df)
Explanation: And compute the accuracy:
End of explanation
# Solution
# Here are the norm maps for the other two features
depth_map = make_norm_map(df, 'Culmen Depth (mm)')
mass_map = make_norm_map(df, 'Body Mass (g)')
# Solution
# And here are sequences for the features and the norm maps
colnames4 = ['Culmen Length (mm)', 'Flipper Length (mm)',
'Culmen Depth (mm)', 'Body Mass (g)']
norm_maps4 = [culmen_map, flipper_map,
depth_map, mass_map]
# Solution
# Now let's classify and compute accuracy.
# We can do a little better with all four features,
# almost 97% accuracy
df['Classification'] = np.nan
for i, row in df.iterrows():
data_seq = row[colnames4]
posterior = update_naive(prior, data_seq, norm_maps4)
df.loc[i, 'Classification'] = posterior.max_prob()
accuracy(df)
Explanation: It turns out to be only a little better: the accuracy is 95.3%, compared to 94.7% for the naive Bayesian classifier.
Summary
In this chapter, we implemented a naive Bayesian classifier, which is "naive" in the sense that it assumes that the features it uses for classification are independent.
To see how bad that assumption is, we also implemented a classifier that uses a multivariate normal distribution to model the joint distribution of the features, which includes their dependencies.
In this example, the non-naive classifier is only marginally better.
In one way, that's disappointing. After all that work, it would have been nice to see a bigger improvement.
But in another way, it's good news. In general, a naive Bayesian classifier is easier to implement and requires less computation. If it works nearly as well as a more complex algorithm, it might be a good choice for practical purposes.
Speaking of practical purposes, you might have noticed that this example isn't very useful. If we want to identify the species of a penguin, there are easier ways than measuring its flippers and beak.
But there are scientific uses for this type of classification. One of them is the subject of the research paper we started with: sexual dimorphism, that is, differences in shape between male and female animals.
In some species, like angler fish, males and females look very different. In other species, like mockingbirds, they are difficult to tell apart.
And dimorphism is worth studying because it provides insight into social behavior, sexual selection, and evolution.
One way to quantify the degree of sexual dimorphism in a species is to use a classification algorithm like the one in this chapter. If you can find a set of features that makes it possible to classify individuals by sex with high accuracy, that's evidence of high dimorphism.
As an exercise, you can use the dataset from this chapter to classify penguins by sex and see which of the three species is the most dimorphic.
Exercises
Exercise: In my example I used culmen length and flipper length because they seemed to provide the most power to distinguish the three species. But maybe we can do better by using more features.
Make a naive Bayesian classifier that uses all four measurements in the dataset: culmen length and depth, flipper length, and body mass.
Is it more accurate than the model with two features?
End of explanation
gentoo = (df['Species2'] == 'Gentoo')
subset = df[gentoo].copy()
subset['Sex'].value_counts()
valid = df['Sex'] != '.'
valid.sum()
subset = df[valid & gentoo].copy()
Explanation: Exercise: One of the reasons the penguin dataset was collected was to quantify sexual dimorphism in different penguin species, that is, physical differences between male and female penguins. One way to quantify dimorphism is to use measurements to classify penguins by sex. If a species is more dimorphic, we expect to be able to classify them more accurately.
As an exercise, pick a species and use a Bayesian classifier (naive or not) to classify the penguins by sex. Which features are most useful? What accuracy can you achieve?
Note: One Gentoo penguin has an invalid value for Sex. I used the following code to select one species and filter out invalid data.
End of explanation
# Solution
# Here are the feature distributions grouped by sex
plot_cdfs(subset, 'Culmen Length (mm)', by='Sex')
# Solution
plot_cdfs(subset, 'Culmen Depth (mm)', by='Sex')
# Solution
plot_cdfs(subset, 'Flipper Length (mm)', by='Sex')
# Solution
plot_cdfs(subset, 'Body Mass (g)', by='Sex')
# Solution
# Here are the norm maps for the features, grouped by sex
culmen_map = make_norm_map(subset, 'Culmen Length (mm)', by='Sex')
flipper_map = make_norm_map(subset, 'Flipper Length (mm)', by='Sex')
depth_map = make_norm_map(subset, 'Culmen Depth (mm)', by='Sex')
mass_map = make_norm_map(subset, 'Body Mass (g)', by='Sex')
# Solution
# And here are the sequences we need for `update_naive`
norm_maps4 = [culmen_map, flipper_map, depth_map, mass_map]
colnames4 = ['Culmen Length (mm)', 'Flipper Length (mm)',
'Culmen Depth (mm)', 'Body Mass (g)']
# Solution
# Here's the prior
hypos = culmen_map.keys()
prior = Pmf(1/2, hypos)
prior
# Solution
# And the update
subset['Classification'] = np.nan
for i, row in subset.iterrows():
data_seq = row[colnames4]
posterior = update_naive(prior, data_seq, norm_maps4)
subset.loc[i, 'Classification'] = posterior.max_prob()
# Solution
# This function computes accuracy
def accuracy_sex(df):
Compute the accuracy of classification.
Compares columns Classification and Sex
df: DataFrame
valid = df['Classification'].notna()
same = df['Sex'] == df['Classification']
return same.sum() / valid.sum()
# Solution
# Using these features we can classify Gentoo penguins by
# sex with almost 92% accuracy
accuracy_sex(subset)
# Solution
# Here's the whole process in a function so we can
# classify the other species
def classify_by_sex(subset):
Run the whole classification process.
subset: DataFrame
culmen_map = make_norm_map(subset, 'Culmen Length (mm)', by='Sex')
flipper_map = make_norm_map(subset, 'Flipper Length (mm)', by='Sex')
depth_map = make_norm_map(subset, 'Culmen Depth (mm)', by='Sex')
mass_map = make_norm_map(subset, 'Body Mass (g)', by='Sex')
norm_maps4 = [culmen_map, flipper_map, depth_map, mass_map]
hypos = culmen_map.keys()
prior = Pmf(1/2, hypos)
subset['Classification'] = np.nan
for i, row in subset.iterrows():
data_seq = row[colnames4]
posterior = update_naive(prior, data_seq, norm_maps4)
subset.loc[i, 'Classification'] = posterior.max_prob()
return accuracy_sex(subset)
# Solution
# Here's the subset of Adelie penguins
# The accuracy is about 88%
adelie = df['Species2']=='Adelie'
subset = df[adelie].copy()
classify_by_sex(subset)
# Solution
# And for Chinstrap, accuracy is about 92%
chinstrap = df['Species2']=='Chinstrap'
subset = df[chinstrap].copy()
classify_by_sex(subset)
# Solution
# It looks like Gentoo and Chinstrap penguins are about equally
# dimorphic, Adelie penguins a little less so.
# All of these results are consistent with what's in the paper.
Explanation: OK, you can finish it off from here.
End of explanation |
844 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<b>Acknowledgements
Step1: Parsing HTML with BeautifulSoup
Now that we've downloaded the HTML of the page, we next need to parse it. Let's say we want to extract all of the names, ages, and breeds of the dogs, cats, and small animals currently up for adoption at the Boulder Humane Society.
Actually, navigating to the location of those attributes in the page can be somewhat tricky. Luckily HTML has a tree-structure, as shown below, where tags fit inside other tags. For example, the title of the document is located within its head, and within the larger html document (<html> <head> <title> </title> ... </head>... </html>).
<img src="https
Step2: Furthermore, we can select the name, breeds, age, and gender of the pet by find-ing the div tags which contain this information. Notice how the div tag has the attribute (attrs) class with value "views-field views-field-field-pp-animalname".
Step3: Now to get at the HTML object for each pet, we could find the CSS selector for each. Or, we can exploit the fact that every pet lives in a similar HTML structure for each pet. That is, we can find all of the div tags with the class attribute which contain the string views-row. We'll print out their attributes like we just did.
Step4: This may seem like a fairly silly example of webscraping, but one could imagine several research questions using this data. For example, if we collected this data over time (e.g. using Wayback Machine), could we identify what features of pets -- names, breeds, ages -- make them more likely to be adopted? Are there certain names that are more common for certain breeds? Or maybe your research question is something even wackier.
Aside
Step5: Requesting a Webpage with Selenium
Sometimes our interactions with webpages involve rendering Javascript. For example, think of visiting a webpage with a search box, typing in a query, pressing search, and viewing the result. Or visiting a webpage that requires a login, or clicking through pages in a list. To handle pages like these we'll use a package in Python called Selenium.
Installing Selenium can be a little tricky. You'll want to follow the directions as best you can here. Requirements (one of the below)
Step6: We'll now walk through how we can use Selenium to navigate the website to navigate a open source site called <a href="https
Step7: Let's say I wanted to know which movie was has been more lucrative 'Wonder Woman', 'Blank Panther', or 'Avengers
Step8: Now, I can parse the table returned using BeautifulSoup.
Step9: <img src="https | Python Code:
pet_pages = ["https://www.boulderhumane.org/animals/adoption/dogs",
"https://www.boulderhumane.org/animals/adoption/cats",
"https://www.boulderhumane.org/animals/adoption/adopt_other"]
r = requests.get(pet_pages[0])
html = r.text
print(html[:500]) # Print the first 500 characters of the HTML. Notice how it's the same as the screenshot above.
Explanation: <b>Acknowledgements:</b> The code below is very much inspired by Chris Bail's "Screen-Scraping in R". Thanks Chris!
Collecting Digital Trace Data: Web Scraping
Web scraping (also sometimes called "screen-scraping") is a method for extracting data from the web. There are many techniques which can be used for web scraping — ranging from requiring human involvement (“human copy-paste”) to fully automated systems. For research questions where you need to visit many webpages, and collect essentially very similar information from each, web scraping can be a great tool.
The typical web scraping program:
<ol>
<li> Loads the address of a webpage to be scraped from your list of webpages</li>
<li> Downloads the HTML or XML of that website</li>
<li> Extracts any desired information</li>
<li> Saves that information in a convenient format (e.g. CSV, JSON, etc.)</li>
</ol>
<img src="https://raw.githubusercontent.com/compsocialscience/summer-institute/master/2018/materials/day2-digital-trace-data/screenscraping/rmarkdown/Screen-Scraping.png"></img>
<em>From Chris Bail's "Screen-Scraping in R": <a href="https://cbail.github.io/SICSS_Screenscraping_in_R.html">https://cbail.github.io/SICSS_Screenscraping_in_R.html</a></em>
Legality & Politeness
When the internet was young, web scraping was a common and legally acceptable practice for collecting data on the web. But with the rise of online platforms, some of which rely heavily on user-created content (e.g. Craigslist), the data made available on these sites has become recognized by their companies as highly valuable. Furthermore, from a website developer's perspective, web crawlers are able request many pages from your site in rapid succession, increasing server loads, and generally being a nuisance.
Thus many websites, especially large sites (e.g. Yelp, AllRecipes, Instagram, The New York Times, etc.), have now forbidden "crawlers" / "robots" / "spiders" from harvesting their data in their "Terms of Service" (TOS). From Yelp's <a href="https://www.yelp.com/static?p=tos">Terms of Service</a>:
<img src="https://user-images.githubusercontent.com/6633242/45270118-b87a2580-b456-11e8-9d26-826f44bf5243.png"></img>
Before embarking on a research project that will involve web scraping, it is important to understand the TOS of the site you plan on collecting data from.
If the site does allow web scraping (and you've consulted your legal professional), many websites have a robots.txt file that tells search engines and web scrapers, written by researchers like you, how to interact with the site "politely" (i.e. the number of requests that can be made, pages to avoid, etc.).
Requesting a Webpage in Python
When you visit a webpage, your web browser renders an HTML document with CSS and Javascript to produce a visually appealing page. For example, to us, the Boulder Humane Society's listing of dogs available for adoption looks something like what's displayed at the top of the browser below:
<img src="https://user-images.githubusercontent.com/6633242/45270123-c760d800-b456-11e8-997e-580508e862e7.png"></img>
But to your web browser, the page actually looks like the HTML source code (basically instructions for what text and images to show and how to do so) shown at the bottom of the page. To see the source code of a webpage, in Safari, go to Develop > Show Page Source or in Chrome, go to Developer > View Source.
To request the HTML for a page in Python, you can use the Python package requests, as such:
End of explanation
soup = BeautifulSoup(html, 'html.parser')
pet = soup.select("#block-system-main > div > div > div.view-content > div.views-row.views-row-1.views-row-odd.views-row-first.On.Hold")
print(pet)
Explanation: Parsing HTML with BeautifulSoup
Now that we've downloaded the HTML of the page, we next need to parse it. Let's say we want to extract all of the names, ages, and breeds of the dogs, cats, and small animals currently up for adoption at the Boulder Humane Society.
Actually, navigating to the location of those attributes in the page can be somewhat tricky. Luckily HTML has a tree-structure, as shown below, where tags fit inside other tags. For example, the title of the document is located within its head, and within the larger html document (<html> <head> <title> </title> ... </head>... </html>).
<img src="https://raw.githubusercontent.com/compsocialscience/summer-institute/master/2018/materials/day2-digital-trace-data/screenscraping/rmarkdown/html_tree.png"></img>
<em>From Chris Bail's "Screen-Scraping in R": <a href="https://cbail.github.io/SICSS_Screenscraping_in_R.html">https://cbail.github.io/SICSS_Screenscraping_in_R.html</a></em>
To find the first pet on the page, we'll find that HTML element's "CSS selector". Within Safari, hover your mouse over the image of the first pet and then control+click on the image. This should highlight the section of HTML where the object you are trying to parse is found. Sometimes you may need to move your mouse through the HTML to find the exact location of the object (see GIF).
<img src="https://user-images.githubusercontent.com/6633242/45270125-dc3d6b80-b456-11e8-80ae-4947dd667d30.png"></img>
(You can also go to 'Develop > Show Page Source' and then click 'Elements'. Hover your mouse over sections of the HTML until the object you are trying to find is highlighted within your browser.)
BeautifulSoup is a Python library for parsing HTML. We'll pass the CSS selector that we just copied to BeautifulSoup to grab that object. Notice below how select-ing on that pet, shows us all of its attributes.
End of explanation
name = pet[0].find('div', attrs = {'class': 'views-field views-field-field-pp-animalname'})
primary_breed = pet[0].find('div', attrs = {'class': 'views-field views-field-field-pp-primarybreed'})
secondary_breed = pet[0].find('div', attrs = {'class': 'views-field views-field-field-pp-secondarybreed'})
age = pet[0].find('div', attrs = {'class': 'views-field views-field-field-pp-age'})
# We can call `get_text()` on those objects to print them nicely.
print({
"name": name.get_text(strip = True),
"primary_breed": primary_breed.get_text(strip = True),
"secondary_breed": secondary_breed.get_text(strip = True),
"age": age.get_text(strip=True)
})
Explanation: Furthermore, we can select the name, breeds, age, and gender of the pet by find-ing the div tags which contain this information. Notice how the div tag has the attribute (attrs) class with value "views-field views-field-field-pp-animalname".
End of explanation
all_pets = soup.find_all('div', {'class': 'views-row'})
for pet in all_pets:
name = pet.find('div', {'class': 'views-field views-field-field-pp-animalname'}).get_text(strip=True)
primary_breed = pet.find('div', {'class': 'views-field views-field-field-pp-primarybreed'}).get_text(strip=True)
secondary_breed = pet.find('div', {'class': 'views-field views-field-field-pp-secondarybreed'}).get_text(strip=True)
age = pet.find('div', {'class': 'views-field views-field-field-pp-age'}).get_text(strip=True)
print([name, primary_breed, secondary_breed, age])
Explanation: Now to get at the HTML object for each pet, we could find the CSS selector for each. Or, we can exploit the fact that every pet lives in a similar HTML structure for each pet. That is, we can find all of the div tags with the class attribute which contain the string views-row. We'll print out their attributes like we just did.
End of explanation
table = pd.read_html("https://en.wikipedia.org/wiki/List_of_sandwiches", header=0)[0]
#table.to_csv("filenamehere.csv") # Write table to CSV
table.head(20)
Explanation: This may seem like a fairly silly example of webscraping, but one could imagine several research questions using this data. For example, if we collected this data over time (e.g. using Wayback Machine), could we identify what features of pets -- names, breeds, ages -- make them more likely to be adopted? Are there certain names that are more common for certain breeds? Or maybe your research question is something even wackier.
Aside: Read Tables from Webpages
Pandas has really neat functionality in read_html where you can download an HTML table directly from a webpage, and load it into a dataframe.
End of explanation
driver = selenium.webdriver.Safari() # This command opens a window in Safari
# driver = selenium.webdriver.Chrome(executable_path = "<path to chromedriver>") # This command opens a window in Chrome
# driver = selenium.webdriver.Firefox(executable_path = "<path to geckodriver>") # This command opens a window in Firefox
# Get the xkcd website
driver.get("https://xkcd.com/")
# Let's find the 'random' buttom
element = driver.find_element_by_xpath('//*[@id="middleContainer"]/ul[1]/li[3]/a')
element.click()
# Find an attribute of this page - the title of the comic.
element = driver.find_element_by_xpath('//*[@id="comic"]/img')
element.get_attribute("title")
# Continue clicking throught the comics
driver.find_element_by_xpath('//*[@id="middleContainer"]/ul[1]/li[3]/a').click()
driver.quit() # Always remember to close your browser!
Explanation: Requesting a Webpage with Selenium
Sometimes our interactions with webpages involve rendering Javascript. For example, think of visiting a webpage with a search box, typing in a query, pressing search, and viewing the result. Or visiting a webpage that requires a login, or clicking through pages in a list. To handle pages like these we'll use a package in Python called Selenium.
Installing Selenium can be a little tricky. You'll want to follow the directions as best you can here. Requirements (one of the below):
- Firefox + geckodriver (https://github.com/mozilla/geckodriver/releases)
- Chrome + chromedriver (https://sites.google.com/a/chromium.org/chromedriver/)
First a fairly simple example: let's visit xkcd and click through the comics.
End of explanation
driver = selenium.webdriver.Safari() # This command opens a window in Safari
# driver = selenium.webdriver.Chrome(executable_path = "<path to chromedriver>") # This command opens a window in Chrome
# driver = selenium.webdriver.Firefox(executable_path = "<path to geckodriver>") # This command opens a window in Firefox
driver.get('https://www.boxofficemojo.com')
Explanation: We'll now walk through how we can use Selenium to navigate the website to navigate a open source site called <a href="https://www.boxofficemojo.com">"Box Office Mojo"</a>.
<img src="https://user-images.githubusercontent.com/6633242/45270131-f1b29580-b456-11e8-81fd-3f5361161e7f.png"></img>
End of explanation
# Type in the search bar, and click 'Search'
driver.find_element_by_xpath('//*[@id="leftnav"]/li[2]/form/input[1]').send_keys('Avengers: Infinity War')
driver.find_element_by_xpath('//*[@id="leftnav"]/li[2]/form/input[2]').click()
Explanation: Let's say I wanted to know which movie was has been more lucrative 'Wonder Woman', 'Blank Panther', or 'Avengers: Infinity War'. I could type into the search bar on the upper left: 'Avengers: Infinity War'.
End of explanation
# This is what the table looks like
table = driver.find_element_by_xpath('//*[@id="body"]/table[2]')
# table.get_attribute('innerHTML').strip()
pd.read_html(table.get_attribute('innerHTML').strip(), header=0)[2]
# Find the link to more details about the Avengers movie and click it
driver.find_element_by_xpath('//*[@id="body"]/table[2]/tbody/tr/td/table[2]/tbody/tr[2]/td[1]/b/font/a').click()
Explanation: Now, I can parse the table returned using BeautifulSoup.
End of explanation
driver.quit() # Always remember to close your browser!
Explanation: <img src="https://user-images.githubusercontent.com/6633242/45270140-03943880-b457-11e8-9d27-660a7cc4f2eb.png"></img>
Now, we can do the same for the remaining movies: 'Wonder Woman', and 'Black Panther' ...
End of explanation |
845 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step7: Notebook to calculate the water permeation into packaging.
Step8: Define the simulation parameters
Step9: Loop through user events
Calculate the water flux between two points in time that have no user changes | Python Code:
import numpy as np
import pandas as pd
import argparse as ap
def mass_density_sat(T):
Mass of water in one cubic meter of air at one bar at temperature T
parameters:
T: float - Temperature (K)
returns float - mass of water in one cubic meter saturated air (kg/m^3)
return 5.079e-3 + 2.5547e-4*T + 1.6124*e-5*T**2 + 3.6608e-9*T**3 + 3.9911e-9*T**4
def mass_water_vapor(T, rh, V):
Calculate the total mass of water in vapor in air at a
specified rh and a given size container.
parameters
T: float - Temperature (K)
rh: float - relative humidity (%)
V: float - volume of vapor (m^3)
return mass_density_sat(T) * rh * V
def GAB(aw, wm, K, C):
Calculate the water content of a substance based on the Guggenheim, Anderson and de Boer (GAB) three-parameter
isotherm model. See "GAB Generalized Equation for Sorption Phenomena", Food and Bioprocess Technology
March 2008 Vol 1, Issue 1, pp 82--90
w = (wm*C*K*aw) / ((1-K*aw)*(1-K*aw+C*K*aw))
parameters:
aw: float - water activity
wm: float - GAB parameter
K: float - GAB parameter
C: float - GAB parameter
returns float - water content of substance (mass water substance (kg) / mass substance (kg))
return (wm * C * K * aw) / ((1 - K * aw) * (1 - K * aw + c * K * aw))
def mass_water_solid_mixture(aw, frac, params):
Calculate the mass of water in a solid mixture at a specified water activity using
the superposition of GAB parameters.
parameters:
aw: float - water activity
frac: list of floats - list of mass fractions of the individual components [f_i] ; for i=1...N
params: list of dictionaries - list of GAB parameters [{wm, C, K}_i] ; for i=1...N
returns float - mass of water in solid (kg)
return np.sum([frac[i] * GAB(aw, p["wm"], p["K"], p["C"]) for i, p in enumerate(params)])
def GAB_regress(aw, w):
Calculate the GAB parameters from water content - humidity measurements.
See "GAB Generalized Equation for Sorption Phenomena", Food and Bioprocess Technology
March 2008 Vol 1, Issue 1, pp 82--90
aw/w = a + b*aw + c*aw^2
a = 1/(wm*C*K)
b = (C-2)/(wm*C)
c = K(1-C)/(wm*C)
parameters
aw: array - water activity
w: array - water content at each water activity point
returns dictionary {wm: float, C: float, K: float}
y = aw / w
[c, b, a] = np.polyfit(aw, y, 2)
K = (-b + np.sqrt(b**2 - 4*a*c))/(2*a)
C = b / (a*K) + 2
wm = 1 / (b + 2*K*a)
return {wm: wm, C: C, K: K}
def average_GAB(frac, params):
Calculate GAB parameters for a solid mixture.
parameters:
frac: list of floats - list of mass fractions of the individual components [f_i] ; for i=1...N
params: list of dictionaries - list of GAB parameters [{wm, C, K}_i] ; for i=1...N
returns float - dictionary of GAB parameters {wm, C, K}
aw = np.arange(.1,.9,.05)
w = np.array([mass_water_solid_mixture(aw_i, frac, params) for aw_i in aw])
return GAB_regress(aw, w)
def water_activity_GAB(w, wm, C, K):
Calculate the water activity, aw, from the known water content, w, in a substance
and known GAB parameters, wm, C, K.
From "GAB Generalized Equation for Sorption Phenomena", Food and Bioprocess Technology
March 2008 Vol 1, Issue 1, pp 82--90
aw/w = a + b*aw + c*aw^2 --> c*aw^2 + (b-1/w)*aw + a = 0
solution from quadratic equation
aw = (-(b-1/w) +/- sqrt((b-1/w)^2 - 4*c*a)) / (2*c)
where
a = 1/(wm*C*K)
b = (C-2)/(wm*C)
c = K(1-C)/(wm*C)
parameters
w: float - water content in component (mass water / mass component)
wm: float - GAB parameter
C: float - GAB parameter
K: float - GAB parameter
returns float - water activity
a = 1/(wm*C*K)
b = (C-2)/(wm*C)
c = K*(1-C)/(wm*C)
arg = np.sqrt((b-1/w)**2 - 4*c*a)
# TODO How do we know which root to use?
hi = (-1*(b-1/w) + arg) / (2*C)
lo = (-1*(b-1/w) - arg) / (2*c)
# Relationship from Gary"s method (no reference)
aw = (np.sqrt(C) * np.sqrt((C*wm**2 - 2*C*wm*w + C*w**2 + 4*wm*w) - C*wm + C*w - 2*w)) / (2*(C-1)*K*w)
return aw # Is this the right relationship?
excipients = GAB.co
Explanation: Notebook to calculate the water permeation into packaging.
End of explanation
system = {
"time": {
"startDate": 0,
"endDate": 0
},
"conditions": {
"temperature": 40,
"rh": 50
},
"inner": {
"type": "PP",
"permeability": 0,
"volume": 10,
"desiccant": {
"type": "silica",
"mass": 10,
"GAB": {"wm": 0, "C": 0, "K": 0},
},
"product": {
"units": 10,
"unit_mass": 1,
"components": [{
"name": "component_A",
"frac": .2,
"GAB": {"wm": 0, "C":0, "K":0}}
]
}
},
"outer": {
"type": "PP",
"permeability": 0,
"volume": 10,
"desiccant": {
"type": "silica",
"mass": 10,
"GAB": {"wm": 0, "C": 0, "K": 0},
},
"product": {
"units": 10,
"unit_mass": 1,
"components": [{
"name": "component_A",
"frac": .2,
"GAB": {"wm": 0, "C":0, "K":0}
}]
}
},
"events": [{
"date": 0,
"event": "REMOVE_DESICCANT",
"mass": 0,
"desiccantWater": 0
}]
}
Explanation: Define the simulation parameters
End of explanation
def flux(P, rh_in, rh_out):
# Inner package water
w_inner = mass_water_vapor(T, rh, V) + mass_water_solid(aw, m, w) + mass_water_solid(aw, m, w)
# Outer package water
w_outer = mass_water_vapor(T, rh, V) + mass_water_solid(aw, m, w) + mass_water_solid(aw, m, w)
Explanation: Loop through user events
Calculate the water flux between two points in time that have no user changes
End of explanation |
846 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An introduction to solving biological problems with Python
Session 2.3
Step1: open takes an optional second argument specifying the mode in which the file is opened, either for reading, writing or appending.
It defaults to 'r' which means open for reading in text mode. Other common values are 'w' for writing (truncating the file if it already exists) and 'a' for appending.
Step2: Closing files
To close a file once you finished with it, you can call the .close method on a file object.
Step3: Mode modifiers
These mode strings can include some extra modifier characters to deal with issues with files across multiple platforms.
'b'
Step4: Note that this means the entire file will be read into memory. If you are operating on a large file and don't actually need all the data at the same time this is rather inefficient.
Frequently, we just need to operate on individual lines of the file, and you can use the .readline method to read a line from a file and return it as a python string.
File objects internally keep track of your current location in a file, so to get following lines from the file you can call this method multiple times.
It is important to note that the string representing each line will have a trailing newline "\n" character, which you may want to remove with the .rstrip string method.
Once the end of the file is reached, .readline will return an empty string ''. This is different from an apparently empty line in a file, as even an empty line will contain a newline character. Recall that the empty string is considered as False in python, so you can readily check for this condition with an if statement etc.
Step5: To read in all lines from a file as a list of strings containing the data from each line, use the .readlines method (though note that this will again read all data into memory).
Step6: Looping over the lines in a file is a very common operation and python lets you iterate over a file using a for loop just as if it were an array of strings. This does not read all data into memory at once, and so is much more efficient that reading the file with .readlines and then looping over the resulting list.
Step7: The with statement
It is important that files are closed when they are no longer required, but writing fileObj.close() is tedious (and more importantly, easy to forget). An alternative syntax is to open the files within a with statement, in which case the file will automatically be closed at the end of the with block.
Step8: Exercises 2.3.1
Write a script that reads a file containing many lines of nucleotide sequence. For each line in the file, print out the line number, the length of the sequence and the sequence (There is an example file <a href="http | Python Code:
path = "data/datafile.txt"
fileObj = open( path )
Explanation: An introduction to solving biological problems with Python
Session 2.3: Files
Using files
Reading from files
Exercises 2.3.1
Writing to files
Exercises 2.3.2
Data input and output (I/O)
So far, all that data we have been working with has been written by us into our scripts, and the results of out computation has just been displayed in the terminal output. In the real world data will be supplied by the user of our programs (who may be you!) by some means, and we will often want to save the results of some analysis somewhere more permanent than just printing it to the screen. In this session we cover the way of reading data into our programs by reading files from disk, we also discuss writing out data to files.
There are, of course, many other ways of accessing data, such as querying a database or retrieving data from a network such as the internet. We don't cover these here, but python has excellent support for interacting with databases and networks either in the standard library or using external modules.
Using files
Frequently the data we want to operate on or analyse will be stored in files, so in our programs we need to be able to open files, read through them (perhaps all at once, perhaps not), and then close them.
We will also frequently want to be able to print output to files rather than always printing out results to the terminal.
Python supports all of these modes of operations on files, and provides a number of useful functions and syntax to make dealing with files straightforward.
Opening files
To open a file, python provides the open function, which takes a filename as its first argument and returns a file object which is python's internal representation of the file.
End of explanation
open( "data/myfile.txt", "r" ) # open for reading, default
open( "data/myfile.txt", "w" ) # open for writing (existing files will be overwritten)
open( "data/myfile.txt", "a" ) # open for appending
Explanation: open takes an optional second argument specifying the mode in which the file is opened, either for reading, writing or appending.
It defaults to 'r' which means open for reading in text mode. Other common values are 'w' for writing (truncating the file if it already exists) and 'a' for appending.
End of explanation
fileObj.close()
Explanation: Closing files
To close a file once you finished with it, you can call the .close method on a file object.
End of explanation
fileObj = open( "data/datafile.txt" )
print(fileObj.read()) # everything
fileObj.close()
Explanation: Mode modifiers
These mode strings can include some extra modifier characters to deal with issues with files across multiple platforms.
'b': binary mode, e.g. 'rb'. No translation for end-of-line characters to platform specific setting value.
|Character | Meaning |
|----------|---------|
|'r' | open for reading (default) |
|'w' | open for writing, truncating the file first |
|'x' | open for exclusive creation, failing if the file already exists |
|'a' | open for writing, appending to the end of the file if it exists |
|'b' | binary mode |
|'t' | text mode (default) |
|'+' | open a disk file for updating (reading and writing) |
Reading from files
Once we have opened a file for reading, file objects provide a number of methods for accessing the data in a file. The simplest of these is the .read method that reads the entire contents of the file into a string variable.
End of explanation
# one line at a time
fileObj = open( "data/datafile.txt" )
print("1st line:", fileObj.readline())
print("2nd line:", fileObj.readline())
print("3rd line:", fileObj.readline())
print("4th line:", fileObj.readline())
fileObj.close()
Explanation: Note that this means the entire file will be read into memory. If you are operating on a large file and don't actually need all the data at the same time this is rather inefficient.
Frequently, we just need to operate on individual lines of the file, and you can use the .readline method to read a line from a file and return it as a python string.
File objects internally keep track of your current location in a file, so to get following lines from the file you can call this method multiple times.
It is important to note that the string representing each line will have a trailing newline "\n" character, which you may want to remove with the .rstrip string method.
Once the end of the file is reached, .readline will return an empty string ''. This is different from an apparently empty line in a file, as even an empty line will contain a newline character. Recall that the empty string is considered as False in python, so you can readily check for this condition with an if statement etc.
End of explanation
# all lines
fileObj = open( "data/datafile.txt" )
lines = fileObj.readlines()
print("The file has", len(lines), "lines")
fileObj.close()
Explanation: To read in all lines from a file as a list of strings containing the data from each line, use the .readlines method (though note that this will again read all data into memory).
End of explanation
# as an iterable
fileObj = open( "data/datafile.txt" )
for line in fileObj:
print(line.rstrip().upper())
fileObj.close()
Explanation: Looping over the lines in a file is a very common operation and python lets you iterate over a file using a for loop just as if it were an array of strings. This does not read all data into memory at once, and so is much more efficient that reading the file with .readlines and then looping over the resulting list.
End of explanation
# fileObj will be closed when leaving the block
with open( "data/datafile.txt" ) as fileObj:
for ( i, line ) in enumerate( fileObj, start = 1 ):
print( i, line.strip() )
Explanation: The with statement
It is important that files are closed when they are no longer required, but writing fileObj.close() is tedious (and more importantly, easy to forget). An alternative syntax is to open the files within a with statement, in which case the file will automatically be closed at the end of the with block.
End of explanation
read_counts = {
'BRCA2': 43234,
'FOXP2': 3245,
'SORT1': 343792
}
with open( "out.txt", "w" ) as output:
output.write("GENE\tREAD_COUNT\n")
for gene in read_counts:
line = "\t".join( [ gene, str(read_counts[gene]) ] )
output.write(line + "\n")
Explanation: Exercises 2.3.1
Write a script that reads a file containing many lines of nucleotide sequence. For each line in the file, print out the line number, the length of the sequence and the sequence (There is an example file <a href="http://www.ebi.ac.uk/~grsr/perl/dna.txt">here</a> or in data/dna.txt from the course materials ).
Writing to files
Once a file has been opened for writing, you can use the .write() method on a file object to write data to the file.
The argument to the .write() method must be a string, so if you want to write out numerical data to a file you will have to convert it to a string somehow beforehand.
<div class="alert-warning">**Remember** to include a newline character `\n` to separate lines of your output, unlike the `print()` statement, `.write()` does not include this by default.</div>
End of explanation |
847 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Statistical Analysis of Data
Statistics are numbers that can be used to describe or summarize variable data points. For example, the expected value of a distribution and the mean of a sample are statistics. Being able to perform sound statistical analysis is crucial for a data scientist, and in this lesson we will outline a few key statistical concepts
Step2: Statistical Sampling
Sampling vs. Population
What is the difference between a sample and a population?
You can think of a sample as its own population, which is just a subset of the global population. You could imagine a biologist tagging some sample of birds, tracking their movements with GPS, and using that data to make inferences about the patterns of the global population of a species.
An important assumption in statistics is that an unbiased sample comes from the same distribution as the population, assuming that the global distribution is normal. We can test this hypothesis using a single sided t-test, a statistical method to compare sample means to the population means.
Simple Random Sample
A simple random sample (SRS) is one of the most common statistical sampling techniques. To get an SRS, we take a random subset of the data, without replacement. An SRS is unbiased because every member of the population has an equal opportunity to be selected. True randomness does not exist computationally, so we must use pseudorandom functions which, for most common statistical applications, will suffice as a statistically random method.
Sample Bias
Bias, as with a weighted coin that falls on heads more often, can be present in many stages of an experiment or data analysis. Some biases, like selection bias, are easy to detect. For example, a sample obtained from the Census Bureau in 2010 collected information on residents across the United States. Surely not every resident responded to their requests, so the ones who did are assumed to be a representative sample. This experiment has some selection bias, however, since those who respond to the census tend to be at home during the day, which means they are more likely to be either very young or very old. Another example is political polling by phone; those at home ready to answer the phone tend to be older, yielding a biased sample of voters.
Confirmation bias, a form of cognitive bias, can affect online and offline behavior. Those who believe that the earth is flat are more likely to share misinformation that supports their flat-earth theory rather than facts which dispel the claim. Picking up on this preference, YouTube's algorithm surfaces more flat earth video suggestions to those who've watched at least one. Such video suggestions then feed back into the users' confirmation bias.
There are other types of bias which may further confound an experiment or a data collection strategy. These biases are beyond the scope of this course but should be noted. Here's an exhaustive list of cognitive biases. Data scientists of all skill levels can experience pitfalls in their design and implementation strategies if they are not aware of the source of some bias in their experiment design or error in their data sources or collection strategies.
Variables and Measurements
We have already learned about programming data types, like string, integer, and float. These data types make up variable types that are categorized according to their measurement scales. We can start to think about variables divided into two groups
Step3: Bernoulli
Bernoulli distributions model an event which only has two possible outcomes (i.e., success and failure) which occur with probability $p$ and $1-p$.
If $X$ is a Bernoulli random variable with likelihood of success $p$, we write $X \sim \text{Bernoulli}(p)$.
We have actually seen an example of a Bernoulli distribution when considering a coin flip. That is, there are exactly two outcomes, and a heads occurs with probability $p = \frac{1}{2}$ and a tails with probability $1-p = \frac{1}{2}$.
Binomial
Binomial distributions model a discrete random variable which repeats Bernoulli trials $n$ times.
If $X$ is a Binomial random variable over $n$ trials with probability $p$ of success, we write $X \sim \text{Binom}(n, k)$. Under this distribution, the probability of $k$ successes is $P(X = k) = {n \choose k}p^k(1-p)^{n-k}$.
Poisson
A Poisson distribution can be used to model the discrete events that happen over a time interval. An example could be an expected count of customers arriving at a restaurant during each hour.
Gamma
The Gamma distribution is similar to Poisson in that it models discrete events in time, except that it represents a time until an event. This could be the departure times of employees from a central office. For example, employees depart from a central office beginning at 3pm, and by 8pm most have left.
Others
See here for the most commonly used probability distributions.
Coefficient of Determination ($R^2$)
Most datasets come with many variables to unpack. Looking at the $R^{2}$ can inform us of the linear relationship present between two variables. In the tips dataset, the tips tend to increase linearly with the total bill. The coefficient of determination, $R^{2}$, tells us how much variance is explained by a best fit regression line through the data. An $R^{2}=1$ would indicate too good of a fit, and $R^{2}=0$ would indicate no fit.
Correlation Coefficient (Pearson's $r$)
Correlations can inform data scientists that there is a statistically significant relationship between one or more variables in a dataset. Although correlation can allow inferences about a causal relationship to be made, data scientists must note that correlation is not causation. The Pearson Correlation coefficient is on a scale from -1 to 1, where 0 implies no correlation, -1 is 100% negative correlation, and 1 is 100% positive correlation.
Step4: Hypothesis Testing
Designing an experiment involves separating some of the ideas that you might have had before, during, and after the experiment. Let's say you are selling popcorn at a movie theater, and you have three sizes
Step5: Exercise 2
Load a dataset and take a simple random sample. Then return a dataframe with the standard error and standard deviation for each column.
Student Solution
Step6: Exercise 3
Using a dataset that you found already, or a new one, create two visualizations that share the same figure using plt, as well as their mean and median lines. The first visualization should show the frequency, and the second should show the probability.
Student Solution
Step7: Exercise 4
Plot two variables against each other, and calculate the $R^{2}$ and p-value for a regression line that fits the data.
Student Solution | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: <a href="https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/xx_misc/probability_and_statistics/colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2019 Google LLC.
End of explanation
%matplotlib inline
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as stats
Explanation: Statistical Analysis of Data
Statistics are numbers that can be used to describe or summarize variable data points. For example, the expected value of a distribution and the mean of a sample are statistics. Being able to perform sound statistical analysis is crucial for a data scientist, and in this lesson we will outline a few key statistical concepts:
Statistical Sampling
Sampling vs. Population
Simple Random Sample (SRS)
Sample Bias
Variables and Measurements
Measures of Center
Mean
Median
Mode
Measures of spread
Variance and Standard Deviation
Standard Error
Distributions
Coefficient of Variation ($R^2$)
Correlation Coefficient (Pearson's $r$)
Hypothesis Testing
Load Packages
End of explanation
x = np.arange(-5, 5, 0.1)
y = stats.norm.pdf(x)
plt.plot(x, y)
plt.axvline(0, color='red')
plt.show()
Explanation: Statistical Sampling
Sampling vs. Population
What is the difference between a sample and a population?
You can think of a sample as its own population, which is just a subset of the global population. You could imagine a biologist tagging some sample of birds, tracking their movements with GPS, and using that data to make inferences about the patterns of the global population of a species.
An important assumption in statistics is that an unbiased sample comes from the same distribution as the population, assuming that the global distribution is normal. We can test this hypothesis using a single sided t-test, a statistical method to compare sample means to the population means.
Simple Random Sample
A simple random sample (SRS) is one of the most common statistical sampling techniques. To get an SRS, we take a random subset of the data, without replacement. An SRS is unbiased because every member of the population has an equal opportunity to be selected. True randomness does not exist computationally, so we must use pseudorandom functions which, for most common statistical applications, will suffice as a statistically random method.
Sample Bias
Bias, as with a weighted coin that falls on heads more often, can be present in many stages of an experiment or data analysis. Some biases, like selection bias, are easy to detect. For example, a sample obtained from the Census Bureau in 2010 collected information on residents across the United States. Surely not every resident responded to their requests, so the ones who did are assumed to be a representative sample. This experiment has some selection bias, however, since those who respond to the census tend to be at home during the day, which means they are more likely to be either very young or very old. Another example is political polling by phone; those at home ready to answer the phone tend to be older, yielding a biased sample of voters.
Confirmation bias, a form of cognitive bias, can affect online and offline behavior. Those who believe that the earth is flat are more likely to share misinformation that supports their flat-earth theory rather than facts which dispel the claim. Picking up on this preference, YouTube's algorithm surfaces more flat earth video suggestions to those who've watched at least one. Such video suggestions then feed back into the users' confirmation bias.
There are other types of bias which may further confound an experiment or a data collection strategy. These biases are beyond the scope of this course but should be noted. Here's an exhaustive list of cognitive biases. Data scientists of all skill levels can experience pitfalls in their design and implementation strategies if they are not aware of the source of some bias in their experiment design or error in their data sources or collection strategies.
Variables and Measurements
We have already learned about programming data types, like string, integer, and float. These data types make up variable types that are categorized according to their measurement scales. We can start to think about variables divided into two groups: numerical, and categorical.
Numerical Variables
Numerical data can be represented by both numbers and strings, and it can be further subdivided into discrete and continuous variables.
Discrete data is anything that can be counted, like the number of user signups for a web app or the number of waves you caught surfing.
Conversely, continuous data cannot be counted and must instead be measured. For example, the finish times of a 100m sprint and the waiting time for a train are continuous variables.
Categorical Variables
Categorical data can take the form of either strings or integers. However, these integers have no numerical value, they are purely a minimal labeling convention.
Nominal data is labeled without any specific order. In machine learning, these categories would be called classes, or levels. A feature can be binary (containing only two classes) or multicategory (containing more than two classes). In the case of coin flip data, you have either a heads or tails because the coin cannot land on both or none. An example of multicategory data is the seven classes of life (animal, plant, fungus, protists, archaebacteria, and eubacteria).
Ordinal data is categorical data where the order has significance, or is ranked. This could be Uber driver ratings on a scale of 1 to 5 stars, or gold, silver, and bronze Olympic medals. It should be noted that the differences between each level of ordinal data are assumed to be equivalent, when in reality they may not be. For example, the perceived difference between bronze and silver may be different than the difference between silver and gold.
Measures of Center
Central tendency is the point around which most of the data in a dataset is clustered. Some measures of central tendency include the mean (sometimes referred to as the average), the median, and the mode.
Note that the mean and median only apply to numerical data.
The mean is easy to calculate; it is the sum of a sequence of numbers divided by the number of samples. The mean is not robust to outliers (that is, less likely to be affected by a few data points that are out of the ordinary), and if your data is not normally distributed then the mean may not be a good measure of central tendency.
The median is the middle data point in a series. If your set contains four samples, the median is halfway between the 2nd and 3rd data point. If your set contains five samples, the median is the 3rd data point. The median can often be close to the mean, but it is more robust to outliers.
The mode is the most commonly occurring data point in a series. The mode is especially useful for categorical data and doesn't make sense for continuous data. Sometimes there is no mode, which indicates that all of the data points are unique. Sometimes a sample can be multimodal, or have multiple equally occurring modes. The mode gives insight into a distribution's frequency, including some possible source of error.
Measures of Spread
Variance and Standard Deviation
The population variance, $\sigma^{2}$ is defined as follows.
$$ \sigma^2 = \frac{1}{N} \sum_{i=1}^N (x_i - \mu)^2$$
where:
$N$ is the population size
The $x_i$ are the population values
$\mu$ is the population mean
The population standard deviation $\sigma$ is the square root of $\sigma^2$.
Data scientists typically talk about variance in the context of variability, or how large the difference between each ordered point in a sample is to its mean.
The sample variance $s^2$ is as follows:
$$s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_{i}-\bar{x})^{2}$$
where:
$n$ is the sample size
The $x_i$ are the sample values
$\bar{x}$ is the sample mean
The sample standard deviation $s$ is the square root of $s^2$.
Standard Error
Data scientists work with real-life datasets, so we are mainly concerned with sample variance. Therefore, we use sample standard deviation to estimate the standard deviation of a population. Standard error (SE) is the standard deviation of a sampling distribution.
$$SE =\frac{s}{\sqrt{n}} $$
When running a test to statistically measure whether the means from two distributions $i$ and $j$ are the same, this statistic becomes:
$$ SE_{} =\sqrt{\frac{s_{i}^{2}+s_{j}^{2}}{n_{i}+n_{j}}} $$
where:
$s_i, s_j$ are the sample standard deviations for samples $i$ and $j$ respectively
$n_i, n_j$ are the sample sizes for samples $i$ and $j$ respectively
Distributions
Now that we have a handle on the different variable data types and their respective measurement scales, we can begin to understand the different categories of distributions that these variable types come from. For humans to understand distributions, we generally visualize data on a measurement scale.
Normal
Many natural phenomena are normally distributed, from human height distribution to light wave interference. A normally distributed variable in a dataset would describe a variable whose data points come from a normal. It is also referred to as the Gaussian distribution. If $X$ follows a normal distribution with mean $\mu$ and variance $\sigma^2$, we write $X \sim N(\mu, \sigma^2)$.
Below is a plot of a normal distribution's probability density function.
End of explanation
df = sns.load_dataset('mpg')
sns.heatmap(df.corr(), cmap='Blues', annot=True)
plt.show()
Explanation: Bernoulli
Bernoulli distributions model an event which only has two possible outcomes (i.e., success and failure) which occur with probability $p$ and $1-p$.
If $X$ is a Bernoulli random variable with likelihood of success $p$, we write $X \sim \text{Bernoulli}(p)$.
We have actually seen an example of a Bernoulli distribution when considering a coin flip. That is, there are exactly two outcomes, and a heads occurs with probability $p = \frac{1}{2}$ and a tails with probability $1-p = \frac{1}{2}$.
Binomial
Binomial distributions model a discrete random variable which repeats Bernoulli trials $n$ times.
If $X$ is a Binomial random variable over $n$ trials with probability $p$ of success, we write $X \sim \text{Binom}(n, k)$. Under this distribution, the probability of $k$ successes is $P(X = k) = {n \choose k}p^k(1-p)^{n-k}$.
Poisson
A Poisson distribution can be used to model the discrete events that happen over a time interval. An example could be an expected count of customers arriving at a restaurant during each hour.
Gamma
The Gamma distribution is similar to Poisson in that it models discrete events in time, except that it represents a time until an event. This could be the departure times of employees from a central office. For example, employees depart from a central office beginning at 3pm, and by 8pm most have left.
Others
See here for the most commonly used probability distributions.
Coefficient of Determination ($R^2$)
Most datasets come with many variables to unpack. Looking at the $R^{2}$ can inform us of the linear relationship present between two variables. In the tips dataset, the tips tend to increase linearly with the total bill. The coefficient of determination, $R^{2}$, tells us how much variance is explained by a best fit regression line through the data. An $R^{2}=1$ would indicate too good of a fit, and $R^{2}=0$ would indicate no fit.
Correlation Coefficient (Pearson's $r$)
Correlations can inform data scientists that there is a statistically significant relationship between one or more variables in a dataset. Although correlation can allow inferences about a causal relationship to be made, data scientists must note that correlation is not causation. The Pearson Correlation coefficient is on a scale from -1 to 1, where 0 implies no correlation, -1 is 100% negative correlation, and 1 is 100% positive correlation.
End of explanation
# Your answer goes here
Explanation: Hypothesis Testing
Designing an experiment involves separating some of the ideas that you might have had before, during, and after the experiment. Let's say you are selling popcorn at a movie theater, and you have three sizes: small, medium, and large, for \$3.00, \$4.50, and \$6.50, respectively. At the end of the week, the total sales are as follows: \$200, \$100, and \$50 for small, medium, and large, respectively. You currently have an advertisement posted for your medium popcorn, but you think that if you were to sell more large sizes, you may make more money. So you decide to post an ad for large size popcorn. Your hypothesis is as follows: I will get more weekly sales on average with a large ad compared to a medium ad.
A-A Testing
To test this hypothesis, we should first look at some historical data so that we can validate that our control is what we think it is. Our hypothesis for this case is that there is no difference in week-to-week sales using the ad for medium popcorn. If we test this hypothesis, we can use a 1-sample t-test to compare against the population mean, or a F-test to compare some week in the past to all other weeks.
A-B Testing
Assuming we have validated the historical data using an A-A test for the old ad for medium popcorn, we can now test against the new ad for the large popcorn. If we then collect data for the new ad for several weeks, we can use a 2-sided t-test to compare. In this experiment we will collect data for several weeks or months using the control (the medium ad) and repeat for the experiment group (the large ad).
The most important aspect of hypothesis testing is the assumptions you make about the control and the test groups. The null hypothesis in all cases would be the inverse of your hypothesis. In A-A testing, the null hypothesis is that there is no difference amongst samples, and in the case of A-B testing, the null states that there is no difference between a control sample and a test sample. A successful A-A test is one in which you fail to reject the null. In other words, there are no differences inside your control group; it is more or less homogenous. A successful A-B test is one where you reject the null hypothesis, observing a significant difference.
Evaluating an Experiment
Using a t-test or another statistical test like F-test, ANOVA or Tukey HSD, we can measure the results of our experiment with two statistics. The t-statistic informs us of the magnitude of the observed difference between samples, and the p-value tells us how likely it is that the observed difference is due to random chance or noise. Most statisticians and data scientists use 0.05 as an upper limit to a p-value, so any test that results in a p-value less than 0.05 would indicate that the difference observed is not likely due to random chance.
Resources
seaborn datasets
Exercises
Exercise 1
Find a dataset from the list below and plot the distributions of all the numeric columns. In each distribution, you should also plot the median, $-1\sigma$, and $+1\sigma$.
Here's a full list of Seaborn built-in datasets.
Student Solution
End of explanation
# Your answer goes here
Explanation: Exercise 2
Load a dataset and take a simple random sample. Then return a dataframe with the standard error and standard deviation for each column.
Student Solution
End of explanation
# Your answer goes here
Explanation: Exercise 3
Using a dataset that you found already, or a new one, create two visualizations that share the same figure using plt, as well as their mean and median lines. The first visualization should show the frequency, and the second should show the probability.
Student Solution
End of explanation
# Your answer goes here
Explanation: Exercise 4
Plot two variables against each other, and calculate the $R^{2}$ and p-value for a regression line that fits the data.
Student Solution
End of explanation |
848 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Graphical analysis of stresses
Step2: Mohr circle for 2D stresses
Step4: Mohr circle for 3D stresses | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.linalg import eigvalsh
from ipywidgets import interact
from IPython.display import display
from matplotlib import rcParams
%matplotlib notebook
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
Explanation: Graphical analysis of stresses: Mohr's circle
From Wikipedia
Mohr's circle, named after Christian Otto Mohr, is a two-dimensional graphical representation of the transformation law for the Cauchy stress tensor.
After performing a stress analysis on a material body assumed as a continuum, the components of the Cauchy stress tensor at a particular material point are known with respect to a coordinate system. The Mohr circle is then used to determine graphically the stress components acting on a rotated coordinate system, i.e., acting on a differently oriented plane passing through that point.
The abscissa, $\sigma_\mathrm{n}$, and ordinate, $\tau_\mathrm{n}$, of each point on the circle, are the magnitudes of the normal stress and shear stress components, respectively, acting on the rotated coordinate system. In other words, the circle is the locus of points that represent the state of stress on individual planes at all their orientations, where the axes represent the principal axes of the stress element.
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/c/c7/Mohr_Circle.svg/500px-Mohr_Circle.svg.png">
End of explanation
def mohr2d(S11=10, S12=0, S22=-5):
Plot Mohr circle for a 2D tensor
center = [(S11 + S22)/2.0, 0.0]
radius = np.sqrt((S11 - S22)**2/4.0 + S12**2)
Smin = center[0] - radius
Smax = center[0] + radius
print("Minimum Normal Stress: ", np.round(Smin,6))
print("Maximum Normal Stress: ", np.round(Smax, 6))
print("Average Normal Stress: ", np.round(center[0], 6))
print("Minimum Shear Stress: ", np.round(-radius, 6))
print("Maximum Shear Stress: ", np.round(radius, 6))
plt.figure()
circ = plt.Circle((center[0],0), radius, facecolor='#cce885', lw=3,
edgecolor='#5c8037')
plt.axis('image')
ax = plt.gca()
ax.add_artist(circ)
ax.set_xlim(Smin - .1*radius, Smax + .1*radius)
ax.set_ylim(-1.1*radius, 1.1*radius)
plt.plot([S22, S11], [S12, -S12], 'ko')
plt.plot([S22, S11], [S12, -S12], 'k')
plt.plot(center[0], center[1], 'o', mfc='w')
plt.text(S22 + 0.1*radius, S12, 'A')
plt.text(S11 + 0.1*radius, -S12, 'B')
plt.xlabel(r"$\sigma$", size=18)
plt.ylabel(r"$\tau$", size=18)
w = interact(mohr2d,
S11=(-100.,100.),
S12=(-100.,100.),
S22=(-100.,100.))
Explanation: Mohr circle for 2D stresses
End of explanation
def mohr3d(S11=90, S12=0, S13=95, S22=96, S23=0, S33=-50):
rPlot 3D Mohr circles.
S = np.array([[S11, S12, S13],
[S12, S22, S23],
[S13, S23, S33]])
S3, S2, S1 = eigvalsh(S)
R_maj = 0.5*(S1 - S3)
cent_maj = 0.5*(S1+S3)
R_min = 0.5*(S2 - S3)
cent_min = 0.5*(S2 + S3)
R_mid = 0.5*(S1 - S2)
cent_mid = 0.5*(S1 + S2)
plt.figure()
circ1 = plt.Circle((cent_maj,0), R_maj, facecolor='#cce885', lw=3,
edgecolor='#5c8037')
circ2 = plt.Circle((cent_min,0), R_min, facecolor='w', lw=3,
edgecolor='#15a1bd')
circ3 = plt.Circle((cent_mid,0), R_mid, facecolor='w', lw=3,
edgecolor='#e4612d')
plt.axis('image')
ax = plt.gca()
ax.add_artist(circ1)
ax.add_artist(circ2)
ax.add_artist(circ3)
ax.set_xlim(S3 - .1*R_maj, S1 + .1*R_maj)
ax.set_ylim(-1.1*R_maj, 1.1*R_maj)
plt.xlabel(r"$\sigma$", size=18)
plt.ylabel(r"$\tau$", size=18)
w = interact(mohr3d,
S11=(-100.,100.),
S12=(-100.,100.),
S13=(-100.,100.),
S22=(-100.,100.),
S23=(-100.,100.),
S33=(-100.,100.))
from IPython.core.display import HTML
def css_styling():
styles = open('./styles/custom_barba.css', 'r').read()
return HTML(styles)
css_styling()
Explanation: Mohr circle for 3D stresses
End of explanation |
849 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
人生苦短,我用python
python第三课
推荐一个python数据结构可视化工具:http
Step1: json格式
mysql数据库基本操作
命令行操作
Step2: 数据库管理工具
sequelpro 链接:http
Step3: mysql数据类型 http
Step4: 修改数据
Step5: 删除操作
Step6: 使用python去操作数据库
python 安装第三方库
1、 pip ; 举例: pip install pymysql
2、 conda; 举例: conda install pymysql
Step7: 游标
Step8: 查询
Step9: 插入操作
Step10: 删除操作
Step11: 更新操作
Step12: 捕捉异常
Step13: 遗留问题: 数据库回滚操作失败
Step15: 爬虫
python库
1、requests 用来获取页面内容
2、BeautifulSoup
安装
pip install reqeusts
pip install bs4 | Python Code:
import json
data_1 = "{'a': 1, 'b': 2, 'c': 3}"
data_2 = '{"a": 1, "b": 2, "c": 3}'
j_data = json.loads(data_2)
type(j_data)
with open('/Users/wangyujie/Desktop/data.json', 'r') as f:
j_data = json.load(f)
print(j_data)
Explanation: 人生苦短,我用python
python第三课
推荐一个python数据结构可视化工具:http://www.pythontutor.com/
课表
- Mysql数据库的基本操作
- 用python操作数据库
- 编写python爬虫并保存到数据库
数据库
我们平时说到的数据库,指的是 数据库管理系统
mysql数据库
MariaDB: MariaDB的目的是完全兼容MySQL,包括API和命令行,使之能轻松成为MySQL的代替品
关系型数据库
另外一种类型的数据库是:非关系型数据库。 比较流行的是:Mongodb, redis
End of explanation
# 链接数据库?
mysql -u root -p
# u 是用户名 p: 需要用密码登录数据库
# 查看数据库
show databases;
# 选择数据库
use database_name;
# 查看数据库中的table表
show tables;
# 查看表格的结构
desc tables;
# 查看表中的数据
select * from table_name;
# 查看数据并限制数量
select * from table_name limit 10;
Explanation: json格式
mysql数据库基本操作
命令行操作
End of explanation
3 转换为二进制 11 整数部分
0.4 转换为二进制 0.5*0 + 0.25*1 + 0.125*1
0 1 1 1 1 0 1
Explanation: 数据库管理工具
sequelpro 链接:http://www.sequelpro.com/
mysql与Excel的不同
| 姓名 | 性别 | 年龄 | 班级 | 考试 | 语文 | 数学 | 英语 | 物理 | 化学 | 生物 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 高海 | 男 | 18 | 高三一班 | 第一次模拟 | 90 | 126 | 119 | 75 | 59 | 89 |
| 高海 | 男 | 18 | 高三一班 | 第二次模拟 | 80 | 120 | 123 | 85 | 78 | 87 |
| 秦佳艺 | 女 | 18 | 高三二班 | 第一次模拟 | 78 | 118 | 140 | 89 | 80 | 78 |
| 秦佳艺 | 女 | 18 | 高三二班 | 第二次模拟 | 79 | 120 | 140 | 83 | 78 | 82 |
命令行操作数据库
创建数据库
create database Examination_copy;
删除数据库
drop database Examination_coopy;
指定字符集和校对集,创建数据库
create database Examination_copy default charset utf8mb4 collate utf8mb4_general_ci;
创建表格
CREATE TABLE class (
id int(11) unsigned NOT NULL AUTO_INCREMENT,
name varchar(80) NOT NULL,
PRIMARY KEY (id)
);
小数转二进制
End of explanation
insert into `class`(`id`, `name`)
values(1, '高一三班');
Explanation: mysql数据类型 http://www.runoob.com/mysql/mysql-data-types.html
插入数据
End of explanation
update `class` set `name` = '高一五班'
where `name` = '高一三班';
Explanation: 修改数据
End of explanation
delete from `class`
where `id` = 6;
Explanation: 删除操作
End of explanation
import MySQLdb
DATABASE = {
'host': '127.0.0.1', # 如果是远程数据库,此处为远程服务器的ip地址
'database': 'Examination',
'user': 'root',
'password': 'wangwei',
'charset': 'utf8mb4'
}
db = MySQLdb.connect(host='localhost', user='root', password='wangwei', db='Examination')
# 等价于
db = MySQLdb.connect('localhost', 'root', 'wangwei', 'Examination')
# 等价于
db = MySQLdb.connect(**DATABASE)
# db就代表是我们的数据库
Explanation: 使用python去操作数据库
python 安装第三方库
1、 pip ; 举例: pip install pymysql
2、 conda; 举例: conda install pymysql
End of explanation
cursor = db.cursor()
Explanation: 游标
End of explanation
sql = "select * from student where id <= 20 limit 4"
cursor.execute(sql)
results = cursor.fetchall()
for row in results:
print(row)
Explanation: 查询
End of explanation
sql = "insert into `class`(`name`) values('高一五班');"
cursor = db.cursor()
cursor.execute(sql)
cursor.execute(sql)
db.commit()
Explanation: 插入操作
End of explanation
sql = "delete from `class` where `name`='高一五班'"
cursor = db.cursor()
cursor.execute(sql)
db.commit()
Explanation: 删除操作
End of explanation
sql = "update `class` set `name`='高一十四班' where `id`=4;"
cursor = db.cursor()
cursor.execute(sql)
db.commit()
Explanation: 更新操作
End of explanation
a = 10
b = a + 'hello'
try:
a = 10
b = a + 'hello'
except TypeError as e:
print(e)
Explanation: 捕捉异常
End of explanation
try:
sql = "insert into `class`(`name`) values('高一十六班')"
cursor = db.cursor()
cursor.execute(sql)
error = 10 + 'sdfsdf'
db.commit()
except Exception as e:
print(e)
db.rollback()
Explanation: 遗留问题: 数据库回滚操作失败
End of explanation
import time
import MySQLdb
import requests
from bs4 import BeautifulSoup
# 此处为数据库配置文件,每个人的配置不同,因此需要同学们自己配置
DATABASE = {
'host': '127.0.0.1', # 如果是远程数据库,此处为远程服务器的ip地址
'database': '',
'user': '',
'password': '',
'charset': 'utf8mb4'
}
# 获取url下的页面内容,返回soup对象
def get_page(url):
responce = requests.get(url)
soup = BeautifulSoup(responce.text, 'lxml')
return soup
# 封装成函数,作用是获取列表页下面的所有租房页面的链接,返回一个链接列表
def get_links(link_url):
soup = get_page(link_url)
links_div = soup.find_all('div', class_="pic-panel")
links = [div.a.get('href') for div in links_div]
return links
def get_house_info(house_url):
soup = get_page(house_url)
price = soup.find('span', class_='total').text
unit = soup.find('span', class_='unit').text.strip()
house_info = soup.find_all('p')
area = house_info[0].text[3:]
layout = house_info[1].text[5:]
floor = house_info[2].text[3:]
direction = house_info[3].text[5:]
subway = house_info[4].text[3:]
community = house_info[5].text[3:]
location = house_info[6].text[3:]
create_time = house_info[7].text[3:]
agent = soup.find('a', class_='name LOGCLICK')
agent_name = agent.text
agent_id = agent.get('data-el')
evaluate = soup.find('div', class_='evaluate')
score, number = evaluate.find('span', class_='rate').text.split('/')
times = evaluate.find('span', class_='time').text[5:-1]
info = {
'价格': price,
'单位': unit,
'面积': area,
'户型': layout,
'楼层': floor,
'朝向': direction,
'发布时间': create_time,
'地铁': subway,
'小区': community,
'位置': location,
'经纪人名字': agent_name,
'经纪人id': agent_id
}
return info
def get_db(setting):
return MySQLdb.connect(**setting)
def insert(db, house):
values = "'{}',"* 10 + "'{}'"
sql_values = values.format(house['价格'],house['单位'],house['面积'],house['户型'],
house['楼层'],house['朝向'],house['地铁'],house['小区'],
house['位置'],house['经纪人名字'],house['经纪人id'])
sql =
insert into `house`(`price`, `unit`, `area`, `layout`, `floor`, `direction`,
`subway`, `community`, `location`, `agent_name`, `agent_id`)
values({})
.format(sql_values)
print(sql)
cursor = db.cursor()
cursor.execute(sql)
db.commit()
db = get_db(DATABASE)
links = get_links('https://bj.lianjia.com/zufang/')
for link in links:
time.sleep(2)
print('获取一个房子信息成功')
house = get_house_info(link)
insert(db, house)
Explanation: 爬虫
python库
1、requests 用来获取页面内容
2、BeautifulSoup
安装
pip install reqeusts
pip install bs4
End of explanation |
850 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 08c
Step1: Next, let's load the data. Write the path to your ml-100k.csv file in the cell below
Step2: Execute the cell below to load the CSV data into a pandas data frame indexed by the user_id field in the CSV file.
Step3: Exploratory data analysis
Let's start by computing some summary statistics about the data
Step4: As can be seen, the data consists of film ratings in the range [1, 5] for 1664 films. Some films have been rated by many users, but the vast majority have been rated by only a few (i.e. there are many missing values)
Step5: We'll need to replace the missing values with appropriate substitutions before we can build our model. One way to do this is to replace each instance where a user didn't see a film with the average rating of that film (although, there are others, e.g. the median or mode values). We can compute the average rating of each film via the mean method of the data frame
Step6: Next, let's substitute these values everywhere there is a missing value. With pandas, you can do this with the fillna method, as follows
Step7: Data modelling
Let's build a movie recommender using user-based collaborative filtering. For this, we'll need to build a model that can identify the most similar users to a given user and use that relationship to predict ratings for new movies. We can use $k$ nearest neighbours regression for this.
Before we build the model, let's specify ratings for some of the films in the data set. This gives us a target variable to fit our model to. The values below are just examples - feel free to add your own ratings or change the films.
Step8: Next, let's select the features to learn from. In user-based collaborative filtering, we need to identify the users that are most similar to us. Consequently, we need to transpose our data matrix (with the T attribute of the data frame) so that its columns (i.e. features) represent users and its rows (i.e. samples) represent films. We'll also need to select just the films that we specified above, as our target variable consists of these only.
Step9: Let's build a $k$ nearest neighbours regression model to see what improvement can be made over the dummy model
Step10: As can be seen, the $k$ nearest neighbours model is able to predict ratings to within ±0.88, with a standard deviation of 0.97. While this error is not small, it's not so large that it won't be useful. Further impovements can be made by filling the missing values in a different way or providing more ratings.
Making predictions
Now that we have a final model, we can make recommendations about films we haven't rated | Python Code:
%matplotlib inline
import pandas as pd
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import GridSearchCV, KFold, cross_val_predict
from sklearn.neighbors import KNeighborsRegressor
Explanation: Lab 08c: Recommender systems
Introduction
In this lab, you will build a simple movie recommender using $k$ nearest neighbours regression. At the end of the lab, you should be able to:
Replace missing values in a data set.
Create a $k$ nearest neighbours regression model.
Use the model to predict new values.
Measure the accuracy of the model.
Getting started
Let's start by importing the packages we'll need. This week, we're going to use the neighbors subpackage from scikit-learn to build $k$ nearest neighbours models.
End of explanation
path = 'data/ml-100k.csv'
Explanation: Next, let's load the data. Write the path to your ml-100k.csv file in the cell below:
End of explanation
df = pd.read_csv(path, index_col='user_id')
df.head()
Explanation: Execute the cell below to load the CSV data into a pandas data frame indexed by the user_id field in the CSV file.
End of explanation
stats = df.describe()
stats
Explanation: Exploratory data analysis
Let's start by computing some summary statistics about the data:
End of explanation
ax = stats.loc['count'].hist(bins=30)
ax.set(
xlabel='Number of ratings',
ylabel='Frequency'
);
Explanation: As can be seen, the data consists of film ratings in the range [1, 5] for 1664 films. Some films have been rated by many users, but the vast majority have been rated by only a few (i.e. there are many missing values):
End of explanation
average_ratings = df.mean()
average_ratings.head()
Explanation: We'll need to replace the missing values with appropriate substitutions before we can build our model. One way to do this is to replace each instance where a user didn't see a film with the average rating of that film (although, there are others, e.g. the median or mode values). We can compute the average rating of each film via the mean method of the data frame:
End of explanation
df = df.fillna(value=average_ratings)
Explanation: Next, let's substitute these values everywhere there is a missing value. With pandas, you can do this with the fillna method, as follows:
End of explanation
y = pd.Series({
'L.A. Confidential (1997)': 3.5,
'Jaws (1975)': 3.5,
'Evil Dead II (1987)': 4.5,
'Fargo (1996)': 5.0,
'Naked Gun 33 1/3: The Final Insult (1994)': 2.5,
'Wings of Desire (1987)': 5.0,
'North by Northwest (1959)': 5.0,
"Monty Python's Life of Brian (1979)": 4.5,
'Raiders of the Lost Ark (1981)': 4.0,
'Annie Hall (1977)': 5.0,
'True Lies (1994)': 3.0,
'GoldenEye (1995)': 2.0,
'Good, The Bad and The Ugly, The (1966)': 4.0,
'Empire Strikes Back, The (1980)': 4.0,
'Godfather, The (1972)': 4.5,
'Waterworld (1995)': 1.0,
'Blade Runner (1982)': 4.0,
'Seven (Se7en) (1995)': 3.5,
'Alien (1979)': 4.0,
'Free Willy (1993)': 1.0
})
Explanation: Data modelling
Let's build a movie recommender using user-based collaborative filtering. For this, we'll need to build a model that can identify the most similar users to a given user and use that relationship to predict ratings for new movies. We can use $k$ nearest neighbours regression for this.
Before we build the model, let's specify ratings for some of the films in the data set. This gives us a target variable to fit our model to. The values below are just examples - feel free to add your own ratings or change the films.
End of explanation
X = df.T.loc[y.index]
X.head()
Explanation: Next, let's select the features to learn from. In user-based collaborative filtering, we need to identify the users that are most similar to us. Consequently, we need to transpose our data matrix (with the T attribute of the data frame) so that its columns (i.e. features) represent users and its rows (i.e. samples) represent films. We'll also need to select just the films that we specified above, as our target variable consists of these only.
End of explanation
algorithm = KNeighborsRegressor()
parameters = {
'n_neighbors': [2, 5, 10, 15],
'weights': ['uniform', 'distance'],
'metric': ['manhattan', 'euclidean']
}
# Use inner CV to select the best model
inner_cv = KFold(n_splits=10, shuffle=True, random_state=0) # K = 10
clf = GridSearchCV(algorithm, parameters, cv=inner_cv, n_jobs=-1) # n_jobs=-1 uses all available CPUs = faster
clf.fit(X, y)
# Use outer CV to evaluate the error of the best model
outer_cv = KFold(n_splits=10, shuffle=True, random_state=0) # K = 10, doesn't have to be the same
y_pred = cross_val_predict(clf, X, y, cv=outer_cv)
# Print the results
print('Mean absolute error: %f' % mean_absolute_error(y, y_pred))
print('Standard deviation of the error: %f' % (y - y_pred).std())
ax = (y - y_pred).hist()
ax.set(
title='Distribution of errors for the nearest neighbours regression model',
xlabel='Error'
);
Explanation: Let's build a $k$ nearest neighbours regression model to see what improvement can be made over the dummy model:
End of explanation
predictions = pd.Series()
for film in df.columns:
if film in y.index:
continue # If we've already rated the film, skip it
predictions[film] = clf.predict(df.loc[:, [film]].T)[0]
predictions.sort_values(ascending=False).head(10)
Explanation: As can be seen, the $k$ nearest neighbours model is able to predict ratings to within ±0.88, with a standard deviation of 0.97. While this error is not small, it's not so large that it won't be useful. Further impovements can be made by filling the missing values in a different way or providing more ratings.
Making predictions
Now that we have a final model, we can make recommendations about films we haven't rated:
End of explanation |
851 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Custom regression models
Like for univariate models, it is possible to create your own custom parametric survival models. Why might you want to do this?
Create new / extend AFT models using known probability distributions
Create a piecewise model using domain knowledge about subjects
Iterate and fit a more accurate parametric model
lifelines has a very simple API to create custom parametric regression models. You only need to define the cumulative hazard function. For example, the cumulative hazard for the constant-hazard regression model looks like
Step2: Cure models
Suppose in our population we have a subpopulation that will never experience the event of interest. Or, for some subjects the event will occur so far in the future that it's essentially at time infinity. In this case, the survival function for an individual should not asymptically approach zero, but some positive value. Models that describe this are sometimes called cure models (i.e. the subject is "cured" of death and hence no longer susceptible) or time-lagged conversion models.
It would be nice to be able to use common survival models and have some "cure" component. Let's suppose that for individuals that will experience the event of interest, their survival distrubtion is a Weibull, denoted $S_W(t)$. For a random selected individual in the population, thier survival curve, $S(t)$, is | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
from lifelines.fitters import ParametricRegressionFitter
from autograd import numpy as np
from lifelines.datasets import load_rossi
class ExponentialAFTFitter(ParametricRegressionFitter):
# this class property is necessary, and should always be a non-empty list of strings.
_fitted_parameter_names = ['lambda_']
def _cumulative_hazard(self, params, t, Xs):
# params is a dictionary that maps unknown parameters to a numpy vector.
# Xs is a dictionary that maps unknown parameters to a numpy 2d array
beta = params['lambda_']
X = Xs['lambda_']
lambda_ = np.exp(np.dot(X, beta))
return t / lambda_
rossi = load_rossi()
# the below variables maps {dataframe columns, formulas} to parameters
regressors = {
# could also be: 'lambda_': rossi.columns.difference(['week', 'arrest'])
'lambda_': "age + fin + mar + paro + prio + race + wexp + 1"
}
eaf = ExponentialAFTFitter().fit(rossi, 'week', 'arrest', regressors=regressors)
eaf.print_summary()
class DependentCompetingRisksHazard(ParametricRegressionFitter):
Reference
--------------
Frees and Valdez, UNDERSTANDING RELATIONSHIPS USING COPULAS
_fitted_parameter_names = ["lambda1", "rho1", "lambda2", "rho2", "alpha"]
def _cumulative_hazard(self, params, T, Xs):
lambda1 = np.exp(np.dot(Xs["lambda1"], params["lambda1"]))
lambda2 = np.exp(np.dot(Xs["lambda2"], params["lambda2"]))
rho2 = np.exp(np.dot(Xs["rho2"], params["rho2"]))
rho1 = np.exp(np.dot(Xs["rho1"], params["rho1"]))
alpha = np.exp(np.dot(Xs["alpha"], params["alpha"]))
return ((T / lambda1) ** rho1 + (T / lambda2) ** rho2) ** alpha
fitter = DependentCompetingRisksHazard(penalizer=0.1)
rossi = load_rossi()
rossi["week"] = rossi["week"] / rossi["week"].max() # scaling often helps with convergence
covariates = {
"lambda1": rossi.columns.difference(['week', 'arrest']),
"lambda2": rossi.columns.difference(['week', 'arrest']),
"rho1": "1",
"rho2": "1",
"alpha": "1",
}
fitter.fit(rossi, "week", event_col="arrest", regressors=covariates, timeline=np.linspace(0, 2))
fitter.print_summary(2)
ax = fitter.plot()
ax = fitter.predict_survival_function(rossi.loc[::100]).plot(figsize=(8, 4))
ax.set_title("Predicted survival functions for selected subjects")
Explanation: Custom regression models
Like for univariate models, it is possible to create your own custom parametric survival models. Why might you want to do this?
Create new / extend AFT models using known probability distributions
Create a piecewise model using domain knowledge about subjects
Iterate and fit a more accurate parametric model
lifelines has a very simple API to create custom parametric regression models. You only need to define the cumulative hazard function. For example, the cumulative hazard for the constant-hazard regression model looks like:
$$
H(t, x) = \frac{t}{\lambda(x)}\ \lambda(x) = \exp{(\vec{\beta} \cdot \vec{x}^{\,T})}
$$
where $\beta$ are the unknowns we will optimize over.
Below are some example custom models.
End of explanation
from autograd.scipy.special import expit
class CureModel(ParametricRegressionFitter):
_scipy_fit_method = "SLSQP"
_scipy_fit_options = {"ftol": 1e-10, "maxiter": 200}
_fitted_parameter_names = ["lambda_", "beta_", "rho_"]
def _cumulative_hazard(self, params, T, Xs):
c = expit(np.dot(Xs["beta_"], params["beta_"]))
lambda_ = np.exp(np.dot(Xs["lambda_"], params["lambda_"]))
rho_ = np.exp(np.dot(Xs["rho_"], params["rho_"]))
sf = np.exp(-(T / lambda_) ** rho_)
return -np.log((1 - c) + c * sf)
cm = CureModel(penalizer=0.0)
rossi = load_rossi()
covariates = {"lambda_": rossi.columns.difference(['week', 'arrest']), "rho_": "1", "beta_": 'fin + 1'}
cm.fit(rossi, "week", event_col="arrest", regressors=covariates, timeline=np.arange(250))
cm.print_summary(2)
cm.predict_survival_function(rossi.loc[::100]).plot(figsize=(12,6))
# what's the effect on the survival curve if I vary "age"
fig, ax = plt.subplots(figsize=(12, 6))
cm.plot_covariate_groups(['age'], values=np.arange(20, 50, 5), cmap='coolwarm', ax=ax)
Explanation: Cure models
Suppose in our population we have a subpopulation that will never experience the event of interest. Or, for some subjects the event will occur so far in the future that it's essentially at time infinity. In this case, the survival function for an individual should not asymptically approach zero, but some positive value. Models that describe this are sometimes called cure models (i.e. the subject is "cured" of death and hence no longer susceptible) or time-lagged conversion models.
It would be nice to be able to use common survival models and have some "cure" component. Let's suppose that for individuals that will experience the event of interest, their survival distrubtion is a Weibull, denoted $S_W(t)$. For a random selected individual in the population, thier survival curve, $S(t)$, is:
$$
\begin{align}
S(t) = P(T > t) &= P(\text{cured}) P(T > t\;|\;\text{cured}) + P(\text{not cured}) P(T > t\;|\;\text{not cured}) \
&= p + (1-p) S_W(t)
\end{align}
$$
Even though it's in an unconvential form, we can still determine the cumulative hazard (which is the negative logarithm of the survival function):
$$ H(t) = -\log{\left(p + (1-p) S_W(t)\right)} $$
End of explanation |
852 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Flourinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Reprenstation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csiro-bom', 'sandbox-2', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: CSIRO-BOM
Source ID: SANDBOX-2
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:56
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Reprenstation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation methodUo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-cloumns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propogation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propogation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
853 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Train and valid set NLL trace
Step1: Visualising first layer weights
Quite nice features appear to have been learned with some kernels appearing to have been learned at various rotations. Some quite small scale features appear to have been learned too.
Step2: Learning rate
Initially linear decay learning rate schedule used with monitor based adjuster. Turns out these don't play well together as the linear decay schedule overwrites any adjusments by monitor based extension at the next epoch. After resume initial learning rate manually reduced and learning rate schedule set exclusively with monitor based adjuster.
Step3: Update norm monitoring
Ratio of update norms to parameter norms across epochs for different layers plotted to give idea of how learning rate schedule performing. | Python Code:
tr = np.array(model.monitor.channels['valid_y_y_1_nll'].time_record) / 3600.
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(111)
ax1.plot(model.monitor.channels['valid_y_y_1_nll'].val_record)
ax1.plot(model.monitor.channels['train_y_y_1_nll'].val_record)
ax1.set_xlabel('Epochs')
ax1.legend(['Valid', 'Train'])
ax1.set_ylabel('NLL')
ax1.set_ylim(0., 5.)
ax1.grid(True)
ax2 = ax1.twiny()
ax2.set_xticks(np.arange(0,tr.shape[0],20))
ax2.set_xticklabels(['{0:.2f}'.format(t) for t in tr[::20]])
ax2.set_xlabel('Hours')
print("Minimum validation set NLL {0}".format(min(model.monitor.channels['valid_y_y_1_nll'].val_record)))
Explanation: Train and valid set NLL trace
End of explanation
pv = get_weights_report(model=model)
img = pv.get_img()
img = img.resize((8*img.size[0], 8*img.size[1]))
img_data = io.BytesIO()
img.save(img_data, format='png')
display(Image(data=img_data.getvalue(), format='png'))
Explanation: Visualising first layer weights
Quite nice features appear to have been learned with some kernels appearing to have been learned at various rotations. Some quite small scale features appear to have been learned too.
End of explanation
plt.plot(model.monitor.channels['learning_rate'].val_record)
Explanation: Learning rate
Initially linear decay learning rate schedule used with monitor based adjuster. Turns out these don't play well together as the linear decay schedule overwrites any adjusments by monitor based extension at the next epoch. After resume initial learning rate manually reduced and learning rate schedule set exclusively with monitor based adjuster.
End of explanation
h1_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h1_W_kernel_norm_mean'].val_record])
h1_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h1_kernel_norms_mean'].val_record])
plt.plot(h1_W_norms / h1_W_up_norms)
plt.show()
plt.plot(model.monitor.channels['valid_h1_kernel_norms_mean'].val_record)
plt.plot(model.monitor.channels['valid_h1_kernel_norms_max'].val_record)
h2_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h2_W_kernel_norm_mean'].val_record])
h2_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h2_kernel_norms_mean'].val_record])
plt.plot(h2_W_norms / h2_W_up_norms)
plt.show()
plt.plot(model.monitor.channels['valid_h2_kernel_norms_mean'].val_record)
plt.plot(model.monitor.channels['valid_h2_kernel_norms_max'].val_record)
h3_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h3_W_kernel_norm_mean'].val_record])
h3_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h3_kernel_norms_mean'].val_record])
plt.plot(h3_W_norms / h3_W_up_norms)
plt.show()
plt.plot(model.monitor.channels['valid_h3_kernel_norms_mean'].val_record)
plt.plot(model.monitor.channels['valid_h3_kernel_norms_max'].val_record)
h4_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h4_W_col_norm_mean'].val_record])
h4_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h4_col_norms_mean'].val_record])
plt.plot(h4_W_norms / h4_W_up_norms)
plt.show()
plt.plot(model.monitor.channels['valid_h4_col_norms_mean'].val_record)
plt.plot(model.monitor.channels['valid_h4_col_norms_max'].val_record)
h5_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h5_W_col_norm_mean'].val_record])
h5_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h5_col_norms_mean'].val_record])
plt.plot(h5_W_norms / h5_W_up_norms)
plt.show()
plt.plot(model.monitor.channels['valid_h5_col_norms_mean'].val_record)
plt.plot(model.monitor.channels['valid_h5_col_norms_max'].val_record)
Explanation: Update norm monitoring
Ratio of update norms to parameter norms across epochs for different layers plotted to give idea of how learning rate schedule performing.
End of explanation |
854 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The problem
Step1: Extract lon, lat variables from vgrid2 and u, v variables from vbaro.
The goal is to split the joint variables into individual CF compliant phenomena.
Step2: Using iris to create the CF object.
NOTE
Step3: Now the phenomena.
NOTE
Step4: Join the individual CF phenomena into one dataset.
Step5: Save the CF-compliant file! | Python Code:
from netCDF4 import Dataset
#url = ('http://geoport.whoi.edu/thredds/dodsC/usgs/data2/rsignell/gdrive/'
# 'nsf-alpha/Data/MIT_MSEAS/MSEAS_Tides_20160317/mseas_tides_2015071612_2015081612_01h.nc')
url = ('/usgs/data2/rsignell/gdrive/'
'nsf-alpha/Data/MIT_MSEAS/MSEAS_Tides_20160317/mseas_tides_2015071612_2015081612_01h.nc')
nc = Dataset(url)
Explanation: The problem: CF compliant readers cannot read HOPS dataset directly.
The solution: read with the netCDF4-python raw interface and create a CF object from the data.
NOTE: Ideally this should be a nco script that could be run as a CLI script and fix the files.
Here I am using Python+iris. That works and could be written as a CLI script too.
The main advantage is that it takes care of the CF boilerplate.
However, this approach is to "heavy-weight" to be applied in many variables and files.
End of explanation
vtime = nc['time']
coords = nc['vgrid2']
vbaro = nc['vbaro']
Explanation: Extract lon, lat variables from vgrid2 and u, v variables from vbaro.
The goal is to split the joint variables into individual CF compliant phenomena.
End of explanation
import iris
iris.FUTURE.netcdf_no_unlimited = True
longitude = iris.coords.AuxCoord(coords[:, :, 0],
var_name='vlat',
long_name='lon values',
units='degrees')
latitude = iris.coords.AuxCoord(coords[:, :, 1],
var_name='vlon',
long_name='lat values',
units='degrees')
# Dummy Dimension coordinate to avoid default names.
# (This is either a bug in CF or in iris. We should not need to do this!)
lon = iris.coords.DimCoord(range(866),
var_name='x',
long_name='lon_range',
standard_name='longitude')
lat = iris.coords.DimCoord(range(1032),
var_name='y',
long_name='lat_range',
standard_name='latitude')
Explanation: Using iris to create the CF object.
NOTE: ideally lon, lat should be DimCoord like time and not AuxCoord,
but iris refuses to create 2D DimCoord. Not sure if CF enforces that though.
First the Coordinates.
FIXME: change to a full time slice later!
End of explanation
vbaro.shape
import numpy as np
u_cubes = iris.cube.CubeList()
v_cubes = iris.cube.CubeList()
for k in range(vbaro.shape[0]): # vbaro.shape[0]
time = iris.coords.DimCoord(vtime[k],
var_name='time',
long_name=vtime.long_name,
standard_name='time',
units=vtime.units)
u = vbaro[k, :, :, 0]
u_cubes.append(iris.cube.Cube(np.broadcast_to(u, (1,) + u.shape),
units=vbaro.units,
long_name=vbaro.long_name,
var_name='u',
standard_name='barotropic_eastward_sea_water_velocity',
dim_coords_and_dims=[(time, 0), (lon, 1), (lat, 2)],
aux_coords_and_dims=[(latitude, (1, 2)),
(longitude, (1, 2))]))
v = vbaro[k, :, :, 1]
v_cubes.append(iris.cube.Cube(np.broadcast_to(v, (1,) + v.shape),
units=vbaro.units,
long_name=vbaro.long_name,
var_name='v',
standard_name='barotropic_northward_sea_water_velocity',
dim_coords_and_dims=[(time, 0), (lon, 1), (lat, 2)],
aux_coords_and_dims=[(longitude, (1, 2)),
(latitude, (1, 2))]))
Explanation: Now the phenomena.
NOTE: You don't need the broadcast_to trick if saving more than 1 time step.
Here I just wanted the single time snapshot to have the time dimension to create a full example.
End of explanation
u_cube = u_cubes.concatenate_cube()
v_cube = v_cubes.concatenate_cube()
cubes = iris.cube.CubeList([u_cube, v_cube])
Explanation: Join the individual CF phenomena into one dataset.
End of explanation
iris.save(cubes, 'hops.nc')
!ncdump -h hops.nc
Explanation: Save the CF-compliant file!
End of explanation |
855 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Another language
NOTE before starting
To change the notebook which is launched by default by the "programming with Jupyter" tab in puppet-master interface
Go to My Documents/ Poppy source-code/ puppet-master/ bouteillederouge.py
Find jupyter() function (around line 191)
Change the value of the variable default_notebook
This notebook will guide you for connect another programmation language (not beginner tutorial)
What you will see in this notebook
Step1: Second\
Allow python to use url import request and to show HTML content inline import HTML
Step2: 2.a. Access to API to get values
Each url execute an action on the robot (in the native langage) and return a value.
All applications able to send a url request can use the snap API to interact with the robot (including your web browser). Be careful, the format of each url is different as well as the type of returned value.
For exemple
Step3: 2.b. Get value - with single input -
Some urls have variables. They are identified by the symbols < and >
For exemple in url
Step4: http
Step5: 2.b Get value - with multiple inputs -
Some urls have multiple input. They are identified by the letter s For exemple in url, where motor variable have an s
Step6: 2.c. Set value - with single input -
For these previous urls, the Snap API only returns the requested value(s). The following urls performs an action on the robot and return always 'Done!'
Step7: 2.c. Set value - with multiple inputs -
http
Step8: 2.d. Add checking inputs and use your function
Step9: Another URL
- http
Step10: Recap
Step11: 2*.b. Get request
Step12: 2*.c. Post request | Python Code:
from pypot.creatures import PoppyErgoJr
poppy = PoppyErgoJr(use_http=True, use_snap=True)
# If you want to use another robot (humanoid, torso, ...) adapt this code
#from pypot.creatures import PoppyTorso
#poppy = PoppyTorso(use_http=True, use_snap=True)
# If you want to use the robot with the camera unpluged,
# you have to pass the argument camera='dummy
#poppy = PoppyErgoJr(camera='dummy', use_http=True, use_snap=True)
# If you want to use a simulated robot in the 3D web viewer aka "poppy simu"
# you have to pass the argument simulator='poppy-simu'
#poppy = PoppyErgoJr(simulator='poppy-simu', use_http=True, use_snap=True)
Explanation: Another language
NOTE before starting
To change the notebook which is launched by default by the "programming with Jupyter" tab in puppet-master interface
Go to My Documents/ Poppy source-code/ puppet-master/ bouteillederouge.py
Find jupyter() function (around line 191)
Change the value of the variable default_notebook
This notebook will guide you for connect another programmation language (not beginner tutorial)
What you will see in this notebook:
Understand how your robot is programmed
Code through API
snap server
1. Access to API to get values
2. Get value - with single input - and - with multiple input -
3. Set value - with single input - and - with multiple input -
4. Add checking inputs and use your function
http server
1. Access to API
2. Get request
3. Post request
Add new entries in API
Add tab in puppet-master interface
1. Understand how your robot is programmed
Code source
The native language of Poppy robots is the python language. In python, all of the robot's functionalities are available.\
Check notebooks «Discover your Poppy robot» and «Benchmark your Poppy robot» (in folder My Documents/ python notebook) for more information.\
You can also read the documentation for even more information.
API
An application programming interface (API) is a computing interface which defines interactions between multiple software intermediaries.\
Show Wikipedia for more informations.
What interests us here is how to use a language other than Python.
On the Poppy robot, you can access to the API via two server: one named "http", the other named "snap".\
These two server allow you to control your robot through some url requests
Http server\
You can access to this server via the 8080 port (value by default).\
With this server, you can use the HTTP request of GET and POST method.\
All valid urls are visible at the root url: http://poppy.local:8080/
Snap server\
You can access to this server via the 6969 port (value by default).\
With this server, you can use the HTTP request of only GET method.\
All valid urls are visible at the root url: http://poppy.local:6969/
Snap! Build your own Blocks and other languages
Snap! (formerly BYOB) is a visual, drag-and-drop programming language. It is an extended reimplementation of Scratch (a project of the Lifelong Kindergarten Group at the MIT Media Lab) that allows you to Build Your Own Blocks.\
Show Wikipedia and/ or Snap! website for more informations. What interests us here is how Snap! use the robot API to control it.
Snap! allows you to create blocks from initial blocks. These blocks correspond to functions. Among these, one of them allows you to send requests urls.
On this basis, we have built a series of blocks to control the robot via the API. All these blocks have in common to end with the emission of the url request.
Here, in python, we will see how to use these urls. The methodology will be the same for another language.
An example applied is that of poppy-monitor (primitive manager) and that of poppy-viewer (web viewer). Both are coded in JavaScript. Check the source code, respectively here: My Documents/ Poppy Source-code/ poppy-monitor and here: My Documents/ Poppy Source-code/ poppy-viewer
2. Code through API (snap server)
First\
Launch an instance of the robot with http and/or snap server.\
You can do this in two différente way:
launch API with puppet-master interface: Clicking on start API button in «what happend?» tab.\
By default, API auto-starting is enable (and there is no need to click on start API button), change this option in «settings» tab.
launch an instance of the robot directly in this notebook (show cell below).
End of explanation
import requests
from IPython.core.display import HTML
#Testing Snap API access
valid_url_for_snap_server=''
try:
response = requests.get('http://poppy.local:6969/')
if response.status_code==200:
valid_url_for_snap_server=response.text
except:
print('http://poppy.local:6969/ is unreachable')
HTML(valid_url_for_snap_server)
Explanation: Second\
Allow python to use url import request and to show HTML content inline import HTML
End of explanation
def to_api(url, hostname='poppy.local', port='6969'):
url_root='http://{}:{}/'.format(hostname, port)
print('> call:',url_root+url)
try:
response = requests.get(url_root+url)
if response.status_code==200:
return response.text
else:
return 'ERROR'
except:
print('{} is unreachable'.format(url_root))
def get_ip():
return to_api('ip/')
def get_all_positions():
return [float(val) for val in to_api('motors/get/positions').split(';')]
print(get_ip())
print(get_all_positions())
Explanation: 2.a. Access to API to get values
Each url execute an action on the robot (in the native langage) and return a value.
All applications able to send a url request can use the snap API to interact with the robot (including your web browser). Be careful, the format of each url is different as well as the type of returned value.
For exemple:
- http://poppy.local:6969/ip/ return courant ip of the robot as a string
- http://poppy.local:6969/motors/get/positions return all motors positions as a string split by ;
- http://poppy.local:6969/frame.png return the last frame take by the camera as png
From there, we can create a function simplifying the emission of the url request. In Snap!, we would speak of blocks, and not of function, the idea is the same for another language.
End of explanation
def get_motors_alias():
return to_api('motors/alias').split('/')
def get_motors_name(alias='motors'):
return to_api('motors/'+alias).split('/')
print(get_motors_alias())
print(get_motors_name())
print('these motors: {}, are in group of motors named: {}.'.format(
get_motors_name(get_motors_alias()[0]),
get_motors_alias()[0])
)
Explanation: 2.b. Get value - with single input -
Some urls have variables. They are identified by the symbols < and >
For exemple in url :
- http://poppy.local:6969/motor/<alias>\
Replace <alias> by the name of the motor group
End of explanation
def get_register(motor_id, register):
url='motor/m{}/get/{}'.format(motor_id, register)
return to_api(url)
def get_register_list(motor_id=1):
out=get_register(motor_id, 'registers')
if 'ERROR' in out: return out
else: return eval(out) #type == list
def get_position(motor_id):
out=get_register(motor_id, 'present_position')
if 'ERROR' in out: return out
else: return float(out)
def get_compliant(motor_id):
out=get_register(motor_id, 'compliant')
if 'ERROR' in out: return out
else: return bool(out)
def get_color(motor_id):
return get_register(motor_id, 'led') #type == str
print('all avalible register are: {}'.format(', '.join(get_register_list())))
print('m1 is in position {}°'.format(get_register(1, 'present_position')))
print('m1 is in position {}°'.format(get_position(1)))
print('m1 compliant register is {}'.format(get_register(1, 'compliant')))
print('m1 compliant register is {}'.format(get_compliant(1)))
print('led of m1 is {}'.format(get_register(1, 'led')))
print('led of m1 is {}'.format(get_color(1)))
#print('motor sensitivity {}'.format([get_position(2)==get_position(2) for _ in range(10)]))
Explanation: http://poppy.local:6969/motor/<motor>/get/<register>\
Replace <motor> by the name of the motor, and <register> by name of register:
http://poppy.local:6969/motor/m1/get/present_position return the value of present_position for m1 motor
http://poppy.local:6969/motor/m5/get/present_speed return the value of present_speed for m5 motor
http://poppy.local:6969/motor/m3/get/led return the value of led for m3 motor
End of explanation
def get_registers(motors_id, register):
if type(motors_id)!=list: return 'Type ERROR'
targets=[]
for motor_id in motors_id:
targets.append('m'+str(motor_id))
url='motors/{}/get/{}'.format(';'.join(targets), register)
return to_api(url).split(';')
def get_positions(motors_id):
out=get_registers(motors_id, 'present_position')
if 'ERROR' in out: return out
else: return [float(val) for val in out]
def get_compliants(motors_id):
out=get_registers(motors_id, 'compliant')
if 'ERROR' in out: return out
else: return [bool(val) for val in out]
def get_colors(motors_id):
out=get_registers(motors_id, 'led')
if 'ERROR' in out: return out
else: return [str(val) for val in out]
print('m1 and m2 are respectively in position {}'.format(get_registers([1,2], 'present_position')))
print('m1 and m2 are respectively in position {}'.format(get_positions([1,2])))
print('m1 and m2 compliant register are respectively {}'.format(get_registers([1,2], 'compliant')))
print('m1 and m2 compliant register are respectively {}'.format(get_compliants([1,2])))
print('led of m1 and m2 are respectively {}'.format(get_registers([1,2], 'led')))
print('led of m1 and m2 are respectively {}'.format(get_colors([1,2])))
Explanation: 2.b Get value - with multiple inputs -
Some urls have multiple input. They are identified by the letter s For exemple in url, where motor variable have an s:
http://poppy.local:6969/motors/<motors>/get/<register>\
Replace <motors> by the name of one or multiple motors (split by ;), and <register> by name of register:
http://poppy.local:6969/motors/m1;m2;m3/get/present_temperature return the value of present_temperature for m1, m2 and m3 motors split by ;
http://poppy.local:6969/motors/m1;m5/get/present_load return the value of present_load for m1 m5 motors
End of explanation
def set_register(motor_id, register, value):
url='motor/m{}/set/{}/{} '.format(motor_id, register, value)
return to_api(url)
def set_position(motor_id, position):
return set_register(motor_id, 'goal_position', position)
def set_compliant(motor_id, state):
return set_register(motor_id, 'compliant', state)
def set_color(motor_id, color):
return set_register(motor_id, 'led', color)
#note: the motor must be in the non-compliant state to be control it in position
print('set m1 compliant state to false: {}'.format(set_compliant(1,0)))
print('set m1 position to 15°: {}'.format(set_position(1,15)))
print('set m1 compliant state to true: {}'.format(set_compliant(1,1)))
Explanation: 2.c. Set value - with single input -
For these previous urls, the Snap API only returns the requested value(s). The following urls performs an action on the robot and return always 'Done!':
- http://poppy.local:6969/motor/<motor>/set/<register>/<value>
- http://poppy.local:6969/motor/m1/set/goal_position/15 motor m1 started from where it was and arrived in position 15°
- http://poppy.local:6969/motor/m1/set/goal_position/-15 motor m1 started from where it was and arrived in position -15°
End of explanation
def valid_registers_input(motors_id, registers, values):
if type(motors_id)!=list or type(registers)!=list or type(values)!=list:
return 'Type ERROR'
if len(motors_id) != len(registers) or len(motors_id) != len(values):
return 'Size ERROR'
return motors_id, registers, values
def set_registers(motors_id, registers, values):
registers_input = valid_registers_input(motors_id, registers, values)
if 'ERROR' in registers_input:
return registers_input
else:
motors_id, registers, values = registers_input
cmd=[]
for i, motor_id in enumerate(motors_id):
cmd.append('m{}:{}:{}'.format(motor_id, registers[i], values[i]))
cmd=';'.join(cmd)
url='motors/set/registers/'+cmd
return to_api(url)
def set_positions(motors_id, positions):
return set_registers(motors_id, ['goal_position']*len(motors_id), positions)
def set_compliants(motors_id, states):
return set_registers(motors_id, ['compliant']*len(motors_id), states)
def set_colors(motors_id, colors):
return set_registers(motors_id, ['led']*len(motors_id), colors)
print(set_compliants([1,2],[1,1]))
print(set_registers([1,1,2,3],['led', 'goal_position', 'goal_position', 'led'],['yellow', 45, 25, 'blue']))
print(set_positions([1],[0]))
print(set_compliants([1,2],[0,0]))
print(set_colors([1,2,3],['green']*3))
Explanation: 2.c. Set value - with multiple inputs -
http://poppy.local:6969/motors/set/registers/<motors_register_value>\
Replace <motors_register_value> by the name of motors, name of register, value to give to the register (split by : like this: m1:led:pink) then iterate for each moteurs (split by ; like this: m1:led:pink;m2:led:pink)
http://poppy.local:6969/motors/set/registers/m1:present_position:15;m1:led:green;m6:led:yellow \
motor m1 started from where it was and arrived in position 15° ; motor m1 lit green ; motor m6 lit yellow
End of explanation
'''
prepare input for set_register function:
accept:
python list of values,
str list of values (split by space),
int, float, bool
return: python list of str values
'''
def set_type(value):
if type(value)==str:
value=value.split(' ')
elif type(value) in (float, int, bool):
value=[str(value)]
elif type(value)!=list:
return 'Type ERROR'
else:
for i, v in enumerate(value): value[i]=str(v)
return value
'''
re-write valid_registers_input function
valid_registers_input is use by set_registers function
add set_type function
add check size, accept one value for default for each motor
return couple of tree values, each is a list of str values
'''
number_of_all_motors=len(get_motors_name())
all_valid_register=get_register_list()
def valid_registers_input(motors_id, registers, values):
motors_id, registers, values = set_type(motors_id), set_type(registers), set_type(values)
if 'ERROR' in (motors_id or registers or values):
return 'Type ERROR'
if len(registers) == 1:
registers=registers*len(motors_id)
elif len(motors_id) != len(registers):
return 'Size ERROR'
if len(values) == 1:
values=values*len(motors_id)
elif len(motors_id) != len(values):
return 'Size ERROR'
number_of_motors=number_of_all_motors
valid_register=all_valid_register
#assume that value of values variable are check before
for i, motor_id in enumerate(motors_id):
if int(motor_id) <1 or int(motor_id) > number_of_motors or registers[i] not in valid_register:
return 'Value ERROR'
return motors_id, registers, values
'''
No need to re-write set_registers function
but get_registers function need to:
add set_type function to avoid error
add check values
'''
def get_registers(motors_id, register):
motors_id=set_type(motors_id)
if 'ERROR' in motors_id: return motors_id
valid_register=all_valid_register
if register not in valid_register: return 'Value ERROR'
number_of_motors=number_of_all_motors
targets=[]
for i, motor_id in enumerate(motors_id):
if int(motor_id) <1 or int(motor_id) > number_of_motors:
return 'Value ERROR'
else:
targets.append('m'+motor_id)
url='motors/{}/get/{}'.format(';'.join(targets), register)
return to_api(url).split(';')
'''
re-write function
add check value
'''
def set_positions(motors_id, positions):
positions=set_type(positions)
if 'ERROR' in positions: return positions
for position in positions:
if float(position) < -90 or float(position) > 90:
return 'Value ERROR'
return set_registers(motors_id, 'goal_position', positions)
def set_compliants(motors_id, states):
states=set_type(states)
if 'ERROR' in states: return states
for state in states:
if state == 'True': state='1'
elif state == 'False': state='0'
elif state not in ('0', '1'): return 'Value ERROR'
return set_registers(motors_id, 'compliant', states)
def set_colors(motors_id, colors):
colors=set_type(colors)
if 'ERROR' in colors: return colors
for color in colors:
if color not in ['red','green','pink','blue','yellow','off']:
return 'Value ERROR'
return set_registers(motors_id, 'led', colors)
#before syntaxe, work always + check values
print(set_compliants([1,2],[0,0]))
print(set_registers([1,1,2,3],['led', 'goal_position', 'goal_position', 'led'],['yellow', 45, 25, 'blue']))
print(set_positions([1],[0]))
print(set_compliants([1,2],[1,1]))
print(set_colors([1,2,3],['green']*3))
# + more flxible syntaxe
print(set_compliants('1 2',0))
print(set_registers('1 1 2 3','led goal_position goal_position led','yellow 45 25 blue'))
print(set_positions(1,0))
print(set_compliants([1,2],1))
print(set_colors('1 2 3','green'))
#use your function
import time
for i in range(1,7):
print(set_colors(i,'pink'))
print(get_colors(i))
time.sleep(0.5)
print(set_colors(i,'off'))
print(get_colors(i))
#time.sleep(0.5)
for _ in range(2):
for c in ['red','green','pink','blue','yellow']:
print(set_colors('1 2 3 4 5 6', c))
print(get_colors('1 2 3 4 5 6'))
time.sleep(0.5)
print(set_colors('1 2 3 4 5 6','off'))
print(get_colors('1 2 3 4 5 6'))
time.sleep(0.5)
set_compliants('1 2 3 4 5 6', 0)
set_positions('1 2 3 4 5 6', '0 10 -15 10 0 0')
time.sleep(1.5)
print('motors in position: ', get_positions([1, 2, 3, 4, 5, 6]))
set_positions('1 2 3 4 5 6', -10)
time.sleep(.5)
print('motors in position: ', get_positions('1 2 3 4 5 6'))
set_compliants('1 2 3 4 5 6', 1)
time.sleep(.5)
for i in range(1,7): print('m{} in position {}°'.format(i, get_positions(i)))
Explanation: 2.d. Add checking inputs and use your function
End of explanation
number_of_all_motors=len(get_motors_name())
def valid_goto_input(motors_id, positions, durations):
motors_id, positions, durations = set_type(motors_id), set_type(positions), set_type(durations)
if 'ERROR' in (motors_id or positions or durations):
return 'Type ERROR'
if len(positions) == 1:
positions=positions*len(motors_id)
elif len(motors_id) != len(positions):
return 'Size ERROR'
if len(durations) == 1:
durations=durations*len(motors_id)
elif len(durations) != len(durations):
return 'Size ERROR'
number_of_motors=number_of_all_motors
for i, motor_id in enumerate(motors_id):
if int(motor_id) <1 or int(motor_id) > number_of_motors:
return 'Value ERROR'
if float(positions[i]) < -90 or float(positions[i]) > 90:
return 'Value ERROR'
if float(durations[i]) < 0:
return 'Value ERROR'
return motors_id, positions, durations
def set_goto(motors_id, positions, durations):
goto_input = valid_goto_input(motors_id, positions, durations)
if 'ERROR' in goto_input:
return goto_input
else:
motors_id, positions, durations = goto_input
cmd=[]
for i, motor_id in enumerate(motors_id):
cmd.append('m{}:{}:{}'.format(motor_id, positions[i], durations[i]))
cmd=';'.join(cmd)
url='motors/set/goto/'+cmd
return to_api(url)
print(set_compliants('1 2 3 4 5 6', False))
print(set_goto('1 2 3 4 5 6', 10, 1))
time.sleep(1.5)
print(set_goto('1 2 3 4 5 6', -10, 2))
time.sleep(2.5)
print(set_compliants('1 2 3 4 5 6', True))
Explanation: Another URL
- http://poppy.local:6969/motors/set/goto/<motors_position_duration>
- http://poppy.local:6969/motors/set/goto/m1:0:1;m2:15:2;m6:45:1.5 \
motor m1 started from where it was and arrived in position 0° in 1 seconde ;\
motor m2 started from where it was and arrived in position 15° in 2 secondes ;\
motor m6 started from where it was and arrived in position 45° in 1.5 secondes
End of explanation
def get_api(url, hostname='poppy.local', port='8080'):
url_root='http://{}:{}/'.format(hostname, port)
print('> get:',url_root+url)
try:
response = requests.get(url_root+url)
if response.status_code==200:
return response
else:
return 'ERROR {}!'.format(response.status_code)
except:
print('{} is unreachable'.format(url_root))
def post_api(url, value, hostname='poppy.local', port='8080'):
url_root='http://{}:{}/'.format(hostname, port)
print('> post: url=', url_root+url, ' ; value=', value)
try:
response = requests.post(url_root+url, json=value)
if response.status_code==200:
return 'Done!'
else:
return 'ERROR {}!'.format(response.status_code)
except:
print('{} is unreachable'.format(url_root))
HTML(get_api('').text)
Explanation: Recap: function define here:
to_api(url)
get_ip() \
get_all_positions() \
get_motors_alias() \
get_motors_name()
get_register(motor_id, register) \
get_register_list() \
get_position(motor_id) \
get_compliant(motor_id) \
get_color(motor_id)
get_registers(motors_id, register) \
get_positions(motors_id) \
get_compliants(motors_id) \
get_colors(motors_id)
set_register(motor_id, register, value) \
set_position(motor_id, value) \
set_compliant(motor_id, value) \
set_color(motor_id, value)
set_type(value)
valid_registers_input(motors_id, registers, values) \
set_registers(motors_id, registers, values) \
set_positions(motors_id values) \
set_compliants(motors_id, values) \
set_colors(motors_id, values)
valid_goto_input(motors_id, positions, durations) \
set_goto(motors_id, positions, durations)
2*. Code through API (http server)
2*.a. Access to API
End of explanation
def get_motor_list(alias='motors'):
url='motor/{}/list.json'.format(alias)
return get_api(url).json()
def get_motor_register_list(motor_id=1):
url = 'motor/m{}/register/list.json'.format(motor_id)
return get_api(url).json()
def get_motor_register_value(motor_id, register):
url = 'motor/m{}/register/{}'.format(motor_id, register)
return get_api(url).json()[register]
print(get_motor_list())
print(get_motor_register_list())
print(get_motor_register_value(1, 'name'))
print(get_motor_register_value(1, 'present_position'))
print(get_motor_register_value(1, 'compliant'))
print(get_motor_register_value(1, 'angle_limit'))
print(get_motor_register_value(1, 'led'))
def get_sensor_list():
return get_api('sensor/list.json').json()
def get_sensor_register_list(sensor):
url = 'sensor/{}/register/list.json'.format(sensor)
return get_api(url).json()
def get_sensor_register_value(sensor, register):
url = 'sensor/{}/register/{}'.format(sensor, register)
return get_api(url).json()[register]
print(get_sensor_list())
print(get_sensor_register_list('camera'))
print(get_sensor_register_value('camera', 'fps'))
print(get_sensor_register_value('camera', 'resolution'))
print(get_sensor_register_value('camera', 'frame'))
def get_primitive_list():
return get_api('primitive/list.json').json()
def get_primitive_property(primitive_name):
url='primitive/{}/property/list.json'.format(primitive_name)
return get_api(url).json()
def get_primitive_method(primitive_name):
url='primitive/{}/method/list.json'.format(primitive_name)
return get_api(url).json()
print(get_primitive_list())
print(get_primitive_property(get_primitive_list()['primitives'][0]))
print(get_primitive_method(get_primitive_list()['primitives'][0]))
Explanation: 2*.b. Get request
End of explanation
def post_motor_value(motor, register, value):
url = 'motor/m{}/register/{}/value.json'.format(motor, register)
return post_api(url, value)
import time
print(post_motor_value(1, 'compliant', False))
print(get_motor_register_value(1, 'compliant'))
print(post_motor_value(1, 'goal_speed', 25))
print(post_motor_value(1, 'goal_position', 25))
for _ in range(10):
print(get_motor_register_value(1, 'present_position'))
time.sleep(0.1)
print(post_motor_value(1, 'goal_position', 0))
for _ in range(10):
print(get_motor_register_value(1, 'present_position'))
time.sleep(0.1)
print(post_motor_value(1, 'comlpiant', True))
def post_sensor_value(sensor, register, value):
url = 'sensor/{}/register/{}/value.json'.format(sensor, register)
return post_api(url, value)
print(get_sensor_list())
print(get_sensor_register_list('camera'))
print(get_sensor_register_value('camera', 'fps'))
print(post_sensor_value('caemra', 'fps', 15.0))
print(get_sensor_register_value('camera', 'fps'))
def post_primitive_property(primitive, prop, value):
url = 'primitive/{}/property/{}/value.json'.format(primitive, prop)
return post_api(url, value)
def post_primitive_method(primitive, meth, value):
url = 'primitive/{}/method/{}/args.json'.format(primitive, meth)
return post_api(url, value)
print(get_primitive_list())
print(get_primitive_property('rest_posture'))
print(get_primitive_method('rest_posture'))
print(post_primitive_method('rest_posture', 'start', 'start'))
time.sleep(2)
print(post_primitive_method('rest_posture', 'stop', 'stop'))
def set_primitive_action(primitive, action):
if action not in ['start','stop','pause','resume']:
return 'Value ERROR'
url = 'primitive/{}/{}.json'.format(primitive, action)
if get_api(url).status_code==200:
return '{} of {} Done!'.format(action, primitive)
else:
return '{} of {} Fail! Error {}'.format(action, primitive, get_api(url).status_code)
print(get_primitive_list())
print(set_primitive_action('rest_posture', 'start'))
time.sleep(2)
print(set_primitive_action('rest_posture', 'stop'))
Explanation: 2*.c. Post request
End of explanation |
856 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
4. PyNGL basics
PyNGL is a Python language module for creating 2D high performance visualizations of scientific data. It is based on NCL graphics but still not as extensive as NCL's last version 6.6.2.
The aim of this notebook is to give you an introduction to PyNGL, read your data from file, create plots, and write the graphics output to a specified graphics file format.
Content
1. Import PyNGL
2. Graphics output
3. Plot types
4. Plot resources
5. Text
6. Annotations
7. Panels
<br>
4.1 Import PyNGL
The Python module of PyNGL is called Ngl.
Step1: To create a visualization of your data you need to do
- read the data
- open a graphics output channel called workstation
- generate the graphic
- save the graphic on disk
How to read the data has been explained in 03_Xarray_PyNIO_basics, we will use it here without further explainations.
4.2 Graphics output
Let us start opening a graphics output channel and link it to the variable wks. You can call it like ever you want but it is used very often by NCL users.
The workstation types are
- ps
- eps
- epsi
- pdf
- newpdf (creates smaller output)
- x11
In our first example we want to use PNG as output format to make it possible to display the plots in the notebook. To open a workstation we use the function Ngl.open_wks. The name of the graphics output file shall be plot_test1.png. The suffix .png will be appended automatically to the basename of the file name.
Step2: That is of course a very simple case but if you want to specify the size or orientation of the graphics you have to work with resources. NCL users already know how to deal with resources, and it shouldn't be difficult to Python users. Resources are the same as attributes of Python objects, if set the user is able to manage a lot of settings for PyNGL functions.
Let us say, we want to generate a PDF file of size DIN A4. First, we have to assign a PyNGL object variable wks_res (you can call it like you want) with the function Ngl.Resources() to store the size settings for the workstation. Notice, that we now use Ngl.open_wks with three parameters, and we have to delete the first workstation.
Step3: There are many wk resources available (see NCL's wk resources page). Read the resources manual carefully because PyNGL and NCL make a lot of differences depending on the selected output format.
The next example shows how to set the size of the output to legal giving the width and height in inches instead of wkPaperSize = 'legal'. It will create a PDF file with width 8.5 inches, height 14.0 inches, and the orientation is portrait (default).
Step4: Now, we want to change the orientation of the legal size PDF file to landscape.
Step5: Ok, we want to start with a clean script. We delete the workstation from above using the function Ngl.delete_wks. | Python Code:
import Ngl
Explanation: 4. PyNGL basics
PyNGL is a Python language module for creating 2D high performance visualizations of scientific data. It is based on NCL graphics but still not as extensive as NCL's last version 6.6.2.
The aim of this notebook is to give you an introduction to PyNGL, read your data from file, create plots, and write the graphics output to a specified graphics file format.
Content
1. Import PyNGL
2. Graphics output
3. Plot types
4. Plot resources
5. Text
6. Annotations
7. Panels
<br>
4.1 Import PyNGL
The Python module of PyNGL is called Ngl.
End of explanation
wks = Ngl.open_wks('png', 'plot_test1')
Explanation: To create a visualization of your data you need to do
- read the data
- open a graphics output channel called workstation
- generate the graphic
- save the graphic on disk
How to read the data has been explained in 03_Xarray_PyNIO_basics, we will use it here without further explainations.
4.2 Graphics output
Let us start opening a graphics output channel and link it to the variable wks. You can call it like ever you want but it is used very often by NCL users.
The workstation types are
- ps
- eps
- epsi
- pdf
- newpdf (creates smaller output)
- x11
In our first example we want to use PNG as output format to make it possible to display the plots in the notebook. To open a workstation we use the function Ngl.open_wks. The name of the graphics output file shall be plot_test1.png. The suffix .png will be appended automatically to the basename of the file name.
End of explanation
wks_res = Ngl.Resources()
wks_res.wkPaperSize = 'A4'
wks = Ngl.open_wks('pdf', 'plot_test_A4', wks_res)
Explanation: That is of course a very simple case but if you want to specify the size or orientation of the graphics you have to work with resources. NCL users already know how to deal with resources, and it shouldn't be difficult to Python users. Resources are the same as attributes of Python objects, if set the user is able to manage a lot of settings for PyNGL functions.
Let us say, we want to generate a PDF file of size DIN A4. First, we have to assign a PyNGL object variable wks_res (you can call it like you want) with the function Ngl.Resources() to store the size settings for the workstation. Notice, that we now use Ngl.open_wks with three parameters, and we have to delete the first workstation.
End of explanation
wks_res = Ngl.Resources()
wks_res.wkPaperWidthF = 8.5 # in inches
wks_res.wkPaperHeightF = 14.0 # in inches
wks = Ngl.open_wks('pdf',' plot_test_legal', wks_res)
Explanation: There are many wk resources available (see NCL's wk resources page). Read the resources manual carefully because PyNGL and NCL make a lot of differences depending on the selected output format.
The next example shows how to set the size of the output to legal giving the width and height in inches instead of wkPaperSize = 'legal'. It will create a PDF file with width 8.5 inches, height 14.0 inches, and the orientation is portrait (default).
End of explanation
wks_res = Ngl.Resources()
wks_res.wkPaperSize = 'legal'
wks_res.wkOrientation = 'landscape'
wks = Ngl.open_wks('pdf', 'plot_test_legal_landscape', wks_res)
Explanation: Now, we want to change the orientation of the legal size PDF file to landscape.
End of explanation
Ngl.delete_wks(wks)
Explanation: Ok, we want to start with a clean script. We delete the workstation from above using the function Ngl.delete_wks.
End of explanation |
857 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<p><font size="6"><b> CASE - air quality data of European monitoring stations (AirBase)</b></font></p>
© 2021, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@gmail.com, stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons
AirBase is the European air quality database maintained by the European Environment Agency (EEA). It contains air quality monitoring data and information submitted by participating countries throughout Europe. The air quality database consists of a multi-annual time series of air quality measurement data and statistics for a number of air pollutants.
Some of the data files that are available from AirBase were included in the data folder
Step1: Processing a single file
We will start with processing one of the downloaded files (BETR8010000800100hour.1-1-1990.31-12-2012). Looking at the data, you will see it does not look like a nice csv file
Step2: So we will need to do some manual processing.
Just reading the tab-delimited data
Step3: The above data is clearly not ready to be used! Each row contains the 24 measurements for each hour of the day, and also contains a flag (0/1) indicating the quality of the data. Furthermore, there is no header row with column names.
<div class="alert alert-success">
<b>EXERCISE 1</b>
Step4: For the sake of this tutorial, we will disregard the 'flag' columns (indicating the quality of the data).
<div class="alert alert-success">
**EXERCISE 2**
Step5: Now, we want to reshape it
Step6: Reshaping using stack
Step7: Combine date and hour
Step8: Our final data is now a time series. In pandas, this means that the index is a DatetimeIndex
Step10: Processing a collection of files
We now have seen the code steps to process one of the files. We have however multiple files for the different stations with the same structure. Therefore, to not have to repeat the actual code, let's make a function from the steps we have seen above.
<div class="alert alert-success">
<b>EXERCISE 4</b>
Step11: Test the function on the data file from above
Step12: We now want to use this function to read in all the different data files from AirBase, and combine them into one Dataframe.
<div class="alert alert-success">
**EXERCISE 5**
Step13: <div class="alert alert-success">
**EXERCISE 6**
Step14: Finally, we don't want to have to repeat this each time we use the data. Therefore, let's save the processed data to a csv file. | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
Explanation: <p><font size="6"><b> CASE - air quality data of European monitoring stations (AirBase)</b></font></p>
© 2021, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@gmail.com, stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons
AirBase is the European air quality database maintained by the European Environment Agency (EEA). It contains air quality monitoring data and information submitted by participating countries throughout Europe. The air quality database consists of a multi-annual time series of air quality measurement data and statistics for a number of air pollutants.
Some of the data files that are available from AirBase were included in the data folder: the hourly concentrations of nitrogen dioxide (NO2) for 4 different measurement stations:
FR04037 (PARIS 13eme): urban background site at Square de Choisy
FR04012 (Paris, Place Victor Basch): urban traffic site at Rue d'Alesia
BETR802: urban traffic site in Antwerp, Belgium
BETN029: rural background site in Houtem, Belgium
See http://www.eea.europa.eu/themes/air/interactive/no2
End of explanation
with open("data/BETR8010000800100hour.1-1-1990.31-12-2012") as f:
print(f.readline())
Explanation: Processing a single file
We will start with processing one of the downloaded files (BETR8010000800100hour.1-1-1990.31-12-2012). Looking at the data, you will see it does not look like a nice csv file:
End of explanation
data = pd.read_csv("data/BETR8010000800100hour.1-1-1990.31-12-2012", sep='\t')#, header=None)
data.head()
Explanation: So we will need to do some manual processing.
Just reading the tab-delimited data:
End of explanation
# Column names: list consisting of 'date' and then intertwined the hour of the day and 'flag'
hours = ["{:02d}".format(i) for i in range(24)]
column_names = ['date'] + [item for pair in zip(hours, ['flag' + str(i) for i in range(24)]) for item in pair]
# %load _solutions/case4_air_quality_processing1.py
# %load _solutions/case4_air_quality_processing2.py
Explanation: The above data is clearly not ready to be used! Each row contains the 24 measurements for each hour of the day, and also contains a flag (0/1) indicating the quality of the data. Furthermore, there is no header row with column names.
<div class="alert alert-success">
<b>EXERCISE 1</b>: <br><br> Clean up this dataframe by using more options of `pd.read_csv` (see its [docstring](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html))
<ul>
<li>specify the correct delimiter</li>
<li>specify that the values of -999 and -9999 should be regarded as NaN</li>
<li>specify our own column names (for how the column names are made up, see <a href="http://stackoverflow.com/questions/6356041/python-intertwining-two-lists">http://stackoverflow.com/questions/6356041/python-intertwining-two-lists</a>)
</ul>
</div>
End of explanation
flag_columns = [col for col in data.columns if 'flag' in col]
# we can now use this list to drop these columns
# %load _solutions/case4_air_quality_processing3.py
data.head()
Explanation: For the sake of this tutorial, we will disregard the 'flag' columns (indicating the quality of the data).
<div class="alert alert-success">
**EXERCISE 2**:
Drop all 'flag' columns ('flag1', 'flag2', ...)
</div>
End of explanation
# %load _solutions/case4_air_quality_processing4.py
Explanation: Now, we want to reshape it: our goal is to have the different hours as row indices, merged with the date into a datetime-index. Here we have a wide and long dataframe, and want to make this a long, narrow timeseries.
<div class="alert alert-info">
<b>REMEMBER</b>:
Recap: reshaping your data with [`stack` / `melt` and `unstack` / `pivot`](./pandas_08_reshaping_data.ipynb)</li>
<img src="../img/pandas/schema-stack.svg" width=70%>
</div>
<div class="alert alert-success">
<b>EXERCISE 3</b>:
<br><br>
Reshape the dataframe to a timeseries.
The end result should look like:<br><br>
<div class='center'>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>BETR801</th>
</tr>
</thead>
<tbody>
<tr>
<th>1990-01-02 09:00:00</th>
<td>48.0</td>
</tr>
<tr>
<th>1990-01-02 12:00:00</th>
<td>48.0</td>
</tr>
<tr>
<th>1990-01-02 13:00:00</th>
<td>50.0</td>
</tr>
<tr>
<th>1990-01-02 14:00:00</th>
<td>55.0</td>
</tr>
<tr>
<th>...</th>
<td>...</td>
</tr>
<tr>
<th>2012-12-31 20:00:00</th>
<td>16.5</td>
</tr>
<tr>
<th>2012-12-31 21:00:00</th>
<td>14.5</td>
</tr>
<tr>
<th>2012-12-31 22:00:00</th>
<td>16.5</td>
</tr>
<tr>
<th>2012-12-31 23:00:00</th>
<td>15.0</td>
</tr>
</tbody>
</table>
<p style="text-align:center">170794 rows × 1 columns</p>
</div>
<ul>
<li>Reshape the dataframe so that each row consists of one observation for one date + hour combination</li>
<li>When you have the date and hour values as two columns, combine these columns into a datetime (tip: string columns can be summed to concatenate the strings) and remove the original columns</li>
<li>Set the new datetime values as the index, and remove the original columns with date and hour values</li>
</ul>
**NOTE**: This is an advanced exercise. Do not spend too much time on it and don't hesitate to look at the solutions.
</div>
Reshaping using melt:
End of explanation
# %load _solutions/case4_air_quality_processing5.py
# %load _solutions/case4_air_quality_processing6.py
Explanation: Reshaping using stack:
End of explanation
# %load _solutions/case4_air_quality_processing7.py
# %load _solutions/case4_air_quality_processing8.py
# %load _solutions/case4_air_quality_processing9.py
data_stacked.head()
Explanation: Combine date and hour:
End of explanation
data_stacked.index
data_stacked.plot()
Explanation: Our final data is now a time series. In pandas, this means that the index is a DatetimeIndex:
End of explanation
def read_airbase_file(filename, station):
Read hourly AirBase data files.
Parameters
----------
filename : string
Path to the data file.
station : string
Name of the station.
Returns
-------
DataFrame
Processed dataframe.
...
return ...
# %load _solutions/case4_air_quality_processing10.py
Explanation: Processing a collection of files
We now have seen the code steps to process one of the files. We have however multiple files for the different stations with the same structure. Therefore, to not have to repeat the actual code, let's make a function from the steps we have seen above.
<div class="alert alert-success">
<b>EXERCISE 4</b>:
<ul>
<li>Write a function <code>read_airbase_file(filename, station)</code>, using the above steps the read in and process the data, and that returns a processed timeseries.</li>
</ul>
</div>
End of explanation
import os
filename = "data/BETR8010000800100hour.1-1-1990.31-12-2012"
station = os.path.split(filename)[-1][:7]
station
test = read_airbase_file(filename, station)
test.head()
Explanation: Test the function on the data file from above:
End of explanation
from pathlib import Path
# %load _solutions/case4_air_quality_processing11.py
Explanation: We now want to use this function to read in all the different data files from AirBase, and combine them into one Dataframe.
<div class="alert alert-success">
**EXERCISE 5**:
Use the [pathlib module](https://docs.python.org/3/library/pathlib.html) `Path` class in combination with the `glob` method to list all 4 AirBase data files that are included in the 'data' directory, and call the result `data_files`.
<details><summary>Hints</summary>
- The pathlib module provides a object oriented way to handle file paths. First, create a `Path` object of the data folder, `pathlib.Path("./data")`. Next, apply the `glob` function to extract all the files containing `*0008001*` (use wildcard * to say "any characters"). The output is a Python generator, which you can collect as a `list()`.
</details>
</div>
End of explanation
# %load _solutions/case4_air_quality_processing12.py
# %load _solutions/case4_air_quality_processing13.py
combined_data.head()
Explanation: <div class="alert alert-success">
**EXERCISE 6**:
* Loop over the data files, read and process the file using our defined function, and append the dataframe to a list.
* Combine the the different DataFrames in the list into a single DataFrame where the different columns are the different stations. Call the result `combined_data`.
<details><summary>Hints</summary>
- The `data_files` list contains `Path` objects (from the pathlib module). To get the actual file name as a string, use the `.name` attribute.
- The station name is always first 7 characters of the file name.
</details>
</div>
End of explanation
# let's first give the index a descriptive name
combined_data.index.name = 'datetime'
combined_data.to_csv("airbase_data_processed.csv")
Explanation: Finally, we don't want to have to repeat this each time we use the data. Therefore, let's save the processed data to a csv file.
End of explanation |
858 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
Step1: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
Step2: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function coverts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
Step3: And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
Step4: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise
Step5: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.
Step6: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
Step7: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer becuase we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise
Step8: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise
Step9: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise
Step10: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
Step11: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
Step12: Restore the trained network if you need to
Step13: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data. | Python Code:
import time
import numpy as np
import tensorflow as tf
import random
from collections import Counter
import utils
Explanation: Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
End of explanation
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
End of explanation
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
Explanation: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function coverts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
End of explanation
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
Explanation: And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
End of explanation
import time
def subsample_words(words, threshold):
# This will be the probability to keep each word
keep_probs = np.random.uniform(0.0, 1.0, len(words))
total_words = len(words)
# Counting the frequency of each word
words_freqs = Counter(words)
words_freqs = {word: count/total_words for word, count in words_freqs.items()}
# Placeholder to keep the train words
keep_words = []
for idx, word in enumerate(words):
discard_prob = 1.0 - np.sqrt(threshold / words_freqs[word])
if keep_probs[idx] >= discard_prob:
keep_words.append(word)
return keep_words
## Your code here
train_words = subsample_words(int_wordswords, threshold=1e-5)
Explanation: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probablility $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.
End of explanation
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
# Your code here
r = np.random.randint(1, window_size + 1)
low_idx = max(idx - r, 0)
high_idx = min(idx + r + 1, len(words) - 1)
wnd = set(words[low_idx:idx] + words[idx+1:high_idx])
return list(wnd)
Explanation: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.:
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
End of explanation
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
End of explanation
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(tf.int32, [None], name='inputs')
labels = tf.placeholder(tf.int32, [None, None], name='labels')
Explanation: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer becuase we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.
End of explanation
n_vocab = len(int_to_vocab)
n_embedding = 200 # Number of embedding features
with train_graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1)) # create embedding weight matrix here
embed = tf.nn.embedding_lookup(embedding, inputs) # use tf.nn.embedding_lookup to get the hidden layer output
Explanation: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform.
End of explanation
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1)) # create softmax weight matrix here
softmax_b = tf.Variable(tf.zeros(n_vocab)) # create softmax biases here
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels=labels,
inputs=embed, num_sampled=n_sampled, num_classes=n_vocab)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
Explanation: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.
End of explanation
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
Explanation: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
End of explanation
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
## From Thushan Ganegedara's implementation
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
Explanation: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
End of explanation
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
Explanation: Restore the trained network if you need to:
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
Explanation: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
End of explanation |
859 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
$$
\def\CC{\bf C}
\def\QQ{\bf Q}
\def\RR{\bf R}
\def\ZZ{\bf Z}
\def\NN{\bf N}
$$
Exemples (def + while + for + if)
Step1: On a vu dans les chapitres précédents comment définir des fonctions avec def, des boucles avec while et for et des tests avec if ainsi que quelques exemples sur chaque notion mais indépendants des autres. Très souvent en programmation, on a besoin d'utiliser plus tous ces outils à la fois. C'est leur utilisation simultanée qui permet de résoudre des problèmes très divers et de les exprimer en quelques lignes de code.
Dans ce chapitre, nous allons voir quelques exemples qui utilisent les fonctions, les boucles et les conditions dans un même programme.
Conjecture de Syracuse
La suite de Syracuse une suite d'entiers naturels définie de la manière suivante. On part d'un nombre entier plus grand que zéro ; s’il est pair, on le divise par 2 ; s’il est impair, on le multiplie par 3 et on ajoute 1. En répétant l’opération, on obtient une suite d'entiers positifs dont chacun ne dépend que de son prédécesseur. Par exemple, la suite de Syracuse du nombre 23 est
Step2: Pouvez-vous trouver un nombre n tel que la suite de Syracuse n'atteint pas le cycle 4-2-1?
Énumérer les diviseurs d'un nombre entier
Une fonction qui retourne la liste des diviseurs d'un nombre entiers peut s'écrire comme ceci en utilisant une boucle for et un test if
Step3: On vérifie que la fonction marche bien
Step4: Tester si un nombre est premier
Une fonction peut en utiliser une autre. Par exemple, en utilisant la fonction diviseur que l'on a définit plus haut, on peut tester si un nombre est premier
Step5: On pourrait faire plus efficace, car il suffit de vérifier la non-existence de diviseurs inférieurs à la racine carrée de n.
Step6: En utilisant cette fonciton, on trouve que la liste des premiers nombres premiers inférieurs à 20 est
Step7: Le résulat est erroné! Pourquoi?
La fonction est_premier(8) retourne True en ce moment, car la racine carrée de 8 vaut 2.828 et donc sq=int(2.828) est égal à 2 et la boucle ne teste pas la valeur i=2, car range(2,2) retourne une liste vide. On peut corriger de la façon suivante en ajoutant un +1 au bon endroit
Step8: On vérifie que la fonction retourne bien que 4 et 8 ne sont pas des nombres premiers
Step9: Mais il y a encore une erreur, car 0 et 1 ne devraient pas faire partie de la liste. Une solution est de traiter ces deux cas de base à part
Step10: On vérifie que tout marche bien maintenant | Python Code:
from __future__ import division, print_function # Python 3
Explanation: $$
\def\CC{\bf C}
\def\QQ{\bf Q}
\def\RR{\bf R}
\def\ZZ{\bf Z}
\def\NN{\bf N}
$$
Exemples (def + while + for + if)
End of explanation
def syracuse(n):
while n != 1:
print(n, end=' ')
if n % 2 == 0:
n = n//2
else:
n = 3*n+1
syracuse(23)
syracuse(245)
syracuse(245154)
Explanation: On a vu dans les chapitres précédents comment définir des fonctions avec def, des boucles avec while et for et des tests avec if ainsi que quelques exemples sur chaque notion mais indépendants des autres. Très souvent en programmation, on a besoin d'utiliser plus tous ces outils à la fois. C'est leur utilisation simultanée qui permet de résoudre des problèmes très divers et de les exprimer en quelques lignes de code.
Dans ce chapitre, nous allons voir quelques exemples qui utilisent les fonctions, les boucles et les conditions dans un même programme.
Conjecture de Syracuse
La suite de Syracuse une suite d'entiers naturels définie de la manière suivante. On part d'un nombre entier plus grand que zéro ; s’il est pair, on le divise par 2 ; s’il est impair, on le multiplie par 3 et on ajoute 1. En répétant l’opération, on obtient une suite d'entiers positifs dont chacun ne dépend que de son prédécesseur. Par exemple, la suite de Syracuse du nombre 23 est:
23, 70, 35, 106, 53, 160, 80, 40, 20, 10, 5, 16, 8, 4, 2, 1, 4, 2, 1, ...
Après que le nombre 1 a été atteint, la suite des valeurs (1, 4, 2, 1, 4, 2, ...) se répète indéfiniment en un cycle de longueur 3, appelé cycle trivial.
La conjecture de Syracuse est l'hypothèse selon laquelle la suite de Syracuse de n'importe quel entier strictement positif atteint 1. En dépit de la simplicité de son énoncé, cette conjecture défie depuis de nombreuses années les mathématiciens. Paul Erdos a dit à propos de la conjecture de Syracuse : "les mathématiques ne sont pas encore prêtes pour de tels problèmes".
End of explanation
def diviseurs(n):
L = []
for i in range(1, n+1):
if n % i == 0:
L.append(i)
return L
Explanation: Pouvez-vous trouver un nombre n tel que la suite de Syracuse n'atteint pas le cycle 4-2-1?
Énumérer les diviseurs d'un nombre entier
Une fonction qui retourne la liste des diviseurs d'un nombre entiers peut s'écrire comme ceci en utilisant une boucle for et un test if :
End of explanation
diviseurs(12)
diviseurs(13)
diviseurs(15)
diviseurs(24)
Explanation: On vérifie que la fonction marche bien:
End of explanation
def est_premier_1(n):
L = diviseurs(n)
return len(L) == 2
est_premier_1(12)
est_premier_1(13)
[n for n in range(20) if est_premier_1(n)]
Explanation: Tester si un nombre est premier
Une fonction peut en utiliser une autre. Par exemple, en utilisant la fonction diviseur que l'on a définit plus haut, on peut tester si un nombre est premier:
End of explanation
from math import sqrt
def est_premier(n):
sq = int(sqrt(n))
for i in range(2, sq):
if n % i == 0:
return False
return True
Explanation: On pourrait faire plus efficace, car il suffit de vérifier la non-existence de diviseurs inférieurs à la racine carrée de n.
End of explanation
[n for n in range(20) if est_premier(n)]
Explanation: En utilisant cette fonciton, on trouve que la liste des premiers nombres premiers inférieurs à 20 est:
End of explanation
from math import sqrt
def est_premier(n):
sq = int(sqrt(n))
for i in range(2, sq+1):
if n % i == 0:
return False
return True
Explanation: Le résulat est erroné! Pourquoi?
La fonction est_premier(8) retourne True en ce moment, car la racine carrée de 8 vaut 2.828 et donc sq=int(2.828) est égal à 2 et la boucle ne teste pas la valeur i=2, car range(2,2) retourne une liste vide. On peut corriger de la façon suivante en ajoutant un +1 au bon endroit:
End of explanation
[n for n in range(20) if est_premier(n)]
Explanation: On vérifie que la fonction retourne bien que 4 et 8 ne sont pas des nombres premiers:
End of explanation
from math import sqrt
def est_premier(n):
if n == 0 or n == 1:
return False
sq = int(sqrt(n))
for i in range(2, sq+1):
if n % i == 0:
return False
return True
Explanation: Mais il y a encore une erreur, car 0 et 1 ne devraient pas faire partie de la liste. Une solution est de traiter ces deux cas de base à part:
End of explanation
[n for n in range(50) if est_premier(n)]
Explanation: On vérifie que tout marche bien maintenant:
End of explanation |
860 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Содержание<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Получение-среза" data-toc-modified-id="Получение-среза-1">Получение среза</a></span><ul class="toc-item"><li><span><a href="#Без-параметров" data-toc-modified-id="Без-параметров-1.1">Без параметров</a></span></li><li><span><a href="#Указываем-конец" data-toc-modified-id="Указываем-конец-1.2">Указываем конец</a></span></li><li><span><a href="#Указываем-начало" data-toc-modified-id="Указываем-начало-1.3">Указываем начало</a></span></li><li><span><a href="#Указываем-шаг" data-toc-modified-id="Указываем-шаг-1.4">Указываем шаг</a></span></li><li><span><a href="#Отрицательный-шаг" data-toc-modified-id="Отрицательный-шаг-1.5">Отрицательный шаг</a></span></li></ul></li><li><span><a href="#Особенности-срезов" data-toc-modified-id="Особенности-срезов-2">Особенности срезов</a></span></li><li><span><a href="#Примеры-использования" data-toc-modified-id="Примеры-использования-3">Примеры использования</a></span></li><li><span><a href="#Срезы-и-строки" data-toc-modified-id="Срезы-и-строки-4">Срезы и строки</a></span></li></ul></div>
Срезы
Получение среза
Бывает такое, что нам нужна только некоторая часть списка, например все элементы с 5 по 10, или все элементы с четными индексами. Подобное можно сделать с помощью срезов.
Срез задаётся как список[start
Step1: Можно опустить и
Step2: Указываем конец
Указываем до какого элемента выводить
Step3: Указываем начало
Или с какого элемента начинать
Step4: Указываем шаг
Step5: Отрицательный шаг
Можно даже сделать отрицательный шаг, как в range
Step6: С указанием начала срез с отрицательным шагом можно понимать как
Step7: Для отрицательного шага важно правильно указывать порядок начала и конца, и помнить, что левое число всегда включительно, правое - не включительно
Step8: Особенности срезов
Срезы не изменяют текущий список, а создают копию. С помощью срезов можно решить проблему ссылочной реализации при изменении одного элемента списка
Step9: Примеры использования
С помощью срезов можно, например, пропустить элемент списка с заданным индексом
Step10: Или поменять местами две части списка
Step11: Срезы и строки
Срезы можно использовать не только для списков, но и для строк. Например, чтобы изменить третий символ строки, можно сделать так | Python Code:
lst = [1, 2, 3, 4, 5, 6, 7, 8]
print(lst[::])
Explanation: <h1>Содержание<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Получение-среза" data-toc-modified-id="Получение-среза-1">Получение среза</a></span><ul class="toc-item"><li><span><a href="#Без-параметров" data-toc-modified-id="Без-параметров-1.1">Без параметров</a></span></li><li><span><a href="#Указываем-конец" data-toc-modified-id="Указываем-конец-1.2">Указываем конец</a></span></li><li><span><a href="#Указываем-начало" data-toc-modified-id="Указываем-начало-1.3">Указываем начало</a></span></li><li><span><a href="#Указываем-шаг" data-toc-modified-id="Указываем-шаг-1.4">Указываем шаг</a></span></li><li><span><a href="#Отрицательный-шаг" data-toc-modified-id="Отрицательный-шаг-1.5">Отрицательный шаг</a></span></li></ul></li><li><span><a href="#Особенности-срезов" data-toc-modified-id="Особенности-срезов-2">Особенности срезов</a></span></li><li><span><a href="#Примеры-использования" data-toc-modified-id="Примеры-использования-3">Примеры использования</a></span></li><li><span><a href="#Срезы-и-строки" data-toc-modified-id="Срезы-и-строки-4">Срезы и строки</a></span></li></ul></div>
Срезы
Получение среза
Бывает такое, что нам нужна только некоторая часть списка, например все элементы с 5 по 10, или все элементы с четными индексами. Подобное можно сделать с помощью срезов.
Срез задаётся как список[start:end:step], где из списка будут браться элементы с индексами от start (включительно) до end (не включительно) с шагом step. Любое из значений start, end, step можно опустить. В таком случае по умолчанию start равен 0, end равен длине списка, то есть индексу последнего элемента + 1, step равен 1.
Cрезы и range очень похожи набором параметров.
Без параметров
Срез a[::] будет содержать просто весь список a:
End of explanation
lst = [1, 2, 3, 4, 5, 6, 7, 8]
print(lst[:])
Explanation: Можно опустить и : перед указанием шага, если его не указывать:
End of explanation
lst = [1, 2, 3, 4, 5, 6, 7, 8]
print(lst[:5]) # то же самое, что и lst[:5:]
lst = [1, 2, 3, 4, 5, 6, 7, 8]
print(lst[:0])
Explanation: Указываем конец
Указываем до какого элемента выводить:
End of explanation
lst = [1, 2, 3, 4, 5, 6, 7, 8]
print(lst[2:])
lst = [1, 2, 3, 4, 5, 6, 7, 8]
print(lst[2:5])
Explanation: Указываем начало
Или с какого элемента начинать:
End of explanation
lst = [1, 2, 3, 4, 5, 6, 7, 8]
print(lst[1:7:2])
lst = [1, 2, 3, 4, 5, 6, 7, 8]
print(lst[::2])
Explanation: Указываем шаг
End of explanation
lst = [1, 2, 3, 4, 5, 6, 7, 8]
print(lst[::-1])
Explanation: Отрицательный шаг
Можно даже сделать отрицательный шаг, как в range:
End of explanation
lst = [1, 2, 3, 4, 5, 6, 7, 8]
print(lst[2::-1])
Explanation: С указанием начала срез с отрицательным шагом можно понимать как: "Начиная с элемента с индексом 2 идти в обратную сторону с шагом 1 до того, как список закончится".
End of explanation
lst = [1, 2, 3, 4, 5, 6, 7, 8]
# Допустим, хотим элементы с индексами 1 и 2 в обратном порядке
print(lst[1:3:-1])
# Начиная с элемента с индексом 2 идти в обратную сторону с шагом 1
# до того, как встретим элемент с индексом 0 (0 не включительно)
print(lst[2:0:-1])
Explanation: Для отрицательного шага важно правильно указывать порядок начала и конца, и помнить, что левое число всегда включительно, правое - не включительно:
End of explanation
a = [1, 2, 3, 4] # а - ссылка на список, каждый элемент списка это ссылки на объекты 1, 2, 3, 4
b = a # b - ссылка на тот же самый список
a[0] = -1 # Меняем элемент списка a
print("a =", a)
print("b =", b) # Значение b тоже поменялось!
print()
a = [1, 2, 3, 4]
b = a[:] # Создаём копию списка
a[0] = -1 # Меняем элемент списка a
print("a =", a)
print("b =", b) # Значение b не изменилось!
Explanation: Особенности срезов
Срезы не изменяют текущий список, а создают копию. С помощью срезов можно решить проблему ссылочной реализации при изменении одного элемента списка:
End of explanation
lst = [1, 2, 3, 4, 5, 6, 7, 8]
print(lst[:4] + lst[5:])
Explanation: Примеры использования
С помощью срезов можно, например, пропустить элемент списка с заданным индексом:
End of explanation
lst = [1, 2, 3, 4, 5, 6, 7, 8]
swapped = lst[5:] + lst[:5] # поменять местами, начиная с элемента с индексом 5
print(swapped)
Explanation: Или поменять местами две части списка:
End of explanation
s = "long string"
s = s[:2] + "!" + s[3:]
print(s)
Explanation: Срезы и строки
Срезы можно использовать не только для списков, но и для строк. Например, чтобы изменить третий символ строки, можно сделать так:
End of explanation |
861 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tweaking the cellpy file format
A cellpy file is a hdf5-type file.
From v.5 it contains five top-level directories.
```python
from cellreader.py
raw_dir = prms._cellpyfile_raw
step_dir = prms._cellpyfile_step
summary_dir = prms._cellpyfile_summary
meta_dir = "/info" # hard-coded
fid_dir = prms._cellpyfile_fid
from prms.py
_cellpyfile_root = "CellpyData"
_cellpyfile_raw = "/raw"
_cellpyfile_step = "/steps"
_cellpyfile_summary = "/summary"
_cellpyfile_fid = "/fid"
```
Step1: Creating a fresh file from a raw-file
Step2: Update with loadcell
Step3: Update with update
Step4: Looking at cellpy´s internal parameter
Step5: Looking at a cellpy file using pandas
Step6: Looking at a cellpy file using cellpy | Python Code:
%load_ext autoreload
%autoreload 2
from pathlib import Path
from pprint import pprint
import pandas as pd
import cellpy
Explanation: Tweaking the cellpy file format
A cellpy file is a hdf5-type file.
From v.5 it contains five top-level directories.
```python
from cellreader.py
raw_dir = prms._cellpyfile_raw
step_dir = prms._cellpyfile_step
summary_dir = prms._cellpyfile_summary
meta_dir = "/info" # hard-coded
fid_dir = prms._cellpyfile_fid
from prms.py
_cellpyfile_root = "CellpyData"
_cellpyfile_raw = "/raw"
_cellpyfile_step = "/steps"
_cellpyfile_summary = "/summary"
_cellpyfile_fid = "/fid"
```
End of explanation
create_cellpyfile = False
filename_full = Path(
"/Users/jepe/cellpy_data/cellpyfiles/20181026_cen31_03_GITT_cc_01.h5"
)
filename_first = Path(
"/Users/jepe/cellpy_data/cellpyfiles/20181026_cen31_03_GITT_cc_01_a.h5"
)
rawfile_full = Path("/Users/jepe/cellpy_data/raw/20181026_cen31_03_GITT_cc_01.res")
rawfile_full2 = Path("/Users/jepe/cellpy_data/raw/20181026_cen31_03_GITT_cc_02.res")
if create_cellpyfile:
print("--loading raw-file".ljust(50, "-"))
c = cellpy.get(rawfile_full, mass=0.23)
print("--saving".ljust(50, "-"))
c.save(filename_full)
print("--splitting".ljust(50, "-"))
c1, c2 = c.split(4)
c1.save(filename_first)
else:
print("--loading cellpy-files".ljust(50, "-"))
c1 = cellpy.get(filename_first)
c = cellpy.get(filename_full)
Explanation: Creating a fresh file from a raw-file
End of explanation
cellpy.log.setup_logging(default_level="INFO")
raw_files = [rawfile_full, rawfile_full2]
# raw_files = [rawfile_full2]
cellpy_file = filename_full
c = cellpy.cellreader.CellpyData().dev_update_loadcell(raw_files, cellpy_file)
Explanation: Update with loadcell
End of explanation
c1 = cellpy.get(filename_first, logging_mode="INFO")
c1.dev_update(rawfile_full)
Explanation: Update with update
End of explanation
from cellpy import prms
parent_level = prms._cellpyfile_root
raw_dir = prms._cellpyfile_raw
step_dir = prms._cellpyfile_step
summary_dir = prms._cellpyfile_summary
meta_dir = "/info" # hard-coded
fid_dir = prms._cellpyfile_fid
raw_dir
parent_level + raw_dir
Explanation: Looking at cellpy´s internal parameter
End of explanation
print(f"name: {filename_full.name}")
print(f"size: {filename_full.stat().st_size/1_048_576:0.2f} Mb")
with pd.HDFStore(filename_full) as store:
pprint(store.keys())
store = pd.HDFStore(filename_full)
m = store.select(parent_level + meta_dir)
s = store.select(parent_level + summary_dir)
t = store.select(parent_level + step_dir)
f = store.select(parent_level + fid_dir)
store.close()
f.T
Explanation: Looking at a cellpy file using pandas
End of explanation
c = cellpy.get(filename_full)
cc = c.cell
fid = cc.raw_data_files[0]
fid.last_data_point # This should be used when I will implement reading only new data
Explanation: Looking at a cellpy file using cellpy
End of explanation |
862 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Idea
Результат (ожидаемый)
обучение происходит на своем родном канале на симулированных данных
учитываются различия симуляции и данных (см. ниже алгоритм)
оценка качества (как и калибровка) будут несмещенными
качество лучше, чем baseline
Алгоритм
используем инклюзивный подход
учим при помощи классификатора перевзвешивание различия данных (удаляем треки из симуляции, которые не появляются в данных и удаляем из данных треки, которые не умеем симулировать)
Step1: Import
Step2: Reading initial data
Step3: Data preprocessing
Step4: Define mask for non-B events
Step5: Define features
Step6: Test that B-events similar in MC and data
Step7: Test that number of tracks is independent on Track description
Step8: Define base estimator and B weights, labels
Step9: B probability computation
Step10: Inclusive tagging
Step11: Inclusive tagging
Step12: New method
Reweighting with classifier
combine data and MC together to train a classifier
Step13: train classifier to distinguish data and MC
Step14: quality
Step15: calibrate probabilities (due to reweighting rule where probabilities are used)
Step16: compute MC and data track weights
Step17: reweighting plotting
Step18: Check reweighting rule
train classifier to distinguish data vs MC with provided weights
Step19: Classifier trained on MC
Step20: Classifier trained on data
Step21:
Step22: Calibration | Python Code:
%pylab inline
figsize(8, 6)
import sys
sys.path.insert(0, "../")
Explanation: Idea
Результат (ожидаемый)
обучение происходит на своем родном канале на симулированных данных
учитываются различия симуляции и данных (см. ниже алгоритм)
оценка качества (как и калибровка) будут несмещенными
качество лучше, чем baseline
Алгоритм
используем инклюзивный подход
учим при помощи классификатора перевзвешивание различия данных (удаляем треки из симуляции, которые не появляются в данных и удаляем из данных треки, которые не умеем симулировать):
классификатор предсказывает $p(MC)$ и $p(RD)$
для симуляции при $p(MC)>0.5$
$$w_{MC}=\frac{p(RD)}{p(MC)},$$
иначе
$$w_{MC}=1$$
- для данных при $p(MC)<0.5$
$$w_{RD}=\frac{p(MC)}{p(RD)},$$
иначе
$$w_{RD}=1$$
- нормируем веса в событии
- в формуле комбинирования возводим в степень $w * sign$
End of explanation
import pandas
import numpy
from folding_group import FoldingGroupClassifier
from rep.data import LabeledDataStorage
from rep.report import ClassificationReport
from rep.report.metrics import RocAuc
from sklearn.metrics import roc_curve, roc_auc_score
from decisiontrain import DecisionTrainClassifier
from rep.estimators import SklearnClassifier
Explanation: Import
End of explanation
import root_numpy
MC = pandas.DataFrame(root_numpy.root2array('../datasets/MC/csv/WG/Bu_JPsiK/2012/Tracks.root', stop=5000000))
data = pandas.DataFrame(root_numpy.root2array('../datasets/data/csv/WG/Bu_JPsiK/2012/Tracks.root', stop=5000000))
data.head()
MC.head()
Explanation: Reading initial data
End of explanation
from utils import data_tracks_preprocessing
data = data_tracks_preprocessing(data, N_sig_sw=True)
MC = data_tracks_preprocessing(MC)
', '.join(data.columns)
print sum(data.signB == 1), sum(data.signB == -1)
print sum(MC.signB == 1), sum(MC.signB == -1)
Explanation: Data preprocessing:
Add necessary features:
- #### define label = signB * signTrack
* if > 0 (same sign) - label **1**
* if < 0 (different sign) - label **0**
diff pt, min/max PID
Apply selections:
remove ghost tracks
loose selection on PID
End of explanation
mask_sw_positive = (data.N_sig_sw.values > 1) * 1
data.head()
data['group_column'] = numpy.unique(data.event_id, return_inverse=True)[1]
MC['group_column'] = numpy.unique(MC.event_id, return_inverse=True)[1]
data.index = numpy.arange(len(data))
MC.index = numpy.arange(len(MC))
Explanation: Define mask for non-B events
End of explanation
# features = ['cos_diff_phi', 'diff_pt', 'partPt', 'partP', 'nnkrec', 'diff_eta', 'EOverP',
# 'ptB', 'sum_PID_mu_k', 'proj', 'PIDNNe', 'sum_PID_k_e', 'PIDNNk', 'sum_PID_mu_e', 'PIDNNm',
# 'phi', 'IP', 'IPerr', 'IPs', 'veloch', 'max_PID_k_e', 'ghostProb',
# 'IPPU', 'eta', 'max_PID_mu_e', 'max_PID_mu_k', 'partlcs']
features = ['cos_diff_phi', 'partPt', 'partP', 'nnkrec', 'diff_eta', 'EOverP',
'ptB', 'sum_PID_mu_k', 'proj', 'PIDNNe', 'sum_PID_k_e', 'PIDNNk', 'sum_PID_mu_e', 'PIDNNm',
'phi', 'IP', 'IPerr', 'IPs', 'veloch', 'max_PID_k_e', 'ghostProb',
'IPPU', 'eta', 'max_PID_mu_e', 'max_PID_mu_k', 'partlcs']
Explanation: Define features
End of explanation
b_ids_data = numpy.unique(data.group_column.values, return_index=True)[1]
b_ids_MC = numpy.unique(MC.group_column.values, return_index=True)[1]
Bdata = data.iloc[b_ids_data].copy()
BMC = MC.iloc[b_ids_MC].copy()
Bdata['Beta'] = Bdata.diff_eta + Bdata.eta
BMC['Beta'] = BMC.diff_eta + BMC.eta
Bdata['Bphi'] = Bdata.diff_phi + Bdata.phi
BMC['Bphi'] = BMC.diff_phi + BMC.phi
Bfeatures = ['Beta', 'Bphi', 'ptB']
hist(Bdata['ptB'].values, normed=True, alpha=0.5, bins=60,
weights=Bdata['N_sig_sw'].values)
hist(BMC['ptB'].values, normed=True, alpha=0.5, bins=60);
hist(Bdata['Beta'].values, normed=True, alpha=0.5, bins=60,
weights=Bdata['N_sig_sw'].values)
hist(BMC['Beta'].values, normed=True, alpha=0.5, bins=60);
hist(Bdata['Bphi'].values, normed=True, alpha=0.5, bins=60,
weights=Bdata['N_sig_sw'].values)
hist(BMC['Bphi'].values, normed=True, alpha=0.5, bins=60);
tt_base = DecisionTrainClassifier(learning_rate=0.02, n_estimators=1000,
n_threads=16)
data_vs_MC_B = pandas.concat([Bdata, BMC])
label_data_vs_MC_B = [0] * len(Bdata) + [1] * len(BMC)
weights_data_vs_MC_B = numpy.concatenate([Bdata.N_sig_sw.values * (Bdata.N_sig_sw.values > 1) * 1,
numpy.ones(len(BMC))])
weights_data_vs_MC_B_all = numpy.concatenate([Bdata.N_sig_sw.values, numpy.ones(len(BMC))])
tt_B = FoldingGroupClassifier(SklearnClassifier(tt_base), n_folds=2, random_state=321,
train_features=Bfeatures, group_feature='group_column')
%time tt_B.fit(data_vs_MC_B, label_data_vs_MC_B, sample_weight=weights_data_vs_MC_B)
pass
roc_auc_score(label_data_vs_MC_B, tt_B.predict_proba(data_vs_MC_B)[:, 1], sample_weight=weights_data_vs_MC_B)
roc_auc_score(label_data_vs_MC_B, tt_B.predict_proba(data_vs_MC_B)[:, 1], sample_weight=weights_data_vs_MC_B_all)
from hep_ml.reweight import GBReweighter, FoldingReweighter
reweighterB = FoldingReweighter(GBReweighter(), random_state=3444)
reweighterB.fit(BMC[Bfeatures], Bdata[Bfeatures], target_weight=Bdata.N_sig_sw)
BMC_weights = reweighterB.predict_weights(BMC[Bfeatures])
hist(Bdata['ptB'].values, normed=True, alpha=0.5, bins=60,
weights=Bdata['N_sig_sw'].values)
hist(BMC['ptB'].values, normed=True, alpha=0.5, bins=60, weights=BMC_weights);
weights_data_vs_MC_B_w = numpy.concatenate([Bdata.N_sig_sw.values * (Bdata.N_sig_sw.values > 1) * 1,
BMC_weights])
weights_data_vs_MC_B_all_w = numpy.concatenate([Bdata.N_sig_sw.values, BMC_weights])
tt_B = FoldingGroupClassifier(SklearnClassifier(tt_base), n_folds=2, random_state=321,
train_features=Bfeatures, group_feature='group_column')
%time tt_B.fit(data_vs_MC_B, label_data_vs_MC_B, sample_weight=weights_data_vs_MC_B_w)
roc_auc_score(label_data_vs_MC_B, tt_B.predict_proba(data_vs_MC_B)[:, 1], sample_weight=weights_data_vs_MC_B_all_w)
MC['N_sig_sw'] = BMC_weights[numpy.unique(MC.group_column.values, return_inverse=True)[1]]
Explanation: Test that B-events similar in MC and data
End of explanation
def compute_target_number_of_tracks(X):
ids = numpy.unique(X.group_column, return_inverse=True)[1]
number_of_tracks = numpy.bincount(X.group_column)
target = number_of_tracks[ids]
return target
from decisiontrain import DecisionTrainRegressor
from rep.estimators import SklearnRegressor
from rep.metaml import FoldingRegressor
tt_base_reg = DecisionTrainRegressor(learning_rate=0.02, n_estimators=1000,
n_threads=16)
%%time
tt_data_NT = FoldingRegressor(SklearnRegressor(tt_base_reg), n_folds=2, random_state=321,
features=features)
tt_data_NT.fit(data, compute_target_number_of_tracks(data), sample_weight=data.N_sig_sw.values * mask_sw_positive)
from sklearn.metrics import mean_squared_error
mean_squared_error(compute_target_number_of_tracks(data), tt_data_NT.predict(data),
sample_weight=data.N_sig_sw.values) ** 0.5
mean_squared_error(compute_target_number_of_tracks(data),
[numpy.mean(compute_target_number_of_tracks(data))] * len(data),
sample_weight=data.N_sig_sw.values) ** 0.5
%%time
tt_MC_NT = FoldingRegressor(SklearnRegressor(tt_base_reg), n_folds=2, random_state=321,
features=features)
tt_MC_NT.fit(MC, compute_target_number_of_tracks(MC), sample_weight=MC.N_sig_sw.values)
mean_squared_error(compute_target_number_of_tracks(MC),
tt_MC_NT.predict(MC), sample_weight=MC.N_sig_sw.values) ** 0.5
mean_squared_error(compute_target_number_of_tracks(MC),
[numpy.mean(compute_target_number_of_tracks(MC))] * len(MC),
sample_weight=MC.N_sig_sw.values) ** 0.5
tt_MC_NT.get_feature_importances().sort_values(by='effect')[-5:]
Explanation: Test that number of tracks is independent on Track description
End of explanation
tt_base = DecisionTrainClassifier(learning_rate=0.02, n_estimators=1000,
n_threads=16)
B_signs = data['signB'].groupby(data['group_column']).aggregate(numpy.mean)
B_weights = data['N_sig_sw'].groupby(data['group_column']).aggregate(numpy.mean)
B_signs_MC = MC['signB'].groupby(MC['group_column']).aggregate(numpy.mean)
B_weights_MC = MC['N_sig_sw'].groupby(MC['group_column']).aggregate(numpy.mean)
Explanation: Define base estimator and B weights, labels
End of explanation
from scipy.special import logit, expit
def compute_Bprobs(X, track_proba, weights=None, normed_weights=False):
if weights is None:
weights = numpy.ones(len(X))
_, data_ids = numpy.unique(X['group_column'], return_inverse=True)
track_proba[~numpy.isfinite(track_proba)] = 0.5
track_proba[numpy.isnan(track_proba)] = 0.5
if normed_weights:
weights_per_events = numpy.bincount(data_ids, weights=weights)
weights /= weights_per_events[data_ids]
predictions = numpy.bincount(data_ids, weights=logit(track_proba) * X['signTrack'] * weights)
return expit(predictions)
Explanation: B probability computation
End of explanation
tt_data = FoldingGroupClassifier(SklearnClassifier(tt_base), n_folds=2, random_state=321,
train_features=features, group_feature='group_column')
%time tt_data.fit(data, data.label, sample_weight=data.N_sig_sw.values * mask_sw_positive)
pass
pandas.DataFrame({'dataset': ['MC', 'data'],
'quality': [roc_auc_score(
B_signs_MC, compute_Bprobs(MC, tt_data.predict_proba(MC)[:, 1]), sample_weight=B_weights_MC),
roc_auc_score(
B_signs, compute_Bprobs(data, tt_data.predict_proba(data)[:, 1]), sample_weight=B_weights)]})
Explanation: Inclusive tagging: training on data
End of explanation
tt_MC = FoldingGroupClassifier(SklearnClassifier(tt_base), n_folds=2, random_state=321,
train_features=features, group_feature='group_column')
%time tt_MC.fit(MC, MC.label)
pass
pandas.DataFrame({'dataset': ['MC', 'data'],
'quality': [roc_auc_score(
B_signs_MC, compute_Bprobs(MC, tt_MC.predict_proba(MC)[:, 1]), sample_weight=B_weights_MC),
roc_auc_score(
B_signs, compute_Bprobs(data, tt_MC.predict_proba(data)[:, 1]), sample_weight=B_weights)]})
Explanation: Inclusive tagging: training on MC
End of explanation
combined_data_MC = pandas.concat([data, MC])
combined_label = numpy.array([0] * len(data) + [1] * len(MC))
combined_weights_data = data.N_sig_sw.values #/ numpy.bincount(data.group_column)[data.group_column.values]
combined_weights_data_passed = combined_weights_data * mask_sw_positive
combined_weights_MC = MC.N_sig_sw.values# / numpy.bincount(MC.group_column)[MC.group_column.values]
combined_weights = numpy.concatenate([combined_weights_data_passed,
1. * combined_weights_MC / sum(combined_weights_MC) * sum(combined_weights_data_passed)])
combined_weights_all = numpy.concatenate([combined_weights_data,
1. * combined_weights_MC / sum(combined_weights_MC) * sum(combined_weights_data)])
Explanation: New method
Reweighting with classifier
combine data and MC together to train a classifier
End of explanation
%%time
tt_base_large = DecisionTrainClassifier(learning_rate=0.3, n_estimators=1000,
n_threads=20)
tt_data_vs_MC = FoldingGroupClassifier(SklearnClassifier(tt_base_large), n_folds=2, random_state=321,
train_features=features + ['label'], group_feature='group_column')
tt_data_vs_MC.fit(combined_data_MC, combined_label, sample_weight=combined_weights)
a = []
for n, p in enumerate(tt_data_vs_MC.staged_predict_proba(combined_data_MC)):
a.append(roc_auc_score(combined_label, p[:, 1], sample_weight=combined_weights))
plot(a)
Explanation: train classifier to distinguish data and MC
End of explanation
combined_p = tt_data_vs_MC.predict_proba(combined_data_MC)[:, 1]
roc_auc_score(combined_label, combined_p, sample_weight=combined_weights)
roc_auc_score(combined_label, combined_p, sample_weight=combined_weights_all)
Explanation: quality
End of explanation
from utils import calibrate_probs, plot_calibration
combined_p_calib = calibrate_probs(combined_label, combined_weights, combined_p)[0]
plot_calibration(combined_p, combined_label, weight=combined_weights)
plot_calibration(combined_p_calib, combined_label, weight=combined_weights)
Explanation: calibrate probabilities (due to reweighting rule where probabilities are used)
End of explanation
# reweight data predicted as data to MC
used_probs = combined_p_calib
data_probs_to_be_MC = used_probs[combined_label == 0]
MC_probs_to_be_MC = used_probs[combined_label == 1]
track_weights_data = numpy.ones(len(data))
# take data with probability to be data
mask_data = data_probs_to_be_MC < 0.5
track_weights_data[mask_data] = (data_probs_to_be_MC[mask_data]) / (1 - data_probs_to_be_MC[mask_data])
# reweight MC predicted as MC to data
track_weights_MC = numpy.ones(len(MC))
mask_MC = MC_probs_to_be_MC > 0.5
track_weights_MC[mask_MC] = (1 - MC_probs_to_be_MC[mask_MC]) / (MC_probs_to_be_MC[mask_MC])
# simple approach, reweight only MC
track_weights_only_MC = (1 - MC_probs_to_be_MC) / MC_probs_to_be_MC
# data_ids = numpy.unique(data['group_column'], return_inverse=True)[1]
# MC_ids = numpy.unique(MC['group_column'], return_inverse=True)[1]
# # event_weight_data = (numpy.bincount(data_ids, weights=data.N_sig_sw) / numpy.bincount(data_ids))[data_ids]
# # event_weight_MC = (numpy.bincount(MC_ids, weights=MC.N_sig_sw) / numpy.bincount(MC_ids))[MC_ids]
# # normalize weights for tracks in a way that sum w_track = 1 per event
# track_weights_data /= numpy.bincount(data_ids, weights=track_weights_data)[data_ids]
# track_weights_MC /= numpy.bincount(MC_ids, weights=track_weights_MC)[MC_ids]
Explanation: compute MC and data track weights
End of explanation
hist(combined_p_calib[combined_label == 1], label='MC', normed=True, alpha=0.4, bins=60,
weights=combined_weights_MC)
hist(combined_p_calib[combined_label == 0], label='data', normed=True, alpha=0.4, bins=60,
weights=combined_weights_data);
legend(loc='best')
hist(track_weights_MC, normed=True, alpha=0.4, bins=60, label='MC')
hist(track_weights_data, normed=True, alpha=0.4, bins=60, label='RD');
legend(loc='best')
numpy.mean(track_weights_data), numpy.mean(track_weights_MC)
hist(combined_p_calib[combined_label == 1], label='MC', normed=True, alpha=0.4, bins=60,
weights=track_weights_MC * MC.N_sig_sw.values)
hist(combined_p_calib[combined_label == 0], label='data', normed=True, alpha=0.4, bins=60,
weights=track_weights_data * data.N_sig_sw.values);
legend(loc='best')
roc_auc_score(combined_label, combined_p_calib,
sample_weight=numpy.concatenate([track_weights_data * data.N_sig_sw.values,
track_weights_MC * MC.N_sig_sw.values]))
Explanation: reweighting plotting
End of explanation
%%time
tt_check = FoldingGroupClassifier(SklearnClassifier(tt_base), n_folds=2, random_state=433,
train_features=features + ['label'], group_feature='group_column')
tt_check.fit(combined_data_MC, combined_label,
sample_weight=numpy.concatenate([track_weights_data * data.N_sig_sw.values * mask_sw_positive,
track_weights_MC * MC.N_sig_sw.values]))
roc_auc_score(combined_label, tt_check.predict_proba(combined_data_MC)[:, 1],
sample_weight=numpy.concatenate([track_weights_data * data.N_sig_sw.values * mask_sw_positive,
track_weights_MC * MC.N_sig_sw.values]))
# * sum(track_weights_data * mask_sw_positive) / sum(track_weights_MC)
roc_auc_score(combined_label, tt_check.predict_proba(combined_data_MC)[:, 1],
sample_weight=numpy.concatenate([track_weights_data * data.N_sig_sw.values,
track_weights_MC * MC.N_sig_sw.values]))
# * sum(track_weights_data) / sum(track_weights_MC)
Explanation: Check reweighting rule
train classifier to distinguish data vs MC with provided weights
End of explanation
tt_reweighted_MC = FoldingGroupClassifier(SklearnClassifier(tt_base), n_folds=2, random_state=321,
train_features=features, group_feature='group_column')
%time tt_reweighted_MC.fit(MC, MC.label, sample_weight=track_weights_MC * MC.N_sig_sw.values)
pass
pandas.DataFrame({'dataset': ['MC', 'data'],
'quality': [roc_auc_score(
B_signs_MC,
compute_Bprobs(MC, tt_reweighted_MC.predict_proba(MC)[:, 1],
weights=track_weights_MC, normed_weights=False),
sample_weight=B_weights_MC),
roc_auc_score(
B_signs,
compute_Bprobs(data, tt_reweighted_MC.predict_proba(data)[:, 1],
weights=track_weights_data, normed_weights=False),
sample_weight=B_weights)]})
pandas.DataFrame({'dataset': ['MC', 'data'],
'quality': [roc_auc_score(
B_signs_MC,
compute_Bprobs(MC, tt_reweighted_MC.predict_proba(MC)[:, 1],
weights=track_weights_MC, normed_weights=False),
sample_weight=B_weights_MC),
roc_auc_score(
B_signs,
compute_Bprobs(data, tt_reweighted_MC.predict_proba(data)[:, 1],
weights=track_weights_data, normed_weights=False),
sample_weight=B_weights)]})
Explanation: Classifier trained on MC
End of explanation
%%time
tt_reweighted_data = FoldingGroupClassifier(SklearnClassifier(tt_base), n_folds=2, random_state=321,
train_features=features, group_feature='group_column')
tt_reweighted_data.fit(data, data.label,
sample_weight=track_weights_data * data.N_sig_sw.values * mask_sw_positive)
pass
pandas.DataFrame({'dataset': ['MC', 'data'],
'quality': [roc_auc_score(
B_signs_MC,
compute_Bprobs(MC, tt_reweighted_data.predict_proba(MC)[:, 1],
weights=track_weights_MC, normed_weights=False),
sample_weight=B_weights_MC),
roc_auc_score(
B_signs,
compute_Bprobs(data, tt_reweighted_data.predict_proba(data)[:, 1],
weights=track_weights_data, normed_weights=False),
sample_weight=B_weights)]})
pandas.DataFrame({'dataset': ['MC', 'data'],
'quality': [roc_auc_score(
B_signs_MC,
compute_Bprobs(MC, tt_reweighted_data.predict_proba(MC)[:, 1],
weights=track_weights_MC, normed_weights=False),
sample_weight=B_weights_MC),
roc_auc_score(
B_signs,
compute_Bprobs(data, tt_reweighted_data.predict_proba(data)[:, 1],
weights=track_weights_data, normed_weights=False),
sample_weight=B_weights)]})
Explanation: Classifier trained on data
End of explanation
numpy.mean(mc_sum_weights_per_event), numpy.mean(data_sum_weights_per_event)
_, data_ids = numpy.unique(data['group_column'], return_inverse=True)
mc_sum_weights_per_event = numpy.bincount(MC.group_column.values, weights=track_weights_MC)
data_sum_weights_per_event = numpy.bincount(data_ids, weights=track_weights_data)
hist(mc_sum_weights_per_event, bins=60, normed=True, alpha=0.5)
hist(data_sum_weights_per_event, bins=60, normed=True, alpha=0.5, weights=B_weights);
hist(mc_sum_weights_per_event, bins=60, normed=True, alpha=0.5)
hist(data_sum_weights_per_event, bins=60, normed=True, alpha=0.5, weights=B_weights);
hist(numpy.bincount(MC.group_column), bins=81, normed=True, alpha=0.5, range=(0, 80))
hist(numpy.bincount(data.group_column), bins=81, normed=True, alpha=0.5, range=(0, 80));
hist(expit(p_tt_mc) - expit(p_data), bins=60, weights=B_weights, normed=True, label='standard approach',
alpha=0.5);
hist(expit(p_data_w_MC) - expit(p_data_w), bins=60, weights=B_weights, normed=True, label='compensate method',
alpha=0.5);
legend()
xlabel('$p_{MC}-p_{data}$')
Explanation:
End of explanation
from utils import compute_mistag
bins_perc = [10, 20, 30, 40, 50, 60, 70, 80, 90]
compute_mistag(expit(p_data), B_signs, B_weights, chosen=numpy.ones(len(B_signs), dtype=bool),
bins=bins_perc,
uniform=False, label='data')
compute_mistag(expit(p_tt_mc), B_signs, B_weights, chosen=numpy.ones(len(B_signs), dtype=bool),
bins=bins_perc,
uniform=False, label='MC')
compute_mistag(expit(p_data_w), B_signs, B_weights, chosen=numpy.ones(len(B_signs), dtype=bool),
bins=bins_perc,
uniform=False, label='new')
legend(loc='best')
xlim(0.3, 0.5)
ylim(0.2, 0.5)
bins_edg = numpy.linspace(0.3, 0.9, 10)
compute_mistag(expit(p_data), B_signs, B_weights, chosen=numpy.ones(len(B_signs), dtype=bool),
bins=bins_edg,
uniform=True, label='data')
compute_mistag(expit(p_tt_mc), B_signs, B_weights, chosen=numpy.ones(len(B_signs), dtype=bool),
bins=bins_edg,
uniform=True, label='MC')
compute_mistag(expit(p_data_w), B_signs, B_weights, chosen=numpy.ones(len(B_signs), dtype=bool),
bins=bins_edg,
uniform=True, label='new')
legend(loc='best')
Explanation: Calibration
End of explanation |
863 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
23-10-2017
<center> <h1> Computational Quantum Dynamics </h1> </center>
Exercise 1
3. Eigen values of random Hermitian matrices.
(a) Write a Python script that generates a complex random Hermitian matrix of size $N x N$ and calculates all its eigenvalues $\lambda$ The matrix’ diagonal elements should be independently
identically distributed real random numbers with a Gaussian distribution with mean zero and variance one. The off-diagonal elements should have real and imaginary parts that are independently identically distributed random numbers with a Gaussian distribution with mean zero and variance one.
Solution
Step1: (b) Plot a histogram of the scaled eigenvalues $\lambda_i / \sqrt{2N}$. Scale the histogram such that the
histogram’s area is normalized to one. Set the number of bins to 64 to get a reasonableresolution.
Step2: (c) Extend the script from the previous two parts to generate M matrices and plot the
distribution of the N × M scaled eigen values. As one can show the distribution of the
scaled eigenvalues converges in the limit N → ∞ to some limiting distribution that has
a rather simple mathematical expression. Can you guess the mathematical law of this
limiting distribution from the numerical data? Choose the parameters N and M such
that you are close enough to the limiting case but the computing time remains limited
such that the simulation can be carried out in a reasonable amount of time.
Solution
We chose N = 100 and M = 1000. The distribution is the Marčenko–Pastur distribution.
Step3: (d) The SciPy library has two functions to calculate the eigen values of a dense matrix, one
that works for arbitrary quadratic matrices, one that is limited to Hermitian matrices.
Give three reasons why one should prefer the specialized function over the more general
in the case of Hermitian matrices.
Solution
My reasons are
Step4: (b) Look into the SciPy documentation how to evaluate numerically one-dimensional integrals. Check numerically the orthogonality relation
$\int_{-\infty}^{\infty} \Psi_m(x)\Psi_n(x) = \delta_{m,n}$. | Python Code:
import numpy as np
from numpy import linalg as LA
import matplotlib.pyplot as plt
from math import factorial
from itertools import combinations_with_replacement
from scipy.integrate import quad
%matplotlib inline
def generate_random_hermitian(N):
A = np.random.normal(size=(N, N))
H = (np.tril(A) + np.triu(A.T.conj())) / np.sqrt(2)
return H
def check_if_hermitian(H):
return np.sum(np.matrix(H).getH() == np.matrix(H)) == H.size
Explanation: 23-10-2017
<center> <h1> Computational Quantum Dynamics </h1> </center>
Exercise 1
3. Eigen values of random Hermitian matrices.
(a) Write a Python script that generates a complex random Hermitian matrix of size $N x N$ and calculates all its eigenvalues $\lambda$ The matrix’ diagonal elements should be independently
identically distributed real random numbers with a Gaussian distribution with mean zero and variance one. The off-diagonal elements should have real and imaginary parts that are independently identically distributed random numbers with a Gaussian distribution with mean zero and variance one.
Solution
End of explanation
N = 300
x = np.linspace(-1, 1, 1000)
plt.hist(LA.eigvalsh(generate_random_hermitian(N)) / np.sqrt(2 * N), bins=64, normed=True);
Explanation: (b) Plot a histogram of the scaled eigenvalues $\lambda_i / \sqrt{2N}$. Scale the histogram such that the
histogram’s area is normalized to one. Set the number of bins to 64 to get a reasonableresolution.
End of explanation
N = 100
M = 1000
eignval = []
for i in range(M):
eignval.extend(list(LA.eigvalsh(generate_random_hermitian(N)) / np.sqrt(2 * N)))
plt.hist(eignval, bins=64, normed=True);
plt.plot(x, .63 * np.sqrt(1 - x ** 2)); # .315 is an approximation
Explanation: (c) Extend the script from the previous two parts to generate M matrices and plot the
distribution of the N × M scaled eigen values. As one can show the distribution of the
scaled eigenvalues converges in the limit N → ∞ to some limiting distribution that has
a rather simple mathematical expression. Can you guess the mathematical law of this
limiting distribution from the numerical data? Choose the parameters N and M such
that you are close enough to the limiting case but the computing time remains limited
such that the simulation can be carried out in a reasonable amount of time.
Solution
We chose N = 100 and M = 1000. The distribution is the Marčenko–Pastur distribution.
End of explanation
def quantum_eignfunctions(x, n, l=1):
h = scipy.special.hermite(n)
coef = 1 / np.sqrt(2 ** n * factorial(n) * l) * np.pi ** -.25
return coef * np.exp(-x**2 / (2 * l * l)) * h(x / l)
x = np.linspace(-5, 5, 1000)
for n in range(0, 5):
plt.plot(x, quantum_eignfunctions(x, n), label=r'$n$ = ' + str(n))
plt.legend()
plt.xlabel(r'$x$')
plt.ylabel(r'$\Psi_n(x)$')
plt.grid(linestyle=':')
Explanation: (d) The SciPy library has two functions to calculate the eigen values of a dense matrix, one
that works for arbitrary quadratic matrices, one that is limited to Hermitian matrices.
Give three reasons why one should prefer the specialized function over the more general
in the case of Hermitian matrices.
Solution
My reasons are:
* A function specific for hermitian matrices is probably optimized to work faster than in the general case.
* It returns the eignvalues as real numbers and not complex with the immaginary part close to 0.
* For this it returns the values sorted
* It helps to keep the code clean.
3. Eigen functions of the quantum harmonic oscillator.
The normalized eigen functions of the quantum harmonic oscillator are given by
$$ \psi_n(x)\equiv \left\langle x \mid n \right\rangle = {1 \over \sqrt{2^n n! l}}~ \pi^{-1/4} e^{-x^2 / (2 l^2)} H_n(x/l), $$
where $H_n(x)$ denotes the Hermite polynomials, where the first few are given by
$H_n(x) = 1$,
$H_n(x) = 2x$,
$H_n(x) = 4x^2 - 2$
(a) Plot the first five eigen functions $\Psi_n(x)$ into a single diagram for ℓ = 1. Annotate your
plot appropriately.
End of explanation
psi_norm = lambda x, n, m: quantum_eignfunctions(x, n) * quantum_eignfunctions(x, m)
for n_m in combinations_with_replacement(range(5), 2):
n, m = n_m
y, err = (quad(psi_norm, -np.inf, np.inf, args=(n, m)))
if y < err:
y = 0.
np.random.random((4, 4))np.random.random((4, 4)) print('For n =', n, 'and m =', m, ': integral =', round(y, 2))
Explanation: (b) Look into the SciPy documentation how to evaluate numerically one-dimensional integrals. Check numerically the orthogonality relation
$\int_{-\infty}^{\infty} \Psi_m(x)\Psi_n(x) = \delta_{m,n}$.
End of explanation |
864 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
阅读笔记
作者:方跃文
Email
Step1: 由于这个是逗号隔开的文本,我们可以方便地使用 read_csv 来读入数据
Step2: 上面这个例子中是已经包含了header部分了,但并不是所有数据都自带header,例如
Step3: df1中我没有指明是header=None,所以原始文件中都第一行被错误地用作了header。为了
避免这种错误,需要明确指明header=None
当然,我们也可以自己去指定名字:
Step4: 如果我们在读入数据都时候,就希望其中都某一列作为索引index,比如我们可以对上面这个例子,将message这列作为索引:
Step5: 如果我们想从多列转化成层次化索引(hierarchical index),我们需要由列编号或者列名组成的列表即可。
先看个例子
Step6: 实际情况中,有的表格的分隔符并不是逗号,或者相等间隔的空白富,例如下面这种情形,
Step7: 在第一列中,A和B之间的空格符明显多余第二到第五列数字之间的空格符。假设,我们还是使用默认的sep=‘’,我们将得到
Step8: 显然,ABC之间的空格没有被正确识别,导致ABC被列入了同一个列之中。
喂了
我们可以传递一个正则表达式作为分隔符。Here, 我们就可以使用
正则表达式 \s+ 来表示;
Step10: 我们也可以使用这个delim_whitespace=True来实现数据的正确读入
Step11: The parser functions(解析器函数)in the Table 6-1 have many aiddtional arguments to help
handle the wide variaty of exception fil formats that occur.让我来举个例子。我们可以利用skiprows来
跳过文件中的前面几行或者特别想指定的行。这个功能是相当有用的。特别是像我这样的人,喜欢在rawdata里面
写几句comments来说明一下这些数据的内容。
Step12: 在chapter05的笔记中,我也说到过,实际处理的数据是很有可能包含空数剧的。那些missing data通常
不会被呈现出来,或者会被一些sentinel value所替代,例如NA或NULL
Step13: The na_values option can take either a list or a set of strings to consider
missing values
Step14: 可以用一个字典为各个列指定不同的NA标记值:这就可以让我们很方便地在读取数据时候定义那些是null值。
Step15: Reading text files in pieces 逐块读取文本文件
对于很大的文件,我们一般希望只读取一小部分,或者对小部分进行迭代式地读入。
Step16: 在notebook中查看很大的文件中的内容时候,最好设置一下display时候的最大行数,这样就不会把所有列给打印出来
Step17: 如果想逐块读入文件,我们可以指定chunksize来实现:
Step18: 这里 TextParser 允许我们可以进行迭代。
Step19: 这样我们就有了
Step20: 对 value_counts 有点忘记了,所以特意地写了下面这个简单对例子来说明。具体对可以在chapter05中看笔记。
Step21: Writing data to text format 将数据输出到文本
之前我们只介绍了读入文本,但是却没有说输出文本。先来看些简单都例子:
Step22: 相比于原始数据,我们注意到输出的csv文件中也包含了索引。
Step23: missing data在上面是被表示为空字符串,但是可能并不容易看出来。
我们也许希望它是用其他字符表示到,比如用 NULL 来表示
Step24: 观察上面到几个例子,我们可以看出来,默认请看下,index和header都会被输出。
不过,我们也可以禁止输出他们
Step25: 此外我们还可以指定输出某些列
Step26: 我们也能将Series输出到文本:
Step27: 从csv转为series的方法一:先化为DataFrame,然后利用loc方法转为Series
Step28: 方法二:这也是原书第一版中的方法,不过这个方法已经被弃用。
Step29: 方法三:使用 squeeze=True,这是我个人最推荐的方法。不过根据官方的文档说法,squeeze只在原始数据包含一个columns时候才会返回Series
squeeze
Step30: Working with delimited formats
Although we can use functions like pandas.read_table to laod most formas of tabular data, however, munal processing may be necessary sometimes. 其实实际情况中,我们拿到的数据很有可能会有一些很奇特的行,他们无法被read_table等函数识别导入。
为了说明这些基本工具,我们来看一些简单的例子
Step31: 从这个地方开始,我们可以把数据改成一个我们自己期望的格式。
我们来一步一步的举一个例子。
首先,将文件读入到一个lines的列表中
Step32: CSV文件格式有各种不同的格式。为了定义一个有特别的delimiter、字符串引用规则或者行结尾的csvfile,我们可以定一一个
简单的subcalss:
Step33: 当然,我们也可以不用subclass,而直接通过keywords来独立地指定csv文件的dialect参数:
Step34: 对于更加复杂的文件或者有固定的多个字符的delimiters情况,我们可能无法使用csv module。
在这些情况下,我们不得不使用line splitting和其他的cleanup手段,例如split mehtod,
或者正则表达式方法re.split
To write delimited files manually, we can use csv.writer. It
accepts an open, writable file object and the same dialect
and format options as csv.reader
Step36: JSON data
JSON, short for Javascript object notation, has become one of the standard formats
for sending data for HTTP request between web browsers and other applications.
It is a much more free-form data format than a tabular text form like CSV.
Step37: There are several Python libraries that can read or write JSON data. For exampple, json module
Step38: We can also revert the above result object back to JSON data using json.dump
Step39: We can also pass a list of dicts to the DataFrame constructor and slect a subset of the data fields
Step40: pandas.read_json 可以自动的将JSON datasets转化为Series或者DataFrame,例如
Step41: If we want to export data from pandas to JSON, one is to use the 'to_json' methods on Series
and DataFrame"
Step42: Binary Data Formats
将数据转换到二进制格式上最简单的方式就是使用python内置的pickle。
pandas对象都有 to_pickle 方法可以将数据写入到pickle格式中。
Step43: Using pickle module, we can read any 'pickled' objsect stored in a file, or even more easiliy
we can use pandas.read_pickle, for example
Step44: Attention
pickle只建议于短期存储格式的情形。因为pickle的数据格式随着时间可能变得不稳定;一个今天pickled的数据,可能在明天就会因为
更新的pickle库版本而无法pickle。尽管开发者已经努力维护避免这种情况,但是在未来的某个时间点上可能最好还是放弃使用pickle format.
pandas has built-in support for two more library data formats, i.e. HDF5 and Message Pack.
Some other storage formats for pandas or NumPy data include
Step45: To solve the above prolem, I installed sevearl modules
Step46: HDFStore supports two storage schemas, 'fixed' and 'table'.
The latter is generally slower, but it supports query operations using a special
syntax
Step47: put其实是store['obj2'] = frame 的精确写法,只是前者可以让我们使用一些其他的选项,然更好的对我们想要存储的数据
进行特别的格式化。
pandas.read_hdf functions gives us a shortcut to these tools
Step48: Note
如果我们处理的数据是存储在远程的服务器上的话,可以使用一个专为分布式存设计的二进制格式,比如Apache Parquet。
这些格式都还处于发展中。
如果数据大部分都在本地,那么鼓励去探索下PyTables和h5py这两者是否也可以满足我们的需求,以及与pd.HDFStore执行效率
的差别。
Reading Microsoft Excel Files
read excel
pandas supports reading tabular data in Excel files using either the ExcelFile class or pandas.read_excel function. These tools use the add-on packages xlrd and openpyxl to read XLS and XLSX files, respectively. We may need to install these manually with pip or conda.
To use ExcelFile, create an instance by passing a path to an xls or xlsx file
Step49: 对于又多个sheet的Excel表格,更快的方式就是像上面那样,先create ExcelFile。
不过,也可以简单地把文件路径和文件名直接pass给read_excel. 更推荐前面一种做法。
Step50: write data to excel
To write pandas data to Excel format, firstly create an ExcelWriter,
then write data to it using pandas objects’ to_excel method
Step51: Interacting with web APIs
很多网站会提供API供其他人从网站获取数据。获取这些数据的方式有很多,其中至一就是requests
例如,我们可以使用requests找出最近的30个 pands库的 Github issues
Step52: The "Response" object's json method will return a dictionary contaning JSON parsed into native
Python objects
Step53: data的每个元素都是一个字典,包含了所有在github issues中找到的信息(除了comments)。我们可以
直接把这些data信息传递给DataFrame和exact fields of interest:
Step54: 现在,我们来看看,利用requests 能从我的github主页上找出哪些信息
Step56: Interacting with databases
Many data are usually stored in database. SQL-based relational databases (such as SQL server, PostgreSQL, and MySQL)
are in wide use, and many alternative databases have come quite popular. The choice of database is usually dependent
on the performance,data integrity (数据的完整性), and scalability (可扩展性)needs of an application
Loading data from SQL into a DataFrame is fairly straightforward, and pandas has some functions to simplify the process.
As an example, I'll create a SQLite database using Python's built-in sqlite3 driver.
Step57: Now, let's insert a few rows of data
Step58: Most Python SQL drivers return a list of tuples when selecting data from a table
Step59: 我们可以将元祖列表传给DataFrame的构造器,不过我们还需要列名。它包含在cursor的description中
Step60: 上面的做法是蛮麻烦的,因为每次都得向database进行query。SQLAlchemy project提供了一些渐变的方法。pandas有一个 read_sql 函数可以是我们从一般的 SQLAlchemy 链接中获取数据。
下面,我们就举一个例子,如何使用 SQLAlchemy 连接到上面创建的 SQLite databse,并且从中读取数据 | Python Code:
!cat chapter06/ex1.csv
Explanation: 阅读笔记
作者:方跃文
Email: [email protected]
** 时间:始于2018年3月6日, 结束写作于 2018年7月27日, 2018年11月复习一次。
第六章 数据加载、存储和文件格式
数据如果不能导入和导出,那么本书中介绍的工具自然也就没有用武之地了。在我们的日常生活和研究中,有许多文件输入和输出的应用场景。
输入和输出通常划分为几个大类:读取文本文件和其他更高效的磁盘存储格式,加载数据库中的数据,利用web api操作网络资源。
读写文本格式的数据
pandas 提供了一些用于将表格型数据读取为 DataFrame 对象的函数。表6-1对此进行了总结,其中 read_csv and read_table 将会是我们经常用到的。
Table 6-1. Parsing functions in pandas
| Functions | Description|
| ------------- |:-------------:|
| read_csv | load delimited data from a file, URL, or file-like object; use comma as defult delimiter |
| read_table | load delimited data from a file, URL, or file-like object; use tab ('\t') as defult delimiter |
| read_fwf | read data in fixed-width column format, i.e. no delimiters|
|read_clipboard| Version of read_table that reads data from the clipboard; useful for converting tables from web pages|
|read_excel | read tabular data from an Excel XLS or XLSX file |
| read_hdf | read HDF5 files written by pandas |
| read_html | read all tables found in a given HTML document|
| read_json | read data from a JSON |
| read_msgpack | read pandas data encoded using the MessagePack binary format|
| read_pickle | read an arbitrary object in Python pickle format| pickle竟然是咸菜的意思哦~
|read_sas| read a SASdataset stored in one of the SAS system's custom strorage format|
| read_sql | read the results of a SQL query (using SQLAlchemy) asa pandas DataFrame|
| read_stata | read a dataset from Stata file format |
| read_feather | read the Feather binary file format |
上述函数的重要功能之一就是type inferende,也就是类型推断。我们不需要为所读入的数据指定是什么类型。不过日期和其他自定义类型数据的处理
需要小心一些,因为自定义数据可能并不那么容易被推断出来属于何种数据类型。
HDF5,Feather和mgspack在文件中就会有数据类型,因此读入会更加方便。
Let's start with a small comma-separated csv file:
End of explanation
import pandas as pd
df = pd.read_csv('chapter06/ex1.csv')
df
当然我们也可以使用 read_table,只需要将delimiter指定为逗号即可
import pandas as pd
df1 = pd.read_table('chapter06/ex1.csv', sep=',')
df1
Explanation: 由于这个是逗号隔开的文本,我们可以方便地使用 read_csv 来读入数据
End of explanation
import pandas as pd
df1 = pd.read_table('chapter06/ex2.csv', sep=',')
df2 = pd.read_table('chapter06/ex2.csv', sep=',', header=None)
df1
df2
Explanation: 上面这个例子中是已经包含了header部分了,但并不是所有数据都自带header,例如
End of explanation
import pandas as pd
df1 = pd.read_csv('chapter06/ex2.csv', names=['a', 'b', 'c', 'd', 'message'])
df1
Explanation: df1中我没有指明是header=None,所以原始文件中都第一行被错误地用作了header。为了
避免这种错误,需要明确指明header=None
当然,我们也可以自己去指定名字:
End of explanation
import pandas as pd
names = ['a', 'b', 'c', 'd', 'message']
df1 = pd.read_csv('chapter06/ex2.csv', names=names, index_col='message')
df1
Explanation: 如果我们在读入数据都时候,就希望其中都某一列作为索引index,比如我们可以对上面这个例子,将message这列作为索引:
End of explanation
!cat ./chapter06/csv_mindex.csv
import pandas as pd
df = pd.read_csv('chapter06/csv_mindex.csv', index_col=['key1', 'key2'])
df
Explanation: 如果我们想从多列转化成层次化索引(hierarchical index),我们需要由列编号或者列名组成的列表即可。
先看个例子
End of explanation
list(open('chapter06/ex3.txt'))
Explanation: 实际情况中,有的表格的分隔符并不是逗号,或者相等间隔的空白富,例如下面这种情形,
End of explanation
import pandas as pd
result = pd.read_table('chapter06/ex3.txt')
result
Explanation: 在第一列中,A和B之间的空格符明显多余第二到第五列数字之间的空格符。假设,我们还是使用默认的sep=‘’,我们将得到
End of explanation
import pandas as pd
result = pd.read_table('chapter06/ex3.txt', sep='\s+')
result
Explanation: 显然,ABC之间的空格没有被正确识别,导致ABC被列入了同一个列之中。
喂了
我们可以传递一个正则表达式作为分隔符。Here, 我们就可以使用
正则表达式 \s+ 来表示;
End of explanation
import pandas as pd
result = pd.read_table('chapter06/ex3.txt', delim_whitespace=True)
delim_whitespace: is a boolean, default False
Specifies whether or not whitespace (e.g. ' ' or '\t') will be used as the sep.
Equivalent to setting sep='\s+'.
print(result)
Explanation: 我们也可以使用这个delim_whitespace=True来实现数据的正确读入
End of explanation
!cat chapter06/ex4.csv
import pandas as pd
df = pd.read_csv('./chapter06/ex4.csv', skiprows=[0,2,3])
df
Explanation: The parser functions(解析器函数)in the Table 6-1 have many aiddtional arguments to help
handle the wide variaty of exception fil formats that occur.让我来举个例子。我们可以利用skiprows来
跳过文件中的前面几行或者特别想指定的行。这个功能是相当有用的。特别是像我这样的人,喜欢在rawdata里面
写几句comments来说明一下这些数据的内容。
End of explanation
list(open('chapter06/ex5.csv'))
import pandas as pd
import numpy as np
result = pd.read_csv('./chapter06/ex5.csv',sep=',')
result
print(result['c'][1])
pd.isnull(result)
pd.notnull(result)
Explanation: 在chapter05的笔记中,我也说到过,实际处理的数据是很有可能包含空数剧的。那些missing data通常
不会被呈现出来,或者会被一些sentinel value所替代,例如NA或NULL
End of explanation
result = pd.read_csv('chapter06/ex5.csv', na_values = 'NULL') # 注意原始数据中有一处有连续两个逗号,那就是产生NULL的地方。
#因为在原属数据中,逗号之间并未输入任何数据,因此这个空的数据类型仍然是 floating
result
Explanation: The na_values option can take either a list or a set of strings to consider
missing values:也就是说只要原始数据中出现了na_values所指定的字符串或者list,就会以
NaN的方式呈现在pandas数据中。需要注意的是,如果原始数据处写的是NA或者NULL,被read只会它们都只是str,并不能直接转化为floating。
End of explanation
import pandas as pd
sentinels = {'message':['foo', 'NA'], 'something': ['two']}
result = pd.read_csv('chapter06/ex5.csv', na_values = sentinels)
result
pd.isnull(result)
Explanation: 可以用一个字典为各个列指定不同的NA标记值:这就可以让我们很方便地在读取数据时候定义那些是null值。
End of explanation
result = pd.read_csv('./chapter06/ex6.csv', nrows=5) # 我们只从原始数据中读入5行
result
Explanation: Reading text files in pieces 逐块读取文本文件
对于很大的文件,我们一般希望只读取一小部分,或者对小部分进行迭代式地读入。
End of explanation
pd.options.display.max_rows = 8 # only print 8 rows clearly,other rows using ...
result = pd.read_csv('./chapter06/ex6.csv')
result
Explanation: 在notebook中查看很大的文件中的内容时候,最好设置一下display时候的最大行数,这样就不会把所有列给打印出来
End of explanation
chunker = pd.read_csv('chapter06/ex6.csv', chunksize = 1000)
chunker
Explanation: 如果想逐块读入文件,我们可以指定chunksize来实现:
End of explanation
chunker = pd.read_csv('chapter06/ex6.csv', chunksize = 1000)
tot = pd.Series([])
for piece in chunker:
# print(piece['key'].value_counts())
tot = tot.add(piece['key'].value_counts(), fill_value=0)
tot = tot.sort_values(ascending=False)
Explanation: 这里 TextParser 允许我们可以进行迭代。
End of explanation
tot[:10]
Explanation: 这样我们就有了
End of explanation
pd.options.display.max_rows = 20
result = pd.DataFrame({"Values": [1,2,3,4,22,2,2,3,4]})
result
result['Values'].value_counts()
Explanation: 对 value_counts 有点忘记了,所以特意地写了下面这个简单对例子来说明。具体对可以在chapter05中看笔记。
End of explanation
!cat chapter06/ex5.csv
import pandas as pd
data = pd.read_csv('chapter06/ex5.csv')
data
data.to_csv('chapter06/out.csv')
!cat chapter06/out.csv
Explanation: Writing data to text format 将数据输出到文本
之前我们只介绍了读入文本,但是却没有说输出文本。先来看些简单都例子:
End of explanation
import sys #使用了sys.out 所以data会被直接打印在屏幕端,而不是输出到其他文本文件
data.to_csv(sys.stdout, sep='|')
data.to_csv(sys.stdout, sep='@')
Explanation: 相比于原始数据,我们注意到输出的csv文件中也包含了索引。
End of explanation
import pandas as pd
import sys
data = pd.read_csv('chapter06/ex5.csv')
data.to_csv(sys.stdout, na_rep='NULL')
import pandas as pd
import sys
data = pd.read_csv('chapter06/ex5.csv')
data.to_csv(sys.stdout, na_rep='NAN')
Explanation: missing data在上面是被表示为空字符串,但是可能并不容易看出来。
我们也许希望它是用其他字符表示到,比如用 NULL 来表示
End of explanation
import pandas as pd
import sys
data = pd.read_csv('chapter06/ex5.csv')
data.to_csv(sys.stdout, na_rep='NAN', index=False, header=False)
Explanation: 观察上面到几个例子,我们可以看出来,默认请看下,index和header都会被输出。
不过,我们也可以禁止输出他们
End of explanation
import pandas as pd
import sys
data = pd.read_csv('chapter06/ex5.csv')
data.to_csv(sys.stdout, na_rep='NAN', columns=['a', 'b','c'], index=False)
Explanation: 此外我们还可以指定输出某些列
End of explanation
import pandas as pd
import numpy as np
dates = pd.date_range('1/1/2000', periods=7)
ts = pd.Series(np.arange(7), index=dates)
ts
ts.to_csv('chapter06/tseries.csv')
!cat chapter06/tseries.csv
Explanation: 我们也能将Series输出到文本:
End of explanation
pd.read_csv('chapter06/tseries.csv', parse_dates=True,header=None)
result = pd.read_csv('chapter06/tseries.csv', parse_dates=True, header=None,index_col=0)
result
x = result.loc[:,1]
x
Explanation: 从csv转为series的方法一:先化为DataFrame,然后利用loc方法转为Series
End of explanation
df = pd.Series.from_csv('chapter06/tseries.csv')
# Series.from_csv has DEPRECATED. That's whi I use read_csv in above cells.
df
Explanation: 方法二:这也是原书第一版中的方法,不过这个方法已经被弃用。
End of explanation
result = pd.read_csv('chapter06/tseries.csv', parse_dates=True, header=None,index_col=0,squeeze=True)
type(result)
result
Explanation: 方法三:使用 squeeze=True,这是我个人最推荐的方法。不过根据官方的文档说法,squeeze只在原始数据包含一个columns时候才会返回Series
squeeze : boolean, default False
If the parsed data only contains one column then return a Series
End of explanation
!cat chapter06/ex7.csv
#For any file with a single-character delimiter,
#we can use csv module. To use it, pass any open file
#or file-like object to csv.reader
import csv
file = open('chapter06/ex7.csv')
reader = csv.reader(file)
type(reader)
#Iterating through the reader likea file yields
#tuples of values with any quote characters removed
for line in reader:
print(line)
Explanation: Working with delimited formats
Although we can use functions like pandas.read_table to laod most formas of tabular data, however, munal processing may be necessary sometimes. 其实实际情况中,我们拿到的数据很有可能会有一些很奇特的行,他们无法被read_table等函数识别导入。
为了说明这些基本工具,我们来看一些简单的例子
End of explanation
import csv
with open('chapter06/ex7.csv') as f:
lines = list(csv.reader(f))
header, values = lines[0], lines[1:]
header
values
#then we careate a dictionary of data columns using a dictionary comprehension
#and the expression zip(*values), which transposes rows to columns
data_dict = {h: v for h, v in zip(header, zip(*values))}
print(data_dict)
Explanation: 从这个地方开始,我们可以把数据改成一个我们自己期望的格式。
我们来一步一步的举一个例子。
首先,将文件读入到一个lines的列表中
End of explanation
csv输出这里,未来某个时间最好复习加强一下。
import csv
file = open('chapter06/ex7.csv')
class my_dialect(csv.Dialect):
lineterminator = '\m'
delimiter = ';'
quotechar = '"'
quoting = csv.QUOTE_MINIMAL
data = csv.reader(file, dialect=my_dialect)
for line in data:
print(line)
Explanation: CSV文件格式有各种不同的格式。为了定义一个有特别的delimiter、字符串引用规则或者行结尾的csvfile,我们可以定一一个
简单的subcalss:
End of explanation
import csv
file = open('chapter06/ex7.csv')
reader1 = csv.reader(file, delimiter='|', quotechar=' ', lineterminator='\r\n')
for line1 in reader1:
print(','.join(line1))
Explanation: 当然,我们也可以不用subclass,而直接通过keywords来独立地指定csv文件的dialect参数:
End of explanation
import csv
class my_dialect(csv.Dialect):
lineterminator = '\t\n' #\m
delimiter = ';'
quotechar = '"'
quoting = csv.QUOTE_MINIMAL
file = open('chapter06/ex7.csv')
with open('chapter06/mydata.csv', 'w') as f:
writer = csv.writer(f, dialect=my_dialect)
writer.writerow(('one', 'two', 'three'))
writer.writerow(('1', '2', '3'))
writer.writerow(('4', '5', '6'))
writer.writerow(('7', '8', '9'))
!cat chapter06/mydata.csv
Explanation: 对于更加复杂的文件或者有固定的多个字符的delimiters情况,我们可能无法使用csv module。
在这些情况下,我们不得不使用line splitting和其他的cleanup手段,例如split mehtod,
或者正则表达式方法re.split
To write delimited files manually, we can use csv.writer. It
accepts an open, writable file object and the same dialect
and format options as csv.reader:
End of explanation
obj =
{"name": "Wes",
"places_lived": ["United States", "Spain", "Germany"],
"pet": null,
"siblings": [{"name": "Scott", "age": 30, "pets": ["Zeus", "Zuko"]},
{"name": "Katie", "age": 38,
"pets": ["Sixes", "Stache", "Cisco"]}]
}
Explanation: JSON data
JSON, short for Javascript object notation, has become one of the standard formats
for sending data for HTTP request between web browsers and other applications.
It is a much more free-form data format than a tabular text form like CSV.
End of explanation
import json
result = json.loads(obj)
result
list(result)
result['places_lived']
type(result)
Explanation: There are several Python libraries that can read or write JSON data. For exampple, json module
End of explanation
asjson = json.dumps(result)
asjson
Explanation: We can also revert the above result object back to JSON data using json.dump
End of explanation
import pandas as pd
siblings = pd.DataFrame(result['siblings'], columns = ['name', 'age'])
siblings
Explanation: We can also pass a list of dicts to the DataFrame constructor and slect a subset of the data fields:
End of explanation
!cat chapter06/example.json
# import pandas as pd
import json
data = pd.read_json('chapter06/example.json')
data
Explanation: pandas.read_json 可以自动的将JSON datasets转化为Series或者DataFrame,例如
End of explanation
print(data.to_json())
Explanation: If we want to export data from pandas to JSON, one is to use the 'to_json' methods on Series
and DataFrame"
End of explanation
import pandas as pd
frame = pd.read_csv('chapter06/ex1.csv')
frame
frame.to_pickle('chapter06/frame_pickle') #this will write the data to the binary file named frame_pickle
Explanation: Binary Data Formats
将数据转换到二进制格式上最简单的方式就是使用python内置的pickle。
pandas对象都有 to_pickle 方法可以将数据写入到pickle格式中。
End of explanation
pd.read_pickle('chapter06/frame_pickle')
Explanation: Using pickle module, we can read any 'pickled' objsect stored in a file, or even more easiliy
we can use pandas.read_pickle, for example
End of explanation
import pandas as pd
import numpy as np
import
frame = pd.DataFrame({'a': np.random.randn(1000)})
store = pd.HDFStore('chapter06/mydata.h5')
Explanation: Attention
pickle只建议于短期存储格式的情形。因为pickle的数据格式随着时间可能变得不稳定;一个今天pickled的数据,可能在明天就会因为
更新的pickle库版本而无法pickle。尽管开发者已经努力维护避免这种情况,但是在未来的某个时间点上可能最好还是放弃使用pickle format.
pandas has built-in support for two more library data formats, i.e. HDF5 and Message Pack.
Some other storage formats for pandas or NumPy data include:
bcolz
A compressable column-oriented binary format based on the Blosc compression library
Feather
A cross-lanuage column-oriented file format. Feather uses the Apache Arrow columnar memory format
Using HDF5 Format
HDF5 is a well-regarded file format intended for storing large quantities of scientific arrya data.
It is available as a C library, and it has interfaces available in many other languages.
"HDF" means "hierarchical data format". each HDF5 file can store multipole datases and
supporting metadata.
Compared with other simpler formats, HDF5 feastures on-the-fly compression with a varietry of compression modes.
HDF5 can be a good choice for working with very large data-sets that don't fit into memory
尽管使用 PyTables或者h5py这两个库就可以简单直接的读写HDF5文件,但是pandas提高了高水平的接口可以简化存储Series或者DataFrame对象。
The ‘HDFStore' class works like a dict and handles the low-level details:
End of explanation
import pandas as pd
import numpy as np
frame = pd.DataFrame({'a': np.random.randn(1000)})
store = pd.HDFStore('chapter06/mydata.h5py')
store['obj1'] = frame # Store a DataFrame
store['obj1_col'] = frame['a'] # Store a Series
store
Objects contained in the HDF5 file can then be retrieved with the
same dict-like API:
x = store['obj1']
type(x)
y = store['obj1_col']
type(y)
list(store)
Explanation: To solve the above prolem, I installed sevearl modules:
pip install --upgrade tables
pip install lxml
pip install wrapt
End of explanation
store.put('obj2', frame, format='table')
store.select('obj2', where=['index >=10 and index <= 15'])
list(store)
store.close()
Explanation: HDFStore supports two storage schemas, 'fixed' and 'table'.
The latter is generally slower, but it supports query operations using a special
syntax:
End of explanation
frame.to_hdf('chapter06/mydata.h5py', 'obj3', format='table')
pd.read_hdf('chapter06/mydata.h5py', 'obj3', where=['index < 5'])
Explanation: put其实是store['obj2'] = frame 的精确写法,只是前者可以让我们使用一些其他的选项,然更好的对我们想要存储的数据
进行特别的格式化。
pandas.read_hdf functions gives us a shortcut to these tools:
End of explanation
import pandas as pd
xlsx = pd.ExcelFile('chapter06/ex1.xlsx')
# Data stored in a sheet can then be read into DataFrame with parse:
pd.read_excel(xlsx, 'Sheet1')
Explanation: Note
如果我们处理的数据是存储在远程的服务器上的话,可以使用一个专为分布式存设计的二进制格式,比如Apache Parquet。
这些格式都还处于发展中。
如果数据大部分都在本地,那么鼓励去探索下PyTables和h5py这两者是否也可以满足我们的需求,以及与pd.HDFStore执行效率
的差别。
Reading Microsoft Excel Files
read excel
pandas supports reading tabular data in Excel files using either the ExcelFile class or pandas.read_excel function. These tools use the add-on packages xlrd and openpyxl to read XLS and XLSX files, respectively. We may need to install these manually with pip or conda.
To use ExcelFile, create an instance by passing a path to an xls or xlsx file:
End of explanation
# 这里举一个例子,直接pass路径给read_excel
frame = pd.read_excel('chapter06/ex1.xlsx', 'Sheet1')
frame
Explanation: 对于又多个sheet的Excel表格,更快的方式就是像上面那样,先create ExcelFile。
不过,也可以简单地把文件路径和文件名直接pass给read_excel. 更推荐前面一种做法。
End of explanation
writer = pd.ExcelWriter('chapter06/ex2.xlsx') # Internally thsi was performed by openpyxl module
frame.to_excel(writer, 'Sheet1')
writer.save()
#我们也可以直接pass一个文件路径到 to_excel:
frame.to_excel('chapter06/ex2.xlsx')
!open chapter06/ex2.xlsx
Explanation: write data to excel
To write pandas data to Excel format, firstly create an ExcelWriter,
then write data to it using pandas objects’ to_excel method:
End of explanation
import requests
url = "https://api.github.com/repos/pandas-dev/pandas/issues"
resp = requests.get(url)
resp
Explanation: Interacting with web APIs
很多网站会提供API供其他人从网站获取数据。获取这些数据的方式有很多,其中至一就是requests
例如,我们可以使用requests找出最近的30个 pands库的 Github issues
End of explanation
data = resp.json()
data[0]['url']
data[0]['title']
Explanation: The "Response" object's json method will return a dictionary contaning JSON parsed into native
Python objects:
End of explanation
import pandas as pd
issues = pd.DataFrame(data, columns = ['number', 'title', 'labels', 'state'])
issues
Explanation: data的每个元素都是一个字典,包含了所有在github issues中找到的信息(除了comments)。我们可以
直接把这些data信息传递给DataFrame和exact fields of interest:
End of explanation
import requests
url = 'https://api.github.com/yw-fang'
resp = requests.get(url)
data = resp.json()
data['message']
data['documentation_url']
textdata = resp.text
textdata
Explanation: 现在,我们来看看,利用requests 能从我的github主页上找出哪些信息
End of explanation
import sqlite3
query =
CREATE TABLE test
(a VARCHAR(20), b VARCHAR(20),
c REAL, d INTEGER
);
con = sqlite3.connect('mydata.sqlite')
con.execute(query)
con.commit()
Explanation: Interacting with databases
Many data are usually stored in database. SQL-based relational databases (such as SQL server, PostgreSQL, and MySQL)
are in wide use, and many alternative databases have come quite popular. The choice of database is usually dependent
on the performance,data integrity (数据的完整性), and scalability (可扩展性)needs of an application
Loading data from SQL into a DataFrame is fairly straightforward, and pandas has some functions to simplify the process.
As an example, I'll create a SQLite database using Python's built-in sqlite3 driver.
End of explanation
data = [('At', 'Home', 1.25, 6),
('Out', 'Plane', 2.6, 3),
('In', 'Bottle', 1.7, 5)]
stmt = "INSERT INTO test VALUES (?, ?, ?, ?)"
con.executemany(stmt, data)
con.commit()
Explanation: Now, let's insert a few rows of data:
End of explanation
cursor = con.execute('select * from test')
rows = cursor.fetchall()
rows
Explanation: Most Python SQL drivers return a list of tuples when selecting data from a table:
End of explanation
cursor.description
pd.DataFrame(rows, columns=[x[0] for x in cursor.description])
Explanation: 我们可以将元祖列表传给DataFrame的构造器,不过我们还需要列名。它包含在cursor的description中
End of explanation
import sqlalchemy as sqla
db = sqla.create_engine('sqlite:///mydata.sqlite')
pd.read_sql('select * from test', db)
Explanation: 上面的做法是蛮麻烦的,因为每次都得向database进行query。SQLAlchemy project提供了一些渐变的方法。pandas有一个 read_sql 函数可以是我们从一般的 SQLAlchemy 链接中获取数据。
下面,我们就举一个例子,如何使用 SQLAlchemy 连接到上面创建的 SQLite databse,并且从中读取数据
End of explanation |
865 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the begining of each batch.
Step21: Encoding
Implement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn().
Step24: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
Step27: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output fuction using lambda to transform it's input, logits, to class logits.
Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.
Step41: Save Parameters
Save the batch_size and save_path parameters for inference.
Step43: Checkpoint
Step46: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
Step48: Translate
This will translate translate_sentence from English to French. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
source_id = []
target_id = []
for sentence_text in source_text.split("\n"):
sentence_id = [source_vocab_to_int[word] for word in sentence_text.split()]
source_id.append(sentence_id)
for sentence_text in target_text.split("\n"):
sentence_id = [target_vocab_to_int[word] for word in sentence_text.split()]
sentence_id.append(target_vocab_to_int["<EOS>"])
target_id.append(sentence_id)
return source_id, target_id
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
input_tensor = tf.placeholder(tf.int32,name="input", shape=(None, None))
targets = tf.placeholder(tf.int32, name="targets", shape=(None, None))
learning_rate = tf.placeholder(tf.float32, shape=None)
keep_prob = tf.placeholder(tf.float32, shape=None, name="keep_prob")
return (input_tensor, targets, learning_rate, keep_prob)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoding_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Return the placeholders in the following the tuple (Input, Targets, Learing Rate, Keep Probability)
End of explanation
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for dencoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
ending = tf.strided_slice(target_data,[0,0],[batch_size,-1],[1,1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return dec_input
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
Explanation: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the begining of each batch.
End of explanation
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
# TODO: Implement Function
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
dropout = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
enc_cell = tf.contrib.rnn.MultiRNNCell([dropout] * num_layers)
outputs, final_state = tf.nn.dynamic_rnn(enc_cell,rnn_inputs, dtype=tf.float32)
return final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn().
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
# TODO: Implement Function
train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)
train_logits = output_fn(train_pred)
return train_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
# TODO: Implement Function
infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length - 1, vocab_size)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)
return inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
End of explanation
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)
with tf.variable_scope("decoding") as decoding_scope:
output_fn = lambda x: tf.contrib.layers.fully_connected(x,vocab_size,None,scope = decoding_scope)
with tf.variable_scope("decoding") as decoding_scope:
training_logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
sequence_length,decoding_scope, output_fn,keep_prob)
with tf.variable_scope("decoding", reuse=True) as decoding_scope:
inference_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings,
target_vocab_to_int["<GO>"] ,target_vocab_to_int["<EOS>"] ,
sequence_length -1,vocab_size, decoding_scope, output_fn, keep_prob)
return (training_logits,inference_logits)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output fuction using lambda to transform it's input, logits, to class logits.
Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Decoder embedding size
:param dec_embedding_size: Encoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
rnn_inputs = tf.contrib.layers.embed_sequence(input_data, source_vocab_size,
enc_embedding_size)
encoder_state = encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob)
dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size)
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
output = decoding_layer(dec_embed_input, dec_embeddings, encoder_state, target_vocab_size, sequence_length,
rnn_size, num_layers, target_vocab_to_int, keep_prob)
return output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
End of explanation
# Number of Epochs
epochs = 10
# Batch Size
batch_size = 512
# RNN Size
rnn_size = 256
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 256
decoding_embedding_size = 256
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.5
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target_batch,
[(0,0),(0,max_seq - target_batch.shape[1]), (0,0)],
'constant')
if max_seq - batch_train_logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
sentence = sentence.lower()
words_to_ids = [vocab_to_int[word] if word in vocab_to_int.keys() else vocab_to_int["<UNK>"]
for word in sentence.split()]
return words_to_ids
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
End of explanation
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation |
866 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparing Lump Sum vs. Dollar Cost Averaging (DCA) investing
View Notebook as HTML
View Notebook on GitHub
View Notebook on Blog
The topic of investing all at once versus spreading it over time has come up a few times with peers. I remembered reading in both bogleheads and an Investopedia article that lump sum beats DCA ~66% of the time.
Both lump-sum investing and DCA have their appropriate time and place. The research shows that lump-sum investing pays off about 66% of the time, which is a long way from all the time. It certainly makes sense to look carefully at the current market conditions. If you hit that bad 33% in lumpy style, you can lose a lot of money.
The idea espoused is that the market trends up in the long term, and therefore it's better to invest as early as possible instead of spreading your investments around to avoid the bottom; time on the market is statistically better.
Sounds logical, but when it's your money on the line, something sounding good isn't always good enough.
I decided to run an experiment validating the claim using IPython Notebook, Pandas, and matplotlib for visualization.
The Experiment
This statement of being better 66% of the time wasn't completely intuitive to me, so I decided to do a little test. Let's imagine we have \$10k to invest any time in the last 16 years, from Feb 22, 2000 to Jan 9, 2016. And we want to choose the time and strategy that would have returned us the most money today. The two strategies I chose are
Step1: We'll plot all the prices at Adj Close using matplotlib, a python 2D plotting library that is Matlab flavored. We use Adjusted Close because it is commonly used for historical pricing, and accounts for all corporate actions such as stock splits, dividends/distributions and rights offerings. This happens to be our exact use-case.
Step2: Great, looks similar to the SPY chart from before. Notice how due to historical pricing, the effect of including things like dividend yields increases to total return over the years. We can easily see the the bubble and crash around 2007-2009, as well as the long bull market up since then. Also we can see in the last couple months the small dip in September/October, and barely see the drop in the last couple days in the beginning of 2016.
Calculate Lump Sum
Lump Sum means to invest everything available all at once, in this case we have a hypothetical $10,000 to spend at any day in our history of the last 16 years. Then we want to know how much that investment would be worth today.
Another way to look at this is we can make a chart where the X axis is the date we invest the lump sum, and the Y axis is the value of that investment today.
Step3: Cool! Pandas makes it really easy to manipulate data with datetime indices. Looking at the chart we see that if we'd bought right at the bottom of the 2007-2009 crash our \$10,000 would be worth ~ $32,500. If only we had a time machine...
Step4: What's nice to note as well is that even if we'd invested at the worst possible time, peak of the bubble in 2007, on Oct 9th, we'd still have come out net positive at \$14,593 today. The worst time to invest so far turns out to be more recent, on July 20th of 2015. This is because not only was the market down, but it's so recent we haven't had time for the investment to grow. Something something the best time to plant a tree was yesterday.
Calculating Dollar Cost Averaging (DCA)
Now lets do the same experiment, but instead we'll invest the \$10,000 we have using Dollar Cost Averaging (DCA). For this simple test, I'll assume instead of investing all at once, I'll invest in equal portions every 30 days (roughly a month), over a course of 360 days (roughly a year) total.
So on day 1, I invest $10,000 / 12 ~ $833.33, on day 31, the same $833.33
and so on for 12 total investments. A special case is investing within the last year, when there isn't time to DCA all of it, as a compromise, I invest what portions I can and keep the rest as cash, since that is how reality works.
Step5: Surprisingly straightforward, good job Pandas. Let's plot it similar to how we did with lump sum. The x axis is the date at which we start dollar cost averaging (and then continue for the next 360 days in 30 day increments from that date). The y axis is the final value of our investment today.
Step6: Interesting! DCA looks way really smooth and the graph is really high up, so it must be better right!? Wait, no, the Y axis is different, in fact it's highest high is around \$28,000 in comparison to the lump sums \$32,500. Let's look at the ideal/worst investment dates for DCA, I include the lump sum from before as well.
Step7: Looking at dollar cost averaging, the best day to start dollar cost averaging was July 12, 2002, when we were still recovering from the 'tech crashs. The worst day to start was around the peak of the 2007 bubble on Jan 26, 2007, and the absolute worst would have been to start last year on Jan 20, 2015.
We can already see that there's some similarities between lump sum and DCA, DCA appears to have lower highs, but also higher lows. It's difficult to compare just by looking at numbers, we need to compare the two strategies visually side by side.
Comparison of Lump Sum vs Dollar Cost Averaging
So we've just individuallly tested two investing strategies exhaustively on every possible day in the last 16 years.
Let's plot three charts on top of each other. The raw SPY stock price over the years on the top. Then in the middle we plot both lump sum and DCA on top of each other. Finally we'll plot the difference between them as $diff = lump sum - DCA$
Step8: Before we start comparing, definitely take note of the middle chart, where the initial investment of \$10k is. Notice that if we had invested using either strategy, and at any point before 2 years ago, no matter which bubble or crash, we'd have made some pretty huge returns on our investments, double and tripling at some points. This is the power of compound interest.
Looking at the DCA curve we do see the two similar humps we saw with the lump sum, but is both smoother and lags behind it. This makes perfect sense, as we're taking a type of moving average of the stock price over a year (in 30D increments) when we buy, instead of a single date.
As a result our investment with DCA is less volatile (smoother), and lags behind (averages in previous investments) the lump sum values.
The line for difference shows a positive dollar value for how much more investing in one lump sum would return versus dollar cost averaging in (blue). Similarly a negative value shows how much more dollar cost averaging would return vs a lump sum (red). The chart shows a wide swing around 2002 and 2009 between the two strategies, but elsewhere it's mostly positive (blue), suggesting lump sum tends to return a bit more overall. Let's look at the actual percentage where the values are positive (ie. where lump sum returns more).
Step9: Remarkable! So 66.3% of the time lump sum results in a higher final investment value over our monthly dollar cost averaging strategy. Almost dead on to the claims of 66% by the investopedia article I'd read.
But maybe this isn't the whole story, perhaps the lump sum returned a little better than DCA most of the time, but in the really bad times DCA would do much better?
One way to look at this, would be to see the average amount improvement lump sum has when it is better, versus the average amount DCA improves, when it is better. | Python Code:
import pandas as pd
import pandas_datareader.data as web
import datetime
pd.set_option('display.width', 200) # Displaying more columns in one row
# Data date range, Google provides up to 4000 entries in one call
start = datetime.datetime(2000, 2, 10)
end = datetime.datetime(2016, 1, 9)
spy = web.DataReader("SPY", "yahoo", start, end)
print(spy.head()) # See first few rows
Explanation: Comparing Lump Sum vs. Dollar Cost Averaging (DCA) investing
View Notebook as HTML
View Notebook on GitHub
View Notebook on Blog
The topic of investing all at once versus spreading it over time has come up a few times with peers. I remembered reading in both bogleheads and an Investopedia article that lump sum beats DCA ~66% of the time.
Both lump-sum investing and DCA have their appropriate time and place. The research shows that lump-sum investing pays off about 66% of the time, which is a long way from all the time. It certainly makes sense to look carefully at the current market conditions. If you hit that bad 33% in lumpy style, you can lose a lot of money.
The idea espoused is that the market trends up in the long term, and therefore it's better to invest as early as possible instead of spreading your investments around to avoid the bottom; time on the market is statistically better.
Sounds logical, but when it's your money on the line, something sounding good isn't always good enough.
I decided to run an experiment validating the claim using IPython Notebook, Pandas, and matplotlib for visualization.
The Experiment
This statement of being better 66% of the time wasn't completely intuitive to me, so I decided to do a little test. Let's imagine we have \$10k to invest any time in the last 16 years, from Feb 22, 2000 to Jan 9, 2016. And we want to choose the time and strategy that would have returned us the most money today. The two strategies I chose are:
Lump Sum, invest \$10k all at once on a date of choice
Dollar Cost Average, invest \$10k in 12 equal portions of \$833.33 every 30 days starting from a date of choice, for a total investment period of 360 days. There are alternatives but this is the one I arbitrarily chose.
I then chose the SPDR S&P 500 (SPY) as the stock we'll be investing in because it follows the Standard & Poor 500 index, and is one of the most common ways to measure/invest in the market.
The Standard & Poor's 500 often abbreviated as the S&P 500 (or just "the S&P"). Chosen for market size, liquidity and industry grouping, among other factors. The S&P 500 is designed to be a leading indicator of U.S. equities and is meant to reflect the risk/return characteristics of the large cap universe.
Here is SPY for the last 16 years, for our actual tests we'll be using historical pricing to account for dividend yield and other things.
This was partially inspired by looking at my portfolio today, 2015 and early 2016 has been pretty rough for the market, relative to the bull run up to this point, and a drop in the bucket vs. the last crash so far.
I have been lump sum investing up till this point. Perhaps this experiment will help illuminate the validity or flaws in the thought process behind that decision.
I list my assumptions at the bottom of this page.
Import financial data using Pandas
First let's import data, Pandas is a great python library for data analysis and has a helper for pulling stock data from different websites like Yahoo Finance or Google Finance.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter
from matplotlib import style
style.use('fivethirtyeight')
spy['Adj Close'].plot(figsize=(20,10))
ax = plt.subplot()
ax.yaxis.set_major_formatter(FuncFormatter(lambda x, pos: '${:,.0f}'.format(x))) # Y axis dollarsymbols
plt.title('SPY Historical Price on Close')
plt.xlabel('')
plt.ylabel('Stock Price ($)');
Explanation: We'll plot all the prices at Adj Close using matplotlib, a python 2D plotting library that is Matlab flavored. We use Adjusted Close because it is commonly used for historical pricing, and accounts for all corporate actions such as stock splits, dividends/distributions and rights offerings. This happens to be our exact use-case.
End of explanation
value_price = spy['Adj Close'][-1] # The final value of our stock
initial_investment = 10000 # Our initial investment of $10k
num_stocks_bought = initial_investment / spy['Adj Close']
lumpsum = num_stocks_bought * value_price
lumpsum.name = 'Lump Sum'
lumpsum.plot(figsize=(20,10))
ax = plt.subplot()
ax.yaxis.set_major_formatter(FuncFormatter(lambda x, pos: '${:,.0f}'.format(x))) # Y axis dollarsymbols
plt.title('Lump sum - Value today of $10,000 invested on date')
plt.xlabel('')
plt.ylabel('Investment Value ($)');
Explanation: Great, looks similar to the SPY chart from before. Notice how due to historical pricing, the effect of including things like dividend yields increases to total return over the years. We can easily see the the bubble and crash around 2007-2009, as well as the long bull market up since then. Also we can see in the last couple months the small dip in September/October, and barely see the drop in the last couple days in the beginning of 2016.
Calculate Lump Sum
Lump Sum means to invest everything available all at once, in this case we have a hypothetical $10,000 to spend at any day in our history of the last 16 years. Then we want to know how much that investment would be worth today.
Another way to look at this is we can make a chart where the X axis is the date we invest the lump sum, and the Y axis is the value of that investment today.
End of explanation
print("Lump sum: Investing on the 1 - Best day, 2 - Worst day in past, 3 - Worst day in all")
print("1 - Investing $10,000 on {} would be worth ${:,.2f} today.".format(lumpsum.idxmax().strftime('%b %d, %Y'), lumpsum.max()))
print("2 - Investing $10,000 on {} would be worth ${:,.2f} today.".format(lumpsum[:-1000].idxmin().strftime('%b %d, %Y'), lumpsum[:-1000].min()))
print("3 - Investing $10,000 on {} would be worth ${:,.2f} today.".format(lumpsum.idxmin().strftime('%b %d, %Y'), lumpsum.min()))
Explanation: Cool! Pandas makes it really easy to manipulate data with datetime indices. Looking at the chart we see that if we'd bought right at the bottom of the 2007-2009 crash our \$10,000 would be worth ~ $32,500. If only we had a time machine...
End of explanation
def doDCA(investment, start_date):
# Get 12 investment dates in 30 day increments starting from start date
investment_dates_all = pd.date_range(start_date,periods=12,freq='30D')
# Remove those dates beyond our known data range
investment_dates = investment_dates_all[investment_dates_all < spy.index[-1]]
# Get closest business dates with available data
closest_investment_dates = spy.index.searchsorted(investment_dates)
# How much to invest on each date
portion = investment/12.0 # (Python 3.0 does implicit double conversion, Python 2.7 does not)
# Get the total of all stocks purchased for each of those dates (on the Close)
stocks_invested = sum(portion / spy['Adj Close'][closest_investment_dates])
# Add uninvested amount back
uninvested_dollars = portion * sum(investment_dates_all >= spy.index[-1])
# value of stocks today
total_value = value_price*stocks_invested + uninvested_dollars
return total_value
# Generate DCA series for every possible date
dca = pd.Series(spy.index.map(lambda x: doDCA(initial_investment, x)), index=spy.index, name='Dollar Cost Averaging (DCA)')
Explanation: What's nice to note as well is that even if we'd invested at the worst possible time, peak of the bubble in 2007, on Oct 9th, we'd still have come out net positive at \$14,593 today. The worst time to invest so far turns out to be more recent, on July 20th of 2015. This is because not only was the market down, but it's so recent we haven't had time for the investment to grow. Something something the best time to plant a tree was yesterday.
Calculating Dollar Cost Averaging (DCA)
Now lets do the same experiment, but instead we'll invest the \$10,000 we have using Dollar Cost Averaging (DCA). For this simple test, I'll assume instead of investing all at once, I'll invest in equal portions every 30 days (roughly a month), over a course of 360 days (roughly a year) total.
So on day 1, I invest $10,000 / 12 ~ $833.33, on day 31, the same $833.33
and so on for 12 total investments. A special case is investing within the last year, when there isn't time to DCA all of it, as a compromise, I invest what portions I can and keep the rest as cash, since that is how reality works.
End of explanation
dca.plot(figsize=(20,10))
ax = plt.subplot()
ax.yaxis.set_major_formatter(FuncFormatter(lambda x, pos: '${:,.0f}'.format(x))) # Y axis dollarsymbols
plt.title('Dollar Cost Averaging - Value today of $10,000 invested on date')
plt.xlabel('')
plt.ylabel('Investment Value ($)');
Explanation: Surprisingly straightforward, good job Pandas. Let's plot it similar to how we did with lump sum. The x axis is the date at which we start dollar cost averaging (and then continue for the next 360 days in 30 day increments from that date). The y axis is the final value of our investment today.
End of explanation
print("Lump sum")
print(" Crash - Investing $10,000 on {} would be worth ${:,.2f} today.".format(lumpsum.idxmax().strftime('%b %d, %Y'), lumpsum.max()))
print("Bubble - Investing $10,000 on {} would be worth ${:,.2f} today.".format(lumpsum[:-1500].idxmin().strftime('%b %d, %Y'), lumpsum[:-1500].min()))
print("Recent - Investing $10,000 on {} would be worth ${:,.2f} today.".format(lumpsum.idxmin().strftime('%b %d, %Y'), lumpsum.min()))
print("\nDollar Cost Averaging")
print(" Crash - Investing $10,000 on {} would be worth ${:,.2f} today.".format(dca.idxmax().strftime('%b %d, %Y'), dca.max()))
print("Bubble - Investing $10,000 on {} would be worth ${:,.2f} today.".format(dca[:-1500].idxmin().strftime('%b %d, %Y'), dca[:-1500].min()))
print("Recent - Investing $10,000 on {} would be worth ${:,.2f} today.".format(dca.idxmin().strftime('%b %d, %Y'), dca.min()))
Explanation: Interesting! DCA looks way really smooth and the graph is really high up, so it must be better right!? Wait, no, the Y axis is different, in fact it's highest high is around \$28,000 in comparison to the lump sums \$32,500. Let's look at the ideal/worst investment dates for DCA, I include the lump sum from before as well.
End of explanation
# Difference between lump sum and DCA
diff = (lumpsum - dca)
diff.name = 'Difference (Lump Sum - DCA)'
fig, (ax1, ax2, ax3) = plt.subplots(3,1, sharex=True, figsize=(20,15))
# SPY Actual
spy['Adj Close'].plot(ax=ax1)
ax1.yaxis.set_major_formatter(FuncFormatter(lambda x, pos: '${:,.0f}'.format(x))) # Y axis in dollars
ax1.set_xlabel('')
ax1.set_title('SPY Historical Stock Price')
ax1.set_ylabel('Stock Value ($)')
# Comparison
dca.plot(ax=ax2)
lumpsum.plot(ax=ax2)
ax2.axhline(initial_investment, alpha=0.5, linestyle="--", color="black")
ax2.text(spy.index[50],initial_investment*1.1, "Initial Investment")
# ax2.axhline(conservative, alpha=0.5, linestyle="--", color="black")
# ax2.text(spy.index[-800],conservative*1.05, "Conservative Investing Strategy")
ax2.yaxis.set_major_formatter(FuncFormatter(lambda x, pos: '${:,.0f}K'.format(x*1e-3))) # Y axis $1,000s
ax2.legend()
ax2.set_title('Comparison Lump Sum vs. Dollar Cost Averaging - Value of $10,000 invested on date')
ax2.set_ylabel('Investment Value ($)')
# Difference
ax3.fill_between(diff.index, y1=diff, y2=0, color='blue', where=diff>0)
ax3.fill_between(diff.index, y1=diff, y2=0, color='red', where=diff<0)
ax3.yaxis.set_major_formatter(FuncFormatter(lambda x, pos: '${:,.0f}K'.format(x*1e-3))) # Y axis $1,000s
ax3.set_ylabel('Difference ($)')
ax3.set_title('Difference (Lump Sum - Dollar Cost Average)')
ax3.legend(["Lump Sum > DCA", "DCA > Lump Sum"]);
Explanation: Looking at dollar cost averaging, the best day to start dollar cost averaging was July 12, 2002, when we were still recovering from the 'tech crashs. The worst day to start was around the peak of the 2007 bubble on Jan 26, 2007, and the absolute worst would have been to start last year on Jan 20, 2015.
We can already see that there's some similarities between lump sum and DCA, DCA appears to have lower highs, but also higher lows. It's difficult to compare just by looking at numbers, we need to compare the two strategies visually side by side.
Comparison of Lump Sum vs Dollar Cost Averaging
So we've just individuallly tested two investing strategies exhaustively on every possible day in the last 16 years.
Let's plot three charts on top of each other. The raw SPY stock price over the years on the top. Then in the middle we plot both lump sum and DCA on top of each other. Finally we'll plot the difference between them as $diff = lump sum - DCA$
End of explanation
print("Lump sum returns more than DCA %.1f%% of all the days" % (100*sum(diff>0)/len(diff)))
print("DCA returns more than Lump sum %.1f%% of all the days" % (100*sum(diff<0)/len(diff)))
Explanation: Before we start comparing, definitely take note of the middle chart, where the initial investment of \$10k is. Notice that if we had invested using either strategy, and at any point before 2 years ago, no matter which bubble or crash, we'd have made some pretty huge returns on our investments, double and tripling at some points. This is the power of compound interest.
Looking at the DCA curve we do see the two similar humps we saw with the lump sum, but is both smoother and lags behind it. This makes perfect sense, as we're taking a type of moving average of the stock price over a year (in 30D increments) when we buy, instead of a single date.
As a result our investment with DCA is less volatile (smoother), and lags behind (averages in previous investments) the lump sum values.
The line for difference shows a positive dollar value for how much more investing in one lump sum would return versus dollar cost averaging in (blue). Similarly a negative value shows how much more dollar cost averaging would return vs a lump sum (red). The chart shows a wide swing around 2002 and 2009 between the two strategies, but elsewhere it's mostly positive (blue), suggesting lump sum tends to return a bit more overall. Let's look at the actual percentage where the values are positive (ie. where lump sum returns more).
End of explanation
print("Mean difference: Average dollar improvement lump sum returns vs. dca: ${:,.2f}".format(sum(diff) / len(diff)))
print("Mean difference when lump sum > dca: ${:,.2f}".format(sum(diff[diff>0]) / sum(diff>0)))
print("Mean difference when dca > lump sum: ${:,.2f}".format(sum(-diff[diff<0]) / sum(diff<0)))
Explanation: Remarkable! So 66.3% of the time lump sum results in a higher final investment value over our monthly dollar cost averaging strategy. Almost dead on to the claims of 66% by the investopedia article I'd read.
But maybe this isn't the whole story, perhaps the lump sum returned a little better than DCA most of the time, but in the really bad times DCA would do much better?
One way to look at this, would be to see the average amount improvement lump sum has when it is better, versus the average amount DCA improves, when it is better.
End of explanation |
867 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Nonlinear elasticity
Step1: In this chapter we investigate a nonlinear model of elastic strain in heterogeneous materials. This system is equivalent to the $p$-system of gas dynamics, although the stress-strain relation we will use here is very different from the pressure-density relation typically used in gas dynamics. The equations we consider are
Step2: Approximate solution of the Riemann problem using $f$-waves
The exact solver above requires a nonlinear iterative rootfinder and is relatively expensive. A very cheap approximate Riemann solver for this system was developed in <cite data-cite="leveque2002"><a href="riemann.html#leveque2002">(LeVeque, 2002)</a></cite> using the $f$-wave approach. One simply approximates both nonlinear waves as shocks, with speeds equal to the characteristic speeds of the left and right states
Step3: Comparison of exact and approximate solutions | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
import matplotlib as mpl
mpl.rcParams['font.size'] = 8
figsize =(8,4)
mpl.rcParams['figure.figsize'] = figsize
import numpy as np
from scipy.optimize import fsolve
import matplotlib.pyplot as plt
from utils import riemann_tools
from ipywidgets import interact
from ipywidgets import widgets
from clawpack import riemann
from exact_solvers import nonlinear_elasticity
Explanation: Nonlinear elasticity
End of explanation
# %load exact_solvers/nonlinear_elasticity.py
def dsigma(eps, K1, K2):
"Derivative of stress w.r.t. strain."
return K1 + 2*K2*eps
def lambda1(q, xi, aux):
eps = q[0]
rho, K1, K2 = aux
return -np.sqrt(dsigma(eps, K1, K2)/rho)
def lambda2(q, xi, aux):
return -lambda1(q,xi,aux)
def make_plot_function(q_l, q_r, aux_l, aux_r):
states, speeds, reval, wave_types = \
nonlinear_elasticity.exact_riemann_solution(q_l,q_r,aux_l,aux_r)
def plot_function(t,which_char):
ax = riemann_tools.plot_riemann(states,speeds,reval,wave_types,
t=t,t_pointer=0,
extra_axes=True,
variable_names=['Strain','Momentum'])
if which_char == 1:
riemann_tools.plot_characteristics(reval,lambda1,(aux_l,aux_r),ax[0])
elif which_char == 2:
riemann_tools.plot_characteristics(reval,lambda2,(aux_l,aux_r),ax[0])
nonlinear_elasticity.phase_plane_plot(q_l, q_r, aux_l, aux_r, ax[3])
plt.show()
return plot_function
def plot_riemann_nonlinear_elasticity(rho_l,rho_r,v_l,v_r):
plot_function = make_plot_function(rho_l,rho_r,v_l,v_r)
interact(plot_function, t=widgets.FloatSlider(value=0.,min=0,max=1.,step=0.1),
which_char=widgets.Dropdown(options=[None,1,2],
description='Show characteristics'));
aux_l = np.array((1., 5., 1.))
aux_r = np.array((1., 2., 1.))
q_l = np.array([2.1, 0.])
q_r = np.array([0.0, 0.])
plot_riemann_nonlinear_elasticity(q_l, q_r, aux_l, aux_r)
Explanation: In this chapter we investigate a nonlinear model of elastic strain in heterogeneous materials. This system is equivalent to the $p$-system of gas dynamics, although the stress-strain relation we will use here is very different from the pressure-density relation typically used in gas dynamics. The equations we consider are:
\begin{align}
\epsilon_t(x,t) - u_x(x,t) & = 0 \
(\rho(x)u(x,t))_t - \sigma(\epsilon(x,t),x)_x & = 0.
\end{align}
Here $\epsilon$ is the strain, $u$ is the velocity, $\rho$ is the material density, $\sigma$ is the stress,
and ${\mathcal M}=\rho u$ is the momentum.
The first equation is a kinematic relation, while the second represents Newton's second law. This is a nonlinear
conservation law with spatially varying flux function, in which
\begin{align}
q & = \begin{bmatrix} \epsilon \ \rho u \end{bmatrix}, & f(q,x) & = \begin{bmatrix} -{\mathcal M}/\rho(x) \ -\sigma(\epsilon,x) \end{bmatrix}.
\end{align}
If the stress-strain relationship is linear -- i.e. if $\sigma(\epsilon,x)=K(x)\epsilon$ -- then this system is equivalent to the acoustics equations that we have
studied previously. Here we consider instead a quadratic stress response:
\begin{align}
\sigma(\epsilon,x) = K_1(x) \epsilon + K_2(x) \epsilon^2.
\end{align}
We assume that the spatially-varying functions $\rho, K_1, K_2$ are piecewise constant, taking values
$(\rho_l, K_{1l}, K_{2l})$ for $x<0$ and values $(\rho_r, K_{1r}, K_{2r})$ for $x>0$. This system has been investigated numerically in <cite data-cite="leveque2002"><a href="riemann.html#leveque2002">(LeVeque, 2002)</a></cite>, <cite data-cite="leveque2003"><a href="riemann.html#leveque2003">(LeVeque & Yong, 2003)</a></cite>, and <cite data-cite="2012_ketchesonleveque_periodic"><a href="riemann.html#2012_ketchesonleveque_periodic">(Ketcheson & LeVeque, 2012)</a></cite>.
Note that if we take $\rho=1$, $\sigma=-p$, and $\epsilon=v$, this system is equivalent to the p-system of Lagrangian gas dynamics
\begin{align}
v_t - u_x & = 0 \
u_t - p(v)_x & = 0,
\end{align}
in which $p$ represents pressure and $v$ represents specific volume.
Hyperbolic structure
The flux jacobian is
\begin{align}
f'(q) = \begin{bmatrix} 0 & -1/\rho(x) \ -\sigma_\epsilon(\epsilon,x) & 0 \end{bmatrix},
\end{align}
with eigenvalues (characteristic speeds)
\begin{align}
\lambda^\pm(x) = \pm \sqrt{\frac{\sigma_\epsilon(\epsilon,x)}{\rho(x)}} = \pm c(\epsilon, x).
\end{align}
Here for the stress-strain relation we have chosen, $\sigma_\epsilon = K_1(x) + 2 K_2(x)\epsilon$.
It's also convenient to define the nonlinear impedance $Z(\epsilon, x) = \rho(x) c(\epsilon,x)$. Then the eigenvectors of the flux Jacobian are
\begin{align}
R & = \begin{bmatrix} 1 & 1 \ Z(\epsilon,x) & -Z(\epsilon,x) \end{bmatrix}.
\end{align}
Both characteristic fields are genuinely nonlinear. Furthermore, since the characteristic speeds each have a definite sign, this system does not admit transonic rarefactions.
Structure of the Riemann solution
Based on the eigenstructure of the flux jacobian above, the Riemann solution will always include a left-going and a right-going wave, each of which may be a shock or rarefaction (since both fields are genuinely nonlinear). We will see -- similarly to our analysis in the chapter on variable-speed-limit traffic that the jump in $\rho$ and $K$ at $x=0$ induces a stationary wave there. See the figure below for the overall structure of the Riemann solution.
Hugoniot loci
From the Rankine-Hugoniot jump conditions for the system we obtain the 1-Hugoniot locus for a state $(\epsilon^_l, u^_l)$ connected by a 1-shock to a state $(\epsilon_l, u_l)$:
\begin{align}
u^_l & = u_l - \left( \frac{\left(\sigma_l(\epsilon^_l)-\sigma_l(\epsilon_l)\right)(\epsilon^*_l-\epsilon_l)}{\rho_l} \right)^{1/2}
\end{align}
Here $\sigma_l(\epsilon)$ is shorthand for the stress relation in the left medium.
Similarly, a state $(\epsilon^_r,u^_r)$ that is connected by a 2-shock to a state $(\epsilon_r, u_r)$ must satisfy
\begin{align}
u^_r & = u_r - \left( \frac{\left(\sigma_r(\epsilon^_r)-\sigma_r(\epsilon_r)\right)(\epsilon^*_r-\epsilon_r)}{\rho_r} \right)^{1/2}.
\end{align}
Integral curves
The integral curves can be found by writing $\tilde{q}'(\xi) = r^{1,2}(\tilde{q}(\xi))$ and integrating. This leads to
\begin{align}
u^l & = u_l + \frac{1}{3 K{2l} \sqrt{\rho_l}} \left( \sigma_{l,\epsilon}(\epsilon^l)^{3/2} - \sigma{l,\epsilon}(\epsilon)^{3/2} \right) \label{NE:integral-curve-1} \
u^r & = u_r - \frac{1}{3 K{2r} \sqrt{\rho_r}} \left( \sigma_{r,\epsilon}(\epsilon^r)^{3/2} - \sigma{r,\epsilon}(\epsilon)^{3/2} \right)\label{NE:integral-curve-2}
\end{align}
Here $\sigma_{l,\epsilon}$ is the derivative of the stress function w.r.t $\epsilon$ in the left medium.
The entropy condition
For a 1-shock, we need that $\lambda^-(\epsilon_l,x<0) > \lambda^-(\epsilon^_l,x<0)$, which is equivalent to the condition $\epsilon^_l>\epsilon_l$. Similarly, a 2-shock is entropy-satisfying if $\epsilon^*_r > \epsilon_r$.
Interface conditions
Because the flux depends explicitly on $x$, we do not necessarily have continuity of $q$ at $x=0$; i.e. in general $q^_l \ne q^_r$. Instead, the flux must be continuous: $f(q^_l)=f(q^_r)$. For the present system, this means that the stress and velocity must be continuous:
\begin{align}
u^l & = u^_r \
\sigma(\epsilon^_l, K{1l}, K_{2l}) & = \sigma(\epsilon^r, K{1r}, K_{2r}).
\end{align}
This makes sense from a physical point of view: if the velocity were not continuous, the material would fracture (or overlap itself). Continuity of the stress is required by Newton's laws.
Structure of rarefaction waves
For this system, the structure of a centered rarefaction wave can be determined very simply. Since the characteristic velocity must be equal to $\xi = x/t$ at each point along the wave, we have $\xi = \pm\sqrt{\sigma_\epsilon/\rho}$, or
\begin{align}
\xi^2 = \frac{K_1 + 2K_2\epsilon}{\rho}
\end{align}
which leads to $\epsilon = (\rho\xi^2 - K_1)/(2K_2)$. Once the value of $\epsilon$ is known, $u$ can be determined using the integral curves \eqref{NE:integral-curve-1} or \eqref{NE:integral-curve-2}.
Solution of the Riemann problem
Below we show the solution of the Riemann problem. To view the code that computes this exact solution, uncomment and execute the next cell.
End of explanation
solver = riemann.nonlinear_elasticity_1D_py.nonlinear_elasticity_1D
problem_data = {'stress_relation' : 'quadratic'}
fw_states, fw_speeds, fw_reval = \
riemann_tools.riemann_solution(solver,q_l,q_r,aux_l,aux_r,
problem_data=problem_data,
verbose=False,
stationary_wave=True,
fwave=True)
plot_function = \
riemann_tools.make_plot_function(fw_states,fw_speeds, fw_reval,
layout='vertical',
variable_names=('Strain','Momentum'))
interact(plot_function, t=widgets.FloatSlider(value=0.4,min=0,max=.9,step=.1));
Explanation: Approximate solution of the Riemann problem using $f$-waves
The exact solver above requires a nonlinear iterative rootfinder and is relatively expensive. A very cheap approximate Riemann solver for this system was developed in <cite data-cite="leveque2002"><a href="riemann.html#leveque2002">(LeVeque, 2002)</a></cite> using the $f$-wave approach. One simply approximates both nonlinear waves as shocks, with speeds equal to the characteristic speeds of the left and right states:
\begin{align}
s^1 & = - \sqrt{\frac{\sigma_{\epsilon,l}(\epsilon_l)}{\rho_l}} \
s^2 & = + \sqrt{\frac{\sigma_{\epsilon,r}(\epsilon_r)}{\rho_r}}.
\end{align}
Meanwhile, the waves are obtained by decomposing the jump in the flux $f(q_r,x>0) - f(q_l,x<0)$ in terms of the eigenvectors of the flux jacobian.
End of explanation
ex_states, ex_speeds, ex_reval, wave_types = \
nonlinear_elasticity.exact_riemann_solution(q_l,q_r,aux_l,aux_r)
varnames = nonlinear_elasticity.conserved_variables
plot_function = riemann_tools.make_plot_function([ex_states,fw_states],
[ex_speeds,fw_speeds],
[ex_reval,fw_reval],
[wave_types,['contact']*3],
['Exact','$f$-wave'],
layout='vertical',
variable_names=varnames)
interact(plot_function, t=widgets.FloatSlider(value=0.4,min=0, max=0.9, step=0.1));
Explanation: Comparison of exact and approximate solutions
End of explanation |
868 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
according to the histogram, the ratios are mainly under 0.2.
Step1: Improved app
Step2: improved pattern
Step3: woren pattern | Python Code:
index = ratio>0.05#get the index of ratio larger than 0.05
appfilter = app.loc[index]#filter the apps which number of current rating over number of overall rating larger than 0.1
#use histogram to show the range of current_rating-overall_rating
plt.hist(appfilter['current_rating']-appfilter['overall_rating'],bins = 20, alpha = .4, label = 'diff')
plt.legend()
plt.show()
diff = appfilter['current_rating']-appfilter['overall_rating']
index2 = diff>=0.1#get the index of the difference larger than 0.1
index2b = diff<= -0.1#get the index of the difference smaller than -0.1
appinprove = appfilter.loc[index2]
appdecrease = appfilter.loc[index2b]
nvd = appinprove['new_version_desc']
nvdd = appdecrease['new_version_desc']
#compile documents
doc_complete = nvd.tolist()
doc_complete2 = nvdd.tolist()
#clean doc
import nltk
from nltk import corpus
from nltk.stem.porter import PorterStemmer
from nltk.corpus import stopwords
import string
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors
stemmer = PorterStemmer().stem
tokenize = nltk.word_tokenize
stop = stopwords.words('english')+list(string.punctuation)+['we','new','fix','io','updat','improv','bug',
'app','featur','perform','ad',"\'s","--","us"
,"minor","support","iphon","issu","add","enhanc",
"user","pleas","10","7","experi","thank",
"version","experi","screen","\'\'","2","6","icon",
"stabil","review","5","``"]
def stem(tokens,stemmer = PorterStemmer().stem):
stemwords = [stemmer(w.lower()) for w in tokens if w not in stop]
return [w for w in stemwords if w not in stop]
def lemmatize(text):
return stem(tokenize(text))
doc_clean = [lemmatize(doc) for doc in doc_complete]
doc_clean2 = [lemmatize(doc) for doc in doc_complete2]
# Importing Gensim
import gensim
from gensim import corpora
# Creating the term dictionary of our courpus, where every unique term is assigned an index.
dictionary = corpora.Dictionary(doc_clean)
dictionary2 = corpora.Dictionary(doc_clean2)
# Converting list of documents (corpus) into Document Term Matrix using dictionary prepared above.
doc_term_matrix = [dictionary.doc2bow(doc) for doc in doc_clean]
doc_term_matrix2 = [dictionary2.doc2bow(doc) for doc in doc_clean2]
# Creating the object for LDA model using gensim library
Lda = gensim.models.ldamodel.LdaModel
# Running and Trainign LDA model on the document term matrix.
ldamodel = Lda(doc_term_matrix, num_topics=3, id2word = dictionary, passes=50)
ldamodel2 = Lda(doc_term_matrix2, num_topics=3, id2word = dictionary2, passes=50)
print(ldamodel.print_topics(num_topics=3, num_words=3))
print(ldamodel2.print_topics(num_topics=3, num_words=3))
Explanation: according to the histogram, the ratios are mainly under 0.2.
End of explanation
index_interfac = []
for i in range(len(doc_clean)):
if 'interfac' in doc_clean[i]:
index_interfac.append(True)
else:
index_interfac.append(False)
nvd[index_interfac][1342]
index_feedback = []
for i in range(len(doc_clean)):
if 'feedback' in doc_clean[i]:
index_feedback.append(True)
else:
index_feedback.append(False)
nvd[index_feedback][193]
index_store = []
for i in range(len(doc_clean)):
if 'store' in doc_clean[i]:
index_store.append(True)
else:
index_store.append(False)
nvd[index_store][1024]
Explanation: Improved app
End of explanation
index_ipad = []
for i in range(len(doc_clean2)):
if 'ipad' in doc_clean2[i]:
index_ipad.append(True)
else:
index_ipad.append(False)
nvdd[index_ipad][1373]
index_music = []
for i in range(len(doc_clean2)):
if 'music' in doc_clean2[i]:
index_music.append(True)
else:
index_music.append(False)
nvdd[index_music][2157]
index_card = []
for i in range(len(doc_clean2)):
if 'card' in doc_clean2[i]:
index_card.append(True)
else:
index_card.append(False)
nvdd[index_card][646]
Explanation: improved pattern:
1.some improvements on interface
2.ask for feedbacks
3.ask for reviews on app store
Worsen app
End of explanation
import pyLDAvis.gensim
pyLDAvis.enable_notebook()
dec_improv = pyLDAvis.gensim.prepare(ldamodel,doc_term_matrix, dictionary)
dec_decrea = pyLDAvis.gensim.prepare(ldamodel2,doc_term_matrix2, dictionary2)
dec_improv
pyLDAvis.save_html(dec_improv,'improved_apps.html')
dec_decrea
pyLDAvis.save_html(dec_decrea,'worsen_apps.html')
Explanation: woren pattern:
1.add more features on ipad version
2.add more features related to music function
3.the apps are designed for cards
End of explanation |
869 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
k-近邻算法
优点: 精度高、对异常值不敏感、无数据输入假定
缺点: 计算复杂度高、空间复杂度高、无法给出数据的内在含义
适用数据范围: 数值型和标称型
k-近邻算法更适用于数据集很大很全的情况?
算法思想及过程
对要分类的样本,在已有样本中寻找最近邻的 K 个样本,以这 K 个样本的分类标签中出现次数最多的标签作为待分类样本的分类标签。
寻找最近邻时,通过距离(欧几里得距离)来计算。
计算距离时注意归一化
Step1: 先了解程序清单 2-1 要用到的几个函数
tile 函数将数组横向及纵向复制得到新的数组
Step2: ** 是幂运算
Step3: 可见幂运算对array来说是element-wise的,而对于matrix来说,就是矩阵的乘法做幂。
同样地,array的乘法是element-wise的,matrix的乘法就是线性代数的矩阵乘法
矩阵除法要用
Python
linalg.solve(A, B)
Step4: .I 求矩阵的逆
.T 求矩阵的转置
Step5: sum 求和
Step6: sum(0) 按列求和
sum(1) 按行求和
min() max() 两个函数同样0列1行
Step7: dict.get(x,0)
在字典中查找指定的键对应的值,若找不到则返回第二个参数的值
Step8: 该函数为
Python
get(key,default=None)
第二个参数默认为None,则若不指定第二个参数,函数返回None
Step9: Python 2.7 中
- dict.iteritems() 返回迭代器
- dict.items() 返回字典的复制
Python 3 中
- dict.items() 返回迭代器
- dict.iteritems() 该函数在 Python 3 中不存在了
我用的 Python 3,所以下面的代码中,我用的是 dict.items()
operator.itemgetter 函数可以获取一个对象指定序号的数据<br>
operator.itemgetter 获取的不是值,而是一个函数,通过该函数作用到对象上才能获取值。<br>
一般该函数用在 sorted 函数中。<br>
需要 import operator 模块
Step10: 以上方法可以对字典按值排序<br>
排序从小到大,reverse=True则是从大到小
Step11: 以上方法按键排序
程序清单 2-1
Step12: 程序清单 2-2 | Python Code:
from numpy import *
import operator
def createDataSet():
group = array([[1.0, 1.1],[1.0, 1.0],[0, 0],[0, 0.1]])
labels = ['A', 'A', 'B', 'B']
return group, labels
Explanation: k-近邻算法
优点: 精度高、对异常值不敏感、无数据输入假定
缺点: 计算复杂度高、空间复杂度高、无法给出数据的内在含义
适用数据范围: 数值型和标称型
k-近邻算法更适用于数据集很大很全的情况?
算法思想及过程
对要分类的样本,在已有样本中寻找最近邻的 K 个样本,以这 K 个样本的分类标签中出现次数最多的标签作为待分类样本的分类标签。
寻找最近邻时,通过距离(欧几里得距离)来计算。
计算距离时注意归一化
End of explanation
from numpy import *
tile(1, 3)
tile(2.5,(2,4))
tile([1,3],(2,3))
a=[1,3]
a=array(a)
tile(a,(2,3))
Explanation: 先了解程序清单 2-1 要用到的几个函数
tile 函数将数组横向及纵向复制得到新的数组
End of explanation
3**2
a=array([[1, 2], [3, 4]])
a**2
b=mat([[1, 2], [3, 4]])
b**2
Explanation: ** 是幂运算
End of explanation
linalg.solve(b**2,b)
Explanation: 可见幂运算对array来说是element-wise的,而对于matrix来说,就是矩阵的乘法做幂。
同样地,array的乘法是element-wise的,matrix的乘法就是线性代数的矩阵乘法
矩阵除法要用
Python
linalg.solve(A, B)
End of explanation
b.I
b.T
Explanation: .I 求矩阵的逆
.T 求矩阵的转置
End of explanation
a=array([[1, 2],[3, 4]])
a.sum()
a.sum(0)
a.sum(1)
Explanation: sum 求和
End of explanation
a.min()
a.min(0)
a.min(1)
Explanation: sum(0) 按列求和
sum(1) 按行求和
min() max() 两个函数同样0列1行
End of explanation
d={'a':1,'b':2,'c':3,'d':4}
d.get('b')
d.get('e')
Explanation: dict.get(x,0)
在字典中查找指定的键对应的值,若找不到则返回第二个参数的值
End of explanation
d.get('e',5)
Explanation: 该函数为
Python
get(key,default=None)
第二个参数默认为None,则若不指定第二个参数,函数返回None
End of explanation
a={'a':3,'b':2,'c':5,'d':1}
import operator
sorted(a.items(),key=operator.itemgetter(1),reverse=False)
Explanation: Python 2.7 中
- dict.iteritems() 返回迭代器
- dict.items() 返回字典的复制
Python 3 中
- dict.items() 返回迭代器
- dict.iteritems() 该函数在 Python 3 中不存在了
我用的 Python 3,所以下面的代码中,我用的是 dict.items()
operator.itemgetter 函数可以获取一个对象指定序号的数据<br>
operator.itemgetter 获取的不是值,而是一个函数,通过该函数作用到对象上才能获取值。<br>
一般该函数用在 sorted 函数中。<br>
需要 import operator 模块
End of explanation
sorted(a.items(),key=operator.itemgetter(0),reverse=False)
Explanation: 以上方法可以对字典按值排序<br>
排序从小到大,reverse=True则是从大到小
End of explanation
def classify0(inX, dataSet, labels, k):
dataSetSize = dataSet.shape[0]
diffMat = tile(inX, (dataSetSize, 1)) - dataSet
sqDiffMat = diffMat ** 2
sqDistances = sqDiffMat.sum(axis=1)
distances = sqDistances ** 0.5
sortedDistIndicies = distances.argsort()
classCount = {}
for i in range(k):
voteIlabel = labels[sortedDistIndicies[i]]
classCount[voteIlabel] = classCount.get(voteIlabel, 0) + 1
sortedClassCount = sorted(classCount.items(), key=operator.itemgetter(1), reverse=True)
return sortedClassCount[0][0]
Explanation: 以上方法按键排序
程序清单 2-1
End of explanation
def file2matrix(filename):
fr = open(filename)
arrayOfLines = fr.readlines()
numberOfLines = len(arrayOfLines)
returnMat = zeros((numberOfLines, 3))
classLabelVector = []
index = 0
for line in arrayOfLines:
line = line.strip()
listFromLine = line.split('\t')
returnMat[index, :] = listFromLine[0:3]
classLabelVector.append(int(listFromLine[-1]))
index += 1
return returnMat, classLabelVector
datingDataMat, datingLabels = file2matrix('Ch02/datingTestSet2.txt')
datingDataMat
datingLabels[0:20]
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(111)
ax.scatter(datingDataMat[:, 0], datingDataMat[:, 2], 10.0*array(datingLabels), 255.0*array(datingLabels))
plt.show()
def autoNorm(dataset):
minVals = dataset.min(0)
maxVals = dataset.max(0)
ranges = maxVals - minVals
m = dataset.shape[0]
normDataset = dataset - tile(minVals, (m, 1))
normDataset = normDataset / (tile(ranges, (m, 1)))
return normDataset, ranges, minVals
normMat, ranges, minVals = autoNorm(datingDataMat)
normMat
ranges
minVals
def datingClassTest():
hoRatio = 0.1
datingDataMat, datingLabels = file2matrix('Ch02/datingTestSet2.txt')
normMat, ranges, minVals = autoNorm(datingDataMat)
m = normMat.shape[0]
numTestVecs = int(m * hoRatio)
errorCount = 0.0
for i in range(numTestVecs):
classifierResult = classify0(normMat[i, :], normMat[numTestVecs:m, :], datingLabels[numTestVecs:m], 3)
print("the classifier came back with %d,the real answer is %d" %(classifierResult, datingLabels[i]))
#print(classifierResult)
if classifierResult != datingLabels[i]:
errorCount+=1.0
print("the total error rate is %f" %(errorCount / float(numTestVecs)))
datingClassTest()
def classifyPerson():
resultList = ['not at all', 'in small doses', 'in large doses']
percentTats = float(input("percentage of time spent playing video games?"))
ffMiles = float(input("frequent flier miles earned consumed per year?"))
iceCream = float(input("liters of ice cream consumed per year?"))
datingDataMat, datingLabels = file2matrix('Ch02/datingTestSet2.txt')
normMat, ranges, minVals = autoNorm(datingDataMat)
inArr = array([ffMiles, percentTats, iceCream])
classifierResult = classify0((inArr - minVals) / ranges, normMat, datingLabels, 3)
print("you will probably like this person " + str(resultList[classifierResult - 1]))
classifyPerson()
def img2vector(filename):
returnVect=zeros((1, 1024))
fr=open(filename)
for i in range(32):
linestr=fr.readline()
for j in range(32):
returnVect[0, 32 * i + j] = int(linestr[j])
return returnVect
testVector = img2vector('Ch02/digits/trainingDigits/0_13.txt')
testVector[0, 0:31]
from os import listdir
def handwritingClassTest():
hwLabels = []
trainingFileList = listdir('Ch02/digits/trainingDigits')
m = len(trainingFileList)
trainingMat = zeros((m, 1024))
for i in range(m):
fileNameStr = trainingFileList[i]
fileStr = fileNameStr.split('.')[0]
classNumStr = int(fileStr.split('_')[0])
hwLabels.append(classNumStr)
trainingMat[i, :] = img2vector('Ch02/digits/trainingDigits/%s' %(fileNameStr))
testFileList = listdir("Ch02/digits/testDigits/")
errorCount = 0.0
mTest = len(testFileList)
for i in range(mTest):
fileNameStr = testFileList[i]
fileStr = fileNameStr.split('.')[0]
classNumStr = int(fileStr.split('_')[0])
vectorUnderTest = img2vector('Ch02/digits/testDigits/%s' %(fileNameStr))
classifierResult = classify0(vectorUnderTest, trainingMat, hwLabels, 3)
#print("the classifier came back with %d, the real answer is %d " %(classifierResult, classNumStr))
if classifierResult != classNumStr:
print("the classifier came back with %d, the real answer is %d " %(classifierResult, classNumStr))
errorCount += 1.0
print("the total number of errors is %d" %(errorCount))
print("the total error rate is %f" %(errorCount / float(mTest)))
handwritingClassTest()
Explanation: 程序清单 2-2
End of explanation |
870 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Inefficient photon detection
Step1: Direct photo-detection
Here we follow an example from Wiseman and Milburn, Quantum measurement and control, section. 4.8.1.
Consider cavity that leaks photons with a rate $\kappa$. The dissipated photons are detected with an inefficient photon detector,
with photon-detection efficiency $\eta$. The master equation describing this scenario, where a separate dissipation channel has been added for detections and missed detections, is
$\dot\rho = -i[H, \rho] + \mathcal{D}[\sqrt{1-\eta} \sqrt{\kappa} a] + \mathcal{D}[\sqrt{\eta} \sqrt{\kappa}a]$
To describe the photon measurement stochastically, we can unravelling only the dissipation term that corresponds to detections, and leaving the missed detections as a deterministic dissipation term, we obtain [Eq. (4.235) in W&M]
$d\rho = \mathcal{H}[-iH -\eta\frac{1}{2}a^\dagger a] \rho dt + \mathcal{D}[\sqrt{1-\eta} a] \rho dt + \mathcal{G}[\sqrt{\eta}a] \rho dN(t)$
or
$d\rho = -i[H, \rho] dt + \mathcal{D}[\sqrt{1-\eta} a] \rho dt -\mathcal{H}[\eta\frac{1}{2}a^\dagger a] \rho dt + \mathcal{G}[\sqrt{\eta}a] \rho dN(t)$
where
$\displaystyle \mathcal{G}[A] \rho = \frac{A\rho A^\dagger}{\mathrm{Tr}[A\rho A^\dagger]} - \rho$
$\displaystyle \mathcal{H}[A] \rho = A\rho + \rho A^\dagger - \mathrm{Tr}[A\rho + \rho A^\dagger] \rho $
and $dN(t)$ is a Poisson distributed increment with $E[dN(t)] = \eta \langle a^\dagger a\rangle (t)$.
Formulation in QuTiP
In QuTiP, the photocurrent stochastic master equation is written in the form
Step2: Highly efficient detection
Step3: Highly inefficient photon detection
Step4: Efficient homodyne detection
The stochastic master equation for inefficient homodyne detection, when unravaling the detection part of the master equation
$\dot\rho = -i[H, \rho] + \mathcal{D}[\sqrt{1-\eta} \sqrt{\kappa} a] + \mathcal{D}[\sqrt{\eta} \sqrt{\kappa}a]$,
is given in W&M as
$d\rho = -i[H, \rho]dt + \mathcal{D}[\sqrt{1-\eta} \sqrt{\kappa} a] \rho dt
+
\mathcal{D}[\sqrt{\eta} \sqrt{\kappa}a] \rho dt
+
\mathcal{H}[\sqrt{\eta} \sqrt{\kappa}a] \rho d\xi$
where $d\xi$ is the Wiener increment. This can be described as a standard homodyne detection with efficiency $\eta$ together with a stochastic dissipation process with collapse operator $\sqrt{(1-\eta)\kappa} a$. Alternatively we can combine the two deterministic terms on standard Lindblad for and obtain the stochastic equation (which is the form given in W&M)
$d\rho = -i[H, \rho]dt + \mathcal{D}[\sqrt{\kappa} a]\rho dt + \sqrt{\eta}\mathcal{H}[\sqrt{\kappa}a] \rho d\xi$
Below we solve these two equivalent equations with QuTiP
Step5: Form 1
Step6: Form 2
Step7: $\displaystyle D_{2}[A]\rho(t) = \sqrt{\eta} \mathcal{H}[\sqrt{\kappa} a]\rho(t) = \sqrt{\eta} \mathcal{H}[A]\rho(t)
= \sqrt{\eta}(A\rho + \rho A^\dagger - \mathrm{Tr}[A\rho + \rho A^\dagger] \rho)
\rightarrow \sqrt{\eta} \left((A_L + A_R^\dagger)\rho_v - \mathrm{Tr}[(A_L + A_R^\dagger)\rho_v] \rho_v\right)$
Step8: Versions | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from qutip import *
from qutip.expect import expect_rho_vec
from matplotlib import rcParams
rcParams['font.family'] = 'STIXGeneral'
rcParams['mathtext.fontset'] = 'stix'
rcParams['font.size'] = '14'
Explanation: Inefficient photon detection: Mixing stochastic and deterministic master equations
Copyright (C) 2011 and later, Paul D. Nation & Robert J. Johansson
End of explanation
N = 15
w0 = 0.5 * 2 * np.pi
times = np.linspace(0, 15, 150)
dt = times[1] - times[0]
gamma = 0.1
a = destroy(N)
H = w0 * a.dag() * a
rho0 = fock(N, 5)
e_ops = [a.dag() * a, a + a.dag()]
Explanation: Direct photo-detection
Here we follow an example from Wiseman and Milburn, Quantum measurement and control, section. 4.8.1.
Consider cavity that leaks photons with a rate $\kappa$. The dissipated photons are detected with an inefficient photon detector,
with photon-detection efficiency $\eta$. The master equation describing this scenario, where a separate dissipation channel has been added for detections and missed detections, is
$\dot\rho = -i[H, \rho] + \mathcal{D}[\sqrt{1-\eta} \sqrt{\kappa} a] + \mathcal{D}[\sqrt{\eta} \sqrt{\kappa}a]$
To describe the photon measurement stochastically, we can unravelling only the dissipation term that corresponds to detections, and leaving the missed detections as a deterministic dissipation term, we obtain [Eq. (4.235) in W&M]
$d\rho = \mathcal{H}[-iH -\eta\frac{1}{2}a^\dagger a] \rho dt + \mathcal{D}[\sqrt{1-\eta} a] \rho dt + \mathcal{G}[\sqrt{\eta}a] \rho dN(t)$
or
$d\rho = -i[H, \rho] dt + \mathcal{D}[\sqrt{1-\eta} a] \rho dt -\mathcal{H}[\eta\frac{1}{2}a^\dagger a] \rho dt + \mathcal{G}[\sqrt{\eta}a] \rho dN(t)$
where
$\displaystyle \mathcal{G}[A] \rho = \frac{A\rho A^\dagger}{\mathrm{Tr}[A\rho A^\dagger]} - \rho$
$\displaystyle \mathcal{H}[A] \rho = A\rho + \rho A^\dagger - \mathrm{Tr}[A\rho + \rho A^\dagger] \rho $
and $dN(t)$ is a Poisson distributed increment with $E[dN(t)] = \eta \langle a^\dagger a\rangle (t)$.
Formulation in QuTiP
In QuTiP, the photocurrent stochastic master equation is written in the form:
$\displaystyle d\rho(t) = -i[H, \rho] dt + \mathcal{D}[B] \rho dt
- \frac{1}{2}\mathcal{H}[A^\dagger A] \rho(t) dt
+ \mathcal{G}[A]\rho(t) d\xi$
where the first two term gives the deterministic master equation (Lindblad form with collapse operator $B$ (c_ops)) and $A$ the stochastic collapse operator (sc_ops).
Here $A = \sqrt{\eta\gamma} a$ and $B = \sqrt{(1-\eta)\gamma} $a.
End of explanation
eta = 0.7
c_ops = [np.sqrt(1-eta) * np.sqrt(gamma) * a] # collapse operator B
sc_ops = [np.sqrt(eta) * np.sqrt(gamma) * a] # stochastic collapse operator A
result_ref = mesolve(H, rho0, times, c_ops+sc_ops, e_ops)
result1 = photocurrent_mesolve(H, rho0, times, c_ops=c_ops, sc_ops=sc_ops, e_ops=e_ops,
ntraj=1, nsubsteps=100, store_measurement=True)
result2 = photocurrent_mesolve(H, rho0, times, c_ops=c_ops, sc_ops=sc_ops, e_ops=e_ops,
ntraj=10, nsubsteps=100, store_measurement=True)
fig, axes = plt.subplots(2,2, figsize=(12,8), sharex=True)
axes[0,0].plot(times, result1.expect[0], label=r'Stochastic ME (ntraj = 1)', lw=2)
axes[0,0].plot(times, result_ref.expect[0], label=r'Lindblad ME', lw=2)
axes[0,0].set_title("Cavity photon number (ntraj = 1)")
axes[0,0].legend()
axes[0,1].plot(times, result2.expect[0], label=r'Stochatic ME (ntraj = 10)', lw=2)
axes[0,1].plot(times, result_ref.expect[0], label=r'Lindblad ME', lw=2)
axes[0,1].set_title("Cavity photon number (ntraj = 10)")
axes[0,1].legend()
axes[1,0].step(times, dt * np.cumsum(result1.measurement[0].real), lw=2)
axes[1,0].set_title("Cummulative photon detections (ntraj = 1)")
axes[1,1].step(times, dt * np.cumsum(np.array(result2.measurement).sum(axis=0).real) / 10, lw=2)
axes[1,1].set_title("Cummulative avg. photon detections (ntraj = 10)")
fig.tight_layout()
Explanation: Highly efficient detection
End of explanation
eta = 0.1
c_ops = [np.sqrt(1-eta) * np.sqrt(gamma) * a] # collapse operator B
sc_ops = [np.sqrt(eta) * np.sqrt(gamma) * a] # stochastic collapse operator A
result_ref = mesolve(H, rho0, times, c_ops+sc_ops, e_ops)
result1 = photocurrent_mesolve(H, rho0, times, c_ops=c_ops, sc_ops=sc_ops, e_ops=e_ops,
ntraj=1, nsubsteps=100, store_measurement=True)
result2 = photocurrent_mesolve(H, rho0, times, c_ops=c_ops, sc_ops=sc_ops, e_ops=e_ops,
ntraj=10, nsubsteps=100, store_measurement=True)
fig, axes = plt.subplots(2,2, figsize=(12,8), sharex=True)
axes[0,0].plot(times, result1.expect[0], label=r'Stochastic ME (ntraj = 1)', lw=2)
axes[0,0].plot(times, result_ref.expect[0], label=r'Lindblad ME', lw=2)
axes[0,0].set_title("Cavity photon number (ntraj = 1)")
axes[0,0].legend()
axes[0,1].plot(times, result2.expect[0], label=r'Stochatic ME (ntraj = 10)', lw=2)
axes[0,1].plot(times, result_ref.expect[0], label=r'Lindblad ME', lw=2)
axes[0,1].set_title("Cavity photon number (ntraj = 10)")
axes[0,1].legend()
axes[1,0].step(times, dt * np.cumsum(result1.measurement[0].real), lw=2)
axes[1,0].set_title("Cummulative photon detections (ntraj = 1)")
axes[1,1].step(times, dt * np.cumsum(np.array(result2.measurement).sum(axis=0).real) / 10, lw=2)
axes[1,1].set_title("Cummulative avg. photon detections (ntraj = 10)")
fig.tight_layout()
Explanation: Highly inefficient photon detection
End of explanation
rho0 = coherent(N, np.sqrt(5))
Explanation: Efficient homodyne detection
The stochastic master equation for inefficient homodyne detection, when unravaling the detection part of the master equation
$\dot\rho = -i[H, \rho] + \mathcal{D}[\sqrt{1-\eta} \sqrt{\kappa} a] + \mathcal{D}[\sqrt{\eta} \sqrt{\kappa}a]$,
is given in W&M as
$d\rho = -i[H, \rho]dt + \mathcal{D}[\sqrt{1-\eta} \sqrt{\kappa} a] \rho dt
+
\mathcal{D}[\sqrt{\eta} \sqrt{\kappa}a] \rho dt
+
\mathcal{H}[\sqrt{\eta} \sqrt{\kappa}a] \rho d\xi$
where $d\xi$ is the Wiener increment. This can be described as a standard homodyne detection with efficiency $\eta$ together with a stochastic dissipation process with collapse operator $\sqrt{(1-\eta)\kappa} a$. Alternatively we can combine the two deterministic terms on standard Lindblad for and obtain the stochastic equation (which is the form given in W&M)
$d\rho = -i[H, \rho]dt + \mathcal{D}[\sqrt{\kappa} a]\rho dt + \sqrt{\eta}\mathcal{H}[\sqrt{\kappa}a] \rho d\xi$
Below we solve these two equivalent equations with QuTiP
End of explanation
eta = 0.95
c_ops = [np.sqrt(1-eta) * np.sqrt(gamma) * a] # collapse operator B
sc_ops = [np.sqrt(eta) * np.sqrt(gamma) * a] # stochastic collapse operator A
result_ref = mesolve(H, rho0, times, c_ops+sc_ops, e_ops)
result = smesolve(H, rho0, times, c_ops, sc_ops, e_ops, ntraj=75, nsubsteps=100, solver="platen",
method='homodyne', store_measurement=True, map_func=parallel_map, noise=111)
plot_expectation_values([result, result_ref]);
fig, ax = plt.subplots(figsize=(8,4))
M = np.sqrt(eta * gamma)
for m in result.measurement:
ax.plot(times, m[:, 0].real / M, 'b', alpha=0.025)
ax.plot(times, result_ref.expect[1], 'k', lw=2);
ax.set_ylim(-25, 25)
ax.set_xlim(0, times.max())
ax.set_xlabel('time', fontsize=12)
ax.plot(times, np.array(result.measurement).mean(axis=0)[:,0].real / M, 'b', lw=2);
Explanation: Form 1: Standard homodyne with deterministic dissipation on Lindblad form
End of explanation
L = liouvillian(H, np.sqrt(gamma) * a)
def d1_rho_func(t, rho_vec):
return L * rho_vec
Explanation: Form 2: Combined homodyne with deterministic dissipation for missed detection events
$\displaystyle D_{1}[A]\rho(t) = \mathcal{D}[\kappa a]\rho(t) = \mathcal{D}[A]\rho(t)$
End of explanation
n_sum = spre(np.sqrt(gamma) * a) + spost(np.sqrt(gamma) * a.dag())
def d2_rho_func(t, rho_vec):
e1 = expect_rho_vec(n_sum.data, rho_vec, False)
return np.vstack([np.sqrt(eta) * (n_sum * rho_vec - e1 * rho_vec)])
result_ref = mesolve(H, rho0, times, c_ops+sc_ops, e_ops)
result = general_stochastic(ket2dm(rho0), times, e_ops=[spre(op) for op in e_ops],
ntraj=75, nsubsteps=100, solver="platen",
d1=d1_rho_func, d2=d2_rho_func, len_d2=1,
m_ops=[spre(a + a.dag())], dW_factors=[1/np.sqrt(gamma * eta)],
store_measurement=True, map_func=parallel_map, noise=111)
plot_expectation_values([result, result_ref])
fig, ax = plt.subplots(figsize=(8,4))
for m in result.measurement:
ax.plot(times, m[:, 0].real, 'b', alpha=0.025)
ax.plot(times, result_ref.expect[1], 'k', lw=2);
ax.set_ylim(-25, 25)
ax.set_xlim(0, times.max())
ax.set_xlabel('time', fontsize=12)
ax.plot(times, np.array(result.measurement).mean(axis=0)[:,0].real, 'b', lw=2);
Explanation: $\displaystyle D_{2}[A]\rho(t) = \sqrt{\eta} \mathcal{H}[\sqrt{\kappa} a]\rho(t) = \sqrt{\eta} \mathcal{H}[A]\rho(t)
= \sqrt{\eta}(A\rho + \rho A^\dagger - \mathrm{Tr}[A\rho + \rho A^\dagger] \rho)
\rightarrow \sqrt{\eta} \left((A_L + A_R^\dagger)\rho_v - \mathrm{Tr}[(A_L + A_R^\dagger)\rho_v] \rho_v\right)$
End of explanation
from qutip.ipynbtools import version_table
version_table()
Explanation: Versions
End of explanation |
871 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sum of amounts in the TN bucket of the test dataset.
Step1: Sum of amounts in the FN bucket of the test dataset. | Python Code:
df.Amount.values[training_size:][(y_test == 0) & (y_test_predict == 0)].sum()
Explanation: Sum of amounts in the TN bucket of the test dataset.
End of explanation
df.Amount.values[training_size:][(y_test == 1) & (y_test_predict == 0)].sum()
100 * 8336.05/7224977.58
Explanation: Sum of amounts in the FN bucket of the test dataset.
End of explanation |
872 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
rotate_error_ellipse
誤差楕円の回転がなぜ、共分散行列の両側から回転行列をかけているのかよくわからない。
なので実際に回してみるコードを書いてみた。
Step1: 共分散行列を回転させる部分を以下に抜き出してみた。
Step2: 試しに左側だけ掛けてみる。
虚数解が出たらしい。傾き22.5度の直線を軸に90度回転しているっぽい。
前後にも飛び出している状況。
Step3: 当たり前だけど右側も掛けたらちゃんと回転した。
Step4: 転置した回転行列を左右入れ替えると逆回転した。
回転行列を転置することと、回転角を逆にするのは同じなので当然。 | Python Code:
%matplotlib inline
import numpy as np
import math
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse
class Error_ellipse:
def __init__(self, sigma_x = 1.0, sigma_y = 1.0, cov_xy = 0.0, mu_x = 0.0, mu_y = 0.0):
self.cov = np.array([[sigma_x, cov_xy], [cov_xy, sigma_y]]) # 分散共分散行列
self.mean = np.array([mu_x, mu_y]).T # 平均値(楕円の中央)
def shift(self, delta, angle):
ca = math.cos(angle)
sa = math.sin(angle)
self.rot = np.array([[ca, sa],[-sa, ca]]) # 回転行列
self.cov = self.rot.dot(self.cov).dot(self.rot.T) # 回転
self.mean = self.mean + delta # 移動
def draw(self):
eigen = np.linalg.eig(self.cov) # eigenvalue
v1 = eigen[0][0] * eigen[1][0] # eigenvector
v2 = eigen[0][1] * eigen[1][1]
v1_direction = math.atan2(v1[1], v1[0])
ellipse = Ellipse(self.mean, width=np.linalg.norm(v1), height=np.linalg.norm(v2), angle=v1_direction / 3.14 * 180)
ellipse.set_alpha(0.2)
fig = plt.figure(0)
sp = fig.add_subplot(111, aspect='equal')
sp.add_artist(ellipse)
sp.set_xlim(-2.0, 2.0)
sp.set_ylim(-2.0, 2.0)
plt.show()
p = Error_ellipse(1.0, 2.0, 0.0)
p.draw()
p.shift([0.0, 0.0], math.pi / 4) # 45度回転
p.draw()
Explanation: rotate_error_ellipse
誤差楕円の回転がなぜ、共分散行列の両側から回転行列をかけているのかよくわからない。
なので実際に回してみるコードを書いてみた。
End of explanation
p.cov = p.rot.dot(p.cov).dot(p.rot.T)
p.draw()
Explanation: 共分散行列を回転させる部分を以下に抜き出してみた。
End of explanation
p.cov = p.rot.dot(p.cov)
p.draw()
Explanation: 試しに左側だけ掛けてみる。
虚数解が出たらしい。傾き22.5度の直線を軸に90度回転しているっぽい。
前後にも飛び出している状況。
End of explanation
p.cov = p.cov.dot(p.rot.T)
p.draw()
Explanation: 当たり前だけど右側も掛けたらちゃんと回転した。
End of explanation
p.cov = p.rot.T.dot(p.cov).dot(p.rot)
p.draw()
Explanation: 転置した回転行列を左右入れ替えると逆回転した。
回転行列を転置することと、回転角を逆にするのは同じなので当然。
End of explanation |
873 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Usable Data Map (UDM2) Cloud Detection
In this guide, you'll learn about Planet's automatic detection of pixels which are cloudy or otherwise obscured, so that you can make more intelligent choices about whether the data meets your needs.
In 2018, Planet undertook a project to improve cloud detection, and this guide will focus on the improved metadata that can be used for filtering and the new ortho_udm2 asset that provides access to detail classification of every pixel. This new information will be available for all PSScene and PSOrthoTile items created after 2018-08-01 and for some items before this date (note that a very small number of items created after this date are without the ortho_udm2 asset). Planet is not currently planning on removing old cloud cover metadata or the old udm asset.
Full specification
The full specification for the ortho_udm2 asset and the related metadata fields can be found in the UDM 2 section of the API documentation.
Finding clear imagery
One of the benefits of accurate and automated cloud detection is that it allows users to filter out images that don't meet a certain quality threshold. Planet's Data API allows users to search based on the value of the imagery metadata.
For example, if you are using the Planet command-line tool, you can search for all four-band PlanetScope scenes that have less than 10% cloud cover in them with the following
Step1: The udm2 asset
In addition to metadata for filtering, the ortho_udm2 asset provides a pixel-by-pixel map that identifies the classification of each pixel.
In the example below, cloudy pixels are highlighted in yellow, shadows in red and light haze in blue.
| Original image | udm2 overlay |
| | Python Code:
from planet import api
import time
import os
import rasterio
from rasterio.plot import show
client = api.ClientV1()
# build a filter for the AOI
filter = api.filters.range_filter("clear_percent", gte=90)
# show the structure of the filter
print(filter)
# we are requesting PlanetScope 4 Band imagery
item_types = ['PSScene']
request = api.filters.build_search_request(filter, item_types)
# this will cause an exception if there are any API related errors
results = client.quick_search(request)
# print out the ID of the most recent 10 images that matched
for item in results.items_iter(10):
print('%s' % item['id'])
Explanation: Usable Data Map (UDM2) Cloud Detection
In this guide, you'll learn about Planet's automatic detection of pixels which are cloudy or otherwise obscured, so that you can make more intelligent choices about whether the data meets your needs.
In 2018, Planet undertook a project to improve cloud detection, and this guide will focus on the improved metadata that can be used for filtering and the new ortho_udm2 asset that provides access to detail classification of every pixel. This new information will be available for all PSScene and PSOrthoTile items created after 2018-08-01 and for some items before this date (note that a very small number of items created after this date are without the ortho_udm2 asset). Planet is not currently planning on removing old cloud cover metadata or the old udm asset.
Full specification
The full specification for the ortho_udm2 asset and the related metadata fields can be found in the UDM 2 section of the API documentation.
Finding clear imagery
One of the benefits of accurate and automated cloud detection is that it allows users to filter out images that don't meet a certain quality threshold. Planet's Data API allows users to search based on the value of the imagery metadata.
For example, if you are using the Planet command-line tool, you can search for all four-band PlanetScope scenes that have less than 10% cloud cover in them with the following:
planet data search --item-type PSScene --range cloud_percent lt 10 --asset-type ortho_analytic_4b,ortho_udm2
Planet's cloud detection algorithm classifies every pixel into one of six different categories, each of which has a corresponding metadata field that reflects the percentage of data that falls into the category.
| Class | Metadata field |
| --- | --- |
| clear | clear_percent |
| snow | snow_ice_percent |
| shadow | shadow_percent |
| light haze | light_haze_percent |
| heavy haze| heavy_haze_percent |
| cloud | cloud_percent |
These can be combined to refine search results even further. An example of searching for imagery that has less than 10% clouds and less than 10% heavy haze:
planet data search --item-type PSScene --range cloud_percent lt 10 --range heavy_haze_percent lt 10
--asset-type ortho_analytic_4b,ortho_udm2
Every pixel will be classified into only one of the categories above; a pixel may be snowy or obscured by a shadow but it can not be both at the same time!
The following example will show how to do a search for imagery that is at least 90% clear using Planet's Python client.
End of explanation
item_type = "PSScene"
item_id = "20190228_172942_0f1a"
# activate assets
assets = client.get_assets_by_id("PSScene", item_id).get()
client.activate(assets["ortho_analytic_4b"])
client.activate(assets["ortho_udm2"])
# wait until activation completes
while True:
assets = client.get_assets_by_id("PSScene", item_id).get()
if "location" in assets["ortho_analytic_4b"] and "location" in assets["ortho_udm2"]:
print('assets activated')
break
time.sleep(10)
# start downloads
data_dir = 'data'
r1 = client.download(assets["ortho_analytic_4b"], callback=api.write_to_file(data_dir))
r2 = client.download(assets["ortho_udm2"], callback=api.write_to_file(data_dir))
# wait until downloads complete
r1.wait()
r2.wait()
img_file = os.path.join(data_dir, r1.get_body().name)
udm_file = os.path.join(data_dir, r2.get_body().name)
print("image: {}".format(img_file))
print("udm2: {}".format(udm_file))
with rasterio.open(udm_file) as src:
shadow_mask = src.read(3).astype(bool)
cloud_mask = src.read(6).astype(bool)
show(shadow_mask, title="shadow", cmap="binary")
show(cloud_mask, title="cloud", cmap="binary")
mask = shadow_mask + cloud_mask
show(mask, title="mask", cmap="binary")
with rasterio.open(img_file) as src:
profile = src.profile
img_data = src.read([3, 2, 1], masked=True) / 10000.0 # apply RGB ordering and scale down
show(img_data, title=item_id)
img_data.mask = mask
img_data = img_data.filled(fill_value=0)
show(img_data, title="masked image")
Explanation: The udm2 asset
In addition to metadata for filtering, the ortho_udm2 asset provides a pixel-by-pixel map that identifies the classification of each pixel.
In the example below, cloudy pixels are highlighted in yellow, shadows in red and light haze in blue.
| Original image | udm2 overlay |
| :--- | :--- |
| | |
| 20190228_172942_0f1a_3B_Visual.tif | 20190228_172942_0f1a_udm2.tif |
The udm2 structure is to use a separate band for each classification type. Band 2, for example, indicates that a pixel is snowy when its value is 1, band 3 indicates shadow and so on.
The following Python will download the data above and then display pixels that fall into a certain classifications.
End of explanation |
874 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Inspired by R P Herrold's challenge.
Some ways of splitting data in Python follow.
Step1: Tuple unpacking is nice and in this case
enables
DRY
(aka Single Source of Truth)
code. Compare
Step2: Unnesting the chopping code makes it easier to read.
The for loop is still pretty. | Python Code:
MONTH_NDAYS = '''
0:31
1:29
2:31
3:30
4:31
5:30
6:31
7:31
8:30
9:31
10:30
11:31
'''.split()
MONTH_NDAYS
for month_n_days in MONTH_NDAYS:
month, n_days = map(int, month_n_days.split(':'))
print(f'{month} has {n_days}')
Explanation: Inspired by R P Herrold's challenge.
Some ways of splitting data in Python follow.
End of explanation
MONTH_NDAYS = [list(map(int, s.split(':'))) for s in '''
0:31
1:29
2:31
3:30
4:31
5:30
6:31
7:31
8:30
9:31
10:30
11:31
'''.split()]
MONTH_NDAYS
for month, n_days in MONTH_NDAYS:
print(f'{month} has {n_days}')
Explanation: Tuple unpacking is nice and in this case
enables
DRY
(aka Single Source of Truth)
code. Compare:
month, n_days = month_n_days.split(':')
with
FUTLHS=`echo "$j" | awk -F: {'print $1'}`
FUTRHS=`echo "$j" | awk -F: {'print $2'}`
One can do all the chopping up front.
The chopping code is less readable,
but the subsequent for loop is just pretty.
End of explanation
MONTH_NDAYS = '''
0:31
1:29
2:31
3:30
4:31
5:30
6:31
7:31
8:30
9:31
10:30
11:31
'''.split()
MONTH_NDAYS
MONTH_NDAYS = [s.split(':') for s in MONTH_NDAYS]
MONTH_NDAYS
MONTH_NDAYS = [list(map(int, x)) for x in MONTH_NDAYS]
MONTH_NDAYS
for month, n_days in MONTH_NDAYS:
print(f'{month} has {n_days}')
Explanation: Unnesting the chopping code makes it easier to read.
The for loop is still pretty.
End of explanation |
875 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
数据抓取
抓取历届政府工作报告
王成军
[email protected]
计算传播网 http
Step1: Inspect
<td width="274" class="bl">· <a href="./d12qgrdzfbg/201603/t20160318_369509.html" target="_blank" title="2016年政府工作报告">2016年政府工作报告</a></td>
<td width="274" class="bl">·&nbsp;<a href="./d12qgrdzfbg/201603/t20160318_369509.html" target="_blank" title="2016年政府工作报告">2016年政府工作报告</a></td>
Step2: Encoding
ASCII
7位字符集
美国标准信息交换代码(American Standard Code for Information Interchange)的缩写, 为美国英语通信所设计。
它由128个字符组成,包括大小写字母、数字0-9、标点符号、非打印字符(换行符、制表符等4个)以及控制字符(退格、响铃等)组成。
iso8859-1 通常叫做Latin-1。
和ascii编码相似。
属于单字节编码,最多能表示的字符范围是0-255,应用于英文系列。比如,字母a的编码为0x61=97。
无法表示中文字符。
单字节编码,和计算机最基础的表示单位一致,所以很多时候,仍旧使用iso8859-1编码来表示。在很多协议上,默认使用该编码。
gb2312/gbk/gb18030
是汉字的国标码,专门用来表示汉字,是双字节编码,而英文字母和iso8859-1一致(兼容iso8859-1编码)。
其中gbk编码能够用来同时表示繁体字和简体字,K 为汉语拼音 Kuo Zhan(扩展)中“扩”字的声母
gb2312只能表示简体字,gbk是兼容gb2312编码的。
gb18030,全称:国家标准 GB 18030-2005《信息技术中文编码字符集》,是中华人民共和国现时最新的内码字集
unicode
最统一的编码,用来表示所有语言的字符。
占用更多的空间,定长双字节(也有四字节的)编码,包括英文字母在内。
不兼容iso8859-1等其它编码。相对于iso8859-1编码来说,uniocode编码只是在前面增加了一个0字节,比如字母a为"00 61"。
定长编码便于计算机处理(注意GB2312/GBK不是定长编码),unicode又可以用来表示所有字符,所以在很多软件内部是使用unicode编码来处理的,比如java。
UTF
unicode不便于传输和存储,产生了utf编码
utf编码兼容iso8859-1编码,同时也可以用来表示所有语言的字符
utf编码是不定长编码,每一个字符的长度从1-6个字节不等。
其中,utf8(8-bit Unicode Transformation Format)是一种针对Unicode的可变长度字符编码,又称万国码。
由Ken Thompson于1992年创建。现在已经标准化为RFC 3629。
decode
<del>urllib2.urlopen(url).read().decode('gb18030') </del>
content.encoding = 'gb18030'
content = content.text
Or
content = content.text.encode(content.encoding).decode('gb18030')
html.parser
BeautifulSoup(content, 'html.parser')
Step3: Inspect 下一页
<a href="t20090818_27775_1.html"><span style="color | Python Code:
import requests
from bs4 import BeautifulSoup
from IPython.display import display_html, HTML
HTML('<iframe src=http://www.hprc.org.cn/wxzl/wxysl/lczf/ width=1000 height=500></iframe>')
# the webpage we would like to crawl
Explanation: 数据抓取
抓取历届政府工作报告
王成军
[email protected]
计算传播网 http://computational-communication.com
End of explanation
# get the link for each year
url = "http://www.hprc.org.cn/wxzl/wxysl/lczf/"
content = requests.get(url)
content.encoding
Explanation: Inspect
<td width="274" class="bl">· <a href="./d12qgrdzfbg/201603/t20160318_369509.html" target="_blank" title="2016年政府工作报告">2016年政府工作报告</a></td>
<td width="274" class="bl">·&nbsp;<a href="./d12qgrdzfbg/201603/t20160318_369509.html" target="_blank" title="2016年政府工作报告">2016年政府工作报告</a></td>
End of explanation
# Specify the encoding
content.encoding = 'utf8' # 'gb18030'
content = content.text
soup = BeautifulSoup(content, 'html.parser')
# links = soup.find_all('td', {'class', 'bl'})
links = soup.select('.bl a')
print(links[0])
len(links)
links[-1]['href']
links[0]['href'].split('./')[1]
url + links[0]['href'].split('./')[1]
hyperlinks = [url + i['href'].split('./')[1] for i in links]
hyperlinks[:5]
hyperlinks[-5:]
hyperlinks[12] # 2007年有分页
from IPython.display import display_html, HTML
HTML('<iframe src=http://www.hprc.org.cn/wxzl/wxysl/lczf/dishiyijie_1/200908/t20090818_3955570.html width=1000 height=500></iframe>')
# 2007年有分页
Explanation: Encoding
ASCII
7位字符集
美国标准信息交换代码(American Standard Code for Information Interchange)的缩写, 为美国英语通信所设计。
它由128个字符组成,包括大小写字母、数字0-9、标点符号、非打印字符(换行符、制表符等4个)以及控制字符(退格、响铃等)组成。
iso8859-1 通常叫做Latin-1。
和ascii编码相似。
属于单字节编码,最多能表示的字符范围是0-255,应用于英文系列。比如,字母a的编码为0x61=97。
无法表示中文字符。
单字节编码,和计算机最基础的表示单位一致,所以很多时候,仍旧使用iso8859-1编码来表示。在很多协议上,默认使用该编码。
gb2312/gbk/gb18030
是汉字的国标码,专门用来表示汉字,是双字节编码,而英文字母和iso8859-1一致(兼容iso8859-1编码)。
其中gbk编码能够用来同时表示繁体字和简体字,K 为汉语拼音 Kuo Zhan(扩展)中“扩”字的声母
gb2312只能表示简体字,gbk是兼容gb2312编码的。
gb18030,全称:国家标准 GB 18030-2005《信息技术中文编码字符集》,是中华人民共和国现时最新的内码字集
unicode
最统一的编码,用来表示所有语言的字符。
占用更多的空间,定长双字节(也有四字节的)编码,包括英文字母在内。
不兼容iso8859-1等其它编码。相对于iso8859-1编码来说,uniocode编码只是在前面增加了一个0字节,比如字母a为"00 61"。
定长编码便于计算机处理(注意GB2312/GBK不是定长编码),unicode又可以用来表示所有字符,所以在很多软件内部是使用unicode编码来处理的,比如java。
UTF
unicode不便于传输和存储,产生了utf编码
utf编码兼容iso8859-1编码,同时也可以用来表示所有语言的字符
utf编码是不定长编码,每一个字符的长度从1-6个字节不等。
其中,utf8(8-bit Unicode Transformation Format)是一种针对Unicode的可变长度字符编码,又称万国码。
由Ken Thompson于1992年创建。现在已经标准化为RFC 3629。
decode
<del>urllib2.urlopen(url).read().decode('gb18030') </del>
content.encoding = 'gb18030'
content = content.text
Or
content = content.text.encode(content.encoding).decode('gb18030')
html.parser
BeautifulSoup(content, 'html.parser')
End of explanation
url_i = 'http://www.hprc.org.cn/wxzl/wxysl/lczf/dishiyijie_1/200908/t20090818_3955570.html'
content = requests.get(url_i)
content.encoding = 'utf8'
content = content.text
#content = content.text.encode(content.encoding).decode('gb18030')
soup = BeautifulSoup(content, 'html.parser')
#scripts = soup.find_all('script')
#scripts[0]
scripts = soup.select('td script')[0]
scripts
scripts.text
# countPage = int(''.join(scripts).split('countPage = ')\
# [1].split('//')[0])
# countPage
countPage = int(scripts.text.split('countPage = ')[1].split('//')[0])
countPage
import sys
def flushPrint(s):
sys.stdout.write('\r')
sys.stdout.write('%s' % s)
sys.stdout.flush()
def crawler(url_i):
content = requests.get(url_i)
content.encoding = 'utf8'
content = content.text
soup = BeautifulSoup(content, 'html.parser')
year = soup.find('span', {'class', 'huang16c'}).text[:4]
year = int(year)
report = ''.join(s.text for s in soup('p'))
# 找到分页信息
scripts = soup.find_all('script')
countPage = int(''.join(scripts[1]).split('countPage = ')[1].split('//')[0])
if countPage == 1:
pass
else:
for i in range(1, countPage):
url_child = url_i.split('.html')[0] +'_'+str(i)+'.html'
content = requests.get(url_child)
content.encoding = 'gb18030'
content = content.text
soup = BeautifulSoup(content, 'html.parser')
report_child = ''.join(s.text for s in soup('p'))
report = report + report_child
return year, report
# 抓取50年政府工作报告内容
reports = {}
for link in hyperlinks:
year, report = crawler(link)
flushPrint(year)
reports[year] = report
with open('../data/gov_reports1954-2019.txt', 'w', encoding = 'utf8') as f:
for r in reports:
line = str(r)+'\t'+reports[r].replace('\n', '\t') +'\n'
f.write(line)
import pandas as pd
df = pd.read_table('../data/gov_reports1954-2019.txt', names = ['year', 'report'])
df[-5:]
Explanation: Inspect 下一页
<a href="t20090818_27775_1.html"><span style="color:#0033FF;font-weight:bold">下一页</span></a>
<a href="t20090818_27775_1.html"><span style="color:#0033FF;font-weight:bold">下一页</span></a>
a
script
td
End of explanation |
876 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
이 노트북은 Hell, Tensorflow! 의 내용을 참고로 하여 노트북으로 재 정돈한 것입니다.
Step1: 텐서플로우의 디폴트 그래프는 직접 접근을 할 수 없고 get_default_graph 메소드를 이용합니다.
Step2: 초기에는 디폴트 그래프에 아무런 연산도 들어 있지 않고 비어 있습니다.
Step3: 실수 1.0 값을 가지는 상수 input_value 를 만듭니다. name 옵션을 사용하여 이름을 지정하면 텐서보드의 그래프에서 구분하기 편리합니다.
Step4: 상수 하나도 텐서플로우 그래프에서는 하나의 노드로 취급되어 get_operations 에서 리턴되는 리스트가 비어있지 않습니다.
Step5: get_operations 로 받은 리스트에는 하나의 엘리먼트가 들어 있고 엘리먼트는 Operation 클래스의 인스턴스입니다.
Step6: ops 의 첫번째 노드(여기서는 상수노드)의 정의를 조회하면 프로토콜 버퍼 형식으로 연산 노드를 표현하고 있음을 알 수 있습니다.
Step7: input_value는 상수 텐서를 위한 일종의 연산 노드이며 값이 들어 있지 않습니다.
Step8: 텐서플로우의 세션을 생성한 후에 input_value 를 실행하면 결과 값이 리턴됩니다.
Step9: 두 수를 곱하는 연산을 만들어 보기 위해 weight 변수를 만듭니다. 상수 노드는 tensorflow.python.framework.ops.Tensor 의 객체인데 변수 노드는 tensorflow.python.ops.variables.Variable 의 객체입니다.
Step10: 이제 연산 노드는 다섯 개로 늘어납니다. 변수 텐서를 생성하면 그래프에 초기값과 할당, 조회에 관련된 노드가 추가로 생성되기 때문입니다.
Step11: weight 변수와 input_value 상수를 곱하여 곱셈 노드로 만들어지는 output_value 텐서를 만듭니다.
Step12: 그래프의 노드를 다시 조회하면 mul 노드가 추가된 것을 확인할 수 있습니다.
Step13: 그래프의 모든 변수를 초기화하기 위해 init 노드를 만들고 run 메소드로 실행합니다.
Step14: 1 * 0.8 의 출력 값은 예상대로 0.8을 리턴합니다.
Step15: SummaryWriter를 사용하여 log_simple_graph 디렉토리에 sess에서 만들어진 세션 그래프 정보를 기록합니다.
Step16: 쉘에서 tensorboard --logdir=log_simple_graph 명령으로 텐서보드를 실행하고 브라우저로 http
Step17: 실제 참 값(y_)을 0이라고 정의하고 예측값과의 차이의 제곱을 에러 함수(loss function or error function)로 정의합니다.
Step18: 학습속도를 0.025로 하고 그래디언트 디센트 최적화 방식을 선택합니다.
Step19: 수동으로 에러함수의 기울기를 계산해 보면 아래와 같이 1.6이 나옵니다. 가중치 값이 0.8 이므로 y = 0.8 * 1.0 = 0.8 이 되고 위에서 가정한 대로 y_ = 0 입니다. 에러 함수의 미분 방정식은 2(y - y_) 이므로 결과 값은 2 * 0.8 = 1.6 이 됩니다.
Step20: 계산된 그래디언트를 가중치에 반영하면 학습속도가 0.025 이므로 0.025 * 1.6 = 0.04 가 되어 w 가 0.04 만큼 감소됩니다.
Step21: 이 과정을 반복하도록 루프를 작성하고 summary_writer를 이용하여 결과 y 값을 기록하면 텐서보드에서 그래프로 값의 변화를 확인할 수 있습니다. | Python Code:
%load_ext watermark
%watermark -vm -p tensorflow,numpy,scikit-learn
import tensorflow as tf
Explanation: 이 노트북은 Hell, Tensorflow! 의 내용을 참고로 하여 노트북으로 재 정돈한 것입니다.
End of explanation
graph = tf.get_default_graph()
Explanation: 텐서플로우의 디폴트 그래프는 직접 접근을 할 수 없고 get_default_graph 메소드를 이용합니다.
End of explanation
graph.get_operations()
Explanation: 초기에는 디폴트 그래프에 아무런 연산도 들어 있지 않고 비어 있습니다.
End of explanation
input_value = tf.constant(1.0, name='input_value')
Explanation: 실수 1.0 값을 가지는 상수 input_value 를 만듭니다. name 옵션을 사용하여 이름을 지정하면 텐서보드의 그래프에서 구분하기 편리합니다.
End of explanation
graph.get_operations()
Explanation: 상수 하나도 텐서플로우 그래프에서는 하나의 노드로 취급되어 get_operations 에서 리턴되는 리스트가 비어있지 않습니다.
End of explanation
ops = graph.get_operations()
len(ops), ops[0].__class__
Explanation: get_operations 로 받은 리스트에는 하나의 엘리먼트가 들어 있고 엘리먼트는 Operation 클래스의 인스턴스입니다.
End of explanation
op = ops[0]
op.node_def
Explanation: ops 의 첫번째 노드(여기서는 상수노드)의 정의를 조회하면 프로토콜 버퍼 형식으로 연산 노드를 표현하고 있음을 알 수 있습니다.
End of explanation
input_value.__class__, input_value
Explanation: input_value는 상수 텐서를 위한 일종의 연산 노드이며 값이 들어 있지 않습니다.
End of explanation
sess = tf.Session()
sess.run(input_value)
Explanation: 텐서플로우의 세션을 생성한 후에 input_value 를 실행하면 결과 값이 리턴됩니다.
End of explanation
weight = tf.Variable(0.8, name='weight')
weight
Explanation: 두 수를 곱하는 연산을 만들어 보기 위해 weight 변수를 만듭니다. 상수 노드는 tensorflow.python.framework.ops.Tensor 의 객체인데 변수 노드는 tensorflow.python.ops.variables.Variable 의 객체입니다.
End of explanation
ops = graph.get_operations()
for op in ops:
print(op.name)
Explanation: 이제 연산 노드는 다섯 개로 늘어납니다. 변수 텐서를 생성하면 그래프에 초기값과 할당, 조회에 관련된 노드가 추가로 생성되기 때문입니다.
End of explanation
output_value = weight * input_value
output_value
Explanation: weight 변수와 input_value 상수를 곱하여 곱셈 노드로 만들어지는 output_value 텐서를 만듭니다.
End of explanation
ops = graph.get_operations()
for op in ops:
print(op.name)
Explanation: 그래프의 노드를 다시 조회하면 mul 노드가 추가된 것을 확인할 수 있습니다.
End of explanation
init = tf.initialize_all_variables()
sess.run(init)
Explanation: 그래프의 모든 변수를 초기화하기 위해 init 노드를 만들고 run 메소드로 실행합니다.
End of explanation
sess.run(output_value)
Explanation: 1 * 0.8 의 출력 값은 예상대로 0.8을 리턴합니다.
End of explanation
summary_writer = tf.train.SummaryWriter('log_simple_graph', sess.graph)
Explanation: SummaryWriter를 사용하여 log_simple_graph 디렉토리에 sess에서 만들어진 세션 그래프 정보를 기록합니다.
End of explanation
x = tf.constant(1.0, name='x')
w = tf.Variable(0.8, name='w')
y = tf.mul(w, x, name='y')
init = tf.initialize_all_variables()
sess.run(init)
Explanation: 쉘에서 tensorboard --logdir=log_simple_graph 명령으로 텐서보드를 실행하고 브라우저로 http://localhost:6006 으로 접속한 후 그래프 탭을 클릭하면 아래와 같은 그림이 보입니다. 이 그래프는 init 연산에서 weight 변수를 초기화하고 mul 연산에서 input_value 와 wegiht 를 사용하고 있다는 것을 잘 보여 주고 있습니다.
같은 값을 가지는 상수와 변수를 x, w 로 다시 만듭니다. 그리고 곱셉 노드를 파이썬의 곱셉 연산자를 사용하지 않고 텐서플로우에서 제공하는 수학 함수인 mul을 사용하여 표현할 수 있습니다.
End of explanation
y_ = tf.constant(0.0)
loss = (y - y_)**2
Explanation: 실제 참 값(y_)을 0이라고 정의하고 예측값과의 차이의 제곱을 에러 함수(loss function or error function)로 정의합니다.
End of explanation
optim = tf.train.GradientDescentOptimizer(learning_rate=0.025)
grads_and_vars = optim.compute_gradients(loss)
grads_and_vars
Explanation: 학습속도를 0.025로 하고 그래디언트 디센트 최적화 방식을 선택합니다.
End of explanation
sess.run(grads_and_vars[1][0])
Explanation: 수동으로 에러함수의 기울기를 계산해 보면 아래와 같이 1.6이 나옵니다. 가중치 값이 0.8 이므로 y = 0.8 * 1.0 = 0.8 이 되고 위에서 가정한 대로 y_ = 0 입니다. 에러 함수의 미분 방정식은 2(y - y_) 이므로 결과 값은 2 * 0.8 = 1.6 이 됩니다.
End of explanation
sess.run(optim.apply_gradients(grads_and_vars))
sess.run(w)
Explanation: 계산된 그래디언트를 가중치에 반영하면 학습속도가 0.025 이므로 0.025 * 1.6 = 0.04 가 되어 w 가 0.04 만큼 감소됩니다.
End of explanation
train_step = tf.train.GradientDescentOptimizer(0.025).minimize(loss)
summary_y = tf.scalar_summary('output', y)
for i in range(100):
summary_str = sess.run(summary_y)
summary_writer.add_summary(summary_str, i)
sess.run(train_step)
Explanation: 이 과정을 반복하도록 루프를 작성하고 summary_writer를 이용하여 결과 y 값을 기록하면 텐서보드에서 그래프로 값의 변화를 확인할 수 있습니다.
End of explanation |
877 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fitting Models Exercise 1
Imports
Step1: Fitting a quadratic curve
For this problem we are going to work with the following model
Step2: First, generate a dataset using this model using these parameters and the following characteristics
Step3: Now fit the model to the dataset to recover estimates for the model's parameters | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt
Explanation: Fitting Models Exercise 1
Imports
End of explanation
a_true = 0.5
b_true = 2.0
c_true = -4.0
Explanation: Fitting a quadratic curve
For this problem we are going to work with the following model:
$$ y_{model}(x) = a x^2 + b x + c $$
The true values of the model parameters are as follows:
End of explanation
x = np.linspace(-5,5,30)
dy = 2.0
np.random.seed(0)
y = a_true*x**2 +b_true*x + c_true + np.random.normal(0.0, dy, size=len(x))
plt.figure(figsize=(10,5))
plt.errorbar(x, y, dy, fmt = '.k', ecolor='lightgray')
plt.ylabel("Y")
plt.xlabel("X")
ax = plt.gca()
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.spines['bottom'].set_color('#a2a7ff')
ax.spines['left'].set_color('#a2a7ff')
plt.title("Quadratic Curve Plot with Noise")
plt.show()
assert True # leave this cell for grading the raw data generation and plot
Explanation: First, generate a dataset using this model using these parameters and the following characteristics:
For your $x$ data use 30 uniformly spaced points between $[-5,5]$.
Add a noise term to the $y$ value at each point that is drawn from a normal distribution with zero mean and standard deviation 2.0. Make sure you add a different random number to each point (see the size argument of np.random.normal).
After you generate the data, make a plot of the raw data (use points).
End of explanation
def model(x,a,b,c):
return a*x**2 + b*x + c
theta_best, theta_cov = opt.curve_fit(model, x, y, sigma=dy)
print('a = {0:.3f} +/- {1:.3f}'.format(theta_best[0],np.sqrt(theta_cov[0,0])))
print('b = {0:.3f} +/- {1:.3f}'.format(theta_best[1],np.sqrt(theta_cov[1,1])))
print('c = {0:.3f} +/- {1:.3f}'.format(theta_best[2],np.sqrt(theta_cov[2,2])))
plt.figure(figsize=(10,5))
plt.errorbar(x, y, dy, fmt = '.k', ecolor='lightgray')
Y = theta_best[0]*x**2 + theta_best[1]*x + theta_best[2]
plt.plot(x, Y)
plt.xlabel("X")
plt.ylabel("Y")
ax = plt.gca()
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.spines['bottom'].set_color('#a2a7ff')
ax.spines['left'].set_color('#a2a7ff')
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
plt.title("Curve Fit for Quadratic Curve Plot with Noise")
plt.show()
assert True # leave this cell for grading the fit; should include a plot and printout of the parameters+errors
Explanation: Now fit the model to the dataset to recover estimates for the model's parameters:
Print out the estimates and uncertainties of each parameter.
Plot the raw data and best fit of the model.
End of explanation |
878 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy Exercise 2
Imports
Step1: Factorial
Write a function that computes the factorial of small numbers using np.arange and np.cumprod.
Step2: Write a function that computes the factorial of small numbers using a Python loop.
Step3: Use the %timeit magic to time both versions of this function for an argument of 50. The syntax for %timeit is | Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
Explanation: Numpy Exercise 2
Imports
End of explanation
def np_fact(n):
# if n == 0:
# return 1
# elif n == 1:
# return 1
# elif n == 10:
# return 3628800
if n==0:
return 1
return np.cumprod(np.arange(1.0,n+1.0,1.0))[-1.0]
print np_fact(10)
assert np_fact(0)==1
assert np_fact(1)==1
assert np_fact(10)==3628800
assert [np_fact(i) for i in range(0,11)]==[1,1,2,6,24,120,720,5040,40320,362880,3628800]
Explanation: Factorial
Write a function that computes the factorial of small numbers using np.arange and np.cumprod.
End of explanation
def loop_fact(n):
q = 1
for i in range(n):
q = q * (i+1)
return q
print np_fact(5)
assert loop_fact(0)==1
assert loop_fact(1)==1
assert loop_fact(10)==3628800
assert [loop_fact(i) for i in range(0,11)]==[1,1,2,6,24,120,720,5040,40320,362880,3628800]
Explanation: Write a function that computes the factorial of small numbers using a Python loop.
End of explanation
# YOUR CODE HERE
# raise NotImplementedError()
%timeit -n1 -r1 np_fact(50)
%timeit -n1 -r1 loop_fact(50)
Explanation: Use the %timeit magic to time both versions of this function for an argument of 50. The syntax for %timeit is:
python
%timeit -n1 -r1 function_to_time()
End of explanation |
879 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NLP Workshop
Author
Step1: Load A Text Dataset
Collection of TED talk transcripts from the ted.com website
Step2: Storage and File types for text
Common text file types for text include .json .csv .txt.
It is also common to store text with other data in relational databases or indexes.
Step3: The pandas data structure DataFrame is like a spreadsheet.
It's easy to select columns, records, or data points. There are a multitude of handy features in this library for manipulating data.
Step4: The Basics
We're going to use TextBlob to manipulate our text data. It wraps a number of handy text manipulation tools like NLTK and pattern, making those easy to use under one library.
Method and Python Package
We'll walk though all these methods by posing questions of the data. I'll number them for easy reference.
Back to Background
Linguistics, or the scientific study of language, plays a large role in how we build methods to understand text.
Computational Linguistics is "field concerned with the statistical or rule-based modeling of natural language from a computational perspective." - wp
1. Intuition for Text Analysis
Step5: From the TextBlob object, we can get things like
Step6: So we might say this sentence is about...
Step7: These are noun phrases, a useful grammatical lens for extracting topics from text.
Noun Phrases
"A noun phrase or nominal phrase (abbreviated NP) is a phrase which has a noun (or indefinite pronoun) as its head word, or which performs the same grammatical function as such a phrase." - Wikipedia
Noun Phrases are slightly more inclusive than just nouns, they encompass the whole "idea" of the noun, with any modifiers it may have.
If we do this across the whole transcript, we see roughly what it's about without reading it word-for-word
Step8: If we pick a few random topics from the talk, maybe we can generalize about what it's about
Step9: The computer can't read and summarize on our behalf yet - but so far it's interesting!
Alternatively, we can look at noun phrases that occur more than twice -
Step10: What are all the texts about?
It's interesting to look at one text, but looking at what's going on across these TED talks may be more interesting. We care about similarities and differences among aggregates.
Step11: Note
Step12: Great! But how do we see these themes across speakers' talks?
PAUSE - We'll come back to this later.
Sidebar
Step13: Psst -- I'll let you in on a secret
Step14: Note the overlap here - the window moves over one token to slice the next ngram.
What's a sentence made up of?
The Building Blocks of Language
Language is made of up of many components, but one way to think about language is by the types of words used. Words are categorized into functional "Parts of Speech", which describe the role they play. The most common parts of speech in English are
Step15: Note that "I'm" resolves to a personal pronoun 'PRP' and a verb 'VBP'. It's a contraction!
<img src="assets/pos_codes.png">
See the notation legend for part-of-speech tags here
Practice Example
So let's pick another sentence and check our understanding of labeling
Step16: Write this sentence out. What are the part of speech tags? Use the Part-of-Speech Code table above.
And the answer is
Step17: And for fun, What are the noun phrases?
hint
Step18: Bad things are bad
But don't worry - it's not just the computer that has trouble understanding what roles words play in a sentence. Some sentences are ambiguous or difficult for humans, too
Step19: So this text is mildly positive. Does that tell us anything useful? Not really. But what about the change of tone over the course of the talk?
Step20: Ok, so we see some significant range here. Let's make it easier to see with a visualization.
Step21: Interesting trend - does the talk seem to become more positive over time?
Anecdotally, we know that TED talks seek to motivate and inspire, which could be one explanation for this sentiment polarity pattern.
Play around with this data yourself!
2. Representations of Text for Computation
Text is messy and requires pre-processing. Once we've pre-processed and normalized the text, how do we use computation to understand it?
We may want to do a number of things computationally, but I'll focus generally on finding differences and similarities. In the case study, I'll focus on a supervised classification example, where these goals are key.
So let's start where we always do - counting words.
Step22: What are the most common words?
Step23: Hm. So the most common words will not tell us much about what's going on in this text.
In general, the more frequent a word is in the english language or in a text, the less important it will likely be to us. This concept is well known in text mining, called "term frequency–inverse document frequency". We can represent text using a tf-idf statistic to weigh how important a term is in a particular document. This statistic gives contextual weight to different terms, more accurately representing the importance of different terms of ngrams.
Compare two transcripts
Let's look at two transcripts -
Step24: So what we start to see is that if we removed common words from this set, we'd see a few common themes between the two talks, but again, much of the vocabulary is common.
Vectorizers
Let's go back to the dataframe of most frequently used noun phrases across transcripts
Step25: A vectorizer is a method that, understandably, creates vectors from data, and ultimately a matrix. Each vector contains the incidence (in this case) of a token across all the documents (in this case, transcripts).
Sparsity
In text analysis, any matrix representing a set of documents against a vocabulary will be sparse. This is because not every word in the vocabulary occurs in every document - quite the contrary. Most of each vector is empty.
<img src="assets/sparse_matrix.png">
Step26: Which themes were most common across speakers? | Python Code:
%pwd
# make sure we're running our script from the right place;
# imports like "filename" are relative to where we're running ipython
Explanation: NLP Workshop
Author: Clare Corthell, Luminant Data
Conference: Talking Machines, Manila
Date: 18 February 2016
Description: Much of human knowledge is “locked up” in a type of data called text. Humans are great at reading, but are computers? This workshop leads you through open source data science libraries in python that turn text into valuable data, then tours an open source system built for the Wordnik dictionary to source definitions of words from across the internet.
Goal: Learn the basics of text manipulation and analytics with open sources tools in python.
Requirements
There are many great libraries and programmatic resources out there in languages other than python. For the purposes of a contained intro, I'll focus soley on Python today.
Python 2.7 Environment
Anaconda, which includes 400 popular python packages
TextBlob (python library)
Give yourself a good chunk of time to troubleshoot installation if you're doing this for the first time. These resources are available for most platforms, including OSX and Windows.
Learning Resources
NLTK Online Book
The Open Source Data Science Masters
TextBlob Documentation
Setup
Go to the root of this repository
cd < root directory >
run the notebook
ipython notebook
End of explanation
# file of half of history of ted talks from (http://ted.com)
# i've already preprocessed this in an easy-to-consume .csv file
filename = 'data/tedtalks.csv'
Explanation: Load A Text Dataset
Collection of TED talk transcripts from the ted.com website:
<img href="http://ted.com" src="assets/tedtranscript.png">
End of explanation
# pandas, a handy and powerful data manipulation library
import pandas as pd
# this file has a header that includes column names
df = pd.DataFrame.from_csv(filename, encoding='utf8')
Explanation: Storage and File types for text
Common text file types for text include .json .csv .txt.
It is also common to store text with other data in relational databases or indexes.
End of explanation
# look at a slice (sub-selection of records) of the first four records
df[:4]
df.shape
# look at a slice of one column
df['headline'][:4]
# select one data point
df['headline'][2]
Explanation: The pandas data structure DataFrame is like a spreadsheet.
It's easy to select columns, records, or data points. There are a multitude of handy features in this library for manipulating data.
End of explanation
from textblob import TextBlob
# create a textblob object with one transcript
t = TextBlob(df['transcript'][18])
print "Reading the transcript for '%s'" % df['headline'][18]
Explanation: The Basics
We're going to use TextBlob to manipulate our text data. It wraps a number of handy text manipulation tools like NLTK and pattern, making those easy to use under one library.
Method and Python Package
We'll walk though all these methods by posing questions of the data. I'll number them for easy reference.
Back to Background
Linguistics, or the scientific study of language, plays a large role in how we build methods to understand text.
Computational Linguistics is "field concerned with the statistical or rule-based modeling of natural language from a computational perspective." - wp
1. Intuition for Text Analysis
End of explanation
t.sentences[21]
Explanation: From the TextBlob object, we can get things like:
- Frequency Analysis
- Noun Phrases
- Part-of-Speech Tags
- Tokenization
- Parsing
- Sentiment Polarity
- Word inflection
- Spelling correction
Using the questions we pose, we'll motivate using these methods and explain them throughout this workshop.
Q1. What is this text about?
There are many ways we could think about answering this question, but the first might be to look at the topics that the text describes.
Let's look at a sentence in the middle:
End of explanation
t.sentences[21].noun_phrases
Explanation: So we might say this sentence is about...
End of explanation
t.noun_phrases
Explanation: These are noun phrases, a useful grammatical lens for extracting topics from text.
Noun Phrases
"A noun phrase or nominal phrase (abbreviated NP) is a phrase which has a noun (or indefinite pronoun) as its head word, or which performs the same grammatical function as such a phrase." - Wikipedia
Noun Phrases are slightly more inclusive than just nouns, they encompass the whole "idea" of the noun, with any modifiers it may have.
If we do this across the whole transcript, we see roughly what it's about without reading it word-for-word:
End of explanation
import random
rand_nps = random.sample(list(t.noun_phrases), 5)
print "This text might be about: \n%s" % ', and '.join(rand_nps)
Explanation: If we pick a few random topics from the talk, maybe we can generalize about what it's about:
End of explanation
np_cnt = t.np_counts
[(n, np_cnt[n]) for n in np_cnt if np_cnt[n] > 2] # pythonic list comprehension
Explanation: The computer can't read and summarize on our behalf yet - but so far it's interesting!
Alternatively, we can look at noun phrases that occur more than twice -
End of explanation
# get texblobs and noun phrase counts for everything -- this takes a while
blobs = [TextBlob(b).np_counts for b in df['transcript']]
blobs[2:3]
Explanation: What are all the texts about?
It's interesting to look at one text, but looking at what's going on across these TED talks may be more interesting. We care about similarities and differences among aggregates.
End of explanation
# as we did before, pull the higher incident themes
np_themes = [[n for n in b if b[n] > 2] for b in blobs] # (list comprehension inception)
# pair the speaker with their top themes
speaker_themes = zip(df['speaker'], np_themes)
speaker_themes_df = pd.DataFrame(speaker_themes, columns=['speaker','themes'])
speaker_themes_df[:10]
Explanation: Note: Dirty Data
Text is hard to work with because it is invariably dirty data. Misspelled words, poorly-formed sentences, corrupted files, wrong encodings, cut off text, long-winded writing styles, and a multitude of other problems plague this data type. Because of that, you'll find yourself writing many special cases, cleaning data, and modifying existing solutions to fit your dataset. That's normal. And it's part of the reason that these approaches don't work for every dataset out of the box.
End of explanation
print "There are %s sentences" % len(t.sentences)
print "And %s words" % len(t.words)
Explanation: Great! But how do we see these themes across speakers' talks?
PAUSE - We'll come back to this later.
Sidebar: Unicode
If you see text like this:
u'\u266b'
don't worry. It's unicode. Text is usually in unicode in Python, and this is a representation of a special character. If you use the python print function, you'll see that it encodes this character:
♫
Let's take another seemingly simple question:
Q2. What's in the text?
End of explanation
# to get ngrams from our TextBlob (slice of the first five ngrams)
t.ngrams(n=3)[:5]
Explanation: Psst -- I'll let you in on a secret: most of NLP concerns counting things.
(that isn't always easy, though, as you'll notice)
Parsing
In text analytics, we use two important terms to refer to types of words and phrases:
token - a single "word", or set of continguous characters
Example tokens: ("won't", "giraffes", "1998")
n-gram - a contiguous sequence of n tokens
Example 3-grams or tri-grams: ("You won't dance", "giraffes really smell", "that's so 1998")
End of explanation
sent = t.sentences[0] # the first sentence in the transcript
sent_tags = sent.tags # get the part of speech tags - you'll notice it takes a second
print "The full sentence:\n", sent
print "\nThe sentence with tags by word: \n", sent_tags
print "\nThe tags of the sentence in order: \n", " ".join([b for a, b in sent_tags])
Explanation: Note the overlap here - the window moves over one token to slice the next ngram.
What's a sentence made up of?
The Building Blocks of Language
Language is made of up of many components, but one way to think about language is by the types of words used. Words are categorized into functional "Parts of Speech", which describe the role they play. The most common parts of speech in English are:
noun
verb
adjective
adverb
pronoun
preposition
conjunction
interjection
article or determiner
In an example sentence:
<img src="assets/partsofspeech.png">
We can think about Parts of Speech as an abstraction of the words' behavior within a sentence, rather than the things they describe in the world.
What's the big deal about Part-of-Speech Tagging?
POS tagging is hard because language and grammatical structure can and often are ambiguous. Humans can easily identify the subject and object of a sentence to identify a compound noun or a sub-clause, but a computer cannot. The computer needs to learn from examples to tell the difference.
Learning about the most likely tag that a given word should have is called incorporating "prior knowledge" and is the main task of training supervised machine learning models. Models use prior knowledge to infer the correct tag for new sentences.
For example,
He said he banks on the river.
He said the banks of the river were high.
Here, the context for the use of the word "banks" is important to determine whether it's a noun or a verb. Before the 1990s, a more popular technique for part of speech tagging was the rules-based approach, which involved writing a lengthy litany of rules that described these contextual differences in detail. Only in some cases did this work very well, but never in generalized cases. Statistical inference approaches, or describing these difference by learning from data, is now a more popular and more widely used approch.
This becomes important in identifying sub-clauses, which then allow disambiguation for other tasks, such as sentiment analysis. For example:
He said she was sad.
We could be led to believe that the subject of the sentence is "He", and the sentiment is negative ("sad"), but connecting "He" to "sad" would be incorrect. Parsing the clause "she was sad" allows us to achieve greater accuracy in sentiment analysis, especially if we are concerned with attributing sentiment to actors in text.
So let's take an example sentence:
End of explanation
t.sentences[35]
Explanation: Note that "I'm" resolves to a personal pronoun 'PRP' and a verb 'VBP'. It's a contraction!
<img src="assets/pos_codes.png">
See the notation legend for part-of-speech tags here
Practice Example
So let's pick another sentence and check our understanding of labeling:
End of explanation
answer_sent = t.sentences[35].tags
print ' '.join(['/'.join(i) for i in answer_sent])
Explanation: Write this sentence out. What are the part of speech tags? Use the Part-of-Speech Code table above.
And the answer is:
End of explanation
t.sentences[35].noun_phrases
Explanation: And for fun, What are the noun phrases?
hint: noun phrases are defined slightly differently by different people, so this is sometimes subjective
Answer:
End of explanation
print t.sentiment.polarity
Explanation: Bad things are bad
But don't worry - it's not just the computer that has trouble understanding what roles words play in a sentence. Some sentences are ambiguous or difficult for humans, too:
Mr/NNP Calhoun/NNP never/RB got/VBD around/RP to/TO joining/VBG
All/DT we/PRP gotta/VBN do/VB is/VBZ go/VB around/IN the/DT block/NN
Suite/NNP Mondant/NNP costs/VBZ around/RB 250/CD
This is a great example of why you will always have error - sometimes even a human can't tell the difference, and in those cases, the computer will fail, too.
Part-of-Speech Tagging - Runtime
Part-of-Speech taggers have achieved quite high quality results in recent years, but still suffer from underwhelming performance. This is because they usually are not heuristic-based, or derived from empirical rules, but rather use machine-learned models that require significant pre-processing and feature generation to predict tags. In practice, Data Scientists might develop their own heuristics or domain-trained models for POS tagging. This is a great example of a case where domain-specificity of the model will improve the accuracy, even if based on heuristics alone (see further reading).
Something important to note: NLTK, a popular tool that is underneath much of TextBlob, is a massive framework that was built originally as an academic and teaching tool. It was not built with production performance in mind.
Unfortunately, as Matthew Honnibal notes,
Up-to-date knowledge about natural language processing is mostly locked away in academia.
Further Reading:
A Good Part-of-Speech Tagger in about 200 Lines of Python
Exploiting Wiktionary for Lightweight Part-of-Speech Tagging for Machine Learning Tasks
Q2: What tone does this transcript take?
Sentiment refers to the emotive value of language.
Polarity is the generalized sentiment measured from negative to positive. A sentiment polarity score can be calculated in a range of [-1.0,1.0].
So the general polarity of the text is:
End of explanation
tone_change = list()
for sentence in t.sentences:
tone_change.append(sentence.sentiment.polarity)
print tone_change
Explanation: So this text is mildly positive. Does that tell us anything useful? Not really. But what about the change of tone over the course of the talk?
End of explanation
# this will show the graph here in the notebook
import matplotlib.pyplot as plt
%matplotlib inline
# dataframes have a handy plot method
pd.DataFrame(tone_change).plot(title='Polarity of Transcript by Sentence')
Explanation: Ok, so we see some significant range here. Let's make it easier to see with a visualization.
End of explanation
dict(t.word_counts)
# put this in a dataframe for easy viewing and sorting
word_count_df = pd.DataFrame.from_dict(t.word_counts, orient='index')
word_count_df.columns = ['count']
Explanation: Interesting trend - does the talk seem to become more positive over time?
Anecdotally, we know that TED talks seek to motivate and inspire, which could be one explanation for this sentiment polarity pattern.
Play around with this data yourself!
2. Representations of Text for Computation
Text is messy and requires pre-processing. Once we've pre-processed and normalized the text, how do we use computation to understand it?
We may want to do a number of things computationally, but I'll focus generally on finding differences and similarities. In the case study, I'll focus on a supervised classification example, where these goals are key.
So let's start where we always do - counting words.
End of explanation
word_count_df.sort('count', ascending=False)[:10]
Explanation: What are the most common words?
End of explanation
one = df.ix[16]
two = df.ix[20]
one['headline'], two['headline']
len(one['transcript']), len(two['transcript'])
one_blob = TextBlob(one['transcript'])
two_blob = TextBlob(two['transcript'])
one_set = set(one_blob.tokenize())
two_set = set(two_blob.tokenize())
# How many words did the two talks use commonly?
len(one_set.intersection(two_set))
# How many different words did they use total?
total_diff = len(one_set.difference(two_set)) + len(two_set.difference(one_set))
total_diff
proportion = len(one_set.intersection(two_set)) / float(total_diff)
print "Proportion of vocabulary that is common:", round(proportion, 4)*100, "%"
print one_set.intersection(two_set)
Explanation: Hm. So the most common words will not tell us much about what's going on in this text.
In general, the more frequent a word is in the english language or in a text, the less important it will likely be to us. This concept is well known in text mining, called "term frequency–inverse document frequency". We can represent text using a tf-idf statistic to weigh how important a term is in a particular document. This statistic gives contextual weight to different terms, more accurately representing the importance of different terms of ngrams.
Compare two transcripts
Let's look at two transcripts -
End of explanation
themes_list = speaker_themes_df['themes'].tolist()
speaker_themes_df[7:10]
# sci-kit learn is a machine learning library for python
from sklearn.feature_extraction.text import CountVectorizer
import numpy as np
vocab = set([d for d in themes_list for d in d])
# we'll just look at ngrams > 2 to see some richer topics
cv = CountVectorizer(stop_words=None, vocabulary=vocab, ngram_range=(2, 4))
# going to turn these back into documents
document_list = [','.join(t) for t in themes_list]
data = cv.fit_transform(document_list).toarray()
Explanation: So what we start to see is that if we removed common words from this set, we'd see a few common themes between the two talks, but again, much of the vocabulary is common.
Vectorizers
Let's go back to the dataframe of most frequently used noun phrases across transcripts
End of explanation
cv.get_feature_names() # names of features
dist = np.sum(data, axis=0)
counts_list = list()
for tag, count in zip(vocab, dist):
counts_list.append((count, tag))
Explanation: A vectorizer is a method that, understandably, creates vectors from data, and ultimately a matrix. Each vector contains the incidence (in this case) of a token across all the documents (in this case, transcripts).
Sparsity
In text analysis, any matrix representing a set of documents against a vocabulary will be sparse. This is because not every word in the vocabulary occurs in every document - quite the contrary. Most of each vector is empty.
<img src="assets/sparse_matrix.png">
End of explanation
count_df = pd.DataFrame(counts_list, columns=['count','feature'])
count_df.sort('count', ascending=False)[:20]
Explanation: Which themes were most common across speakers?
End of explanation |
880 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
这个分析笔记由Jake Vanderplas编辑汇总。 源代码和license文件在GitHub。 中文翻译由派兰数据在派兰大数据分析平台上完成。 源代码在GitHub上。
深度探索监督学习:支持向量机
之前我们已经介绍了监督学习。监督学习中有很多算法,在这里我们深入探索其中一种最强大的也最有趣的算法之一:支持向量机(Support Vector Machines,SVMs).
Step1: 支持向量机
支持向量机(SVMs)是监督学习中用来分类或者回归的最强大的算法之一。支持向量机是一种判别分类器:它可以在数据的集合中画出一条分割线。
我们可以来看一个简单的支持向量机的做分类的例子。首先我们需要创建一个数据集:
Step2: 一个判别分类器尝试着去在两组数据间画一条分割线。我们首先需要面临一个问题:这条线的位置很难去定。比如,我们可以找出很多可能的线去将两个数据群体完美的划分:
Step3: 上面的图中有三个各异的分割线,它们都可以将数据集合完美地分隔开来。一个新的数据的分类结果会根据你的选择,得出完全不一样的结果。
我们如何去改进这一点呢?
支持向量机:最大化边界
支持向量机有一种方法去解决这个问题。支持向量机做的事情不仅仅是画一条线,它还考虑了这条分割线两侧的“区域“的选择。关于这个“区域”是个什么,下面是一个例子:
Step4: 注意到,如果我们需要使得直线旁边的区域的宽度达到最大,中间的那条线是最合适的选择。这也就是支持向量机的特点和属性,它会优化分隔的直线,使得直线的边界与数据集的垂直距离最大。
生成一个支持向量机
现在我们需要根据这些点来生成我们的支持向量机模型。固然生成模型的数学细节很有趣,但是我们想让您在其他地方去了解这些东西。在这里,我们会让您掌握使用scikit-learn的黑盒算法去完成上面的工作。
Step6: 为了更好的知道发生了什么,我们创造一个简单方便的函数,去画出SVM算法生成的数据集边界:
Step7: 注意到图中的虚线碰到了一些点:这些点在这次模型的生成中非常重要,它们也就是所谓的支持向量。
在scikit-learn中,这些支持向量被存储在分类器的suppport_vectors_属性中:
Step8: 让我们使用IPython的interact功能去探索这些点的分布是如何影响支持向量和判别模型生成的。
(这个功能只适用于IPython 2.0+,而且在静态视图下无效)
Step9: 注意到,只有那些支持向量才会影响SVM:如果你移动任意的非支持向量,只要它们不越过边界,对分类的结果就不会有影响。
进一步探索:核方法
当SVM与核(kernels)联系起来的时候,它会变得非常有趣。为了进一步的解释“核”是什么,我们去看一些无法被线性分隔的数据。
Step10: 很显然,线性的分隔是不能把这些数据隔开的。我们可以通过应用核方法去改变,核方法是一些可以转换输入数据的方法。
比如,我们可以使用一个简单的径向基函数
Step11: 如果我们连同数据一起去绘图,我们可以看见它的效果:
Step12: 我们可以看到,这个增加的维度让我们的数据变得线性可分起来!这是一个相对简单的核方法;SVM有很多更成熟更复杂的集成的方法可供使用。这个方法可以通过使用kernel='rbf'来实现,其中rbf是radial basis function的缩写: | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
# 使用seaborn的一些默认配置
import seaborn as sns; sns.set()
Explanation: 这个分析笔记由Jake Vanderplas编辑汇总。 源代码和license文件在GitHub。 中文翻译由派兰数据在派兰大数据分析平台上完成。 源代码在GitHub上。
深度探索监督学习:支持向量机
之前我们已经介绍了监督学习。监督学习中有很多算法,在这里我们深入探索其中一种最强大的也最有趣的算法之一:支持向量机(Support Vector Machines,SVMs).
End of explanation
from sklearn.datasets.samples_generator import make_blobs
X, y = make_blobs(n_samples=50, centers=2,
random_state=0, cluster_std=0.60)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring');
Explanation: 支持向量机
支持向量机(SVMs)是监督学习中用来分类或者回归的最强大的算法之一。支持向量机是一种判别分类器:它可以在数据的集合中画出一条分割线。
我们可以来看一个简单的支持向量机的做分类的例子。首先我们需要创建一个数据集:
End of explanation
xfit = np.linspace(-1, 3.5)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
for m, b in [(1, 0.65), (0.5, 1.6), (-0.2, 2.9)]:
#绘制分割线
plt.plot(xfit, m * xfit + b, '-k')
plt.xlim(-1, 3.5);
Explanation: 一个判别分类器尝试着去在两组数据间画一条分割线。我们首先需要面临一个问题:这条线的位置很难去定。比如,我们可以找出很多可能的线去将两个数据群体完美的划分:
End of explanation
xfit = np.linspace(-1, 3.5)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
for m, b, d in [(1, 0.65, 0.33), (0.5, 1.6, 0.55), (-0.2, 2.9, 0.2)]:
yfit = m * xfit + b
#绘制分割线
plt.plot(xfit, yfit, '-k')
#绘制分割线两侧的区域
plt.fill_between(xfit, yfit - d, yfit + d, edgecolor='none', color='#AAAAAA', alpha=0.4)
plt.xlim(-1, 3.5);
Explanation: 上面的图中有三个各异的分割线,它们都可以将数据集合完美地分隔开来。一个新的数据的分类结果会根据你的选择,得出完全不一样的结果。
我们如何去改进这一点呢?
支持向量机:最大化边界
支持向量机有一种方法去解决这个问题。支持向量机做的事情不仅仅是画一条线,它还考虑了这条分割线两侧的“区域“的选择。关于这个“区域”是个什么,下面是一个例子:
End of explanation
from sklearn.svm import SVC # "Support Vector Classifier"
clf = SVC(kernel='linear')
clf.fit(X, y)
Explanation: 注意到,如果我们需要使得直线旁边的区域的宽度达到最大,中间的那条线是最合适的选择。这也就是支持向量机的特点和属性,它会优化分隔的直线,使得直线的边界与数据集的垂直距离最大。
生成一个支持向量机
现在我们需要根据这些点来生成我们的支持向量机模型。固然生成模型的数学细节很有趣,但是我们想让您在其他地方去了解这些东西。在这里,我们会让您掌握使用scikit-learn的黑盒算法去完成上面的工作。
End of explanation
def plot_svc_decision_function(clf, ax=None):
绘制一个 2D SVC 的决策函数
if ax is None:
ax = plt.gca()
x = np.linspace(plt.xlim()[0], plt.xlim()[1], 30)
y = np.linspace(plt.ylim()[0], plt.ylim()[1], 30)
Y, X = np.meshgrid(y, x)
P = np.zeros_like(X)
for i, xi in enumerate(x):
for j, yj in enumerate(y):
P[i, j] = clf.decision_function([[xi, yj]])
# 绘制边界
ax.contour(X, Y, P, colors='k',
levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
plot_svc_decision_function(clf);
Explanation: 为了更好的知道发生了什么,我们创造一个简单方便的函数,去画出SVM算法生成的数据集边界:
End of explanation
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
plot_svc_decision_function(clf)
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=200, alpha=0.3);
Explanation: 注意到图中的虚线碰到了一些点:这些点在这次模型的生成中非常重要,它们也就是所谓的支持向量。
在scikit-learn中,这些支持向量被存储在分类器的suppport_vectors_属性中:
End of explanation
from ipywidgets import interact
def plot_svm(N=10):
X, y = make_blobs(n_samples=200, centers=2,
random_state=0, cluster_std=0.60)
X = X[:N]
y = y[:N]
clf = SVC(kernel='linear')
clf.fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
plt.xlim(-1, 4)
plt.ylim(-1, 6)
plot_svc_decision_function(clf, plt.gca())
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=200, alpha=0.3)
interact(plot_svm, N=[10, 200], kernel='linear');
Explanation: 让我们使用IPython的interact功能去探索这些点的分布是如何影响支持向量和判别模型生成的。
(这个功能只适用于IPython 2.0+,而且在静态视图下无效)
End of explanation
from sklearn.datasets.samples_generator import make_circles
X, y = make_circles(100, factor=.1, noise=.1)
clf = SVC(kernel='linear').fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
plot_svc_decision_function(clf);
Explanation: 注意到,只有那些支持向量才会影响SVM:如果你移动任意的非支持向量,只要它们不越过边界,对分类的结果就不会有影响。
进一步探索:核方法
当SVM与核(kernels)联系起来的时候,它会变得非常有趣。为了进一步的解释“核”是什么,我们去看一些无法被线性分隔的数据。
End of explanation
r = np.exp(-(X[:, 0] ** 2 + X[:, 1] ** 2))
Explanation: 很显然,线性的分隔是不能把这些数据隔开的。我们可以通过应用核方法去改变,核方法是一些可以转换输入数据的方法。
比如,我们可以使用一个简单的径向基函数
End of explanation
from mpl_toolkits import mplot3d
def plot_3D(elev=30, azim=30):
ax = plt.subplot(projection='3d')
ax.scatter3D(X[:, 0], X[:, 1], r, c=y, s=50, cmap='spring')
# ax.view_init(elev=elev, azim=azim)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('r')
# interact(plot_3D, elev=[-90, 90], azip=(-180, 180));
plot_3D()
Explanation: 如果我们连同数据一起去绘图,我们可以看见它的效果:
End of explanation
clf = SVC(kernel='rbf')
clf.fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
plot_svc_decision_function(clf)
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=200, alpha=0.3);
Explanation: 我们可以看到,这个增加的维度让我们的数据变得线性可分起来!这是一个相对简单的核方法;SVM有很多更成熟更复杂的集成的方法可供使用。这个方法可以通过使用kernel='rbf'来实现,其中rbf是radial basis function的缩写:
End of explanation |
881 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Downloading public data
Something you may want to do in the future is compare your results to papers that came before you. Today we'll go through how to find these data and how to analyze them
Reading list
What the FPKM
Step1: We'll read in the data using pandas and look at the first 5 rows of the dataframe with the dataframe-specific function .head(). Whenever I read a new table or modify a dataframe, I ALWAYS look at it to make sure it was correctly imported and read in, and I want you to get into the same habit.
Step2: That's kind of annoying ... we don't see all the samples.
So we have 21 columns but looks like here pandas by default is showing a maximum of 20 so let's change the setting so we can see ALL of the samples instead of just skipping single cell 11 (S11). Let's change to 50 for good measure.
Step3: Now we can see all the samples!
Let's take a look at the full size of the matrix with .shape
Step4: Wow, ~28k rows! That must be the genes, while there are 18 single cell samples and 3 pooled samples as the columns. We'll do some filtering in the next few steps.
5. Reading in the metadata
Step5: Let's transpose this matrix so the samples are the rows, and the features are the columns. We'll do that with .T
Step6: Now we'll do some mild data cleaning. Notice that the columns have the exclamation point at the beginning, so let's get rid of that. In computer science, you keep letters between quotes, and you call those "strings." Let's talk about the string function .strip(). This removes any characters that are on the outer edges of the string. For example, let's take the string "Whoooo!!!!!!!"
Step7: Now let's remove the exclamation points
Step8: Exercise 1
Step9:
Step10: We can access the column names with dataframe.columns, like below
Step11: We can map the stripping function to every item of the columns. In Python, the square brackets ([ and ]) show that we're making a list. What we're doing below is called a "list comprehension."
Step12: In pandas, we can do the same thing by map-ping a lambda, which is a small, anonymous function that does one thing. It's called "anonymous" because it doesn't have a name. map runs the function on every element of the columns.
Step13: The above lambda is the same as if we had written a named function called remove_exclamation, as below.
Step14: Now we can assign the new column names to our matrix
Step15: Okay, now we're ready to do some analysis!
We've looked at the top of the dataframe by using head(). By default, this shows the first 5 rows.
Step16: To specify a certain number of rows, put a number between the parentheses.
Step17: Exercise 2
Step18:
Step19: Let's get a sense of this data by plotting the distributions using boxplot from seaborn. To save the output, we'll need to get access to the current figure, and save this to a variable using plt.gcf(). And then we'll save this figure with fig.savefig("filename.pdf"). You can use other extensions (e.g. ".png", ".tiff" and it'll automatically save as that forma)
Step20: Notice the 140,000 maximum ... Oh right we have expression data and the scales are enormous... Let's add 1 to all values and take the log2 of the data. We add one because log(0) is undefined and then all our logged values start from zero too. This "$\log_2(TPM + 1)$" is a very common transformation of expression data so it's easier to analyze.
Step21: Exercise 3
Step22: What's nice about booleans is that False is 0 and True is 1, so we can sum to get the number of "Trues." This is a simple, clever way that we can filter on a count for the data. We could use this boolean dataframe to filter our original dataframe, but then we lose information. For all values that are greater than 2, it puts in a "not a number" - "NaN."
Step23: Exercise 4
Step24:
Step25: The crude filtering above is okay, but we're smarter than that. We want to use the filtering in the paper
Step26: pandas is column-oriented and by default, it will give you a sum for each column. But we want a sum for each row. How do we do that?
We can sum the boolean matrix we created with "expression_logged < 10" along axis=1 (along the samples) to get for each gene, how many samples have expression less than 10. In pandas, this column is called a "Series" because it has only one dimension - its length. Internally, pandas stores dataframes as a bunch of columns - specifically these Seriesssssss.
This turns out to be not that many.
Step27: Now we can apply ANOTHER filter and find genes that are "present" (expression greater than 10) in at least 5 samples. We'll save this as the variable genes_of_interest. Notice that this doesn't the genes_of_interest but rather the list at the bottom. This is because what you see under a code cell is the output of the last thing you called. The "hash mark"/"number sign" "#" is called a comment character and makes the rest of the line after it not read by the Python language.
Exercise 5
Step28: Getting only rows that you want (aka subsetting)
Now we have some genes that we want to use - how do you pick just those? This can also be called "subsetting" and in pandas has the technical name indexing
In pandas, to get the rows (genes) you want using their name (gene symbol) or boolean matrix, you use .loc[rows_you_want]. Check it out below.
Step29: Wow, our matrix is very small - 197 genes! We probably don't want to filter THAT much... I'd say a range of 5,000-15,000 genes after filtering is a good ballpark. Not too big so it's impossible to work with but not too small that you can't do any statistics.
We'll get closer to the expression data created by the paper. Remember that they filtered on genes that had expression greater than 1 in at least 3 single cells. We'll filter for expression greater than 1 in at least 3 samples for now - we'll get to the single stuff in a bit. For now, we'll filter on all samples.
Exercise 6
Step30:
Step31: Just for fun, let's see how our the distributions in our expression matrix have changed. If you want to save the figure, you can
Step32: Discussion
How did the gene expression distributions change? Why?
Were the single and pooled samples' distributions affected differently? Why or why not?
Getting only the columns you want
In the next exercise, we'll get just the single cells
For the next step, we're going to pull out just the pooled - which are conveniently labeled as "P#". We'll do this using a list comprehension, which means we'll create a new list based on the items in shalek2013_expression.columns and whether or not they start with the letter 'P'.
In Python, things in square brackets ([]) are lists unless indicated otherwise. We are using a list comprehension here instead of a map, because we only want a subset of the columns, rather than all of them.
Step33: We'll access the columns we want using this bracket notation (note that this only works for columns, not rows)
Step34: We could do the same thing using .loc but we would need to put a colon "
Step35: Exercise 7
Step36:
Step37: Using two different dataframes for filtering
Exercise 8
Step38:
Step39: Let's make a boxplot again to see how the data has changed.
Step40: This is much nicer because now we don't have so many zeros and each sample has a reasonable dynamic range.
Why did this filtering even matter?
You may be wondering, we did all this work to remove some zeros..... so the FPKM what? Let's take a look at how this affects the relationships between samples using sns.jointplot from seaborn, which will plot a correlation scatterplot. This also calculates the Pearson correlation, a linear correlation metric.
Let's first do this on the unlogged data.
Step41: Pretty funky looking huh? That's why we logged it
Step42: Hmm our pearson correlation increased from 0.62 to 0.64. Why could that be?
Let's look at this same plot using the filtered data. | Python Code:
# Alphabetical order is standard
# We're doing "import superlongname as abbrev" for our laziness - this way we don't have to type out the whole thing each time.
# Python plotting library
import matplotlib.pyplot as plt
# Numerical python library (pronounced "num-pie")
import numpy as np
# Dataframes in Python
import pandas as pd
# Statistical plotting library we'll use
import seaborn as sns
# This is necessary to show the plotted figures inside the notebook -- "inline" with the notebook cells
%matplotlib inline
Explanation: Downloading public data
Something you may want to do in the future is compare your results to papers that came before you. Today we'll go through how to find these data and how to analyze them
Reading list
What the FPKM: A review of RNA-Seq expression units - Explain difference between TPM/FPKM/RPKM units
Pearson correlation - linear correlation unit
Single-cell transcriptomics reveals bimodality in expression and splicing in immune cells (Shalek and Satija, et al. Nature (2013))
1. Find the database and accession codes
At the end of most recent papers, they'll put a section called "Accession Codes" or "Accession Numbers" which will list a uniquely identifying number and letter combination.
In the US, the Gene Expression Omnibus (GEO) is a website funded by the NIH to store the expression data associated with papers. Many journals require you to submit your data to GEO to be able to publish.
Example data accession section from a Cell paper
Example data accession section from a Nature Biotech paper
Let's do this for the Shalek2013 paper.
Note: For some "older" papers (pre 2014), the accession code may not be on the PDF version of the paper but on the online version only. What I usually do then is search for the title of the paper and go to the journal website.
For your homework, you'll need to find another dataset to use and the expression matrix that you want may not be on a database, but rather posted in supplementary data on the journal's website.
What database was the data deposited to?
What is its' accession number?
2. Go to the data in the database
If you search for the database and the accession number, the first result will usually be the database with the paper info and the deposited data! Below is an example search for "Array Express E-MTAB-2805."
Search for its database and accession number and you should get to a page that looks like this:
3. Find the gene expression matrix
Lately, for many papers, they do give a processed expression matrix in the accession database that you can use directly. Luckily for us, that's exactly what the authors of the Shalek 2013 dataset did. If you notice at the bottom of the page, there's a table of Supplementary files and one of them is called "GSE41265_allGenesTPM.txt.gz". The link below is the "(ftp)" link copied down with the command "wget" which I think of as short for "web-get" so you can download files from the internet with the command line.
In addition to the gene expression file, we'll also look at the metadata in the "Series Matrix" file.
Download the "Series Matrix" to your laptop and
Download the GSE41265_allGenesTPM.txt.gz" file.
All the "Series" file formats contain the same information in different formats. I find the matrix one is the easiest to understand.
Open the "Series Matrix" in Excel (or equivalent) on your laptop, and look at the format and what's described. What line does the actual matrix of metadata start? You can find it where it says in the first column ,"!!Sample_title." It's after an empty line.
Get the data easy here:
Follow this link to jump directly to the GEO page for this data. Scroll down to the bottom in supplemental material. And download the link for the table called GSE41265_allGenesTPM.txt.gz.
We also need the link to the metadata. It is here. Download the file called GSE41265_series_matrix.txt.gz.
Where did those files go on your computer? Maybe you moved it somewhere. Figure out what the full path of those files are and we will read that in directly below.
4. Reading in the data file
To read the gene expression matrix, we'll use "pandas" a Python package for "Panel Data Analysis" (as in panels of data), which is a fantastic library for working with dataframes, and is Python's answer to R's dataframes. We'll take this opportunity to import ALL of the python libaries that we'll use today.
We'll be using several additional libraries in Python:
matplotlib - This is the base plotting library in Python.
numpy - (pronounced "num-pie") which is basis for most scientific packages. It's basically a nice-looking Python interface to C code. It's very fast.
pandas - This is the "DataFrames in Python." (like R's nice dataframes) They're a super convenient form that's based on numpy so they're fast. And you can do convenient things like calculate mea n and variance very easily.
scipy - (pronounced "sigh-pie") "Scientific Python" - Contains statistical methods and calculations
seaborn - Statistical plotting library. To be completely honest, R's plotting and graphics capabilities are much better than Python's. However, Python is a really nice langauge to learn and use, it's very memory efficient, can be parallized well, and has a very robust machine learning library, scikit-learn, which has a very nice and consistent interface. So this is Python's answer to ggplot2 (very popular R library for plotting) to try and make plotting in Python nicer looking and to make statistical plots easier to do.
End of explanation
# Read the data table
# You may need to change the path to the file (what's in quotes below) relative
# to where you downloaded the file and where this notebook is
shalek2013_expression = pd.read_table('/home/ecwheele/cshl2017/GSE41265_allGenesTPM.txt.gz',
# Sets the first (Python starts counting from 0 not 1) column as the row names
index_col=0,
# Tells pandas to decompress the gzipped file
compression='gzip')
print(shalek2013_expression.shape)
shalek2013_expression.head()
Explanation: We'll read in the data using pandas and look at the first 5 rows of the dataframe with the dataframe-specific function .head(). Whenever I read a new table or modify a dataframe, I ALWAYS look at it to make sure it was correctly imported and read in, and I want you to get into the same habit.
End of explanation
pd.options.display.max_columns = 50
pd.options.display.max_rows = 50
shalek2013_expression.head()
Explanation: That's kind of annoying ... we don't see all the samples.
So we have 21 columns but looks like here pandas by default is showing a maximum of 20 so let's change the setting so we can see ALL of the samples instead of just skipping single cell 11 (S11). Let's change to 50 for good measure.
End of explanation
shalek2013_expression.shape
Explanation: Now we can see all the samples!
Let's take a look at the full size of the matrix with .shape:
End of explanation
shalek2013_metadata = pd.read_table('/home/ecwheele/cshl2017/GSE41265_series_matrix.txt.gz',
compression = 'gzip',
skiprows=33,
index_col=0)
print(shalek2013_metadata.shape)
shalek2013_metadata
Explanation: Wow, ~28k rows! That must be the genes, while there are 18 single cell samples and 3 pooled samples as the columns. We'll do some filtering in the next few steps.
5. Reading in the metadata
End of explanation
shalek2013_metadata = shalek2013_metadata.T
shalek2013_metadata
Explanation: Let's transpose this matrix so the samples are the rows, and the features are the columns. We'll do that with .T
End of explanation
"Whoooo!!!!!!!"
Explanation: Now we'll do some mild data cleaning. Notice that the columns have the exclamation point at the beginning, so let's get rid of that. In computer science, you keep letters between quotes, and you call those "strings." Let's talk about the string function .strip(). This removes any characters that are on the outer edges of the string. For example, let's take the string "Whoooo!!!!!!!"
End of explanation
'Whoooo!!!!!!!'.strip('!')
Explanation: Now let's remove the exclamation points:
End of explanation
# YOUR CODE HERE
Explanation: Exercise 1: Stripping strings
What happens if you try to remove the 'o's?
End of explanation
'Whoooo!!!!!!!'.strip('o')
'Whoooo!!!!!!!'.replace("o","")
Explanation:
End of explanation
shalek2013_metadata.columns
Explanation: We can access the column names with dataframe.columns, like below:
End of explanation
[x.strip('!') for x in shalek2013_metadata.columns]
Explanation: We can map the stripping function to every item of the columns. In Python, the square brackets ([ and ]) show that we're making a list. What we're doing below is called a "list comprehension."
End of explanation
shalek2013_metadata.columns.map(lambda x: x.strip('!'))
Explanation: In pandas, we can do the same thing by map-ping a lambda, which is a small, anonymous function that does one thing. It's called "anonymous" because it doesn't have a name. map runs the function on every element of the columns.
End of explanation
def remove_exclamation(x):
return x.strip('!')
shalek2013_metadata.columns.map(remove_exclamation)
Explanation: The above lambda is the same as if we had written a named function called remove_exclamation, as below.
End of explanation
shalek2013_metadata.columns = shalek2013_metadata.columns.map(lambda x: x.strip('!'))
shalek2013_metadata.head()
Explanation: Now we can assign the new column names to our matrix:
End of explanation
shalek2013_expression.head()
Explanation: Okay, now we're ready to do some analysis!
We've looked at the top of the dataframe by using head(). By default, this shows the first 5 rows.
End of explanation
shalek2013_expression.head(8)
Explanation: To specify a certain number of rows, put a number between the parentheses.
End of explanation
# YOUR CODE HERE
Explanation: Exercise 2: using .head()
Show the first 17 rows of shalek2013_expression
End of explanation
shalek2013_expression.head(17)
Explanation:
End of explanation
sns.boxplot(shalek2013_expression)
# gcf = Get current figure
fig = plt.gcf()
fig.savefig('shalek2013_expression_boxplot.pdf')
Explanation: Let's get a sense of this data by plotting the distributions using boxplot from seaborn. To save the output, we'll need to get access to the current figure, and save this to a variable using plt.gcf(). And then we'll save this figure with fig.savefig("filename.pdf"). You can use other extensions (e.g. ".png", ".tiff" and it'll automatically save as that forma)
End of explanation
expression_logged = np.log2(shalek2013_expression+1)
expression_logged.head()
sns.boxplot(expression_logged)
# gcf = Get current figure
fig = plt.gcf()
fig.savefig('expression_logged_boxplot.pdf')
Explanation: Notice the 140,000 maximum ... Oh right we have expression data and the scales are enormous... Let's add 1 to all values and take the log2 of the data. We add one because log(0) is undefined and then all our logged values start from zero too. This "$\log_2(TPM + 1)$" is a very common transformation of expression data so it's easier to analyze.
End of explanation
at_most_2 = expression_logged < 2
at_most_2
Explanation: Exercise 3: Interpreting distributions
Now that these are on moreso on the same scale ...
Q: What do you notice about the pooled samples (P1, P2, P3) that is different from the single cells?
YOUR ANSWER HERE
Filtering expression data
Seems like a lot of genes are near zero, which means we need to filter our genes.
We can ask which genes have log2 expression values are less than 2 (weird example I know - stay with me). This creates a dataframe of boolean values of True/False.
End of explanation
expression_at_most_2 = expression_logged[expression_logged < 2]
print(expression_at_most_2.shape)
expression_at_most_2.head()
Explanation: What's nice about booleans is that False is 0 and True is 1, so we can sum to get the number of "Trues." This is a simple, clever way that we can filter on a count for the data. We could use this boolean dataframe to filter our original dataframe, but then we lose information. For all values that are greater than 2, it puts in a "not a number" - "NaN."
End of explanation
# YOUR CODE HERE
Explanation: Exercise 4: Crude filtering on expression data
Create a dataframe called "expression_greater_than_5" which contains only values that are greater than 5 from expression_logged.
End of explanation
expression_logged.head()
expression_greater_than_5 = expression_logged[expression_logged > 5]
expression_greater_than_5.head()
Explanation:
End of explanation
(expression_logged > 10).sum()
Explanation: The crude filtering above is okay, but we're smarter than that. We want to use the filtering in the paper:
... discarded genes that were not appreciably expressed (transcripts per million (TPM) > 1) in at least three individual cells, retaining 6,313 genes for further analysis.
We want to do THAT, but first we need a couple more concepts. The first one is summing booleans.
A smarter way to filter
Remember that booleans are really 0s (False) and 1s (True)? This turns out to be VERY convenient and we can use this concept in clever ways.
We can use .sum() on a boolean matrix to get the number of genes with expression greater than 10 for each sample:
End of explanation
(expression_logged > 10).sum(axis=1)
Explanation: pandas is column-oriented and by default, it will give you a sum for each column. But we want a sum for each row. How do we do that?
We can sum the boolean matrix we created with "expression_logged < 10" along axis=1 (along the samples) to get for each gene, how many samples have expression less than 10. In pandas, this column is called a "Series" because it has only one dimension - its length. Internally, pandas stores dataframes as a bunch of columns - specifically these Seriesssssss.
This turns out to be not that many.
End of explanation
genes_of_interest = (expression_logged > 10).sum(axis=1) >= 5
#genes_of_interest
[1, 2, 3]
Explanation: Now we can apply ANOTHER filter and find genes that are "present" (expression greater than 10) in at least 5 samples. We'll save this as the variable genes_of_interest. Notice that this doesn't the genes_of_interest but rather the list at the bottom. This is because what you see under a code cell is the output of the last thing you called. The "hash mark"/"number sign" "#" is called a comment character and makes the rest of the line after it not read by the Python language.
Exercise 5: Commenting and uncommenting
To see genes_of_interest, "uncomment" the line by removing the hash sign, and commenting out the list [1, 2, 3].
End of explanation
expression_filtered = expression_logged.loc[genes_of_interest]
print(expression_filtered.shape) # shows (nrows, ncols) - like in manhattan you do the Street then the Avenue
expression_filtered.head()
Explanation: Getting only rows that you want (aka subsetting)
Now we have some genes that we want to use - how do you pick just those? This can also be called "subsetting" and in pandas has the technical name indexing
In pandas, to get the rows (genes) you want using their name (gene symbol) or boolean matrix, you use .loc[rows_you_want]. Check it out below.
End of explanation
# YOUR CODE HERE
print(expression_filtered_by_all_samples.shape)
expression_filtered_by_all_samples.head()
Explanation: Wow, our matrix is very small - 197 genes! We probably don't want to filter THAT much... I'd say a range of 5,000-15,000 genes after filtering is a good ballpark. Not too big so it's impossible to work with but not too small that you can't do any statistics.
We'll get closer to the expression data created by the paper. Remember that they filtered on genes that had expression greater than 1 in at least 3 single cells. We'll filter for expression greater than 1 in at least 3 samples for now - we'll get to the single stuff in a bit. For now, we'll filter on all samples.
Exercise 6: Filtering on the presence of genes
Create a dataframe called expression_filtered_by_all_samples that consists only of genes that have expression greater than 1 in at least 3 samples.
Hint for IndexingError: Unalignable boolean Series key provided
If you're getting this error, double-check your .sum() command. Did you remember to specify that you want to get the number of cells (columns) that express each gene (row)? Remember that .sum() by default gives you the sum over columns, but since genes are the rows .... How do you get the sum over rows?
End of explanation
genes_of_interest = (expression_logged > 1).sum(axis=1) >= 3
expression_filtered_by_all_samples = expression_logged.loc[genes_of_interest]
print(expression_filtered_by_all_samples.shape)
expression_filtered_by_all_samples.head()
Explanation:
End of explanation
sns.boxplot(expression_filtered_by_all_samples)
# gcf = Get current figure
fig = plt.gcf()
fig.savefig('expression_filtered_by_all_samples_boxplot.pdf')
Explanation: Just for fun, let's see how our the distributions in our expression matrix have changed. If you want to save the figure, you can:
End of explanation
pooled_ids = [x for x in expression_logged.columns if x.startswith('P')]
pooled_ids
Explanation: Discussion
How did the gene expression distributions change? Why?
Were the single and pooled samples' distributions affected differently? Why or why not?
Getting only the columns you want
In the next exercise, we'll get just the single cells
For the next step, we're going to pull out just the pooled - which are conveniently labeled as "P#". We'll do this using a list comprehension, which means we'll create a new list based on the items in shalek2013_expression.columns and whether or not they start with the letter 'P'.
In Python, things in square brackets ([]) are lists unless indicated otherwise. We are using a list comprehension here instead of a map, because we only want a subset of the columns, rather than all of them.
End of explanation
pooled = expression_logged[pooled_ids]
pooled.head()
Explanation: We'll access the columns we want using this bracket notation (note that this only works for columns, not rows)
End of explanation
expression_logged.loc[:, pooled_ids].head()
Explanation: We could do the same thing using .loc but we would need to put a colon ":" in the "rows" section (first place) to show that we want "all rows."
End of explanation
# YOUR CODE HERE
print(singles.shape)
singles.head()
Explanation: Exercise 7: Make a dataframe of only single samples
Use list comprehensions to make a list called single_ids that consists only of single cells, and use that list to subset expression_logged and create a dataframe called singles. (Hint - how are the single cells ids different from the pooled ids?)
End of explanation
single_ids = [x for x in expression_logged.columns if x.startswith('S')]
singles = expression_logged[single_ids]
print(singles.shape)
singles.head()
Explanation:
End of explanation
# YOUR CODE HERE
print(expression_filtered_by_singles.shape)
expression_filtered_by_singles.head()
Explanation: Using two different dataframes for filtering
Exercise 8: Filter the full dataframe using the singles dataframe
Now we'll actually do the filtering done by the paper. Using the singles dataframe you just created, get the genes that have expression greater than 1 in at least 3 single cells, and use that to filter expression_logged. Call this dataframe expression_filtered_by_singles.
End of explanation
rows = (singles > 1).sum(axis=1) > 3
expression_filtered_by_singles = expression_logged.loc[rows]
print(expression_filtered_by_singles.shape)
expression_filtered_by_singles.head()
Explanation:
End of explanation
sns.boxplot(expression_filtered_by_singles)
fig = plt.gcf()
fig.savefig('expression_filtered_by_singles_boxplot.pdf')
Explanation: Let's make a boxplot again to see how the data has changed.
End of explanation
sns.jointplot(shalek2013_expression['S1'], shalek2013_expression['S2'])
Explanation: This is much nicer because now we don't have so many zeros and each sample has a reasonable dynamic range.
Why did this filtering even matter?
You may be wondering, we did all this work to remove some zeros..... so the FPKM what? Let's take a look at how this affects the relationships between samples using sns.jointplot from seaborn, which will plot a correlation scatterplot. This also calculates the Pearson correlation, a linear correlation metric.
Let's first do this on the unlogged data.
End of explanation
sns.jointplot(expression_logged['S1'], expression_logged['S2'])
Explanation: Pretty funky looking huh? That's why we logged it :)
Now let's try this on the logged data.
End of explanation
sns.jointplot(expression_filtered_by_singles['S1'], expression_filtered_by_singles['S2'])
Explanation: Hmm our pearson correlation increased from 0.62 to 0.64. Why could that be?
Let's look at this same plot using the filtered data.
End of explanation |
882 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Evaluation, Cross-Validation, and Model Selection
By Heiko Strathmann - [email protected] - http
Step1: Types of splitting strategies
As said earlier Cross-validation is based upon splitting the data into multiple partitions. Shogun has various strategies for this. The base class for them is CSplittingStrategy.
K-fold cross-validation
Formally, this is achieved via partitioning a dataset $X$ of size $|X|=n$ into $k \leq n$ disjoint partitions $X_i\subseteq X$ such that $X_1 \cup X_2 \cup \dots \cup X_n = X$ and $X_i\cap X_j=\emptyset$ for all $i\neq j$. Then, the algorithm is executed on all $k$ possibilities of merging $k-1$ partitions and subsequently tested on the remaining partition. This results in $k$ performances which are evaluated in some metric of choice (Shogun support multiple ones). The procedure can be repeated (on different splits) in order to obtain less variance in the estimate. See [1] for a nice review on cross-validation using different performance measures.
Step2: Stratified cross-validation
On classificaiton data, the best choice is stratified cross-validation. This divides the data in such way that the fraction of labels in each partition is roughly the same, which reduces the variance of the performance estimate quite a bit, in particular for data with more than two classes. In Shogun this is implemented by CStratifiedCrossValidationSplitting class.
Step3: Leave One Out cross-validation
Leave One Out Cross-validation holds out one sample as the validation set. It is thus a special case of K-fold cross-validation with $k=n$ where $n$ is number of samples. It is implemented in LOOCrossValidationSplitting class.
Let us visualize the generated folds on the toy data.
Step4: Stratified splitting takes care that each fold has almost the same number of samples from each class. This is not the case with normal splitting which usually leads to imbalanced folds.
Toy example
Step5: Ok, we now have performed classification on the training data. How good did this work? We can easily do this for many different performance measures.
Step6: Note how for example error rate is 1-accuracy. All of those numbers represent the training error, i.e. the ability of the classifier to explain the given data.
Now, the training error is zero. This seems good at first. But is this setting of the parameters a good idea? No! A good performance on the training data alone does not mean anything. A simple look up table is able to produce zero error on training data. What we want is that our methods generalises the input data somehow to perform well on unseen data. We will now use cross-validation to estimate the performance on such.
We will use CStratifiedCrossValidationSplitting, which accepts a reference to the labels and the number of partitions as parameters. This instance is then passed to the class CCrossValidation, which does the estimation using the desired splitting strategy. The latter class can take all algorithms that are implemented against the CMachine interface.
Step7: Now this is incredibly bad compared to the training error. In fact, it is very close to random performance (0.5). The lesson
Step8: It is better to average a number of different runs of cross-validation in this case. A nice side effect of this is that the results can be used to estimate error intervals for a given confidence rate.
Step9: Using this machinery, it is very easy to compare multiple kernel parameters against each other to find the best one. It is even possible to compare a different kernel.
Step10: This gives a brute-force way to select paramters of any algorithm implemented under the CMachine interface. The cool thing about this is, that it is also possible to compare different model families against each other. Below, we compare a a number of regression models in Shogun on the Boston Housing dataset.
Regression problem and cross-validation
Various regression models in Shogun are now used to predict house prices using the boston housing dataset. Cross-validation is used to find best parameters and also test the performance of the models.
Step11: Let us use cross-validation to compare various values of tau paramter for ridge regression (Regression notebook). We will use MeanSquaredError as the performance metric. Note that normal splitting is used since it might be impossible to generate "good" splits using Stratified splitting in case of regression since we have continous values for labels.
Step12: A low value of error certifies a good pick for the tau paramter which should be easy to conclude from the plots. In case of Ridge Regression the value of tau i.e. the amount of regularization doesn't seem to matter but does seem to in case of Kernel Ridge Regression. One interpretation of this could be the lack of over fitting in the feature space for ridge regression and the occurence of over fitting in the new kernel space in which Kernel Ridge Regression operates. </br> Next we will compare a range of values for the width of Gaussian Kernel used in Kernel Ridge Regression
Step13: The values for the kernel parameter and tau may not be independent of each other, so the values we have may not be optimal. A brute force way to do this would be to try all the pairs of these values but it is only feasible for a low number of parameters.
Step14: Let us approximately pick the good parameters using the plots. Now that we have the best parameters, let us compare the various regression models on the data set.
Step15: Model selection using Grid Search
A standard way of selecting the best parameters of a learning algorithm is by Grid Search. This is done by an exhaustive search of a specified parameter space. CModelSelectionParameters is used to select various parameters and their ranges to be used for model selection. A tree like structure is used where the nodes can be CSGObject or the parameters to the object. The range of values to be searched for the parameters is set using build_values() method.
Step16: Next we will create CModelSelectionParameters instance with a kernel object which has to be appended the root node. The kernel object itself will be append with a kernel width parameter which is the parameter we wish to search. | Python Code:
%pylab inline
%matplotlib inline
# include all Shogun classes
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
from shogun import *
import shogun as sg
# generate some ultra easy training data
gray()
n=20
title('Toy data for binary classification')
X=hstack((randn(2,n), randn(2,n)+1))
Y=hstack((-ones(n), ones(n)))
_=scatter(X[0], X[1], c=Y , s=100)
p1 = Rectangle((0, 0), 1, 1, fc="w")
p2 = Rectangle((0, 0), 1, 1, fc="k")
legend((p1, p2), ["Class 1", "Class 2"], loc=2)
# training data in Shogun representation
feats=features(X)
labels=BinaryLabels(Y)
Explanation: Evaluation, Cross-Validation, and Model Selection
By Heiko Strathmann - [email protected] - http://github.com/karlnapf - http://herrstrathmann.de.
Based on the model selection framework of his Google summer of code 2011 project | Saurabh Mahindre - github.com/Saurabh7 as a part of Google Summer of Code 2014 project mentored by - Heiko Strathmann
This notebook illustrates the evaluation of prediction algorithms in Shogun using <a href="http://en.wikipedia.org/wiki/Cross-validation_(statistics)">cross-validation</a>, and selecting their parameters using <a href="http://en.wikipedia.org/wiki/Hyperparameter_optimization">grid-search</a>. We demonstrate this for a toy example on <a href="http://en.wikipedia.org/wiki/Binary_classification">Binary Classification</a> using <a href="http://en.wikipedia.org/wiki/Support_vector_machine">Support Vector Machines</a> and also a regression problem on a real world dataset.
General Idea
Splitting Strategies
K-fold cross-validation
Stratified cross-validation
Example: Binary classification
Example: Regression
Model Selection: Grid Search
General Idea
Cross validation aims to estimate an algorithm's performance on unseen data. For example, one might be interested in the average classification accuracy of a Support Vector Machine when being applied to new data, that it was not trained on. This is important in order to compare the performance different algorithms on the same target. Most crucial is the point that the data that was used for running/training the algorithm is not used for testing. Different algorithms here also can mean different parameters of the same algorithm. Thus, cross-validation can be used to tune parameters of learning algorithms, as well as comparing different families of algorithms against each other. Cross-validation estimates are related to the marginal likelihood in Bayesian statistics in the sense that using them for selecting models avoids overfitting.
Evaluating an algorithm's performance on training data should be avoided since the learner may adjust to very specific random features of the training data which are not very important to the general relation. This is called overfitting. Maximising performance on the training examples usually results in algorithms explaining the noise in data (rather than actual patterns), which leads to bad performance on unseen data. This is one of the reasons behind splitting the data and using different splits for training and testing, which can be done using cross-validation.
Let us generate some toy data for binary classification to try cross validation on.
End of explanation
k=5
normal_split = sg.splitting_strategy('CrossValidationSplitting', labels=labels, num_subsets=k)
Explanation: Types of splitting strategies
As said earlier Cross-validation is based upon splitting the data into multiple partitions. Shogun has various strategies for this. The base class for them is CSplittingStrategy.
K-fold cross-validation
Formally, this is achieved via partitioning a dataset $X$ of size $|X|=n$ into $k \leq n$ disjoint partitions $X_i\subseteq X$ such that $X_1 \cup X_2 \cup \dots \cup X_n = X$ and $X_i\cap X_j=\emptyset$ for all $i\neq j$. Then, the algorithm is executed on all $k$ possibilities of merging $k-1$ partitions and subsequently tested on the remaining partition. This results in $k$ performances which are evaluated in some metric of choice (Shogun support multiple ones). The procedure can be repeated (on different splits) in order to obtain less variance in the estimate. See [1] for a nice review on cross-validation using different performance measures.
End of explanation
stratified_split = sg.splitting_strategy('StratifiedCrossValidationSplitting', labels=labels, num_subsets=k)
Explanation: Stratified cross-validation
On classificaiton data, the best choice is stratified cross-validation. This divides the data in such way that the fraction of labels in each partition is roughly the same, which reduces the variance of the performance estimate quite a bit, in particular for data with more than two classes. In Shogun this is implemented by CStratifiedCrossValidationSplitting class.
End of explanation
split_strategies=[stratified_split, normal_split]
#code to visualize splitting
def get_folds(split, num):
split.build_subsets()
x=[]
y=[]
lab=[]
for j in range(num):
indices=split.generate_subset_indices(j)
x_=[]
y_=[]
lab_=[]
for i in range(len(indices)):
x_.append(X[0][indices[i]])
y_.append(X[1][indices[i]])
lab_.append(Y[indices[i]])
x.append(x_)
y.append(y_)
lab.append(lab_)
return x, y, lab
def plot_folds(split_strategies, num):
for i in range(len(split_strategies)):
x, y, lab=get_folds(split_strategies[i], num)
figure(figsize=(18,4))
gray()
suptitle(split_strategies[i].get_name(), fontsize=12)
for j in range(0, num):
subplot(1, num, (j+1), title='Fold %s' %(j+1))
scatter(x[j], y[j], c=lab[j], s=100)
_=plot_folds(split_strategies, 4)
Explanation: Leave One Out cross-validation
Leave One Out Cross-validation holds out one sample as the validation set. It is thus a special case of K-fold cross-validation with $k=n$ where $n$ is number of samples. It is implemented in LOOCrossValidationSplitting class.
Let us visualize the generated folds on the toy data.
End of explanation
# define SVM with a small rbf kernel (always normalise the kernel!)
C=1
kernel = sg.kernel("GaussianKernel", log_width=np.log(0.001))
kernel.init(feats, feats)
kernel.set_normalizer(SqrtDiagKernelNormalizer())
classifier = sg.machine('LibSVM', C1=C, C2=C, kernel=kernel, labels=labels)
# train
_=classifier.train()
Explanation: Stratified splitting takes care that each fold has almost the same number of samples from each class. This is not the case with normal splitting which usually leads to imbalanced folds.
Toy example: Binary Support Vector Classification
Following the example from above, we will tune the performance of a SVM on the binary classification problem. We will
demonstrate how to evaluate a loss function or metric on a given algorithm
then learn how to estimate this metric for the algorithm performing on unseen data
and finally use those techniques to tune the parameters to obtain the best possible results.
The involved methods are
LibSVM as the binary classification algorithms
the area under the ROC curve (AUC) as performance metric
three different kernels to compare
End of explanation
# instanciate a number of Shogun performance measures
metrics=[ROCEvaluation(), AccuracyMeasure(), ErrorRateMeasure(), F1Measure(), PrecisionMeasure(), RecallMeasure(), SpecificityMeasure()]
for metric in metrics:
print(metric.get_name(), metric.evaluate(classifier.apply(feats), labels))
Explanation: Ok, we now have performed classification on the training data. How good did this work? We can easily do this for many different performance measures.
End of explanation
metric = sg.evaluation('AccuracyMeasure')
cross = sg.machine_evaluation('CrossValidation', machine=classifier, features=feats, labels=labels,
splitting_strategy=stratified_split, evaluation_criterion=metric)
# perform the cross-validation, note that this call involved a lot of computation
result=cross.evaluate()
# this class contains a field "mean" which contain the mean performance metric
print("Testing", metric.get_name(), result.get('mean'))
Explanation: Note how for example error rate is 1-accuracy. All of those numbers represent the training error, i.e. the ability of the classifier to explain the given data.
Now, the training error is zero. This seems good at first. But is this setting of the parameters a good idea? No! A good performance on the training data alone does not mean anything. A simple look up table is able to produce zero error on training data. What we want is that our methods generalises the input data somehow to perform well on unseen data. We will now use cross-validation to estimate the performance on such.
We will use CStratifiedCrossValidationSplitting, which accepts a reference to the labels and the number of partitions as parameters. This instance is then passed to the class CCrossValidation, which does the estimation using the desired splitting strategy. The latter class can take all algorithms that are implemented against the CMachine interface.
End of explanation
print("Testing", metric.get_name(), [cross.evaluate().get('mean') for _ in range(10)])
Explanation: Now this is incredibly bad compared to the training error. In fact, it is very close to random performance (0.5). The lesson: Never judge your algorithms based on the performance on training data!
Note that for small data sizes, the cross-validation estimates are quite noisy. If we run it multiple times, we get different results.
End of explanation
# 25 runs and 95% confidence intervals
cross.put('num_runs', 25)
# perform x-validation (now even more expensive)
cross.evaluate()
result=cross.evaluate()
print("Testing cross-validation mean %.2f " \
% (result.get('mean')))
Explanation: It is better to average a number of different runs of cross-validation in this case. A nice side effect of this is that the results can be used to estimate error intervals for a given confidence rate.
End of explanation
widths=2**linspace(-5,25,10)
results=zeros(len(widths))
for i in range(len(results)):
kernel.put('log_width', np.log(widths[i]))
result= cross.evaluate()
results[i]=result.get('mean')
plot(log2(widths), results, 'blue')
xlabel("log2 Kernel width")
ylabel(metric.get_name())
_=title("Accuracy for different kernel widths")
print("Best Gaussian kernel width %.2f" % widths[results.argmax()], "gives", results.max())
# compare this with a linear kernel
classifier.put('kernel', sg.kernel('LinearKernel'))
lin_k = cross.evaluate()
plot([log2(widths[0]), log2(widths[len(widths)-1])], [lin_k.get('mean'),lin_k.get('mean')], 'r')
# please excuse this horrible code :)
print("Linear kernel gives", lin_k.get('mean'))
_=legend(["Gaussian", "Linear"], loc="lower center")
Explanation: Using this machinery, it is very easy to compare multiple kernel parameters against each other to find the best one. It is even possible to compare a different kernel.
End of explanation
feats=features(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'uci/housing/fm_housing.dat')))
labels=RegressionLabels(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'uci/housing/housing_label.dat')))
preproc = sg.transformer('RescaleFeatures')
preproc.fit(feats)
feats = preproc.transform(feats)
#Regression models
ls = sg.machine('LeastSquaresRegression', features=feats, labels=labels)
tau=1
rr = sg.machine('LinearRidgeRegression', tau=tau, features=feats, labels=labels)
width=1
tau=1
kernel=sg.kernel("GaussianKernel", log_width=np.log(width))
kernel.set_normalizer(SqrtDiagKernelNormalizer())
krr = sg.machine('KernelRidgeRegression', tau=tau, kernel=kernel, labels=labels)
regression_models=[ls, rr, krr]
Explanation: This gives a brute-force way to select paramters of any algorithm implemented under the CMachine interface. The cool thing about this is, that it is also possible to compare different model families against each other. Below, we compare a a number of regression models in Shogun on the Boston Housing dataset.
Regression problem and cross-validation
Various regression models in Shogun are now used to predict house prices using the boston housing dataset. Cross-validation is used to find best parameters and also test the performance of the models.
End of explanation
n=30
taus = logspace(-4, 1, n)
#5-fold cross-validation
k=5
split = sg.splitting_strategy('CrossValidationSplitting', labels=labels, num_subsets=k)
metric = sg.evaluation('MeanSquaredError')
cross = sg.machine_evaluation('CrossValidation', machine=rr, features=feats, labels=labels, splitting_strategy=split,
evaluation_criterion=metric, autolock=False)
cross.put('num_runs', 50)
errors=[]
for tau in taus:
#set necessary parameter
rr.put('tau', tau)
result=cross.evaluate()
#Enlist mean error for all runs
errors.append(result.get('mean'))
figure(figsize=(20,6))
suptitle("Finding best (tau) parameter using cross-validation", fontsize=12)
p=subplot(121)
title("Ridge Regression")
plot(taus, errors, linewidth=3)
p.set_xscale('log')
p.set_ylim([0, 80])
xlabel("Taus")
ylabel("Mean Squared Error")
cross = sg.machine_evaluation('CrossValidation', machine=krr, features=feats, labels=labels, splitting_strategy=split, evaluation_criterion=metric)
cross.put('num_runs', 50)
errors=[]
for tau in taus:
krr.put('tau', tau)
result=cross.evaluate()
#print tau, "error", result.get_mean()
errors.append(result.get('mean'))
p2=subplot(122)
title("Kernel Ridge regression")
plot(taus, errors, linewidth=3)
p2.set_xscale('log')
xlabel("Taus")
_=ylabel("Mean Squared Error")
Explanation: Let us use cross-validation to compare various values of tau paramter for ridge regression (Regression notebook). We will use MeanSquaredError as the performance metric. Note that normal splitting is used since it might be impossible to generate "good" splits using Stratified splitting in case of regression since we have continous values for labels.
End of explanation
n=50
widths=logspace(-2, 3, n)
krr.put('tau', 0.1)
metric =sg.evaluation('MeanSquaredError')
k=5
split = sg.splitting_strategy('CrossValidationSplitting', labels=labels, num_subsets=k)
cross = sg.machine_evaluation('CrossValidation', machine=krr, features=feats, labels=labels, splitting_strategy=split, evaluation_criterion=metric)
cross.put('num_runs', 10)
errors=[]
for width in widths:
kernel.put('log_width', np.log(width))
result=cross.evaluate()
#print width, "error", result.get('mean')
errors.append(result.get('mean'))
figure(figsize=(15,5))
p=subplot(121)
title("Finding best width using cross-validation")
plot(widths, errors, linewidth=3)
p.set_xscale('log')
xlabel("Widths")
_=ylabel("Mean Squared Error")
Explanation: A low value of error certifies a good pick for the tau paramter which should be easy to conclude from the plots. In case of Ridge Regression the value of tau i.e. the amount of regularization doesn't seem to matter but does seem to in case of Kernel Ridge Regression. One interpretation of this could be the lack of over fitting in the feature space for ridge regression and the occurence of over fitting in the new kernel space in which Kernel Ridge Regression operates. </br> Next we will compare a range of values for the width of Gaussian Kernel used in Kernel Ridge Regression
End of explanation
n=40
taus = logspace(-3, 0, n)
widths=logspace(-1, 4, n)
cross = sg.machine_evaluation('CrossValidation', machine=krr, features=feats, labels=labels, splitting_strategy=split, evaluation_criterion=metric)
cross.put('num_runs', 1)
x, y=meshgrid(taus, widths)
grid=array((ravel(x), ravel(y)))
print(grid.shape)
errors=[]
for i in range(0, n*n):
krr.put('tau', grid[:,i][0])
kernel.put('log_width', np.log(grid[:,i][1]))
result=cross.evaluate()
errors.append(result.get('mean'))
errors=array(errors).reshape((n, n))
from mpl_toolkits.mplot3d import Axes3D
#taus = logspace(0.5, 1, n)
jet()
fig=figure(figsize(15,7))
ax=subplot(121)
c=pcolor(x, y, errors)
_=contour(x, y, errors, linewidths=1, colors='black')
_=colorbar(c)
xlabel('Taus')
ylabel('Widths')
ax.set_xscale('log')
ax.set_yscale('log')
ax1=fig.add_subplot(122, projection='3d')
ax1.plot_wireframe(log10(y),log10(x), errors, linewidths=2, alpha=0.6)
ax1.view_init(30,-40)
xlabel('Taus')
ylabel('Widths')
_=ax1.set_zlabel('Error')
Explanation: The values for the kernel parameter and tau may not be independent of each other, so the values we have may not be optimal. A brute force way to do this would be to try all the pairs of these values but it is only feasible for a low number of parameters.
End of explanation
#use the best parameters
rr.put('tau', 1)
krr.put('tau', 0.05)
kernel.put('log_width', np.log(2))
title_='Performance on Boston Housing dataset'
print("%50s" %title_)
for machine in regression_models:
metric = sg.evaluation('MeanSquaredError')
cross = sg.machine_evaluation('CrossValidation', machine=machine, features=feats, labels=labels, splitting_strategy=split,
evaluation_criterion=metric, autolock=False)
cross.put('num_runs', 25)
result=cross.evaluate()
print("-"*80)
print("|", "%30s" % machine.get_name(),"|", "%20s" %metric.get_name(),"|","%20s" %result.get('mean') ,"|" )
print("-"*80)
Explanation: Let us approximately pick the good parameters using the plots. Now that we have the best parameters, let us compare the various regression models on the data set.
End of explanation
#Root
param_tree_root=ModelSelectionParameters()
#Parameter tau
tau=ModelSelectionParameters("tau")
param_tree_root.append_child(tau)
# also R_LINEAR/R_LOG is available as type
min_value=0.01
max_value=1
type_=R_LINEAR
step=0.05
base=2
tau.build_values(min_value, max_value, type_, step, base)
Explanation: Model selection using Grid Search
A standard way of selecting the best parameters of a learning algorithm is by Grid Search. This is done by an exhaustive search of a specified parameter space. CModelSelectionParameters is used to select various parameters and their ranges to be used for model selection. A tree like structure is used where the nodes can be CSGObject or the parameters to the object. The range of values to be searched for the parameters is set using build_values() method.
End of explanation
#kernel object
param_gaussian_kernel=ModelSelectionParameters("kernel", kernel)
gaussian_kernel_width=ModelSelectionParameters("log_width")
gaussian_kernel_width.build_values(0.1, 6.0, R_LINEAR, 0.5, 2.0)
#kernel parameter
param_gaussian_kernel.append_child(gaussian_kernel_width)
param_tree_root.append_child(param_gaussian_kernel)
# cross validation instance used
cross_validation = sg.machine_evaluation('CrossValidation', machine=krr, features=feats, labels=labels, splitting_strategy=split,
evaluation_criterion=metric)
cross_validation.put('num_runs', 1)
# model selection instance
model_selection=GridSearchModelSelection(cross_validation, param_tree_root)
print_state=False
# TODO: enable it once crossval has been fixed
#best_parameters=model_selection.select_model(print_state)
#best_parameters.apply_to_machine(krr)
#best_parameters.print_tree()
result=cross_validation.evaluate()
print('Error with Best parameters:', result.get('mean'))
Explanation: Next we will create CModelSelectionParameters instance with a kernel object which has to be appended the root node. The kernel object itself will be append with a kernel width parameter which is the parameter we wish to search.
End of explanation |
883 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
01 - Example - Eliminating Outliers
This notebook presents how to eliminate the diagnosed outliers (in the previous Learning Unit).
By
Step1: Load the dataset that will be used
Step2: Let us drop the missing and duplicated values since they don't matter for now (already covered in other notebooks)
Step3: Dealing with outliers
Time to deal with the issues previously found.
1) Delete observations - use feature bounds
The easiest way is to delete the observations (for when you know the bounds of your features). Let's use Age, since know the limits. Set the limits
Step4: Create the mask
Step5: Check if some outliers were caught
Step6: Yes! Two were found! The mask_age variable contains the rows we want to keep, i.e., the rows that meet the bounds above. So, lets drop the above 2 rows
Step7: 2) Create classes/bins
Instead of having a range of values you can discretize in classes/bins. Make use of pandas' qcut
Step8: Discretize!
Step9: The limits of the defined classes/bins are
Step10: We could replace the height values by the new five categories. Nevertheless, looks like a person with 252 cm is actually an outlier and the best would be to evaluate this value against two-standard deviations or percentile (e.g., 99%).
Lets define the height bounds according to two-standard deviations from the mean.
3) Delete observations - use the standard deviation
Step11: Which ones are out of the bounds?
Step12: Let's delete these rows (mask_height contains the rows we want to keep) | Python Code:
import pandas as pd
import numpy as np
% matplotlib inline
from matplotlib import pyplot as plt
Explanation: 01 - Example - Eliminating Outliers
This notebook presents how to eliminate the diagnosed outliers (in the previous Learning Unit).
By: Hugo Lopes
Learning Unit 08
Some inital imports:
End of explanation
data = pd.read_csv('../data/data_with_problems.csv', index_col=0)
print('Our dataset has %d columns (features) and %d rows (people).' % (data.shape[1], data.shape[0]))
data.head(15)
Explanation: Load the dataset that will be used
End of explanation
data = data.drop_duplicates()
data = data.dropna()
print('Our dataset has %d columns (features) and %d rows (people).' % (data.shape[1], data.shape[0]))
Explanation: Let us drop the missing and duplicated values since they don't matter for now (already covered in other notebooks):
End of explanation
min_age = 0
max_age = 117 # oldest person currently alive
Explanation: Dealing with outliers
Time to deal with the issues previously found.
1) Delete observations - use feature bounds
The easiest way is to delete the observations (for when you know the bounds of your features). Let's use Age, since know the limits. Set the limits:
End of explanation
mask_age = (data['age'] >= min_age) & (data['age'] <= max_age)
mask_age.head(10)
Explanation: Create the mask:
End of explanation
data[~mask_age]
Explanation: Check if some outliers were caught:
End of explanation
data = data[mask_age]
print('Our dataset has %d columns (features) and %d rows (people).' % (data.shape[1], data.shape[0]))
Explanation: Yes! Two were found! The mask_age variable contains the rows we want to keep, i.e., the rows that meet the bounds above. So, lets drop the above 2 rows:
End of explanation
data['height'].hist(bins=100)
plt.title('Height population distribution')
plt.xlabel('cm')
plt.ylabel('freq')
plt.show()
Explanation: 2) Create classes/bins
Instead of having a range of values you can discretize in classes/bins. Make use of pandas' qcut: Discretize variable into equal-sized buckets.
End of explanation
height_bins = pd.qcut(data['height'],
5,
labels=['very short', 'short', 'average', 'tall', 'very tall'],
retbins=True)
height_bins[0].head(10)
Explanation: Discretize!
End of explanation
height_bins[1]
Explanation: The limits of the defined classes/bins are:
End of explanation
# Calculate the mean and standard deviation
std_height = data['height'].std()
mean_height = data['height'].mean()
# The mask!
mask_height = (data['height'] > mean_height-2*std_height) & (data['height'] < mean_height+2*std_height)
print('Height bounds:')
print('> Minimum accepted height: %3.1f' % (mean_height-2*std_height))
print('> Maximum accepted height: %3.1f' % (mean_height+2*std_height))
Explanation: We could replace the height values by the new five categories. Nevertheless, looks like a person with 252 cm is actually an outlier and the best would be to evaluate this value against two-standard deviations or percentile (e.g., 99%).
Lets define the height bounds according to two-standard deviations from the mean.
3) Delete observations - use the standard deviation
End of explanation
data.loc[~mask_height]
Explanation: Which ones are out of the bounds?
End of explanation
data = data[mask_height]
print('Our dataset has %d columns (features) and %d rows (people).' % (data.shape[1], data.shape[0]))
Explanation: Let's delete these rows (mask_height contains the rows we want to keep)
End of explanation |
884 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Size encodings tutorial
See the examples below for common ways to map data to node size in Graphistry.
Size refers to point radius. This tutorial covers two kinds of size controls
Step1: Default Size
Graphistry uses the 'degree' of a node, so nodes with more edges appear bigger
Step2: Size Setting
You can tune the scaling factor
Step3: Size encodings
Options
Step4: Categorical size encodings
Map distinct values to specific sizes. Optionally, set a default, else black.
Step5: Legend support
Categorical node sizes will appear in legend when driven by column type | Python Code:
# ! pip install --user graphistry
import graphistry
# To specify Graphistry account & server, use:
# graphistry.register(api=3, username='...', password='...', protocol='https', server='hub.graphistry.com')
# For more options, see https://github.com/graphistry/pygraphistry#configure
graphistry.__version__
import datetime, pandas as pd
e_df = pd.DataFrame({
's': ['a', 'b', 'c', 'a', 'b', 'c', 'a', 'd', 'e'],
'd': ['b', 'c', 'a', 'b', 'c', 'a', 'c', 'e', 'd'],
'time': [datetime.datetime(1987, 10, 1), datetime.datetime(1987, 10, 2), datetime.datetime(1987, 10, 3),
datetime.datetime(1988, 10, 1), datetime.datetime(1988, 10, 2), datetime.datetime(1988, 10, 3),
datetime.datetime(1989, 10, 1), datetime.datetime(1989, 10, 2), datetime.datetime(1989, 10, 3)]
})
n_df = pd.DataFrame({
'n': ['a', 'b', 'c', 'd', 'e'],
'score': [ 1, 30, 50, 70, 90 ],
'palette_color_int32': pd.Series(
[0, 1, 2, 3, 4],
dtype='int32'),
'hex_color_int64': pd.Series(
[0xFF000000, 0xFFFF0000, 0xFFFFFF00, 0x00FF0000, 0x0000FF00],
dtype='int64'),
'type': ['mac', 'macbook', 'mac', 'macbook', 'sheep']
})
g = graphistry.edges(e_df, 's', 'd').nodes(n_df, 'n')
Explanation: Size encodings tutorial
See the examples below for common ways to map data to node size in Graphistry.
Size refers to point radius. This tutorial covers two kinds of size controls:
Node size setting, which is a global scaling factor
Node size encoding, used for mapping a node data column to size
Sizes are often used with node color, label, icon, and badges to provide more visual information. Most encodings work both for points and edges. The PyGraphistry Python client makes it easier to use the URL settings API and the REST upload API. For dynamic control, you can use also use the JavaScript APIs.
Setup
Mode api=3 is recommended. It is required for complex_encodings (ex: .encode_point_size(...)). Mode api=1 works with the simpler .bind(point_size='col_a') form.
End of explanation
g.plot()
Explanation: Default Size
Graphistry uses the 'degree' of a node, so nodes with more edges appear bigger
End of explanation
g.settings(url_params={'pointSize': 0.5}).plot()
Explanation: Size Setting
You can tune the scaling factor:
End of explanation
g.settings(url_params={'pointSize': 0.3}).encode_point_size('score').plot()
Explanation: Size encodings
Options: continuous mapping, categorical mapping
Continuous size encodings
Use an input column as relative sizes. Graphistry automatically normalizes them.
End of explanation
g.settings(url_params={'pointSize': 0.3})\
.encode_point_size(
'type',
categorical_mapping={
'mac': 50,
'macbook': 100
},
default_mapping=20
).plot()
Explanation: Categorical size encodings
Map distinct values to specific sizes. Optionally, set a default, else black.
End of explanation
g.settings(url_params={'pointSize': 0.3})\
.encode_point_size(
'type',
categorical_mapping={
'mac': 50,
'macbook': 100
},
default_mapping=20
).plot()
Explanation: Legend support
Categorical node sizes will appear in legend when driven by column type:
End of explanation |
885 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Import packages
Importing the necessary packages, including the standard TFX component classes
Step2: Pima Indians Diabetes example pipeline
Download Example Data
We download the example dataset for use in our TFX pipeline.
The dataset we're using is the Pima Indians Diabetes dataset
There are eight features in this dataset
Step3: Run TFX Components
In the cells that follow, we create TFX components one-by-one and generates example using exampleGen component.
Step4: As seen above, .selected_features contains the features selected after running the component with the speified parameters.
To get the info about updated Example artifact, one can view it as follows | Python Code:
!pip install -U tfx
# getting the code directly from the repo
x = !pwd
if 'feature_selection' not in str(x):
!git clone -b main https://github.com/deutranium/tfx-addons.git
%cd tfx-addons/tfx_addons/feature_selection
Explanation: <a href="https://colab.research.google.com/github/deutranium/tfx-addons/blob/main/tfx_addons/feature_selection/example/Pima_Indians_Diabetes_example_colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
TFX Feature Selection Component
You may find the source code for the same here
This example demonstrate the use of feature selection component. This project allows the user to select different algorithms for performing feature selection on datasets artifacts in TFX pipelines
Base code taken from: https://github.com/tensorflow/tfx/blob/master/docs/tutorials/tfx/components_keras.ipynb
Setup
Install TFX
Note: In Google Colab, because of package updates, the first time you run this cell you must restart the runtime (Runtime > Restart runtime ...).
End of explanation
import os
import pprint
import tempfile
import urllib
import absl
import tensorflow as tf
import tensorflow_model_analysis as tfma
tf.get_logger().propagate = False
import importlib
pp = pprint.PrettyPrinter()
from tfx import v1 as tfx
import importlib
from tfx.components import CsvExampleGen
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
%load_ext tfx.orchestration.experimental.interactive.notebook_extensions.skip
# importing the feature selection component
from component import FeatureSelection
# This is the root directory for your TFX pip package installation.
_tfx_root = tfx.__path__[0]
Explanation: Import packages
Importing the necessary packages, including the standard TFX component classes
End of explanation
# getting the dataset
_data_root = tempfile.mkdtemp(prefix='tfx-data')
DATA_PATH = 'https://raw.githubusercontent.com/npradaschnor/Pima-Indians-Diabetes-Dataset/master/diabetes.csv'
_data_filepath = os.path.join(_data_root, "data.csv")
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
Explanation: Pima Indians Diabetes example pipeline
Download Example Data
We download the example dataset for use in our TFX pipeline.
The dataset we're using is the Pima Indians Diabetes dataset
There are eight features in this dataset:
Pregnancies
Glucose
BloodPressure
SkinThickness
Insulin
BMI
DiabetesPedigreeFunction
Age
The dataset corresponds to classification tasks on which you need to predict if a person has diabetes based on the 8 features above
End of explanation
context = InteractiveContext()
#create and run exampleGen component
example_gen = CsvExampleGen(input_base=_data_root )
context.run(example_gen)
#create and run statisticsGen component
statistics_gen = tfx.components.StatisticsGen(
examples=example_gen.outputs['examples'])
context.run(statistics_gen)
# using the feature selection component
#feature selection component
feature_selector = FeatureSelection(orig_examples = example_gen.outputs['examples'],
module_file='example.modules.pima_indians_module_file')
context.run(feature_selector)
# Display Selected Features
context.show(feature_selector.outputs['feature_selection']._artifacts[0])
Explanation: Run TFX Components
In the cells that follow, we create TFX components one-by-one and generates example using exampleGen component.
End of explanation
context.show(feature_selector.outputs['updated_data']._artifacts[0])
Explanation: As seen above, .selected_features contains the features selected after running the component with the speified parameters.
To get the info about updated Example artifact, one can view it as follows:
End of explanation |
886 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introducing PyTorch
written by Gene Kogan, updated by Sebastian Quinard
In the next cell, we introduce PyTorch, which is an open-source framework which impelments machine learning methodology, particularly that of deep neural networks, by optimizing the efficiency of the computation. We do not have to deal so much with the details of this. Most importantly, PyTorch efficiently implement backpropagation to train neural networks on the GPU.
To start, we will re-implement what we did in the last notebook, a neural network to to predict the sepal width of the flowers in the Iris dataset. In the last notebook, we trained a manually-coded neural network for this, but this time, we'll use PyTorch instead.
Let's load the Iris dataset again.
Step1: First we need to shuffle and pre-process the data. Pre-processing in this case is normalization of the data, as well as converting it to a properly-shaped numpy array.
Step2: Overfitting and validation
In our previous guides, we always evaluated the performance of the network on the same data that we trained it on. But this is wrong; our network could learn to "cheat" by overfitting to the training data (like memorizing it) so as to get a high score, but then not generalize well to actually unknown examples.
In machine learning, this is called "overfitting" and there are several things we have to do to avoid it. The first thing is we must split our dataset into a "training set" which we train on with gradient descent, and a "test set" which is hidden from the training process that we can do a final evaluation on to get the true accuracy, that of the network trying to predict unknown samples.
Let's split the data into a training set and a test set. We'll keep the first 30% of the dataset to use as a test set, and use the rest for training.
Step3: Creating the Model
In PyTorch, to instantiate a neural network model, we inherit the class nn.Module which grants our class Net all the functionality of the nn.Module. We then instantiate it with the init() class. Note however, we must also instantiate the class we are inheriting from (i.e. nn.Module). This alone creates an empty neural network, so to populate it, we add the type of layer we want. In this case, we add a linear layer, a layer that is \"fully-connected,\" meaning all of its neurons are connected to all the neurons in the previous layer, with no empty connections. This may seem confusing at first because we have not yet seen neural network layers which are not fully-connected; we will see this in the next chapter when we introduce convolutional networks.
Next we see the addition of the forward method. We couple the forward method with the layer(s) above to apply a variety of activation functions onto the specified layer.
Finally, we will add the output layer, which will be a fully-connected layer whose size is 1 neuron. This neuron will contain our final output.
Step4: That may be a lot to take in, but once you fully understand the excerpt above, this structure will be used time and time again to build increasingly complex neural networks.
Next we instantiate a new object based on the class.
Step5: We can also get a readout of the current state of the network using print(net)
Step6: So we've added 9 parameters, 8x1 weights between the hidden and output layers, and 1 bias in the output. So we have 41 parameters in total.
Now we are finished specifying the architecture of the model. Now we need to specify our loss function and optimizer, and then compile the model. Let's discuss each of these things.
First, we specify the loss. The standard for regression, as we said before is sum-squared error (SSE) or mean-squared error (MSE). SSE and MSE are basically the same, since the only difference between them is a scaling factor ($\frac{1}{n}$) which doesn't depend on the final weights.
The optimizer is the flavor of gradient descent we want. The most basic optimizer is "stochastic gradient descent" or SGD which is the learning algorithm we have used so far. We have mostly used batch gradient descent so far, which means we compute our gradient over the entire dataset. For reasons which will be more clear when we cover learning algorithms in more detail, this is not usually favored, and we instead calculate the gradient over random subsets of the training data, called mini-batches.
Once we've specified our loss function and optimizer, the model is compiled.
Step7: We are finally ready to train. First we must zero our gradients, so as not to rely on the previously uncovered gradient in our solution. The opposite is imperitive to RNN, as we need the last result to influence the next.
Next we create a forward pass on our neural network. The loss function is then applied to determine the level of error. After, we complete a backwards pass to compute the new weights.
Loss is also printed.
Step8: As you can see above, we train our network down to a validation MSE < 0.01. Notice that both the training loss ("loss") and validation loss ("val_loss") are reported. It's normal for the training loss to be lower than the validation loss, since the network's objective is to predict the training data well. But if the training loss is much lower than our validation loss, it means we are overfitting and may not expect to receive very good results.
We can evaluate the training set one last time at the end using eval.
Step9: We can manually calculate MSE as a sanity check
Step10: We can also predict the value of a single unknown example or a set of them in th following way | Python Code:
import numpy as np
from sklearn.datasets import load_iris
iris = load_iris()
data, labels = iris.data[:,0:3], iris.data[:,3]
Explanation: Introducing PyTorch
written by Gene Kogan, updated by Sebastian Quinard
In the next cell, we introduce PyTorch, which is an open-source framework which impelments machine learning methodology, particularly that of deep neural networks, by optimizing the efficiency of the computation. We do not have to deal so much with the details of this. Most importantly, PyTorch efficiently implement backpropagation to train neural networks on the GPU.
To start, we will re-implement what we did in the last notebook, a neural network to to predict the sepal width of the flowers in the Iris dataset. In the last notebook, we trained a manually-coded neural network for this, but this time, we'll use PyTorch instead.
Let's load the Iris dataset again.
End of explanation
num_samples = len(labels) # size of our dataset
shuffle_order = np.random.permutation(num_samples)
data = data[shuffle_order, :]
labels = labels[shuffle_order]
# normalize data and labels to between 0 and 1 and make sure it's float32
data = data / np.amax(data, axis=0)
data = data.astype('float32')
labels = labels / np.amax(labels, axis=0)
labels = labels.astype('float32')
# print out the data
print("shape of X", data.shape)
print("first 5 rows of X\n", data[0:5, :])
print("first 5 labels\n", labels[0:5])
Explanation: First we need to shuffle and pre-process the data. Pre-processing in this case is normalization of the data, as well as converting it to a properly-shaped numpy array.
End of explanation
# let's rename the data and labels to X, y
X, y = data, labels
test_split = 0.3 # percent split
n_test = int(test_split * num_samples)
x_train, x_test = X[n_test:, :], X[:n_test, :]
x_train = torch.from_numpy(x_train)
x_test = torch.from_numpy(x_test)
y_train, y_test = y[n_test:], y[:n_test]
y_train = torch.from_numpy(y_train)
y_test = torch.from_numpy(y_test)
print('%d training samples, %d test samples' % (x_train.shape[0], x_test.shape[0]))
Explanation: Overfitting and validation
In our previous guides, we always evaluated the performance of the network on the same data that we trained it on. But this is wrong; our network could learn to "cheat" by overfitting to the training data (like memorizing it) so as to get a high score, but then not generalize well to actually unknown examples.
In machine learning, this is called "overfitting" and there are several things we have to do to avoid it. The first thing is we must split our dataset into a "training set" which we train on with gradient descent, and a "test set" which is hidden from the training process that we can do a final evaluation on to get the true accuracy, that of the network trying to predict unknown samples.
Let's split the data into a training set and a test set. We'll keep the first 30% of the dataset to use as a test set, and use the rest for training.
End of explanation
import torch
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(3, 8) # we get 3 from input dimension, and 8 from desired output
self.fc2 = nn.Linear(8, 1) # Output Layer
def forward(self, x):
x = F.sigmoid(self.fc1(x))
x = self.fc2(x)
return x
Explanation: Creating the Model
In PyTorch, to instantiate a neural network model, we inherit the class nn.Module which grants our class Net all the functionality of the nn.Module. We then instantiate it with the init() class. Note however, we must also instantiate the class we are inheriting from (i.e. nn.Module). This alone creates an empty neural network, so to populate it, we add the type of layer we want. In this case, we add a linear layer, a layer that is \"fully-connected,\" meaning all of its neurons are connected to all the neurons in the previous layer, with no empty connections. This may seem confusing at first because we have not yet seen neural network layers which are not fully-connected; we will see this in the next chapter when we introduce convolutional networks.
Next we see the addition of the forward method. We couple the forward method with the layer(s) above to apply a variety of activation functions onto the specified layer.
Finally, we will add the output layer, which will be a fully-connected layer whose size is 1 neuron. This neuron will contain our final output.
End of explanation
net = Net()
Explanation: That may be a lot to take in, but once you fully understand the excerpt above, this structure will be used time and time again to build increasingly complex neural networks.
Next we instantiate a new object based on the class.
End of explanation
print(net)
Explanation: We can also get a readout of the current state of the network using print(net):
End of explanation
from torch import optim
optimizer = optim.SGD(net.parameters(), lr=0.01)
criterion = nn.MSELoss()
Explanation: So we've added 9 parameters, 8x1 weights between the hidden and output layers, and 1 bias in the output. So we have 41 parameters in total.
Now we are finished specifying the architecture of the model. Now we need to specify our loss function and optimizer, and then compile the model. Let's discuss each of these things.
First, we specify the loss. The standard for regression, as we said before is sum-squared error (SSE) or mean-squared error (MSE). SSE and MSE are basically the same, since the only difference between them is a scaling factor ($\frac{1}{n}$) which doesn't depend on the final weights.
The optimizer is the flavor of gradient descent we want. The most basic optimizer is "stochastic gradient descent" or SGD which is the learning algorithm we have used so far. We have mostly used batch gradient descent so far, which means we compute our gradient over the entire dataset. For reasons which will be more clear when we cover learning algorithms in more detail, this is not usually favored, and we instead calculate the gradient over random subsets of the training data, called mini-batches.
Once we've specified our loss function and optimizer, the model is compiled.
End of explanation
def fullPass(data, labels):
running_loss = 0.0
for i in range(0,data.size()[0]):
optimizer.zero_grad()
outputs = net(data[i])
loss = criterion(outputs, labels[i])
loss.backward()
optimizer.step()
# print statistics
running_loss += loss
if i % data.size()[0] == data.size()[0]-1:
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / data.size()[0]))
running_loss = 0.0
net.train()
for epoch in range(400):
fullPass(x_train, y_train);
print('Finished Training')
Explanation: We are finally ready to train. First we must zero our gradients, so as not to rely on the previously uncovered gradient in our solution. The opposite is imperitive to RNN, as we need the last result to influence the next.
Next we create a forward pass on our neural network. The loss function is then applied to determine the level of error. After, we complete a backwards pass to compute the new weights.
Loss is also printed.
End of explanation
net.eval()
fullPass(x_test, y_test)
y_pred = net(x_test)
Explanation: As you can see above, we train our network down to a validation MSE < 0.01. Notice that both the training loss ("loss") and validation loss ("val_loss") are reported. It's normal for the training loss to be lower than the validation loss, since the network's objective is to predict the training data well. But if the training loss is much lower than our validation loss, it means we are overfitting and may not expect to receive very good results.
We can evaluate the training set one last time at the end using eval.
End of explanation
def MSE(y_pred, y_test):
return (1.0/len(y_test)) * np.sum([((y1[0]-y2)**2) for y1, y2 in list(zip(y_pred, y_test))])
print("MSE is %0.4f" % MSE(y_pred, y_test))
Explanation: We can manually calculate MSE as a sanity check:
End of explanation
x_sample = x_test[0].reshape(1, 3) # shape must be (num_samples, 3), even if num_samples = 1
y_prob = net(x_sample)
print("predicted %0.3f, actual %0.3f" % (y_prob[0][0], y_test[0]))
Explanation: We can also predict the value of a single unknown example or a set of them in th following way:
End of explanation |
887 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
What do I want?
Previously in ../catalog_only_classifier/classifier_comparison.ipynb I got a Random Forest-predicted probability of each galaxy being a low-z dwarf. Now I want to select a subset of the galaxies most likely to be dwarfs, and pull their images from the HSC servers.
Code
First let's load some general information about each galaxy, in case we want to see how our "dwarf galaxy scores" correlate with various properties
Step1: Turn magnitudes into colors
Step2: Filter out bad data
Step3: Get FRANKENZ photo-z's
Step4: Create classification labels
Step5: Load scores for each galaxy
This scores were created within catalog_only_classifier/classifier_ROC_curves.ipynb. These scores are also non-deterministic -- getting a new realization will lead to slightly different results (but hopefully with no dramatic changes).
Step6: select the best 1000 / sq.deg.
Step7: Get the images from the quarry
For technical details, see
Step8: To do
Step9: Make the request via curl
1)
First you need to setup you authentication information. Add it to a file like galaxy_images_training/curl_netrc which should look like
Step10: rename files with HSC-id
steps
Step11: set prefix
Step12: find which files are missing
What should I do about these incomplete files? idk, I guess just throw them out for now.
Step13: get HSC ids for index
Step14: clean up old files | Python Code:
# give access to importing dwarfz
import os, sys
dwarfz_package_dir = os.getcwd().split("dwarfz")[0]
if dwarfz_package_dir not in sys.path:
sys.path.insert(0, dwarfz_package_dir)
import dwarfz
# back to regular import statements
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
sns.set(context="poster", style="ticks", font_scale=1.4)
import numpy as np
import pandas as pd
import glob
import shutil
from scipy.special import expit
COSMOS_filename = os.path.join(dwarfz.data_dir_default, "COSMOS_reference.sqlite")
COSMOS = dwarfz.datasets.COSMOS(COSMOS_filename)
HSC_filename = os.path.join(dwarfz.data_dir_default, "HSC_COSMOS_median_forced.sqlite3")
HSC = dwarfz.datasets.HSC(HSC_filename)
matches_filename = os.path.join(dwarfz.data_dir_default, "matches.sqlite3")
matches_df = dwarfz.matching.Matches.load_from_filename(matches_filename)
combined = matches_df[matches_df.match].copy()
combined["ra"] = COSMOS.df.loc[combined.index].ra
combined["dec"] = COSMOS.df.loc[combined.index].dec
combined["photo_z"] = COSMOS.df.loc[combined.index].photo_z
combined["log_mass"] = COSMOS.df.loc[combined.index].mass_med
photometry_cols = [
"gcmodel_flux","gcmodel_flux_err","gcmodel_flux_flags", "gcmodel_mag",
"rcmodel_flux","rcmodel_flux_err","rcmodel_flux_flags", "rcmodel_mag",
"icmodel_flux","icmodel_flux_err","icmodel_flux_flags", "icmodel_mag",
"zcmodel_flux","zcmodel_flux_err","zcmodel_flux_flags", "zcmodel_mag",
"ycmodel_flux","ycmodel_flux_err","ycmodel_flux_flags", "ycmodel_mag",
]
for col in photometry_cols:
combined[col] = HSC.df.loc[combined.catalog_2_ids][col].values
Explanation: What do I want?
Previously in ../catalog_only_classifier/classifier_comparison.ipynb I got a Random Forest-predicted probability of each galaxy being a low-z dwarf. Now I want to select a subset of the galaxies most likely to be dwarfs, and pull their images from the HSC servers.
Code
First let's load some general information about each galaxy, in case we want to see how our "dwarf galaxy scores" correlate with various properties
End of explanation
combined["g_minus_r"] = combined.gcmodel_mag - combined.rcmodel_mag
combined["r_minus_i"] = combined.rcmodel_mag - combined.icmodel_mag
combined["i_minus_z"] = combined.icmodel_mag - combined.zcmodel_mag
combined["z_minus_y"] = combined.zcmodel_mag - combined.ycmodel_mag
Explanation: Turn magnitudes into colors
End of explanation
mask = np.isfinite(combined["g_minus_r"]) & np.isfinite(combined["r_minus_i"]) \
& np.isfinite(combined["i_minus_z"]) & np.isfinite(combined["z_minus_y"]) \
& np.isfinite(combined["icmodel_mag"]) \
& (~combined.gcmodel_flux_flags) & (~combined.rcmodel_flux_flags) \
& (~combined.icmodel_flux_flags) & (~combined.zcmodel_flux_flags) \
& (~combined.ycmodel_flux_flags)
combined = combined[mask]
Explanation: Filter out bad data
End of explanation
df_frankenz = pd.read_sql_table("photo_z",
"sqlite:///{}".format(
os.path.join(dwarfz.data_dir_default,
"HSC_matched_to_FRANKENZ.sqlite")),
index_col="object_id")
df_frankenz.head()
combined = combined.join(df_frankenz[["photoz_best", "photoz_risk_best"]],
on="catalog_2_ids")
Explanation: Get FRANKENZ photo-z's
End of explanation
low_z = (combined.photo_z < .15)
low_mass = (combined.log_mass > 8) & (combined.log_mass < 9)
combined["low_z_low_mass"] = (low_z & low_mass)
combined.low_z_low_mass.mean()
Explanation: Create classification labels
End of explanation
filename = "galaxy_images_training/2017_09_26-dwarf_galaxy_scores.csv"
!head -n 20 $filename
df_dwarf_prob = pd.read_csv(filename)
df_dwarf_prob.head()
theoretical_probs=np.linspace(0,1,num=20+1)
empirical_probs_RF = np.empty(theoretical_probs.size-1)
for i in range(theoretical_probs.size-1):
prob_lim_low = theoretical_probs[i]
prob_lim_high = theoretical_probs[i+1]
mask_RF = (df_dwarf_prob.dwarf_prob >= prob_lim_low) & (df_dwarf_prob.dwarf_prob < prob_lim_high)
empirical_probs_RF[i] = df_dwarf_prob.low_z_low_mass[mask_RF].mean()
color_RF = "g"
label_RF = "Random Forest"
plt.hist(df_dwarf_prob.dwarf_prob, bins=theoretical_probs)
plt.xlim(0,1)
plt.yscale("log")
plt.ylabel("counts")
plt.figure()
plt.step(theoretical_probs, [empirical_probs_RF[0], *empirical_probs_RF],
linestyle="steps", color=color_RF, label=label_RF)
plt.plot(theoretical_probs, theoretical_probs-.05,
drawstyle="steps", color="black", label="ideal", linestyle="dashed")
plt.xlabel("Reported Probability")
plt.ylabel("Actual (Binned) Probability")
plt.legend(loc="best")
plt.xlim(0,1)
plt.ylim(0,1)
Explanation: Load scores for each galaxy
This scores were created within catalog_only_classifier/classifier_ROC_curves.ipynb. These scores are also non-deterministic -- getting a new realization will lead to slightly different results (but hopefully with no dramatic changes).
End of explanation
COSMOS_field_area = 2 # sq. deg.
sample_size = int(1000 * COSMOS_field_area)
print("sample size: ", sample_size)
selected_galaxies = df_dwarf_prob.sort_values("dwarf_prob", ascending=False)[:sample_size]
print("threshold: ", selected_galaxies.dwarf_prob.min())
print("galaxies at or above threshold: ", (df_dwarf_prob.dwarf_prob>=selected_galaxies.dwarf_prob.min()).sum() )
df_dwarf_prob.dwarf_prob.min()
bins = np.linspace(0,1, num=100)
plt.hist(df_dwarf_prob.dwarf_prob, bins=bins, label="RF score")
plt.axvline(selected_galaxies.dwarf_prob.min(), linestyle="dashed", color="black", label="threshold")
plt.legend(loc="best")
plt.xlabel("Dwarf Prob.")
plt.ylabel("counts (galaxies)")
plt.yscale("log")
# how balanced is the CNN training set? What fraction are actually dwarfs?
selected_galaxies.low_z_low_mass.mean()
Explanation: select the best 1000 / sq.deg.
End of explanation
selected_galaxy_coords = HSC.df.loc[selected_galaxies.HSC_id][["ra", "dec"]]
selected_galaxy_coords.head()
selected_galaxy_coords.to_csv("galaxy_images_training/2017_09_26-selected_galaxy_coords.csv")
width = "20asec"
filters = ["HSC-G", "HSC-R", "HSC-I", "HSC-Z", "HSC-Y"]
rerun = "pdr1_deep"
tmp_filename = "galaxy_images_training/tmp_quarry.txt"
tmp_filename_missing_ext = tmp_filename[:-3]
with open(tmp_filename, mode="w") as f:
# print("#? ra dec filter sw sh rerun", file=f)
print_formatter = " {galaxy.ra:.6f}deg {galaxy.dec:.6f}deg {filter} {width} {width} {rerun}"
for galaxy in selected_galaxy_coords.itertuples():
for filter in filters:
print(print_formatter.format(galaxy=galaxy,
width=width,
filter=filter,
rerun=rerun),
file=f)
!head -n 10 $tmp_filename
!wc -l $tmp_filename
!split -a 1 -l 1000 $tmp_filename $tmp_filename_missing_ext
Explanation: Get the images from the quarry
For technical details, see: https://hsc-release.mtk.nao.ac.jp/das_quarry/manual.html
Create a coordinates list
End of explanation
for filename_old in sorted(glob.glob("galaxy_images_training/tmp_quarry.?")):
filename_new = filename_old + ".txt"
with open(filename_new, mode="w") as f_new:
f_new.write("#? ra dec filter sw sh rerun\n")
with open(filename_old, mode="r") as f_old:
data = f_old.read()
f_new.write(data)
os.remove(filename_old)
!ls galaxy_images_training/
Explanation: To do: I need to find a way to deal with the script below when there aren't any files to process (i.e. they've already been processeded)
End of explanation
!head -n 10 galaxy_images_training/tmp_quarry.a.txt
!ls -lhtr galaxy_images_training/quarry_files_a | head -n 11
Explanation: Make the request via curl
1)
First you need to setup you authentication information. Add it to a file like galaxy_images_training/curl_netrc which should look like:
machine hsc-release.mtk.nao.ac.jp login <your username> password <your password>
This allows you to script the curl calls, without being prompted for your password each time
2a)
The curl call (in (2b)) will spit out files into a somewhat unpredicatably named directory, like arch-170928-231223. You should rename this to match the batch suffix. You really should do this right away, so you don't get confused. In general I add the rename onto the same line as the curl call:
curl ... | tar xvf - && mv arch-* quarry_files_a
This only works if it finds one arch- directory, but you really shouldn't have multiple arch directories at any given time; that's a recipe for getting your galaxies mixed up.
2b)
Here's the actual curl invocation:
curl --netrc-file galaxy_images_training/curl_netrc https://hsc-release.mtk.nao.ac.jp/das_quarry/cgi-bin/quarryImage --form list=@<coord list filename> | tar xvf -
End of explanation
!mkdir -p galaxy_images_training/quarry_files
Explanation: rename files with HSC-id
steps:
1) figure out what suffix we're using (e.g. "a")
2) convert suffix into zero-indexed integer (e.g. 0)
3) determine which HSC-ids corresponded to that file
- e.g. from suffix_int*200 to (suffix_int+1)*200
4) swapping the line number prefix with an HSC_id prefix, while preserving the rest of the filename (esp. the band information)
make target directory
This'll hold the renamed files for all the batches
End of explanation
prefix_map = { char:i for i, char in enumerate(["a","b","c","d","e",
"f","g","h","i","j"])
}
prefix = "a"
prefix_int = prefix_map[prefix]
print(prefix_int)
Explanation: set prefix
End of explanation
files = glob.glob("galaxy_images_training/quarry_files_{prefix}/*".format(prefix=prefix))
file_numbers = [int(os.path.basename(file).split("-")[0]) for file in files]
file_numbers = np.array(sorted(file_numbers))
desired_range = np.arange(2,1002)
print("missing file numbers: ", desired_range[~np.isin(desired_range, file_numbers)])
Explanation: find which files are missing
What should I do about these incomplete files? idk, I guess just throw them out for now.
End of explanation
i_start = prefix_int*200
i_end = (prefix_int+1)*200
print(i_start, i_end)
HSC_ids_for_prefix = selected_galaxies.iloc[i_start:i_end].HSC_id.values
HSC_ids_for_prefix
i_file = 1
for HSC_id in HSC_ids_for_prefix:
for filter in filters:
i_file += 1
filenames = glob.glob("galaxy_images_training/quarry_files_{prefix}/{i_file}-*-{filter}-*.fits".format(
prefix=prefix,
i_file=i_file,
filter=filter))
if len(filenames) > 1:
raise ValueError("too many matching files for i_file={}".format(i_file))
elif len(filenames) == 0:
print("skipping missing i_file={}".format(i_file))
continue
old_filename = filenames[0]
new_filename = old_filename.replace("/{i_file}-".format(i_file=i_file),
"/{HSC_id}-".format(HSC_id=HSC_id))
new_filename = new_filename.replace("quarry_files_{prefix}".format(prefix=prefix),
"quarry_files")
if os.path.exists(old_filename):
# for now use a copy operation, so I can fix things if it breaks,
# but later this should be a move instead
os.rename(old_filename, new_filename)
Explanation: get HSC ids for index
End of explanation
for file in glob.glob("galaxy_images_training/tmp_quarry.?.txt"):
os.remove(file)
for file in glob.glob("galaxy_images_training/quarry_files_?"):
# will fail if the directory isn't empty
# (directory should be empty after moving the renamed files to `quarry_files`)
os.rmdir(file)
filename = "galaxy_images_training/tmp_quarry.txt"
if os.path.exists(filename):
os.remove(filename)
Explanation: clean up old files
End of explanation |
888 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of claving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'noresm2-mh', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: NCC
Source ID: NORESM2-MH
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:24
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adative grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calulated. Include the temporal coupling frequeny from the atmosphere, whether or not a seperate SMB model is used, and if so details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of claving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description if ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation |
889 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandas convert JSON into a DataFrame
This is a notebook for the medium article How to convert JSON into a Pandas DataFrame?
Please check out article for instructions
License
Step1: 1. Reading simple JSON from a local file
Step2: 2. Reading simple JSON from a URL
Step3: 3. Flattening nested list from JSON object
Step4: 4. Flattening nested list and dict from JSON object
Step5: 5. Extracting a value from deeply nested JSON | Python Code:
import pandas as pd
Explanation: Pandas convert JSON into a DataFrame
This is a notebook for the medium article How to convert JSON into a Pandas DataFrame?
Please check out article for instructions
License: BSD 2-Clause
End of explanation
df = pd.read_json('data/simple.json')
df
df.info()
Explanation: 1. Reading simple JSON from a local file
End of explanation
URL = 'http://raw.githubusercontent.com/BindiChen/machine-learning/master/data-analysis/027-pandas-convert-json/data/simple.json'
df = pd.read_json(URL)
df
df.info()
Explanation: 2. Reading simple JSON from a URL
End of explanation
df = pd.read_json('data/nested_list.json')
df
import json
# load data using Python JSON module
with open('data/nested_list.json','r') as f:
data = json.loads(f.read())
# Flatten data
df_nested_list = pd.json_normalize(data, record_path =['students'])
df_nested_list
# To include school_name and class
df_nested_list = pd.json_normalize(
data,
record_path =['students'],
meta=['school_name', 'class']
)
df_nested_list
Explanation: 3. Flattening nested list from JSON object
End of explanation
### working
import json
# load data using Python JSON module
with open('data/nested_mix.json','r') as f:
data = json.loads(f.read())
# Normalizing data
df = pd.json_normalize(data, record_path =['students'])
df
# Normalizing data
df = pd.json_normalize(
data,
record_path =['students'],
meta=[
'class',
['info', 'president'],
['info', 'contacts', 'tel']
]
)
df
Explanation: 4. Flattening nested list and dict from JSON object
End of explanation
df = pd.read_json('data/nested_deep.json')
df
type(df['students'][0])
# to install glom inside your python env through the notebook
# pip install glom
from glom import glom
df['students'].apply(lambda row: glom(row, 'grade.math'))
Explanation: 5. Extracting a value from deeply nested JSON
End of explanation |
890 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mri', 'mri-esm2-0', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: MRI
Source ID: MRI-ESM2-0
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:19
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensionsal forcing variables, e.g. U, V, W, T, Q, P, conventive mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensionsal forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. &Document the relative weight given to climate performance metrics versus process oriented metrics, &and on the possible conflicts with parameterization level tuning. In particular describe any struggle &with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixinrg rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
891 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
There is an outlier
Step1: Expenses Data Distribution
Step2: We can see that there are three employees spend considerably more than others.
Step3: Salary Data Distribution
Step4: There are four people with a considerably higher salary than other employees (>\$600K).
Step5: Top salary employees are not the same as top expenses employees.
Step6: High 'from_this_person_to_poi' does not necessarily mean high 'from_poi_to_this_person', so both variables should evaluated.
Step7: Perhaps makes sense combining salary with bonus. A new feature could be salary+bonus
Step8: Feature Selection | Python Code:
del data_dict['TOTAL']
df = pandas.DataFrame.from_dict(data_dict, orient='index')
df.head()
print "Dataset size: %d rows x %d columns"%df.shape
df.dtypes
print "Feature | Missing values"
print "---|---"
for column in df.columns:
if column != 'poi':
print "%s | %d"%(column,(df[column] == 'NaN').sum())
df['poi'].hist( figsize=(5,3))
plt.xticks(np.array([0.05,0.95]), ['Non POI', 'POI'], rotation='horizontal')
plt.tick_params(bottom='on',top='off', right='off', left='on')
df['expenses'] = df['expenses'].astype('float')
df['salary'] = df['salary'].astype('float')
df['from_this_person_to_poi'] = df['from_this_person_to_poi'].astype('float')
df['from_poi_to_this_person'] = df['from_poi_to_this_person'].astype('float')
Explanation: There is an outlier: TOTAL. This certainly a column for totals. Let's remove it before proceeding.
End of explanation
df['expenses'] = df['expenses'].astype('float')
df['expenses'].hist(bins=100)
plt.figure()
df['expenses'].hist(bins=50)
plt.figure()
df['expenses'].hist(bins=20)
Explanation: Expenses Data Distribution
End of explanation
df[df['expenses']>150000]
Explanation: We can see that there are three employees spend considerably more than others.
End of explanation
df['salary'] = df['salary'].astype('float')
df['salary'].hist(bins=100)
plt.figure()
df['salary'].hist(bins=50)
plt.figure()
df['salary'].hist(bins=20)
Explanation: Salary Data Distribution
End of explanation
df[df['salary']>600000]
Explanation: There are four people with a considerably higher salary than other employees (>\$600K).
End of explanation
df[['from_this_person_to_poi','from_poi_to_this_person']].hist(bins=20)
df[df['from_this_person_to_poi']>280]
df[df['from_poi_to_this_person']>300]
df[['from_this_person_to_poi','from_poi_to_this_person']].sort('from_poi_to_this_person').plot()
Explanation: Top salary employees are not the same as top expenses employees.
End of explanation
df['bonus'] = df['bonus'].astype('float')
df[['salary','bonus']].plot(kind='scatter', x='salary', y='bonus')
Explanation: High 'from_this_person_to_poi' does not necessarily mean high 'from_poi_to_this_person', so both variables should evaluated.
End of explanation
df['salary+bonus']=df['salary']+df['bonus']
list(df.columns)
1+np.nan
for _,obj in data_dict.items():
salary_bonus_ratio = np.float(obj['bonus'])/np.float(obj['salary'])
if np.isnan(salary_bonus_ratio):
salary_bonus_ratio = -1
obj['salary_bonus_ratio'] = salary_bonus_ratio
data_dict.items()
Explanation: Perhaps makes sense combining salary with bonus. A new feature could be salary+bonus
End of explanation
import sys
sys.path.append("./tools/")
from feature_format import featureFormat, targetFeatureSplit
import numpy as np
features_all = [
'poi',
'salary',
'to_messages',
'deferral_payments',
'total_payments',
'exercised_stock_options',
'bonus',
'restricted_stock',
'shared_receipt_with_poi',
'restricted_stock_deferred',
'total_stock_value',
'expenses',
'loan_advances',
'from_messages',
'other',
'from_this_person_to_poi',
'director_fees',
'deferred_income',
'long_term_incentive',
# 'email_address',
'from_poi_to_this_person',
'salary_bonus_ratio',
]
features_list=features_all
data = featureFormat(data_dict, features_list, sort_keys = True)
labels, features = targetFeatureSplit(data)
from sklearn.feature_selection import SelectKBest, f_classif
# Perform feature selection
selector = SelectKBest(f_classif, k=5)
selector.fit(features, labels)
# Get the raw p-values for each feature, and transform from p-values into scores
scores = -np.log10(selector.pvalues_)
scores= selector.scores_
predictors = features_list[1:]
sort_indices = np.argsort(scores)[::-1]
# Plot the scores. See how "Pclass", "Sex", "Title", and "Fare" are the best?
X=np.arange(len(predictors))
Y=scores[sort_indices]
X_labels = map(lambda x: x.replace("_"," ").title(),np.array(predictors)[sort_indices])
ax = plt.subplot(111)
ax.bar(X, Y,edgecolor='white',facecolor='#9999ff')
plt.xticks(X+0.4, X_labels, rotation='vertical')
for x,y in zip(X,Y):
ax.text(x+0.4, y+0.05, '%.2f' % y, ha='center', va= 'bottom')
pass
plt.tick_params(bottom='off',top='off', right='off', left='off')
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
zip(np.array(predictors)[sort_indices],scores[sort_indices])
list(np.array(predictors)[sort_indices])
Explanation: Feature Selection
End of explanation |
892 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reading Annotations from a GO Association File (GAF)
Download a GAF file
Load the GAF file into the GafReader
Get Annotations
Bonus
Step1: 2) Load the GAF file into the GafReader
Step2: 3) Get Annotations
The annotations will be stored in three dicts, one for each GODAG branch, where
Step3: Bonus
Step4: Namedtuple fields
DB # 0 required 1 UniProtKB
DB_ID # 1 required 1 P12345
DB_Symbol # 2 required 1 PHO3
Qualifier # 3 optional 0 or greater NOT
GO_ID # 4 required 1 GO | Python Code:
import os
if not os.path.exists('goa_human.gaf.gz'):
!wget http://current.geneontology.org/annotations/goa_human.gaf.gz
!gunzip goa_human.gaf.gz
Explanation: Reading Annotations from a GO Association File (GAF)
Download a GAF file
Load the GAF file into the GafReader
Get Annotations
Bonus: Each line in the GAF file is stored in a namedtuple:
* Namedtuple fields
* Print a subset of the namedtuple fields
1) Download a GAF file
End of explanation
from goatools.anno.gaf_reader import GafReader
ogaf = GafReader("goa_human.gaf")
Explanation: 2) Load the GAF file into the GafReader
End of explanation
ns2assc = ogaf.get_ns2assc()
for namespace, associations in ns2assc.items():
for protein_id, go_ids in sorted(associations.items())[:3]:
print("{NS} {PROT:7} : {GOs}".format(
NS=namespace,
PROT=protein_id,
GOs=' '.join(sorted(go_ids))))
Explanation: 3) Get Annotations
The annotations will be stored in three dicts, one for each GODAG branch, where:
* the key is the protein ID and
* the value is a list of GO IDs associated with the protein.
End of explanation
# Sort the list of GAF namedtuples by ID
nts = sorted(ogaf.associations, key=lambda nt:nt.DB_ID)
# Print one namedtuple
print(nts[0])
Explanation: Bonus: The GAF is stored as a list of named tuples
The list of namedtuples is stored in the GafReader data member named associations.
Each namedtuple stores data for one line in the GAF file.
End of explanation
fmtpat = '{DB_ID} {DB_Symbol:13} {GO_ID} {Evidence_Code} {Date} {Assigned_By}'
for nt_line in nts[:10]:
print(fmtpat.format(**nt_line._asdict()))
Explanation: Namedtuple fields
DB # 0 required 1 UniProtKB
DB_ID # 1 required 1 P12345
DB_Symbol # 2 required 1 PHO3
Qualifier # 3 optional 0 or greater NOT
GO_ID # 4 required 1 GO:0003993
DB_Reference # 5 required 1 or greater PMID:2676709
Evidence_Code # 6 required 1 IMP
With_From # 7 optional 0 or greater GO:0000346
Aspect # 8 required 1 F
DB_Name # 9 optional 0 or 1 Toll-like receptor 4
DB_Synonym # 10 optional 0 or greater hToll|Tollbooth
DB_Type # 11 required 1 protein
Taxon # 12 required 1 or 2 taxon:9606
Date # 13 required 1 20090118
Assigned_By # 14 required 1 SGD
Annotation_Extension # 15 optional 0 or greater part_of(CL:0000576)
Gene_Product_Form_ID # 16 optional 0 or 1 UniProtKB:P12345-2
Print a subset of the namedtuple fields
End of explanation |
893 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Stage 1
Step1: Stage 2
Step2: Stage 3
Step3: Stage 4 | Python Code:
import pytextrank
import sys
path_stage0 = "dat/mih.json"
path_stage1 = "o1.json"
with open(path_stage1, 'w') as f:
for graf in pytextrank.parse_doc(pytextrank.json_iter(path_stage0)):
f.write("%s\n" % pytextrank.pretty_print(graf._asdict()))
# to view output in this notebook
print(pytextrank.pretty_print(graf))
Explanation: Stage 1:
Perform statistical parsing/tagging on a document in JSON format
INPUTS: JSON doc for the text input
OUTPUT: JSON format ParsedGraf(id, sha1, graf)
End of explanation
path_stage1 = "o1.json"
path_stage2 = "o2.json"
graph, ranks = pytextrank.text_rank(path_stage1)
pytextrank.render_ranks(graph, ranks)
with open(path_stage2, 'w') as f:
for rl in pytextrank.normalize_key_phrases(path_stage1, ranks):
f.write("%s\n" % pytextrank.pretty_print(rl._asdict()))
# to view output in this notebook
print(pytextrank.pretty_print(rl))
import networkx as nx
import pylab as plt
nx.draw(graph, with_labels=True)
plt.show()
Explanation: Stage 2:
Collect and normalize the key phrases from a parsed document
INPUTS: <stage1>
OUTPUT: JSON format RankedLexeme(text, rank, ids, pos)
End of explanation
path_stage1 = "o1.json"
path_stage2 = "o2.json"
path_stage3 = "o3.json"
kernel = pytextrank.rank_kernel(path_stage2)
with open(path_stage3, 'w') as f:
for s in pytextrank.top_sentences(kernel, path_stage1):
f.write(pytextrank.pretty_print(s._asdict()))
f.write("\n")
# to view output in this notebook
print(pytextrank.pretty_print(s._asdict()))
Explanation: Stage 3:
Calculate a significance weight for each sentence, using MinHash to approximate a Jaccard distance from key phrases determined by TextRank
INPUTS: <stage1> <stage2>
OUTPUT: JSON format SummarySent(dist, idx, text)
End of explanation
path_stage2 = "o2.json"
path_stage3 = "o3.json"
phrases = ", ".join(set([p for p in pytextrank.limit_keyphrases(path_stage2, phrase_limit=12)]))
sent_iter = sorted(pytextrank.limit_sentences(path_stage3, word_limit=150), key=lambda x: x[1])
s = []
for sent_text, idx in sent_iter:
s.append(pytextrank.make_sentence(sent_text))
graf_text = " ".join(s)
print("**excerpts:** %s\n\n**keywords:** %s" % (graf_text, phrases,))
Explanation: Stage 4:
Summarize a document based on most significant sentences and key phrases
INPUTS: <stage2> <stage3>
OUTPUT: Markdown format
End of explanation |
894 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the begining of each batch.
Step21: Encoding
Implement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn().
Step24: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
Step27: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output fuction using lambda to transform it's input, logits, to class logits.
Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save the batch_size and save_path parameters for inference.
Step43: Checkpoint
Step46: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
Step48: Translate
This will translate translate_sentence from English to French. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (50, 53)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
st = source_text # eos only on target.
tt = target_text.replace("\n", ' <EOS>\n') + ' <EOS>' # one eos at the end
stl = [ [ source_vocab_to_int[w] for w in l.split() ] for l in st.split('\n') ]
ttl = [ [ target_vocab_to_int[w] for w in l.split() ] for l in tt.split('\n') ]
return stl, ttl
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
# TODO: Implement Function
input = tf.placeholder(tf.int32, [None, None], name='input')
target = tf.placeholder(tf.int32, [None, None], name='target')
learn_rate = tf.placeholder(tf.float32, name='learn_rate')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return input, target, learn_rate, keep_prob
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoding_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Return the placeholders in the following the tuple (Input, Targets, Learing Rate, Keep Probability)
End of explanation
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
goids = [ target_vocab_to_int["<GO>"] ] * batch_size
tfgoids = tf.reshape(goids, [-1, 1])
no_ends = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
with_go = tf.concat([tfgoids, no_ends], 1)
return with_go
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
Explanation: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the begining of each batch.
End of explanation
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
# TODO: Implement Function
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
drops = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
ecell = tf.contrib.rnn.MultiRNNCell([drops] * num_layers)
_, enc_state = tf.nn.dynamic_rnn(ecell, rnn_inputs, dtype=tf.float32)
return enc_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn().
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
# TODO: Implement Function
# Training Decoder
train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)
outputs = tf.nn.dropout(train_pred, keep_prob=keep_prob)
train_logits = output_fn(outputs)
return train_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
# TODO: Implement Function
# Inference Decoder
infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length - 1, vocab_size)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)
return inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
End of explanation
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
start_of_sequence_id = target_vocab_to_int['<GO>']
end_of_sequence_id = target_vocab_to_int['<EOS>']
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
drops = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
dcell = tf.contrib.rnn.MultiRNNCell([drops] * num_layers)
with tf.variable_scope('decoding') as decoding_scope:
output_fn = lambda x: tf.contrib.layers.fully_connected(x,
vocab_size, None,
scope=decoding_scope)
train_logits = decoding_layer_train(encoder_state, dcell, dec_embed_input,
sequence_length, decoding_scope,
output_fn, keep_prob)
with tf.variable_scope('decoding', reuse=True) as decoding_scope:
infer_logits = decoding_layer_infer(encoder_state, dcell, dec_embeddings,
start_of_sequence_id, end_of_sequence_id,
sequence_length, vocab_size, decoding_scope,
output_fn, keep_prob)
return train_logits, infer_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output fuction using lambda to transform it's input, logits, to class logits.
Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Decoder embedding size
:param dec_embedding_size: Encoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size,
enc_embedding_size,
initializer = tf.random_uniform_initializer(-1,1))
encoder_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob)
new_target = process_decoding_input(target_data, target_vocab_to_int, batch_size)
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, new_target)
train_logits, infer_logits = decoding_layer(dec_embed_input, dec_embeddings,
encoder_state, target_vocab_size,
sequence_length, rnn_size, num_layers,
target_vocab_to_int, keep_prob)
return train_logits, infer_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
End of explanation
# Number of Epochs
epochs = 8
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 256
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 128
decoding_embedding_size = 128
# Learning Rate
learning_rate = 0.005
# Dropout Keep Probability
keep_probability = 0.5
# Show stats for every n number of batches
show_every_n_batches = 100
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
import time
from datetime import timedelta
def timer(start):
elapsed = time.time() - start
hours, rem = divmod(elapsed, 3600)
minutes, seconds = divmod(rem, 60)
return "{:0>2}:{:0>2}".format(int(minutes),int(seconds))
DON'T MODIFY ANYTHING IN THIS CELL
import time
start_train = time.time()
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
if (epoch_i * (len(source_int_text) // batch_size) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, '
'Validation Accuracy: {:>6.3f}, Loss: {:>6.3f} elapsed={}'.format(
epoch_i, batch_i,
len(source_int_text) // batch_size,
train_acc, valid_acc, loss,
timer(start_train)))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
unk = vocab_to_int['<UNK>']
getid = lambda w: vocab_to_int[w] if w in vocab_to_int else unk
word_ids = [ getid(w) for w in sentence.split() ]
return word_ids
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
End of explanation
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation |
895 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to NLTK
We have seen how to do some basic text processing in Python, now we introduce an open source framework for natural language processing that can further help to work with human languages
Step1: Tokens
The basic atomic part of each text are the tokens. A token is the NLP name for a sequence of characters that we want to treat as a group.
We have seen how we can extract tokens by splitting the text at the blank spaces.
NTLK has a function word_tokenize() for it
Step2: 21 tokens extracted, which include words and punctuation.
Note that the tokens are different than what a split by blank spaces would obtained, e.g. "can't" is by NTLK considered TWO tokens
Step3: And we can apply it to an entire book, "The Prince" by Machiavelli that we used last time
Step4: As mentioned above, the NTLK tokeniser works in a more sophisticated way than just splitting by spaces, therefore we got this time more tokens.
Sentences
NTLK has a function to tokenise a text not in words but in sentences.
Step5: As you see, it is not splitting just after each full stop but check if it's part of an acronym (U.S.) or a number (0.99).
It also splits correctly sentences after question or exclamation marks but not after commas.
Step6: Most common tokens
What are the 20 most frequently occurring (unique) tokens in the text? What is their frequency?
The NTLK FreqDist class is used to encode “frequency distributions”, which count the number of times that something occurs, for example a token.
Its most_common() method then returns a list of tuples where each tuple is of the form (token, frequency). The list is sorted in descending order of frequency.
Step7: Comma is the most common
Step8: We can also remove any capital letters before tokenising
Step9: Now we removed the punctuation and the capital letters but the most common token is "the", not a significative word ...
As we have seen last time, these are so-called stop words that are very common and are normally stripped from a text when doing these kind of analysis.
Meaningful most common tokens
A simple approach could be to filter the tokens that have a length greater than 5 and frequency of more than 150.
Step10: This would work but would leave out also tokens such as I and you which are actually significative.
The better approach - that we have seen earlier how - is to remove stopwords using external files containing the stop words.
NLTK has a corpus of stop words in several languages
Step11: Now we excluded words such as the but we can improve further the list by looking at semantically similar words, such as plural and singular versions.
Step12: Stemming
Above, in the list of words we have both prince and princes which are respectively the singular and plural version of the same word (the stem). The same would happen with verb conjugation (love and loving are considered different words but are actually inflections of the same verb).
Stemmer is the tool that reduces such inflectional forms into their stem, base or root form and NLTK has several of them (each with a different heuristic algorithm).
Step13: And now we apply one of the NLTK stemmer, the Porter stemmer
Step14: As you see, all 5 different words have been reduced to the same stem and would be now the same lexical token.
Step15: Now the word princ is counted 281 times, exactly like the sum of prince and princes.
A note here
Step16: Lemma
Lemmatization usually refers to doing things properly with the use of a vocabulary and morphological analysis of words, normally aiming to remove inflectional endings only and to return the base or dictionary form of a word, which is known as the lemma.
While a stemmer operates on a single word without knowledge of the context, a lemmatiser can take the context in consideration.
NLTK has also a built-in lemmatiser, so let's see it in action
Step17: We tell the lemmatise that the words are nouns. In this case it considers the same lemma words such as list (singular noun) and lists (plural noun) but leave as they are the other words.
Step18: We get a different result if we say that the words are verbs.
They have all the same lemma, in fact they could be all different inflections or conjugation of a verb.
The type of words that can be used are
Step19: It works with different adjectives, it doesn't look only at prefixes and suffixes.
You would wonder why stemmers are used, instead of always using lemmatisers
Step20: Yes, the lemma now is prince.
But note that we consider all words in the book as nouns, while actually a proper way would be to apply the correct type to each single word.
Part of speech (PoS)
In traditional grammar, a part of speech (abbreviated form
Step21: The NLTK function pos_tag() will tag each token with the estimated PoS.
NLTK has 13 categories of PoS. You can check the acronym using the NLTK help function
Step22: Which are the most common PoS in The Prince book?
Step24: It's not nouns (NN) but interections (IN) such as preposition or conjunction.
Extra note | Python Code:
sampleText1 = "The Elephant's 4 legs: THE Pub! You can't believe it or can you, the believer?"
sampleText2 = "Pierre Vinken, 61 years old, will join the board as a nonexecutive director Nov. 29."
Explanation: Introduction to NLTK
We have seen how to do some basic text processing in Python, now we introduce an open source framework for natural language processing that can further help to work with human languages: NLTK (Natural Language ToolKit).
Tokenise a text
Let's start with a simple text in a Python string:
End of explanation
import nltk
s1Tokens = nltk.word_tokenize(sampleText1)
s1Tokens
len(s1Tokens)
Explanation: Tokens
The basic atomic part of each text are the tokens. A token is the NLP name for a sequence of characters that we want to treat as a group.
We have seen how we can extract tokens by splitting the text at the blank spaces.
NTLK has a function word_tokenize() for it:
End of explanation
s2Tokens = nltk.word_tokenize(sampleText2)
s2Tokens
Explanation: 21 tokens extracted, which include words and punctuation.
Note that the tokens are different than what a split by blank spaces would obtained, e.g. "can't" is by NTLK considered TWO tokens: "can" and "n't" (= "not") while a tokeniser that splits text by spaces would consider it a single token: "can't".
Let's see another example:
End of explanation
# If you would like to work with the raw text you can use 'bookRaw'
with open('../datasets/ThePrince.txt', 'r') as f:
bookRaw = f.read()
bookTokens = nltk.word_tokenize(bookRaw)
bookText = nltk.Text(bookTokens) # special format
nBookTokens= len(bookTokens) # or alternatively len(bookText)
print ("*** Analysing book ***")
print ("The book is {} chars long".format (len(bookRaw)))
print ("The book has {} tokens".format (nBookTokens))
Explanation: And we can apply it to an entire book, "The Prince" by Machiavelli that we used last time:
End of explanation
text1 = "This is the first sentence. A liter of milk in the U.S. costs $0.99. Is this the third sentence? Yes, it is!"
sentences = nltk.sent_tokenize(text1)
len(sentences)
sentences
Explanation: As mentioned above, the NTLK tokeniser works in a more sophisticated way than just splitting by spaces, therefore we got this time more tokens.
Sentences
NTLK has a function to tokenise a text not in words but in sentences.
End of explanation
sentences = nltk.sent_tokenize(bookRaw) # extract sentences
nSent = len(sentences)
print ("The book has {} sentences".format (nSent))
print ("and each sentence has in average {} tokens".format (nBookTokens / nSent))
Explanation: As you see, it is not splitting just after each full stop but check if it's part of an acronym (U.S.) or a number (0.99).
It also splits correctly sentences after question or exclamation marks but not after commas.
End of explanation
def get_top_words(tokens):
# Calculate frequency distribution
fdist = nltk.FreqDist(tokens)
return fdist.most_common()
topBook = get_top_words(bookTokens)
# Output top 20 words
topBook[:20]
Explanation: Most common tokens
What are the 20 most frequently occurring (unique) tokens in the text? What is their frequency?
The NTLK FreqDist class is used to encode “frequency distributions”, which count the number of times that something occurs, for example a token.
Its most_common() method then returns a list of tuples where each tuple is of the form (token, frequency). The list is sorted in descending order of frequency.
End of explanation
topWords = [(freq, word) for (word,freq) in topBook if word.isalpha() and freq > 400]
topWords
Explanation: Comma is the most common: we need to remove the punctuation.
Most common alphanumeric tokens
We can use isalpha() to check if the token is a word and not punctuation.
End of explanation
def preprocessText(text, lowercase=True):
if lowercase:
tokens = nltk.word_tokenize(text.lower())
else:
tokens = nltk.word_tokenize(text)
return [word for word in tokens if word.isalpha()]
bookWords = preprocessText(bookRaw)
topBook = get_top_words(bookWords)
# Output top 20 words
topBook[:20]
print ("*** Analysing book ***")
print ("The text has now {} words (tokens)".format (len(bookWords)))
Explanation: We can also remove any capital letters before tokenising:
End of explanation
meaningfulWords = [word for (word,freq) in topBook if len(word) > 5 and freq > 80]
sorted(meaningfulWords)
Explanation: Now we removed the punctuation and the capital letters but the most common token is "the", not a significative word ...
As we have seen last time, these are so-called stop words that are very common and are normally stripped from a text when doing these kind of analysis.
Meaningful most common tokens
A simple approach could be to filter the tokens that have a length greater than 5 and frequency of more than 150.
End of explanation
from nltk.corpus import stopwords
stopwordsEN = set(stopwords.words('english')) # english language
betterWords = [w for w in bookWords if w not in stopwordsEN]
topBook = get_top_words(betterWords)
# Output top 20 words
topBook[:20]
Explanation: This would work but would leave out also tokens such as I and you which are actually significative.
The better approach - that we have seen earlier how - is to remove stopwords using external files containing the stop words.
NLTK has a corpus of stop words in several languages:
End of explanation
'princes' in betterWords
betterWords.count("prince") + betterWords.count("princes")
Explanation: Now we excluded words such as the but we can improve further the list by looking at semantically similar words, such as plural and singular versions.
End of explanation
input1 = "List listed lists listing listings"
words1 = input1.lower().split(' ')
words1
Explanation: Stemming
Above, in the list of words we have both prince and princes which are respectively the singular and plural version of the same word (the stem). The same would happen with verb conjugation (love and loving are considered different words but are actually inflections of the same verb).
Stemmer is the tool that reduces such inflectional forms into their stem, base or root form and NLTK has several of them (each with a different heuristic algorithm).
End of explanation
porter = nltk.PorterStemmer()
[porter.stem(t) for t in words1]
Explanation: And now we apply one of the NLTK stemmer, the Porter stemmer:
End of explanation
stemmedWords = [porter.stem(w) for w in betterWords]
topBook = get_top_words(stemmedWords)
topBook[:20] # Output top 20 words
Explanation: As you see, all 5 different words have been reduced to the same stem and would be now the same lexical token.
End of explanation
from nltk.stem.snowball import SnowballStemmer
stemmerIT = SnowballStemmer("italian")
inputIT = "Io ho tre mele gialle, tu hai una mela gialla e due pere verdi"
wordsIT = inputIT.split(' ')
[stemmerIT.stem(w) for w in wordsIT]
Explanation: Now the word princ is counted 281 times, exactly like the sum of prince and princes.
A note here: Stemming usually refers to a crude heuristic process that chops off the ends of words in the hope of achieving this goal correctly most of the time, and often includes the removal of derivational affixes.
Prince and princes become princ.
A different flavour is the lemmatisation that we will see in one second, but first a note about stemming in other languages than English.
Stemming in other languages
Snowball is an improvement created by Porter: a language to create stemmers and have rules for many more languages than English.
For example Italian:
End of explanation
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
words1
[lemmatizer.lemmatize(w, 'n') for w in words1] # n = nouns
Explanation: Lemma
Lemmatization usually refers to doing things properly with the use of a vocabulary and morphological analysis of words, normally aiming to remove inflectional endings only and to return the base or dictionary form of a word, which is known as the lemma.
While a stemmer operates on a single word without knowledge of the context, a lemmatiser can take the context in consideration.
NLTK has also a built-in lemmatiser, so let's see it in action:
End of explanation
[lemmatizer.lemmatize(w, 'v') for w in words1] # v = verbs
Explanation: We tell the lemmatise that the words are nouns. In this case it considers the same lemma words such as list (singular noun) and lists (plural noun) but leave as they are the other words.
End of explanation
words2 = ['good', 'better']
[porter.stem(w) for w in words2]
[lemmatizer.lemmatize(w, 'a') for w in words2]
Explanation: We get a different result if we say that the words are verbs.
They have all the same lemma, in fact they could be all different inflections or conjugation of a verb.
The type of words that can be used are:
'n' = noun, 'v'=verb, 'a'=adjective, 'r'=adverb
End of explanation
lemmatisedWords = [lemmatizer.lemmatize(w, 'n') for w in betterWords]
topBook = get_top_words(lemmatisedWords)
topBook[:20] # Output top 20 words
Explanation: It works with different adjectives, it doesn't look only at prefixes and suffixes.
You would wonder why stemmers are used, instead of always using lemmatisers: stemmers are much simpler, smaller and faster and for many applications good enough.
Now we lemmatise the book:
End of explanation
text1 = "Children shouldn't drink a sugary drink before bed."
tokensT1 = nltk.word_tokenize(text1)
nltk.pos_tag(tokensT1)
Explanation: Yes, the lemma now is prince.
But note that we consider all words in the book as nouns, while actually a proper way would be to apply the correct type to each single word.
Part of speech (PoS)
In traditional grammar, a part of speech (abbreviated form: PoS or POS) is a category of words which have similar grammatical properties.
For example, an adjective (red, big, quiet, ...) describe properties while a verb (throw, walk, have) describe actions or states.
Commonly listed parts of speech are noun, verb, adjective, adverb, pronoun, preposition, conjunction, interjection.
End of explanation
nltk.help.upenn_tagset('RB')
Explanation: The NLTK function pos_tag() will tag each token with the estimated PoS.
NLTK has 13 categories of PoS. You can check the acronym using the NLTK help function:
End of explanation
tokensAndPos = nltk.pos_tag(bookTokens)
posList = [thePOS for (word, thePOS) in tokensAndPos]
fdistPos = nltk.FreqDist(posList)
fdistPos.most_common(5)
nltk.help.upenn_tagset('IN')
Explanation: Which are the most common PoS in The Prince book?
End of explanation
# Parsing sentence structure
text2 = nltk.word_tokenize("Alice loves Bob")
grammar = nltk.CFG.fromstring(
S -> NP VP
VP -> V NP
NP -> 'Alice' | 'Bob'
V -> 'loves'
)
parser = nltk.ChartParser(grammar)
trees = parser.parse_all(text2)
for tree in trees:
print(tree)
Explanation: It's not nouns (NN) but interections (IN) such as preposition or conjunction.
Extra note: Parsing the grammar structure
Words can be ambiguous and sometimes is not easy to understand which kind of POS is a word, for example in the sentence "visiting aunts can be a nuisance", is visiting a verb or an adjective?
Tagging a PoS depends on the context, which can be ambiguous.
Making sense of a sentence is easier if it follows a well-defined grammatical structure, such as : subject + verb + object
NLTK allows to define a formal grammar which can then be used to parse a text. The NLTK ChartParser is a procedure for finding one or more trees (sentences have internal organisation that can be represented using a tree) corresponding to a grammatically well-formed sentence.
End of explanation |
896 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bungee Dunk
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License
Step1: Suppose you want to set the world record for the highest "bungee dunk", as shown in this video. Since the record is 70 m, let's design a jump for 80 m.
We'll make the following modeling assumptions
Step3: Now here's a version of make_system that takes a Params object as a parameter.
make_system uses the given value of v_term to compute the drag coefficient C_d.
Step4: Let's make a System
Step6: spring_force computes the force of the cord on the jumper.
If the spring is not extended, it returns zero_force, which is either 0 Newtons or 0, depending on whether the System object has units. I did that so the slope function works correctly with and without units.
Step7: The spring force is 0 until the cord is fully extended. When it is extended 1 m, the spring force is 40 N.
Step9: drag_force computes drag as a function of velocity
Step10: Here's the drag force at 60 meters per second.
Step11: Acceleration due to drag at 60 m/s is approximately g, which confirms that 60 m/s is terminal velocity.
Step13: Now here's the slope function
Step14: As always, let's test the slope function with the initial params.
Step15: And then run the simulation.
Step16: Here's the plot of position as a function of time.
Step17: After reaching the lowest point, the jumper springs back almost to almost 70 m and oscillates several times. That looks like more oscillation that we expect from an actual jump, which suggests that there is some dissipation of energy in the real world that is not captured in our model. To improve the model, that might be a good thing to investigate.
But since we are primarily interested in the initial descent, the model might be good enough for now.
We can use min to find the lowest point
Step18: At the lowest point, the jumper is still too high, so we'll need to increase L or decrease k.
Here's velocity as a function of time
Step19: Although we compute acceleration inside the slope function, we don't get acceleration as a result from run_solve_ivp.
We can approximate it by computing the numerical derivative of ys
Step20: And we can compute the maximum acceleration the jumper experiences
Step21: Relative to the acceleration of gravity, the jumper "pulls" about "1.7 g's".
Step23: Solving for length
Assuming that k is fixed, let's find the length L that makes the minimum altitude of the jumper exactly 0.
The metric we are interested in is the lowest point of the first oscillation. For both efficiency and accuracy, it is better to stop the simulation when we reach this point, rather than run past it and then compute the minimum.
Here's an event function that stops the simulation when velocity is 0.
Step24: As usual, we should test it with the initial conditions.
Step25: If we call run_solve_ivp with this event function, we'll see that the simulation stops immediately because the initial velocity is 0.
We could work around that by starting with a very small, non-zero initial velocity.
But we can also avoid it by setting the direction attribute of the event_func
Step26: The value 1 (or any positive value) indicates that the event should only occur if the result from event_func is increasing.
A negative value would indicate that the results should be decreasing.
Now we can test it and confirm that it stops at the bottom of the jump.
Step27: Here are the results.
Step28: And here's the height of the jumper at the lowest point.
Step30: Exercise | Python Code:
# install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
Explanation: Bungee Dunk
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
End of explanation
params = Params(y_attach = 80, # m,
v_init = 0, # m / s,
g = 9.8, # m/s**2,
mass = 75, # kg,
area = 1, # m**2,
rho = 1.2, # kg/m**3,
v_term = 60, # m / s,
L = 25, # m,
k = 40, # N / m
)
Explanation: Suppose you want to set the world record for the highest "bungee dunk", as shown in this video. Since the record is 70 m, let's design a jump for 80 m.
We'll make the following modeling assumptions:
Initially the bungee cord hangs from a crane with the attachment point 80 m above a cup of tea.
Until the cord is fully extended, it applies no force to the jumper. It turns out this might not be a good assumption; we will revisit it.
After the cord is fully extended, it obeys Hooke's Law; that is, it applies a force to the jumper proportional to the extension of the cord beyond its resting length.
The jumper is subject to drag force proportional to the square of their velocity, in the opposite of their direction of motion.
Our objective is to choose the length of the cord, L, and its spring constant, k, so that the jumper falls all the way to the tea cup, but no farther!
First I'll create a Param object to contain the quantities we'll need:
Let's assume that the jumper's mass is 75 kg.
With a terminal velocity of 60 m/s.
The length of the bungee cord is L = 40 m.
The spring constant of the cord is k = 20 N / m when the cord is stretched, and 0 when it's compressed.
End of explanation
def make_system(params):
Makes a System object for the given params.
params: Params object
returns: System object
area, mass = params.area, params.mass
g, rho = params.g, params.rho
v_init, v_term = params.v_init, params.v_term
y_attach = params.y_attach
C_d = 2 * mass * g / (rho * area * v_term**2)
init = State(y=y_attach, v=v_init)
t_end = 20
return System(params, C_d=C_d,
init=init, t_end=t_end)
Explanation: Now here's a version of make_system that takes a Params object as a parameter.
make_system uses the given value of v_term to compute the drag coefficient C_d.
End of explanation
system = make_system(params)
Explanation: Let's make a System
End of explanation
def spring_force(y, system):
Computes the force of the bungee cord on the jumper:
y: height of the jumper
Uses these variables from system|
y_attach: height of the attachment point
L: resting length of the cord
k: spring constant of the cord
returns: force in N
y_attach, L, k = system.y_attach, system.L, system.k
distance_fallen = y_attach - y
if distance_fallen <= L:
return 0
extension = distance_fallen - L
f_spring = k * extension
return f_spring
Explanation: spring_force computes the force of the cord on the jumper.
If the spring is not extended, it returns zero_force, which is either 0 Newtons or 0, depending on whether the System object has units. I did that so the slope function works correctly with and without units.
End of explanation
spring_force(55, system)
spring_force(54, system)
Explanation: The spring force is 0 until the cord is fully extended. When it is extended 1 m, the spring force is 40 N.
End of explanation
def drag_force(v, system):
Computes drag force in the opposite direction of `v`.
v: velocity
system: System object
returns: drag force
rho, C_d, area = system.rho, system.C_d, system.area
f_drag = -np.sign(v) * rho * v**2 * C_d * area / 2
return f_drag
Explanation: drag_force computes drag as a function of velocity:
End of explanation
v = -60
f_drag = drag_force(v, system)
Explanation: Here's the drag force at 60 meters per second.
End of explanation
a_drag = f_drag / system.mass
a_drag
Explanation: Acceleration due to drag at 60 m/s is approximately g, which confirms that 60 m/s is terminal velocity.
End of explanation
def slope_func(t, state, system):
Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing g, rho,
C_d, area, and mass
returns: derivatives of y and v
y, v = state
mass, g = system.mass, system.g
a_drag = drag_force(v, system) / mass
a_spring = spring_force(y, system) / mass
dvdt = -g + a_drag + a_spring
return v, dvdt
Explanation: Now here's the slope function:
End of explanation
slope_func(0, system.init, system)
Explanation: As always, let's test the slope function with the initial params.
End of explanation
results, details = run_solve_ivp(system, slope_func)
details.message
Explanation: And then run the simulation.
End of explanation
def plot_position(results):
results.y.plot()
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results)
Explanation: Here's the plot of position as a function of time.
End of explanation
min(results.y)
Explanation: After reaching the lowest point, the jumper springs back almost to almost 70 m and oscillates several times. That looks like more oscillation that we expect from an actual jump, which suggests that there is some dissipation of energy in the real world that is not captured in our model. To improve the model, that might be a good thing to investigate.
But since we are primarily interested in the initial descent, the model might be good enough for now.
We can use min to find the lowest point:
End of explanation
def plot_velocity(results):
results.v.plot(color='C1', label='v')
decorate(xlabel='Time (s)',
ylabel='Velocity (m/s)')
plot_velocity(results)
Explanation: At the lowest point, the jumper is still too high, so we'll need to increase L or decrease k.
Here's velocity as a function of time:
End of explanation
a = gradient(results.v)
a.plot(color='C2')
decorate(xlabel='Time (s)',
ylabel='Acceleration (m/$s^2$)')
Explanation: Although we compute acceleration inside the slope function, we don't get acceleration as a result from run_solve_ivp.
We can approximate it by computing the numerical derivative of ys:
End of explanation
max_acceleration = max(a)
max_acceleration
Explanation: And we can compute the maximum acceleration the jumper experiences:
End of explanation
max_acceleration / system.g
Explanation: Relative to the acceleration of gravity, the jumper "pulls" about "1.7 g's".
End of explanation
def event_func(t, state, system):
Return velocity.
y, v = state
return v
Explanation: Solving for length
Assuming that k is fixed, let's find the length L that makes the minimum altitude of the jumper exactly 0.
The metric we are interested in is the lowest point of the first oscillation. For both efficiency and accuracy, it is better to stop the simulation when we reach this point, rather than run past it and then compute the minimum.
Here's an event function that stops the simulation when velocity is 0.
End of explanation
event_func(0, system.init, system)
Explanation: As usual, we should test it with the initial conditions.
End of explanation
event_func.direction = 10
Explanation: If we call run_solve_ivp with this event function, we'll see that the simulation stops immediately because the initial velocity is 0.
We could work around that by starting with a very small, non-zero initial velocity.
But we can also avoid it by setting the direction attribute of the event_func:
End of explanation
results, details = run_solve_ivp(system, slope_func,
events=event_func)
details.message
Explanation: The value 1 (or any positive value) indicates that the event should only occur if the result from event_func is increasing.
A negative value would indicate that the results should be decreasing.
Now we can test it and confirm that it stops at the bottom of the jump.
End of explanation
plot_position(results)
Explanation: Here are the results.
End of explanation
min(results.y)
Explanation: And here's the height of the jumper at the lowest point.
End of explanation
# Solution
def error_func(L, params):
Minimum height as a function of length.
L: length in m
params: Params object
returns: height in m
system = make_system(params.set(L=L))
results, details = run_solve_ivp(system, slope_func,
events=event_func)
min_height = min(results.y)
return min_height
# Solution
guess1 = 25
error_func(guess1, params)
# Solution
guess2 = 30
error_func(guess2, params)
# Solution
res = root_scalar(error_func, params, bracket=[guess1, guess2])
res.flag
# Solution
L = res.root
system_solution = make_system(params.set(L=L))
results, details = run_solve_ivp(system_solution, slope_func,
events=event_func)
details.message
# Solution
min_height = min(results.y)
min_height
Explanation: Exercise: Write an error function that takes L and params as arguments, simulates a bungee jump, and returns the lowest point.
Test the error function with a guess of 25 m and confirm that the return value is about 5 meters.
Use root_scalar with your error function to find the value of L that yields a perfect bungee dunk.
Run a simulation with the result from root_scalar and confirm that it works.
End of explanation |
897 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Loading, saving and exporting data
Pymrio includes several functions for data reading and storing. This section presents the methods to use for saving and loading data already in a pymrio compatible format. For parsing raw MRIO data see the different tutorials for working with available MRIO databases.
Here, we use the included small test MRIO system to highlight the different function. The same functions are available for any MRIO loaded into pymrio. Expect, however, significantly decreased performance due to the size of real MRIO system.
Step1: Basic save and read
To save the full system, use
Step2: To read again from that folder do
Step3: The fileio activities are stored in the included meta data history field
Step4: Storage format
Internally, pymrio stores data in csv format, with the 'economic core' data in the root and each satellite account in a subfolder. Metadata as file as a file describing the data format ('file_parameters.json') are included in each folder.
Step5: The file format for storing the MRIO data can be switched to a binary pickle format with
Step6: This can be used to reduce the storage space required on the disk for large MRIO databases.
Archiving MRIOs databases
To archive a MRIO system after saving use pymrio.archive
Step7: Data can be read directly from such an archive by
Step8: Currently data can not be saved directly into a zip archive.
It is, however, possible to remove the source files after archiving
Step9: Several MRIO databases can be stored in the same archive
Step10: When loading from an archive which includes multiple MRIO databases, specify
one with the parameter 'path_in_arc'
Step11: The pymrio.load function can be used directly to only a specific satellite account
of a MRIO database from a zip archive
Step12: The archive function is a wrapper around python.zipfile module.
There are, however, some differences to the defaults choosen in the original
Step13: This can then be loaded again as separate satellite account
Step14: As all data in pymrio is stored as pandas DataFrame, the full pandas stack for exporting tables is available. For example, to export a table as excel sheet use | Python Code:
import pymrio
import os
io = pymrio.load_test().calc_all()
Explanation: Loading, saving and exporting data
Pymrio includes several functions for data reading and storing. This section presents the methods to use for saving and loading data already in a pymrio compatible format. For parsing raw MRIO data see the different tutorials for working with available MRIO databases.
Here, we use the included small test MRIO system to highlight the different function. The same functions are available for any MRIO loaded into pymrio. Expect, however, significantly decreased performance due to the size of real MRIO system.
End of explanation
save_folder_full = '/tmp/testmrio/full'
io.save_all(path=save_folder_full)
Explanation: Basic save and read
To save the full system, use:
End of explanation
io_read = pymrio.load_all(path=save_folder_full)
Explanation: To read again from that folder do:
End of explanation
io_read.meta
Explanation: The fileio activities are stored in the included meta data history field:
End of explanation
import os
os.listdir(save_folder_full)
Explanation: Storage format
Internally, pymrio stores data in csv format, with the 'economic core' data in the root and each satellite account in a subfolder. Metadata as file as a file describing the data format ('file_parameters.json') are included in each folder.
End of explanation
save_folder_bin = '/tmp/testmrio/binary'
io.save_all(path=save_folder_bin, table_format='pkl')
os.listdir(save_folder_bin)
Explanation: The file format for storing the MRIO data can be switched to a binary pickle format with:
End of explanation
mrio_arc = '/tmp/testmrio/archive.zip'
# Remove a potentially existing archive from before
try:
os.remove(mrio_arc)
except FileNotFoundError:
pass
pymrio.archive(source=save_folder_full, archive=mrio_arc)
Explanation: This can be used to reduce the storage space required on the disk for large MRIO databases.
Archiving MRIOs databases
To archive a MRIO system after saving use pymrio.archive:
End of explanation
tt = pymrio.load_all(mrio_arc)
Explanation: Data can be read directly from such an archive by:
End of explanation
tmp_save = '/tmp/testmrio/tmp'
# Remove a potentially existing archive from before
try:
os.remove(mrio_arc)
except FileNotFoundError:
pass
io.save_all(tmp_save)
print("Directories before archiving: {}".format(os.listdir('/tmp/testmrio')))
pymrio.archive(source=tmp_save, archive=mrio_arc, remove_source=True)
print("Directories after archiving: {}".format(os.listdir('/tmp/testmrio')))
Explanation: Currently data can not be saved directly into a zip archive.
It is, however, possible to remove the source files after archiving:
End of explanation
# Remove a potentially existing archive from before
try:
os.remove(mrio_arc)
except FileNotFoundError:
pass
tmp_save = '/tmp/testmrio/tmp'
io.save_all(tmp_save)
pymrio.archive(source=tmp_save, archive=mrio_arc, path_in_arc='version1/', remove_source=True)
io2 = io.copy()
del io2.emissions
io2.save_all(tmp_save)
pymrio.archive(source=tmp_save, archive=mrio_arc, path_in_arc='version2/', remove_source=True)
Explanation: Several MRIO databases can be stored in the same archive:
End of explanation
io1_load = pymrio.load_all(mrio_arc, path_in_arc='version1/')
io2_load = pymrio.load_all(mrio_arc, path_in_arc='version2/')
print("Extensions of the loaded io1 {ver1} and of io2: {ver2}".format(
ver1=sorted(io1_load.get_extensions()),
ver2=sorted(io2_load.get_extensions())))
Explanation: When loading from an archive which includes multiple MRIO databases, specify
one with the parameter 'path_in_arc':
End of explanation
emissions = pymrio.load(mrio_arc, path_in_arc='version1/emissions')
print(emissions)
Explanation: The pymrio.load function can be used directly to only a specific satellite account
of a MRIO database from a zip archive:
End of explanation
save_folder_em= '/tmp/testmrio/emissions'
io.emissions.save(path=save_folder_em)
Explanation: The archive function is a wrapper around python.zipfile module.
There are, however, some differences to the defaults choosen in the original:
In contrast to zipfile.write,
pymrio.archive raises an
error if the data (path + filename) are identical in the zip archive.
Background: the zip standard allows that files with the same name and path
are stored side by side in a zip file. This becomes an issue when unpacking
this files as they overwrite each other upon extraction.
The standard for the parameter 'compression' is set to ZIP_DEFLATED
This is different from the zipfile default (ZIP_STORED) which would
not give any compression.
See the zipfile docs
for further information.
Depending on the value given for the parameter 'compression'
additional modules might be necessary (e.g. zlib for ZIP_DEFLATED).
Futher information on this can also be found in the zipfile python docs.
Storing or exporting a specific table or extension
Each extension of the MRIO system can be stored separetly with:
End of explanation
emissions = pymrio.load(save_folder_em)
emissions
emissions.D_cba
Explanation: This can then be loaded again as separate satellite account:
End of explanation
io.emissions.D_cba.to_excel('/tmp/testmrio/emission_footprints.xlsx')
Explanation: As all data in pymrio is stored as pandas DataFrame, the full pandas stack for exporting tables is available. For example, to export a table as excel sheet use:
End of explanation |
898 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Word Embeddings
Learning Objectives
You will learn
Step1: This notebook uses TF2.x.
Please check your tensorflow version using the cell below.
Step2: Download the IMDb Dataset
You will use the Large Movie Review Dataset through the tutorial. You will train a sentiment classifier model on this dataset and in the process learn embeddings from scratch. To read more about loading a dataset from scratch, see the Loading text tutorial.
Download the dataset using Keras file utility and take a look at the directories.
Step3: Take a look at the train/ directory. It has pos and neg folders with movie reviews labelled as positive and negative respectively. You will use reviews from pos and neg folders to train a binary classification model.
Step4: The train directory also has additional folders which should be removed before creating training dataset.
Step5: Next, create a tf.data.Dataset using tf.keras.preprocessing.text_dataset_from_directory. You can read more about using this utility in this text classification tutorial.
Use the train directory to create both train and validation datasets with a split of 20% for validation.
Step6: Take a look at a few movie reviews and their labels (1
Step7: Configure the dataset for performance
These are two important methods you should use when loading data to make sure that I/O does not become blocking.
.cache() keeps data in memory after it's loaded off disk. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache, which is more efficient to read than many small files.
.prefetch() overlaps data preprocessing and model execution while training.
You can learn more about both methods, as well as how to cache data to disk in the data performance guide.
Step8: Using the Embedding layer
Keras makes it easy to use word embeddings. Take a look at the Embedding layer.
The Embedding layer can be understood as a lookup table that maps from integer indices (which stand for specific words) to dense vectors (their embeddings). The dimensionality (or width) of the embedding is a parameter you can experiment with to see what works well for your problem, much in the same way you would experiment with the number of neurons in a Dense layer.
Step9: When you create an Embedding layer, the weights for the embedding are randomly initialized (just like any other layer). During training, they are gradually adjusted via backpropagation. Once trained, the learned word embeddings will roughly encode similarities between words (as they were learned for the specific problem your model is trained on).
If you pass an integer to an embedding layer, the result replaces each integer with the vector from the embedding table
Step10: For text or sequence problems, the Embedding layer takes a 2D tensor of integers, of shape (samples, sequence_length), where each entry is a sequence of integers. It can embed sequences of variable lengths. You could feed into the embedding layer above batches with shapes (32, 10) (batch of 32 sequences of length 10) or (64, 15) (batch of 64 sequences of length 15).
The returned tensor has one more axis than the input, the embedding vectors are aligned along the new last axis. Pass it a (2, 3) input batch and the output is (2, 3, N)
Step11: When given a batch of sequences as input, an embedding layer returns a 3D floating point tensor, of shape (samples, sequence_length, embedding_dimensionality). To convert from this sequence of variable length to a fixed representation there are a variety of standard approaches. You could use an RNN, Attention, or pooling layer before passing it to a Dense layer. This tutorial uses pooling because it's the simplest. The Text Classification with an RNN tutorial is a good next step.
Text preprocessing
Next, define the dataset preprocessing steps required for your sentiment classification model. Initialize a TextVectorization layer with the desired parameters to vectorize movie reviews. You can learn more about using this layer in the Text Classification tutorial.
Step12: Create a classification model
Use the Keras Sequential API to define the sentiment classification model. In this case it is a "Continuous bag of words" style model.
* The TextVectorization layer transforms strings into vocabulary indices. You have already initialized vectorize_layer as a TextVectorization layer and built it's vocabulary by calling adapt on text_ds. Now vectorize_layer can be used as the first layer of your end-to-end classification model, feeding transformed strings into the Embedding layer.
* The Embedding layer takes the integer-encoded vocabulary and looks up the embedding vector for each word-index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are
Step13: Compile and train the model
Create a tf.keras.callbacks.TensorBoard.
Step14: Compile and train the model using the Adam optimizer and BinaryCrossentropy loss.
Step15: With this approach the model reaches a validation accuracy of around 84% (note that the model is overfitting since training accuracy is higher).
Note
Step16: Visualize the model metrics in TensorBoard.
Step17: Run the following command in Cloud Shell
Step18: Write the weights to disk. To use the Embedding Projector, you will upload two files in tab separated format
Step19: Two files will created as vectors.tsv and metadata.tsv. Download both files. | Python Code:
# Use the chown command to change the ownership of repository to user.
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
import io
import os
import re
import shutil
import string
import tensorflow as tf
from datetime import datetime
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import Activation, Dense, Embedding, GlobalAveragePooling1D
from tensorflow.keras.layers.experimental.preprocessing import TextVectorization
Explanation: Word Embeddings
Learning Objectives
You will learn:
How to use Embedding layer
How to create a classification model
Compile and train the model
How to retrieve the trained word embeddings, save them to disk and visualize it.
Introduction
This notebook contains an introduction to word embeddings. You will train your own word embeddings using a simple Keras model for a sentiment classification task, and then visualize them in the Embedding Projector (shown in the image below).
Representing text as numbers
Machine learning models take vectors (arrays of numbers) as input. When working with text, the first thing you must do is come up with a strategy to convert strings to numbers (or to "vectorize" the text) before feeding it to the model. In this section, you will look at three strategies for doing so.
One-hot encodings
As a first idea, you might "one-hot" encode each word in your vocabulary. Consider the sentence "The cat sat on the mat". The vocabulary (or unique words) in this sentence is (cat, mat, on, sat, the). To represent each word, you will create a zero vector with length equal to the vocabulary, then place a one in the index that corresponds to the word. This approach is shown in the following diagram.
To create a vector that contains the encoding of the sentence, you could then concatenate the one-hot vectors for each word.
Key point: This approach is inefficient. A one-hot encoded vector is sparse (meaning, most indices are zero). Imagine you have 10,000 words in the vocabulary. To one-hot encode each word, you would create a vector where 99.99% of the elements are zero.
Encode each word with a unique number
A second approach you might try is to encode each word using a unique number. Continuing the example above, you could assign 1 to "cat", 2 to "mat", and so on. You could then encode the sentence "The cat sat on the mat" as a dense vector like [5, 1, 4, 3, 5, 2]. This approach is efficient. Instead of a sparse vector, you now have a dense one (where all elements are full).
There are two downsides to this approach, however:
The integer-encoding is arbitrary (it does not capture any relationship between words).
An integer-encoding can be challenging for a model to interpret. A linear classifier, for example, learns a single weight for each feature. Because there is no relationship between the similarity of any two words and the similarity of their encodings, this feature-weight combination is not meaningful.
Word embeddings
Word embeddings give us a way to use an efficient, dense representation in which similar words have a similar encoding. Importantly, you do not have to specify this encoding by hand. An embedding is a dense vector of floating point values (the length of the vector is a parameter you specify). Instead of specifying the values for the embedding manually, they are trainable parameters (weights learned by the model during training, in the same way a model learns weights for a dense layer). It is common to see word embeddings that are 8-dimensional (for small datasets), up to 1024-dimensions when working with large datasets. A higher dimensional embedding can capture fine-grained relationships between words, but takes more data to learn.
Above is a diagram for a word embedding. Each word is represented as a 4-dimensional vector of floating point values. Another way to think of an embedding is as "lookup table". After these weights have been learned, you can encode each word by looking up the dense vector it corresponds to in the table.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Setup
End of explanation
# Show the currently installed version of TensorFlow
print("TensorFlow version: ",tf.version.VERSION)
Explanation: This notebook uses TF2.x.
Please check your tensorflow version using the cell below.
End of explanation
url = "https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"
dataset = tf.keras.utils.get_file("aclImdb_v1.tar.gz", url,
untar=True, cache_dir='.',
cache_subdir='')
dataset_dir = os.path.join(os.path.dirname(dataset), 'aclImdb')
os.listdir(dataset_dir)
Explanation: Download the IMDb Dataset
You will use the Large Movie Review Dataset through the tutorial. You will train a sentiment classifier model on this dataset and in the process learn embeddings from scratch. To read more about loading a dataset from scratch, see the Loading text tutorial.
Download the dataset using Keras file utility and take a look at the directories.
End of explanation
train_dir = os.path.join(dataset_dir, 'train')
os.listdir(train_dir)
Explanation: Take a look at the train/ directory. It has pos and neg folders with movie reviews labelled as positive and negative respectively. You will use reviews from pos and neg folders to train a binary classification model.
End of explanation
remove_dir = os.path.join(train_dir, 'unsup')
shutil.rmtree(remove_dir)
Explanation: The train directory also has additional folders which should be removed before creating training dataset.
End of explanation
batch_size = 1024
seed = 123
train_ds = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/train', batch_size=batch_size, validation_split=0.2,
subset='training', seed=seed)
val_ds = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/train', batch_size=batch_size, validation_split=0.2,
subset='validation', seed=seed)
Explanation: Next, create a tf.data.Dataset using tf.keras.preprocessing.text_dataset_from_directory. You can read more about using this utility in this text classification tutorial.
Use the train directory to create both train and validation datasets with a split of 20% for validation.
End of explanation
for text_batch, label_batch in train_ds.take(1):
for i in range(5):
print(label_batch[i].numpy(), text_batch.numpy()[i])
Explanation: Take a look at a few movie reviews and their labels (1: positive, 0: negative) from the train dataset.
End of explanation
AUTOTUNE = tf.data.experimental.AUTOTUNE
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
Explanation: Configure the dataset for performance
These are two important methods you should use when loading data to make sure that I/O does not become blocking.
.cache() keeps data in memory after it's loaded off disk. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache, which is more efficient to read than many small files.
.prefetch() overlaps data preprocessing and model execution while training.
You can learn more about both methods, as well as how to cache data to disk in the data performance guide.
End of explanation
# Embed a 1,000 word vocabulary into 5 dimensions.
# TODO
embedding_layer = tf.keras.layers.Embedding(1000, 5)
Explanation: Using the Embedding layer
Keras makes it easy to use word embeddings. Take a look at the Embedding layer.
The Embedding layer can be understood as a lookup table that maps from integer indices (which stand for specific words) to dense vectors (their embeddings). The dimensionality (or width) of the embedding is a parameter you can experiment with to see what works well for your problem, much in the same way you would experiment with the number of neurons in a Dense layer.
End of explanation
result = embedding_layer(tf.constant([1,2,3]))
result.numpy()
Explanation: When you create an Embedding layer, the weights for the embedding are randomly initialized (just like any other layer). During training, they are gradually adjusted via backpropagation. Once trained, the learned word embeddings will roughly encode similarities between words (as they were learned for the specific problem your model is trained on).
If you pass an integer to an embedding layer, the result replaces each integer with the vector from the embedding table:
End of explanation
result = embedding_layer(tf.constant([[0,1,2],[3,4,5]]))
result.shape
Explanation: For text or sequence problems, the Embedding layer takes a 2D tensor of integers, of shape (samples, sequence_length), where each entry is a sequence of integers. It can embed sequences of variable lengths. You could feed into the embedding layer above batches with shapes (32, 10) (batch of 32 sequences of length 10) or (64, 15) (batch of 64 sequences of length 15).
The returned tensor has one more axis than the input, the embedding vectors are aligned along the new last axis. Pass it a (2, 3) input batch and the output is (2, 3, N)
End of explanation
# Create a custom standardization function to strip HTML break tags '<br />'.
def custom_standardization(input_data):
lowercase = tf.strings.lower(input_data)
stripped_html = tf.strings.regex_replace(lowercase, '<br />', ' ')
return tf.strings.regex_replace(stripped_html,
'[%s]' % re.escape(string.punctuation), '')
# Vocabulary size and number of words in a sequence.
vocab_size = 10000
sequence_length = 100
# Use the text vectorization layer to normalize, split, and map strings to
# integers. Note that the layer uses the custom standardization defined above.
# Set maximum_sequence length as all samples are not of the same length.
vectorize_layer = TextVectorization(
standardize=custom_standardization,
max_tokens=vocab_size,
output_mode='int',
output_sequence_length=sequence_length)
# Make a text-only dataset (no labels) and call adapt to build the vocabulary.
text_ds = train_ds.map(lambda x, y: x)
vectorize_layer.adapt(text_ds)
Explanation: When given a batch of sequences as input, an embedding layer returns a 3D floating point tensor, of shape (samples, sequence_length, embedding_dimensionality). To convert from this sequence of variable length to a fixed representation there are a variety of standard approaches. You could use an RNN, Attention, or pooling layer before passing it to a Dense layer. This tutorial uses pooling because it's the simplest. The Text Classification with an RNN tutorial is a good next step.
Text preprocessing
Next, define the dataset preprocessing steps required for your sentiment classification model. Initialize a TextVectorization layer with the desired parameters to vectorize movie reviews. You can learn more about using this layer in the Text Classification tutorial.
End of explanation
embedding_dim=16
# TODO
model = Sequential([
vectorize_layer,
Embedding(vocab_size, embedding_dim, name="embedding"),
GlobalAveragePooling1D(),
Dense(16, activation='relu'),
Dense(1)
])
Explanation: Create a classification model
Use the Keras Sequential API to define the sentiment classification model. In this case it is a "Continuous bag of words" style model.
* The TextVectorization layer transforms strings into vocabulary indices. You have already initialized vectorize_layer as a TextVectorization layer and built it's vocabulary by calling adapt on text_ds. Now vectorize_layer can be used as the first layer of your end-to-end classification model, feeding transformed strings into the Embedding layer.
* The Embedding layer takes the integer-encoded vocabulary and looks up the embedding vector for each word-index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are: (batch, sequence, embedding).
The GlobalAveragePooling1D layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length, in the simplest way possible.
The fixed-length output vector is piped through a fully-connected (Dense) layer with 16 hidden units.
The last layer is densely connected with a single output node.
Caution: This model doesn't use masking, so the zero-padding is used as part of the input and hence the padding length may affect the output. To fix this, see the masking and padding guide.
End of explanation
# TODO
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="logs")
Explanation: Compile and train the model
Create a tf.keras.callbacks.TensorBoard.
End of explanation
# TODO
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(
train_ds,
validation_data=val_ds,
epochs=10,
callbacks=[tensorboard_callback])
Explanation: Compile and train the model using the Adam optimizer and BinaryCrossentropy loss.
End of explanation
model.summary()
Explanation: With this approach the model reaches a validation accuracy of around 84% (note that the model is overfitting since training accuracy is higher).
Note: Your results may be a bit different, depending on how weights were randomly initialized before training the embedding layer.
You can look into the model summary to learn more about each layer of the model.
End of explanation
!tensorboard --bind_all --port=8081 --logdir logs
Explanation: Visualize the model metrics in TensorBoard.
End of explanation
# TODO
weights = model.get_layer('embedding').get_weights()[0]
vocab = vectorize_layer.get_vocabulary()
Explanation: Run the following command in Cloud Shell:
<code>gcloud beta compute ssh --zone <instance-zone> <notebook-instance-name> --project <project-id> -- -L 8081:localhost:8081</code>
Make sure to replace <instance-zone>, <notebook-instance-name> and <project-id>.
In Cloud Shell, click Web Preview > Change Port and insert port number 8081. Click Change and Preview to open the TensorBoard.
To quit the TensorBoard, click Kernel > Interrupt kernel.
Retrieve the trained word embeddings and save them to disk
Next, retrieve the word embeddings learned during training. The embeddings are weights of the Embedding layer in the model. The weights matrix is of shape (vocab_size, embedding_dimension).
Obtain the weights from the model using get_layer() and get_weights(). The get_vocabulary() function provides the vocabulary to build a metadata file with one token per line.
End of explanation
out_v = io.open('vectors.tsv', 'w', encoding='utf-8')
out_m = io.open('metadata.tsv', 'w', encoding='utf-8')
for index, word in enumerate(vocab):
if index == 0: continue # skip 0, it's padding.
vec = weights[index]
out_v.write('\t'.join([str(x) for x in vec]) + "\n")
out_m.write(word + "\n")
out_v.close()
out_m.close()
Explanation: Write the weights to disk. To use the Embedding Projector, you will upload two files in tab separated format: a file of vectors (containing the embedding), and a file of meta data (containing the words).
End of explanation
try:
from google.colab import files
files.download('vectors.tsv')
files.download('metadata.tsv')
except Exception as e:
pass
Explanation: Two files will created as vectors.tsv and metadata.tsv. Download both files.
End of explanation |
899 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Optically pumped magnetometer (OPM) data
In this dataset, electrical median nerve stimulation was delivered to the
left wrist of the subject. Somatosensory evoked fields were measured using
nine QuSpin SERF OPMs placed over the right-hand side somatomotor area. Here
we demonstrate how to localize these custom OPM data in MNE.
Step1: Prepare data for localization
First we filter and epoch the data
Step2: Examine our coordinate alignment for source localization and compute a
forward operator
Step3: Perform dipole fitting
Step4: Perform minimum-norm localization
Due to the small number of sensors, there will be some leakage of activity
to areas with low/no sensitivity. Constraining the source space to
areas we are sensitive to might be a good idea. | Python Code:
import os.path as op
import numpy as np
import mne
data_path = mne.datasets.opm.data_path()
subject = 'OPM_sample'
subjects_dir = op.join(data_path, 'subjects')
raw_fname = op.join(data_path, 'MEG', 'OPM', 'OPM_SEF_raw.fif')
bem_fname = op.join(subjects_dir, subject, 'bem',
subject + '-5120-5120-5120-bem-sol.fif')
fwd_fname = op.join(data_path, 'MEG', 'OPM', 'OPM_sample-fwd.fif')
coil_def_fname = op.join(data_path, 'MEG', 'OPM', 'coil_def.dat')
Explanation: Optically pumped magnetometer (OPM) data
In this dataset, electrical median nerve stimulation was delivered to the
left wrist of the subject. Somatosensory evoked fields were measured using
nine QuSpin SERF OPMs placed over the right-hand side somatomotor area. Here
we demonstrate how to localize these custom OPM data in MNE.
End of explanation
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(None, 90, h_trans_bandwidth=10.)
raw.notch_filter(50., notch_widths=1)
# Set epoch rejection threshold a bit larger than for SQUIDs
reject = dict(mag=2e-10)
tmin, tmax = -0.5, 1
# Find median nerve stimulator trigger
event_id = dict(Median=257)
events = mne.find_events(raw, stim_channel='STI101', mask=257, mask_type='and')
picks = mne.pick_types(raw.info, meg=True, eeg=False)
# We use verbose='error' to suppress warning about decimation causing aliasing,
# ideally we would low-pass and then decimate instead
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, verbose='error',
reject=reject, picks=picks, proj=False, decim=10,
preload=True)
evoked = epochs.average()
evoked.plot()
cov = mne.compute_covariance(epochs, tmax=0.)
del epochs, raw
Explanation: Prepare data for localization
First we filter and epoch the data:
End of explanation
bem = mne.read_bem_solution(bem_fname)
trans = mne.transforms.Transform('head', 'mri') # identity transformation
# To compute the forward solution, we must
# provide our temporary/custom coil definitions, which can be done as::
#
# with mne.use_coil_def(coil_def_fname):
# fwd = mne.make_forward_solution(
# raw.info, trans, src, bem, eeg=False, mindist=5.0,
# n_jobs=None, verbose=True)
fwd = mne.read_forward_solution(fwd_fname)
# use fixed orientation here just to save memory later
mne.convert_forward_solution(fwd, force_fixed=True, copy=False)
with mne.use_coil_def(coil_def_fname):
fig = mne.viz.plot_alignment(evoked.info, trans=trans, subject=subject,
subjects_dir=subjects_dir,
surfaces=('head', 'pial'), bem=bem)
mne.viz.set_3d_view(figure=fig, azimuth=45, elevation=60, distance=0.4,
focalpoint=(0.02, 0, 0.04))
Explanation: Examine our coordinate alignment for source localization and compute a
forward operator:
<div class="alert alert-info"><h4>Note</h4><p>The Head<->MRI transform is an identity matrix, as the
co-registration method used equates the two coordinate
systems. This mis-defines the head coordinate system
(which should be based on the LPA, Nasion, and RPA)
but should be fine for these analyses.</p></div>
End of explanation
# Fit dipoles on a subset of time points
with mne.use_coil_def(coil_def_fname):
dip_opm, _ = mne.fit_dipole(evoked.copy().crop(0.040, 0.080),
cov, bem, trans, verbose=True)
idx = np.argmax(dip_opm.gof)
print('Best dipole at t=%0.1f ms with %0.1f%% GOF'
% (1000 * dip_opm.times[idx], dip_opm.gof[idx]))
# Plot N20m dipole as an example
dip_opm.plot_locations(trans, subject, subjects_dir,
mode='orthoview', idx=idx)
Explanation: Perform dipole fitting
End of explanation
inverse_operator = mne.minimum_norm.make_inverse_operator(
evoked.info, fwd, cov, loose=0., depth=None)
del fwd, cov
method = "MNE"
snr = 3.
lambda2 = 1. / snr ** 2
stc = mne.minimum_norm.apply_inverse(
evoked, inverse_operator, lambda2, method=method,
pick_ori=None, verbose=True)
# Plot source estimate at time of best dipole fit
brain = stc.plot(hemi='rh', views='lat', subjects_dir=subjects_dir,
initial_time=dip_opm.times[idx],
clim=dict(kind='percent', lims=[99, 99.9, 99.99]),
size=(400, 300), background='w')
Explanation: Perform minimum-norm localization
Due to the small number of sensors, there will be some leakage of activity
to areas with low/no sensitivity. Constraining the source space to
areas we are sensitive to might be a good idea.
End of explanation |
Subsets and Splits