Unnamed: 0 (int64, 0 to 16k) | text_prompt (stringlengths 110 to 62.1k) | code_prompt (stringlengths 37 to 152k)
---|---|---|
900 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Forte Tutorial 2.02: Orbital localization
Step2: We will start by generating SCF orbitals for methane via psi4 using the forte.utils.psi4_scf function
Step3: Next we start forte, setup the MOSpaceInfo object specifying the number of orbitals, and start the integral object
Step4: Localizing the orbitals
To localize the orbitals we create a Localize object. We have to pass a few objects to perform the localization
Step5: Once the Localize object is created, we specify which orbital space we want to be localized and compute the unitary transformation
Step6: From the localizer we can then extract the unitary transformation matrix that corresponds to the orbital localization. Here we get the alpha part
Step7: We are now ready to read the MOs from psi4 and transform them by computing the product $\mathbf{C}' = \mathbf{C} \mathbf{U}$. We then place the orbitals back into psi4 by calling the copy function on wfn.Ca(). We have to do this because this function returns a smart pointer to the matrix that holds $\mathbf{C}$. If we assigned Ca_local via wfn.Ca() = Ca_local we would not change the orbitals in psi4.
Step8: Lastly, we can optionally generate cube files for all the occupied orbitals and visualize them. The resulting orbitals consist of a core orbital (which cannot be seen) and four localized C-H $\sigma$ bond orbitals | Python Code:
import psi4
import forte
import forte.utils
Explanation: Forte Tutorial 2.02: Orbital localization
In this tutorial we are going to explore how to localize orbitals and visualize them in Jupyter notebooks using the Python API.
Let's import the psi4 and forte modules, including the forte.utils submodule
End of explanation
xyz = """
0 1
C -1.9565506735 0.4146729724 0.0000000000
H -0.8865506735 0.4146729724 0.0000000000
H -2.3132134555 1.1088535618 -0.7319870007
H -2.3132183114 0.7015020975 0.9671697106
H -2.3132196063 -0.5663349614 -0.2351822830
symmetry c1
"""
E_scf, wfn = forte.utils.psi4_scf(xyz, 'sto-3g', 'rhf', functional = 'hf')
Explanation: We will start by generating SCF orbitals for methane via psi4 using the forte.utils.psi4_scf function
End of explanation
from forte import forte_options
options = forte.forte_options
mos_spaces = {'RESTRICTED_DOCC' : [5], 'ACTIVE' : [0]}
nmopi = wfn.nmopi()
point_group = wfn.molecule().point_group().symbol()
mo_space_info = forte.make_mo_space_info_from_map(nmopi,point_group,mos_spaces,[])
scf_info = forte.SCFInfo(wfn)
ints = forte.make_ints_from_psi4(wfn, options, mo_space_info)
Explanation: Next we start forte, setup the MOSpaceInfo object specifying the number of orbitals, and start the integral object
End of explanation
localizer = forte.Localize(forte.forte_options, ints, mo_space_info)
Explanation: Localizing the orbitals
To localize the orbitals we create a Localize object. We have to pass a few objects to perform the localization
End of explanation
localizer.set_orbital_space(['RESTRICTED_DOCC'])
localizer.compute_transformation()
Explanation: Once the Localize object is created, we specify which orbital space we want to be localized and compute the unitary transformation
End of explanation
Ua = localizer.get_Ua()
Explanation: From the localizer we can then extract the unitary transformation matrix that corresponds to the orbital localization. Here we get the alpha part
End of explanation
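As a side note, here is a tiny NumPy-only illustration (not part of the tutorial and not using the forte objects above) of what "unitary" buys us: the transpose is the inverse, so mixing orbital coefficients with such a matrix never changes the space they span.
import numpy as np
# Toy 2x2 rotation as an example of a unitary (orthogonal) matrix: U.T @ U = I
theta = 0.3
U_toy = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
print(np.allclose(U_toy.T @ U_toy, np.eye(2)))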
Ca = wfn.Ca()
Ca_local = psi4.core.doublet(Ca,Ua,False,False)
wfn.Ca().copy(Ca_local)
Explanation: We are now ready to read the MOs from psi4 and transform them by computing the product $\mathbf{C}' = \mathbf{C} \mathbf{U}$. We then place the orbitals back into psi4 by calling the copy function on wfn.Ca(). We have to do this because this function returns a smart pointer to the matrix that holds $\mathbf{C}$. If we assigned Ca_local via wfn.Ca() = Ca_local we would not change the orbitals in psi4.
End of explanation
# make cube files
# forte.utils.psi4_cubeprop(wfn,path='cubes',nocc=5,nvir=0, load=True)
Explanation: Lastly, we can optionally generate cube files for all the occupied orbitals and visualize them. The resulting orbitals consist of a core orbital (which cannot be seen) and four localized C-H $\sigma$ bond orbitals
End of explanation |
901 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
splot.pysal.lib: assessing neighbors & spatial weights
Step1: Data Preparation
Let's first have a look at the dataset with pysal.lib.examples.explain
Step2: Load data into a geopandas geodataframe
Step3: This warning tells us that our dataset contains islands. Islands are polygons that do not share edges and nodes with adjacent polygons. This can for example be the case if polygons are truly not neighbouring, e.g. when two land parcels are separated by a river. However, these islands often stem from human error when digitizing features into polygons.
This unwanted error can be assessed using splot.pysal.lib plot_spatial_weights functionality
Step4: This visualisation depicts the spatial weights network, a network of connections of the centroid of each polygon to the centroid of its neighbour. As we can see, there are many polygons in the south and west of this map that are not connected to their neighbors. This stems from digitization errors and needs to be corrected before we can start our statistical analysis.
pysal.lib offers a tool to correct this error by 'snapping' incorrectly separated neighbours back together
Step5: We can now visualise if the nonplanar_neighbors tool adjusted all errors correctly
Step6: The visualization shows that all erroneous islands are now stored as neighbors in our new weights object, depicted by the new joins displayed in orange.
We can now adapt our visualization to show all joins in the same color, by using the nonplanar_edge_kws argument in plot_spatial_weights | Python Code:
from pysal.lib.weights.contiguity import Queen
import pysal.lib
from pysal.lib import examples
import matplotlib.pyplot as plt
import geopandas as gpd
%matplotlib inline
from pysal.viz.splot.pysal.lib import plot_spatial_weights
Explanation: splot.pysal.lib: assessing neighbors & spatial weights
In spatial analysis workflows it is often important and necessary to assess the relationships of neighbouring polygons. pysal.lib and splot can help you to inspect if two neighbouring polygons share an edge or not.
Content:
* Imports
* Data Preparation
* Plotting
Imports
End of explanation
examples.explain('rio_grande_do_sul')
Explanation: Data Preparation
Let's first have a look at the dataset with pysal.lib.examples.explain
End of explanation
gdf = gpd.read_file(examples.get_path('map_RS_BR.shp'))
gdf.head()
weights = Queen.from_dataframe(gdf)
Explanation: Load data into a geopandas geodataframe
End of explanation
plot_spatial_weights(weights, gdf)
plt.show()
Explanation: This warning tells us that our dataset contains islands. Islands are polygons that do not share edges and nodes with adjacent polygons. This can for example be the case if polygons are truly not neighbouring, e.g. when two land parcels are separated by a river. However, these islands often stem from human error when digitizing features into polygons.
This unwanted error can be assessed using splot.pysal.lib plot_spatial_weights functionality:
Plotting
End of explanation
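Before correcting anything, it can also help to list which observations are disconnected. A small aside, not part of the original notebook (the islands attribute is part of the PySAL weights API):
# Indices of observations that have no neighbours in the current weights object
print(weights.islands)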
wnp = pysal.lib.weights.util.nonplanar_neighbors(weights, gdf)
Explanation: This visualisation depicts the spatial weights network, a network of connections of the centroid of each polygon to the centroid of its neighbour. As we can see, there are many polygons in the south and west of this map that are not connected to their neighbors. This stems from digitization errors and needs to be corrected before we can start our statistical analysis.
pysal.lib offers a tool to correct this error by 'snapping' incorrectly separated neighbours back together:
End of explanation
plot_spatial_weights(wnp, gdf)
plt.show()
Explanation: We can now visualise if the nonplanar_neighbors tool adjusted all errors correctly:
End of explanation
plot_spatial_weights(wnp, gdf, nonplanar_edge_kws=dict(color='#4393c3'))
plt.show()
Explanation: The visualization shows that all erroneous islands are now stored as neighbors in our new weights object, depicted by the new joins displayed in orange.
We can now adapt our visualization to show all joins in the same color, by using the nonplanar_edge_kws argument in plot_spatial_weights:
End of explanation |
902 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plot train and valid set NLL
Step1: Plot ratio of update norms to parameter norms across epochs for different layers | Python Code:
tr = np.array(model.monitor.channels['valid_y_y_1_nll'].time_record) / 3600.
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(111)
ax1.plot(model.monitor.channels['valid_y_y_1_nll'].val_record)
ax1.plot(model.monitor.channels['train_y_y_1_nll'].val_record)
ax1.set_xlabel('Epochs')
ax1.legend(['Valid', 'Train'])
ax1.set_ylabel('NLL')
ax1.set_ylim(0., 5.)
ax1.grid(True)
ax2 = ax1.twiny()
ax2.set_xticks(np.arange(0,tr.shape[0],20))
ax2.set_xticklabels(['{0:.2f}'.format(t) for t in tr[::20]])
ax2.set_xlabel('Hours')
plt.plot(model.monitor.channels['train_term_1_l1_penalty'].val_record)
plt.plot(model.monitor.channels['train_term_2_weight_decay'].val_record)
pv = get_weights_report(model=model)
img = pv.get_img()
img = img.resize((4*img.size[0], 4*img.size[1]))
img_data = io.BytesIO()
img.save(img_data, format='png')
display(Image(data=img_data.getvalue(), format='png'))
plt.plot(model.monitor.channels['learning_rate'].val_record)
Explanation: Plot train and valid set NLL
End of explanation
h1_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h1_W_kernel_norm_mean'].val_record])
h1_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h1_kernel_norms_mean'].val_record])
plt.plot(h1_W_norms / h1_W_up_norms)
#plt.ylim(0,1000)
plt.show()
plt.plot(model.monitor.channels['valid_h1_kernel_norms_mean'].val_record)
plt.plot(model.monitor.channels['valid_h1_kernel_norms_max'].val_record)
h2_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h2_W_kernel_norm_mean'].val_record])
h2_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h2_kernel_norms_mean'].val_record])
plt.plot(h2_W_norms / h2_W_up_norms)
plt.show()
plt.plot(model.monitor.channels['valid_h2_kernel_norms_mean'].val_record)
plt.plot(model.monitor.channels['valid_h2_kernel_norms_max'].val_record)
h3_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h3_W_kernel_norm_mean'].val_record])
h3_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h3_kernel_norms_mean'].val_record])
plt.plot(h3_W_norms / h3_W_up_norms)
plt.show()
plt.plot(model.monitor.channels['valid_h3_kernel_norms_mean'].val_record)
plt.plot(model.monitor.channels['valid_h3_kernel_norms_max'].val_record)
h4_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h4_W_kernel_norm_mean'].val_record])
h4_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h4_kernel_norms_mean'].val_record])
plt.plot(h4_W_norms / h4_W_up_norms)
plt.show()
plt.plot(model.monitor.channels['valid_h4_kernel_norms_mean'].val_record)
plt.plot(model.monitor.channels['valid_h4_kernel_norms_max'].val_record)
h5_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h5_W_kernel_norm_mean'].val_record])
h5_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h5_kernel_norms_mean'].val_record])
plt.plot(h5_W_norms / h5_W_up_norms)
plt.show()
plt.plot(model.monitor.channels['valid_h5_kernel_norms_mean'].val_record)
plt.plot(model.monitor.channels['valid_h5_kernel_norms_max'].val_record)
h6_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h6_W_col_norm_mean'].val_record])
h6_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h6_col_norms_mean'].val_record])
plt.plot(h6_W_norms / h6_W_up_norms)
plt.show()
plt.plot(model.monitor.channels['valid_h6_col_norms_mean'].val_record)
plt.plot(model.monitor.channels['valid_h6_col_norms_max'].val_record)
y_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_softmax_W_col_norm_mean'].val_record])
y_W_norms = np.array([float(v) for v in model.monitor.channels['valid_y_y_1_col_norms_mean'].val_record])
plt.plot(y_W_norms / y_W_up_norms)
plt.show()
plt.plot(model.monitor.channels['valid_y_y_1_col_norms_mean'].val_record)
plt.plot(model.monitor.channels['valid_y_y_1_col_norms_max'].val_record)
Explanation: Plot ratio of update norms to parameter norms across epochs for different layers
End of explanation |
903 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Building game trees</h1>
<i>Theodore L. Turocy</i><br/>
<i>University of East Anglia</i>
<br/><br/>
<h3>EC'16 Workshop
24 July 2016</h3>
Step1: One can build up extensive games from scratch and manipulate them. This example shows one way to do that for the simple one-card poker game.
The primitives for doing this remain rather low-level, as I have never settled on a good way to write higher-level ones that apply broadly. It has always seemed to me that different games have different "natural" ways to build them out. It is nevertheless likely that having some more powerful idioms could be useful - suggestions are welcomed.
Step2: Define the players; save them as variables for convenience laster.
Step3: The root node is a chance move, between Ace and King. Chance moves default to uniform randomisation, but I set the probabilities here anyway just for explicitness.
Step4: After an Ace, Alice can Raise or Fold.
Step5: She can also Raise or Fold after the King. Calling append_move with the player, rather than the previously created move, creates a new information set for Alice.
Step6: After Alice raises with the Ace, Bob can Meet or Pass.
Step7: Likewise after Alice raises with the King. Using the same move here adds the move to the same information set for Bob.
Step8: The game so far, as an .efg file. We see the tree structure is in place; now to deal with payoffs.
Step9: Attach the outcomes to the corresponding terminal nodes. Notice how we can re-use the outcomes to denote that different terminal nodes may correspond to the same logical outcome.
Step10: Here's the game we've just built
Step11: For comparison, here's the game we started with for the Poker example.
Step12: We can confirm they're the same | Python Code:
import gambit
Explanation: <h1>Building game trees</h1>
<i>Theodore L. Turocy</i><br/>
<i>University of East Anglia</i>
<br/><br/>
<h3>EC'16 Workshop
24 July 2016</h3>
End of explanation
g = gambit.Game.new_tree()
g.title = "A simple poker example"
Explanation: One can build up extensive games from scratch and manipulate them. This example shows one way to do that for the simple one-card poker game.
The primitives for doing this remain rather low-level, as I have never settled on a good way to write higher-level ones that apply broadly. It has always seemed to me that different games have different "natural" ways to build them out. It is nevertheless likely that having some more powerful idioms could be useful - suggestions are welcomed.
End of explanation
alice = g.players.add("Alice")
bob = g.players.add("Bob")
Explanation: Define the players; save them as variables for convenience later.
End of explanation
move = g.root.append_move(g.players.chance, 2)
move.actions[0].label = "A"
move.actions[0].prob = gambit.Rational(1, 2)
move.actions[1].label = "K"
move.actions[1].prob = gambit.Rational(1, 2)
Explanation: The root node is a chance move, between Ace and King. Chance moves default to uniform randomisation, but I set the probabilities here anyway just for explicitness.
End of explanation
move = g.root.children[0].append_move(alice, 2)
move.label = 'a'
move.actions[0].label = "R"
move.actions[1].label = "F"
Explanation: After an Ace, Alice can Raise or Fold.
End of explanation
move = g.root.children[1].append_move(alice, 2)
move.label = 'k'
move.actions[0].label = "R"
move.actions[1].label = "F"
Explanation: She can also Raise or Fold after the King. Calling append_move with the player, rather than the previously created move, creates a new information set for Alice.
End of explanation
move = g.root.children[0].children[0].append_move(bob, 2)
move.label = 'b'
move.actions[0].label = "M"
move.actions[1].label = "P"
Explanation: After Alice raises with the Ace, Bob can Meet or Pass.
End of explanation
g.root.children[1].children[0].append_move(move)
Explanation: Likewise after Alice raises with the King. Using the same move here adds the move to the same information set for Bob.
End of explanation
g
alice_big = g.outcomes.add("Alice wins big")
alice_big[0] = 2
alice_big[1] = -2
alice_small = g.outcomes.add("Alice wins")
alice_small[0] = 1
alice_small[1] = -1
bob_small = g.outcomes.add("Bob wins")
bob_small[0] = -1
bob_small[1] = 1
bob_big = g.outcomes.add("Bob wins big")
bob_big[0] = -2
bob_big[1] = 2
Explanation: The game so far, as an .efg file. We see the tree structure is in place; now to deal with payoffs.
End of explanation
g.root.children[0].children[0].children[0].outcome = alice_big
g.root.children[0].children[0].children[1].outcome = alice_small
g.root.children[0].children[1].outcome = bob_small
g.root.children[1].children[0].children[0].outcome = bob_big
g.root.children[1].children[0].children[1].outcome = alice_small
g.root.children[1].children[1].outcome = bob_small
Explanation: Attach the outcomes to the corresponding terminal nodes. Notice how we can re-use the outcomes to denote that different terminal nodes may correspond to the same logical outcome.
End of explanation
g
Explanation: Here's the game we've just built
End of explanation
gambit.Game.read_game("poker.efg")
Explanation: For comparison, here's the game we started with for the Poker example.
End of explanation
g.write() == gambit.Game.read_game("poker.efg").write()
Explanation: We can confirm they're the same:
End of explanation |
904 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Repeated measures ANOVA on source data with spatio-temporal clustering
This example illustrates how to make use of the clustering functions
for arbitrary, self-defined contrasts beyond standard t-tests. In this
case we will test if the differences in evoked responses between
stimulation modality (visual vs auditory) depend on the stimulus
location (left vs right) for a group of subjects (simulated here
using one subject's data). For this purpose we will compute an
interaction effect using a repeated measures ANOVA. The multiple
comparisons problem is addressed with a cluster-level permutation test
across space and time.
Step1: Set parameters
Step2: Read epochs for all channels, removing a bad one
Step3: Transform to source space
Step4: Transform to common cortical space
Normally you would read in estimates across several subjects and morph them
to the same cortical space (e.g. fsaverage). For example purposes, we will
simulate this by just having each "subject" have the same response (just
noisy in source space) here.
We'll only consider the left hemisphere in this tutorial.
Step5: It's a good idea to spatially smooth the data, and for visualization
purposes, let's morph these to fsaverage, which is a grade 5 ICO source space
with vertices 0:10242 for each hemisphere
Step6: Now we need to prepare the group matrix for the ANOVA statistic. To make the
clustering function work correctly with the ANOVA function X needs to be a
list of multi-dimensional arrays (one per condition) of shape: samples (subjects) x time x space
Step7: Prepare function for arbitrary contrast
As our ANOVA function is a multi-purpose tool we need to apply a few
modifications to integrate it with the clustering function. This
includes reshaping data, setting default arguments and processing
the return values. For this reason we'll write a tiny dummy function.
We will tell the ANOVA how to interpret the data matrix in terms of
factors. This is done via the factor levels argument which is a list
of the number factor levels for each factor.
Step8: Finally we will pick the interaction effect by passing 'A:B'
Step9: A stat_fun must deal with a variable number of input arguments.
Inside the clustering function each condition will be passed as flattened
array, necessitated by the clustering procedure. The ANOVA however expects an
input array of dimensions: subjects X conditions X observations (optional)
Step10: Compute clustering statistic
To use an algorithm optimized for spatio-temporal clustering, we
just pass the spatial adjacency matrix (instead of spatio-temporal).
Step11: Visualize the clusters
Step12: Finally, let's investigate interaction effect by reconstructing the time
courses | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
# Eric Larson <[email protected]>
# Denis Engemannn <[email protected]>
#
# License: BSD-3-Clause
import os.path as op
import numpy as np
from numpy.random import randn
import matplotlib.pyplot as plt
import mne
from mne.stats import (spatio_temporal_cluster_test, f_threshold_mway_rm,
f_mway_rm, summarize_clusters_stc)
from mne.minimum_norm import apply_inverse, read_inverse_operator
from mne.datasets import sample
print(__doc__)
Explanation: Repeated measures ANOVA on source data with spatio-temporal clustering
This example illustrates how to make use of the clustering functions
for arbitrary, self-defined contrasts beyond standard t-tests. In this
case we will test if the differences in evoked responses between
stimulation modality (visual vs auditory) depend on the stimulus
location (left vs right) for a group of subjects (simulated here
using one subject's data). For this purpose we will compute an
interaction effect using a repeated measures ANOVA. The multiple
comparisons problem is addressed with a cluster-level permutation test
across space and time.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
subjects_dir = data_path + '/subjects'
src_fname = subjects_dir + '/fsaverage/bem/fsaverage-ico-5-src.fif'
tmin = -0.2
tmax = 0.3 # Use a lower tmax to reduce multiple comparisons
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
Explanation: Set parameters
End of explanation
raw.info['bads'] += ['MEG 2443']
picks = mne.pick_types(raw.info, meg=True, eog=True, exclude='bads')
# we'll load all four conditions that make up the 'two ways' of our ANOVA
event_id = dict(l_aud=1, r_aud=2, l_vis=3, r_vis=4)
reject = dict(grad=1000e-13, mag=4000e-15, eog=150e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=True)
# Equalize trial counts to eliminate bias (which would otherwise be
# introduced by the abs() performed below)
epochs.equalize_event_counts(event_id)
Explanation: Read epochs for all channels, removing a bad one
End of explanation
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE, sLORETA, or eLORETA)
inverse_operator = read_inverse_operator(fname_inv)
# we'll only use one hemisphere to speed up this example
# instead of a second vertex array we'll pass an empty array
sample_vertices = [inverse_operator['src'][0]['vertno'], np.array([], int)]
# Let's average and compute inverse, then resample to speed things up
conditions = []
for cond in ['l_aud', 'r_aud', 'l_vis', 'r_vis']: # order is important
evoked = epochs[cond].average()
evoked.resample(30).crop(0., None)
condition = apply_inverse(evoked, inverse_operator, lambda2, method)
# Let's only deal with t > 0, cropping to reduce multiple comparisons
condition.crop(0, None)
conditions.append(condition)
tmin = conditions[0].tmin
tstep = conditions[0].tstep * 1000 # convert to milliseconds
Explanation: Transform to source space
End of explanation
n_vertices_sample, n_times = conditions[0].lh_data.shape
n_subjects = 6
print('Simulating data for %d subjects.' % n_subjects)
# Let's make sure our results replicate, so set the seed.
np.random.seed(0)
X = randn(n_vertices_sample, n_times, n_subjects, 4) * 10
for ii, condition in enumerate(conditions):
X[:, :, :, ii] += condition.lh_data[:, :, np.newaxis]
Explanation: Transform to common cortical space
Normally you would read in estimates across several subjects and morph them
to the same cortical space (e.g. fsaverage). For example purposes, we will
simulate this by just having each "subject" have the same response (just
noisy in source space) here.
We'll only consider the left hemisphere in this tutorial.
End of explanation
# Read the source space we are morphing to (just left hemisphere)
src = mne.read_source_spaces(src_fname)
fsave_vertices = [src[0]['vertno'], []]
morph_mat = mne.compute_source_morph(
src=inverse_operator['src'], subject_to='fsaverage',
spacing=fsave_vertices, subjects_dir=subjects_dir, smooth=20).morph_mat
morph_mat = morph_mat[:, :n_vertices_sample] # just left hemi from src
n_vertices_fsave = morph_mat.shape[0]
# We have to change the shape for the dot() to work properly
X = X.reshape(n_vertices_sample, n_times * n_subjects * 4)
print('Morphing data.')
X = morph_mat.dot(X) # morph_mat is a sparse matrix
X = X.reshape(n_vertices_fsave, n_times, n_subjects, 4)
Explanation: It's a good idea to spatially smooth the data, and for visualization
purposes, let's morph these to fsaverage, which is a grade 5 ICO source space
with vertices 0:10242 for each hemisphere. Usually you'd have to morph
each subject's data separately, but here since all estimates are on
'sample' we can use one morph matrix for all the heavy lifting.
End of explanation
X = np.transpose(X, [2, 1, 0, 3])  # now (subjects, time, vertices, conditions)
X = [np.squeeze(x) for x in np.split(X, 4, axis=-1)]
Explanation: Now we need to prepare the group matrix for the ANOVA statistic. To make the
clustering function work correctly with the ANOVA function X needs to be a
list of multi-dimensional arrays (one per condition) of shape: samples
(subjects) x time x space.
First we permute dimensions, then split the array into a list of conditions
and discard the empty dimension resulting from the split using numpy squeeze.
End of explanation
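As a quick sanity check on the reshaping just described (a sketch using only variables defined above), each element of X should now be subjects x time x space:
# X is a list with one array per condition; after the transpose/split/squeeze
# each array should be (n_subjects, n_times, n_vertices_fsave).
print(len(X))
print(X[0].shape)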
factor_levels = [2, 2]
Explanation: Prepare function for arbitrary contrast
As our ANOVA function is a multi-purpose tool we need to apply a few
modifications to integrate it with the clustering function. This
includes reshaping data, setting default arguments and processing
the return values. For this reason we'll write a tiny dummy function.
We will tell the ANOVA how to interpret the data matrix in terms of
factors. This is done via the factor levels argument which is a list
of the number factor levels for each factor.
End of explanation
effects = 'A:B'
# Tell the ANOVA not to compute p-values which we don't need for clustering
return_pvals = False
# a few more convenient bindings
n_times = X[0].shape[1]
n_conditions = 4
Explanation: Finally we will pick the interaction effect by passing 'A:B'.
(this notation is borrowed from the R formula language).
As an aside, note that in this particular example, we cannot use the A*B
notation which return both the main and the interaction effect. The reason
is that the clustering function expects stat_fun to return a 1-D array.
To get clusters for both, you must create a loop.
End of explanation
def stat_fun(*args):
# get f-values only.
return f_mway_rm(np.swapaxes(args, 1, 0), factor_levels=factor_levels,
effects=effects, return_pvals=return_pvals)[0]
Explanation: A stat_fun must deal with a variable number of input arguments.
Inside the clustering function each condition will be passed as flattened
array, necessitated by the clustering procedure. The ANOVA however expects an
input array of dimensions: subjects X conditions X observations (optional).
The following function catches the list input and swaps the first and the
second dimension, and finally calls ANOVA.
<div class="alert alert-info"><h4>Note</h4><p>For further details on this ANOVA function consider the
corresponding
`time-frequency tutorial <tut-timefreq-twoway-anova>`.</p></div>
End of explanation
# as we only have one hemisphere we need only need half the adjacency
print('Computing adjacency.')
adjacency = mne.spatial_src_adjacency(src[:1])
# Now let's actually do the clustering. Please relax, on a small
# notebook and one single thread only this will take a couple of minutes ...
pthresh = 0.005
f_thresh = f_threshold_mway_rm(n_subjects, factor_levels, effects, pthresh)
# To speed things up a bit we will ...
n_permutations = 50 # ... run way fewer permutations (reduces sensitivity)
print('Clustering.')
T_obs, clusters, cluster_p_values, H0 = clu = \
spatio_temporal_cluster_test(X, adjacency=adjacency, n_jobs=1,
threshold=f_thresh, stat_fun=stat_fun,
n_permutations=n_permutations,
buffer_size=None)
# Now select the clusters that are sig. at p < 0.05 (note that this value
# is multiple-comparisons corrected).
good_cluster_inds = np.where(cluster_p_values < 0.05)[0]
Explanation: Compute clustering statistic
To use an algorithm optimized for spatio-temporal clustering, we
just pass the spatial adjacency matrix (instead of spatio-temporal).
End of explanation
print('Visualizing clusters.')
# Now let's build a convenient representation of each cluster, where each
# cluster becomes a "time point" in the SourceEstimate
stc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep,
vertices=fsave_vertices,
subject='fsaverage')
# Let's actually plot the first "time point" in the SourceEstimate, which
# shows all the clusters, weighted by duration
subjects_dir = op.join(data_path, 'subjects')
# The brighter the color, the stronger the interaction between
# stimulus modality and stimulus location
brain = stc_all_cluster_vis.plot(subjects_dir=subjects_dir, views='lat',
time_label='temporal extent (ms)',
clim=dict(kind='value', lims=[0, 1, 40]))
brain.save_image('cluster-lh.png')
brain.show_view('medial')
Explanation: Visualize the clusters
End of explanation
inds_t, inds_v = [(clusters[cluster_ind]) for ii, cluster_ind in
enumerate(good_cluster_inds)][0] # first cluster
times = np.arange(X[0].shape[1]) * tstep * 1e3
plt.figure()
colors = ['y', 'b', 'g', 'purple']
event_ids = ['l_aud', 'r_aud', 'l_vis', 'r_vis']
for ii, (condition, color, eve_id) in enumerate(zip(X, colors, event_ids)):
# extract time course at cluster vertices
condition = condition[:, :, inds_v]
# normally we would normalize values across subjects but
# here we use data from the same subject so we're good to just
# create average time series across subjects and vertices.
mean_tc = condition.mean(axis=2).mean(axis=0)
std_tc = condition.std(axis=2).std(axis=0)
plt.plot(times, mean_tc.T, color=color, label=eve_id)
plt.fill_between(times, mean_tc + std_tc, mean_tc - std_tc, color='gray',
alpha=0.5, label='')
ymin, ymax = mean_tc.min() - 5, mean_tc.max() + 5
plt.xlabel('Time (ms)')
plt.ylabel('Activation (F-values)')
plt.xlim(times[[0, -1]])
plt.ylim(ymin, ymax)
plt.fill_betweenx((ymin, ymax), times[inds_t[0]],
times[inds_t[-1]], color='orange', alpha=0.3)
plt.legend()
plt.title('Interaction between stimulus-modality and location.')
plt.show()
Explanation: Finally, let's investigate interaction effect by reconstructing the time
courses:
End of explanation |
905 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Internet, network protocols, client-server applications
Requests for Comments (RFCs) are published by the Internet Engineering Task Force (IETF).
Step1: IP
The Internet Assigned Numbers Authority (IANA) manages the allocation of IP addresses.
An address is assigned to an interface either statically or dynamically (via DHCP).
IPv4 address
A 32-bit number
Step2: Ports
Data arrives at an interface with a given IP address - but what should happen to it next? Should every program inspect it? Of course not. This is what ports are for - a process that talks to the network can claim a port number between 1 and 65535. The port number is part of the packet header, so the protocol knows which application the data should be delivered to.
UDP (User Datagram Protocol)
UDP is an extremely simple protocol - it offers no extra features beyond delivering data to a remote process. No persistent connection. No guarantee of delivery or of in-order delivery.
Step3: Exercise #1
Step4: Exercise #2 - send your first and last name over TCP
Exercise #3 - connect, send your surname, receive a question, send back an answer
HTTP (HyperText Transfer Protocol)
Runs on top of TCP
Example of a client request
Step5: Binding to an interface and port
Step6: Receiving data from the client
Step7: Closing the socket
Step8: TCP server
Step9: Examples of TCP and UDP servers
Examples are in the folder that comes with the lecture
Step10: HTTP client
Step11: An example showing why requests is much more convenient to use
Step12: vs | Python Code:
import socket
Explanation: The Internet, network protocols, client-server applications
Requests for Comments (RFCs) are published by the Internet Engineering Task Force (IETF).
End of explanation
import socket
hostname = 'httpbin.org'
addr = socket.gethostbyname(hostname)
print(addr)
import socket
hostname = 'httpbin.org'
addr_info = socket.getaddrinfo(hostname, 80)
print(addr_info)
import socket
print (socket.gethostbyaddr(addr))
Explanation: IP
The Internet Assigned Numbers Authority (IANA) manages the allocation of IP addresses.
An address is assigned to an interface either statically or dynamically (via DHCP).
IPv4 address
A 32-bit number:
For example, 93.184.216.34
127.0.0.1 - every device has a virtual interface, the loopback interface.
Private address ranges:
10.0.0.0 to 10.255.255.255
172.16.0.0 to 172.31.255.255
192.168.0.0 to 192.168.255.255
NAT - Network Address Translation - rewrites traffic from private addresses into traffic that appears to originate from a single public IP.
Packets:
Structure of an IPv4 packet:
IPv6 address
A 128-bit number:
2606:2800:220:1:248:1893:25c8:1946
::1
Structure of the IPv6 packet header:
DNS (Domain Name System)
A distributed database mapping domain names to IP addresses.
End of explanation
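To tie the address classes above to code, here is a small aside (not part of the original lecture) using the standard-library ipaddress module:
import ipaddress
# is_private / is_loopback roughly correspond to the ranges listed above
for a in ['10.0.0.1', '172.16.5.4', '192.168.1.10', '8.8.8.8', '127.0.0.1']:
    ip = ipaddress.ip_address(a)
    print(a, ip.is_private, ip.is_loopback)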
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
print('Socket created')
s
s.sendto(bytes("data", encoding="utf-8"), ("127.0.0.1", 3000))
s.sendto(b"data", ("127.0.0.1", 3000))
Explanation: Ports
Data arrives at an interface with a given IP address - but what should happen to it next? Should every program inspect it? Of course not. This is what ports are for - a process that talks to the network can claim a port number between 1 and 65535. The port number is part of the packet header, so the protocol knows which application the data should be delivered to.
UDP (User Datagram Protocol)
UDP is an extremely simple protocol - it offers no extra features beyond delivering data to a remote process. No persistent connection. No guarantee of delivery or of in-order delivery.
End of explanation
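For completeness, a minimal sketch of the receiving side for the sendto() calls above (not in the original notebook; run it in a separate process before sending, port 3000 matches the example):
import socket
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 3000))
data, addr = recv_sock.recvfrom(1024)  # blocks until a datagram arrives
print(data, addr)
recv_sock.close()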
import socket #create a TCP socket (SOCK_STREAM)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print('Socket created')
s
s.connect(("127.0.0.1", 9999))
# write to the socket
s.send(b"data")
# s.send(bytes("data", encoding="utf-8"))
# read from the socket
s.recv(1024)
Explanation: Exercise #1: send your first and last name over UDP
TCP (Transmission Control Protocol)
Delivers packets in the correct order
Acknowledges delivery
Detects errors and retransmits lost packets
Flow control of the message stream (so the receiver is not overwhelmed)
End of explanation
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
Explanation: Exercise #2 - send your first and last name over TCP
Exercise #3 - connect, send your surname, receive a question, send back an answer
HTTP (HyperText Transfer Protocol)
Runs on top of TCP
Example of a client request:
```
GET / HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate, compress
Host: httpbin.org
User-Agent: HTTPie/0.3.1
```
Example of a server response:
```
HTTP/1.1 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Length: 12150
Content-Type: text/html; charset=utf-8
Date: Sun, 30 Oct 2016 13:48:11 GMT
Server: nginx
<!DOCTYPE html>
<html>
<head>
...
```
HTTP2
UDP server
Creating the socket
End of explanation
sock.bind(("127.0.0.1", 6668))
Explanation: Binding to an interface and port
End of explanation
data, addr = sock.recvfrom(1024)
print("received message:", data, "from", addr)
Explanation: Receiving data from the client
End of explanation
sock.close()
Explanation: Closing the socket
End of explanation
# Show telnet!!!
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("127.0.0.1", 6667))
sock.listen(5)
client_sock, addr = sock.accept()
print("connected", addr)
data = client_sock.recv(1024)
print(data)
client_sock.close()
sock.close()
Explanation: TCP server
End of explanation
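The cell above serves exactly one client and then closes everything; a real server wraps accept() in a loop. A sketch, not from the lecture (port 6670 is arbitrary, chosen to avoid the one already bound above):
import socket
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 6670))
srv.listen(5)
while True:
    client, addr = srv.accept()
    print("connected", addr)
    client.sendall(client.recv(1024))  # echo the first chunk back
    client.close()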
import struct
struct.pack("i", 123)
struct.unpack("i", b'{\x00\x00\x00')
Explanation: Examples of TCP and UDP servers
Examples are in the folder that comes with the lecture:
a simple UDP echo server
a UDP echo server built on asyncio
a simple TCP echo server
a TCP echo server built on asyncio
a multithreaded TCP chat server
an asynchronous TCP chat server with an event loop
an asynchronous TCP chat server with an event loop and a message protocol
the struct module
End of explanation
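The struct calls above are the building block of such a message protocol: TCP is a byte stream with no message boundaries, so a common approach is to prefix every payload with its length. A short sketch, added here and not part of the original examples:
import struct
payload = "hello".encode("utf-8")
framed = struct.pack("i", len(payload)) + payload   # 4-byte length prefix + payload
(length,) = struct.unpack("i", framed[:4])          # receiver reads the prefix...
print(length, framed[4:4 + length].decode("utf-8")) # ...then exactly that many bytes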
import urllib.request
f = urllib.request.urlopen('http://www.python.org/', timeout=3)
print(f.read().decode('utf-8')[:100])
import requests
resp = requests.get('http://www.python.org/', timeout=3)
print(resp.text[:100])
Explanation: HTTP client
End of explanation
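To connect this back to the raw sockets used earlier, here is a sketch (not in the original lecture) of issuing the same kind of GET request by hand over TCP:
import socket
http_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
http_sock.connect(("httpbin.org", 80))
http_sock.send(b"GET /get HTTP/1.1\r\nHost: httpbin.org\r\nConnection: close\r\n\r\n")
print(http_sock.recv(4096).decode("utf-8", errors="replace"))  # first chunk of the response
http_sock.close()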
import urllib.request
auth_handler = urllib.request.HTTPBasicAuthHandler()
auth_handler.add_password(
realm='PDQ Application',
uri='https://mahler:8092/site-updates.py',
user='user',
passwd='pass'
)
opener = urllib.request.build_opener(auth_handler)
urllib.request.install_opener(opener)
#urllib.request.urlopen('http://www.example.com/login.html')
Explanation: An example showing why requests is much more convenient to use:
End of explanation
import requests
requests.get('https://api.github.com/user', auth=('user', 'pass'))
Explanation: vs
End of explanation |
906 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Model Category 4: ENTSO-E + Calendar
Step1: Module imports
Step2: Overall configuration
These parameters are later used, but shouldn't have to change between different model categories (model 1-5)
Step3: Preparation and model generation
Necessary preliminary steps and then the generation of all possible models based on the settings at the top of this notebook.
Step4: Loading the data
Step5: Running through all generated models
Note: Depending on the above settings, this can take very long!
Step6: Model selection based on the validation MAE
Select the top 5 models based on the Mean Absolute Error in the validation data
Step7: Evaluate top 5 models | Python Code:
# Model category name used throughout the subsequent analysis
model_cat_id = "04"
# Which features from the dataset should be loaded:
# ['all', 'actual', 'entsoe', 'weather_t', 'weather_i', 'holiday', 'weekday', 'hour', 'month']
features = ['actual', 'entsoe', 'calendar']
# LSTM Layer configuration
# ========================
# Stateful True or false
layer_conf = [ True, True, True ]
# Number of neurons per layer
cells = [[ 5, 10, 20, 30, 50, 75, 100, 125, 150 ], [0, 10, 20, 50], [0, 10, 15, 20]]
# Regularization per layer
dropout = [0, 0.1, 0.2]
# Size of how many samples are used for one forward/backward pass
batch_size = [8]
# In a sense this is the output neuron dimension, or how many timesteps the neuron should output. Currently not implemented, defaults to 1.
timesteps = [1]
Explanation: Model Category 4: ENTSO-E + Calendar
The fourth model category will use ENTSO-E and calendar features to create a forecast for the electricity load.
Model category specific configuration
These parameters are model category specific
End of explanation
import os
import sys
import math
import itertools
import datetime as dt
import pytz
import time as t
import numpy as np
import pandas as pd
from pandas import read_csv
from pandas import datetime
from numpy import newaxis
import matplotlib as mpl
import matplotlib.pyplot as plt
import scipy.stats as stats
from statsmodels.tsa import stattools
from tabulate import tabulate
import math
import keras as keras
from keras import backend as K
from keras.models import Sequential
from keras.layers import Activation, Dense, Dropout, LSTM
from keras.callbacks import TensorBoard
from keras.utils import np_utils
from keras.models import load_model
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error, mean_absolute_error
from IPython.display import HTML
from IPython.display import display
%matplotlib notebook
mpl.rcParams['figure.figsize'] = (9,5)
# Import custom module functions
module_path = os.path.abspath(os.path.join('../'))
if module_path not in sys.path:
sys.path.append(module_path)
from lstm_load_forecasting import data, lstm
Explanation: Module imports
End of explanation
# Directory with dataset
path = os.path.join(os.path.abspath(''), '../data/fulldataset.csv')
# Splitdate for train and test data. As the TBATS and ARIMA benchmark needs 2 full cycle of all seasonality, needs to be after jan 01.
loc_tz = pytz.timezone('Europe/Zurich')
split_date = loc_tz.localize(dt.datetime(2017,2,1,0,0,0,0))
# Validation split percentage
validation_split = 0.2
# How many epochs in total
epochs = 30
# Set verbosity level. 0 for only per model, 1 for progress bar...
verbose = 0
# Dataframe containing the relevant data from training of all models
results = pd.DataFrame(columns=['model_name', 'config', 'dropout',
'train_loss', 'train_rmse', 'train_mae', 'train_mape',
'valid_loss', 'valid_rmse', 'valid_mae', 'valid_mape',
'test_rmse', 'test_mae', 'test_mape',
'epochs', 'batch_train', 'input_shape',
'total_time', 'time_step', 'splits'
])
# Early stopping parameters
early_stopping = True
min_delta = 0.006
patience = 2
Explanation: Overall configuration
These parameters are later used, but shouldn't have to change between different model categories (model 1-5)
End of explanation
# Generate output folders and files
res_dir = '../results/notebook_' + model_cat_id + '/'
plot_dir = '../plots/notebook_' + model_cat_id + '/'
model_dir = '../models/notebook_' + model_cat_id + '/'
os.makedirs(res_dir, exist_ok=True)
os.makedirs(model_dir, exist_ok=True)
output_table = res_dir + model_cat_id + '_results_' + t.strftime("%Y%m%d") + '.csv'
test_output_table = res_dir + model_cat_id + '_test_results' + t.strftime("%Y%m%d") + '.csv'
# Generate model combinations
models = []
models = lstm.generate_combinations(
model_name=model_cat_id + '_', layer_conf=layer_conf, cells=cells, dropout=dropout,
batch_size=batch_size, timesteps=[1])
Explanation: Preparation and model generation
Necessary preliminary steps and then the generation of all possible models based on the settings at the top of this notebook.
End of explanation
# Load data and prepare for standardization
df = data.load_dataset(path=path, modules=features)
df_scaled = df.copy()
df_scaled = df_scaled.dropna()
# Get all float type columns and standardize them
floats = [key for key in dict(df_scaled.dtypes) if dict(df_scaled.dtypes)[key] in ['float64']]
scaler = StandardScaler()
scaled_columns = scaler.fit_transform(df_scaled[floats])
df_scaled[floats] = scaled_columns
# Split in train and test dataset
df_train = df_scaled.loc[(df_scaled.index < split_date )].copy()
df_test = df_scaled.loc[df_scaled.index >= split_date].copy()
# Split in features and label data
y_train = df_train['actual'].copy()
X_train = df_train.drop('actual', 1).copy()
y_test = df_test['actual'].copy()
X_test = df_test.drop('actual', 1).copy()
Explanation: Loading the data:
End of explanation
start_time = t.time()
for idx, m in enumerate(models):
stopper = t.time()
print('========================= Model {}/{} ========================='.format(idx+1, len(models)))
print(tabulate([['Starting with model', m['name']], ['Starting time', datetime.fromtimestamp(stopper)]],
tablefmt="jira", numalign="right", floatfmt=".3f"))
try:
# Creating the Keras Model
model = lstm.create_model(layers=m['layers'], sample_size=X_train.shape[0], batch_size=m['batch_size'],
timesteps=m['timesteps'], features=X_train.shape[1])
# Training...
history = lstm.train_model(model=model, mode='fit', y=y_train, X=X_train,
batch_size=m['batch_size'], timesteps=m['timesteps'], epochs=epochs,
rearrange=False, validation_split=validation_split, verbose=verbose,
early_stopping=early_stopping, min_delta=min_delta, patience=patience)
# Write results
min_loss = np.min(history.history['val_loss'])
min_idx = np.argmin(history.history['val_loss'])
min_epoch = min_idx + 1
if verbose > 0:
print('______________________________________________________________________')
print(tabulate([['Minimum validation loss at epoch', min_epoch, 'Time: {}'.format(t.time()-stopper)],
['Training loss & MAE', history.history['loss'][min_idx], history.history['mean_absolute_error'][min_idx] ],
['Validation loss & mae', history.history['val_loss'][min_idx], history.history['val_mean_absolute_error'][min_idx] ],
], tablefmt="jira", numalign="right", floatfmt=".3f"))
print('______________________________________________________________________')
result = [{'model_name': m['name'], 'config': m, 'train_loss': history.history['loss'][min_idx], 'train_rmse': 0,
'train_mae': history.history['mean_absolute_error'][min_idx], 'train_mape': 0,
'valid_loss': history.history['val_loss'][min_idx], 'valid_rmse': 0,
'valid_mae': history.history['val_mean_absolute_error'][min_idx],'valid_mape': 0,
'test_rmse': 0, 'test_mae': 0, 'test_mape': 0, 'epochs': '{}/{}'.format(min_epoch, epochs), 'batch_train':m['batch_size'],
'input_shape':(X_train.shape[0], timesteps, X_train.shape[1]), 'total_time':t.time()-stopper,
'time_step':0, 'splits':str(split_date), 'dropout': m['layers'][0]['dropout']
}]
results = results.append(result, ignore_index=True)
# Saving the model and weights
model.save(model_dir + m['name'] + '.h5')
# Write results to csv
results.to_csv(output_table, sep=';')
K.clear_session()
import tensorflow as tf
tf.reset_default_graph()
# Shouldn't catch all errors, but for now...
except BaseException as e:
print('=============== ERROR {}/{} ============='.format(idx+1, len(models)))
print(tabulate([['Model:', m['name']], ['Config:', m]], tablefmt="jira", numalign="right", floatfmt=".3f"))
print('Error: {}'.format(e))
result = [{'model_name': m['name'], 'config': m, 'train_loss': str(e)}]
results = results.append(result, ignore_index=True)
results.to_csv(output_table,sep=';')
continue
Explanation: Running through all generated models
Note: Depending on the above settings, this can take very long!
End of explanation
# Number of the selected top models
selection = 5
# If run in the same instance not necessary. If run on the same day, then just use output_table
results_fn = res_dir + model_cat_id + '_results_' + '20170616' + '.csv'
results_csv = pd.read_csv(results_fn, delimiter=';')
top_models = results_csv.nsmallest(selection, 'valid_mae')
Explanation: Model selection based on the validation MAE
Select the top 5 models based on the Mean Absolute Error in the validation data:
http://scikit-learn.org/stable/modules/model_evaluation.html#mean-absolute-error
End of explanation
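For reference, the ranking metric itself is just the mean absolute difference between observed and predicted load; a minimal sketch with made-up numbers (not from the notebook):
import numpy as np
y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.1, 1.9, 3.4])
print(np.mean(np.abs(y_true - y_pred)))  # mean absolute error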
# Init test results table
test_results = pd.DataFrame(columns=['Model name', 'Mean absolute error', 'Mean squared error'])
# Init empty predictions
predictions = {}
# Loop through models
for index, row in top_models.iterrows():
filename = model_dir + row['model_name'] + '.h5'
model = load_model(filename)
batch_size = int(row['batch_train'])
# Calculate scores
loss, mae = lstm.evaluate_model(model=model, X=X_test, y=y_test, batch_size=batch_size, timesteps=1, verbose=verbose)
# Store results
result = [{'Model name': row['model_name'],
'Mean squared error': loss, 'Mean absolute error': mae
}]
test_results = test_results.append(result, ignore_index=True)
# Generate predictions
model.reset_states()
model_predictions = lstm.get_predictions(model=model, X=X_test, batch_size=batch_size, timesteps=timesteps[0], verbose=verbose)
# Save predictions
predictions[row['model_name']] = model_predictions
K.clear_session()
import tensorflow as tf
tf.reset_default_graph()
test_results = test_results.sort_values('Mean absolute error', ascending=True)
test_results = test_results.set_index(['Model name'])
if not os.path.isfile(test_output_table):
test_results.to_csv(test_output_table, sep=';')
else: # else it exists so append without writing the header
test_results.to_csv(test_output_table,mode = 'a',header=False, sep=';')
print('Test dataset performance of the best {} (out of {} tested models):'.format(min(selection, len(models)), len(models)))
print(tabulate(test_results, headers='keys', tablefmt="grid", numalign="right", floatfmt=".3f"))
Explanation: Evaluate top 5 models
End of explanation |
907 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create directory if not exists
Step1: Mkdir -p behaviour
Step2: Delete directory
Including subdirectories, if they exist!
foo
└── bar
├── baz
│ └── some-other-file.txt
└── some-file.txt | Python Code:
import os

new_directory_path = "/path/to/new/directory"
if not os.path.exists(new_directory_path):
    os.mkdir(new_directory_path)
Explanation: Create directory if not exists
End of explanation
new_directory_path = "/path/to/new/directory"
if not os.path.exists(new_directory_path):
os.makedirs(new_directory_path)
Explanation: Mkdir -p behaviour
End of explanation
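As an aside (Python 3.2+, not part of the original snippet), the exist_ok flag collapses the check-then-create pattern into a single call:
import os
os.makedirs("/path/to/new/directory", exist_ok=True)  # no error if it already exists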
import shutil
directory_path_to_remove = "/path/to/foo/"
if os.path.exists(directory_path_to_remove):
shutil.rmtree(directory_path_to_remove)
Explanation: Delete directory
Including subdirectories, if they exist!
foo
└── bar
├── baz
│ └── some-other-file.txt
└── some-file.txt
End of explanation |
908 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Network Tour of Data Science
Xavier Bresson, Winter 2016/17
Exercise 4 - Code 1
Step1: Question 1a
Step2: Question 1b
Step3: Question 1c
Step4: Question 1d
Step5: Question 2a
Step6: Question 2b | Python Code:
# Load libraries
# Math
import numpy as np
# Visualization
%matplotlib notebook
import matplotlib.pyplot as plt
plt.rcParams.update({'figure.max_open_warning': 0})
from mpl_toolkits.axes_grid1 import make_axes_locatable
from scipy import ndimage
# Print output of LFR code
import subprocess
# Sparse matrix
import scipy.sparse
import scipy.sparse.linalg
# 3D visualization
import pylab
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import pyplot
# Import data
import scipy.io
# Import functions in lib folder
import sys
sys.path.insert(1, 'lib')
# Import helper functions
%load_ext autoreload
%autoreload 2
from lib.utils import compute_ncut
from lib.utils import reindex_W_with_classes
from lib.utils import nldr_visualization
from lib.utils import construct_knn_graph
from lib.utils import compute_purity
# Import distance function
import sklearn.metrics.pairwise
# Remove warnings
import warnings
warnings.filterwarnings("ignore")
# Load 10 classes of 4,000 text documents
mat = scipy.io.loadmat('datasets/20news_5classes_raw_data.mat')
X = mat['X']
n = X.shape[0]
d = X.shape[1]
Cgt = mat['Cgt'] - 1; Cgt = Cgt.squeeze()
nc = len(np.unique(Cgt))
print('Number of data =',n)
print('Data dimensionality =',d);
print('Number of classes =',nc);
Explanation: A Network Tour of Data Science
Xavier Bresson, Winter 2016/17
Exercise 4 - Code 1 : Graph Science
Construct Network of Text Documents
End of explanation
# Your code here
Explanation: Question 1a: Compute the k-NN graph (k=10) with L2/Euclidean distance<br>
Hint: You may use the function W=construct_knn_graph(X,k,'euclidean')
End of explanation
# Your code here
Explanation: Question 1b: Plot the adjacency matrix of the graph. <br>
Hint: Use function plt.spy(W, markersize=1)
End of explanation
# Your code here
# Your code here
Explanation: Question 1c: Reindex the adjacency matrix of the graph w.r.t. ground
truth communities. Plot the reindexed adjacency matrix of the graph.<br>
Hint: You may use the function [W_reindex,C_classes_reindex]=reindex_W_with_classes(W,C_classes).
End of explanation
# Your code here
Explanation: Question 1d: Perform graph clustering with NCut technique. What is
the clustering accuracy of the NCut solution? What is the clustering
accuracy of a random partition? Reindex the adjacency matrix of the
graph w.r.t. NCut communities. Plot the reindexed adjacency matrix of
the graph.<br>
Hint: You may use function C_ncut, accuracy = compute_ncut(W,C_solution,n_clusters) that performs Ncut clustering.<br>
Hint: You may use function accuracy = compute_purity(C_computed,C_solution,n_clusters) that returns the
accuracy of a computed partition w.r.t. the ground truth partition. A
random partition can be generated with the function np.random.randint.
End of explanation
# Reload data matrix
X = mat['X']
# Your code here
# Compute the k-NN graph with Cosine distance
# Your code here
# Your code here
# Your code here
Explanation: Question 2a: Compute the k-NN graph (k=10) with Cosine distance.<br>
Answer to questions 1b-1d for this graph.<br>
Hint: You may use function W=construct_knn_graph(X,10,'cosine').
End of explanation
# Your code here
Explanation: Question 2b: Visualize the adjacency matrix with the non-linear reduction
technique in 2D and 3D. <br>
Hint: You may use function [X,Y,Z] = nldr_visualization(W).<br>
Hint: You may use function plt.scatter(X,Y,c=Cncut) for 2D visualization and ax.scatter(X,Y,Z,c=Cncut) for 3D visualization.
End of explanation |
909 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mpi-m', 'mpi-esm-1-2-hr', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: MPI-M
Source ID: MPI-ESM-1-2-HR
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:17
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
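A hedged illustration of the call pattern documented in the comment above; the name and e-mail are placeholders, not the actual document authors:
# Example only -- replace with the real author details
DOC.set_author("Jane Doe", "jane.doe@example.org")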
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
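As an illustration of filling in a STRING property, the model name would presumably match the Source ID shown in the notebook header; confirm the exact spelling against the official ES-DOC entry before publishing:
# Hedged example -- value taken from the header (Source ID: MPI-ESM-1-2-HR)
DOC.set_value("MPI-ESM-1-2-HR")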
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
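A hedged example of setting an ENUM property: the value must be one of the "Valid Choices" strings listed in the cell above. The coupler named here is for illustration only and should be confirmed by the model developers:
# Example only -- pick one of the listed choices
DOC.set_value("OASIS3-MCT")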
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
Coupling properties of the model
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
910 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Understanding Resnet Model Features
We know that the Resnet model works well, but why does it work? How can we have confidence that it is searching out the correct features? A recent paper, Axiomatic Attribution for Deep Networks, shows that averaging gradients taken along a path of images from a blank image (e.g. pure black or grey) to the actual image, can robustly predict sets of pixels that have a strong impact on the overall classification of the image. The below code shows how to modify the TF estimator code to analyze model behavior of different images.
Step1: Constants
Step2: Download model checkpoint
The next step is to load the researcher's saved checkpoint into our estimator. We will download it from http://download.tensorflow.org/models/official/resnet50_2017_11_30.tar.gz using the following commands.
Step3: Import the Model Architecture
In order to reconstruct the Resnet neural network used to train the Imagenet model, we need to load the architecture pieces. During the setup step, we checked out https://github.com/tensorflow/models/tree/v1.4.0/official/resnet. We can now load functions and constants from resnet_model.py into the notebook.
Step5: Image preprocessing functions
Note that preprocessing functions are called during training as well (see https://github.com/tensorflow/models/blob/master/official/resnet/imagenet_main.py and https://github.com/tensorflow/models/blob/master/official/resnet/vgg_preprocessing.py), so we will need to extract relevant logic from these functions. Below is a simplified preprocessing code that normalizes the image's pixel values.
Step7: Resnet Model Functions
We are going to create two estimators here since we need to run two model predictions.
The first prediction computes the top labels for the image by returning the argmax_k top logits.
The second prediction returns a sequence of gradients along the straightline path from a purely grey image (127.5, 127.5, 127.5) to the final image. We use grey here because the resnet model transforms this pixel value to all 0s.
Below is the resnet model function.
Step9: Gradients Model Function
The Gradients model function takes as input a single image (a 4d tensor of dimension [1, 224, 224, 3]) and expands it to a series of images (tensor dimension [RIEMANN_STEPS + 1, 224, 224, 3]), where each image is simply a "fractional" image, with image 0 being pure gray and image RIEMANN_STEPS being the original image. The gradients are then computed for each of these images, and various outputs are returned.
Note: Each step is a single inference that returns an entire gradient pixel map. The total gradient map evaluation can take a couple minutes!
Step10: Estimators
Load in the model_fn using the checkpoints from MODEL_DIR. This will initialize our weights which we will then use to run backpropagation to find integrated gradients.
Step12: Create properly sized image in numpy
Load whatever image you would like (local or URL), and resize and pad it to 224 x 224 x 3 using PIL.
Step13: Prediction Input Function
Since we are analyzing the model using the estimator api, we need to provide an input function for prediction. Fortunately, there are built-in input functions that can read from numpy arrays, e.g. tf.estimator.inputs.numpy_input_fn.
Step14: Computing Gradients
Run the gradients estimator to retrieve a generator of metrics and gradient pictures, and pickle the images.
Step15: Visualization
If you simply want to play around with visualization, unpickle the result from above so you do not have to rerun prediction again. The following visualizes the gradients with different amplification of pixels, and prints their derivatives and logits as well to view where the biggest differentiators lie. You can also modify the INTERPOLATION flag to increase the "fatness" of pixels.
Below are two examples of visualization methods
Step16: Plot the Integrated Gradient
When integrating over all gradients along the path, the result is an image that captures larger signals from pixels with the large gradients. Is the integrated gradient a clear representation of what it is trying to detect?
Step17: Plot the integrated gradients for each channel
We can also visualize individual pixel contributions from different RGB channels.
Can you think of any other visualization ideas to try out? | Python Code:
import csv
import io
import matplotlib.pyplot as plt
import numpy as np
import os
import pickle
import requests
import tensorflow as tf
from io import BytesIO
from PIL import Image
from subprocess import call
Explanation: Understanding Resnet Model Features
We know that the Resnet model works well, but why does it work? How can we have confidence that it is searching out the correct features? A recent paper, Axiomatic Attribution for Deep Networks, shows that averaging gradients taken along a path of images from a blank image (e.g. pure black or grey) to the actual image, can robustly predict sets of pixels that have a strong impact on the overall classification of the image. The below code shows how to modify the TF estimator code to analyze model behavior of different images.
End of explanation
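For reference, the quantity this notebook approximates is the integrated gradient of the chosen class logit $F$ along the straight-line path from the grey baseline $x'$ to the input image $x$; with $m$ Riemann steps (the RIEMANN_STEPS constant defined below) it is roughly
$$\mathrm{IG}_i(x) \;\approx\; (x_i - x'_i)\,\frac{1}{m}\sum_{k=1}^{m}\frac{\partial F\big(x' + \tfrac{k}{m}(x - x')\big)}{\partial x_i}.$$
This is the standard formulation from the Axiomatic Attribution paper, restated here in our own notation for clarity.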
_DEFAULT_IMAGE_SIZE = 224
_NUM_CHANNELS = 3
_LABEL_CLASSES = 1001
RESNET_SIZE = 50 # We're loading a resnet-50 saved model.
# Model directory
MODEL_DIR='resnet_model_checkpoints'
VIS_DIR='visualization'
# RIEMANN STEPS is the number of steps in a Riemann Sum.
# This is used to compute an approximation of the integral of gradients by supplying
# images on the path from a blank image to the original image.
RIEMANN_STEPS = 30
# Return the top k classes and probabilities, so we can also visualize model inference
# against other contending classes besides the most likely class.
TOP_K = 5
Explanation: Constants
End of explanation
import urllib.request
urllib.request.urlretrieve("http://download.tensorflow.org/models/official/resnet50_2017_11_30.tar.gz", "resnet.tar.gz")
# unzip the archive into the model checkpoint directory (MODEL_DIR)
call(["mkdir", MODEL_DIR])
call(["tar", "-zxvf", "resnet.tar.gz", "-C", MODEL_DIR])
# Make sure you see model checkpoint files in this directory
os.listdir(MODEL_DIR)
Explanation: Download model checkpoint
The next step is to load the researcher's saved checkpoint into our estimator. We will download it from
http://download.tensorflow.org/models/official/resnet50_2017_11_30.tar.gz using the following commands.
End of explanation
%run ../models/official/resnet/resnet_model.py #TODO: modify directory based on where you git cloned the TF models.
Explanation: Import the Model Architecture
In order to reconstruct the Resnet neural network used to train the Imagenet model, we need to load the architecture pieces. During the setup step, we checked out https://github.com/tensorflow/models/tree/v1.4.0/official/resnet. We can now load functions and constants from resnet_model.py into the notebook.
End of explanation
def preprocess_images(images):
Preprocesses the image by subtracting out the mean from all channels.
Args:
image: A 4D `Tensor` representing a batch of images.
Returns:
image pixels normalized to be between -0.5 and 0.5
return tf.to_float(images) / 255 - 0.5
Explanation: Image preprocessing functions
Note that preprocessing functions are called during training as well (see https://github.com/tensorflow/models/blob/master/official/resnet/imagenet_main.py and https://github.com/tensorflow/models/blob/master/official/resnet/vgg_preprocessing.py), so we will need to extract relevant logic from these functions. Below is a simplified preprocessing code that normalizes the image's pixel values.
For simplicity, we assume the client provides properly-sized images 224 x 224 x 3 in batches. It will become clear later that sending images over ip in protobuf format can be more easily handled by storing a 4d tensor. The only preprocessing required here is to subtract the mean.
End of explanation
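A quick sanity check (a sketch, not part of the original notebook) that the grey baseline (127.5, 127.5, 127.5) maps to zeros under this preprocessing, which is why the gradient path later starts from a zero tensor:
# Hedged sketch: grey pixels become 0 after preprocessing (TF 1.x graph mode)
grey = tf.constant(127.5, shape=[1, 1, 1, 3])
with tf.Session() as sess:
    print(sess.run(preprocess_images(grey)))  # expect [[[[0. 0. 0.]]]]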
def resnet_model_fn(features, labels, mode):
Our model_fn for ResNet to be used with our Estimator.
# Preprocess images as necessary for resnet
features = preprocess_images(features['images'])
# This network must be IDENTICAL to that used to train.
network = imagenet_resnet_v2(RESNET_SIZE, _LABEL_CLASSES)
# tf.estimator.ModeKeys.TRAIN will be false since we are predicting.
logits = network(
inputs=features, is_training=(mode == tf.estimator.ModeKeys.TRAIN))
# Instead of the top 1 result, we can now return top k!
top_k_logits, top_k_classes = tf.nn.top_k(logits, k=TOP_K)
top_k_probs = tf.nn.softmax(top_k_logits)
predictions = {
'classes': top_k_classes,
'probabilities': top_k_probs
}
return tf.estimator.EstimatorSpec(
mode=mode,
predictions=predictions,
)
Explanation: Resnet Model Functions
We are going to create two estimators here since we need to run two model predictions.
The first prediction computes the top labels for the image by returning the argmax_k top logits.
The second prediction returns a sequence of gradients along the straightline path from a purely grey image (127.5, 127.5, 127.5) to the final image. We use grey here because the resnet model transforms this pixel value to all 0s.
Below is the resnet model function.
End of explanation
def gradients_model_fn(features, labels, mode):
Our model_fn for ResNet to be used with our Estimator.
# Supply the most likely class from features dict to determine which logit function
# to use gradients along the
most_likely_class = features['most_likely_class']
# Features here is a 4d tensor of ONE image. Normalize it as in training and serving.
features = preprocess_images(features['images'])
# This network must be IDENTICAL to that used to train.
network = imagenet_resnet_v2(RESNET_SIZE, _LABEL_CLASSES)
# path_features should have dim [RIEMANN_STEPS + 1, 224, 224, 3]
path_features = tf.zeros([1, 224, 224, 3])
for i in range(1, RIEMANN_STEPS + 1):
path_features = tf.concat([path_features, features * i / RIEMANN_STEPS], axis=0)
# Path logits should evaluate logits for each path feature and return a 2d array for all path images and classes
path_logits = network(inputs=path_features, is_training=(mode == tf.estimator.ModeKeys.TRAIN))
# The logit we care about is only that pertaining to the most likely class
# The most likely class contains only a single integer, so retrieve it.
target_logits = path_logits[:, most_likely_class[0]]
# Compute gradients for each image with respect to each logit
gradients = tf.gradients(target_logits, path_features)
# Multiply elementwise to the original image to get weighted gradients for each pixel.
gradients = tf.squeeze(tf.multiply(gradients, features))
predictions = {
'path_features': path_features, # for debugging and for displaying path images
'path_logits': path_logits, # for debugging
'target_logits': target_logits, # use this to verify that the riemann integral works out
'gradients': gradients # for displaying gradient images and computing integrated gradient
}
return tf.estimator.EstimatorSpec(
mode=mode,
predictions=predictions, # This is the returned value
)
Explanation: Gradients Model Function
The Gradients model function takes as input a single image (a 4d tensor of dimension [1, 224, 224, 3]) and expands it to a series of images (tensor dimension [RIEMANN_STEPS + 1, 224, 224, 3]), where each image is simply a "fractional" image, with image 0 being pure gray to image RIEMANN_STEPS being the original image. The gradients are then computed for each of these images, and various outputs are returned.
Note: Each step is a single inference that returns an entire gradient pixel map.
The total gradient map evaluation can take a couple minutes!
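For reference, the per-pixel quantity this function approximates is the integrated gradient written as a Riemann sum (our notation, with the all-grey baseline mapping to zeros after preprocessing):
$$\mathrm{IG}_i(x) \;\approx\; x_i \cdot \frac{1}{m+1}\sum_{k=0}^{m} \frac{\partial F\!\left(\tfrac{k}{m}\,x\right)}{\partial x_i}, \qquad m = \mathrm{RIEMANN\_STEPS},$$
so that summing over pixels approximately recovers $F(x) - F(\text{baseline})$, which is the check printed after the integrated-gradient plot further below.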
End of explanation
# Load this model into our estimator
resnet_estimator = tf.estimator.Estimator(
model_fn=resnet_model_fn, # Call our generate_model_fn to create model function
model_dir=MODEL_DIR, # Where to look for model checkpoints
#config not needed
)
gradients_estimator = tf.estimator.Estimator(
model_fn=gradients_model_fn, # Call our generate_model_fn to create model function
model_dir=MODEL_DIR, # Where to look for model checkpoints
#config not needed
)
Explanation: Estimators
Load in the model_fn using the checkpoints from MODEL_DIR. This will initialize our weights which we will then use to run backpropagation to find integrated gradients.
End of explanation
def resize_and_pad_image(img, output_image_dim):
"""Resize the image to make it IMAGE_DIM x IMAGE_DIM pixels in size.
If an image is not square, it will pad the top/bottom or left/right
with black pixels to ensure the image is square.
Args:
img: the input 3-color image
output_image_dim: resized and padded output length (and width)
Returns:
resized and padded image
"""
old_size = img.size # old_size is in (width, height) format
ratio = float(output_image_dim) / max(old_size)
new_size = tuple([int(x * ratio) for x in old_size])
# use thumbnail() or resize() method to resize the input image
# thumbnail is an in-place operation
# im.thumbnail(new_size, Image.ANTIALIAS)
scaled_img = img.resize(new_size, Image.ANTIALIAS)
# create a new image and paste the resized on it
padded_img = Image.new("RGB", (output_image_dim, output_image_dim))
padded_img.paste(scaled_img, ((output_image_dim - new_size[0]) // 2,
(output_image_dim - new_size[1]) // 2))
return padded_img
IMAGE_PATH = 'https://www.popsci.com/sites/popsci.com/files/styles/1000_1x_/public/images/2017/09/depositphotos_33210141_original.jpg?itok=MLFznqbL&fc=50,50'
IMAGE_NAME = os.path.splitext(os.path.basename(IMAGE_PATH))[0]
print(IMAGE_NAME)
image = None
if 'http' in IMAGE_PATH:
resp = requests.get(IMAGE_PATH)
image = Image.open(BytesIO(resp.content))
else:
image = Image.open(IMAGE_PATH) # Parse the image from your local disk.
# Resize and pad the image
image = resize_and_pad_image(image, _DEFAULT_IMAGE_SIZE)
feature = np.asarray(image)
feature = np.array([feature])
# Display the image to validate
imgplot = plt.imshow(feature[0])
plt.show()
Explanation: Create properly sized image in numpy
Load whatever image you would like (local or url), and resize it to 224 x 224 x 3 using PIL.
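A small usage sketch (an addition, not from the original walkthrough) of the resize_and_pad_image helper defined above, assuming PIL's Image and the _DEFAULT_IMAGE_SIZE constant from earlier in the notebook:
demo_img = Image.new("RGB", (300, 150), color=(255, 0, 0))  # deliberately non-square
demo_padded = resize_and_pad_image(demo_img, _DEFAULT_IMAGE_SIZE)
print(demo_padded.size)  # expected (_DEFAULT_IMAGE_SIZE, _DEFAULT_IMAGE_SIZE): square, with black padding added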
End of explanation
label_predictions = resnet_estimator.predict(
tf.estimator.inputs.numpy_input_fn(
x={'images': feature},
shuffle=False
)
)
label_dict = next(label_predictions)
# Print out probabilities and class names
classval = label_dict['classes']
probsval = label_dict['probabilities']
labels = []
with open('client/imagenet1000_clsid_to_human.txt', 'r') as f:
label_reader = csv.reader(f, delimiter=':', quotechar='\'')
for row in label_reader:
labels.append(row[1][:-1])
# The served model uses 0 as the miscellaneous class, and so starts indexing
# the imagenet images from 1. Subtract 1 to reference the text correctly.
classval = [labels[x - 1] for x in classval]
class_and_probs = [str(p) + ' : ' + c for c, p in zip(classval, probsval)]
for j in range(0, 5):
print(class_and_probs[j])
Explanation: Prediction Input Function
Since we are analyzing the model using the estimator api, we need to provide an input function for prediction. Fortunately, there are built-in input functions that can read from numpy arrays, e.g. tf.estimator.inputs.numpy_input_fn.
End of explanation
# make the visualization directory
IMAGE_DIR = os.path.join(VIS_DIR, IMAGE_NAME)
call(['mkdir', '-p', IMAGE_DIR])
# Get one of the top classes. 0 picks out the best, 1 picks out second best, etc...
best_label = label_dict['classes'][0]
# Compute gradients with respect to this class
gradient_predictions = gradients_estimator.predict(
tf.estimator.inputs.numpy_input_fn(
x={'images': feature, 'most_likely_class': np.array([best_label])},
shuffle=False
)
)
# Start computing the sum of gradients (to be used for integrated gradients)
int_gradients = np.zeros((224, 224, 3))
gradients_and_logits = []
# Print gradients along the path, and pickle them
for i in range(0, RIEMANN_STEPS + 1):
gradient_dict = next(gradient_predictions)
gradient_map = gradient_dict['gradients']
print('Path image %d: gradient: %f, logit: %f' % (i, np.sum(gradient_map), gradient_dict['target_logits']))
# Gradient visualization output pickles
pickle.dump(gradient_map, open(os.path.join(IMAGE_DIR, 'path_gradient_' + str(i) + '.pkl'), "wb" ))
int_gradients = np.add(int_gradients, gradient_map)
gradients_and_logits.append((np.sum(gradient_map), gradient_dict['target_logits']))
pickle.dump(int_gradients, open(os.path.join(IMAGE_DIR, 'int_gradients.pkl'), "wb" ))
pickle.dump(gradients_and_logits, open(os.path.join(IMAGE_DIR, 'gradients_and_logits.pkl'), "wb" ))
Explanation: Computing Gradients
Run the gradients estimator to retrieve a generator of metrics and gradient pictures, and pickle the images.
End of explanation
AMPLIFICATION = 2.0
INTERPOLATION = 'none'
gradients_and_logits = pickle.load(open(os.path.join(IMAGE_DIR, 'gradients_and_logits.pkl'), "rb" ))
for i in range(0, RIEMANN_STEPS + 1):
gradient_map = pickle.load(open(os.path.join(IMAGE_DIR, 'path_gradient_' + str(i) + '.pkl'), "rb" ))
min_grad = np.ndarray.min(gradient_map)
max_grad = np.ndarray.max(gradient_map)
median_grad = np.median(gradient_map)
gradient_and_logit = gradients_and_logits[i]
plt.figure(figsize=(10,10))
plt.subplot(121)
plt.title('Image %d: grad: %.2f, logit: %.2f' % (i, gradient_and_logit[0], gradient_and_logit[1]))
imgplot = plt.imshow((gradient_map - min_grad) / (max_grad - min_grad),
interpolation=INTERPOLATION)
plt.subplot(122)
plt.title('Image %d: grad: %.2f, logit: %.2f' % (i, gradient_and_logit[0], gradient_and_logit[1]))
imgplot = plt.imshow(np.abs(gradient_map - median_grad) * AMPLIFICATION / max(max_grad - median_grad, median_grad - min_grad),
interpolation=INTERPOLATION)
plt.show()
Explanation: Visualization
If you simply want to play around with visualization, unpickle the result from above so you do not have to rerun prediction again. The following visualizes the gradients with different amplification of pixels, and prints their derivatives and logits as well to view where the biggest differentiators lie. You can also modify the INTERPOLATION flag to increase the "fatness" of pixels.
Below are two examples of visualization methods: one computing the gradient value normalized to between 0 and 1, and another visualizing absolute deviation from the median.
Plotting individual image gradients along path
First, let us plot the individual gradient value for all gradient path images. Pay special attention to the images with a large positive gradient (i.e. in the direction of increasing logit for the most likely class). Do the pixel gradients resemble the image class you are trying to detect?
End of explanation
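Optionally, the two normalizations used repeatedly below can be wrapped in a small helper (an addition for readability only; the plotting loops keep the original inline form):
def normalize_gradient_map(gradient_map, amplification=2.0, mode='minmax'):
    # 'minmax' rescales to [0, 1]; 'median' shows amplified absolute deviation from the median.
    min_grad = np.ndarray.min(gradient_map)
    max_grad = np.ndarray.max(gradient_map)
    median_grad = np.median(gradient_map)
    if mode == 'minmax':
        return (gradient_map - min_grad) / (max_grad - min_grad)
    return np.abs(gradient_map - median_grad) * amplification / max(max_grad - median_grad, median_grad - min_grad)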
AMPLIFICATION = 2.0
INTERPOLATION = 'none'
# Plot the integrated gradients
int_gradients = pickle.load(open(os.path.join(IMAGE_DIR, 'int_gradients.pkl'), "rb" ))
min_grad = np.ndarray.min(int_gradients)
max_grad = np.ndarray.max(int_gradients)
median_grad = np.median(int_gradients)
plt.figure(figsize=(15,15))
plt.subplot(131)
imgplot = plt.imshow((int_gradients - min_grad) / (max_grad - min_grad),
interpolation=INTERPOLATION)
plt.subplot(132)
imgplot = plt.imshow(np.abs(int_gradients - median_grad) * AMPLIFICATION / max(max_grad - median_grad, median_grad - min_grad),
interpolation=INTERPOLATION)
plt.subplot(133)
imgplot = plt.imshow(feature[0])
plt.show()
# Verify that the average of gradients is equal to the difference in logits
print('total logit diff: %f' % (gradients_and_logits[RIEMANN_STEPS][1] - gradients_and_logits[0][1]))
print('sum of integrated gradients: %f' % (np.sum(int_gradients) / (RIEMANN_STEPS + 1)))
Explanation: Plot the Integrated Gradient
When integrating over all gradients along the path, the result is an image that captures larger signals from pixels with the large gradients. Is the integrated gradient a clear representation of what it is trying to detect?
End of explanation
AMPLIFICATION = 2.0
INTERPOLATION = 'none'
# Show red-green-blue channels for integrated gradients
for channel in range(0, 3):
gradient_channel = int_gradients[:,:,channel]
min_grad = np.ndarray.min(gradient_channel)
max_grad = np.ndarray.max(gradient_channel)
median_grad = np.median(gradient_channel)
plt.figure(figsize=(10,10))
plt.subplot(121)
imgplot = plt.imshow((gradient_channel - min_grad) / (max_grad - min_grad),
interpolation=INTERPOLATION,
cmap='gray')
plt.subplot(122)
imgplot = plt.imshow(np.abs(gradient_channel - median_grad) * AMPLIFICATION / max(max_grad - median_grad, median_grad - min_grad),
interpolation=INTERPOLATION,
cmap='gray')
plt.show()
Explanation: Plot the integrated gradients for each channel
We can also visualize individual pixel contributions from different RGB channels.
Can you think of any other visualization ideas to try out?
End of explanation |
911 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spark SQL Tables via Pyspark
Goals
Step1: Create a SparkSession instance
Step2: Read data from the HELK Elasticsearch via Spark SQL
Step3: Read Sysmon Events
Step4: Register Sysmon SQL temporary View
Step5: Read PowerShell Events
Step6: Register PowerShell SQL temporary View | Python Code:
from pyspark.sql import SparkSession
Explanation: Spark SQL Tables via Pyspark
Goals:
Practice Spark SQL via PySpark skills
Ensure JupyterLab Server, Spark Cluster & Elasticsearch are communicating
Practice Query execution via Pyspark
Create template for future queries
Import SparkSession Class
End of explanation
spark = SparkSession.builder \
.appName("HELK Reader") \
.master("spark://helk-spark-master:7077") \
.enableHiveSupport() \
.getOrCreate()
Explanation: Create a SparkSession instance
End of explanation
es_reader = (spark.read
.format("org.elasticsearch.spark.sql")
.option("inferSchema", "true")
.option("es.read.field.as.array.include", "tags")
.option("es.nodes","helk-elasticsearch:9200")
.option("es.net.http.auth.user","elastic")
)
#PLEASE REMEMBER!!!!
#If you are using elastic TRIAL license, then you need the es.net.http.auth.pass config option set
#Example: .option("es.net.http.auth.pass","elasticpassword")
Explanation: Read data from the HELK Elasticsearch via Spark SQL
End of explanation
%%time
sysmon_df = es_reader.load("logs-endpoint-winevent-sysmon-*/")
Explanation: Read Sysmon Events
End of explanation
sysmon_df.createOrReplaceTempView("sysmon_events")
## Run SQL Queries
sysmon_ps_execution = spark.sql(
'''
SELECT event_id,process_parent_name,process_name
FROM sysmon_events
WHERE event_id = 1
AND process_name = "powershell.exe"
AND NOT process_parent_name = "explorer.exe"
'''
)
sysmon_ps_execution.show(10)
sysmon_ps_module = spark.sql(
'''
SELECT event_id,process_name
FROM sysmon_events
WHERE event_id = 7
AND (
lower(file_description) = "system.management.automation"
OR lower(module_loaded) LIKE "%\\\\system.management.automation%"
)
'''
)
sysmon_ps_module.show(10)
sysmon_ps_pipe = spark.sql(
'''
SELECT event_id,process_name
FROM sysmon_events
WHERE event_id = 17
AND lower(pipe_name) LIKE "\\\\pshost%"
'''
)
sysmon_ps_pipe.show(10)
Explanation: Register Sysmon SQL temporary View
End of explanation
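The same filter can also be expressed with the DataFrame API instead of a temporary view; a short sketch, assuming the sysmon_df DataFrame loaded above:
from pyspark.sql.functions import col
sysmon_ps_execution_df = (
    sysmon_df
    .filter((col("event_id") == 1) &
            (col("process_name") == "powershell.exe") &
            (col("process_parent_name") != "explorer.exe"))
    .select("event_id", "process_parent_name", "process_name")
)
sysmon_ps_execution_df.show(10)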
%%time
powershell_df = es_reader.load("logs-endpoint-winevent-powershell-*/")
Explanation: Read PowerShell Events
End of explanation
powershell_df.createOrReplaceTempView("powershell_events")
ps_named_pipe = spark.sql(
'''
SELECT event_id
FROM powershell_events
WHERE event_id = 53504
'''
)
ps_named_pipe.show(10)
Explanation: Register PowerShell SQL temporary View
End of explanation |
912 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Original basis Q0
Recovered basis Qc (Controlled-basis)
Effected basis Qeff = Qt.T * Q0
Use effected basis for error sampling
Learn Qt progressively better
When data comes in from the Qeff alignment, you must transform it back to the standard basis before averaging it with the existing channel estimate
Error x Time x Cycle_ratio
Step1: Regime 1 basis alignment
Step2: Regime 2 basis alignment
Step3: The only thing that matters | Python Code:
D = 0.01
N_ERRORS = 1e6
N_TRIALS = 100
N_CYCLES = np.logspace(1, 3, 10).astype(np.int)
RECORDS = []
for trial in tqdm(range(N_TRIALS)):
for n_cycles in N_CYCLES:
n = int(N_ERRORS / n_cycles)
channel = Channel(kx=0.7, ky=0.2, kz=0.1,
Q=np.linalg.qr(np.random.randn(3,3))[0],
n=n, d=D)
RECORDS.append({
"trial": trial,
"cycle_length": n,
"n_cycles": n_cycles,
"time": 0,
"Mdist": np.linalg.norm(channel.Mhat-channel.C),
"Qdist": np.linalg.norm(np.dot(channel.Qc.T, channel.Q) - np.eye(3))
})
for cycle in range(n_cycles):
channel.update()
RECORDS.append({
"trial": trial,
"cycle_length": n,
"n_cycles": n_cycles,
"time": (cycle+1)*n,
"Mdist": np.linalg.norm(channel.Mhat-channel.C),
"Qdist": np.linalg.norm(np.dot(channel.Qc.T, channel.Q) - np.eye(3))
})
df = pd.DataFrame(RECORDS)
df.to_csv("{}errorsd{}.csv".format(N_ERRORS,D))
df["cycle_length"] = (N_ERRORS / df["n_cycles"]).astype(np.int)
df.tail(10)
PAL = sns.color_palette("hls", len(N_CYCLES))
fig, ax = plt.subplots(1, 1, figsize=(8,6))
for idx, n_cycles in enumerate(N_CYCLES):
sel = (df["n_cycles"] == n_cycles)
subdf = df.loc[sel, :]
v = subdf.groupby("time").mean()
s = subdf.groupby("time").std()
t = v.index.values
y = v["Mdist"].values
e = s["Mdist"].values
ax.loglog(t, y, label=str(subdf.iloc[0, 2]), c=PAL[idx])
ax.fill_between(t, y-e, y+e, alpha=0.1, color=PAL[idx])
plt.title("Recover error over time for varied ratios of cycles to realignments")
plt.xlabel("Time [cycles]")
plt.ylabel("Basis recovery error")
plt.legend()
Explanation: Original basis Q0
Recovered basis Qc (Controlled-basis)
Effected basis Qeff = Qt.T * Q0
Use effected basis for error sampling
Learn Qt progressively better
When data comes in from the Qeff alignment, you must transform it back to the standard basis before averaging it with the existing channel estimate
Error x Time x Cycle_ratio
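Illustration of that rotation step (an assumption about the convention; the Channel class used below handles this internally and is not shown here):
# An estimate expressed in the effected basis Qeff is rotated back to the
# standard basis before it is averaged with the running channel estimate.
import numpy as np
Q_demo = np.linalg.qr(np.random.randn(3, 3))[0]   # stand-in for Qeff
M_eff = np.diag([0.7, 0.2, 0.1])                  # estimate in the effected basis
M_std = Q_demo @ M_eff @ Q_demo.T                 # same estimate in the standard basis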
End of explanation
D = 0.01
N_TRIALS = 100
MAX_N = int(1e8)
N_STEP = int(1e5)
RECORDS = []
for trial in tqdm(range(N_TRIALS)):
channel = Channel(kx=0.7, ky=0.2, kz=0.1,
Q=np.linalg.qr(np.random.randn(3,3))[0],
n=N_STEP, d=D)
pxhat, pyhat, pzhat = list(np.linalg.svd(channel.Mhat)[1])
RECORDS.append({
"trial": trial,
"time": 0,
"Mdist": np.linalg.norm(channel.Mhat-channel.C),
"Qdist": np.linalg.norm(np.dot(channel.Qc.T, channel.Q) - np.eye(3)),
"pxval": channel.kx, "pyval": channel.ky, "pzval": channel.kz,
"pxhat": pxhat, "pyhat": pyhat, "pzhat": pzhat
})
for time in range(0, MAX_N, N_STEP):
channel.update()
pxhat, pyhat, pzhat = list(np.linalg.svd(channel.Mhat)[1])
RECORDS.append({
"trial": trial,
"time": time,
"Mdist": np.linalg.norm(channel.Mhat-channel.C),
"Qdist": np.linalg.norm(np.dot(channel.Qc.T, channel.Q) - np.eye(3)),
"pxval": channel.kx, "pyval": channel.ky, "pzval": channel.kz,
"pxhat": pxhat, "pyhat": pyhat, "pzhat": pzhat
})
df = pd.DataFrame(RECORDS)
df.to_csv("regime1.csv")
df = pd.read_csv("regime1.csv")
v = df.groupby("time").mean()["Qdist"]
s = df.groupby("time").std()["Qdist"]
fig, ax = plt.subplots(1, 1, figsize=(8, 6))
t = v.index.values
y = v.values
e = s.values
ax.plot(t, y,)
ax.fill_between(t, y-e, y+e, alpha=0.25)
plt.ylabel("Measure of orthonormality between $Q_{hat}$ and $Q_{val}$")
plt.xlabel("Time [n_errors]")
df = pd.read_csv("regime1.csv")
v = df.groupby("time").mean()["Mdist"]
s = df.groupby("time").std()["Mdist"]
fig, ax = plt.subplots(1, 1, figsize=(8, 6))
t = v.index.values
y = v.values
e = s.values
ax.loglog(t, y,)
ax.fill_between(t, y-e, y+e, alpha=0.25)
plt.ylabel("Norm distance between $M_{hat}$ and $M_{val}$")
plt.xlabel("Time [n_errors]")
Explanation: Regime 1 basis alignment
End of explanation
D = 0.01
N_TRIALS = 100
MAX_N = int(1e8)
N_STEP = int(1e5)
RECORDS = []
for trial in tqdm(range(N_TRIALS)):
channel = Channel(kx=0.985, ky=0.01, kz=0.005,
Q=np.linalg.qr(np.random.randn(3,3))[0],
n=N_STEP, d=D)
pxhat, pyhat, pzhat = list(np.linalg.svd(channel.Mhat)[1])
RECORDS.append({
"trial": trial,
"time": 0,
"Mdist": np.linalg.norm(channel.Mhat-channel.C),
"Qdist": np.linalg.norm(np.dot(channel.Qc.T, channel.Q) - np.eye(3)),
"pxval": channel.kx, "pyval": channel.ky, "pzval": channel.kz,
"pxhat": pxhat, "pyhat": pyhat, "pzhat": pzhat
})
for time in range(0, MAX_N, N_STEP):
channel.update()
pxhat, pyhat, pzhat = list(np.linalg.svd(channel.Mhat)[1])
RECORDS.append({
"trial": trial,
"time": time,
"Mdist": np.linalg.norm(channel.Mhat-channel.C),
"Qdist": np.linalg.norm(np.dot(channel.Qc.T, channel.Q) - np.eye(3)),
"pxval": channel.kx, "pyval": channel.ky, "pzval": channel.kz,
"pxhat": pxhat, "pyhat": pyhat, "pzhat": pzhat
})
df = pd.DataFrame(RECORDS)
df.to_csv("regime2.csv")
df = pd.read_csv("regime2.csv")
v = df.groupby("time").mean()["Qdist"]
s = df.groupby("time").std()["Qdist"]
fig, ax = plt.subplots(1, 1, figsize=(8, 6))
t = v.index.values
y = v.values
e = s.values
ax.plot(t, y)
ax.plot(t, y-e, ls="--")
ax.plot(t, y+e, ls="--")
plt.ylabel("Measure of orthonormality between $Q_{hat}$ and $Q_{val}$")
plt.xlabel("Time [n_errors]")
df = pd.read_csv("regime2.csv")
v = df.groupby("time").mean()["Mdist"]
s = df.groupby("time").std()["Mdist"]
fig, ax = plt.subplots(1, 1, figsize=(8, 6))
t = v.index.values
y = v.values
e = s.values
ax.loglog(t, y)
ax.fill_between(t, y-e, y+e, alpha=0.25)
plt.ylabel("Norm distance between $M_{hat}$ and $M_{val}$")
plt.xlabel("Time [n_errors]")
Explanation: Regime 2 basis alignment
End of explanation
df1 = pd.read_csv("regime1.csv")
df1["dpx"] = df1["pxval"] - df1["pxhat"]
df1["dpy"] = df1["pyval"] - df1["pyhat"]
df1["dpz"] = df1["pzval"] - df1["pzhat"]
v1 = df1.groupby("time").mean()
s1 = df1.groupby("time").std()
df2 = pd.read_csv("regime2.csv")
df2["dpx"] = df2["pxval"] - df2["pxhat"]
df2["dpy"] = df2["pyval"] - df2["pyhat"]
df2["dpz"] = df2["pzval"] - df2["pzhat"]
v2 = df2.groupby("time").mean()
s2 = df2.groupby("time").std()
fig, axs = plt.subplots(2, 3, figsize=(12, 8), sharey=True, sharex=True,
tight_layout={"h_pad": 1.0, "rect": [0.0, 0.0, 1.0, 0.95]})
for idx, stat in enumerate(["dpx", "dpy", "dpz"]):
t1 = v1[stat].index.values
y1 = v1[stat].values
e1 = s1[stat].values
axs[0, idx].semilogy(t1, y1, color=sns.color_palette()[idx])
axs[0, idx].semilogy(t1, y1+e1, ls="--", color=sns.color_palette()[idx])
axs[0, idx].set_title(stat)
t2 = v2[stat].index.values
y2 = v2[stat].values
e2 = s2[stat].values
axs[1, idx].semilogy(t2, y2, color=sns.color_palette()[idx])
axs[1, idx].semilogy(t2, y2+e2, ls="--", color=sns.color_palette()[idx])
axs[1, idx].set_xlabel("Number of errors")
fig.suptitle("Average difference in effective error probability")
axs[0, 0].set_ylabel("kx=0.7, ky=0.2, kz=0.1")
axs[1, 0].set_ylabel("kx=0.985, ky=0.01, kz=0.005")
Explanation: The only thing that matters: effective error probabilities
End of explanation |
913 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
eXtreme Gradient Boosting library (XGBoost)
<center>An unfocused introduction by Ivan Nazarov</center>
Import the main toolkit.
Step1: Now import some ML stuff
Step2: Mind the seed!!
Step3: Let's begin this introduction with usage examples.
The demonstration uses the dataset originally used in
the Otto Group Product Classification Challenge. We load the data directly from ZIP archives.
Step4: As usual do the train-test split.
Step5: scikit-learn interface
Use scikit-learn compatible interface of XGBoost.
Step6: Fit a gradient boosted tree ensemble.
Step7: Now let's validate.
Step8: Let's check out the confusion matrix
Step9: Let's plot one-vs-all ROC-AUC curves
Step10: alternative interface
Internally XGBoost relies heavily on a custom dataset format DMatrix. It is ...
The interface exposed to Python has three capabilities
Step11: DMatrix exports several useful methods
Step12: The xgboost.train class initalizes an appropriate booster, and then fits it on the provided train dataset. Besides the booster parameters and the train DMatrix , the class initializer accepts
Step13: The method xgboost.booster.update performs one iteration of gradinet boosting
Step14: Besides these methods xgboost.booster exports
Step15: Let's plot one-vs-all ROC-AUC curves | Python Code:
import time, os, re, zipfile
import numpy as np, pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
Explanation: eXtreme Gradient Boosting library (XGBoost)
<center>An unfocused introduction by Ivan Nazarov</center>
Import the main toolkit.
End of explanation
import sklearn as sk, xgboost as xg
# from sklearn.model_selection import train_test_split
from sklearn.cross_validation import train_test_split
Explanation: Now import some ML stuff
End of explanation
random_state = np.random.RandomState( seed = 0x0BADC0DE )
Explanation: Mind the seed!!
End of explanation
df_train = pd.read_csv( zipfile.ZipFile( 'train.csv.zip' ).open( 'train.csv' ), index_col = 'id' )
X = np.asanyarray( df_train.drop( 'target', axis = 1 ) )
y = sk.preprocessing.LabelEncoder( ).fit_transform( df_train[ 'target' ] )
Explanation: Let's begin this introduction with usage examples.
The demonstration uses the dataset originally used in
the Otto Group Product Classification Challenge. We load the data directly from ZIP archives.
End of explanation
X_train, X_, y_train, y_ = train_test_split( X, y, test_size = 0.25, random_state = random_state )
X_valid, X_test, y_valid, y_test = train_test_split( X_, y_, test_size = 0.5, random_state = random_state )
Explanation: As usual do the train-test split.
End of explanation
clf_ = xg.XGBClassifier( n_estimators = 50,
gamma = 1.0,
max_depth = 1000,
objective = "multi:softmax",
nthread = -1,
silent = False )
Explanation: scikit-learn interface
Use scikit-learn compatible interface of XGBoost.
End of explanation
clf_.fit( X_train, y_train, eval_set = [ ( X_valid, y_valid ), ], verbose = True )
Explanation: Fit a gradient boosted tree ensemble.
End of explanation
y_predict = clf_.predict( X_test )
y_score = clf_.predict_proba( X_test )
Explanation: Now let's validate.
End of explanation
pd.DataFrame( sk.metrics.confusion_matrix( y_test, y_predict ), index = clf_.classes_, columns = clf_.classes_ )
Explanation: Let's check out the confusion matrix
End of explanation
fig = plt.figure( figsize = ( 16, 9 ) )
axis = fig.add_subplot( 111 )
axis.set_title( 'ROC-AUC (ovr) curves for the heldout dataset' )
axis.set_xlabel( "False positive rate" ) ; axis.set_ylabel( "True positive rate" )
axis.set_ylim( -0.01, 1.01 ) ; axis.set_xlim( -0.01, 1.01 )
for cls_ in clf_.classes_ :
fpr, tpr, _ = sk.metrics.roc_curve( y_test, y_score[:, cls_], pos_label = cls_ )
axis.plot( fpr, tpr, lw = 2, zorder = cls_, label = "C%d" % ( cls_, ) )
axis.legend( loc = 'lower right', shadow = True, ncol = 3 )
Explanation: Let's plot one-vs-all ROC-AUC curves
End of explanation
train_dmat = xg.DMatrix( data = X_train,
label = y_train,
feature_names = None,
feature_types = None )
test_dmat = xg.DMatrix( data = X_test, label = y_test )
Explanation: alternative interface
Internally XGBoost relies heavily on a custom dataset format DMatrix. It is ...
The interface exposed to Python has three capabilities (a short construction sketch follows this explanation):
- load datasets in libSVM compatible format;
- load SciPy's sparse matrices;
- load Numpy's ndarrays.
Let's load the train dataset using the numpy interface:
- data : the matrix of features $X$;
- label : the observation labels $y$ (could be categorical or numeric);
- missing : a vector of values that encode missing observations;
- feature_names : the column names of $X$;
- feature_types : defines the python types of each column of $X$, in case of heterogeneous data;
- weight : the vector of nonnegative weights of each observation in the dataset.
End of explanation
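As promised above, a short construction sketch for the other two input formats (an addition to the original text; the libSVM file name is a hypothetical placeholder, not a file shipped with this notebook):
import scipy.sparse as sp
sparse_dmat = xg.DMatrix( sp.csr_matrix( X_train ), label = y_train ) # from a SciPy sparse matrix
# libsvm_dmat = xg.DMatrix( 'train.libsvm' ) # from a libSVM-format text file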
xgb_params = {
'bst:max_depth':2,
'bst:eta':1,
'silent':1,
'objective':'multi:softmax',
'num_class': 9,
'nthread': 2,
'eval_metric' : 'auc'
}
Explanation: DMatrix exports several useful methods:
- num_col() : returns the number of columns;
- num_row() : gets the number of items;
- save_binary( fname ) : saves the DMatrix object into a specified file.
For a more detailed list, it is useful to have a look at the official manual
Having defined the datasets, it is the right time to initialize the booster. To this end one uses the xgboost.Learner class. Among other parameters, its instance is initialized with a dictionary of parameters, which allows for a more flexible booster initialization.
End of explanation
xgbooster_ = xg.train( params = xgb_params,
dtrain = train_dmat,
num_boost_round = 10,
evals = (),
obj = None,
feval = None,
maximize = False,
early_stopping_rounds = None,
evals_result = None,
verbose_eval = True,
learning_rates = None,
xgb_model = None )
Explanation: The xgboost.train function initializes an appropriate booster, and then fits it on the provided train dataset. Besides the booster parameters and the train DMatrix, it accepts (a short usage sketch follows this list):
- num_boost_round : an integer number of boosting iterations, which is the number of trees in the final ensemble;
- evals : a list of DMatrix validation datasets to be evaluated during training;
- obj : a custom objective function;
- feval : a custom evaluation function;
- early_stopping_rounds : Activates early stopping, which checks every early_stopping_rounds round(s) if the validation error has decreased in order to continue training;
- maximize : a flag, which determines if the objective (feval) should be maximized;
- learning_rates : a schedule for learning rates for each boosting round or a function that calculates $\eta$, for the current round;
- xgb_model : an XGB model (booster or file), the training of which is to be continued.
End of explanation
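A hedged usage sketch of the evals / early_stopping_rounds parameters listed above (not run in the original notebook; the held-out DMatrix defined earlier stands in for a proper validation set, and the metric is switched to a multiclass-friendly one):
watchlist = [ ( train_dmat, 'train' ), ( test_dmat, 'eval' ) ]
params_es = dict( xgb_params, eval_metric = 'mlogloss' ) # copy of the params with a multiclass metric
xgbooster_es_ = xg.train( params = params_es,
                          dtrain = train_dmat,
                          num_boost_round = 100,
                          evals = watchlist,
                          early_stopping_rounds = 5,
                          verbose_eval = True )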
y_predict = xgbooster_.predict( test_dmat )
y_score = xgbooster_.predict( test_dmat, output_margin = True )
Explanation: The method xgboost.booster.update performs one iteration of gradient boosting:
- dtrain : DMatrix of train dataset;
- iteration : the current iteration number;
- fobj : a custom objective function to use for this update.
The method xgboost.booster.boost performs one iteration of boosting on the custom gradient statistics:
- dtrain : the DMatrix dataset to operate on;
- grad, hess : pair of lists of loss gradients and hessians, respectively, evaluated at each datapoint in dtrain.
The method xgboost.booster.predict returns either the learned value, or the index of the target leaf. The parameters are:
- data : a DMatrix object storing the input;
- output_margin : a flag determining if raw untransformed margin values should be returned;
- ntree_limit : limit the number of trees used for predicting (defaults to 0, which uses all trees);
- pred_leaf : determines whether the output should be a matrix of shape $(n, K)$ of predicted leaf indices, where $K$ is the number of trees in the ensemble.
The returned result is a numpy ndarray.
End of explanation
pd.DataFrame( sk.metrics.confusion_matrix( y_test, y_predict ), index = clf_.classes_, columns = clf_.classes_ )
Explanation: Besides these methods xgboost.booster exports:
load_model( fname ) and save_model( fname ).
Let's check out the confusion matrix
End of explanation
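A quick round-trip sketch for the persistence methods mentioned above (the file name is an arbitrary placeholder):
xgbooster_.save_model( 'xgb_otto.model' )
xgbooster_restored_ = xg.Booster()
xgbooster_restored_.load_model( 'xgb_otto.model' )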
fig = plt.figure( figsize = ( 16, 9 ) )
axis = fig.add_subplot( 111 )
axis.set_title( 'ROC-AUC (ovr) curves for the heldout dataset' )
axis.set_xlabel( "False positive rate" ) ; axis.set_ylabel( "True positive rate" )
axis.set_ylim( -0.01, 1.01 ) ; axis.set_xlim( -0.01, 1.01 )
for cls_ in clf_.classes_ :
fpr, tpr, _ = sk.metrics.roc_curve( y_test, y_score[:, cls_], pos_label = cls_ )
axis.plot( fpr, tpr, lw = 2, zorder = cls_, label = "C%d" % ( cls_, ) )
axis.legend( loc = 'lower right', shadow = True, ncol = 3 )
Explanation: Let's plot one-vs-all ROC-AUC curves
End of explanation |
914 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fine tuning classification example
We will fine-tune an ada classifier to distinguish between the two sports
Step1: ## Data exploration
The newsgroup dataset can be loaded using sklearn. First we will look at the data itself
Step2: One sample from the baseball category can be seen above. It is an email to a mailing list. We can observe that we have 1197 examples in total, which are evenly split between the two sports.
Data Preparation
We transform the dataset into a pandas dataframe, with a column for prompt and completion. The prompt contains the email from the mailing list, and the completion is a name of the sport, either hockey or baseball. For demonstration purposes only and speed of fine-tuning we take only 300 examples. In a real use case the more examples the better the performance.
Step3: Both baseball and hockey are single tokens. We save the dataset as a jsonl file.
Step4: Data Preparation tool
We can now use a data preparation tool which will suggest a few improvements to our dataset before fine-tuning. Before launching the tool we update the openai library to ensure we're using the latest data preparation tool. We additionally specify -q which auto-accepts all suggestions.
Step5: The tool helpfully suggests a few improvements to the dataset and splits the dataset into training and validation set.
A suffix between a prompt and a completion is necessary to tell the model that the input text has stopped, and that it now needs to predict the class. Since we use the same separator in each example, the model is able to learn that it is meant to predict either baseball or hockey following the separator.
A whitespace prefix in completions is useful, as most word tokens are tokenized with a space prefix.
The tool also recognized that this is likely a classification task, so it suggested to split the dataset into training and validation datasets. This will allow us to easily measure expected performance on new data.
Fine-tuning
The tool suggests we run the following command to train the dataset. Since this is a classification task, we would like to know what the generalization performance on the provided validation set is for our classification use case. The tool suggests to add --compute_classification_metrics --classification_positive_class " baseball" in order to compute the classification metrics.
We can simply copy the suggested command from the CLI tool. We specifically add -m ada to fine-tune a cheaper and faster ada model, which is usually comparable in performance to slower and more expensive models on classification use cases.
Step6: The model is successfully trained in about ten minutes. We can see the model name is ada
Step7: The accuracy reaches 99.6%. On the plot below we can see how accuracy on the validation set increases during the training run.
Step8: Using the model
We can now call the model to get the predictions.
Step9: We need to use the same separator following the prompt which we used during fine-tuning. In this case it is \n\n###\n\n. Since we're concerned with classification, we want the temperature to be as low as possible, and we only require one token completion to determine the prediction of the model.
Step10: To get the log probabilities, we can specify logprobs parameter on the completion request
Step13: We can see that the model predicts hockey as a lot more likely than baseball, which is the correct prediction. By requesting log_probs, we can see the prediction (log) probability for each class.
Generalization
Interestingly, our fine-tuned classifier is quite versatile. Despite being trained on emails to different mailing lists, it also successfully predicts tweets. | Python Code:
from sklearn.datasets import fetch_20newsgroups
import pandas as pd
import openai
categories = ['rec.sport.baseball', 'rec.sport.hockey']
sports_dataset = fetch_20newsgroups(subset='train', shuffle=True, random_state=42, categories=categories)
Explanation: Fine tuning classification example
We will fine-tune an ada classifier to distinguish between the two sports: Baseball and Hockey.
End of explanation
print(sports_dataset['data'][0])
sports_dataset.target_names[sports_dataset['target'][0]]
len_all, len_baseball, len_hockey = len(sports_dataset.data), len([e for e in sports_dataset.target if e == 0]), len([e for e in sports_dataset.target if e == 1])
print(f"Total examples: {len_all}, Baseball examples: {len_baseball}, Hockey examples: {len_hockey}")
Explanation: ## Data exploration
The newsgroup dataset can be loaded using sklearn. First we will look at the data itself:
End of explanation
import pandas as pd
labels = [sports_dataset.target_names[x].split('.')[-1] for x in sports_dataset['target']]
texts = [text.strip() for text in sports_dataset['data']]
df = pd.DataFrame(zip(texts, labels), columns = ['prompt','completion']) #[:300]
df.head()
Explanation: One sample from the baseball category can be seen above. It is an email to a mailing list. We can observe that we have 1197 examples in total, which are evenly split between the two sports.
Data Preparation
We transform the dataset into a pandas dataframe, with a column for prompt and completion. The prompt contains the email from the mailing list, and the completion is a name of the sport, either hockey or baseball. For demonstration purposes only and speed of fine-tuning we take only 300 examples. In a real use case the more examples the better the performance.
End of explanation
df.to_json("sport2.jsonl", orient='records', lines=True)
Explanation: Both baseball and hockey are single tokens. We save the dataset as a jsonl file.
End of explanation
!pip install --upgrade openai
!openai tools fine_tunes.prepare_data -f sport2.jsonl -q
Explanation: Data Preparation tool
We can now use a data preparation tool which will suggest a few improvements to our dataset before fine-tuning. Before launching the tool we update the openai library to ensure we're using the latest data preparation tool. We additionally specify -q which auto-accepts all suggestions.
End of explanation
!openai api fine_tunes.create -t "sport2_prepared_train.jsonl" -v "sport2_prepared_valid.jsonl" --compute_classification_metrics --classification_positive_class " baseball" -m ada
Explanation: The tool helpfully suggests a few improvements to the dataset and splits the dataset into training and validation set.
A suffix between a prompt and a completion is necessary to tell the model that the input text has stopped, and that it now needs to predict the class. Since we use the same separator in each example, the model is able to learn that it is meant to predict either baseball or hockey following the separator.
A whitespace prefix in completions is useful, as most word tokens are tokenized with a space prefix.
The tool also recognized that this is likely a classification task, so it suggested to split the dataset into training and validation datasets. This will allow us to easily measure expected performance on new data.
Fine-tuning
The tool suggests we run the following command to train the dataset. Since this is a classification task, we would like to know what the generalization performance on the provided validation set is for our classification use case. The tool suggests to add --compute_classification_metrics --classification_positive_class " baseball" in order to compute the classification metrics.
We can simply copy the suggested command from the CLI tool. We specifically add -m ada to fine-tune a cheaper and faster ada model, which is usually comparable in performance to slower and more expensive models on classification use cases.
End of explanation
!openai api fine_tunes.results -i ft-2zaA7qi0rxJduWQpdvOvmGn3 > result.csv
results = pd.read_csv('result.csv')
results[results['classification/accuracy'].notnull()].tail(1)
Explanation: The model is successfully trained in about ten minutes. We can see the model name is ada:ft-openai-2021-07-30-12-26-20, which we can use for doing inference.
[Advanced] Results and expected model performance
We can now download the results file to observe the expected performance on a held out validation set.
End of explanation
results[results['classification/accuracy'].notnull()]['classification/accuracy'].plot()
Explanation: The accuracy reaches 99.6%. On the plot below we can see how accuracy on the validation set increases during the training run.
End of explanation
test = pd.read_json('sport2_prepared_valid.jsonl', lines=True)
test.head()
Explanation: Using the model
We can now call the model to get the predictions.
End of explanation
ft_model = 'ada:ft-openai-2021-07-30-12-26-20'
res = openai.Completion.create(model=ft_model, prompt=test['prompt'][0] + '\n\n###\n\n', max_tokens=1, temperature=0)
res['choices'][0]['text']
Explanation: We need to use the same separator following the prompt which we used during fine-tuning. In this case it is \n\n###\n\n. Since we're concerned with classification, we want the temperature to be as low as possible, and we only require one token completion to determine the prediction of the model.
End of explanation
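To score the whole validation set rather than a single row, the same call can be repeated per example (a sketch, not part of the original example; completions written by the preparation tool may carry leading whitespace or a stop sequence, hence the strip(), and API rate limits apply on larger sets):
correct = 0
for _, row in test.iterrows():
    pred = openai.Completion.create(model=ft_model, prompt=row['prompt'] + '\n\n###\n\n',
                                    max_tokens=1, temperature=0)['choices'][0]['text']
    correct += int(pred.strip() == row['completion'].strip())
print(correct / len(test))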
res = openai.Completion.create(model=ft_model, prompt=test['prompt'][0] + '\n\n###\n\n', max_tokens=1, temperature=0, logprobs=2)
res['choices'][0]['logprobs']['top_logprobs'][0]
Explanation: To get the log probabilities, we can specify logprobs parameter on the completion request
End of explanation
sample_hockey_tweet = """Thank you to the
@Canes
and all you amazing Caniacs that have been so supportive! You guys are some of the best fans in the NHL without a doubt! Really excited to start this new chapter in my career with the
@DetroitRedWings
!!"""
res = openai.Completion.create(model=ft_model, prompt=sample_hockey_tweet + '\n\n###\n\n', max_tokens=1, temperature=0, logprobs=2)
res['choices'][0]['text']
sample_baseball_tweet = """BREAKING: The Tampa Bay Rays are finalizing a deal to acquire slugger Nelson Cruz from the Minnesota Twins, sources tell ESPN."""
res = openai.Completion.create(model=ft_model, prompt=sample_baseball_tweet + '\n\n###\n\n', max_tokens=1, temperature=0, logprobs=2)
res['choices'][0]['text']
Explanation: We can see that the model predicts hockey as a lot more likely than baseball, which is the correct prediction. By requesting log_probs, we can see the prediction (log) probability for each class.
Generalization
Interestingly, our fine-tuned classifier is quite versatile. Despite being trained on emails to different mailing lists, it also successfully predicts tweets.
End of explanation |
915 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
University of Zagreb
Faculty of Electrical Engineering and Computing
Machine Learning 2018/2019
http
Step1: Exercises
1. Linear regression as a classifier
In the first lab assignment we used the linear regression model for, of course, regression. However, the linear regression model can also be used for classification. Although this sounds somewhat counterintuitive, it is actually quite simple. The idea is to learn a function $f(\mathbf{x})$ that predicts the value $1$ for negative examples, while predicting the value $0$ for positive examples. In that case, the function $f(\mathbf{x})=0.5$ represents the boundary between the classes, i.e. examples for which $h(\mathbf{x})\geq 0.5$ are classified as positive, while the rest are classified as negative.
Classification with linear regression is implemented in the RidgeClassifier class. In the following subtasks, train that model on the given data and plot the resulting decision boundary between the classes. Turn off regularization ($\alpha = 0$, i.e. alpha=0). Also print the accuracy of your classification model (you may use the metrics.accuracy_score function). Visualize the datasets using the helper function plot_clf_problem(X, y, h=None), which is available in the helper package mlutils (you can download the file mlutils.py from the course web page). X and y are the input examples and labels, while h is the model's prediction function (e.g. model.predict).
The goal of this exercise is to examine how the linear regression classification model behaves on linearly separable and non-separable data.
Step2: (a)
First, try out the built-in model on the linearly separable dataset seven ($N=7$).
Step3: To convince yourself that the implementation you just tried is nothing more than ordinary linear regression, write code that reaches the same solution using only the LinearRegression class. The prediction function, which you pass as the third argument h to the plot_2d_clf_problem function, can be defined with a lambda expression
Step4: Q
Step5: Q
Step6: Q
Step7: Train three binary classifiers, $h_1$, $h_2$ and $h_3$, and plot the decision boundaries between the classes (three plots). Then define $h(\mathbf{x})=\mathrm{argmax}_j h_j(\mathbf{x})$ (write your own predict function that does this) and plot the decision boundaries for that model. Finally, convince yourself that you would get an identical result by applying the RidgeClassifier model directly, since for a multiclass problem that model internally implements the one-vs-rest scheme.
Q
Step8: 3. Logistic regression
This exercise deals with a probabilistic discriminative model, logistic regression, which is, despite its name, a classification model.
Logistic regression is a typical representative of the so-called generalized linear models, which are of the form
Step9: Q
Step10: (c)
Using the lr_train function, train a logistic regression model on the seven dataset, plot the resulting decision boundary between the classes, and compute the cross-entropy error.
NB
Step11: Q
Step12: Q
Step13: 4. Analysis of logistic regression
(a)
Using the built-in implementation of logistic regression, check how logistic regression copes with outlying values. Use the outlier dataset from the first exercise. Plot the decision boundary between the classes.
Q
Step14: (b)
Train a logistic regression model on the seven dataset and, on two separate plots, show, across the iterations of the optimization algorithm, (1) the model output $h(\mathbf{x})$ for all seven examples and (2) the values of the weights $w_0$, $w_1$, $w_2$.
Step15: (c)
Repeat the experiment from subtask (b) using the linearly non-separable dataset unsep from the first exercise.
Q
Step16: 5. Regularized logistic regression
Train a logistic regression model on the seven dataset with different L2-regularization factors, $\alpha\in{0,1,10,100}$. On two separate plots, show (1) the cross-entropy error and (2) the L2-norm of the vector $\mathbf{w}$ across the iterations of the optimization algorithm.
Q
Step17: 6. Logistic regression with a feature mapping
Study the datasets.make_classification function. Generate and plot a two-class dataset with a total of $N=100$ two-dimensional ($n=2)$ examples, with two clusters per class (n_clusters_per_class=2). It is unlikely that such a dataset will be linearly separable, but that is not a problem, because we can map the examples into a higher-dimensional feature space using the preprocessing.PolynomialFeatures class, as we did for linear regression in the first lab assignment. Train a logistic regression model using a polynomial feature mapping of degree $d=2$ and degree $d=3$. Plot the resulting decision boundaries between the classes. You may use your own implementation, but for speed it is recommended to use linear_model.LogisticRegression. Choose the regularization factor as you like.
NB | Python Code:
# Load the basic libraries...
import sklearn
import mlutils
import matplotlib.pyplot as plt
%pylab inline
Explanation: University of Zagreb
Faculty of Electrical Engineering and Computing
Machine Learning 2018/2019
http://www.fer.unizg.hr/predmet/su
Laboratory exercise 2: Linear discriminative models
Version: 1.2
Last updated: 26 October 2018
(c) 2015-2018 Jan Šnajder, Domagoj Alagić
Published: 26 October 2018
Submission deadline: 5 November 2018 at 07:00
Instructions
This laboratory exercise consists of six tasks. Follow the instructions given in the text cells below. Completing the exercise comes down to filling in this notebook: inserting one or more cells below the text of each task, writing the corresponding code, and evaluating the cells.
Make sure you fully understand the code you have written. When handing in the exercise, you must be able, at the assistant's (or demonstrator's) request, to modify and re-evaluate your code. Furthermore, you must understand the theoretical foundations of what you are doing, within the scope of what we covered in the lectures. Below some tasks you will also find questions that serve as guidelines for a better understanding of the material (do not write the answers to the questions in the notebook). Therefore, do not limit yourself to just solving the task, but feel free to experiment. That is precisely the purpose of these exercises.
You must do the exercises on your own. You may consult others about the general approach to solving them, but in the end you must complete the exercise yourself. Otherwise the exercise is pointless.
End of explanation
from sklearn.linear_model import LinearRegression, RidgeClassifier
from sklearn.metrics import accuracy_score
Explanation: Exercises
1. Linear regression as a classifier
In the first lab assignment we used the linear regression model for, of course, regression. However, the linear regression model can also be used for classification. Although this sounds somewhat counterintuitive, it is actually quite simple. The idea is to learn a function $f(\mathbf{x})$ that predicts the value $1$ for negative examples, while predicting the value $0$ for positive examples. In that case, the function $f(\mathbf{x})=0.5$ represents the boundary between the classes, i.e. examples for which $h(\mathbf{x})\geq 0.5$ are classified as positive, while the rest are classified as negative.
Classification with linear regression is implemented in the RidgeClassifier class. In the following subtasks, train that model on the given data and plot the resulting decision boundary between the classes. Turn off regularization ($\alpha = 0$, i.e. alpha=0). Also print the accuracy of your classification model (you may use the metrics.accuracy_score function). Visualize the datasets using the helper function plot_clf_problem(X, y, h=None), which is available in the helper package mlutils (you can download the file mlutils.py from the course web page). X and y are the input examples and labels, while h is the model's prediction function (e.g. model.predict).
The goal of this exercise is to examine how the linear regression classification model behaves on linearly separable and non-separable data.
End of explanation
seven_X = np.array([[2,1], [2,3], [1,2], [3,2], [5,2], [5,4], [6,3]])
seven_y = np.array([1, 1, 1, 1, 0, 0, 0])
clf = RidgeClassifier().fit(seven_X, seven_y)
predicted_y = clf.predict(seven_X)
score = accuracy_score(y_pred=predicted_y, y_true=seven_y)
print(score)
mlutils.plot_2d_clf_problem(X=seven_X, y=predicted_y, h=None)
Explanation: (a)
First, try out the built-in model on the linearly separable dataset seven ($N=7$).
End of explanation
lr = LinearRegression().fit(seven_X, seven_y)
predicted_y_2 = lr.predict(seven_X)
mlutils.plot_2d_clf_problem(X=seven_X, y=seven_y, h= lambda x : lr.predict(x) >= 0.5)
Explanation: To convince yourself that the implementation you just tried is nothing more than ordinary linear regression, write code that reaches the same solution using only the LinearRegression class. The prediction function, which you pass as the third argument h to the plot_2d_clf_problem function, can be defined with a lambda expression: lambda x : model.predict(x) >= 0.5.
End of explanation
outlier_X = np.append(seven_X, [[12,8]], axis=0)
outlier_y = np.append(seven_y, 0)
lr2 = LinearRegression().fit(outlier_X, outlier_y)
predicted_y_2 = lr2.predict(outlier_X)
mlutils.plot_2d_clf_problem(X=outlier_X, y=outlier_y, h= lambda x : lr2.predict(x) >= 0.5)
Explanation: Q: How would the boundary between the classes be defined if we used the class labels $-1$ and $1$ instead of $0$ and $1$?
(b)
Try the same on the linearly separable dataset outlier ($N=8$):
End of explanation
unsep_X = np.append(seven_X, [[2,2]], axis=0)
unsep_y = np.append(seven_y, 0)
lr3 = LinearRegression().fit(unsep_X, unsep_y)
predicted_y_2 = lr3.predict(unsep_X)
mlutils.plot_2d_clf_problem(X=unsep_X, y=unsep_y, h= lambda x : lr3.predict(x) >= 0.5)
Explanation: Q: Why does the model not achieve perfect accuracy even though the data are linearly separable?
(c)
Finally, try the same on the linearly non-separable dataset unsep ($N=8$):
End of explanation
from sklearn.datasets import make_classification
x, y = sklearn.datasets.make_classification(n_samples=100, n_informative=2, n_redundant=0, n_repeated=0, n_features=2, n_classes=3, n_clusters_per_class=1)
#print(dataset)
mlutils.plot_2d_clf_problem(X=x, y=y, h=None)
Explanation: Q: It is obvious why the model cannot achieve perfect accuracy on this dataset. However, do you think the problem lies in the model or in the data? Argue your position.
2. Multiclass classification
There are several ways binary classifiers can be used for multiclass classification. The most commonly used scheme is so-called one-vs-rest (OVR), in which one classifier $h_j$ is trained for each of the $K$ classes. Each classifier $h_j$ is trained to separate examples of class $j$ from the examples of all other classes, and an example is classified into the class $j$ for which $h_j(\mathbf{x})$ is maximal.
Using the datasets.make_classification function, generate a random two-dimensional dataset with three classes and plot it using the plot_2d_clf_problem function. For simplicity, assume there are no redundant features and that each class is "clumped" into exactly one cluster.
End of explanation
fig = plt.figure(figsize=(5,15))
fig.subplots_adjust(wspace=0.2)
y_ovo1 = [ 0 if i == 0 else 1 for i in y]
lrOvo1 = LinearRegression().fit(x, y_ovo1)
fig.add_subplot(3,1,1)
mlutils.plot_2d_clf_problem(X=x, y=y_ovo1, h= lambda x : lrOvo1.predict(x) >= 0.5)
y_ovo2 = [ 0 if i == 1 else 1 for i in y]
lrOvo2 = LinearRegression().fit(x, y_ovo2)
fig.add_subplot(3,1,2)
mlutils.plot_2d_clf_problem(X=x, y=y_ovo2, h= lambda x : lrOvo2.predict(x) >= 0.5)
y_ovo3 = [ 0 if i == 2 else 1 for i in y]
lrOvo3 = LinearRegression().fit(x, y_ovo3)
fig.add_subplot(3,1,3)
mlutils.plot_2d_clf_problem(X=x, y=y_ovo3, h= lambda x : lrOvo3.predict(x) >= 0.5)
Explanation: Train three binary classifiers, $h_1$, $h_2$ and $h_3$, and plot the decision boundaries between the classes (three plots). Then define $h(\mathbf{x})=\mathrm{argmax}_j h_j(\mathbf{x})$ (write your own predict function that does this) and plot the decision boundaries for that model. Finally, convince yourself that you would get an identical result by applying the RidgeClassifier model directly, since for a multiclass problem that model internally implements the one-vs-rest scheme.
Q: An alternative scheme is the one called one-vs-one (OVO). What is the advantage of the OVR scheme over the OVO scheme? And vice versa?
End of explanation
def sigm(alpha):
def f(x):
return 1 / (1 + exp(-alpha*x))
return f
ax = list(range(-10, 10))
ay1 = list(map(sigm(1), ax))
ay2 = list(map(sigm(2), ax))
ay3 = list(map(sigm(4), ax))
fig = plt.figure(figsize=(5,15))
p1 = fig.add_subplot(3, 1, 1)
p1.plot(ax, ay1)
p2 = fig.add_subplot(3, 1, 2)
p2.plot(ax, ay2)
p3 = fig.add_subplot(3, 1, 3)
p3.plot(ax, ay3)
Explanation: 3. Logistic regression
This exercise deals with a probabilistic discriminative model, logistic regression, which is, despite its name, a classification model.
Logistic regression is a typical representative of the so-called generalized linear models, which are of the form: $h(\mathbf{x})=f(\mathbf{w}^\intercal\tilde{\mathbf{x}})$. Logistic regression uses for $f$ the so-called logistic (sigmoid) function $\sigma (x) = \frac{1}{1 + \textit{exp}(-x)}$.
(a)
Define the logistic (sigmoid) function $\mathrm{sigm}(x)=\frac{1}{1+\exp(-\alpha x)}$ and plot it for $\alpha\in{1,2,4}$.
End of explanation
from sklearn.preprocessing import PolynomialFeatures as PolyFeat
from sklearn.metrics import log_loss
def loss_function(h_x, y):
return -y * np.log(h_x) - (1 - y) * np.log(1 - h_x)
def lr_h(x, w):
Phi = PolyFeat(1).fit_transform(x.reshape(1,-1))
return sigm(1)(Phi.dot(w))
def cross_entropy_error(X, y, w):
Phi = PolyFeat(1).fit_transform(X)
return log_loss(y, sigm(1)(Phi.dot(w)))
def lr_train(X, y, eta = 0.01, max_iter = 2000, alpha = 0, epsilon = 0.0001, trace= False):
w = zeros(shape(X)[1] + 1)
N = len(X)
w_trace = [];
error = epsilon**-1
for i in range(0, max_iter):
dw0 = 0; dw = zeros(shape(X)[1]);
new_error = 0
for j in range(0, N):
h = lr_h(X[j], w)
dw0 += h - y[j]
dw += (h - y[j])*X[j]
new_error += loss_function(h, y[j])
if abs(error - new_error) < epsilon:
print('stagnacija na i = ', i)
break
else: error = new_error
w[0] -= eta*dw0
w[1:] = w[1:] * (1-eta*alpha) - eta*dw
w_trace.extend(w)
if trace:
return w, w_trace
else: return w
Explanation: Q: Why is the sigmoid function a suitable choice for the activation function of a generalized linear model?
</br>
Q: What effect does the factor $\alpha$ have on the shape of the sigmoid? What does that mean for the logistic regression model (i.e. how does the model output depend on the norm of the weight vector $\mathbf{w}$)?
(b)
Implement the function
lr_train(X, y, eta=0.01, max_iter=2000, alpha=0, epsilon=0.0001, trace=False)
for training a logistic regression model with gradient descent (batch version). The function takes a labelled training set (the example matrix X and the label vector y) and returns an $(n+1)$-dimensional weight vector of type ndarray. If trace=True, the function additionally returns the list (or matrix) of weight vectors $\mathbf{w}^0,\mathbf{w}^1,\dots,\mathbf{w}^k$ generated through all iterations of the optimization, from 0 to $k$. The optimization should run until max_iter iterations are reached, or until the difference in the cross-entropy error between two iterations drops below the value epsilon. The parameter alpha is the regularization factor.
We recommend defining a helper function lr_h(x,w) that returns the prediction for an example x given the weights w. We also recommend a function cross_entropy_error(X,y,w) that computes the cross-entropy error of the model on the labelled set (X,y) with those same weights.
NB: Make sure that the way the labels are defined (${+1,-1}$ or ${1,0}$) is compatible with the computation of the loss function in the optimization algorithm.
End of explanation
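For reference (our notation, matching the update implemented in lr_train above), the batch gradient of the cross-entropy error and the L2-regularized update, with the bias $w_0$ left unregularized, are
$$\nabla_{\mathbf{w}} E(\mathbf{w}) = \sum_{i=1}^{N} \big(h(\mathbf{x}^{(i)}) - y^{(i)}\big)\,\tilde{\mathbf{x}}^{(i)}, \qquad \mathbf{w} \leftarrow \mathbf{w} - \eta\,\nabla_{\mathbf{w}} E(\mathbf{w}) - \eta\alpha\,[0,\, w_1,\, \dots,\, w_n]^\intercal.$$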
trained = lr_train(seven_X, seven_y)
print(cross_entropy_error(seven_X, seven_y, trained))
print(trained)
h3c = lambda x: lr_h(x, trained) > 0.5
figure()
mlutils.plot_2d_clf_problem(seven_X, seven_y, h3c)
Explanation: (c)
Using the lr_train function, train a logistic regression model on the seven dataset, plot the resulting decision boundary between the classes, and compute the cross-entropy error.
NB: Make sure you give the model a sufficient number of iterations.
End of explanation
from sklearn.metrics import zero_one_loss
eta = [0.005, 0.01, 0.05, 0.1]
[w3d, w3d_trace] = lr_train(seven_X, seven_y, trace=True)
Phi = PolyFeat(1).fit_transform(seven_X)
h_3d = lambda x: x >= 0.5
error_unakrs = []
errror_classy = []
errror_eta = []
for k in range(0, len(w3d_trace), 3):
error_unakrs.append(cross_entropy_error(seven_X, seven_y, w3d_trace[k:k+3]))
errror_classy.append(zero_one_loss(seven_y, h_3d(sigm(1)(Phi.dot(w3d_trace[k:k+3])))))
for i in eta:
err = []
[w3, w3_trace] = lr_train(seven_X, seven_y, i, trace=True)
for j in range(0, len(w3_trace), 3):
err.append(cross_entropy_error(seven_X, seven_y, w3_trace[j:j+3]))
errror_eta.append(err)
figure(figsize(12, 15))
subplots_adjust(wspace=0.1)
subplot(2,1,1)
grid()
plot(error_unakrs); plot(errror_classy);
subplot(2,1,2)
grid()
for i in range(0, len(eta)):
plot(errror_eta[i], label = 'eta = ' + str(eta[i]))  # label with the actual learning rate
legend(loc = 'best');
Explanation: Q: Which stopping criterion was triggered?
Q: Why is the obtained cross-entropy error not equal to zero?
Q: How would you verify that the optimization procedure really found the hypothesis that minimizes the training error? What does that depend on?
Q: How would you modify the code if you wanted the optimization to run as stochastic gradient descent (online learning)? (One possible sketch follows this explanation.)
(d)
On one plot, show the cross-entropy error (the expectation of the logistic loss) and the classification error (the expectation of the 0-1 loss) on the seven dataset over the iterations of the optimization procedure. Use the weight trace returned by the lr_train function from task (b) (option trace=True). On a second plot, show the cross-entropy error as a function of the number of iterations for different learning rates, $\eta\in\{0.005,0.01,0.05,0.1\}$.
End of explanation
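A hedged sketch answering the stochastic gradient descent question above: instead of accumulating the gradient over the whole dataset, update the weights after every example. The function name lr_train_sgd and the unregularized form are assumptions introduced for illustration; it reuses lr_h and the pylab-style helpers (zeros, shape) already used in this notebook.
def lr_train_sgd(X, y, eta=0.01, max_iter=2000):
    # Online (stochastic) variant: one weight update per visited example.
    w = zeros(shape(X)[1] + 1)
    for _ in range(max_iter):
        for j in np.random.permutation(len(X)):
            g = float(lr_h(X[j], w) - y[j])  # scalar error for example j
            w[0] -= eta * g
            w[1:] -= eta * g * X[j]
    return w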
from sklearn.linear_model import LogisticRegression
reg3e = LogisticRegression(max_iter=2000, tol=0.0001, C=0.01**-1, solver='lbfgs').fit(seven_X,seven_y)
h3e = lambda x : reg3e.predict(x)
figure(figsize(7, 7))
mlutils.plot_2d_clf_problem(seven_X,seven_y, h3e)
Explanation: Q: Why is the cross-entropy error larger than the classification error? Is that always the case with logistic regression, and why?
Q: Which learning rate $\eta$ would you choose, and why?
(e)
Familiarize yourself with the linear_model.LogisticRegression class, which implements logistic regression. Compare the model's result on the seven dataset with the result obtained by your own implementation of the algorithm.
NB: Since the built-in implementation uses more advanced optimization methods, it is quite likely that the solutions will not match exactly, but the overall model performance should. Again, pay attention to the number of iterations and the regularization strength.
End of explanation
logReg4 = LogisticRegression(solver='liblinear').fit(outlier_X, outlier_y)
mlutils.plot_2d_clf_problem(X=outlier_X, y=outlier_y, h= lambda x : logReg4.predict(x) >= 0.5)
Explanation: 4. Analysis of logistic regression
(a)
Using the built-in implementation of logistic regression, check how logistic regression handles outlying values. Use the outlier dataset from the first exercise. Plot the decision boundary between the classes.
Q: Why does the result differ from the one obtained by the linear-regression-based classifier in the first exercise?
End of explanation
[w4b, w4b_trace] = lr_train(seven_X, seven_y, trace = True)
w0_4b = []; w1_4b = []; w2_4b = [];
for i in range(0, len(w4b_trace), 3):
w0_4b.append(w4b_trace[i])
w1_4b.append(w4b_trace[i+1])
w2_4b.append(w4b_trace[i+2])
h_gl = []
for i in range(0, len(seven_X)):
h = []
for j in range(0, len(w4b_trace), 3):
h.append(lr_h(seven_X[i], w4b_trace[j:j+3]))
h_gl.append(h)
figure(figsize(7, 14))
subplot(2,1,1)
grid()
for i in range(0, len(h_gl)):
plot(h_gl[i], label = 'x' + str(i))
legend(loc = 'best') ;
subplot(2,1,2)
grid()
plot(w0_4b); plot(w1_4b); plot(w2_4b);
legend(['w0', 'w1', 'w2'], loc = 'best');
Explanation: (b)
Train a logistic regression model on the seven dataset and, on two separate plots, show over the iterations of the optimization algorithm (1) the model output $h(\mathbf{x})$ for all seven examples and (2) the values of the weights $w_0$, $w_1$, $w_2$.
End of explanation
unsep_y = np.append(seven_y, 0)
[w4c, w4c_trace] = lr_train(unsep_X, unsep_y, trace = True)
w0_4c = []; w1_4c = []; w2_4c = [];
for i in range(0, len(w4c_trace), 3):
w0_4c.append(w4c_trace[i])
w1_4c.append(w4c_trace[i+1])
w2_4c.append(w4c_trace[i+2])
h_gl = []
for i in range(0, len(unsep_X)):
h = []
for j in range(0, len(w4c_trace), 3):
h.append(lr_h(unsep_X[i], w4c_trace[j:j+3]))
h_gl.append(h)
figure(figsize(7, 14))
subplots_adjust(wspace=0.1)
subplot(2,1,1)
grid()
for i in range(0, len(h_gl)):
plot(h_gl[i], label = 'x' + str(i))
legend(loc = 'best') ;
subplot(2,1,2)
grid()
plot(w0_4c); plot(w1_4c); plot(w2_4c);
legend(['w0', 'w1', 'w2'], loc = 'best');
Explanation: (c)
Repeat the experiment from subtask (b) using the linearly non-separable dataset unsep from the first exercise.
Q: Compare the plots for the linearly separable and the linearly non-separable case and comment on the difference.
End of explanation
from numpy.linalg import norm
alpha5 = [0, 1, 10, 100]
err_gl = []; norm_gl = [];
for a in alpha5:
[w5, w5_trace] = lr_train(seven_X, seven_y, alpha = a, trace = True)
err = []; L2_norm = [];
for k in range(0, len(w5_trace), 3):
err.append(cross_entropy_error(seven_X, seven_y, w5_trace[k:k+3]))
L2_norm.append(linalg.norm(w5_trace[k:k+3]))  # norm of the full weight vector, not just w0
err_gl.append(err)
norm_gl.append(L2_norm)
figure(figsize(7, 14))
subplot(2,1,1)
grid()
for i in range(0, len(err_gl)):
plot(err_gl[i], label = 'alpha = ' + str(alpha5[i]) )
legend(loc = 'best') ;
subplot(2,1,2)
grid()
for i in range(0, len(err_gl)):
plot(norm_gl[i], label = 'alpha = ' + str(alpha5[i]) )
legend(loc = 'best');
Explanation: 5. Regularized logistic regression
Train a logistic regression model on the seven dataset with different L2-regularization factors, $\alpha\in\{0,1,10,100\}$. On two separate plots, show (1) the cross-entropy error and (2) the L2 norm of the vector $\mathbf{w}$ over the iterations of the optimization algorithm.
Q: Do the curves look as expected, and why?
Q: Which value of $\alpha$ would you choose, and why?
End of explanation
from sklearn.preprocessing import PolynomialFeatures
from sklearn.datasets import make_classification
[x6, y6] = make_classification(n_samples=100, n_features=2, n_redundant=0, n_classes=2, n_clusters_per_class=2)
figure(figsize(7, 5))
mlutils.plot_2d_clf_problem(x6, y6)
d = [2,3]
j = 1
figure(figsize(12, 4))
subplots_adjust(wspace=0.1)
for i in d:
subplot(1,2,j)
poly = PolynomialFeatures(i)
Phi = poly.fit_transform(x6)
model = LogisticRegression(solver='lbfgs')
model.fit(Phi, y6)
h = lambda x : model.predict(poly.transform(x))
mlutils.plot_2d_clf_problem(x6, y6, h)
title('d = ' + str(i))
j += 1
Explanation: 6. Logistic regression with a feature mapping function
Study the datasets.make_classification function. Generate and plot a two-class dataset with a total of $N=100$ two-dimensional ($n=2$) examples, with two clusters per class (n_clusters_per_class=2). Such a set is unlikely to be linearly separable, but that is not a problem, because the examples can be mapped into a higher-dimensional feature space using the preprocessing.PolynomialFeatures class, as was done for linear regression in the first lab exercise. Train a logistic regression model using a polynomial feature mapping of degree $d=2$ and of degree $d=3$. Plot the resulting decision boundaries between the classes. You may use your own implementation, but for speed linear_model.LogisticRegression is recommended. Choose the regularization factor as you see fit.
NB: As before, use the plot_2d_clf_problem function to display the boundary between the classes. Pass the original dataset to the function as arguments, and perform the mapping into the feature space inside the prediction function h, as follows:
End of explanation |
916 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linking and brushing with bokeh
Linking and brushing is a powerful method for exploratory data analysis.
One way to create linked plots in the notebook is to use Bokeh.
Step1: We will output to a static html file.
The output_notebook() function can output to the notebook,
but with 50,000 points it really slows down.
Step2: See many examples of configuring plot tools at http://bokeh.pydata.org/en/latest/docs/user_guide/tools.html
Step3: Here we'll interact with Glue from the notebook.
Step4: Now we have access to the data collection in our notebook
Step5: Now go select the "Western arm" of the star-forming region (in Glue) and make a subset of it
Step6: We can add something to our catalog and it shows up in Glue.
Step7: We can define a new subset group here or in Glue | Python Code:
import bokeh
import numpy as np
from astropy.table import Table
sdss = Table.read('data/sdss_galaxies_qsos_50k.fits')
sdss
from bokeh.models import ColumnDataSource
from bokeh.plotting import figure, gridplot, output_notebook, output_file, show
umg = sdss['u'] - sdss['g']
gmr = sdss['g'] - sdss['r']
rmi = sdss['r'] - sdss['i']
imz = sdss['i'] - sdss['z']
# create a column data source for the plots to share
source = ColumnDataSource(data=dict(umg=umg, gmr=gmr, rmi=rmi,imz=imz))
Explanation: Linking and brushing with bokeh
Linking and brushing is a powerful method for exploratory data analysis.
One way to create linked plots in the notebook is to use Bokeh.
End of explanation
output_file('sdss_color_color.html')
TOOLS = "pan,wheel_zoom,reset,box_select,poly_select,help"
# create a new plot and add a renderer
left = figure(tools=TOOLS, width=400, height=400, title='SDSS g-r vs u-g', webgl=True)
left.x('umg', 'gmr', source=source)
# create another new plot and add a renderer
right = figure(tools=TOOLS, width=400, height=400, title='SDSS i-z vs r-i')
right.x('rmi', 'imz', source=source)
p = gridplot([[left, right]])
show(p)
Explanation: We will output to a static html file.
The output_notebook() function can output to the notebook,
but with 50,000 points it really slows down.
End of explanation
#import glue
# Quick way to launch Glue
#from glue import qglue
#qglue()
Explanation: See many examples of configuring plot tools at http://bokeh.pydata.org/en/latest/docs/user_guide/tools.html
Interacting with Glue
End of explanation
import astropy.io.fits as fits
hdu = fits.open('data/w5.fits')
hdu[0].header
from astropy.table import Table
w5catalog = Table.read('data/w5_psc.vot')
wisecat = Table.read('data/w5_wise.tbl', format='ipac')
%gui qt
#qglue(catalog=catalog, image=hdu, wisecat=wisecat)
from glue.core.data_factories import load_data
from glue.core import DataCollection
from glue.core.link_helpers import LinkSame
from glue.app.qt.application import GlueApplication
#load 2 datasets from files
image = load_data('data/w5.fits')
catalog = load_data('data/w5_psc.vot')
dc = DataCollection([image, catalog])
# link positional information
dc.add_link(LinkSame(image.id['Right Ascension'], catalog.id['RAJ2000']))
dc.add_link(LinkSame(image.id['Declination'], catalog.id['DEJ2000']))
#start Glue
app = GlueApplication(dc)
app.start()
Explanation: Here we'll interact with Glue from the notebook.
End of explanation
dc
dc[0].components
dc[0].id['Right Ascension']
Explanation: Now we have access to the data collection in our notebook
End of explanation
catalog = dc[1]
j_minus_h = catalog['Jmag'] - catalog['Hmag']
Explanation: Now go select the "Western arm" of the star-forming region (in Glue) and make a subset of it
End of explanation
catalog['jmh'] = j_minus_h
hmag = catalog.id['Hmag']
jmag = catalog.id['Jmag']
Explanation: We can add something to our catalog and it shows up in Glue.
End of explanation
jmhred = (jmag - hmag) > 1.5
dc.new_subset_group('j - h > 1.5', jmhred)
dc.subset_groups
dc.subset_groups[2].label
catalog.subsets
catalog.subsets[0]['Jmag']
mask = catalog.subsets[0].to_mask()
new_catalog = w5catalog[mask]
Explanation: We can define a new subset group here or in Glue
End of explanation |
917 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MNIST from scratch
This notebook walks through an example of training a TensorFlow model to do digit classification using the MNIST data set. MNIST is a labeled set of images of handwritten digits.
An example follows.
Step2: We're going to be building a model that recognizes these digits as 5, 0, and 4.
Imports and input data
We'll proceed in steps, beginning with importing and inspecting the MNIST data. This doesn't have anything to do with TensorFlow in particular -- we're just downloading the data archive.
Step3: Working with the images
Now we have the files, but the format requires a bit of pre-processing before we can work with it. The data is gzipped, requiring us to decompress it. And, each of the images are grayscale-encoded with values from [0, 255]; we'll normalize these to [-0.5, 0.5].
Let's try to unpack the data using the documented format
Step4: The first 10 pixels are all 0 values. Not very interesting, but also unsurprising. We'd expect most of the pixel values to be the background color, 0.
We could print all 28 * 28 values, but what we really need to do to make sure we're reading our data properly is look at an image.
Step5: The large number of 0 values correspond to the background of the image, another large mass of value 255 is black, and a mix of grayscale transition values in between.
Both the image and histogram look sensible. But, it's good practice when training image models to normalize values to be centered around 0.
We'll do that next. The normalization code is fairly short, and it may be tempting to assume we haven't made mistakes, but we'll double-check by looking at the rendered input and histogram again. Malformed inputs are a surprisingly common source of errors when developing new models.
Step6: Great -- we've retained the correct image data while properly rescaling to the range [-0.5, 0.5].
Reading the labels
Let's next unpack the test label data. The format here is similar
Step8: Indeed, the first label of the test set is 7.
Forming the training, testing, and validation data sets
Now that we understand how to read a single element, we can read a much larger set that we'll use for training, testing, and validation.
Image data
The code below is a generalization of our prototyping above that reads the entire test and training data set.
Step9: A crucial difference here is how we reshape the array of pixel values. Instead of one image that's 28x28, we now have a set of 60,000 images, each one being 28x28. We also include a number of channels, which for grayscale images as we have here is 1.
Let's make sure we've got the reshaping parameters right by inspecting the dimensions and the first two images. (Again, mangled input is a very common source of errors.)
Step11: Looks good. Now we know how to index our full set of training and test images.
Label data
Let's move on to loading the full set of labels. As is typical in classification problems, we'll convert our input labels into a 1-hot encoding over a length 10 vector corresponding to 10 digits. The vector [0, 1, 0, 0, 0, 0, 0, 0, 0, 0], for example, would correspond to the digit 1.
Step12: As with our image data, we'll double-check that our 1-hot encoding of the first few values matches our expectations.
Step13: The 1-hot encoding looks reasonable.
Segmenting data into training, test, and validation
The final step in preparing our data is to split it into three sets
Step14: Defining the model
Now that we've prepared our data, we're ready to define our model.
The comments describe the architecture, which fairly typical of models that process image data. The raw input passes through several convolution and max pooling layers with rectified linear activations before several fully connected layers and a softmax loss for predicting the output class. During training, we use dropout.
We'll separate our model definition into three steps
Step16: Now that we've defined the variables to be trained, we're ready to wire them together into a TensorFlow graph.
We'll define a helper to do this, model, which will return copies of the graph suitable for training and testing. Note the train argument, which controls whether or not dropout is used in the hidden layer. (We want to use dropout only during training.)
Step17: Having defined the basic structure of the graph, we're ready to stamp out multiple copies for training, testing, and validation.
Here, we'll do some customizations depending on which graph we're constructing. train_prediction holds the training graph, for which we use cross-entropy loss and weight regularization. We'll adjust the learning rate during training -- that's handled by the exponential_decay operation, which is itself an argument to the MomentumOptimizer that performs the actual training.
The validation and prediction graphs are much simpler to generate -- we need only create copies of the model with the validation and test inputs and a softmax classifier as the output.
Step18: Training and visualizing results
Now that we have the training, test, and validation graphs, we're ready to actually go through the training loop and periodically evaluate loss and error.
All of these operations take place in the context of a session. In Python, we'd write something like
Step19: Now we're ready to perform operations on the graph. Let's start with one round of training. We're going to organize our training steps into batches for efficiency; i.e., training using a small set of examples at each step rather than a single example.
Step20: Let's take a look at the predictions. How did we do? Recall that the output will be probabilities over the possible classes, so let's look at those probabilities.
Step21: As expected without training, the predictions are all noise. Let's write a scoring function that picks the class with the maximum probability and compares with the example's label. We'll start by converting the probability vectors returned by the softmax into predictions we can match against the labels.
Step22: Next, we can do the same thing for our labels -- using argmax to convert our 1-hot encoding into a digit class.
Step23: Now we can compare the predicted and label classes to compute the error rate and confusion matrix for this batch.
Step25: Now let's wrap this up into our scoring function.
Step26: We'll need to train for some time to actually see useful predicted values. Let's define a loop that will go through our data. We'll print the loss and error periodically.
Here, we want to iterate over the entire data set rather than just the first batch, so we'll need to slice the data to that end.
(One pass through our training set will take some time on a CPU, so be patient if you are executing this notebook.)
Step27: The error seems to have gone down. Let's evaluate the results using the test set.
To help identify rare mispredictions, we'll include the raw count of each (prediction, label) pair in the confusion matrix.
Step28: We can see here that we're mostly accurate, with some errors you might expect, e.g., '9' is often confused as '4'.
Let's do another sanity check to make sure this matches roughly the distribution of our test set, e.g., it seems like we have fewer '5' values. | Python Code:
from __future__ import print_function
from IPython.display import Image
import base64
Image(data=base64.decodestring("iVBORw0KGgoAAAANSUhEUgAAAMYAAABFCAYAAAARv5krAAAYl0lEQVR4Ae3dV4wc1bYG4D3YYJucc8455yCSSIYrBAi4EjriAZHECyAk3rAID1gCIXGRgIvASIQr8UTmgDA5imByPpicTcYGY+yrbx+tOUWpu2e6u7qnZ7qXVFPVVbv2Xutfce+q7hlasmTJktSAXrnn8vR/3/xXmnnadg1aTfxL3/7rwfSPmT+kf/7vf098YRtK+FnaZaf/SS++OjNNathufF9caiT2v/xxqbTGki/SXyM1nODXv/r8+7Tb+r+lnxZNcEFHEG/e3LnpoINXSh/PWzxCy/F9eWjOnDlLrr/++jR16tQakgylqdOWTZOGFqX5C/5IjXNLjdt7/NTvv/+eTjnllLT//vunr776Kl100UVpueWWq8n10lOmpSmTU5o/f0Fa3DDH1ry9p0/++eefaZ999slYYPS0005LK664Yk2eJ02ekqZNnZx+XzA/LfprYgGxePHitOqqq6YZM2akyfPmzUvXXXddHceoic2EOckxDj300CzPggUL0g033NC3OKy00krDer3pppv6FgcBIjvGUkv9u5paZZVVhoHpl4Mvv/wyhfxDQ0NZ7H7EQbacPHny39Tejzj88ccfacqUKRmHEecYf0Nr8GGAQJ8gMHCMPlH0QMzmEBg4RnN4DVr3CQIDx+gTRQ/EbA6BgWM0h9egdZ8g8PeliD4RutfF/Ouvfz9OtZy8aNGiNH/+/GGWl1122XzseYuVNKtqsaI23Ghw0DYCA8doG8JqO+AUG2+8cVq4cGHaY4890vLLL5/WXXfdfI6jvPDCC3lJ8amnnkoezP3000/pl19+GThHtWpIPekYomTxFS7HnkqKjMsss0yGgFE4r62tSBFVJ02aNPyconi9V4/JwzHwT9ZNNtkkeZ6w5ZZbph133DH99ttv6ccff8zXX3nllcRRnHNfv2cNGMQWGRaOrWbUrjsGBRLAA6U4Lhoqw9h2223ztRBq6aWXzsbgvueffz4Lu9NOO2UnYTgrr7xy7tO9nOH111/Pbb744ov0ww8/jAvngAdFMvQDDjggG/0GG2yQX1GZNm1aziCCwzrrrJPl3muvvXKwePnll9M333wzHDCKWPbLMbuAkfISjnvvvXcW/emnn85lqCBqa4a65hiYR/Gk2RNGRlwm3n7ggQfmdrKD9sqJtdZaKxvCnDlz8n3Tp09PXmPYeuutc0SVNQjvnmuvvTa3efzxx9N33303PGZ5rF75DBvvqq233nrp22+/TWeddVbyikpgxCE4vQDhlQUBRfDw2esbs2fPTquvvnqviNN1PuIdJ4GErVx44YUZowsuuCB9+umn6eeff84BspmsWqljhPFDxjGGYx/lDkN33udajCoVlAjRzl4U8LjefRwnPjsXG8OJqKBd8NB1LTU5IHyCd7LJGOYXNoGjFqaGIKtrERDIDKtukfGMH/zRZa1A101+YBF44KfMYzO8VOYYjDWiukiGqc022yyXOUqdzTffPJ/z1ialeqNVxA9gi0wzlOJ5juJlR8JeddVV+ZrIKTq4ZvJp/8EHH+SU+txzz+W2SqmxVFZRplrH5DTRXmGFFdKuu+6azjjjjOzosl5g6D54CQCI4mGjhNQO5occckh2LvLTA6fqJOEnyhU6kNlkZmUuvrtNcFx77bUzhsZWXgoSsm6t4Dsa/tp2DErCmA04HAI4FLjaaqtlBhmnSKiNY4rDtHZFB6jFMMH0RVDH+nCPYxtDCFJnKkniRbDitWjTK3sykQUuMLPn3DZGX8SFnCG/fVyz5zCCBtIHTLshdzif8fERn8cKXxjCNOwCTu3Qf6yqhV4AQokiP489//zzM0DxnQYKwqAtIkko1kQzFFxvaNcJ6u3Pe+65J/cRRvDee+9lA2BInIyRff/997nNO++8k7t0vl2A6vHWynmyiPJ43WKLLbIijz/++LTddtvlTCdzwIWSg9yjxBJ0GN/DDz+c7zv77LOzbEceeWSekwVGgsOsWbNyNo0+qt7DfPvtt8/dmtvIGnPnzk3PPPPMsJ6rHrNef/BBeJA90RprrJEDcNhctMkXR/mnbccwuCjNGTbaaKMc8TBZprITxOdgOvbuKxqGz6LSJ598kseJ9Gi1CYmSv/76a3YyJZWMZJ6Ceskp8EMusihFEAyUmVaa8G2rxTNHIrd733///eH7YeaLNe5xrEzlWNF/HqQDf0Tm+GIbvYdD43MsKAIo/JDgE0G5aFfN8NaWYxiUshikqGYTTUSt0TCkjXsYNqJQQso+rgGa0vX58ccf56hQTtk+48F92rmvlnE1A0on2uKP0Yrw+Nxzzz0zn+ZhjKwRXq6vueaa2TmUiRQfS7SyNeMks9IV9vrvJOl/q622yo4Mfw5Pvm6TMclLdit6shh+YAMnq1E29tEsteUYBgMSgxa5MOAzJZcVXQs4bUR8XxhCHIwzMALCBuCcx5q0tF3u133l8XrRMchFiRYNyMxBKM/5IjZlWVzjULKwACISytIWFsi56aab5mvOKyEikmdAO/iHY+BDCRUZuoPD1e1akECyLseA7d13352DhdKak8Cmlt3U7TSl9p58FwejYK8ncAwKpDTnGDcARbWiAUjHiNEHsITSPlagpEZChcfrZzwSOfBOiQwXLuR3PjAhtwAD08iAMCO/a+5xPTIm3ALjwERf0V+c69QeT7ZujVdLDhgKBrANXAMreMESRkU7rdVPrXNtZ4xIpSLH1VdfnR3j4IMPzkbw2Wefpa+//jovo5188slZsZjArAcvFP3YY4+lSy+9NEdTdTTy0I5xHHfccfm1CH2LtuORKEqmkwVlVU+sBY+IdJRmE0zeeOONnEXuu+++7AhnnnlmWn/99XMJ5brtzTffzHMJx/o555xzkgdb0U8rRtAKrnTYqtG1Ml6teyxInHDCCdlGYByBmG2Z97ChVvFo2zEwbHCRTbqP7EDxPjN2pUBEe86AXAcsg+f10TYMSTvnRM1ulQe1wG/nHEXZZEJZUIYQ5cgWMsEgMgqclFdkdh+MbFFyuddnWMLNfTYkcuuXHlBkpFYNI3dS+mMMfCHHsZWadfUjmQVn8iLywscG21apMscQwR555JEM3KuvvpoZ5LHOmzgjAvBwzFt2/Oijj3Lm4Ayin/MU/eGHH+b2N998c/5MGSaZ44nw7OEd5Rx77LE5+1EehYXxkpes5li2K6+8Mhv8Lrvsko381ltvzcEBfvHQKh5auk9GPvHEE3NJAx+/eKL/HXbYIQcbK3nwN067xAk4s5VHdbvsx0nxrYQeKxJMZAfBA7GlRx99NC9EtCN7JY4RoPBeAHIAyrB3jpHYwqu1d02d7HpZcfqINo5dL7eJMXtxTzk2sgWFM/gcsnCakI2cFOk+523O+Qw7WaeYHYpYRp9xn4BkbPdWSfgJXYYM+ne+2xRj2sdx8EDu8rm4Ntp9pY4RSmb0CIPOAVNGoLA47yU4S2xen37ppZdy9CkLE/3lm8bJHzJbbiavt2Q9p7AkK7o
yXAZOLk7gs9c4PJC0AOE8DDyrgJkaWgYQkSPYuAdpWySfteU8HhqKouYq+io6ZfGeZo7xpbT1+jt+jGULfprpq922ePHMBibwjWVq523KVrzBsIzTaMeu1DFi0HI0YyyYtAekY5MltbRyihFJiROBKIYTwMCTWJNubwdQFCXFapK9z96mtbjgs3thFKWnUgjBzNZIya5FOyUcPG36q4LwRgZ6Ix8HtBk3tirGGU0feAkslHfk5PzBh2cXSkvtWqWOOEaRGcoSHdXDMoYn1tK8yaON0ahbCWgFS/vxSnjn5F4ItLeiFAGAzCKc7MDA1OlIjc4pLFKE7FEyxb5ZPNTbtuiv2fvrtddfOFsYXcwj8d8qv/XGq3femLvvvnvOvrIYPPEjG+PDseDbDnXcMXiyiGiyyACOPvrovN95552zV3/++ef5zVveznlEo6CICvG5l/d4JSvHP+qoo7JjKDs4PkVSGPm9HSz9W5rlPEoCQYHjVFXyRGnBOcKA28VOP/qTBWX6YnS2IKB8qYL/enyGHPbKziOOOCLj6sGeslGW8L6Y4ANr2MY99fpsdL7jjmFwkSTSr6gDVCk+tmDQedcJ5LgdwaLPbu7xjJRRNlErSsiQhVHJlOEQoh182o1wRTnharwYs3itnWP9Rd/RD5mLW5yveh/YRhYMjItyBh/wjPat8tEVx6B00RKo5513XpIl7rzzzuwEourMmTOz95uIcyBfTSXYiy++mCOrSFS1klsFrNZ9eGPoJtmeyRx00EE5cpGbIi21XnbZZbkMee2117KMHIKMIVcotVb/vXoOz6I0+URoMlVFcBFE7L1+IjNYIo6v/fo+D3tC+FCR+FHuwNUCgfOtUlccI5hnJMoIBhN1sBICqMoNNaLP3pkiFGciIIBC4HaEbRWk0dyHb3Mp/EY0I6+NsytvyKxsKhpQr8ozGpm1IZ8IbV+PyllGuyh1YBXXOQEcy6R8M5eAHzuxxX3GRvbaCKJ4aRfXrjkG5jEbk00Prxi8SZTJKmc5/PDDc5v99tsvC+hBjWtqStmD0F4Ma1foMvDtfqZMUc3/lYjMSFFW3NS7JtyyoKzSiTocHoFJHMc+MlK7Mta7n9NbATJerbEYvQWIWCVitIyaXrV3nsG7H2Y2GVcbxyj6NX+waKEPmOvbfShwtjhQDDz5Ygt/uuoY+OPtnICDEMBTWsAQUu0NBBsDEgFEWOADAiDaVRERWsCq5i34IRN+TbTJgn8KwzOFuR4KDUXW7Kyik53Ep8w/+RkxWeO5S1EM5wVABguXMGp69dk1x87D0ObdL32GHI5tsDQGHtwbm/Hw4TpnKvNY5Ge0x113DEwT3tIsIdSnDIfxcxJAevCHfE9cXcmotHXfAw88kIFUdgFjLMn4HuZRuh9FExmjRCCnZxRqcPxz8ioUVk9eRhJkPAYHV8ZVFRkjjFSfAtw222yTy2OZ0iv15fHcQ4dKaMcwsBdEEL26RzaIh5+yK7LSBGPno8yOZX+vzRhfXzZ8cRrtyzzkzpr803XHwB8wTJYIRol+VY8zqMMBbP0f+cExE1qTdbU7x3jwwQdzVBYdesExKNiEWx2MfwoOAyCbJ9uRHZvUTcPmsENhGNE4HBKOHKNqZzQu3KNfX9H1nRABQZlbNkpt4SNo4DWIIesDj9qYnwki2giWqol3330348kZLPm7xvi1Pffcc7MzhA3gy/0oeIuxWtmPiWNgNCIFYwcCAa2FA1ikJZz1aeUVsBmge9TyoqGoIqKUFdEKCFXcU0/pHJizVMUnXBiBh6IicdTTzsEOnuZkDE/2rcJI4KMf/TF+0TucwDhkZ+DGL4/nGkPGV/AIC+2RvfP6ZPTI4gu5XNM/Um7RPzuIFyn1zW7wpQ9UHj+fbOHPmDlGCOGBGIeQQfwuq0jnISBQfOHft7JEHN94Q5xF6XLFFVfkyKIEGyuiGAo3r6BIx0imcM6k+6GHHspOEQbcDq+UTl4BwRu7PstUiPEJFsa9/PLL83nXg6d2xnUvoxS5L7744uGyh/wyRpRF9YwSHsHjE088kWWADQeRFThZkTgBstensZG5h4m56oEdcAp9CwTOVUlj6hgECcGBpA6XDazeiLKhVABQAhKB3cNxbEAL4KoEppm+gjf3OMafDf+UW7zeTL/ltqIiAxBMOIIxnLOHgbFsMGQ4InhE0nJfrXw2hnIRD3SFBKmYWDfqE49woFvOzZno3NxM0HDciMjBDsjEBgLTsJHYN+qjmWtj7hjBLKFFQgL7qRz14jHHHJPBcC2M3wRPVDT5ohzZRv0Z16O/sdozAKmdopUH5kftTrzJpl+lk29CcgpLw3BgpMbwwqF/S80pGJ6xO0WM+8Ybbxw2TuOEoTYakwyovB/JKdzDMVQOHvCRzXju890fL11aGhcMqqIxdwwCRkYQDZAaE7lWBhyosQEmQM439MgffDHm0Si8EcuBC0ezcQSZVKYktzFEW+3sfQ4natRvu9eMTS9F7IvHo+m/2fb6LNuCc0WsW+mzHq9j6hgE9YCHp5tkez2EAVjlMOmyUlU2Lis8ygVR0rykyoltPZCaOY9fr32Qp50X6xi7pWCGbsHBvwLgGIcddljGxvcsjOU1GseyiKjJQWydpiqNsBlei85BfhNxeJunVCl31x0jBOMAjJ9jRC3OEERDS7QMI0qQohIYgLSq7FJuMZbi9WZA7kRbvFAWx5Dyy449mjEDG/dyDPW4VSiy2iNvBcCSUdxyyy35OYHrqJUx843j8I/qQpA074BVVdR1x+AIHCIiIGewsqIuds41tSSlOxeOFHuOQ/E+2zPEuFYVKM32U3RMvGy44YbZMTg2B2+GOIXXJcjpR9lkUy/QyZ7GUU8zAD9RCiuR0oQYVv1IMAk7qFL+rjkGg7GZQPLufffdN69QKJtkCAKKjNGu1p7gMgWDYEDRpkpAmu0rnMLehie/RavcI49Sr1ZW0w6V91ac/IsxmdHPB0U5pQ+4+TExDudNUhPufnaKIn7N6m2k9h11jKLRqP+UQJb2eHh4uYjK0LW1D0MpCq0NR4g24RTR/0hCdvM6/m14FtljeTL4D/liedFeO7LYcyh7eMGDY8X16IM8Vp9kWjj2GwWG5IZb2FKVOHTMMTCvDKBgD2Z22223bNynnnpqVrZXBFxjQDZUFJiwIqKHN8qHO+64IxvN/fffn9vG/VWC0UpfeC5uZMEbg/ctM/8SzYOxZ599Nhs4ebSx0ECpcDFvMCdRggkesoQ+zaHU0N4EgAEnue2227JTON+LgaEVDFu5h+w2Wdl33GFkEUIQqYIqdYwwbJGO8q2xOydqUiTFWpJVPzsuUwhlzzFETxlGdFSCqaMB4XwvUzgKWU3AyW4uwFns4QMbilUyxbq8p/4cw3UEB8FDGQUDx/acqB8zRS2dw5qthe3VatPKucocg6JiYu3lP2nfawvekKVITzgJQLH24QTBtPZeE2D89957b27jwZ1IwIm8R2OMWHmJ+3pxTzaK8l+HyMrgTzrppMxqOIEsGoZvz0nsyWiliRMUl2G9aOk6POyLZVUvYtBpniL4wA1m9l
VSW46BOQqKpTLK9FnUsxftvW4swssa4dkhCGFCMNfcp08lhM9KKc4h0obgsa8ShHb6Cv5DJnu8IwHB9TB852DkOlzIRV6kXbSVMfQj48BWdhE0TLr1Fe3zQR/+gRMK5yjuq4KjZccQ2SlYjexHmCnSkiLjtsesmlnpQ5naFo1A5GMAHoJxBI709ttv54ygntZWmWEcQMS9VQleRT9kNmfAG0P3HRPGbHnVudg4gEyJOAYiE0wikHAAcxHyxndO4KI/WHEK/Qzo7wjAXfaFNdurikaNtIERRTqmYIYdE2tGEs8hfJ8iFB/3xV67MCjG8NZbb6Unn3wyC+XfDxfnDxFp496qhK6qn5CDA5twK/fIRH5Gb0MMOhxCFgkKjOBoHqKEkmWvueaanG04iTHcP3CKQO0/e3ZhgceP2smqcKyKRuUYlEKhPDL+d5z1c4qVFTDnmBIZMwZ9DiKAzTmvCetPNFR7W7fXXt/KLddqTcyjr17bRybkEF5XiQhPHnMuDlF07MCB3I49l4EDxTrnfsFBJBxQbQSKeGoROqjdurWzIzoGJqRxS2KUf/rpp2flcRDRjRKVCdpFhCwz7rOVKE5z++235/7uuuuuXDq5P5yKEY0np8B3TKb9K1/vLTF0/7MiJtyRPYrq4fx+7R2e7vFDDzDyfx1goPwcUGMEYG/rFI3oGAYW0UUyimQIcRwGzbgpVsZAUTYE065xCtc5GUeSHTyg4kzKs/FKoSBljyhvTz6y2gseZAwlwgI+cNBGtpV9ZRj4BobjFY9O8g0bQcXWaRpxBE5hHuFnJ0XB6dOn56ge2QGDlK2dFSSG4b8kxVzEdSWGVxgYQLzrxJkIGgbTaUE73b9MZ/KNfIMOJpdcckndYZWmFAwv+wgydW/o8wsCK3xnz56dFzx8oxPGtk7QiI5h0FBaeGzRKYIpjDN2ig6lB9OiprmI60qNieIMIXvsQy7yotjH9eI+2hbPDY4bI8D+2JdnWTYY+iwDs78qaUTHEM0sI1pClAVMnqX9ImGQszB6DHoNOLzZNZlGRlEq9JNB9JOsRXvoxDGnsDTudwFUHTNmzMjDqEaU9xYvGgWiZnka0TEo16CeNyCM1SLtwmt5cNEoCOUa5xjQAIFWEGBP5rbKdTRr1qwcfGUMthXVTCt917pnRMdwE6ZiQm0JckADBMYCgWLwtXjTSeq/d5Y7ieag7wmDwMAxJowqB4JUicDAMapEc9DXhEFgcjxcM7vvR4on7bHS1q84WNkpUr/iEL+aOLRw4cIlQCmuIhUBmsjHlpQ9c7EmzjEsN1vd6DeCg8UVT+qRd7b6EQey8wMT+6El8RSu36xhIO8AgQYI9F94bADG4NIAgUDg/wHX+3lgThDIegAAAABJRU5ErkJggg==".encode('utf-8')), embed=True)
Explanation: MNIST from scratch
This notebook walks through an example of training a TensorFlow model to do digit classification using the MNIST data set. MNIST is a labeled set of images of handwritten digits.
An example follows.
End of explanation
import os
from six.moves.urllib.request import urlretrieve
SOURCE_URL = 'https://storage.googleapis.com/cvdf-datasets/mnist/'
#SOURCE_URL = 'http://yann.lecun.com/exdb/mnist/'
# for those who have no access to google storage, use lecun's repo please
WORK_DIRECTORY = "/tmp/mnist-data"
def maybe_download(filename):
A helper to download the data files if not present.
if not os.path.exists(WORK_DIRECTORY):
os.mkdir(WORK_DIRECTORY)
filepath = os.path.join(WORK_DIRECTORY, filename)
if not os.path.exists(filepath):
filepath, _ = urlretrieve(SOURCE_URL + filename, filepath)
statinfo = os.stat(filepath)
print('Successfully downloaded', filename, statinfo.st_size, 'bytes.')
else:
print('Already downloaded', filename)
return filepath
train_data_filename = maybe_download('train-images-idx3-ubyte.gz')
train_labels_filename = maybe_download('train-labels-idx1-ubyte.gz')
test_data_filename = maybe_download('t10k-images-idx3-ubyte.gz')
test_labels_filename = maybe_download('t10k-labels-idx1-ubyte.gz')
Explanation: We're going to be building a model that recognizes these digits as 5, 0, and 4.
Imports and input data
We'll proceed in steps, beginning with importing and inspecting the MNIST data. This doesn't have anything to do with TensorFlow in particular -- we're just downloading the data archive.
End of explanation
import gzip, binascii, struct, numpy
import matplotlib.pyplot as plt
with gzip.open(test_data_filename) as f:
# Print the header fields.
for field in ['magic number', 'image count', 'rows', 'columns']:
# struct.unpack reads the binary data provided by f.read.
# The format string '>i' decodes a big-endian integer, which
# is the encoding of the data.
print(field, struct.unpack('>i', f.read(4))[0])
# Read the first 28x28 set of pixel values.
# Each pixel is one byte, [0, 255], a uint8.
buf = f.read(28 * 28)
image = numpy.frombuffer(buf, dtype=numpy.uint8)
# Print the first few values of image.
print('First 10 pixels:', image[:10])
Explanation: Working with the images
Now we have the files, but the format requires a bit of pre-processing before we can work with it. The data is gzipped, requiring us to decompress it. And, each of the images are grayscale-encoded with values from [0, 255]; we'll normalize these to [-0.5, 0.5].
Let's try to unpack the data using the documented format:
[offset] [type] [value] [description]
0000 32 bit integer 0x00000803(2051) magic number
0004 32 bit integer 60000 number of images
0008 32 bit integer 28 number of rows
0012 32 bit integer 28 number of columns
0016 unsigned byte ?? pixel
0017 unsigned byte ?? pixel
........
xxxx unsigned byte ?? pixel
Pixels are organized row-wise. Pixel values are 0 to 255. 0 means background (white), 255 means foreground (black).
We'll start by reading the first image from the test data as a sanity check.
End of explanation
%matplotlib inline
# We'll show the image and its pixel value histogram side-by-side.
_, (ax1, ax2) = plt.subplots(1, 2)
# To interpret the values as a 28x28 image, we need to reshape
# the numpy array, which is one dimensional.
ax1.imshow(image.reshape(28, 28), cmap=plt.cm.Greys);
ax2.hist(image, bins=20, range=[0,255]);
Explanation: The first 10 pixels are all 0 values. Not very interesting, but also unsurprising. We'd expect most of the pixel values to be the background color, 0.
We could print all 28 * 28 values, but what we really need to do to make sure we're reading our data properly is look at an image.
End of explanation
# Let's convert the uint8 image to 32 bit floats and rescale
# the values to be centered around 0, between [-0.5, 0.5].
#
# We again plot the image and histogram to check that we
# haven't mangled the data.
scaled = image.astype(numpy.float32)
scaled = (scaled - (255 / 2.0)) / 255
_, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(scaled.reshape(28, 28), cmap=plt.cm.Greys);
ax2.hist(scaled, bins=20, range=[-0.5, 0.5]);
Explanation: The large number of 0 values correspond to the background of the image, another large mass of value 255 is black, and a mix of grayscale transition values in between.
Both the image and histogram look sensible. But, it's good practice when training image models to normalize values to be centered around 0.
We'll do that next. The normalization code is fairly short, and it may be tempting to assume we haven't made mistakes, but we'll double-check by looking at the rendered input and histogram again. Malformed inputs are a surprisingly common source of errors when developing new models.
End of explanation
with gzip.open(test_labels_filename) as f:
# Print the header fields.
for field in ['magic number', 'label count']:
print(field, struct.unpack('>i', f.read(4))[0])
print('First label:', struct.unpack('B', f.read(1))[0])
Explanation: Great -- we've retained the correct image data while properly rescaling to the range [-0.5, 0.5].
Reading the labels
Let's next unpack the test label data. The format here is similar: a magic number followed by a count followed by the labels as uint8 values. In more detail:
[offset] [type] [value] [description]
0000 32 bit integer 0x00000801(2049) magic number (MSB first)
0004 32 bit integer 10000 number of items
0008 unsigned byte ?? label
0009 unsigned byte ?? label
........
xxxx unsigned byte ?? label
As with the image data, let's read the first test set value to sanity check our input path. We'll expect a 7.
End of explanation
IMAGE_SIZE = 28
PIXEL_DEPTH = 255
def extract_data(filename, num_images):
Extract the images into a 4D tensor [image index, y, x, channels].
For MNIST data, the number of channels is always 1.
Values are rescaled from [0, 255] down to [-0.5, 0.5].
print('Extracting', filename)
with gzip.open(filename) as bytestream:
# Skip the magic number and dimensions; we know these values.
bytestream.read(16)
buf = bytestream.read(IMAGE_SIZE * IMAGE_SIZE * num_images)
data = numpy.frombuffer(buf, dtype=numpy.uint8).astype(numpy.float32)
data = (data - (PIXEL_DEPTH / 2.0)) / PIXEL_DEPTH
data = data.reshape(num_images, IMAGE_SIZE, IMAGE_SIZE, 1)
return data
train_data = extract_data(train_data_filename, 60000)
test_data = extract_data(test_data_filename, 10000)
Explanation: Indeed, the first label of the test set is 7.
Forming the training, testing, and validation data sets
Now that we understand how to read a single element, we can read a much larger set that we'll use for training, testing, and validation.
Image data
The code below is a generalization of our prototyping above that reads the entire test and training data set.
End of explanation
print('Training data shape', train_data.shape)
_, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(train_data[0].reshape(28, 28), cmap=plt.cm.Greys);
ax2.imshow(train_data[1].reshape(28, 28), cmap=plt.cm.Greys);
Explanation: A crucial difference here is how we reshape the array of pixel values. Instead of one image that's 28x28, we now have a set of 60,000 images, each one being 28x28. We also include a number of channels, which for grayscale images as we have here is 1.
Let's make sure we've got the reshaping parameters right by inspecting the dimensions and the first two images. (Again, mangled input is a very common source of errors.)
End of explanation
NUM_LABELS = 10
def extract_labels(filename, num_images):
Extract the labels into a 1-hot matrix [image index, label index].
print('Extracting', filename)
with gzip.open(filename) as bytestream:
# Skip the magic number and count; we know these values.
bytestream.read(8)
buf = bytestream.read(1 * num_images)
labels = numpy.frombuffer(buf, dtype=numpy.uint8)
# Convert to dense 1-hot representation.
return (numpy.arange(NUM_LABELS) == labels[:, None]).astype(numpy.float32)
train_labels = extract_labels(train_labels_filename, 60000)
test_labels = extract_labels(test_labels_filename, 10000)
Explanation: Looks good. Now we know how to index our full set of training and test images.
Label data
Let's move on to loading the full set of labels. As is typical in classification problems, we'll convert our input labels into a 1-hot encoding over a length 10 vector corresponding to 10 digits. The vector [0, 1, 0, 0, 0, 0, 0, 0, 0, 0], for example, would correspond to the digit 1.
End of explanation
print('Training labels shape', train_labels.shape)
print('First label vector', train_labels[0])
print('Second label vector', train_labels[1])
Explanation: As with our image data, we'll double-check that our 1-hot encoding of the first few values matches our expectations.
End of explanation
VALIDATION_SIZE = 5000
validation_data = train_data[:VALIDATION_SIZE, :, :, :]
validation_labels = train_labels[:VALIDATION_SIZE]
train_data = train_data[VALIDATION_SIZE:, :, :, :]
train_labels = train_labels[VALIDATION_SIZE:]
train_size = train_labels.shape[0]
print('Validation shape', validation_data.shape)
print('Train size', train_size)
Explanation: The 1-hot encoding looks reasonable.
Segmenting data into training, test, and validation
The final step in preparing our data is to split it into three sets: training, test, and validation. This isn't the format of the original data set, so we'll take a small slice of the training data and treat that as our validation set.
End of explanation
import tensorflow as tf
# We'll bundle groups of examples during training for efficiency.
# This defines the size of the batch.
BATCH_SIZE = 60
# We have only one channel in our grayscale images.
NUM_CHANNELS = 1
# The random seed that defines initialization.
SEED = 42
# This is where training samples and labels are fed to the graph.
# These placeholder nodes will be fed a batch of training data at each
# training step, which we'll write once we define the graph structure.
train_data_node = tf.placeholder(
tf.float32,
shape=(BATCH_SIZE, IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS))
train_labels_node = tf.placeholder(tf.float32,
shape=(BATCH_SIZE, NUM_LABELS))
# For the validation and test data, we'll just hold the entire dataset in
# one constant node.
validation_data_node = tf.constant(validation_data)
test_data_node = tf.constant(test_data)
# The variables below hold all the trainable weights. For each, the
# parameter defines how the variables will be initialized.
conv1_weights = tf.Variable(
tf.truncated_normal([5, 5, NUM_CHANNELS, 32], # 5x5 filter, depth 32.
stddev=0.1,
seed=SEED))
conv1_biases = tf.Variable(tf.zeros([32]))
conv2_weights = tf.Variable(
tf.truncated_normal([5, 5, 32, 64],
stddev=0.1,
seed=SEED))
conv2_biases = tf.Variable(tf.constant(0.1, shape=[64]))
fc1_weights = tf.Variable( # fully connected, depth 512.
tf.truncated_normal([IMAGE_SIZE // 4 * IMAGE_SIZE // 4 * 64, 512],
stddev=0.1,
seed=SEED))
fc1_biases = tf.Variable(tf.constant(0.1, shape=[512]))
fc2_weights = tf.Variable(
tf.truncated_normal([512, NUM_LABELS],
stddev=0.1,
seed=SEED))
fc2_biases = tf.Variable(tf.constant(0.1, shape=[NUM_LABELS]))
print('Done')
Explanation: Defining the model
Now that we've prepared our data, we're ready to define our model.
The comments describe the architecture, which fairly typical of models that process image data. The raw input passes through several convolution and max pooling layers with rectified linear activations before several fully connected layers and a softmax loss for predicting the output class. During training, we use dropout.
We'll separate our model definition into three steps:
Defining the variables that will hold the trainable weights.
Defining the basic model graph structure described above. And,
Stamping out several copies of the model graph for training, testing, and validation.
We'll start with the variables.
End of explanation
def model(data, train=False):
The Model definition.
# 2D convolution, with 'SAME' padding (i.e. the output feature map has
# the same size as the input). Note that {strides} is a 4D array whose
# shape matches the data layout: [image index, y, x, depth].
conv = tf.nn.conv2d(data,
conv1_weights,
strides=[1, 1, 1, 1],
padding='SAME')
# Bias and rectified linear non-linearity.
relu = tf.nn.relu(tf.nn.bias_add(conv, conv1_biases))
# Max pooling. The kernel size spec ksize also follows the layout of
# the data. Here we have a pooling window of 2, and a stride of 2.
pool = tf.nn.max_pool(relu,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME')
conv = tf.nn.conv2d(pool,
conv2_weights,
strides=[1, 1, 1, 1],
padding='SAME')
relu = tf.nn.relu(tf.nn.bias_add(conv, conv2_biases))
pool = tf.nn.max_pool(relu,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME')
# Reshape the feature map cuboid into a 2D matrix to feed it to the
# fully connected layers.
pool_shape = pool.get_shape().as_list()
reshape = tf.reshape(
pool,
[pool_shape[0], pool_shape[1] * pool_shape[2] * pool_shape[3]])
# Fully connected layer. Note that the '+' operation automatically
# broadcasts the biases.
hidden = tf.nn.relu(tf.matmul(reshape, fc1_weights) + fc1_biases)
# Add a 50% dropout during training only. Dropout also scales
# activations such that no rescaling is needed at evaluation time.
if train:
hidden = tf.nn.dropout(hidden, 0.5, seed=SEED)
return tf.matmul(hidden, fc2_weights) + fc2_biases
print('Done')
Explanation: Now that we've defined the variables to be trained, we're ready to wire them together into a TensorFlow graph.
We'll define a helper to do this, model, which will return copies of the graph suitable for training and testing. Note the train argument, which controls whether or not dropout is used in the hidden layer. (We want to use dropout only during training.)
End of explanation
# Training computation: logits + cross-entropy loss.
logits = model(train_data_node, True)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
labels=train_labels_node, logits=logits))
# L2 regularization for the fully connected parameters.
regularizers = (tf.nn.l2_loss(fc1_weights) + tf.nn.l2_loss(fc1_biases) +
tf.nn.l2_loss(fc2_weights) + tf.nn.l2_loss(fc2_biases))
# Add the regularization term to the loss.
loss += 5e-4 * regularizers
# Optimizer: set up a variable that's incremented once per batch and
# controls the learning rate decay.
batch = tf.Variable(0)
# Decay once per epoch, using an exponential schedule starting at 0.01.
learning_rate = tf.train.exponential_decay(
0.01, # Base learning rate.
batch * BATCH_SIZE, # Current index into the dataset.
train_size, # Decay step.
0.95, # Decay rate.
staircase=True)
# Use simple momentum for the optimization.
optimizer = tf.train.MomentumOptimizer(learning_rate,
0.9).minimize(loss,
global_step=batch)
# Predictions for the minibatch, validation set and test set.
train_prediction = tf.nn.softmax(logits)
# We'll compute them only once in a while by calling their {eval()} method.
validation_prediction = tf.nn.softmax(model(validation_data_node))
test_prediction = tf.nn.softmax(model(test_data_node))
print('Done')
Explanation: Having defined the basic structure of the graph, we're ready to stamp out multiple copies for training, testing, and validation.
Here, we'll do some customizations depending on which graph we're constructing. train_prediction holds the training graph, for which we use cross-entropy loss and weight regularization. We'll adjust the learning rate during training -- that's handled by the exponential_decay operation, which is itself an argument to the MomentumOptimizer that performs the actual training.
The validation and prediction graphs are much simpler to generate -- we need only create copies of the model with the validation and test inputs and a softmax classifier as the output.
End of explanation
# Create a new interactive session that we'll use in
# subsequent code cells.
s = tf.InteractiveSession()
# Use our newly created session as the default for
# subsequent operations.
s.as_default()
# Initialize all the variables we defined above.
tf.global_variables_initializer().run()
Explanation: Training and visualizing results
Now that we have the training, test, and validation graphs, we're ready to actually go through the training loop and periodically evaluate loss and error.
All of these operations take place in the context of a session. In Python, we'd write something like:
with tf.Session() as s:
...training / test / evaluation loop...
But, here, we'll want to keep the session open so we can poke at values as we work out the details of training. The TensorFlow API includes a function for this, InteractiveSession.
We'll start by creating a session and initializing the variables we defined above.
End of explanation
BATCH_SIZE = 60
# Grab the first BATCH_SIZE examples and labels.
batch_data = train_data[:BATCH_SIZE, :, :, :]
batch_labels = train_labels[:BATCH_SIZE]
# This dictionary maps the batch data (as a numpy array) to the
# node in the graph it should be fed to.
feed_dict = {train_data_node: batch_data,
train_labels_node: batch_labels}
# Run the graph and fetch some of the nodes.
_, l, lr, predictions = s.run(
[optimizer, loss, learning_rate, train_prediction],
feed_dict=feed_dict)
print('Done')
Explanation: Now we're ready to perform operations on the graph. Let's start with one round of training. We're going to organize our training steps into batches for efficiency; i.e., training using a small set of examples at each step rather than a single example.
End of explanation
print(predictions[0])
Explanation: Let's take a look at the predictions. How did we do? Recall that the output will be probabilities over the possible classes, so let's look at those probabilities.
End of explanation
# The highest probability in the first entry.
print('First prediction', numpy.argmax(predictions[0]))
# But, predictions is actually a list of BATCH_SIZE probability vectors.
print(predictions.shape)
# So, we'll take the highest probability for each vector.
print('All predictions', numpy.argmax(predictions, 1))
Explanation: As expected without training, the predictions are all noise. Let's write a scoring function that picks the class with the maximum probability and compares with the example's label. We'll start by converting the probability vectors returned by the softmax into predictions we can match against the labels.
End of explanation
print('Batch labels', numpy.argmax(batch_labels, 1))
Explanation: Next, we can do the same thing for our labels -- using argmax to convert our 1-hot encoding into a digit class.
End of explanation
correct = numpy.sum(numpy.argmax(predictions, 1) == numpy.argmax(batch_labels, 1))
total = predictions.shape[0]
print(float(correct) / float(total))
confusions = numpy.zeros([10, 10], numpy.float32)
bundled = zip(numpy.argmax(predictions, 1), numpy.argmax(batch_labels, 1))
for predicted, actual in bundled:
confusions[predicted, actual] += 1
plt.grid(False)
plt.xticks(numpy.arange(NUM_LABELS))
plt.yticks(numpy.arange(NUM_LABELS))
plt.imshow(confusions, cmap=plt.cm.jet, interpolation='nearest');
Explanation: Now we can compare the predicted and label classes to compute the error rate and confusion matrix for this batch.
End of explanation
def error_rate(predictions, labels):
Return the error rate and confusions.
correct = numpy.sum(numpy.argmax(predictions, 1) == numpy.argmax(labels, 1))
total = predictions.shape[0]
error = 100.0 - (100 * float(correct) / float(total))
confusions = numpy.zeros([10, 10], numpy.float32)
bundled = zip(numpy.argmax(predictions, 1), numpy.argmax(labels, 1))
for predicted, actual in bundled:
confusions[predicted, actual] += 1
return error, confusions
print('Done')
Explanation: Now let's wrap this up into our scoring function.
End of explanation
# Train for one full pass (epoch) over our training set.
steps = train_size // BATCH_SIZE
for step in range(steps):
# Compute the offset of the current minibatch in the data.
# Note that we could use better randomization across epochs.
offset = (step * BATCH_SIZE) % (train_size - BATCH_SIZE)
batch_data = train_data[offset:(offset + BATCH_SIZE), :, :, :]
batch_labels = train_labels[offset:(offset + BATCH_SIZE)]
# This dictionary maps the batch data (as a numpy array) to the
# node in the graph it should be fed to.
feed_dict = {train_data_node: batch_data,
train_labels_node: batch_labels}
# Run the graph and fetch some of the nodes.
_, l, lr, predictions = s.run(
[optimizer, loss, learning_rate, train_prediction],
feed_dict=feed_dict)
# Print out the loss periodically.
if step % 100 == 0:
error, _ = error_rate(predictions, batch_labels)
print('Step %d of %d' % (step, steps))
print('Mini-batch loss: %.5f Error: %.5f Learning rate: %.5f' % (l, error, lr))
print('Validation error: %.1f%%' % error_rate(
validation_prediction.eval(), validation_labels)[0])
Explanation: We'll need to train for some time to actually see useful predicted values. Let's define a loop that will go through our data. We'll print the loss and error periodically.
Here, we want to iterate over the entire data set rather than just the first batch, so we'll need to slice the data to that end.
(One pass through our training set will take some time on a CPU, so be patient if you are executing this notebook.)
End of explanation
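The comment in the training loop above notes that "we could use better randomization across epochs." As a hedged sketch (not part of the original notebook), one simple option is to draw a fresh permutation of the example indices at the start of each epoch and slice the batches out of it:
# Sketch only: per-epoch shuffling of the training examples.
perm = numpy.random.permutation(train_size)
for step in range(steps):
    idx = perm[step * BATCH_SIZE:(step + 1) * BATCH_SIZE]
    batch_data = train_data[idx, :, :, :]
    batch_labels = train_labels[idx]
    # ...then build feed_dict and run the optimizer exactly as in the loop above.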
test_error, confusions = error_rate(test_prediction.eval(), test_labels)
print('Test error: %.1f%%' % test_error)
plt.xlabel('Actual')
plt.ylabel('Predicted')
plt.grid(False)
plt.xticks(numpy.arange(NUM_LABELS))
plt.yticks(numpy.arange(NUM_LABELS))
plt.imshow(confusions, cmap=plt.cm.jet, interpolation='nearest');
for i, cas in enumerate(confusions):
for j, count in enumerate(cas):
if count > 0:
xoff = .07 * len(str(count))
plt.text(j-xoff, i+.2, int(count), fontsize=9, color='white')
Explanation: The error seems to have gone down. Let's evaluate the results using the test set.
To help identify rare mispredictions, we'll include the raw count of each (prediction, label) pair in the confusion matrix.
End of explanation
plt.xticks(numpy.arange(NUM_LABELS))
plt.hist(numpy.argmax(test_labels, 1));
Explanation: We can see here that we're mostly accurate, with some errors you might expect, e.g., '9' is often confused as '4'.
Let's do another sanity check to make sure this matches roughly the distribution of our test set, e.g., it seems like we have fewer '5' values.
End of explanation |
918 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Content and Objectives
Confirm theoretical values for variations and combinations
Large number of tuples are sampled and occurrences of according events are being counted
Import
Step1: Parameters
Step2: Function for Sampling with Parametrization
Step3: Simulations
Step4: Exercise
Quiz | Python Code:
# importing
import numpy as np
from scipy import special
import time
Explanation: Content and Objectives
Confirm theoretical values for variations and combinations
A large number of tuples is sampled and the occurrences of the corresponding events are counted
Import
End of explanation
# parameters of the combinatoric model
N = 7
K = 4
# number of trials to be simulated
N_trials = int( 1e4 )
start = time.time()
Explanation: Parameters
End of explanation
def count_samples( N, K, N_trials, respect_order, return_elements ):
'''
Function generating samples and counting their number
IN: N = size of set out of which elements are sampled
K = size of sample
N_trials = number of trials for simulation
respect_order = order of samples mattering (or not),
resulting in variations or combinations, respectively;
boolean
return_elements = returning sampled elements to box (or not);
boolean
OUT: numbers of different samples
'''
# check that sample size is feasible
assert return_elements == 1 or K <= N, 'Sample has to be feasible!'
# empty list for collecting samples
collected = []
# loop for realizations
for _n in range( N_trials ):
# sample subset
sample = list( np.random.choice( N, K, replace = return_elements ) )
# sort sample if required
if not respect_order:
sample.sort()
# add sample to history if not observed yet
if sample in collected:
continue
else:
collected.append( sample )
return len( collected )
Explanation: Function for Sampling with Parametrization
End of explanation
print('\nSample with order, returning elements:')
print('---------------------------------------------\n')
# get values
theo = N**K
sim = count_samples( N, K, N_trials, respect_order = 1, return_elements = 1 )
print('Theoretical value:\t\t\t{}'.format( theo ) )
print('Different tuples in simulation: \t{}'.format( sim ) )
print('\nSample with order, not returning elements:')
print('---------------------------------------------\n')
# get values
# NOTE: upper limit not included in arange -> increase by 1
theo = np.prod( np.arange( N-K+1, N+1 ) )
sim = count_samples( N, K, N_trials, respect_order = 1, return_elements = 0 )
print('Theoretical value:\t\t\t{}'.format( theo ) )
print('Different tuples in simulation: \t{}'.format( sim ) )
print('\nSample without order, not returning elements:')
print('---------------------------------------------\n')
# get values
theo = special.binom( N, K ).astype( int )
sim = count_samples( N, K, N_trials, respect_order = 0, return_elements = 0 )
print('Theoretical value:\t\t\t{}'.format( theo ) )
print('Different tuples in simulation: \t{}'.format( sim ) )
print('\nSample without order, returning elements:')
print('---------------------------------------------\n')
# get values
theo = special.binom( N+K-1, K ).astype( int )
sim = count_samples( N, K, N_trials, respect_order = 0, return_elements = 1 )
print('Theoretical value:\t\t\t{}'.format( theo ) )
print('Different tuples in simulation: \t{}'.format( sim ) )
Explanation: Simulations
End of explanation
print('Elapsed: {}'.format( time.time() - start ))
Explanation: Exercise
Quiz: Can you reason how to speed up the simulation while maintaining accuracy? (One possible answer is sketched below.)
Show Time for Simulation
End of explanation |
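One possible answer to the quiz, shown as a hedged sketch: the membership test "sample in collected" scans a growing Python list on every trial, which is O(n). Storing the (sorted) samples as tuples in a set makes that check O(1) on average, so the same number of trials counts exactly the same events but runs much faster. The function name count_samples_fast is introduced here for illustration only.
def count_samples_fast( N, K, N_trials, respect_order, return_elements ):
    # Same counting logic as count_samples, but collected is a set of tuples,
    # so the membership check no longer scans a list.
    assert return_elements == 1 or K <= N, 'Sample has to be feasible!'
    collected = set()
    for _n in range( N_trials ):
        sample = np.random.choice( N, K, replace = return_elements )
        if not respect_order:
            sample = np.sort( sample )
        collected.add( tuple( sample ) )
    return len( collected )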
919 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fuel
History
Started as a part of Blocks, a framework for building and managing Theano graphs in the context of neural networks.
Became its own project when we realized it was distinct enough that it could be used by other frameworks too.
Goal
Simplify downloading, storing, iterating over and preprocessing data used to train machine learning models.
Quick start
We'll go over a quick example to see what Fuel is capable of.
Let's start by creating some random data to act as features and targets. We'll pretend that we have eight 2x2 grayscale images separated into four classes.
Step1: Our goal is to use Fuel to interface with this data, iterate over it in various ways and apply transformations to it on the fly.
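A hedged sketch of the random data described above: eight 2x2 grayscale images and one label per image drawn from four classes. The dtype, shapes and the fixed seed are assumptions made for illustration.
import numpy
rng = numpy.random.RandomState(1234)
features = rng.randint(256, size=(8, 2, 2)).astype('uint8')   # eight 2x2 "images"
targets = rng.randint(4, size=(8, 1)).astype('uint8')         # four classes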
Division of labor
There are four basic tasks that Fuel needs to handle, each covered by its own abstraction below: interfacing with the data itself (datasets), deciding which examples to visit and in what order (iteration schemes), iterating over the chosen examples (data streams), and applying on-the-fly transformations (transformers).
Step2: We can ask the dataset what sources of data it provides by accessing its sources attribute. We can also know which axes correspond to what by accessing its axis_labels attribute. It also has a num_examples property telling us the number of examples it contains.
Step3: Datasets themselves are stateless objects (as opposed to, say, an open file handle, or an iterator object). In order to request data from the dataset, we need to ask it to instantiate some stateful object with which it will interact. This is done through the open method
Step4: We see that in IterableDataset's case the state is an iterator object. We can now visit the examples this dataset contains using its get_data method.
Step5: (Note that the return order depends on the order of dataset.sources, which is nondeterministic if you use dict instances. In order to have deterministic behaviour, it is recommended that you use OrderedDict instances instead.)
Eventually, the iterator is depleted and it raises a StopIteration exception. We can iterate over the dataset again by requesting a fresh iterator through the dataset's reset method.
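A possible sketch of the IterableDataset workflow just described (Steps 2 through 5), using the features and targets arrays created earlier. The constructor arguments follow Fuel's documented API but should be treated as assumptions here rather than as the canonical solution.
from collections import OrderedDict
from fuel.datasets import IterableDataset

dataset = IterableDataset(
    iterables=OrderedDict([('features', features), ('targets', targets)]),
    axis_labels=OrderedDict([('features', ('batch', 'height', 'width')),
                             ('targets', ('batch', 'index'))]))
print(dataset.sources, dataset.num_examples)

state = dataset.open()                 # the state is an iterator object
print(dataset.get_data(state=state))   # one example per call
state = dataset.reset(state)           # fresh iterator to start over
dataset.close(state)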
Step6: IndexableDataset
The IterableDataset implementation is pretty minimal. For instance, it only lets you iterate sequentially and examplewise over your data.
If your data happens to be indexable (e.g. a list, or a numpy array), then IndexableDataset will let you do much more.
We instantiate IndexableDataset just like IterableDataset.
Step7: The main advantage of IndexableDataset over IterableDataset is that it allows random access of the data it contains. In order to do so, we need to pass an additional request argument to get_data in the form of a list of indices.
Step8: (See how IndexableDataset returns a None state: since requests are served by indexing into the data directly, no per-iteration state is needed.)
Step9: You can see that in this case only the features are returned by get_data.
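A possible sketch of Steps 6 through 9: instantiating IndexableDataset, requesting examples by index, and building a variant that exposes only the features. Again, the constructor arguments are assumptions based on Fuel's documented API.
from fuel.datasets import IndexableDataset

dataset = IndexableDataset(
    indexables=OrderedDict([('features', features), ('targets', targets)]))

state = dataset.open()                                   # returns None: no real state needed
print(dataset.get_data(state=state, request=[0, 2, 5]))  # random access by index
dataset.close(state)

# A dataset containing only the features source:
features_dataset = IndexableDataset(
    indexables=OrderedDict([('features', features)]))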
Iteration schemes
Step10: We can therefore use an iteration scheme to visit a dataset in some order.
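A possible sketch of Step 10: an iteration scheme decides which indices to visit and in what order, and hands them out as requests. SequentialScheme and ShuffledScheme are Fuel's standard batch schemes; the batch size of 4 is an arbitrary choice for illustration.
from fuel.schemes import SequentialScheme, ShuffledScheme

scheme = ShuffledScheme(examples=dataset.num_examples, batch_size=4)
for request in SequentialScheme(examples=dataset.num_examples, batch_size=4).get_request_iterator():
    print(request)   # e.g. [0, 1, 2, 3] then [4, 5, 6, 7]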
Step11: Data streams
Step12: Transformers
Step13: The resulting data stream can be used to iterate over the dataset just like before, but this time features will be standardized on-the-fly.
Step14: Now, let's imagine that for some reason (e.g. running Theano code on GPU) we need features to have a data type of float32.
Step15: As you can see, Fuel makes it easy to chain transformations to form a preprocessing pipeline. The complete pipeline now looks like this
Step16: Going further
You now know enough to find your way around Fuel. Let's cover some more advanced use cases.
Large datasets
Sometimes, the dataset you're working on is too big to fit in memory. In that case, you'll want to use another common Dataset subclass, H5PYDataset.
H5PYDataset
As the name implies, H5PYDataset is a dataset class that interfaces with HDF5 files using the h5py library.
HDF5 is a wonderful storage format, as it is organizable and self-documenting. This allows us to make a basic set of assumptions about the structure of an HDF5 file which, if met, greatly simplify creating new datasets and interacting with them. We won't go through these assumptions right now, but if you're curious, the online documentation offers an in-depth tutorial on how to create new H5PYDataset-compatible files.
Let's create new random data. This time, we'll pretend that we're given a training set and a test set.
Step17: We now create an HDF5 file and populate it with our data.
Step18: The fill_hdf5_file function fills the HDF5 file with our data and sets up metadata so H5PYDataset is able to recover our train and test splits.
Before closing the file, let's also tag axes with their label. The populated HDF5 file features one dataset per data source (in our case, image_features, vector_features and targets), whose dimensions we can tag with a name. H5PYDataset is able to recover this information and create an axis_labels dict for us.
Step19: We now have everything we need to load this HDF5 file in Fuel.
We'll instantiate H5PYDataset by passing it the path to our HDF5 file as well as a tuple of splits to use. For now, we'll just load the train and test sets separately, but note that it is also possible to concatenate splits that way (e.g. concatenate the training and validation sets).
Step20: H5PYDataset instances allow the same level of introspection as IndexableDataset instances.
Step21: We can iterate over data the same way as well.
Step22: H5PYDataset for small datasets
The H5PYDataset class isn't only suitable for large datasets. In fact, most of Fuel's built-in datasets rely on HDF5 for storage.
At first sight, this might seem inefficient (data in an HDF5 file is read off disk instead of being stored in memory, which is considerably slower), but H5PYDataset features a load_in_memory constructor argument which, when set to True, reads data off disk once and stores it in memory as a numpy array.
Built-in datasets
Fuel aims to facilitate iterating over and transforming data, which we've covered up to now, but one of its goals is also to make it easy to download, convert and store often-used datasets. This is what will be covered in this section.
Built-in datasets are datasets which can be obtained through Fuel's automated downloading and conversion tools. Here are some built-in datasets available in Fuel
Step23: Downloading raw data files
Fuel comes with a fuel-download script which automates downloading raw data files for built-in datasets. You can have a look at the full list of built-in datasets with fuel-download -h. For now, we'll download the MNIST dataset in our newly-created data directory.
Step24: Converting raw data files
We'll now convert the raw data files to an HDF5 file which the MNIST dataset class can read. This is done using the fuel-convert script.
Step25: Using built-in datasets
Now that the data has been downloaded and converted, we can instantiate and use the built-in dataset class just like any other H5PYDataset instance.
Step26: Default transformers
Datasets can define a convenience transformer pipeline, which can automatically be applied when instantiating a data stream by using the alternative DataStream.default_stream constructor. We call these default transformers. Use cases for default transformers include the following
Step27: Like explained above, MNIST defines a two-transformer pipeline as its default transformers. The first transformer scales the features by 1 / 255 so that they range between 0 and 1, and the second transformer casts the features to floatX.
Let's compare the output of a data stream with and without the default transformers applied.
Step28: Extending Fuel
New dataset classes
New dataset classes are implemented by subclassing Dataset and implementing a get_data method. If your dataset interacts with stateful objects (e.g. files on disk), then you should also override the open and close methods.
If your data fits in memory, you can save yourself some time by inheriting from IndexableDataset. In that case, all you need to do is load the data as a dict mapping source names to their corresponding data and pass it to the superclass as the indexables argument.
For instance, here's how you would implement a specialized class to interface with .npy files.
Step29: Here's this class in action
Step30: New transformers
An important thing to know about data streams is that they distinguish between two types of outputs
Step31: Most transformers you'll implement will call their superclass constructor by passing the data stream and declaring whether they produce examples or batches. Since we wish to support both batches and examples, we'll declare our output type to be the same as our data stream's output type.
If you were to build a transformer that only works on batches, you would pass produces_examples=False and implement only transform_batch. If anyone tried to use your transformer on an example data stream, an error would automatically be raised.
Let's test our doubler on some dummy dataset. Note that this implementation is brittle and only works on numpy arrays.
Step32: If you think the transform_example and transform_batch implementations are repetitive, you're right! In cases where the example and batch implementations of a transformer are the same, you can subclass from AgnosticTransformer instead. It requires that you implement a transform_any method, which will be called by both transform_example and transform_batch.
Step33: Our transformer could be more general
Step34: Let's try this implementation on our dummy dataset.
Step35: Finally, there exists a Mapping transformer which acts as a swiss-army knife transformer. In addition to a data stream, its constructor accepts a function which will be applied to data coming from the stream.
Here's how you would implement the feature doubler using Mapping.
Step36: New iteration schemes
New iteration schemes are implemented by subclassing IterationScheme and implementing a get_request_iterator method, which should return an iterator that returns lists of indices.
Two subclasses of IterationScheme typically serve as a basis for other iteration schemes
Step37: Here are the two iteration scheme classes in action
Step38: Parallelizing data processing
Fuel allows you to parallelize data processing in a separate process. This feature is still under development, but it is already pretty useful.
Implementing a parallelized preprocessing pipeline is done in two steps. At first, you should write a Python script that sets up the data processing pipeline and spawns a server that listens to requests. See the fuel_server notebook for more details on that.
Once the server is up and running, you'll need to instantiate a ServerDataStream instance, which will connect to the server and make requests. | Python Code:
import numpy
seed = 1234
rng = numpy.random.RandomState(seed)
features = rng.randint(256, size=(8, 2, 2))
targets = rng.randint(4, size=(8, 1))
Explanation: Fuel
History
Started as a part of Blocks, a framework for building and managing Theano graphs in the context of neural networks.
Became its own project when we realized it was distinct enough that it could be used by other frameworks too.
Goal
Simplify downloading, storing, iterating over and preprocessing data used to train machine learning models.
Quick start
We'll go over a quick example to see what Fuel is capable of.
Let's start by creating some random data to act as features and targets. We'll pretend that we have eight 2x2 grayscale images separated into four classes.
End of explanation
from fuel.datasets import IterableDataset
dataset = IterableDataset(
iterables={'features': features, 'targets': targets},
axis_labels={'features': ('batch', 'height', 'width'),
'targets': ('batch', 'index')})
Explanation: Our goal is to use Fuel to interface with this data, iterate over it in various ways and apply transformations to it on the fly.
Division of labor
There are four basic tasks that Fuel needs to handle:
Interface with the data, be it on disk or in memory.
Decide which data points to visit, and in which order.
Iterate over the selected data points.
At each iteration step, apply some transformation to the selected data points.
Each of those four tasks is delegated to a particular class of objects, which we'll be introducing in order.
Datasets: interfacing with data
The Dataset class is responsible for interfacing with the data and handling data access requests. Subclasses of Dataset specialize in certain types of data.
IterableDataset
The simplest Dataset subclass is IterableDataset, which interfaces with iterable objects.
It is created by passing a dict mapping source names to their associated data and, optionally, a dict mapping source names to tuples of axis labels.
End of explanation
print('Sources are {}.'.format(dataset.sources))
print('Axis labels are {}.'.format(dataset.axis_labels))
print('Dataset contains {} examples.'.format(dataset.num_examples))
Explanation: We can ask the dataset what sources of data it provides by accessing its sources attribute. We can also know which axes correspond to what by accessing its axis_labels attribute. It also has a num_examples property telling us the number of examples it contains.
End of explanation
state = dataset.open()
print(state.__class__.__name__)
Explanation: Datasets themselves are stateless objects (as opposed to, say, an open file handle, or an iterator object). In order to request data from the dataset, we need to ask it to instantiate some stateful object with which it will interact. This is done through the open method:
End of explanation
print(dataset.get_data(state=state))
Explanation: We see that in IterableDataset's case the state is an iterator object. We can now visit the examples this dataset contains using its get_data method.
End of explanation
while True:
try:
dataset.get_data(state=state)
except StopIteration:
print('Iteration over')
break
state = dataset.reset(state=state)
print(dataset.get_data(state=state))
dataset.close(state=state)
Explanation: (Note that the return order depends on the order of dataset.sources, which is nondeterministic if you use dict instances. In order to have deterministic behaviour, it is recommended that you use OrderedDict instances instead.)
Eventually, the iterator is depleted and it raises a StopIteration exception. We can iterate over the dataset again by requesting a fresh iterator through the dataset's reset method.
End of explanation
from fuel.datasets import IndexableDataset
from collections import OrderedDict
dataset = IndexableDataset(
indexables=OrderedDict([('features', features), ('targets', targets)]),
axis_labels={'features': ('batch', 'height', 'width'), 'targets': ('batch', 'index')})
Explanation: IndexableDataset
The IterableDataset implementation is pretty minimal. For instance, it only lets you iterate sequentially and examplewise over your data.
If your data happens to be indexable (e.g. a list, or a numpy array), then IndexableDataset will let you do much more.
We instantiate IndexableDataset just like IterableDataset.
End of explanation
state = dataset.open()
print('State is {}'.format(state))
print(dataset.get_data(state=state, request=[0, 1]))
dataset.close(state=state)
Explanation: The main advantage of IndexableDataset over IterableDataset is that it allows random access of the data it contains. In order to do so, we need to pass an additional request argument to get_data in the form of a list of indices.
End of explanation
restricted_dataset = IndexableDataset(
indexables=OrderedDict([('features', features), ('targets', targets)]),
axis_labels={'features': ('batch', 'height', 'width'), 'targets': ('batch', 'index')},
sources=('features',))
state = restricted_dataset.open()
print(restricted_dataset.get_data(state=state, request=[0, 1]))
restricted_dataset.close(state=state)
Explanation: (See how IndexableDataset returns a None state: this is because there's no actual state to maintain in this case.)
Restricting sources
In some cases (e.g. unsupervised learning), you might want to use a subset of the provided sources. This is achieved by passing a sources argument to the dataset constructor. Here's an example:
End of explanation
from fuel.schemes import (SequentialScheme, ShuffledScheme,
SequentialExampleScheme, ShuffledExampleScheme)
schemes = [SequentialScheme(examples=8, batch_size=4),
ShuffledScheme(examples=8, batch_size=4),
SequentialExampleScheme(examples=8),
ShuffledExampleScheme(examples=8)]
for scheme in schemes:
print([request for request in scheme.get_request_iterator()])
Explanation: You can see that in this case only the features are returned by get_data.
Iteration schemes: which examples to visit
Encapsulating and accessing our data is good, but if we're to integrate it into a training loop, we need to be able to iterate over the data. For that, we need to decide which indices to request and in which order. This is accomplished via an IterationScheme subclass.
At its most basic level, an iteration scheme is responsible, through its get_request_iterator method, for building an iterator that will return requests. Here are some examples:
End of explanation
state = dataset.open()
scheme = ShuffledScheme(examples=dataset.num_examples, batch_size=4)
for request in scheme.get_request_iterator():
data = dataset.get_data(state=state, request=request)
print(data[0].shape, data[1].shape)
dataset.close(state)
Explanation: We can therefore use an iteration scheme to visit a dataset in some order.
End of explanation
from fuel.streams import DataStream
data_stream = DataStream(dataset=dataset, iteration_scheme=scheme)
for data in data_stream.get_epoch_iterator():
print(data[0].shape, data[1].shape)
Explanation: Data streams: automating the iteration process
Iteration schemes offer a more convenient way to visit the dataset than accessing the data by hand, but we can do better: the act of getting a fresh state from the dataset, getting a request iterator from the iteration scheme, using both to access the data and closing the state is repetitive. To automate this, we have data streams, which are subclasses of AbstractDataStream.
The most common AbstractDataStream subclass is DataStream. It is instantiated with a dataset and an iteration scheme, and returns an epoch iterator through its get_epoch_iterator method, which iterates over the dataset in the order defined by the iteration scheme.
End of explanation
from fuel.transformers import ScaleAndShift
# Note: ScaleAndShift applies (batch * scale) + shift, as
# opposed to (batch + shift) * scale.
scale = 1.0 / features.std()
shift = - scale * features.mean()
standardized_stream = ScaleAndShift(data_stream=data_stream,
scale=scale, shift=shift,
which_sources=('features',))
Explanation: Transformers: apply some transformation on the fly
Some data streams take data streams as input. We call them transformers, and they enable us to build complex data preprocessing pipelines.
Transformers are Transformer subclasses. Most of the transformers you'll encounter are located in the fuel.transformers module. Here are some commonly used ones:
Flatten: flattens the input into a matrix (for batch input) or a vector (for examplewise input).
ScaleAndShift: scales and shifts the input by scalar quantities.
Cast: casts the input into some data type.
As an example, let's standardize the images we have by substracting their mean and dividing by their standard deviation.
End of explanation
for batch in standardized_stream.get_epoch_iterator():
print(batch)
Explanation: The resulting data stream can be used to iterate over the dataset just like before, but this time features will be standardized on-the-fly.
End of explanation
from fuel.transformers import Cast
cast_standardized_stream = Cast(data_stream=standardized_stream,
dtype='float32', which_sources=('features',))
Explanation: Now, let's imagine that for some reason (e.g. running Theano code on GPU) we need features to have a data type of float32.
End of explanation
data_stream = Cast(
ScaleAndShift(
DataStream(
dataset=dataset, iteration_scheme=scheme),
scale=scale, shift=shift, which_sources=('features',)),
dtype='float32', which_sources=('features',))
for batch in data_stream.get_epoch_iterator():
print(batch)
Explanation: As you can see, Fuel makes it easy to chain transformations to form a preprocessing pipeline. The complete pipeline now looks like this:
End of explanation
train_image_features = rng.randint(256, size=(90, 3, 32, 32)).astype('uint8')
train_vector_features = rng.normal(size=(90, 16))
train_targets = rng.randint(10, size=(90, 1)).astype('uint8')
test_image_features = rng.randint(256, size=(10, 3, 32, 32)).astype('uint8')
test_vector_features = rng.normal(size=(10, 16))
test_targets = rng.randint(10, size=(10, 1)).astype('uint8')
Explanation: Going further
You now know enough to find your way around Fuel. Let's cover some more advanced use cases.
Large datasets
Sometimes, the dataset you're working on is too big to fit in memory. In that case, you'll want to use another common Dataset subclass, H5PYDataset.
H5PYDataset
As the name implies, H5PYDataset is a dataset class that interfaces with HDF5 files using the h5py library.
HDF5 is a wonderful storage format, as it is organizable and self-documenting. This allows us to make a basic set of assumptions about the structure of an HDF5 file which, if met, greatly simplify creating new datasets and interacting with them. We won't go through these assumptions right now, but if you're curious, the online documentation offers an in-depth tutorial on how to create new H5PYDataset-compatible files.
Let's create new random data. This time, we'll pretend that we're given a training set and a test set.
End of explanation
import h5py
from fuel.converters.base import fill_hdf5_file
f = h5py.File('dataset.hdf5', mode='w')
data = (('train', 'image_features', train_image_features),
('train', 'vector_features', train_vector_features),
('train', 'targets', train_targets),
('test', 'image_features', test_image_features),
('test', 'vector_features', test_vector_features),
('test', 'targets', test_targets))
fill_hdf5_file(f, data)
Explanation: We now create an HDF5 file and populate it with our data.
End of explanation
for i, label in enumerate(('batch', 'channel', 'height', 'width')):
f['image_features'].dims[i].label = label
for i, label in enumerate(('batch', 'feature')):
f['vector_features'].dims[i].label = label
for i, label in enumerate(('batch', 'index')):
f['targets'].dims[i].label = label
f.flush()
f.close()
Explanation: The fill_hdf5_file function fills the HDF5 file with our data and sets up metadata so H5PYDataset is able to recover our train and test splits.
Before closing the file, let's also tag axes with their label. The populated HDF5 file features one dataset per data source (in our case, image_features, vector_features and targets), whose dimensions we can tag with a name. H5PYDataset is able to recover this information and create an axis_labels dict for us.
End of explanation
from fuel.datasets import H5PYDataset
train_dataset = H5PYDataset('dataset.hdf5', which_sets=('train',))
test_dataset = H5PYDataset('dataset.hdf5', which_sets=('test',))
Explanation: We now have everything we need to load this HDF5 file in Fuel.
We'll instantiate H5PYDataset by passing it the path to our HDF5 file as well as a tuple of splits to use. For now, we'll just load the train and test sets separately, but note that it is also possible to concatenate splits that way (e.g. concatenate the training and validation sets).
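For example, both splits of the file created above could be concatenated into a single dataset (a small sketch, assuming H5PYDataset has been imported as in the code that follows):
combined_dataset = H5PYDataset('dataset.hdf5', which_sets=('train', 'test'))
combined_dataset.num_examples would then report 100 examples, the two splits taken together.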
End of explanation
print('Sources are {}.'.format(train_dataset.sources))
print('Axis labels are {}.'.format(train_dataset.axis_labels))
print('Training set contains {} examples.'.format(train_dataset.num_examples))
print('Test set contains {} examples.'.format(test_dataset.num_examples))
Explanation: H5PYDataset instances allow the same level of introspection as IndexableDataset instances.
End of explanation
train_stream = DataStream(
dataset=train_dataset,
iteration_scheme=ShuffledScheme(
examples=train_dataset.num_examples, batch_size=10))
for batch in train_stream.get_epoch_iterator():
print([source.shape for source in batch])
Explanation: We can iterate over data the same way as well.
End of explanation
!mkdir fuel_data
import os
os.environ['FUEL_DATA_PATH'] = os.path.abspath('./fuel_data')
Explanation: H5PYDataset for small datasets
The H5PYDataset class isn't only suitable for large datasets. In fact, most of Fuel's built-in datasets rely on HDF5 for storage.
At first sight, this might seem inefficient (data in an HDF5 file is read off disk instead of being stored in memory, which is considerably slower), but H5PYDataset features a load_in_memory constructor argument which, when set to True, reads data off disk once and stores it in memory as a numpy array.
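For example, the training split created above could be reloaded entirely into memory with one extra keyword argument (a small sketch):
in_memory_train = H5PYDataset('dataset.hdf5', which_sets=('train',), load_in_memory=True)
It is then used exactly like before, but requests are served from a numpy array instead of hitting the disk on every access.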
Built-in datasets
Fuel aims to facilitate iterating over and transforming data, which we've covered up to now, but one of its goals is also to make it easy to download, convert and store often-used datasets. This is what will be covered in this section.
Built-in datasets are datasets which can be obtained through Fuel's automated downloading and conversion tools. Here are some built-in datasets available in Fuel:
Iris
MNIST
Binarized MNIST
CIFAR10
CIFAR100
SVHN (format 1, format 2)
Caltech 101 silhouettes
Defining where Fuel looks for data
Fuel implements specific Dataset subclasses for each of the built-in datasets. They all expect their corresponding data files to be contained inside one of the directories defined in the Fuel data path.
You can define this data path by setting the data_path variable in ~/.fuelrc:
You can override it by setting the FUEL_DATA_PATH environment variable.
In both cases, Fuel expects a sequence of paths separated by an OS-specific delimiter (: for Linux / Mac OS, ; for Windows).
Let's create a directory in which to put our data files and set it as our Fuel data path.
End of explanation
!fuel-download mnist -d $FUEL_DATA_PATH
Explanation: Downloading raw data files
Fuel comes with a fuel-download script which automates downloading raw data files for built-in datasets. You can have a look at the full list of built-in datasets with fuel-download -h. For now, we'll download the MNIST dataset in our newly-created data directory.
End of explanation
!fuel-convert mnist -d $FUEL_DATA_PATH -o $FUEL_DATA_PATH
Explanation: Converting raw data files
We'll now convert the raw data files to an HDF5 file which the MNIST dataset class can read. This is done using the fuel-convert script.
End of explanation
from fuel.datasets import MNIST
from matplotlib import pyplot, cm
dataset = MNIST(('train',), sources=('features',))
state = dataset.open()
image, = dataset.get_data(state=state, request=[1234])
pyplot.imshow(image.reshape((28, 28)), cmap=cm.Greys_r, interpolation='nearest')
pyplot.show()
dataset.close(state)
Explanation: Using built-in datasets
Now that the data has been downloaded and converted, we can instantiate and use the built-in dataset class just like any other H5PYDataset instance.
End of explanation
print(MNIST.default_transformers)
Explanation: Default transformers
Datasets can define a convenience transformer pipeline, which can automatically be applied when instantiating a data stream by using the alternative DataStream.default_stream constructor. We call these default transformers. Use cases for default transformers include the following:
To save disk space, some datasets may store their data in a format that's different from the format that's typically used for machine learning applications. This is the case for the MNIST, CIFAR10 and CIFAR100 built-in datasets: the raw data being 8-bit images, the datasets are stored using uint8 bytes, which is space-efficient. However, this means that pixel values range from 0 to 255, as opposed to the [0.0, 1.0] range machine learning practitioners are used to. In order to reduce the amount of boilerplate code users have to write to use these datasets, their default transformers divide features by 255 and cast them as floatX.
Some datasets, such as SVHN or ImageNet, are composed of variable-length features, but some preprocessing (e.g. scale the short side of the image to 256 pixels and take a random square crop) is usually applied to obtain fixed-sized features. Although there is no unique way to preprocess these features, such datasets may define default transformers corresponding to an often-used method, or to a method used in a landmark paper (e.g. AlexNet).
Default transformers are defined through the default_transformers class attribute. It is expected to be a tuple with one element per transformer in the pipeline. Each element is a tuple with three elements:
the Transformer subclass to apply,
a list of arguments to pass to the subclass constructor, and
a dict of keyword arguments to pass to the subclass constructor.
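Purely as an illustration (this is not Fuel's actual definition, and the exact split between positional and keyword arguments here is an assumption), a dataset class could declare a similar two-step pipeline like so:
class MyImageDataset(H5PYDataset):
    default_transformers = (
        (ScaleAndShift, [1 / 255.0, 0], {'which_sources': ('features',)}),
        (Cast, ['float32'], {'which_sources': ('features',)}),
    )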
Let's look at what MNIST defines as a default transformer:
End of explanation
vanilla_stream = DataStream(
dataset=dataset,
iteration_scheme=SequentialExampleScheme(dataset.num_examples))
print(next(vanilla_stream.get_epoch_iterator())[0].max())
default_stream = DataStream.default_stream(
dataset=dataset,
iteration_scheme=SequentialExampleScheme(dataset.num_examples))
print(next(default_stream.get_epoch_iterator())[0].max())
Explanation: Like explained above, MNIST defines a two-transformer pipeline as its default transformers. The first transformer scales the features by 1 / 255 so that they range between 0 and 1, and the second transformer casts the features to floatX.
Let's compare the output of a data stream with and without the default transformers applied.
End of explanation
from six import iteritems
class NPYDataset(IndexableDataset):
def __init__(self, source_paths, **kwargs):
indexables = dict(
[(source, numpy.load(path)) for
source, path in iteritems(source_paths)])
super(NPYDataset, self).__init__(indexables, **kwargs)
Explanation: Extending Fuel
New dataset classes
New dataset classes are implemented by subclassing Dataset and implementing a get_data method. If your dataset interacts with stateful objects (e.g. files on disk), then you should also override the open and close methods.
If your data fits in memory, you can save yourself some time by inheriting from IndexableDataset. In that case, all you need to do is load the data as a dict mapping source names to their corresponding data and pass it to the superclass as the indexables argument.
For instance, here's how you would implement a specialized class to interface with .npy files.
End of explanation
numpy.save('fuel_data/npy_dataset_features.npy',
numpy.arange(40).reshape((10, 4)))
numpy.save('fuel_data/npy_dataset_targets.npy',
numpy.arange(10).reshape((10, 1)))
dataset = NPYDataset({'features': 'fuel_data/npy_dataset_features.npy',
'targets': 'fuel_data/npy_dataset_targets.npy'})
state = dataset.open()
print(dataset.get_data(state=state, request=[0, 1, 2, 3]))
dataset.close(state)
Explanation: Here's this class in action:
End of explanation
from fuel.transformers import Transformer
class FeaturesDoubler(Transformer):
def __init__(self, data_stream, **kwargs):
super(FeaturesDoubler, self).__init__(
data_stream=data_stream,
produces_examples=data_stream.produces_examples,
**kwargs)
def transform_example(self, example):
if 'features' in self.sources:
example = list(example)
index = self.sources.index('features')
example[index] *= 2
example = tuple(example)
return example
def transform_batch(self, batch):
if 'features' in self.sources:
batch = list(batch)
index = self.sources.index('features')
batch[index] *= 2
batch = tuple(batch)
return batch
Explanation: New transformers
An important thing to know about data streams is that they distinguish between two types of outputs: single examples, and batches of examples. Depending on your choice of iteration scheme, a data stream's produces_examples property will either be True (it produces examples) or False (it produces batches).
Transformers are aware of this, and as such implement two distinct methods: transform_example and transform_batch. A new transformer is typically implemented by subclassing Transformer and implementing one or both of these methods.
As an example, here's how you would double the value of the 'features' data source.
End of explanation
dataset = IndexableDataset(
indexables={'features': numpy.array([1, 2, 3, 4]),
'targets': numpy.array([-1, 1, -1, 1])})
example_scheme = SequentialExampleScheme(examples=dataset.num_examples)
example_stream = FeaturesDoubler(
data_stream=DataStream(
dataset=dataset, iteration_scheme=example_scheme))
batch_scheme = SequentialScheme(
examples=dataset.num_examples, batch_size=2)
batch_stream = FeaturesDoubler(
data_stream=DataStream(
dataset=dataset, iteration_scheme=batch_scheme))
print([example for example in example_stream.get_epoch_iterator()])
print([batch for batch in batch_stream.get_epoch_iterator()])
Explanation: Most transformers you'll implement will call their superclass constructor by passing the data stream and declaring whether they produce examples or batches. Since we wish to support both batches and examples, we'll declare our output type to be the same as our data stream's output type.
If you were to build a transformer that only works on batches, you would pass produces_examples=False and implement only transform_batch. If anyone tried to use your transformer on an example data stream, an error would automatically be raised.
Let's test our doubler on some dummy dataset. Note that this implementation is brittle and only works on numpy arrays.
End of explanation
from fuel.transformers import AgnosticTransformer
class FeaturesDoubler(AgnosticTransformer):
def __init__(self, data_stream, **kwargs):
super(FeaturesDoubler, self).__init__(
data_stream=data_stream,
produces_examples=data_stream.produces_examples,
**kwargs)
def transform_any(self, data):
if 'features' in self.sources:
data = list(data)
index = self.sources.index('features')
data[index] *= 2
data = tuple(data)
return data
Explanation: If you think the transform_example and transform_batch implementations are repetitive, you're right! In cases where the example and batch implementations of a transformer are the same, you can subclass from AgnosticTransformer instead. It requires that you implement a transform_any method, which will be called by both transform_example and transform_batch.
End of explanation
from fuel.transformers import AgnosticSourcewiseTransformer
class Doubler(AgnosticSourcewiseTransformer):
def __init__(self, data_stream, **kwargs):
super(Doubler, self).__init__(
data_stream=data_stream,
produces_examples=data_stream.produces_examples,
**kwargs)
def transform_any_source(self, source, _):
return 2 * source
Explanation: Our transformer could be more general: what if we want to double 'features' and 'targets', or only 'targets'?
Transformers which are applied sourcewise like our doubler should usually subclass from SourcewiseTransformer. Their constructor takes an additional which_sources keyword argument specifying which sources to apply the transformer to. It's expected to be a tuple of source names. If which_sources is None, then the transformer is applied to all sources. Subclasses of SourcewiseTransformer should implement a transform_source_example method and/or a transform_source_batch method, which apply on an individual source.
There also exists an AgnosticSourcewiseTransformer class for cases where the example and batch implementations of a sourcewise transformer are the same. This class requires a transform_any_source method to be implemented.
End of explanation
target_stream = Doubler(
data_stream=DataStream(
dataset=dataset,
iteration_scheme=batch_scheme),
which_sources=('targets',))
all_stream = Doubler(
data_stream=DataStream(
dataset=dataset,
iteration_scheme=batch_scheme),
which_sources=None)
print([batch for batch in target_stream.get_epoch_iterator()])
print([batch for batch in all_stream.get_epoch_iterator()])
Explanation: Let's try this implementation on our dummy dataset.
End of explanation
from fuel.transformers import Mapping
features_index = dataset.sources.index('features')
def double(data):
data = list(data)
data[features_index] *= 2
return tuple(data)
mapping_stream = Mapping(
data_stream=DataStream(
dataset=dataset, iteration_scheme=batch_scheme),
mapping=double)
print([batch for batch in mapping_stream.get_epoch_iterator()])
Explanation: Finally, there exists a Mapping transformer which acts as a swiss-army knife transformer. In addition to a data stream, its constructor accepts a function which will be applied to data coming from the stream.
Here's how you would implement the feature doubler using Mapping.
End of explanation
from fuel.schemes import IndexScheme, BatchScheme
# `iter_` : A picklable version of `iter`
from picklable_itertools import iter_, imap
# Partition all elements of a sequence into tuples of length at most n
from picklable_itertools.extras import partition_all
class ExampleEvenScheme(IndexScheme):
def get_request_iterator(self):
indices = list(self.indices)[::2]
return iter_(indices)
class BatchEvenScheme(BatchScheme):
def get_request_iterator(self):
indices = list(self.indices)[::2]
return imap(list, partition_all(self.batch_size, indices))
Explanation: New iteration schemes
New iteration schemes are implemented by subclassing IterationScheme and implementing a get_request_iterator method, which should return an iterator that returns lists of indices.
Two subclasses of IterationScheme typically serve as a basis for other iteration schemes: IndexScheme (for schemes requesting examples) and BatchScheme (for schemes requesting batches). Both subclasses are instantiated by providing a list of indices or a number of examples, and BatchScheme accepts an additional batch_size argument.
Here's how you would implement an iteration scheme that iterates over even examples:
End of explanation
print(list(ExampleEvenScheme(10).get_request_iterator()))
print(list(BatchEvenScheme(10, 2).get_request_iterator()))
Explanation: Here are the two iteration scheme classes in action:
End of explanation
import argparse
import time
from fuel.streams import DataStream, ServerDataStream
from fuel.transformers import Transformer
class Bottleneck(Transformer):
def __init__(self, data_stream, **kwargs):
self.slowdown = kwargs.pop('slowdown', 0)
super(Bottleneck, self).__init__(
data_stream, data_stream.produces_examples, **kwargs)
def get_data(self, request=None):
if request is not None:
raise ValueError
time.sleep(self.slowdown)
return next(self.child_epoch_iterator)
dataset = IndexableDataset({'features': [[0] * 128] * 1000})
iteration_scheme = ShuffledScheme(examples=1000, batch_size=100)
regular_data_stream = Bottleneck(
data_stream=DataStream(
dataset=dataset, iteration_scheme=iteration_scheme),
slowdown=0.005)
def time_iteration(parallel):
if parallel:
data_stream = ServerDataStream(('features',), produces_examples=False)
else:
data_stream = regular_data_stream
start_time = time.time()
for i in range(10):
for data in data_stream.get_epoch_iterator(): time.sleep(0.01)
stop_time = time.time()
print('Training took {} seconds'.format(stop_time - start_time))
time_iteration(False)
time_iteration(True)
Explanation: Parallelizing data processing
Fuel allows you to parallelize data processing in a separate process. This feature is still under development, but it is already pretty useful.
Implementing a parallelized preprocessing pipeline is done in two steps. At first, you should write a Python script that sets up the data processing pipeline and spawns a server that listens to requests. See the fuel_server notebook for more details on that.
Once the server is up and running, you'll need to instantiate a ServerDataStream instance, which will connect to the server and make requests.
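A minimal sketch of what such a server script might contain, assuming the fuel.server.start_server helper (hypothetical file name bottleneck_server.py; the script would first rebuild the same dataset/Bottleneck pipeline shown above as regular_data_stream and then hand it to the server):
# bottleneck_server.py -- run in its own process before requesting data
from fuel.server import start_server
start_server(regular_data_stream)  # assumption: blocks and serves batches to connecting clients
Once that script is running, the ServerDataStream created in the accompanying code connects to it and simply receives already-processed batches.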
End of explanation |
920 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Programming massively parallel processors
Introduction
Trends in CPU and GPU development
In the 20th century, graphics processors were used to transform graphics, mostly two-dimensional. The needs of the industry built around modern games with three-dimensional graphics led to the intensive development and, equally importantly, the wide adoption of specialized processors called GPUs (Graphics Processing Units), capable of rendering ever more complex scenes. In 2001, with the advent of the Pixel Shader and Vertex Shader technologies, graphics processors began to be used for computations unrelated to graphics, referred to as GPGPU - General-Purpose computing on Graphics Processing Units.
In 2004, the company Ageia presented a groundbreaking product known as PhysX, a hardware accelerator for computations related to physics simulations in computer games. Why physics in games? The development of 3D graphics quickly led to a situation in which a computer could render a requested scene quite reasonably. However, to do that it must know what to render. In part, information about the positions of objects in games is recorded from scenes acted out by actors or entered manually by an animator. However, this has many drawbacks
Step1: Besides the modules from the pyCUDA package, we also import numpy. This is extremely important, because numpy arrays are the basic data structure that we will be sending to the GPU device and back.
Let's create a data vector, for example using linspace
Step2: The goal of our program will be to multiply all the elements of this array by a given number.
The array a resides in the host system's memory. To operate on it with the GPU, we must allocate memory on the GPU device
Step3: and then copy the data to the GPU
Step5: We now have an initialized array on the GPU, which we can refer to from the host side through a_gpu. A typical practice in GPU programming is to keep two copies of the data - one in memory accessible to the CPU and the other in the memory of the GPU device. It is also worth noting that our array a_gpu resides in the GPU's global memory.
Now we must write a kernel that we will invoke on the device on the data residing in the GPU's global memory. The kernel is written in a modified version of the C language, called CUDA-C.
The kernel that will multiply all the elements of the array by 2 has the following form
Step6: The variable mod is now an object that provides, among others, the function mod.get_function("name"), which returns a Python function whose call launches the corresponding kernel from the source previously passed to SourceModule. Let's check
Step7: Let's now launch the kernel!
Step8: In the array a_gpu we should now have the doubled values.
Step9: Note that in this program we could just as well perform the following call
Step10: gpuarray
The gpuarray module allows operations on GPU vectors to be written very concisely. The key command is gpuarray.to_gpu(...), which copies a numpy vector to the GPU device, returning a handle to the data on the GPU. Some operations can be performed automatically on the GPU simply by writing an arithmetic expression.
Step12: Note: printf works only from the console!
Step14: Grids & Blocks
In CUDA, threads can be launched in blocks. Remember the rule | Python Code:
import numpy as np
import pycuda.driver as cuda
import pycuda.autoinit
from pycuda.compiler import SourceModule
Explanation: Programming massively parallel processors
Introduction
Trends in CPU and GPU development
In the 20th century, graphics processors were used to transform graphics, mostly two-dimensional. The needs of the industry built around modern games with three-dimensional graphics led to the intensive development and, equally importantly, the wide adoption of specialized processors called GPUs (Graphics Processing Units), capable of rendering ever more complex scenes. In 2001, with the advent of the Pixel Shader and Vertex Shader technologies, graphics processors began to be used for computations unrelated to graphics, referred to as GPGPU - General-Purpose computing on Graphics Processing Units.
In 2004, the company Ageia presented a groundbreaking product known as PhysX, a hardware accelerator for computations related to physics simulations in computer games. Why physics in games? The development of 3D graphics quickly led to a situation in which a computer could render a requested scene quite reasonably. However, to do that it must know what to render. In part, information about the positions of objects in games is recorded from scenes acted out by actors or entered manually by an animator. However, this has many drawbacks: it is costly, it takes up a lot of memory, and the repetitiveness of the motions hurts how the action is perceived. The optimal solution was therefore to generate scenes with algorithms. And here our brain turned out to be the obstacle, being a neural network that knows the basic laws of physics very well! We can effortlessly guess how an elephant would move if it were made of styrofoam, we know the laws of reflection, we know how a liquid flows, and so on. It is not easy to fool us - so game programmers decided not to "generate" but to (almost honestly) simulate the world in games. However, computing the trajectories of, say, a million leaves falling from a tree requires enormous computing power. On the other hand, a game requires these computations to be finished within a limited time. It was exactly this need, together with the seventy-billion-dollar games business, that led to the concept of a hardware physics accelerator - the so-called PPU (Physics Processing Unit).
What happened, then, that Ageia is no longer on the market? It turned out that the manufacturers of graphics cards with GPUs noticed that this was their line of business. Moreover, the GPU architecture turned out to be similar to the PPU architecture in many respects: a GPU performs parallel computations on "pixels", while a PPU performs certain computations on many particles. In 2006 Nvidia took several steps - first, it released a programming standard for GPUs called CUDA and encouraged the scientific community to develop software for it. Second, it bought Ageia and implemented the PhysX library in CUDA, so that it could be accelerated not only by PPU processors but by every new Nvidia card. Practically since the GeForce 8 series generation, all Nvidia GPUs are CUDA-compatible and can execute programs written for parallel execution. The differences between processors come down mainly to the number of cores, called "CUDA cores", which ranges from 16 to 2500!
The computing power of GPUs and, just as importantly, their memory bandwidth are much larger than the parameters available on CPUs. Of course, this comes at the cost of limiting what GPU processors can do. To get a sense of the numbers, it is worth looking at the charts published by Nvidia:
GPU architecture
As we have already learned, modern graphics processors are computing devices capable of performing trillions of floating-point operations per second.
As of today, the available GPU devices supporting the CUDA standard can be divided into three generations with the code names Tesla, Fermi and Kepler. Each of them offers increasingly advanced features and better performance.
The GPU is organized around the concept of streaming multiprocessors (MPs).
Such multiprocessors consist of several to a dozen or so scalar processors (SPs), each of which is capable of executing a thread in SIMT (Single Instruction, Multiple Threads) fashion. Each MP also has a limited amount of specialized on-chip memory: a set of 32-bit registers, a shared memory block and L1 cache, a constant cache and a texture cache. The registers are logically local to the scalar processor, but the other types of memory are shared by all the SPs in each MP. This enables, among other things, data exchange between threads running on the same MP.
CUDA - an abstraction layer for accessing the GPU
Graphics processors may differ in the number and organization of compute cores (CUDA cores). A parallel implementation of an algorithm should be independent of the specific hardware on which it runs. In general this appears to be an unsolvable problem, but the CUDA technology has made a huge step towards hardware independence.
First, when programming a GPU we create many more threads than there are physically available compute cores. The execution scheduling system takes care of partially serializing the calls, the more so the weaker the hardware we have.
Second, access to the hardware is realized through an abstraction layer - CUDA itself - which encodes the memory hierarchy. The programmer does not care how many cores execute the program, but knows that certain groups of threads have access to a common fast memory.
Programs written for CUDA differ dramatically from multithreaded parallel counterparts written even for CPUs with a dozen or so cores. On the one hand this is a limitation, as all algorithms have to be rewritten. On the other hand, a positive effect of programming such an architecture is the enormous compatibility and portability of the code to current and, most likely, future devices.
Memory hierarchy
Perhaps the most essential feature of the CUDA architecture is the memory hierarchy, with access times differing by 1-2 orders of magnitude between successive levels. The slowest, from the GPU's point of view, is the RAM of the host computer. This memory is separated from the graphics processor by the PCIe bus, with a theoretical maximum one-way throughput of 16 GB/s (PCI Express 3.0, x16).
Next in line is the global memory of the GPU device, which is currently limited to a few gigabytes (the newest cards already have 12 GB) and has a bandwidth of about 100-200 GB/s. This is a very large value; however, accessing global memory is a high-latency operation, taking several hundred GPU clock cycles.
The fastest kind of memory currently available on the GPU is the shared memory located directly on the multiprocessor. It is currently limited to 48 kB, but has a bandwidth of about 1.3 TB/s. More interestingly, the latency of accessing this memory is very low - similar to accessing the registers of the SP units.
The above description readily suggests a strategy for writing efficient CUDA programs, which can be summarized as: move as much data as possible to the fastest kind of available memory and keep it there as long as possible, while minimizing accesses to the slower kinds of memory. Additionally, if access to slower memory is necessary, one can try to use the "idle" time to perform arithmetic operations on the remaining threads.
Threads
In CUDA or OpenCL programming we use threads. Unlike common practice in writing HPC code for ordinary processors, one should launch many more threads than there are physically available processor cores. In CPU programming it is a well-known fact that an excessive number of threads usually slows a program down, because "context switching" can lead to inefficient use of the CPU cache and, moreover, inevitably reduces the available RAM. In the CUDA standard, threads are lightweight and switching them takes no more than single processor cycles. What is more, surplus threads can even have a positive effect on performance, because they can hide the latency of accessing the main memory.
From the user's point of view, CUDA programs are organized into kernels. A kernel is a function that is executed many times, simultaneously on different multiprocessors. Each instance of a kernel is called a thread and is assigned to a single scalar processor (SP). Threads are grouped into one-, two- or three-dimensional blocks assigned to multiprocessors one-to-one - that is, we have a guarantee that within one block we stay on the same MP.
Programming language and structure of the course
Kernels are written in a restricted dialect of C/C++ called CUDA-C. The reference document, written in a relatively accessible way, is the CUDA C Programming Guide, available at: http://docs.nvidia.com/cuda/cuda-c-programming-guide/
In this course we will not analyze all the elements of CUDA-C systematically. We will proceed differently: we will show, through examples, typical applications of GPU programming that are extremely useful in physics. To solve your own problem, we suggest starting by finding a suitable example and trying to adapt its code yourself. This is a "crash course" approach that will not replace a systematic course. However, in the great majority of cases, the experience gained while playing with the included examples will make it possible to solve the problem at hand efficiently.
Way of working
Interactivity - we use the pyCUDA interface
In these materials we present an approach that makes the path to effective work with CUDA as simple as possible. For this purpose the pyCUDA package is used, which enables interactive work with a CUDA device without compromising performance. In pyCUDA, the workflow comes down to writing a computational kernel in the C language with CUDA extensions. Launching the kernel, compiling it and inspecting the data, including transfers to and from the device, are done in the convenient Python language.
The first pyCUDA program
We will now present a first program written with CUDA, using pyCUDA. The example will be very simple: multiplying a vector by a number. On this simple example we will learn a way of working that can later serve as a template for more advanced programs.
Initialization
The simplest way to initialize the GPU device so that it can be used further looks as follows:
End of explanation
a = np.linspace(1,16,16).astype(np.float32)
a
Explanation: Besides the modules from the pyCUDA package, we also import numpy. This is extremely important, because numpy arrays are the basic data structure that we will be sending to the GPU device and back.
Let's create a data vector, for example using linspace:
End of explanation
a_gpu = cuda.mem_alloc(a.nbytes)
Explanation: The goal of our program will be to multiply all the elements of this array by a given number.
The array a resides in the host system's memory. To operate on it with the GPU, we must allocate memory on the GPU device:
End of explanation
cuda.memcpy_htod(a_gpu, a)
Explanation: and then copy the data to the GPU:
End of explanation
mod = SourceModule("""
__global__ void dubluj(float *a)
{
    int idx = threadIdx.x;
    a[idx] *= 2;
}
""")
Explanation: We now have an initialized array on the GPU, which we can refer to from the host side through a_gpu. A typical practice in GPU programming is to keep two copies of the data - one in memory accessible to the CPU and the other in the memory of the GPU device. It is also worth noting that our array a_gpu resides in the GPU's global memory.
Now we must write a kernel that we will invoke on the device on the data residing in the GPU's global memory. The kernel is written in a modified version of the C language, called CUDA-C.
The kernel that will multiply all the elements of the array by 2 has the following form:
__global__ void dubluj(float *a)
{
int idx = threadIdx.x;
a[idx] *= 2;
}
The kernel looks like an ordinary C function.
The __global__ declaration is a CUDA C extension.
The kernel will be launched in as many copies as there are elements of the vector a_gpu, and each copy will multiply one element of the vector.
threadIdx is a structure holding the index, within its block, of the thread that executes the given copy. Besides it, blockIdx is also important, but in this case we launch only one block, so we do not need it. The thread indices - the index within a block and the block number - are the only way to tell threads apart! The variable idx is a linear index that runs from 0 to N-1.
In pyCUDA we pass the module source as a string to the SourceModule function, as follows:
End of explanation
func = mod.get_function("dubluj")
Explanation: The variable mod is now an object that provides, among others, the function mod.get_function("name"), which returns a Python function whose call launches the corresponding kernel from the source previously passed to SourceModule. Let's check:
End of explanation
func(a_gpu, block=(16,1,1))
Explanation: Let's now launch the kernel!
End of explanation
print(a)
cuda.memcpy_dtoh(a, a_gpu)
print(a)
Explanation: In the array a_gpu we should now have the doubled values.
End of explanation
func(cuda.InOut(a), block=(4, 4, 1))
print(a)
Explanation: Note that in this program we could just as well perform the following call:
func(a_gpu, block=(4,4,1))
but then we would have to modify the computation of the index idx in the kernel source accordingly, to:
int idx = threadIdx.x + threadIdx.y*4;
Exercise:
Check it!
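One possible way to check this (a sketch of ours, not part of the original notebook; it assumes the 16-element a_gpu defined above and names the kernel dubluj2d):
mod2d = SourceModule("""
__global__ void dubluj2d(float *a)
{
    int idx = threadIdx.x + threadIdx.y*4;
    a[idx] *= 2;
}
""")
mod2d.get_function("dubluj2d")(a_gpu, block=(4, 4, 1))
Copying the result back with cuda.memcpy_dtoh should then show every element multiplied by 2.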
Conveniences in pyCUDA
The above way of building a CUDA program is very similar to what we would do using only a compiler and the C/C++ language instead of Python with pyCUDA.
InOut
It is worth getting to know a few conveniences built into pyCUDA. The first of them is the automation of allocation and data copying. If we want to perform, in sequence: allocation of a vector on the GPU, transfer of data from a numpy vector, a kernel call, and overwriting the output numpy vector with the result from the GPU, we can use the InOut helper:
End of explanation
import pycuda.gpuarray as gpuarray
a = np.linspace(1,16,16).astype(np.float32)
a_gpu = gpuarray.to_gpu(a.astype(np.float32))
a_doubled = (a_gpu*2).get()
print(a_gpu)
print(a_doubled)
Explanation: gpuarray
The gpuarray module allows operations on GPU vectors to be written very concisely. The key command is gpuarray.to_gpu(...), which copies a numpy vector to the GPU device, returning a handle to the data on the GPU. Some operations can be performed automatically on the GPU simply by writing an arithmetic expression.
End of explanation
%%writefile cuda_printf.py
import pycuda.driver as cuda
import pycuda.autoinit
from pycuda.compiler import SourceModule
mod = SourceModule("""
#include <stdio.h>
__global__ void printf_test()
{
    printf("GONZO %d.%d.%d\\n", threadIdx.x, threadIdx.y, threadIdx.z);
}
""")
func = mod.get_function("printf_test")
func(block=(2,2,1))
Explanation: Note: printf works only from the console!
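In practice, the %%writefile cell above saves the program to cuda_printf.py, which you can then run from a terminal, e.g. python cuda_printf.py (assuming a CUDA-capable GPU is available there); when the kernel is launched from inside the notebook, its printf output usually does not appear in the cell output.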
End of explanation
import pycuda.driver as cuda
import pycuda.autoinit
from pycuda.compiler import SourceModule
mod = SourceModule("""
__global__ void kernel(float *a)
{
    int idx = threadIdx.x + blockDim.x*blockIdx.x;
    if(threadIdx.x==0)
        a[idx] += 1.0f;
    a[idx] += 10.0f * idx;
}
""")
a = np.zeros(10).astype(np.float32)
func = mod.get_function("kernel")
print(np.linspace(0,9,10))
print("----------------")
print(a)
func(cuda.InOut(a), block=(5,1,1), grid=(2,1,1))
print(a)
Explanation: Grids & Blocks
In CUDA, threads can be launched in blocks. Remember the rule:
all threads in one block will always run on the same multiprocessor
Consequently, they will have the same "shared memory" at their disposal. The maximum number of threads in one block is usually limited to 512 or 1024, depending on the type of GPU device. Note that this number is considerably larger than the number of scalar processors on a single multiprocessor.
If we want to launch, say, a million threads, we have to use a so-called grid of blocks. That is, when launching a compute kernel we specify how many threads we need and what block size we want. Wanting, for example, $128\times 64$ threads, it is reasonable to launch the kernel with a grid size of $128$ and a block size of $64$.
Moreover, since we often operate on grids representing two- and three-dimensional fields, CUDA has a built-in mechanism that makes it easier to work with two- and three-dimensional grids.
The example below shows how to build a grid consisting of two blocks of five threads each.
Exercise.
Replace if(threadIdx.x==0) in the code with:
if(blockIdx.x==1)
if(idx==1)
and check how the behaviour changes.
End of explanation |
921 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LeNet Lab Solution
Source
Step1: The MNIST data that TensorFlow pre-loads comes as 28x28x1 images.
However, the LeNet architecture only accepts 32x32xC images, where C is the number of color channels.
In order to reformat the MNIST data into a shape that LeNet will accept, we pad the data with two rows of zeros on the top and bottom, and two columns of zeros on the left and right (28+2+2 = 32).
You do not need to modify this section.
Step2: Visualize Data
View a sample from the dataset.
You do not need to modify this section.
Step3: Preprocess Data
Shuffle the training data.
You do not need to modify this section.
Step4: Setup TensorFlow
The EPOCH and BATCH_SIZE values affect the training speed and model accuracy.
You do not need to modify this section.
Step5: SOLUTION
Step6: Features and Labels
Train LeNet to classify MNIST data.
x is a placeholder for a batch of input images.
y is a placeholder for a batch of output labels.
You do not need to modify this section.
Step7: Training Pipeline
Create a training pipeline that uses the model to classify MNIST data.
You do not need to modify this section.
Step8: Model Evaluation
Evaluate the loss and accuracy of the model for a given dataset.
You do not need to modify this section.
Step9: Train the Model
Run the training data through the training pipeline to train the model.
Before each epoch, shuffle the training set.
After each epoch, measure the loss and accuracy of the validation set.
Save the model after training.
You do not need to modify this section.
Step10: Evaluate the Model
Once you are completely satisfied with your model, evaluate the performance of the model on the test set.
Be sure to only do this once!
If you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data.
You do not need to modify this section. | Python Code:
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", reshape=False)
X_train, y_train = mnist.train.images, mnist.train.labels
X_validation, y_validation = mnist.validation.images, mnist.validation.labels
X_test, y_test = mnist.test.images, mnist.test.labels
assert(len(X_train) == len(y_train))
assert(len(X_validation) == len(y_validation))
assert(len(X_test) == len(y_test))
print()
print("Image Shape: {}".format(X_train[0].shape))
print()
print("Training Set: {} samples".format(len(X_train)))
print("Validation Set: {} samples".format(len(X_validation)))
print("Test Set: {} samples".format(len(X_test)))
Explanation: LeNet Lab Solution
Source: Yann LeCun
Load Data
Load the MNIST data, which comes pre-loaded with TensorFlow.
You do not need to modify this section.
End of explanation
import numpy as np
# Pad images with 0s
X_train = np.pad(X_train, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_validation = np.pad(X_validation, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_test = np.pad(X_test, ((0,0),(2,2),(2,2),(0,0)), 'constant')
print("Updated Image Shape: {}".format(X_train[0].shape))
Explanation: The MNIST data that TensorFlow pre-loads comes as 28x28x1 images.
However, the LeNet architecture only accepts 32x32xC images, where C is the number of color channels.
In order to reformat the MNIST data into a shape that LeNet will accept, we pad the data with two rows of zeros on the top and bottom, and two columns of zeros on the left and right (28+2+2 = 32).
You do not need to modify this section.
End of explanation
import random
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
index = random.randint(0, len(X_train))
image = X_train[index].squeeze()
plt.figure(figsize=(1,1))
plt.imshow(image, cmap="gray")
print(y_train[index])
Explanation: Visualize Data
View a sample from the dataset.
You do not need to modify this section.
End of explanation
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
Explanation: Preprocess Data
Shuffle the training data.
You do not need to modify this section.
End of explanation
import tensorflow as tf
EPOCHS = 10
BATCH_SIZE = 128
Explanation: Setup TensorFlow
The EPOCH and BATCH_SIZE values affect the training speed and model accuracy.
You do not need to modify this section.
End of explanation
from tensorflow.contrib.layers import flatten
def LeNet(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
# SOLUTION: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean = mu, stddev = sigma))
conv1_b = tf.Variable(tf.zeros(6))
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
# SOLUTION: Activation.
conv1 = tf.nn.relu(conv1)
# SOLUTION: Pooling. Input = 28x28x6. Output = 14x14x6.
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Layer 2: Convolutional. Output = 10x10x16.
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(16))
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
# SOLUTION: Activation.
conv2 = tf.nn.relu(conv2)
# SOLUTION: Pooling. Input = 10x10x16. Output = 5x5x16.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Flatten. Input = 5x5x16. Output = 400.
fc0 = flatten(conv2)
# SOLUTION: Layer 3: Fully Connected. Input = 400. Output = 120.
fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(120))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
# SOLUTION: Activation.
fc1 = tf.nn.relu(fc1)
# SOLUTION: Layer 4: Fully Connected. Input = 120. Output = 84.
fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(84))
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
# SOLUTION: Activation.
fc2 = tf.nn.relu(fc2)
# SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 10.
fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 10), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(10))
logits = tf.matmul(fc2, fc3_W) + fc3_b
return logits
Explanation: SOLUTION: Implement LeNet-5
Implement the LeNet-5 neural network architecture.
This is the only cell you need to edit.
Input
The LeNet architecture accepts a 32x32xC image as input, where C is the number of color channels. Since MNIST images are grayscale, C is 1 in this case.
Architecture
Layer 1: Convolutional. The output shape should be 28x28x6.
Activation. Your choice of activation function.
Pooling. The output shape should be 14x14x6.
Layer 2: Convolutional. The output shape should be 10x10x16.
Activation. Your choice of activation function.
Pooling. The output shape should be 5x5x16.
Flatten. Flatten the output shape of the final pooling layer such that it's 1D instead of 3D. The easiest way to do this is by using tf.contrib.layers.flatten, which is already imported for you.
Layer 3: Fully Connected. This should have 120 outputs.
Activation. Your choice of activation function.
Layer 4: Fully Connected. This should have 84 outputs.
Activation. Your choice of activation function.
Layer 5: Fully Connected (Logits). This should have 10 outputs.
Output
Return the logits from the final fully connected layer (Layer 5).
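As a quick sanity check of the shapes above (my own arithmetic, using output = (input - filter)/stride + 1 for 'VALID' convolutions and 2x2 max pooling):
\begin{align}
(32-5)/1+1 &= 28, \qquad 28/2 = 14 \\
(14-5)/1+1 &= 10, \qquad 10/2 = 5, \qquad 5 \cdot 5 \cdot 16 = 400
\end{align}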
End of explanation
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 10)
Explanation: Features and Labels
Train LeNet to classify MNIST data.
x is a placeholder for a batch of input images.
y is a placeholder for a batch of output labels.
You do not need to modify this section.
End of explanation
rate = 0.001
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
Explanation: Training Pipeline
Create a training pipeline that uses the model to classify MNIST data.
You do not need to modify this section.
End of explanation
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
Explanation: Model Evaluation
Evaluate the loss and accuracy of the model for a given dataset.
You do not need to modify this section.
End of explanation
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
validation_accuracy = evaluate(X_validation, y_validation)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './lenet')
print("Model saved")
Explanation: Train the Model
Run the training data through the training pipeline to train the model.
Before each epoch, shuffle the training set.
After each epoch, measure the loss and accuracy of the validation set.
Save the model after training.
You do not need to modify this section.
End of explanation
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
Explanation: Evaluate the Model
Once you are completely satisfied with your model, evaluate the performance of the model on the test set.
Be sure to only do this once!
If you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data.
You do not need to modify this section.
End of explanation |
922 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting which passengers survived the sinking of the Titanic
using a random forest this time
Step1: Since Age is missing some data, we'll have to clean it by inserting the median age
Step2: Non-numeric columns
We have to either exclude our non-numeric columns when we train our algorithm (Name, Sex, Cabin, Embarked, and Ticket), or find a way to convert them to numeric columns.
We'll ignore the Ticket, Cabin, and Name columns. There isn't much information we can extract from there. Most of the values in the cabin column are missing (only 204 values out of 891 rows), and it likely isn't a particularly informative column in the first place.
The Ticket and Name columns are unlikely to tell us much without some domain knowledge about what the ticket numbers mean, and about which names correlate with characteristics like large or rich families.
Converting the Sex column
The Sex column is non-numeric, but we want to keep it around
We first have to find all the unique genders in the column (we know male and female are there, but did whoever recorded the dataset use another code for missing values?)
We'll also assign a code of 0 to male, and a code of 1 to female
Step3: Converting the Embarked column
Step4: Parameter Tuning
train more trees to increase accuracy
tweak min_samples_split and min_samples_leaf to prevent overfitting
Step5: Generating new features
The length of the name -- this could pertain to how rich the person was, and therefore their position in the Titanic.
The total number of people in a family (SibSp + Parch).
Step6: Extract the title using a regexp and map each unique title to an integer value
Step7: Family groups
We can also generate a feature indicating which family people are in. Because survival was likely highly dependent on your family and the people around you, this has a good chance at being a good feature.
To get this, we'll concatenate someone's last name with FamilySize to get a unique family id. We'll then be able to assign a code to each person based on their family id.
Step8: Run it over the test data | Python Code:
import csv as csv
import numpy as np
import pandas as pd
# We can use the pandas library in python to read in the csv file.
# This creates a pandas dataframe and assigns it to the titanic variable.
titanic = pd.read_csv("data/train.csv")
# Print the first 5 rows of the dataframe.
print(titanic.head(5))
print(titanic.describe())
Explanation: Predicting which passengers survived the sinking of the Titanic
using a random forest this time
End of explanation
titanic["Age"] = titanic["Age"].fillna(titanic["Age"].median())
Explanation: Since Age is missing some data, we'll have to clean it by inserting the median age
End of explanation
# Find all the unique genders -- the column appears to contain only male and female.
print(titanic["Sex"].unique())
# Replace all the occurrences of male with the number 0.
titanic.loc[titanic["Sex"] == "male", "Sex"] = 0
titanic.loc[titanic["Sex"] == "female", "Sex"] = 1
Explanation: Non-numeric columns
We have to either exclude our non-numeric columns when we train our algorithm (Name, Sex, Cabin, Embarked, and Ticket), or find a way to convert them to numeric columns.
We'll ignore the Ticket, Cabin, and Name columns. There isn't much information we can extract from there. Most of the values in the cabin column are missing (only 204 values out of 891 rows), and it likely isn't a particularly informative column in the first place.
The Ticket and Name columns are unlikely to tell us much without some domain knowledge about what the ticket numbers mean, and about which names correlate with characteristics like large or rich families.
Converting the Sex column
The Sex column is non-numeric, but we want to keep it around
We first have to find all the unique genders in the column (we know male and female are there, but did whoever recorded the dataset use another code for missing values?)
We'll also assign a code of 0 to male, and a code of 1 to female
End of explanation
# Find all the unique values for "Embarked".
print(titanic["Embarked"].unique())
titanic["Embarked"] = titanic["Embarked"].fillna("S")
titanic.loc[titanic["Embarked"] == "S", "Embarked"] = 0
titanic.loc[titanic["Embarked"] == "C", "Embarked"] = 1
titanic.loc[titanic["Embarked"] == "Q", "Embarked"] = 2
from sklearn import cross_validation
from sklearn.ensemble import RandomForestClassifier
predictors = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked"]
# Initialize our algorithm with the default paramters
# n_estimators is the number of trees we want to make
# min_samples_split is the minimum number of rows we need to make a split
# min_samples_leaf is the minimum number of samples we can have at the place where a tree branch ends (the bottom points of the tree)
alg = RandomForestClassifier(random_state=1, n_estimators=10, min_samples_split=2, min_samples_leaf=1)
scores = cross_validation.cross_val_score(alg, titanic[predictors], titanic["Survived"], cv=3)
# Take the mean of the scores (because we have one for each fold)
print(scores.mean())
Explanation: Converting the Embarked column
End of explanation
alg = RandomForestClassifier(random_state=1, n_estimators=150, min_samples_split=4, min_samples_leaf=2)
scores = cross_validation.cross_val_score(alg, titanic[predictors], titanic["Survived"], cv=3)
# Take the mean of the scores (because we have one for each fold)
print(scores.mean())
Explanation: Parameter Tuning
train more trees to increase accuracy
tweak min_samples_split and min_samples_leaf to prevent overfitting
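A more systematic way to explore these settings is a small grid search -- a sketch, not part of the original notebook; it reuses titanic[predictors] and titanic["Survived"] from above, and imports GridSearchCV from the old sklearn.grid_search location to match the API used in this notebook (newer versions use sklearn.model_selection):
from sklearn.grid_search import GridSearchCV
param_grid = {"n_estimators": [50, 150, 300],
              "min_samples_split": [2, 4, 8],
              "min_samples_leaf": [1, 2, 4]}
search = GridSearchCV(RandomForestClassifier(random_state=1), param_grid, cv=3)
search.fit(titanic[predictors], titanic["Survived"])
print(search.best_params_, search.best_score_)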
End of explanation
# Generating a familysize column
titanic["FamilySize"] = titanic["SibSp"] + titanic["Parch"]
# The .apply method generates a new series
titanic["NameLength"] = titanic["Name"].apply(lambda x: len(x))
Explanation: Generating new features
The length of the name -- this could pertain to how rich the person was, and therefore their position in the Titanic.
The total number of people in a family (SibSp + Parch).
End of explanation
import re
import pandas
# A function to get the title from a name.
def get_title(name):
# Use a regular expression to search for a title. Titles always consist of capital and lowercase letters, and end with a period.
title_search = re.search(' ([A-Za-z]+)\.', name)
# If the title exists, extract and return it.
if title_search:
return title_search.group(1)
return ""
# Get all the titles and print how often each one occurs.
titles = titanic["Name"].apply(get_title)
print(pandas.value_counts(titles))
# Map each title to an integer. Some titles are very rare, and are compressed into the same codes as other titles.
title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Dr": 5, "Rev": 6, "Major": 7, "Col": 7, "Mlle": 8, "Mme": 8, "Don": 9, "Lady": 10, "Countess": 10, "Jonkheer": 10, "Sir": 9, "Capt": 7, "Ms": 2}
for k,v in title_mapping.items():
titles[titles == k] = v
# Verify that we converted everything.
print(pandas.value_counts(titles))
# Add in the title column.
titanic["Title"] = titles
Explanation: Extract the title using a regexp and map each unique title to an integer value
End of explanation
import operator
# A dictionary mapping family name to id
family_id_mapping = {}
# A function to get the id given a row
def get_family_id(row):
# Find the last name by splitting on a comma
last_name = row["Name"].split(",")[0]
# Create the family id
family_id = "{0}{1}".format(last_name, row["FamilySize"])
# Look up the id in the mapping
if family_id not in family_id_mapping:
if len(family_id_mapping) == 0:
current_id = 1
else:
# Get the maximum id from the mapping and add one to it if we don't have an id
current_id = (max(family_id_mapping.items(), key=operator.itemgetter(1))[1] + 1)
family_id_mapping[family_id] = current_id
return family_id_mapping[family_id]
# Get the family ids with the apply method
family_ids = titanic.apply(get_family_id, axis=1)
# There are a lot of family ids, so we'll compress all of the families under 3 members into one code.
family_ids[titanic["FamilySize"] < 3] = -1
# Print the count of each unique id.
print(pandas.value_counts(family_ids))
titanic["FamilyId"] = family_ids
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from sklearn.feature_selection import SelectKBest, f_classif
predictors = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked", "FamilySize", "Title", "FamilyId"]
# Perform feature selection
selector = SelectKBest(f_classif, k=5)
selector.fit(titanic[predictors], titanic["Survived"])
# Get the raw p-values for each feature, and transform from p-values into scores
scores = -np.log10(selector.pvalues_)
%matplotlib notebook
# Plot the scores. See how "Pclass", "Sex", "Title", and "Fare" are the best?
plt.bar(range(len(predictors)), scores)
plt.xticks(range(len(predictors)), predictors, rotation='vertical')
plt.show()
# Pick only the four best features.
predictors = ["Pclass", "Sex", "Fare", "Title"]
alg = RandomForestClassifier(random_state=1, n_estimators=150, min_samples_split=8, min_samples_leaf=4)
scores = cross_validation.cross_val_score(alg, titanic[predictors], titanic["Survived"], cv=3)
# Take the mean of the scores (because we have one for each fold)
print(scores.mean())
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.cross_validation import KFold
import numpy as np
# The algorithms we want to ensemble.
# We're using the more linear predictors for the logistic regression, and everything with the gradient boosting classifier.
algorithms = [
[GradientBoostingClassifier(random_state=1, n_estimators=25, max_depth=3), ["Pclass", "Sex", "Age", "Fare", "Embarked", "FamilySize", "Title", "FamilyId"]],
[LogisticRegression(random_state=1), ["Pclass", "Sex", "Fare", "FamilySize", "Title", "Age", "Embarked"]]
]
# Initialize the cross validation folds
kf = KFold(titanic.shape[0], n_folds=3, random_state=1)
predictions = []
for train, test in kf:
train_target = titanic["Survived"].iloc[train]
full_test_predictions = []
# Make predictions for each algorithm on each fold
for alg, predictors in algorithms:
# Fit the algorithm on the training data.
alg.fit(titanic[predictors].iloc[train,:], train_target)
# Select and predict on the test fold.
# The .astype(float) is necessary to convert the dataframe to all floats and avoid an sklearn error.
test_predictions = alg.predict_proba(titanic[predictors].iloc[test,:].astype(float))[:,1]
full_test_predictions.append(test_predictions)
# Use a simple ensembling scheme -- just average the predictions to get the final classification.
test_predictions = (full_test_predictions[0] + full_test_predictions[1]) / 2
# Any value over .5 is assumed to be a 1 prediction, and below .5 is a 0 prediction.
test_predictions[test_predictions <= .5] = 0
test_predictions[test_predictions > .5] = 1
predictions.append(test_predictions)
# Put all the predictions together into one array.
predictions = np.concatenate(predictions, axis=0)
# Compute accuracy by comparing to the training data.
accuracy = sum(predictions[predictions == titanic["Survived"]]) / len(predictions)
print(accuracy)
Explanation: Family groups
We can also generate a feature indicating which family people are in. Because survival was likely highly dependent on your family and the people around you, this has a good chance at being a good feature.
To get this, we'll concatenate someone's last name with FamilySize to get a unique family id. We'll then be able to assign a code to each person based on their family id.
End of explanation
import pandas
titanic_test = pandas.read_csv("data/test.csv")
titanic_test["Age"] = titanic_test["Age"].fillna(titanic["Age"].median())
titanic_test["Fare"] = titanic_test["Fare"].fillna(titanic_test["Fare"].median())
titanic_test.loc[titanic_test["Sex"] == "male", "Sex"] = 0
titanic_test.loc[titanic_test["Sex"] == "female", "Sex"] = 1
titanic_test["Embarked"] = titanic_test["Embarked"].fillna("S")
titanic_test.loc[titanic_test["Embarked"] == "S", "Embarked"] = 0
titanic_test.loc[titanic_test["Embarked"] == "C", "Embarked"] = 1
titanic_test.loc[titanic_test["Embarked"] == "Q", "Embarked"] = 2
# First, we'll add titles to the test set.
titles = titanic_test["Name"].apply(get_title)
# We're adding the Dona title to the mapping, because it's in the test set, but not the training set
title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Dr": 5, "Rev": 6, "Major": 7, "Col": 7, "Mlle": 8, "Mme": 8, "Don": 9, "Lady": 10, "Countess": 10, "Jonkheer": 10, "Sir": 9, "Capt": 7, "Ms": 2, "Dona": 10}
for k,v in title_mapping.items():
titles[titles == k] = v
titanic_test["Title"] = titles
# Check the counts of each unique title.
print(pandas.value_counts(titanic_test["Title"]))
# Now, we add the family size column.
titanic_test["FamilySize"] = titanic_test["SibSp"] + titanic_test["Parch"]
# Now we can add family ids.
# We'll use the same ids that we did earlier.
print(family_id_mapping)
family_ids = titanic_test.apply(get_family_id, axis=1)
family_ids[titanic_test["FamilySize"] < 3] = -1
titanic_test["FamilyId"] = family_ids
titanic_test["NameLength"] = titanic_test["Name"].apply(lambda x: len(x))
predictors = ["Pclass", "Sex", "Age", "Fare", "Embarked", "FamilySize", "Title", "FamilyId"]
algorithms = [
[GradientBoostingClassifier(random_state=1, n_estimators=25, max_depth=3), predictors],
[LogisticRegression(random_state=1), ["Pclass", "Sex", "Fare", "FamilySize", "Title", "Age", "Embarked"]]
]
full_predictions = []
for alg, predictors in algorithms:
# Fit the algorithm using the full training data.
alg.fit(titanic[predictors], titanic["Survived"])
# Predict using the test dataset. We have to convert all the columns to floats to avoid an error.
predictions = alg.predict_proba(titanic_test[predictors].astype(float))[:,1]
full_predictions.append(predictions)
# The gradient boosting classifier generates better predictions, so we weight it higher.
predictions = (full_predictions[0] * 3 + full_predictions[1]) / 4
predictions[predictions <= .5] = 0
predictions[predictions > .5] = 1
predictions = predictions.astype(int)
submission = pandas.DataFrame({
"PassengerId": titanic_test["PassengerId"],
"Survived": predictions
})
# make a kaggle submission from test set predictions with PassengerId,Survived
submission.to_csv("kaggle.csv", index=False)
Explanation: Run it over the test data
End of explanation |
923 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to pandas and sklearn
Recommendation System
We live in a world surrounded by recommendation systems - our shopping habits, our reading habits, and our political opinions are heavily influenced by recommendation algorithms. So let's take a closer look at how to build a basic recommendation system.
Simply put a recommendation system learns from your previous behavior and tries to recommend items that are similar to your previous choices. While there are a multitude of approaches for building recommendation systems, we will take a simple approach that is easy to understand and has a reasonable performance.
For this exercise we will build a recommendation system that predicts which talks you'll enjoy at a conference - specifically our favorite conference Pycon!
Before you proceed
This project is still in alpha stage. Bugs, typos, spelling, grammar, terminologies - there's every scope of finding bugs. If you have found one - open an issue on github. Pull Requests with corrections, fixes and enhancements will be received with open arms! Don't forget to add yourself to the list of contributors to this project.
Recommendation for Pycon talks
Take a look at 2018 schedule.
With 32 tutorials, 12 sponsor workshops, 16 talks at the education summit, and 95 talks at the main conference - Pycon has a lot to offer. Reading through all the talk descriptions and filtering out the ones that you should go to is a tedious process.
Lets build a recommendation system that recommends talks from Pycon 2018, based on the ones that a person went to in 2017. This way the attendee does not waste any time deciding which talk to go to and spend more time making friends on the hallway track!
We will be using pandas and scikit-learn to build the recommendation system using the text description of talks.
Definitions
Documents
In our example the talk descriptions make up the documents
Class
We have two classes to classify our documents
- The talks that the attendee would like to see "in person". Denoted by 1
- The talks that the attendee would watch "later online". Denoted by 0
A talk description labeled 0 means the user has chosen to watch it later, and a label of 1 means the user has chosen to watch it in person.
Supervised Learning
In supervised learning we inspect each observation in a given dataset and manually label it. This manually labeled data is used to construct a model that can predict the labels on new data. We will use a supervised learning technique called Support Vector Machines.
In unsupervised learning we do not need any manual labeling. The recommendation system finds the pattern in the data to build a model that can be used for recommendation.
Dataset
The dataset contains the talk description and speaker details from Pycon 2017 and 2018. All the 2017 talk data has been labeled by a user who has been to Pycon 2017.
Required packages installation
The following packages are needed for this project. Execute the cell below to install them.
numpy==1.14.2
pandas==0.22.0
python-dateutil==2.7.2
pytz==2018.4
scikit-learn==0.19.1
scipy==1.0.1
six==1.11.0
sklearn==0.0
Step1: Exercise A
Step2: Here is a brief description of the interesting fields.
variable | description
------|------|
title|Title of the talk
description|Description of the talk
year|Is it a 2017 talk or 2018
label|1 indicates the user preferred seeing the talk in person,<br> 0 indicates they would schedule it for later.
Note all 2018 talks are set to 1. However they are only placeholders, and are not used in training the model. We will use 2017 data for training, and predict the labels on the 2018 talks.
Lets start by selecting the 2017 talk descriptions that were labeled by the user for watching in person.
python
df[(df.year==2017) & (df.label==1)]['description']
Print the description of the talks that the user preferred watching in person. How many such talks are there?
Exercise 1
Step3: Quick Introduction to Text Analysis
Lets have a quick overview of text analysis. Our end goal is to train a machine learning algorithm by making it go through enough documents from each class to recognize the distingusihing characteristics in documents from a particular class.
Labeling - This is the step where the user (i.e. a human) reviews a set of documents and manually classifies them. For our problem, here a Pycon attendee is labeling a talk description from 2017 as "watch later"(0) or "watch now" (1).
Training/Testing split - In order to test our algorithm, we split parts of our labeled data into training (used to train the algorithm) and testing set (used to test the algorithm).
Vectorization & feature extraction - Since machine learning algorithms deal with numbers rather than words, we vectorize our documents - i.e. we split the documents into individual unique words and count the frequency of their occurrence across documents. Different kinds of data normalization are possible at this stage, such as stop-word removal and lemmatization, but we will skip them for now. Each individual token occurrence frequency (normalized or not) is treated as a feature.
Model training - This is where we build the model.
Model testing - Here we test out the model to see how it is performing against label data as we subject it to the previously set aside test set.
Tweak and train - If our measures are not satisfactory, we will change the parameters that define different aspects of the machine learning algorithm and we will train the model again.
Once satisfied with the results from the previous step, we are now ready to deploy the model and have new unlabeled documents be classified by it.
Exercise 2
Step4: Extra Credit
Note that we are choosing default value on all parameters for TfidfVectorizer. While this is a starting point, for better results we would want to come back and tune them to reduce noise. You can try that after you have taken a first pass through all the exercises. You might consider using spacy to fine tune the input to TfidfVectorizer.
Exercise 2.1 Fit_transform
We will use the fit_transform method to learn the vocabulary dictionary and return term-document matrix. What should be the input to fit_transform?
Step5: Exercise 2.2 Inspect the vocabulary
Take a look at the vocabulary dictionary that is accessible by calling vocabulary_ on the vectorizer. The stopwords can be accessed using stop_words_ attribute.
Use the get_feature_names function on the Tfidf vectorizer to get the features (terms).
Step6: Exercise 2.3 Transform documents for prediction into document-term matrix
For the data on which we will do our predictions, we will use the transform method to get the document-term matrix.
We will use this later, once we have our model ready. What should be the input to the transform function?
Step7: Exercise 3
Step8: Exercise 3.1 Inspect the shape of each output of train_test_split
For each of the output above, get the shape of the matrices.
Exercise 4
Step9: Exercise 5
Step10: Exercise 6
Step11: Using the predicted_talk_indexes get the talk id, description, presenters, title and location and talk date.
How many talks should the user go to according to your model? | Python Code:
!pip install -r requirements.txt
Explanation: Introduction to pandas and sklearn
Recommendation System
We live in a world surrounded by recommendation systems - our shopping habits, our reading habits, and our political opinions are heavily influenced by recommendation algorithms. So let's take a closer look at how to build a basic recommendation system.
Simply put a recommendation system learns from your previous behavior and tries to recommend items that are similar to your previous choices. While there are a multitude of approaches for building recommendation systems, we will take a simple approach that is easy to understand and has a reasonable performance.
For this exercise we will build a recommendation system that predicts which talks you'll enjoy at a conference - specifically our favorite conference Pycon!
Before you proceed
This project is still in alpha stage. Bugs, typos, spelling, grammar, terminologies - there's every scope of finding bugs. If you have found one - open an issue on github. Pull Requests with corrections, fixes and enhancements will be received with open arms! Don't forget to add yourself to the list of contributors to this project.
Recommendation for Pycon talks
Take a look at 2018 schedule.
With 32 tutorials, 12 sponsor workshops, 16 talks at the education summit, and 95 talks at the main conference - Pycon has a lot to offer. Reading through all the talk descriptions and filtering out the ones that you should go to is a tedious process.
Lets build a recommendation system that recommends talks from Pycon 2018, based on the ones that a person went to in 2017. This way the attendee does not waste any time deciding which talk to go to and spend more time making friends on the hallway track!
We will be using pandas and scikit-learn to build the recommendation system using the text description of talks.
Definitions
Documents
In our example the talk descriptions make up the documents
Class
We have two classes to classify our documents
- The talks that the attendee would like to see "in person". Denoted by 1
- The talks that the attendee would watch "later online". Denoted by 0
A talk description labeled 0 means the user has chosen to watch it later, and a label of 1 means the user has chosen to watch it in person.
Supervised Learning
In supervised learning we inspect each observation in a given dataset and manually label it. This manually labeled data is used to construct a model that can predict the labels on new data. We will use a supervised learning technique called Support Vector Machines.
In unsupervised learning we do not need any manual labeling. The recommendation system finds the pattern in the data to build a model that can be used for recommendation.
Dataset
The dataset contains the talk description and speaker details from Pycon 2017 and 2018. All the 2017 talk data has been labeled by a user who has been to Pycon 2017.
Required packages installation
The following packages are needed for this project. Execute the cell below to install them.
numpy==1.14.2
pandas==0.22.0
python-dateutil==2.7.2
pytz==2018.4
scikit-learn==0.19.1
scipy==1.0.1
six==1.11.0
sklearn==0.0
End of explanation
import pandas as pd
import numpy as np
df=pd.read_csv('talks.csv')
df.head()
Explanation: Exercise A: Load the data
The data directory contains the snapshot of one such user's labeling - lets load that up and start with our analysis.
End of explanation
year_labeled=
year_predict=
description_labeled = df[df.year==year_labeled]['description']
description_predict = df[df.year==year_predict]['description']
Explanation: Here is a brief description of the interesting fields.
variable | description
------|------|
title|Title of the talk
description|Description of the talk
year|Is it a 2017 talk or 2018
label|1 indicates the user preferred seeing the talk in person,<br> 0 indicates they would schedule it for later.
Note all 2018 talks are set to 1. However they are only placeholders, and are not used in training the model. We will use 2017 data for training, and predict the labels on the 2018 talks.
Lets start by selecting the 2017 talk descriptions that were labeled by the user for watching in person.
python
df[(df.year==2017) & (df.label==1)]['description']
Print the description of the talks that the user preferred watching in person. How many such talks are there?
Exercise 1: Exploring the dataset
Exercise 1.1: Select 2017 talk description and labels from the Pandas dataframe. How many of them are present? Do the same for 2018 talks.
The 2017 talks will be used for training and the 2018 talks will be used for predicting. Set the values of year_labeled and year_predict to appropriate values and print out the values of description_labeled and description_predict.
End of explanation
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
Explanation: Quick Introduction to Text Analysis
Lets have a quick overview of text analysis. Our end goal is to train a machine learning algorithm by making it go through enough documents from each class to recognize the distingusihing characteristics in documents from a particular class.
Labeling - This is the step where the user (i.e. a human) reviews a set of documents and manually classifies them. For our problem, here a Pycon attendee is labeling a talk description from 2017 as "watch later"(0) or "watch now" (1).
Training/Testing split - In order to test our algorithm, we split parts of our labeled data into training (used to train the algorithm) and testing set (used to test the algorithm).
Vectorization & feature extraction - Since machine learning algorithms deal with numbers rather than words, we vectorize our documents - i.e. we split the documents into individual unique words and count the frequency of their occurrence across documents. Different kinds of data normalization are possible at this stage, such as stop-word removal and lemmatization, but we will skip them for now. Each individual token occurrence frequency (normalized or not) is treated as a feature.
Model training - This is where we build the model.
Model testing - Here we test out the model to see how it is performing against label data as we subject it to the previously set aside test set.
Tweak and train - If our measures are not satisfactory, we will change the parameters that define different aspects of the machine learning algorithm and we will train the model again.
Once satisfied with the results from the previous step, we are now ready to deploy the model and have new unlabeled documents be classified by it.
Exercise 2: Vectorize and Feature Extraction
In this step we build the feature set by tokenization, counting and normalization of the bi-grams from the text descriptions of the talk.
tokenizing strings and giving an integer id for each possible token, for instance by using white-spaces and punctuation as token separators
counting the occurrences of tokens in each document
normalizing and weighting with diminishing importance tokens that occur in the majority of samples / documents
You can find more information on text feature extraction here and TfidfVectorizer here.
End of explanation
vectorized_text_labeled = vectorizer.fit_transform( ... )
Explanation: Extra Credit
Note that we are choosing default values for all parameters of TfidfVectorizer. While this is a starting point, for better results we would want to come back and tune them to reduce noise. You can try that after you have taken a first pass through all the exercises. You might consider using spacy to fine-tune the input to TfidfVectorizer.
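For instance, one might experiment with settings along these lines (illustrative values, not tuned):
vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english",
                             min_df=2,           # ignore terms that appear in only one talk
                             max_df=0.8,         # ignore terms that appear in most talks
                             sublinear_tf=True)  # dampen the effect of very frequent terms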
Exercise 2.1 Fit_transform
We will use the fit_transform method to learn the vocabulary dictionary and return term-document matrix. What should be the input to fit_transform?
End of explanation
occurrences = np.asarray(vectorized_text_labeled.sum(axis=0)).ravel()
terms = ( ... )
counts_df = pd.DataFrame({'terms': terms, 'occurrences': occurrences}).sort_values('occurrences', ascending=False)
counts_df
Explanation: Exercise 2.2 Inspect the vocabulary
Take a look at the vocabulary dictionary that is accessible by calling vocabulary_ on the vectorizer. The stopwords can be accessed using stop_words_ attribute.
Use the get_feature_names function on the Tfidf vectorizer to get the features (terms).
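For example (a small sketch using the fitted vectorizer from above):
print(len(vectorizer.vocabulary_))        # number of terms kept as features
print(list(vectorizer.stop_words_)[:10])  # a few of the terms that were dropped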
End of explanation
vectorized_text_predict = vectorizer.transform( ... )
vectorized_text_predict.toarray()
Explanation: Exercise 2.3 Transform documents for prediction into document-term matrix
For the data on which we will do our predictions, we will use the transform method to get the document-term matrix.
We will use this later, once we have our model ready. What should be the input to the transform function?
End of explanation
from sklearn.model_selection import train_test_split
labels = df[df.year == 2017]['label']
test_size= ...
X_train, X_test, y_train, y_test = train_test_split(vectorized_text_labeled, labels, test_size=test_size, random_state=1)
Explanation: Exercise 3: Split into training and testing set
Next we split our data into a training set and a testing set. This allows us to do cross validation and avoid overfitting. Use the train_test_split method from sklearn.model_selection to split vectorized_text_labeled into a training and a testing set, with the test size set to 30% (0.3) of the labeled data.
Here is the documentation for the function. The example usage should be helpful for understanding what X_train, X_test, y_train, y_test tuple represents.
End of explanation
import sklearn
from sklearn.svm import LinearSVC
classifier = LinearSVC(verbose=1)
classifier.fit(X_train, y_train)
Explanation: Exercise 3.1 Inspect the shape of each output of train_test_split
For each of the output above, get the shape of the matrices.
Exercise 4: Train the model
Finally we get to the stage of training the model. We are going to use a linear support vector classifier and check its accuracy by using the classification_report function. Note that we have not done any parameter tuning yet, so your model might not give you the best results. Like TfidfVectorizer, you can come back and tune these parameters later.
End of explanation
y_pred = classifier.predict( ... )
report = sklearn.metrics.classification_report( ... , ... )
print(report)
Explanation: Exercise 5: Evaluate the model
Evaluate the model by using the classification_report function from sklearn.metrics. What are the values of precision, recall and f1-score? They are defined here.
End of explanation
predicted_talks_vector = classifier.predict( ... )
Explanation: Exercise 6: Make Predictions
Use the model to predict which 2018 talks the user should go to. Plug vectorized_text_predict from exercise 2.3 into the predict function to get predicted_talks_vector.
End of explanation
df_2018 = df[df.year==2018]
predicted_talk_indexes = predicted_talks_vector.nonzero()[0] + len(df[df.year==2017])
df_2018_talks = df_2018.loc[predicted_talk_indexes]
Explanation: Using the predicted_talk_indexes get the talk id, description, presenters, title and location and talk date.
How many talks should the user go to according to your model?
End of explanation |
924 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
context.trie(data="", format="default", filename="")
Generate a "trie" automaton (a prefix tree) from a finite series, given as a file of (weighted) words.
Arguments
Step1: Weighted words (finite series)
Step2: Tuples of words | Python Code:
import vcsn
vcsn.B.trie('''foo
bar
baz''')
%%file words
hello
world
hell
word
vcsn.B.trie(filename='words')
Explanation: context.trie(data="", format="default", filename="")
Generate a "trie" automaton (a prefix tree) from a finite series, given as a file of (weighted) words.
Arguments:
- data: a string containing the list of words
- format:
- "default": same as "monomials"
- "monomials": each line contains a weighted word in the monomials syntax: <2>foo
- "words": each line contains a single word: foo. In this case <2>foo is read as a five-letter word.
Postconditions:
- Result.is_deterministic()
See also:
- context.cotrie
- polynomial.cotrie
- polynomial.trie
Examples
Words (finite language)
End of explanation
vcsn.Q.trie('''
one
<2>two
<3>three
<13>thirteen
<30>thirty
<51>thirsty''')
Explanation: Weighted words (finite series)
End of explanation
vcsn.context('lat<law_char, law_char>, q').trie('''
<1>one|un
<2>two|deux
<3>three|trois
<4>four|quatre
<14>forteen|quatorze
<40>forty|quarante''')
Explanation: Tuples of words
End of explanation |
925 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting Earnings from Census Data with Random Forests
taken from The Analytics Edge
The Task
The United States government periodically collects demographic information by conducting a census.
In this problem, we are going to use census information about an individual to predict how much a person earns -- in particular, whether the person earns more than $50,000 per year. This data comes from the UCI Machine Learning Repository.
The file census.csv contains 1994 census data for 31,978 individuals in the United States.
The dataset includes the following 13 variables
Step1: Exercise 1
Read the dataset census-2.csv.
find out the name and the type of each column
Step2: Exercise 2
sklearn classification can only work with numeric values. Therefore we first have to convert all not-numeric values to numeric values.
convert the target column over50k to a boolean
convert the not-numeric independent variables (aka features, aka predictors) via pd.get_dummies().
check the number of columns before and after applying pd.get_dummies
how does pd.get_dummies() work?
See http://pbpython.com/categorical-encoding.html for further alternatives to convert non-numeric values to numeric values.
Step3: Exercise 3
Separate target variable over50k from the independent variables (all others)
Step4: Exercise 4
Then, split the data randomly into a training set and a testing set, setting the random_state to 2000 before creating the split. Split the data so that the training set contains 60% of the observations, while the testing set contains 40% of the observations.
Step5: Exercise 5
Let us now build a random forest classifier to predict "over50k". Use the training set to build the model, and all of the other variables as independent variables. Use the default parameters.
Step6: Exercise 6
Which are the most important features? Plot Top 5 with plotting_utilities.plot_feature_importances.
Step7: Exercise 7
Predict for the test data and
compare with the actual outcome | Python Code:
import pandas as pd
import numpy as np
Explanation: Predicting Earnings from Census Data with Random Forests
taken from The Analytics Edge
The Task
The United States government periodically collects demographic information by conducting a census.
In this problem, we are going to use census information about an individual to predict how much a person earns -- in particular, whether the person earns more than $50,000 per year. This data comes from the UCI Machine Learning Repository.
The file census.csv contains 1994 census data for 31,978 individuals in the United States.
The dataset includes the following 13 variables:
age = the age of the individual in years
workclass = the classification of the individual's working status (does the person work for the federal government, work for the local government, work without pay, and so on)
education = the level of education of the individual (e.g., 5th-6th grade, high school graduate, PhD, so on)
maritalstatus = the marital status of the individual
occupation = the type of work the individual does (e.g., administrative/clerical work, farming/fishing, sales and so on)
relationship = relationship of individual to his/her household
race = the individual's race
sex = the individual's sex
capitalgain = the capital gains of the individual in 1994 (from selling an asset such as a stock or bond for more than the original purchase price)
capitalloss = the capital losses of the individual in 1994 (from selling an asset such as a stock or bond for less than the original purchase price)
hoursperweek = the number of hours the individual works per week
nativecountry = the native country of the individual
over50k = whether or not the individual earned more than $50,000 in 1994
Predict whether an individual's earnings are above $50,000 (the variable "over50k") using all of the other variables as independent variables.
End of explanation
# TODO
Explanation: Exercise 1
Read the dataset census-2.csv.
find out the name and the type of each column
End of explanation
# TODO convert over50k to boolean
# TODO convert independent variables
Explanation: Exercise 2
sklearn classification can only work with numeric values. Therefore we first have to convert all not-numeric values to numeric values.
convert the target column over50k to a boolean
convert the not-numeric independent variables (aka features, aka predictors) via pd.get_dummies().
check the number of columns before and after applying pd.get_dummies
how does pd.get_dummies() work?
See http://pbpython.com/categorical-encoding.html for further alternatives to convert not-numeric values to numeric values.
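A sketch of what this could look like (assuming the dataframe is called df as in Exercise 1; the exact text of the raw over50k labels is an assumption to check against the file):
print(len(df.columns))
df['over50k'] = (df['over50k'] == ' >50K')   # assumption: raw labels read ' >50K' / ' <=50K'
df = pd.get_dummies(df)
print(len(df.columns))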
End of explanation
# TODO (hint: use drop(columns,axis=1))
Explanation: Exercise 3
Separate target variable over50k from the independent variables (all others):
over50k -> y, all others -> X
End of explanation
from sklearn.model_selection import train_test_split
# TODO
Explanation: Exercise 4
Then, split the data randomly into a training set and a testing set, setting the random_state to 2000 before creating the split. Split the data so that the training set contains 60% of the observations, while the testing set contains 40% of the observations.
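One way to write this, following the 60/40 split and random_state described above (X and y as produced in Exercise 3):
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=2000)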
End of explanation
from sklearn.ensemble import RandomForestClassifier
# TODO
Explanation: Exercise 5
Let us now build a random forest classifier to predict "over50k". Use the training set to build the model, and all of the other variables as independent variables. Use the default parameters.
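A minimal sketch with default parameters (variable names follow the earlier exercises):
clf = RandomForestClassifier()
clf.fit(X_train, y_train)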
End of explanation
from plotting_utilities import plot_feature_importances
import matplotlib.pyplot as plt
%matplotlib inline
# TODO
Explanation: Exercise 6
Which are the most important features? Plot Top 5 with plotting_utilities.plot_feature_importances.
End of explanation
# TODO predict
from sklearn.metrics import confusion_matrix
# TODO
Explanation: Exercise 7
Predict for the test data and
compare with the actual outcome:
Therefore print the confusion matrix for the test-data and
calculate the accuracy
for the trainings-data
for the test-data
how good is the accuracy in comparison to the Decision Tree?
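A sketch of these checks (assuming the fitted model is called clf as in the earlier sketch):
y_pred = clf.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(clf.score(X_train, y_train))   # accuracy on the training data
print(clf.score(X_test, y_test))     # accuracy on the test data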
End of explanation |
926 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quiz - Week 3B
Q1.
Suppose we hash the elements of a set S having 20 members, to a bit array of length 99. The array is initially all-0's, and we set a bit to 1 whenever a member of S hashes to it. The hash function is random and uniform in its distribution. What is the expected fraction of 0's in the array after hashing? What is the expected fraction of 1's? You may assume that 99 is large enough that asymptotic limits are reached
Solution 1.
Throwing Darts approximation - since in this case t = 99 and d = 20 (one dart per member of S), the fraction of 0's that remain is given by the following equation
Step1: Q2.
Given the following graph
<pre>
  2 -----6
 / \     |
1   4    |
 \ / \   |
  3   5
</pre>
The goal is to find two clusters in this graph using Spectral Clustering on the Laplacian matrix. Compute the Laplacian of this graph. Then compute the second eigen vector of the Laplacian (the one corresponding to the second smallest eigenvalue).
To cluster the points, we decide to split at the mean value. We say that a node is a tie if its value in the eigen-vector is exactly equal to the mean value. Let's assume that if a point is a tie, we choose its cluster at random. Identify the true statement from the list below.
Step2: Q3.
We wish to estimate the surprise number (2nd moment) of a data stream, using the method of AMS. It happens that our stream consists of ten different values, which we'll call 1, 2,..., 10, that cycle repeatedly. That is, at timestamps 1 through 10, the element of the stream equals the timestamp, at timestamps 11 through 20, the element is the timestamp minus 10, and so on. It is now timestamp 75, and a 5 has just been read from the stream. As a start, you should calculate the surprise number for this time.
For our estimate of the surprise number, we shall choose three timestamps at random, and estimate the surprise number from each, using the AMS approach (length of the stream times 2m-1, where m is the number of occurrences of the element of the stream at that timestamp, considering all times from that timestamp on, to the current time). Then, our estimate will be the median of the three resulting values.
You should discover the simple rules that determine the estimate derived from any given timestamp and from any set of three timestamps. Then, identify from the list below the set of three "random" timestamps that give the closest estimate.
Some hints
Step3: Q4
We wish to use the Flajolet-Martin algorithm of Section 4.4 to count the number of distinct elements in a stream. Suppose that there are ten possible elements, 1, 2,..., 10, that could appear in the stream, but only four of them have actually appeared. To make our estimate of the count of distinct elements, we hash each element to a 4-bit binary number. The element x is hashed to 3x + 7 (modulo 11). For example, element 8 hashes to 3*8+7 = 31, which is 9 modulo 11 (i.e., the remainder of 31/11 is 9). Thus, the 4-bit string for element 8 is 1001.
A set of four of the elements 1 through 10 could give an estimate that is exact (if the estimate is 4), or too high, or too low. You should figure out under what circumstances a set of four elements falls into each of those categories. Then, identify in the list below the set of four elements that gives the exactly correct estimate. | Python Code:
## Solution 1.
import numpy as np
A = np.array([[0, 0, 1, 0, 0, 1, 0, 0], #A
[0, 0, 0, 0, 1, 0, 0, 1], #B
[1, 0, 0, 1, 0, 1, 0, 0], #C
[0, 0, 1, 0, 1, 0, 1, 0], #D
[0, 1, 0, 1, 0, 0, 0, 1], #E
[1, 0, 1, 0, 0, 0, 1, 0], #F
[0, 0, 0, 1, 0, 1, 0, 1], #G
[0, 1, 0, 0, 1, 0, 1, 0] #H
])
print "Adjacent Matrix A:"
print A
print "Zero elements in A:"
print A.shape[1]* A.shape[0] - A.sum()
print "Sum as well as Non zero elements in A:"
print A.sum()
print "=========================================="
D = np.diag([2,2,3,3,3,3,3,3])
print "Degree Matrix D:"
print D
print "=========================================="
L = D - A
print "Laplacian Matrix L:"
print L
Explanation: Quiz - Week 3B
Q1.
Suppose we hash the elements of a set S having 20 members, to a bit array of length 99. The array is initially all-0's, and we set a bit to 1 whenever a member of S hashes to it. The hash function is random and uniform in its distribution. What is the expected fraction of 0's in the array after hashing? What is the expected fraction of 1's? You may assume that 99 is large enough that asymptotic limits are reached
Solution 1.
Throwing-darts approximation: the targets are the t = 99 bits of the array and the darts are the d = 20 members of S, so the expected fraction of 0's that remain is
\begin{align}
(1 - 1/t)^{t(d/t)} &\approx e^{-d/t} \\
&= e^{-20/99} \approx 0.82
\end{align}
So in this case the expected fraction of 1's is $1 - e^{-20/99} \approx 0.18$.
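As a quick numerical check (a sketch using t = 99 and d = 20 from this question), the exact value and the asymptotic approximation agree closely:
import numpy as np
t, d = 99.0, 20.0
exact = (1 - 1/t)**d          # exact expected fraction of 0's
approx = np.exp(-d/t)         # throwing-darts approximation e^(-d/t)
print exact, approx           # both ~0.82, so the fraction of 1's is ~0.18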
Q2.
A certain Web mail service (like gmail, e.g.) has $10^8$ users, and wishes to create a sample of data about these users, occupying $10^{10}$ bytes. Activity at the service can be viewed as a stream of elements, each of which is an email. The element contains the ID of the sender, which must be one of the $10^8$ users of the service, and other information, e.g., the recipient(s), and contents of the message. The plan is to pick a subset of the users and collect in the $10^{10}$ bytes records of length 100 bytes about every email sent by the users in the selected set (and nothing about other users).
The method of Section 4.2.4 will be used. User ID's will be hashed to a bucket number, from 0 to 999,999. At all times, there will be a threshold t such that the 100-byte records for all the users whose ID's hash to t or less will be retained, and other users' records will not be retained. You may assume that each user generates emails at exactly the same rate as other users. As a function of n, the number of emails in the stream so far, what should the threshold t be in order that the selected records will not exceed the $10^{10}$ bytes available to store records? From the list below, identify the true statement about a value of n and its value of t.
Solution 2.
In this problem we sample the stream by user: a user's records are kept whenever their ID hashes to a bucket value of at most t (note that hash values start at 0), so the fraction of users, and hence of emails, retained is $(t+1)/10^{6}$. Each retained email costs a 100-byte record, so to stay within the $10^{10}$-byte budget we need
\begin{align}
100 \, n \, \frac{t+1}{10^{6}} \le 10^{10}
\quad\Longrightarrow\quad
t + 1 \le \frac{10^{14}}{n}
\end{align}
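A quick sanity check of this bound, as a sketch with one illustrative value of n:
n = 10**12                         # emails seen so far (illustrative value)
t = 10**14 / n - 1                 # largest integer threshold allowed by the bound
bytes_stored = 100 * n * (t + 1) / 10**6
print t, bytes_stored              # t = 99 keeps storage at exactly 10**10 bytes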
Quiz - Week 3A(Advanced)
Q1.
<pre>
C -- D -- E
/ | | | \
A | | | B
\ | | | /
F -- G -- H
</pre>
Write the adjacency matrix A, the degree matrix D, and the Laplacian matrix L. For each, find the sum of all entries and the number of nonzero entries. Then identify the true statement from the list below.
End of explanation
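The cell above reports the sum and the zero/non-zero counts for A only; the same check for D and L is a short sketch reusing the arrays already defined:
for name, M in [('A', A), ('D', D), ('L', L)]:
    print name, 'sum =', M.sum(), ' nonzero =', (M != 0).sum()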
import numpy as np
A = np.array([[0, 1, 1, 0, 0, 0],
               [1, 0, 0, 1, 0, 1],  # node 2 is also joined to node 6 in the drawing below
[1, 0, 0, 1, 0, 0],
[0, 1, 1, 0, 1, 0],
[0, 0, 0, 1, 0, 1],
[0, 1, 0, 0, 1, 0]
])
D = np.diag([2, 3, 2, 3, 2, 2])  # node degrees; node 2 has degree 3 (neighbours 1, 4 and 6)
L = (D - A)
print "=========================================="
print "Graph Laplacian Matrix: "
print L
values, vectors = np.linalg.eig(L)
print "Eigen values: "
values = np.around(values, decimals=4)
print values
print "=========================================="
print "Eigen vectors: "
vectors = np.around(vectors, decimals=4)
print vectors
print "=========================================="
print "Second Smallest Eigen vectors: "
print vectors[:, 2]
print "=========================================="
print "Mean of each row in vectors: "
print np.mean(vectors[:, 2])
Explanation: Q2.
Given the following graph
<pre>
2 -----6
/ \ |
1 4 |
\ / \ |
3 5
</pre>
The goal is to find two clusters in this graph using Spectral Clustering on the Laplacian matrix. Compute the Laplacian of this graph. Then compute the second eigen vector of the Laplacian (the one corresponding to the second smallest eigenvalue).
To cluster the points, we decide to split at the mean value. We say that a node is a tie if its value in the eigen-vector is exactly equal to the mean value. Let's assume that if a point is a tie, we choose its cluster at random. Identify the true statement from the list below.
End of explanation
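To finish the clustering step described above, here is a short sketch that splits the nodes at the mean of the second-smallest eigenvector and flags any ties, reusing values and vectors from the cell above:
order = np.argsort(values)
v2 = vectors[:, order[1]]              # eigenvector of the second-smallest eigenvalue
split = v2.mean()
for node, value in enumerate(v2, 1):
    if np.isclose(value, split):
        label = 'tie (assign at random)'
    elif value > split:
        label = 'cluster 1'
    else:
        label = 'cluster 2'
    print node, np.round(value, 4), label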
def cal_supprise(current_t = 75):
common = ( current_t / 10 ) % 10
more_set = ( current_t ) % 10
less_set = 10 - more_set
return more_set * (common+1)**2 + less_set * (common**2)
def AMS(time_list, current_t = 75):
    # The stream cycles with period 10, so the element seen at timestamp t
    # re-appears every 10 steps. Its number of occurrences from t up to and
    # including current_t is m = (current_t - t) / 10 + 1 (integer division),
    # and the AMS estimate from that timestamp is current_t * (2m - 1).
    estimates = []
    for t in time_list:
        m = (current_t - t) / 10 + 1
        estimates.append((2 * m - 1) * current_t)
    return estimates
print cal_supprise()
# the problem statement asks for the median of the three per-timestamp estimates
print np.median(AMS([31,32,44]))
print np.median(AMS([14,35,42]))
print np.median(AMS([32,48,50]))
print np.median(AMS([22,42,62]))
#buffer
Explanation: Q3.
We wish to estimate the surprise number (2nd moment) of a data stream, using the method of AMS. It happens that our stream consists of ten different values, which we'll call 1, 2,..., 10, that cycle repeatedly. That is, at timestamps 1 through 10, the element of the stream equals the timestamp, at timestamps 11 through 20, the element is the timestamp minus 10, and so on. It is now timestamp 75, and a 5 has just been read from the stream. As a start, you should calculate the surprise number for this time.
For our estimate of the surprise number, we shall choose three timestamps at random, and estimate the surprise number from each, using the AMS approach (length of the stream times 2m-1, where m is the number of occurrences of the element of the stream at that timestamp, considering all times from that timestamp on, to the current time). Then, our estimate will be the median of the three resulting values.
You should discover the simple rules that determine the estimate derived from any given timestamp and from any set of three timestamps. Then, identify from the list below the set of three "random" timestamps that give the closest estimate.
Some hints:
Be sure you have the surprise number correct. Notice that some of the elements appear 8 times and others appear 7 times. The surprise number is the sum over all elements that appear of the square of the number of times they appear.
Remember that for any given timestamp, the estimate is the length of the stream (75 in this example) times 2m-1, where m is the number of times the element at that timestamp appears, at that time or later.
End of explanation
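As a cross-check on the surprise number at timestamp 75, a brute-force count over the whole stream (a small sketch):
stream = [(i % 10) + 1 for i in range(75)]       # elements at timestamps 1..75
counts = {}
for e in stream:
    counts[e] = counts.get(e, 0) + 1
print sum(c**2 for c in counts.values())         # should match cal_supprise(), i.e. 565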
def hash1(x):
return (3*x + 7) % 11
def printBinary(x):
print "- The binary result of {0} is:".format(str(x)) + str(bin(x))
def countTrailingZerosInBinary(num):
    # a hash value of 0 is the 4-bit string 0000, i.e. 4 trailing zeros
    if num == 0:
        return 4
    bnum = str(bin(num))
    return len(bnum) - len(bnum.rstrip('0'))
def FlagoletMatrtin(hash_list):
maxnum = 0
for val in hash_list:
num = countTrailingZerosInBinary(val)
if num > maxnum:
maxnum = num
return 2**maxnum
print FlagoletMatrtin(([hash1(x) for x in [1,3,6,8]]))
print FlagoletMatrtin(([hash1(x) for x in [2,4,6,10]]))
print FlagoletMatrtin(([hash1(x) for x in [2,6,8,10]]))
print FlagoletMatrtin(([hash1(x) for x in [3,4,8,10]]))
print "================="
print FlagoletMatrtin(([hash1(x) for x in [2,6,8,9]]))
print FlagoletMatrtin(([hash1(x) for x in [4,6,9,10]]))
print FlagoletMatrtin(([hash1(x) for x in [1,5,8,9]]))
print FlagoletMatrtin(([hash1(x) for x in [1,6,7,10]]))
print "================="
print FlagoletMatrtin(([hash1(x) for x in [1,2,3,9]]))
print FlagoletMatrtin(([hash1(x) for x in [1,3,9,10]]))
print FlagoletMatrtin(([hash1(x) for x in [3,4,8,10]]))
print FlagoletMatrtin(([hash1(x) for x in [4,6,9,10]]))
print "================="
print FlagoletMatrtin(([hash1(x) for x in [1,4,7,9]]))
print FlagoletMatrtin(([hash1(x) for x in [4,6,9,10]]))
print FlagoletMatrtin(([hash1(x) for x in [1,6,7,10]]))
print FlagoletMatrtin(([hash1(x) for x in [4,5,6,10]]))
Explanation: Q4
We wish to use the Flajolet-Martin algorithm of Section 4.4 to count the number of distinct elements in a stream. Suppose that there are ten possible elements, 1, 2,..., 10, that could appear in the stream, but only four of them have actually appeared. To make our estimate of the count of distinct elements, we hash each element to a 4-bit binary number. The element x is hashed to 3x + 7 (modulo 11). For example, element 8 hashes to 3*8+7 = 31, which is 9 modulo 11 (i.e., the remainder of 31/11 is 9). Thus, the 4-bit string for element 8 is 1001.
A set of four of the elements 1 through 10 could give an estimate that is exact (if the estimate is 4), or too high, or too low. You should figure out under what circumstances a set of four elements falls into each of those categories. Then, identify in the list below the set of four elements that gives the exactly correct estimate.
End of explanation |
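To see which 4-element sets can give the exact estimate of 4, it helps to tabulate each element's hash, its 4-bit string and its trailing-zero count; a small sketch reusing the helpers defined above:
for x in range(1, 11):
    h = hash1(x)
    print x, format(h, '04b'), countTrailingZerosInBinary(h)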
927 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Exploration
Step1: Data File
Step2: Plotting Functions
Step3: Descriptions of Data
Step4: there are many items that are viewed more than a day before buying
most items are viewed less than 10 times and for less than a couple minutes (though need to zoom in)
Step5: longest span from viewing to buying is 6 days
Average Time for Items Viewed before Being Bought
Step6: 5% look like they have relatively short sessions (maybe within one sitting)
Step7: zooming in to look at the shortest sessions.
about 7% have sessions <10 minutes
Step8: 20% has sessions less <100 minutes
Example Trajectories
Step9: this is an example trajectory of someone who browsed a few items and then bought item 31.. within the same session.
Step10: here are 50 random subjects and when they view items (could make into an interactive plot)
What's the distribution of items that are bought? Are there some items that are much more popular than others?
Step11: Items bought and viewed per user?
Step12: How many times did the user buy an item he/she already looked at?
Image URLs
How many of the SPUs in our dataset (smaller) have urls in our url.csv?
Step13: items with more than one url?
Step14: these are the same item, just different images.
Step15: These are different thought!!
Step16: the url contains the spu, but I'm not sure what the other numbers are. The goods_num? The category etc?
Step17: we only have the url for 7% of the bought items and 9% of the viewed items
Step18: Are the images we have in this new dataset?
at the moment, I don't know how to find the spu of the images we have.
Viewing DataSet with Feature Data in
Step19: Plotting Trajectories and Seeing How many features we have
Step20: What percent of rows have features?
Step21: What percent of bought items are in the feature list?
Step22: Evaluation Dataset
Step23: some people have longer viewing trajectories. first item was viewed 28hours ahead of time.
Step24: this person bought two items.
Step25: I'd like to make this figure better - easier to tell which rows people are on
Save Notebook | Python Code:
import sys
import os
sys.path.append(os.getcwd()+'/../')
# other
import numpy as np
import glob
import pandas as pd
import ntpath
#keras
from keras.preprocessing import image
# plotting
import seaborn as sns
sns.set_style('white')
import matplotlib.pyplot as plt
%matplotlib inline
# debuggin
from IPython.core.debugger import Tracer
#stats
import scipy.stats as stats
import bqplot.pyplot as bqplt
Explanation: Data Exploration
End of explanation
user_profile = pd.read_csv('../data_user_view_buy/user_profile.csv',sep='\t',header=None)
user_profile.columns = ['user_id','buy_spu','buy_sn','buy_ct3','view_spu','view_sn','view_ct3','time_interval','view_cnt','view_seconds']
string =str(user_profile.buy_spu.as_matrix()[3002])
print(string)
print(string[0:7]+'-'+string[7::])
#print(str(user_profile.buy_spu.as_matrix()[0])[7::])
user_profile.head(10)
print('n rows: {0}'.format(len(user_profile)))
Explanation: Data File
End of explanation
def plot_trajectory_scatter(user_profile,scatter_color_col=None,samplesize=50,size=10,savedir=None):
plt.figure(figsize=(12,1*samplesize/10))
for ui,user_id in enumerate(np.random.choice(user_profile.user_id.unique(),samplesize)):
trajectory = user_profile.loc[user_profile.user_id==user_id,]
time = 0-trajectory.time_interval.as_matrix()/60.0/60.0/24.0
# add image or not
if scatter_color_col is not None:
c = trajectory[scatter_color_col].as_matrix()
else:
c = np.ones(len(trajectory))
plt.scatter(time,np.ones(len(time))*ui,s=size,c=c,edgecolors="none",cmap="jet")
plt.axvline(x=0,linewidth=1)
sns.despine()
plt.title('example user trajectories')
plt.xlabel('days to purchase')
if savedir is not None:
plt.savefig(savedir,dpi=100)
Explanation: Plotting Functions
End of explanation
user_profile.describe()
print('unique users:{0}'.format(len(user_profile.user_id.unique())))
print('unique items viewed:{0}'.format(len(user_profile.view_spu.unique())))
print('unique items bought:{0}'.format(len(user_profile.buy_spu.unique())))
print('unique categories viewed:{0}'.format(len(user_profile.view_ct3.unique())))
print('unique categories bought:{0}'.format(len(user_profile.buy_ct3.unique())))
print('unique brands viewed:{0}'.format(len(user_profile.view_sn.unique())))
print('unique brands bought:{0}'.format(len(user_profile.buy_sn.unique())))
samplesize = 2000
plt.figure(figsize=(12,4))
plt.subplot(1,3,1)
plt.hist(np.random.choice(user_profile.time_interval.as_matrix()/60.0/60.0,samplesize))
sns.despine()
plt.title('sample histogram from "time interval"')
plt.xlabel('hours from view to buy')
plt.ylabel('counts of items')
plt.subplot(1,3,2)
plt.hist(np.random.choice(user_profile.view_cnt.as_matrix(),samplesize))
sns.despine()
plt.title('sample histogram from "view count"')
plt.xlabel('view counts')
plt.ylabel('counts of items')
plt.subplot(1,3,3)
plt.hist(np.random.choice(user_profile.view_seconds.as_matrix(),samplesize))
sns.despine()
plt.title('sample histogram from "view lengths"')
plt.xlabel('view lengths (seconds)')
plt.ylabel('counts of items')
Explanation: Descriptions of Data
End of explanation
print('longest time interval')
print(user_profile.time_interval.min())
print('longest time interval')
print(user_profile.time_interval.max()/60.0/60.0/24)
Explanation: there are many items that are viewed more than a day before buying
most items are viewed less than 10 times and for less than a couple minutes (though need to zoom in)
End of explanation
mean_time_interval = np.array([])
samplesize =1000
for user_id in np.random.choice(user_profile.user_id.unique(),samplesize):
mean_time_interval = np.append(mean_time_interval, user_profile.loc[user_profile.user_id==user_id,'time_interval'].mean())
plt.figure(figsize=(12,3))
plt.hist(mean_time_interval/60.0,bins=200)
sns.despine()
plt.title('sample histogram of average length for user trajectories"')
plt.xlabel('minutes')
plt.ylabel('counts of items out of '+str(samplesize))
Explanation: longest span from viewing to buying is 6 days
Average Time for Items Viewed before Being Bought
End of explanation
plt.figure(figsize=(12,3))
plt.hist(mean_time_interval/60.0,bins=1000)
plt.xlim(0,100)
sns.despine()
plt.title('sample histogram of average length for user trajectories"')
plt.xlabel('minutes')
plt.ylabel('counts of items out of '+str(samplesize))
Explanation: 5% look like they have relatively short sessions (maybe within one sitting)
End of explanation
plt.figure(figsize=(8,3))
plt.hist(mean_time_interval/60.0,bins=200,cumulative=True,normed=True)
plt.xlim(0,2000)
sns.despine()
plt.title('sample cdf of average length for user trajectories"')
plt.xlabel('minutes')
plt.ylabel('counts of items out of '+str(samplesize))
Explanation: zooming in to look at the shortest sessions.
about 7% have sessions <10 minutes
End of explanation
user_id = 1606682799
trajectory = user_profile.loc[user_profile.user_id==user_id,]
trajectory= trajectory.sort_values(by='time_interval',ascending=False)
trajectory
Explanation: 20% have sessions of less than 100 minutes
Example Trajectories
End of explanation
plot_trajectory_scatter(user_profile)
Explanation: this is an example trajectory of someone who browsed a few items and then bought item 31.. within the same session.
End of explanation
samplesize =1000
number_of_times_item_bought = np.empty(samplesize)
number_of_times_item_viewed = np.empty(samplesize)
for ii,item_id in enumerate(np.random.choice(user_profile.view_spu.unique(),samplesize)):
number_of_times_item_bought[ii] = len(user_profile.loc[user_profile.buy_spu==item_id,'user_id'].unique()) # assume the same user would not buy the same product
number_of_times_item_viewed[ii] = len(user_profile.loc[user_profile.view_spu==item_id]) # same user can view the same image more than once for this count
plt.figure(figsize=(12,4))
plt.subplot(1,2,1)
plt.bar(np.arange(len(number_of_times_item_bought)),number_of_times_item_bought)
sns.despine()
plt.title('item popularity (purchases)')
plt.xlabel('item')
plt.ylabel('# of times items were bought')
plt.subplot(1,2,2)
plt.hist(number_of_times_item_bought,bins=100)
sns.despine()
plt.title('item popularity (purchases)')
plt.xlabel('# of times items were bought sample size='+str(samplesize))
plt.ylabel('# of items')
plt.figure(figsize=(12,4))
plt.subplot(1,2,1)
plt.bar(np.arange(len(number_of_times_item_viewed)),number_of_times_item_viewed)
sns.despine()
plt.title('item popularity (views)')
plt.xlabel('item')
plt.ylabel('# of times items were viewed')
plt.subplot(1,2,2)
plt.hist(number_of_times_item_bought,bins=100)
sns.despine()
plt.title('item popularity (views) sample size='+str(samplesize))
plt.xlabel('# of times items were viewed')
plt.ylabel('# of items')
plt.figure(figsize=(6,4))
plt.subplot(1,1,1)
thresh =30
include = number_of_times_item_bought<thresh
plt.scatter(number_of_times_item_viewed[include],number_of_times_item_bought[include],)
(r,p) = stats.pearsonr(number_of_times_item_viewed[include],number_of_times_item_bought[include])
sns.despine()
plt.xlabel('number of times viewed')
plt.ylabel('number of times bought')
plt.title('r='+str(np.round(r,2))+' data truncated buys<'+str(thresh))
Explanation: here are 50 random subjects and when they view items (could make into an interactive plot)
What's the distribution of items that are bought? Are there some items that are much more popular than others?
End of explanation
samplesize =1000
items_bought_per_user = np.empty(samplesize)
items_viewed_per_user = np.empty(samplesize)
for ui,user_id in enumerate(np.random.choice(user_profile.user_id.unique(),samplesize)):
items_bought_per_user[ui] = len(user_profile.loc[user_profile.user_id==user_id,'buy_spu'].unique())
items_viewed_per_user[ui] = len(user_profile.loc[user_profile.user_id==user_id,'view_spu'].unique())
plt.figure(figsize=(12,4))
plt.subplot(1,2,1)
plt.hist(items_bought_per_user)
sns.despine()
plt.title('number of items bought per user (sample of 1000)')
plt.xlabel('# items bought')
plt.ylabel('# users')
plt.subplot(1,2,2)
plt.hist(items_viewed_per_user)
sns.despine()
plt.title('number of items viewed per user (sample of 1000)')
plt.xlabel('# items viewed')
plt.ylabel('# users')
Explanation: Items bought and viewed per user?
End of explanation
urls = pd.read_csv('../../deep-learning-models-master/img/eval_img_url.csv',header=None)
urls.columns = ['spu','url']
print(len(urls))
urls.head(10)
urls[['spu','url']].groupby(['spu']).agg(['count']).head()
Explanation: How many times did the user buy an item he/she already looked at?
Image URLs
How many of the SPUs in our dataset (smaller) have urls in our url.csv?
End of explanation
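The question above (did users buy an item they had already looked at?) is not answered elsewhere in the notebook; one way to read it, sketched on a sample of users from the user_profile frame loaded earlier:
sample_users = np.random.choice(user_profile.user_id.unique(), 500)
bought_a_viewed_item = []
for user_id in sample_users:
    rows = user_profile.loc[user_profile.user_id == user_id]
    viewed = set(rows.view_spu)
    bought_a_viewed_item.append(any(spu in viewed for spu in rows.buy_spu.unique()))
print('fraction of sampled users who bought something they had viewed: {0}'.format(np.mean(bought_a_viewed_item)))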
urls.loc[urls.spu==357870273655002,'url'].as_matrix()
urls.loc[urls.spu==357889732772303,'url'].as_matrix()
Explanation: items with more than one url?
End of explanation
#urls.loc[urls.spu==1016200950427238422,'url']
tmp_urls = urls.loc[urls.spu==1016200950427238422,'url'].as_matrix()
tmp_urls
from urllib import urlretrieve
import time
# scrape images
for i,tmp_url in enumerate(tmp_urls):
urlretrieve(tmp_url, '../data_img_tmp/{}.jpg'.format(i))
#time.sleep(3)
# plot them.
print('two images from url with same spu (ugh)')
plt.figure(figsize=(8,3))
for i,tmp_url in enumerate(tmp_urls):
img_path= '../data_img_tmp/{}.jpg'.format(i)
img = image.load_img(img_path, target_size=(224, 224))
plt.subplot(1,len(tmp_urls),i+1)
plt.imshow(img)
plt.grid(b=False)
Explanation: these are the same item, just different images.
End of explanation
urls.spu[0]
urls.url[0]
Explanation: These are different though!
End of explanation
view_spus = user_profile.view_spu.unique()
contained = 0
spus_with_url = list(urls.spu.as_matrix())
for view_spu in view_spus:
if view_spu in spus_with_url:
contained+=1
print(contained/np.float(len(view_spus)))
buy_spus = user_profile.buy_spu.unique()
contained = 0
spus_with_url = list(urls.spu.as_matrix())
for buy_spu in buy_spus:
if buy_spu in spus_with_url:
contained+=1
print(contained/np.float(len(buy_spus)))
Explanation: the url contains the spu, but I'm not sure what the other numbers are. The goods_num? The category etc?
End of explanation
buy_spu in spus_with_url
len(urls.spu.unique())
len(user_profile.view_spu.unique())
Explanation: we only have the url for 7% of the bought items and 9% of the viewed items
End of explanation
spu_fea = pd.read_pickle("../data_nn_features/spu_fea.pkl") #takes forever to load
spu_fea['view_spu']=spu_fea['spu_id']
spu_fea['view_spu']=spu_fea['spu_id']
user_profile_w_features = user_profile.merge(spu_fea,on='view_spu',how='left')
print('before merge nrows: {0}'.format(len(user_profile)))
print('after merge nrows: {0}'.format(len(user_profile_w_features)))
print('number of items with features: {0}'.format(len(spu_fea)))
spu_fea.head()
# merge with userdata
spu_fea['view_spu']=spu_fea['spu_id']
user_profile_w_features = user_profile.merge(spu_fea,on='view_spu',how='left')
print('before merge nrows: {0}'.format(len(user_profile)))
print('after merge nrows: {0}'.format(len(user_profile_w_features)))
# after the left merge, spu_id is NaN for viewed items without an NN feature vector,
# so has_features should flag the non-null rows
user_profile_w_features['has_features'] = user_profile_w_features['spu_id'].notnull().astype('int')
user_profile_w_features.head()
Explanation: Are the images we have in this new dataset?
at the moment, I don't know how to find the spu of the images we have.
Viewing DataSet with Feature Data in
End of explanation
plot_trajectory_scatter(user_profile_w_features,scatter_color_col='has_features',samplesize=100,size=10,savedir='../../test.png')
Explanation: Plotting Trajectories and Seeing How many features we have
End of explanation
1-(user_profile_w_features['features'].isnull()).mean()
Explanation: What percent of rows have features?
End of explanation
1-user_profile_w_features.groupby(['view_spu'])['spu_id'].apply(lambda x: np.isnan(x)).mean()
buy_spus = user_profile.buy_spu.unique()
contained = 0
spus_with_features = list(spu_fea.spu_id.as_matrix())
for buy_spu in buy_spus:
if buy_spu in spus_with_features:
contained+=1
print(contained/np.float(len(buy_spus)))
contained
len(buy_spus)
view_spus = user_profile.view_spu.unique()
contained = 0
spus_with_features = list(spu_fea.spu_id.as_matrix())
for view_spu in view_spus:
if view_spu in spus_with_features:
contained+=1
print(contained/np.float(len(view_spus)))
len(view_spus)
Explanation: What percent of bought items are in the feature list?
End of explanation
user_profile = pd.read_pickle('../data_user_view_buy/user_profile_items_nonnull_features_20_mins_5_views.pkl')
len(user_profile)
print('unique users:{0}'.format(len(user_profile.user_id.unique())))
print('unique items viewed:{0}'.format(len(user_profile.view_spu.unique())))
print('unique items bought:{0}'.format(len(user_profile.buy_spu.unique())))
print('unique categories viewed:{0}'.format(len(user_profile.view_ct3.unique())))
print('unique categories bought:{0}'.format(len(user_profile.buy_ct3.unique())))
print('unique brands viewed:{0}'.format(len(user_profile.view_sn.unique())))
print('unique brands bought:{0}'.format(len(user_profile.buy_sn.unique())))
#user_profile.groupby(['user_id'])['buy_spu'].nunique()
# how many items bought per user in this dataset?
plt.figure(figsize=(8,3))
plt.hist(user_profile.groupby(['user_id'])['buy_spu'].nunique(),bins=20,normed=False)
sns.despine()
plt.xlabel('number of items bought per user')
plt.ylabel('number of user')
user_profile.loc[user_profile.user_id==4283991208,]
Explanation: Evaluation Dataset
End of explanation
user_profile.loc[user_profile.user_id==6539296,]
Explanation: some people have longer viewing trajectories. first item was viewed 28hours ahead of time.
End of explanation
plot_trajectory_scatter(user_profile,samplesize=100,size=10,savedir='../figures/trajectories_evaluation_dataset.png')
Explanation: this person bought two items.
End of explanation
%%bash
jupyter nbconvert --to slides Exploring_Data.ipynb && mv Exploring_Data.slides.html ../notebook_slides/Exploring_Data_v1.slides.html
jupyter nbconvert --to html Exploring_Data.ipynb && mv Exploring_Data.html ../notebook_htmls/Exploring_Data_v1.html
cp Exploring_Data.ipynb ../notebook_versions/Exploring_Data_v1.ipynb
# push to s3
import sys
import os
sys.path.append(os.getcwd()+'/../')
from src import s3_data_management
s3_data_management.push_results_to_s3('Exploring_Data_v1.html','../notebook_htmls/Exploring_Data_v1.html')
s3_data_management.push_results_to_s3('Exporing_Data_v1.slides.html','../notebook_slides/Exploring_Data_v1.slides.html')
Explanation: I'd like to make this figure better - easier to tell which rows people are on
Save Notebook
End of explanation |
928 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Full control with the ask-and-tell interface
For day-to-day use, we recommend sampling or optimising via one of the "controller" classes, for example the OptimisationController or the MCMCController class.
Internally, these classes create an optimiser or sampler, configure it for use, and then run the procedure, logging important information along the way.
However, there are cases where you might want more control than the controller interface offers, and in those situations we recommend using the chosen method directly, via its "ask-and-tell" interface.
Examples of such scenarios include
Step1: Next, we create an XNES optimiser object and run a simple optimisation
Step2: One advantage of this type of interface is that it gives us the freedom to evaluate the score function in any way we like. For example using parallelisation
Step3: Note that, for our toy problem, the time it takes to set up parallelisation actually outweighs its benefits!
Another thing we can do is track exactly what happens over time
Step4: For a simple 2d problem, we can also graph the trajectory of the optimiser through the parameter space | Python Code:
import pints
import pints.toy as toy
import numpy as np
import matplotlib.pyplot as plt
# Load a forward model
model = toy.LogisticModel()
# Create some toy data
real_parameters = [0.015, 500]
times = np.linspace(0, 1000, 1000)
values = model.simulate(real_parameters, times)
# Add noise
values += np.random.normal(0, 10, values.shape)
# Create an object with links to the model and time series
problem = pints.SingleOutputProblem(model, times, values)
# Select a score function
score = pints.SumOfSquaresError(problem)
# Select some boundaries
boundaries = pints.RectangularBoundaries([0, 200], [1, 1000])
# Choose an initial position
x0 = [0, 700]
Explanation: Full control with the ask-and-tell interface
For day-to-day use, we recommend sampling or optimising via one of the "controller" classes, for example the OptimisationController or the MCMCController class.
Internally, these classes create an optimiser or sampler, configure it for use, and then run the procedure, logging important information along the way.
However, there are cases where you might want more control than the controller interface offers, and in those situations we recommend using the chosen method directly, via its "ask-and-tell" interface.
Examples of such scenarios include:
debugging: if a method doesn't do what you think it should do, it can be useful to dig in
visualisation: using the ask-and-tell interface lets you extract more information to (interactively) visualise what a method is doing
hierarchical sampling: hierarchical schemes can be set up that use one or several sampling methods internally
Example: visualising an optimisation
Below, we show an example of using the ask-and-tell interface to visually inspect an optimisation routine.
End of explanation
# Create an XNES object
xnes = pints.XNES(x0, boundaries=boundaries)
# Run optimisation
for i in range(500):
# Get the next points to evaluate
xs = xnes.ask()
# Evaluate the scores for each point
fxs = [score(x) for x in xs]
# Pass the result back to XNES
xnes.tell(fxs)
# Show the best solution
print(xnes.xbest())
Explanation: Next, we create an XNES optimiser object and run a simple optimisation:
Now we can run a simple optimisation:
End of explanation
# Create an XNES object
xnes = pints.XNES(x0, boundaries=boundaries)
# Create parallel evaluator
e = pints.ParallelEvaluator(score)
# Run optimisation
for i in range(500):
# Get the next points to evaluate
xs = xnes.ask()
# Evaluate the scores in parallel!
fxs = e.evaluate(xs)
# Pass the result back to XNES
xnes.tell(fxs)
# Show the best solution
print(xnes.xbest())
Explanation: One advantage of this type of interface is that it gives us the freedom to evaluate the score function in any way we like. For example using parallelisation:
End of explanation
# Create an XNES object
xnes = pints.XNES(x0, boundaries=boundaries)
# Run optimisation
best = []
for i in range(500):
# Get the next points to evaluate
xs = xnes.ask()
# Evaluate the scores
fxs = [score(x) for x in xs]
# Pass the result back to XNES
xnes.tell(fxs)
# Store the best score
best.append(xnes.fbest())
# Show how the score converges
plt.figure()
plt.xlabel('Iteration')
plt.ylabel('Score')
plt.plot(best)
plt.show()
Explanation: Note that, for our toy problem, the time it takes to set up parallelisation actually outweighs its benefits!
Another thing we can do is track exactly what happens over time:
End of explanation
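To put a rough number on the parallelisation overhead mentioned above, a quick timing sketch (results will vary by machine and problem size):
import time
opt = pints.XNES(x0, boundaries=boundaries)
xs = opt.ask()
evaluator = pints.ParallelEvaluator(score)
start = time.time()
serial_scores = [score(x) for x in xs]
print('serial evaluation: ' + str(time.time() - start) + ' s')
start = time.time()
parallel_scores = evaluator.evaluate(xs)
print('parallel evaluation: ' + str(time.time() - start) + ' s')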
# Create an XNES object
xnes = pints.XNES(x0, boundaries=boundaries)
# Run optimisation
best = []
mean = []
allx = []
for i in range(250):
# Get the next points to evaluate
xs = xnes.ask()
# Evaluate the scores
fxs = [score(x) for x in xs]
# Pass the result back to XNES
xnes.tell(fxs)
# Store the best score
best.append(xnes.fbest())
# Store the mean of the population of points
mean.append(np.mean(xs, axis=0))
# Store all requested points
allx.extend(xs)
mean = np.array(mean)
allx = np.array(allx)
# Plot the optimiser convergence
plt.figure(figsize=(18, 2))
plt.xlabel('Iteration')
plt.ylabel('Score')
plt.plot(best)
# Plot the optimiser trajectory
plt.figure(figsize=(18, 8))
plt.xlabel('Parameter 1')
plt.ylabel('Parameter 2')
plt.axhline(real_parameters[1], color='green')
plt.axvline(real_parameters[0], color='green')
plt.plot(allx[:, 0], allx[:, 1], 'x', color='red', alpha=0.25)
plt.plot(mean[:, 0], mean[:, 1], alpha=0.75)
plt.show()
Explanation: For a simple 2d problem, we can also graph the trajectory of the optimiser through the parameter space
End of explanation |
929 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: The Data
There are some fake data csv files you can read in as dataframes
Step2: Style Sheets
Matplotlib has style sheets you can use to make your plots look a little nicer. These style sheets include plot_bmh,plot_fivethirtyeight,plot_ggplot and more. They basically create a set of style rules that your plots follow. I recommend using them, they make all your plots have the same look and feel more professional. You can even create your own if you want your company's plots to all have the same look (it is a bit tedious to create on though).
Here is how to use them.
Before plt.style.use() your plots look like this
Step3: Call the style
Step4: Now your plots look like this
Step5: Let's stick with the ggplot style and actually show you how to utilize pandas built-in plotting capabilities!
Plot Types
There are several plot types built-in to pandas, most of them statistical plots by nature
Step6: Barplots
Step7: Histograms
Step8: Line Plots
Step9: Scatter Plots
Step10: You can use c to color based off another column value
Use cmap to indicate colormap to use.
For all the colormaps, check out
Step11: Or use s to indicate size based off another column. s parameter needs to be an array, not just the name of a column
Step12: BoxPlots
Step13: Hexagonal Bin Plot
Useful for Bivariate Data, alternative to scatterplot
Step14: Kernel Density Estimation plot (KDE) | Python Code:
import numpy as np
import pandas as pd
%matplotlib inline
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Pandas Built-in Data Visualization
In this lecture we will learn about pandas built-in capabilities for data visualization! It's built-off of matplotlib, but it baked into pandas for easier usage!
Let's take a look!
Imports
End of explanation
df1 = pd.read_csv('df1',index_col=0)
df2 = pd.read_csv('df2')
Explanation: The Data
There are some fake data csv files you can read in as dataframes:
End of explanation
df1['A'].hist()
Explanation: Style Sheets
Matplotlib has style sheets you can use to make your plots look a little nicer. These style sheets include 'bmh', 'fivethirtyeight', 'ggplot' and more. They basically create a set of style rules that your plots follow. I recommend using them, they make all your plots have the same look and feel more professional. You can even create your own if you want your company's plots to all have the same look (it is a bit tedious to create one though).
Here is how to use them.
Before plt.style.use() your plots look like this:
End of explanation
import matplotlib.pyplot as plt
plt.style.use('ggplot')
Explanation: Call the style:
End of explanation
df1['A'].hist()
plt.style.use('bmh')
df1['A'].hist()
plt.style.use('dark_background')
df1['A'].hist()
plt.style.use('fivethirtyeight')
df1['A'].hist()
plt.style.use('ggplot')
Explanation: Now your plots look like this:
End of explanation
df2.plot.area(alpha=0.4)
Explanation: Let's stick with the ggplot style and actually show you how to utilize pandas built-in plotting capabilities!
Plot Types
There are several plot types built-in to pandas, most of them statistical plots by nature:
df.plot.area
df.plot.barh
df.plot.density
df.plot.hist
df.plot.line
df.plot.scatter
df.plot.bar
df.plot.box
df.plot.hexbin
df.plot.kde
df.plot.pie
You can also just call df.plot(kind='hist') or replace that kind argument with any of the key terms shown in the list above (e.g. 'box','barh', etc..)
Let's start going through them!
Area
End of explanation
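As noted above, the kind argument is interchangeable with the named accessors; for example, the same area chart as a one-line sketch:
df2.plot(kind='area', alpha=0.4)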
df2.head()
df2.plot.bar()
df2.plot.bar(stacked=True)
Explanation: Barplots
End of explanation
df1['A'].plot.hist(bins=50)
Explanation: Histograms
End of explanation
df1.plot.line(x=df1.index,y='B',figsize=(12,3),lw=1)
Explanation: Line Plots
End of explanation
df1.plot.scatter(x='A',y='B')
Explanation: Scatter Plots
End of explanation
df1.plot.scatter(x='A',y='B',c='C',cmap='coolwarm')
Explanation: You can use c to color based off another column value
Use cmap to indicate colormap to use.
For all the colormaps, check out: http://matplotlib.org/users/colormaps.html
End of explanation
df1.plot.scatter(x='A',y='B',s=df1['C']*200)
Explanation: Or use s to indicate size based off another column. s parameter needs to be an array, not just the name of a column:
End of explanation
df2.plot.box() # Can also pass a by= argument for groupby
Explanation: BoxPlots
End of explanation
df = pd.DataFrame(np.random.randn(1000, 2), columns=['a', 'b'])
df.plot.hexbin(x='a',y='b',gridsize=25,cmap='Oranges')
Explanation: Hexagonal Bin Plot
Useful for Bivariate Data, alternative to scatterplot:
End of explanation
df2['a'].plot.kde()
df2.plot.density()
Explanation: Kernel Density Estimation plot (KDE)
End of explanation |
930 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise 05
Logistic regression exercise with Titanic data
We'll be working with a dataset from Kaggle's Titanic competition
Step1: Create X and y
Define Pclass and Parch as the features, and Survived as the response.
Step2: Exercise 5.1
Split the data into training and testing sets
Step3: Exercise 5.2
Fit a logistic regression model and examine the coefficients
Confirm that the coefficients make intuitive sense.
Step4: Exercise 5.3
Make predictions on the testing set and calculate the accuracy
Step5: Exercise 5.4
Confusion matrix of Titanic predictions
Step6: Exercise 5.5
Increase sensitivity by lowering the threshold for predicting survival
Create a new classifier by changing the probability threshold to 0.3
What is the new confusion matrix? | Python Code:
import pandas as pd
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/titanic.csv'
titanic = pd.read_csv(url, index_col='PassengerId')
titanic.head()
Explanation: Exercise 05
Logistic regression exercise with Titanic data
We'll be working with a dataset from Kaggle's Titanic competition: data, data dictionary
Goal: Predict survival based on passenger characteristics
The sinking of the RMS Titanic is one of the most infamous shipwrecks in history. On April 15, 1912, during her maiden voyage, the Titanic sank after colliding with an iceberg, killing 1502 out of 2224 passengers and crew. This sensational tragedy shocked the international community and led to better safety regulations for ships.
One of the reasons that the shipwreck led to such loss of life was that there were not enough lifeboats for the passengers and crew. Although there was some element of luck involved in surviving the sinking, some groups of people were more likely to survive than others, such as women, children, and the upper-class.
In this challenge, we ask you to complete the analysis of what sorts of people were likely to survive. In particular, we ask you to apply the tools of machine learning to predict which passengers survived the tragedy.
Read the data into Pandas
End of explanation
feature_cols = ['Pclass', 'Parch']
X = titanic[feature_cols]
Y = titanic.Survived
Explanation: Create X and y
Define Pclass and Parch as the features, and Survived as the response.
End of explanation
import numpy as np
# Insert code here
random_sample = np.random.rand(Y.shape[0])
X_train, X_test = X[random_sample<0.7], X[random_sample>=0.7]
Y_train, Y_test = Y[random_sample<0.7], Y[random_sample>=0.7]
print(Y_train.shape, Y_test.shape)
import numpy as np
from sklearn.cross_validation import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.33, random_state=42)
print(Y_train.shape, Y_test.shape)
Explanation: Exercise 5.1
Split the data into training and testing sets
End of explanation
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression(C=1e9)
mod1=logreg.fit(X_train, Y_train)
mod1.coef_
Explanation: Exercise 5.2
Fit a logistic regression model and examine the coefficients
Confirm that the coefficients make intuitive sense.
End of explanation
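A small sketch pairing each feature with its fitted coefficient makes the sanity check easier; the usual expectation is a negative coefficient for Pclass (a larger class number, i.e. a lower class, reduces the predicted odds of survival) and a small positive one for Parch:
for feature, coef in zip(feature_cols, mod1.coef_[0]):
    print(feature, coef)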
titanic['survive_pred'] = logreg.predict(X)
titanic.head()
survive_pred = logreg.predict(X_test)
(Y_test == survive_pred).mean()
Explanation: Exercise 5.3
Make predictions on the testing set and calculate the accuracy
End of explanation
from sklearn.metrics import confusion_matrix
confusion_matrix(Y_test, survive_pred)
Explanation: Exercise 5.4
Confusion matrix of Titanic predictions
End of explanation
survive_pred_prob=logreg.predict_proba(X_test)[:,1]
predict2=np.where(survive_pred_prob >= 0.7, 1, 0)
confusion_matrix(Y_test, predict2)
(Y_test == predict2).mean()
survive_pred_prob=logreg.predict_proba(X_test)[:,1]
predict2=np.where(survive_pred_prob >= 0.3, 1, 0)
confusion_matrix(Y_test, predict2)
(Y_test == predict2).mean()
Explanation: Exercise 5.5
Increase sensitivity by lowering the threshold for predicting survival
Create a new classifier by changing the probability threshold to 0.3
What is the new confusion matrix?
End of explanation |
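Since the exercise is about sensitivity, a short sketch computing it (sensitivity = TP / (TP + FN), i.e. recall) for the default 0.5 threshold and the lowered 0.3 threshold:
from sklearn.metrics import recall_score
for threshold in [0.5, 0.3]:
    preds = np.where(logreg.predict_proba(X_test)[:, 1] >= threshold, 1, 0)
    print(threshold, recall_score(Y_test, preds))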
931 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
At PyCon in Montreal https
Step1: TRMM
A little googling turned up this gem from the NASA's Tropical Rainfall Measuring Mission.
<img src='http
Step2: Landsat data for Bermuda
A major challenge with satellite data is finding just what images are available.
Landsat has <a href="http
Step3: Summary
So we have succeeded in downloading and plotting one of these bands.
Now time to play spot Bermuda. First impressions are this particular data is likely not high enough resolution to be useful.
A second thing to note is that the NASA sites are, understandably, quite US-centric. To do comprehensive studies of satellite data for Bermuda it looks like it will be worthwhile to create local mirrors of the key data.
In particular, whilst some of these images are quite large, the part covering Bermuda will generally be much more manageable.
Zooming in on Bermuda | Python Code:
from IPython import display
# Chris Waigl, Satellite mapping for everyone.
display.YouTubeVideo('MCHpt1FvblI')
Explanation: At PyCon in Montreal https://us.pycon.org/2015/ Chris Waigl gave a talk about satellite mapping and some of the python tools that help with this
Following the talk I decided to take a look to see what satellite data is available around the time of the hurricanes Fay and Gonzalo, back in October 2014.
The hope was to be able to find suitable before and after images at a high enough resolution to use image processing software to help with damage analysis.
Chris's talk is on youtube (along with all the other PyCon talks) and embedded below.
End of explanation
# 3-D animation of a typhoon from the GPM project
display.YouTubeVideo('kDlTZxejlbI')
Explanation: TRMM
A little googling turned up this gem from the NASA's Tropical Rainfall Measuring Mission.
<img src='http://www.nasa.gov/sites/default/files/thumbnails/image/fay_and_gonzalo_rain_10-20_october_2014_animated.gif'>
Image Credit: NASA/SSAI, Hal Pierce
<a href=http://www.nasa.gov/content/goddard/nasas-trmm-satellite-calculates-hurricanes-fay-and-gonzalo-rainfall/>Full article</a>
This is a seven day animation, covering the period of Fay and Gonzalo.
Assuming rainfall is a good proxy for storm intensity, you can see how Fay intensified as it reached the island and how Gonzalo followed a very similar path, just six days later.
The key question with respect to Bermuda is whether this sort of data is available at higher resolution.
The article does mention that
<a href="http://www.nasa.gov/mission_pages/GPM/main/index.html">Global Precipitation Measurement (GPM) mission product in late 2014</a> will supersede the TRMM project.
The <a href="http://www.nasa.gov/mission_pages/GPM/main/index.html">Nasa GPM page has some wonderful animations of the sort of thing that is possible with GPM.
End of explanation
# lets start with matplotlib
%matplotlib inline
from matplotlib import pyplot
# Chris recommended the rasterio library
import rasterio
infile = '../data/LC80060382014275LGN00_B2.TIF'
# This is pretty simple, just open the TIFF file and you have
# an object that can tell you all sorts of things about the image
data = rasterio.open(infile)
data.width, data.height
# take a look at the meta data
data.meta
# read the bands in the file, there will be as many bands as
# the count above
bands = data.read()
# take a look at the data -- numpy arrays with 16 bit values
bands
# so we have a 3D array, first dimension is the band
bands[0].shape
img = bands[0]
# just take every 10th pixel for now -- imshow does not handle
# large images well.
img = img[::10, ::10]
img.shape
# now plot the thing.
pyplot.imshow(img)
Explanation: Landsat data for Bermuda
A major challenge with satellite data is finding just what images are available.
Landsat has <a href="http://landsat.usgs.gov/Landsat_Search_and_Download.php">
a well documented site created by the USGS</a>
However it is still time consuming to see what is available.
Downloads can be large, roughly 1GB per satellite image. These images generally contain multiple layers for different parts of the spectrum.
To download the larger files you need to register and get an API key.
Once registered I downloaded a couple of images, either side of the October storms.
Below are my attempts to extract and plot the data.
End of explanation
# put it all together
def plot_image(infile, box=None, axes=None):
if axes is None:
fig, axes = pyplot.subplots(1, 2, figsize=(8,8))
if box is None:
box = 1000, 2200, 3700, 5500
a, b, c, d = box
data = rasterio.open(infile)
bands = data.read()
img = bands[0]
img = img[a:b, c:d]
axes.imshow(img)
# plotting images either side of the hurricane
fig, axes = pyplot.subplots(1, 2, figsize=(8,8))
#pyplot.subplot(1,2,1)
x = 3
top = 1300
left = 4000
width = 1200
height = 1000
box = (top, top + height, left, left + width)
infile = '../data/LC80060382014275LGN00_B%d.TIF' % x
plot_image(infile, box=box, axes=axes[0])
#pyplot.subplot(1,2,2)
infile = '../data/LC80060382014307LGN00_B%d.TIF' % x
plot_image(infile, box=box, axes=axes[1])
Explanation: Summary
So we have succeeded in downloading and plotting one of these bands.
Now time to play spot Bermuda. First impressions are this particular data is likely not high enough resolution to be useful.
A second thing to note is that the NASA sites are, understandably, quite US-centric. To do comprehensive studies of satellite data for Bermuda it looks like it will be worthwhile to create local mirrors of the key data.
In particular, whilst some of these images are quite large, the part covering Bermuda will generally be much more manageable.
Zooming in on Bermuda
End of explanation |
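One way to keep the lightweight local mirror suggested above is to save only the cropped Bermuda window rather than the full scene. A sketch, reusing the crop box guessed at earlier (the output filename is arbitrary):
import numpy as np
import rasterio
infile = '../data/LC80060382014275LGN00_B3.TIF'
top, left, height, width = 1300, 4000, 1000, 1200    # same box as the before/after plots above
data = rasterio.open(infile)
crop = data.read()[0][top:top + height, left:left + width]
np.save('../data/bermuda_B3_2014275.npy', crop)       # roughly 2 MB, versus ~1GB for a full download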
932 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Backpropagation
Instructions
In this assignment, you will train a neural network to draw a curve.
The curve takes one input variable, the amount travelled along the curve from 0 to 1, and returns 2 outputs, the 2D coordinates of the position of points on the curve.
To help capture the complexity of the curve, we shall use two hidden layers in our network with 6 and 7 neurons respectively.
You will be asked to complete functions that calculate the Jacobian of the cost function, with respect to the weights and biases of the network. Your code will form part of a stochastic steepest descent algorithm that will train your network.
Matrices in Python
Recall from assignments in the previous course in this specialisation that matrices can be multiplied together in two ways.
Element wise
Step1: Backpropagation
In the next cells, you will be asked to complete functions for the Jacobian of the cost function with respect to the weights and biases.
We will start with layer 3, which is the easiest, and work backwards through the layers.
We'll define our Jacobians as,
$$ \mathbf{J}_{\mathbf{W}^{(3)}} = \frac{\partial C}{\partial \mathbf{W}^{(3)}} $$
$$ \mathbf{J}_{\mathbf{b}^{(3)}} = \frac{\partial C}{\partial \mathbf{b}^{(3)}} $$
etc., where $C$ is the average cost function over the training set. i.e.,
$$ C = \frac{1}{N}\sum_k C_k $$
You calculated the following in the practice quizzes,
$$ \frac{\partial C}{\partial \mathbf{W}^{(3)}} =
\frac{\partial C}{\partial \mathbf{a}^{(3)}}
\frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{z}^{(3)}}
\frac{\partial \mathbf{z}^{(3)}}{\partial \mathbf{W}^{(3)}}
,$$
for the weight, and similarly for the bias,
$$ \frac{\partial C}{\partial \mathbf{b}^{(3)}} =
\frac{\partial C}{\partial \mathbf{a}^{(3)}}
\frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{z}^{(3)}}
\frac{\partial \mathbf{z}^{(3)}}{\partial \mathbf{b}^{(3)}}
.$$
With the partial derivatives taking the form,
$$ \frac{\partial C}{\partial \mathbf{a}^{(3)}} = 2(\mathbf{a}^{(3)} - \mathbf{y}) $$
$$ \frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{z}^{(3)}} = \sigma'({z}^{(3)})$$
$$ \frac{\partial \mathbf{z}^{(3)}}{\partial \mathbf{W}^{(3)}} = \mathbf{a}^{(2)}$$
$$ \frac{\partial \mathbf{z}^{(3)}}{\partial \mathbf{b}^{(3)}} = 1$$
We'll do the J_W3 ($\mathbf{J}_{\mathbf{W}^{(3)}}$) function for you, so you can see how it works.
You should then be able to adapt the J_b3 function, with help, yourself.
Step2: We'll next do the Jacobian for the Layer 2. The partial derivatives for this are,
$$ \frac{\partial C}{\partial \mathbf{W}^{(2)}} =
\frac{\partial C}{\partial \mathbf{a}^{(3)}}
\left(
\frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{a}^{(2)}}
\right)
\frac{\partial \mathbf{a}^{(2)}}{\partial \mathbf{z}^{(2)}}
\frac{\partial \mathbf{z}^{(2)}}{\partial \mathbf{W}^{(2)}}
,$$
$$ \frac{\partial C}{\partial \mathbf{b}^{(2)}} =
\frac{\partial C}{\partial \mathbf{a}^{(3)}}
\left(
\frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{a}^{(2)}}
\right)
\frac{\partial \mathbf{a}^{(2)}}{\partial \mathbf{z}^{(2)}}
\frac{\partial \mathbf{z}^{(2)}}{\partial \mathbf{b}^{(2)}}
.$$
This is very similar to the previous layer, with two exceptions
Step3: Layer 1 is very similar to Layer 2, but with an addition partial derivative term.
$$ \frac{\partial C}{\partial \mathbf{W}^{(1)}} =
\frac{\partial C}{\partial \mathbf{a}^{(3)}}
\left(
\frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{a}^{(2)}}
\frac{\partial \mathbf{a}^{(2)}}{\partial \mathbf{a}^{(1)}}
\right)
\frac{\partial \mathbf{a}^{(1)}}{\partial \mathbf{z}^{(1)}}
\frac{\partial \mathbf{z}^{(1)}}{\partial \mathbf{W}^{(1)}}
,$$
$$ \frac{\partial C}{\partial \mathbf{b}^{(1)}} =
\frac{\partial C}{\partial \mathbf{a}^{(3)}}
\left(
\frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{a}^{(2)}}
\frac{\partial \mathbf{a}^{(2)}}{\partial \mathbf{a}^{(1)}}
\right)
\frac{\partial \mathbf{a}^{(1)}}{\partial \mathbf{z}^{(1)}}
\frac{\partial \mathbf{z}^{(1)}}{\partial \mathbf{b}^{(1)}}
.$$
You should be able to adapt lines from the previous cells to complete both the weight and bias Jacobian.
Step4: Test your code before submission
To test the code you've written above, run all previous cells (select each cell, then press the play button [ ▶| ] or press shift-enter).
You can then use the code below to test out your function.
You don't need to submit these cells; you can edit and run them as much as you like.
First, we generate training data, and generate a network with randomly assigned weights and biases.
Step5: Next, if you've implemented the assignment correctly, the following code will iterate through a steepest descent algorithm using the Jacobians you have calculated.
The function will plot the training data (in green), and your neural network solutions in pink for each iteration, and orange for the last output.
It takes about 50,000 iterations to train this network.
We can split this up though - 10,000 iterations should take about a minute to run.
Run the line below as many times as you like. | Python Code:
%run "readonly/BackpropModule.ipynb"
# PACKAGE
import numpy as np
import matplotlib.pyplot as plt
# PACKAGE
# First load the worksheet dependencies.
# Here is the activation function and its derivative.
sigma = lambda z : 1 / (1 + np.exp(-z))
d_sigma = lambda z : np.cosh(z/2)**(-2) / 4
# This function initialises the network with it's structure, it also resets any training already done.
def reset_network (n1 = 6, n2 = 7, random=np.random) :
global W1, W2, W3, b1, b2, b3
W1 = random.randn(n1, 1) / 2
W2 = random.randn(n2, n1) / 2
W3 = random.randn(2, n2) / 2
b1 = random.randn(n1, 1) / 2
b2 = random.randn(n2, 1) / 2
b3 = random.randn(2, 1) / 2
# This function feeds forward each activation to the next layer. It returns all weighted sums and activations.
def network_function(a0) :
z1 = W1 @ a0 + b1
a1 = sigma(z1)
z2 = W2 @ a1 + b2
a2 = sigma(z2)
z3 = W3 @ a2 + b3
a3 = sigma(z3)
return a0, z1, a1, z2, a2, z3, a3
# This is the cost function of a neural network with respect to a training set.
def cost(x, y) :
return np.linalg.norm(network_function(x)[-1] - y)**2 / x.size
Explanation: Backpropagation
Instructions
In this assignment, you will train a neural network to draw a curve.
The curve takes one input variable, the amount travelled along the curve from 0 to 1, and returns 2 outputs, the 2D coordinates of the position of points on the curve.
To help capture the complexity of the curve, we shall use two hidden layers in our network with 6 and 7 neurons respectively.
You will be asked to complete functions that calculate the Jacobian of the cost function, with respect to the weights and biases of the network. Your code will form part of a stochastic steepest descent algorithm that will train your network.
Matrices in Python
Recall from assignments in the previous course in this specialisation that matrices can be multiplied together in two ways.
Element wise: when two matrices have the same dimensions, matrix elements in the same position in each matrix are multiplied together
In python this uses the '$*$' operator.
python
A = B * C
Matrix multiplication: when the number of columns in the first matrix is the same as the number of rows in the second.
In python this uses the '$@$' operator
python
A = B @ C
This assignment will not test which ones to use where, but it will use both in the starter code presented to you.
There is no need to change these or worry about their specifics.
How to submit
To complete the assignment, edit the code in the cells below where you are told to do so.
Once you are finished and happy with it, press the Submit Assignment button at the top of this worksheet.
Test your code using the cells at the bottom of the notebook before you submit.
Please don't change any of the function names, as these will be checked by the grading script.
Feed forward
In the following cell, we will define functions to set up our neural network.
Namely an activation function, $\sigma(z)$, it's derivative, $\sigma'(z)$, a function to initialise weights and biases, and a function that calculates each activation of the network using feed-forward.
Recall the feed-forward equations,
$$ \mathbf{a}^{(n)} = \sigma(\mathbf{z}^{(n)}) $$
$$ \mathbf{z}^{(n)} = \mathbf{W}^{(n)}\mathbf{a}^{(n-1)} + \mathbf{b}^{(n)} $$
In this worksheet we will use the logistic function as our activation function, rather than the more familiar $\tanh$.
$$ \sigma(\mathbf{z}) = \frac{1}{1 + \exp(-\mathbf{z})} $$
There is no need to edit the following cells.
They do not form part of the assessment.
You may wish to study how it works though.
Run the following cells before continuing.
End of explanation
# GRADED FUNCTION
# Jacobian for the third layer weights. There is no need to edit this function.
def J_W3 (x, y) :
# First get all the activations and weighted sums at each layer of the network.
a0, z1, a1, z2, a2, z3, a3 = network_function(x)
# We'll use the variable J to store parts of our result as we go along, updating it in each line.
# Firstly, we calculate dC/da3, using the expressions above.
J = 2 * (a3 - y)
# Next multiply the result we've calculated by the derivative of sigma, evaluated at z3.
J = J * d_sigma(z3)
# Then we take the dot product (along the axis that holds the training examples) with the final partial derivative,
# i.e. dz3/dW3 = a2
# and divide by the number of training examples, for the average over all training examples.
J = J @ a2.T / x.size
# Finally return the result out of the function.
return J
# In this function, you will implement the jacobian for the bias.
# As you will see from the partial derivatives, only the last partial derivative is different.
# The first two partial derivatives are the same as previously.
# ===YOU SHOULD EDIT THIS FUNCTION===
def J_b3 (x, y) :
# As last time, we'll first set up the activations.
a0, z1, a1, z2, a2, z3, a3 = network_function(x)
# Next you should implement the first two partial derivatives of the Jacobian.
# ===COPY TWO LINES FROM THE PREVIOUS FUNCTION TO SET UP THE FIRST TWO JACOBIAN TERMS===
J = 2 * (a3 - y)
J = J * d_sigma(z3)
# For the final line, we don't need to multiply by dz3/db3, because that is multiplying by 1.
# We still need to sum over all training examples however.
# There is no need to edit this line.
J = np.sum(J, axis=1, keepdims=True) / x.size
return J
Explanation: Backpropagation
In the next cells, you will be asked to complete functions for the Jacobian of the cost function with respect to the weights and biases.
We will start with layer 3, which is the easiest, and work backwards through the layers.
We'll define our Jacobians as,
$$ \mathbf{J}_{\mathbf{W}^{(3)}} = \frac{\partial C}{\partial \mathbf{W}^{(3)}} $$
$$ \mathbf{J}_{\mathbf{b}^{(3)}} = \frac{\partial C}{\partial \mathbf{b}^{(3)}} $$
etc., where $C$ is the average cost function over the training set. i.e.,
$$ C = \frac{1}{N}\sum_k C_k $$
You calculated the following in the practice quizzes,
$$ \frac{\partial C}{\partial \mathbf{W}^{(3)}} =
\frac{\partial C}{\partial \mathbf{a}^{(3)}}
\frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{z}^{(3)}}
\frac{\partial \mathbf{z}^{(3)}}{\partial \mathbf{W}^{(3)}}
,$$
for the weight, and similarly for the bias,
$$ \frac{\partial C}{\partial \mathbf{b}^{(3)}} =
\frac{\partial C}{\partial \mathbf{a}^{(3)}}
\frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{z}^{(3)}}
\frac{\partial \mathbf{z}^{(3)}}{\partial \mathbf{b}^{(3)}}
.$$
With the partial derivatives taking the form,
$$ \frac{\partial C}{\partial \mathbf{a}^{(3)}} = 2(\mathbf{a}^{(3)} - \mathbf{y}) $$
$$ \frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{z}^{(3)}} = \sigma'({z}^{(3)})$$
$$ \frac{\partial \mathbf{z}^{(3)}}{\partial \mathbf{W}^{(3)}} = \mathbf{a}^{(2)}$$
$$ \frac{\partial \mathbf{z}^{(3)}}{\partial \mathbf{b}^{(3)}} = 1$$
We'll do the J_W3 ($\mathbf{J}_{\mathbf{W}^{(3)}}$) function for you, so you can see how it works.
You should then be able to adapt the J_b3 function, with help, yourself.
End of explanation
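If you want to sanity-check a Jacobian such as J_W3 numerically, a central-difference comparison on a single weight entry can be sketched as below; this is illustrative only and assumes the worksheet's network_function, J_W3 and global weight matrix W3, plus a cost consistent with the 2*(a3 - y)/x.size factors used above.
def cost_sketch(x, y):
    # Average cost implied by the Jacobian code above.
    a3 = network_function(x)[-1]
    return np.sum((a3 - y)**2) / x.size

def check_J_W3_entry(x, y, i, j, eps=1e-6):
    analytic = J_W3(x, y)[i, j]
    W3[i, j] += eps
    c_plus = cost_sketch(x, y)
    W3[i, j] -= 2 * eps
    c_minus = cost_sketch(x, y)
    W3[i, j] += eps  # restore the original weight
    return analytic, (c_plus - c_minus) / (2 * eps)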
# GRADED FUNCTION
# Compare this function to J_W3 to see how it changes.
# There is no need to edit this function.
def J_W2 (x, y) :
#The first two lines are identical to in J_W3.
a0, z1, a1, z2, a2, z3, a3 = network_function(x)
J = 2 * (a3 - y)
# the next two lines implement da3/da2, first σ' and then W3.
J = J * d_sigma(z3)
J = (J.T @ W3).T
# then the final lines are the same as in J_W3 but with the layer number bumped down.
J = J * d_sigma(z2)
J = J @ a1.T / x.size
return J
# As previously, fill in all the incomplete lines.
# ===YOU SHOULD EDIT THIS FUNCTION===
def J_b2 (x, y) :
a0, z1, a1, z2, a2, z3, a3 = network_function(x)
J = 2 * (a3 - y)
J = J * d_sigma(z3)
J = (J.T @ W3).T
J = J * d_sigma(z2)
J = np.sum(J, axis=1, keepdims=True) / x.size
return J
Explanation: We'll next do the Jacobian for the Layer 2. The partial derivatives for this are,
$$ \frac{\partial C}{\partial \mathbf{W}^{(2)}} =
\frac{\partial C}{\partial \mathbf{a}^{(3)}}
\left(
\frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{a}^{(2)}}
\right)
\frac{\partial \mathbf{a}^{(2)}}{\partial \mathbf{z}^{(2)}}
\frac{\partial \mathbf{z}^{(2)}}{\partial \mathbf{W}^{(2)}}
,$$
$$ \frac{\partial C}{\partial \mathbf{b}^{(2)}} =
\frac{\partial C}{\partial \mathbf{a}^{(3)}}
\left(
\frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{a}^{(2)}}
\right)
\frac{\partial \mathbf{a}^{(2)}}{\partial \mathbf{z}^{(2)}}
\frac{\partial \mathbf{z}^{(2)}}{\partial \mathbf{b}^{(2)}}
.$$
This is very similar to the previous layer, with two exceptions:
* There is a new partial derivative, in parentheses, $\frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{a}^{(2)}}$
* The terms after the parentheses are now one layer lower.
Recall the new partial derivative takes the following form,
$$ \frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{a}^{(2)}} =
\frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{z}^{(3)}}
\frac{\partial \mathbf{z}^{(3)}}{\partial \mathbf{a}^{(2)}} =
\sigma'(\mathbf{z}^{(3)})
\mathbf{W}^{(3)}
$$
To show how this changes things, we will implement the Jacobian for the weight again and ask you to implement it for the bias.
End of explanation
# GRADED FUNCTION
# Fill in all incomplete lines.
# ===YOU SHOULD EDIT THIS FUNCTION===
def J_W1 (x, y) :
a0, z1, a1, z2, a2, z3, a3 = network_function(x)
J = 2 * (a3 - y)
J = J * d_sigma(z3)
J = (J.T @ W3).T
J = J * d_sigma(z2)
J = (J.T @ W2).T
J = J * d_sigma(z1)
J = J @ a0.T / x.size
return J
# Fill in all incomplete lines.
# ===YOU SHOULD EDIT THIS FUNCTION===
def J_b1 (x, y) :
a0, z1, a1, z2, a2, z3, a3 = network_function(x)
J = 2 * (a3 - y)
J = J * d_sigma(z3)
J = (J.T @ W3).T
J = J * d_sigma(z2)
J = (J.T @ W2).T
J = J * d_sigma(z1)
J = np.sum(J, axis=1, keepdims=True) / x.size
return J
Explanation: Layer 1 is very similar to Layer 2, but with an additional partial derivative term.
$$ \frac{\partial C}{\partial \mathbf{W}^{(1)}} =
\frac{\partial C}{\partial \mathbf{a}^{(3)}}
\left(
\frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{a}^{(2)}}
\frac{\partial \mathbf{a}^{(2)}}{\partial \mathbf{a}^{(1)}}
\right)
\frac{\partial \mathbf{a}^{(1)}}{\partial \mathbf{z}^{(1)}}
\frac{\partial \mathbf{z}^{(1)}}{\partial \mathbf{W}^{(1)}}
,$$
$$ \frac{\partial C}{\partial \mathbf{b}^{(1)}} =
\frac{\partial C}{\partial \mathbf{a}^{(3)}}
\left(
\frac{\partial \mathbf{a}^{(3)}}{\partial \mathbf{a}^{(2)}}
\frac{\partial \mathbf{a}^{(2)}}{\partial \mathbf{a}^{(1)}}
\right)
\frac{\partial \mathbf{a}^{(1)}}{\partial \mathbf{z}^{(1)}}
\frac{\partial \mathbf{z}^{(1)}}{\partial \mathbf{b}^{(1)}}
.$$
You should be able to adapt lines from the previous cells to complete both the weight and bias Jacobian.
End of explanation
x, y = training_data()
reset_network()
Explanation: Test your code before submission
To test the code you've written above, run all previous cells (select each cell, then press the play button [ ▶| ] or press shift-enter).
You can then use the code below to test out your function.
You don't need to submit these cells; you can edit and run them as much as you like.
First, we generate training data, and generate a network with randomly assigned weights and biases.
End of explanation
plot_training(x, y, iterations=50000, aggression=7, noise=1)
Explanation: Next, if you've implemented the assignment correctly, the following code will iterate through a steepest descent algorithm using the Jacobians you have calculated.
The function will plot the training data (in green), and your neural network solutions in pink for each iteration, and orange for the last output.
It takes about 50,000 iterations to train this network.
We can split this up though - 10,000 iterations should take about a minute to run.
Run the line below as many times as you like.
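For intuition, the update loop hidden inside plot_training is, roughly, plain steepest descent on all six parameter arrays. The sketch below is hedged: the real helper also plots, scales the step by the "aggression" argument and injects noise, none of which is reproduced here.
def train_sketch(x, y, iterations=10000, lr=7):
    # Illustrative steepest-descent loop using the Jacobians defined above.
    global W1, W2, W3, b1, b2, b3
    for _ in range(iterations):
        W1 -= lr * J_W1(x, y)
        W2 -= lr * J_W2(x, y)
        W3 -= lr * J_W3(x, y)
        b1 -= lr * J_b1(x, y)
        b2 -= lr * J_b2(x, y)
        b3 -= lr * J_b3(x, y)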
End of explanation |
933 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Datasets
Machine learning datasets / Example 1
Step1: (2) Introducing the dataset
digits = datasets.load_digits() stores a dict-like object in digits; we can use the code below to inspect the data it contains
Step2: | Field | Description |
| -- | -- |
| ('images', (1797L, 8L, 8L))| 1797 images in total, each of size 8x8 |
| ('data', (1797L, 64L)) | data flattens each 8x8 matrix into a one-dimensional vector of 64 elements |
| ('target_names', (10L,)) | the 10 class labels [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] |
| DESCR | description of the dataset |
| ('target', (1797L,))| records which digit each of the 1797 images represents |
Next we use the commands below to inspect the data; the actual digit corresponding to each image is stored in the digits.target variable
Step3: | Python Code:
#This line is specific to the IPython notebook interface and can be removed in other environments
%matplotlib inline
from sklearn import datasets
import matplotlib.pyplot as plt
#load the digits dataset
digits = datasets.load_digits()
#plot the last image in the dataset
plt.figure(1, figsize=(3, 3))
plt.imshow(digits.images[-1], cmap=plt.cm.gray_r, interpolation='nearest')
plt.show()
Explanation: Datasets
Machine learning datasets / Example 1: The digits dataset
http://scikit-learn.org/stable/auto_examples/datasets/plot_digits_last_image.html
The purpose of this example is to introduce how to work with the sample datasets bundled with scikit-learn; it is particularly suitable for beginners and for teaching.
(1) Import the libraries and the built-in handwritten digits dataset
End of explanation
for key,value in digits.items() :
try:
print (key,value.shape)
except:
print (key)
Explanation: (2) Introducing the dataset
digits = datasets.load_digits() stores a dict-like object in digits; we can use the code below to inspect the data it contains
End of explanation
images_and_labels = list(zip(digits.images, digits.target))
for index, (image, label) in enumerate(images_and_labels[:4]):
plt.subplot(2, 4, index + 1)
plt.axis('off')
plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
plt.title('Training: %i' % label)
Explanation: | Field | Description |
| -- | -- |
| ('images', (1797L, 8L, 8L))| 1797 images in total, each of size 8x8 |
| ('data', (1797L, 64L)) | data flattens each 8x8 matrix into a one-dimensional vector of 64 elements |
| ('target_names', (10L,)) | the 10 class labels [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] |
| DESCR | description of the dataset |
| ('target', (1797L,))| records which digit each of the 1797 images represents |
Next we use the commands below to inspect the data; the actual digit corresponding to each image is stored in the digits.target variable
End of explanation
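To confirm concretely that each 64-element row of data is just the flattened 8x8 image, a quick illustrative check (not part of the original example):
import numpy as np
# The first row of `data`, reshaped to 8x8, equals the first entry of `images`.
print(np.array_equal(digits.data[0].reshape(8, 8), digits.images[0]))
# ...and `target` holds the digit that image represents.
print(digits.target[0])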
#Next we try to display the description of this machine learning dataset
print(digits['DESCR'])
Explanation:
End of explanation |
934 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2D PSV wave propagation in a homogeneous block model
The propagation of waves in a general elastic medium can be described by a system of coupled linear partial differential equations. They consist of the equations of motion
$$\rm{\rho \frac{\partial v_i}{\partial t} = \frac{\partial \sigma_{ij}}{\partial x_j} + fs_i}$$
which simply state that the momentum of the particles in the medium, the product of density $\rm{\rho}$ and the displacement velocity $\rm{v_i}$, can be changed by surface forces, described by the stress tensor $\rm{\sigma_{ij}}$ or body forces $\rm{fs_i}$. These equations describe a general medium, like gas, fluid, solid or plasma. The material specific properties are introduced by additional equations which describe how the medium reacts when a certain force is applied. In the isotropic elastic case this can be described by a linear stress-strain relationship
Step1: Because we are currently dealing with a homogeneous block model, we don't have to care about the arithmetic and harmonic averaging of density and shear modulus, respectively. In the next step we define the FD updates for particle velocity and stresses and assemble the 2D PSV FD code.
Update particle velocities
Step2: Update stresses
Step3: Assemble the 2D PSV code
Step4: Let's run the 2D PSV code for a homogeneous block model | Python Code:
# load all necessary libraries
import numpy
from matplotlib import pyplot, cm
from mpl_toolkits.mplot3d import Axes3D
from numba import jit
%matplotlib notebook
# spatial discretization
nx = 601
ny = 601
dh = 5.0
x = numpy.linspace(0, dh*(nx-1), nx)
y = numpy.linspace(0, dh*(ny-1), ny)
X, Y = numpy.meshgrid(x, y)
# time discretization
T = 0.55
dt = 0.6e-3
nt = numpy.floor(T/dt)
nt = nt.astype(int)
# snapshot frequency [timesteps]
isnap = 10
# wavefield clip
clip = 2.5e-2
# define model parameters
rho = 7100.0
vp = 2955.0
vs = 2362.0
mu = rho * vs * vs
lam = rho * vp * vp - 2 * mu
Explanation: 2D PSV wave propagation in a homogeneous block model
The propagation of waves in a general elastic medium can be described by a system of coupled linear partial differential equations. They consist of the equations of motion
$$\rm{\rho \frac{\partial v_i}{\partial t} = \frac{\partial \sigma_{ij}}{\partial x_j} + fs_i}$$
which simply state that the momentum of the particles in the medium, the product of density $\rm{\rho}$ and the displacement velocity $\rm{v_i}$, can be changed by surface forces, described by the stress tensor $\rm{\sigma_{ij}}$ or body forces $\rm{fs_i}$. These equations describe a general medium, like gas, fluid, solid or plasma. The material specific properties are introduced by additional equations which describe how the medium reacts when a certain force is applied. In the isotropic elastic case this can be described by a linear stress-strain relationship:
$$\begin{split}
\rm{\sigma_{ij}}&\rm{=\lambda \theta \delta_{ij} + 2 \mu \epsilon_{ij}}\\
\rm{\epsilon_{ij}}&\rm{=\frac{1}{2}\biggl(\frac{\partial u_i}{\partial x_j}+\frac{\partial u_j}{\partial x_i}\biggr)}
\end{split}
$$
where $\rm{\lambda}$ and $\rm{\mu}$ are the Lamé parameters, $\rm{\epsilon_{ij}}$ the strain tensor, $\rm{\theta = \epsilon_{11} + \epsilon_{22} + \epsilon_{33}}$ the cubic dilatation, $\rm{\delta_{ij}}$ the Kronecker Delta and $\rm{u_i}$ the displacement. By taking the time derivative of the stress-strain relationship and the strain tensor, we can derive the following partial differential equations to describe wave propagtion in a general 3D isotropic elastic medium:
$$\begin{split}
\rm{\rho \frac{\partial v_i}{\partial t}} &\rm{= \frac{\partial \sigma_{ij}}{\partial x_j} + fs_i}\\
\rm{\frac{\partial \sigma_{ij}}{\partial t}} &\rm{= \lambda \frac{\partial \theta}{\partial t} \delta_{ij} + 2 \mu \frac{\partial \epsilon_{ij}}{\partial t}}\\
\rm{\frac{\partial \epsilon_{ij}}{\partial t}}&\rm{=\frac{1}{2}\biggl(\frac{\partial v_i}{\partial x_j}+\frac{\partial v_j}{\partial x_i}\biggr)}
\end{split}
$$
Equations of motion for 2D PSV wave propagation in an isotropic elastic medium
In case of certain symmetries and model limitations, the general 3D seismic wave propagation in isotropic elastic media can be significantly simplified. Assuming only non-zero particle displacements in the x-y-plane (PSV problem), where x denotes the horizontal distance and y the depth, wave propagation can be described by the following system of partial differential equations:
$$\rm{\rho \frac{\partial v_x}{\partial t} = \frac{\partial \sigma_{xx}}{\partial x} + \frac{\partial \sigma_{xy}}{\partial y} + fs_x,} $$
$$\rm{\rho \frac{\partial v_y}{\partial t} = \frac{\partial \sigma_{xy}}{\partial x} + \frac{\partial \sigma_{yy}}{\partial y} + fs_y,} $$
$$\rm{\frac{\partial \sigma_{xx}}{\partial t} = (\lambda + 2 \mu) \frac{\partial v_{x}}{\partial x} + \lambda \frac{\partial v_{y}}{\partial y},} $$
$$\rm{\frac{\partial \sigma_{yy}}{\partial t} = \lambda \frac{\partial v_{x}}{\partial x} + (\lambda + 2 \mu) \frac{\partial v_{y}}{\partial y},} $$
$$\rm{\frac{\partial \sigma_{xy}}{\partial t} = \mu \biggl(\frac{\partial v_{x}}{\partial y} + \frac{\partial v_{y}}{\partial x}\biggr),} $$
where $\rm{\rho}$ is the density, $\rm{\lambda}$ and $\rm{\mu}$ the Lamé parameters, $\rm{(v_x,\; v_y)}$ particle velocity vector, $\rm{\sigma_{xx}}$, $\rm{\sigma_{yy}}$, $\rm{\sigma_{xy}}$ stress tensor components, ($\rm{fs_x}$, $\rm{fs_y}$) directed body force vector, respectively.
Finite difference discretization on a staggered grid
For the numerical solution, the elastic equations of motion have to be discretized in time and space on a grid. The particle velocities $\rm{\mathbf{v}}$, the stresses $\rm{\sigma_{ij}}$, the Lamé parameters $\rm{\lambda}$ and $\rm{\mu}$ are calculated and defined at discrete Cartesian coordinates $\rm{x=i\; dh}$, $\rm{y=j\; dh}$ and discrete times $\rm{t=n\; dt}$.
$\rm{dh}$ denotes the spatial distance between two adjacent grid points and $\rm{dt}$ the difference between two successive time steps. Therefore every grid point is located in the interval $\rm{i \in N | [1,nx]}$, $\rm{j \in N | [1,ny]}$ and $\rm{n \in N | [1,nt]}$, where
$\rm{nx}$, $\rm{ny}$ and $\rm{nt}$ are the number of discrete spatial grid points and time steps, respectively.
Finally the partial derivatives are replaced by finite-difference (FD) operators. Two types of operators can be distinguished, forward and backward operators $\rm{D^+,\;D^-}$. The derivative of a function f(x) with respect to a variable x can be approximated by the following 2nd order operators:
$$\rm{D^+_x f = \frac{f_{i+1}-f_{i}}{dh} \hspace{1 cm} \text{forward operator}} $$
$$\rm{D^-_x f = \frac{f_{i}-f_{i-1}}{dh} \hspace{1 cm} \text{backward operator}} $$
To calculate the spatial derivatives of the wavefield variables at the correct positions with respect to each other, the variables are not placed on the same grid points, but staggered by half of the spatial grid point distance (Virieux 1986).
Figure 1 shows the distribution of the material parameters and wavefield variables on the spatial grid.
To guarantee the stability of the staggered grid code, the Lamé parameter $\rm{\mu}$ and density $\rm{\rho}$ have to be averaged harmonically and arithmetically (Moczo et al. 2004, Bohlen 2006), respectively
$$\rm{\mu_{xy}(j+1/2,i+1/2)=\biggl[\frac{1}{4}\biggl(\mu^{-1}_{j,i}+\mu^{-1}_{j,i+1}+\mu^{-1}_{j+1,i+1}+\mu^{-1}_{j+1,i}\biggr)\biggr]^{-1}} $$
$$\rm{\rho_x(j,i+1/2) = 0.5\, (\rho_{j,i+1}+\rho_{j,i})} $$
$$\rm{\rho_y(j+1/2,i) = 0.5\, (\rho_{j+1,i}+\rho_{j,i})} $$
Discretized equations of motion
In the next step we discretize the equations of motion for the 2D PSV problem using a staggered finite difference approach. First, we discretize the x-component of the momentum equation by approximating the spatial derivatives
$$\rm{\frac{\partial \sigma_{xx}}{\partial x} \approx \frac{\sigma_{xx}(j,i+1) - \sigma_{xx}(j,i)}{dh}}, \rm{\frac{\partial \sigma_{xy}}{\partial y} \approx \frac{\sigma_{xy}(j+1/2, i) - \sigma_{xy}(j-1/2,i)}{dh}} $$
and the LHS of the x-momentum equation
$$\rho \rm{\frac{\partial v_x}{\partial t} \approx \rho_x(j,i+1/2) \frac{v_x^{n+1/2}(j,i+1/2) - v_x^{n-1/2}(j,i+1/2)}{dt}} $$
Inserting in the partial differential equation
$$\rm{\rho \frac{\partial v_x}{\partial t} = \frac{\partial \sigma_{xx}}{\partial x} + \frac{\partial \sigma_{xy}}{\partial y}} $$
leads to
$$\rho_x(j,i+1/2) \frac{v_x^{n+1/2}(j,i+1/2) - v_x^{n-1/2}(j,i+1/2)}{dt} = \frac{\sigma_{xx}^n(j,i+1) - \sigma_{xx}^n(j,i)}{dh} + \frac{\sigma_{xy}^n(j+1/2, i) - \sigma_{xy}^n(j-1/2,i)}{dh} $$
After rearranging for $v_x^{n+1/2}(j,i+1/2)$ we get the following explicit FD scheme for the x-component of the momentum equation:
$$\rm{v_x^{n+1/2}(j,i+1/2) = v_x^{n-1/2}(j,i+1/2) + \frac{dt}{dh\cdot \rho_x(j,i+1/2)}\cdot \biggl(\sigma^n_{xx}(j,i+1) - \sigma^n_{xx}(j,i) + \sigma^n_{xy}(j+1/2, i) - \sigma^n_{xy}(j-1/2,i) \biggr)} $$
Using a similar approach we can derive the FD scheme for the y-component of the momentum equation ...
$$\rm{v_y^{n+1/2}(j,i+1/2) = v_y^{n-1/2}(j,i+1/2) + \frac{dt}{dh\cdot \rho_y(j+1/2,i)}\cdot \biggl(\sigma^n_{xy}(j, i+1/2) - \sigma^n_{xy}(j,i-1/2) + \sigma^n_{yy}(j+1,i) - \sigma^n_{yy}(j,i) \biggr)} $$
... and the stress-strain relationship ...
$$
\begin{split}
\rm{\sigma^{n+1}_{xx}(j,i)}\;&\rm{= \sigma_{xx}^{n}(j,i) + dt\cdot\lambda(j,i)\cdot \biggl(v^{n+1/2}_{xx}(j,i) + v^{n+1/2}_{yy}(j,i) \biggr) + 2 dt\cdot \mu(j,i) \cdot v^{n+1/2}_{xx}(j,i)}\\
\rm{\sigma^{n+1}_{yy}(j,i)}\;&\rm{= \sigma_{yy}^{n}(j,i) + dt\cdot\lambda(j,i)\cdot \biggl(v^{n+1/2}_{xx}(j,i) + v^{n+1/2}_{yy}(j,i) \biggr) + 2 dt\cdot \mu(j,i) \cdot v^{n+1/2}_{yy}(j,i)}\\
\rm{\sigma^{n+1}_{xy}(j+1/2,i+1/2)}\;&\rm{=\sigma^{n}_{xy}(j+1/2,i+1/2) + dt\cdot\mu_{xy}(j+1/2,i+1/2)\biggl(v^{n+1/2}_{xy}(j+1/2,i+1/2) + v^{n+1/2}_{yx}(j+1/2,i+1/2)\biggr)}
\end{split}
$$
with the spatial derivatives
$$
\begin{split}
\rm{v_{xx}(j,i)}\; & \rm{= \frac{v_x(j,i+1/2)-v_x(j,i-1/2)}{dh}}\\
\rm{v_{yy}(j,i)}\; & \rm{= \frac{v_y(j+1/2,i)-v_y(j-1/2,i)}{dh}}\\
\rm{v_{yx}(j+1/2,i+1/2)}\; & \rm{= \frac{v_y(j+1/2, i+1)-v_y(j+1/2, i)}{dh}}\\
\rm{v_{xy}(j+1/2,i+1/2)}\; & \rm{= \frac{v_x(j+1,i+1/2)-v_x(j, i+1/2)}{dh}}
\end{split}
$$
Implementing 2D PSV code
End of explanation
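The homogeneous example below never needs them, but the averaging rules quoted above translate almost directly into NumPy slicing. The following is a hedged sketch (not part of the original notebook) for heterogeneous (ny, nx) arrays mu and rho:
def average_material(mu, rho):
    # Harmonic average of mu at (j+1/2, i+1/2), as in the formula above.
    mu_xy = 4.0 / (1.0 / mu[:-1, :-1] + 1.0 / mu[:-1, 1:] +
                   1.0 / mu[1:, 1:] + 1.0 / mu[1:, :-1])
    # Arithmetic averages of rho at (j, i+1/2) and (j+1/2, i).
    rho_x = 0.5 * (rho[:, 1:] + rho[:, :-1])
    rho_y = 0.5 * (rho[1:, :] + rho[:-1, :])
    return mu_xy, rho_x, rho_y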
@jit(nopython=True) # use JIT for C-performance
def update_v(vx, vy, sxx, syy, sxy, nx, ny, dtdx, rhoi):
for j in range(1, ny-1):
for i in range(1, nx-1):
# calculate spatial derivatives
sxx_x = sxx[j, i+1] - sxx[j, i]
syy_y = syy[j+1, i] - syy[j, i]
sxy_x = sxy[j, i] - sxy[j, i-1]
sxy_y = sxy[j, i] - sxy[j-1, i]
# update particle velocities
vx[j, i] = vx[j, i] + dtdx * rhoi * (sxx_x + sxy_y)
vy[j, i] = vy[j, i] + dtdx * rhoi * (sxy_x + syy_y)
return vx, vy
Explanation: Because we are currently dealing with a homogeneous block model, we don't have to care about the arithmetic and harmonic averaging of density and shear modulus, respectively. In the next step we define the FD updates for particle velocity and stresses and assemble the 2D PSV FD code.
Update particle velocities
End of explanation
@jit(nopython=True) # use JIT for C-performance
def update_s(vx, vy, sxx, syy, sxy, nx, ny, dtdx, lam, mu):
for j in range(1, ny-1):
for i in range(1, nx-1):
# calculate spatial derivatives
vxx = vx[j][i] - vx[j][i-1]
vyy = vy[j][i] - vy[j-1][i]
vyx = vy[j][i+1] - vy[j][i]
vxy = vx[j+1][i] - vx[j][i]
# update stresses
sxx[j, i] = sxx[j, i] + dtdx * ( lam * (vxx + vyy) + 2.0 * mu * vxx )
syy[j, i] = syy[j, i] + dtdx * ( lam * (vxx + vyy) + 2.0 * mu * vyy )
sxy[j, i] = sxy[j, i] + dtdx * ( mu * (vyx + vxy) )
return sxx, syy, sxy
Explanation: Update stresses
End of explanation
def psv_mod(nt, nx, ny, dt, dh, rho, lam, mu, clip, isnap, X, Y):
# initialize wavefields
vx = numpy.zeros((ny, nx))
vy = numpy.zeros((ny, nx))
sxx = numpy.zeros((ny, nx))
syy = numpy.zeros((ny, nx))
sxy = numpy.zeros((ny, nx))
# define some parameters
dtdx = dt / dh
rhoi = 1.0 / rho
# define source wavelet parameters
fc = 17.0
tshift = 0.0
ts = 1.0 / fc
# source position [gridpoints]
jjs = 300
iis = 300
# initalize animation
fig = pyplot.figure(figsize=(11,7))
extent = [numpy.min(X),numpy.max(X),numpy.min(X),numpy.max(Y)]
image = pyplot.imshow(vy, animated=True, cmap=cm.seismic, interpolation='nearest', vmin=-clip, vmax=clip)
pyplot.colorbar()
pyplot.title('Wavefield vy')
pyplot.xlabel('X [m]')
pyplot.ylabel('Y [m]')
pyplot.gca().invert_yaxis()
pyplot.ion()
pyplot.show(block=False)
# loop over timesteps
for n in range(nt):
# define Ricker wavelet
t = n * dt
tau = numpy.pi * (t - 1.5 * ts - tshift) / (1.5 * ts)
amp = (1.0 - 4.0 * tau * tau) * numpy.exp(-2.0 * tau * tau)
# update particle velocities
vx, vy = update_v(vx, vy, sxx, syy, sxy, nx, ny, dtdx, rhoi)
# apply vertical impact source term @ source position
vy[jjs, iis] = vy[jjs, iis] + amp
# update stresses
sxx, syy, sxy = update_s(vx, vy, sxx, syy, sxy, nx, ny, dtdx, lam, mu)
# display vy snapshots
if (n % isnap) == 0:
image.set_data(vy)
fig.canvas.draw()
return vx, vy
Explanation: Assemble the 2D PSV code
End of explanation
# run 2D PSV code
vx, vy = psv_mod(nt, nx, ny, dt, dh, rho, lam, mu, clip, isnap, X, Y)
Explanation: Let's run the 2D PSV code for a homogeneous block model:
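Before a longer run it can be worth checking the usual Courant-type stability estimate for a 2nd-order staggered scheme in 2D, dt <= dh / (sqrt(2) * vp). A quick, illustrative check with the numbers defined above (treat the exact constant as an assumption, not part of the original notebook):
# Hypothetical stability check (Courant criterion) for the chosen dt, dh, vp.
dt_max = dh / (numpy.sqrt(2.0) * vp)
print("dt = %.2e s, estimated limit dt_max = %.2e s -> %s"
      % (dt, dt_max, "stable" if dt <= dt_max else "UNSTABLE"))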
End of explanation |
935 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h2> How to read and display Nexrad on AWS using Python </h2>
<h4> Valliappa Lakshmanan, The Climate Corporation, [email protected] </h4>
Amazon Web Services is making NEXRAD data <a href="http
Step1: <h3> Find volume scan </h3>
Volume scans from NEXRAD are stored on S3 such that the bucket name is noaa-nexrad-level2 and the key name is something like 2014/04/03/KLSX/KLSX20140403_211350_V06.gz i.e. YYYY/MM/DD/KRAD/KRADYYYYMMDD_HHmmSS_V0?.gz. You can use the boto library to browse and select the keys that you want.
Here, I'll directly navigate to a NEXRAD file that I know exists.
Step2: <h3> Download volume scan from S3 </h3>
For further processing, it is helpful to have the file locally on disk.
Step3: <h3> Display the data </h3>
Here, I am using PyArt to display the lowest elevation scan data of all the moments.
Step4: <h3> Processing data </h3>
Not all the reflectivity above is because of weather echoes. Most of the data are actually bioscatter (insects and birds). Let's mask the reflectivity data using the polarimetric variables to retain only approximately spherical echoes (this is not actually a good quality-control technique -- see http | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy.ma as ma
import numpy as np
import pyart.graph
import tempfile
import pyart.io
import boto
Explanation: <h2> How to read and display Nexrad on AWS using Python </h2>
<h4> Valliappa Lakshmanan, The Climate Corporation, [email protected] </h4>
Amazon Web Services is making NEXRAD data <a href="http://aws.amazon.com/noaa-big-data/nexrad/">freely available</a> on Amazon S3 as part of the NOAA Big Data Project. In this Python notebook, I will step you through being able to read and display this data from your Python programs. I will assume that you know Python, how to install Python modules, and can access AWS. (Follow along by downloading and running <a href="https://github.com/lakshmanok/nexradaws">this iPython notebook</a>).
<h3> What you need </h3>
You probably have ipython and matplotlib already. In addition, you may need to install the following Python modules:
<ol>
<li> <a href="https://boto3.readthedocs.org/en/latest/guide/quickstart.html">boto</a> which I installed using conda: <pre>conda install boto</pre> </li>
<li> <a href="http://arm-doe.github.io/pyart/">pyart</a> which I installed using conda: <pre>conda install -c https://conda.anaconda.org/jjhelmus pyart</pre>
</ol>
You may also need to configure your AWS credentials to access S3.
<h3> Import modules </h3>
End of explanation
# read a volume scan file on S3. I happen to know this file exists.
s3conn = boto.connect_s3()
bucket = s3conn.get_bucket('noaa-nexrad-level2')
s3key = bucket.get_key('2015/05/15/KVWX/KVWX20150515_080737_V06.gz')
print s3key
Explanation: <h3> Find volume scan </h3>
Volume scans from NEXRAD are stored on S3 such that the bucket name is noaa-nexrad-level2 and the key name is something like 2014/04/03/KLSX/KLSX20140403_211350_V06.gz i.e. YYYY/MM/DD/KRAD/KRADYYYYMMDD_HHmmSS_V0?.gz. You can use the boto library to browse and select the keys that you want.
Here, I'll directly navigate to a NEXRAD file that I know exists.
End of explanation
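If you would rather browse than hard-code a key, boto can list everything under a prefix (radar and day, following the layout described above); an illustrative sketch:
# Illustrative: list all volume scans for KVWX on 2015-05-15.
for key in bucket.list(prefix='2015/05/15/KVWX/'):
    print key.name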
# download to a local file, and read it
localfile = tempfile.NamedTemporaryFile()
s3key.get_contents_to_filename(localfile.name)
radar = pyart.io.read_nexrad_archive(localfile.name)
Explanation: <h3> Download volume scan from S3 </h3>
For further processing, it is helpful to have the file locally on disk.
End of explanation
# display the lowest elevation scan data
display = pyart.graph.RadarDisplay(radar)
fig = plt.figure(figsize=(10, 10))
plots = [
# variable-name in pyart, display-name that we want, sweep-number of radar (0=lowest ref, 1=lowest velocity)
['reflectivity', 'Reflectivity (dBZ)', 0],
['differential_reflectivity', 'Zdr (dB)', 0],
['differential_phase', 'Phi_DP (deg)', 0],
['cross_correlation_ratio', 'Rho_HV', 0],
['velocity', 'Velocity (m/s)', 1],
['spectrum_width', 'Spectrum Width', 1]
]
for plotno, plot in enumerate(plots, start=1):
ax = fig.add_subplot(3, 2, plotno)
display.plot(plot[0], plot[2], ax=ax, title=plot[1],
colorbar_label='',
axislabels=('East-West distance from radar (km)' if plotno == 6 else '',
'North-South distance from radar (km)' if plotno == 1 else ''))
display.set_limits((-300, 300), (-300, 300), ax=ax)
plt.show()
Explanation: <h3> Display the data </h3>
Here, I am using PyArt to display the lowest elevation scan data of all the moments.
End of explanation
refl_grid = radar.get_field(0, 'reflectivity')
print refl_grid[0]
rhohv_grid = radar.get_field(0, 'cross_correlation_ratio')
zdr_grid = radar.get_field(0, 'differential_reflectivity')
# apply rudimentary quality control
reflow = np.less(refl_grid, 20)
zdrhigh = np.greater(np.abs(zdr_grid), 2.3)
rhohvlow = np.less(rhohv_grid, 0.95)
notweather = np.logical_or(reflow, np.logical_or(zdrhigh, rhohvlow))
print notweather[0]
qcrefl_grid = ma.masked_where(notweather, refl_grid)
print qcrefl_grid[0]
# let's create a new object containing only sweep=0 so we can add the QC'ed ref to it for plotting
qced = radar.extract_sweeps([0])
qced.add_field_like('reflectivity', 'reflectivityqc', qcrefl_grid)
display = pyart.graph.RadarDisplay(qced)
fig = plt.figure(figsize=(10, 5))
plots = [
# variable-name in pyart, display-name that we want, sweep-number of radar (0=lowest ref, 1=lowest velocity)
['reflectivity', 'Reflectivity (dBZ)', 0],
['reflectivityqc', 'QCed Reflectivity (dBZ)', 0],
]
for plotno, plot in enumerate(plots, start=1):
ax = fig.add_subplot(1, 2, plotno)
display.plot(plot[0], plot[2], ax=ax, title=plot[1],
colorbar_label='',
axislabels=('East-West distance from radar (km)' if plotno == 2 else '',
'North-South distance from radar (km)' if plotno == 1 else ''))
display.set_limits((-300, 300), (-300, 300), ax=ax)
plt.show()
Explanation: <h3> Processing data </h3>
Not all the reflectivity above is because of weather echoes. Most of the data are actually bioscatter (insects and birds). Let's mask the reflectivity data using the polarimetric variables to retain only approximately spherical echoes (this is not actually a good quality-control technique -- see http://journals.ametsoc.org/doi/abs/10.1175/JTECH-D-13-00073.1 -- but it is okay as a simple illustration).
End of explanation |
936 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center>
<a href="http
Step1: The data can either be downloaded beforehand or read directly. They are the original files from the MNIST DataBase site, converted beforehand to .csv format, which is admittedly bulkier but easier to read. Note that the mnist_train.zip file in the repository is compressed.
Step2: 1.3 Exploration
The data have already been normalised and centred and are complete. They do not require any further cleaning, even of a rudimentary kind.
The introductory Scikit-learn tutorial shows how to display the character images as well as a PCA, which is not repeated here. Still, how does k-means perform on such a volume?
Step3: The result is of little interest in itself, but it shows how hard it is to group identical characters using the ordinary Euclidean distance; there is a lot of confusion between the classes.
2 Training and prediction of the test set
2.1 $K$ nearest neighbours
Step4: The $k$ optimisation procedure by cross-validation described in the introductory scikit-learn tutorial should be applied again. Nevertheless the solution $k=10$ is reasonable and we recover a classic performance on this type of data
Step5: As for the $k$ nearest neighbours, it would be useful to optimise some parameters, including the number of trees and probably max_features. Optimising the out-of-bag error rather than running a heavy cross-validation procedure would be welcome. Restricting the maximum depth of the trees could also reduce the computation time noticeably, but that does not seem necessary, especially since it is a critical parameter for prediction quality.
3 Effect of the training sample size
The 3% error rate obtained without any optimisation effort is quite good given the development time spent! Rather than trying to optimise it, the rest of the work looks at the effect of the training sample size on accuracy. The learning_curve function performs this computation but does not allow the execution time to be extracted for each size. A rudimentary procedure is implemented instead.
3.1 With Random Forest (Scikit-learn) and 100 trees
Step6: 3.2 With Random Forest (Scikit-learn) and 250 trees | Python Code:
# Graphiques dans la fenêtre
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import time
Explanation: <center>
<a href="http://www.insa-toulouse.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/logo-insa.jpg" style="float:left; max-width: 120px; display: inline" alt="INSA"/></a>
<a href="http://wikistat.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/wikistat.jpg" style="max-width: 250px; display: inline" alt="Wikistat"/></a>
<a href="http://www.math.univ-toulouse.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/logo_imt.jpg" style="float:right; max-width: 200px; display: inline" alt="IMT"/> </a>
</center>
Workshops: Big data technologies
Reconnaissance de caractères manuscrits (MNIST) en <a href="https://www.python.org/"><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/f/f8/Python_logo_and_wordmark.svg/390px-Python_logo_and_wordmark.svg.png" style="max-width: 120px; display: inline" alt="Python"/></a> avec <a href="http://scikit-learn.org/stable/#"><img src="http://scikit-learn.org/stable/_static/scikit-learn-logo-small.png" style="max-width: 100px; display: inline" alt="Scikit-Learn"/></a>
Summary
Presentation of the handwritten character recognition problem (MNIST DataBase) from digitised images. The goal is to compare performance (prediction quality, execution time) as a function of the technology, here Python and the Scikit-learn library, and as a function of the sample size. Even though it is interpreted, execution is efficient thanks to good parallelisation. The weak point is the limited support for model interpretation.
1 Introduction
1.1 Objective
The general goal is to build a better model for recognising handwritten digits. This problem is old (zipcodes) and often serves as a benchmark for comparing learning methods and algorithms. Yann Le Cun's site, MNIST DataBase, is the source of the data studied here; it describes the problem and the acquisition setup precisely. It used to keep an up-to-date list of publications proposing solutions, together with the prediction quality obtained. This problem was also proposed as the subject of a Kaggle competition, but on a subset of the data.
Very schematically, several strategies have been developed in the extensive literature on these data.
Use a classical method (k-nn, random forest...) without much refinement but with fast training times; this leads to an error rate of around 3%.
Add or integrate a preprocessing step that realigns the images through more or less complex distortions.
Build a distance measure adapted to the problem, for example invariant under rotation and translation, and plug it into a classical learning technique such as the $k$ nearest neighbours.
Use a more flexible method (deep neural network) with fine tuning of the parameters.
The goal of this workshop is to compare, on reasonably large data, the performance of different technological environments and libraries. A final question concerns the influence of the training sample size on the execution time as well as on the quality of the predictions.
Analyse the data with Python and note the execution times and the accuracy estimated on the test sample.
1.2 Reading the training and test data
The data can either be downloaded beforehand or read directly. They are the original files from the MNIST DataBase site, converted beforehand to .csv format, which is admittedly bulkier but easier to read. Note that the mnist_train.zip file in the repository is compressed.
End of explanation
# Lecture des données d'apprentissage
# path="" # Si les données sont dans le répertoire courant sinon:
path="http://www.math.univ-toulouse.fr/~besse/Wikistat/data/"
Dtrain=pd.read_csv(path+"mnist_train.csv",header=None)
Dtrain.head()
# Extraction puis suppression de la dernière colonne des labels
Ltrain=Dtrain.iloc[:,784]
Dtrain.drop(Dtrain.columns[[784]], axis=1,inplace=True)
Dtrain.head()
# Dimensions de l'échantillon
Dtrain.shape
# Même chose pour les données de test
Dtest=pd.read_csv(path+"mnist_test.csv",header=None)
Ltest=Dtest.iloc[:,784]
Dtest.drop(Dtest.columns[[784]], axis=1,inplace=True)
Dtest.shape
# affichage d'un chiffre
plt.figure(1, figsize=(3, 3))
plt.imshow(np.matrix(Dtest.iloc[1,:]).reshape(28,28), cmap=plt.cm.gray_r, interpolation='nearest')
plt.show()
Explanation: The data can either be downloaded beforehand or read directly. They are the original files from the MNIST DataBase site, converted beforehand to .csv format, which is admittedly bulkier but easier to read. Note that the mnist_train.zip file in the repository is compressed.
End of explanation
from sklearn.metrics import confusion_matrix
from sklearn.cluster import KMeans
tps1 = time.clock()
km=KMeans(n_clusters=10,init='k-means++',
n_init=10, max_iter=100, tol=0.01,
precompute_distances=True, verbose=0,
random_state=None, copy_x=True, n_jobs=1)
km.fit(Dtrain)
tps2 = time.clock()
print("Temps execution Kmeans :", (tps2 - tps1)/60)
cm = confusion_matrix(Ltrain, km.labels_)
print(cm)
Explanation: 1.3 Exploration
The data have already been normalised and centred and are complete. They do not require any further cleaning, even of a rudimentary kind.
The introductory Scikit-learn tutorial shows how to display the character images as well as a PCA, which is not repeated here. Still, how does k-means perform on such a volume?
End of explanation
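Once the k-means fit above has run, its agreement with the true labels can also be summarised in a single number instead of eyeballing the confusion matrix; an illustrative check, not part of the original workshop:
# Illustrative: adjusted Rand index between k-means clusters and true digits.
from sklearn.metrics import adjusted_rand_score
print("Adjusted Rand index:", adjusted_rand_score(Ltrain, km.labels_))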
# Définition du modèle avec un nombre k "standard" de voisins
from sklearn.neighbors import KNeighborsClassifier
tps1 = time.clock()
knn = KNeighborsClassifier(n_neighbors=10,n_jobs=-1)
digit_knn=knn.fit(Dtrain, Ltrain)
tps2 = time.clock()
print("Temps de k-nn :",(tps2 - tps1)/60)
# Apprentissage et estimation de l'erreur de prévision sur l'échantillon test
tps1 = time.clock()
erreur=1-digit_knn.score(Dtest,Ltest)
tps2 = time.clock()
print("Temps:",(tps2 - tps1)/60,"Erreur:",erreur)
Explanation: The result is of little interest in itself, but it shows how hard it is to group identical characters using the ordinary Euclidean distance; there is a lot of confusion between the classes.
2 Training and prediction of the test set
2.1 $K$ nearest neighbours
End of explanation
from sklearn.ensemble import RandomForestClassifier
tps0 = time.clock()
rf = RandomForestClassifier(n_estimators=100,
criterion='gini', max_depth=None, min_samples_split=2,
min_samples_leaf=1, max_features='auto', max_leaf_nodes=None,
bootstrap=True, oob_score=True, n_jobs=-1,random_state=None, verbose=0)
rf.fit(Dtrain,Ltrain)
tps1 = time.clock()
print("Temps de configutration RF :" ,tps1 - tps0)
# erreur out-of-bag
erreur_oob=1-rf.oob_score_
tps2 = time.clock()
print("Temps execution RF :", tps2 - tps0, "Erreur oob:", erreur_oob)
# erreur sur l'échantillon test
1-rf.score(Dtest,Ltest)
cm = confusion_matrix(Ltest, rf.predict(Dtest))
print(cm)
Explanation: The $k$ optimisation procedure by cross-validation described in the introductory scikit-learn tutorial should be applied again. Nevertheless the solution $k=10$ is reasonable and we recover a classic performance on this type of data: 3.3%, for a method used without refinement.
A different distance would in fact be needed with the $k$ nearest neighbours to improve the results significantly, but at a much higher computational cost. Another scenario therefore proposes computing a tangent distance between the images (Simard et al. (1998)). The Matlab program calls a C program. Integrating it into Python code rather than Matlab remains to be done...
2.2 Random forest
Random forests are also a reasonable approach, at low development cost, on these data. Study in detail the list of parameters offered by this implementation of the algorithm. Consult the online documentation to do so.
The default parameter values are used except for the number of trees: 100 instead of 10, and the number of processors used: -1 instead of 1 (all are used except one for the system). Note that not all available parameters are listed.
End of explanation
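A minimal sketch of the out-of-bag comparison suggested above (here over max_features only); it is illustrative and not part of the original workshop:
# Illustrative OOB-error comparison of a few max_features values.
for mf in ["sqrt", 0.1, 0.2]:
    rf_tmp = RandomForestClassifier(n_estimators=100, max_features=mf,
                                    oob_score=True, n_jobs=-1)
    rf_tmp.fit(Dtrain, Ltrain)
    print("max_features =", mf, " oob error =", 1 - rf_tmp.oob_score_)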
from sklearn.cross_validation import train_test_split
# tailles croissantes de l'échantillon d'apprentissage
arrayErreur=np.empty((12,3))
nArbres=100
for i in range(1,13):
n=5000*i
arrayErreur[i-1,0]=n
if i==12:
n=59999
Xtrain,Xdrop,ytrain,ydrop=train_test_split(Dtrain,Ltrain,train_size=n)
tps1 = time.clock()
rf = RandomForestClassifier(n_estimators=nArbres,
criterion='gini', max_depth=None, min_samples_split=2,
min_samples_leaf=1, max_features='auto', max_leaf_nodes=None,
bootstrap=True, oob_score=True, n_jobs=-1,random_state=None, verbose=0)
rf.fit(Xtrain,ytrain)
tps2=time.clock()
arrayErreur[i-1,2]=1-rf.score(Dtest,Ltest)
arrayErreur[i-1,1]=tps2 - tps1
dataframeErreur1=pd.DataFrame(arrayErreur,columns=["Taille","Temps","Erreur"])
print(dataframeErreur1)
# Graphes superposés
from __future__ import division
from scipy import *
from pylab import *
x = linspace(5,60,12)
fig = plt.figure()
# premier graphe
ax1 = fig.add_subplot(111)
ax1.plot(x,dataframeErreur1["Temps"] , '-b', label=ur"Temps",lw=1.5)
# absisses communes
xlim(0,65)
xlabel(ur"Taille échantillon x 1000", color='b', fontsize=16)
ylim(0, 70)
ylabel(ur"Secondes", color='b', fontsize=16)
legend(loc=2)
# 2ème graphe
ax2 = ax1.twinx()
ax2.plot(x,dataframeErreur1["Erreur"] ,'--g', label=ur"Erreur",lw=1.5)
ylim(0, 0.1)
ylabel(ur"Taux d'erreur", color='g', fontsize=16)
legend(loc=1)
show()
Explanation: As for the $k$ nearest neighbours, it would be useful to optimise some parameters, including the number of trees and probably max_features. Optimising the out-of-bag error rather than running a heavy cross-validation procedure would be welcome. Restricting the maximum depth of the trees could also reduce the computation time noticeably, but that does not seem necessary, especially since it is a critical parameter for prediction quality.
3 Effect of the training sample size
The 3% error rate obtained without any optimisation effort is quite good given the development time spent! Rather than trying to optimise it, the rest of the work looks at the effect of the training sample size on accuracy. The learning_curve function performs this computation but does not allow the execution time to be extracted for each size. A rudimentary procedure is implemented instead.
3.1 With Random Forest (Scikit-learn) and 100 trees
End of explanation
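For the error part alone, scikit-learn's own learning_curve helper produces the same kind of curve (without the timings, as noted above); a hedged sketch using the same era of scikit-learn as the rest of this notebook:
# Illustrative use of scikit-learn's learning_curve (errors only, no timings).
from sklearn.learning_curve import learning_curve
sizes, train_scores, test_scores = learning_curve(
    RandomForestClassifier(n_estimators=100, n_jobs=-1),
    Dtrain, Ltrain, train_sizes=np.linspace(0.1, 1.0, 5), cv=3)
print(sizes)
print(1 - test_scores.mean(axis=1))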
from sklearn.ensemble import RandomForestClassifier
from sklearn.cross_validation import train_test_split
# tailles croissantes de l'échantillon d'apprentissage
arrayErreur=np.empty((12,3))
nArbres=250
for i in range(1,13):
n=5000*i
arrayErreur[i-1,0]=n
if i==12:
n=59999
Xtrain,Xdrop,ytrain,ydrop=train_test_split(Dtrain,Ltrain,train_size=n)
tps1 = time.clock()
rf = RandomForestClassifier(n_estimators=nArbres,
criterion='gini', max_depth=None, min_samples_split=2,
min_samples_leaf=1, max_features='auto', max_leaf_nodes=None,
bootstrap=True, oob_score=True, n_jobs=-1,random_state=None, verbose=0)
rf.fit(Xtrain,ytrain)
tps2=time.clock()
arrayErreur[i-1,2]=1-rf.score(Dtest,Ltest)
arrayErreur[i-1,1]=tps2 - tps1
dataframeErreur=pd.DataFrame(arrayErreur,columns=["Taille","Temps","Erreur"])
print(dataframeErreur)
# Graphes supersosés
from __future__ import division
from scipy import *
from pylab import *
x = linspace(5,60,12)
fig = plt.figure()
# premier graphe
ax1 = fig.add_subplot(111)
ax1.plot(x,dataframeErreur["Temps"] , '-b', label=ur"Temps",lw=1.5,marker=".",markersize=6)
# absisses communes
xlim(0,65)
xlabel(ur"Taille échantillon x1000", fontsize=15)
ylim(0, 100)
ylabel(ur"Temps (s)", color='b', fontsize=15)
legend(loc=2)
# 2ème graphe
ax2 = ax1.twinx()
ax2.plot(x,dataframeErreur["Erreur"] ,'--',color='black', label=ur"Erreur",lw=1.5,marker=".",markersize=6)
ylim(0, 0.1)
ylabel(ur"Erreur (%)", fontsize=15)
legend(loc=1)
show()
Explanation: 3.2 With Random Forest (Scikit-learn) and 250 trees
End of explanation |
937 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Swiss road accidents 2011 - 2015
Data source
Step1: Importing files on accidents, .csv Files, organised by year
Step2: Importing files on people involved in accidents, .csv Files, organised by year
Step3: Importing files on vehicles involved in accidents, .csv Files, organised by year
Step4: Concatenating the files
Step5: Contents
OVERVIEW
1. Total number of road accidents in Switzerland since 2011? How have they developed over the years? Make a bar graph.
2. On which day of the week do the most accidents occur? And at what time of day? Make a graph of that as well.
3. Compare the various vehicle categories involved in accidents. Total and by year.
4. What was the most expensive crash in the past five years? Where was it? And which crash was the most deadly?
5. How many accidents occurred on roads without lighting?
6. How have drug and alcohol related accidents developed in the past 5 years?
7. Combine as much information about breathalyser records as possible.
8. What is the average age of crash drivers? Overall? And how has it changed over the years?
9. What about female vs. male drivers?
10. List the 10 accident hotspots in Switzerland?
1. Total number of road accidents in Switzerland since 2011? How have they developed over the years? Make a bar graph.
Step6: Most recently there has been a slight increase in road accidents in Switzerland.
2. On which day of the week do the most accidents occur? And at what time of day? Make a graph of that as well.
Step7: Transforming the data into times is proving a problem. They need to be transformed so I can make a histogram of the times, with time on the X-axis and the count of accidents at each time on the Y-axis. This post on timestamps in Pandas may be of some help.
Step8: 3. Compare the top ten vehicle categories involved in accidents. Total and by year.
Step9: The "|" sign is causing a lot of problems as it doesn't allow me to change the names of the numbers in one go. And I haven't figured out how to do it in two goes.
Step10: 4. What was the most expensive crash in the past five years? Where was it? And which crash was the most deadly?
Step11: The accident causing the greatest damage, 3 million Swiss francs, happened on 26 October 2012 in Canton Thurgau, here, or here on Google Maps.
Step12: The most deadly accident was a bus accident involving mostly school kids from Belgium. And this one more recently.
5. How many accidents occurred on roads without lighting? (Maybe even work out an accident hotspot?)
Step13: Including the over 1000 locations where the lights were out of order, over 10% of accidents happened in poorly lit locations.
6. How have drug and alcohol related accidents developed in the past 5 years?
ALCOHOL
Step14: DRUGS
Step15: 7. Combine as much information about breathalyser records as possible.
MERGING | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use("ggplot")
%matplotlib inline
Explanation: Swiss road accidents 2011 - 2015
Data source: Federal Roads Office (FEDRO)
Analysing every road accident in Switzerland from 2011 to 2015 using Pandas.
End of explanation
unfaelle2011 = pd.read_csv("Export_1_Unfallblatt_2011.csv", delimiter = ";", encoding = "latin-1")
unfaelle2012 = pd.read_csv("Export_1_Unfallblatt_2012.csv", delimiter = ";", encoding = "latin-1")
unfaelle2013 = pd.read_csv("Export_1_Unfallblatt_2013.csv", delimiter = ";", encoding = "latin-1")
unfaelle2014 = pd.read_csv("Export_1_Unfallblatt_2014.csv", delimiter = ";", encoding = "latin-1")
unfaelle2015 = pd.read_csv("Export_1_Unfallblatt_2015.csv", delimiter = "\t", encoding = "UTF-8")
Explanation: Importing files on accidents, .csv Files, organised by year
End of explanation
personen2011 = pd.read_csv("Export_3_Personenblatt_2011.csv", delimiter = ";", encoding = "latin-1")
personen2012 = pd.read_csv("Export_3_Personenblatt_2012.csv", delimiter = ";", encoding = "latin-1")
personen2013 = pd.read_csv("Export_3_Personenblatt_2013.csv", delimiter = ";", encoding = "latin-1")
personen2014 = pd.read_csv("Export_3_Personenblatt_2014.csv", delimiter = ";", encoding = "latin-1")
personen2015 = pd.read_csv("Export_3_Personenblatt_2015.csv", delimiter = "\t", encoding = "UTF-8")
Explanation: Importing files on people involved in accidents, .csv Files, organised by year
End of explanation
fahrzeuge2011 = pd.read_csv("Export_2_Objektblatt_2011.csv", delimiter = ";", encoding = "latin-1")
fahrzeuge2012 = pd.read_csv("Export_2_Objektblatt_2012.csv", delimiter = ";", encoding = "latin-1")
fahrzeuge2013 = pd.read_csv("Export_2_Objektblatt_2013.csv", delimiter = ";", encoding = "latin-1")
fahrzeuge2014 = pd.read_csv("Export_2_Objektblatt_2014.csv", delimiter = ";", encoding = "latin-1")
fahrzeuge2015 = pd.read_csv("Export_2_Objektblatt_2015.csv", delimiter = "\t", encoding = "UTF-8")
Explanation: Importing files on vehicles involved in accidents, .csv Files, organised by year
End of explanation
df_unfaelle = pd.concat([unfaelle2011, unfaelle2012, unfaelle2013, unfaelle2014, unfaelle2015], ignore_index=True)
df_personen = pd.concat([personen2011, personen2012, personen2013, personen2014, personen2015], ignore_index=True)
df_fahrzeuge = pd.concat([fahrzeuge2011, fahrzeuge2012, fahrzeuge2013, fahrzeuge2014, fahrzeuge2015], ignore_index=True)
#df_unfaelle.columns
#df_personen.columns
#df_fahrzeuge.columns
#contains the "Hauptverursacher UAP" (ja/nein) category
Explanation: Concatenating the files
End of explanation
df_unfaelle['Unfall-UID'].count()
df_unfaelle.groupby('Jahr')['Unfall-UID'].count()
df_unfaelle.groupby('Jahr')['Unfall-UID'].count().plot(kind='bar')
plt.savefig("Unfaelle_pro_Jahr.svg")
plt.savefig("Unfaelle_pro_Jahr.png")
Explanation: Contents
OVERVIEW
1. Total number of road accidents in Switzerland since 2011? How have they developed over the years? Make a bar graph.
2. On which day of the week do the most accidents occur? And at what time of day? Make a graph of that as well.
3. Compare the various vehicle categories involved in accidents. Total and by year.
4. What was the most expensive crash in the past five years? Where was it? And which crash was the most deadly?
5. How many accidents occurred on roads without lighting?
6. How have drug and alcohol related accidents developed in the past 5 years?
7. Combine as much information about breathalyser records as possible.
8. What is the average age of crash drivers? Overall? And how has it changed over the years?
9. What about female vs. male drivers?
10. List the 10 accident hotspots in Switzerland?
1. Total number of road accidents in Switzerland since 2011? How have they developed over the years? Make a bar graph.
End of explanation
df_unfaelle['Wochentag-Nr'].value_counts()
df_unfaelle['Wochentag-Nr'].astype(str).str.replace('6', "Saturday").str.replace('4', "Thursday").str.replace('1', "Monday").str.replace('2', "Tuesday").str.replace('3', "Wednesday").str.replace('5', "Friday").str.replace('7', "Sunday").value_counts(ascending=True).plot(kind='barh')
plt.savefig("Unfaelle_pro_Tag.svg")
plt.savefig("Unfaelle_pro_Tag.png")
plt.savefig("Unfaelle_pro_Tag.svg")
plt.savefig("Unfaelle_pro_Tag.png")
Explanation: Most recently there has been a slight increase in road accidents in Switzerland.
2. On which day of the week do the most accidents occur? And at what time of day? Make a graph of that as well.
End of explanation
#df_unfaelle['Unfallzeit'].value_counts(ascending=False).hist(kind='scatter', x=)
#df_unfaelle['Unfallzeit'].value_counts().plot(kind='barh', x=['Unfallzeit'])
#df_unfaelle['Unfallzeit_Double'] = df_unfaelle['Unfallzeit']
df_unfaelle['Unfallzeit'].value_counts().head(4)
Explanation: Transforming the data into times is proving a problem. They need to be transformed so I can make a histogram of the times, with time on the X-axis and the count of accidents at each time on the Y-axis. This post on timestamps in Pandas may be of some help.
End of explanation
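One possible route, assuming 'Unfallzeit' holds strings such as "17:30" (the format is not verified here, so adjust if needed), is to parse the hour and count accidents per hour of day; an illustrative sketch:
# Illustrative: accidents per hour of day, assuming "HH:MM" strings.
stunden = pd.to_datetime(df_unfaelle['Unfallzeit'], format='%H:%M',
                         errors='coerce').dt.hour
stunden.value_counts().sort_index().plot(kind='bar')
plt.xlabel('Hour of day')
plt.ylabel('Accidents')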
df_fahrzeuge['Fahrzeugart UAP'].value_counts().head(5)
#money_players = nba[nba['2013 $'] != 'n/a']
Explanation: 3. Compare the top ten vehicle categories involved in accidents. Total and by year.
End of explanation
#df_fahrzeuge['Fahrzeugart UAP'].astype(str).str.replace('|', "").value_counts(ascending=False)
#vehicles_without_pipes.astype(str).str.replace('710', "Cars").value_counts(ascending=False).head(10)
test = df_fahrzeuge['Fahrzeugart UAP'].astype(str).str.replace('|', "").value_counts(ascending=False)
pd.DataFrame(test).head(10)
#test['Fahrzeugart UAP']
#test['Fahrzeugart UAP'].astype(str).str.replace('712', "Vans")
#test.columns()
#.str.replace('712', "Vans").str.replace('733', "n/a").str.replace('730', "Bicycles").str.replace('725', "Motorbikes, above 25kw").str.replace('718', "Trucks, above 7,4t").str.replace('722', "Mopeds").str.replace('724', "Motorbike, to 25kw").str.replace('720', "Semi trailer, above 7,5t").
Explanation: The "|" sign ist causing a lot of problems as it doesn't allow me to change the names of the numbers in one go. And I haven't figures out how to do it in two goes.
End of explanation
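One way around the chained str.replace calls is to strip the pipe character and then map the numeric codes through a dictionary. The labels below are only the few guesses already present in the commented-out lines above, so treat the mapping as illustrative:
# Illustrative: map vehicle-category codes to labels in one go.
labels = {'710': 'Cars', '712': 'Vans', '730': 'Bicycles'}
codes = df_fahrzeuge['Fahrzeugart UAP'].astype(str).str.strip(' |')
codes.map(lambda c: labels.get(c, c)).value_counts().head(10)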
df_unfaelle.sort_values('Total geschätzter Sachschaden (in 1000 CHF)', ascending=False)[['Total geschätzter Sachschaden (in 1000 CHF)', 'Datum', 'x-Koordinate', 'y-Koordinate', 'Kanton Kürzel', 'Aktuelle BFS Gemeinde-Nr']].head(5)
Explanation: 4. What was the most expensive crash in the past five years? Where was it? And which crash was the most deadly?
End of explanation
df_unfaelle.sort_values('Getötete', ascending=False)[['Getötete', 'Datum', 'x-Koordinate', 'y-Koordinate', 'Kanton Kürzel', 'Aktuelle BFS Gemeinde-Nr']].head(5)
Explanation: The accident causing the greatest damage, 3 Million Swiss Francs, happened on 26 October 2012 in Canton Thurgau, here, or here on Google Maps.
End of explanation
# Strassenbeleuchtung UAP codes: 640 = keine (none), 641 = ausser Betrieb (out of order)
df_unfaelle[(df_unfaelle['Lichtverhältnis UAP'] == 622) & (df_unfaelle['Strassenbeleuchtung UAP'] == 640)]
#df[(df['animal'] == 'cat') & (df['inches'] > 12)]
#df[df['feet'] > 6.5]
df_unfaelle[(df_unfaelle['Lichtverhältnis UAP'] == 622) & (df_unfaelle['Strassenbeleuchtung UAP'] == 641)]
Explanation: The most deadly accident was a bus accident involving mostly school kids from Belgium. And this one more recently.
5. How many accidents occurred on roads without lighting? (Maybe even work out an accident hotspot?)
End of explanation
df_unfaelle[df_unfaelle['Hauptursache UAP'] == 1101].groupby("Jahr")["Hauptursache UAP"].value_counts().plot(kind='bar')
Explanation: Including the over 1000 locations where the lights were out of order, over 10% of accidents happened in poorly lit locations.
6. How have drug and alcohol related accidents developed in the past 5 years?
ALCOHOL
End of explanation
df_unfaelle[df_unfaelle['Hauptursache UAP'] == 1102].groupby("Jahr")["Hauptursache UAP"].value_counts().plot(kind='bar')
Explanation: DRUGS
End of explanation
df_fahrzeuge["Atemtest"].describe()
df_fahrzeuge["Atemtest"].value_counts()
Alcohol_record = df_fahrzeuge[df_fahrzeuge['Atemtest'] == 9.0]
Alcohol_record[['BUM Probe angeordnet UAP', 'Blutalkoholtest', 'Datum', 'Vertrautheit Strecke UAP', 'Kennzeichen Fahrzeug Kanton']]
df_merged = df_personen.merge(df_fahrzeuge, how = 'left', left_on = 'Unfall-UID', right_on ='Unfall-UID')
df = df_merged.merge(df_unfaelle, how = 'left', left_on = 'Unfall-UID', right_on ='Unfall-UID')
df_fahrzeuge.info()
df_unfaelle.info()
df_personen.info()
Explanation: 7. Combine as much information about breathalyser records as possible.
MERGING: Look at this again!!!
End of explanation |
938 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simulation of 10000 ms of 400 independent descending commands following a gamma distribution with mean of 142 ms and order 10
Step1: The spike times of all descending commands along the 10000 ms of simulation are shown in Fig. \ref{fig
Step2: The spike times of all descending commands during the last 1000 ms of the simulation are shown in Fig. \ref{fig
Step3: The histogram of the interspike intervals of all the descending commands is shown in Fig. \ref{fig
Step4: Below different statistics of the interspike intervals and firing rate are obtained. | Python Code:
import sys
sys.path.insert(0, '..')
import time
import matplotlib.pyplot as plt
%matplotlib notebook
import numpy as np
import scipy.stats
from Configuration import Configuration
from NeuralTract import NeuralTract
conf = Configuration('confNeuralTractSpikes.rmto')
t = np.arange(0.0, conf.simDuration_ms, conf.timeStep_ms)
pools = dict()
pools[0] = NeuralTract(conf, 'CMExt')
tic = time.time()
for i in xrange(0,len(t)-1):
pools[0].atualizePool(t[i], 1000/12.0, 10)
toc = time.time()
print str(toc - tic) + ' seconds'
pools[0].listSpikes()
Explanation: Simulation of 10000 ms of 400 independent descending commands following a gamma distribution with mean of 142 ms and order 10
End of explanation
plt.figure()
plt.plot(pools[0].poolTerminalSpikes[:, 0],
pools[0].poolTerminalSpikes[:, 1]+1, '.')
plt.xlabel('t (ms)')
plt.ylabel('Descending Command index')
Explanation: The spike times of all descending commands along the 10000 ms of simulation are shown in Fig. \ref{fig:spikesDesc}.
End of explanation
plt.figure()
plt.plot(pools[0].poolTerminalSpikes[pools[0].poolTerminalSpikes[:, 0]>9000, 0],
pools[0].poolTerminalSpikes[pools[0].poolTerminalSpikes[:, 0]>9000, 1]+1, '.')
plt.xlabel('t (ms)')
plt.ylabel('Descending Command index')
Explanation: The spike times of all descending commands during the last 1000 ms of the simulation are shown in Fig. \ref{fig:spikesDescLast}.
End of explanation
ISI = np.array([])
for i in xrange(0,len(pools[0].unit)):
ISI = np.append(ISI, np.diff(np.reshape(np.array(pools[0].unit[i].terminalSpikeTrain), (-1,2))[:,0]))
plt.figure()
plt.hist(ISI)
plt.xlabel('ISI (ms)')
plt.ylabel('Counts')
Explanation: The histogram of the interspike intervals of all the descending commands is shown in Fig. \ref{fig:hist}. Note that the peak is at the ISI specified at the beginning of the simulation.
End of explanation
SD = np.std(ISI)
M = np.mean(ISI)
SK = scipy.stats.skew(ISI)
CV = SD / M
print 'ISI Mean = ' + str(M) + ' ms'
print 'ISI Standard deviation = ' + str(SD) + ' ms'
print 'ISI CV = ' + str(CV)
M_FR = 1000.0 / M
SD_FR = np.sqrt((SD**2) * 1000 / (M**3) + 1/6.0 + (SD**4) / (2*M**4) - SK/(3*M**3))
print 'Firing rate mean = ' + str(M_FR) + ' Hz'
print 'Firing rate standard deviation = ' + str(SD_FR) + ' Hz'
CV_FR = SD_FR / M_FR
print 'CV of Firing rate = ' + str(CV_FR)
Explanation: Below different statistics of the interspike intervals and firing rate are obtained.
End of explanation |
939 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Kvswap
<script type="text/javascript">
localStorage.setItem('language', 'language-py')
</script>
<table align="left" style="margin-right
Step2: Examples
In the following example, we create a pipeline with a PCollection of key-value pairs.
Then, we apply KvSwap to swap the keys and values. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License")
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
Explanation: <a href="https://colab.research.google.com/github/apache/beam/blob/master/examples/notebooks/documentation/transforms/python/elementwise/kvswap-py.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"/></a>
<table align="left"><td><a target="_blank" href="https://beam.apache.org/documentation/transforms/python/elementwise/kvswap"><img src="https://beam.apache.org/images/logos/full-color/name-bottom/beam-logo-full-color-name-bottom-100.png" width="32" height="32" />View the docs</a></td></table>
End of explanation
!pip install --quiet -U apache-beam
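# Optional (not part of the original notebook): confirm which SDK version was
# installed before running the example.
import apache_beam as beam
print(beam.__version__)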
Explanation: Kvswap
<script type="text/javascript">
localStorage.setItem('language', 'language-py')
</script>
<table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://beam.apache.org/releases/pydoc/current/apache_beam.transforms.util.html#apache_beam.transforms.util.KvSwap"><img src="https://beam.apache.org/images/logos/sdks/python.png" width="32px" height="32px" alt="Pydoc"/> Pydoc</a>
</td>
</table>
<br/><br/><br/>
Takes a collection of key-value pairs and returns a collection of key-value pairs
which has each key and value swapped.
Setup
To run a code cell, you can click the Run cell button at the top left of the cell,
or select it and press Shift+Enter.
Try modifying a code cell and re-running it to see what happens.
To learn more about Colab, see
Welcome to Colaboratory!.
First, let's install the apache-beam module.
End of explanation
import apache_beam as beam
with beam.Pipeline() as pipeline:
plants = (
pipeline
| 'Garden plants' >> beam.Create([
('🍓', 'Strawberry'),
('🥕', 'Carrot'),
('🍆', 'Eggplant'),
('🍅', 'Tomato'),
('🥔', 'Potato'),
])
| 'Key-Value swap' >> beam.KvSwap()
| beam.Map(print))
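# For comparison (not part of the original example): the same swap can be
# written with a plain Map transform; KvSwap is a convenience for this pattern.
with beam.Pipeline() as pipeline:
    swapped = (
        pipeline
        | beam.Create([('🍓', 'Strawberry'), ('🥕', 'Carrot')])
        | 'Swap with Map' >> beam.Map(lambda kv: (kv[1], kv[0]))
        | beam.Map(print))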
Explanation: Examples
In the following example, we create a pipeline with a PCollection of key-value pairs.
Then, we apply KvSwap to swap the keys and values.
End of explanation |
940 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Repairing artifacts with SSP
This tutorial covers the basics of signal-space projection (SSP) and shows
how SSP can be used for artifact repair; extended examples illustrate use
of SSP for environmental noise reduction, and for repair of ocular and
heartbeat artifacts.
We begin as always by importing the necessary Python modules. To save ourselves
from repeatedly typing mne.preprocessing we'll directly import a handful of
functions from that submodule
Step1: <div class="alert alert-info"><h4>Note</h4><p>Before applying SSP (or any artifact repair strategy), be sure to observe
the artifacts in your data to make sure you choose the right repair tool.
Sometimes the right tool is no tool at all — if the artifacts are small
enough you may not even need to repair them to get good analysis results.
See `tut-artifact-overview` for guidance on detecting and
visualizing various types of artifact.</p></div>
What is SSP?
Signal-space projection (SSP)
Step2: The example data <sample-dataset> also includes an "empty room"
recording taken the same day as the recording of the subject. This will
provide a more accurate estimate of environmental noise than the projectors
stored with the system (which are typically generated during annual
maintenance and tuning). Since we have this subject-specific empty-room
recording, we'll create our own projectors from it and discard the
system-provided SSP projectors (saving them first, for later comparison with
the custom ones)
Step3: Notice that the empty room recording itself has the system-provided SSP
projectors in it — we'll remove those from the empty room file too.
Step4: Visualizing the empty-room noise
Let's take a look at the spectrum of the empty room noise. We can view an
individual spectrum for each sensor, or an average (with confidence band)
across sensors
Step5: Creating the empty-room projectors
We create the SSP vectors using ~mne.compute_proj_raw, and control
the number of projectors with parameters n_grad and n_mag. Once
created, the field pattern of the projectors can be easily visualized with
~mne.viz.plot_projs_topomap. We include the parameter
vlim='joint' so that the colormap is computed jointly for all projectors
of a given channel type; this makes it easier to compare their relative
smoothness. Note that for the function to know the types of channels in a
projector, you must also provide the corresponding ~mne.Info object
Step6: Notice that the gradiometer-based projectors seem to reflect problems with
individual sensor units rather than a global noise source (indeed, planar
gradiometers are much less sensitive to distant sources). This is the reason
that the system-provided noise projectors are computed only for
magnetometers. Comparing the system-provided projectors to the
subject-specific ones, we can see they are reasonably similar (though in a
different order) and the left-right component seems to have changed
polarity.
Step7: Visualizing how projectors affect the signal
We could visualize the different effects these have on the data by applying
each set of projectors to different copies of the ~mne.io.Raw object
using ~mne.io.Raw.apply_proj. However, the ~mne.io.Raw.plot
method has a proj parameter that allows us to temporarily apply
projectors while plotting, so we can use this to visualize the difference
without needing to copy the data. Because the projectors are so similar, we
need to zoom in pretty close on the data to see any differences
Step8: The effect is sometimes easier to see on averaged data. Here we use an
interactive feature of mne.Evoked.plot_topomap to turn projectors on
and off to see the effect on the data. Of course, the interactivity won't
work on the tutorial website, but you can download the tutorial and try it
locally
Step9: Plotting the ERP/F using evoked.plot() or evoked.plot_joint() with
and without projectors applied can also be informative, as can plotting with
proj='reconstruct', which can reduce the signal bias introduced by
projections (see tut-artifact-ssp-reconstruction below).
Example
Step10: Repairing ECG artifacts with SSP
MNE-Python provides several functions for detecting and removing heartbeats
from EEG and MEG data. As we saw in tut-artifact-overview,
~mne.preprocessing.create_ecg_epochs can be used to both detect and
extract heartbeat artifacts into an ~mne.Epochs object, which can
be used to visualize how the heartbeat artifacts manifest across the sensors
Step11: Looks like the EEG channels are pretty spread out; let's baseline-correct and
plot again
Step12: To compute SSP projectors for the heartbeat artifact, you can use
~mne.preprocessing.compute_proj_ecg, which takes a
~mne.io.Raw object as input and returns the requested number of
projectors for magnetometers, gradiometers, and EEG channels (default is two
projectors for each channel type).
~mne.preprocessing.compute_proj_ecg also returns an
Step13: The first line of output tells us that
~mne.preprocessing.compute_proj_ecg found three existing projectors
already in the ~mne.io.Raw object, and will include those in the
list of projectors that it returns (appending the new ECG projectors to the
end of the list). If you don't want that, you can change that behavior with
the boolean no_proj parameter. Since we've already run the computation,
we can just as easily separate out the ECG projectors by indexing the list of
projectors
Step14: Just like with the empty-room projectors, we can visualize the scalp
distribution
Step15: Since no dedicated ECG sensor channel was detected in the
~mne.io.Raw object, by default
~mne.preprocessing.compute_proj_ecg used the magnetometers to
estimate the ECG signal (as stated on the third line of output, above). You
can also supply the ch_name parameter to restrict which channel to use
for ECG artifact detection; this is most useful when you had an ECG sensor
but it is not labeled as such in the ~mne.io.Raw file.
The next few lines of the output describe the filter used to isolate ECG
events. The default settings are usually adequate, but the filter can be
customized via the parameters ecg_l_freq, ecg_h_freq, and
filter_length (see the documentation of
~mne.preprocessing.compute_proj_ecg for details).
.. TODO what are the cases where you might need to customize the ECG filter?
infants? Heart murmur?
Once the ECG events have been identified,
~mne.preprocessing.compute_proj_ecg will also filter the data
channels before extracting epochs around each heartbeat, using the parameter
values given in l_freq, h_freq, filter_length, filter_method,
and iir_params. Here again, the default parameter values are usually
adequate.
.. TODO should advice for filtering here be the same as advice for filtering
raw data generally? (e.g., keep high-pass very low to avoid peak shifts?
what if your raw data is already filtered?)
By default, the filtered epochs will be averaged together
before the projection is computed; this can be controlled with the boolean
average parameter. In general this improves the signal-to-noise (where
"signal" here is our artifact!) ratio because the artifact temporal waveform
is fairly similar across epochs and well time locked to the detected events.
To get a sense of how the heartbeat affects the signal at each sensor, you
can plot the data with and without the ECG projectors
Step16: Finally, note that above we passed reject=None to the
~mne.preprocessing.compute_proj_ecg function, meaning that all
detected ECG epochs would be used when computing the projectors (regardless
of signal quality in the data sensors during those epochs). The default
behavior is to reject epochs based on signal amplitude
Step17: Just like we did with the heartbeat artifact, we can compute SSP projectors
for the ocular artifact using ~mne.preprocessing.compute_proj_eog,
which again takes a ~mne.io.Raw object as input and returns the
requested number of projectors for magnetometers, gradiometers, and EEG
channels (default is two projectors for each channel type). This time, we'll
pass no_proj parameter (so we get back only the new EOG projectors, not
also the existing projectors in the ~mne.io.Raw object), and we'll
ignore the events array by assigning it to _ (the conventional way of
handling unwanted return elements in Python).
Step18: Just like with the empty-room and ECG projectors, we can visualize the scalp
distribution
Step19: Now we repeat the plot from above (with empty room and ECG projectors) and
compare it to a plot with empty room, ECG, and EOG projectors, to see how
well the ocular artifacts have been repaired
Step20: Notice that the small peaks in the first two magnetometer channels (MEG
1411 and MEG 1421) that occur at the same time as the large EEG
deflections have also been removed.
Choosing the number of projectors
In the examples above, we used 3 projectors (all magnetometer) to capture
empty room noise, and saw how projectors computed for the gradiometers failed
to capture global patterns (and thus we discarded the gradiometer
projectors). Then we computed 3 projectors (1 for each channel type) to
capture the heartbeat artifact, and 3 more to capture the ocular artifact.
How did we choose these numbers? The short answer is "based on experience" —
knowing how heartbeat artifacts typically manifest across the sensor array
allows us to recognize them when we see them, and recognize when additional
projectors are capturing something else other than a heartbeat artifact (and
thus may be removing brain signal and should be discarded).
Visualizing SSP sensor-space bias via signal reconstruction
.. sidebar | Python Code:
import os
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.preprocessing import (create_eog_epochs, create_ecg_epochs,
compute_proj_ecg, compute_proj_eog)
Explanation: Repairing artifacts with SSP
This tutorial covers the basics of signal-space projection (SSP) and shows
how SSP can be used for artifact repair; extended examples illustrate use
of SSP for environmental noise reduction, and for repair of ocular and
heartbeat artifacts.
We begin as always by importing the necessary Python modules. To save ourselves
from repeatedly typing mne.preprocessing we'll directly import a handful of
functions from that submodule:
End of explanation
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
# here we crop and resample just for speed
raw = mne.io.read_raw_fif(sample_data_raw_file).crop(0, 60)
raw.load_data().resample(100)
Explanation: <div class="alert alert-info"><h4>Note</h4><p>Before applying SSP (or any artifact repair strategy), be sure to observe
the artifacts in your data to make sure you choose the right repair tool.
Sometimes the right tool is no tool at all — if the artifacts are small
enough you may not even need to repair them to get good analysis results.
See `tut-artifact-overview` for guidance on detecting and
visualizing various types of artifact.</p></div>
What is SSP?
Signal-space projection (SSP) :footcite:UusitaloIlmoniemi1997 is a
technique for removing noise from EEG
and MEG signals by :term:projecting <projector> the signal onto a
lower-dimensional subspace. The subspace is chosen by calculating the average
pattern across sensors when the noise is present, treating that pattern as
a "direction" in the sensor space, and constructing the subspace to be
orthogonal to the noise direction (for a detailed walk-through of projection
see tut-projectors-background).
The most common use of SSP is to remove noise from MEG signals when the noise
comes from environmental sources (sources outside the subject's body and the
MEG system, such as the electromagnetic fields from nearby electrical
equipment) and when that noise is stationary (doesn't change much over the
duration of the recording). However, SSP can also be used to remove
biological artifacts such as heartbeat (ECG) and eye movement (EOG)
artifacts. Examples of each of these are given below.
Example: Environmental noise reduction from empty-room recordings
The example data <sample-dataset> was recorded on a Neuromag system,
which stores SSP projectors for environmental noise removal in the system
configuration (so that reasonably clean raw data can be viewed in real-time
during acquisition). For this reason, all the ~mne.io.Raw data in
the example dataset already includes SSP projectors, which are noted in the
output when loading the data:
End of explanation
system_projs = raw.info['projs']
raw.del_proj()
empty_room_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'ernoise_raw.fif')
# cropped to 30 sec just for speed
empty_room_raw = mne.io.read_raw_fif(empty_room_file).crop(0, 30)
Explanation: The example data <sample-dataset> also includes an "empty room"
recording taken the same day as the recording of the subject. This will
provide a more accurate estimate of environmental noise than the projectors
stored with the system (which are typically generated during annual
maintenance and tuning). Since we have this subject-specific empty-room
recording, we'll create our own projectors from it and discard the
system-provided SSP projectors (saving them first, for later comparison with
the custom ones):
End of explanation
empty_room_raw.del_proj()
Explanation: Notice that the empty room recording itself has the system-provided SSP
projectors in it — we'll remove those from the empty room file too.
End of explanation
for average in (False, True):
empty_room_raw.plot_psd(average=average, dB=False, xscale='log')
Explanation: Visualizing the empty-room noise
Let's take a look at the spectrum of the empty room noise. We can view an
individual spectrum for each sensor, or an average (with confidence band)
across sensors:
End of explanation
empty_room_projs = mne.compute_proj_raw(empty_room_raw, n_grad=3, n_mag=3)
mne.viz.plot_projs_topomap(empty_room_projs, colorbar=True, vlim='joint',
info=empty_room_raw.info)
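# Optional (not part of the original tutorial): projectors can be saved to disk
# and reloaded later; the file name here is only an example.
mne.write_proj('empty-room-proj.fif', empty_room_projs)
reloaded_projs = mne.read_proj('empty-room-proj.fif')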
Explanation: Creating the empty-room projectors
We create the SSP vectors using ~mne.compute_proj_raw, and control
the number of projectors with parameters n_grad and n_mag. Once
created, the field pattern of the projectors can be easily visualized with
~mne.viz.plot_projs_topomap. We include the parameter
vlim='joint' so that the colormap is computed jointly for all projectors
of a given channel type; this makes it easier to compare their relative
smoothness. Note that for the function to know the types of channels in a
projector, you must also provide the corresponding ~mne.Info object:
End of explanation
fig, axs = plt.subplots(2, 3)
for idx, _projs in enumerate([system_projs, empty_room_projs[3:]]):
mne.viz.plot_projs_topomap(_projs, axes=axs[idx], colorbar=True,
vlim='joint', info=empty_room_raw.info)
Explanation: Notice that the gradiometer-based projectors seem to reflect problems with
individual sensor units rather than a global noise source (indeed, planar
gradiometers are much less sensitive to distant sources). This is the reason
that the system-provided noise projectors are computed only for
magnetometers. Comparing the system-provided projectors to the
subject-specific ones, we can see they are reasonably similar (though in a
different order) and the left-right component seems to have changed
polarity.
End of explanation
mags = mne.pick_types(raw.info, meg='mag')
for title, projs in [('system', system_projs),
('subject-specific', empty_room_projs[3:])]:
raw.add_proj(projs, remove_existing=True)
fig = raw.plot(proj=True, order=mags, duration=1, n_channels=2)
fig.subplots_adjust(top=0.9) # make room for title
fig.suptitle('{} projectors'.format(title), size='xx-large', weight='bold')
Explanation: Visualizing how projectors affect the signal
We could visualize the different effects these have on the data by applying
each set of projectors to different copies of the ~mne.io.Raw object
using ~mne.io.Raw.apply_proj. However, the ~mne.io.Raw.plot
method has a proj parameter that allows us to temporarily apply
projectors while plotting, so we can use this to visualize the difference
without needing to copy the data. Because the projectors are so similar, we
need to zoom in pretty close on the data to see any differences:
End of explanation
events = mne.find_events(raw, stim_channel='STI 014')
event_id = {'auditory/left': 1}
# NOTE: appropriate rejection criteria are highly data-dependent
reject = dict(mag=4000e-15, # 4000 fT
grad=4000e-13, # 4000 fT/cm
eeg=150e-6, # 150 µV
eog=250e-6) # 250 µV
# time range where we expect to see the auditory N100: 50-150 ms post-stimulus
times = np.linspace(0.05, 0.15, 5)
epochs = mne.Epochs(raw, events, event_id, proj='delayed', reject=reject)
fig = epochs.average().plot_topomap(times, proj='interactive')
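# A quick sketch (not in the original tutorial): the averaged response can also
# be compared with and without projectors using ordinary butterfly plots.
evoked = epochs.average()
evoked.plot(proj=False)
evoked.plot(proj=True)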
Explanation: The effect is sometimes easier to see on averaged data. Here we use an
interactive feature of mne.Evoked.plot_topomap to turn projectors on
and off to see the effect on the data. Of course, the interactivity won't
work on the tutorial website, but you can download the tutorial and try it
locally:
End of explanation
# pick some channels that clearly show heartbeats and blinks
regexp = r'(MEG [12][45][123]1|EEG 00.)'
artifact_picks = mne.pick_channels_regexp(raw.ch_names, regexp=regexp)
raw.plot(order=artifact_picks, n_channels=len(artifact_picks))
Explanation: Plotting the ERP/F using evoked.plot() or evoked.plot_joint() with
and without projectors applied can also be informative, as can plotting with
proj='reconstruct', which can reduce the signal bias introduced by
projections (see tut-artifact-ssp-reconstruction below).
Example: EOG and ECG artifact repair
Visualizing the artifacts
As mentioned in the ICA tutorial <tut-artifact-ica>, an important
first step is visualizing the artifacts you want to repair. Here they are in
the raw data:
End of explanation
ecg_evoked = create_ecg_epochs(raw).average()
ecg_evoked.plot_joint()
Explanation: Repairing ECG artifacts with SSP
MNE-Python provides several functions for detecting and removing heartbeats
from EEG and MEG data. As we saw in tut-artifact-overview,
~mne.preprocessing.create_ecg_epochs can be used to both detect and
extract heartbeat artifacts into an ~mne.Epochs object, which can
be used to visualize how the heartbeat artifacts manifest across the sensors:
End of explanation
ecg_evoked.apply_baseline((None, None))
ecg_evoked.plot_joint()
Explanation: Looks like the EEG channels are pretty spread out; let's baseline-correct and
plot again:
End of explanation
projs, events = compute_proj_ecg(raw, n_grad=1, n_mag=1, n_eeg=1, reject=None)
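# A minimal sketch (not from the original tutorial): instead of reject=None,
# epochs with excessive peak-to-peak amplitude can be excluded when computing
# the ECG projectors; these thresholds mirror the defaults described below.
reject_ecg = dict(grad=2000e-13, mag=3000e-15, eeg=50e-6, eog=250e-6)
projs_strict, _ = compute_proj_ecg(raw, n_grad=1, n_mag=1, n_eeg=1,
                                   reject=reject_ecg)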
Explanation: To compute SSP projectors for the heartbeat artifact, you can use
~mne.preprocessing.compute_proj_ecg, which takes a
~mne.io.Raw object as input and returns the requested number of
projectors for magnetometers, gradiometers, and EEG channels (default is two
projectors for each channel type).
~mne.preprocessing.compute_proj_ecg also returns an :term:events
array containing the sample numbers corresponding to the peak of the
R wave <https://en.wikipedia.org/wiki/QRS_complex>__ of each detected
heartbeat.
End of explanation
ecg_projs = projs[3:]
print(ecg_projs)
Explanation: The first line of output tells us that
~mne.preprocessing.compute_proj_ecg found three existing projectors
already in the ~mne.io.Raw object, and will include those in the
list of projectors that it returns (appending the new ECG projectors to the
end of the list). If you don't want that, you can change that behavior with
the boolean no_proj parameter. Since we've already run the computation,
we can just as easily separate out the ECG projectors by indexing the list of
projectors:
End of explanation
mne.viz.plot_projs_topomap(ecg_projs, info=raw.info)
Explanation: Just like with the empty-room projectors, we can visualize the scalp
distribution:
End of explanation
raw.del_proj()
for title, proj in [('Without', empty_room_projs), ('With', ecg_projs)]:
raw.add_proj(proj, remove_existing=False)
fig = raw.plot(order=artifact_picks, n_channels=len(artifact_picks))
fig.subplots_adjust(top=0.9) # make room for title
fig.suptitle('{} ECG projectors'.format(title), size='xx-large',
weight='bold')
Explanation: Since no dedicated ECG sensor channel was detected in the
~mne.io.Raw object, by default
~mne.preprocessing.compute_proj_ecg used the magnetometers to
estimate the ECG signal (as stated on the third line of output, above). You
can also supply the ch_name parameter to restrict which channel to use
for ECG artifact detection; this is most useful when you had an ECG sensor
but it is not labeled as such in the ~mne.io.Raw file.
The next few lines of the output describe the filter used to isolate ECG
events. The default settings are usually adequate, but the filter can be
customized via the parameters ecg_l_freq, ecg_h_freq, and
filter_length (see the documentation of
~mne.preprocessing.compute_proj_ecg for details).
.. TODO what are the cases where you might need to customize the ECG filter?
infants? Heart murmur?
Once the ECG events have been identified,
~mne.preprocessing.compute_proj_ecg will also filter the data
channels before extracting epochs around each heartbeat, using the parameter
values given in l_freq, h_freq, filter_length, filter_method,
and iir_params. Here again, the default parameter values are usually
adequate.
.. TODO should advice for filtering here be the same as advice for filtering
raw data generally? (e.g., keep high-pass very low to avoid peak shifts?
what if your raw data is already filtered?)
By default, the filtered epochs will be averaged together
before the projection is computed; this can be controlled with the boolean
average parameter. In general this improves the signal-to-noise (where
"signal" here is our artifact!) ratio because the artifact temporal waveform
is fairly similar across epochs and well time locked to the detected events.
To get a sense of how the heartbeat affects the signal at each sensor, you
can plot the data with and without the ECG projectors:
End of explanation
eog_evoked = create_eog_epochs(raw).average()
eog_evoked.apply_baseline((None, None))
eog_evoked.plot_joint()
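# Sketch of the lower-level route (not in the original tutorial): build the
# heartbeat epochs yourself and compute projectors from them directly.
ecg_epochs = create_ecg_epochs(raw, reject=None)
ecg_projs_manual = mne.compute_proj_epochs(ecg_epochs, n_grad=1, n_mag=1,
                                           n_eeg=1)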
Explanation: Finally, note that above we passed reject=None to the
~mne.preprocessing.compute_proj_ecg function, meaning that all
detected ECG epochs would be used when computing the projectors (regardless
of signal quality in the data sensors during those epochs). The default
behavior is to reject epochs based on signal amplitude: epochs with
peak-to-peak amplitudes exceeding 50 µV in EEG channels, 250 µV in EOG
channels, 2000 fT/cm in gradiometer channels, or 3000 fT in magnetometer
channels. You can change these thresholds by passing a dictionary with keys
eeg, eog, mag, and grad (though be sure to pass the threshold
values in volts, teslas, or teslas/meter). Generally, it is a good idea to
reject such epochs when computing the ECG projectors (since presumably the
high-amplitude fluctuations in the channels are noise, not reflective of
brain activity); passing reject=None above was done simply to avoid the
dozens of extra lines of output (enumerating which sensor(s) were responsible
for each rejected epoch) from cluttering up the tutorial.
<div class="alert alert-info"><h4>Note</h4><p>`~mne.preprocessing.compute_proj_ecg` has a similar parameter
``flat`` for specifying the *minimum* acceptable peak-to-peak amplitude
for each channel type.</p></div>
While ~mne.preprocessing.compute_proj_ecg conveniently combines
several operations into a single function, MNE-Python also provides functions
for performing each part of the process. Specifically:
mne.preprocessing.find_ecg_events for detecting heartbeats in a
~mne.io.Raw object and returning a corresponding :term:events
array
mne.preprocessing.create_ecg_epochs for detecting heartbeats in a
~mne.io.Raw object and returning an ~mne.Epochs object
mne.compute_proj_epochs for creating projector(s) from any
~mne.Epochs object
See the documentation of each function for further details.
Repairing EOG artifacts with SSP
Once again let's visualize our artifact before trying to repair it. We've
seen above the large deflections in frontal EEG channels in the raw data;
here is how the ocular artifacts manifests across all the sensors:
End of explanation
eog_projs, _ = compute_proj_eog(raw, n_grad=1, n_mag=1, n_eeg=1, reject=None,
no_proj=True)
Explanation: Just like we did with the heartbeat artifact, we can compute SSP projectors
for the ocular artifact using ~mne.preprocessing.compute_proj_eog,
which again takes a ~mne.io.Raw object as input and returns the
requested number of projectors for magnetometers, gradiometers, and EEG
channels (default is two projectors for each channel type). This time, we'll
pass no_proj parameter (so we get back only the new EOG projectors, not
also the existing projectors in the ~mne.io.Raw object), and we'll
ignore the events array by assigning it to _ (the conventional way of
handling unwanted return elements in Python).
End of explanation
mne.viz.plot_projs_topomap(eog_projs, info=raw.info)
Explanation: Just like with the empty-room and ECG projectors, we can visualize the scalp
distribution:
End of explanation
for title in ('Without', 'With'):
if title == 'With':
raw.add_proj(eog_projs)
fig = raw.plot(order=artifact_picks, n_channels=len(artifact_picks))
fig.subplots_adjust(top=0.9) # make room for title
fig.suptitle('{} EOG projectors'.format(title), size='xx-large',
weight='bold')
Explanation: Now we repeat the plot from above (with empty room and ECG projectors) and
compare it to a plot with empty room, ECG, and EOG projectors, to see how
well the ocular artifacts have been repaired:
End of explanation
evoked_eeg = epochs.average().pick('eeg')
evoked_eeg.del_proj().add_proj(ecg_projs).add_proj(eog_projs)
fig, axes = plt.subplots(1, 3, figsize=(8, 3), squeeze=False)
for ii in range(axes.shape[0]):
axes[ii, 0].get_shared_y_axes().join(*axes[ii])
for pi, proj in enumerate((False, True, 'reconstruct')):
evoked_eeg.plot(proj=proj, axes=axes[:, pi], spatial_colors=True)
if pi == 0:
for ax in axes[:, pi]:
parts = ax.get_title().split('(')
ax.set(ylabel=f'{parts[0]} ({ax.get_ylabel()})\n'
f'{parts[1].replace(")", "")}')
axes[0, pi].set(title=f'proj={proj}')
for text in list(axes[0, pi].texts):
text.remove()
plt.setp(axes[1:, :].ravel(), title='')
plt.setp(axes[:, 1:].ravel(), ylabel='')
plt.setp(axes[:-1, :].ravel(), xlabel='')
mne.viz.tight_layout()
Explanation: Notice that the small peaks in the first two magnetometer channels (MEG
1411 and MEG 1421) that occur at the same time as the large EEG
deflections have also been removed.
Choosing the number of projectors
In the examples above, we used 3 projectors (all magnetometer) to capture
empty room noise, and saw how projectors computed for the gradiometers failed
to capture global patterns (and thus we discarded the gradiometer
projectors). Then we computed 3 projectors (1 for each channel type) to
capture the heartbeat artifact, and 3 more to capture the ocular artifact.
How did we choose these numbers? The short answer is "based on experience" —
knowing how heartbeat artifacts typically manifest across the sensor array
allows us to recognize them when we see them, and recognize when additional
projectors are capturing something else other than a heartbeat artifact (and
thus may be removing brain signal and should be discarded).
Visualizing SSP sensor-space bias via signal reconstruction
.. sidebar:: SSP reconstruction
Internally, the reconstruction is performed by effectively using a
minimum-norm source localization to a spherical source space with the
projections accounted for, and then projecting the source-space data
back out to sensor space.
Because SSP performs an orthogonal projection, any spatial component in the
data that is not perfectly orthogonal to the SSP spatial direction(s) will
have its overall amplitude reduced by the projection operation. In other
words, SSP typically introduces some amount of amplitude reduction bias in
the sensor space data.
When performing source localization of M/EEG data, these projections are
properly taken into account by being applied not just to the M/EEG data
but also to the forward solution, and hence SSP should not bias the estimated
source amplitudes. However, for sensor space analyses, it can be useful to
visualize the extent to which SSP projection has biased the data. This can be
explored by using proj='reconstruct' in evoked plotting functions, for
example via evoked.plot() <mne.Evoked.plot>, here restricted to just
EEG channels for speed:
End of explanation |
941 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
R Programming
<a name="top"></a>
Vahid Mirjalili, Data Scientist
R Data Types
Atomic classes
Object attributes
Create Vectors
Explicit Coercion
Matrices
Lists
Factors
Missing Values
Data Frames
Names
Subsetting R objects
Subsetting lists
Partial matching
Removing missing values
Reading and Writing Data
Reading data
Writing data
Reading large datasets
Reading and writing in textual format
Connection interfaces
Flow Control
Functions
Special argument "..."
Scoping Rules
Dates and Times
Format Strings to dates
Apply Functions
Shortcut apply functions
tapply() and split()
Random Number Generation
Gaussian Noise
R Profiler
Using Rprof()
Optimization in R
<a name="sec1_datatypes"></a>
1. R Data Types
top
<a name="atomic_classes"></a>
Atomic classes in R
Step1: <a name="matrices"></a>
Matrices
To create a matrix
Step2: <a name="lists"></a>
Lists
Lists are special vectors that can store elements from different classes
Create a list by list()
Step3: <a name="factors"></a>
Factors
Factors are a special data type to represent categorical variables, such as gender ('male', 'female'), pattern classes ('1', '2') ..
Create factors by factor()
Access the labels
Step4: <a name="missing_vals"></a>
Missing values
Missing values are denoted by NA or NaN
NaN is for undefined mathematical operations such as 0/0. NaN is a sub-type of NA
is.na() and is.nan() to test whether vector elements are missing
Step5: <a name="dataframes"></a>
Data Frames
To store tabular data. One of the most useful things in R.
Data frames can store different object classes in different columns, unlike matrices that have to have the same elements class.
special attributes
Step6: <a name="names"></a>
Names
R objects can have a name. Useful for self describing data, so each vector can be accesed by an index or by its object names
access and change the names by names()
names in list
Step7: <a name="subset"></a>
2. Subsetting R objects
Top
To extract subsets of data (objects).
subsetting with [ ] (given numeric index or logical index)
subsetting with double bracket for lists
Step8: For matrices, you can acess rows and columns by [row index, column index]
<b> By default, subsetting drops the column and row index, to avoid that use m[1,2, drop=F] </b>
This way, the output would still be a matrix.
Step9: <a name="subset_lists"></a>
Subsetting lists
Using double bracket to subset lists.
Only extract one element of a list at time. Therefore, no vector inside double bracket. (x[[1
Step10: Use single bracket [ ] To extract multiple elements of a list
Step11: Nested list
Step12: <a name="partial_match"></a>
Partial Matching
Partial matching works for subsetting objects with [[ ]] and $
Step13: <a name="remove_nas"></a>
Removing missing values
In order to subset non-missing values onlt, use !is.na()
Complete cases of a data.frame, a matrix, or multiple vectors complete.cases()
Step14: <a name="read_write_data"></a>
3. Reading and Writing Data
Top
<a name="read_data"></a>
Reading data
To read in tabular data, use read.table() or read.csv()
readLines() to read lines of a text file
source() for reading R codes (inverse of dump())
dget() for reading R codes (inverse of dput())
load() for reading a saved workspace
unserialize() for reading single R objects in binary form
Attributes of read.table()
file
Step15: <a name="flow_control"></a> Top
4. Flow Control
if, else, else if
for
while
repeat an infinite loop, (to stop use break)
break break the execution of a loop
next skip one iteration
return exit a function
Step16: <a name="functions"></a>
5. Functions
Top
Functions are first class objects.
We can have nested functions, functions inside other functions.
Function arguments can have default values. Function arguments can be mapped positionally, or by name.
The arguments that don't have default values, are required when function is called.
Note
Step17: <a name="special_arg"></a>
special argument "..."
Three dots indicates variable number of arguments.
It is usually used when extending another function
Step18: ... is also useful when the number of input arguments is not known in advance
Step19: <a name="scoping"></a>
6. Scoping Rules
Top
R binds a value to every symbol.
If you define a new objects with a name that is used previously in R
Step20: <a name="date_time"></a>
7. Dates and Times
Top
dates are represented by Date class
times are represented by POSIXct or POSIXlt classes
POSIXct is a very large integer; It uses a data frame to store time
POSIXlt store other information, such as day of month, day of year, ...
internal reference time is 1970-01-01
weekdays() what day of the week that time/date is
months() gives the month name
quarters() gives the quarter number
objects of the same class can be used in mathematical operations
Step21: <a name="format_strptime"></a>
Format Strings to dates
To convert a string in an arbitrary format to a Date format
Step22: <a name="apply_functions"></a>
8. Apply Functions
Top
Apply functions can be used to apply a function over elements of a vector, list or data frame.
They make the codes shorter, and faster through vectorized operations.
lapply() loop over elements of a list, and evaluate a function on each element.
if input is not a list, it will be corced to a list (if possible)
sapply() same as lapply, but also simplifies the result
apply() apply a function over margins of an array
tapply() apply a function over subsets of a vector
mapply() multivariate version of lapply
Step23: <a name="shortcut_apply"></a>
Shortcut functions
For means and sums of a matrix, these shortcut functions perform much faster on big matrices.
rowSums() equivalent to apply(x, 1, sum)
colSums() equivalent to apply(x, 2, sum)
rowMeans() equivalent to apply(x, 1, mean)
colMeans() equivalent to apply(x, 2, mean)
Step24: <a name="tapply_split"></a>
tapply() and split()
tapplly() can be used to apply a function on each sub-group (determied by a factor vector).
Step25: split() is not a loop function, but it can be used to split a vector based on a given factor vector.
The output is a list.
Step26: <a name="random"></a>
9. Random Number Generation
Top
rnorm() generates normal random number variates with a given mean and SD
rnorm(n, mean=0, sd=1)
dnorm() returns the probablity density function at a point x, with given mean and CD
pnorm() returns the cumulative distribution function at point x
qnorm() quantile
rpois() generates random numbers with Poisson distribution with a given rate (lambda)
rbinom() generates binomial random numbers with a given probability
Step27: <a name="gauss_noise"></a>
Gaussian Noise
Step28: Use set.seed() to make the random numbers reproducible
<a name="general_linear"></a>
Simulate random numbers from a Generalized Linear Model
Assume $y \approx Poisson(\mu)$
and $log(\mu) = \beta_0 + \beta_1 x$
Step29: <a name="random_sample"></a>
Random Sampling
sample() to draw a random sample from a set
replace=TRUE
replace=FALSE
Step30: <a id="rprofiler"></a>
10. R Profiler
First design your code, unit-test it, then optimize it
Using system.time() to measure how much time it takes to perform a step
returns an object of class proc_time
user time --> CPU time
elapsed time --> wallclock time
Step31: Measureing multiple expressions
Step32: <a id="using-rprof"></a>
Using Rprof()
summaryRprof() will tabulates the output
Step33: <a id="r-optim"></a>
11. Optimization in R
optim(), nlm(), optimize()
Pass them a function and a vector of parameters.
We can hold some parameters fixed, by fixed=c(F, T)
Example
Step34: <a id="plot-loglik"><a>
Plot the log-likelihood function | Python Code:
x <- 0:6
print(x)
print(as.logical(x))
print(as.complex(x))
Explanation: R Programming
<a name="top"></a>
Vahid Mirjalili, Data Scientist
R Data Types
Atomic classes
Object attributes
Create Vectors
Explicit Coercion
Matrices
Lists
Factors
Missing Values
Data Frames
Names
Subsetting R objects
Subsetting lists
Partial matching
Removing missing values
Reading and Writing Data
Reading data
Writing data
Reading large datasets
Reading and writing in textual format
Connection interfaces
Flow Control
Functions
Special argument "..."
Scoping Rules
Dates and Times
Format Strings to dates
Apply Functions
Shortcut apply functions
tapply() and split()
Random Number Generation
Gaussian Noise
R Profiler
Using Rprof()
Optimization in R
<a name="sec1_datatypes"></a>
1. R Data Types
top
<a name="atomic_classes"></a>
Atomic classes in R:
character
numeric
integer (L)
complex
logical (TRUE/FALSE)
<b> Special numbers </b> Inf and NaN: 1/0 = Inf and 1/Inf = 0; 0/0=NaN
<a name="attribs"></a>
Object attributes:
names, dimnames
dimensions (for matrices and arrays)
class
length
Other user defined attributes
<b> attributes() </b> to get list of attributes of an object
<a name="cr_vecs"></a>
Creating vectors
vector() : to create an empty vector
vector("numeric", length=10)
c('a', 'b') : to concatenate multiple objects into a single vector
Mixing objects from different classes: c(1.5, 'a') : coercion
<a name="coercion"></a>
Explicit coercion
as.character()
as.numeric()
as.logical()
as.complex()
End of explanation
m <- matrix(nrow=2, ncol=3)
print(m)
attributes(m)
# Method1: create a matrix by matrix()
m1 <- matrix(1:6, nrow=2, ncol=3)
print(m1)
# Method2: change the dimension of a vector
m2 <- 1:6
dim(m2) = c(2,3)
print(m2)
x <- 1:4
y <- 6:9
cbind(x,y)
rbind(x,y)
Explanation: <a name="matrices"></a>
Matrices
To create a matrix:
create a matrix with the matrix() function
change the dimension of a vector with dim()
bind vectors and matrices: cbind() and rbind()
End of explanation
x <- list(23, "abc", 1+5i, TRUE)
print(x)
Explanation: <a name="lists"></a>
Lists
Lists are special vectors that can store elements from different classes
Create a list by list()
End of explanation
## create a vector of factors
xf <- factor(c("male", "female", "female", "female", "male", "female", "female", "male", "female"))
str(xf)
## print levels of xf
levels(xf)
unclass(xf)
table(xf)
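## not in the original notebook: factor levels can also be ordered explicitly
xf2 <- factor(c("male", "female", "male"), levels=c("male", "female"))
levels(xf2)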
Explanation: <a name="factors"></a>
Factors
Factors are a special data type to represent categorical variables, such as gender ('male', 'female'), pattern classes ('1', '2') ..
Create factors by factor()
Access the labels: levels()
Make a table view of number of items in each factor: table()
Factor levels are ordered by default in alphabetic order. In order to directly specify the order: factor(vec, levels=c("male", "female"))
End of explanation
x <- c(1,2,3, NA, 6, NaN, 8)
is.na(x)
is.nan(x)
Explanation: <a name="missing_vals"></a>
Missing values
Missing values are denoted by NA or NaN
NaN is for undefined mathematical operations such as 0/0. NaN is a sub-type of NA
is.na() and is.nan() to test whether vector elements are missing
End of explanation
df <- data.frame(foo=11:14, bar=c(T, T, F, T))
df
nrow(df)
ncol(df)
Explanation: <a name="dataframes"></a>
Data Frames
To store tabular data. One of the most useful things in R.
Data frames can store different object classes in different columns, unlike matrices that have to have the same elements class.
special attributes: row.names, col.names
Reading tabular data from files: read.table() and read.csv()
Convert a data.frame to a matrix: data.matrix() (coercion may happen)
End of explanation
x <- 1:3
names(x) <- c("foo", "bar", "norf")
x
names(x)
Explanation: <a name="names"></a>
Names
R objects can have a name. Useful for self-describing data, so each vector can be accessed by an index or by its object names
access and change the names by names()
names in list: x <- list(a=1, b=2, c=3)
names in matrices: dimnames()
End of explanation
## subsetting vectors: numeric or logical index
x <- c("a", "b", "c", "d", "h")
x[1:3]
x[x > "c"]
Explanation: <a name="subset"></a>
2. Subsetting R objects
Top
To extract subsets of data (objects).
subsetting with [ ] (given numeric index or logical index)
subsetting with double bracket for lists: [[ ]] (only to extract a single element of a list, meaning no vector inside double bracket)
subsetting with $ to extract elements of lists and data.frames by their names
End of explanation
## subsetting matrices
m1[1,2]
m2[,2]
m2[,2, drop=F]
Explanation: For matrices, you can access rows and columns by [row index, column index]
<b> By default, subsetting drops the column and row index, to avoid that use m[1,2, drop=F] </b>
This way, the output would still be a matrix.
End of explanation
xl <- list(foo=1:5, bar=0.7, norf="ABcDE")
xl[[1]]
xl[["bar"]]
xl["bar"]
xl$norf
Explanation: <a name="subset_lists"></a>
Subsetting lists
Using double bracket to subset lists.
Only extract one element of a list at a time. Therefore, no vector inside double bracket. (x[[1:3]] ==> error)
End of explanation
xl[c(1,3)]
Explanation: Use single bracket [ ] To extract multiple elements of a list:
End of explanation
xnest <- list(a=list(10, 12, 13, 14), b=c(4:7))
# to extract 13:
xnest[["a"]][[3]]
Explanation: Nested list
End of explanation
xp <- list(abc=c(1:8), fgh=c(T, F, F, T))
xp$a
xp[["f"]]
xp[["f", exact=F]]
Explanation: <a name="partial_match"></a>
Partial Matching
Partial matching works for subsetting objects with [[ ]] and $
End of explanation
x <- c(1,2,NA, 5, NaN, 8, NaN, 9)
x[!is.na(x)]
y <- c(3, 5, 6, NA, NA, 8, 9, 10)
complete.cases(x,y)
Explanation: <a name="remove_nas"></a>
Removing missing values
In order to subset only non-missing values, use !is.na()
Complete cases of a data.frame, a matrix, or multiple vectors complete.cases()
End of explanation
str(file)
con <- gzfile("data/hw1.csv.gz", "r")
xgz <- readLines(con, 10) ## read the first 1 lines
xgz[1]
## read from a web-page
con <- url("http://www.jhsph.edu", "r")
x <- readLines(con)
head(x)
Explanation: <a name="read_write_data"></a>
3. Reading and Writing Data
Top
<a name="read_data"></a>
Reading data
To read in tabular data, use read.table() or read.csv()
readLines() to read lines of a text file
source() for reading R codes (inverse of dump())
dget() for reading R codes (inverse of dput())
load() for reading a saved workspace
unserialize() for reading single R objects in binary form
Attributes of read.table()
file: name of the file
header: logical (T/F) indicating if the file has a header or not
sep: a string that indicates how columns are separated
colClasses: a character vector indicating the class of each column in the data file
nrows: the number of rows in the dataset
comment.char: a character string indicating the comment character used in the file
skip: the number of lines to be skipped from the begining
stringAsFactors: logical, should character variables be coded as factors?
<a name="write_data"></a>
Writing data
write.table()
writeLines()
dump()
dput()
save()
serialize()
<a name="read_large"></a>
Reading large datasets
For efficiently reading in large datasets:
first identify column classes
ignore the comment character if you know there is no comment in the file: comment.char=""
You can automaucally specify column classes by reaading the first 10 rows of the file.
example:
init10 <- read.table("large_dataset.txt", nrows=10)
classes <- sapply(init10, class)
datAll <- read.table("large_dataset.txt", colClasses=classes)
If you know the number of rows, it is more memory efficient to set nrows as well
<a name="textual_format"></a>
Reading and writing in textual format
Textual format has the advantage of readability and storing element class information.
dput() for a single object
dump() for multiple objects
```
example for dupt:
y <- data.frame(a=c(1,2), b=c("HH", "WW"))
dput(y, file="data_y.R")
new.y <- dget("data_t.R")
```
```
example for dput: multiple objects
x <- "foo"
dump(c("x", "y"), file="data_xy.R")
rm(x,y)
source("data_xy.R")
```
<a name="connection"></a>
Connection interfaces
Data can be read from other sources by creating a connection.
file(): opens a connection to a file
gzfile(): opens a connection to a gz compressed file
bzfile(): opens a connection to a bz compressed file
url(): opens a connection to a web-page
End of explanation
x <- 5
y <- if(x>3) {
10
}else{
0
}
print(y)
## three ways to iterate through a vector:
x <- c("a", "b", "c")
for (i in 1:length(x)) {
print(x[i])
}
for (i in seq_along(x)) print(x[i])
for (letter in x) print(letter)
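## not in the original notebook: while / repeat with next and break
i <- 0
while (i < 3) {
    i <- i + 1
    if (i == 2) next
    print(i)
}
repeat {
    i <- i + 1
    if (i > 5) break
}
print(i)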
Explanation: <a name="flow_control"></a> Top
4. Flow Control
if, else, else if
for
while
repeat an infinite loop, (to stop use break)
break break the execution of a loop
next skip one iteration
return exit a function
End of explanation
args(lm)
Explanation: <a name="functions"></a>
5. Functions
Top
Functions are first class objects.
We can have nested functions, functions inside other functions.
Function arguments can have default values. Function arguments can be mapped positionally, or by name.
The arguments that don't have default values, are required when function is called.
Note: Split your functions so that each function does a single task.
End of explanation
myplot <- function(x, y, mytype="l", ...) {
plot(x, y, type=mytype)
}
Explanation: <a name="special_arg"></a>
special argument "..."
Three dots indicates variable number of arguments.
It is usually used when extending another function:
End of explanation
args(paste)
args(cat)
paste("a", "b", sep=":")
# partial matching doesn't work after ...
paste("a", "b", se=":")
Explanation: ... is also useful when the number of input arguments is not known in advance:
End of explanation
search()
make.Gaussian <- function(mu, sigma) {
gauss <- function(x) {
exp(-((x-mu)/sigma)^2)
}
gauss
}
mygauss_0 <- make.Gaussian(mu=0, sigma=1)
mygauss_1 <- make.Gaussian(mu=1, sigma=1)
mygauss_0
library(IRdisplay)
x <- seq(-3, 5, by=0.1)
png("figs/mygauss.png")
par(mar=c(4.7, 4.7, 0.6, 0.6))
plot(x, mygauss_0(x), type="l", col=rgb(0.2, 0.4, 0.7, 0.5), lwd=6, xlab="x", ylab="Gaussian Distn.", cex.axis=1.5, cex.lab=1.5)
lines(x, mygauss_1(x), col=rgb(0.7, 0.3, 0.2, 0.5), lwd=6)
dev.off()
PNG("figs/mygauss.png")
Explanation: <a name="scoping"></a>
6. Scoping Rules
Top
R binds a value to every symbol.
If you define a new object with a name that is used previously in R:
R uses lexical scoping
R first searches for a free variable in the environment in which the function is defined
If no match is found, it searches in the parent environment
The global environment is the workspace.
When a free variable is found, R searches through the search space, from the workspace (global environment) until the symbol is found or it hits the empty environment.
End of explanation
t <- as.Date("2014-05-11")
print(t)
unclass(t) ## number of days since 1970-01-01
x <- Sys.time()
print(x)
p <- as.POSIXlt(x)
names(unclass(p))
p$hour
Explanation: <a name="date_time"></a>
7. Dates and Times
Top
dates are represented by Date class
times are represented by POSIXct or POSIXlt classes
POSIXct is a very large integer; It uses a data frame to store time
POSIXlt store other information, such as day of month, day of year, ...
internal reference time is 1970-01-01
weekdays() what day of the week that time/date is
months() gives the month name
quarters() gives the quarter number
objects of the same class can be used in mathematical operations
End of explanation
dstring <- c("January 01 2012, 10:40", "March 05 2013, 11:20")
strptime(dstring, "%B %d %Y, %H:%M")
## the formats can be found in help page ?strptime
Explanation: <a name="format_strptime"></a>
Format Strings to dates
To convert a string in an arbitrary format to a Date format:
End of explanation
## Example for lapply()
x <- list(a=1:20, b=rnorm(10, 2))
lapply(x, mean)
sapply(x, mean)
## Example for apply()
x <- matrix(rnorm(50, mean=1), nrow=10, ncol=5)
apply(x, 2, mean)
apply(x, 1, mean)
Explanation: <a name="apply_functions"></a>
8. Apply Functions
Top
Apply functions can be used to apply a function over elements of a vector, list or data frame.
They make the code shorter, and faster through vectorized operations.
lapply() loop over elements of a list, and evaluate a function on each element.
if the input is not a list, it will be coerced to a list (if possible)
sapply() same as lapply, but also simplifies the result
apply() apply a function over margins of an array
tapply() apply a function over subsets of a vector
mapply() multivariate version of lapply
End of explanation
# Benchmark: Comparing apply() and rowMeans()
library(rbenchmark)
a <- matrix(rnorm(100000), nrow=1000)
benchmark(apply(a, 1, mean))
benchmark(rowMeans(a))
Explanation: <a name="shortcut_apply"></a>
Shortcut functions
For means and sums of a matrix, these shortcut functions perform much faster on big matrices.
rowSums() equivalent to apply(x, 1, sum)
colSums() equivalent to apply(x, 2, sum)
rowMeans() equivalent to apply(x, 1, mean)
colMeans() equivalent to apply(x, 2, mean)
End of explanation
## Example for tapply()
x <- c(rnorm(10, mean=3), runif(10, min=1, max=3), rnorm(10, mean=5, sd=2))
f <- gl(3, 10)
tapply(x, f, mean)
Explanation: <a name="tapply_split"></a>
tapply() and split()
tapply() can be used to apply a function on each sub-group (determined by a factor vector).
End of explanation
split(x, f)
library(datasets)
s <- split(airquality, airquality$Month)
sapply(s, function(x) colMeans(x[,c("Ozone", "Solar.R", "Wind", "Temp")], na.rm=T))
Explanation: split() is not a loop function, but it can be used to split a vector based on a given factor vector.
The output is a list.
End of explanation
x <- rnorm(n=10, mean=20, sd=5)
x
str(rpois)
Explanation: <a name="random"></a>
9. Random Number Generation
Top
rnorm() generates normal random number variates with a given mean and SD
rnorm(n, mean=0, sd=1)
dnorm() returns the probability density function at a point x, with given mean and SD
pnorm() returns the cumulative distribution function at point x
qnorm() quantile
rpois() generates random numbers with Poisson distribution with a given rate (lambda)
rbinom() generates binomial random numbers with a given probability
End of explanation
## adding Gaussian noise to sin(x)
x <- seq(0, 2*pi, by=0.02)
y <- sapply(sin(x), function(x) rnorm(n=1, mean=sin(x), sd=0.2))
## the same as y <- sin(x) + rnorm(n=length(x), mean=0, sd=0.2)
par(mar=c(4.7, 4.7, 0.7, 0.7))
plot(x, y, pch=19, col=rgb(0.2, 0.3, 0.7, 0.6), cex=1.3, cex.lab=1.6, cex.axis=1.6)
Explanation: <a name="gauss_noise"></a>
Gaussian Noise
End of explanation
x <- rnorm(200)
log.mu <- 0.5 + 0.3 * x
y <- rpois(200, exp(log.mu))
plot(x, y, pch=19, col=rgb(0.2, 0.3, 0.7, 0.6), cex=1.5, cex.axis=1.6, cex.lab=1.6)
Explanation: Use set.seed() to make the random numbers reproducible
<a name="general_linear"></a>
Simulate random numbers from a Generalized Linear Model
Assume $y \approx Poisson(\mu)$
and $log(\mu) = \beta_0 + \beta_1 x$
End of explanation
str(sample)
sample(letters, 5)
sample(1:10, size=6, replace=T)
Explanation: <a name="random_sample"></a>
Random Sampling
sample() to draw a random sample from a set
replace=TRUE
replace=FALSE
End of explanation
system.time(rnorm(n=100000, mean=0, sd=1))
system.time(readLines("http://www.jhsph.edu"))
Explanation: <a id="rprofiler"></a>
10. R Profiler
First design your code, unit-test it, then optimize it
Using system.time() to measure how much time it takes to perform a step
returns an object of class proc_time
user time --> CPU time
elapsed time --> wallclock time
End of explanation
system.time({
n <- 1000
r <- numeric(n)
for (i in 1:n) {
x = rnorm(i)
r[i] = mean(x)
}
})
# comparing time of vectorized operations vs. for loops
n=1000000
x <- rnorm(n, mean=0, sd=1)
system.time(sum(x))
system.time({
s = 0
for (i in 1:n) {
s = s + x[i]
}
})
Explanation: Measuring multiple expressions:
End of explanation
Rprof(filename="data/rprof_test.dat")
for (i in 1:1000) {
x <- seq(0, 5, by=0.01)
y <- 1.5+x + rnorm(n=length(x), mean=0, sd=1)
lm(y ~ x)
}
Rprof(NULL)
summaryRprof(filename="data/rprof_test.dat")
Explanation: <a id="using-rprof"></a>
Using Rprof()
summaryRprof() will tabulates the output
End of explanation
## define a constructor function:
make.NegLogLik <- function(data, fixed=c(FALSE, FALSE)) {
params <- fixed
function(p) {
params[!fixed] <- p
mu <- params[1]
sigma <- params[2]
a <- -0.5 * length(data) * log(2*pi*sigma^2)
b <- -0.5 * sum((data-mu)^2) / (sigma^2)
-(a + b)
}
}
# Generate a random dataset with normal distribution
d <- rnorm(n=1000, mean=2, sd=3)
nLL <- make.NegLogLik(d)
nLL
ls(environment(nLL))
## Minimze Neative of log-likelihood
optim(c(mu=0, sigma=1), nLL)$par
Explanation: <a id="r-optim"></a>
11. Optimization in R
optim(), nlm(), optimize()
Pass them a function and a vector of parameters.
We can hold some parameters fixed, by fixed=c(F, T)
Example: maximize log-likelihood
End of explanation
## generate a range of possible values for mean
x <- seq(1, 3, len=100)
## when fixing sigma, use the true value
nLL.mean <- make.NegLogLik(d, fixed=c(FALSE, 3))
y <- sapply(x, nLL.mean)
#windows.options(width=40, height=10)
#layout(matrix(c(1,2), 1, 2, byrow = TRUE),
# widths=c(5,5), heights=c(1,1))
par(mar=c(4.7, 4.7, 0.7, 0.7), mfrow=c(1,2))
plot(x, -(y-min(y)), type="l", lwd=4, col="steelblue",
cex.axis=1.7, cex.lab=1.7, xlab=expression(mu), ylab="log-likelihood")
plot(x, exp(-(y-min(y))), type="l", lwd=4, col="steelblue",
cex.axis=1.7, cex.lab=1.7, xlab=expression(mu), ylab="likelihood")
Explanation: <a id="plot-loglik"><a>
Plot the log-likelihood function
End of explanation |
942 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The previous Notebook in this series used multi-group mode to perform a calculation with previously defined cross sections. However, in many circumstances the multi-group data is not given and one must instead generate the cross sections for the specific application (or at least verify the use of cross sections from another application).
This Notebook illustrates the use of the openmc.mgxs.Library class specifically for the calculation of MGXS to be used in OpenMC's multi-group mode. This example notebook is therefore very similar to the MGXS Part III notebook, except OpenMC is used as the multi-group solver instead of OpenMOC.
During this process, this notebook will illustrate the following features
Step1: We will begin by creating three materials for the fuel, water, and cladding of the fuel pins.
Step2: With our three materials, we can now create a Materials object that can be exported to an actual XML file.
Step3: Now let's move on to the geometry. This problem will be a square array of fuel pins and control rod guide tubes for which we can use OpenMC's lattice/universe feature. The basic universe will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces for fuel and clad, as well as the outer bounding surfaces of the problem.
Step4: With the surfaces defined, we can now construct a fuel pin cell from cells that are defined by intersections of half-spaces created by the surfaces.
Step5: Likewise, we can construct a control rod guide tube with the same surfaces.
Step6: Using the pin cell universe, we can construct a 17x17 rectangular lattice with a 1.26 cm pitch.
Step7: Next, we create a NumPy array of fuel pin and guide tube universes for the lattice.
Step8: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
Step9: Before proceeding lets check the geometry.
Step10: Looks good!
We now must create a geometry that is assigned a root universe and export it to XML.
Step11: With the geometry and materials finished, we now just need to define simulation parameters.
Step12: Create an MGXS Library
Now we are ready to generate multi-group cross sections! First, let's define a 2-group structure using the built-in EnergyGroups class.
Step13: Next, we will instantiate an openmc.mgxs.Library for the energy groups with our the fuel assembly geometry.
Step14: Now, we must specify to the Library which types of cross sections to compute. OpenMC's multi-group mode can accept isotropic flux-weighted cross sections or angle-dependent cross sections, as well as supporting anisotropic scattering represented by either Legendre polynomials, histogram, or tabular angular distributions. We will create the following multi-group cross sections needed to run an OpenMC simulation to verify the accuracy of our cross sections
Step15: Now we must specify the type of domain over which we would like the Library to compute multi-group cross sections. The domain type corresponds to the type of tally filter to be used in the tallies created to compute multi-group cross sections. At the present time, the Library supports "material", "cell", "universe", and "mesh" domain types. In this simple example, we wish to compute multi-group cross sections only for each material and therefore will use a "material" domain type.
NOTE
Step16: We will instruct the library to not compute cross sections on a nuclide-by-nuclide basis, and instead to focus on generating material-specific macroscopic cross sections.
NOTE
Step17: Now we will set the scattering order that we wish to use. For this problem we will use P3 scattering. A warning is expected telling us that the default behavior (a P0 correction on the scattering data) is over-ridden by our choice of using a Legendre expansion to treat anisotropic scattering.
Step18: Now that the Library has been setup let's verify that it contains the types of cross sections which meet the needs of OpenMC's multi-group solver. Note that this step is done automatically when writing the Multi-Group Library file later in the process (as part of mgxs_lib.write_mg_library()), but it is a good practice to also run this before spending all the time running OpenMC to generate the cross sections.
If no error is raised, then we have a good set of data.
Step19: Great, now we can use the Library to construct the tallies needed to compute all of the requested multi-group cross sections in each domain.
Step20: The tallies can now be exported to a "tallies.xml" input file for OpenMC.
NOTE
Step21: In addition, we instantiate a fission rate mesh tally that we will eventually use to compare with the corresponding multi-group results.
Step22: Time to run the calculation and get our results!
Step23: To make sure the results we need are available after running the multi-group calculation, we will now rename the statepoint and summary files.
Step24: Tally Data Processing
Our simulation ran successfully and created statepoint and summary output files. Let's begin by loading the StatePoint file.
Step25: The statepoint is now ready to be analyzed by the Library. We simply have to load the tallies from the statepoint into the Library and our MGXS objects will compute the cross sections for us under-the-hood.
Step26: The next step will be to prepare the input for OpenMC to use our newly created multi-group data.
Multi-Group OpenMC Calculation
We will now use the Library to produce a multi-group cross section data set for use by the OpenMC multi-group solver.
Note that since this simulation included so few histories, it is reasonable to expect some data has not had any scores, and thus we could see division by zero errors. This will show up as a runtime warning in the following step. The Library class is designed to gracefully handle these scenarios.
Step27: OpenMC's multi-group mode uses the same input files as does the continuous-energy mode (materials, geometry, settings, plots, and tallies file). Differences would include the use of a flag to tell the code to use multi-group transport, a location of the multi-group library file, and any changes needed in the materials.xml and geometry.xml files to re-define materials as necessary. The materials and geometry file changes could be necessary if materials or their nuclide/element/macroscopic constituents need to be renamed.
In this example we have created macroscopic cross sections (by material), and thus we will need to change the material definitions accordingly.
First we will create the new materials.xml file.
Step28: No geometry file neeeds to be written as the continuous-energy file is correctly defined for the multi-group case as well.
Next, we can make the changes we need to the simulation parameters.
These changes are limited to telling OpenMC to run a multi-group rather than a continuous-energy calculation.
Step29: Lets clear the tallies file so it doesn't include tallies for re-generating a multi-group library, but then put back in a tally for the fission mesh.
Step30: Before running the calculation let's visually compare a subset of the newly-generated multi-group cross section data to the continuous-energy data. We will do this using the cross section plotting functionality built-in to the OpenMC Python API.
Step31: At this point, the problem is set up and we can run the multi-group calculation.
Step32: Results Comparison
Now we can compare the multi-group and continuous-energy results.
We will begin by loading the multi-group statepoint file we just finished writing and extracting the calculated keff.
Step33: Next, we can load the continuous-energy eigenvalue for comparison.
Step34: Lets compare the two eigenvalues, including their bias
Step35: This shows a small but nontrivial pcm bias between the two methods. Some degree of mismatch is expected simply to the very few histories being used in these example problems. An additional mismatch is always inherent in the practical application of multi-group theory due to the high degree of approximations inherent in that method.
Pin Power Visualizations
Next we will visualize the pin power results obtained from both the Continuous-Energy and Multi-Group OpenMC calculations.
First, we extract volume-integrated fission rates from the Multi-Group calculation's mesh fission rate tally for each pin cell in the fuel assembly.
Step36: We can now do the same for the Continuous-Energy results.
Step37: Now we can easily use Matplotlib to visualize the two fission rates side-by-side.
Step38: These figures really indicate that more histories are probably necessary when trying to achieve a fully converged solution, but hey, this is good enough for our example!
Scattering Anisotropy Treatments
We will next show how we can work with the scattering angular distributions. OpenMC's MG solver has the capability to use group-to-group angular distributions which are represented as any of the following
Step39: Now we can re-run OpenMC to obtain our results
Step40: And then get the eigenvalue differences from the Continuous-Energy and P3 MG solution
Step41: Mixed Scattering Representations
OpenMC's Multi-Group mode also includes a feature where not every data set in the library is required to use the same scattering treatment. For example, we could represent the water with P3 scattering, and the fuel and cladding with P0 scattering. This series will show how this can be done.
First we will convert the data to P0 scattering, except for the water, which we will leave as P3 data.
Step42: We can also use whatever scattering format that we want for the materials in the library. As an example, we will take this P0 data and convert zircaloy to a histogram anisotropic scattering format and the fuel to a tabular anisotropic scattering format
Step43: Finally we will re-set our max_order parameter of our openmc.Settings object to our maximum order so that OpenMC will use whatever scattering data is available in the library.
After we do this we can re-run the simulation.
Step44: For a final step we can again obtain the eigenvalue differences from this case and compare with the same from the P3 MG solution | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import os
import openmc
%matplotlib inline
Explanation: The previous Notebook in this series used multi-group mode to perform a calculation with previously defined cross sections. However, in many circumstances the multi-group data is not given and one must instead generate the cross sections for the specific application (or at least verify the use of cross sections from another application).
This Notebook illustrates the use of the openmc.mgxs.Library class specifically for the calculation of MGXS to be used in OpenMC's multi-group mode. This example notebook is therefore very similar to the MGXS Part III notebook, except OpenMC is used as the multi-group solver instead of OpenMOC.
During this process, this notebook will illustrate the following features:
Calculation of multi-group cross sections for a fuel assembly
Automated creation and storage of MGXS with openmc.mgxs.Library
Steady-state pin-by-pin fission rates comparison between continuous-energy and multi-group OpenMC.
Modification of the scattering data in the library to show the flexibility of the multi-group solver
Generate Input Files
End of explanation
# 1.6% enriched fuel
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_element('U', 1., enrichment=1.6)
fuel.add_element('O', 2.)
# zircaloy
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_element('Zr', 1.)
# borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_element('H', 4.9457e-2)
water.add_element('O', 2.4732e-2)
water.add_element('B', 8.0042e-6)
Explanation: We will begin by creating three materials for the fuel, water, and cladding of the fuel pins.
End of explanation
# Instantiate a Materials object
materials_file = openmc.Materials((fuel, zircaloy, water))
# Export to "materials.xml"
materials_file.export_to_xml()
Explanation: With our three materials, we can now create a Materials object that can be exported to an actual XML file.
End of explanation
# Create cylinders for the fuel and clad
# The x0 and y0 parameters (0. and 0.) are the default values for an
# openmc.ZCylinder object. We could therefore leave them out to no effect
fuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.39218)
clad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.45720)
# Create boundary planes to surround the geometry
min_x = openmc.XPlane(x0=-10.71, boundary_type='reflective')
max_x = openmc.XPlane(x0=+10.71, boundary_type='reflective')
min_y = openmc.YPlane(y0=-10.71, boundary_type='reflective')
max_y = openmc.YPlane(y0=+10.71, boundary_type='reflective')
min_z = openmc.ZPlane(z0=-10., boundary_type='reflective')
max_z = openmc.ZPlane(z0=+10., boundary_type='reflective')
Explanation: Now let's move on to the geometry. This problem will be a square array of fuel pins and control rod guide tubes for which we can use OpenMC's lattice/universe feature. The basic universe will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces for fuel and clad, as well as the outer bounding surfaces of the problem.
End of explanation
# Create a Universe to encapsulate a fuel pin
fuel_pin_universe = openmc.Universe(name='1.6% Fuel Pin')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
fuel_pin_universe.add_cell(fuel_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
fuel_pin_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
fuel_pin_universe.add_cell(moderator_cell)
Explanation: With the surfaces defined, we can now construct a fuel pin cell from cells that are defined by intersections of half-spaces created by the surfaces.
End of explanation
# Create a Universe to encapsulate a control rod guide tube
guide_tube_universe = openmc.Universe(name='Guide Tube')
# Create guide tube Cell
guide_tube_cell = openmc.Cell(name='Guide Tube Water')
guide_tube_cell.fill = water
guide_tube_cell.region = -fuel_outer_radius
guide_tube_universe.add_cell(guide_tube_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='Guide Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
guide_tube_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='Guide Tube Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
guide_tube_universe.add_cell(moderator_cell)
Explanation: Likewise, we can construct a control rod guide tube with the same surfaces.
End of explanation
# Create fuel assembly Lattice
assembly = openmc.RectLattice(name='1.6% Fuel Assembly')
assembly.pitch = (1.26, 1.26)
assembly.lower_left = [-1.26 * 17. / 2.0] * 2
Explanation: Using the pin cell universe, we can construct a 17x17 rectangular lattice with a 1.26 cm pitch.
End of explanation
# Create array indices for guide tube locations in lattice
template_x = np.array([5, 8, 11, 3, 13, 2, 5, 8, 11, 14, 2, 5, 8,
11, 14, 2, 5, 8, 11, 14, 3, 13, 5, 8, 11])
template_y = np.array([2, 2, 2, 3, 3, 5, 5, 5, 5, 5, 8, 8, 8, 8,
8, 11, 11, 11, 11, 11, 13, 13, 14, 14, 14])
# Initialize an empty 17x17 array of the lattice universes
universes = np.empty((17, 17), dtype=openmc.Universe)
# Fill the array with the fuel pin and guide tube universes
universes[:, :] = fuel_pin_universe
universes[template_x, template_y] = guide_tube_universe
# Store the array of universes in the lattice
assembly.universes = universes
Explanation: Next, we create a NumPy array of fuel pin and guide tube universes for the lattice.
End of explanation
# Create root Cell
root_cell = openmc.Cell(name='root cell')
root_cell.fill = assembly
# Add boundary planes
root_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z
# Create root Universe
root_universe = openmc.Universe(name='root universe', universe_id=0)
root_universe.add_cell(root_cell)
Explanation: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
End of explanation
root_universe.plot(origin=(0., 0., 0.), width=(21.42, 21.42), pixels=(500, 500), color_by='material')
Explanation: Before proceeding let's check the geometry.
End of explanation
# Create Geometry and set root universe
geometry = openmc.Geometry(root_universe)
# Export to "geometry.xml"
geometry.export_to_xml()
Explanation: Looks good!
We now must create a geometry that is assigned a root universe and export it to XML.
End of explanation
# OpenMC simulation parameters
batches = 600
inactive = 50
particles = 3000
# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': False}
settings_file.run_mode = 'eigenvalue'
settings_file.verbosity = 4
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-10.71, -10.71, -10, 10.71, 10.71, 10.]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.Source(space=uniform_dist)
# Export to "settings.xml"
settings_file.export_to_xml()
Explanation: With the geometry and materials finished, we now just need to define simulation parameters.
End of explanation
# Instantiate a 2-group EnergyGroups object
groups = openmc.mgxs.EnergyGroups([0., 0.625, 20.0e6])
Explanation: Create an MGXS Library
Now we are ready to generate multi-group cross sections! First, let's define a 2-group structure using the built-in EnergyGroups class.
End of explanation
# Initialize a 2-group MGXS Library for OpenMC
mgxs_lib = openmc.mgxs.Library(geometry)
mgxs_lib.energy_groups = groups
Explanation: Next, we will instantiate an openmc.mgxs.Library for the energy groups with our the fuel assembly geometry.
End of explanation
# Specify multi-group cross section types to compute
mgxs_lib.mgxs_types = ['total', 'absorption', 'nu-fission', 'fission',
'nu-scatter matrix', 'multiplicity matrix', 'chi']
Explanation: Now, we must specify to the Library which types of cross sections to compute. OpenMC's multi-group mode can accept isotropic flux-weighted cross sections or angle-dependent cross sections, as well as supporting anisotropic scattering represented by either Legendre polynomials, histogram, or tabular angular distributions. We will create the following multi-group cross sections needed to run an OpenMC simulation to verify the accuracy of our cross sections: "total", "absorption", "nu-fission", '"fission", "nu-scatter matrix", "multiplicity matrix", and "chi".
The "multiplicity matrix" type is a relatively rare cross section type. This data is needed to provide OpenMC's multi-group mode with additional information needed to accurately treat scattering multiplication (i.e., (n,xn) reactions)), including how this multiplication varies depending on both incoming and outgoing neutron energies.
End of explanation
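As a rough orientation (this relation is not spelled out in the notebook and the symbols are introduced here only for illustration), the group-to-group multiplicity can be thought of as the ratio of the nu-scatter matrix $\Sigma_{s,\nu}$ to the scatter matrix $\Sigma_{s}$,
$$\nu_{g\rightarrow g'} \approx \frac{\Sigma_{s,\nu}(g\rightarrow g')}{\Sigma_{s}(g\rightarrow g')},$$
so (n,xn) reactions appear as an effective number of outgoing neutrons per scattering event greater than one.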
# Specify a "cell" domain type for the cross section tally filters
mgxs_lib.domain_type = "material"
# Specify the cell domains over which to compute multi-group cross sections
mgxs_lib.domains = geometry.get_all_materials().values()
Explanation: Now we must specify the type of domain over which we would like the Library to compute multi-group cross sections. The domain type corresponds to the type of tally filter to be used in the tallies created to compute multi-group cross sections. At the present time, the Library supports "material", "cell", "universe", and "mesh" domain types. In this simple example, we wish to compute multi-group cross sections only for each material and therefore will use a "material" domain type.
NOTE: By default, the Library class will instantiate MGXS objects for each and every domain (material, cell, universe, or mesh) in the geometry of interest. However, one may specify a subset of these domains to the Library.domains property.
End of explanation
# Do not compute cross sections on a nuclide-by-nuclide basis
mgxs_lib.by_nuclide = False
Explanation: We will instruct the library to not compute cross sections on a nuclide-by-nuclide basis, and instead to focus on generating material-specific macroscopic cross sections.
NOTE: The default value of the by_nuclide parameter is False, so the following step is not necessary but is included for illustrative purposes.
End of explanation
# Set the Legendre order to 3 for P3 scattering
mgxs_lib.legendre_order = 3
Explanation: Now we will set the scattering order that we wish to use. For this problem we will use P3 scattering. A warning is expected telling us that the default behavior (a P0 correction on the scattering data) is over-ridden by our choice of using a Legendre expansion to treat anisotropic scattering.
End of explanation
# Check the library - if no errors are raised, then the library is satisfactory.
mgxs_lib.check_library_for_openmc_mgxs()
Explanation: Now that the Library has been setup let's verify that it contains the types of cross sections which meet the needs of OpenMC's multi-group solver. Note that this step is done automatically when writing the Multi-Group Library file later in the process (as part of mgxs_lib.write_mg_library()), but it is a good practice to also run this before spending all the time running OpenMC to generate the cross sections.
If no error is raised, then we have a good set of data.
End of explanation
# Construct all tallies needed for the multi-group cross section library
mgxs_lib.build_library()
Explanation: Great, now we can use the Library to construct the tallies needed to compute all of the requested multi-group cross sections in each domain.
End of explanation
# Create a "tallies.xml" file for the MGXS Library
tallies_file = openmc.Tallies()
mgxs_lib.add_to_tallies_file(tallies_file, merge=True)
Explanation: The tallies can now be exported to a "tallies.xml" input file for OpenMC.
NOTE: At this point the Library has constructed nearly 100 distinct Tally objects. The overhead to tally in OpenMC scales as O(N) for N tallies, which can become a bottleneck for large tally datasets. To compensate for this, the Python API's Tally, Filter and Tallies classes allow for the smart merging of tallies when possible. The Library class supports this runtime optimization with the use of the optional merge parameter (False by default) for the Library.add_to_tallies_file(...) method, as shown below.
End of explanation
# Instantiate a tally Mesh
mesh = openmc.Mesh()
mesh.type = 'regular'
mesh.dimension = [17, 17]
mesh.lower_left = [-10.71, -10.71]
mesh.upper_right = [+10.71, +10.71]
# Instantiate tally Filter
mesh_filter = openmc.MeshFilter(mesh)
# Instantiate the Tally
tally = openmc.Tally(name='mesh tally')
tally.filters = [mesh_filter]
tally.scores = ['fission']
# Add tally to collection
tallies_file.append(tally, merge=True)
# Export all tallies to a "tallies.xml" file
tallies_file.export_to_xml()
Explanation: In addition, we instantiate a fission rate mesh tally that we will eventually use to compare with the corresponding multi-group results.
End of explanation
# Run OpenMC
openmc.run()
Explanation: Time to run the calculation and get our results!
End of explanation
# Move the statepoint File
ce_spfile = './statepoint_ce.h5'
os.rename('statepoint.' + str(batches) + '.h5', ce_spfile)
# Move the Summary file
ce_sumfile = './summary_ce.h5'
os.rename('summary.h5', ce_sumfile)
Explanation: To make sure the results we need are available after running the multi-group calculation, we will now rename the statepoint and summary files.
End of explanation
# Load the statepoint file
sp = openmc.StatePoint(ce_spfile, autolink=False)
# Load the summary file in its new location
su = openmc.Summary(ce_sumfile)
sp.link_with_summary(su)
Explanation: Tally Data Processing
Our simulation ran successfully and created statepoint and summary output files. Let's begin by loading the StatePoint file.
End of explanation
# Initialize MGXS Library with OpenMC statepoint data
mgxs_lib.load_from_statepoint(sp)
Explanation: The statepoint is now ready to be analyzed by the Library. We simply have to load the tallies from the statepoint into the Library and our MGXS objects will compute the cross sections for us under-the-hood.
End of explanation
# Create a MGXS File which can then be written to disk
mgxs_file = mgxs_lib.create_mg_library(xs_type='macro', xsdata_names=['fuel', 'zircaloy', 'water'])
# Write the file to disk using the default filename of "mgxs.h5"
mgxs_file.export_to_hdf5()
Explanation: The next step will be to prepare the input for OpenMC to use our newly created multi-group data.
Multi-Group OpenMC Calculation
We will now use the Library to produce a multi-group cross section data set for use by the OpenMC multi-group solver.
Note that since this simulation included so few histories, it is reasonable to expect some data has not had any scores, and thus we could see division by zero errors. This will show up as a runtime warning in the following step. The Library class is designed to gracefully handle these scenarios.
End of explanation
# Re-define our materials to use the multi-group macroscopic data
# instead of the continuous-energy data.
# 1.6% enriched fuel UO2
fuel_mg = openmc.Material(name='UO2', material_id=1)
fuel_mg.add_macroscopic('fuel')
# cladding
zircaloy_mg = openmc.Material(name='Clad', material_id=2)
zircaloy_mg.add_macroscopic('zircaloy')
# moderator
water_mg = openmc.Material(name='Water', material_id=3)
water_mg.add_macroscopic('water')
# Finally, instantiate our Materials object
materials_file = openmc.Materials((fuel_mg, zircaloy_mg, water_mg))
# Set the location of the cross sections file
materials_file.cross_sections = 'mgxs.h5'
# Export to "materials.xml"
materials_file.export_to_xml()
Explanation: OpenMC's multi-group mode uses the same input files as does the continuous-energy mode (materials, geometry, settings, plots, and tallies file). Differences would include the use of a flag to tell the code to use multi-group transport, a location of the multi-group library file, and any changes needed in the materials.xml and geometry.xml files to re-define materials as necessary. The materials and geometry file changes could be necessary if materials or their nuclide/element/macroscopic constituents need to be renamed.
In this example we have created macroscopic cross sections (by material), and thus we will need to change the material definitions accordingly.
First we will create the new materials.xml file.
End of explanation
# Set the energy mode
settings_file.energy_mode = 'multi-group'
# Export to "settings.xml"
settings_file.export_to_xml()
Explanation: No geometry file needs to be written as the continuous-energy file is correctly defined for the multi-group case as well.
Next, we can make the changes we need to the simulation parameters.
These changes are limited to telling OpenMC to run a multi-group rather than a continuous-energy calculation.
End of explanation
# Create a "tallies.xml" file for the MGXS Library
tallies_file = openmc.Tallies()
# Add fission and flux mesh to tally for plotting using the same mesh we've already defined
mesh_tally = openmc.Tally(name='mesh tally')
mesh_tally.filters = [openmc.MeshFilter(mesh)]
mesh_tally.scores = ['fission']
tallies_file.append(mesh_tally)
# Export to "tallies.xml"
tallies_file.export_to_xml()
Explanation: Lets clear the tallies file so it doesn't include tallies for re-generating a multi-group library, but then put back in a tally for the fission mesh.
End of explanation
# First lets plot the fuel data
# We will first add the continuous-energy data
fig = openmc.plot_xs(fuel, ['total'])
# We will now add in the corresponding multi-group data and show the result
openmc.plot_xs(fuel_mg, ['total'], plot_CE=False, mg_cross_sections='mgxs.h5', axis=fig.axes[0])
fig.axes[0].legend().set_visible(False)
plt.show()
plt.close()
# Then repeat for the zircaloy data
fig = openmc.plot_xs(zircaloy, ['total'])
openmc.plot_xs(zircaloy_mg, ['total'], plot_CE=False, mg_cross_sections='mgxs.h5', axis=fig.axes[0])
fig.axes[0].legend().set_visible(False)
plt.show()
plt.close()
# And finally repeat for the water data
fig = openmc.plot_xs(water, ['total'])
openmc.plot_xs(water_mg, ['total'], plot_CE=False, mg_cross_sections='mgxs.h5', axis=fig.axes[0])
fig.axes[0].legend().set_visible(False)
plt.show()
plt.close()
Explanation: Before running the calculation let's visually compare a subset of the newly-generated multi-group cross section data to the continuous-energy data. We will do this using the cross section plotting functionality built-in to the OpenMC Python API.
End of explanation
# Run the Multi-Group OpenMC Simulation
openmc.run()
Explanation: At this point, the problem is set up and we can run the multi-group calculation.
End of explanation
# Move the StatePoint File
mg_spfile = './statepoint_mg.h5'
os.rename('statepoint.' + str(batches) + '.h5', mg_spfile)
# Move the Summary file
mg_sumfile = './summary_mg.h5'
os.rename('summary.h5', mg_sumfile)
# Rename and then load the last statepoint file and keff value
mgsp = openmc.StatePoint(mg_spfile, autolink=False)
# Load the summary file in its new location
mgsu = openmc.Summary(mg_sumfile)
mgsp.link_with_summary(mgsu)
# Get keff
mg_keff = mgsp.k_combined
Explanation: Results Comparison
Now we can compare the multi-group and continuous-energy results.
We will begin by loading the multi-group statepoint file we just finished writing and extracting the calculated keff.
End of explanation
ce_keff = sp.k_combined
Explanation: Next, we can load the continuous-energy eigenvalue for comparison.
End of explanation
bias = 1.0E5 * (ce_keff - mg_keff)
print('Continuous-Energy keff = {0:1.6f}'.format(ce_keff))
print('Multi-Group keff = {0:1.6f}'.format(mg_keff))
print('bias [pcm]: {0:1.1f}'.format(bias.nominal_value))
Explanation: Let's compare the two eigenvalues, including their bias
End of explanation
# Get the OpenMC fission rate mesh tally data
mg_mesh_tally = mgsp.get_tally(name='mesh tally')
mg_fission_rates = mg_mesh_tally.get_values(scores=['fission'])
# Reshape array to 2D for plotting
mg_fission_rates.shape = (17,17)
# Normalize to the average pin power
mg_fission_rates /= np.mean(mg_fission_rates[mg_fission_rates > 0.])
Explanation: This shows a small but nontrivial pcm bias between the two methods. Some degree of mismatch is expected simply due to the very few histories being used in these example problems. An additional mismatch is always inherent in the practical application of multi-group theory due to the high degree of approximations inherent in that method.
Pin Power Visualizations
Next we will visualize the pin power results obtained from both the Continuous-Energy and Multi-Group OpenMC calculations.
First, we extract volume-integrated fission rates from the Multi-Group calculation's mesh fission rate tally for each pin cell in the fuel assembly.
End of explanation
# Get the OpenMC fission rate mesh tally data
ce_mesh_tally = sp.get_tally(name='mesh tally')
ce_fission_rates = ce_mesh_tally.get_values(scores=['fission'])
# Reshape array to 2D for plotting
ce_fission_rates.shape = (17,17)
# Normalize to the average pin power
ce_fission_rates /= np.mean(ce_fission_rates[ce_fission_rates > 0.])
Explanation: We can now do the same for the Continuous-Energy results.
End of explanation
# Force zeros to be NaNs so their values are not included when matplotlib calculates
# the color scale
ce_fission_rates[ce_fission_rates == 0.] = np.nan
mg_fission_rates[mg_fission_rates == 0.] = np.nan
# Plot the CE fission rates in the left subplot
fig = plt.subplot(121)
plt.imshow(ce_fission_rates, interpolation='none', cmap='jet')
plt.title('Continuous-Energy Fission Rates')
# Plot the MG fission rates in the right subplot
fig2 = plt.subplot(122)
plt.imshow(mg_fission_rates, interpolation='none', cmap='jet')
plt.title('Multi-Group Fission Rates')
Explanation: Now we can easily use Matplotlib to visualize the two fission rates side-by-side.
End of explanation
# Set the maximum scattering order to 0 (i.e., isotropic scattering)
settings_file.max_order = 0
# Export to "settings.xml"
settings_file.export_to_xml()
Explanation: These figures really indicate that more histories are probably necessary when trying to achieve a fully converged solution, but hey, this is good enough for our example!
Scattering Anisotropy Treatments
We will next show how we can work with the scattering angular distributions. OpenMC's MG solver has the capability to use group-to-group angular distributions which are represented as any of the following: a truncated Legendre series of up to the 10th order, a histogram distribution, and a tabular distribution. Any combination of these representations can be used by OpenMC during the transport process, so long as all constituents of a given material use the same representation. This means it is possible to have water represented by a tabular distribution and fuel represented by a Legendre if so desired.
Note: To have the highest runtime performance OpenMC natively converts Legendre series to a tabular distribution before the transport begins. This default functionality can be turned off with the tabular_legendre element of the settings.xml file (or for the Python API, the openmc.Settings.tabular_legendre attribute).
This section will examine the following:
- Re-run the MG-mode calculation with P0 scattering everywhere using the openmc.Settings.max_order attribute
- Re-run the problem with only the water represented with P3 scattering and P0 scattering for the remaining materials using the Python API's ability to convert between formats.
Global P0 Scattering
First we begin by re-running with P0 scattering (i.e., isotropic) everywhere. If a global maximum order is requested, the most effective way to do this is to use the max_order attribute of our openmc.Settings object.
End of explanation
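As a hedged aside (the dictionary keys below are assumed from the openmc.Settings.tabular_legendre documentation and this snippet is not applied in the walkthrough), the runtime Legendre-to-tabular conversion mentioned above could be controlled like this:
# Hedged sketch: keep Legendre moments as-is instead of converting them to a
# tabular distribution at runtime; the 'enable' key is an assumption.
example_settings = openmc.Settings()
example_settings.energy_mode = 'multi-group'
example_settings.tabular_legendre = {'enable': False}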
# Run the Multi-Group OpenMC Simulation
openmc.run()
Explanation: Now we can re-run OpenMC to obtain our results
End of explanation
# Move the statepoint File
mgp0_spfile = './statepoint_mg_p0.h5'
os.rename('statepoint.' + str(batches) + '.h5', mgp0_spfile)
# Move the Summary file
mgp0_sumfile = './summary_mg_p0.h5'
os.rename('summary.h5', mgp0_sumfile)
# Load the last statepoint file and keff value
mgsp_p0 = openmc.StatePoint(mgp0_spfile, autolink=False)
# Get keff
mg_p0_keff = mgsp_p0.k_combined
bias_p0 = 1.0E5 * (ce_keff - mg_p0_keff)
print('P3 bias [pcm]: {0:1.1f}'.format(bias.nominal_value))
print('P0 bias [pcm]: {0:1.1f}'.format(bias_p0.nominal_value))
Explanation: And then get the eigenvalue differences from the Continuous-Energy and P3 MG solution
End of explanation
# Convert the zircaloy and fuel data to P0 scattering
for i, xsdata in enumerate(mgxs_file.xsdatas):
if xsdata.name != 'water':
mgxs_file.xsdatas[i] = xsdata.convert_scatter_format('legendre', 0)
Explanation: Mixed Scattering Representations
OpenMC's Multi-Group mode also includes a feature where not every data set in the library is required to use the same scattering treatment. For example, we could represent the water with P3 scattering, and the fuel and cladding with P0 scattering. This series will show how this can be done.
First we will convert the data to P0 scattering, except for the water, which we will leave as P3 data.
End of explanation
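Before converting anything, a small sanity check along these lines can report which scattering representation each data set currently carries; the scatter_format and order attributes are assumed from openmc.XSdata and this check is not part of the original notebook.
# Hedged sketch: list the scattering representation stored for each data set
for xsdata in mgxs_file.xsdatas:
    print(xsdata.name, xsdata.scatter_format, xsdata.order)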
# Convert the formats as discussed
for i, xsdata in enumerate(mgxs_file.xsdatas):
if xsdata.name == 'zircaloy':
mgxs_file.xsdatas[i] = xsdata.convert_scatter_format('histogram', 2)
elif xsdata.name == 'fuel':
mgxs_file.xsdatas[i] = xsdata.convert_scatter_format('tabular', 2)
mgxs_file.export_to_hdf5('mgxs.h5')
Explanation: We can also use whatever scattering format that we want for the materials in the library. As an example, we will take this P0 data and convert zircaloy to a histogram anisotropic scattering format and the fuel to a tabular anisotropic scattering format
End of explanation
settings_file.max_order = None
# Export to "settings.xml"
settings_file.export_to_xml()
# Run the Multi-Group OpenMC Simulation
openmc.run()
Explanation: Finally we will re-set our max_order parameter of our openmc.Settings object to our maximum order so that OpenMC will use whatever scattering data is available in the library.
After we do this we can re-run the simulation.
End of explanation
# Load the last statepoint file and keff value
mgsp_mixed = openmc.StatePoint('./statepoint.' + str(batches) + '.h5')
mg_mixed_keff = mgsp_mixed.k_combined
bias_mixed = 1.0E5 * (ce_keff - mg_mixed_keff)
print('P3 bias [pcm]: {0:1.1f}'.format(bias.nominal_value))
print('Mixed Scattering bias [pcm]: {0:1.1f}'.format(bias_mixed.nominal_value))
Explanation: For a final step we can again obtain the eigenvalue differences from this case and compare with the same from the P3 MG solution
End of explanation |
943 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
$$A(t,T) = \Sigma_i A_i P(dI/dt (t)\otimes e^{-t/\tau_i})$$
Step1: Simple exponential basis
$$ \mathbf{A}\mathbf{\alpha} = \mathbf{d}$$ | Python Code:
def AofT(time,T, ai, taui):
return ai*np.exp(-time/taui)/(1.+np.exp(-T/(2*taui)))
from SimPEG import *
import sys
sys.path.append("./DoubleLog/")
from plotting import mapDat
class LinearSurvey(Survey.BaseSurvey):
nD = None
def __init__(self, time, **kwargs):
self.time = time
self.nD = time.size
def projectFields(self, u):
return u
class LinearProblem(Problem.BaseProblem):
surveyPair = LinearSurvey
def __init__(self, mesh, G, **kwargs):
Problem.BaseProblem.__init__(self, mesh, **kwargs)
self.G = G
def fields(self, m, u=None):
return self.G.dot(m)
def Jvec(self, m, v, u=None):
return self.G.dot(v)
def Jtvec(self, m, v, u=None):
return self.G.T.dot(v)
Explanation: $$A(t,T) = \sum_i A_i \, P\!\left(\frac{dI}{dt}(t)\otimes e^{-t/\tau_i}\right)$$
End of explanation
time = np.cumsum(np.r_[0., 1e-5*np.ones(10), 5e-5*np.ones(10), 1e-4*np.ones(10), 5e-4*np.ones(10), 1e-3*np.ones(10)])
# time = np.cumsum(np.r_[0., 1e-5*np.ones(10), 5e-5*np.ones(10),1e-4*np.ones(5)])
M = 41
tau = np.logspace(-4.1, -1, M)
from simpegem1d.Waveform import SineFun, SineFunDeriv, CausalConv
from SimPEG import Mesh
dt = 1e-5
t0 = 0.003
# Build the convolution time mesh first, then everything that depends on it
meshtime = Mesh.TensorMesh([np.ones(2**12-1)*dt], x0="0")
time_conv = meshtime.gridN
P = meshtime.getInterpolationMat(time+t0, 'N')
currentderiv = SineFunDeriv(time_conv, t0)
current = SineFun(time_conv, t0)
temp = np.exp(-time_conv/1e-2)
out = CausalConv(temp, currentderiv, time_conv)
plt.plot(time_conv, currentderiv)
plt.plot(time_conv, out)
actind = time_conv>t0
plt.plot(time_conv[actind]-t0, -out[actind])
plt.plot(time, -P*out, '.')
time_conv.min(), time_conv.max()
N = time.size
A = np.zeros((N, M))
for j in range(M):
A[:,j] = P*(CausalConv(np.exp(-time_conv/tau[j]), -currentderiv, time_conv))
mtrue = np.zeros(M)
np.random.seed(1)
inds = np.random.randint(0, M, size=5)  # randint replaces the removed random_integers and keeps indices in bounds
mtrue[inds] = np.r_[0.1, 2, 1, 4, 5]
out = np.dot(A,mtrue)
fig = plt.figure(figsize=(6,4.5))
ax = plt.subplot(111)
for i, ind in enumerate(inds):
temp, dum, dum = mapDat(mtrue[inds][i]*np.exp(-time/tau[ind]), 1e-5, stretch=2)
plt.semilogx(time, temp, 'k', alpha = 0.5)
outmap, ticks, tickLabels = mapDat(out, 1e-5, stretch=2)
ax.semilogx(time, outmap, 'k', lw=2)
ax.set_yticks(ticks)
ax.set_yticklabels(tickLabels)
# ax.set_ylim(ticks.min(), ticks.max())
ax.set_ylim(ticks.min(), ticks.max())
ax.set_xlim(time.min(), time.max())
ax.grid(True)
# from pymatsolver import MumpsSolver
mesh = Mesh.TensorMesh([M])
prob = LinearProblem(mesh, A)
survey = LinearSurvey(time)
survey.pair(prob)
survey.makeSyntheticData(mtrue, std=0.01)
# survey.dobs = out
reg = Regularization.BaseRegularization(mesh)
dmis = DataMisfit.l2_DataMisfit(survey)
dmis.Wd = 1./(0.05*abs(survey.dobs)+0.05*1e-2)
opt = Optimization.ProjectedGNCG(maxIter=20)
# opt = Optimization.InexactGaussNewton(maxIter=20)
opt.lower = -1e-10
invProb = InvProblem.BaseInvProblem(dmis, reg, opt)
invProb.beta = 1e-4
beta = Directives.BetaSchedule()
beta.coolingFactor = 2
target = Directives.TargetMisfit()
inv = Inversion.BaseInversion(invProb, directiveList=[beta, target])
m0 = np.zeros_like(survey.mtrue)
mrec = inv.run(m0)
plt.semilogx(tau, mtrue, '.')
plt.semilogx(tau, mrec, '.')
fig = plt.figure(figsize=(6,4.5))
ax = plt.subplot(111)
obsmap, ticks, tickLabels = mapDat(survey.dobs, 1e0, stretch=2)
predmap, dum, dum = mapDat(invProb.dpred, 1e0, stretch=2)
ax.loglog(time, survey.dobs, 'k', lw=2)
ax.loglog(time, invProb.dpred, 'k.', lw=2)
# ax.set_yticks(ticks)
# ax.set_yticklabels(tickLabels)
# ax.set_ylim(ticks.min(), ticks.max())
# ax.set_ylim(ticks.min(), ticks.max())
ax.set_xlim(time.min(), time.max())
ax.grid(True)
Explanation: Simple exponential basis
$$ \mathbf{A}\mathbf{\alpha} = \mathbf{d}$$
End of explanation |
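As a quick cross-check of the linear system above (not part of the original notebook), a plain damped least-squares solve can be compared against the SimPEG inversion; lam is an assumed Tikhonov damping chosen only for illustration.
# Hedged sketch: damped least-squares solve of A alpha = d for comparison
lam = 1e-2  # assumed damping value
alpha_ls = np.linalg.solve(A.T.dot(A) + lam*np.eye(A.shape[1]), A.T.dot(survey.dobs))
plt.semilogx(tau, alpha_ls, '.')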
944 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Calculation of the divergence of the advection of the perpendicular gradient of the potential times density using Clebsch coordinates
We would here like to calculate
$$
\nabla\cdot\left(\mathbf{u}E\cdot\nabla\left[n \nabla\perp\phi \right]\right)
$$
using cylindrical Clebsch coordinates, as this tensor identity has not been found in the literature.
NOTE
Step1: Calculation of the $E\times B$ advection
We would now like to calculate
$$
\zeta = \nabla\cdot\left(\mathbf{u}_E \cdot\nabla\left[n\nabla_\perp\phi\right]\right)
$$
We will do this by
Calculate $n\nabla_\perp\phi$
Define $\mathbf{u}_E$
By first calculating $\nabla_\perp\phi$
Calculate $\mathbf{u}_E\cdot\nabla \left(n\nabla_\perp\phi\right)$
To check the different contributions we also
Calculate $\mathbf{u}_E\cdot\nabla f$
Calculate $\mathbf{a}\cdot \left(n\nabla_\perp\phi\right)$
Take the divergence of the resulting vector
Compare this with $B{\phi,\Omega^D}$
Calculation of $n\nabla_\perp\phi$
Step2: Defining $\mathbf{u}_E$
We have that
$$\mathbf{u}_E = - \frac{\nabla_\perp\phi\times\mathbf{b}}{B}$$
Remember that we are working with normalized equations, so $B$ (which in reality is $\tilde{B}$) is equal to $1$.
NOTE
Step3: NOTE
Step4: Calculation of $\mathbf{u}E\cdot\nabla \left(n\nabla\perp\phi\right)$
Calculation of $\mathbf{u}_E\cdot\nabla f$
Step5: Calculation of $\mathbf{a}\cdot\nabla \left(n\nabla_\perp\phi\right)$
Step6: Using covariant vector
Step7: Using contravariant vector
Step8: Calculation of full $\mathbf{u}E\cdot\nabla \left(n\nabla\perp\phi\right)$
Step9: Calculation of $\nabla\cdot\left(\mathbf{u}E\cdot\nabla\left[n\nabla\perp\phi\right]\right)$
Step10: Comparison with $B{\phi,\Omega^D}$
In cylindrical Clebsch coordinates, we have that $\mathbf{u}_E\cdot\nabla = {\phi,\cdot}$. However, we have normalized our equations so that $\tilde{B}=1$. As $B$ from the Clebsch system is not constant, we can achieve normalization by multiplying the Poisson bracket with the un-normalized $B$ (from the Clebsch system).
We define the vorticity-like field $\Omega^D$ to be $\Omega^D = \nabla\cdot\left(n\nabla_\perp\phi\right)$. In the Clebsch system this is written as
Step11: We now write $\chi = B{\phi,\Omega^D}$
Step12: The difference $\epsilon$ between $\zeta = \nabla\cdot\left(\mathbf{u}E\cdot\nabla\left[n\nabla\perp\phi\right]\right)$ and $\chi = B{\phi,\Omega^D}$ is given by
$$\epsilon = \zeta - \chi$$
Step13: In fact we see that
\begin{align}
\epsilon
- \left(
-\frac{1}{\rho}[\partial_\rho\phi]{n, \partial_\rho\phi}
-\frac{1}{\rho^3}[\partial_\theta\phi]{n, \partial_\theta\phi}
+\frac{1}{\rho^4}[\partial_\theta n][\partial_\theta\phi]^2
\right)
=\
\epsilon
- \left(
\frac{1}{\rho}[\partial_\rho\phi]{\partial_\rho\phi,n}
+\frac{1}{\rho^3}[\partial_\theta\phi]{\partial_\theta\phi, n}
+\frac{1}{\rho^4}[\partial_\theta n][\partial_\theta\phi]^2
\right)
=
\end{align}
Step14: What is more interesting is in fact that
\begin{align}
\epsilon
- \xi
=
\epsilon
- \frac{B}{2}{\mathbf{u}_E\cdot\mathbf{u}_E, n}
\end{align}
Step15: Where
\begin{align}
\mathbf{u}_E\cdot\mathbf{u}_E
=
\end{align}
Step16: Note that the last term $\frac{1}{\rho^4}(\partial_\theta n)(\partial_\theta\phi)^2$ does not appear to come from the Poisson bracket. This is however the case, and comes from the part which contains
$\frac{1}{2}\partial_\rho\left(\frac{1}{\rho}\partial_\theta \phi\right)^2 =
\left(\frac{1}{\rho}\partial_\theta \phi\right)\partial_\rho\left(\frac{1}{\rho}\partial_\theta \phi\right)$
as
$\partial_i (fg) = f \partial_i g + g \partial_i f$
To summarize, we have
\begin{align}
\zeta - (\chi + \xi) =
\end{align}
Step17: Printing for comparison | Python Code:
from IPython.display import display
from sympy import symbols, simplify, sympify, expand
from sympy import init_printing
from sympy import Eq, Function
from clebschVector import ClebschVec
from clebschVector import div, grad, gradPerp, advVec
from common import rho, theta, poisson
from common import displayVec
init_printing()
u_z = symbols('u_z', real = True)
# In reality this is a function, but as it serves only as a dummy it is here defined as a symbol
# This makes it easier to replace
f = symbols('f', real = True)
phi = Function('phi')(rho, theta)
n = Function('n')(rho, theta)
# Symbols for printing
zeta, chi, epsilon = symbols('zeta, chi, epsilon')
Explanation: Calculation of the divergence of the advection of the perpendicular gradient of the potential times density using Clebsch coordinates
We would here like to calculate
$$
\nabla\cdot\left(\mathbf{u}_E\cdot\nabla\left[n \nabla_\perp\phi \right]\right)
$$
using cylindrical Clebsch coordinates, as this tensor identity has not been found in the literature.
NOTE: These are normalized equations. As $B$ is constant, we can choose $B_0$ so that the normalized $\tilde{B}=1$, thus, $B$ is excluded from these equations.
Also, we would like to compare this with
$$
B{\phi,\Omega^D}
$$
End of explanation
nGradPerpPhi = gradPerp(phi)*n
displayVec(nGradPerpPhi)
Explanation: Calculation of the $E\times B$ advection
We would now like to calculate
$$
\zeta = \nabla\cdot\left(\mathbf{u}_E \cdot\nabla\left[n\nabla_\perp\phi\right]\right)
$$
We will do this by
Calculate $n\nabla_\perp\phi$
Define $\mathbf{u}_E$
By first calculating $\nabla_\perp\phi$
Calculate $\mathbf{u}_E\cdot\nabla \left(n\nabla_\perp\phi\right)$
To check the different contributions we also
Calculate $\mathbf{u}_E\cdot\nabla f$
Calculate $\mathbf{a}\cdot \left(n\nabla_\perp\phi\right)$
Take the divergence of the resulting vector
Compare this with $B{\phi,\Omega^D}$
Calculation of $n\nabla_\perp\phi$
End of explanation
# The basis-vectors are contravariant => components are covariant
eTheta = ClebschVec(rho=0, theta=1, z=0, covariant=True)
eRho = ClebschVec(rho=1, theta=0, z=0, covariant=True)
B = eTheta^eRho
displayVec(B, 'B')
Blen = B.len()
display(Eq(symbols('B'), Blen))
b = B/(B.len())
displayVec(b, 'b')
Explanation: Defining $\mathbf{u}_E$
We have that
$$\mathbf{u}_E = - \frac{\nabla_\perp\phi\times\mathbf{b}}{B}$$
Remember that we are working with normalized equations, so $B$ (which in reality is $\tilde{B}$) is equal to $1$.
NOTE: It might appear that there is a discrepancy between using a coordinate system in which $B$ is not constant and equations that were derived assuming constant $B$. This is because the cylindrical coordinate system is not a Clebsch system, but the metrics coincide. The Poisson bracket is the only place where $B$ is used explicitly, and care must be taken. The workaround is easy: just multiply the Poisson bracket by $B$ to make it correct in cylindrical coordinates.
End of explanation
gradPerpPhi = gradPerp(phi)
displayVec(gradPerpPhi)
# Normalized B
BTilde = 1
# Defining u_E
ue = - ((gradPerpPhi^b)/BTilde)
displayVec(ue, 'u_E')
Explanation: NOTE: Basis vectors in $B$ are covariant, so components are contravariant
Calculation of $\nabla_\perp\phi$
End of explanation
ueDotGrad_f = ue*grad(f)
display(ueDotGrad_f)
Explanation: Calculation of $\mathbf{u}E\cdot\nabla \left(n\nabla\perp\phi\right)$
Calculation of $\mathbf{u}_E\cdot\nabla f$
End of explanation
aRho, aZ, aTheta = symbols('a^rho, a^z, a^theta')
a_Rho, a_Z, a_Theta = symbols('a_rho, a_z, a_theta')
aCov = ClebschVec(rho = a_Rho, z=a_Z, theta = a_Theta, covariant=True)
aCon = ClebschVec(rho = aRho, z=aZ, theta = aTheta, covariant=False)
Explanation: Calculation of $\mathbf{a}\cdot\nabla \left(n\nabla_\perp\phi\right)$
End of explanation
aCovDotNablaGradPhi = advVec(aCov, nGradPerpPhi)
displayVec(aCovDotNablaGradPhi)
Explanation: Using covariant vector
End of explanation
aConDotNablaGradPhi = advVec(aCon, nGradPerpPhi)
displayVec(aConDotNablaGradPhi)
Explanation: Using contravariant vector
End of explanation
ueDotGradnGradPerpPhi = advVec(ue, nGradPerpPhi)
displayVec(ueDotGradnGradPerpPhi.doitVec())
displayVec(ueDotGradnGradPerpPhi.doitVec().simplifyVec())
Explanation: Calculation of full $\mathbf{u}E\cdot\nabla \left(n\nabla\perp\phi\right)$
End of explanation
div_ueDotGradnGradPerpPhi = div(ueDotGradnGradPerpPhi)
zetaFunc = div_ueDotGradnGradPerpPhi.doit().expand()
display(Eq(zeta, simplify(zetaFunc)))
Explanation: Calculation of $\nabla\cdot\left(\mathbf{u}E\cdot\nabla\left[n\nabla\perp\phi\right]\right)$
End of explanation
vortD = div(gradPerp(phi)*n)
display(Eq(symbols('Omega^D'), vortD.doit().expand()))
Explanation: Comparison with $B{\phi,\Omega^D}$
In cylindrical Clebsch coordinates, we have that $\mathbf{u}_E\cdot\nabla = {\phi,\cdot}$. However, we have normalized our equations so that $\tilde{B}=1$. As $B$ from the Clebsch system is not constant, we can achieve normalization by multiplying the Poisson bracket with the un-normalized $B$ (from the Clebsch system).
We define the vorticity-like field $\Omega^D$ to be $\Omega^D = \nabla\cdot\left(n\nabla_\perp\phi\right)$. In the Clebsch system this is written as
End of explanation
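For readers without access to the common module, the bracket used below is of the usual two-dimensional form sketched here; the exact sign and metric convention is an assumption and may differ from the project's implementation.
# Hedged sketch of a (rho, theta) Poisson bracket; common.poisson may differ
# by a sign or a metric factor.
def poisson_sketch(f, g):
    return f.diff(rho)*g.diff(theta) - f.diff(theta)*g.diff(rho)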
poissonPhiVortD = Blen*poisson(phi, vortD)
chiFunc = poissonPhiVortD.doit().expand()
display(Eq(chi, chiFunc))
Explanation: We now write $\chi = B{\phi,\Omega^D}$
End of explanation
epsilonFunc = (zetaFunc - chiFunc).expand()
display(Eq(epsilon, epsilonFunc))
Explanation: The difference $\epsilon$ between $\zeta = \nabla\cdot\left(\mathbf{u}E\cdot\nabla\left[n\nabla\perp\phi\right]\right)$ and $\chi = B{\phi,\Omega^D}$ is given by
$$\epsilon = \zeta - \chi$$
End of explanation
epsMinusCorrection = epsilonFunc\
-\
(\
(1/rho)*phi.diff(rho)*poisson(phi.diff(rho), n)\
+(1/(rho)**3)*phi.diff(theta)*poisson(phi.diff(theta),n)\
+(1/(rho)**4)*n.diff(theta)*(phi.diff(theta))**2
)
display(epsMinusCorrection.simplify())
Explanation: In fact we see that
\begin{align}
\epsilon
- \left(
-\frac{1}{\rho}[\partial_\rho\phi]{n, \partial_\rho\phi}
-\frac{1}{\rho^3}[\partial_\theta\phi]{n, \partial_\theta\phi}
+\frac{1}{\rho^4}[\partial_\theta n][\partial_\theta\phi]^2
\right)
=\
\epsilon
- \left(
\frac{1}{\rho}[\partial_\rho\phi]{\partial_\rho\phi,n}
+\frac{1}{\rho^3}[\partial_\theta\phi]{\partial_\theta\phi, n}
+\frac{1}{\rho^4}[\partial_\theta n][\partial_\theta\phi]^2
\right)
=
\end{align}
End of explanation
xi = (Blen/2)*poisson(ue*ue, n).doit()
epsMinusNewCorr = epsilonFunc - (Blen/2)*poisson(ue*ue, n).doit()
display(epsMinusNewCorr.simplify())
Explanation: What is more interesting is in fact that
\begin{align}
\epsilon
- \xi
=
\epsilon
- \frac{B}{2}{\mathbf{u}_E\cdot\mathbf{u}_E, n}
\end{align}
End of explanation
display((ue*ue).doit())
Explanation: Where
\begin{align}
\mathbf{u}_E\cdot\mathbf{u}_E
=
\end{align}
End of explanation
display((zetaFunc - (chiFunc + xi)).simplify())
Explanation: Note that the last term $\frac{1}{\rho^4}(\partial_\theta n)(\partial_\theta\phi)^2$ does not appear to come from the Poisson bracket. This is however the case, and comes from the part which contains
$\frac{1}{2}\partial_\rho\left(\frac{1}{\rho}\partial_\theta \phi\right)^2 =
\left(\frac{1}{\rho}\partial_\theta \phi\right)\partial_\rho\left(\frac{1}{\rho}\partial_\theta \phi\right)$
as
$\partial_i (fg) = f \partial_i g + g \partial_i f$
To summarize, we have
\begin{align}
\zeta - (\chi + \xi) =
\end{align}
End of explanation
S = expand(zetaFunc)
strS = str(S)
# phi rho derivatives
strS = strS.replace('Derivative(phi(rho, theta), rho)', 'phi_x')
strS = strS.replace('Derivative(phi(rho, theta), rho, rho)', 'phi_xx')
strS = strS.replace('Derivative(phi(rho, theta), rho, rho, rho)', 'phi_xxx')
# phi theta derivatives
strS = strS.replace('Derivative(phi(rho, theta), theta)', 'phi_z')
strS = strS.replace('Derivative(phi(rho, theta), theta, theta)', 'phi_zz')
strS = strS.replace('Derivative(phi(rho, theta), theta, theta, theta)', 'phi_zzz')
# phi mixed derivatives
strS = strS.replace('Derivative(phi(rho, theta), rho, theta)', 'phi_xz')
strS = strS.replace('Derivative(phi(rho, theta), rho, theta, theta)', 'phi_xzz')
strS = strS.replace('Derivative(phi(rho, theta), rho, rho, theta)', 'phi_xxz')
# Non-derivatives
strS = strS.replace('phi(rho, theta)', 'phi')
# n rho derivatives
strS = strS.replace('Derivative(n(rho, theta), rho)', 'n_x')
strS = strS.replace('Derivative(n(rho, theta), rho, rho)', 'n_xx')
# n theta derivatives
strS = strS.replace('Derivative(n(rho, theta), theta)', 'n_z')
strS = strS.replace('Derivative(n(rho, theta), theta, theta)', 'n_zz')
# n mixed derivatives
strS = strS.replace('Derivative(n(rho, theta), rho, theta)', 'n_xz')
# Non-derivatives
strS = strS.replace('n(rho, theta)', 'n')
newS = sympify(strS)
display(Eq(symbols('S_new'), expand(newS)))
Explanation: Printing for comparison
End of explanation |
945 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Molonglo coordinate transforms
Useful coordinate transforms for the molonglo radio telescope
Step1: Below we define the rotation and reflection matrices
Step2: Define a position vectors
Step3: Generic transform
Step4: Reference conversion formula from Duncan's old TCC
Step5: New conversion formula using rotation matrices
What do we think we should have
Step6: The inverse of this is
Step7: Extending this to HA Dec | Python Code:
import numpy as np
import ephem as e
from scipy.optimize import minimize
import matplotlib.pyplot as plt
np.set_printoptions(precision=5,suppress =True)
Explanation: Molonglo coordinate transforms
Useful coordinate transforms for the molonglo radio telescope
End of explanation
def rotation_matrix(angle, d):
directions = {
"x":[1.,0.,0.],
"y":[0.,1.,0.],
"z":[0.,0.,1.]
}
direction = np.array(directions[d])
sina = np.sin(angle)
cosa = np.cos(angle)
# rotation matrix around unit vector
R = np.diag([cosa, cosa, cosa])
R += np.outer(direction, direction) * (1.0 - cosa)
direction *= sina
R += np.array([[ 0.0, -direction[2], direction[1]],
[ direction[2], 0.0, -direction[0]],
[-direction[1], direction[0], 0.0]])
return R
def reflection_matrix(d):
m = {
"x":[[-1.,0.,0.],[0., 1.,0.],[0.,0., 1.]],
"y":[[1., 0.,0.],[0.,-1.,0.],[0.,0., 1.]],
"z":[[1., 0.,0.],[0., 1.,0.],[1.,0.,-1.]]
}
return np.array(m[d])
Explanation: Below we define the rotation and reflection matrices
End of explanation
def pos_vector(a,b):
return np.array([[np.cos(b)*np.cos(a)],
[np.cos(b)*np.sin(a)],
[np.sin(b)]])
def pos_from_vector(vec):
a,b,c = vec
a_ = np.arctan2(b,a)
c_ = np.arcsin(c)
return a_,c_
Explanation: Define position vectors
End of explanation
def transform(a,b,R,inverse=True):
P = pos_vector(a,b)
if inverse:
R = R.T
V = np.dot(R,P).ravel()
a,b = pos_from_vector(V)
a = 0 if np.isnan(a) else a
b = 0 if np.isnan(a) else b
return a,b
Explanation: Generic transform
End of explanation
def hadec_to_nsew(ha,dec):
ew = np.arcsin((0.9999940546 * np.cos(dec) * np.sin(ha))
- (0.0029798011806 * np.cos(dec) * np.cos(ha))
+ (0.002015514993 * np.sin(dec)))
ns = np.arcsin(((-0.0000237558704 * np.cos(dec) * np.sin(ha))
+ (0.578881847 * np.cos(dec) * np.cos(ha))
+ (0.8154114339 * np.sin(dec)))
/ np.cos(ew))
return ns,ew
Explanation: Reference conversion formula from Duncan's old TCC
End of explanation
# There should be a slope and tilt conversion to get accurate change
#skew = 4.363323129985824e-05
#slope = 0.0034602076124567475
#skew = 0.00004
#slope = 0.00346
skew = 0.01297 # <- this is the skew I get if I optimize for the same results as Duncan's system
slope= 0.00343
def telescope_to_nsew_matrix(skew,slope):
R = rotation_matrix(skew,"z")
R = np.dot(R,rotation_matrix(slope,"y"))
return R
def nsew_to_azel_matrix(skew,slope):
pre_R = telescope_to_nsew_matrix(skew,slope)
x_rot = rotation_matrix(-np.pi/2,"x")
y_rot = rotation_matrix(np.pi/2,"y")
R = np.dot(x_rot,y_rot)
R = np.dot(pre_R,R)
R_bar = reflection_matrix("x")
R = np.dot(R,R_bar)
return R
def nsew_to_azel(ns, ew):
az,el = transform(ns,ew,nsew_to_azel_matrix(skew,slope))
return az,el
print nsew_to_azel(0,np.pi/2) # should be -pi/2 and 0
print nsew_to_azel(-np.pi/2,0)# should be -pi and 0
print nsew_to_azel(0.0,.5) # should be pi/2 and something near pi/2
print nsew_to_azel(-.5,.5) # less than pi/2 and less than pi/2
print nsew_to_azel(.5,-.5)
print nsew_to_azel(.5,.5)
Explanation: New conversion formula using rotation matrices
What do we think we should have:
\begin{equation}
\begin{bmatrix}
\cos(\rm EW)\cos(\rm NS) \
\cos(\rm EW)\sin(\rm NS) \
\sin(\rm EW)
\end{bmatrix}
=
\mathbf{R}
\begin{bmatrix}
\cos(\delta)\cos(\rm HA) \
\cos(\delta)\sin(\rm HA) \
\sin(\delta)
\end{bmatrix}
\end{equation}
Where $\mathbf{R}$ is a composite rotation matrix.
We need a rotations in axis of array plus orthogonal rotation w.r.t. to array centre. Note that the NS convention is flipped so HA and NS go clockwise and anti-clockwise respectively when viewed from the north pole in both coordinate systems.
\begin{equation}
\mathbf{R}_x
=
\begin{bmatrix}
1 & 0 & 0 \
0 & \cos(\theta) & -\sin(\theta) \
0 & \sin(\theta) & \cos(\theta)
\end{bmatrix}
\end{equation}
\begin{equation}
\mathbf{R}_y
=
\begin{bmatrix}
\cos(\phi) & 0 & \sin(\phi) \
0 & 1 & 0 \
-\sin(\phi) & 0 & \cos(\phi)
\end{bmatrix}
\end{equation}
\begin{equation}
\mathbf{R}_z
=
\begin{bmatrix}
\cos(\eta) & -\sin(\eta) & 0\
\sin(\eta) & \cos(\eta) & 0\
0 & 0 & 1
\end{bmatrix}
\end{equation}
\begin{equation}
\mathbf{R} = \mathbf{R}_x \mathbf{R}_y \mathbf{R}_z
\end{equation}
Here I think $\theta$ is a $3\pi/2$ rotation to put the telescope pole (west) at the telescope zenith and $\phi$ is also $\pi/2$ to rotate the telescope meridian (which is lengthwise on the array, what we traditionally think of as the meridian is actually the equator of the telescope) into the position of $Az=0$.
However rotation of NS and HA are opposite, so a reflection is needed. For example reflection around a plane in along which the $z$ axis lies:
\begin{equation}
\mathbf{\bar{R}}_z
=
\begin{bmatrix}
1 & 0 & 0\
0 & 1 & 0\
0 & 0 & -1
\end{bmatrix}
\end{equation}
Conversion to azimuth and elevations should therefore require $\theta=-\pi/2$ and $\phi=\pi/2$ with a reflection about $x$.
Taking into account the EW skew and slope of the telescope:
\begin{equation}
\begin{bmatrix}
\cos(\rm EW)\cos(\rm NS) \
\cos(\rm EW)\sin(\rm NS) \
\sin(\rm EW)
\end{bmatrix}
=
\begin{bmatrix}
\cos(\alpha) & -\sin(\alpha) & 0\
\sin(\alpha) & \cos(\alpha) & 0\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
\cos(\beta) & 0 & \sin(\beta) \
0 & 1 & 0 \
-\sin(\beta) & 0 & \cos(\beta)
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0 \
0 & 0 & 1 \
0 & -1 & 0
\end{bmatrix}
\begin{bmatrix}
0 & 0 & -1 \
0 & 1 & 0 \
1 & 0 & 0
\end{bmatrix}
\begin{bmatrix}
-1 & 0 & 0\
0 & 1 & 0\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
\cos(\delta)\cos(\rm HA) \
\cos(\delta)\sin(\rm HA) \
\sin(\delta)
\end{bmatrix}
\end{equation}
So the correction matrix to take telescope coordinates to ns,ew
\begin{equation}
\begin{bmatrix}
\cos(\alpha)\cos(\beta) & -\sin(\alpha) & \cos(\alpha)\sin(\beta) \
\sin(\alpha)\cos(\beta) & \cos(\alpha) & \sin(\alpha)\sin(\beta) \
-\sin(\beta) & 0 & \cos(\beta)
\end{bmatrix}
\end{equation}
and to Az Elv
\begin{equation}
\begin{bmatrix}
\sin(\alpha) & -\cos(\alpha)\sin(\beta) & -\cos(\alpha)\cos(\beta) \
\cos(\alpha) & -\sin(\alpha)\sin(\beta) & -\sin(\alpha)\cos(\beta) \
-\cos(\beta) & 0 & \sin(\beta)
\end{bmatrix}
\end{equation}
End of explanation
def azel_to_nsew(az, el):
ns,ew = transform(az,el,nsew_to_azel_matrix(skew,slope).T)
return ns,ew
Explanation: The inverse of this is:
End of explanation
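# Round-trip sanity check (an addition, not in the original notebook): NS/EW ->
# Az/El -> NS/EW should recover the original coordinates, since the composite
# matrix is a product of rotations and a reflection and is therefore orthogonal.
ns0, ew0 = 0.3, -0.2
az0, el0 = nsew_to_azel(ns0, ew0)
print(azel_to_nsew(az0, el0)) # expect approximately (0.3, -0.2)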
mol_lat = -0.6043881274183919 # in radians
def azel_to_hadec_matrix(lat):
rot_y = rotation_matrix(np.pi/2-lat,"y")
rot_z = rotation_matrix(np.pi,"z")
R = np.dot(rot_y,rot_z)
return R
def azel_to_hadec(az,el,lat):
ha,dec = transform(az,el,azel_to_hadec_matrix(lat))
return ha,dec
def nsew_to_hadec(ns,ew,lat,skew=skew,slope=slope):
R = np.dot(nsew_to_azel_matrix(skew,slope),azel_to_hadec_matrix(lat))
ha,dec = transform(ns,ew,R)
return ha,dec
ns,ew = 0.8,0.8
az,el = nsew_to_azel(ns,ew)
print "AzEl:",az,el
ha,dec = azel_to_hadec(az,el,mol_lat)
print "HADec:",ha,dec
ha,dec = nsew_to_hadec(ns,ew,mol_lat)
print "HADec2:",ha,dec
# This is Duncan's version
ns_,ew_ = hadec_to_nsew(ha,dec)
print "NSEW Duncan:",ns_,ew_
print "NS offset:",ns_-ns," EW offset:",ew_-ew
def test(ns,ew,skew,slope):
ha,dec = nsew_to_hadec(ns,ew,mol_lat,skew,slope)
ns_,ew_ = hadec_to_nsew(ha,dec)
no,eo = ns-ns_,ew-ew_
no = 0 if np.isnan(no) else no
eo = 0 if np.isnan(eo) else eo
return no,eo
ns = np.linspace(-np.pi/2+0.1,np.pi/2-0.1,10)
ew = np.linspace(-np.pi/2+0.1,np.pi/2-0.1,10)
def test2(a):
skew,slope = a
out_ns = np.empty([10,10])
out_ew = np.empty([10,10])
for ii,n in enumerate(ns):
for jj,k in enumerate(ew):
a,b = test(n,k,skew,slope)
out_ns[ii,jj] = a
out_ew[ii,jj] = b
a = abs(out_ns).sum()#abs(np.median(out_ns))
b = abs(out_ew).sum()#abs(np.median(out_ew))
print a,b
print max(a,b)
return max(a,b)
#minimize(test2,[skew,slope])
# Plotting out the conversion error as a function of HA and Dec.
# Colour scale is log of the absolute difference between original system and new system
ns = np.linspace(-np.pi/2,np.pi/2,10)
ew = np.linspace(-np.pi/2,np.pi/2,10)
out_ns = np.empty([10,10])
out_ew = np.empty([10,10])
for ii,n in enumerate(ns):
for jj,k in enumerate(ew):
print jj
a,b = test(n,k,skew,slope)
out_ns[ii,jj] = a
out_ew[ii,jj] = b
plt.figure()
plt.subplot(121)
plt.imshow(abs(out_ns),aspect="auto")
plt.colorbar()
plt.subplot(122)
plt.imshow(abs(out_ew),aspect="auto")
plt.colorbar()
plt.show()
from mpl_toolkits.mplot3d import Axes3D
from itertools import product, combinations
from matplotlib.patches import FancyArrowPatch
from mpl_toolkits.mplot3d import proj3d
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.set_aspect("equal")
#draw sphere
u, v = np.mgrid[0:2*np.pi:20j, 0:np.pi:10j]
x=np.cos(u)*np.sin(v)
y=np.sin(u)*np.sin(v)
z=np.cos(v)
ax.plot_wireframe(x, y, z, color="r",lw=1)
R = rotation_matrix(np.pi/2,"x")
pos_v = np.array([[x],[y],[z]])
p = pos_v.T
for i in p:
for j in i:
j[0] = np.dot(R,j[0])
class Arrow3D(FancyArrowPatch):
def __init__(self, xs, ys, zs, *args, **kwargs):
FancyArrowPatch.__init__(self, (0,0), (0,0), *args, **kwargs)
self._verts3d = xs, ys, zs
def draw(self, renderer):
xs3d, ys3d, zs3d = self._verts3d
xs, ys, zs = proj3d.proj_transform(xs3d, ys3d, zs3d, renderer.M)
self.set_positions((xs[0],ys[0]),(xs[1],ys[1]))
FancyArrowPatch.draw(self, renderer)
a = Arrow3D([0,1],[0,0.1],[0,.10], mutation_scale=20, lw=1, arrowstyle="-|>", color="k")
ax.add_artist(a)
ax.set_xlabel("X")
ax.set_ylabel("Y")
ax.set_zlabel("Z")
x=p.T[0,0]
y=p.T[1,0]
z=p.T[2,0]
ax.plot_wireframe(x, y, z, color="b",lw=1)
plt.show()
Explanation: Extending this to HA Dec
End of explanation |
946 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intro to Bookworm
Motivation
Infinite Jest is a very long and complicated novel. There are a lot of brilliant resources connected to the book, which aim to help the reader stay afloat amongst the chaos of David Foster Wallace's obscure language, interwoven timelines and narratives, and the sprawling networks of characters. The Infinite Jest Wiki, for example, is insanely well documented and I'd recommend it to anyone reading the book.
One of the most interesting resources I found while reading was Sam Potts' Infinite Jest Diagram.
I went back to the image once or twice while I was reading IJ to work out who a character was and how they were connected to the scene. It's a fun resource to have access to while reading something so deliberately scattered.
However, Infinite Jest isn't the only "big" book out there, and as far as I know the network above was drawn up entirely by hand. I thought it would be nice to have something like this for anything I was reading. It might also function as an interesting learning resource - either for kids at a young, early-reader stage with simple books and small character networks, or for people learning about network analysis who have never bothered reading Les Miserables (again, as far as I know all of the standard example graph datasets like Les Mis and The Karate Kid were put together entirely by hand).
I thought that with a bit of thought and testing, this process was probably automatable, and it is. I can now feed bookworm any novel and have it churn out a pretty network like the one above in seconds, without any prior knowledge of the story or its characters. By virtue of the way character connections are measured, it can also tell you the relative strength of all links between characters.
Getting Started
Before we start, let's import all of the code in the bookworm module. I'll explain what each function does as we move through the notebook - we'll be covering most of build_network.py here.
Step1: The first thing we'll do is load in a book and a list of its characters. These operations are both pretty simple. The book is loaded in as one long string from a .txt file. Character lists are stored in a .csv, with all potential names for a character stored on each row. They're loaded in as tuples of names in a list of characters.
Step2: Then we split the book down into sections. Bookworm works by looking for coocurrence of characters in these sections of the text as a proxy for their connectedness. It's a very simple trick which works stupidly well.
There are a few ways we can break down the book into sections
Step3: Now comes the interesting bit. We've assembled our cast, and moved the text that they inhabit into a nice, machine-interpretable format.
What we want to generate now is the blank table below which describes the presence of a character in a sentence. At this point, Bookworm hasn't really 'read' any of the text so all of the interactions between characters and sentences (where each cell in the table represents an interaction) are set to 0
Step4: Next, it iterates through the list of sentences it has been fed, checking for an instance of each character. If it finds a character in the sentence, it marks their presence with a 1.
So if character 1 appears with character 2 in sentence 1, and with character 3 in sentence 2, we would see the following, with the rest of the cells remaining blank
Step5: calculate_cooccurence() does this computation and then wipes out any interaction of a character with themselves. For the table above, this would give us
Step6: That's the essence of what bookworm does, and everything from here onwards is just play. It really is that simple. Once we have an adjacency matrix of our characters, all of the graph theory falls into place.
So, now we can show off a few results! Despite describing a set of tiny matrices above, we've really been computing all of Infinite Jest's massiveness while working through the notebook.
We can print the strongest relationships for a chosen character using the function below
Step7: Applying this to 5 characters at random
Step8: Those all seem to make sense... Let's try with a few characters who we know about in more detail
Step9: Yep... Compare the results we've generated to the ones in the diagram at the top of the notebook.
Same code, different book
Lets run the whole thing for an entirely different book and see whether we get similarly positive results. This time, Harry Potter and The Philosopher's Stone - chosen because you're more likely to have some contextual knowledge of who's who and what's what in that book. | Python Code:
from bookworm import *
Explanation: Intro to Bookworm
Motivation
Infinite Jest is a very long and complicated novel. There are a lot of brilliant resources connected to the book, which aim to help the reader stay afloat amongst the chaos of David Foster Wallace's obscure language, interwoven timelines and narratives, and the sprawling networks of characters. The Infinite Jest Wiki, for example, is insanely well documented and I'd recommend it to anyone reading the book.
One of the most interesting resources I found while reading was Sam Potts' Infinite Jest Diagram.
I went back to the image once or twice while I was reading IJ to work out who a character was and how they were connected to the scene. It's a fun resource to have access to while reading something so deliberately scattered.
However, Infinite Jest isn't the only "big" book out there, and as far as I know the network above was drawn up entirely by hand. I thought it would be nice to have something like this for anything I was reading. It might also function as an interesting learning resource - either for kids at a young, early-reader stage with simple books and small character networks, or for people learning about network analysis who have never bothered reading Les Miserables (again, as far as I know all of the standard example graph datasets like Les Mis and The Karate Kid were put together entirely by hand).
I thought that with a bit of thought and testing, this process was probably automatable, and it is. I can now feed bookworm any novel and have it churn out a pretty network like the one above in seconds, without any prior knowledge of the story or its characters. By virtue of the way character connections are measured, it can also tell you the relative strength of all links between characters.
Getting Started
Before we start, let's import all of the code in the bookworm module. I'll explain what each function does as we move through the notebook - we'll be covering most of build_network.py here.
End of explanation
book = load_book('data/raw/ij.txt', lower=True)
characters = load_characters('data/raw/characters_ij.csv')
Explanation: The first thing we'll do is load in a book and a list of its characters. These operations are both pretty simple. The book is loaded in as one long string from a .txt file. Character lists are stored in a .csv, with all potential names for a character stored on each row. They're loaded in as tuples of names in a list of characters.
End of explanation
sequences = get_sentence_sequences(book)
Explanation: Then we split the book down into sections. Bookworm works by looking for coocurrence of characters in these sections of the text as a proxy for their connectedness. It's a very simple trick which works stupidly well.
There are a few ways we can break down the book into sections:
- get_sentence_sequences() uses NLTK's standard .tokenize() function to split the book into sentences.
- get_word_sequences() uses NLTK's word_tokenize() to split the book into words, of which it will then select ordered lists of length n (default 40).
- get_character_sequences() uses python builtins to split it into substrings of length n (default 200).
Fundamentally, they all return a list of strings which each cover a very small section of the novel. For simplicity's sake we're going to use the sentence-wise splitter.
End of explanation
df = find_connections(sequences, characters)
Explanation: Now comes the interesting bit. We've assembled our cast, and moved the text that they inhabit into a nice, machine-interpretable format.
What we want to generate now is the blank table below which describes the presence of a character in a sentence. At this point, Bookworm hasn't really 'read' any of the text so all of the interactions between characters and sentences (where each cell in the table represents an interaction) are set to 0:
| | character 1 | character 2 | character 3 |
|------------|-------------|-------------|-------------|
| sentence 1 | 0 | 0 | 0 |
| sentence 2 | 0 | 0 | 0 |
| sentence 3 | 0 | 0 | 0 |
| sentence 4 | 0 | 0 | 0 |
The first bit of the find_connections() sets up the blank table above.
End of explanation
cooccurence = calculate_cooccurence(df)
Explanation: Next, it iterates through the list of sentences it has been fed, checking for an instance of each character. If it finds a character in the sentence, it marks their presence with a 1.
So if character 1 appears with character 2 in sentence 1, and with character 3 in sentence 2, we would see the following, with the rest of the cells remaining blank:
| | character 1 | character 2 | character 3 |
|------------|-------------|-------------|-------------|
| sentence 1 | 1 | 1 | 0 |
| sentence 2 | 1 | 0 | 1 |
| sentence 3 | 0 | 0 | 0 |
| sentence 4 | 0 | 0 | 0 |
In the next stage, we enumerate characters coocurence with one another. We can compute this very quickly by taking the dot product of the table with its transpose.
End of explanation
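# Toy illustration of the dot-product trick described above (an addition, not in
# the original notebook): a hypothetical 2-sentence, 3-character presence table.
import numpy as np
import pandas as pd
toy = pd.DataFrame([[1, 1, 0], [1, 0, 1]],
                   columns=['character 1', 'character 2', 'character 3'])
counts = toy.T.dot(toy).values      # character-by-character cooccurrence counts
np.fill_diagonal(counts, 0)         # wipe out interactions of a character with itself
print(pd.DataFrame(counts, index=toy.columns, columns=toy.columns))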
cooccurence = cooccurence.to_sparse()
Explanation: calculate_cooccurence() does this computation and then wipes out any interaction of a character with themselves. For the table above, this would give us:
| | character 1 | character 2 | character 3 |
|-------------|-------------|-------------|-------------|
| character 1 | 0 | 1 | 1 |
| character 2 | 1 | 0 | 0 |
| character 3 | 1 | 0 | 0 |
showing that character 1 has interacted with character 2 and character 3, but character 2 and character 3 haven't interacted. Note the symmetry across the diagonal...
The cooccurence matrix we're referring to here is also known as an adjacency matrix - I might use the terms interchangably from here on.
The example table above is miniscule in comparison to the dozens of characters who might turn up in a reasonably sized novel, and the hundreds or thousands of opportunities they have to interact with one another. The coocurence matrix in reality is likely to contain much larger numbers between characters who regularly appear in the same sentences. Unless we're working with a really tiny, incestuous network, this coocurence matrix is also probably going to be pretty sparse. For that reason it'll often make sense to store it as a sparse matrix:
End of explanation
def print_five_closest(character):
print('-'*len(str(character))
+ '\n' + str(character) + '\n'
+ '-'*len(str(character)))
top_five = (cooccurence[str(character)]
.sort_values(ascending=False)
.index.values
[:5])
for name in top_five:
print(name)
Explanation: That's the essence of what bookworm does, and everything from here onwards is just play. It really is that simple. Once we have an adjacency matrix of our characters, all of the graph theory falls into place.
So, now we can show off a few results! Despite describing a set of tiny matrices above, we've really been computing all of Infinite Jest's massiveness while working through the notebook.
We can print the strongest relationships for a chosen character using the function below:
End of explanation
from random import randint
for i in range(5):
print_five_closest(characters[randint(0, len(characters))])
print()
Explanation: Applying this to 5 characters at random:
End of explanation
print_five_closest(('the moms ', 'avril ', 'mondragon '))
print_five_closest(('joelle ', 'van dyne ', 'lucille '))
print_five_closest(('pemulis ',))
print_five_closest(('bruce green ',))
Explanation: Those all seem to make sense... Let's try with a few characters who we know about in more detail
End of explanation
book = load_book('data/raw/hp_philosophers_stone.txt', lower=True)
characters = load_characters('data/raw/characters_hp.csv')
sequences = get_sentence_sequences(book)
df = find_connections(sequences, characters)
cooccurence = calculate_cooccurence(df).to_sparse()
characters[:5]
print_five_closest(('harry ', ' potter '))
print_five_closest(('voldemort ', ' lord ', ' you-know-who '))
print_five_closest(('crabbe ',))
print_five_closest(('fred ',))
Explanation: Yep... Compare the results we've generated to the ones in the diagram at the top of the notebook.
Same code, different book
Let's run the whole thing for an entirely different book and see whether we get similarly positive results. This time, Harry Potter and The Philosopher's Stone - chosen because you're more likely to have some contextual knowledge of who's who and what's what in that book.
End of explanation |
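# Optional next step (an addition, not in the original notebook): the cooccurence
# matrix can be turned into a graph with networkx (assuming it is installed), as a
# starting point for drawing diagrams like the one at the top of the notebook.
# The threshold of 10 shared sentences is arbitrary.
import networkx as nx
G = nx.Graph()
for a in cooccurence.columns:
    for b in cooccurence.columns:
        weight = cooccurence.loc[a, b]
        if a != b and weight > 10:
            G.add_edge(a, b, weight=weight)
print(G.number_of_nodes(), G.number_of_edges())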
947 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Practical Optimisations for Pandas 🐼
Eyal Trabelsi
About Me 🙈
Software Engineer at Salesforce 👷
Big passion for python, data and performance optimisations 🐍🤖
Online at medium | twitter 🌐
Optimizing Your Pandas is not Rocket Science 🚀
Optimization ?! Why ?🤨
Fast is better than slow 🐇
latency: response time, e.g. a 200 millisecond client roundtrip
throughput: successful traffic flow of 200 requests per second
Memory efficiency is good 💾
Saving money is awesome 💸
Hardware will only take you so far 💻
Ok, now that I have got your attention, the next question I want to tackle is when we should optimize our code
Before We Optimize ⏰
It's actually needed 🚔
remember optimized code is
Step1: How 👀
Use What You Need 🧑
Keep needed columns only
Keep needed rows only
Don't Reinvent the Wheel 🎡
Vast ecosystem
Use existing solutions
Fewer bugs
Highly optimized
Avoid Loops ♾
Bad Option 😈
Step2: Better Option 🤵
Step3: 150x Improvement In Execution Time ⌛
Best Option 👼
Step4: Another 2000x Improvement In Execution Time ⌛
pandas vectorized functions
Step5: Supported Types 🌈
int64 / float64
bool
objects
datetime64 / timedelta
Category
Sparse Types
Nullable Integer/Nullable Boolean
Your Own Types
Open Sourced Types like cyberpandas and geopandas
Where We Stand 🌈
Step6: Optimized Types 🌈
Step7: 13x Improvement In Memory ⌛
Optimized Types 🌈
Improved operation performance 🧮
Step8: 2.5x Performance Improvement⌛
Recommended Installation 👨🏫
numexpr - Fast numerical expression evaluator for NumPy
bottleneck - uses specialized nan aware Cython routines to achieve large speedups.
Better for medium to big datasets
Compiled Code 🤯
Python dynamic nature
No compilation optimization
Pure Python can be slow
Step9: Cython and Numba for the rescue 👨🚒
Cython 🤯
Up to 100x speedup from pure python 👍
Learning Curve 👎
Separated Compilation Step 👎 👍
Step10: Example
Step11: 100x Performance Improvement⌛
Numba 🤯
Up to 200x speedup from pure python 👍
Easy 👍
using numba is really easy - it's simply a matter of adding a decorator to a method
Highly Configurable - fastmath, parallel, nogil 👍
Mostly Numeric 👎
Example
Step12: 65x Performance Improvement⌛
1️⃣ Vectorized methods
2️⃣ Numba
3️⃣ Cython
General Python Optimizations 🐍
Caching 🏎
Avoid unnecessary work/computation.
Faster code
functools.lru_cache
Intermediate Variables👩👩👧👧
Intermediate calculations
Memory foot print of both objects
Smarter variables allocation
Step13: Example | Python Code:
! pip install numba numexpr
import math
import time
import warnings
from dateutil.parser import parse
import janitor
import numpy as np
import pandas as pd
from numba import jit
from sklearn import datasets
from pandas.api.types import is_datetime64_any_dtype as is_datetime
warnings.filterwarnings("ignore", category=pd.errors.DtypeWarning)
pd.options.display.max_columns = 999
path = 'https://raw.githubusercontent.com/FBosler/you-datascientist/master/invoices.csv'
def load_dataset(naivly=False):
df = (pd.concat([pd.read_csv(path)
.clean_names()
.remove_columns(["meal_id", "company_id"])
for i in range(20)])
.assign(meal_tip=lambda x: x.meal_price.map(lambda x: x * 0.2))
.astype({"meal_price": int})
.rename(columns={"meal_price": "meal_price_with_tip"}))
if naivly:
for col in df.columns:
df[col] = df[col].astype(object)
return df
df = load_dataset()
df.head()
Explanation: Practical Optimisations for Pandas 🐼
Eyal Trabelsi
About Me 🙈
Software Engineer at Salesforce 👷
Big passion for python, data and performance optimisations 🐍🤖
Online at medium | twitter 🌐
Optimizing Your Pandas is not Rocket Science 🚀
Optimization ?! Why ?🤨
Fast is better than slow 🐇
latency: response time, e.g. a 200 millisecond client roundtrip
throughput: successful traffic flow of 200 requests per second
Memory efficiency is good 💾
Saving money is awesome 💸
Hardware will only take you so far 💻
Ok, now that I have got your attention, the next question I want to tackle is when we should optimize our code
Before We Optimize ⏰
It's actually needed 🚔
remember optimized code is:
harder to write and read
less maintainable
buggier, more brittle
Optimize when
gather requirements, there are some parts you won't be able to touch
establish percentile SLAs: 50, 95, 99 max
Our code is well tested 💯
Focus on the bottlenecks 🍾
I have a 45 minute talk on how to properly profile code; in this talk I give you a glimpse.
Profiling 📍
timeit - Benchmark multiple runs of the code snippet and measure CPU ⌛
memit - Measures process Memory 💾
Dataset 📉
End of explanation
import warnings
warnings.filterwarnings("ignore")
def iterrows_original_meal_price(df):
for i, row in df.iterrows():
df.loc[i]["original_meal_price"] = row["meal_price_with_tip"] - row["meal_tip"]
return df
%%timeit -r 1 -n 1
iterrows_original_meal_price(df)
Explanation: How 👀
Use What You Need 🧑
Keep needed columns only
Keep needed rows only
Don't Reinvent the Wheel 🎡
Vast ecosystem
Use existing solutions
Fewer bugs
Highly optimized
Avoid Loops ♾
Bad Option 😈
End of explanation
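# Supplementary sketch (not from the original talk) of the "Use What You Need"
# point above: dropping unneeded columns and rows is often the cheapest win.
# The column names come from the invoices dataset loaded earlier; the
# "Breakfast" filter value is just a hypothetical example.
slim = df[["type_of_meal", "meal_price_with_tip"]]   # keep needed columns only
slim = slim[slim["type_of_meal"] == "Breakfast"]      # keep needed rows only
slim.memory_usage(deep=True).sum()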
def apply_original_meal_price(df):
df["original_meal_price"] = df.apply(lambda x: x['meal_price_with_tip'] - x['meal_tip'], axis=1)
return df
%%timeit
apply_original_meal_price(df)
Explanation: Better Option 🤵
End of explanation
def vectorized_original_meal_price(df):
df["original_meal_price"] = df["meal_price_with_tip"] - df["meal_tip"]
return df
%%timeit
vectorized_original_meal_price(df)
Explanation: 150x Improvement In Execution Time ⌛
Best Option 👼
End of explanation
ones = np.ones(shape=5000)
ones
types = ['object', 'complex128', 'float64', 'int64', 'int32', 'int16', 'int8', 'bool']
df = pd.DataFrame(dict([(t, ones.astype(t)) for t in types]))
df.memory_usage(index=False, deep=True)
Explanation: Another 2000x Improvement In Execution Time ⌛
pandas vectorized functions: +, -, .str.lower(), .str.strip(), .dt.second and more
numpy vectorized functions: np.log, np.divide, np.subtract, np.where, and more
scipy vectorized functions: scipy.special.gamma, scipy.special.beta and more
np.vectorize
Picking the Right Type 🌈
Motivation 🏆
End of explanation
df = load_dataset(naivly=True)
df.memory_usage(deep=True).sum()
df.memory_usage(deep=True)
df.dtypes
Explanation: Supported Types 🌈
int64 / float64
bool
objects
datetime64 / timedelta
Category
Sparse Types
Nullable Integer/Nullable Boolean
Your Own Types
Open Sourced Types like cyberpandas and geopandas
Where We Stand 🌈
End of explanation
optimized_df = df.astype({'order_id': 'category',
'date': 'category',
'date_of_meal': 'category',
'participants': 'category',
'meal_price_with_tip': 'int16',
'type_of_meal': 'category',
'heroes_adjustment': 'bool',
'meal_tip': 'float32'})
optimized_df.memory_usage(deep=True).sum()
Explanation: Optimized Types 🌈
End of explanation
%%timeit
df["meal_price_with_tip"].astype(object).mean()
%%timeit
df["meal_price_with_tip"].astype(float).mean()
Explanation: 13x Improvement In Memory ⌛
Optimized Types 🌈
Improved operation performance 🧮
End of explanation
def foo(N):
accumulator = 0
for i in range(N):
accumulator = accumulator + i
return accumulator
%%timeit
df.meal_price_with_tip.map(foo)
Explanation: 2.5x Performance Improvement⌛
Recommended Installation 👨🏫
numexpr - Fast numerical expression evaluator for NumPy
bottleneck - uses specialized nan aware Cython routines to achieve large speedups.
Better for medium to big datasets
Compiled Code 🤯
Python dynamic nature
No compilation optimization
Pure Python can be slow
End of explanation
%load_ext Cython
Explanation: Cython and Numba for the rescue 👨🚒
Cython 🤯
Up to 100x speedup from pure python 👍
Learning Curve 👎
Separated Compilation Step 👎 👍
End of explanation
%%cython
def cython_foo(long N):
cdef long accumulator
accumulator = 0
cdef long i
for i in range(N):
accumulator += i
return accumulator
%%timeit
df.meal_price_with_tip.map(cython_foo)
Explanation: Example
End of explanation
@jit(nopython=True)
def numba_foo(N):
accumulator = 0
for i in range(N):
accumulator = accumulator + i
return accumulator
%%timeit
df.meal_price_with_tip.map(numba_foo)
Explanation: 100x Performance Improvement⌛
Numba 🤯
Up to 200x speedup from pure python 👍
Easy 👍
using numba is really easy - it's simply a matter of adding a decorator to a method
Highly Configurable - fastmath, parallel, nogil 👍
Mostly Numeric 👎
Example
End of explanation
def another_foo(data):
return data * 2
def foo(data):
return data + 10
Explanation: 65x Performance Improvement⌛
1️⃣ Vectorized methods
2️⃣ Numba
3️⃣ Cython
General Python Optimizations 🐍
Caching 🏎
Avoid unnecessary work/computation.
Faster code
functools.lru_cache
Intermediate Variables👩👩👧👧
Intermediate calculations
Memory foot print of both objects
Smarter variables allocation
End of explanation
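# The caching point above is easy to try with functools.lru_cache from the
# standard library - a minimal sketch, not from the original slides:
from functools import lru_cache
@lru_cache(maxsize=None)
def slow_square(x):
    time.sleep(1)  # stand-in for an expensive computation
    return x * x
slow_square(4)  # first call pays the one-second cost
slow_square(4)  # repeated call is answered from the cache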
%reload_ext memory_profiler
def load_data():
return np.ones((2 ** 30), dtype=np.uint8)
%%memit
def proccess():
data = load_data()
data2 = foo(data)
data3 = another_foo(data2)
return data3
proccess()
%%memit
def proccess():
data = load_data()
data = foo(data)
data = another_foo(data)
return data
proccess()
Explanation: Example
End of explanation |
948 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analyzing unstructured text in product review data
It's common for companies to have useful data hidden in large volumes of text
Step1: Focus on chosen aspects about baby monitors
Step2: Process reviews for the most common product
Step3: Comparing to another product
Step4: Comparing the number of sentences that mention each aspect
Step5: Comparing the sentence-level sentiment for each aspect of each product
Step6: Comparing the use of adjectives for each aspect
Step7: Investigating good and bad sentences
Step8: Print good sentences for the first item, where adjectives and aspects are highlighted.
Step9: Print bad sentences for the first item, where adjectives and aspects are highlighted.
Step10: Deployment | Python Code:
import graphlab as gl
from graphlab.toolkits.text_analytics import trim_rare_words, split_by_sentence, extract_part_of_speech, stopwords, PartOfSpeech
def nlp_pipeline(reviews, title, aspects):
print(title)
print('1. Get reviews for this product')
reviews = reviews.filter_by(title, 'name')
print('2. Splitting reviews into sentences')
reviews['sentences'] = split_by_sentence(reviews['review'])
sentences = reviews.stack('sentences', 'sentence').dropna()
print('3. Tagging relevant reviews')
tags = gl.SFrame({'tag': aspects})
tagger_model = gl.data_matching.autotagger.create(tags, verbose=False)
tagged = tagger_model.tag(sentences, query_name='sentence', similarity_threshold=.3, verbose=False)\
.join(sentences, on='sentence')
print('4. Extracting adjectives')
tagged['cleaned'] = trim_rare_words(tagged['sentence'], stopwords=list(stopwords()))
tagged['adjectives'] = extract_part_of_speech(tagged['cleaned'], [PartOfSpeech.ADJ])
print('5. Predicting sentence-level sentiment')
model = gl.sentiment_analysis.create(tagged, features=['review'])
tagged['sentiment'] = model.predict(tagged)
return tagged
reviews = gl.SFrame('amazon_baby.gl')
reviews
from helper_util import *
Explanation: Analyzing unstructured text in product review data
It's common for companies to have useful data hidden in large volumes of text:
online reviews
social media posts and tweets
interactions with customers, such as emails and call center transcripts
For example, when shopping it can be challenging to decide between products with the same star rating. When this happens, shoppers often sift through the raw text of reviews to understand the strengths and weaknesses of each option.
<img src="ItemC.png">
<img src="ItemD.png">
In this notebook we seek to automate the task of determining product strengths and weaknesses from review text.
splitting Amazon review text into sentences and applying a sentiment analysis model
tagging documents that mention aspects of interest
extracting adjectives from raw text, and comparing their use in positive and negative reviews
summarizing the use of adjectives for tagged documents
GraphLab Create includes feature engineering objects that leverage spaCy, a high performance NLP package. Here we use it for extracting parts of speech and parsing reviews into sentences.
End of explanation
aspects = ['audio', 'price', 'signal', 'range', 'battery life']
reviews = search(reviews, 'monitor')
reviews
Explanation: Focus on chosen aspects about baby monitors
End of explanation
item_a = 'Infant Optics DXR-5 2.4 GHz Digital Video Baby Monitor with Night Vision'
reviews_a = nlp_pipeline(reviews, item_a, aspects)
reviews_a
Explanation: Process reviews for the most common product
End of explanation
dropdown = get_dropdown(reviews)
display(dropdown)
item_b = dropdown.value
reviews_b = nlp_pipeline(reviews, item_b, aspects)
counts, sentiment, adjectives = get_comparisons(reviews_a, reviews_b, item_a, item_b, aspects)
Explanation: Comparing to another product
End of explanation
counts
Explanation: Comparing the number of sentences that mention each aspect
End of explanation
sentiment
Explanation: Comparing the sentence-level sentiment for each aspect of each product
End of explanation
adjectives
Explanation: Comparing the use of adjectives for each aspect
End of explanation
good, bad = get_extreme_sentences(reviews_a)
Explanation: Investigating good and bad sentences
End of explanation
print_sentences(good['highlighted'])
Explanation: Print good sentences for the first item, where adjectives and aspects are highlighted.
End of explanation
print_sentences(bad['highlighted'])
Explanation: Print bad sentences for the first item, where adjectives and aspects are highlighted.
End of explanation
service = gl.deploy.predictive_service.load("s3://gl-demo-usw2/predictive_service/demolab/ps-1.8.5")
service.get_predictive_objects_status()
def word_count(text):
sa = gl.SArray([text])
sa = gl.text_analytics.count_words(sa)
return sa[0]
service.update('chris_bow', word_count)
service.apply_changes()
service.query('chris_bow', text=["It's a beautiful day in the neighborhood. Beautiful day for a neighbor."])
Explanation: Deployment
End of explanation |
949 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic Types in Core-Python
Python has all types that you already know from other programming languages.
Notes
Step1: The last example shows that Python provides arbitrarily precise integer arithmetic! There is only one integer type in the Python core.
Step2: Float Type
Step3: By default Python uses Double Precision floating point numbers.
The example print(c / d) reminds us that most floating-point numbers do not have an exact representation in a computer (numerics!). See also Floating Point Arithmetic
Step4: Boolean Type
Python has an explicit boolean type which can only take the values True and False. It is used to test conditions in loops and if statements.
Step5: Exercises
Write down a Python expression which evaluates to True
if $1 < x \leq 10$ and to False otherwise.
Explain the results of the following cell! Can you modify the expressions such that the results meet your expectations?
Step6: String Type
The Python String type is a special case of a container which we will treat in detail in the coming weeks!
Step7: Remarks | Python Code:
a = 5 # assigning the integer value 5 to variable 'a'
b = 2
print(a + b, a - b) # Integer addition and subtraction
print(a * b) # Integer multiplication
print(a**b) # 5 to the power of 2
print(a // b) # Integer division!
print(a % b) # modulo function
print(a / b) # division with 'promotion' of the result to float
# if necessary
# This behaviour is different in Python2!
print(5**5460) # 'arbitrarily accurate' integer arithmetic!
Explanation: Basic Types in Core-Python
Python has all types that you already know from other programming languages.
Notes:
- In Python, variables do not need to be declared before using them!
- Variables do not have any type but the objects they point to do!
Integer Type
End of explanation
a = 5
print(type(a)) # The type of the object a variable points to
# can explicitly be queried!
print(type(5))
Explanation: The last example shows that Python provides arbitrarily precise integer arithmetic! There is only one integer type in the Python core.
End of explanation
import numpy # 'library or module' of mathematical functions
# and data structures.
c = 3.14159 # seen already?
d = .1 # equal to 0.1
e = 1.2e2 # read: 1.2 times 10 to the power of 2
print(type(e))
print(c + d, c - d, d * e, c / d)
print(numpy.cos(c)) # The cosine function is defined within the numpy module
print(c%1) # obtain fractional part of a float
print(d + 3) # in mixed calculations, integer values are 'promoted' to float
Explanation: Float Type
End of explanation
# your solution here
Explanation: By default Python uses Double Precision floating point numbers.
The example print(c / d) reminds us that most floating-point numbers do not have an exact representation in a computer (numerics!). See also Floating Point Arithmetic: Issues and Limitations and read especially What every computer scientist should know about Floating-Point Arithmetic if you have not yet done so!
Exercise:
Calculate the expressions: $2 + 4 \cdot 3$, $5^{5^{5}}$, $\mathrm{e}^{-1}$
and $\left({49 \atop 6}\right)=\frac{49!}{6! 43!}$
Hint: You can use the functions numpy.exp and numpy.math.factorial to obtain Euler's number and factorials respectively.
End of explanation
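# One possible solution to the exercise above, following the hint (this cell is
# an addition to the original notebook):
print(2 + 4 * 3)
print(numpy.exp(-1))  # Euler's number to the power of -1
print(numpy.math.factorial(49) // (numpy.math.factorial(6) * numpy.math.factorial(43)))
print(5**(5**5))  # a very large integer which Python evaluates exactly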
f = False
t = True
a = 1
print(f or t) # logical 'or'
print(f and t) # logical 'and'
print(a == 1) # test for equality
print(a == 2)
print(a < 5, a >= 5)
Explanation: Boolean Type
Python has an explicit boolean type which can only take the values True and False. It is used to test conditions in loops and if statements.
End of explanation
d = 1.0
print(d == 1.0)
print((d - 0.6) == 0.4)
print((d - 6 * 0.1) == 0.4)
# your solution here
Explanation: Exercises
Write down a Python expression which evaluates to True
if $1 < x \leq 10$ and to False otherwise.
Explain the results of the following cell! Can you modify the expressions such that the results meet your expectations?
End of explanation
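# Possible solutions to the two exercises above (an addition to the original notebook):
x = 5
print(1 < x <= 10)  # chained comparison: True exactly when 1 < x <= 10
# Floating-point results should be compared with a tolerance rather than '==':
print(abs((d - 6 * 0.1) - 0.4) < 1e-12)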
first_name = "Thomas"
second_name = "Erben"
full_name = first_name + " " + second_name # concatenation of strings
print(full_name)
print(full_name[1], full_name[1:3]) # accessing parts of strings
# (see containers)
print(len(full_name)) # length of string
print(full_name.upper()) # translate all chars to uppercase
print(full_name.find("Erben")) # find substring
Explanation: String Type
The Python String type is a special case of a container which we will treat in detail in the coming weeks!
End of explanation
s = "Hello Thomas"
# your solution here
Explanation: Remarks:
Python fully manages memory for you (you do not need to worry about memory overflows by
assigning too long strings to variables etc.)! However, it is often necessary to know how Python
does it, especially for data-intensive applications. We will talk more about this later.
Note the different ways to operate with a function on strings: len(full_name) (usual function call)
and full_name.upper() (calling a method from the String Class).
Exercise:
Give a Python command which replaces the word Hello in a string with Goodbye!
End of explanation |
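# One possible solution to the string exercise above (an addition to the
# original notebook):
print(s.replace("Hello", "Goodbye"))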
950 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Semi-supervised Learning
author
Step1: Let's first generate some data in the form of blobs that are close together. Generally one tends to have far more unlabeled data than labeled data, so let's say that a person only has 50 samples of labeled training data and 4950 unlabeled samples. In pomegranate, a sample can be specified as lacking a label by providing the integer -1 as the label, just like in scikit-learn. Let's also say that there is a bit of bias in the labeled samples to inject some noise into the problem, as otherwise Gaussian blobs are trivially modeled with even a few samples.
Step2: Now let's take a look at the data when we plot it.
Step3: The clusters of unlabeled data seem clear to us, and it doesn't seem like the labeled data is perfectly faithful to these clusters. This can typically happen in a semi-supervised setting as well, as the data that is labeled is sometimes biased, either because it was chosen as it was easy to label, or because it was chosen to be labeled in a biased manner.
Now let's try fitting a simple naive Bayes classifier to this data and compare the results when using only the labeled data to when using both the labeled and unlabeled data together.
Step4: It seems like we get a big bump in test set accuracy when we use semi-supervised learning. Let's visualize the data to get a better sense of what is happening here.
Step5: The contours plot the decision boundaries between the different classes, with the left figures corresponding to the partially labeled training set and the right figures corresponding to the test set. We can see that the boundaries learned using only the labeled data look a bit weird when considering the unlabeled data, particularly in that they don't cleanly separate the cyan cluster from the other two. In addition, it seems like the boundary between the magenta and red clusters is a bit curved in an unrealistic way. We would not expect points that fell around (-18, -7) to actually come from the red class. Training the model in a semi-supervised manner cleaned up both of these concerns by learning better boundaries that are also flatter and more generalizable.
Let's next compare the training times to see how much slower it is to do semi-supervised learning than it is to do simple supervised learning.
Step6: It is quite a bit slower to do semi-supervised learning than simple supervised learning in this example. This is expected, as the simple supervised update for naive Bayes is a trivial MLE across each dimension, whereas the semi-supervised case requires EM to run to convergence. However, it is still faster to do semi-supervised learning in this setting to learn a naive Bayes classifier than it is to fit the label propagation estimator from sklearn.
However, though it is widely used, the naive Bayes classifier is still a fairly simple model. One can construct a more complicated model that does not assume feature independence called a Bayes classifier that can also be trained using semi-supervised learning in pretty much the same manner. You can read more about the Bayes classifier in its tutorial in the tutorial folder. Let's move on to more complicated data and try to fit a mixture model Bayes classifier, comparing the performance between using only labeled data and using all data.
First let's generate some more complicated, noisier data.
Step7: Now let's take a look at the accuracies that we get when training a model using just the labeled examples versus all of the examples in a semi-supervised manner.
Step8: As expected, the semi-supervised method performs better. Let's visualize the landscape in the same manner as before in order to see why this is the case.
Step9: Immediately, one would notice that the decision boundaries when using semi-supervised learning are smoother than those when using only a few samples. This can be explained mostly because having more data can generally lead to smoother decision boundaries as the model does not overfit to spurious examples in the dataset. It appears that the majority of the correctly classified samples come from having a more accurate decision boundary for the magenta samples in the left cluster. When using only the labeled samples many of the magenta samples in this region get classified incorrectly as cyan samples. In contrast, when using all of the data these points are all classified correctly.
Lastly, let's take a look at a time comparison in this more complicated example. | Python Code:
%matplotlib inline
import time
import pandas
import random
import numpy
import matplotlib.pyplot as plt
import seaborn; seaborn.set_style('whitegrid')
import itertools
from pomegranate import *
random.seed(0)
numpy.random.seed(0)
numpy.set_printoptions(suppress=True)
%load_ext watermark
%watermark -m -n -p numpy,scipy,pomegranate
%pylab inline
from pomegranate import *
from sklearn.semi_supervised import LabelPropagation
from sklearn.datasets import make_blobs
import seaborn, time
seaborn.set_style('whitegrid')
numpy.random.seed(1)
Explanation: Semi-supervised Learning
author: Jacob Schreiber <br>
contact: [email protected]
Most classical machine learning algorithms assume either that an entire dataset is labeled (supervised learning) or that there are no labels at all (unsupervised learning). However, frequently it is the case that some labeled data is present but there is a great deal of unlabeled data as well. A great example of this is computer vision, where the internet is filled with pictures (mostly of cats) that could be useful, but you don't have the time or money to label them all in accordance with your specific task. Typically what ends up happening is that either the unlabeled data is discarded in favor of training a model solely on the labeled data, or an unsupervised model is initialized with the labeled data and then set free on the unlabeled data. Neither method uses both sets of data in the optimization process.
Semi-supervised learning is a method to incorporate both labeled and unlabeled data into the training task, typically yielding better performing estimators than using the labeled data alone. There are many methods one could use for semi-supervised learning, and <a href="http://scikit-learn.org/stable/modules/label_propagation.html">scikit-learn has a good write-up on some of these techniques</a>.
pomegranate natively implements semi-supervised learning through a merger of maximum-likelihood and expectation-maximization. As an overview, the models are initialized by first fitting to the labeled data directly using maximum-likelihood estimates. The models are then refined by running expectation-maximization (EM) on the unlabeled datasets and adding the sufficient statistics to those acquired from maximum-likelihood estimates on the labeled data. Under the hood, both a supervised model and an unsupervised mixture model are created using the same underlying distribution objects. The summarize method is first called using the supervised method on the labeled data, and then the summarize method is called again using the unsupervised method on the unlabeled data. This causes the sufficient statistics to be updated appropriately given the results of first maximum-likelihood and then EM. This process continues until convergence in the EM step.
Let's take a look!
End of explanation
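# A rough conceptual sketch (an addition to the original tutorial) of the scheme
# described above, written in plain numpy for a Gaussian naive Bayes model. It is
# NOT pomegranate's implementation - just the idea: initialize with maximum
# likelihood on the labeled points, then let the unlabeled points contribute
# soft (EM) sufficient statistics until convergence.
def semisupervised_naive_bayes_sketch(X, y, n_classes, n_iter=10):
    labeled = y != -1
    X_l, y_l = X[labeled], y[labeled]
    X_u = X[~labeled]
    # 1. Maximum-likelihood initialization from the labeled data only
    means = numpy.array([X_l[y_l == c].mean(axis=0) for c in range(n_classes)])
    stds = numpy.array([X_l[y_l == c].std(axis=0) + 1e-3 for c in range(n_classes)])
    priors = numpy.array([(y_l == c).mean() for c in range(n_classes)])
    for _ in range(n_iter):
        # 2. E-step: soft class responsibilities for the unlabeled data
        log_p = numpy.stack([-0.5 * (((X_u - means[c]) / stds[c]) ** 2
                             + numpy.log(2 * numpy.pi * stds[c] ** 2)).sum(axis=1)
                             + numpy.log(priors[c]) for c in range(n_classes)], axis=1)
        resp = numpy.exp(log_p - log_p.max(axis=1, keepdims=True))
        resp /= resp.sum(axis=1, keepdims=True)
        # 3. M-step: hard statistics from the labeled data plus soft statistics
        #    from the unlabeled data
        X_all = numpy.concatenate([X_l, X_u])
        for c in range(n_classes):
            w = numpy.concatenate([(y_l == c).astype(float), resp[:, c]])
            means[c] = (w[:, None] * X_all).sum(axis=0) / w.sum()
            stds[c] = numpy.sqrt((w[:, None] * (X_all - means[c]) ** 2).sum(axis=0) / w.sum()) + 1e-3
            priors[c] = w.sum() / len(X_all)
    return means, stds, priors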
X, y = make_blobs(10000, 2, 3, cluster_std=2)
x_min, x_max = X[:,0].min()-2, X[:,0].max()+2
y_min, y_max = X[:,1].min()-2, X[:,1].max()+2
X_train = X[:5000]
y_train = y[:5000]
# Set the majority of samples to unlabeled.
y_train[numpy.random.choice(5000, size=4950, replace=False)] = -1
# Inject noise into the problem
X_train[y_train != -1] += 2.5
X_test = X[5000:]
y_test = y[5000:]
Explanation: Let's first generate some data in the form of blobs that are close together. Generally one tends to have far more unlabeled data than labeled data, so let's say that a person only has 50 samples of labeled training data and 4950 unlabeled samples. In pomegranate, a sample can be specified as lacking a label by providing the integer -1 as the label, just like in scikit-learn. Let's also say that there is a bit of bias in the labeled samples to inject some noise into the problem, as otherwise Gaussian blobs are trivially modeled with even a few samples.
End of explanation
plt.figure(figsize=(8, 8))
plt.scatter(X_train[y_train == -1, 0], X_train[y_train == -1, 1], color='0.6')
plt.scatter(X_train[y_train == 0, 0], X_train[y_train == 0, 1], color='c')
plt.scatter(X_train[y_train == 1, 0], X_train[y_train == 1, 1], color='m')
plt.scatter(X_train[y_train == 2, 0], X_train[y_train == 2, 1], color='r')
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.show()
Explanation: Now let's take a look at the data when we plot it.
End of explanation
model_a = NaiveBayes.from_samples(NormalDistribution, X_train[y_train != -1], y_train[y_train != -1])
print("Supervised Learning Accuracy: {}".format((model_a.predict(X_test) == y_test).mean()))
model_b = NaiveBayes.from_samples(NormalDistribution, X_train, y_train)
print("Semisupervised Learning Accuracy: {}".format((model_b.predict(X_test) == y_test).mean()))
Explanation: The clusters of unlabeled data seem clear to us, and it doesn't seem like the labeled data is perfectly faithful to these clusters. This can typically happen in a semi-supervised setting as well, as the data that is labeled is sometimes biased, either because it was chosen as it was easy to label, or because it was chosen to be labeled in a biased manner.
Now let's try fitting a simple naive Bayes classifier to this data and compare the results when using only the labeled data to when using both the labeled and unlabeled data together.
End of explanation
def plot_contour(X, y, Z):
plt.scatter(X[y == -1, 0], X[y == -1, 1], color='0.2', alpha=0.5, s=20)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='c', s=20)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='m', s=20)
plt.scatter(X[y == 2, 0], X[y == 2, 1], color='r', s=20)
plt.contour(xx, yy, Z)
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
xx, yy = numpy.meshgrid(numpy.arange(x_min, x_max, 0.1), numpy.arange(y_min, y_max, 0.1))
Z1 = model_a.predict(numpy.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
Z2 = model_b.predict(numpy.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.figure(figsize=(16, 16))
plt.subplot(221)
plt.title("Training Data, Supervised Boundaries", fontsize=16)
plot_contour(X_train, y_train, Z1)
plt.subplot(223)
plt.title("Training Data, Semi-supervised Boundaries", fontsize=16)
plot_contour(X_train, y_train, Z2)
plt.subplot(222)
plt.title("Test Data, Supervised Boundaries", fontsize=16)
plot_contour(X_test, y_test, Z1)
plt.subplot(224)
plt.title("Test Data, Semi-supervised Boundaries", fontsize=16)
plot_contour(X_test, y_test, Z2)
plt.show()
Explanation: It seems like we get a big bump in test set accuracy when we use semi-supervised learning. Let's visualize the data to get a better sense of what is happening here.
End of explanation
print("Supervised Learning: ")
%timeit NaiveBayes.from_samples(NormalDistribution, X_train[y_train != -1], y_train[y_train != -1])
print()
print("Semi-supervised Learning: ")
%timeit NaiveBayes.from_samples(NormalDistribution, X_train, y_train)
print()
print("Label Propagation (sklearn): ")
%timeit LabelPropagation().fit(X_train, y_train)
Explanation: The contours plot the decision boundaries between the different classes, with the left figures corresponding to the partially labeled training set and the right figures corresponding to the test set. We can see that the boundaries learned using only the labeled data look a bit weird when considering the unlabeled data, particularly in that they don't cleanly separate the cyan cluster from the other two. In addition, it seems like the boundary between the magenta and red clusters is a bit curved in an unrealistic way. We would not expect points that fell around (-18, -7) to actually come from the red class. Training the model in a semi-supervised manner cleaned up both of these concerns by learning better boundaries that are also flatter and more generalizable.
Let's next compare the training times to see how much slower it is to do semi-supervised learning than it is to do simple supervised learning.
End of explanation
X = numpy.empty(shape=(0, 2))
X = numpy.concatenate((X, numpy.random.normal(4, 1, size=(3000, 2)).dot([[-2, 0.5], [2, 0.5]])))
X = numpy.concatenate((X, numpy.random.normal(3, 1, size=(6500, 2)).dot([[-1, 2], [1, 0.8]])))
X = numpy.concatenate((X, numpy.random.normal(7, 1, size=(8000, 2)).dot([[-0.75, 0.8], [0.9, 1.5]])))
X = numpy.concatenate((X, numpy.random.normal(6, 1, size=(2200, 2)).dot([[-1.5, 1.2], [0.6, 1.2]])))
X = numpy.concatenate((X, numpy.random.normal(8, 1, size=(3500, 2)).dot([[-0.2, 0.8], [0.7, 0.8]])))
X = numpy.concatenate((X, numpy.random.normal(9, 1, size=(6500, 2)).dot([[-0.0, 0.8], [0.5, 1.2]])))
x_min, x_max = X[:,0].min()-2, X[:,0].max()+2
y_min, y_max = X[:,1].min()-2, X[:,1].max()+2
y = numpy.concatenate((numpy.zeros(9500), numpy.ones(10200), numpy.ones(10000)*2))
idxs = numpy.arange(29700)
numpy.random.shuffle(idxs)
X = X[idxs]
y = y[idxs]
X_train, X_test = X[:25000], X[25000:]
y_train, y_test = y[:25000], y[25000:]
y_train[numpy.random.choice(25000, size=24920, replace=False)] = -1
plt.scatter(X_train[y_train == -1, 0], X_train[y_train == -1, 1], color='0.6', s=1)
plt.scatter(X_train[y_train == 0, 0], X_train[y_train == 0, 1], color='c', s=10)
plt.scatter(X_train[y_train == 1, 0], X_train[y_train == 1, 1], color='m', s=10)
plt.scatter(X_train[y_train == 2, 0], X_train[y_train == 2, 1], color='r', s=10)
plt.show()
Explanation: It is quite a bit slower to do semi-supervised learning than simple supervised learning in this example. This is expected, as the simple supervised update for naive Bayes is a trivial MLE across each dimension, whereas the semi-supervised case requires EM to run to convergence. However, it is still faster to do semi-supervised learning in this setting to learn a naive Bayes classifier than it is to fit the label propagation estimator from sklearn.
However, though it is widely used, the naive Bayes classifier is still a fairly simple model. One can construct a more complicated model that does not assume feature independence called a Bayes classifier that can also be trained using semi-supervised learning in pretty much the same manner. You can read more about the Bayes classifier in its tutorial in the tutorial folder. Let's move on to more complicated data and try to fit a mixture model Bayes classifier, comparing the performance between using only labeled data and using all data.
First let's generate some more complicated, noisier data.
End of explanation
d1 = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X_train[y_train == 0], max_iterations=1)
d2 = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X_train[y_train == 1], max_iterations=1)
d3 = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X_train[y_train == 2], max_iterations=1)
model_a = BayesClassifier([d1, d2, d3]).fit(X_train[y_train != -1], y_train[y_train != -1])
print("Supervised Learning Accuracy: {}".format((model_a.predict(X_test) == y_test).mean()))
d1 = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X_train[y_train == 0], max_iterations=1)
d2 = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X_train[y_train == 1], max_iterations=1)
d3 = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X_train[y_train == 2], max_iterations=1)
model_b = BayesClassifier([d1, d2, d3])
model_b.fit(X_train, y_train)
print("Semisupervised Learning Accuracy: {}".format((model_b.predict(X_test) == y_test).mean()))
Explanation: Now let's take a look at the accuracies that we get when training a model using just the labeled examples versus all of the examples in a semi-supervised manner.
End of explanation
xx, yy = numpy.meshgrid(numpy.arange(x_min, x_max, 0.1), numpy.arange(y_min, y_max, 0.1))
Z1 = model_a.predict(numpy.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
Z2 = model_b.predict(numpy.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.figure(figsize=(16, 16))
plt.subplot(221)
plt.title("Training Data, Supervised Boundaries", fontsize=16)
plot_contour(X_train, y_train, Z1)
plt.subplot(223)
plt.title("Training Data, Semi-supervised Boundaries", fontsize=16)
plot_contour(X_train, y_train, Z2)
plt.subplot(222)
plt.title("Test Data, Supervised Boundaries", fontsize=16)
plot_contour(X_test, y_test, Z1)
plt.subplot(224)
plt.title("Test Data, Semi-supervised Boundaries", fontsize=16)
plot_contour(X_test, y_test, Z2)
plt.show()
Explanation: As expected, the semi-supervised method performs better. Let's visualize the landscape in the same manner as before in order to see why this is the case.
End of explanation
d1 = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X_train[y_train == 0], max_iterations=1)
d2 = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X_train[y_train == 1], max_iterations=1)
d3 = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X_train[y_train == 2], max_iterations=1)
model = BayesClassifier([d1, d2, d3])
print("Supervised Learning: ")
%timeit model.fit(X_train[y_train != -1], y_train[y_train != -1])
print()
d1 = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X_train[y_train == 0], max_iterations=1)
d2 = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X_train[y_train == 1], max_iterations=1)
d3 = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X_train[y_train == 2], max_iterations=1)
model = BayesClassifier([d1, d2, d3])
print("Semi-supervised Learning: ")
%timeit model.fit(X_train, y_train)
print()
print("Label Propagation (sklearn): ")
%timeit LabelPropagation().fit(X_train, y_train)
Explanation: Immediately, one would notice that the decision boundaries when using semi-supervised learning are smoother than those when using only a few samples. This can be explained mostly because having more data can generally lead to smoother decision boundaries as the model does not overfit to spurious examples in the dataset. It appears that the majority of the correctly classified samples come from having a more accurate decision boundary for the magenta samples in the left cluster. When using only the labeled samples many of the magenta samples in this region get classified incorrectly as cyan samples. In contrast, when using all of the data these points are all classified correctly.
Lastly, let's take a look at a time comparison in this more complicated example.
End of explanation |
951 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Media
Introduction
skrf supports some basic circuit simulation based on transmission line models. Network creation is accomplished through methods of the Media class, which represents a transmission line object for a given medium. Once constructed, a Media object contains the necessary properties, such as propagation constant and characteristic impedance, that are needed to generate microwave networks.
This tutorial illustrates how to create Networks using several different Media objects. The basic usage is,
Step1: To create a transmission line of 100um
Step2: More detailed examples illustrating how to create various kinds of Media
objects are given below. A full list of supported media can be found in the Media API page. The network creation and connection syntax of skrf is cumbersome if you need to do complex circuit design. skrf's synthesis capabilities lend themselves more to scripted applications such as calibration, optimization or batch processing.
Media Object Basics
Two arguments are common to all media constructors
frequency (required)
z0 (optional)
frequency is a Frequency object, and z0 is the port impedance. z0 is only needed if the port impedance is different from the media's characteristic impedance. Here is an example of how to initialize a coplanar waveguide [0] media. The instance has a 10um center conductor, gap of 5um, and substrate with relative permittivity of 10.6,
Step3: For the purpose of microwave network analysis, the defining properties of a (single-moded) transmission line are its characteristic impedance and propagation constant. These properties return complex numpy.ndarray's. A port impedance is also needed when different networks are connected.
The characteristic impedance is given by a Z0 (capital Z)
Step4: The port impedance is given by z0 (lower z), which we set to 1 just to illustrate how this works. The port impedance is used to compute impedance mismatches if circuits of different port impedance are connected.
Step5: The propagation constant is given by gamma
Step6: Let's take a look at some other Media objects
Slab of Si in Freespace
A plane-wave in freespace from 10-20GHz.
Step7: Simulate a 1cm slab of Si in half-space,
Step8: Rectangular Waveguide
a WR-10 Rectangular Waveguide
Step9: The z0 argument in the Rectangular Waveguide constructor is used
to force a specific port impedance. This is commonly used to match
the port impedance to what a VNA stores in a touchstone file. Let's compare the propagation constant in waveguide to that of freespace,
Step10: Because the wave quantities are dynamic, they change when the attributes
of the media change. To illustrate, plot the propagation constant of the cpw for various values of substrate permittivity,
Step11: Network Synthesis
Networks are created through methods of a Media object. To create a 1-port network for a rectangular waveguide short,
Step12: Or to create a $90^{\circ}$ section of cpw line,
Step13: Building Circuits
By connecting a series of simple circuits, more complex circuits can be
made. To build the $90^{\circ}$ delay short, in the
rectangular waveguide media defined above.
Step14: When Networks with more than 2 ports need to be connected together, use
rf.connect(). To create a two-port network for a shunted delayed open, you can create an ideal 3-way splitter (a 'tee') and connect the delayed open to one of its ports,
Step15: Adding networks in shunt is pretty common, so there is a Media.shunt() function to do this,
Step16: If a specific circuit is created frequently, it may make sense to
use a function to create the circuit. This can be done most quickly using lambda
Step17: A more useful example may be to create a function for a shunt-stub tuner,
that will work for any media object
Step18: This approach lends itself to design optimization.
Design Optimization
The abilities of scipy's optimizers can be used to automate network design. In this example, skrf is used to automate the single stub impedance matching network design. First, we create a 'cost' function which returns something we want to minimize, such as the reflection coefficient magnitude at band center. Then, one of scipy's minimization algorithms is used to determine the optimal parameters of the stub lengths to minimize this cost. | Python Code:
%matplotlib inline
import skrf as rf
rf.stylely()
from skrf import Frequency
from skrf.media import CPW
freq = Frequency(75,110,101,'ghz')
cpw = CPW(freq, w=10e-6, s=5e-6, ep_r=10.6)
cpw
Explanation: Media
Introduction
skrf supports some basic circuit simulation based on transmission line models. Network creation is accomplished through methods of the Media class, which represents a transmission line object for a given medium. Once constructed, a Media object contains the necessary properties, such as propagation constant and characteristic impedance, that are needed to generate microwave networks.
This tutorial illustrates how to create Networks using several different Media objects. The basic usage is,
End of explanation
cpw.line(100*1e-6, name = '100um line')
Explanation: To create a transmission line of 100um
End of explanation
freq = Frequency(75,110,101,'ghz')
cpw = CPW(freq, w=10e-6, s=5e-6, ep_r=10.6, z0 =1)
cpw
Explanation: More detailed examples illustrating how to create various kinds of Media
objects are given below. A full list of supported media can be found in the Media API page. The network creation and connection syntax of skrf is cumbersome if you need to do complex circuit design. skrf's synthesis capabilities lend themselves more to scripted applications such as calibration, optimization or batch processing.
Media Object Basics
Two arguments are common to all media constructors
frequency (required)
z0 (optional)
frequency is a Frequency object, and z0 is the port impedance. z0 is only needed if the port impedance is different from the media's characteristic impedance. Here is an example of how to initialize a coplanar waveguide [0] media. The instance has a 10um center conductor, gap of 5um, and substrate with relative permittivity of 10.6,
End of explanation
cpw.Z0[:3]
Explanation: For the purpose of microwave network analysis, the defining properties of a (single-moded) transmission line are its characteristic impedance and propagation constant. These properties return complex numpy.ndarray's. A port impedance is also needed when different networks are connected.
The characteristic impedance is given by a Z0 (capital Z)
End of explanation
cpw.z0[:3]
Explanation: The port impedance is given by z0 (lower z), which we set to 1 just to illustrate how this works. The port impedance is used to compute impedance mismatches if circuits of different port impedance are connected.
End of explanation
cpw.gamma[:3]
Explanation: The propagation constant is given by gamma
End of explanation
from skrf.media import Freespace
freq = Frequency(10,20,101,'ghz')
air = Freespace(freq)
air
air.z0[:2] # 377ohm baby!
# plane wave in Si
si = Freespace(freq, ep_r = 11.2)
si.z0[:3] # ~110ohm
Explanation: Let's take a look at some other Media objects
Slab of Si in Freespace
A plane-wave in freespace from 10-20GHz.
End of explanation
slab = air.thru() ** si.line(1, 'cm') ** air.thru()
slab.plot_s_db(n=0)
Explanation: Simulate a 1cm slab of Si in half-space,
End of explanation
from skrf.media import RectangularWaveguide
freq = Frequency(75,110,101,'ghz')
wg = RectangularWaveguide(freq, a=100*rf.mil, z0=50) # see note below about z0
wg
Explanation: Rectangular Waveguide
a WR-10 Rectangular Waveguide
End of explanation
air = Freespace(freq)
from matplotlib import pyplot as plt
air.plot(air.gamma.imag, label='Freespace')
wg.plot(wg.gamma.imag, label='WR10')
plt.ylabel('Propagation Constant (rad/m)')
plt.legend()
Explanation: The z0 argument in the Rectangular Waveguide constructor is used
to force a specific port impedance. This is commonly used to match
the port impedance to what a VNA stores in a touchstone file. Let's compare the propagation constant in waveguide to that of freespace,
End of explanation
for ep_r in [9,10,11]:
cpw.ep_r = ep_r
cpw.frequency.plot(cpw.beta, label='er=%.1f'%ep_r)
plt.xlabel('Frequency [GHz]')
plt.ylabel('Propagation Constant [rad/m]')
plt.legend()
Explanation: Because the wave quantities are dynamic, they change when the attributes
of the media change. To illustrate, plot the propagation constant of the cpw for various values of substrate permittivity,
End of explanation
wg.short(name = 'short')
Explanation: Network Synthesis
Networks are created through methods of a Media object. To create a 1-port network for a rectangular waveguide short,
End of explanation
cpw.line(d=90,unit='deg', name='line')
Explanation: Or to create a $90^{\circ}$ section of cpw line,
End of explanation
delay_short = wg.line(d=90,unit='deg') ** wg.short()
delay_short.name = 'delay short'
delay_short
Explanation: Building Circuits
By connecting a series of simple circuits, more complex circuits can be
made. To build the $90^{\circ}$ delay short, in the
rectangular waveguide media defined above.
End of explanation
tee = cpw.tee()
delay_open = cpw.delay_open(40,'deg')
shunt_open = rf.connect(tee,1,delay_open,0)
Explanation: When Networks with more than 2 ports need to be connected together, use
rf.connect(). To create a two-port network for a shunted delayed open, you can create an ideal 3-way splitter (a 'tee') and connect the delayed open to one of its ports,
End of explanation
cpw.shunt(delay_open)
Explanation: Adding networks in shunt is pretty common, so there is a Media.shunt() function to do this,
End of explanation
delay_short = lambda d: wg.line(d,'deg')**wg.short()
delay_short(90)
Explanation: If a specific circuit is created frequently, it may make sense to
use a function to create the circuit. This can be done most quickly using lambda
End of explanation
def shunt_stub(med, d0, d1):
return med.line(d0,'deg')**med.shunt_delay_open(d1,'deg')
shunt_stub(cpw,10,90)
Explanation: A more useful example may be to create a function for a shunt-stub tuner,
that will work for any media object
End of explanation
from scipy.optimize import fmin
# the load we are trying to match
load = cpw.load(.2+.2j)
# single stub circuit generator function
def shunt_stub(med, d0, d1):
return med.line(d0,'deg')**med.shunt_delay_open(d1,'deg')
# define the cost function we want to minimize (this uses sloppy namespace)
def cost(d):
# prevent negative length lines, returning high cost
if d[0] <0 or d[1] <0:
return 1e3
return (shunt_stub(cpw,d[0],d[1]) ** load)[100].s_mag.squeeze()
# initial guess of optimal delay lengths in degrees
d0 = 120,40 # initial guess
#determine the optimal delays
d_opt = fmin(cost,(120,40))
d_opt
Explanation: This approach lends itself to design optimization.
Design Optimization
The abilities of scipy's optimizers can be used to automate network design. In this example, skrf is used to automate the single stub impedance matching network design. First, we create a 'cost' function which returns something we want to minimize, such as the reflection coefficient magnitude at band center. Then, one of scipy's minimization algorithms is used to determine the optimal parameters of the stub lengths to minimize this cost.
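As a possible follow-up (a sketch added here, not part of the original tutorial), the optimized lengths returned by fmin can be plugged back into the stub generator to verify the match; shunt_stub, cpw, load and d_opt are assumed from the code above.
matched = shunt_stub(cpw, d_opt[0], d_opt[1]) ** load  # cascade the tuned stub network with the load
print(matched[100].s_mag.squeeze())                    # reflection magnitude near band center should now be small
matched.plot_s_db(m=0, n=0)                            # |S11| in dB across the band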
End of explanation |
952 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
RESIT
Import and settings
In this example, we need to import numpy, pandas, and graphviz in addition to lingam.
Step1: Test data
First, we load a dataset with five variables, x0 through x4, whose nonlinear causal structure is given by the adjacency matrix defined below.
Step2: Causal Discovery
To run causal discovery, we create a RESIT object and call the fit method.
Step3: Using the causal_order_ property, we can see the causal ordering estimated by the causal discovery.
Step4: Also, using the adjacency_matrix_ property, we can see the adjacency matrix estimated by the causal discovery.
Step5: We can draw a causal graph with a utility function.
Step6: Bootstrapping
We call the bootstrap() method instead of fit(). Here, the second argument specifies the number of bootstrap samples.
Step7: Causal Directions
Since a BootstrapResult object is returned, we can get the ranking of the causal directions extracted by the get_causal_direction_counts() method. In the following sample code, the n_directions option is limited to the causal directions of the top 8 rankings, and the min_causal_effect option is limited to causal directions with a coefficient of 0.01 or more.
Step8: We can check the result by utility function.
Step9: Directed Acyclic Graphs
Also, using the get_directed_acyclic_graph_counts() method, we can get the ranking of the DAGs extracted. In the following sample code, the n_dags option is limited to the DAGs of the top 3 rankings, and the min_causal_effect option is limited to causal directions with a coefficient of 0.01 or more.
Step10: We can check the result by utility function.
Step11: Probability
Using the get_probabilities() method, we can get the bootstrap probability of each causal direction.
Step12: Bootstrap Probability of Path
Using the get_paths() method, we can explore all paths from any variable to any variable and calculate the bootstrap probability for each path. The path will be output as an array of variable indices. For example, the array [0, 1, 3] shows the path from variable X0 through variable X1 to variable X3. | Python Code:
import numpy as np
import pandas as pd
import graphviz
import lingam
from lingam.utils import print_causal_directions, print_dagc, make_dot
import warnings
warnings.filterwarnings('ignore')
print([np.__version__, pd.__version__, graphviz.__version__, lingam.__version__])
np.set_printoptions(precision=3, suppress=True)
Explanation: RESIT
Import and settings
In this example, we need to import numpy, pandas, and graphviz in addition to lingam.
End of explanation
X = pd.read_csv('nonlinear_data.csv')
m = np.array([
[0, 0, 0, 0, 0],
[1, 0, 0, 0, 0],
[1, 1, 0, 0, 0],
[0, 1, 1, 0, 0],
[0, 0, 0, 1, 0]])
dot = make_dot(m)
# Save pdf
dot.render('dag')
# Save png
dot.format = 'png'
dot.render('dag')
dot
Explanation: Test data
First, we load a dataset with five variables, x0 through x4, whose nonlinear causal structure is given by the adjacency matrix defined below.
End of explanation
from sklearn.ensemble import RandomForestRegressor
reg = RandomForestRegressor(max_depth=4, random_state=0)
model = lingam.RESIT(regressor=reg)
model.fit(X)
Explanation: Causal Discovery
To run causal discovery, we create a RESIT object and call the fit method.
End of explanation
model.causal_order_
Explanation: Using the causal_order_ property, we can see the causal ordering estimated by the causal discovery.
End of explanation
model.adjacency_matrix_
Explanation: Also, using the adjacency_matrix_ property, we can see the adjacency matrix estimated by the causal discovery.
End of explanation
make_dot(model.adjacency_matrix_)
Explanation: We can draw a causal graph with a utility function.
End of explanation
import warnings
warnings.filterwarnings('ignore', category=UserWarning)
n_sampling = 100
model = lingam.RESIT(regressor=reg)
result = model.bootstrap(X, n_sampling=n_sampling)
Explanation: Bootstrapping
We call the bootstrap() method instead of fit(). Here, the second argument specifies the number of bootstrap samples.
End of explanation
cdc = result.get_causal_direction_counts(n_directions=8, min_causal_effect=0.01, split_by_causal_effect_sign=True)
Explanation: Causal Directions
Since a BootstrapResult object is returned, we can get the ranking of the causal directions extracted by the get_causal_direction_counts() method. In the following sample code, the n_directions option is limited to the causal directions of the top 8 rankings, and the min_causal_effect option is limited to causal directions with a coefficient of 0.01 or more.
End of explanation
print_causal_directions(cdc, n_sampling)
Explanation: We can check the result by utility function.
End of explanation
dagc = result.get_directed_acyclic_graph_counts(n_dags=3, min_causal_effect=0.01, split_by_causal_effect_sign=True)
Explanation: Directed Acyclic Graphs
Also, using the get_directed_acyclic_graph_counts() method, we can get the ranking of the DAGs extracted. In the following sample code, the n_dags option is limited to the DAGs of the top 3 rankings, and the min_causal_effect option is limited to causal directions with a coefficient of 0.01 or more.
End of explanation
print_dagc(dagc, n_sampling)
Explanation: We can check the result by utility function.
End of explanation
prob = result.get_probabilities(min_causal_effect=0.01)
print(prob)
Explanation: Probability
Using the get_probabilities() method, we can get the bootstrap probability of each causal direction.
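As an optional extra step (an added sketch, not part of the original notebook), the probability matrix can be passed to the same make_dot utility used earlier so the edges are labelled with their bootstrap probabilities; prob and make_dot are assumed from the code above.
make_dot(prob)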
End of explanation
from_index = 0 # index of x0
to_index = 3 # index of x3
pd.DataFrame(result.get_paths(from_index, to_index))
Explanation: Bootstrap Probability of Path
Using the get_paths() method, we can explore all paths from any variable to any variable and calculate the bootstrap probability for each path. The path will be output as an array of variable indices. For example, the array [0, 1, 3] shows the path from variable X0 through variable X1 to variable X3.
End of explanation |
953 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interpolate Isochrones for Praesepe
Instead of interpolating boundary condition tables and computing a new set of models, we can interpolate between two metallicities to calculate new isochrones for Praesepe at metallicities between [Fe/H] = 0.00 and 0.20.
Load required libraries.
Step1: Read in isochrone files.
Step2: Check to confirm that isochrones are equal in length.
Step3: This is related to the fact that it was difficult to get models to converge at high metallicity below 0.26 Msun. Trimming the isochrones to be the same length.
Step4: Again, check to confirm lengths are equal.
Step5: Now confirm that the mass resolution is equal for both isochrones.
Step6: It appears that the masses at each point along the isochrones are equal. Now we may interpolate using a simple linear interpolation. Start by defining an interpolation routine.
Step7: and then interpolating at two intermediate points.
Step8: These should now be intermediate between the original two isochrones. Note that we've only interpolated fundamental properties, and not photometric magnitudes. These can be added separately, as interpolating magnitudes in the isochrone may not be as accurate as interpolating bolometric corrections from the original data tables.
Step9: The tracks shift as they should, so the interpolation worked! Now, we can save the new isochrones and compute colors for them separately. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Interpolate Isochrones for Praesepe
Instead of interpolating boundary condition tables and computing a new set of models, we can interpolate between two metallicities to calculate new isochrones for Praesepe at metallicities between [Fe/H] = 0.00 and 0.20.
Load required libraries.
End of explanation
iso_zp00 = np.genfromtxt('data/dmestar_00600.0myr_z+0.00_a+0.00_marcs.iso')
iso_zp20 = np.genfromtxt('data/dmestar_00600.0myr_z+0.20_a+0.00_marcs.iso')
Explanation: Read in isochrone files.
End of explanation
len(iso_zp00) == len(iso_zp20)
Explanation: Check to confirm that isochrones are equal in length.
End of explanation
def trimIsochrone(isochrone):
bools = [0.26 <= x[0] <= 1.61 for x in isochrone]
return np.compress(bools, isochrone, axis=0)
iso_zp00_trim = trimIsochrone(iso_zp00)
iso_zp20_trim = trimIsochrone(iso_zp20)
Explanation: This is related to the fact that it was difficult to get models to converge at high metallicity below 0.26 Msun. Trimming the isochrones to be the same length.
End of explanation
len(iso_zp00_trim) == len(iso_zp20_trim)
Explanation: Again, check to confirm lengths are equal.
End of explanation
for i, line in enumerate(iso_zp00_trim):
if line[0] != iso_zp20_trim[i, 0]:
raise ValueError('Masses are not equal between the two isochrones in row {:4.0f}.'.format(i))
break
else:
pass
Explanation: Now confirm that the mass resolution is equal for both isochrones.
End of explanation
def isoLinInterp(FeH):
return iso_zp00_trim[:, 0:6] + (iso_zp20_trim[:, 0:6] - iso_zp00_trim[:, 0:6])*(FeH - 0.0)/0.20
Explanation: It appears that the masses at each point along the isochrones are equal. Now we may interpolate using a simple linear interpolation. Start by defining an interpolation routine.
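Written out (a restatement of the isoLinInterp routine above, added for clarity), the interpolation applied to each column $P$ is
$$P(\mathrm{[Fe/H]}) = P_{0.00} + \left(P_{0.20} - P_{0.00}\right)\,\frac{\mathrm{[Fe/H]} - 0.00}{0.20}$$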
End of explanation
iso_zp10 = isoLinInterp(0.10)
iso_zp15 = isoLinInterp(0.15)
Explanation: and then interpolating at two intermediate points.
End of explanation
fig, ax = plt.subplots(1, 3, figsize=(12., 8.), sharex=True)
ax[0].set_xlim(8000., 3000.)
ax[0].plot(10**iso_zp00_trim[:,1], iso_zp00_trim[:,3], '-', lw=2, c='#555555')
ax[0].plot(10**iso_zp20_trim[:,1], iso_zp20_trim[:,3], '-', lw=2, c='#222222')
ax[0].plot(10**iso_zp10[:,1], iso_zp10[:,3], '-', lw=2, c='#b22222')
ax[0].plot(10**iso_zp15[:,1], iso_zp15[:,3], '-', lw=2, c='#0094b2')
ax[1].plot(10**iso_zp00_trim[:,1], iso_zp00_trim[:,2], '-', lw=2, c='#555555')
ax[1].plot(10**iso_zp20_trim[:,1], iso_zp20_trim[:,2], '-', lw=2, c='#222222')
ax[1].plot(10**iso_zp10[:,1], iso_zp10[:,2], '-', lw=2, c='#b22222')
ax[1].plot(10**iso_zp15[:,1], iso_zp15[:,2], '-', lw=2, c='#0094b2')
ax[2].plot(10**iso_zp00_trim[:,1], iso_zp00_trim[:,4], '-', lw=2, c='#555555')
ax[2].plot(10**iso_zp20_trim[:,1], iso_zp20_trim[:,4], '-', lw=2, c='#222222')
ax[2].plot(10**iso_zp10[:,1], iso_zp10[:,4], '-', lw=2, c='#b22222')
ax[2].plot(10**iso_zp15[:,1], iso_zp15[:,4], '-', lw=2, c='#0094b2')
Explanation: These should now be intermediate between the original two isochrones. Note that we've only interpolated fundamental properties, and not photometric magnitudes. These can be added separately, as interpolating magnitudes in the isochrone may not be as accurate as interpolating bolometric corrections from the original data tables.
End of explanation
np.savetxt('data/dmestar_00600.0myr_z+0.10_a+0.00_marcs.iso', iso_zp10, fmt='%14.8f')
np.savetxt('data/dmestar_00600.0myr_z+0.15_a+0.00_marcs.iso', iso_zp15, fmt='%14.8f')
Explanation: The tracks shift as they should, so the interpolation worked! Now, we can save the new isochrones and compute colors for them separately.
End of explanation |
954 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Training
We apply a random forest model to the training set and evaluate how well the model performs.
Step1: We use the predict() method on the validation and test sets and obtain the corresponding errors.
Step2: Output the errors | Python Code:
%pylab inline
# Import the training, validation, and test sets
import pandas as pd
samtrain = pd.read_csv('samtrain.csv')
samval = pd.read_csv('samval.csv')
samtest = pd.read_csv('samtest.csv')
# Use sklearn's random forest model, from the module sklearn.ensemble.RandomForestClassifier
# Here we need to convert the label column ('activity') to an integer representation,
# because Python's RandomForest package requires this format.
# The mapping is as follows:
# laying = 1, sitting = 2, standing = 3, walk = 4, walkup = 5, walkdown = 6
# The code for this is in the library randomforest.py.
import randomforests as rf
samtrain = rf.remap_col(samtrain,'activity')
samval = rf.remap_col(samval,'activity')
samtest = rf.remap_col(samtest,'activity')
import sklearn.ensemble as sk
# OOB validation uses the samples not drawn for training in each round to estimate the error; these are called out-of-bag samples
rfc = sk.RandomForestClassifier(n_estimators=500, oob_score=True)
train_data = samtrain[samtrain.columns[1:-2]]
train_truth = samtrain['activity']
model = rfc.fit(train_data, train_truth)
# Use OOB (out of bag) samples to evaluate the model's accuracy.
rfc.oob_score_
# Use the "feature importance" scores to look at the 10 most important features
fi = enumerate(rfc.feature_importances_)
cols = samtrain.columns
[(value,cols[i]) for (i,value) in fi if value > 0.04]
## The threshold 0.04 was chosen empirically; it happens to give the 10 best features.
## Changing this value yields a different number of features.
## The line below is a backup of the command in case you change the parameters and want to get back to this version.
## [(value,cols[i]) for (i,value) in fi if value > 0.04]
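# (Added sketch) The same information sorted by importance, most important first;
# note that the enumerator fi above has already been consumed, so we re-enumerate.
sorted(((value, cols[i]) for i, value in enumerate(rfc.feature_importances_)), reverse=True)[:10]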
Explanation: Training
We apply a random forest model to the training set and evaluate how well the model performs.
End of explanation
# Because the pandas data frame added a fake unknown column at column 0, we start from column 1.
# We are not using the subject column; activity (i.e. the target) is in the last columns, hence -2, i.e. dropping the last 2 cols
val_data = samval[samval.columns[1:-2]]
val_truth = samval['activity']
val_pred = rfc.predict(val_data)
test_data = samtest[samtest.columns[1:-2]]
test_truth = samtest['activity']
test_pred = rfc.predict(test_data)
Explanation: We use the predict() method on the validation and test sets and obtain the corresponding errors.
End of explanation
print("mean accuracy score for validation set = %f" %(rfc.score(val_data, val_truth)))
print("mean accuracy score for test set = %f" %(rfc.score(test_data, test_truth)))
# Use a confusion matrix to see which activities were misclassified.
# See [5] for details
import sklearn.metrics as skm
test_cm = skm.confusion_matrix(test_truth,test_pred)
test_cm
# Visualize the confusion matrix
import pylab as pl
pl.matshow(test_cm)
pl.title('Confusion matrix for test data')
pl.colorbar()
pl.show()
# Compute some other metrics for evaluating prediction performance
# See [6],[7],[8],[9] for details
# Accuracy: the proportion of correctly classified samples
print("Accuracy = %f" %(skm.accuracy_score(test_truth,test_pred)))
# Precision: tp/(tp+fp)
print("Precision = %f" %(skm.precision_score(test_truth,test_pred)))
# Recall: tp/(tp+fn)
print("Recall = %f" %(skm.recall_score(test_truth,test_pred)))
# F1 Score: F1 = 2 * (precision * recall) / (precision + recall)
# The F1 score can be interpreted as a weighted average of the precision and recall.
print("F1 score = %f" %(skm.f1_score(test_truth,test_pred)))
Explanation: Output the errors
End of explanation |
955 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: Visualizing eclipse data
Let us find some interesting data to generate elements from, before we consider how to customize them. Here is a dataset containing information about all the eclipses of the 21st century
Step2: Here we have the date of each eclipse, what time of day the eclipse reached its peak in both local time and in UTC, the type of eclipse, its magnitude (fraction of the Sun's diameter obscured by the Moon) and the position of the peak in latitude and longitude.
Let's see what happens if we pass this dataframe to the Curve element
Step3: We see that, by default, the first dataframe column becomes the key dimension (corresponding to the x-axis) and the second column becomes the value dimension (corresponding to the y-axis). There is clearly structure in this data, but the plot is too highly compressed in the x direction to see much detail, and you may not like the particular color or line style. So we can start customizing the appearance of this curve using the HoloViews options system.
Types of option
If we want to change the appearance of what we can already see in the plot, we're no longer focusing on the data and metadata stored in the elements, but on details of the presentation. Details specific to the final plot are handled by the separate "options" system, not the element objects. HoloViews allows you to set three types of options
Step4: The top line uses a special IPython/Jupyter syntax called the %%opts cell magic to specify the width plot option for all Curve objects in this cell. %%opts accepts a simple specification where we pass the width=900 keyword argument to Curve as a plot option (denoted by the square brackets).
Of course, there are other ways of applying options in HoloViews that do not require this IPython-specific syntax, but for this tutorial, we will only be covering the more-convenient magic-based syntax. You can read about the alternative approaches in the user guide.
Step5: Aside
Step6: Style options
The plot options earlier instructed HoloViews to build a plot 900 pixels wide, when rendered with the Bokeh plotting extension. Now let's specify that the Bokeh glyph should be 'red' and slightly thicker, which is information passed on directly to Bokeh (making it a style option)
Step7: Note how the plot options applied above to hour_curve are remembered! The %%opts magic is used to customize the object displayed as output for a particular code cell
Step8: Switching to matplotlib
Let us now view our curve with matplotlib using the %%output cell magic
Step9: All our options are gone! This is because the options are associated with the corresponding plotting extension---if you switch back to 'bokeh', the options will be applicable again. In general, options have to be specific to backends; e.g. the line_width style option accepted by Bokeh is called linewidth in matplotlib
Step10: The %output line magic
In the two cells above we repeated %%output backend='matplotlib' to use matplotlib to render those two cells. Instead of repeating ourselves with the cell magic, we can use a "line magic" (similar syntax to the cell magic but with one %) to set things globally. Let us switch to matplotlib with a line magic and specify that we want SVG output
Step11: Unlike the cell magic, the line magic doesn't need to be followed by any expression and can be used anywhere in the notebook. Both the %output and %opts line magics set things globally so it is recommended you declare them at the top of your notebooks. Now let us look at the SVG matplotlib output we requested
Step12: Switching back to bokeh
In previous releases of HoloViews, it was typical to switch to matplotlib in order to export to PNG or SVG, because Bokeh did not support these file formats. Since Bokeh 0.12.6 we can now easily use HoloViews to export Bokeh plots to a PNG file, as we will now demonstrate
Step13: By passing fig='png' and a filename='eclipses' to %output we can both render to PNG and save the output to file
Step14: Here we have requested PNG format using fig='png' and specified that the output is written to eclipses.png using filename='eclipses'
Step15: Bokeh also has some SVG support, but it is not yet exposed in HoloViews.
Using group and label
The above examples showed how to customize by type, but HoloViews offers multiple additional levels of customization that should be sufficient to cover any purpose. For our last example, let us split our eclipse dataframe based on the type ('Total' or 'Partial')
Step16: We'll now introduce the Spikes element, and display it with a large width and without a y-axis. We can specify those options for all following Spikes elements using the %opts line magic
Step17: Now let us look at the hour of day at which these two types of eclipses occur (local time) by overlaying the two types of eclipse as Spikes elements. The problem then is finding a way to visually distinguish the spikes corresponding to the different eclipse types.
We can do this using the element group and label introduced in the introduction to elements section as follows
Step18: Using these options to distinguish between the two categories of data with the same type, you can now see clear patterns of grouping between the two types, with many more total eclipses around noon in local time. Similar techniques can be used to provide arbitrarily specific customizations when needed. | Python Code:
import pandas as pd
import holoviews as hv
hv.extension('bokeh', 'matplotlib')
Explanation: <a href='http://www.holoviews.org'><img src="assets/hv+bk.png" alt="HV+BK logos" width="40%;" align="left"/></a>
<div style="float:right;"><h2>02. Customizing Visual Appearance</h2></div>
Section 01 focused on specifying elements and simple collections of them. This section explains how the visual appearance can be adjusted to bring out the most salient aspects of your data, or just to make the style match the overall theme of your document.
Preliminaries
In the introduction to elements, hv.extension('bokeh') was used at the start to load and activate the bokeh plotting extension. In this notebook, we will also briefly use matplotlib which we will load, but not yet activate, by listing it second:
End of explanation
eclipses = pd.read_csv('../data/eclipses_21C.csv', parse_dates=['date'])
eclipses.head()
Explanation: Visualizing eclipse data
Let us find some interesting data to generate elements from, before we consider how to customize them. Here is a dataset containing information about all the eclipses of the 21st century:
End of explanation
hv.Curve(eclipses)
Explanation: Here we have the date of each eclipse, what time of day the eclipse reached its peak in both local time and in UTC, the type of eclipse, its magnitude (fraction of the Sun's diameter obscured by the Moon) and the position of the peak in latitude and longitude.
Let's see what happens if we pass this dataframe to the Curve element:
End of explanation
%%opts Curve [width=900]
hour_curve = hv.Curve(eclipses).redim.label(hour_local='Hour (local time)', date='Date (21st century)')
hour_curve
Explanation: We see that, by default, the first dataframe column becomes the key dimension (corresponding to the x-axis) and the second column becomes the value dimension (corresponding to the y-axis). There is clearly structure in this data, but the plot is too highly compressed in the x direction to see much detail, and you may not like the particular color or line style. So we can start customizing the appearance of this curve using the HoloViews options system.
Types of option
If we want to change the appearance of what we can already see in the plot, we're no longer focusing on the data and metadata stored in the elements, but on details of the presentation. Details specific to the final plot are handled by the separate "options" system, not the element objects. HoloViews allows you to set three types of options:
plot options: Options that tell HoloViews how to construct the plot.
style options: Options that tell the underlying plotting extension (Bokeh, matplotlib, etc.) how to style the plot
normalization options: Options that tell HoloViews how to normalize the various elements in the plot against each other (not covered in this tutorial)
Plot options
We noted that the data is too compressed in the x direction. Let us fix that by specifying the width plot option:
End of explanation
# Exercise: Try setting the height plot option of the Curve above.
# Hint: the magic supports tab completion when the cursor is in the square brackets!
# Exercise: Try enabling the boolean show_grid plot option for the curve above
# Exercise: Try setting the x-axis label rotation (in degrees) with the xrotation plot option
Explanation: The top line uses a special IPython/Jupyter syntax called the %%opts cell magic to specify the width plot option for all Curve objects in this cell. %%opts accepts a simple specification where we pass the width=900 keyword argument to Curve as a plot option (denoted by the square brackets).
Of course, there are other ways of applying options in HoloViews that do not require this IPython-specific syntax, but for this tutorial, we will only be covering the more-convenient magic-based syntax. You can read about the alternative approaches in the user guide.
End of explanation
# hv.help(hv.Curve)
Explanation: Aside: hv.help
Tab completion helps discover what keywords are available but you can get more complete help using the hv.help utility. For instance, to learn more about the options for hv.Curve run hv.help(hv.Curve):
End of explanation
%%opts Curve (color='red' line_width=2)
hour_curve
Explanation: Style options
The plot options earlier instructed HoloViews to build a plot 900 pixels wide, when rendered with the Bokeh plotting extension. Now let's specify that the Bokeh glyph should be 'red' and slightly thicker, which is information passed on directly to Bokeh (making it a style option):
End of explanation
# Exercise: Display hour_curve without any new options to verify it stays red
# Exercise: Try setting the line_width style options to 1
# Exercise: Try setting the line_dash style option to 'dotdash'
Explanation: Note how the plot options applied above to hour_curve are remembered! The %%opts magic is used to customize the object displayed as output for a particular code cell: behind the scenes HoloViews has linked the specified options to the hour_curve object via a hidden integer id attribute.
Having used the %%opts magic on hour_curve again, we have now associated the 'red' color style option to it. In the options specification syntax, style options are the keywords in parentheses and are keywords defined and used by Bokeh to style line glyphs.
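To make the square-bracket / parenthesis distinction concrete, here is a combined specification (an illustrative sketch, not from the original tutorial) that sets one plot option and two style options on the same line:
%%opts Curve [width=900 show_grid=True] (color='red' line_dash='dotted')
hour_curve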
End of explanation
%%output backend='matplotlib'
hour_curve
Explanation: Switching to matplotlib
Let us now view our curve with matplotlib using the %%output cell magic:
End of explanation
%%output backend='matplotlib'
%%opts Curve [aspect=4 fig_size=400 xrotation=90] (color='blue' linewidth=2)
hour_curve
# Exercise: Apply the matplotlib equivalent to line_dash above using linestyle='-.'
Explanation: All our options are gone! This is because the options are associated with the corresponding plotting extension---if you switch back to 'bokeh', the options will be applicable again. In general, options have to be specific to backends; e.g. the line_width style option accepted by Bokeh is called linewidth in matplotlib:
End of explanation
%output backend='matplotlib' fig='svg'
Explanation: The %output line magic
In the two cells above we repeated %%output backend='matplotlib' to use matplotlib to render those two cells. Instead of repeating ourselves with the cell magic, we can use a "line magic" (similar syntax to the cell magic but with one %) to set things globally. Let us switch to matplotlib with a line magic and specify that we want SVG output:
End of explanation
%%opts Curve [aspect=4 fig_size=400 xrotation=70] (color='green' linestyle='--')
hour_curve
# Exercise: Verify for yourself that the output above is SVG and not PNG
# You can do this by right-clicking above then selecting 'Open Image in a new Tab' (Chrome) or 'View Image' (Firefox)
Explanation: Unlike the cell magic, the line magic doesn't need to be followed by any expression and can be used anywhere in the notebook. Both the %output and %opts line magics set things globally so it is recommended you declare them at the top of your notebooks. Now let us look at the SVG matplotlib output we requested:
End of explanation
%output backend='bokeh'
Explanation: Switching back to bokeh
In previous releases of HoloViews, it was typical to switch to matplotlib in order to export to PNG or SVG, because Bokeh did not support these file formats. Since Bokeh 0.12.6 we can now easily use HoloViews to export Bokeh plots to a PNG file, as we will now demonstrate:
End of explanation
%%output fig='png' filename='eclipses'
hour_curve.clone()
Explanation: By passing fig='png' and a filename='eclipses' to %output we can both render to PNG and save the output to file:
End of explanation
ls *.png
Explanation: Here we have requested PNG format using fig='png' and specified that the output is written to eclipses.png using filename='eclipses':
End of explanation
total_eclipses = eclipses[eclipses.type=='Total']
partial_eclipses = eclipses[eclipses.type=='Partial']
Explanation: Bokeh also has some SVG support, but it is not yet exposed in HoloViews.
Using group and label
The above examples showed how to customize by type, but HoloViews offers multiple additional levels of customization that should be sufficient to cover any purpose. For our last example, let us split our eclipse dataframe based on the type ('Total' or 'Partial'):
End of explanation
%opts Spikes [width=900 yaxis=None]
Explanation: We'll now introduce the Spikes element, and display it with a large width and without a y-axis. We can specify those options for all following Spikes elements using the %opts line magic:
End of explanation
%%opts Spikes.Eclipses.Total (line_dash='solid')
%%opts Spikes.Eclipses.Partial (line_dash='dotted')
total = hv.Spikes(total_eclipses, kdims=['hour_local'], vdims=[], group='Eclipses', label='Total')
partial = hv.Spikes(partial_eclipses, kdims=['hour_local'], vdims=[], group='Eclipses', label='Partial')
(total * partial).redim.label(hour_local='Local time (hour)')
Explanation: Now let us look at the hour of day at which these two types of eclipses occur (local time) by overlaying the two types of eclipse as Spikes elements. The problem then is finding a way to visually distinguish the spikes corresponding to the different eclipse types.
We can do this using the element group and label introduced in the introduction to elements section as follows:
End of explanation
# Exercise: Remove the two %%opts lines above and observe the effect
# Exercise: Show all spikes with 'solid' line_dash, total eclipses in black and the partial ones in 'lightgray'
# Optional Exercise: Try differentiating the two sets of spikes by group and not label
Explanation: Using these options to distinguish between the two categories of data with the same type, you can now see clear patterns of grouping between the two types, with many more total eclipses around noon in local time. Similar techniques can be used to provide arbitrarily specific customizations when needed.
End of explanation |
956 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Eigenvalues and eigenvectors of stiffness matrices
Step1: Predefinition
The constitutive model tensor in Voigt notation (plane stress) is
$$C = \frac{E}{(1 - \nu^2)}
\begin{pmatrix}
1 & \nu & 0\\
\nu & 1 & 0\\
0 & 0 & \frac{1 - \nu}{2}
\end{pmatrix}$$
Step2: Interpolation functions
The shape functions are
Step3: Thus, the interpolation matrix renders
Derivatives interpolation matrix
Step4: Being the stiffness matrix integrand
$$K_\text{int} = B^T C B$$
Step5: Analytic integration
The stiffness matrix is obtained integrating the product of the interpolator-derivatives (displacement-to-strains) matrix with the constitutive tensor and itself, i.e.
$$\begin{align}
K &= \int\limits_{-1}^{1}\int\limits_{-1}^{1} K_\text{int}\, dr\, ds\\
&= \int\limits_{-1}^{1}\int\limits_{-1}^{1} B^T C\, B\, dr\, ds \enspace .
\end{align}$$
Step6: We can check some numerical values for $E=1$ Pa and $\nu=1/3$ | Python Code:
from sympy.utilities.codegen import codegen
from sympy import *
from sympy import init_printing
init_printing()
r, s, t, x, y, z = symbols('r s t x y z')
k, m, n = symbols('k m n', integer=True)
rho, nu, E = symbols('rho, nu, E')
Explanation: Eigenvalues and eigenvectors of stiffness matrices
End of explanation
K_factor = E/(1 - nu**2)
C = K_factor * Matrix([
[1, nu, 0],
[nu, 1, 0],
[0, 0, (1 - nu)/2]])
C
Explanation: Predefinition
The constitutive model tensor in Voigt notation (plane stress) is
$$C = \frac{E}{(1 - \nu^2)}
\begin{pmatrix}
1 & \nu & 0\\
\nu & 1 & 0\\
0 & 0 & \frac{1 - \nu}{2}
\end{pmatrix}$$
End of explanation
N = S(1)/4*Matrix([(1 + r)*(1 + s),
(1 - r)*(1 + s),
(1 - r)*(1 - s),
(1 + r)*(1 - s)])
N
Explanation: Interpolation functions
The shape functions are
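In compact form (added here for reference; $(r_i, s_i)$ are the corner coordinates $\pm 1$ of node $i$), these bilinear shape functions can be written as
$$N_i = \frac{1}{4}\left(1 + r\, r_i\right)\left(1 + s\, s_i\right)$$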
End of explanation
dHdr = zeros(2,4)
for i in range(4):
dHdr[0,i] = diff(N[i],r)
dHdr[1,i] = diff(N[i],s)
jaco = eye(2) # Jacobian matrix, identity for now
dHdx = jaco*dHdr
B = zeros(3,8)
for i in range(4):
B[0, 2*i] = dHdx[0, i]
B[1, 2*i+1] = dHdx[1, i]
B[2, 2*i] = dHdx[1, i]
B[2, 2*i+1] = dHdx[0, i]
B
Explanation: Thus, the interpolation matrix renders
Derivatives interpolation matrix
End of explanation
K_int = B.T*C*B
Explanation: Being the stiffness matrix integrand
$$K_\text{int} = B^T C B$$
End of explanation
K = zeros(8,8)
for i in range(8):
for j in range(8):
K[i,j] = integrate(K_int[i,j], (r,-1,1), (s,-1,1))
simplify(K/K_factor)
Explanation: Analytic integration
The stiffness matrix is obtained integrating the product of the interpolator-derivatives (displacement-to-strains) matrix with the constitutive tensor and itself, i.e.
$$\begin{align}
K &= \int\limits_{-1}^{1}\int\limits_{-1}^{1} K_\text{int}\, dr\, ds\\
&= \int\limits_{-1}^{1}\int\limits_{-1}^{1} B^T C\, B\, dr\, ds \enspace .
\end{align}$$
End of explanation
K_num = K.subs([(E, 1), (nu, S(1)/3)])
K_num.eigenvects()
Explanation: We can check some numerical values for $E=1$ Pa and $\nu=1/3$
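As a quick sanity check (an added sketch, not in the original), we can count the zero eigenvalues of the unconstrained element stiffness matrix; they correspond to the rigid-body modes.
eigs = K_num.eigenvals()
print(sum(mult for val, mult in eigs.items() if val == 0))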
End of explanation |
957 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Test Script
Used by tests.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project
Step2: 3. Execute Test Script
This does NOT need to be modified unless you are changing the recipe, click play. | Python Code:
!pip install git+https://github.com/google/starthinker
Explanation: Test Script
Used by tests.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Disclaimer
This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.
This code generated (see starthinker/scripts for possible source):
- Command: "python starthinker_ui/manage.py colab"
- Command: "python starthinker/tools/colab.py [JSON RECIPE]"
1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
End of explanation
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
Explanation: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project:
Set the configuration project value to the project identifier from these instructions.
If the recipe has auth set to user:
If you have user credentials:
Set the configuration user value to your user credentials JSON.
If you DO NOT have user credentials:
Set the configuration client value to downloaded client credentials.
If the recipe has auth set to service:
Set the configuration service value to downloaded service credentials.
End of explanation
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'hello':{
'auth':'user',
'hour':[
1
],
'say':'Hello At 1',
'sleep':0
}
},
{
'hello':{
'auth':'user',
'hour':[
3
],
'say':'Hello At 3',
'sleep':0
}
},
{
'hello':{
'auth':'user',
'hour':[
],
'say':'Hello Manual',
'sleep':0
}
},
{
'hello':{
'auth':'user',
'hour':[
23
],
'say':'Hello At 23 Sleep',
'sleep':30
}
},
{
'hello':{
'auth':'user',
'say':'Hello At Anytime',
'sleep':0
}
},
{
'hello':{
'auth':'user',
'hour':[
1,
3,
23
],
'say':'Hello At 1, 3, 23',
'sleep':0
}
},
{
'hello':{
'auth':'user',
'hour':[
3
],
'say':'Hello At 3 Reordered',
'sleep':0
}
}
]
execute(CONFIG, TASKS, force=True)
Explanation: 3. Execute Test Script
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation |
958 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Python
Python <https
Step1: Second, if you have a MATLAB background remember that indexing in Python
starts from zero (and is done with square brackets) | Python Code:
a = 3
print(type(a))
b = [1, 2.5, 'This is a string']
print(type(b))
c = 'Hello world!'
print(type(c))
Explanation: Introduction to Python
Python <https://www.python.org/>_ is a modern general-purpose object-oriented
high-level programming language. First make sure you have a working Python
environment and dependencies (see install_python_and_mne_python). If you
are completely new to Python, don't worry, it's just like any other programming
language, only easier. Here are a few great resources to get you started:
SciPy lectures <http://scipy-lectures.github.io>_
Learn X in Y minutes: Python <https://learnxinyminutes.com/docs/python/>_
NumPy for MATLAB users <https://docs.scipy.org/doc/numpy-dev/user/numpy-for-matlab-users.html>_
We highly recommend watching the SciPy videos and reading through these sites
to get a sense of how scientific computing is done in Python.
Here are a few important points to help you familiarize yourself with Python. First,
everything is dynamically typed. There is no need to declare and initialize
data structures or variables separately.
End of explanation
a = [1, 2, 3, 4]
print('This is the zeroth value in the list: {}'.format(a[0]))
Explanation: Second, if you have a MATLAB background remember that indexing in Python
starts from zero (and is done with square brackets):
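A couple of extra lines (added here as a small illustration, not part of the original tutorial) show that slicing and negative indexing use the same square-bracket syntax:
print('This is the last value in the list: {}'.format(a[-1]))
print('These are the second and third values: {}'.format(a[1:3]))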
End of explanation |
959 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intensity Weighting
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
Step1: As always, let's do imports and initialize a logger and a new Bundle.
Step2: Relevant Parameters
Step3: Influence on Light Curves (fluxes)
Let's (roughly) reproduce Figure 5 from Prsa et al. 2016 which shows the difference between photon and energy intensity weighting.
<img src="prsa+2016_fig5.png" alt="Figure 8" width="600px"/> | Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
Explanation: Intensity Weighting
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_dataset('lc', times=np.linspace(0,1,101))
Explanation: As always, let's do imports and initialize a logger and a new Bundle.
End of explanation
b['intens_weighting']
print(b['intens_weighting'])
Explanation: Relevant Parameters
End of explanation
for teff_primary in [5000,7500,10000,12500,15000]:
b['teff@primary'] = teff_primary
b['teff@secondary'] = 0.9 * teff_primary
for weighting in ['energy', 'photon']:
b['intens_weighting'] = weighting
b.run_compute(irrad_method='none', model='{}_{}'.format(teff_primary, weighting))
teff_colormap = {5000: 'm', 7500: 'r', 10000: 'g', 12500: 'c', 15000: 'b'}
fig = plt.figure()
ax1, ax2 = fig.add_subplot(211), fig.add_subplot(212)
for teff, color in teff_colormap.items():
fluxes_energy = b.get_value('fluxes@{}_energy'.format(teff))
fluxes_photon = b.get_value('fluxes@{}_photon'.format(teff))
phases = b.to_phase('times@lc@dataset')
# alias data from -0.6 to 0.6
fluxes_energy = np.append(fluxes_energy, fluxes_energy[abs(phases) > 0.4])
fluxes_photon = np.append(fluxes_photon, fluxes_photon[abs(phases) > 0.4])
phases = np.append(phases, phases[abs(phases)>0.4]+1.0)
phases[phases > 1.0] = phases[phases > 1.0] - 2.0
sort = phases.argsort()
phases = phases[sort]
fluxes_energy = fluxes_energy[sort]
fluxes_photon = fluxes_photon[sort]
ax1.plot(phases, fluxes_energy, color=color)
ax2.plot(phases, fluxes_photon-fluxes_energy, color=color)
lbl = ax1.set_xlabel('')
lbl = ax1.set_ylabel('flux')
lbl = ax2.set_xlabel('phase')
lbl = ax2.set_ylabel('flux diff')
Explanation: Influence on Light Curves (fluxes)
Let's (roughly) reproduce Figure 5 from Prsa et al. 2016 which shows the difference between photon and energy intensity weighting.
<img src="prsa+2016_fig5.png" alt="Figure 8" width="600px"/>
End of explanation |
960 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SEG Machine Learning (Well Log Facies Prediction) Contest
Entry by Justin Gosses of team Pet_Stromatolite
This is an "open science" contest designed to introduce people to machine learning with well logs and brainstorm different methods through collaboration with others, so this notebook is based heavily on the introductory notebook with my own modifications.
more information at https
Step1: This is a quick view of the statistical distribution of the input variables. Looking at the count values, there are 3232 feature vectors in the training set.
Remove a single well to use as a blind test later.
Step2: Before we plot the well data, let's define a color map so the facies are represented by consistent color in all the plots in this tutorial. We also create the abbreviated facies labels, and add those to the facies_vectors dataframe.
Step3: Let's take a look at the data from individual wells in a more familiar log plot form. We will create plots for the five well log variables, as well as a log for facies labels. The plots are based on those described in Alessandro Amato del Monte's excellent tutorial.
Step4: editing the well viewer code in an attempt to understand it and potentially not show everything
Step5: looking at several wells at once
Step6: In addition to individual wells, we can look at how the various facies are represented by the entire training set. Let's plot a histogram of the number of training examples for each facies class.
This shows the distribution of examples by facies for the examples in the training set. Dolomite (facies 7) has the fewest with 81 examples. Depending on the performance of the classifier we are going to train, we may consider getting more examples of these facies.
Crossplots are a familiar tool in the geosciences to visualize how two properties vary with rock type. This dataset contains 5 log variables, and a scatter matrix can help to quickly visualize the variation between all the variables in the dataset. We can employ the very useful Seaborn library to quickly create a nice looking scatter matrix. Each pane in the plot shows the relationship between two of the variables on the x and y axis, with each point colored according to its facies. The same colormap is used to represent the 9 facies.
Step7: Conditioning the data set
Now we extract just the feature variables we need to perform the classification. The predictor variables are the five wireline values and two geologic constraining variables. We also get a vector of the facies labels that correspond to each feature vector.
Step8: Scikit includes a preprocessing module that can 'standardize' the data (giving each variable zero mean and unit variance, also called whitening). Many machine learning algorithms assume features will be standard normally distributed data (ie
Step9: Scikit also includes a handy function to randomly split the training data into training and test sets. The test set contains a small subset of feature vectors that are not used to train the classifier. Because we know the true facies labels for these examples, we can compare the results of the classifier to the actual facies and determine the accuracy of the model. Let's hold out 10% of the data for the test set (test_size=0.1 below).
Step10: Training the SVM classifier
Now we use the cleaned and conditioned training set to create a facies classifier. As mentioned above, we will use a type of machine learning model known as a support vector machine. The SVM is a map of the feature vectors as points in a multidimensional space, mapped so that examples from different facies are divided by a clear gap that is as wide as possible.
The SVM implementation in scikit-learn takes a number of important parameters. First we create a classifier using the default settings.
Step11: Now we can train the classifier using the training set we created above.
Step12: Now that the model has been trained on our data, we can use it to predict the facies of the feature vectors in the test set. Because we know the true facies labels of the vectors in the test set, we can use the results to evaluate the accuracy of the classifier.
Step13: We need some metrics to evaluate how good our classifier is doing. A confusion matrix is a table that can be used to describe the performance of a classification model. Scikit-learn allows us to easily create a confusion matrix by supplying the actual and predicted facies labels.
The confusion matrix is simply a 2D array. The entries of confusion matrix C[i][j] are equal to the number of observations predicted to have facies j, but are known to have facies i.
To simplify reading the confusion matrix, a function has been written to display the matrix along with facies labels and various error metrics. See the file classification_utilities.py in this repo for the display_cm() function.
Step14: As noted above, the boundaries between the facies classes are not all sharp, and some of them blend into one another. The error within these 'adjacent facies' can also be calculated. We define an array to represent the facies adjacent to each other. For facies label i, adjacent_facies[i] is an array of the adjacent facies labels.
Step15: Model parameter selection
The classifier so far has been built with the default parameters. However, we may be able to get improved classification results with optimal parameter choices.
We will consider two parameters. The parameter C is a regularization factor, and tells the classifier how much we want to avoid misclassifying training examples. A large value of C will try to correctly classify more examples from the training set, but if C is too large it may 'overfit' the data and fail to generalize when classifying new data. If C is too small then the model will not be good at fitting outliers and will have a large error on the training set.
The SVM learning algorithm uses a kernel function to compute the distance between feature vectors. Many kernel functions exist, but in this case we are using the radial basis function rbf kernel (the default). The gamma parameter describes the size of the radial basis functions, which is how far away two vectors in the feature space need to be to be considered close.
We will train a series of classifiers with different values for C and gamma. Two nested loops are used to train a classifier for every possible combination of values in the ranges specified. The classification accuracy is recorded for each combination of parameter values. The results are shown in a series of plots, so the parameter values that give the best classification accuracy on the test set can be selected.
This process is also known as 'cross validation'. Often a separate 'cross validation' dataset will be created in addition to the training and test sets to do model selection. For this tutorial we will just use the test set to choose model parameters.
Step16: The best accuracy on the cross validation error curve was achieved for gamma = 1, and C = 10. We can now create and train an optimized classifier based on these parameters
Step17: Precision and recall are metrics that give more insight into how the classifier performs for individual facies. Precision is the probability that given a classification result for a sample, the sample actually belongs to that class. Recall is the probability that a sample will be correctly classified for a given class.
Precision and recall can be computed easily using the confusion matrix. The code to do so has been added to the display_cm() function
Step18: To interpret these results, consider facies SS. In our test set, if a sample was labeled SS the probability the sample was correct is 0.8 (precision). If we know a sample has facies SS, then the probability it will be correctly labeled by the classifier is 0.78 (recall). It is desirable to have high values for both precision and recall, but often when an algorithm is tuned to increase one, the other decreases. The F1 score combines both to give a single measure of relevancy of the classifier results.
These results can help guide intuition for how to improve the classifier results. For example, for a sample with facies MS or mudstone, it is only classified correctly 57% of the time (recall). Perhaps this could be improved by introducing more training samples. Sample quality could also play a role. Facies BS or bafflestone has the best F1 score and relatively few training examples. But this data was handpicked from other wells to provide training examples to identify this facies.
We can also consider the classification metrics when we consider misclassifying an adjacent facies as correct
Step19: Considering adjacent facies, the F1 scores for all facies types are above 0.9, except when classifying SiSh or marine siltstone and shale. The classifier often misclassifies this facies (recall of 0.66), most often as wackestone.
These results are comparable to those reported in Dubois et al. (2007).
Applying the classification model to the blind data
We held a well back from the training, and stored it in a dataframe called blind
Step20: The label vector is just the Facies column
Step21: We can form the feature matrix by dropping some of the columns and making a new dataframe
Step22: Now we can transform this with the scaler we made before
Step23: Now it's a simple matter of making a prediction and storing it back in the dataframe
Step24: Let's see how we did with the confusion matrix
Step25: We managed 0.71 using the test data, but it was from the same wells as the training data. This more reasonable test does not perform as well...
Step26: ...but does remarkably well on the adjacent facies predictions.
Step27: Applying the classification model to new data
Now that we have a trained facies classification model we can use it to identify facies in wells that do not have core data. In this case, we will apply the classifier to two wells, but we could use it on any number of wells for which we have the same set of well logs for input.
This dataset is similar to the training data except it does not have facies labels. It is loaded into a dataframe called well_data.
Step28: The data needs to be scaled using the same constants we used for the training data.
Step29: Finally we predict facies labels for the unknown data, and store the results in a Facies column of the well_data dataframe.
Step30: We can use the well log plot to view the classification results along with the well logs. | Python Code:
### loading
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
### setting up options in pandas
from pandas import set_option
set_option("display.max_rows", 10)
pd.options.mode.chained_assignment = None
### taking a look at the training dataset
filename = 'training_data.csv'
training_data = pd.read_csv(filename)
training_data
### Checking out Well Names
training_data['Well Name'] = training_data['Well Name'].astype('category')
training_data['Well Name'].unique()
training_data['Well Name']
well_name_list = training_data['Well Name'].unique()
well_name_list
### Checking out Formation Names
training_data['Formation'] = training_data['Formation'].astype('category')
training_data['Formation'].unique()
training_data.describe()
facies_1 = training_data.loc[training_data['Facies'] == 1]
facies_2 = training_data.loc[training_data['Facies'] == 2]
facies_3 = training_data.loc[training_data['Facies'] == 3]
facies_4 = training_data.loc[training_data['Facies'] == 4]
facies_5 = training_data.loc[training_data['Facies'] == 5]
facies_6 = training_data.loc[training_data['Facies'] == 6]
facies_7 = training_data.loc[training_data['Facies'] == 7]
facies_8 = training_data.loc[training_data['Facies'] == 8]
facies_9 = training_data.loc[training_data['Facies'] == 9]
#showing description for just facies 1, Sandstone
facies_1.describe()
#showing description for just facies 9, Phylloid-algal bafflestone (limestone)
facies_9.describe()
#showing description for just facies 8, Packstone-grainstone (limestone)
facies_8.describe()
Explanation: SEG Machine Learning (Well Log Facies Prediction) Contest
Entry by Justin Gosses of team Pet_Stromatolite
This is an "open science" contest designed to introduce people to machine learning with well logs and brainstorm different methods through collaboration with others, so this notebook is based heavily on the introductory notebook with my own modifications.
more information at https://github.com/seg/2016-ml-contest
and even more information at http://library.seg.org/doi/abs/10.1190/tle35100906.1
This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate.
The seven predictor variables are:
* Five wire line log curves include gamma ray (GR), resistivity logging (ILD_log10),
photoelectric effect (PE), neutron-density porosity difference and average neutron-density porosity (DeltaPHI and PHIND). Note, some wells do not have PE.
* Two geologic constraining variables: nonmarine-marine indicator (NM_M) and relative position (RELPOS)
The nine discrete facies (classes of rocks) are:
1. Nonmarine sandstone
2. Nonmarine coarse siltstone
3. Nonmarine fine siltstone
4. Marine siltstone and shale
5. Mudstone (limestone)
6. Wackestone (limestone)
7. Dolomite
8. Packstone-grainstone (limestone)
9. Phylloid-algal bafflestone (limestone)
These facies aren't discrete, and gradually blend into one another. Some have neighboring facies that are rather close. Mislabeling within these neighboring facies can be expected to occur. The following table lists the facies, their abbreviated labels and their approximate neighbors.
Facies |Label| Adjacent Facies
:---: | :---: |:--:
1 |SS| 2
2 |CSiS| 1,3
3 |FSiS| 2
4 |SiSh| 5
5 |MS| 4,6
6 |WS| 5,7
7 |D| 6,8
8 |PS| 6,7,9
9 |BS| 7,8
Let's clean up this dataset. The 'Well Name' and 'Formation' columns can be turned into a categorical data type.
=================================================================================================================
Notes:
Early Ideas for feature engineering
take out any points in individual wells where not all the logs are present
test whether error increases around the depths where PE is absent?
test whether using formation, depth, or depth&formation as variables impacts prediction
examine well logs & facies logs (including prediction wells) to see if there aren't trends that might be dealt with by increasing the population of certain wells over others in the training set?
explore effect size of using/not using marine or non-marine flags
explore making 'likely to predict wrong' flags based on first-pass results with thin facies surrounded by thicker facies such that you might expand a 'blended' response due to the measured response of the tool being thicker than predicted facies
explore doing the same above but before prediction using range of thickness in predicted facies flags vs. range of thickness in known facies flags
explore using multiple prediction loops, in other words, predict errors not just facies.
Explore error distribution: adjacent vs. non-adjacent facies, by thickness, marine vs. non-marine, by formation, and possible human judgement patterns that influence interpreted facies.
End of explanation
blind = training_data[training_data['Well Name'] == 'SHANKLE']
training_data = training_data[training_data['Well Name'] != 'SHANKLE']
Explanation: This is a quick view of the statistical distribution of the input variables. Looking at the count values, there are 3232 feature vectors in the training set.
Remove a single well to use as a blind test later.
End of explanation
# 1=sandstone 2=c_siltstone 3=f_siltstone
# 4=marine_silt_shale 5=mudstone 6=wackestone 7=dolomite
# 8=packstone 9=bafflestone
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00',
'#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',
'WS', 'D','PS', 'BS']
#facies_color_map is a dictionary that maps facies labels
#to their respective colors
facies_color_map = {}
for ind, label in enumerate(facies_labels):
facies_color_map[label] = facies_colors[ind]
def label_facies(row, labels):
return labels[ row['Facies'] -1]
training_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1)
Explanation: Before we plot the well data, let's define a color map so the facies are represented by consistent color in all the plots in this tutorial. We also create the abbreviated facies labels, and add those to the facies_vectors dataframe.
End of explanation
def make_facies_log_plot(logs, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12))
ax[0].plot(logs.GR, logs.Depth, '-g')
ax[1].plot(logs.ILD_log10, logs.Depth, '-')
ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
ax[4].plot(logs.PE, logs.Depth, '-', color='black')
im=ax[5].imshow(cluster, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[5])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-1):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel('Facies')
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[5].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
make_facies_log_plot(
training_data[training_data['Well Name'] == 'SHRIMPLIN'],
facies_colors)
Explanation: Let's take a look at the data from individual wells in a more familiar log plot form. We will create plots for the five well log variables, as well as a log for facies labels. The plots are based on those described in Alessandro Amato del Monte's excellent tutorial.
End of explanation
def make_faciesOnly_log_plot(logs, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=2, figsize=(3, 9))
# f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12))
# ax[0].plot(logs.GR, logs.Depth, '-g')
ax[0].plot(logs.ILD_log10, logs.Depth, '-')
# ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
# ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
# ax[4].plot(logs.PE, logs.Depth, '-', color='black')
im=ax[1].imshow(cluster, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[1])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-1):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
# ax[0].set_xlabel("GR")
# ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[0].set_xlabel("ILD_log10")
ax[0].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
# ax[2].set_xlabel("DeltaPHI")
# ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
# ax[3].set_xlabel("PHIND")
# ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
# ax[4].set_xlabel("PE")
# ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[1].set_xlabel('Facies')
# ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
# ax[4].set_yticklabels([]);
ax[1].set_yticklabels([])
ax[1].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
Explanation: editing the well viewer code in an attempt to understand it and potentially not show everything
End of explanation
# make_faciesOnly_log_plot(
# training_data[training_data['Well Name'] == 'SHRIMPLIN'],
# facies_colors)
for i in range(len(well_name_list)-1):
# well_name_list[i]
make_faciesOnly_log_plot(
training_data[training_data['Well Name'] == well_name_list[i]],
facies_colors)
Explanation: looking at several wells at once
End of explanation
#save plot display settings to change back to when done plotting with seaborn
inline_rc = dict(mpl.rcParams)
import seaborn as sns
sns.set()
sns.pairplot(training_data.drop(['Well Name','Facies','Formation','Depth','NM_M','RELPOS'],axis=1),
hue='FaciesLabels', palette=facies_color_map,
hue_order=list(reversed(facies_labels)))
#switch back to default matplotlib plot style
mpl.rcParams.update(inline_rc)
from pandas.tools.plotting import radviz  # note: in newer pandas versions this lives in pandas.plotting
# drop the non-numeric / non-log columns; axis=1 is required, and FaciesLabels (a string column) must also go since radviz can only scale numeric features
radviz(training_data.drop(['Well Name','Formation','Depth','NM_M','RELPOS','FaciesLabels'], axis=1), "Facies")
Explanation: In addition to individual wells, we can look at how the various facies are represented by the entire training set. Let's plot a histogram of the number of training examples for each facies class.
This shows the distribution of examples by facies for the examples in the training set. Dolomite (facies 7) has the fewest with 81 examples. Depending on the performance of the classifier we are going to train, we may consider getting more examples of these facies.
Crossplots are a familiar tool in the geosciences to visualize how two properties vary with rock type. This dataset contains 5 log variables, and a scatter matrix can help to quickly visualize the variation between all the variables in the dataset. We can employ the very useful Seaborn library to quickly create a nice looking scatter matrix. Each pane in the plot shows the relationship between two of the variables on the x and y axis, with each point colored according to its facies. The same colormap is used to represent the 9 facies.
End of explanation
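# a minimal sketch (not in the original cells): the per-facies histogram mentioned above,
# built from the training_data, facies_labels and facies_colors already defined
facies_counts = training_data['Facies'].value_counts().sort_index()
facies_counts.index = facies_labels
facies_counts.plot(kind='bar', color=facies_colors, title='Distribution of Training Data by Facies')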
correct_facies_labels = training_data['Facies'].values
# dropping certain labels and only keeping the geophysical log values to train on
feature_vectors = training_data.drop(['Formation', 'Well Name', 'Depth','Facies','FaciesLabels'], axis=1)
feature_vectors.describe()
Explanation: Conditioning the data set
Now we extract just the feature variables we need to perform the classification. The predictor variables are the five wireline values and two geologic constraining variables. We also get a vector of the facies labels that correspond to each feature vector.
End of explanation
from sklearn import preprocessing
scaler = preprocessing.StandardScaler().fit(feature_vectors)
scaled_features = scaler.transform(feature_vectors)
feature_vectors
feature_vectors.describe()
Explanation: Scikit includes a preprocessing module that can 'standardize' the data (giving each variable zero mean and unit variance, also called whitening). Many machine learning algorithms assume features will be standard normally distributed data (i.e. Gaussian with zero mean and unit variance). The factors used to standardize the training set must be applied to any subsequent feature set that will be input to the classifier. The StandardScaler class can be fit to the training set, and later used to standardize any training data.
End of explanation
from sklearn.cross_validation import train_test_split  # deprecated module; in scikit-learn >= 0.18 use sklearn.model_selection
X_train, X_test, y_train, y_test = train_test_split(
scaled_features, correct_facies_labels, test_size=0.1, random_state=42)
Explanation: Scikit also includes a handy function to randomly split the training data into training and test sets. The test set contains a small subset of feature vectors that are not used to train the classifier. Because we know the true facies labels for these examples, we can compare the results of the classifier to the actual facies and determine the accuracy of the model. Let's hold out 10% of the data for the test set (test_size=0.1 below).
End of explanation
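# optional variant (an assumption, not from the original notebook; requires a scikit-learn
# version whose train_test_split accepts stratify): a stratified split keeps the relative
# facies proportions in the train and test sets, which may help with the rarer facies
# X_train, X_test, y_train, y_test = train_test_split(
#     scaled_features, correct_facies_labels, test_size=0.1,
#     random_state=42, stratify=correct_facies_labels)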
from sklearn import svm
clf = svm.SVC()
Explanation: Training the SVM classifier
Now we use the cleaned and conditioned training set to create a facies classifier. As mentioned above, we will use a type of machine learning model known as a support vector machine. The SVM is a map of the feature vectors as points in a multidimensional space, mapped so that examples from different facies are divided by a clear gap that is as wide as possible.
The SVM implementation in scikit-learn takes a number of important parameters. First we create a classifier using the default settings.
End of explanation
clf.fit(X_train,y_train)
Explanation: Now we can train the classifier using the training set we created above.
End of explanation
predicted_labels = clf.predict(X_test)
Explanation: Now that the model has been trained on our data, we can use it to predict the facies of the feature vectors in the test set. Because we know the true facies labels of the vectors in the test set, we can use the results to evaluate the accuracy of the classifier.
End of explanation
from sklearn.metrics import confusion_matrix
from classification_utilities import display_cm, display_adj_cm
conf = confusion_matrix(y_test, predicted_labels)
display_cm(conf, facies_labels, hide_zeros=True)
def accuracy(conf):
total_correct = 0.
nb_classes = conf.shape[0]
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
acc = total_correct/sum(sum(conf))
return acc
Explanation: We need some metrics to evaluate how good our classifier is doing. A confusion matrix is a table that can be used to describe the performance of a classification model. Scikit-learn allows us to easily create a confusion matrix by supplying the actual and predicted facies labels.
The confusion matrix is simply a 2D array. The entries of confusion matrix C[i][j] are equal to the number of observations predicted to have facies j, but are known to have facies i.
To simplify reading the confusion matrix, a function has been written to display the matrix along with facies labels and various error metrics. See the file classification_utilities.py in this repo for the display_cm() function.
End of explanation
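# tiny hypothetical check of accuracy(): 8 + 5 correct out of 16 observations -> 0.8125
toy_conf = np.array([[8, 2], [1, 5]])
print(accuracy(toy_conf))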
adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])
def accuracy_adjacent(conf, adjacent_facies):
nb_classes = conf.shape[0]
total_correct = 0.
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
for j in adjacent_facies[i]:
total_correct += conf[i][j]
return total_correct / sum(sum(conf))
print('Facies classification accuracy = %f' % accuracy(conf))
print('Adjacent facies classification accuracy = %f' % accuracy_adjacent(conf, adjacent_facies))
Explanation: As noted above, the boundaries between the facies classes are not all sharp, and some of them blend into one another. The error within these 'adjacent facies' can also be calculated. We define an array to represent the facies adjacent to each other. For facies label i, adjacent_facies[i] is an array of the adjacent facies labels.
End of explanation
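# quick illustration of the adjacency table: facies 5 (MS, index 4) counts SiSh and WS as adjacent
print([facies_labels[j] for j in adjacent_facies[4]])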
#model selection takes a few minutes, change this variable
#to true to run the parameter loop
do_model_selection = True
if do_model_selection:
C_range = np.array([.01, 1, 5, 10, 20, 50, 100, 1000, 5000, 10000])
gamma_range = np.array([0.0001, 0.001, 0.01, 0.1, 1, 10])
fig, axes = plt.subplots(3, 2,
sharex='col', sharey='row',figsize=(10,10))
plot_number = 0
for outer_ind, gamma_value in enumerate(gamma_range):
row = int(plot_number / 2)
column = int(plot_number % 2)
cv_errors = np.zeros(C_range.shape)
train_errors = np.zeros(C_range.shape)
for index, c_value in enumerate(C_range):
clf = svm.SVC(C=c_value, gamma=gamma_value)
clf.fit(X_train,y_train)
train_conf = confusion_matrix(y_train, clf.predict(X_train))
cv_conf = confusion_matrix(y_test, clf.predict(X_test))
cv_errors[index] = accuracy(cv_conf)
train_errors[index] = accuracy(train_conf)
ax = axes[row, column]
ax.set_title('Gamma = %g'%gamma_value)
ax.semilogx(C_range, cv_errors, label='CV error')
ax.semilogx(C_range, train_errors, label='Train error')
plot_number += 1
ax.set_ylim([0.2,1])
ax.legend(bbox_to_anchor=(1.05, 0), loc='lower left', borderaxespad=0.)
fig.text(0.5, 0.03, 'C value', ha='center',
fontsize=14)
fig.text(0.04, 0.5, 'Classification Accuracy', va='center',
rotation='vertical', fontsize=14)
Explanation: Model parameter selection
The classifier so far has been built with the default parameters. However, we may be able to get improved classification results with optimal parameter choices.
We will consider two parameters. The parameter C is a regularization factor, and tells the classifier how much we want to avoid misclassifying training examples. A large value of C will try to correctly classify more examples from the training set, but if C is too large it may 'overfit' the data and fail to generalize when classifying new data. If C is too small then the model will not be good at fitting outliers and will have a large error on the training set.
The SVM learning algorithm uses a kernel function to compute the distance between feature vectors. Many kernel functions exist, but in this case we are using the radial basis function rbf kernel (the default). The gamma parameter describes the size of the radial basis functions, which is how far away two vectors in the feature space need to be to be considered close.
We will train a series of classifiers with different values for C and gamma. Two nested loops are used to train a classifier for every possible combination of values in the ranges specified. The classification accuracy is recorded for each combination of parameter values. The results are shown in a series of plots, so the parameter values that give the best classification accuracy on the test set can be selected.
This process is also known as 'cross validation'. Often a separate 'cross validation' dataset will be created in addition to the training and test sets to do model selection. For this tutorial we will just use the test set to choose model parameters.
End of explanation
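# alternative sketch (not in the original notebook, assumes scikit-learn >= 0.18):
# the same parameter search expressed with GridSearchCV instead of the explicit nested loops
from sklearn.model_selection import GridSearchCV
param_grid = {'C': [0.01, 1, 5, 10, 20, 50, 100, 1000, 5000, 10000],
              'gamma': [0.0001, 0.001, 0.01, 0.1, 1, 10]}
grid = GridSearchCV(svm.SVC(), param_grid, cv=5)
grid.fit(X_train, y_train)
print(grid.best_params_)
print(grid.best_score_)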
clf = svm.SVC(C=10, gamma=1)
clf.fit(X_train, y_train)
cv_conf = confusion_matrix(y_test, clf.predict(X_test))
print('Optimized facies classification accuracy = %.2f' % accuracy(cv_conf))
print('Optimized adjacent facies classification accuracy = %.2f' % accuracy_adjacent(cv_conf, adjacent_facies))
Explanation: The best accuracy on the cross validation error curve was achieved for gamma = 1, and C = 10. We can now create and train an optimized classifier based on these parameters:
End of explanation
display_cm(cv_conf, facies_labels,
display_metrics=True, hide_zeros=True)
Explanation: Precision and recall are metrics that give more insight into how the classifier performs for individual facies. Precision is the probability that given a classification result for a sample, the sample actually belongs to that class. Recall is the probability that a sample will be correctly classified for a given class.
Precision and recall can be computed easily using the confusion matrix. The code to do so has been added to the display_cm() function:
End of explanation
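# cross-check sketch using scikit-learn's built-in report
# (assumes all nine facies occur in the test split)
from sklearn.metrics import classification_report
print(classification_report(y_test, clf.predict(X_test), target_names=facies_labels))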
display_adj_cm(cv_conf, facies_labels, adjacent_facies,
display_metrics=True, hide_zeros=True)
Explanation: To interpret these results, consider facies SS. In our test set, if a sample was labeled SS the probability the sample was correct is 0.8 (precision). If we know a sample has facies SS, then the probability it will be correctly labeled by the classifier is 0.78 (recall). It is desirable to have high values for both precision and recall, but often when an algorithm is tuned to increase one, the other decreases. The F1 score combines both to give a single measure of relevancy of the classifier results.
These results can help guide intuition for how to improve the classifier results. For example, for a sample with facies MS or mudstone, it is only classified correctly 57% of the time (recall). Perhaps this could be improved by introducing more training samples. Sample quality could also play a role. Facies BS or bafflestone has the best F1 score and relatively few training examples. But this data was handpicked from other wells to provide training examples to identify this facies.
We can also consider the classification metrics when we consider misclassifying an adjacent facies as correct:
End of explanation
blind
Explanation: Considering adjacent facies, the F1 scores for all facies types are above 0.9, except when classifying SiSh or marine siltstone and shale. The classifier often misclassifies this facies (recall of 0.66), most often as wackestone.
These results are comparable to those reported in Dubois et al. (2007).
Applying the classification model to the blind data
We held a well back from the training, and stored it in a dataframe called blind:
End of explanation
y_blind = blind['Facies'].values
Explanation: The label vector is just the Facies column:
End of explanation
well_features = blind.drop(['Facies', 'Formation', 'Well Name', 'Depth'], axis=1)
Explanation: We can form the feature matrix by dropping some of the columns and making a new dataframe:
End of explanation
X_blind = scaler.transform(well_features)
Explanation: Now we can transform this with the scaler we made before:
End of explanation
y_pred = clf.predict(X_blind)
blind['Prediction'] = y_pred
Explanation: Now it's a simple matter of making a prediction and storing it back in the dataframe:
End of explanation
cv_conf = confusion_matrix(y_blind, y_pred)
print('Optimized facies classification accuracy = %.2f' % accuracy(cv_conf))
print('Optimized adjacent facies classification accuracy = %.2f' % accuracy_adjacent(cv_conf, adjacent_facies))
Explanation: Let's see how we did with the confusion matrix:
End of explanation
display_cm(cv_conf, facies_labels,
display_metrics=True, hide_zeros=True)
Explanation: We managed 0.71 using the test data, but it was from the same wells as the training data. This more reasonable test does not perform as well...
End of explanation
display_adj_cm(cv_conf, facies_labels, adjacent_facies,
display_metrics=True, hide_zeros=True)
def compare_facies_plot(logs, compadre, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster1 = np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
cluster2 = np.repeat(np.expand_dims(logs[compadre].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=7, figsize=(9, 12))
ax[0].plot(logs.GR, logs.Depth, '-g')
ax[1].plot(logs.ILD_log10, logs.Depth, '-')
ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
ax[4].plot(logs.PE, logs.Depth, '-', color='black')
im1 = ax[5].imshow(cluster1, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
im2 = ax[6].imshow(cluster2, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[6])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im2, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-2):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel('Facies')
ax[6].set_xlabel(compadre)
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[5].set_xticklabels([])
ax[6].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
compare_facies_plot(blind, 'Prediction', facies_colors)
Explanation: ...but does remarkably well on the adjacent facies predictions.
End of explanation
well_data = pd.read_csv('validation_data_nofacies.csv')
well_data['Well Name'] = well_data['Well Name'].astype('category')
well_features = well_data.drop(['Formation', 'Well Name', 'Depth'], axis=1)
Explanation: Applying the classification model to new data
Now that we have a trained facies classification model we can use it to identify facies in wells that do not have core data. In this case, we will apply the classifier to two wells, but we could use it on any number of wells for which we have the same set of well logs for input.
This dataset is similar to the training data except it does not have facies labels. It is loaded into a dataframe called well_data.
End of explanation
X_unknown = scaler.transform(well_features)
Explanation: The data needs to be scaled using the same constants we used for the training data.
End of explanation
#predict facies of unclassified data
y_unknown = clf.predict(X_unknown)
well_data['Facies'] = y_unknown
well_data
well_data['Well Name'].unique()
Explanation: Finally we predict facies labels for the unknown data, and store the results in a Facies column of the well_data dataframe.
End of explanation
make_facies_log_plot(
well_data[well_data['Well Name'] == 'STUART'],
facies_colors=facies_colors)
make_facies_log_plot(
well_data[well_data['Well Name'] == 'CRAWFORD'],
facies_colors=facies_colors)
well_data.to_csv('well_data_with_facies.csv')
Explanation: We can use the well log plot to view the classification results along with the well logs.
End of explanation |
961 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center> Interactive generation of Bézier and B-spline curves.<br> Python functional programming implementation of the <br> de Casteljau and Cox-de Boor algorithms </center>
The aim of this IPython notebook is twofold
Step1: We define two functions that implement this recursive scheme. In both cases
we consider 2D points as tuples of float numbers and control polygons as lists of tuples.
First an imperative programming implementation of the de Casteljau algorithm
Step2: This is a typical imperative programming code
Step3: For a FP implementation of de Casteljau algorithm we use standard Python.
First we define functions that return an affine/ convex combination of two numbers, two 2D points, respectively of each pair of consecutive points in a list of 2D control points
Step4: The recursive function implementing de Casteljau scheme
Step5: A Bézier curve of control polygon ${\bf b}$ is discretized by calling the deCasteljauF function for each
parameter $t_j=j/(nr-1)$, $j=\overline{0,nr-1}$, where $nr$ is the number of points to be calculated
Step6: map(lambda s
Step7: Now we build the object C, set the axes for the curve to be generated, and choose the control points
with the left mouse button click. A right button click generates the corresponding curve
Step8: Subdividing a Bézier curve
Let $\Gamma$ be a Bézier curve of control points $({\bf b}_0, {\bf b}_1, \ldots, {\bf b}_n)$, and
$s\in (0,1)$ a parameter. Cutting (dividing) the curve at the point $p(s)$ we get two arcs of polynomial curves, which can be also expressed as Bezier
curves. The problem of finding the control polygons of the two arcs is called subdivision of the Bézier curve at $s$.
The control points ${\bf d}_r$, $r=\overline{0,n}$, of the right arc of Bezier curve are
Step9: To get the left subpolygon we exploit the invariance of a Bézier curve to reversing its control points.
The Bézier curve defined by the control points
${\bf b}_n,{\bf b}_{n-1}, \ldots, {\bf b}_0$, coincides with that defined by
${\bf b}_0,{\bf b}_{1}, \ldots, {\bf b}_n$.
If $p$ is the Bernstein parameterization of the curve defined by
${\bf b}_0,{\bf b}_{1}, \ldots, {\bf b}_n$
and $\tilde{p}$ of that defined by the reversed control polygon, ${\bf b}_n,{\bf b}_{n-1}, \ldots, {\bf b}_0$, then $p(t)=\tilde{p}(1-t)$ [Farin].
This means that the left subpolygon of the subdivision of the former curve at $s$,
is the right subpolygon resulted by dividing the latter curve at $1-s$.
Now we can define the function that returns the left and right subpolygon of a Bézier curve subdivision
Step10: Define a function to plot the subpolygons
Step11: Let us generate a Bézier curve and subdivide it at $s=0.47$
Step12: Multi-affine de Casteljau algorithm
The above de Casteljau algorithm is the classical one. There is a newer approach to define and study Bézier
curves through polarization.
Every polynomial curve of degree n, parameterized by $p
Step13: Usually we should test the concordance between the length (n+1) of the control polygon and the number of elements of the iterator
u. Since a listiterator has no length we have to count its elements. Functionally this number would get as
Step14: 2. The multi-affine de Casteljau algorithm can also be applied to redefine a subarc of a Bézier curve as a Bézier curve.
More precisely, let us assume that a Bézier curve of control points ${\bf b}_j$, $j=\overline{0,n}$,
and parameterization $p$ defined on the interval [0,1] is cut at the points corresponding to the parameters $r<s$, $r, s\in[0,1]$.
The arc between $p(r)$ and $p(s)$ is also a polynomial curve and its control points are [Gallier]
Step15: Now let us test this function
Step16: 3. The function redefineBezier can also be invoked to compute the left and right subpolygons resulted from a subdivision of a Bézier curve,
at a point corresponding to the paramater $s$. Namely the left subpolygon is returned by
redefineBezier(b, 0, s), whereas the right subpolygon by redefineBezier(b, s, 1)
B-spline curves
We give a procedural definition of a B-spline curve of degree $k$, not an analytical one.
The following data
Step17: The function functools.partial associates to a multiargument function
a partial function in the same way as in mathematics, i.e. if $f(x_1, x_2, \ldots, x_n)$ is a n-variable function
$g(x_2, \ldots, x_n)=f(a, x_2, \ldots, x_n)$, with $a$ fixed, is a partial function. More details here.
In Omegas is involved the partial function of the anonymous function
lambda r, j
Step18: Noticing that in each step $r>1$, the coefficients $\omega^r_i$ are computed with the same formula as for
$r=1$, i.e. $\omega_i^r=\displaystyle\frac{t-u_i}{u_{i+k}-u_i}$ but for the knots in the list
u[r-1
Step19: We also need a new function that calculates the convex combinations of each pair of points in a list,
with distinct coefficients given in a list alpha
Step20: The recursive Cox-de Boor formula is now implemented in the following way
Step21: To experiment the interactive generation of B-spline curves we give the restrictions on input data that lead to a closed B-spline curve
Step22: We build the control polygon and draw the corresponding closed B-spline curve
Step23: After these experiments our conclusion is that you can insert functional programming code into your imperative Python code, but trying to write pure functions could be difficult, sometimes impossible.
Writing FP-style Python code is challenging, and the resulting code is not as easy to read as imperative Python code, which is usually praised for its readability.
The next step in experimenting with FP will be to use a Python implementation of monads (see for example Pymonad or fn.py).
In this IPython Notebook we avoided raising errors and commented out the lines handling errors.
The algorithms implemented in this IPython Notebook are presented in detail in | Python Code:
from IPython.display import Image
Image(filename='Imag/Decast4p.png')
Explanation: <center> Interactive generation of Bézier and B-spline curves.<br> Python functional programming implementation of the <br> de Casteljau and Cox-de Boor algorithms </center>
The aim of this IPython notebook is twofold:
- first to illustrate the interactive generation of Bézier and B-spline curves using the matplotlib backend nbagg, and second
- to give a functional programming implementation of the basic algorithms related to these classes of curves.
Bézier and B-spline curves are widely used for interactive heuristic design of free-form curves in Geometric modeling.
Their properties are usually illustrated through interactive generation using C and OpenGL or Java applets. The new
matplotlib nbagg backend enables the interactive generation of Bézier and B-spline curves in an IPython Notebook.
Why functional programming (FP)?
Lately there has been an increasing interest in pure functional programming and an active debate on whether we can do FP in Python or not.
By Wikipedia:
Functional programming is a programming paradigm, a style of building the structure and elements of computer programs, that treats computation as the evaluation of mathematical functions and avoids changing-state and mutable data. It is a declarative programming paradigm, which means programming is done with expressions. In functional code, the output value of a function depends only on the arguments that are input to the function, so calling a function f twice with the same value for an argument x will produce the same result f(x) each time. Eliminating side effects, i.e. changes in state that do not depend on the function inputs, can make it much easier to understand and predict the behavior of a program, which is one of the key motivations for the development of functional programming.
Python is a multi-paradigm programming language: it is both an imperative and an object-oriented programming language, and it provides a few constructs for a functional programming style as well.
Here is a discussion on why Python is not very good for FP.
Instead of discussing pros and cons of Python FP we decided to start a small project and implement it as much as possible in a FP style.
Before starting let us recall a few characteristics of FP:
- functions are the building blocks of FP. They are first class objects, meaning that they are treated like any other objects. Functions can be passed as arguments to other functions (called higher order functions) or be returned by other functions.
- FP defines only pure functions, i.e. functions without side effects. Such a function acts only on its input to produce output (like mathematical functions). A pure function never interacts with the outside world (it does not perform any I/O operation or modify an instance of a class).
- a FP program consists in evaluating expressions, unlike imperative programming where programs are composed of statements which change global state when executed.
- FP avoids looping and uses recursion instead.
In this IPython Notebook we try to implement algorithms related to Bézier and B-spline curves,
using recursion, iterators, higher order functions.
We also define a class to build interactively a curve. The code for class methods will be imperative
as well as for the last function that prepares data for a closed B-spline curve.
Bézier curves. de Casteljau algorithm
A Bézier curve of degree $n$ in the $2$-dimensional space is a polynomial curve defined by an ordered set $({\bf b}_0, {\bf b}_1, \ldots, {\bf b}_n)$ of $n+1$ points, called control points. This set is also called control polygon of the Bézier curve.
Its parameterization, $p:t\in [0,1]\mapsto p(t)\in\mathbb{R}^2$, is defined by:
$$p(t)=\sum_{k=0}^n{\bf b}_k B^n_k(t), \quad t\in[0,1]$$
where $B^n_k(t)=\binom nk t^k(1-t)^{n-k}, k=0,1,\ldots, n$, are Bernstein polynomials.
To compute a point on a Bézier curve theoretically we have to evaluate the above parameterization $p$ at a parameter $t\in[0,1]$. In Geometric Modeling a more stable algorithm is used instead, namely the de Casteljau algorithm.
De Casteljau algorithm provides a procedural method to compute recursively a point, $p(t)$, on a Bezier curve.
Given the control points $({\bf b}_0, {\bf b}_1, \ldots, {\bf b}_n)$, and a parameter $t\in [0,1]$, one computes in each step $r=\overline{1,n}$ of the recursion, the points:
$$ {\bf b}_{i}^{r}(t)=(1-t)\,{\bf b}_{i}^{r-1}(t)+t\,{\bf b}_{i+1}^{r-1}(t),\quad i=\overline{0,n-r}, $$
i.e. the $i^{th}$ point ${\bf b}_{i}^{r}(t)$, from the step $r$, is a convex combination of the $i^{th}$ and $(i+1)^{th}$ point from the step $r-1$:
$$
\begin{array}{lll} {\bf b}^{r-1}_i&\stackrel{1-t}{\rightarrow}&{\bf b}^{r}_i\\
{\bf b}^{r-1}_{i+1}&\stackrel{\nearrow}{t}&\end{array}$$
The control points calculated in the intermediate steps can be displayed in a triangular array:
$$
\begin{array}{llllll}
{\bf b}_0^0 & {\bf b}_{0}^1 & {\bf b}_{0}^2& \cdots & {\bf b}_0^{n-1}& {\bf b}^n_0\\
{\bf b}_1^0 & {\bf b}_{1}^1 & {\bf b}_{1}^2& \cdots & {\bf b}_1^{n-1}& \\
{\bf b}_2^0 & {\bf b}_{2}^1 & {\bf b}_{2}^2 & \cdots & & \\
\vdots & \vdots &\vdots & & & \\
{\bf b}_{n-2}^0 & {\bf b}_{n-2}^1& {\bf b}_{n-2}^2& & & \\
{\bf b}_{n-1}^0 & {\bf b}_{n-1}^1 & & & & \\
{\bf b}_n^0 & & & & & \end{array} $$
The points ${\bf b}_i^0$ denote the given control points ${\bf b}_i$, $i=\overline{0,n}$.
The number of points reduces by 1, from a step to the next one, such that at the final step, $r=n$, we get only one point, ${\bf b}_0^n(t)$, which is the point $p(t)$
on the Bézier curve.
The image below illustrates the points computed by de Casteljau algorithm for a Bézier curve defined by 4 control points, and a fixed parameter $t\in[0,1]$:
End of explanation
import numpy as np
class InvalidInputError(Exception):
pass
def deCasteljauImp(b,t):
N=len(b)
if(N<2):
raise InvalidInputError("The control polygon must have at least two points")
a=np.copy(b) #shallow copy of the list of control points and its conversion to a numpy.array
for r in range(1,N):
a[:N-r,:]=(1-t)*a[:N-r,:]+t*a[1:N-r+1,:]# convex combinations in step r
return a[0,:]
Explanation: We define two functions that implement this recursive scheme. In both cases
we consider 2D points as tuples of float numbers and control polygons as lists of tuples.
First an imperative programming implementation of the de Casteljau algorithm:
End of explanation
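# quick sanity check with a hypothetical 4-point control polygon
b_test = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
print(deCasteljauImp(b_test, 0.5))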
def BezierCv(b, nr=200):# compute nr points on the Bezier curve of control points in list b
t=np.linspace(0, 1, nr)
return [deCasteljauImp(b, t[k]) for k in range(nr)]
Explanation: This is a typical imperative programming code: assignment statements, for looping.
Each call of this function copies the control points into a numpy array a.
The convex combinations a[i,:] = (1-t)*a[i,:] + t*a[i+1,:], $i=\overline{0,n-r}$, that must be calculated in each step $r$
are vectorized, computing convex combinations of points from two slices in the numpy.array a.
A discrete version of a Bezier curve, of nr points is computed by this function:
End of explanation
cvx=lambda x, y, t: (1-t) * x + t * y # affine/convex combination of two numbers x, y
cvxP=lambda (P, Q, t): (cvx(P[0], Q[0], t), cvx(P[1], Q[1], t))# affine/cvx comb of two points P,Q
def cvxCtrlP(ctrl, t):# affine/cvx combination of each two consecutive points in a list ctrl
# with the same coefficient t
return map(cvxP, zip(ctrl[:-1], ctrl[1:], [t]*(len(ctrl)-1)))
Explanation: For a FP implementation of de Casteljau algorithm we use standard Python.
First we define functions that return an affine/ convex combination of two numbers, two 2D points, respectively of each pair of consecutive points in a list of 2D control points:
End of explanation
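# illustrative call with hypothetical points: one de Casteljau step at t=0.5
print(cvxCtrlP([(0.0, 0.0), (2.0, 2.0), (4.0, 0.0)], 0.5))   # expected [(1.0, 1.0), (3.0, 1.0)]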
def deCasteljauF(b, t):
# de Casteljau scheme - computes the point p(t) on the Bezier curve of control polygon b
if len(b)>1:
return deCasteljauF(cvxCtrlP( b, t), t)
else:
return b[0]
Explanation: The recursive function implementing de Casteljau scheme:
End of explanation
def BezCurve(b, nr=200):
#computes nr points on the Bezier curve of control points b
return map(lambda s: deCasteljauF(b,s), map(lambda j: j*1.0/(nr-1), xrange(nr)))
Explanation: A Bézier curve of control polygon ${\bf b}$ is discretized by calling the deCasteljauF function for each
parameter $t_j=j/(nr-1)$, $j=\overline{0,nr-1}$, where $nr$ is the number of points to be calculated:
End of explanation
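# small usage sketch: only 5 points on the curve of a hypothetical control polygon
print(BezCurve([(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)], nr=5))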
%matplotlib notebook
import matplotlib.pyplot as plt
def Curve_plot(ctrl, func):
# plot the control polygon and the corresponding curve discretized by the function func
xc, yc = zip(*func(ctrl))
xd, yd = zip(*ctrl)
plt.plot(xd,yd , 'bo-', xc,yc, 'r')
class BuildB(object): #Build a Bezier/B-spline Curve
def __init__(self, xlims=[0,1], ylims=[0,1], func=BezCurve):
self.ctrl=[] # list of control points
self.xlims=xlims # limits for x coordinate of control points
self.ylims=ylims # limits for y coordinate
self.func=func # func - function that discretizes a curve defined by the control polygon ctrl
def callback(self, event): #select control points with left mouse button click
if event.button==1 and event.inaxes:
x,y = event.xdata,event.ydata
self.ctrl.append((x,y))
plt.plot(x, y, 'bo')
elif event.button==3: #press right button to plot the curve
Curve_plot(self.ctrl, self.func)
plt.draw()
else: pass
def B_ax(self):#define axes lims
fig = plt.figure(figsize=(6, 5))
ax = fig.add_subplot(111)
ax.set_xlim(self.xlims)
ax.set_ylim(self.ylims)
ax.grid('on')
ax.set_autoscale_on(False)
fig.canvas.mpl_connect('button_press_event', self.callback)
Explanation: map(lambda s: deCasteljauF(b,s), map(lambda j: j*1.0/(nr-1), xrange(nr))) means that the function
deCasteljauF with fixed list of control points, b, and the variable parameter s is mapped to the list
of parameters $t_j$ defined above. Instead of defining the list of parameters through comprehension [j*1.0/(nr-1) for j in xrange(nr)]
we created it calling the higher order function map:
map(lambda j: j*1.0/(nr-1), xrange(nr))
Obviously the FP versions of the de Casteljau algorithm and BezCurve are more compact
than the corresponding imperative versions, but they are not quite readable at the first sight.
To choose interactively the control points of a Bézier curve (and later of a B-spline curve) and to plot the curve, we set now the matplotlib nbagg backend, and define the class BuildB, below:
End of explanation
C=BuildB()
C.B_ax()
Explanation: Now we build the object C, set the axes for the curve to be generated, and choose the control points
with the left mouse button click. A right button click generates the corresponding curve:
End of explanation
def get_right_subpol(b,s, right=[]): # right is the list of control points for the right subpolygon
right.append(b[-1]) # append the last point in the list
if len(b)==1:
return right
else:
return get_right_subpol(cvxCtrlP(b,s), s, right)
Explanation: Subdividing a Bézier curve
Let $\Gamma$ be a Bézier curve of control points $({\bf b}_0, {\bf b}_1, \ldots, {\bf b}_n)$, and
$s\in (0,1)$ a parameter. Cutting (dividing) the curve at the point $p(s)$ we get two arcs of polynomial curves, which can be also expressed as Bezier
curves. The problem of finding the control polygons of the two arcs is called subdivision of the Bézier curve at $s$.
The control points ${\bf d}_r$, $r=\overline{0,n}$, of the right arc of Bezier curve are:
$${\bf d}_r={\bf b}_{n-r}^r(s),$$
where ${\bf b}^r_{n-r}(s)$ are points generated in the $r^{th}$ step of the de Casteljau algorithm from the control points
$({\bf b}_0, {\bf b}_1, \ldots, {\bf b}_n)$, and the parameter $s$ [Farin].
More precisely, the right polygon consists of the last points computed in each step, $r=0,1,\ldots, n$, of the de Casteljau algorithm (see the triangular matrix of points displayed above):
${\bf b}^0_n(s), {\bf b}^1_{n-1}(s), \ldots, {\bf b}^n_0(s)$.
The recursive function get_right_subpol returns the right subpolygon of a subdivision at parameter $s$:
End of explanation
def subdivision(b, s):
#returns the left and right subpolygon, resulted dividing the curve of control points b, at s
#if(s<=0 or s>=1):
#raise InvalidInputError('The subdivision parameter must be in the interval (0,1)')
return (get_right_subpol( b[::-1], 1-s, right=[]), get_right_subpol( b, s, right=[]))
Explanation: To get the left subpolygon we exploit the invariance of a Bézier curve to reversing its control points.
The Bézier curve defined by the control points
${\bf b}_n,{\bf b}_{n-1}, \ldots, {\bf b}_0$, coincides with that defined by
${\bf b}_0,{\bf b}_{1}, \ldots, {\bf b}_n$.
If $p$ is the Bernstein parameterization of the curve defined by
${\bf b}_0,{\bf b}_{1}, \ldots, {\bf b}_n$
and $\tilde{p}$ of that defined by the reversed control polygon, ${\bf b}_n,{\bf b}_{n-1}, \ldots, {\bf b}_0$, then $p(t)=\tilde{p}(1-t)$ [Farin].
This means that the left subpolygon of the subdivision of the former curve at $s$,
is the right subpolygon resulted by dividing the latter curve at $1-s$.
Now we can define the function that returns the left and right subpolygon of a Bézier curve subdivision:
End of explanation
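# property check with a hypothetical polygon: both subpolygons end at the splitting point p(s)
b_test = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
left_pol, right_pol = subdivision(b_test, 0.4)
print(left_pol[-1])    # the three printed points should (numerically) coincide
print(right_pol[-1])
print(deCasteljauF(b_test, 0.4))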
def plot_polygon(pol, ch):# plot a control polygon computed by an algorithm from a Bezier curve
plt.plot(zip(*pol)[0], zip(*pol)[1], linestyle='-', marker='o', color=ch)
Explanation: Define a function to plot the subpolygons:
End of explanation
cv=BuildB()
cv.B_ax()
left, right=subdivision(cv.ctrl, 0.47)
plot_polygon(left, 'g')
plot_polygon(right, 'm')
Explanation: Let us generate a Bézier curve and subdivide it at $s=0.47$:
End of explanation
def deCasteljauAff(b, r, s, u): #multi-affine de Casteljau algorithm
# b is the list of control points b_0, b_1, ..., b_n
# [r,s] is a subinterval
# u is the iterator associated to the polar bag [u_0, u_1, u_{n-1}], n=len(b)-1
if len(b)>1:
return deCasteljauAff(cvxCtrlP( b, (u.next()-r)/(s-r)), r, s, u)
else: return b[0]
Explanation: Multi-affine de Casteljau algorithm
The above de Casteljau algorithm is the classical one. There is a newer approach to define and study Bézier
curves through polarization.
Every polynomial curve of degree n, parameterized by $p:[a,b]\to\mathbb{R}^2$ defines a symmetric multiaffine
map $g:\underbrace{[a,b]\times[a,b]\times[a,b]}{n}\to \mathbb{R}^d$, such that
$p(t)=g(\underbrace{t,t,\ldots,t }{n})$, for every $t\in[a,b]$.
$g$ is called the polar form of the polynomial curve. An argument $(u_1, u_2, \ldots, u_n)$ of the polar form $g$ is called polar bag.
If $p(t)$ is the Bernstein parameterization of a Bezier curve of control points
${\bf b}0,{\bf b}{1}, \ldots, {\bf b}_n$, and $g:[0,1]^n\to\mathbb{R}^2$ its polar form, then the control points are related to $g$ as follows [Gallier]:
$${\bf b}j=g(\underbrace{0, 0, \ldots, 0}{n-j}, \underbrace{1,1, \ldots, 1}_{j}), \quad j=0,1, \ldots, n$$
This relationship allows to define Bézier curves by a parameterization $P$ defined not only on the interval $[0,1]$ but also on an arbitrary interval $[r,s]$. Namely, given a symmetric multiaffine map $g:[r,s]^n\to\mathbb{R}^2$
the associated polynomial curve $P(t)=g(\underbrace{t,t,\ldots, t}_{n})$, $t\in[r,s]$, expressed as a Bézier curve has the control points defined by [Gallier]:
$${\bf b}_j=g(\underbrace{r, r, \ldots, r}_{n-j}, \underbrace{s,s, \ldots, s}_{j}), \quad j=0,1, \ldots, n$$
Given the control points ${\bf b}_0,{\bf b}_{1}, \ldots, {\bf b}_n$ of a Bézier curve, and a polar bag $(u_1, u_2, \ldots, u_n)\in[r,s]^n$ the multi-affine de Casteljau algorithm
evaluates the corresponding multi-affine map, $g$, at this polar bag through a recursive formula similar to the classical
de Casteljau formula:
$${\bf b}^k_i= \displaystyle\frac{s-u_k}{s-r}\:{\bf b}_i^{k-1}+ \displaystyle\frac{u_k-r}{s-r}\:{\bf b}_{i+1}^{k-1}, \quad k=\overline{1,n}, i=\overline{0,n-k}$$
The point $b_0^n$ computed in the last step is the polar value, $g(u_1,u_2, \ldots, u_n)$.
Unlike the classical de Casteljau formula, where in each step the parameter for the convex combination
is the same, here in each step $k$ the parameter involved in convex combinations changes, namely it is
$\displaystyle\frac{u_k-r}{s-r}$.
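To make the scheme concrete, here is a self-contained worked example (not part of the original notebook) for a quadratic ($n=2$) on $[r,s]=[0,1]$; the control points and parameter values are arbitrary illustrations:
```
# one step per coordinate of the polar bag: step k uses u_k
b0, b1, b2 = (0.0, 0.0), (1.0, 2.0), (3.0, 0.0)
cvx = lambda P, Q, a: ((1 - a) * P[0] + a * Q[0], (1 - a) * P[1] + a * Q[1])

def g(u1, u2):
    p0, p1 = cvx(b0, b1, u1), cvx(b1, b2, u1)   # step k=1 uses u_1
    return cvx(p0, p1, u2)                      # step k=2 uses u_2

print(g(0.2, 0.7), g(0.7, 0.2))   # symmetry: the two polar values agree (up to rounding)
print(g(0.5, 0.5))                # g(t, t) is the point p(t) on the curve
```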
In order to define a recursive function to implement this scheme
we consider the polar bag as an iterator associated to the list containing its coordinates, i.e. if L=[u[0], u[1], ..., u[n-1]] is the list, iter(L) is its iterator:
End of explanation
ct=BuildB()
ct.B_ax()
n=len(ct.ctrl)-1
t=0.45
u=[t]*(n-1)+[0]
v=[t]*(n-1)+[1]
A=deCasteljauAff(ct.ctrl, 0,1, iter(u))
B=deCasteljauAff(ct.ctrl, 0,1, iter(v))
plt.plot([A[0], B[0]], [A[1], B[1]], 'g')
Explanation: Usually we should check that the number of elements of the iterator u agrees with the length (n+1) of the control polygon.
Since a listiterator has no length, we have to count its elements; functionally, this count could be obtained as:
len(map(lambda item: item, u))
What characteristic elements of a Bézier curve can the multi-affine de Casteljau algorithm compute?
1. The direction of the tangent to a Bézier curve of degree $n$, at a point $p(t)$, is defined by the vector
$\overrightarrow{{\bf b}^{n-1}_0{\bf b}^{n-1}_1}$. The end points of this vector are the points computed in
the $(n-1)^{th}$
step of the classical de Casteljau scheme. On the other hand, these points are polar values of the corresponding
multiaffine map $g$ [Gallier], namely:
$${\bf b}^{n-1}_0=g(\underbrace{t,t,\ldots, t}_{n-1}, 0), \quad {\bf b}^{n-1}_1=g(\underbrace{t,t,\ldots, t}_{n-1}, 1)$$
Thus they can be computed by the function deCasteljauAff.
Let us generate a Bézier curve and draw the tangent at the point corresponding to $t=0.45$:
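The driving code above computes exactly these two polar values. As an additional self-contained check (illustrative points, independent of the code above) that their difference really carries the tangent direction, one can compare it with the analytic derivative of a low-degree curve:
```
b0, b1, b2 = (0.0, 0.0), (2.0, 3.0), (4.0, 0.0)   # a quadratic example
t = 0.45
g_t0 = tuple((1 - t) * p + t * q for p, q in zip(b0, b1))   # g(t, 0) = b^{n-1}_0
g_t1 = tuple((1 - t) * p + t * q for p, q in zip(b1, b2))   # g(t, 1) = b^{n-1}_1
dp   = tuple(2 * ((1 - t) * (q - p) + t * (r - q))          # p'(t), componentwise
             for p, q, r in zip(b0, b1, b2))
print([y - x for x, y in zip(g_t0, g_t1)], dp)   # the derivative is n times the difference
```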
End of explanation
def polar_bags(r,s, n):
return map(lambda j: [r]*(n-j)+[s]*j, xrange(n+1))
r=0.3
s=0.67
n=3
L=polar_bags(r, s, n)
print L
def redefineBezier(b, r, s):
# returns the control polygon for the subarc of ends p(r), p(s)
#of a Bezier curve defined by control polygon b
#if(r<0 or s>1 or r>s or s-r<0.1):
        #raise InvalidInputError('inappropriate interval ends')
return map(lambda u: deCasteljauAff(b, 0, 1, iter(u)), polar_bags(r,s, len(b)-1))
Explanation: 2. The multi-affine de Casteljau algorithm can also be applied to redefine a subarc of a Bézier curve as a Bézier curve.
More precisely, let us assume that a Bézier curve of control points ${\bf b}_j$, $j=\overline{0,n}$,
and parameterization $p$ defined on the interval [0,1] is cut at the points corresponding to the parameters $r<s$, $r, s\in[0,1]$.
The arc between $p(r)$ and $p(s)$ is also a polynomial curve and its control points are [Gallier]:
$${\bf c}_j=g(\underbrace{r,r,\ldots, r}_{n-j}, \underbrace{s,s,\ldots, s}_{j}),\quad j=\overline{0,n}$$
where $g$ is the polar form of the initial curve.
The function polar_bags below defines the list of polar bags involved in computation of the control points ${\bf c}_j$:
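For instance, for a quadratic arc ($n=2$) the formula above gives the three control points
$${\bf c}_0=g(r,r)=p(r),\qquad {\bf c}_1=g(r,s),\qquad {\bf c}_2=g(s,s)=p(s),$$
so the new polygon interpolates the two cut points and has the mixed polar value $g(r,s)$ as its middle control point.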
End of explanation
Bez=BuildB()
Bez.B_ax()
br=redefineBezier(Bez.ctrl, 0.3, 0.67)
plot_polygon(br, 'g')
Explanation: Now let us test this function:
End of explanation
from functools import partial
def Omegas(u, k, t):
# compute a list of lists
#an inner list contains the values $\omega_j^r(t)$ values from a step r
#if (len(u)!=2*k+1):
#raise InvalidInputError('the list u must have length 2k+1')
return map(lambda r: map(partial(lambda r, j: (t-u[j])/(u[j+k-r+1]-u[j]), r), \
xrange(r,k+1)), xrange(1, k+1))
Explanation: 3. The function redefineBezier can also be invoked to compute the left and right subpolygons resulting from a subdivision of a Bézier curve
at a point corresponding to the parameter $s$. Namely, the left subpolygon is returned by
redefineBezier(b, 0, s), whereas the right subpolygon by redefineBezier(b, s, 1).
B-spline curves
We give a procedural definition of a B-spline curve of degree $k$, not an analytical one.
The following data:
- an interval $[a,b]$;
- an integer $k\geq 1$;
- a sequence of knots:
$$u_0 = u_1 = \cdots = u_k< u_{k+1} \leq \cdots \leq u_{m-k-1} < u_{m-k} = \cdots = u_{m-1} = u_m$$
with $u_0 = a, u_m = b$, $m-k > k$, and each knot $u_{k+1}, \ldots, u_{m-k-1}$ of multiplicity at most $k$;
- the control points ${\bf d}_0, {\bf d}_1, \ldots, {\bf d}_{m-k-1}$, called de Boor points;
define a curve $s:[a,b]\to\mathbb{R}^2$, such that for each $t$ in an interval $[u_J, u_{J+1}]$, with
$u_J<u_{J+1}$, $s(t)$ is computed in the last step of the Cox-de Boor recursive formula [Gallier]:
$$\begin{array}{lll}
{\bf d}^0_i&=&{\bf d}_i,\quad i=\overline{J-k,J}\\
{\bf d}_i^r(t)&=&(1-\omega_i^r(t))\,{\bf d}_{i-1}^{r-1}(t)+\omega_i^r(t)\,{\bf d}_{i}^{r-1}(t),\,\,
r=\overline{1, k}, i=\overline{J-k, J}\end{array}$$
with $\omega_i^r(t)=\displaystyle\frac{t-u_{i}}{u_{i+k-r+1}-u_{i}}$
The curve defined in this way is called B-spline curve. It is a piecewise polynomial curve of degree
at most $k$, i.e. it is a polynomial curve on each nondegenerate interval $[u_J, u_{J+1}]$, and at each knot $u_J$ of multiplicity $1\leq p\leq k$, it is of class $C^{k-p}$.
The points computed in the steps $r=\overline{1,k}$ of the recursion can be written in a lower triangular matrix:
$$
\begin{array}{ccccc}
{\bf d}^0_{0}& & & & \\
{\bf d}^0_{1} & {\bf d}_{1}^1& & & \\
\vdots & \vdots & & & \\
{\bf d}^0_{k-1} & {\bf d}_{k-1}^1 &\ldots & {\bf d}_{k-1}^{k-1}& \\
{\bf d}^0_{k}& {\bf d}_{k}^1 &\ldots & {\bf d}_{k}^{k-1} &{\bf d}_{k}^{k}
\end{array}
$$
Unlike the classical de Casteljau or the multi-affine de Casteljau scheme, in the de Boor-formula
for each pair of consecutive points ${\bf d}^{r-1}_{i-1}, {\bf d}_i^{r-1}$, one computes a convex combination with the
coefficient $\omega_i^r$ depending both on $r$ and $i$.
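A self-contained worked example of the recursion (not part of the original notebook): a quadratic arc, $k=2$, with $t\in[u_2,u_3]$; the knots and de Boor points are arbitrary illustrations.
```
u = [0.0, 1.0, 2.0, 3.0, 4.0]               # u_0..u_{2k}
d = [(0.0, 0.0), (2.0, 2.0), (4.0, 0.0)]    # d_{J-k}..d_J with J = k = 2
k, J, t = 2, 2, 2.4

pts = {(0, i): d[i - (J - k)] for i in range(J - k, J + 1)}
for r in range(1, k + 1):
    for i in range(J - k + r, J + 1):
        w = (t - u[i]) / (u[i + k - r + 1] - u[i])
        pts[(r, i)] = tuple((1 - w) * a + w * b
                            for a, b in zip(pts[(r - 1, i - 1)], pts[(r - 1, i)]))
print(pts[(k, J)])   # the point s(2.4) on this arc
```
For these inputs the DeBoor function defined below should return the same point, assuming its cvxP helper computes $(1-\alpha)P+\alpha Q$.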
In a first attempt to write a recursive function that implements the Cox-de Boor formula we defined an
auxiliary function Omegas, that returns a list of lists. An inner list contains the elements $\omega^r_i$, involved in a step of the recursive formula.
Using list comprehension the function Omegas returns:
[[(t-u[j])/(u[j+k-r+1]-u[j]) for j in range(r, k+1) ] for r in range(1,k+1)]
Although later we chose another solution, we give this function as an interesting example of FP code (avoiding list comprehension):
End of explanation
k=3
u=[j for j in range(7)]
LL=Omegas( u, k, 3.5)
print LL
Explanation: The function functools.partial associates to a multiargument function
a partial function in the same way as in mathematics, i.e. if $f(x_1, x_2, \ldots, x_n)$ is an n-variable function,
then $g(x_2, \ldots, x_n)=f(a, x_2, \ldots, x_n)$, with $a$ fixed, is a partial function. More details here.
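For instance, a minimal illustration of partial with a hypothetical function (not part of the notebook's code):
```
from functools import partial

def f(x, y, z):
    return x + 10 * y + 100 * z

g = partial(f, 5)    # freeze x = 5
print(g(2, 3))       # 5 + 10*2 + 100*3 = 325
```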
In Omegas is involved the partial function of the anonymous function
lambda r, j: (t-u[j])/(u[j+k-r+1]-u[j])
defined by freezing r.
Let us test it:
End of explanation
def omega(u, k, t):
# defines the list of coefficients for the convex combinations performed in a step of de Boor algo
#if (len(u)!=2*k+1 or t<u[k] or t>u[k+1]):
#raise InvalidInputError('the list u has not the length 2k+1 or t isn't within right interval')
return map(lambda j: (t-u[j]) / (u[j+k]-u[j]), xrange(1,k+1))
Explanation: Noticing that in each step $r>1$ the coefficients $\omega^r_i$ are computed with the same formula as for
$r=1$, i.e. $\omega_i^r=\displaystyle\frac{t-u_i}{u_{i+k}-u_i}$, but for the knots in the sub-list
u[r-1 : len(u)-(r-1)] and with $k$ replaced by $k-r+1$, we
define the function omega below, instead of Omegas:
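A quick consistency check (assuming the cells above have been run; the notebook targets Python 2, where map returns a list): row $r$ of Omegas should coincide with omega applied to the corresponding knot sub-list of length $2(k-r+1)+1$.
```
k, t = 3, 3.5
u = list(range(2 * k + 1))
rows = [list(row) for row in Omegas(u, k, t)]
print([rows[r - 1] == list(omega(u[r - 1: len(u) - (r - 1)], k - r + 1, t))
       for r in range(1, k + 1)])   # expected: [True, True, True]
```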
End of explanation
def cvxList(d, alpha):
#len(d)=len(alpha)+1
return map(cvxP, zip(d[:-1], d[1:], alpha))
Explanation: We also need a new function that calculates the convex combinations of each pair of points in a list,
with distinct coefficients given in a list alpha:
End of explanation
def DeBoor(d, u, k, t):
#len(d) must be (k+1) and len(u)=2*k+1:
# this algorithm evaluates a point c(t) on an arc of B-spline curve
# of degree k, defined by:
# u_0<=u_1<=... u_k<u_{k+1}<=...u_{2k} the sequence of knots
# d_0, d_1, ... d_k de Boor points
if(len(d)==1):
return d[0]
else:
return DeBoor(cvxList(d, omega(u, k,t)), u[1:-1], k-1,t)
Explanation: The recursive Cox-de Boor formula is now implemented in the following way:
End of explanation
def Bspline(d, k=3, N=100):
L=len(d)-1
n=L+k+1
#extend the control polygon
d+=[d[j] for j in range (k+1)]
#make uniform knots
u=np.arange(n+k+1)
# define the period T
T=u[n]-u[k]
#extend the sequence of knots
u[:k]=u[n-k:n]-T
u[n+1:n+k+1]=u[k+1:2*k+1]+T
u=list(u)
curve=[]#the list of points to be computed on the closed B-spline curve
for J in range(k, n+1):
if u[J]<u[J+1]:
t=np.linspace(u[J], u[J+1], N)
else: continue
curve+=[DeBoor(d[J-k:J+1], u[J-k:J+k+1], k, t[j]) for j in range (N) ]
return curve
Explanation: To experiment with the interactive generation of B-spline curves, we give the restrictions on the input data that lead to a closed B-spline curve:
A B-spline curve of degree k, defined on an interval $[a,b]$ by the following data:
- de Boor control polygon:
$({\bf d}_0, \ldots, {\bf d}_{n-k},\ldots,{\bf d}_{n-1})$ such that
$$\begin{array}{llllllll}
{\bf d}_0,&{\bf d}_1,&\ldots,& {\bf d}_{n-k-1}, & {\bf d}_{n-k}, & {\bf d}_{n-k+1}, & \ldots& {\bf d}_{n-1}\\
& & & & \shortparallel &\shortparallel & &\shortparallel\\
& & & & {\bf d}_0 & {\bf d}_1
&\ldots& {\bf d}_{k-1}\end{array}$$
- a knot sequence obtained by extending, with period $T=b-a$, the sequence of knots $a=u_k\leq\cdots\leq u_n=b$ ($n\geq 2k$):
$$\begin{array}{lll}u_{k-j}&=&u_{n-j}-T\\ u_{n+j}&=&u_{k+j}+T\\ & & j=1,\ldots, k\end{array}$$
is a closed B-spline curve.
In our implementation we take a uniform sequence of knots of the interval $[a,b]$.
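A small illustration (not part of the original notebook) of what the periodic knot extension looks like for uniform knots; the values $k=3$, $n=9$ are chosen only for the example:
```
import numpy as np

k, n = 3, 9
u = np.zeros(n + k + 1)
u[k:n + 1] = np.arange(n - k + 1)   # interior knots u_k..u_n = 0, 1, ..., 6
T = u[n] - u[k]
for j in range(1, k + 1):
    u[k - j] = u[n - j] - T
    u[n + j] = u[k + j] + T
print(u)   # for uniform knots: [-3. -2. -1.  0.  1. ...  6.  7.  8.  9.]
```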
The de Boor points are chosen interactively with the left mouse button click within the plot window.
A point is a tuple of real numbers, and the control polygon is a list of tuples.
The next function constructs the periodic sequence of knots and the de Boor polygon so as to
generate, from the interactively chosen control points, a closed B-spline curve of degree $3$ that is
of class $C^{2}$ at each knot.
The function's code is imperative:
End of explanation
D=BuildB(func=Bspline)
D.B_ax()
Explanation: We build the control polygon and draw the corresponding closed B-spline curve:
End of explanation
from IPython.core.display import HTML
def css_styling():
styles = open("./custom.css", "r").read()
return HTML(styles)
css_styling()
Explanation: After these experiments, our conclusion is that you can insert functional programming code into your imperative Python code, but trying to write pure functions can be difficult, sometimes impossible.
Writing FP-style Python code is challenging, and the resulting code is not as easy to read as imperative Python code, which is known for its readability.
The next step in experimenting with FP will be to use a Python implementation of monads (see for example PyMonad or fn.py).
In this IPython Notebook we avoided raising errors and commented out the lines that handle errors.
The algorithms implemented in this IPython Notebook are presented in detail in:
1. G Farin, Curves and Surfaces for Computer Aided Geometric Design: A Practical Guide, Morgan Kaufmann, 2002.
2. J. Gallier, Curves and Surfaces in Geometric Modeling: Theory and Algorithms, Morgan Kaufmann, 1999. Free electronic version can be downloaded here.
2/10/2015
End of explanation |
962 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gaussian Processes
(c) 2016 by Chris Fonnesbeck
Example of simple GP fit, adapted from Stan's example-models repository.
Step1: This is what our initial covariance matrix looks like. Intuitively, every data point's Y-value correlates with points according to their squared distances.
Step2: The following generates predictions from the GP model in a grid of values
Step3: Sample from the posterior GP | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
import pymc3 as pm
from pymc3 import Model, MvNormal, HalfCauchy, sample, traceplot, summary, find_MAP, NUTS, Deterministic
import theano.tensor as T
from theano import shared
from theano.tensor.nlinalg import matrix_inverse
x = np.array([-5, -4.9, -4.8, -4.7, -4.6, -4.5, -4.4, -4.3, -4.2, -4.1, -4,
-3.9, -3.8, -3.7, -3.6, -3.5, -3.4, -3.3, -3.2, -3.1, -3, -2.9,
-2.8, -2.7, -2.6, -2.5, -2.4, -2.3, -2.2, -2.1, -2, -1.9, -1.8,
-1.7, -1.6, -1.5, -1.4, -1.3, -1.2, -1.1, -1, -0.9, -0.8, -0.7,
-0.6, -0.5, -0.4, -0.3, -0.2, -0.1, 0, 0.1, 0.2, 0.3, 0.4, 0.5,
0.6, 0.7, 0.8, 0.9, 1, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8,
1.9, 2, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3, 3.1,
3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4, 4.1, 4.2, 4.3, 4.4,
4.5, 4.6, 4.7, 4.8, 4.9, 5])
y = np.array([1.04442478194401, 0.948306088493654, 0.357037759697332, 0.492336514646604,
0.520651364364746, 0.112629866592809, 0.470995468454158, -0.168442254267804,
0.0720344402575861, -0.188108980535916, -0.0160163306512027,
-0.0388792158617705, -0.0600673630622568, 0.113568725264636,
0.447160403837629, 0.664421188556779, -0.139510743820276, 0.458823971660986,
0.141214654640904, -0.286957663528091, -0.466537724021695, -0.308185884317105,
-1.57664872694079, -1.44463024170082, -1.51206214603847, -1.49393593601901,
-2.02292464164487, -1.57047488853653, -1.22973445533419, -1.51502367058357,
-1.41493587255224, -1.10140254663611, -0.591866485375275, -1.08781838696462,
-0.800375653733931, -1.00764767602679, -0.0471028950122742, -0.536820626879737,
-0.151688056391446, -0.176771681318393, -0.240094952335518, -1.16827876746502,
-0.493597351974992, -0.831683011472805, -0.152347043914137, 0.0190364158178343,
-1.09355955218051, -0.328157917911376, -0.585575679802941, -0.472837120425201,
-0.503633622750049, -0.0124446353828312, -0.465529814250314,
-0.101621725887347, -0.26988462590405, 0.398726664193302, 0.113805181040188,
0.331353802465398, 0.383592361618461, 0.431647298655434, 0.580036473774238,
0.830404669466897, 1.17919105883462, 0.871037583886711, 1.12290553424174,
0.752564860804382, 0.76897960270623, 1.14738839410786, 0.773151715269892,
0.700611498974798, 0.0412951045437818, 0.303526087747629, -0.139399513324585,
-0.862987735433697, -1.23399179134008, -1.58924289116396, -1.35105117911049,
-0.990144529089174, -1.91175364127672, -1.31836236129543, -1.65955735224704,
-1.83516148300526, -2.03817062501248, -1.66764011409214, -0.552154350554687,
-0.547807883952654, -0.905389222477036, -0.737156477425302, -0.40211249920415,
0.129669958952991, 0.271142753510592, 0.176311762529962, 0.283580281859344,
0.635808289696458, 1.69976647982837, 1.10748978734239, 0.365412229181044,
0.788821368082444, 0.879731888124867, 1.02180766619069, 0.551526067300283])
N = len(y)
squared_distance = lambda x, y: np.array([[(x[i] - y[j])**2 for i in range(len(x))] for j in range(len(y))])
with Model() as gp_fit:
μ = np.zeros(N)
η_sq = HalfCauchy('η_sq', 5)
ρ_sq = HalfCauchy('ρ_sq', 5)
σ_sq = HalfCauchy('σ_sq', 5)
D = squared_distance(x, x)
# Squared exponential
Σ = T.fill_diagonal(η_sq * T.exp(-ρ_sq * D), η_sq + σ_sq)
obs = MvNormal('obs', μ, Σ, observed=y)
Explanation: Gaussian Processes
(c) 2016 by Chris Fonnesbeck
Example of simple GP fit, adapted from Stan's example-models repository.
End of explanation
sns.heatmap(Σ.tag.test_value, xticklabels=False, yticklabels=False)
Explanation: This is what our initial covariance matrix looks like. Intuitively, every data point's Y-value correlates with points according to their squared distances.
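In symbols, the covariance constructed in the model block is the squared-exponential kernel with noise added on the diagonal (using the code's variable names, where η_sq, ρ_sq and σ_sq stand for $\eta^2$, $\rho^2$ and $\sigma^2$):
$$\Sigma_{ij}=\eta^2\exp\!\big(-\rho^2\,(x_i-x_j)^2\big)+\sigma^2\,\delta_{ij}.$$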
End of explanation
with gp_fit:
# Prediction over grid
xgrid = np.linspace(-6, 6)
D_pred = squared_distance(xgrid, xgrid)
D_off_diag = squared_distance(x, xgrid)
# Covariance matrices for prediction
Σ_pred = η_sq * T.exp(-ρ_sq * D_pred)
Σ_off_diag = η_sq * T.exp(-ρ_sq * D_off_diag)
# Posterior mean
μ_post = Deterministic('μ_post', T.dot(T.dot(Σ_off_diag, matrix_inverse(Σ)), y))
# Posterior covariance
Σ_post = Deterministic('Σ_post', Σ_pred - T.dot(T.dot(Σ_off_diag, matrix_inverse(Σ)), Σ_off_diag.T))
with gp_fit:
gp_trace = pm.variational.svgd(n=300, n_particles=50)
traceplot(gp_trace, varnames=['η_sq', 'ρ_sq', 'σ_sq']);
Explanation: The following generates predictions from the GP model in a grid of values:
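The two Deterministic quantities above are the standard Gaussian-process conditioning formulas: writing $K$ for the training covariance (Σ above, which already contains the observation noise on its diagonal), $K_*$ for the grid-to-training cross-covariance (Σ_off_diag) and $K_{**}$ for the grid covariance (Σ_pred),
$$\mu_*=K_*\,K^{-1}y,\qquad \Sigma_*=K_{**}-K_*\,K^{-1}K_*^{\top}.$$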
End of explanation
y_pred = [np.random.multivariate_normal(m, S) for m, S in zip(gp_trace['μ_post'], gp_trace['Σ_post'])]
for yp in y_pred:
plt.plot(np.linspace(-6, 6), yp, 'c-', alpha=0.1);
plt.plot(x, y, 'r.')
Explanation: Sample from the posterior GP
End of explanation |
963 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The simplest native coroutine demo
(that I could imagine)
Step1: The driving code starts here
Step2: A slightly more interesting demo
Now the generator-coroutine yields 3 times.
Step3: Driving code
Step4: A generator-coroutine that receives values
The driving code can send values other than None.
Step5: Driving code must prime the coroutine by sending None initially
Step6: To retrieve the last result, we must catch StopIteration and get its value attribute | Python Code:
import types
@types.coroutine
def gen():
yield 42
async def delegating():
await gen()
Explanation: The simplest native coroutine demo
(that I could imagine)
End of explanation
coro = delegating()
coro
coro.send(None)
# coro.send(None) # --> StopIteration
Explanation: The driving code starts here:
End of explanation
@types.coroutine
def gen123():
return (i for i in range(1, 4))
async def delegating():
await gen123()
Explanation: A slightly more interesting demo
Now the generator-coroutine yields 3 times.
End of explanation
coro = delegating()
coro.send(None)
coro.send(None)
coro.send(None)
# coro.send(None) # --> StopIteration
# coro.send(None) # --> RuntimeError
Explanation: Driving code:
End of explanation
import types
@types.coroutine
def times10(terms):
n = yield 'Ready to begin!'
for _ in range(terms):
n = yield n * 10
return n * 10
async def delegating(terms):
res = await times10(terms)
return res
Explanation: A generator-coroutine that receives values
The driving code can send values other than None.
End of explanation
coro = delegating(3)
coro.send(None)
coro.send(5)
coro.send(6)
coro.send(7)
Explanation: Driving code must prime the coroutine by sending None initially:
End of explanation
try:
coro.send(8)
except StopIteration as e:
res = e.value
res
Explanation: To retrieve the last result, we must catch StopIteration and get its value attribute:
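This send/StopIteration protocol is exactly what an event loop automates. A minimal sketch of such a driver (not part of the original notebook; it assumes the cells above have been run):
```
def run(coro, inputs):
    coro.send(None)            # prime the coroutine
    try:
        for value in inputs:
            coro.send(value)
    except StopIteration as e:
        return e.value

print(run(delegating(3), [5, 6, 7, 8]))   # 80
```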
End of explanation |
964 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'niwa', 'sandbox-3', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: NIWA
Source ID: SANDBOX-3
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:30
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnotic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Decribe transport scheme if different than that of ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are speficied from boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are speficied from explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particules
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
965 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center> Doing Math with Python </center>
<center>
<p> <b>Amit Saha</b>
<p>May 29, PyCon US 2016 Education Summit
<p>Portland, Oregon
</center>
## About me
- Software Engineer at [Freelancer.com](https
Step1: (Main) Tools
<img align="center" src="collage/logo_collage.png"></img>
Python - a scientific calculator
Python 3 is my favorite calculator (not Python 2 because 1/2 = 0)
fabs(), abs(), sin(), cos(), gcd(), log() (See math)
Descriptive statistics (See statistics)
Python - a scientific calculator
Develop your own functions
Step2: Python - Making other subjects more lively
<img align="center" src="collage/collage1.png"></img>
matplotlib
basemap
Interactive Jupyter Notebooks
Bringing Science to life
Animation of a Projectile motion
Drawing fractals
Interactively drawing a Barnsley Fern
The world is your graph paper
Showing places on a digital map
Great base for the future
Statistics and Graphing data -> Data Science
Differential Calculus -> Machine learning
Application of differentiation
Use gradient descent to find a function's minimum value
Predict the college admission score based on high school math score
Use gradient descent as the optimizer for single variable linear regression model | Python Code:
As I will attempt to describe in the next slides, Python is an amazing way to make learning and teaching
more fun.
It can be a basic calculator, a fancy calculator, and a way to bring
Math, Science, Geography and more to life.
Tools that will help us in that quest are:
Explanation: <center> Doing Math with Python </center>
<center>
<p> <b>Amit Saha</b>
<p>May 29, PyCon US 2016 Education Summit
<p>Portland, Oregon
</center>
## About me
- Software Engineer at [Freelancer.com](https://www.freelancer.com) HQ in Sydney, Australia
- Author of "Doing Math with Python" (No Starch Press, 2015)
- Writes for Linux Voice, Linux Journal, etc.
- [Blog](http://echorand.me), [GitHub](http://github.com/amitsaha)
#### Contact
- [@echorand](http://twitter.com/echorand)
- [Email](mailto:[email protected])
### This talk - a proposal, a hypothesis, a statement
*Python can lead to a more enriching learning and teaching experience in the classroom*
End of explanation
When you bring SymPy into the picture, things really get awesome. You are suddenly writing computer
programs which are capable of speaking algebra. You are no longer limited to numbers.
# Create graphs from algebraic expressions
from sympy import Symbol, plot
x = Symbol('x')
p = plot(2*x**2 + 2*x + 2)
# Solve equations
from sympy import solve, Symbol
x = Symbol('x')
solve(2*x + 1)
# Limits
from sympy import Symbol, Limit, sin
x = Symbol('x')
Limit(sin(x)/x, x, 0).doit()
# Derivative
from sympy import Symbol, Derivative, sin, init_printing
x = Symbol('x')
init_printing()
Derivative(sin(x)**(2*x+1), x).doit()
# Indefinite integral
from sympy import Symbol, Integral, sqrt, sin, init_printing
x = Symbol('x')
init_printing()
Integral(sqrt(x)).doit()
# Definite integral
from sympy import Symbol, Integral, sqrt
x = Symbol('x')
Integral(sqrt(x), (x, 0, 2)).doit()
Explanation: (Main) Tools
<img align="center" src="collage/logo_collage.png"></img>
Python - a scientific calculator
Python 3 is my favorite calculator (not Python 2 because 1/2 = 0)
fabs(), abs(), sin(), cos(), gcd(), log() (See math)
Descriptive statistics (See statistics)
Python - a scientific calculator
Develop your own functions: unit conversion, finding correlation, .., anything really
Use PYTHONSTARTUP to extend the battery of readily available mathematical functions
$ PYTHONSTARTUP=~/work/dmwp/pycon-us-2016/startup_math.py idle3 -s
Unit conversion functions
```
unit_conversion()
1. Kilometers to Miles
2. Miles to Kilometers
3. Kilograms to Pounds
4. Pounds to Kilograms
5. Celsius to Fahrenheit
6. Fahrenheit to Celsius
Which conversion would you like to do? 6
Enter temperature in fahrenheit: 98
Temperature in celsius: 36.66666666666667
```
Finding linear correlation
```
x = [1, 2, 3, 4]
y = [2, 4, 6.1, 7.9]
find_corr_x_y(x, y)
0.9995411791453812
```
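A Pearson-correlation helper such as find_corr_x_y could be written along the following lines (a sketch for illustration; the startup file's actual implementation may differ):
```
def find_corr_x_y(x, y):
    # Pearson correlation coefficient of two equal-length sequences
    n = len(x)
    sum_x, sum_y = sum(x), sum(y)
    sum_xy = sum(xi * yi for xi, yi in zip(x, y))
    sum_x2 = sum(xi ** 2 for xi in x)
    sum_y2 = sum(yi ** 2 for yi in y)
    num = n * sum_xy - sum_x * sum_y
    den = ((n * sum_x2 - sum_x ** 2) * (n * sum_y2 - sum_y ** 2)) ** 0.5
    return num / den
```
For the session above, find_corr_x_y([1, 2, 3, 4], [2, 4, 6.1, 7.9]) evaluates to roughly 0.9995, matching the value shown.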
Python - a really fancy calculator
SymPy - a pure Python symbolic math library
from sympy import awesomeness - don't try that :)
End of explanation
### TODO: digit recognition using Neural networks
### Scikitlearn, pandas, scipy, statsmodel
Explanation: Python - Making other subjects more lively
<img align="center" src="collage/collage1.png"></img>
matplotlib
basemap
Interactive Jupyter Notebooks
Bringing Science to life
Animation of a Projectile motion
Drawing fractals
Interactively drawing a Barnsley Fern
The world is your graph paper
Showing places on a digital map
Great base for the future
Statistics and Graphing data -> Data Science
Differential Calculus -> Machine learning
Application of differentiation
Use gradient descent to find a function's minimum value
Predict the college admission score based on high school math score
Use gradient descent as the optimizer for single variable linear regression model
End of explanation |
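As a hint of what the gradient descent bullets above involve, here is a minimal, hypothetical sketch of gradient descent finding the minimum of a single-variable function (not the workshop's actual code):

```python
# Minimal gradient descent sketch: minimize f(x) = (x - 3)**2, whose minimum is at x = 3.
def gradient_descent(df, x0, step_size=0.1, iterations=100):
    x = x0
    for _ in range(iterations):
        x -= step_size * df(x)  # move against the gradient
    return x

df = lambda x: 2 * (x - 3)  # derivative of (x - 3)**2
print(gradient_descent(df, x0=0.0))  # converges towards 3.0
```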
966 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Brainstorm CTF phantom dataset tutorial
Here we compute the evoked from raw for the Brainstorm CTF phantom
tutorial dataset. For comparison, see [1]_ and
Step1: The data were collected with a CTF system at 2400 Hz.
Step2: The sinusoidal signal is generated on channel HDAC006, so we can use
that to obtain precise timing.
Step3: Let's create some events using this signal by thresholding the sinusoid.
Step4: The CTF software compensation works reasonably well
Step5: But here we can get slightly better noise suppression, lower localization
bias, and a better dipole goodness of fit with spatio-temporal (tSSS)
Maxwell filtering
Step6: Our choice of tmin and tmax should capture exactly one cycle, so
we can make the unusual choice of baselining using the entire epoch
when creating our evoked data. We also then crop to a single time point
(@t=0) because this is a peak in our signal.
Step7: Let's use a sphere head geometry model and let's see the coordinate
alignment and the sphere location.
Step8: To do a dipole fit, let's use the covariance provided by the empty room
recording.
Step9: Compare the actual position with the estimated one. | Python Code:
# Authors: Eric Larson <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import fit_dipole
from mne.datasets.brainstorm import bst_phantom_ctf
from mne.io import read_raw_ctf
print(__doc__)
Explanation: Brainstorm CTF phantom dataset tutorial
Here we compute the evoked from raw for the Brainstorm CTF phantom
tutorial dataset. For comparison, see [1]_ and:
http://neuroimage.usc.edu/brainstorm/Tutorials/PhantomCtf
References
.. [1] Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM.
Brainstorm: A User-Friendly Application for MEG/EEG Analysis.
Computational Intelligence and Neuroscience, vol. 2011, Article ID
879716, 13 pages, 2011. doi:10.1155/2011/879716
End of explanation
data_path = bst_phantom_ctf.data_path(verbose=True)
# Switch to these to use the higher-SNR data:
# raw_path = op.join(data_path, 'phantom_200uA_20150709_01.ds')
# dip_freq = 7.
raw_path = op.join(data_path, 'phantom_20uA_20150603_03.ds')
dip_freq = 23.
erm_path = op.join(data_path, 'emptyroom_20150709_01.ds')
raw = read_raw_ctf(raw_path, preload=True)
Explanation: The data were collected with a CTF system at 2400 Hz.
End of explanation
sinusoid, times = raw[raw.ch_names.index('HDAC006-4408')]
plt.figure()
plt.plot(times[times < 1.], sinusoid.T[times < 1.])
Explanation: The sinusoidal signal is generated on channel HDAC006, so we can use
that to obtain precise timing.
End of explanation
events = np.where(np.diff(sinusoid > 0.5) > 0)[1] + raw.first_samp
events = np.vstack((events, np.zeros_like(events), np.ones_like(events))).T
Explanation: Let's create some events using this signal by thresholding the sinusoid.
End of explanation
raw.plot()
Explanation: The CTF software compensation works reasonably well:
End of explanation
raw.apply_gradient_compensation(0) # must un-do software compensation first
mf_kwargs = dict(origin=(0., 0., 0.), st_duration=10.)
raw = mne.preprocessing.maxwell_filter(raw, **mf_kwargs)
raw.plot()
Explanation: But here we can get slightly better noise suppression, lower localization
bias, and a better dipole goodness of fit with spatio-temporal (tSSS)
Maxwell filtering:
End of explanation
tmin = -0.5 / dip_freq
tmax = -tmin
epochs = mne.Epochs(raw, events, event_id=1, tmin=tmin, tmax=tmax,
baseline=(None, None))
evoked = epochs.average()
evoked.plot(time_unit='s')
evoked.crop(0., 0.)
Explanation: Our choice of tmin and tmax should capture exactly one cycle, so
we can make the unusual choice of baselining using the entire epoch
when creating our evoked data. We also then crop to a single time point
(@t=0) because this is a peak in our signal.
End of explanation
sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=None)
mne.viz.plot_alignment(raw.info, subject='sample',
meg='helmet', bem=sphere, dig=True,
surfaces=['brain'])
del raw, epochs
Explanation: Let's use a sphere head geometry model and let's see the coordinate
alignment and the sphere location.
End of explanation
raw_erm = read_raw_ctf(erm_path).apply_gradient_compensation(0)
raw_erm = mne.preprocessing.maxwell_filter(raw_erm, coord_frame='meg',
**mf_kwargs)
cov = mne.compute_raw_covariance(raw_erm)
del raw_erm
dip, residual = fit_dipole(evoked, cov, sphere, verbose=True)
Explanation: To do a dipole fit, let's use the covariance provided by the empty room
recording.
End of explanation
expected_pos = np.array([18., 0., 49.])
diff = np.sqrt(np.sum((dip.pos[0] * 1000 - expected_pos) ** 2))
print('Actual pos: %s mm' % np.array_str(expected_pos, precision=1))
print('Estimated pos: %s mm' % np.array_str(dip.pos[0] * 1000, precision=1))
print('Difference: %0.1f mm' % diff)
print('Amplitude: %0.1f nAm' % (1e9 * dip.amplitude[0]))
print('GOF: %0.1f %%' % dip.gof[0])
Explanation: Compare the actual position with the estimated one.
End of explanation |
967 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Steepest Gradient Descent Visualization
This code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).<br>
This code illustrates
Step1: Specify the function to minimize as a simple python function.<br>
We start with a very simple function that is given by
\begin{equation}
f(\boldsymbol{x}) = \frac{1}{16}x_1^2 + 9x_2^2
\end{equation}
The derivative is automatically computed using the autograd library, which returns a function that evaluates the gradient of myfun. The gradient can also be easily computed by hand and is given as
\begin{equation}
\nabla f(\boldsymbol{x}) = \begin{pmatrix} \frac{1}{8}x_1 \ 18x_2 \end{pmatrix}
\end{equation}
Step2: Plot the function as a 2d surface plot. Different colors indicate different values of the function.
Step3: Carry out the simple gradient descent strategy, taking steps of fixed size epsilon along the negative gradient. Carry out 200 iterations (without using a stopping criterion). The values of epsilon and the starting point are specified
Step4: Plot the trajectory and the value of the function (right subplot). Note that the minimum of this function is achieved for (0,0) and is 0
Step5: This is an interactive demonstration of gradient descent, where you can specify the starting point as well as the step size yourself. You can see that, depending on the step size, the minimization can become unstable
Step6: Next, we consider the so-called Rosenbrock function, which is given by
\begin{equation}
f(\boldsymbol{x}) = (1-x_1)^2 + 100(x_2-x_1^2)^2
\end{equation}
Its gradient is given by
\begin{equation}
\nabla f(\boldsymbol{x}) = \begin{pmatrix} -2(1-x_1)-400(x_2-x_1^2)x_1 \ 200(x_2-x_1^2)\end{pmatrix}
\end{equation}
The Rosenbrock function has a global minimum at (1,1) but is difficult to optimize due to its curved valley. For details, see <url>https | Python Code:
import importlib
autograd_available = True
# if automatic differentiation is available, use it
try:
import autograd
except ImportError:
autograd_available = False
pass
if autograd_available:
import autograd.numpy as np
from autograd import grad
else:
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interactive
import ipywidgets as widgets
%matplotlib inline
if autograd_available:
print('Using autograd to compute gradients')
else:
print('Using hand-calculated gradient')
Explanation: Steepest Gradient Descent Visualization
This code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).<br>
This code illustrates:
* Steepest gradient descent in two dimensions
* Interactive demonstration of step size influence
End of explanation
# Valley
def myfun(x):
return (x[0]**2)/16 + 9*(x[1]**2)
if autograd_available:
gradient = grad(myfun)
else:
def gradient(x):
grad = [x[0]/8, 18*x[1]]
return grad;
Explanation: Specify the function to minimize as a simple python function.<br>
We start with a very simple function that is given by
\begin{equation}
f(\boldsymbol{x}) = \frac{1}{16}x_1^2 + 9x_2^2
\end{equation}
The derivative is automatically computed using the autograd library, which returns a function that evaluates the gradient of myfun. The gradient can also be easily computed by hand and is given as
\begin{equation}
\nabla f(\boldsymbol{x}) = \begin{pmatrix} \frac{1}{8}x_1 \ 18x_2 \end{pmatrix}
\end{equation}
End of explanation
x = np.arange(-5.0, 5.0, 0.02)
y = np.arange(-2.0, 2.0, 0.02)
X, Y = np.meshgrid(x, y)
fZ = myfun([X,Y])
plt.figure(1,figsize=(10,6))
plt.rcParams.update({'font.size': 14})
plt.contourf(X,Y,fZ,levels=20)
plt.colorbar()
plt.xlabel("x")
plt.ylabel("y")
plt.show()
Explanation: Plot the function as a 2d surface plot. Different colors indicate different values of the function.
End of explanation
epsilon = 0.1
start = np.array([-4.0,-1.0])
points = []
while len(points) < 200:
points.append( (start,myfun(start)) )
start = start - np.array([epsilon*gradient(start)[0], epsilon*gradient(start)[1]])
Explanation: Carry out the simple gradient descent strategy, taking steps of fixed size epsilon along the negative gradient. Carry out 200 iterations (without using a stopping criterion). The values of epsilon and the starting point are specified
End of explanation
trajectory_x = [points[i][0][0] for i in range(len(points))]
trajectory_y = [points[i][0][1] for i in range(len(points))]
plt.figure(1,figsize=(16,6))
plt.subplot(121)
plt.rcParams.update({'font.size': 14})
plt.contourf(X,Y,fZ,levels=20)
plt.xlim(-5,0)
plt.ylim(-2,2)
plt.xlabel("x")
plt.ylabel("y")
plt.plot(trajectory_x, trajectory_y,marker='.',color='w',linewidth=2)
plt.subplot(122)
plt.plot(range(0,len(points)),list(zip(*points))[1])
plt.grid(True)
plt.xlabel("Step i")
plt.ylabel("f(x^{(i)})")
plt.show()
Explanation: Plot the trajectory and the value of the function (right subplot). Note that the minimum of this function is achieved for (0,0) and is 0
End of explanation
def plot_function(epsilon, start_x, start_y):
start = [start_x,start_y]
points = []
while len(points) < 200:
points.append( (start,myfun(start)) )
start = start - np.array([epsilon*gradient(start)[0], epsilon*gradient(start)[1]])
trajectory_x = [points[i][0][0] for i in range(len(points))]
trajectory_y = [points[i][0][1] for i in range(len(points))]
plt.figure(3,figsize=(15,5))
plt.subplot(121)
plt.rcParams.update({'font.size': 14})
plt.contourf(X,Y,fZ,levels=20)
plt.xlim(-5,0)
plt.ylim(-2,2)
plt.xlabel("x")
plt.ylabel("y")
plt.plot(trajectory_x, trajectory_y,marker='.',color='w',linewidth=2)
plt.subplot(122)
plt.plot(range(0,len(points)),list(zip(*points))[1])
plt.grid(True)
plt.xlabel("Step i")
plt.ylabel("f(x^{(i)})")
plt.show()
epsilon_values = np.arange(0.0,0.12,0.0002)
interactive_update = interactive(plot_function, \
epsilon = widgets.SelectionSlider(options=[("%g"%i,i) for i in epsilon_values], value=0.1, continuous_update=False,description='epsilon',layout=widgets.Layout(width='50%')),
start_x = widgets.FloatSlider(min=-5.0,max=0.0,step=0.001,value=-4.0, continuous_update=False, description='x'), \
start_y = widgets.FloatSlider(min=-1.0, max=1.0, step=0.001, value=-1.0, continuous_update=False, description='y'))
output = interactive_update.children[-1]
output.layout.height = '370px'
interactive_update
Explanation: This is an interactive demonstration of gradient descent, where you can specify the starting point as well as the step size yourself. You can see that, depending on the step size, the minimization can become unstable
End of explanation
# Rosenbrock function
def rosenbrock_fun(x):
return (1-x[0])**2+100*((x[1]-(x[0])**2)**2)
if autograd_available:
rosenbrock_gradient = grad(rosenbrock_fun)
else:
def rosenbrock_gradient(x):
grad = [-2*(1-x[0])-400*(x[1]-x[0]**2)*x[0], 200*(x[1]-x[0]**2)]
return grad
xr = np.arange(-1.6, 1.6, 0.01)
yr = np.arange(-1.0, 3.0, 0.01)
Xr, Yr = np.meshgrid(xr, yr)
fZr = rosenbrock_fun([Xr,Yr])
def plot_function_rosenbrock(epsilon, start_x, start_y):
start = [start_x,start_y]
points = []
while len(points) < 1000:
points.append( (start,rosenbrock_fun(start)) )
rgradient = rosenbrock_gradient(start)
start = start - np.array([epsilon*rgradient[0], epsilon*rgradient[1]])
trajectory_x = [points[i][0][0] for i in range(len(points))]
trajectory_y = [points[i][0][1] for i in range(len(points))]
plt.figure(4,figsize=(15,5))
plt.subplot(121)
plt.rcParams.update({'font.size': 14})
plt.contourf(Xr,Yr,fZr,levels=20)
plt.xlabel("x")
plt.ylabel("y")
plt.plot(trajectory_x, trajectory_y,marker='.',color='w',linewidth=2)
plt.subplot(122)
plt.plot(range(0,len(points)),list(zip(*points))[1])
plt.grid(True)
plt.xlabel("Step i")
plt.ylabel("f(x^{(i)})")
plt.show()
epsilon_values = np.arange(0.0,0.007,0.00002)
interactive_update = interactive(plot_function_rosenbrock, \
epsilon = widgets.SelectionSlider(options=[("%g"%i,i) for i in epsilon_values], value=0.001, continuous_update=False,description='epsilon',layout=widgets.Layout(width='50%')), \
start_x = widgets.FloatSlider(min=-1.0,max=2.0,step=0.0001,value=0.6, continuous_update=False, description='x'), \
start_y = widgets.FloatSlider(min=-1.0, max=2.0, step=0.0001, value=0.1, continuous_update=False, description='y'))
output = interactive_update.children[-1]
output.layout.height = '350px'
interactive_update
Explanation: Next, we consider the so-called Rosenbrock function, which is given by
\begin{equation}
f(\boldsymbol{x}) = (1-x_1)^2 + 100(x_2-x_1^2)^2
\end{equation}
Its gradient is given by
\begin{equation}
\nabla f(\boldsymbol{x}) = \begin{pmatrix} -2(1-x_1)-400(x_2-x_1^2)x_1 \ 200(x_2-x_1^2)\end{pmatrix}
\end{equation}
The Rosenbrock function has a global minimum at (1,1) but is difficult to optimize due to its curved valley. For details, see <url>https://en.wikipedia.org/wiki/Rosenbrock_function</url>
End of explanation |
968 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
scona
scona is a tool to perform network analysis over correlation networks of brain regions.
This tutorial will go through the basic functionality of scona, taking us from our inputs (a matrix of structural regional measures over subjects) to a report of local network measures for each brain region, and network level comparisons to a cohort of random graphs of the same degree.
Step1: Importing data
A scona analysis starts with four inputs.
* regional_measures
A pandas DataFrame with subjects as rows. The columns should include structural measures for each brain region, as well as any subject-wise covariates.
* names
A list of names of the brain regions. This will be used to specify which columns of the regional_measures matrix you want to correlate over.
* covars (optional)
A list of your covariates. This will be used to specify which columns of regional_measures you wish to correct for.
* centroids
A list of tuples representing the Cartesian coordinates of brain regions. This list should be in the same order as the list of brain regions to accurately assign coordinates to regions. The coordinates are expected to obey the convention that the x=0 plane is the same plane that separates the left and right hemispheres of the brain.
Step2: Create a correlation matrix
We calculate residuals of the matrix df for the columns of names, correcting for the columns in covars.
Step3: Now we create a correlation matrix over the columns of df_res
Step4: Create a weighted graph
A short sidenote on the BrainNetwork class
Step5: Threshold to create a binary graph
We threshold G at cost 10 to create a binary graph with 10% as many edges as the complete graph G. Ordinarily when thresholding one takes the 10% of edges with the highest weight. In our case, because we want the resulting graph to be connected, we calculate a minimum spanning tree first. If you want to omit this step, you can pass the argument mst=False to threshold.
The threshold method does not edit objects inplace
Step6: Calculate nodal summary.
calculate_nodal_measures will compute and record the following nodal measures
average_dist (if centroids available)
total_dist (if centroids available)
betweenness
closeness
clustering coefficient
degree
interhem (if centroids are available)
interhem_proportion (if centroids are available)
nodal partition
participation coefficient under partition calculated above
shortest_path_length
report_nodal_measures returns nodal attributes in a DataFrame. Let's try it now.
Step7: Use calculate_nodal_measures to fill in a bunch of nodal measures
Step8: We can also add measures as one might normally add nodal attributes to a networkx graph
Step9: These show up in our DataFrame too
Step10: Calculate Global measures
Step11: Create a GraphBundle
The GraphBundle object is the scona way to handle across network comparisons. What is it? Essentially it's a python dictionary with BrainNetwork objects as values.
Step12: This creates a dictionary-like object with BrainNetwork H keyed by 'NSPN_cost=10'
Step13: Now add a series of random_graphs created by edge swap randomisation of H (keyed by 'NSPN_cost=10')
Step14: Report on a GraphBundle
The following method will calculate global measures ( if they have not already been calculated) for all of the graphs in graph_bundle and report the results in a DataFrame. We can do the same for rich club coefficients below. | Python Code:
import numpy as np
import networkx as nx
import scona as scn
import scona.datasets as datasets
Explanation: scona
scona is a tool to perform network analysis over correlation networks of brain regions.
This tutorial will go through the basic functionality of scona, taking us from our inputs (a matrix of structural regional measures over subjects) to a report of local network measures for each brain region, and network level comparisons to a cohort of random graphs of the same degree.
End of explanation
# Read in sample data from the NSPN WhitakerVertes PNAS 2016 paper.
df, names, covars, centroids = datasets.NSPN_WhitakerVertes_PNAS2016.import_data()
df.head()
Explanation: Importing data
A scona analysis starts with four inputs.
* regional_measures
A pandas DataFrame with subjects as rows. The columns should include structural measures for each brain region, as well as any subject-wise covariates.
* names
A list of names of the brain regions. This will be used to specify which columns of the regional_measures matrix you want to correlate over.
* covars (optional)
A list of your covariates. This will be used to specify which columns of regional_measures you wish to correct for.
* centroids
A list of tuples representing the Cartesian coordinates of brain regions. This list should be in the same order as the list of brain regions to accurately assign coordinates to regions. The coordinates are expected to obey the convention that the x=0 plane is the same plane that separates the left and right hemispheres of the brain.
End of explanation
df_res = scn.create_residuals_df(df, names, covars)
df_res
Explanation: Create a correlation matrix
We calculate residuals of the matrix df for the columns of names, correcting for the columns in covars.
End of explanation
M = scn.create_corrmat(df_res, method='pearson')
Explanation: Now we create a correlation matrix over the columns of df_res
End of explanation
G = scn.BrainNetwork(network=M, parcellation=names, centroids=centroids)
Explanation: Create a weighted graph
A short sidenote on the BrainNetwork class: This is a very lightweight subclass of the Networkx.Graph class. This means that any methods you can use on a Networkx.Graph object can also be used on a BrainNetwork object, although the reverse is not true. We have added various methods which allow us to keep track of measures that have already been calculated, which, especially later on when one is dealing with 10^3 random graphs, saves a lot of time.
All scona measures are implemented in such a way that they can be used on a regular Networkx.Graph object. For example, instead of G.threshold(10) you can use scn.threshold_graph(G, 10).
Also you can create a BrainNetwork from a Networkx.Graph G, using scn.BrainNetwork(network=G)
Initialise a weighted graph G from the correlation matrix M. The parcellation and centroids arguments are used to label nodes with names and coordinates respectively.
End of explanation
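As a small sketch of the functional interface mentioned in the sidenote above (using only the calls it names; G comes from the cell above):

```python
# Functional equivalents of the BrainNetwork methods, per the sidenote above (a sketch).
H_alt = scn.threshold_graph(G, 10)       # same result as G.threshold(10)
G_wrapped = scn.BrainNetwork(network=G)  # wrap an existing networkx-style graph
```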
H = G.threshold(10)
Explanation: Threshold to create a binary graph
We threshold G at cost 10 to create a binary graph with 10% as many edges as the complete graph G. Ordinarily when thresholding one takes the 10% of edges with the highest weight. In our case, because we want the resulting graph to be connected, we calculate a minimum spanning tree first. If you want to omit this step, you can pass the argument mst=False to threshold.
The threshold method does not edit objects inplace
End of explanation
H.report_nodal_measures().head()
Explanation: Calculate nodal summary.
calculate_nodal_measures will compute and record the following nodal measures
average_dist (if centroids available)
total_dist (if centroids available)
betweenness
closeness
clustering coefficient
degree
interhem (if centroids are available)
interhem_proportion (if centroids are available)
nodal partition
participation coefficient under partition calculated above
shortest_path_length
report_nodal_measures returns nodal attributes in a DataFrame. Let's try it now.
End of explanation
H.calculate_nodal_measures()
H.report_nodal_measures().head()
Explanation: Use calculate_nodal_measures to fill in a bunch of nodal measures
End of explanation
nx.set_node_attributes(H, name="hat", values={x: x**2 for x in H.nodes})
Explanation: We can also add measures as one might normally add nodal attributes to a networkx graph
End of explanation
H.report_nodal_measures(columns=['name', 'degree', 'hat']).head()
Explanation: These show up in our DataFrame too
End of explanation
H.calculate_global_measures()
H.rich_club();
Explanation: Calculate Global measures
End of explanation
brain_bundle = scn.GraphBundle([H], ['NSPN_cost=10'])
Explanation: Create a GraphBundle
The GraphBundle object is the scona way to handle across network comparisons. What is it? Essentially it's a python dictionary with BrainNetwork objects as values.
End of explanation
brain_bundle
Explanation: This creates a dictionary-like object with BrainNetwork H keyed by 'NSPN_cost=10'
End of explanation
# Note that 10 is not usually a sufficient number of random graphs to do meaningful analysis,
# it is used here for time considerations
brain_bundle.create_random_graphs('NSPN_cost=10', 10)
brain_bundle
Explanation: Now add a series of random_graphs created by edge swap randomisation of H (keyed by 'NSPN_cost=10')
End of explanation
brain_bundle.report_global_measures()
brain_bundle.report_rich_club()
Explanation: Report on a GraphBundle
The following method will calculate global measures (if they have not already been calculated) for all of the graphs in graph_bundle and report the results in a DataFrame. We can do the same for rich club coefficients below.
End of explanation |
969 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Anscombe's Quartet
Wikipedia page
Constructed in 1973 by the statistician Francis Anscombe to demonstrate the importance of plotting graphs before analysing a dataset.
Step1: Reading the data
Step2: Computing the statistical properties
Here we show that the four datasets share the same statistical properties.
Step3: Plotting the data
Plotting these datasets serves two purposes. It shows
that it is important to visualise the data before making an interpretation
and that outliers can have a major impact on some statistical properties such as the mean. | Python Code:
%pylab --no-import-all inline
from scipy.stats import linregress, pearsonr
Explanation: Anscombe's Quartet
Wikipedia page
Constructed in 1973 by the statistician Francis Anscombe to demonstrate the importance of plotting graphs before analysing a dataset.
End of explanation
all_sets = list()
for i in range(0, 8, 2):
x, y = np.loadtxt("anscombe.dat", usecols=(i, i+1), skiprows=1, unpack=True)
all_sets.append((x, y))
print(all_sets[0][0])
print(all_sets[0][1])
Explanation: Reading the data
End of explanation
def show_stat(data):
x, y = data
print("moyenne x : %4.2f" % x.mean())
print("variance x : %4.2f" % np.var(x))
print("moyenne y : %4.2f" % y.mean())
print("variance y : %4.2f" % np.var(y))
cor, p = pearsonr(x, y)
print("corrélation : %5.3f" % cor)
a, b, r, p_value, std_err = linregress(x, y)
print("regression linéaire : %3.1f x + %3.1f (r^2 = %4.2f)" % (a, b, r**2))
for i, data in enumerate(all_sets):
print("\nset %d" % i)
print("------")
show_stat(data)
Explanation: Computing the statistical properties
Here we show that the four datasets share the same statistical properties.
End of explanation
fig = plt.figure(figsize=(10, 8))
fig.suptitle("Quartet d'Anscombe", size=20)
for i, data in enumerate(all_sets):
ax = plt.subplot(2, 2, i + 1)
x, y = data
ax.plot(x, y, marker="o", color="C3", linestyle="", label="set %d" % (i+1))
ax.set_ylabel("y%d" % (i+1), size=14)
ax.set_xlabel("x%d" % (i+1), size=14)
a, b, r, p_value, std_err = linregress(x, y)
ax.plot([0, 20], [b, a*20 + b], color="C0")
ax.set_xlim(0, 20)
ax.set_ylim(0, 15)
ax.legend(loc="lower right", fontsize=18)
ax.grid(True)
Explanation: Plotting the data
Plotting these datasets serves two purposes. It shows
that it is important to visualise the data before making an interpretation
and that outliers can have a major impact on some statistical properties such as the mean.
End of explanation |
970 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create a dataframe
Step2: Create a second dataframe
Step3: Create a third dataframe
Step4: Join the two dataframes along rows
Step5: Join the two dataframes along columns
Step6: Merge two dataframes along the subject_id value
Step7: Merge two dataframes with both the left and right dataframes using the subject_id key
Step8: Merge with outer join
"Full outer join produces the set of all records in Table A and Table B, with matching records from both sides where available. If there is no match, the missing side will contain null." - source
Step9: Merge with inner join
"Inner join produces only the set of records that match in both Table A and Table B." - source
Step10: Merge with right join
Step11: Merge with left join
"Left outer join produces a complete set of records from Table A, with the matching records (where available) in Table B. If there is no match, the right side will contain null." - source
Step12: Merge while adding a suffix to duplicate column names
Step13: Merge based on indexes | Python Code:
import pandas as pd
from IPython.display import display
from IPython.display import Image
Explanation: Title: Join And Merge Pandas Dataframe
Slug: pandas_join_merge_dataframe
Summary: Join And Merge Pandas Dataframe
Date: 2016-05-01 12:00
Category: Python
Tags: Data Wrangling
Authors: Chris Albon
import modules
End of explanation
raw_data = {
'subject_id': ['1', '2', '3', '4', '5'],
'first_name': ['Alex', 'Amy', 'Allen', 'Alice', 'Ayoung'],
'last_name': ['Anderson', 'Ackerman', 'Ali', 'Aoni', 'Atiches']}
df_a = pd.DataFrame(raw_data, columns = ['subject_id', 'first_name', 'last_name'])
df_a
Explanation: Create a dataframe
End of explanation
raw_data = {
'subject_id': ['4', '5', '6', '7', '8'],
'first_name': ['Billy', 'Brian', 'Bran', 'Bryce', 'Betty'],
'last_name': ['Bonder', 'Black', 'Balwner', 'Brice', 'Btisan']}
df_b = pd.DataFrame(raw_data, columns = ['subject_id', 'first_name', 'last_name'])
df_b
Explanation: Create a second dataframe
End of explanation
raw_data = {
'subject_id': ['1', '2', '3', '4', '5', '7', '8', '9', '10', '11'],
'test_id': [51, 15, 15, 61, 16, 14, 15, 1, 61, 16]}
df_n = pd.DataFrame(raw_data, columns = ['subject_id','test_id'])
df_n
Explanation: Create a third dataframe
End of explanation
df_new = pd.concat([df_a, df_b])
df_new
Explanation: Join the two dataframes along rows
End of explanation
pd.concat([df_a, df_b], axis=1)
Explanation: Join the two dataframes along columns
End of explanation
pd.merge(df_new, df_n, on='subject_id')
Explanation: Merge two dataframes along the subject_id value
End of explanation
pd.merge(df_new, df_n, left_on='subject_id', right_on='subject_id')
Explanation: Merge two dataframes with both the left and right dataframes using the subject_id key
End of explanation
pd.merge(df_a, df_b, on='subject_id', how='outer')
Explanation: Merge with outer join
"Full outer join produces the set of all records in Table A and Table B, with matching records from both sides where available. If there is no match, the missing side will contain null." - source
End of explanation
pd.merge(df_a, df_b, on='subject_id', how='inner')
Explanation: Merge with inner join
"Inner join produces only the set of records that match in both Table A and Table B." - source
End of explanation
pd.merge(df_a, df_b, on='subject_id', how='right')
Explanation: Merge with right join
End of explanation
pd.merge(df_a, df_b, on='subject_id', how='left')
Explanation: Merge with left join
"Left outer join produces a complete set of records from Table A, with the matching records (where available) in Table B. If there is no match, the right side will contain null." - source
End of explanation
pd.merge(df_a, df_b, on='subject_id', how='left', suffixes=('_left', '_right'))
Explanation: Merge while adding a suffix to duplicate column names
End of explanation
pd.merge(df_a, df_b, right_index=True, left_index=True)
Explanation: Merge based on indexes
End of explanation |
971 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
vocab_to_int = {w:i for i, w in enumerate(set(text))}
int_to_vocab = {i:w for i, w in enumerate(set(text))}
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
return {
'.': '||Period||',
',': '||Comma||',
'"': '||Quotation_Mark||',
';': '||Semicolon||',
'!': '||Exclamation_mark||',
'?': '||Question_mark||',
'(': '||Left_Parentheses||',
')': '||Right_Parentheses||',
'--': '||Dash||',
"\n": '||Return||'
}
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
inputs = tf.placeholder(dtype=tf.int32, shape=[None, None], name='input')
targets = tf.placeholder(dtype=tf.int32, shape=[None, None], name='targets')
learning_rate = tf.placeholder(dtype=tf.float32, name='learning_rate')
return inputs, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following the tuple (Input, Targets, LearingRate)
End of explanation
def get_init_cell(batch_size, rnn_size, keep_prob=0.8, layers=3):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
cell = tf.contrib.rnn.BasicLSTMCell(rnn_size)
cell = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
multi = tf.contrib.rnn.MultiRNNCell([cell] * layers)
init_state = multi.zero_state(batch_size, tf.float32)
init_state = tf.identity(init_state, 'initial_state')
return multi, init_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
embeddings = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embeddings, input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, 'final_state')
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:return: Tuple (Logits, FinalState)
embed = get_embed(input_data, vocab_size, rnn_size)
outputs, final_state = build_rnn(cell, embed)
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
n_batches = len(int_text) // (batch_size * seq_length)
result = []
for i in range(n_batches):
inputs = []
targets = []
for j in range(batch_size):
            idx = j * n_batches * seq_length + i * seq_length  # row j owns a contiguous chunk of the text; batch i takes its i-th sequence
inputs.append(int_text[idx:idx + seq_length])
targets.append(int_text[idx + 1:idx + seq_length + 1])
result.append([inputs, targets])
return np.array(result)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 13]]
]
]
```
End of explanation
# Number of Epochs
num_epochs = 100
# Batch Size
batch_size = 64
# RNN Size
rnn_size = 256
# Sequence Length
seq_length = 25
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 10
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
inputs = loaded_graph.get_tensor_by_name('input:0')
init_state = loaded_graph.get_tensor_by_name('initial_state:0')
final_state = loaded_graph.get_tensor_by_name('final_state:0')
probs = loaded_graph.get_tensor_by_name('probs:0')
return inputs, init_state, final_state, probs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
return int_to_vocab[np.argmax(probabilities)]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
972 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Computer Hardware Dataset Analysis - UCI
This is a regression analysis of the <a href="https://archive.ics.uci.edu/ml/datasets/Computer+Hardware">UCI Computer Hardware Dataset</a>.
Step1: Attribute Information
Attribute Information
Step2: Univariate Analysis
Step3: Bivariate Analysis
Step4: Correlations with Target Column
Step5: Feature Correlations
Step6: The Regression Model
Step7: Cross Validation | Python Code:
import numpy as np
import pandas as pd
%pylab inline
pylab.style.use('ggplot')
import seaborn as sns
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/cpu-performance/machine.data'
data = pd.read_csv(url, header=None)
data.head()
Explanation: Computer Hardware Dataset Analysis - UCI
This is a regression analysis of the <a href="https://archive.ics.uci.edu/ml/datasets/Computer+Hardware">UCI Computer Analysis Dataset.</a>
End of explanation
data.columns = ['VENDOR', 'MODEL', 'MYCT', 'MMIN',
'MMAX', 'CACH', 'CHMIN', 'CHMAX', 'PRP', 'ERP']
# Drop the ERP column - this is an estimate
data = data.drop('ERP', axis=1)
data.VENDOR.value_counts().plot(kind='barh')
# Drop the model column as well
data = data.drop('MODEL', axis=1)
Explanation: Attribute Information
Attribute Information:
vendor name: 30
(adviser, amdahl,apollo, basf, bti, burroughs, c.r.d, cambex, cdc, dec,
dg, formation, four-phase, gould, honeywell, hp, ibm, ipl, magnuson,
microdata, nas, ncr, nixdorf, perkin-elmer, prime, siemens, sperry,
sratus, wang)
Model Name: many unique symbols
MYCT: machine cycle time in nanoseconds (integer)
MMIN: minimum main memory in kilobytes (integer)
MMAX: maximum main memory in kilobytes (integer)
CACH: cache memory in kilobytes (integer)
CHMIN: minimum channels in units (integer)
CHMAX: maximum channels in units (integer)
PRP: published relative performance (integer)
ERP: estimated relative performance from the original article (integer)
End of explanation
feature_names = data.columns.drop('VENDOR')
for fname in feature_names:
_ = pylab.figure()
_ = data.loc[:, fname].plot(kind='hist', title=fname)
Explanation: Univariate Analysis
End of explanation
_, axes = pylab.subplots(6, figsize=(10, 21))
n_columns = data.columns.drop(['VENDOR', 'PRP'])
for i, fname in enumerate(n_columns):
sns.regplot(x=fname, y='PRP', data=data, ax=axes[i])
pylab.tight_layout()
Explanation: Bivariate Analysis
End of explanation
corrs = data.loc[:, n_columns].corrwith(data.loc[:, 'PRP'])
corrs.plot(kind='barh')
Explanation: Correlations with Target Column
End of explanation
f_corrs = data.loc[:, n_columns].corr()
sns.heatmap(f_corrs, annot=True)
Explanation: Feature Correlations
End of explanation
import statsmodels.formula.api as sm
model = sm.ols(formula='PRP ~ MMAX + MMIN + CACH + CHMAX', data=data)
result = model.fit()
result.summary()
Explanation: The Regression Model
End of explanation
from sklearn.model_selection import KFold
from sklearn.metrics import r2_score
n_splits = 3
fold = KFold(n_splits=n_splits, shuffle=True)
scores = []
for train_idx, test_idx in fold.split(data):
model = sm.ols(formula='PRP ~ MMAX + MMIN + CACH + CHMAX', data=data.loc[train_idx])
result = model.fit()
test_features = data.loc[test_idx].drop('PRP', axis=1)
predictions = result.predict(test_features)
actual = data.loc[test_idx, 'PRP']
score = r2_score(actual, predictions)
scores.append(score)
scores = pd.Series(scores)
scores.plot(kind='bar')
Explanation: Cross Validation
End of explanation |
973 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Traffic Sign Classification with Keras
Keras exists to make coding deep neural networks simpler. To demonstrate just how easy it is, you’re going to use Keras to build a convolutional neural network in a few dozen lines of code.
You’ll be connecting the concepts from the previous lessons to the methods that Keras provides.
Dataset
The network you'll build with Keras is similar to the example in Keras’s GitHub repository that builds out a convolutional neural network for MNIST.
However, instead of using the MNIST dataset, you're going to use the German Traffic Sign Recognition Benchmark dataset that you've used previously.
You can download pickle files with sanitized traffic sign data here
Step1: Overview
Here are the steps you'll take to build the network
Step2: Load the Data
Start by importing the data from the pickle file.
Step3: Preprocess the Data
Shuffle the data
Normalize the features using Min-Max scaling between -0.5 and 0.5
One-Hot Encode the labels
Shuffle the data
Hint
Step4: Normalize the features
Hint
Step5: One-Hot Encode the labels
Hint
Step6: Keras Sequential Model
```python
from keras.models import Sequential
Create the Sequential model
model = Sequential()
```
The `keras.models.Sequential` class is a wrapper for the neural network model. Just like many of the class models in scikit-learn, it provides common functions like `fit()`, `evaluate()`, and `compile()`. We'll cover these functions as we get to them. Let's start looking at the layers of the model.
Keras Layer
A Keras layer is just like a neural network layer. It can be fully connected, max pool, activation, etc. You can add a layer to the model using the model's add() function. For example, a simple model would look like this
Step7: Training a Sequential Model
You built a multi-layer neural network in Keras, now let's look at training a neural network.
```python
from keras.models import Sequential
from keras.layers.core import Dense, Activation
model = Sequential()
...
Configures the learning process and metrics
model.compile('sgd', 'mean_squared_error', ['accuracy'])
Train the model
History is a record of training loss and metrics
history = model.fit(x_train_data, Y_train_data, batch_size=128, nb_epoch=2, validation_split=0.2)
Calculate test score
test_score = model.evaluate(x_test_data, Y_test_data)
```
The code above configures, trains, and tests the model. The line `model.compile('sgd', 'mean_squared_error', ['accuracy'])` configures the model's optimizer to `'sgd'` (stochastic gradient descent), the loss to `'mean_squared_error'`, and the metric to `'accuracy'`.
You can find more optimizers here, loss functions here, and more metrics here.
To train the model, use the fit() function as shown in model.fit(x_train_data, Y_train_data, batch_size=128, nb_epoch=2, validation_split=0.2). The validation_split parameter will split a percentage of the training dataset to be used to validate the model. The model can be further tested with the test dataset using the evaluate() function as shown in the last line.
Train the Network
Compile the network using adam optimizer and categorical_crossentropy loss function.
Train the network for ten epochs and validate with 20% of the training data.
Step8: Convolutions
Re-construct the previous network
Add a convolutional layer with 32 filters, a 3x3 kernel, and valid padding before the flatten layer.
Add a ReLU activation after the convolutional layer.
Hint 1
Step9: Pooling
Re-construct the network
Add a 2x2 max pooling layer immediately following your convolutional layer.
Step10: Dropout
Re-construct the network
Add a dropout layer after the pooling layer. Set the dropout rate to 50%.
Step11: Optimization
Congratulations! You've built a neural network with convolutions, pooling, dropout, and fully-connected layers, all in just a few lines of code.
Have fun with the model and see how well you can do! Add more layers, or regularization, or different padding, or batches, or more training epochs.
What is the best validation accuracy you can achieve?
Step12: Best Validation Accuracy | Python Code:
from urllib.request import urlretrieve
from os.path import isfile
from tqdm import tqdm
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('train.p'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Train Dataset') as pbar:
urlretrieve(
'https://s3.amazonaws.com/udacity-sdc/datasets/german_traffic_sign_benchmark/train.p',
'train.p',
pbar.hook)
if not isfile('test.p'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Test Dataset') as pbar:
urlretrieve(
'https://s3.amazonaws.com/udacity-sdc/datasets/german_traffic_sign_benchmark/test.p',
'test.p',
pbar.hook)
print('Training and Test data downloaded.')
Explanation: Traffic Sign Classification with Keras
Keras exists to make coding deep neural networks simpler. To demonstrate just how easy it is, you’re going to use Keras to build a convolutional neural network in a few dozen lines of code.
You’ll be connecting the concepts from the previous lessons to the methods that Keras provides.
Dataset
The network you'll build with Keras is similar to the example in Keras’s GitHub repository that builds out a convolutional neural network for MNIST.
However, instead of using the MNIST dataset, you're going to use the German Traffic Sign Recognition Benchmark dataset that you've used previously.
You can download pickle files with sanitized traffic sign data here:
End of explanation
import pickle
import numpy as np
import math
# Fix error with TF and Keras
import tensorflow as tf
tf.python.control_flow_ops = tf
print('Modules loaded.')
Explanation: Overview
Here are the steps you'll take to build the network:
Load the training data.
Preprocess the data.
Build a feedforward neural network to classify traffic signs.
Build a convolutional neural network to classify traffic signs.
Evaluate the final neural network on testing data.
Keep an eye on the network’s accuracy over time. Once the accuracy reaches the 98% range, you can be confident that you’ve built and trained an effective model.
End of explanation
with open('train.p', 'rb') as f:
data = pickle.load(f)
# TODO: Load the feature data to the variable X_train
X_train = data['features']
# TODO: Load the label data to the variable y_train
y_train = data['labels']
# STOP: Do not change the tests below. Your implementation should pass these tests.
assert np.array_equal(X_train, data['features']), 'X_train not set to data[\'features\'].'
assert np.array_equal(y_train, data['labels']), 'y_train not set to data[\'labels\'].'
print('Tests passed.')
Explanation: Load the Data
Start by importing the data from the pickle file.
End of explanation
# TODO: Shuffle the data
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
# STOP: Do not change the tests below. Your implementation should pass these tests.
assert X_train.shape == data['features'].shape, 'X_train has changed shape. The shape shouldn\'t change when shuffling.'
assert y_train.shape == data['labels'].shape, 'y_train has changed shape. The shape shouldn\'t change when shuffling.'
assert not np.array_equal(X_train, data['features']), 'X_train not shuffled.'
assert not np.array_equal(y_train, data['labels']), 'y_train not shuffled.'
print('Tests passed.')
Explanation: Preprocess the Data
Shuffle the data
Normalize the features using Min-Max scaling between -0.5 and 0.5
One-Hot Encode the labels
Shuffle the data
Hint: You can use the scikit-learn shuffle function to shuffle the data.
End of explanation
# TODO: Normalize the data features to the variable X_normalized
def normalize_grayscale(image_data):
a = -0.5
b = 0.5
grayscale_min = 0
grayscale_max = 255
return a + ( ( (image_data - grayscale_min)*(b - a) )/( grayscale_max - grayscale_min ) )
X_normalized = normalize_grayscale(X_train)
# STOP: Do not change the tests below. Your implementation should pass these tests.
assert math.isclose(np.min(X_normalized), -0.5, abs_tol=1e-5) and math.isclose(np.max(X_normalized), 0.5, abs_tol=1e-5), 'The range of the training data is: {} to {}. It must be -0.5 to 0.5'.format(np.min(X_normalized), np.max(X_normalized))
print('Tests passed.')
Explanation: Normalize the features
Hint: You solved this in TensorFlow lab Problem 1.
End of explanation
# TODO: One Hot encode the labels to the variable y_one_hot
from sklearn.preprocessing import LabelBinarizer
label_binarizer = LabelBinarizer()
with tf.device('/cpu:0'):
y_one_hot = label_binarizer.fit_transform(y_train)
# STOP: Do not change the tests below. Your implementation should pass these tests.
import collections
assert y_one_hot.shape == (39209, 43), 'y_one_hot is not the correct shape. It\'s {}, it should be (39209, 43)'.format(y_one_hot.shape)
assert next((False for y in y_one_hot if collections.Counter(y) != {0: 42, 1: 1}), True), 'y_one_hot not one-hot encoded.'
print('Tests passed.')
Explanation: One-Hot Encode the labels
Hint: You can use the scikit-learn LabelBinarizer function to one-hot encode the labels.
End of explanation
from keras.models import Sequential
model = Sequential()
# TODO: Build a Multi-layer feedforward neural network with Keras here.
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten
model.add(Flatten(input_shape=(32, 32, 3)))
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dense(43))
model.add(Activation('softmax'))
# STOP: Do not change the tests below. Your implementation should pass these tests.
from keras.layers.core import Dense, Activation, Flatten
from keras.activations import relu, softmax
def check_layers(layers, true_layers):
assert len(true_layers) != 0, 'No layers found'
for layer_i in range(len(layers)):
assert isinstance(true_layers[layer_i], layers[layer_i]), 'Layer {} is not a {} layer'.format(layer_i+1, layers[layer_i].__name__)
assert len(true_layers) == len(layers), '{} layers found, should be {} layers'.format(len(true_layers), len(layers))
check_layers([Flatten, Dense, Activation, Dense, Activation], model.layers)
assert model.layers[0].input_shape == (None, 32, 32, 3), 'First layer input shape is wrong, it should be (32, 32, 3)'
assert model.layers[1].output_shape == (None, 128), 'Second layer output is wrong, it should be (128)'
assert model.layers[2].activation == relu, 'Third layer not a relu activation layer'
assert model.layers[3].output_shape == (None, 43), 'Fourth layer output is wrong, it should be (43)'
assert model.layers[4].activation == softmax, 'Fifth layer not a softmax activation layer'
print('Tests passed.')
Explanation: Keras Sequential Model
```python
from keras.models import Sequential
Create the Sequential model
model = Sequential()
```
The `keras.models.Sequential` class is a wrapper for the neural network model. Just like many of the class models in scikit-learn, it provides common functions like `fit()`, `evaluate()`, and `compile()`. We'll cover these functions as we get to them. Let's start looking at the layers of the model.
Keras Layer
A Keras layer is just like a neural network layer. It can be fully connected, max pool, activation, etc. You can add a layer to the model using the model's add() function. For example, a simple model would look like this:
```python
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten
Create the Sequential model
model = Sequential()
1st Layer - Add a flatten layer
model.add(Flatten(input_shape=(32, 32, 3)))
2nd Layer - Add a fully connected layer
model.add(Dense(100))
3rd Layer - Add a ReLU activation layer
model.add(Activation('relu'))
4th Layer - Add a fully connected layer
model.add(Dense(60))
5th Layer - Add a ReLU activation layer
model.add(Activation('relu'))
```
Keras will automatically infer the shape of all layers after the first layer. This means you only have to set the input dimensions for the first layer.
The first layer from above, model.add(Flatten(input_shape=(32, 32, 3))), sets the input dimension to (32, 32, 3) and output dimension to (3072=32*32*3). The second layer takes in the output of the first layer and sets the output dimensions to (100). This chain of passing output to the next layer continues until the last layer, which is the output of the model.
Build a Multi-Layer Feedforward Network
Build a multi-layer feedforward neural network to classify the traffic sign images.
Set the first layer to a Flatten layer with the input_shape set to (32, 32, 3)
Set the second layer to Dense layer width to 128 output.
Use a ReLU activation function after the second layer.
Set the output layer width to 43, since there are 43 classes in the dataset.
Use a softmax activation function after the output layer.
To get started, review the Keras documentation about models and layers.
The Keras example of a Multi-Layer Perceptron network is similar to what you need to do here. Use that as a guide, but keep in mind that there are a number of differences.
End of explanation
# TODO: Compile and train the model here.
model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, nb_epoch=10, validation_split=0.2)
# STOP: Do not change the tests below. Your implementation should pass these tests.
from keras.optimizers import Adam
assert model.loss == 'categorical_crossentropy', 'Not using categorical_crossentropy loss function'
assert isinstance(model.optimizer, Adam), 'Not using adam optimizer'
assert len(history.history['acc']) == 10, 'You\'re using {} epochs when you need to use 10 epochs.'.format(len(history.history['acc']))
assert history.history['acc'][-1] > 0.92, 'The training accuracy was: %.3f. It shoud be greater than 0.92' % history.history['acc'][-1]
assert history.history['val_acc'][-1] > 0.85, 'The validation accuracy is: %.3f. It shoud be greater than 0.85' % history.history['val_acc'][-1]
print('Tests passed.')
Explanation: Training a Sequential Model
You built a multi-layer neural network in Keras, now let's look at training a neural network.
```python
from keras.models import Sequential
from keras.layers.core import Dense, Activation
model = Sequential()
...
Configures the learning process and metrics
model.compile('sgd', 'mean_squared_error', ['accuracy'])
Train the model
History is a record of training loss and metrics
history = model.fit(x_train_data, Y_train_data, batch_size=128, nb_epoch=2, validation_split=0.2)
Calculate test score
test_score = model.evaluate(x_test_data, Y_test_data)
```
The code above configures, trains, and tests the model. The line `model.compile('sgd', 'mean_squared_error', ['accuracy'])` configures the model's optimizer to `'sgd'` (stochastic gradient descent), the loss to `'mean_squared_error'`, and the metric to `'accuracy'`.
You can find more optimizers here, loss functions here, and more metrics here.
To train the model, use the fit() function as shown in model.fit(x_train_data, Y_train_data, batch_size=128, nb_epoch=2, validation_split=0.2). The validation_split parameter will split a percentage of the training dataset to be used to validate the model. The model can be further tested with the test dataset using the evaluate() function as shown in the last line.
Train the Network
Compile the network using adam optimizer and categorical_crossentropy loss function.
Train the network for ten epochs and validate with 20% of the training data.
End of explanation
# TODO: Re-construct the network and add a convolutional layer before the flatten layer.
from keras.models import Sequential
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten
from keras.layers.convolutional import Convolution2D
model = Sequential()
model.add(Convolution2D(32, 3, 3, input_shape=(32, 32, 3)))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dense(43))
model.add(Activation('softmax'))
# STOP: Do not change the tests below. Your implementation should pass these tests.
from keras.layers.core import Dense, Activation, Flatten
from keras.layers.convolutional import Convolution2D
check_layers([Convolution2D, Activation, Flatten, Dense, Activation, Dense, Activation], model.layers)
assert model.layers[0].input_shape == (None, 32, 32, 3), 'First layer input shape is wrong, it should be (32, 32, 3)'
assert model.layers[0].nb_filter == 32, 'Wrong number of filters, it should be 32'
assert model.layers[0].nb_col == model.layers[0].nb_row == 3, 'Kernel size is wrong, it should be a 3x3'
assert model.layers[0].border_mode == 'valid', 'Wrong padding, it should be valid'
model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, batch_size=128, nb_epoch=2, validation_split=0.2)
assert(history.history['val_acc'][-1] > 0.91), "The validation accuracy is: %.3f. It should be greater than 0.91" % history.history['val_acc'][-1]
print('Tests passed.')
Explanation: Convolutions
Re-construct the previous network
Add a convolutional layer with 32 filters, a 3x3 kernel, and valid padding before the flatten layer.
Add a ReLU activation after the convolutional layer.
Hint 1: The Keras example of a convolutional neural network for MNIST would be a good example to review.
End of explanation
# TODO: Re-construct the network and add a pooling layer after the convolutional layer.
from keras.models import Sequential
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D
model = Sequential()
model.add(Convolution2D(32, 3, 3, input_shape=(32, 32, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dense(43))
model.add(Activation('softmax'))
# STOP: Do not change the tests below. Your implementation should pass these tests.
from keras.layers.core import Dense, Activation, Flatten
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D
check_layers([Convolution2D, MaxPooling2D, Activation, Flatten, Dense, Activation, Dense, Activation], model.layers)
assert model.layers[1].pool_size == (2, 2), 'Second layer must be a max pool layer with pool size of 2x2'
model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, batch_size=128, nb_epoch=2, validation_split=0.2)
assert(history.history['val_acc'][-1] > 0.91), "The validation accuracy is: %.3f. It should be greater than 0.91" % history.history['val_acc'][-1]
print('Tests passed.')
Explanation: Pooling
Re-construct the network
Add a 2x2 max pooling layer immediately following your convolutional layer.
End of explanation
# TODO: Re-construct the network and add dropout after the pooling layer.
from keras.models import Sequential
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten, Dropout
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D
model = Sequential()
model.add(Convolution2D(32, 3, 3, input_shape=(32, 32, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.5))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dense(43))
model.add(Activation('softmax'))
# STOP: Do not change the tests below. Your implementation should pass these tests.
from keras.layers.core import Dense, Activation, Flatten, Dropout
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D
check_layers([Convolution2D, MaxPooling2D, Dropout, Activation, Flatten, Dense, Activation, Dense, Activation], model.layers)
assert model.layers[2].p == 0.5, 'Third layer should be a Dropout of 50%'
model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, batch_size=128, nb_epoch=2, validation_split=0.2)
assert(history.history['val_acc'][-1] > 0.91), "The validation accuracy is: %.3f. It should be greater than 0.91" % history.history['val_acc'][-1]
print('Tests passed.')
Explanation: Dropout
Re-construct the network
Add a dropout layer after the pooling layer. Set the dropout rate to 50%.
End of explanation
# TODO: Build a model
from keras.models import Sequential
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten, Dropout
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D
model = Sequential()
model.add(Convolution2D(32, 3, 3, input_shape=(32, 32, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.5))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dense(43))
model.add(Activation('softmax'))
# There is no right or wrong answer. This is for you to explore model creation.
# TODO: Compile and train the model
model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, nb_epoch=10, validation_split=0.2)
Explanation: Optimization
Congratulations! You've built a neural network with convolutions, pooling, dropout, and fully-connected layers, all in just a few lines of code.
Have fun with the model and see how well you can do! Add more layers, or regularization, or different padding, or batches, or more training epochs.
What is the best validation accuracy you can achieve?
End of explanation
# TODO: Load test data
with open('test.p', 'rb') as f:
data_test = pickle.load(f)
X_test = data_test['features']
y_test = data_test['labels']
# TODO: Preprocess data & one-hot encode the labels
X_normalized_test = normalize_grayscale(X_test)
y_one_hot_test = label_binarizer.transform(y_test)
# TODO: Evaluate model on test data
metrics = model.evaluate(X_normalized_test, y_one_hot_test)
for metric_i in range(len(model.metrics_names)):
metric_name = model.metrics_names[metric_i]
metric_value = metrics[metric_i]
print('{}: {}'.format(metric_name, metric_value))
Explanation: Best Validation Accuracy: (fill in here)
Testing
Once you've picked out your best model, it's time to test it.
Load up the test data and use the evaluate() method to see how well it does.
Hint 1: The evaluate() method should return an array of numbers. Use the metrics_names property to get the labels.
End of explanation |
974 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ex 1-1 by Keras in Tensorflow 2.0
Keras has now become TensorFlow's default high-level interface. In other words, when writing AI code in TensorFlow you can now use Keras out of the box.
There are broadly two ways to use Keras with TensorFlow. The first is to use Keras as the main interface, as in the original Keras style, with TensorFlow as the backend AI engine; call this the TensorFlow 2.0-based Keras usage (Keras in Tensorflow 2.0). The second is to use Keras while writing AI code directly in TensorFlow; call this the TensorFlow 2.0 usage with the Keras interface (Tensorflow 2.0 with Keras IO). Chapter 9 of this book introduces a similar approach, but there the two were merely mixed together; now that Keras is supported natively inside TensorFlow, the two are fused far more conveniently and powerfully. The first method emphasizes convenience, the second emphasizes power. Here both methods are introduced.
I. TensorFlow 2.0-based Keras usage (Keras in Tensorflow 2.0)
Step1: II. TensorFlow 2.0 usage with the Keras interface (Tensorflow 2.0 with Keras IO)
Simple configuration
Step2: Simple configuration, showing training progress
Step3: Building the network model with a class
When building a model with Keras, you can construct it with a class. This is the style used in PyTorch, but it is also available in Keras. It lets you write the configuration of each layer and the connections between layers separately, which aids understanding and helps when building complex networks. Here we use the class-based approach to build the Keras model; it should also help in understanding a PyTorch implementation and comparing it with the Keras one.
The model is constructed with model = Model() and there is no separate compile or build step, because in the TensorFlow-style usage Keras builds the model automatically the first time it is used. Here that happens when y_pr = model(x[ | Python Code:
from tensorflow import keras
import numpy
x = numpy.array([0, 1, 2, 3, 4])
y = x * 2 + 1
model = keras.models.Sequential()
model.add(keras.layers.Dense(1,input_shape=(1,)))
model.compile('SGD', 'mse')
model.fit(x[:2], y[:2], epochs=1000, verbose=0)
print(model.predict(x))
Explanation: Ex 1-1 by Keras in Tensorflow 2.0
Keras has now become TensorFlow's default high-level interface. In other words, when writing AI code in TensorFlow you can now use Keras out of the box.
There are broadly two ways to use Keras with TensorFlow. The first is to use Keras as the main interface, as in the original Keras style, with TensorFlow as the backend AI engine; call this the TensorFlow 2.0-based Keras usage (Keras in Tensorflow 2.0). The second is to use Keras while writing AI code directly in TensorFlow; call this the TensorFlow 2.0 usage with the Keras interface (Tensorflow 2.0 with Keras IO). Chapter 9 of this book introduces a similar approach, but there the two were merely mixed together; now that Keras is supported natively inside TensorFlow, the two are fused far more conveniently and powerfully. The first method emphasizes convenience, the second emphasizes power. Here both methods are introduced.
I. TensorFlow 2.0-based Keras usage (Keras in Tensorflow 2.0)
End of explanation
import tensorflow as tf2
import numpy as np
x = np.array([0, 1, 2, 3, 4]).astype('float32').reshape(-1,1)
y = x * 2 + 1
model = tf2.keras.Sequential()
model.add(tf2.keras.layers.Dense(1, input_dim = 1))
model.build()
Optimizer = tf2.keras.optimizers.Adam(learning_rate = 0.01)
for epoch in range(1000):
with tf2.GradientTape() as tape:
y_pr = model(x[:2,:1])
loss = tf2.keras.losses.mean_squared_error(y[:2,:1], y_pr)
gradients = tape.gradient(loss, model.trainable_variables)
Optimizer.apply_gradients(zip(gradients, model.trainable_variables))
print(model.predict(x))
Explanation: II. TensorFlow 2.0 usage with the Keras interface (Tensorflow 2.0 with Keras IO)
Simple configuration
End of explanation
import tensorflow as tf2
import numpy as np
x = np.array([0, 1, 2, 3, 4]).astype('float32').reshape(-1,1)
y = x * 2 + 1
model = tf2.keras.Sequential()
model.add(tf2.keras.layers.Dense(1, input_dim = 1))
model.build()
print('w=', model.trainable_variables[0].numpy(), 'b=', model.trainable_variables[1].numpy())
print()
Optimizer = tf2.keras.optimizers.Adam(learning_rate = 0.01)
for epoch in range(1000):
with tf2.GradientTape() as tape:
y_pr = model(x[:2,:1])
loss = tf2.keras.losses.mean_squared_error(y[:2,:1], y_pr)
if epoch < 3:
print(f'Epoch:{epoch}')
print('y_pr:', y_pr.numpy())
print('y_tr:', y[:2,:1])
print('loss:', loss.numpy())
print()
gradients = tape.gradient(loss, model.trainable_variables)
Optimizer.apply_gradients(zip(gradients, model.trainable_variables))
print(model.predict(x))
Explanation: Simple configuration, showing training progress
End of explanation
import tensorflow as tf2
from tensorflow import keras
import numpy as np
x = np.array([0, 1, 2, 3, 4]).astype('float32').reshape(-1,1)
y = x * 2 + 1
class Model(keras.models.Model):
def __init__(self):
super().__init__()
# self.layer = keras.layers.Dense(1, input_shape=[None,1])
self.layer = keras.layers.Dense(1, input_dim=1)
def call(self, x):
return self.layer(x)
model = Model()
Optimizer = tf2.keras.optimizers.Adam(learning_rate = 0.01)
for epoch in range(1000):
with tf2.GradientTape() as tape:
y_pr = model(x[:2,:1])
loss = tf2.keras.losses.mean_squared_error(y[:2,:1], y_pr)
gradients = tape.gradient(loss, model.trainable_variables)
Optimizer.apply_gradients(zip(gradients, model.trainable_variables))
print(model.predict(x))
Explanation: Building the network model with a class
When building a model with Keras, you can construct it with a class. This is the style used in PyTorch, but it is also available in Keras. It lets you write the configuration of each layer and the connections between layers separately, which aids understanding and helps when building complex networks. Here we use the class-based approach to build the Keras model; it should also help in understanding a PyTorch implementation and comparing it with the Keras one.
The model is constructed with model = Model() and there is no separate compile or build step, because in the TensorFlow-style usage Keras builds the model automatically the first time it is used. Here that happens when y_pr = model(x[:2,:1]) is executed. Before that point the model has not been built, so its structure cannot be inspected with model.summary(); once the model has been used, its configuration becomes visible. This differs from the pure-Keras usage, where a compile step is mandatory and the network structure can be inspected as soon as compile has been called.
End of explanation |
975 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<small><i>This notebook was prepared by Thunder Shiviah. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem
Step1: Unit Test
The following unit test is expected to fail until you solve the challenge. | Python Code:
def list_primes(n):
    primes = []
    for i in range(2, n + 1):
        for j in range(2, i):
            if i % j == 0:
                break
        else:
            primes.append(i)
    return primes
Explanation: <small><i>This notebook was prepared by Thunder Shiviah. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem: Implement list_primes(n), which returns a list of primes up to n (inclusive).
Constraints
Test Cases
Algorithm
Code
Unit Test
Solution Notebook
Constraints
Does list_primes do anything else?
No
Test Cases
list_primes(1) -> [] # 1 is not prime.
list_primes(2) -> [2]
list_primes(12) -> [2, 3, 5, 7, 11]
Algorithm
Primes are numbers which are only divisible by 1 and themselves.
5 is a prime since it can only be divided by itself and 1.
9 is not a prime since it can be divided by 3 (3*3 = 9).
1 is not a prime for reasons that only mathematicians care about.
To check if a number is prime, we can implement a basic algorithm, namely: check if a given number can be divided by any numbers smaller than the given number (note: you really only need to test numbers up to the square root of a given number, but it doesn't really matter for this assignment).
Code
End of explanation
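As an aside (an added sketch, not the solution notebook's code), the square-root shortcut mentioned in the note above could look like this:
```python
# Hedged sketch of the square-root optimization; illustrative only.
import math

def is_prime_sketch(n):
    if n < 2:
        return False
    for d in range(2, int(math.sqrt(n)) + 1):
        if n % d == 0:
            return False
    return True

def list_primes_sketch(n):
    return [i for i in range(2, n + 1) if is_prime_sketch(i)]
```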
# %load test_list_primes.py
from nose.tools import assert_equal
class Test_list_primes(object):
def test_list_primes(self):
assert_equal(list_primes(1), [])
assert_equal(list_primes(2), [2])
assert_equal(list_primes(7), [2, 3, 5, 7])
assert_equal(list_primes(9), list_primes(7))
print('Success: test_list_primes')
def main():
test = Test_list_primes()
test.test_list_primes()
if __name__ == '__main__':
main()
Explanation: Unit Test
The following unit test is expected to fail until you solve the challenge.
End of explanation |
976 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load the .json files
Step1: Look at where exactly in the file the data we need lives
Step2: Read the data we need as dataframes
Step3: Create separate columns in the dataframes holding the data in convenient formats.
Step4: Create columns in the "Goal1Completions" dataframe to store the number of sessions and the conversion rate
Step5: Transfer the session counts from the sessions table and compute the conversion rate for every page present in "Goal1Completions"
Step6: Zero out the conversion rate for pages that had no sessions. In this case that is the "(entrance)" page
Step7: Plot the chart
Step8: Print the result | Python Code:
path = 'Sessions_Page.json'
path2 = 'Goal1CompletionLocation_Goal1Completions.json'
with open(path, 'r') as f:
sessions_page = json.loads(f.read())
with open(path2, 'r') as f:
goals_page = json.loads(f.read())
Explanation: Load the .json files
End of explanation
type(sessions_page)
sessions_page.keys()
sessions_page['reports'][0].keys()
sessions_page['reports'][0]['data']['rows']
Explanation: Look at where exactly in the file the data we need lives
End of explanation
sessions_df = pd.DataFrame(sessions_page['reports'][0]['data']['rows'])
goals_df = pd.DataFrame(goals_page['reports'][0]['data']['rows'])
Explanation: Read the data we need as dataframes
End of explanation
x=[]
for i in sessions_df.dimensions:
x.append(str(i[0]))
sessions_df.insert(2, 'name', x)
x=[]
for i in goals_df.dimensions:
x.append(str(i[0]))
goals_df.insert(2, 'name', x)
x=[]
for i in sessions_df.metrics:
x.append(float(i[0]['values'][0]))
sessions_df.insert(3, 'sessions', x)
x=[]
for i in goals_df.metrics:
x.append(float(i[0]['values'][0]))
goals_df.insert(3, 'goals', x)
Explanation: Create separate columns in the dataframes holding the data in convenient formats.
End of explanation
goals_df.insert(4, 'sessions', 0)
goals_df.insert(5, 'convers_rate', 0)
Explanation: Create columns in the "Goal1Completions" dataframe to store the number of sessions and the conversion rate
End of explanation
for i in range(len(goals_df)):
    goals_df.loc[i, 'sessions'] = sum(sessions_df.sessions[sessions_df.name == goals_df.name[i]])
goals_df.convers_rate = goals_df.goals / goals_df.sessions * 100
Explanation: Transfer the session counts from the sessions table and compute the conversion rate for every page present in "Goal1Completions"
End of explanation
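A loop-free alternative (an added sketch, not part of the original notebook) does the same transfer with groupby and map; pages without sessions get inf here, which the next cell zeroes out just like the original flow.
```python
# Hedged sketch: equivalent session transfer without an explicit Python loop.
session_totals = sessions_df.groupby('name')['sessions'].sum()
goals_df['sessions'] = goals_df['name'].map(session_totals).fillna(0)
goals_df['convers_rate'] = goals_df.goals / goals_df.sessions * 100
```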
goals_df.loc[goals_df.sessions == 0, 'convers_rate'] = 0
goals_df.iloc[1:7, [2, 5]]
Explanation: Zero out the conversion rate for pages that had no sessions. In this case that is the "(entrance)" page
End of explanation
goals_df.iloc[1:7, [2, 5]].plot(kind="bar", legend=False)
plt.xticks(range(6), goals_df.name[1:7], rotation="vertical")
plt.show()
Explanation: Plot the chart
End of explanation
best = goals_df.loc[goals_df.convers_rate.idxmax()]
print('The best converting page on your site is "{}" with conversion rate {} %'.format(best['name'], best.convers_rate))
Explanation: Print the result
End of explanation |
977 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analysis of key agreement data
Step1: Settings
Enter your input below.
Step2: Data processing
Step3: Analysis
Summary
Step4: Selected quantiles
Step5: Info
Step6: Plots
Private key MSB vs time heatmap
The heatmap should show uncorrelated variables.
Step7: Private key Hamming Weight vs time heatmap
The heatmap should show uncorrelated variables.
Also contains a private key Hamming Weight histogram, which should be binomially distributed.
Step8: Key agreement time histogram
Step9: Moving averages of key agreement time
Step10: Private key MSB and LSB histograms
Expected to be uniform over [0, 255].
Step11: Private key bit length vs time heatmap
Also contains private key bit length histogram, which is expected to be axis flipped geometric distribution with $p = \frac{1}{2}$ peaking at the bit size of the order of the curve.
Step12: Private key bit length histogram given time
Interactively shows the histogram of private key bit length, given a selected time range centered on center with width width. Ideally, the means of these conditional distributions are equal, while the variances can vary.
Step13: Validation
Perform some tests on the produced data and compare to expected results.
This requires some information about the used curve, enter it below.
Step14: All of the following tests should pass (e.g. be true), given a large enough sample. | Python Code:
%matplotlib notebook
import numpy as np
from scipy.stats import describe
from scipy.stats import norm as norm_dist
from scipy.stats.mstats import mquantiles
from math import log, sqrt
import matplotlib.pyplot as plt
from matplotlib import ticker, colors, gridspec
from copy import deepcopy
from utils import plot_hist, moving_average, hw, time_scale, hist_size_func
from binascii import unhexlify
from IPython.display import display, HTML
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
import tabulate
Explanation: Analysis of key agreement data
End of explanation
# File name with output from ECTesterReader or ECTesterStandalone ECDH.
fname = "filename.csv"
# The time unit used in displaying the plots. One of "milli", "micro", "nano".
# WARNING: Using nano might lead to very large plots/histograms, to the
# notebook freezing or running out of memory, and to bad visualization
# quality, due to noise and low density.
time_unit = "milli"
# A number which will be used to divide the time into sub-units, e.g. for 5, time will be in fifths of units
scaling_factor = 1
# The amount of entries skipped from the beginning of the file, as they are usually outliers.
skip_first = 10
# Whether to plot things in logarithmic scale or not.
log_scale = False
# Whether to trim the time data outside the 1 - 99 percentile range (adjust below). Quite useful.
trim = True
# How much to trim? Either a number in [0,1] signifying a quantile, or an absolute value signifying a threshold
trim_low = 0.01
trim_high = 0.99
# Graphical (matplotlib) style name
style = "ggplot"
# Color map to use, and what color to assign to "bad" values (necessary for log_scale)
color_map = plt.cm.viridis
color_map_bad = "black"
# What function to use to calculate number of histogram bins of time
# one of "sqrt", "sturges", "rice", "scott" and "fd" or a number specifying the number of bins
hist_size = "sturges"
Explanation: Settings
Enter your input below.
End of explanation
# Setup plot style
plt.style.use(style)
cmap = deepcopy(color_map)
cmap.set_bad(color_map_bad)
# Normalization, linear or log.
if log_scale:
norm = colors.LogNorm()
else:
norm = colors.Normalize()
# Read the header line.
with open(fname, "r") as f:
header = f.readline()
header_names = header.split(";")
if len(header_names) != 5:
print("Bad data?")
exit(1)
# Load the data
hx = lambda x: int(x, 16)
data = np.genfromtxt(fname, delimiter=";", skip_header=1, converters={2: unhexlify, 3: hx, 4: hx},
dtype=np.dtype([("index", "u4"), ("time", "u4"), ("pub", "O"), ("priv", "O"), ("secret", "O")]))
# Skip first (outliers?)
data = data[skip_first:]
# Setup the data
orig_time_unit = header_names[1].split("[")[1][:-1]
time_disp_unit = time_scale(data["time"], orig_time_unit, time_unit, scaling_factor)
# Trim times
quant_low_bound = trim_low if 0 <= trim_low <= 1 else 0.01
quant_high_bound = trim_high if 0 <= trim_high <= 1 else 0.95
quantiles = mquantiles(data["time"], prob=(quant_low_bound, 0.25, 0.5, 0.75, quant_high_bound))
if trim:
low_bound = quantiles[0] if 0 <= trim_low <= 1 else trim_low
high_bound = quantiles[4] if 0 <= trim_high <= 1 else trim_high
data_trimmed = data[np.logical_and(data["time"] >= low_bound,
data["time"] <= high_bound)]
quantiles_trim = mquantiles(data_trimmed["time"], prob=(quant_low_bound, 0.25, 0.5, 0.75, quant_high_bound))
else:
low_bound = None
high_bound = None
data_trimmed = data
    quantiles_trim = quantiles
description = describe(data["time"])
description_trim = describe(data_trimmed["time"])
max_time = description.minmax[1]
min_time = description.minmax[0]
bit_size = len(bin(max(data["priv"]))) - 2
byte_size = (bit_size + 7) // 8
bit_size = byte_size * 8
hist_size_time = hist_size_func(hist_size)(description.nobs, min_time, max_time, description.variance, quantiles[1], quantiles[3])
hist_size_time_trim = hist_size_func(hist_size)(description_trim.nobs, description_trim.minmax[0], description_trim.minmax[1], description_trim.variance, quantiles_trim[1], quantiles_trim[3])
if hist_size_time < 30:
hist_size_time = max_time - min_time
if hist_size_time_trim < 30:
hist_size_time_trim = description_trim.minmax[1] - description_trim.minmax[0]
Explanation: Data processing
End of explanation
display("Raw")
desc = [("N", "min, max", "mean", "variance", "skewness", "kurtosis"),
description]
display(HTML(tabulate.tabulate(desc, tablefmt="html")))
display("Trimmed")
desc = [("N", "min, max", "mean", "variance", "skewness", "kurtosis"),
description_trim]
display(HTML(tabulate.tabulate(desc, tablefmt="html")))
Explanation: Analysis
Summary
End of explanation
tbl = [(quant_low_bound, "0.25", "0.5", "0.75", quant_high_bound),
list(map(lambda x: "{} {}".format(x, time_disp_unit), quantiles))]
display(HTML(tabulate.tabulate(tbl, tablefmt="html")))
Explanation: Selected quantiles
End of explanation
display("Bitsize: {}".format(bit_size))
display("Histogram time bins: {}".format(hist_size_time))
display("Histogram time bins(trimmed): {}".format(hist_size_time_trim))
Explanation: Info
End of explanation
fig_private = plt.figure(figsize=(10.5, 8), dpi=90)
axe_private = fig_private.add_subplot(1, 1, 1, title="Private key MSB vs key agreement time")
priv_msb = np.array(list(map(lambda x: x >> (bit_size - 8), data_trimmed["priv"])), dtype=np.dtype("u1"))
max_msb = max(priv_msb)
min_msb = min(priv_msb)
heatmap, xedges, yedges = np.histogram2d(priv_msb, data_trimmed["time"],
bins=[max_msb - min_msb + 1, hist_size_time_trim])
extent = [min_msb, max_msb, yedges[0], yedges[-1]]
im = axe_private.imshow(heatmap.T, extent=extent, aspect="auto", cmap=cmap, origin="lower",
interpolation="nearest", norm=norm)
axe_private.set_xlabel("private key MSB value")
axe_private.set_ylabel("key agreement time ({})".format(time_disp_unit))
fig_private.colorbar(im, ax=axe_private)
fig_private.tight_layout()
del priv_msb
Explanation: Plots
Private key MSB vs time heatmap
The heatmap should show uncorrelated variables.
End of explanation
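To put a number on "uncorrelated" (an added sketch, not part of the original analysis), a simple correlation coefficient can accompany the heatmap; a leak-free implementation should give a value close to zero.
```python
# Hedged sketch: Pearson correlation between the private-key MSB and the timing.
from scipy.stats import pearsonr

_msb = np.array([x >> (bit_size - 8) for x in data_trimmed["priv"]], dtype="u1")
_corr, _pval = pearsonr(_msb, data_trimmed["time"])
print("Pearson r = {:.4f}, p-value = {:.4f}".format(_corr, _pval))
```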
fig_priv_hist = plt.figure(figsize=(10.5, 12), dpi=90)
gs = gridspec.GridSpec(2, 1, height_ratios=[2.5, 1])
axe_priv_hist = fig_priv_hist.add_subplot(gs[0], title="Private key Hamming weight vs key agreement time")
axe_priv_hist_hw = fig_priv_hist.add_subplot(gs[1], sharex=axe_priv_hist, title="Private key Hamming weight")
priv_hw = np.array(list(map(hw, data_trimmed["priv"])), dtype=np.dtype("u2"))
h, xe, ye = np.histogram2d(priv_hw, data_trimmed["time"], bins=[max(priv_hw) - min(priv_hw), hist_size_time_trim])
im = axe_priv_hist.imshow(h.T, origin="lower", cmap=cmap, aspect="auto", extent=[xe[0], xe[-1], ye[0], ye[-1]], norm=norm)
axe_priv_hist.axvline(x=bit_size//2, alpha=0.7, linestyle="dotted", color="white", label=str(bit_size//2) + " bits")
axe_priv_hist.set_xlabel("private key Hamming weight")
axe_priv_hist.set_ylabel("key agreement time ({})".format(time_disp_unit))
axe_priv_hist.legend(loc="best")
plot_hist(axe_priv_hist_hw, priv_hw, "private key Hamming weight", log_scale, None)
param = norm_dist.fit(priv_hw)
pdf_range = np.arange(min(priv_hw), max(priv_hw))
norm_pdf = norm_dist.pdf(pdf_range, *param[:-2], loc=param[-2], scale=param[-1]) * description_trim.nobs
axe_priv_hist_hw.plot(pdf_range, norm_pdf, label="fitted normal distribution")
axe_priv_hist_hw.legend(loc="best")
fig_priv_hist.tight_layout()
fig_priv_hist.colorbar(im, ax=[axe_priv_hist, axe_priv_hist_hw])
display(HTML("<b>Private key Hamming weight fitted with normal distribution:</b>"))
display(HTML(tabulate.tabulate([("Mean", "Variance"), param], tablefmt="html")))
del priv_hw
Explanation: Private key Hamming Weight vs time heatmap
The heatmap should show uncorrelated variables.
Also contains a private key Hamming Weight histogram, which should be binomially distributed.
End of explanation
fig_ka_hist = plt.figure(figsize=(10.5, 8), dpi=90)
axe_hist_full = fig_ka_hist.add_subplot(2, 1, 1)
axe_hist_trim = fig_ka_hist.add_subplot(2, 1, 2)
plot_hist(axe_hist_full, data["time"], "key agreement time ({})".format(time_disp_unit), log_scale, hist_size_time);
plot_hist(axe_hist_trim, data_trimmed["time"], "key agreement time ({})".format(time_disp_unit), log_scale, hist_size_time_trim);
fig_ka_hist.tight_layout()
Explanation: Key agreement time histogram
End of explanation
fig_avg = plt.figure(figsize=(10.5, 7), dpi=90)
axe_avg = fig_avg.add_subplot(1, 1, 1, title="Moving average of key agreement time")
avg_100 = moving_average(data["time"], 100)
avg_1000 = moving_average(data["time"], 1000)
axe_avg.plot(avg_100, label="window = 100")
axe_avg.plot(avg_1000, label="window = 1000")
if low_bound is not None:
axe_avg.axhline(y=low_bound, alpha=0.7, linestyle="dotted", color="green", label="Low trim bound = {}".format(low_bound))
if high_bound is not None:
    axe_avg.axhline(y=high_bound, alpha=0.7, linestyle="dotted", color="orange", label="High trim bound = {}".format(high_bound))
axe_avg.set_ylabel("key agreement time ({})".format(time_disp_unit))
axe_avg.set_xlabel("index")
axe_avg.legend(loc="best")
fig_avg.tight_layout()
del avg_100, avg_1000
Explanation: Moving averages of key agreement time
End of explanation
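The moving_average helper comes from the project's utils module; a minimal equivalent (an illustrative sketch, not necessarily the actual utils implementation) is a trailing-window mean built from a cumulative sum.
```python
# Hedged sketch of a trailing-window moving average.
def moving_average_sketch(a, n):
    ret = np.cumsum(a, dtype=float)
    ret[n:] = ret[n:] - ret[:-n]
    return ret[n - 1:] / n
```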
fig_priv_hists = plt.figure(figsize=(10.5, 8), dpi=90)
priv_msb = np.array(list(map(lambda x: x >> (bit_size - 8), data["priv"])), dtype=np.dtype("u1"))
priv_lsb = np.array(list(map(lambda x: x & 0xff, data["priv"])), dtype=np.dtype("u1"))
axe_msb_s_hist = fig_priv_hists.add_subplot(2, 1, 1, title="Private key MSB")
axe_lsb_s_hist = fig_priv_hists.add_subplot(2, 1, 2, title="Private key LSB")
msb_h = plot_hist(axe_msb_s_hist, priv_msb, "private key MSB", log_scale, False, False)
lsb_h = plot_hist(axe_lsb_s_hist, priv_lsb, "private key LSB", log_scale, False, False)
fig_priv_hists.tight_layout()
del priv_msb, priv_lsb
Explanation: Private key MSB and LSB histograms
Expected to be uniform over [0, 255].
End of explanation
fig_bl = plt.figure(figsize=(10.5, 12), dpi=90)
gs = gridspec.GridSpec(2, 1, height_ratios=[2.5, 1])
axe_bl_heat = fig_bl.add_subplot(gs[0], title="Private key bit length vs keygen time")
axe_bl_hist = fig_bl.add_subplot(gs[1], sharex=axe_bl_heat, title="Private key bit length")
bl_data = np.array(list(map(lambda x: x.bit_length(), data_trimmed["priv"])), dtype=np.dtype("u2"))
h, xe, ye = np.histogram2d(bl_data, data_trimmed["time"], bins=[max(bl_data) - min(bl_data), hist_size_time_trim])
im = axe_bl_heat.imshow(h.T, origin="lower", cmap=cmap, aspect="auto", extent=[xe[0], xe[-1], ye[0], ye[-1]], norm=norm)
axe_bl_heat.set_xlabel("private key bit length")
axe_bl_heat.set_ylabel("key agreement time ({})".format(time_disp_unit))
plot_hist(axe_bl_hist, bl_data, "Private key bit length", log_scale, align="right")
fig_bl.tight_layout()
fig_bl.colorbar(im, ax=[axe_bl_heat, axe_bl_hist])
del bl_data
Explanation: Private key bit length vs time heatmap
Also contains private key bit length histogram, which is expected to be axis flipped geometric distribution with $p = \frac{1}{2}$ peaking at the bit size of the order of the curve.
End of explanation
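A quick numeric check of the geometric claim (an added sketch): for a scalar drawn uniformly below an order close to a power of two, bit length bit_size - j should occur with frequency roughly 2^-(j+1); exactness depends on the actual curve order.
```python
# Hedged sketch: observed bit-length frequencies vs. the rough geometric expectation.
_bl = np.array([x.bit_length() for x in data_trimmed["priv"]])
for j in range(4):
    observed = np.mean(_bl == bit_size - j)
    print("bit length {}: observed {:.3f}, expected ~{:.3f}".format(
        bit_size - j, observed, 2.0 ** -(j + 1)))
```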
fig_bl_time = plt.figure(figsize=(10.5, 5), dpi=90)
axe_bl_time = fig_bl_time.add_subplot(111)
axe_bl_time.set_autoscalex_on(False)
def f(center, width):
lower_bnd = center - width/2
upper_bnd = center + width/2
values = data_trimmed[np.logical_and(data_trimmed["time"] <= upper_bnd,
data_trimmed["time"] >= lower_bnd)]
axe_bl_time.clear()
axe_bl_time.set_title("Private key bit length, given key agreement time $\in ({}, {})$ {}".format(int(lower_bnd), int(upper_bnd), time_disp_unit))
bl_data = np.array(list(map(lambda x: x.bit_length(), values["priv"])), dtype=np.dtype("u2"))
plot_hist(axe_bl_time, bl_data, "private key bit length", bins=11, range=(bit_size-10, bit_size+1), align="left")
axe_bl_time.set_xlim((bit_size-10, bit_size))
fig_bl_time.tight_layout()
center_w = widgets.IntSlider(min=min(data_trimmed["time"]),
max=max(data_trimmed["time"]),
step=1,
value=description_trim.mean,
continuous_update=False,
description="center {}".format(time_disp_unit))
width_w = widgets.IntSlider(min=1, max=100, continuous_update=False,
description="width {}".format(time_disp_unit))
w = interactive(f, center=center_w,
width=width_w)
display(w)
Explanation: Private key bit length histogram given time
Interactively shows the histogram of private key bit length, given a selected time range centered on center with width width. Ideally, the means of these conditional distributions are equal, while the variances can vary.
End of explanation
p_str = input("The prime specifying the finite field:")
p = int(p_str, 16) if p_str.startswith("0x") else int(p_str)
r_str = input("The order of the curve:")
r = int(r_str, 16) if r_str.startswith("0x") else int(r_str)
Explanation: Validation
Perform some tests on the produced data and compare to expected results.
This requires some information about the used curve, enter it below.
End of explanation
max_priv = max(data["priv"])
un = len(np.unique(data["priv"])) != 1
if un:
print("Private keys are smaller than order:\t\t\t" + str(max_priv < r))
print("Private keys are larger than prime(if order > prime):\t" + str(r <= p or max_priv > p))
print("Private keys reach full bit length of order:\t\t" + str(max_priv.bit_length() == r.bit_length()))
if un:
print("Private key bit length (min, max):" + str(min(data["priv"]).bit_length()) + ", " + str(max(data["priv"]).bit_length()))
Explanation: All of the following tests should pass (e.g. be true), given a large enough sample.
End of explanation |
978 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ordinary Percolation
OpenPNM contains several percolation algorithms which are central to the multiphase models employed by pore networks. The essential idea is to identify pathways for fluid flow through the network using the entry capillary pressure as a threshold for passage between connected pores. The capillary pressure can either be associated to the pores themselves known as site percolation or the connecting throats known as bond percolation or a mixture of both. OpenPNM provides several models for calculating the entry pressure for a given pore or throat and it generally depends on the size of the pore or throat and the wettability to a particular phase characterised by the contact angle. If a pathway through the network connects pores into clusters that contain both an inlet and an outlet then it is deemed to be percolating.
In this example we will demonstrate Ordinary Percolation which is the fastest and simplest algorithm to run. The number of steps involved in the algorithm is equal to the number of points that are specified in the run method. This can either be an integer, in which case the minimum and maximum capillary entry pressures in the network are used as limits and the integer value is used to create that number of intervals between the limits, or an array of specified pressured can be supplied.
The algorithm progresses incrementally from low pressure to high. At each step, clusters of connected pores are found with entry pressures below the current threshold and those that are not already invaded and connected to an inlet are set to be invaded at this pressure. Therefore the process is quasistatic and represents the steady state saturation that would be achieved if the inlet pressure were to be held at that threshold.
First do our imports
Step1: Create a 2D Cubic network with standard PSD and define the phase as Water and use Standard physics which implements the washburn capillary pressure relation for throat entry pressure.
Step2: We can check the model by looking at the model dict on the phys object
Step3: Now set up and run the algorithm, choosing the left and right sides of the network as inlets and outlets respectively. Because we did not set up the network with zero-volume boundary pores, a small warning is given: the starting saturation for the algorithm is not zero. This is fine, however, and because the network is quite large the starting saturation is actually very close to zero.
Step4: The algorithm completes very quickly and the invading phase saturation can be plotted versus the applied boundary pressure.
Step5: As the network is 2D and cubic we can easily plot the invading phase configuration at the different invasion steps | Python Code:
import openpnm as op
%config InlineBackend.figure_formats = ['svg']
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
np.random.seed(10)
from ipywidgets import interact, IntSlider
%matplotlib inline
mpl.rcParams["image.interpolation"] = "None"
ws = op.Workspace()
ws.settings["loglevel"] = 40
Explanation: Ordinary Percolation
OpenPNM contains several percolation algorithms which are central to the multiphase models employed by pore networks. The essential idea is to identify pathways for fluid flow through the network using the entry capillary pressure as a threshold for passage between connected pores. The capillary pressure can either be associated to the pores themselves known as site percolation or the connecting throats known as bond percolation or a mixture of both. OpenPNM provides several models for calculating the entry pressure for a given pore or throat and it generally depends on the size of the pore or throat and the wettability to a particular phase characterised by the contact angle. If a pathway through the network connects pores into clusters that contain both an inlet and an outlet then it is deemed to be percolating.
In this example we will demonstrate Ordinary Percolation which is the fastest and simplest algorithm to run. The number of steps involved in the algorithm is equal to the number of points that are specified in the run method. This can either be an integer, in which case the minimum and maximum capillary entry pressures in the network are used as limits and the integer value is used to create that number of intervals between the limits, or an array of specified pressured can be supplied.
The algorithm progresses incrementally from low pressure to high. At each step, clusters of connected pores are found with entry pressures below the current threshold and those that are not already invaded and connected to an inlet are set to be invaded at this pressure. Therefore the process is quasistatic and represents the steady state saturation that would be achieved if the inlet pressure were to be held at that threshold.
First do our imports
End of explanation
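Before handing the work to OpenPNM, the thresholding idea itself can be sketched in a few lines (an illustrative toy, not OpenPNM code): sort throats by entry pressure and merge pore clusters with a union-find, stopping at the first pressure where an inlet cluster reaches an outlet. The tiny network and pressures below are invented for illustration.
```python
# Hedged toy sketch of quasistatic bond percolation on a hand-made 4-pore network.
def _find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def percolation_threshold(n_pores, throats, inlets, outlets):
    # throats: iterable of (entry_pressure, pore_a, pore_b)
    parent = list(range(n_pores))
    for pc, a, b in sorted(throats):
        parent[_find(parent, a)] = _find(parent, b)
        if any(_find(parent, i) == _find(parent, o) for i in inlets for o in outlets):
            return pc  # first pressure at which an inlet connects to an outlet
    return None

toy_throats = [(1.0, 0, 1), (3.0, 1, 2), (2.0, 2, 3)]
print(percolation_threshold(4, toy_throats, inlets=[0], outlets=[3]))  # -> 3.0
```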
N = 100
net = op.network.Cubic(shape=[N, N, 1], spacing=2.5e-5)
geom = op.geometry.SpheresAndCylinders(network=net, pores=net.Ps, throats=net.Ts)
water = op.phases.Water(network=net)
phys = op.physics.Standard(network=net, phase=water, geometry=geom)
Explanation: Create a 2D Cubic network with standard PSD and define the phase as Water and use Standard physics which implements the washburn capillary pressure relation for throat entry pressure.
End of explanation
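For reference (an added note paraphrasing the standard Washburn/Young-Laplace form, not a quote of OpenPNM's source), the entry pressure of a cylindrical throat of radius $r$ with surface tension $\sigma$ and contact angle $\theta$ is usually written as
\begin{equation}
P_{c} = \frac{-2\sigma\cos\theta}{r}
\end{equation}
so smaller throats need a higher applied pressure before the invading phase can enter.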
phys.models['throat.entry_pressure']
Explanation: We can check the model by looking at the model dict on the phys object
End of explanation
alg = op.algorithms.OrdinaryPercolation(network=net, phase=water)
alg.settings._update({'pore_volume': 'pore.volume',
'throat_volume': 'throat.volume'})
alg.set_inlets(pores=net.pores('left'))
alg.set_outlets(pores=net.pores('right'))
alg.run(points=1000)
alg.plot_intrusion_curve()
plt.show()
Explanation: Now set up and run the algorithm, choosing the left and right sides of the network as inlets and outlets respectively. Because we did not set up the network with zero-volume boundary pores, a small warning is given: the starting saturation for the algorithm is not zero. This is fine, however, and because the network is quite large the starting saturation is actually very close to zero.
End of explanation
data = alg.get_intrusion_data()
mask = np.logical_and(np.asarray(data.Snwp) > 0.0 , np.asarray(data.Snwp) < 1.0)
mask = np.argwhere(mask).flatten()
pressures = np.asarray(data.Pcap)[mask]
Explanation: The algorithm completes very quickly and the invading phase saturation can be plotted versus the applied boundary pressure.
End of explanation
def plot_saturation(step):
arg = mask[step]
Pc = np.ceil(data.Pcap[arg])
sat = np.around(data.Snwp[arg], 3)
is_perc = alg.is_percolating(Pc)
pmask = alg['pore.invasion_pressure'] <= Pc
im = pmask.reshape([N, N])
fig, ax = plt.subplots(figsize=[5, 5])
ax.imshow(im, cmap='Blues');
title = f"Capillary pressure: {Pc:.0f}, saturation: {sat:.2f}, percolating: {is_perc}"
ax.set_title(title)
plt.show()
perc_thresh = alg.get_percolation_threshold()
thresh_step = np.argwhere(np.asarray(pressures) == perc_thresh)
slider = IntSlider(min=0, max=len(mask)-1, step=1, value=thresh_step)
interact(plot_saturation, step=slider);
Explanation: As the network is 2D and cubic we can easily plot the invading phase configuration at the different invasion steps
End of explanation |
979 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load the twobody data
Step1: The Pair Correlation Function (or Radial Distribution Function)
Nice picture/description here
Basically we want to compute the following for a given pair of atom symbols ($A, B$)
Step2: Compute!
Step3: Plot!
Step4: Save everything for later
First generate a beautiful graph... | Python Code:
xyz = pd.read_hdf('xyz.hdf5', 'xyz')
twobody = pd.read_hdf('twobody.hdf5', 'twobody')
Explanation: Load the twobody data
End of explanation
from scipy.integrate import cumtrapz
def pcf(A, B, a, twobody, dr=0.05, start=0.5, end=7.5):
'''
Pair correlation function between two atom types.
'''
distances = twobody.loc[(twobody['symbols'] == A + B) |
(twobody['symbols'] == B + A), 'distance'].values
bins = np.arange(start, end, dr)
bins = np.append(bins, bins[-1] + dr)
hist, bins = np.histogram(distances, bins)
#...
#...
# r = ?
# g = ?
# n = ?
return pd.DataFrame.from_dict({'r': None, 'g': None, 'n': None})
%load -s pcf, snippets/pcf.py
Explanation: The Pair Correlation Function (or Radial Distribution Function)
Nice picture/description here
Basically we want to compute the following for a given pair of atom symbols ($A, B$):
\begin{equation}
g_{AB}\left(r\right) = \frac{V}{4\pi r^{2}\Delta r MN_{A}N_{B}}\sum_{m=1}^{M}\sum_{a=1}^{N_{A}}\sum_{b=1}^{N_{B}}Q_{m}\left(r_{a}, r_{b}; r, \Delta r\right)
\end{equation}
\begin{equation}
Q_{m}\left(r_{a}, r_{b}; r, \Delta r\right) = \begin{cases}
1\ \ if\ r - \frac{\Delta r}{2} \le \left|r_{a} - r_{b}\right| \lt r + \frac{\Delta r}{2}\\
0\ \ otherwise
\end{cases}
\end{equation}
Note that that is the analytical form of the equation (meaning continuous values for r).
As a consequence the denominator is simplified using an approximation for a volume of a
spherical shell when $\Delta r$ is small.
Note:
\begin{equation}
\frac{4}{3}\pi\left(r_{i+1}^{3} - r_{i}^{3}\right) \approx 4\pi r_{i}^{2}\Delta r
\end{equation}
Computationally things will be a bit simpler...the summations are simply a histogram and there is no need to make the approximation above.
Algorithm:
Select the distances of interest
Compute the distance histogram
Multiply by the normalization constant
\begin{equation}
\frac{V}{4\pi r^{2}\Delta r MN_{A}N_{B}} \equiv \frac{volume}{\left(distance\ count\right)\left(4 / 3 \pi\right)\left(r_{i+1}^{3} - r_{i}^{3}\right)}
\end{equation}
Let's also compute the normalized integration of $g_{AB}(r)$ which returns the pairwise count with respect to distance:
\begin{equation}
n(r) = \rho 4\pi r^2 g_{AB}(r)
\end{equation}
End of explanation
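# NOTE: added sketch, not part of the original notebook -- one way to fill in the
# missing pieces of pcf above. It assumes the simulation cell is cubic with edge
# length a (so V = a**3) and uses a rough density estimate; the real solution is
# what %load pulls in from snippets/pcf.py.
def pcf_sketch(A, B, a, twobody, dr=0.05, start=0.5, end=7.5):
    # Select the distances of interest and histogram them, as in the skeleton above
    distances = twobody.loc[(twobody['symbols'] == A + B) |
                            (twobody['symbols'] == B + A), 'distance'].values
    bins = np.arange(start, end, dr)
    bins = np.append(bins, bins[-1] + dr)
    hist, bins = np.histogram(distances, bins)
    # Bin midpoints play the role of r
    r = (bins[:-1] + bins[1:]) / 2
    # Normalization: volume / (distance count * exact spherical shell volume)
    shell = 4 / 3 * np.pi * (bins[1:]**3 - bins[:-1]**3)
    g = hist * a**3 / (len(distances) * shell)
    # Cumulative pairwise count n(r) = rho * integral of 4 pi r'^2 g(r') dr'
    rho = len(distances) / a**3  # rough density estimate (assumption)
    n = cumtrapz(4 * np.pi * rho * r**2 * g, r, initial=0)
    return pd.DataFrame.from_dict({'r': r, 'g': g, 'n': n})
Explanation: A hedged sketch of a completed pcf following the algorithm above; it assumes a is the cubic cell edge length and treats the total pair count as the normalizing count, so it is illustrative rather than the notebook's actual implementation.
End of explanation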
A = 'O'
B = 'O'
df = pcf(A, B, a, twobody)
Explanation: Compute!
End of explanation
import seaborn as sns
sns.set_context('poster', font_scale=1.3)
sns.set_style('white')
sns.set_palette('colorblind')
# Lets modify a copy of the data for plotting
plotdf = df.set_index('r')
plotdf.columns = ['PCF', 'Pair Count']
# Generate the plot
ax = plotdf.plot(secondary_y='Pair Count')
ax.set_ylabel('Pair Correlation Function ({0}, {1})'.format(A, B))
ax.right_ax.set_ylabel('Pair Count ({0}, {1})'.format(A, B))
ax.set_xlabel('Distance ($\AA$)')
patches, labels = ax.get_legend_handles_labels()
patches2, labels2 = ax.right_ax.get_legend_handles_labels()
legend = ax.legend(patches+patches2, labels+labels2, loc='upper center', frameon=True)
frame = legend.get_frame()
frame.set_facecolor('white')
frame.set_edgecolor('black')
Explanation: Plot!
End of explanation
df1 = pcf('O', 'O', a, twobody)
df2 = pcf('O', 'H', a, twobody)
df3 = pcf('H', 'H', a, twobody)
df = pd.concat((df1, df2, df3), axis=1)
df.columns = ['$g_{OO}$', '$n_{OO}$', '$r$', '$g_{OH}$', '$n_{OH}$', 'del1', '$g_{HH}$', '$n_{HH}$', 'del2']
del df['del1']
del df['del2']
df.set_index('$r$', inplace=True)
ax = df.plot(secondary_y=['$n_{OO}$', '$n_{OH}$', '$n_{HH}$'])
ax.set_ylabel('Pair Correlation Function ($g_{AB}$)')
ax.right_ax.set_ylabel('Pairwise Count ($n_{AB}$)')
ax.set_xlabel('Distance ($\AA$)')
ax.set_ylim(0, 5)
ax.right_ax.set_ylim(0, 20)
patches, labels = ax.get_legend_handles_labels()
patches2, labels2 = ax.right_ax.get_legend_handles_labels()
legend = ax.legend(patches+patches2, labels+labels2, loc='upper right', frameon=True)
frame = legend.get_frame()
frame.set_facecolor('white')
frame.set_edgecolor('black')
# Save the figure
fig = ax.get_figure()
fig.savefig('pcf.pdf')
# Save the pcf data
store = pd.HDFStore('pcf.hdf5', mode='w')
store.put('pcf', df)
store.close()
Explanation: Save everything for later
First generate a beautiful graph...
End of explanation |
980 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Styling
This document is written as a Jupyter Notebook, and can be viewed or downloaded here.
You can apply conditional formatting, the visual styling of a DataFrame
depending on the data within, by using the DataFrame.style property.
This is a property that returns a Styler object, which has
useful methods for formatting and displaying DataFrames.
The styling is accomplished using CSS.
You write "style functions" that take scalars, DataFrames or Series, and return like-indexed DataFrames or Series with CSS "attribute
Step1: Here's a boring example of rendering a DataFrame, without any (visible) styles
Step2: Note
Step4: The row0_col2 is the identifier for that particular cell. We've also prepended each row/column identifier with a UUID unique to each DataFrame so that the style from one doesn't collide with the styling from another within the same notebook or page (you can set the uuid if you'd like to tie together the styling of two DataFrames).
When writing style functions, you take care of producing the CSS attribute / value pairs you want. Pandas matches those up with the CSS classes that identify each cell.
Let's write a simple style function that will color negative numbers red and positive numbers black.
Step5: In this case, the cell's style depends only on its own value.
That means we should use the Styler.applymap method which works elementwise.
Step6: Notice the similarity with the standard df.applymap, which operates on DataFrames elementwise. We want you to be able to reuse your existing knowledge of how to interact with DataFrames.
Notice also that our function returned a string containing the CSS attribute and value, separated by a colon just like in a <style> tag. This will be a common theme.
Finally, the input shapes matched. Styler.applymap calls the function on each scalar input, and the function returns a scalar output.
Now suppose you wanted to highlight the maximum value in each column.
We can't use .applymap anymore since that operated elementwise.
Instead, we'll turn to .apply which operates columnwise (or rowwise using the axis keyword). Later on we'll see that something like highlight_max is already defined on Styler so you wouldn't need to write this yourself.
Step7: In this case the input is a Series, one column at a time.
Notice that the output shape of highlight_max matches the input shape, an array with len(s) items.
We encourage you to use method chains to build up a style piecewise, before finally rending at the end of the chain.
Step8: Above we used Styler.apply to pass in each column one at a time.
<span style="background-color
Step9: When using Styler.apply(func, axis=None), the function must return a DataFrame with the same index and column labels.
Step10: Building Styles Summary
Style functions should return strings with one or more CSS attribute
Step11: For row and column slicing, any valid indexer to .loc will work.
Step12: Only label-based slicing is supported right now, not positional.
If your style function uses a subset or axis keyword argument, consider wrapping your function in a functools.partial, partialing out that keyword.
python
my_func2 = functools.partial(my_func, subset=42)
Finer Control
Step13: Use a dictionary to format specific columns.
Step14: Or pass in a callable (or dictionary of callables) for more flexible handling.
Step15: You can format the text displayed for missing values by na_rep.
Step16: These formatting techniques can be used in combination with styling.
Step17: Builtin styles
Finally, we expect certain styling functions to be common enough that we've included a few "built-in" to the Styler, so you don't have to write them yourself.
Step18: You can create "heatmaps" with the background_gradient method. These require matplotlib, and we'll use Seaborn to get a nice colormap.
Step19: Styler.background_gradient takes the keyword arguments low and high. Roughly speaking these extend the range of your data by low and high percent so that when we convert the colors, the colormap's entire range isn't used. This is useful so that you can actually read the text still.
Step20: There's also .highlight_min and .highlight_max.
Step21: Use Styler.set_properties when the style doesn't actually depend on the values.
Step22: Bar charts
You can include "bar charts" in your DataFrame.
Step23: New in version 0.20.0 is the ability to customize further the bar chart
Step26: The following example aims to give a highlight of the behavior of the new align options
Step27: Sharing styles
Say you have a lovely style built up for a DataFrame, and now you want to apply the same style to a second DataFrame. Export the style with df1.style.export, and import it on the second DataFrame with df1.style.set
Step28: Notice that you're able to share the styles even though they're data aware. The styles are re-evaluated on the new DataFrame they've been used upon.
Other Options
You've seen a few methods for data-driven styling.
Styler also provides a few other options for styles that don't depend on the data.
precision
captions
table-wide styles
missing values representation
hiding the index or columns
Each of these can be specified in two ways
Step29: Or through a set_precision method.
Step30: Setting the precision only affects the printed number; the full-precision values are always passed to your style functions. You can always use df.round(2).style if you'd prefer to round from the start.
Captions
Regular table captions can be added in a few ways.
Step31: Table styles
The next option you have are "table styles".
These are styles that apply to the table as a whole, but don't look at the data.
Certain stylings, including pseudo-selectors like
Step32: table_styles should be a list of dictionaries.
Each dictionary should have the selector and props keys.
The value for selector should be a valid CSS selector.
Recall that all the styles are already attached to an id, unique to
each Styler. This selector is in addition to that id.
The value for props should be a list of tuples of ('attribute', 'value').
table_styles are extremely flexible, but not as fun to type out by hand.
We hope to collect some useful ones either in pandas, or preferable in a new package that builds on top the tools here.
table_styles can be used to add column and row based class descriptors. For large tables this can increase performance by avoiding repetitive individual css for each cell, and it can also simplify style construction in some cases.
If table_styles is given as a dictionary each key should be a specified column or index value and this will map to specific class CSS selectors of the given column or row.
Note that Styler.set_table_styles will overwrite existing styles but can be chained by setting the overwrite argument to False.
Step33: Missing values
You can control the default missing values representation for the entire table through set_na_rep method.
Step34: Hiding the Index or Columns
The index can be hidden from rendering by calling Styler.hide_index. Columns can be hidden from rendering by calling Styler.hide_columns and passing in the name of a column, or a slice of columns.
Step35: CSS classes
Certain CSS classes are attached to cells.
Index and Column names include index_name and level<k> where k is its level in a MultiIndex
Index label cells include
row_heading
row<n> where n is the numeric position of the row
level<k> where k is the level in a MultiIndex
Column label cells include
col_heading
col<n> where n is the numeric position of the column
level<k> where k is the level in a MultiIndex
Blank cells include blank
Data cells include data
Limitations
DataFrame only (use Series.to_frame().style)
The index and columns must be unique
No large repr, and performance isn't great; this is intended for summary DataFrames
You can only style the values, not the index or columns (except with table_styles above)
You can only apply styles, you can't insert new HTML entities
Some of these will be addressed in the future.
Performance can suffer when adding styles to each cell in a large DataFrame.
It is recommended to apply table or column based styles where possible to limit overall HTML length, as well as setting a shorter UUID to avoid unnecessary repeated data transmission.
Terms
Style function
Step36: Export to Excel
New in version 0.20.0
<span style="color
Step37: A screenshot of the output
Step38: We'll use the following template
Step39: Now that we've created a template, we need to set up a subclass of Styler that
knows about it.
Step40: Notice that we include the original loader in our environment's loader.
That's because we extend the original template, so the Jinja environment needs
to be able to find it.
Now we can use that custom styler. It's __init__ takes a DataFrame.
Step41: Our custom template accepts a table_title keyword. We can provide the value in the .render method.
Step42: For convenience, we provide the Styler.from_custom_template method that does the same as the custom subclass.
Step43: Here's the template structure
Step44: See the template in the GitHub repo for more details. | Python Code:
import matplotlib.pyplot
# We have this here to trigger matplotlib's font cache stuff.
# This cell is hidden from the output
import pandas as pd
import numpy as np
np.random.seed(24)
df = pd.DataFrame({'A': np.linspace(1, 10, 10)})
df = pd.concat([df, pd.DataFrame(np.random.randn(10, 4), columns=list('BCDE'))],
axis=1)
df.iloc[3, 3] = np.nan
df.iloc[0, 2] = np.nan
Explanation: Styling
This document is written as a Jupyter Notebook, and can be viewed or downloaded here.
You can apply conditional formatting, the visual styling of a DataFrame
depending on the data within, by using the DataFrame.style property.
This is a property that returns a Styler object, which has
useful methods for formatting and displaying DataFrames.
The styling is accomplished using CSS.
You write "style functions" that take scalars, DataFrames or Series, and return like-indexed DataFrames or Series with CSS "attribute: value" pairs for the values.
These functions can be incrementally passed to the Styler which collects the styles before rendering.
Building styles
Pass your style functions into one of the following methods:
Styler.applymap: elementwise
Styler.apply: column-/row-/table-wise
Both of those methods take a function (and some other keyword arguments) and applies your function to the DataFrame in a certain way.
Styler.applymap works through the DataFrame elementwise.
Styler.apply passes each column or row into your DataFrame one-at-a-time or the entire table at once, depending on the axis keyword argument.
For columnwise use axis=0, rowwise use axis=1, and for the entire table at once use axis=None.
For Styler.applymap your function should take a scalar and return a single string with the CSS attribute-value pair.
For Styler.apply your function should take a Series or DataFrame (depending on the axis parameter), and return a Series or DataFrame with an identical shape where each value is a string with a CSS attribute-value pair.
Let's see some examples.
End of explanation
df.style
Explanation: Here's a boring example of rendering a DataFrame, without any (visible) styles:
End of explanation
df.style.highlight_null().render().split('\n')[:10]
Explanation: Note: The DataFrame.style attribute is a property that returns a Styler object. Styler has a _repr_html_ method defined on it so they are rendered automatically. If you want the actual HTML back for further processing or for writing to file call the .render() method which returns a string.
The above output looks very similar to the standard DataFrame HTML representation. But we've done some work behind the scenes to attach CSS classes to each cell. We can view these by calling the .render method.
End of explanation
def color_negative_red(val):
Takes a scalar and returns a string with
the css property `'color: red'` for negative
strings, black otherwise.
color = 'red' if val < 0 else 'black'
return 'color: %s' % color
Explanation: The row0_col2 is the identifier for that particular cell. We've also prepended each row/column identifier with a UUID unique to each DataFrame so that the style from one doesn't collide with the styling from another within the same notebook or page (you can set the uuid if you'd like to tie together the styling of two DataFrames).
When writing style functions, you take care of producing the CSS attribute / value pairs you want. Pandas matches those up with the CSS classes that identify each cell.
Let's write a simple style function that will color negative numbers red and positive numbers black.
End of explanation
s = df.style.applymap(color_negative_red)
s
Explanation: In this case, the cell's style depends only on its own value.
That means we should use the Styler.applymap method which works elementwise.
End of explanation
def highlight_max(s):
'''
highlight the maximum in a Series yellow.
'''
is_max = s == s.max()
return ['background-color: yellow' if v else '' for v in is_max]
df.style.apply(highlight_max)
Explanation: Notice the similarity with the standard df.applymap, which operates on DataFrames elementwise. We want you to be able to reuse your existing knowledge of how to interact with DataFrames.
Notice also that our function returned a string containing the CSS attribute and value, separated by a colon just like in a <style> tag. This will be a common theme.
Finally, the input shapes matched. Styler.applymap calls the function on each scalar input, and the function returns a scalar output.
Now suppose you wanted to highlight the maximum value in each column.
We can't use .applymap anymore since that operated elementwise.
Instead, we'll turn to .apply which operates columnwise (or rowwise using the axis keyword). Later on we'll see that something like highlight_max is already defined on Styler so you wouldn't need to write this yourself.
End of explanation
df.style.\
applymap(color_negative_red).\
apply(highlight_max)
Explanation: In this case the input is a Series, one column at a time.
Notice that the output shape of highlight_max matches the input shape, an array with len(s) items.
We encourage you to use method chains to build up a style piecewise, before finally rendering at the end of the chain.
End of explanation
def highlight_max(data, color='yellow'):
'''
highlight the maximum in a Series or DataFrame
'''
attr = 'background-color: {}'.format(color)
if data.ndim == 1: # Series from .apply(axis=0) or axis=1
is_max = data == data.max()
return [attr if v else '' for v in is_max]
else: # from .apply(axis=None)
is_max = data == data.max().max()
return pd.DataFrame(np.where(is_max, attr, ''),
index=data.index, columns=data.columns)
Explanation: Above we used Styler.apply to pass in each column one at a time.
<span style="background-color: #DEDEBE">Debugging Tip: If you're having trouble writing your style function, try just passing it into <code style="background-color: #DEDEBE">DataFrame.apply</code>. Internally, <code style="background-color: #DEDEBE">Styler.apply</code> uses <code style="background-color: #DEDEBE">DataFrame.apply</code> so the result should be the same.</span>
What if you wanted to highlight just the maximum value in the entire table?
Use .apply(function, axis=None) to indicate that your function wants the entire table, not one column or row at a time. Let's try that next.
We'll rewrite our highlight-max to handle either Series (from .apply(axis=0 or 1)) or DataFrames (from .apply(axis=None)). We'll also allow the color to be adjustable, to demonstrate that .apply, and .applymap pass along keyword arguments.
End of explanation
df.style.apply(highlight_max, color='darkorange', axis=None)
Explanation: When using Styler.apply(func, axis=None), the function must return a DataFrame with the same index and column labels.
End of explanation
df.style.apply(highlight_max, subset=['B', 'C', 'D'])
Explanation: Building Styles Summary
Style functions should return strings with one or more CSS attribute: value delimited by semicolons. Use
Styler.applymap(func) for elementwise styles
Styler.apply(func, axis=0) for columnwise styles
Styler.apply(func, axis=1) for rowwise styles
Styler.apply(func, axis=None) for tablewise styles
And crucially the input and output shapes of func must match. If x is the input then func(x).shape == x.shape.
Finer control: slicing
Both Styler.apply, and Styler.applymap accept a subset keyword.
This allows you to apply styles to specific rows or columns, without having to code that logic into your style function.
The value passed to subset behaves similar to slicing a DataFrame.
A scalar is treated as a column label
A list (or series or numpy array)
A tuple is treated as (row_indexer, column_indexer)
Consider using pd.IndexSlice to construct the tuple for the last one.
End of explanation
df.style.applymap(color_negative_red,
subset=pd.IndexSlice[2:5, ['B', 'D']])
Explanation: For row and column slicing, any valid indexer to .loc will work.
End of explanation
df.style.format("{:.2%}")
Explanation: Only label-based slicing is supported right now, not positional.
If your style function uses a subset or axis keyword argument, consider wrapping your function in a functools.partial, partialing out that keyword.
python
my_func2 = functools.partial(my_func, subset=42)
Finer Control: Display Values
We distinguish the display value from the actual value in Styler.
To control the display value, the text is printed in each cell, use Styler.format. Cells can be formatted according to a format spec string or a callable that takes a single value and returns a string.
End of explanation
df.style.format({'B': "{:0<4.0f}", 'D': '{:+.2f}'})
Explanation: Use a dictionary to format specific columns.
End of explanation
df.style.format({"B": lambda x: "±{:.2f}".format(abs(x))})
Explanation: Or pass in a callable (or dictionary of callables) for more flexible handling.
End of explanation
df.style.format("{:.2%}", na_rep="-")
Explanation: You can format the text displayed for missing values by na_rep.
End of explanation
df.style.highlight_max().format(None, na_rep="-")
Explanation: These formatting techniques can be used in combination with styling.
End of explanation
df.style.highlight_null(null_color='red')
Explanation: Builtin styles
Finally, we expect certain styling functions to be common enough that we've included a few "built-in" to the Styler, so you don't have to write them yourself.
End of explanation
import seaborn as sns
cm = sns.light_palette("green", as_cmap=True)
s = df.style.background_gradient(cmap=cm)
s
Explanation: You can create "heatmaps" with the background_gradient method. These require matplotlib, and we'll use Seaborn to get a nice colormap.
End of explanation
# Uses the full color range
df.loc[:4].style.background_gradient(cmap='viridis')
# Compress the color range
(df.loc[:4]
.style
.background_gradient(cmap='viridis', low=.5, high=0)
.highlight_null('red'))
Explanation: Styler.background_gradient takes the keyword arguments low and high. Roughly speaking these extend the range of your data by low and high percent so that when we convert the colors, the colormap's entire range isn't used. This is useful so that you can actually read the text still.
End of explanation
df.style.highlight_max(axis=0)
Explanation: There's also .highlight_min and .highlight_max.
End of explanation
df.style.set_properties(**{'background-color': 'black',
'color': 'lawngreen',
'border-color': 'white'})
Explanation: Use Styler.set_properties when the style doesn't actually depend on the values.
End of explanation
df.style.bar(subset=['A', 'B'], color='#d65f5f')
Explanation: Bar charts
You can include "bar charts" in your DataFrame.
End of explanation
df.style.bar(subset=['A', 'B'], align='mid', color=['#d65f5f', '#5fba7d'])
Explanation: New in version 0.20.0 is the ability to customize further the bar chart: You can now have the df.style.bar be centered on zero or midpoint value (in addition to the already existing way of having the min value at the left side of the cell), and you can pass a list of [color_negative, color_positive].
Here's how you can change the above with the new align='mid' option:
End of explanation
import pandas as pd
from IPython.display import HTML
# Test series
test1 = pd.Series([-100,-60,-30,-20], name='All Negative')
test2 = pd.Series([10,20,50,100], name='All Positive')
test3 = pd.Series([-10,-5,0,90], name='Both Pos and Neg')
head =
<table>
<thead>
<th>Align</th>
<th>All Negative</th>
<th>All Positive</th>
<th>Both Neg and Pos</th>
</thead>
</tbody>
aligns = ['left','zero','mid']
for align in aligns:
row = "<tr><th>{}</th>".format(align)
for series in [test1,test2,test3]:
s = series.copy()
s.name=''
row += "<td>{}</td>".format(s.to_frame().style.bar(align=align,
color=['#d65f5f', '#5fba7d'],
width=100).render()) #testn['width']
row += '</tr>'
head += row
head+=
</tbody>
</table>
HTML(head)
Explanation: The following example aims to give a highlight of the behavior of the new align options:
End of explanation
df2 = -df
style1 = df.style.applymap(color_negative_red)
style1
style2 = df2.style
style2.use(style1.export())
style2
Explanation: Sharing styles
Say you have a lovely style built up for a DataFrame, and now you want to apply the same style to a second DataFrame. Export the style with df1.style.export, and import it on the second DataFrame with df1.style.set
End of explanation
with pd.option_context('display.precision', 2):
html = (df.style
.applymap(color_negative_red)
.apply(highlight_max))
html
Explanation: Notice that you're able to share the styles even though they're data aware. The styles are re-evaluated on the new DataFrame they've been used upon.
Other Options
You've seen a few methods for data-driven styling.
Styler also provides a few other options for styles that don't depend on the data.
precision
captions
table-wide styles
missing values representation
hiding the index or columns
Each of these can be specified in two ways:
A keyword argument to Styler.__init__
A call to one of the .set_ or .hide_ methods, e.g. .set_caption or .hide_columns
The best method to use depends on the context. Use the Styler constructor when building many styled DataFrames that should all share the same properties. For interactive use, the .set_ and .hide_ methods are more convenient.
Precision
You can control the precision of floats using pandas' regular display.precision option.
End of explanation
df.style\
.applymap(color_negative_red)\
.apply(highlight_max)\
.set_precision(2)
Explanation: Or through a set_precision method.
End of explanation
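# Added illustration, not part of the original document: the same display options
# can be handed to the Styler constructor instead of the .set_ methods.
from pandas.io.formats.style import Styler
Styler(df, precision=2, caption="Constructed with keyword arguments")
Explanation: A small sketch of the first option listed earlier (keyword arguments to Styler.__init__); the precision and caption values here are arbitrary.
End of explanation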
df.style.set_caption('Colormaps, with a caption.')\
.background_gradient(cmap=cm)
Explanation: Setting the precision only affects the printed number; the full-precision values are always passed to your style functions. You can always use df.round(2).style if you'd prefer to round from the start.
Captions
Regular table captions can be added in a few ways.
End of explanation
from IPython.display import HTML
def hover(hover_color="#ffff99"):
return dict(selector="tr:hover",
props=[("background-color", "%s" % hover_color)])
styles = [
hover(),
dict(selector="th", props=[("font-size", "150%"),
("text-align", "center")]),
dict(selector="caption", props=[("caption-side", "bottom")])
]
html = (df.style.set_table_styles(styles)
.set_caption("Hover to highlight."))
html
Explanation: Table styles
The next option you have are "table styles".
These are styles that apply to the table as a whole, but don't look at the data.
Certain stylings, including pseudo-selectors like :hover can only be used this way.
These can also be used to set specific row or column based class selectors, as will be shown.
End of explanation
html = html.set_table_styles({
'B': [dict(selector='', props=[('color', 'green')])],
'C': [dict(selector='td', props=[('color', 'red')])],
}, overwrite=False)
html
Explanation: table_styles should be a list of dictionaries.
Each dictionary should have the selector and props keys.
The value for selector should be a valid CSS selector.
Recall that all the styles are already attached to an id, unique to
each Styler. This selector is in addition to that id.
The value for props should be a list of tuples of ('attribute', 'value').
table_styles are extremely flexible, but not as fun to type out by hand.
We hope to collect some useful ones either in pandas, or preferably in a new package that builds on top of the tools here.
table_styles can be used to add column and row based class descriptors. For large tables this can increase performance by avoiding repetitive individual css for each cell, and it can also simplify style construction in some cases.
If table_styles is given as a dictionary each key should be a specified column or index value and this will map to specific class CSS selectors of the given column or row.
Note that Styler.set_table_styles will overwrite existing styles but can be chained by setting the overwrite argument to False.
End of explanation
(df.style
.set_na_rep("FAIL")
.format(None, na_rep="PASS", subset=["D"])
.highlight_null("yellow"))
Explanation: Missing values
You can control the default missing values representation for the entire table through set_na_rep method.
End of explanation
df.style.hide_index()
df.style.hide_columns(['C','D'])
Explanation: Hiding the Index or Columns
The index can be hidden from rendering by calling Styler.hide_index. Columns can be hidden from rendering by calling Styler.hide_columns and passing in the name of a column, or a slice of columns.
End of explanation
from IPython.html import widgets
@widgets.interact
def f(h_neg=(0, 359, 1), h_pos=(0, 359), s=(0., 99.9), l=(0., 99.9)):
return df.style.background_gradient(
cmap=sns.palettes.diverging_palette(h_neg=h_neg, h_pos=h_pos, s=s, l=l,
as_cmap=True)
)
def magnify():
return [dict(selector="th",
props=[("font-size", "4pt")]),
dict(selector="td",
props=[('padding', "0em 0em")]),
dict(selector="th:hover",
props=[("font-size", "12pt")]),
dict(selector="tr:hover td:hover",
props=[('max-width', '200px'),
('font-size', '12pt')])
]
np.random.seed(25)
cmap = cmap=sns.diverging_palette(5, 250, as_cmap=True)
bigdf = pd.DataFrame(np.random.randn(20, 25)).cumsum()
bigdf.style.background_gradient(cmap, axis=1)\
.set_properties(**{'max-width': '80px', 'font-size': '1pt'})\
.set_caption("Hover to magnify")\
.set_precision(2)\
.set_table_styles(magnify())
Explanation: CSS classes
Certain CSS classes are attached to cells.
Index and Column names include index_name and level<k> where k is its level in a MultiIndex
Index label cells include
row_heading
row<n> where n is the numeric position of the row
level<k> where k is the level in a MultiIndex
Column label cells include
col_heading
col<n> where n is the numeric position of the column
level<k> where k is the level in a MultiIndex
Blank cells include blank
Data cells include data
Limitations
DataFrame only (use Series.to_frame().style)
The index and columns must be unique
No large repr, and performance isn't great; this is intended for summary DataFrames
You can only style the values, not the index or columns (except with table_styles above)
You can only apply styles, you can't insert new HTML entities
Some of these will be addressed in the future.
Performance can suffer when adding styles to each cell in a large DataFrame.
It is recommended to apply table or column based styles where possible to limit overall HTML length, as well as setting a shorter UUID to avoid unnecessary repeated data transmission.
Terms
Style function: a function that's passed into Styler.apply or Styler.applymap and returns values like 'css attribute: value'
Builtin style functions: style functions that are methods on Styler
table style: a dictionary with the two keys selector and props. selector is the CSS selector that props will apply to. props is a list of (attribute, value) tuples. A list of table styles passed into Styler.
Fun stuff
Here are a few interesting examples.
Styler interacts pretty well with widgets. If you're viewing this online instead of running the notebook yourself, you're missing out on interactively adjusting the color palette.
End of explanation
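# Added illustration, not part of the original document: the cell classes listed
# above can be targeted through table_styles. The selectors are assumptions based
# on that list; adjust them if your pandas version names the classes differently.
df.style.set_table_styles([
    dict(selector="th.col_heading", props=[("background-color", "#f7f7f9")]),
    dict(selector="td.data", props=[("font-family", "monospace")]),
])
Explanation: A sketch combining the documented CSS classes (col_heading, data, ...) with table_styles; the colors and fonts chosen are arbitrary.
End of explanation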
df.style.\
applymap(color_negative_red).\
apply(highlight_max).\
to_excel('styled.xlsx', engine='openpyxl')
Explanation: Export to Excel
New in version 0.20.0
<span style="color: red">Experimental: This is a new feature and still under development. We'll be adding features and possibly making breaking changes in future releases. We'd love to hear your feedback.</span>
Some support is available for exporting styled DataFrames to Excel worksheets using the OpenPyXL or XlsxWriter engines. CSS2.2 properties handled include:
background-color
border-style, border-width, border-color and their {top, right, bottom, left variants}
color
font-family
font-style
font-weight
text-align
text-decoration
vertical-align
white-space: nowrap
Only CSS2 named colors and hex colors of the form #rgb or #rrggbb are currently supported.
The following pseudo CSS properties are also available to set excel specific style properties:
number-format
End of explanation
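# Added illustration, not part of the original document: the number-format pseudo
# property mentioned above, applied to one column before exporting. The file name
# and target column are placeholders.
(df.style
   .applymap(lambda v: 'number-format: 0.00%', subset=['B'])
   .to_excel('styled_formats.xlsx', engine='openpyxl'))
Explanation: A minimal sketch of the number-format pseudo CSS property for Excel export; the format string and column choice are arbitrary.
End of explanation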
from jinja2 import Environment, ChoiceLoader, FileSystemLoader
from IPython.display import HTML
from pandas.io.formats.style import Styler
Explanation: A screenshot of the output:
Extensibility
The core of pandas is, and will remain, its "high-performance, easy-to-use data structures".
With that in mind, we hope that DataFrame.style accomplishes two goals
Provide an API that is pleasing to use interactively and is "good enough" for many tasks
Provide the foundations for dedicated libraries to build on
If you build a great library on top of this, let us know and we'll link to it.
Subclassing
If the default template doesn't quite suit your needs, you can subclass Styler and extend or override the template.
We'll show an example of extending the default template to insert a custom header before each table.
End of explanation
with open("templates/myhtml.tpl") as f:
print(f.read())
Explanation: We'll use the following template:
End of explanation
class MyStyler(Styler):
env = Environment(
loader=ChoiceLoader([
FileSystemLoader("templates"), # contains ours
Styler.loader, # the default
])
)
template = env.get_template("myhtml.tpl")
Explanation: Now that we've created a template, we need to set up a subclass of Styler that
knows about it.
End of explanation
MyStyler(df)
Explanation: Notice that we include the original loader in our environment's loader.
That's because we extend the original template, so the Jinja environment needs
to be able to find it.
Now we can use that custom styler. Its __init__ takes a DataFrame.
End of explanation
HTML(MyStyler(df).render(table_title="Extending Example"))
Explanation: Our custom template accepts a table_title keyword. We can provide the value in the .render method.
End of explanation
EasyStyler = Styler.from_custom_template("templates", "myhtml.tpl")
EasyStyler(df)
Explanation: For convenience, we provide the Styler.from_custom_template method that does the same as the custom subclass.
End of explanation
with open("templates/template_structure.html") as f:
structure = f.read()
HTML(structure)
Explanation: Here's the template structure:
End of explanation
# Hack to get the same style in the notebook as the
# main site. This is hidden in the docs.
from IPython.display import HTML
with open("themes/nature_with_gtoc/static/nature.css_t") as f:
css = f.read()
HTML('<style>{}</style>'.format(css))
Explanation: See the template in the GitHub repo for more details.
End of explanation |
981 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
THIS NOTEBOOK HAS BEEN MOVED
See https
Step2: Acquisition failure rate and multiple stars flag rate
Here we examine available statistics on the mean rate of the MS flag being set during guide star tracking and compares this to the model prediction of acquisition failure rate.
The time span used from 2014
Step3: Figure 1
Step5: Figure 2 - this plot shows that disabling MS-flag filtering is similar to making the star around 0.4 mags brighter (at a given acq fail rate). This is a significant improvement.
Worst case star catalog obsid 17728 currently in the LTS for around early April
ra, dec, roll = 174.301751, -1.487777, 258.450000
date = 2016
Step6: MS filtering enabled (as per past operations before FEB0816)
Step7: MS filtering disabled (as per current operations)
Step8: Takeway -- acquisition seems quite feasible with MS filtering disabled
IMPORTANT CAVEAT - no statement made about guide star tracking. To do this catalog we would definitely need MS filtering disabled for the whole observation.
Run-of-the-mill synthetic constrained catalog
This represents a more typical case of a catalog that requires a temperature cooler than -14.9 C. | Python Code:
from __future__ import division
import os
import matplotlib.pyplot as plt
from astropy.table import Table
import numpy as np
from Ska.DBI import DBI
%matplotlib inline
# Use development version of chandra_aca which has the new acq stats fit parameters
import sys
import os
sys.path.insert(0, os.path.join(os.environ['HOME'], 'git', 'chandra_aca'))
from chandra_aca import star_probs
Explanation: THIS NOTEBOOK HAS BEEN MOVED
See https://github.com/sot/mult_stars_flag/blob/master/mult_stars_flag_impact.ipynb for the current version. This one is left purely for the redirect for existing links in email.
Impact of disabling multiple stars status flag filtering
Prior to uplink of the image status flag patch and subsequent operational use starting in the FEB0816 loads, if the multiple stars flag was set on the readout prior to acquisition then the star would be rejected in ACA data processing. This would result in a failed acquisition even if the correct star was in fact acquired.
It was previously recognized that disabling the multiple stars status flag would produce a notable improvement in acquisition success rate. However, having now created a new model of acquisition success and carried out detailed analysis, the improvement is quite substantial.
This provides cautious optimism of significant relief for ACA-related thermal constraints for the near future.
Note that this assumes that MS-filtering can be disable for guide star tracking as well. A rather complex (and potentially incorrect) analysis has been done which demonstrates that this should not lead to unexpected safing actions (NSM or BSH). The SSAWG and community will need to evaluate to what extent we require that analysis to be independently verified versus accepting the risk of occasional safing actions.
End of explanation
def get_trak_stats(date='2014:180'):
Get relevant info from guide star tracking statistics from Sybase database.
This returns one record per guide star per obsid.
db = DBI(dbi='sybase', server='sybase', user='aca_read')
stats = db.fetchall('SELECT mult_star_samples, n_samples, aoacmag_median, obsid FROM trak_stats_data '
'WHERE kalman_datestart > "{}" '
'AND aoacmag_median is not NULL'
.format(date))
stats = Table(stats)
db.conn.close()
return stats
# Reading data from the database is slow, so cache in a FITS file
filename = 'mult_stars_flag_trak_stats.fits.gz'
if os.path.exists(filename):
stats = Table.read(filename)
else:
stats = get_trak_stats()
stats.write(filename)
# Select only stars in range 9.0 < mag < 11.0
mags = stats['aoacmag_median']
ok = (mags > 9) & (mags < 11)
stats = stats[ok]
mags = mags[ok]
# Compute fraction of samples
stats['frac_ms'] = stats['mult_star_samples'] / stats['n_samples']
# Bin the data using mean aggregation in 0.2 mag bins
stats['mag_bin'] = np.round(mags / 0.2) * 0.2
sg = stats.group_by('mag_bin')
sgm = sg.groups.aggregate(np.mean)
# Make the plot
plt.figure(1, figsize=(8, 5))
plt.clf()
randx = np.random.uniform(-0.05, 0.05, size=len(stats))
plt.plot(mags + randx, stats['frac_ms'], '.', alpha=0.5,
label='MS flag rate per obsid')
plt.plot(sgm['mag_bin'], sgm['frac_ms'], 'r', linewidth=5, alpha=0.7,
label='MS flag rate (0.2 mag bins)')
p_acqs = star_probs.acq_success_prob('2016:001', t_ccd=-15.0, mag=sgm['mag_bin'])
plt.plot(sgm['mag_bin'], 1 - p_acqs, 'g', linewidth=5,
label='Acq fail rate (model 2016:001, T=-15C)')
plt.legend(loc='upper left', fontsize='medium')
plt.xlabel('Magnitude')
plt.title('Acq fail rate compared to MS flag rate')
plt.grid()
plt.tight_layout()
Explanation: Acquisition failure rate and multiple stars flag rate
Here we examine available statistics on the mean rate of the MS flag being set during guide star tracking and compare this to the model prediction of acquisition failure rate.
The time span used runs from 2014:180 to the present (around 2016:030 in the original iteration). During that epoch the ACA CCD planning limit was -14 C and temperatures were relatively stable.
End of explanation
star_probs.__file__
star_probs.set_fit_pars(ms_enabled=False)
p_acqs_no_ms = star_probs.acq_success_prob('2016:001', t_ccd=-15.0, mag=sgm['mag_bin'])
plt.figure(1, figsize=(8, 5))
plt.clf()
plt.plot(sgm['mag_bin'], 1 - p_acqs, 'g', linewidth=5,
label='Acq fail rate (model 2016:001, T=-15C)')
plt.plot(sgm['mag_bin'], 1 - p_acqs_no_ms, 'r', linewidth=5,
label='Acq fail rate NO MS (model 2016:001, T=-15C)')
plt.arrow(10.7, 0.4, -0.4, 0.0, head_width=0.05, head_length=0.1, fc='k', ec='k')
plt.legend(loc='upper left', fontsize='medium')
plt.xlabel('Magnitude')
plt.title('Acq fail rate with (green) and without (red) MS-flag filtering')
plt.grid()
plt.tight_layout();
Explanation: Figure 1: the plot above demonstrates that (statistically) most of the acquisition failures below 11th mag are actually due to the multiple stars flag being set. Below about 10.0 mag nearly all of failures can be attributed to the MS flag.
Acquisition failure probabilities with and without MS-flag filtering
The SOTA model for acquisition probabilities was re-fit using acquisition
statistics that did a post-facto removal of the MS-flag filtering on board.
It was assumed that if a star were acquired at the correct position (within 5 arcsec)
and did not have ionizing radiation or saturated pixel flags set, then the
OBC would have identified it (aka successful acquisition).
Refitting is done in the fit_sota_model_probit_no_ms Jupyter notebook in this directory.
End of explanation
# Star catalog for obsid 17728
dat_str =
type agasc_id ra dec mag yag zag notes
BOT 646185704 173.7895 -0.7888 10.571 -1.01622E-02 -1.12012E-02 a3g4
BOT 646190208 174.5589 -1.1497 10.463 -6.67958E-03 3.21581E-03 a3g4
BOT 646190528 173.8661 -1.2423 10.549 -2.67450E-03 -8.30566E-03 a3g4
BOT 646190912 173.9066 -1.5759 9.349 2.88817E-03 -6.44565E-03 a1g1
BOT 646192600 174.4886 -1.0417 10.305 -8.28102E-03 1.63611E-03 a3g3
BOT 646193600 174.1094 -2.0234 10.045 9.83073E-03 -1.41511E-03 a3g4
GUI 646189648 174.8442 -1.9800 10.757 6.52457E-03 1.09910E-02 g5
GUI 646191600 174.9020 -1.7752 10.576 2.81966E-03 1.12644E-02 g5
ACQ 646189528 174.0391 -1.7808 10.536 5.92820E-03 -3.46607E-03 a4
ACQ 646190064 174.3127 -1.3096 10.629 -3.08574E-03 -4.35447E-04 a4
dat = Table.read(dat_str, format='ascii')
dat = dat[dat['type'] != 'GUI']
dat
Explanation: Figure 2 - this plot shows that disabling MS-flag filtering is similar to making the star around 0.4 mags brighter (at a given acq fail rate). This is a significant improvement.
Worst case star catalog obsid 17728 currently in the LTS for around early April
ra, dec, roll = 174.301751, -1.487777, 258.450000
date = 2016:092
maneuver error=30
dither = 8
End of explanation
# MS enabled case
star_probs.set_fit_pars(ms_enabled=True)
t_ccd = star_probs.t_ccd_warm_limit(dat['mag'], min_n_acq=(2, 0.008))[0]
print('CCD temperature must be below {:.2f} C'.format(t_ccd))
Explanation: MS filtering enabled (as per past operations before FEB0816)
End of explanation
# MS disabled case
star_probs.set_fit_pars(ms_enabled=False)
t_ccd = star_probs.t_ccd_warm_limit(dat['mag'], min_n_acq=(2, 0.008))[0]
print('CCD temperature must be below {:.2f} C'.format(t_ccd))
Explanation: MS filtering disabled (as per current operations)
End of explanation
# MS enabled case
star_probs.set_fit_pars(ms_enabled=True)
mags = [10.0, 10.2, 10.2, 9.3, 10.3, 10.0, 10.0, 10.0]
t_ccd = star_probs.t_ccd_warm_limit(mags, min_n_acq=(2, 0.008))[0]
print('CCD temperature must be below {:.2f} C'.format(t_ccd))
# MS enabled case
star_probs.set_fit_pars(ms_enabled=False)
t_ccd = star_probs.t_ccd_warm_limit(mags, min_n_acq=(2, 0.008))[0]
print('CCD temperature must be below {:.2f} C'.format(t_ccd))
Explanation: Takeaway -- acquisition seems quite feasible with MS filtering disabled
IMPORTANT CAVEAT - no statement made about guide star tracking. To do this catalog we would definitely need MS filtering disabled for the whole observation.
Run-of-the-mill synthetic constrained catalog
This represents a more typical case of a catalog that requires a temperature cooler than -14.9 C.
End of explanation |
982 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Resignation prediction using machine learning algorithms
by A. Zayer
1. Introduction
Employees retention, especially in large companies, is and still will be a hot topic. Considerable amounts of money and time are spent during the hiring and the training process,therefore, the ability to uderstand and forecast future resignations is of prime interest since this, could help decision makers and hiring managers to prevent such situations from appearing by taking the appropriate measures.
An exploratory-predictive analysis will be carried out in order to understand what impels employees to resign. The kaggle dataset released under the CC BY-SA 4.0 License is used for this purpose. The dataset has 14999 rows and 9 columns with the following names
Step1: 2.2 Data Preprocessing
The csv data file is loaded to the memory as a pandas dataframe.
Step2: Let's have a quick look at the first rows
Step3: and the last rows
Step4: Rows can be seen as instances of a class called employees, wehere columns represent attributes.
Step5: Data is a mixture of numerical and categorical values.
The column 'Departments' is a nominal variable with the following categories
Step6: The 'Salary' variable has 3 ordinal categories
Step7: Tha salary and departments columns need to be encoded in integer numbers in order to be handled with learning algorithms, which will be done in the predictive section of this study. We have also two binary variables namely 'Left' and 'Promotion_last_5years'
Step8: Check for missing or incomplete data
Step9: 3. Exploratory analysis
3.1 Summary statistics of the numerical features
Step10: The average monthly hours is around 200 hours/month, which is about 10 hours/day, assuming five business days per week.
In the past five years, around 24% of the employees left the company. This number is quit alarming and raises questions about the tenure policy of the company.
The'satisfaction level' and 'last evaluation' averages are 0.61 and 0.72 respectively. Although these variables are not always objective but considering that we have a large dataset, the noise associated with each employee judgment should be minimal when averaging over large numbers.
The average employees’ life cycle is about 3.5 years which means an average employee may complete this loop more than 15 times in his career if he keeps looking for work in companies similar to this one.
3.2 Data visualisation
Step11: Overworked employees tend to leave
Step12: especially if they are not well paid
Step13: and yes they are not happy
Step14: 'Satisfaction level' is a strong indicator on whether an employee will leave or stay.
Step15: The three separate clusters formed by leavers show that there is no guarantee that an employee will stay based on the sole fact that his superiors are happy with his work rate.
Employers and employees judge each other from different perspectives.
'Last evaluation' metric measures time, effort and work rate of the employees.
'Satisfaction level' metric on the other hand measures how fair the salary and benefits, if any, relative to the effort and time spent in the company.
Step16: The correlation plot confirms the observations mentioned earlier. The 'last evaluation' metric is highly correlated to the 'average monthly hours' and the 'number of projects'. The more projects you take and the more hours you work the happier your boss is.
The 'Left' feature is correlated with 'Satisfaction Level', but in a negative way, the lower the satisfaction level, the higher the probability of leaving.
4. Predictive Analysis
Goal
Step17: In order to build the feature matrix., we have to convert categorical values into binaries which will create extra columns indicating the presence or absence of a category with a value of 1 or 0, respectively.
Step18: The 'Departments' and 'Salary' columns have been expanded into five and three separate columns respectively.
Create the features matrix X and the target vector y.
Step19: 4.2 Create helpers
Step20: 4.3 Train and test models
4.3.1 Naive Bayes
Step21: 4.3.2 Logistic Regression
Step22: The logistic regression classifier performed better than the random Bayes, but with a score accuracy still under 0.80. Both models did not capture well the non linearities in the features arising from the heterogeneity of the data.
4.3.3 Random Forest
Step23: 4.3.4 Gradient Boosted Regression Trees
Step24: Random forest and gradient boosted algorithms are both ensemble learning algorithms that
combine multiple decision trees to create more powerful models.
They performed exceptionally well on this dataset. Not only the accuracy but also the precision and the recall metrics are quite high with these models.
The three most important features, extracted from the data using the gradient boosted algorithm, are | Python Code:
%matplotlib inline
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.calibration import CalibratedClassifierCV
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier as rfc
from sklearn import metrics
from sklearn.ensemble import GradientBoostingClassifier
from palettable import colorbrewer as cb
from IPython.display import display
from jupyterthemes import get_themes
from jupyterthemes.stylefx import set_nb_theme
themes = get_themes()
set_nb_theme(themes[0])
sns.set_style('whitegrid')
sns.set_context("notebook", font_scale=1.5, rc={"lines.linewidth": 2.5,'figure.figsize': (12, 9)})
pd.set_option('display.float_format', lambda x: '%.2f' % x)
Explanation: Resignation prediction using machine learning algorithms
by A. Zayer
1. Introduction
Employee retention, especially in large companies, is and will remain a hot topic. Considerable amounts of money and time are spent during the hiring and training process; therefore, the ability to understand and forecast future resignations is of prime interest, since this could help decision makers and hiring managers prevent such situations by taking the appropriate measures.
An exploratory-predictive analysis will be carried out in order to understand what impels employees to resign. The kaggle dataset released under the CC BY-SA 4.0 License is used for this purpose. The dataset has 14999 rows and 9 columns with the following names:
<pre>
|Satisfaction_level | Level of satisfaction (0-1)|
|Last_evaluation | Evaluation of employee performance (0-1)|
|Number_project | Number of projects completed while at work|
|Average_monthly_hours| Average monthly hours at workplace|
|Time_spend_company | Number of years spent in the company|
|Work_accident | Whether the employee had a workplace accident|
|Left | Whether the employee left the workplace or not (1 or 0) Factor|
|Promotion_last_5years| Whether the employee was promoted in the last five years|
|Departments | Department in which they work for|
|Salary | Relative level of salary (low med high)|
</pre>
2. Preprocessing
2.1 Import libraries
End of explanation
df = pd.read_csv('./data/hr.csv',sep=";") #Import data
df.shape # check how many rows and columns in the data
Explanation: 2.2 Data Preprocessing
The csv data file is loaded to the memory as a pandas dataframe.
End of explanation
df.head(10)
Explanation: Let's have a quick look at the first rows
End of explanation
df.tail(10)
Explanation: and the last rows
End of explanation
for key in (df.columns.values):
print(key)
Explanation: Rows can be seen as instances of a class called employees, where columns represent attributes.
End of explanation
print(df.Departments.value_counts())
Explanation: Data is a mixture of numerical and categorical values.
The column 'Departments' is a nominal variable with the following categories:
End of explanation
print(df.Salary.value_counts())
Explanation: The 'Salary' variable has 3 ordinal categories
End of explanation
print(df['Left'].value_counts())  # check the occurrence of the binary values
print(df['Promotion_last_5years'].value_counts())  # check the occurrence of the binary values
Explanation: The salary and departments columns need to be encoded as integers in order to be handled by learning algorithms, which will be done in the predictive section of this study. We also have two binary variables, namely 'Left' and 'Promotion_last_5years'.
End of explanation
np.count_nonzero(df.isnull())
Explanation: Check for missing or incomplete data
End of explanation
df.describe().T
Explanation: 3. Exploratory analysis
3.1 Summary statistics of the numerical features:
End of explanation
colors = cb.qualitative.Set3_12.hex_colors
colors1=cb.qualitative.Paired_11.hex_colors
colorz=['#EA8E83','#FFFFB3','#B3DE69','#FDB462']
colorz2=['#96B68D','#807885','#D1D3D4','#C7B5A7','#B5C2C9','#F2CF9A','#C58083']
colorz3=['#F6D3E5','#EA8E83']
labelz =["Satisfaction level","Last evaluation","Number of projects","Average monthly hours",
"Time spent company","Work accident","Left","Promotion last 5 years"]
left_labels =["Stayers","Leavers"]
salez = ["Satisfaction level",
"Last evaluation",
"Number of projects",
"Average monthly hours",
"Time spent company",
"Work accident" ,
"Left",
"Promotion last 5 years",
"Departments",
"Salary"]
ax = sns.countplot(
x='Salary',
data=df,
hue='Departments',
hue_order=df['Departments'].value_counts().index,
palette= colors)
_ = ax.set_xlabel('Salary')
_ = ax.set_ylabel('Number of employees')
_ = ax.set_title('Salaries distribution')
_ = plt.legend(bbox_to_anchor=(1.02, 1.0), loc=2, borderaxespad=0.)
ax = sns.boxplot(
y='Departments',#Column to split upon
x='Average_montly_hours',# Column to plot
data=df,
hue='Left',
width=0.35,
fliersize=5,
palette=colorz3,
flierprops={
'marker': '.'})
_ = ax.set_xlabel('Average monthly hours')
_ = ax.set_ylabel('Departments')
handles, labels = ax.get_legend_handles_labels()
l = plt.legend(handles[0:2], left_labels[0:2], bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
Explanation: The average monthly hours is around 200 hours/month, which is about 10 hours/day, assuming five business days per week.
In the past five years, around 24% of the employees left the company. This number is quit alarming and raises questions about the tenure policy of the company.
The'satisfaction level' and 'last evaluation' averages are 0.61 and 0.72 respectively. Although these variables are not always objective but considering that we have a large dataset, the noise associated with each employee judgment should be minimal when averaging over large numbers.
The average employees’ life cycle is about 3.5 years which means an average employee may complete this loop more than 15 times in his career if he keeps looking for work in companies similar to this one.
3.2 Data visualisation
End of explanation
ax = sns.countplot(
x='Salary',
hue='Left',
data=df,
palette=colorz3)
_ = ax.set_xlabel('Salary')
_ = ax.set_ylabel('Employees')
_ = plt.legend(bbox_to_anchor=(1.02, 1.0), loc=2, borderaxespad=0.,labels={"Stayers","Leavers"})
Explanation: Overworked employees tend to leave
End of explanation
# satisfaction level among the leavers and the Stayers in different departments
from numpy import median
ax = sns.barplot(
y='Departments',
x='Satisfaction_level',
data=df,
ci=None,
hue='Left',
estimator=median,
palette=colorz3)
_ = ax.set_ylabel('Departments')
_ = ax.set_xlabel('Satisfaction_level')
_ = plt.legend(bbox_to_anchor=(1.02, 1.0), loc=2, borderaxespad=0.,labels={"Stayers","Leavers"})
Explanation: especially if they are not well paid
End of explanation
sns.factorplot(x="Satisfaction_level",
y="Departments",
hue="Left",
row="Salary",
data=df[df.Departments.notnull()],
kind="box",
aspect=3,
palette=colorz3,
legend=False);
l = plt.legend(handles[0:2],
left_labels[0:2],
bbox_to_anchor=(1.05, 1),
loc=2,
borderaxespad=0.)
Explanation: and yes they are not happy
End of explanation
l_sat = df.loc[df["Left"] == 1]["Satisfaction_level"]
s_sat = df.loc[df["Left"] == 0]["Satisfaction_level"]
l_ev = df.loc[df["Left"] == 1]["Last_evaluation"]
s_ev = df.loc[df["Left"] == 0]["Last_evaluation"]
plt.figure(figsize=(14,10))
plt.xlabel("Satisfaction level")
plt.ylabel("Last evaluation")
scat_s = plt.scatter(s_sat, s_ev, color=colorz3[0])
scat_l = plt.scatter(l_sat, l_ev, color=colorz3[1])
l = plt.legend(handles[0:2],
left_labels[0:2],
bbox_to_anchor=(1.05, 1),
loc=2,
borderaxespad=0.)
Explanation: 'Satisfaction level' is a strong indicator on whether an employee will leave or stay.
End of explanation
correlation = df.corr()
g=sns.heatmap(correlation, vmax=1, square=True,annot=True,cmap='plasma')
zz1=np.transpose(labelz)
zz2=np.transpose(labelz[::-1])
g.set(xticklabels=zz1);
g.set(yticklabels=zz2);
Explanation: The three separate clusters formed by leavers show that there is no guarantee that an employee will stay based on the sole fact that his superiors are happy with his work rate.
Employers and employees judge each other from different perspectives.
'Last evaluation' metric measures time, effort and work rate of the employees.
'Satisfaction level' metric on the other hand measures how fair the salary and benefits, if any, relative to the effort and time spent in the company.
End of explanation
df.columns
Explanation: The correlation plot confirms the observations mentioned earlier. The 'last evaluation' metric is highly correlated with the 'average monthly hours' and the 'number of projects': the more projects you take on and the more hours you work, the happier your boss is.
The 'Left' feature is negatively correlated with 'satisfaction level': the lower the satisfaction level, the higher the probability of leaving.
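To rank the linear relationships with attrition directly, a small sketch using the correlation matrix computed above (this assumes 'Left' is encoded numerically as 0/1):
correlation['Left'].drop('Left').sort_values()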
4. Predictive Analysis
Goal: Extraction and classification of main factors causing people to leave the company
4.1 Scale and Split the data
End of explanation
df_clf = pd.get_dummies(df)
df_clf.head()
labels =df_clf.columns
labels
Explanation: In order to build the feature matrix, we have to convert categorical values into binaries, which will create extra columns indicating the presence or absence of a category with a value of 1 or 0, respectively.
End of explanation
y = df_clf['Left'].values
df_clf = df_clf.drop(['Left'],axis=1)
X = df_clf.values
from sklearn.preprocessing import StandardScaler
X= StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30)
Explanation: The 'Departments' and 'Salary' columns have been expanded into five and three separate columns respectively.
Create the features matrix X and the target vector y.
End of explanation
def separator():
print(" ")
print('*********************************************************************')
print(" ")
# function to print classification metrics
def printz(predict_train,predict_test):
print('Performance on Training Data')
# Accuracy
print("Accuracy: {0:.4f}".format(metrics.accuracy_score(y_train, predict_train)))
separator()
print('Performance on Testing Data')
# testing metrics
print("Accuracy: {0:.4f}".format(metrics.accuracy_score(y_test, predict_test)))
separator()
print('Metrics')
print("Confusion Matrix")
# Note the use of labels to place 1=True in the upper left and 0=False in the lower right
print("{0}".format(metrics.confusion_matrix(y_test, predict_test, labels=[1, 0])))
separator()
print("Classification Report")
print(metrics.classification_report(y_test, predict_test, labels=[1,0]))
# model fitting
def modelz(model):
model.fit(X_train, y_train.ravel())
# predict values using the training data
predict_train = model.predict(X_train)
# predict values using the testing data
predict_test = model.predict(X_test)
#printz(predict_train,predict_test,y_train,y_test)
printz(predict_train,predict_test)
# printz(predict_train,predict_test,y_train,y_test)
def classifierz(typz):
# Instantiate models and set the parameters
if typz==1:
# create a Gaussian Naive Bayes model object
model = GaussianNB()
modelz(model)
if typz==2:
# create a LogisticRegression model object
model =LogisticRegression(C=0.7, random_state=42)
modelz(model)
if typz==3:
# create a Random Forest Classifier model object
model = rfc(random_state=42)
modelz(model)
if typz==4:
model = GradientBoostingClassifier(random_state=0, learning_rate=0.1, max_depth=6)
modelz(model)
importances = pd.DataFrame({'feature':df_clf.columns,
'importance':model.feature_importances_})
importances = importances.sort_values('importance',ascending=False).set_index('feature')
importances = importances[importances.importance>=0.1]
importances.index = [x.strip().replace('_', ' ') for x in importances.index]
print (importances)
importances.plot(kind = 'barh', x = importances.index, figsize = (10,4),
color='#d65f5f', legend=False, title = "Importance factors")
Explanation: 4.2 Create helpers
End of explanation
clf_id = 1 # 1 to run the Naive bayes classifier
classifierz(clf_id)
Explanation: 4.3 Train and test models
4.3.1 Naive Bayes
End of explanation
clf_id = 2 # 2 to run the logistic regression classifier
classifierz(clf_id)
Explanation: 4.3.2 Logistic Regression
End of explanation
clf_id = 3 # 3 to run the random forest classifier
classifierz(clf_id)
Explanation: The logistic regression classifier performed better than naive Bayes, but with an accuracy score still under 0.80. Neither model captured well the nonlinearities in the features arising from the heterogeneity of the data.
4.3.3 Random Forest
End of explanation
clf_id = 4 # 4 to run the gradient boosted regression trees
classifierz(clf_id)
Explanation: 4.3.4 Gradient Boosted Regression Trees
End of explanation
# Predict who will leave with a probability of roughly 50% or more
rf_model = rfc(n_estimators=10)
Mr_x = cross_val_predict(rf_model, X, y, cv=5,
method='predict_proba')
Mr_x = pd.DataFrame(Mr_x[0:,1])
Mr_x.columns = ['prob_leaving']
Mr_x_prob= pd.concat([df, Mr_x], axis=1)
Mr_x_prob= Mr_x_prob[(Mr_x_prob["Left"] == 0)]
Mr_x_prob= Mr_x_prob[(Mr_x_prob["prob_leaving"] >= 0.49)]
Mr_x_prob.sort_values(by='prob_leaving', ascending=False, inplace=True)
wl = Mr_x_prob[['Number_project','Average_montly_hours',
'Time_spend_company', 'Work_accident',
'Salary','prob_leaving']]
wl.style.bar(subset=['prob_leaving'], color='#d65f5f')
Explanation: Random forest and gradient boosted algorithms are both ensemble learning algorithms that
combine multiple decision trees to create more powerful models.
They performed exceptionally well on this dataset. Not only the accuracy but also the precision and the recall metrics are quite high with these models.
The three most important features, extracted from the data using the gradient boosted algorithm, are:
* Satisfaction level
* Average monthly hours
* Last evaluation
5. Predict who will leave the company
End of explanation |
983 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Strings
Sequences of characters
Strings are character strings, that is, sequences of characters.
Determining the length of a sequence
The number of elements in the sequence (i.e. the number of characters) can be determined with the function len()
Step1: Addressing individual elements
Each element in the sequence can be addressed individually
Step2: Try it out
What happens if I access satz[10]?
Slicing
Step3: If the first value is 0, it can be omitted
Step4: If the second value is omitted, that is equivalent to "up to the end of the string"
Step5: The end of a string
In most programming languages you have to access the last element of a string like this
Step6: In Python there is an elegant alternative for this
Step7: Exercise 1
Let's write a program that
1. Prompts for a name
1. Assigns this input to a variable
1. Produces the following output
Step8: The for loop
When programming, a statement or a series of statements often has to be repeated, for example for every element of a sequence (e.g. for every character of a string). A for loop is ideally suited for this purpose
Step9: This construct (for element in sequence) works for every data type that can deliver one element after another. Such an object is called an iterable. Iterables are plentiful in Python, so in this way you can iterate not only over the characters of a string, but also, for example, over the elements of a list, the lines of a file, or simply over a sequence of numbers
Step10: Exercise
Let's use a loop to compute the sum of all numbers between 1 and 50000
Nested loops
You can nest two (or more; usually not recommended) loops inside each other. This lets you, for example, combine elements from two sequences with each other
Step11: Working with files
Before you can read from or write to a file, the file must be opened with the open() function. open() expects at least one argument
Step12: If necessary, the encoding of the file can also be specified explicitly
Step13: When we no longer need the file, it should be closed again so that the operating system can release the resource.
Step14: The object representing the opened file offers several ways to access its content,
including an iterator that we can use in a for loop.
Step15: Opening a file in a context manager
It is good style to close an opened file again. But if, for example, the program crashes while the file is open, the close() method can no longer be executed. To avoid such problems, using a context manager is recommended
Step16: More methods for reading from a file
read()
The read() method reads the entire file content as a single string
readlines()
This method reads each line of the file into a list, one line per element
Step17: Exercise
Step18: Since a list, like a string, is a sequence type, many of the things we learned about strings also work with lists.
Determining the number of list elements
We can determine the number of elements in a list with the len() function
Step19: Addressing individual elements
Just as a single character of a string can be accessed via its index, a specific element of a list can be addressed
Step20: Slicing
Sublists can also be extracted
Step21: Modifying lists
Unlike strings, lists can be changed after they are created. We can add new elements at any time. The method append(VALUE) inserts a new element at the end of the list
Step22: But we can also insert elements at any position
Step23: Likewise, we can remove elements again. The pop() method removes the last element of the list.
Step24: pop() can also optionally be called with an argument
Step25: Replacing elements
The value of a list element can be changed at any time via its index
Step26: Multidimensional lists
We have seen that a list can contain arbitrary types. That includes lists, so we can also create a list of lists. Imagine we measure the temperature three times a day and want to store the readings. On the first day we have these 3 measurements
Step27: We can think of these temperatures as a table
Step28: Since the selected element is itself a list, we can also access its individual elements. We get the first reading of the second day like this
Step29: Doing arithmetic with list values
For numeric lists (int, float) Python provides functions that can be applied to all values of a list
Step30: Exercise
What is the average midday temperature?
Reading the lines of a file into a list
Let's return to our file of first names. As we have seen, the readlines() method returns the content of a file as a list of lines
Step31: The string methods rstrip(), lstrip() and strip()
As we can see, each list element ends with the newline character \n (line feed). We could remove it with slicing, for example, but the string type offers a method .rstrip() that does exactly what we need
Step32: rstrip() removes all whitespace (spaces, tabs, line breaks, etc.) at the end of a string. In addition there is lstrip(), which removes whitespace at the beginning of a string, and strip(), which removes whitespace on both sides.
Step33: Removing line breaks from a list of strings
Method 1
Step34: With this we have already met the first list method that does not exist for strings (which are immutable)
Step35: Method 2
Step36: Exercise
Write a list comprehension that multiplies every value of the list nums by itself. | Python Code:
satz = 'Ein String ist eine Zeichenkette.'
len(satz)
Explanation: Strings
Sequences of characters
Strings are character strings, that is, sequences of characters.
Determining the length of a sequence
The number of elements in the sequence (i.e. the number of characters) can be determined with the function len():
End of explanation
# A shorter string is easier to follow in the next examples
satz = "Ein String"
satz[0]
Explanation: Addressing individual elements
Each element in the sequence can be addressed individually:
Note that the first element of the sequence has index 0!
End of explanation
satz[0:3]
Explanation: Try it out
What happens if I access satz[10]?
Slicing: cutting a substring out of a string
By specifying two values separated by a colon (the index of the first element to be extracted and of the first element no longer extracted), you can extract a substring from a string:
End of explanation
satz[:3]
Explanation: If the first value is 0, it can be omitted:
End of explanation
satz[3:]
Explanation: If the second value is omitted, that is equivalent to "up to the end of the string":
End of explanation
satz[len(satz)-1]
Explanation: The end of a string
In most programming languages you have to access the last element of a string like this:
End of explanation
satz[-1]
Explanation: In Python there is an elegant alternative: you can use negative numbers to access individual characters from the end of the string:
The last character of the string therefore has index -1, the second to last -2, and so on.
End of explanation
output = ('Dein Name ist {} und besteht aus {} Zeichen. '
'Er beginnt mit {} und endet mit {}. Von hinten gelesen lautet er {}.')
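# One possible solution to Exercise 1 below (a sketch, assuming a Python 3 kernel where input() returns a string):
name = input('Please enter your name: ')
print(output.format(name, len(name), name[0], name[-1], name[::-1]))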
Explanation: Exercise 1
Let's write a program that
1. Prompts for a name
1. Assigns this input to a variable
1. Produces the following output:
"Your name is XXX and consists of n characters. It begins with X and ends with Y. Read backwards it is XXX."
End of explanation
for char in satz:
print(char)
Explanation: The for loop
When programming, a statement or a series of statements often has to be repeated, for example for every element of a sequence (e.g. for every character of a string). A for loop is ideally suited for this purpose:
End of explanation
for i in range(1, 11):
print(i)
Explanation: This construct (for element in sequence) works for every data type that can deliver one element after another. Such an object is called an iterable. Iterables are plentiful in Python, so in this way you can iterate not only over the characters of a string, but also, for example, over the elements of a list, the lines of a file, or simply over a sequence of numbers:
End of explanation
for i in range(1, 11):
for j in range(1, 11):
print('{} x {} = {}'.format(j, i, i * j))
Explanation: Exercise
Let's use a loop to compute the sum of all numbers between 1 and 50000
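One possible solution as a quick sketch (note that range(1, 50001) includes 50000):
total = 0
for i in range(1, 50001):
    total += i
print(total)  # 1250025000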
Nested loops
You can nest two (or more; usually not recommended) loops inside each other. This lets you, for example, combine elements from two sequences with each other:
End of explanation
fh = open('data/vornamen/names_short.txt')
Explanation: Working with files
Before you can read from or write to a file, the file must be opened with the open() function. open() expects at least one argument: the name of (and, if needed, the path to) the file:
End of explanation
fh = open('data/vornamen/names_short.txt', encoding='utf-8')
Explanation: If necessary, the encoding of the file can also be specified explicitly:
End of explanation
fh.close()
Explanation: When we no longer need the file, it should be closed again so that the operating system can release the resource.
End of explanation
fh = open('data/vornamen/names_short.txt', encoding='utf-8')
for line in fh:
print(line)
fh.close()
Explanation: The object representing the opened file offers several ways to access its content,
including an iterator that we can use in a for loop.
End of explanation
with open('data/vornamen/names_short.txt', encoding='utf-8') as fh:
for line in fh:
print(line)
Explanation: Opening a file in a context manager
It is good style to close an opened file again. But if, for example, the program crashes while the file is open, the close() method can no longer be executed. To avoid such problems, using a context manager is recommended:
End of explanation
with open('data/vornamen/names_short.txt', encoding='utf-8') as fh:
data = fh.read()
print(data)
Explanation: More methods for reading from a file
read()
The read() method reads the entire file content as a single string:
End of explanation
with open('data/vornamen/names_short.txt', encoding='utf-8') as fh:
data = fh.readlines()
print(data)
Explanation: readlines()
This method reads each line of the file into a list, one line per element:
End of explanation
students = ['Otto', 'Anna', 'Maria', 'Franz']
students
temperatures = [25, 28, 20, 26, 32]
temperatures
Explanation: Exercise: how many lines does the file names_short.txt have?
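One possible solution as a quick sketch, reusing the file opened earlier:
with open('data/vornamen/names_short.txt', encoding='utf-8') as fh:
    print(len(fh.readlines()))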
Lists
A list is another sequence type. A list contains a sequence of elements. The data type of an element does not matter; in other words, a list can store elements of arbitrary types:
End of explanation
len(students)
Explanation: Since a list, like a string, is a sequence type, many of the things we learned about strings also work with lists.
Determining the number of list elements
We can determine the number of elements in a list with the len() function:
End of explanation
students[0]
Explanation: Addressing individual elements
Just as a single character of a string can be accessed via its index, a specific element of a list can be addressed:
End of explanation
students[1:3]
Explanation: Slicing
Sublists can also be extracted:
End of explanation
print(students)
students.append('Otto')
print(students)
Explanation: Modifying lists
Unlike strings, lists can be changed after they are created. We can add new elements at any time. The method append(VALUE) inserts a new element at the end of the list:
End of explanation
students.insert(0, 'Berta')
students
Explanation: But we can also insert elements at any position:
End of explanation
next_student = students.pop()
print(next_student)
print(students)
Explanation: Likewise, we can remove elements again. The pop() method removes the last element of the list.
End of explanation
first = students.pop(0)
print(first)
students
Explanation: pop() can also optionally be called with an argument: a number corresponding to the index of the object to be removed:
End of explanation
print(students)
students[1] = 'Berta'
print(students)
with open('data/vornamen/names_short.txt', encoding='utf-8') as fh:
lines = fh.readlines()
print(len(lines))
Explanation: Replacing elements
The value of a list element can be changed at any time via its index:
End of explanation
temperatures = [
[17, 28, 24],
[18, 31, 28],
[20, 35, 29]
]
Explanation: Multidimensional lists
We have seen that a list can contain arbitrary types. That includes lists, so we can also create a list of lists. Imagine we measure the temperature three times a day and want to store the readings. On the first day we have these 3 measurements: [17, 28, 24]. On the second day we measure these values: [18, 31, 28]. So we have one list per day. The individual days (that is, lists) can in turn be stored in a list:
End of explanation
temperatures[1]
Explanation: We can think of these temperatures as a table: each row represents a day, each column a measurement time (e.g. 6:00, 12:00, 18:00). We have already learned how to access the readings of a particular day:
End of explanation
temperatures[1][0]
Explanation: Since the selected element is itself a list, we can also access its individual elements. We get the first reading of the second day like this:
End of explanation
max(temperatures[0])
Explanation: Doing arithmetic with list values
For numeric lists (int, float) Python provides functions that can be applied to all values of a list:
* max(liste) returns the largest value that occurs
* min(liste) returns the smallest value that occurs
* sum(liste) returns the sum of all values
End of explanation
with open('data/vornamen/names_short.txt', encoding='utf-8') as fh:
lines = fh.readlines()
print(lines)
Explanation: Exercise
What is the average midday temperature?
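One possible solution as a quick sketch, using the nested temperatures list defined above (the midday reading is the second value of each day):
noon_temps = []
for day in temperatures:
    noon_temps.append(day[1])
print(sum(noon_temps) / len(noon_temps))  # (28 + 31 + 35) / 3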
Reading the lines of a file into a list
Let's return to our file of first names. As we have seen, the readlines() method returns the content of a file as a list of lines:
End of explanation
s = 'abc\n'
s.rstrip()
Explanation: The string methods rstrip(), lstrip() and strip()
As we can see, each list element ends with the newline character \n (line feed). We could remove it with slicing, for example, but the string type offers a method .rstrip() that does exactly what we need:
End of explanation
s = ' abc '
print('rstrip: "{}"'.format(s.rstrip()))
print('lstrip(): "{}"'.format(s.lstrip()))
print('strip(): "{}"'.format(s.strip()))
Explanation: rstrip() removes all whitespace (spaces, tabs, line breaks, etc.) at the end of a string. In addition there is lstrip(), which removes whitespace at the beginning of a string, and strip(), which removes whitespace on both sides.
End of explanation
clean_names = []
for line in lines:
clean_names.append(line.rstrip())
print(clean_names)
Explanation: Removing line breaks from a list of strings
Method 1: with a loop
If we now want to remove all line breaks from our list lines, we can do that in a for loop:
End of explanation
queue = ['Anna', 'Hans', 'Berta']
queue.append('Dora')
queue
Explanation: With this we have already met the first list method that does not exist for strings (which are immutable): with list.append(VALUE) we can add another value to the list. The value is appended at the end of the list:
End of explanation
clean_names = [line.rstrip() for line in lines]
print(clean_names)
Explanation: Method 2: with a list comprehension
List comprehensions are an approach that comes from functional programming and apply an operation to all elements of a list.
End of explanation
nums = [4, 9, 17, 5, 99]
# TODO: finish this
Explanation: Exercise
Write a list comprehension that multiplies every value of the list nums by itself.
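One possible solution as a quick sketch:
squares = [num * num for num in nums]
print(squares)  # [16, 81, 289, 25, 9801]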
End of explanation |
984 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2/ Exercise solutions
Step1: Definitions
E2.1
E2.2
Given the matrices $A=\begin{pmatrix}1 & 3 \ 4 & 5\end{pmatrix}$ and $B=\begin{pmatrix} -1 & 0 \ 3 & 3 \end{pmatrix}$,
and the vectors $\vec{v}=\begin{pmatrix}1 \ 2\end{pmatrix}$ and $\vec{w}=\begin{pmatrix}-3 \ -4\end{pmatrix}$,
compute the following expressions.
a) $A\vec{v}$
b) $B\vec{v}$
c) $A(B\vec{v})$
d) $B(A\vec{v})$
e) $A\vec{w}$
f) $B\vec{w}$ | Python Code:
# setup SymPy
from sympy import *
x, y, z, t = symbols('x y z t')
init_printing()
Explanation: 2/ Exercise solutions
End of explanation
# define the matrices A and B, and the vecs v and w
A = Matrix([[1,3],
[4,5]])
B = Matrix([[-1,0],
[ 3,3]])
v = Matrix([[1,2]]).T # the .T makes v a column vector
w = Matrix([[-3,-4]]).T
# a)
A*v
# b)
B*v
# c)
A*B*v
# d)
B*A*v
# e)
A*w
# f)
B*w
Explanation: Definitions
E2.1
E2.2
Given the matrices $A=\begin{pmatrix}1 & 3 \ 4 & 5\end{pmatrix}$ and $B=\begin{pmatrix} -1 & 0 \ 3 & 3 \end{pmatrix}$,
and the vectors $\vec{v}=\begin{pmatrix}1 \ 2\end{pmatrix}$ and $\vec{w}=\begin{pmatrix}-3 \ -4\end{pmatrix}$,
compute the following expressions.
a) $A\vec{v}$
b) $B\vec{v}$
c) $A(B\vec{v})$
d) $B(A\vec{v})$
e) $A\vec{w}$
f) $B\vec{w}$
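As a quick sanity check on parts (c) and (d): matrix multiplication is associative, so A(Bv) and (AB)v should agree. A minimal check with the objects defined above:
A*(B*v) - (A*B)*v  # should be the zero vector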
End of explanation |
985 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using custom fonts
The relevant fontName.ttf font needs to be downloaded first
Step1: Controlling font properties
Pylab examples
pie autopct
Plot labels inside by controlling the radial distance labeldistance
Donut plot in matplotlib | Python Code:
# help(font_manager)
path = '../fonts/segoeuib.ttf'
prop = font_manager.FontProperties(fname=path)
print(prop.get_name())
print(prop.get_family())
font0 = FontProperties()
font1 = font0.copy()
font1.set_family(prop.get_name())
Explanation: Using custom fonts
The relevant fontName.ttf font needs to be downloaded first
End of explanation
# Data to plot
labels = ['Python', 'R','MATLAB', 'C', 'C++']
sizes = [36, 19, 28, 8, 9]
colors = ['#2196F3','#FF5722', '#FFC107', '#CDDC39', '#4CAF50']
# explode = (0.1, 0, 0, 0) # explode 1st slice
explode = (0, 0, 0, 0, 0) # explode 1st slice
plt.figure(figsize=(8,8))
patches, texts = plt.pie(sizes, explode=explode, labels=labels, labeldistance=0.65, colors=colors,
autopct=None, shadow=False, startangle=22)
for item in texts:
item.set_fontproperties(font1)
item.set_fontsize(30)
item.set_horizontalalignment('center')
item.set_weight('bold')
#item.set_family(prop.get_family())
#draw a circle at the center of pie to make it look like a donut
centre_circle = plt.Circle((0,0),0.4,color='#E7E7E7',linewidth=1.25)
fig = plt.gcf()
fig.gca().add_artist(centre_circle)
plt.axis('equal')
plt.tight_layout()
# plt.savefig('donut.pdf',transparent=True)
plt.show()
Explanation: Controlling font properties
Pylab examples
pie autopct
Plot labels inside by controlling the radial distance labeldistance
Donut plot in matplotlib
End of explanation |
986 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multi-layer Perceptron (normal neural network) on the Reuters newswire classification
The original script that this notebook is based on is here
Step1: Neural Network Settings
max_words
Step2: Get the data
Step3: Build the Neural Network
Step4: Fit and Evaluate
Step5: Save fitted model | Python Code:
# Imports
from __future__ import print_function
import numpy as np
np.random.seed(1337) # for reproducibility
from keras.datasets import reuters
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.utils import np_utils
from keras.preprocessing.text import Tokenizer
Explanation: Multi-layer Perceptron (normal neural network) on the Reuters newswire classification
The original script that this notebook is based on is here
End of explanation
max_words = 1000
batch_size = 32
nb_epoch = 15
nb_dense = 512
nb_hidden = 1 # The number of hidden layers to use
p_dropout = 0.5
Explanation: Neural Network Settings
max_words: Only keep this many words as features. Uses the most common words.
Iterations: These values set the number of iterations.
batch_size: The number of samples per gradient update. Bigger values make the gradient update more accurate, but mean it takes longer to train the neural network
nb_epoch: The number of times to go through all of the training data. Since batch_size is less than the full training set size, each "epoch" will be updating the gradient multiple times. So basically, the number of iterations is nb_epoch * sample_size / batch_size.
nb_hidden: The number of hidden layers to use
nb_dense: The number of units to use in the hidden layer(s).
p_dropout: Randomly sets this fraction of the input units to 0 at each gradient update. It helps to prevent overfitting.
Network Architecture:
Here is something close to what the neural network we use here looks like.
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/e/e4/Artificial_neural_network.svg/400px-Artificial_neural_network.svg.png">
Each of the input nodes correspond to a yes/no answer to the questions "Does this article contain the word 'x'?" In our model, we have max_words input nodes instead of the 3 shown here.
The next layer, the hidden layer, is where a lot of the magic happens. Each hidden layer node input is a linear combination of the input layer values. Their output is a nonlinear "activation" function applied to the input. Typical activation functions are tanh or in this case, [relu](https://en.wikipedia.org/wiki/Rectifier_(neural_networks). The more hidden layer nodes you have, the more accurate the neural network can be.
The output layer in our case is the number of types of news articles. Like the hidden layer, each node is a linear combination of the previous layer's outputs.
End of explanation
print('Loading data...')
(X_train, y_train), (X_test, y_test) = reuters.load_data(nb_words=max_words, test_split=0.2)
print(len(X_train), 'train sequences')
print(len(X_test), 'test sequences')
nb_classes = np.max(y_train)+1
print(nb_classes, 'classes')
print('Vectorizing sequence data...')
tokenizer = Tokenizer(nb_words=max_words)
X_train = tokenizer.sequences_to_matrix(X_train, mode='binary')
X_test = tokenizer.sequences_to_matrix(X_test, mode='binary')
print('X_train shape:', X_train.shape)
print('X_test shape:', X_test.shape)
print('Convert class vector to binary class matrix (for use with categorical_crossentropy)')
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
print('Y_train shape:', Y_train.shape)
print('Y_test shape:', Y_test.shape)
Explanation: Get the data
End of explanation
print('Building model...')
model = Sequential()
model.add(Dense(nb_dense, input_shape=(max_words,)))
model.add(Activation('relu'))
model.add(Dropout(p_dropout))
for _ in range(nb_hidden-1):
model.add(Dense(nb_dense))
model.add(Activation('relu'))
model.add(Dropout(p_dropout))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
Explanation: Build the Neural Network
End of explanation
import time
t1 = time.time()
history = model.fit(X_train, Y_train,
nb_epoch=nb_epoch, batch_size=batch_size,
verbose=1, validation_split=0.1)
t2 = time.time()
print('Model training took {:.2g} minutes'.format((t2-t1)/60))
score = model.evaluate(X_test, Y_test,
batch_size=batch_size, verbose=1)
print('\nTest score:', score[0])
print('Test accuracy:', score[1])
Explanation: Fit and Evaluate
End of explanation
import output_model
output_model.save_model(model, 'models/Reuters_MLP_model')
Explanation: Save fitted model
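If the custom output_model helper is not available, a minimal alternative sketch using plain Keras calls (assuming h5py is installed) is to store the architecture and the weights separately:
json_string = model.to_json()
with open('models/Reuters_MLP_model.json', 'w') as f:
    f.write(json_string)
model.save_weights('models/Reuters_MLP_model_weights.h5')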
End of explanation |
987 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Line Follower - CompRobo17
This notebook will show the general procedure to use our project data directories and how to do a regression task using convnets
Imports and Directories
Step1: Create paths to data directories
Step7: Helper Functions
Throughout the notebook, we will take advantage of helper functions to cleanly process our data.
Step8: Data
Because we are using a CNN and unordered pictures, we can flip our data and concatenate it on the end of all training and validation data to make sure we don't bias left or right turns.
Training Data
Extract and store the training data in X_train and Y_train
Step9: Test the shape of the arrays
Step10: Visualize the training data, currently using a hacky method to display the numpy matrix as this is being run over a remote server and I can't view new windows
Step11: Validation Data
Follow the same steps for as the training data for the validation data.
Step12: Test the shape of the arrays
Step13: Resize Data
When we train the network, we don't want to be dealing with (240, 640, 3) images as they are way too big. Instead, we will resize the images to something more managable, like (64, 64, 3) or (128, 128, 3). In terms of network predictive performance, we are not concerned with the change in aspect ratio, but might want to test a (24, 64, 3) images for faster training
Step14: Visualize newly resized image.
Step15: Batches
gen allows us to normalize and augment our images. We will just use it to rescale the images.
Step16: Next, create the train and valid generators, these are shuffle and have a batch size of 32 by default
Step17: Convnet
Constants
Step18: Model
Our test model will use a VGG like structure with a few changes. We are removing the final activation function. We will also use either mean_absolute_error or mean_squared_error as our loss function for regression purposes.
Step19: Train
Step20: Visualize Training
Step21: Notes
* 32 by 32 images are too small resolution for regression
* 64 by 64 seemed to work really well
* Moving average plot to see val_loss over time is really nice
* Can take up to 2000 epochs to reach a nice minimum
Step22: Layer play | Python Code:
#Create references to important directories we will use over and over
import os, sys
#import modules
import numpy as np
from glob import glob
from PIL import Image
from tqdm import tqdm
from scipy.ndimage import zoom
from keras.models import Sequential
from keras.metrics import categorical_crossentropy, categorical_accuracy
from keras.layers.convolutional import *
from keras.preprocessing import image
from keras.layers.core import Flatten, Dense
from keras.optimizers import Adam
from keras.layers.normalization import BatchNormalization
from matplotlib import pyplot as plt
import seaborn as sns
%matplotlib inline
import bcolz
Explanation: Line Follower - CompRobo17
This notebook will show the general procedure to use our project data directories and how to do a regression task using convnets
Imports and Directories
End of explanation
DATA_HOME_DIR = '/home/nathan/olin/spring2017/line-follower/line-follower/data'
%cd $DATA_HOME_DIR
path = DATA_HOME_DIR
train_path=path + '/qea-square_2'#+ '/sun_apr_16_office_full_line_1'
valid_path=path + '/qea-square_3'#+ '/sun_apr_16_office_full_line_2'
Explanation: Create paths to data directories
End of explanation
def resize_vectorized4D(data, new_size=(64, 64)):
A vectorized implementation of 4d image resizing
Args:
data (4D array): The images you want to resize
new_size (tuple): The desired image size
Returns: (4D array): The resized images
fy, fx = np.asarray(new_size, np.float32) / data.shape[1:3]
return zoom(data, (1, fy, fx, 1), order=1) # order is the order of spline interpolation
def lowerHalfImage(array):
Returns the lower half rows of an image
Args: array (array): the array you want to extract the lower half from
Returns: The lower half of the array
return array[round(array.shape[0]/2):,:,:]
def folder_to_numpy(image_directory_full):
Read sorted pictures (by filename) in a folder to a numpy array.
We have hardcoded the extraction of the lower half of the images as
that is the relevant data
USAGE:
data_folder = '/train/test1'
X_train = folder_to_numpy(data_folder)
Args:
data_folder (str): The relative folder from DATA_HOME_DIR
Returns:
picture_array (np array): The numpy array in tensorflow format
# change directory
print ("Moving to directory: " + image_directory_full)
os.chdir(image_directory_full)
# read in filenames from directory
g = glob('*.png')
if len(g) == 0:
g = glob('*.jpg')
print ("Found {} pictures".format(len(g)))
# sort filenames
g.sort()
# open and convert images to numpy array - then extract the lower half of each image
print("Starting pictures to numpy conversion")
picture_arrays = np.array([lowerHalfImage(np.array(Image.open(image_path))) for image_path in g])
# reshape to tensorflow format
# picture_arrays = picture_arrays.reshape(*picture_arrays.shape, 1)
print ("Shape of output: {}".format(picture_arrays.shape))
# return array
return picture_arrays
def flip4DArray(array):
Produces the mirror images of a 4D image array
return array[..., ::-1,:] #[:,:,::-1] also works but is 50% slower
def concatCmdVelFlip(array):
Concatentaes and returns Cmd Vel array
return np.concatenate((array, array*-1)) # multiply by negative 1 for opposite turn
def save_array(fname, arr):
c=bcolz.carray(arr, rootdir=fname, mode='w')
c.flush()
def load_array(fname):
return bcolz.open(fname)[:]
Explanation: Helper Functions
Throughout the notebook, we will take advantage of helper functions to cleanly process our data.
End of explanation
%cd $train_path
Y_train = np.genfromtxt('cmd_vel.csv', delimiter=',')[:,1] # only use turning angle
Y_train = np.concatenate((Y_train, Y_train*-1))
X_train = folder_to_numpy(train_path + '/raw')
X_train = np.concatenate((X_train, flip4DArray(X_train)))
Explanation: Data
Because we are using a CNN and unordered pictures, we can flip our data and concatenate it on the end of all training and validation data to make sure we don't bias left or right turns.
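A quick sanity check on the mirroring (a small sketch using the helper defined above): flipping twice should give back the original images.
assert np.array_equal(flip4DArray(flip4DArray(X_train[:1])), X_train[:1])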
Training Data
Extract and store the training data in X_train and Y_train
End of explanation
X_train.shape, Y_train.shape
Explanation: Test the shape of the arrays:
X_train: (N, 240, 640, 3)
Y_train: (N,)
End of explanation
%cd /tmp
img = Image.fromarray(X_train[0], 'RGB')
img.save("temp.jpg")
image.load_img("temp.jpg")
Explanation: Visualize the training data, currently using a hacky method to display the numpy matrix as this is being run over a remote server and I can't view new windows
End of explanation
%cd $valid_path
Y_valid = np.genfromtxt('cmd_vel.csv', delimiter=',')[:,1]
Y_valid = np.concatenate((Y_valid, Y_valid*-1))
X_valid = folder_to_numpy(valid_path + '/raw')
X_valid = np.concatenate((X_valid, flip4DArray(X_valid)))
Explanation: Validation Data
Follow the same steps for as the training data for the validation data.
End of explanation
X_valid.shape, Y_valid.shape
Explanation: Test the shape of the arrays:
X_valid: (N, 240, 640, 3)
Y_valid: (N,)
End of explanation
img_rows, img_cols = (64, 64)
print(img_rows)
print(img_cols)
X_train = resize_vectorized4D(X_train, (img_rows, img_cols))
X_valid = resize_vectorized4D(X_valid, (img_rows, img_cols))
print(X_train.shape)
print(X_valid.shape)
Explanation: Resize Data
When we train the network, we don't want to be dealing with (240, 640, 3) images as they are way too big. Instead, we will resize the images to something more managable, like (64, 64, 3) or (128, 128, 3). In terms of network predictive performance, we are not concerned with the change in aspect ratio, but might want to test a (24, 64, 3) images for faster training
End of explanation
%cd /tmp
img = Image.fromarray(X_train[np.random.randint(0, X_train.shape[0])], 'RGB')
img.save("temp.jpg")
image.load_img("temp.jpg")
Explanation: Visualize newly resized image.
End of explanation
gen = image.ImageDataGenerator(
# rescale=1. / 255 # normalize data between 0 and 1
)
Explanation: Batches
gen allows us to normalize and augment our images. We will just use it to rescale the images.
End of explanation
train_generator = gen.flow(X_train, Y_train)#, batch_size=batch_size, shuffle=True)
valid_generator = gen.flow(X_valid, Y_valid)#, batch_size=batch_size, shuffle=True)
# get_batches(train_path, batch_size=batch_size,
# target_size=in_shape,
# gen=gen)
# val_batches = get_batches(valid_path, batch_size=batch_size,
# target_size=in_shape,
# gen=gen)
data, category = next(train_generator)
print ("Shape of data: {}".format(data[0].shape))
%cd /tmp
img = Image.fromarray(data[np.random.randint(0, data.shape[0])].astype('uint8'), 'RGB')
img.save("temp.jpg")
image.load_img("temp.jpg")
Explanation: Next, create the train and valid generators; these are shuffled and have a batch size of 32 by default
End of explanation
in_shape = (img_rows, img_cols, 3)
Explanation: Convnet
Constants
End of explanation
def get_model():
model = Sequential([
Convolution2D(32,3,3, border_mode='same', activation='relu', input_shape=in_shape),
MaxPooling2D(),
Convolution2D(64,3,3, border_mode='same', activation='relu'),
MaxPooling2D(),
Convolution2D(128,3,3, border_mode='same', activation='relu'),
MaxPooling2D(),
Flatten(),
Dense(2048, activation='relu'),
Dense(1024, activation='relu'),
Dense(512, activation='relu'),
Dense(1)
])
model.compile(loss='mean_absolute_error', optimizer='adam')
return model
model = get_model()
model.summary()
Explanation: Model
Our test model will use a VGG like structure with a few changes. We are removing the final activation function. We will also use either mean_absolute_error or mean_squared_error as our loss function for regression purposes.
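If you would rather penalize large errors more heavily, switching to the squared-error loss is a one-line change before training (a sketch, reusing the same model object):
model.compile(loss='mean_squared_error', optimizer='adam')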
End of explanation
history = model.fit_generator(train_generator,
samples_per_epoch=train_generator.n,
nb_epoch=150,
validation_data=valid_generator,
nb_val_samples=valid_generator.n,
verbose=True)
# %cd $DATA_HOME_DIR
# model.save_weights('epoche_150_square.h5')
%cd $DATA_HOME_DIR
model.load_weights('epoche_2500.h5')
Explanation: Train
End of explanation
val_plot = np.convolve(history.history['val_loss'], np.repeat(1/10, 10), mode='valid')
train_plot = np.convolve(history.history['loss'], np.repeat(1/10, 10), mode='valid')
sns.tsplot(val_plot)
X_preds = model.predict(X_valid).reshape(X_valid.shape[0],)
for i in range(len(X_valid)):
print("{:07f} | {:07f}".format(Y_valid[i], X_preds[i]))
X_train_preds = model.predict(X_train).reshape(X_train.shape[0],)
for i in range(len(X_train_preds)):
print("{:07f} | {:07f}".format(Y_train[i], X_train_preds[i]))
Explanation: Visualize Training
End of explanation
X_preds.shape
X_train_preds.shape
np.savetxt("X_train_valid.csv", X_preds, fmt='%.18e', delimiter=',', newline='\n')
np.savetxt("X_train_preds.csv", X_train_preds, fmt='%.18e', delimiter=',', newline='\n')
Explanation: Notes
* 32 by 32 images are too small resolution for regression
* 64 by 64 seemed to work really well
* Moving average plot to see val_loss over time is really nice
* Can take up to 2000 epochs to reach a nice minimum
End of explanation
len(model.layers)
model.pop()
len(model.layers)
model.compile(loss='mean_absolute_error', optimizer='adam')
model.summary()
X_train_features = model.predict(X_train)
X_valid_features = model.predict(X_valid)
%cd $train_path
save_array("X_train_features.b", X_train_features)
%cd $valid_path
save_array("X_train_features.b", X_valid_features)
X_train_features[9]
def get_model_lstm():
model = Sequential([
Convolution2D(32,3,3, border_mode='same', activation='relu', input_shape=in_shape),
MaxPooling2D(),
Convolution2D(64,3,3, border_mode='same', activation='relu'),
MaxPooling2D(),
Convolution2D(128,3,3, border_mode='same', activation='relu'),
MaxPooling2D(),
Flatten(),
Dense(2048, activation='relu'),
Dense(1024, activation='relu'),
Dense(512, activation='relu'),
Dense(1)
])
model.compile(loss='mean_absolute_error', optimizer='adam')
return model
Explanation: Layer play
End of explanation |
988 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook will go through how we match up students to real scientists based on their science interests. This code is heavily based on collaboratr, a project developed at Astro Hack Week.
Check it out here
Step1: Step 1 Create a Google Form with these questions
Step2: Step 2
Step3: Step 3 | Python Code:
!pip install nxpd
%matplotlib inline
import matplotlib.pyplot as plt
import networkx as nx
import pandas as pd
import numpy as np
from operator import truediv
from collections import Counter
import itertools
import random
import collaboratr
#from nxpd import draw
#import nxpd
#reload(collaboratr)
Explanation: This notebook will go through how we match up students to real scientists based on their science interests. This code is heavily based on collaboratr, a project developed at Astro Hack Week.
Check it out here: github.com/benelson/collaboratr
<span style="color:red"> Here, we will use real Letters to a Prescientist form data. </span>
End of explanation
def format_name(data):
first_name = ['-'.join(list(map(str.capitalize,d))) for d in data['Name'].str.replace(" ", "-").str.split('-')]
last_name = ['-'.join(list(map(str.capitalize,d))) for d in data['Last'].str.replace(" ", "-").str.split('-')]
full_name = pd.Series([m+" "+n for m,n in zip(first_name,last_name)])
return full_name
# Retrieve data from Google Sheet and parse using pandas dataframe
student_data = pd.read_csv("students.csv")
student_data = student_data.replace(np.nan,' ', regex=True)
# Store student information in variables.
#
# Collaboratr divided people into "learners" and "teachers" based on what they wanted to "learn" and "teach."
# Here, students are always "learners" by default and the scientists are always "teachers."
# To maintain the structure of the pandas dataframe,
# I've created blank values for what students want to "teach" and what scientists want to "learn."
### write a function that would format names (including hyphens)
student_data['Full Name'] = format_name(student_data)
student_names = student_data['Full Name']
nStudents = len(student_names)
student_learn = student_data['If I could be any type of scientist when I grow up, I would want to study:']
student_teach = pd.Series(["" for i in range (nStudents)], index=[i for i in range(nStudents)])
student_email = pd.Series(["" for i in range (nStudents)], index=[i for i in range(nStudents)])
# Store scientist information in variables.
scientist_data = pd.read_csv("scientists_1.csv")
scientist_data = scientist_data.replace(np.nan,' ', regex=True)
#drop any duplicate email entries in the data frame
drop = np.where(scientist_data.duplicated('Email')==True)[0]
temp = scientist_data.drop(scientist_data.index[drop])
scientist_data = temp
scientist_data['Full Name'] = format_name(scientist_data)
scientist_names = scientist_data['Full Name']
nScientists = len(scientist_names)
scientist_learn = pd.Series(["" for i in range (nScientists)], index=[i for i in range(nScientists)])
scientist_teach = scientist_data['We will match you with a pen pal who has expressed an interest in at least one of the following subjects. Which topic is most relevant to your work?']
scientist_email = scientist_data['Email']
#drop any duplicate email entries in the data frame
drop = np.where(scientist_data.duplicated('Full Name')==True)[0]
temp = scientist_data.drop(scientist_data.index[drop])
scientist_data = temp
Explanation: Step 1 Create a Google Form with these questions:
1. What is your name? [text entry]
2. What is your gender? [multiple choice]
3. What are your general science interests? [checkboxes]
I can ask for other information from the students (e.g., grade, school name) and scientists (email).
After receiving the responses, load up the CSV of responses from the Google Form by running the cell below (you'll have to change the path to your own CSV).
End of explanation
names = student_names.append(scientist_names, ignore_index=True)
learn = student_learn.append(scientist_learn, ignore_index=True)
teach = student_teach.append(scientist_teach, ignore_index=True)
emails = student_email.append(scientist_email, ignore_index=True)
G = nx.DiGraph()
Explanation: Step 2: Merge the student and scientist dataframes
End of explanation
# Insert users in graphs
for n,e,l,t in zip(names, emails, learn, teach):
collaboratr.insert_node(G,n, email=e, learn=l.split(';'), teach=t.split(';'))
def sort_things(stu_data, sci_data):
num_interests = {}
for i,r in stu_data.iterrows():
name = r['Name'].capitalize() + " " + r['Last'].capitalize()
num_interests[name] = 1  # register each student in the dict (one entry per student)
print(num_interests)
stu_names_sorted = sorted(num_interests, key=num_interests.get)
print(stu_names_sorted)
interests_stu = Counter(list(itertools.chain.from_iterable(\
[ i.split(';') for i in stu_data['If I could be any type of scientist when I grow up, I would want to study:'] ])))
interests_sci = Counter(list(itertools.chain.from_iterable(\
[ i.split(';') for i in sci_data['We will match you with a pen pal who has expressed an interest in at least one of the following subjects. Which topic is most relevant to your work?'] ])))
interests_rel = { key: interests_stu[key]/interests_sci[key] for key in interests_sci.keys() }
interests_rel_sorted = sorted(interests_rel, key=interests_rel.get)
return interests_rel_sorted, stu_names_sorted
def assigner(assign, stu_data, sci_data, max_students=2):
assign_one = {}
subscriptions = { n: 0 for n in sci_data['What is your name?'] }
interests_rel_sorted, stu_names_sorted = sort_things(stu_data, sci_data)
for key in interests_rel_sorted:
for name in stu_names_sorted:
if name not in assign_one:
if key in assign[name].keys():
try:
scientist = np.random.choice(assign[name][key])
except ValueError:
scientist = np.random.choice(scientist_data['What is your name?'])
assign_one[name] = scientist
subscriptions[scientist] += 1
if subscriptions[scientist]>=max_students:
for kk,vv in assign.items():
if vv:
for k,v in vv.items():
if scientist in v:
v.remove(scientist)
for name in stu_names_sorted:
if name not in assign_one:
scientist = np.random.choice([ k for k,v in subscriptions.items() if v < max_students ])
assign_one[name] = scientist
return assign_one
assign_one = None
max_students = 2
while assign_one is None:
try:
participants = G.nodes(data=True)
assign = collaboratr.assign_users(G,participants)
assign_one = assigner(assign, student_data, scientist_data, max_students=max_students)
if max(Counter([v for k,v in assign_one.items()]).values())>max_students:
assign_one = None
except ValueError:
# print("error")
pass
print(assign_one)
print(Counter([v for k,v in assign_one.items()]))
items = []
for k,v in assign_one.items():
items.append(str(v.ljust(22) + "-> " + k.ljust(22) + "who is interested in " \
+ student_data.loc[student_data['What is your name?'] == k]\
['What general science fields are you interested in?'].tolist()[0] ))
for i in sorted(items):
print(i)
a, b = sort_things(student_data, scientist_data)
print(a, b)
Explanation: Step 3: Assign scientists to students
I thought about several ways to do this. Each student has a "pool" of scientists to be assigned to based on their interests. This was a non-trivial problem. I try to have no more than 2 students assigned to each scientist, working with a limited dataset of roughly 20 scientists and 30 students. Most scientists come from astronomy/physics or psychology/neuroscience. Here are my attempts to do just that:
For each student, randomly draw from their "pool" of scientists with matching interests. This typically caused the more "underrepresented" scientists to get oversubscribed quickly, e.g., having one biologist and many students interested in biology. It also didn't help students who had limited interests. If I couldn't match everyone up, I'd try again with different random draws. I couldn't find a solution under the conditions listed above. Maybe this would work better if we had nScientists > nStudents.
Start with the "least popular" topic, that is, the topic where the student-to-scientist ratio is smallest. Loop through the students with those interests and try to match them to a scientist. Then work our way up the list until we get to the most popular topic. This approach worked much better.
End of explanation |
989 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
================================================================
Compute sparse inverse solution with mixed norm
Step1: Run solver
Step2: View in 2D and 3D ("glass" brain like 3D plot) | Python Code:
# Author: Alexandre Gramfort <[email protected]>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
from mne.inverse_sparse import mixed_norm
from mne.minimum_norm import make_inverse_operator, apply_inverse
from mne.viz import plot_sparse_source_estimates
print(__doc__)
data_path = sample.data_path()
fwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
ave_fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
cov_fname = data_path + '/MEG/sample/sample_audvis-shrunk-cov.fif'
subjects_dir = data_path + '/subjects'
# Read noise covariance matrix
cov = mne.read_cov(cov_fname)
# Handling average file
condition = 'Left Auditory'
evoked = mne.read_evokeds(ave_fname, condition=condition, baseline=(None, 0))
evoked.crop(tmin=0, tmax=0.3)
# Handling forward solution
forward = mne.read_forward_solution(fwd_fname, surf_ori=True)
ylim = dict(eeg=[-10, 10], grad=[-400, 400], mag=[-600, 600])
evoked.plot(ylim=ylim, proj=True)
Explanation: ================================================================
Compute sparse inverse solution with mixed norm: MxNE and irMxNE
================================================================
Runs (ir)MxNE (L1/L2 or L0.5/L2 mixed norm) inverse solver.
L0.5/L2 is done with irMxNE which allows for sparser
source estimates with less amplitude bias due to the non-convexity
of the L0.5/L2 mixed norm penalty.
See
Gramfort A., Kowalski M. and Hamalainen, M,
Mixed-norm estimates for the M/EEG inverse problem using accelerated
gradient methods, Physics in Medicine and Biology, 2012
https://doi.org/10.1088/0031-9155/57/7/1937
Strohmeier D., Haueisen J., and Gramfort A.:
Improved MEG/EEG source localization with reweighted mixed-norms,
4th International Workshop on Pattern Recognition in Neuroimaging,
Tuebingen, 2014
DOI: 10.1109/PRNI.2014.6858545
End of explanation
alpha = 50 # regularization parameter between 0 and 100 (100 is high)
loose, depth = 0.2, 0.9 # loose orientation & depth weighting
n_mxne_iter = 10 # if > 1 use L0.5/L2 reweighted mixed norm solver
# if n_mxne_iter > 1 dSPM weighting can be avoided.
# Compute dSPM solution to be used as weights in MxNE
inverse_operator = make_inverse_operator(evoked.info, forward, cov,
loose=None, depth=depth, fixed=True)
stc_dspm = apply_inverse(evoked, inverse_operator, lambda2=1. / 9.,
method='dSPM')
# Compute (ir)MxNE inverse solution
stc, residual = mixed_norm(evoked, forward, cov, alpha, loose=loose,
depth=depth, maxit=3000, tol=1e-4,
active_set_size=10, debias=True, weights=stc_dspm,
weights_min=8., n_mxne_iter=n_mxne_iter,
return_residual=True)
residual.plot(ylim=ylim, proj=True)
Explanation: Run solver
End of explanation
plot_sparse_source_estimates(forward['src'], stc, bgcolor=(1, 1, 1),
fig_name="MxNE (cond %s)" % condition,
opacity=0.1)
# and on the fsaverage brain after morphing
stc_fsaverage = stc.morph(subject_from='sample', subject_to='fsaverage',
grade=None, sparse=True, subjects_dir=subjects_dir)
src_fsaverage_fname = subjects_dir + '/fsaverage/bem/fsaverage-ico-5-src.fif'
src_fsaverage = mne.read_source_spaces(src_fsaverage_fname)
plot_sparse_source_estimates(src_fsaverage, stc_fsaverage, bgcolor=(1, 1, 1),
opacity=0.1)
Explanation: View in 2D and 3D ("glass" brain like 3D plot)
End of explanation |
990 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interpolation Exercise 2
Step1: Sparse 2d interpolation
In this example the values of a scalar field $f(x,y)$ are known at a very limited set of points in a square domain
Step2: The following plot should show the points on the boundary and the single point in the interior
Step3: Use meshgrid and griddata to interpolate the function $f(x,y)$ on the entire square domain
Step4: Plot the values of the interpolated scalar field using a contour plot. Customize your plot to make it effective and beautiful. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
sns.set_style('white')
from scipy.interpolate import griddata
Explanation: Interpolation Exercise 2
End of explanation
# left, top, right, bottom
x = np.hstack((np.array([-5]*10), np.linspace(-5, 5, 10), np.array([5]*10), np.linspace(5, -5, 10), 0))
y = np.hstack((np.linspace(-5, 5, 10), np.array([5]*10), np.linspace(5, -5, 10), np.array([-5]*10), 0))
f = np.hstack(([0]*40, 1))
#raise NotImplementedError()
Explanation: Sparse 2d interpolation
In this example the values of a scalar field $f(x,y)$ are known at a very limited set of points in a square domain:
The square domain covers the region $x\in[-5,5]$ and $y\in[-5,5]$.
The values of $f(x,y)$ are zero on the boundary of the square at integer spaced points.
The value of $f$ is known at a single interior point: $f(0,0)=1.0$.
The function $f$ is not known at any other points.
Create arrays x, y, f:
x should be a 1d array of the x coordinates on the boundary and the 1 interior point.
y should be a 1d array of the y coordinates on the boundary and the 1 interior point.
f should be a 1d array of the values of f at the corresponding x and y coordinates.
You might find that np.hstack is helpful.
End of explanation
plt.scatter(x, y);
assert x.shape==(41,)
assert y.shape==(41,)
assert f.shape==(41,)
assert np.count_nonzero(f)==1
Explanation: The following plot should show the points on the boundary and the single point in the interior:
End of explanation
xnew = np.linspace(-5.0, 5.0, 100)
ynew = np.linspace(-5.0, 5.0, 100)
Xnew, Ynew = np.meshgrid(xnew, ynew)
Fnew = griddata((x,y), f, (Xnew, Ynew), method='cubic', fill_value=0.0)
#raise NotImplementedError()
assert xnew.shape==(100,)
assert ynew.shape==(100,)
assert Xnew.shape==(100,100)
assert Ynew.shape==(100,100)
assert Fnew.shape==(100,100)
Explanation: Use meshgrid and griddata to interpolate the function $f(x,y)$ on the entire square domain:
xnew and ynew should be 1d arrays with 100 points between $[-5,5]$.
Xnew and Ynew should be 2d versions of xnew and ynew created by meshgrid.
Fnew should be a 2d array with the interpolated values of $f(x,y)$ at the points (Xnew,Ynew).
Use cubic spline interpolation.
End of explanation
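As a quick, illustrative aside (not part of the graded solution), you can see how much the choice of interpolation scheme matters by recomputing the grid with the other methods griddata supports:
for method in ['nearest', 'linear', 'cubic']:
    F_method = griddata((x, y), f, (Xnew, Ynew), method=method, fill_value=0.0)
    print(method, 'peak value:', float(F_method.max()))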
plt.contour(Xnew, Ynew, Fnew)
#raise NotImplementedError()
assert True # leave this to grade the plot
Explanation: Plot the values of the interpolated scalar field using a contour plot. Customize your plot to make it effective and beautiful.
End of explanation |
991 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial for the structural-color python package
Copyright 2016, Vinothan N. Manoharan, Victoria Hwang.
This file is part of the structural-color python package.
This package is free software
Step1: This will populate the structcol namespace with a few functions and classes. You will probably find it easiest to keep all your calculations within a Jupyter notebook like this one. The package itself contains only generic functions and classes (that is, it doesn't include any specific calculations of structural color spectra beyond the ones in this notebook). For calculations in a notebook, you'll want to import some other packages too, like numpy and matplotlib
Step2: Using quantities with units
The structural-color package uses the pint package to keep track of units and automatically convert them. To define a quantity with units, use the structcol.Quantity constructor. For example, to define a wavelength of 0.45 $\mu$m
Step3: Converting between units
Step4: Units work in numpy arrays, too
Step5: Refractive index module
To use the refractive index module
Step6: This module contains dispersion relations for a number of materials. For example, to get the index of polystyrene at 500 nm, you can call
Step7: You must give this function a quantity with units as the second argument. If you give it a number, it will throw an error, rather than trying to guess what units you're thinking of. You can also calculate the refractive index at several different wavelengths simultaneously, like this (using wavelens array from above)
Step8: You can use complex refractive indices by adding the imaginary component of the index. Note that in python the imaginary number $i$ is denoted by $j$. You can choose to use the values from literature or from experimental measurement
Step9: Importing your own refractive index data
You can input your own refractive index data (which can be real or complex) by calling the material $\textbf{'data'}$ and specifying the optional parameters $\textbf{'index_data'}$, $\textbf{'wavelength_data'}$, and $\textbf{'kind'}$.
index_data: refractive index data from literature or experiment that the user can input if desired.
Step10: Calculating a reflection spectrum
With the tools above we can calculate a reflection spectrum using the single-scattering model described in Magkiriadou, S., Park, J.-G., Kim, Y.-S., and Manoharan, V. N. “Absence of Red Structural Color in Photonic Glasses, Bird Feathers, and Certain Beetles” Physical Review E 90, no. 6 (2014)
Step11: Note that the asymmetry parameter becomes negative at the reflection peak (as expected, since light is preferentially backscattered), and, as a result, the transport length has a dip in the same wavelength region.
Calculating the reflection spectrum of a core-shell particle system, with either an absorbing or non-absorbing particle index
We can calculate a reflection spectrum of a system of core-shell particles, where the core and the shell(s) can have different radii and refractive indices. The syntax is mostly the same as that of non-core-shell particles, except that the particle radius and the particle index are now Quantity arrays of values from the innermost (the core) to the outermost layer in the particle. The volume fraction is that of the entire core-shell particle.
Step12: Calculating the reflection spectrum of a polydisperse system with either one species or two species of particles
We can calculate the spectrum of a polydisperse system with either one or two species of particles, meaning that there are one or two mean radii, and each species has its own size distribution. We then need to specify the mean radius, the polydispersity index (pdi), and the concentration of each species. For example, consider a system of 90$\%$ of 200 nm polystyrene particles and 10$\%$ of 300 nm particles, with each species having a polydispersity index of 1$\%$. In this case, the mean radii are [200, 300] nm, the pdi are [0.01, 0.01], and the concentrations are [0.9, 0.1].
If the system is monospecies, we still need to specify the polydispersity parameters in 2-element arrays. For example, the mean radii become [200, 200] nm, the pdi become [0.01, 0.01], and the concentrations become [1.0, 0.0].
To include absorption into the polydisperse system calculation, we just need to use the complex refractive index of the particle and/or the matrix.
Note: the code takes longer (~1 min) in the polydisperse than in the monodisperse case because it calculates the scattering for a distribution of particles. Note 2: the code currently does not handle polydispersity for systems of core-shell particles.
Step13: Note that the polydisperse case has a broader and red-shifted peak compared to the monodisperse case. This trend makes sense since the polydisperse system contains 10$\%$ of larger particles than the monodisperse system.
Mie scattering module
Normally you won't need to use this model on its own, but if you want to, start with
Step14: Form factor calculation
Step15: Structure module
To use this module
Step16: Here is an example of calculating structure factors with the Percus-Yevick approximation. The code is fully vectorized, so we can calculate structure factors for a variety of qd values and volume fractions in parallel | Python Code:
import structcol
# or
import structcol as sc
Explanation: Tutorial for the structural-color python package
Copyright 2016, Vinothan N. Manoharan, Victoria Hwang.
This file is part of the structural-color python package.
This package is free software: you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation, either version 3 of the License, or (at your option) any later
version.
This package is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
details.
You should have received a copy of the GNU General Public License along with
this package. If not, see http://www.gnu.org/licenses/.
Loading and using the package
To load, make sure you are in the top directory and do
End of explanation
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import scipy.integrate
# require seaborn (not installed by default in Anaconda; comment out if not installed)
import seaborn as sns
Explanation: This will populate the structcol namespace with a few functions and classes. You will probably find it easiest to keep all your calculations within a Jupyter notebook like this one. The package itself contains only generic functions and classes (that is, it doesn't include any specific calculations of structural color spectra beyond the ones in this notebook). For calculations in a notebook, you'll want to import some other packages too, like numpy and matplotlib:
End of explanation
wavelen = sc.Quantity('0.45 um')
print(wavelen)
print(wavelen.dimensionality)
Explanation: Using quantities with units
The structural-color package uses the pint package to keep track of units and automatically convert them. To define a quantity with units, use the structcol.Quantity constructor. For example, to define a wavelength of 0.45 $\mu$m:
End of explanation
print(wavelen.to('m'))
Explanation: Converting between units:
End of explanation
wavelens = sc.Quantity(np.arange(450.0, 800.0, 10.0), 'nm')
print(wavelens.to('um'))
Explanation: Units work in numpy arrays, too:
End of explanation
import structcol.refractive_index as ri
Explanation: Refractive index module
To use the refractive index module:
End of explanation
ri.n('polystyrene', sc.Quantity('500 nm'))
Explanation: This module contains dispersion relations for a number of materials. For example, to get the index of polystyrene at 500 nm, you can call
End of explanation
n_particle = ri.n('polystyrene', wavelens)
plt.plot(wavelens, n_particle)
plt.ylabel('$n_\mathrm{PS}$')
plt.xlabel('wavelength (nm)')
Explanation: You must give this function a quantity with units as the second argument. If you give it a number, it will throw an error, rather than trying to guess what units you're thinking of. You can also calculate the refractive index at several different wavelengths simultaneously, like this (using wavelens array from above):
End of explanation
ri.n('polystyrene', sc.Quantity('500 nm'))+0.0001j
Explanation: You can use complex refractive indices by adding the imaginary component of the index. Note that in python the imaginary number $i$ is denoted by $j$. You can choose to use the values from literature or from experimental measurement:
End of explanation
wavelength_values = sc.Quantity(np.array([400,500,600]), 'nm')
index_values= sc.Quantity(np.array([1.5,1.55,1.6]), '')
wavelength = sc.Quantity(np.arange(400, 600, 1), 'nm')
n_data = ri.n('data', wavelength, index_data=index_values, wavelength_data=wavelength_values)
plt.plot(wavelength, n_data, '--', label='fit')
plt.plot(wavelength_values, index_values, '.', markersize=18, label='data')
plt.ylabel('$n_\mathrm{data}$')
plt.xlabel('wavelength (nm)')
plt.legend();
Explanation: Importing your own refractive index data
You can input your own refractive index data (which can be real or complex) by calling the material $\textbf{'data'}$ and specifying the optional parameters $\textbf{'index_data'}$, $\textbf{'wavelength_data'}$, and $\textbf{'kind'}$.
index_data: refractive index data from literature or experiment that the user can input if desired. The data is interpolated, so that the user can call specific values of the index. The index data can be real or complex.
wavelength_data: wavelength data corresponding to index_data. Must be specified as a Quantity.
kind: type of interpolation. The options are: ‘linear’, ‘nearest’, ‘zero’, ‘slinear’, ‘quadratic’, ‘cubic’, ‘previous’, ‘next', where ‘zero’, ‘slinear’, ‘quadratic’ and ‘cubic’ refer to a spline interpolation of zeroth, first, second or third order; ‘previous’ and ‘next’ simply return the previous or next value of the point. The default is 'linear'.
End of explanation
# uncomment the line below to time how long this calculation takes
# %%timeit
from structcol import model
# parameters for our colloidal sample
volume_fraction = sc.Quantity(0.64, '')
radius = sc.Quantity('125 nm')
# wavelengths of interest
wavelength = sc.Quantity(np.arange(400., 800., 10.0), 'nm')
# calculate refractive indices at wavelengths of interest
n_particle = sc.Quantity(1.53, '')#ri.n('polystyrene', wavelength)
n_matrix = ri.n('vacuum', wavelength)
n_medium = n_matrix
# now calculate the reflection spectrum, asymmetry parameter (g), and
# transport length (lstar)
refl = np.zeros(wavelength.size)
g = np.zeros(wavelength.size)
# note the units explicitly assigned to the transport length; you
# must specify a length unit here
lstar = np.zeros(wavelength.size)*sc.ureg('um')
for i in range(wavelength.size):
# the first element in the tuple is the reflection coefficient for
# unpolarized light. The next two (which we skip) are the
# coefficients for parallel and perpendicularly polarized light.
# Third is the asymmetry parameter, and fourth the transport length
refl[i], _, _, g[i], lstar[i] = model.reflection(n_particle, n_matrix[i],
n_medium[i], wavelength[i],
radius, volume_fraction,
thickness = sc.Quantity('4000.0 nm'),
theta_min = sc.Quantity('90 deg'),
maxwell_garnett=False) # the default option is False
fig, (ax_a, ax_b, ax_c) = plt.subplots(nrows=3, figsize=(8,8))
ax_a.plot(wavelength, refl)
ax_a.set_ylabel('Reflected fraction (unpolarized)')
ax_b.plot(wavelength, g)
ax_b.set_ylabel('Asymmetry parameter')
ax_c.semilogy(wavelength, lstar)
ax_c.set_ylabel('Transport length (μm)')
ax_c.set_xlabel('wavelength (nm)')
Explanation: Calculating a reflection spectrum
With the tools above we can calculate a reflection spectrum using the single-scattering model described in Magkiriadou, S., Park, J.-G., Kim, Y.-S., and Manoharan, V. N. “Absence of Red Structural Color in Photonic Glasses, Bird Feathers, and Certain Beetles” Physical Review E 90, no. 6 (2014): 62302. doi:10.1103/PhysRevE.90.062302
The effective refractive index of the sample can be calculated either with the Maxwell-Garnett formulation or the Bruggeman equation. The Bruggeman equation is the default option for our calculations, because Maxwell-Garnett is not valid when the volume fractions of the components are comparable, which is often the case in structural color samples (Markel, V. A., "Introduction to the Maxwell Garnett approximation: tutorial", Journal of the Optical Socienty of America A, 33, no. 7 (2016)). In addition, Maxwell Garnett only works for systems of two components (e.g. a particle index and a matrix index), whereas Bruggeman can be applied to multicomponent systems such as core-shell particles.
The model can also handle absorbing systems, either with an absorbing particle or an absorbing matrix. Then the corresponding complex refractive indices must be specified.
End of explanation
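For orientation, the two effective-medium approximations mentioned above are usually written (these are the standard textbook forms, quoted here for reference rather than taken from the package source) in terms of the dielectric functions $\varepsilon = n^2$ of particle ($p$) and matrix ($m$) at particle volume fraction $\phi$:
$$\frac{\varepsilon_\text{eff} - \varepsilon_m}{\varepsilon_\text{eff} + 2\varepsilon_m} = \phi\,\frac{\varepsilon_p - \varepsilon_m}{\varepsilon_p + 2\varepsilon_m} \;\text{(Maxwell Garnett)}, \qquad \phi\,\frac{\varepsilon_p - \varepsilon_\text{eff}}{\varepsilon_p + 2\varepsilon_\text{eff}} + (1-\phi)\,\frac{\varepsilon_m - \varepsilon_\text{eff}}{\varepsilon_m + 2\varepsilon_\text{eff}} = 0 \;\text{(Bruggeman)},$$
with $n_\text{eff} = \sqrt{\varepsilon_\text{eff}}$. The Bruggeman relation treats the two components symmetrically, which is why it stays sensible when the volume fractions are comparable and generalizes to more than two components.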
# Example calculation for a core-shell particle system (core is polystyrene and shell is silica, in a matrix of air)
from structcol import model
# parameters for our colloidal sample
volume_fraction = sc.Quantity(0.5, '')
radius = sc.Quantity(np.array([110,120]), 'nm')
# wavelengths of interest
wavelength = sc.Quantity(np.arange(400., 800., 10.0), 'nm')
# calculate refractive indices at wavelengths of interest
n_particle = sc.Quantity([ri.n('polystyrene', wavelength), ri.n('fused silica', wavelength)])
n_particle_abs = sc.Quantity([ri.n('polystyrene', wavelength),
ri.n('fused silica', wavelength)+0.005j]) # assume the shell absorbs
n_matrix = ri.n('vacuum', wavelength)
n_medium = n_matrix
# now calculate the reflection spectrum, asymmetry parameter (g), and
# transport length (lstar)
refl = np.zeros(wavelength.size)
g = np.zeros(wavelength.size)
lstar = np.zeros(wavelength.size)*sc.ureg('um')
refl_abs = np.zeros(wavelength.size)
g_abs = np.zeros(wavelength.size)
lstar_abs = np.zeros(wavelength.size)*sc.ureg('um')
for i in range(wavelength.size):
# non-absorbing case
refl[i], _, _, g[i], lstar[i] = model.reflection(n_particle[:,i], n_matrix[i],
n_medium[i], wavelength[i],
radius, volume_fraction,
thickness = sc.Quantity('15000.0 nm'),
theta_min = sc.Quantity('90 deg'))
# absorbing case
refl_abs[i], _, _, g_abs[i], lstar_abs[i] = model.reflection(n_particle_abs[:,i], n_matrix[i],
n_medium[i], wavelength[i],
radius, volume_fraction,
thickness = sc.Quantity('15000.0 nm'),
theta_min = sc.Quantity('90 deg'))
fig, (ax_a, ax_b, ax_c) = plt.subplots(nrows=3, figsize=(8,8))
ax_a.plot(wavelength, refl, label='non-absorbing shell')
ax_a.plot(wavelength, refl_abs, label='absorbing shell')
ax_a.legend()
ax_a.set_ylabel('Reflected fraction (unpolarized)')
ax_b.plot(wavelength, g, label='non-absorbing shell')
ax_b.plot(wavelength, g_abs, '--', label='absorbing shell')
ax_b.legend()
ax_b.set_ylabel('Asymmetry parameter')
ax_c.semilogy(wavelength, lstar, label='non-absorbing shell')
ax_c.semilogy(wavelength, lstar_abs, '--', label='absorbing shell')
ax_c.legend()
ax_c.set_ylabel('Transport length (μm)')
ax_c.set_xlabel('wavelength (nm)')
Explanation: Note that the asymmetry parameter becomes negative at the reflection peak (as expected, since light is preferentially backscattered), and, as a result, the transport length has a dip in the same wavelength region.
Calculating the reflection spectrum of a core-shell particle system, with either an absorbing or non-absorbing particle index
We can calculate a reflection spectrum of a system of core-shell particles, where the core and the shell(s) can have different radii and refractive indices. The syntax is mostly the same as that of non-core-shell particles, except that the particle radius and the particle index are now Quantity arrays of values from the innermost (the core) to the outermost layer in the particle. The volume fraction is that of the entire core-shell particle.
End of explanation
# Example calculation for a polydisperse system with two species of particles, each with its own size distribution
from structcol import model
# parameters for our colloidal sample
volume_fraction = sc.Quantity(0.5, '')
radius = sc.Quantity('100 nm')
# define the parameters for polydispersity
radius2 = sc.Quantity('150 nm')
concentration = sc.Quantity(np.array([0.9,0.1]), '')
pdi = sc.Quantity(np.array([0.01, 0.01]), '')
# wavelengths of interest
wavelength = sc.Quantity(np.arange(400., 800., 10.0), 'nm')
# calculate refractive indices at wavelengths of interest
n_particle = ri.n('polystyrene', wavelength)
n_matrix = ri.n('vacuum', wavelength)
n_medium = n_matrix
# now calculate the reflection spectrum, asymmetry parameter (g), and
# transport length (lstar)
refl_mono = np.zeros(wavelength.size)
g_mono = np.zeros(wavelength.size)
lstar_mono = np.zeros(wavelength.size)*sc.ureg('um')
refl_poly = np.zeros(wavelength.size)
g_poly = np.zeros(wavelength.size)
lstar_poly = np.zeros(wavelength.size)*sc.ureg('um')
for i in range(wavelength.size):
# need to specify extra parameters for the polydisperse (and bispecies) case
refl_poly[i], _, _, g_poly[i], lstar_poly[i] = model.reflection(n_particle[i], n_matrix[i],
n_medium[i], wavelength[i],
radius, volume_fraction,
thickness = sc.Quantity('15000.0 nm'),
theta_min = sc.Quantity('90 deg'),
radius2 = radius2, concentration = concentration,
pdi = pdi, structure_type='polydisperse',
form_type='polydisperse')
# monodisperse (assuming the system is composed of purely the 200 nm particles)
refl_mono[i], _, _, g_mono[i], lstar_mono[i] = model.reflection(n_particle[i], n_matrix[i],
n_medium[i], wavelength[i],
radius, volume_fraction,
thickness = sc.Quantity('15000.0 nm'),
theta_min = sc.Quantity('90 deg'))
fig, (ax_a, ax_b, ax_c) = plt.subplots(nrows=3, figsize=(8,8))
ax_a.plot(wavelength, refl_mono, label='monodisperse')
ax_a.plot(wavelength, refl_poly, label='polydisperse, bispecies')
ax_a.legend()
ax_a.set_ylabel('Reflected fraction (unpolarized)')
ax_b.plot(wavelength, g_mono, label='monodisperse')
ax_b.plot(wavelength, g_poly, label='polydisperse, bispecies')
ax_b.legend()
ax_b.set_ylabel('Asymmetry parameter')
ax_c.semilogy(wavelength, lstar_mono, label='monodisperse')
ax_c.semilogy(wavelength, lstar_poly, label='polydisperse, bispecies')
ax_c.legend()
ax_c.set_ylabel('Transport length (μm)')
ax_c.set_xlabel('wavelength (nm)')
Explanation: Calculating the reflection spectrum of a polydisperse system with either one species or two species of particles
We can calculate the spectrum of a polydisperse system with either one or two species of particles, meaning that there are one or two mean radii, and each species has its own size distribution. We then need to specify the mean radius, the polydispersity index (pdi), and the concentration of each species. For example, consider a system of 90$\%$ of 200 nm polystyrene particles and 10$\%$ of 300 nm particles, with each species having a polydispersity index of 1$\%$. In this case, the mean radii are [200, 300] nm, the pdi are [0.01, 0.01], and the concentrations are [0.9, 0.1].
If the system is monospecies, we still need to specify the polydispersity parameters in 2-element arrays. For example, the mean radii become [200, 200] nm, the pdi become [0.01, 0.01], and the concentrations become [1.0, 0.0].
To include absorption into the polydisperse system calculation, we just need to use the complex refractive index of the particle and/or the matrix.
Note: the code takes longer (~1 min) in the polydisperse than in the monodisperse case because it calculates the scattering for a distribution of particles.
Note 2: the code currently does not handle polydispersity for systems of core-shell particles.
End of explanation
from structcol import mie
Explanation: Note that the polydisperse case has a broader and red-shifted peak compared to the monodisperse case. This trend makes sense since the polydisperse system contains 10$\%$ of larger particles than the monodisperse system.
Mie scattering module
Normally you won't need to use this model on its own, but if you want to, start with
End of explanation
wavelen = sc.Quantity('450 nm')
n_matrix = ri.n('vacuum', wavelen)
n_particle = ri.n('polystyrene', wavelen)
radius = sc.Quantity('0.4 um')
m = sc.index_ratio(n_particle, n_matrix)
x = sc.size_parameter(wavelen, n_matrix, radius)
# must explicitly state whether angles are in radians or degrees
angles = sc.Quantity(np.linspace(0, np.pi, 1000), 'rad')
form_factor_par, form_factor_perp = mie.calc_ang_dist(m, x, angles)
plt.semilogy(angles.to('deg'), form_factor_par, label='parallel polarization')
plt.plot(angles.to('deg'), form_factor_perp, label='perpendicular polarization')
plt.legend()
plt.xlabel('angle ($\degree$)')
plt.ylabel('intensity')
Explanation: Form factor calculation:
End of explanation
from structcol import structure
Explanation: Structure module
To use this module:
End of explanation
qd = np.arange(0.1, 20, 0.01)
phi = np.array([0.15, 0.3, 0.45])
# this little trick allows us to calculate the structure factor on a 2d
# grid of points (turns qd into a column vector and phi into a row vector).
# Could also use np.ogrid
s = structure.factor_py(qd.reshape(-1,1), phi.reshape(1,-1))
for i in range(len(phi)):
plt.plot(qd, s[:,i], label='$\phi=$'+str(phi[i]))#, label='$phi='+phi[i]+'$')
plt.legend()
plt.xlabel('$qd$')
plt.ylabel('$\phi$')
Explanation: Here is an example of calculating structure factors with the Percus-Yevick approximation. The code is fully vectorized, so we can calculate structure factors for a variety of qd values and volume fractions in parallel:
End of explanation |
992 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercises Electric Machinery Fundamentals
Chapter 6
Problem 6-3
Step1: Description
A three-phase 60-Hz induction motor runs at 715 r/min at no-load and at 670 r/min at full load.
Note
Step2: (a)
How many poles does this motor have?
(b)
What is the slip at rated load?
(c)
What is the speed at one-quarter of the rated load?
(d)
What is the rotor’s electrical frequency at one-quarter of the rated load?
SOLUTION
(a)
This machine produces a synchronous speed of
Step3: So nearest even number of poles is
Step4: which produces a synchronous speed of
Step5: (b)
The slip at rated load is
Step6: (c)
The motor is operating in the linear region of its torque-speed curve, so the slip at $\frac14$ load will approximately be
Step7: The resulting speed is
Step8: (d)
The electrical frequency at $\frac14$ load is | Python Code:
%pylab notebook
%precision 4
Explanation: Exercises Electric Machinery Fundamentals
Chapter 6
Problem 6-3
End of explanation
fe = 60.0 # [Hz]
n_noload = 715.0 # [r/min]
n_m = 670.0 # [r/min]
Explanation: Description
A three-phase 60-Hz induction motor runs at 715 r/min at no-load and at 670 r/min at full load.
Note:
The no-load speed is near but not identical with the synchronous speed. You will always have some losses that the machine needs to overcome. Hence the speed of the rotor will never reach synchronous speed even with no-load.
End of explanation
p = 120*fe / n_noload
p
Explanation: (a)
How many poles does this motor have?
(b)
What is the slip at rated load?
(c)
What is the speed at one-quarter of the rated load?
(d)
What is the rotor’s electrical frequency at one-quarter of the rated load?
SOLUTION
(a)
This machine produces a synchronous speed of:
$$n_\text{sync} = \frac{120f_e}{p}$$
End of explanation
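Plugging in the numbers of this problem, with the no-load speed standing in for the synchronous speed:
$$p \approx \frac{120 f_e}{n_\text{noload}} = \frac{120 \times 60}{715} \approx 10.07,$$
so the nearest (lower) even integer is $p = 10$.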
p=floor(p/2)*2 # nearest even number
print('''
p = {:.0f}
======'''.format(p))
Explanation: So nearest even number of poles is:
End of explanation
n_sync = 120*fe / p
print('n_sync = {:.0f} rpm'.format(n_sync))
Explanation: which produces a synchronous speed of
End of explanation
s = (n_sync - n_m) / n_sync
print('''
s = {:.2f} %
=========='''.format(s*100))
Explanation: (b)
The slip at rated load is:
$$s = \frac{n_\text{sync} - n_m}{n_\text{sync}} \cdot 100\%$$
End of explanation
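With $n_\text{sync} = 720\ \text{r/min}$ and $n_m = 670\ \text{r/min}$ this evaluates to
$$s = \frac{720 - 670}{720} \approx 0.0694 = 6.94\%.$$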
s_c = 1/4 * s
s_c
Explanation: (c)
The motor is operating in the linear region of its torque-speed curve, so the slip at $\frac14$ load will approximately be:
End of explanation
n_m_c = (1 - s_c) * n_sync
print('''
n_m_c = {:.0f} r/min
================='''.format(n_m_c))
Explanation: The resulting speed is:
End of explanation
fr_d = s_c * fe
print('''
fr_d = {:.2f} Hz
=============='''.format(fr_d))
Explanation: (d)
The electrical frequency at $\frac14$ load is:
$$f_r = sf_e$$
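Numerically, $s_{1/4} \approx 6.94\%/4 \approx 1.74\%$, so $f_r \approx 0.0174 \times 60\ \text{Hz} \approx 1.04\ \text{Hz}$.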
End of explanation |
993 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Actual function input
Step1: bug fix1
rawArrayDatas -> rawArrayDatas[0]
Because rawArrayDatas is a two-dimensional array, len(rawArrayDatas) is 2,
and len(rawArrayDatas[0]) is 5.
Step2: bug fix1
rawArrayDatas -> rawArrayDatas[0]
Because rawArrayDatas is a two-dimensional array, len(rawArrayDatas) is 2,
and len(rawArrayDatas[0]) is 5.
Step3: Preprocessing the entire dataset | Python Code:
rawArrayDatas=[["2017-08-11", "2017-08-12", "2017-08-13", "2017-08-14", "2017-08-15","2017-08-16"],
[20.0, 30.0, 40.0, 50.0, 60.0,20.0]]
processId=12
forecastDay=4
Explanation: Actual function input
End of explanation
mockForecast={}
rmse={}
forecast=[]
realForecast={}
trainSize=int(len(rawArrayDatas[0]) * 0.7)
testSize=len(rawArrayDatas[0])-trainSize
Explanation: bug fix1
rawArrayDatas -> rawArrayDatas[0]
Because rawArrayDatas is a two-dimensional array, len(rawArrayDatas) is 2,
and len(rawArrayDatas[0]) is 5.
End of explanation
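To see the point of the fix concretely, here is a tiny self-contained check (illustrative only, not part of the original pipeline):
demo = [["2017-08-11", "2017-08-12", "2017-08-13"], [20.0, 30.0, 40.0]]
print(len(demo))     # 2  -> number of columns (dates, values)
print(len(demo[0]))  # 3  -> number of samples, which is what trainSize should be based on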
print(trainSize)
print(testSize)
Explanation: bug fix1
rawArrayDatas -> rawArrayDatas[0]
Because rawArrayDatas is a two-dimensional array, len(rawArrayDatas) is 2,
and len(rawArrayDatas[0]) is 5.
End of explanation
ds = rawArrayDatas[0]
y = list(np.log(rawArrayDatas[1]))
sales = list(zip(ds, y))
rawArrayDatas
sales
print(type(ds), type(y))
preprocessedData= pd.DataFrame(data = sales, columns=['ds', 'y'])
preprocessedData
model = Prophet()
model.fit(preprocessedData)
future = model.make_future_dataframe(periods=forecastDay)
forecast = future[-forecastDay:]
# Python
forecast = model.predict(future)
forecast[['ds', 'yhat']].tail()
forecast
forecast
model.plot(forecast)
forecastData= [np.exp(y) for y in forecast['yhat'][-forecastDay:]]
print(forecastData)
data=rawArrayDatas[1]+forecastData
data
ans=np.log10(10)
ans
np.exp(ans)
data= [np.exp(y) for y in forecast['yhat']]
print(data)
date= [d.strftime('%Y-%m-%d') for d in forecast['ds']]
date
dateStamp = list(forecast['ds'][-forecastDay:])
dateStamp
date = [p.strftime('%Y-%m-%d') for p in dateStamp]
date
realForecast['Bayseian'] = Bayseian(preprocessedData=XY, forecastDay=forecastDay)[0]
realForecast
XY = PrepareBayseian(rawArrayDatas)
Bayseian(preprocessedData=XY, forecastDay=forecastDay)[0]
LearningModuleRunner(rawArrayDatas, processId, forecastDay)
realForecast
type(realForecast['Bayseian'])
import sys
print(sys.version)
# hard-coded values for data preparation
#rawArrayDatas
df0=pd.read_csv('./data/397_replace0with1.csv')
df0['y'] = np.log(df0['y'])
ds=df0['ds']
y=df0['y']
#processId
processId=1
# proper data input procedure
rawArrayDatas=[['2016-01-01','2016-01-02','2016-01-03','2016-01-04','2016-01-05'],[10,10,12,13,14]]
ds=rawArrayDatas[0]
y=rawArrayDatas[1]
sales=list(zip(ds,y))
day=5
sales
rawArrayDatas[:][:2]
rawArrayDatas=[['2016-01-01','2016-01-02','2016-01-03','2016-01-04','2016-01-05'],[10,10,12,13,14]]
ds=rawArrayDatas[0]
y=rawArrayDatas[1]
sales = list(zip(rawArrayDatas[0], rawArrayDatas[1]))
y
sales
ds=rawArrayDatas[0]
# --> extract year, month, dayOfWeek
year=[2016,2016, 2016, 2017]
month=[1,1,1,1]
dayOfWeek=[1,2,3,4]
y=rawArrayDatas[1][:len(train)]
ds=rawArrayDatas[0]
# --> extract year, month, dayOfWeek
year=np.random.beta(2000, 2017, len(train))*(2017-2000)
month=np.random.beta(1, 12, len(train))*(12-1)
dayOfWeek=np.random.beta(0, 6, len(train))*(6-0)
y=rawArrayDatas[1][:len(train)]
year
month
dayOfWeek
sales=list(zip(year, month, dayOfWeek, y))
sales
x = pd.DataFrame(data = sales, columns=['year', 'month', 'dayOfWeek','y'])
x
y
np.size(y)
x = pd.DataFrame(data = sales, columns=['year', 'month', 'dayOfWeek','y'])
x['month']
type(rawArrayDatas)
x
type(x)
list(df)
# If any value is 0, its log becomes -inf and an 'Initialization failed.' error is raised, so be careful.
m = Prophet()
m.fit(df);
future = m.make_future_dataframe(periods=day)
future.tail()
future[-day:]
future
forecast=m.predict(future)
forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()
temp=list(forecast['ds'][-day:])
date=[p.strftime('%Y-%m-%d') for p in temp]
date=[p.strftime('%Y-%m-%d') for p in temp]
date
strftime('We are the %d, %b %Y')
m.plot(forecast)
m.plot_components(forecast)
Explanation: Preprocessing the entire dataset
End of explanation |
994 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: Automatic differentiation and gradient tape
Step2: Derivatives of a function
TensorFlow provides APIs for automatic differentiation - computing the derivative of a function. The way that more closely mimics the math is to encapsulate the computation in a Python function, say f, and use tfe.gradients_function to create a function that computes the derivatives of f with respect to its arguments. If you're familiar with autograd for differentiating numpy functions, this will be familiar. For example
Step3: Higher-order gradients
The same API can be used to differentiate as many times as you like
Step4: Gradient tapes
Every differentiable TensorFlow operation has an associated gradient function. For example, the gradient function of tf.square(x) would be a function that returns 2.0 * x. To compute the gradient of a user-defined function (like f(x) in the example above), TensorFlow first "records" all the operations applied to compute the output of the function. We call this record a "tape". It then uses that tape and the gradients functions associated with each primitive operation to compute the gradients of the user-defined function using reverse mode differentiation.
Since operations are recorded as they are executed, Python control flow (using ifs and whiles for example) is naturally handled
Step5: At times it may be inconvenient to encapsulate computation of interest into a function. For example, if you want the gradient of the output with respect to intermediate values computed in the function. In such cases, the slightly more verbose but explicit tf.GradientTape context is useful. All computation inside the context of a tf.GradientTape is "recorded".
For example
Step6: Higher-order gradients
Operations inside of the GradientTape context manager are recorded for automatic differentiation. If gradients are computed in that context, then the gradient computation is recorded as well. As a result, the exact same API works for higher-order gradients as well. For example | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
import tensorflow as tf
tf.enable_eager_execution()
tfe = tf.contrib.eager # Shorthand for some symbols
Explanation: Automatic differentiation and gradient tape
<table class="tfo-notebook-buttons" align="left"><td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/automatic_differentiation.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td><td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/automatic_differentiation.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a></td></table>
In the previous tutorial we introduced Tensors and operations on them. In this tutorial we will cover automatic differentiation, a key technique for optimizing machine learning models.
Setup
End of explanation
from math import pi
def f(x):
return tf.square(tf.sin(x))
assert f(pi/2).numpy() == 1.0
# grad_f will return a list of derivatives of f
# with respect to its arguments. Since f() has a single argument,
# grad_f will return a list with a single element.
grad_f = tfe.gradients_function(f)
assert tf.abs(grad_f(pi/2)[0]).numpy() < 1e-7
Explanation: Derivatives of a function
TensorFlow provides APIs for automatic differentiation - computing the derivative of a function. The way that more closely mimics the math is to encapsulate the computation in a Python function, say f, and use tfe.gradients_function to create a function that computes the derivatives of f with respect to its arguments. If you're familiar with autograd for differentiating numpy functions, this will be familiar. For example:
End of explanation
def f(x):
return tf.square(tf.sin(x))
def grad(f):
return lambda x: tfe.gradients_function(f)(x)[0]
x = tf.lin_space(-2*pi, 2*pi, 100) # 100 points between -2π and +2π
import matplotlib.pyplot as plt
plt.plot(x, f(x), label="f")
plt.plot(x, grad(f)(x), label="first derivative")
plt.plot(x, grad(grad(f))(x), label="second derivative")
plt.plot(x, grad(grad(grad(f)))(x), label="third derivative")
plt.legend()
plt.show()
Explanation: Higher-order gradients
The same API can be used to differentiate as many times as you like:
End of explanation
def f(x, y):
output = 1
for i in range(y):
output = tf.multiply(output, x)
return output
def g(x, y):
# Return the gradient of `f` with respect to its first parameter
return tfe.gradients_function(f)(x, y)[0]
assert f(3.0, 2).numpy() == 9.0 # f(x, 2) is essentially x * x
assert g(3.0, 2).numpy() == 6.0 # And its gradient will be 2 * x
assert f(4.0, 3).numpy() == 64.0 # f(x, 3) is essentially x * x * x
assert g(4.0, 3).numpy() == 48.0 # And its gradient will be 3 * x * x
Explanation: Gradient tapes
Every differentiable TensorFlow operation has an associated gradient function. For example, the gradient function of tf.square(x) would be a function that returns 2.0 * x. To compute the gradient of a user-defined function (like f(x) in the example above), TensorFlow first "records" all the operations applied to compute the output of the function. We call this record a "tape". It then uses that tape and the gradients functions associated with each primitive operation to compute the gradients of the user-defined function using reverse mode differentiation.
Since operations are recorded as they are executed, Python control flow (using ifs and whiles for example) is naturally handled:
End of explanation
x = tf.ones((2, 2))
# TODO(b/78880779): Remove the 'persistent=True' argument and use
# a single t.gradient() call when the bug is resolved.
with tf.GradientTape(persistent=True) as t:
# TODO(ashankar): Explain with "watch" argument better?
t.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
# Use the same tape to compute the derivative of z with respect to the
# intermediate value y.
dz_dy = t.gradient(z, y)
assert dz_dy.numpy() == 8.0
# Derivative of z with respect to the original input tensor x
dz_dx = t.gradient(z, x)
for i in [0, 1]:
for j in [0, 1]:
assert dz_dx[i][j].numpy() == 8.0
Explanation: At times it may be inconvenient to encapsulate computation of interest into a function. For example, if you want the gradient of the output with respect to intermediate values computed in the function. In such cases, the slightly more verbose but explicit tf.GradientTape context is useful. All computation inside the context of a tf.GradientTape is "recorded".
For example:
End of explanation
# TODO(ashankar): Should we use the persistent tape here instead? Follow up on Tom and Alex's discussion
x = tf.constant(1.0) # Convert the Python 1.0 to a Tensor object
with tf.GradientTape() as t:
with tf.GradientTape() as t2:
t2.watch(x)
y = x * x * x
# Compute the gradient inside the 't' context manager
# which means the gradient computation is differentiable as well.
dy_dx = t2.gradient(y, x)
d2y_dx2 = t.gradient(dy_dx, x)
assert dy_dx.numpy() == 3.0
assert d2y_dx2.numpy() == 6.0
Explanation: Higher-order gradients
Operations inside of the GradientTape context manager are recorded for automatic differentiation. If gradients are computed in that context, then the gradient computation is recorded as well. As a result, the exact same API works for higher-order gradients as well. For example:
End of explanation |
995 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intro. to Snorkel
Step1: We repeat our definition of the Spouse Candidate subclass
Step2: We reload the probabilistic training labels
Step3: We also reload the candidates
Step4: Finally, we load gold labels for evaluation
Step5: Now we can set up our discriminative model. Here we specify the model and learning hyperparameters.
They can also be set automatically using a search based on the dev set with a GridSearch object.
Step6: Now, we get the precision, recall, and F1 score from the discriminative model
Step7: We can also get the candidates returned in sets (true positives, false positives, true negatives, false negatives) as well as a more detailed score report
Step8: Note that if this is the final test set that you will be reporting final numbers on, to avoid biasing results you should not inspect results. However you can run the model on your development set and, as we did in the previous part with the generative labeling function model, inspect examples to do error analysis.
You can also improve performance substantially by increasing the number of training epochs!
Finally, we can save the predictions of the model on the test set back to the database. (This also works for other candidate sets, such as unlabeled candidates.) | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os
# TO USE A DATABASE OTHER THAN SQLITE, USE THIS LINE
# Note that this is necessary for parallel execution amongst other things...
# os.environ['SNORKELDB'] = 'postgres:///snorkel-intro'
from snorkel import SnorkelSession
session = SnorkelSession()
Explanation: Intro. to Snorkel: Extracting Spouse Relations from the News
Part III: Training an End Extraction Model
In this final section of the tutorial, we'll use the noisy training labels we generated in the last tutorial part to train our end extraction model.
For this tutorial, we will be training a Bi-LSTM, a state-of-the-art deep neural network implemented in TensorFlow.
End of explanation
from snorkel.models import candidate_subclass
Spouse = candidate_subclass('Spouse', ['person1', 'person2'])
Explanation: We repeat our definition of the Spouse Candidate subclass:
End of explanation
from snorkel.annotations import load_marginals
train_marginals = load_marginals(session, split=0)
Explanation: We reload the probabilistic training labels:
End of explanation
train_cands = session.query(Spouse).filter(Spouse.split == 0).order_by(Spouse.id).all()
dev_cands = session.query(Spouse).filter(Spouse.split == 1).order_by(Spouse.id).all()
test_cands = session.query(Spouse).filter(Spouse.split == 2).order_by(Spouse.id).all()
Explanation: We also reload the candidates:
End of explanation
from snorkel.annotations import load_gold_labels
L_gold_dev = load_gold_labels(session, annotator_name='gold', split=1)
L_gold_test = load_gold_labels(session, annotator_name='gold', split=2)
Explanation: Finally, we load gold labels for evaluation:
End of explanation
from snorkel.learning.pytorch import LSTM
train_kwargs = {
'lr': 0.01,
'embedding_dim': 50,
'hidden_dim': 50,
'n_epochs': 10,
'dropout': 0.25,
'seed': 1701
}
lstm = LSTM(n_threads=None)
lstm.train(train_cands, train_marginals, X_dev=dev_cands, Y_dev=L_gold_dev, **train_kwargs)
Explanation: Now we can set up our discriminative model. Here we specify the model and learning hyperparameters.
They can also be set automatically using a search based on the dev set with a GridSearch object.
End of explanation
p, r, f1 = lstm.score(test_cands, L_gold_test)
print("Prec: {0:.3f}, Recall: {1:.3f}, F1 Score: {2:.3f}".format(p, r, f1))
Explanation: Now, we get the precision, recall, and F1 score from the discriminative model:
End of explanation
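For reference, the three numbers printed above follow the usual definitions in terms of true/false positives and negatives on the test set:
$$P = \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN}, \qquad F_1 = \frac{2PR}{P + R}.$$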
tp, fp, tn, fn = lstm.error_analysis(session, test_cands, L_gold_test)
Explanation: We can also get the candidates returned in sets (true positives, false positives, true negatives, false negatives) as well as a more detailed score report:
End of explanation
lstm.save_marginals(session, test_cands)
Explanation: Note that if this is the final test set that you will be reporting final numbers on, to avoid biasing results you should not inspect results. However you can run the model on your development set and, as we did in the previous part with the generative labeling function model, inspect examples to do error analysis.
You can also improve performance substantially by increasing the number of training epochs!
Finally, we can save the predictions of the model on the test set back to the database. (This also works for other candidate sets, such as unlabeled candidates.)
End of explanation |
996 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Half Power Estimation of $\zeta$
Step1: We simulate a dynamic testing, using a low sampled, random error affected sequence of frequencies to compute a random error affected sequence of dynamic amplification factors for $\zeta=3.5\%$.
Step2: We find the reference response value using the measured maximum value and plot a zone around the max value, using a reference line at $D_\text{max}/\sqrt2$
Step3: We plot 2 ranges around the crossings with the reference value
Step4: My estimates for the half-power frequencies are $f_1 = 0.9618$ and $f_2 = 1.0346$, and using these values in the half-power formula gives us our estimate of $\zeta$. | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(1969)
Explanation: Half Power Estimation of $\zeta$
End of explanation
b = np.linspace(0.5, 1.5, 51) + (np.random.random(51)-0.5)/100
z = 0.035
D = 1/np.sqrt((1-b*b)**2+(2*z*b)**2) * (1 + (np.random.random(51)-0.5)/100)
plt.plot(b, D) ; plt.ylim((0, 15)) ; plt.grid();
print('max of curve', max(D), '\tmax approx.', 1/2/z, '\texact', 1/2/z/np.sqrt(1-z*z))
Explanation: We simulate a dynamic testing, using a low sampled, random error affected sequence of frequencies to compute a random error affected sequence of dynamic amplification factors for $\zeta=3.5\%$.
End of explanation
Dmax = max(D)
D2 = Dmax/np.sqrt(2)
plt.plot(b, D, 'k-*')
plt.yticks((D2, Dmax))
plt.xlim((0.9, 1.1))
plt.grid()
Explanation: We find the reference response value using the measured maximum value and plot a zone around the max value, using a reference line at $D_\text{max}/\sqrt2$
End of explanation
plt.plot(b, D)
plt.yticks((D2, Dmax))
plt.xlim((0.950, 0.965))
plt.grid()
plt.show()
plt.plot(b, D)
plt.yticks((D2, Dmax))
plt.xlim((1.025, 1.040))
plt.grid();
Explanation: We plot 2 ranges around the crossings with the reference value
End of explanation
f1 = 0.9618
f2 = 1.0346
print(z, (f2-f1)/(f2+f1))
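The printed ratio is the standard half-power (bandwidth) estimate
$$\zeta \approx \frac{f_2 - f_1}{f_2 + f_1} = \frac{1.0346 - 0.9618}{1.0346 + 0.9618} \approx 0.0365,$$
which is close to the $\zeta = 0.035$ used to generate the data.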
Explanation: My estimates for the half-power frequencies are $f_1 = 0.9618$ and $f_2 = 1.0346$, and using these values in the half-power formula gives us our estimate of $\zeta$.
End of explanation |
997 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Atlanta Police Department
The Atlanta Police Department provides Part 1 crime data at http://www.atlantapd.org/crimedatadownloads.aspx
Step1: Load data (don't change this if you're running the notebook on the cluster)
We have two files
- /home/data/APD/COBRA083016_2015.xlsx for 2015
- /home/data/APD/COBRA083016.xlsx from 2009 to current date
Step2: Exploring Dates
Step3: Convert into date-time type
Step4: Part 1 - Observations from the data
Part 2 - Seasonal Model
Step5: Crime per year
Let's look at the
Step6: Let's look at residential burglary.
Step7: Normalized over the annual average
Step8: Fitting the regression line
Suppose there are $n$ data points $(x_i, y_i)$, $i = 1, \ldots, n$. The function that describes $x$ and $y$ is | Python Code:
### Load libraries
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
help(plt.legend)
Explanation: Atlanta Police Department
The Atlanta Police Department provides Part 1 crime data at http://www.atlantapd.org/crimedatadownloads.aspx
A recent copy of the data file is stored in the cluster. <span style="color: red; font-weight: bold;">Please, do not copy this data file into your home directory!</span>
End of explanation
%%time
df = pd.read_excel('/home/data/APD/COBRA083016_2015.xlsx', sheetname='Query')
df.shape
for c in df.columns:
print(c)
df[0:5]
df.describe()
df.offense_id.min(), df.offense_id.max()
df.groupby(['UC2 Literal']).offense_id.count()
Explanation: Load data (don't change this if you're running the notebook on the cluster)
We have two files
- /home/data/APD/COBRA083016_2015.xlsx for 2015
- /home/data/APD/COBRA083016.xlsx from 2009 to current date
End of explanation
df[['offense_id', 'occur_date', 'occur_time', 'rpt_date']][1:10]
Explanation: Exploring Dates
End of explanation
df['occur_ts'] = pd.to_datetime(df.occur_date+' '+df.occur_time)
#df[['offense_id', 'occur_date', 'occur_time', 'occur_ts', 'rpt_date']][1:10]
df['occur_ts'] = pd.to_datetime(df.occur_date+' '+df.occur_time)
df['occur_month'] = df['occur_ts'].map(lambda x: x.month)
df['occur_woy'] = df.occur_ts.dt.weekofyear
df.describe()
resdf = df.groupby(['UC2 Literal', 'occur_month']).offense_id.count()
resdf
resdf['BURGLARY-RESIDENCE'].as_matrix()
resdf['BURGLARY-RESIDENCE'].iloc[0]
%matplotlib inline
fig = plt.figure(figsize=(10,6)) # 10inx10in
#plt.plot(resdf['BURGLARY-RESIDENCE'].index, resdf['BURGLARY-RESIDENCE'])
plt.scatter(resdf['BURGLARY-RESIDENCE'].index, resdf['BURGLARY-RESIDENCE'], marker='x')
plt.scatter(resdf['BURGLARY-NONRES'].index, resdf['BURGLARY-NONRES'], marker='o')
plt.ylim(0, 500)
plt.title('BURGLARY-RESIDENCE')
plt.xticks(range(13), ['', 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
fig.savefig('BurglaryResidence_over_month.svg')
x = 1
df = pd.read_excel('/home/data/APD/COBRA083016_2015.xlsx', sheetname='Query')
df['occur_ts'] = pd.to_datetime(df.occur_date+' '+df.occur_time)
df['occur_month'] = df['occur_ts'].map(lambda x: x.month)
df['occur_woy'] = df.occur_ts.dt.weekofyear
%matplotlib inline
resdf = df.groupby(['UC2 Literal', 'occur_month']).offense_id.count()
fig = plt.figure(figsize=(10,6))
plt.scatter(resdf['BURGLARY-RESIDENCE'].index, resdf['BURGLARY-RESIDENCE'], marker='x')
plt.ylim(0, 500)
plt.title('BURGLARY-RESIDENCE')
plt.xticks(range(13), ['', 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
plt.savefig('quiz3-burglary-residence.png')
Explanation: Convert into date-time type
End of explanation
## load complete dataset
dff = pd.read_excel('/home/data/APD/COBRA083016.xlsx', sheetname='Query')
dff.shape
for evt in ['occur', 'poss']:
dff['%s_ts'%evt] = pd.to_datetime(dff['%s_date'%evt]+' '+dff['%s_time'%evt])
dff['rpt_ts'] = pd.to_datetime(dff.rpt_date)
', '.join(dff.columns)
dff['occur_year'] = dff.occur_ts.dt.year
dff['occur_month'] = dff.occur_ts.dt.month
dff['occur_dayweek'] = dff.occur_ts.dt.dayofweek
Explanation: Part 1 - Observations from the data
Part 2 - Seasonal Model
End of explanation
crime_year = dff[dff.occur_year.between(2009, 2015)].groupby(by=['UC2 Literal', 'occur_year']).offense_id.count()
%matplotlib inline
fig = plt.figure(figsize=(40,30))
crime_types = crime_year.index.levels[0]
years = crime_year.index.levels[1]
for c in range(len(crime_types)):
y_max = max(crime_year.loc[crime_types[c]])
plt.subplot(4,3,c+1)
plt.hlines(crime_year.loc[crime_types[c]].iloc[-1]*100/y_max, years[0], years[-1], linestyles="dashed", color="r")
plt.bar(crime_year.loc[crime_types[c]].index, crime_year.loc[crime_types[c]]*100/y_max, label=crime_types[c], alpha=0.5)
##plt.legend()
plt.ylim(0, 100)
plt.xticks(years+0.4, [str(int(y)) for y in years], rotation=0, fontsize=24)
plt.yticks([0,20,40,60,80,100], ['0%','20%','40%','60%','80%','100%'], fontsize=24)
plt.title(crime_types[c], fontsize=30)
None
Explanation: Crime per year
Let's look at the
End of explanation
c = 3
crime_types[c]
crime_year_month = dff[dff.occur_year.between(2009, 2015)].groupby(by=['UC2 Literal', 'occur_year', 'occur_month']).offense_id.count()
c = 3 ## 'BURGLARY-RESIDENCE'
resburglaries = crime_year_month.loc[crime_types[c]]
fig = plt.figure(figsize=(20,10))
for y in years:
plt.plot(resburglaries.loc[y].index, resburglaries.loc[y], label=("%4.0f"%y))
plt.legend()
plt.title("Seasonal Trends - %s"%crime_types[c], fontsize=20)
plt.xticks(range(13), ['', 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
plt.xlim(0,13)
None
Explanation: Let's look at residential burglary.
End of explanation
c = 3 ## 'BURGLARY-RESIDENCE'
fig = plt.figure(figsize=(20,10))
for y in years:
avg = resburglaries.loc[y].mean()
plt.hlines(avg, 1, 13, linestyle='dashed')
plt.plot(resburglaries.loc[y].index, resburglaries.loc[y], label=("%4.0f"%y))
plt.legend()
plt.title("Seasonal Trends - %s (with annual averages)"%crime_types[c], fontsize=20)
plt.xticks(list(range(1,13)), ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
plt.xlim(0,13)
None
c = 3 ## 'BURGLARY-RESIDENCE'
fig = plt.figure(figsize=(20,10))
for y in years:
avg = resburglaries.loc[y].mean()
std = resburglaries.loc[y].std()
##plt.hlines(avg, 1, 13, linestyle='dashed')
plt.plot(resburglaries.loc[y].index, (resburglaries.loc[y]-avg)/std, label=("%4.0f"%y))
plt.legend()
plt.title("Seasonal Trends - %s (normalized)"%crime_types[c], fontsize=20)
plt.xticks(list(range(1,13)), ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
plt.xlim(0,13)
plt.ylabel("Standard deviations $\sigma_y$")
None
seasonal_adjust = resburglaries.reset_index().groupby(by=['occur_month']).offense_id.agg('mean')
Explanation: Normalized over the annual average
End of explanation
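In other words, each year's monthly counts are standardized as $\tilde{y}_m = (y_m - \bar{y})/s_y$, where $\bar{y}$ and $s_y$ are that year's mean and standard deviation of the monthly counts.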
### in case we want to save a DataFrame
#writer = pd.ExcelWriter('myresults.xlsx')
#df.to_excel(writer,'Results')
#writer.save()
resdf
Explanation: Fitting the regression line
Suppose there are $n$ data points $(x_i, y_i)$, $i = 1, \ldots, n$. The function that describes $x$ and $y$ is:
$$y_i = \alpha + \beta x_i + \varepsilon_i.$$
The goal is to find the equation of the straight line
$$y = \alpha + \beta x,$$
which would provide a "best" fit for the data points. Here the "best" will be understood as in the least-squares approach: a line that minimizes the sum of squared residuals of the linear regression model. In other words, $\alpha$ (the $y$-intercept) and $\beta$ (the slope) solve the following minimization problem:
$$\text{Find }\min_{\alpha,\,\beta} Q(\alpha, \beta), \qquad \text{for } Q(\alpha, \beta) = \sum_{i=1}^n \varepsilon_i^{\,2} = \sum_{i=1}^n (y_i - \alpha - \beta x_i)^2$$
By using either calculus, the geometry of inner product spaces, or simply expanding to get a quadratic expression in $\alpha$ and $\beta$, it can be shown that the values of $\alpha$ and $\beta$ that minimize the objective function $Q$ (Kenney, J. F. and Keeping, E. S. (1962), "Linear Regression and Correlation", Ch. 15 in Mathematics of Statistics, Pt. 1, 3rd ed., Princeton, NJ: Van Nostrand, pp. 252–285) are
$$\begin{align}
\hat\beta &= \frac{ \sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y}) }{ \sum_{i=1}^n (x_i - \bar{x})^2 } \\[6pt]
&= \frac{ \sum_{i=1}^{n} (x_i y_i - x_i \bar{y} - \bar{x} y_i + \bar{x} \bar{y}) }{ \sum_{i=1}^n (x_i^2 - 2 x_i \bar{x} + \bar{x}^2) } \\[6pt]
&= \frac{ \sum_{i=1}^{n} x_i y_i - \bar{y} \sum_{i=1}^{n} x_i - \bar{x} \sum_{i=1}^{n} y_i + n \bar{x} \bar{y} }{ \sum_{i=1}^n x_i^2 - 2 \bar{x} \sum_{i=1}^n x_i + n \bar{x}^2 } \\[6pt]
&= \frac{ \frac{1}{n} \sum_{i=1}^{n} x_i y_i - \bar{x} \bar{y} }{ \frac{1}{n}\sum_{i=1}^n x_i^2 - \bar{x}^2 } \\[6pt]
&= \frac{ \overline{xy} - \bar{x}\bar{y} }{ \overline{x^2} - \bar{x}^2 } = \frac{ \operatorname{Cov}[x, y] }{ \operatorname{Var}[x] } = r_{xy} \frac{s_y}{s_x}, \\[6pt]
\hat\alpha &= \bar{y} - \hat\beta\,\bar{x},
\end{align}$$
where $r_{xy}$ is the sample correlation coefficient between $x$ and $y$, and $s_x$ and $s_y$ are the sample standard deviations of $x$ and $y$. A horizontal bar over a quantity indicates the average value of that quantity. For example:
$$\overline{xy} = \frac{1}{n} \sum_{i=1}^n x_i y_i.$$
Substituting the above expressions for $\hat{\alpha}$ and $\hat{\beta}$ into
$$f = \hat{\alpha} + \hat{\beta} x$$
yields
$$\frac{ f - \bar{y}}{s_y} = r_{xy} \frac{ x - \bar{x}}{s_x}.$$
This shows that $r_{xy}$ is the slope of the regression line of the standardized data points (and that this line passes through the origin).
It is sometimes useful to calculate $r_{xy}$ from the data independently using this equation:
$$r_{xy} = \frac{ \overline{xy} - \bar{x}\bar{y} }{ \sqrt{ \left(\overline{x^2} - \bar{x}^2\right)\left(\overline{y^2} - \bar{y}^2\right)} }$$
The coefficient of determination ($R$ squared) is equal to $r_{xy}^2$ when the model is linear with a single independent variable. See the sample correlation coefficient for additional details.
Linear regression without the intercept term
Sometimes it is appropriate to force the regression line to pass through the origin, because $x$ and $y$ are assumed to be proportional. For the model without the intercept term, $y = \beta x$, the OLS estimator for $\beta$ simplifies to
$$\hat{\beta} = \frac{ \sum_{i=1}^n x_i y_i }{ \sum_{i=1}^n x_i^2 } = \frac{\overline{x y}}{\overline{x^2}}$$
Substituting $(x - h, y - k)$ in place of $(x, y)$ gives the regression through $(h, k)$:
$$\begin{align}
\hat\beta &= \frac{\overline{(x - h)(y - k)}}{\overline{(x - h)^2}} \\[6pt]
&= \frac{\overline{xy} - k \bar{x} - h \bar{y} + h k }{\overline{x^2} - 2 h \bar{x} + h^2} \\[6pt]
&= \frac{\overline{xy} - \bar{x} \bar{y} + (\bar{x} - h)(\bar{y} - k)}{\overline{x^2} - \bar{x}^2 + (\bar{x} - h)^2} \\[6pt]
&= \frac{\operatorname{Cov}[x,y] + (\bar{x} - h)(\bar{y}-k)}{\operatorname{Var}[x] + (\bar{x} - h)^2}
\end{align}$$
The last form above demonstrates how moving the line away from the center of mass of the data points affects the slope.
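A minimal numpy sketch of these estimators (the names x and y below are placeholders for whichever pair of series you decide to fit, e.g. month number against monthly counts):
def ols_line(x, y):
    # beta_hat = Cov[x, y] / Var[x];  alpha_hat = y_bar - beta_hat * x_bar
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    beta = (np.mean(x * y) - x.mean() * y.mean()) / (np.mean(x * x) - x.mean() ** 2)
    alpha = y.mean() - beta * x.mean()
    return alpha, beta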
End of explanation |
998 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Epochs data structure: epoched data
Step1: Epochs objects can be created in three ways
Step2: Now, we can create an mne.Epochs object with the events we've extracted
Step3: Epochs behave similarly to mne.io.Raw objects
Step4: You can select subsets of epochs by indexing the Epochs object directly
Step5: Note the '/'s in the event code labels. These separators allow tag-based
selection of epoch sets; every string separated by '/' can be entered, and
returns the subset of epochs matching any of the strings. E.g.,
Step6: Note that MNE will not complain if you ask for tags not present in the
object, as long as it can find some match: if no match is found at all, an error (KeyError) is returned
Step7: It is also possible to iterate through Epochs objects; the behavior is different if you iterate on Epochs directly rather than indexing
Step8: You can manually remove epochs from the Epochs object by using epochs.drop(idx), or with rejection or flat thresholds via epochs.drop_bad
Step9: If you wish to save the epochs as a file, you can do it with mne.Epochs.save
Step10: Later on you can read the epochs with mne.read_epochs
Step11: If you wish to look at the average across trial types, then you may do so,
creating an | Python Code:
import mne
import os.path as op
import numpy as np
from matplotlib import pyplot as plt
Explanation: The :class:Epochs <mne.Epochs> data structure: epoched data
:class:Epochs <mne.Epochs> objects are a way of representing continuous
data as a collection of time-locked trials, stored in an array of shape
(n_events, n_channels, n_times). They are useful for many statistical
methods in neuroscience, and make it easy to quickly overview what occurs
during a trial.
End of explanation
data_path = mne.datasets.sample.data_path()
# Load a dataset that contains events
raw = mne.io.read_raw_fif(
op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif'))
# If your raw object has a stim channel, you can construct an event array
# easily
events = mne.find_events(raw, stim_channel='STI 014')
# Show the number of events (number of rows)
print('Number of events:', len(events))
# Show all unique event codes (3rd column)
print('Unique event codes:', np.unique(events[:, 2]))
# Specify event codes of interest with descriptive labels.
# This dataset also has visual left (3) and right (4) events, but
# to save time and memory we'll just look at the auditory conditions
# for now.
event_id = {'Auditory/Left': 1, 'Auditory/Right': 2}
Explanation: :class:Epochs <mne.Epochs> objects can be created in three ways:
1. From a :class:Raw <mne.io.Raw> object, along with event times
2. From an :class:Epochs <mne.Epochs> object that has been saved as a
.fif file
3. From scratch using :class:EpochsArray <mne.EpochsArray>. See
tut_creating_data_structures
End of explanation
epochs = mne.Epochs(raw, events, event_id, tmin=-0.1, tmax=1,
baseline=(None, 0), preload=True)
print(epochs)
Explanation: Now, we can create an :class:mne.Epochs object with the events we've
extracted. Note that epochs constructed in this manner will not have their
data available until explicitly read into memory, which you can do with
:func:get_data <mne.Epochs.get_data>. Alternatively, you can use
preload=True.
Expose the raw data as epochs, cut from -0.1 s to 1.0 s relative to the event
onsets
End of explanation
print(epochs.events[:3])
print(epochs.event_id)
Explanation: Epochs behave similarly to :class:mne.io.Raw objects. They have an
:class:info <mne.Info> attribute that has all of the same
information, as well as a number of attributes unique to the events contained
within the object.
End of explanation
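For instance (an optional inspection step that is not part of the original tutorial), you can look at the measurement info and the underlying data array directly:
# Optional: inspect the measurement info and the (n_epochs, n_channels, n_times) data array.
print(epochs.info)
print('Epoch time window:', epochs.tmin, 'to', epochs.tmax, 'seconds')
data = epochs.get_data()
print('Epochs data shape (n_epochs, n_channels, n_times):', data.shape)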
print(epochs[1:5])
print(epochs['Auditory/Right'])
Explanation: You can select subsets of epochs by indexing the :class:Epochs <mne.Epochs>
object directly. Alternatively, if you have epoch names specified in
event_id then you may index with strings instead.
End of explanation
print(epochs['Right'])
print(epochs['Right', 'Left'])
Explanation: Note the '/'s in the event code labels. These separators allow tag-based
selection of epoch sets; every string separated by '/' can be entered, and
returns the subset of epochs matching any of the strings. E.g.,
End of explanation
epochs_r = epochs['Right']
epochs_still_only_r = epochs_r[['Right', 'Left']]
print(epochs_still_only_r)
try:
epochs_still_only_r["Left"]
except KeyError:
print("Tag-based selection without any matches raises a KeyError!")
Explanation: Note that MNE will not complain if you ask for tags not present in the
object, as long as it can find some match: the below example is parsed as
(inclusive) 'Right' OR 'Left'. However, if no match is found, an error is
returned.
End of explanation
# These will be epochs objects
for i in range(3):
print(epochs[i])
# These will be arrays
for ep in epochs[:2]:
print(ep)
Explanation: It is also possible to iterate through :class:Epochs <mne.Epochs> objects
in this way. Note that behavior is different if you iterate on Epochs
directly rather than indexing:
End of explanation
epochs.drop([0], reason='User reason')
epochs.drop_bad(reject=dict(grad=2500e-13, mag=4e-12, eog=200e-6), flat=None)
print(epochs.drop_log)
epochs.plot_drop_log()
print('Selection from original events:\n%s' % epochs.selection)
print('Removed events (from numpy setdiff1d):\n%s'
% (np.setdiff1d(np.arange(len(events)), epochs.selection).tolist(),))
print('Removed events (from list comprehension -- should match!):\n%s'
% ([li for li, log in enumerate(epochs.drop_log) if len(log) > 0]))
Explanation: You can manually remove epochs from the Epochs object by using
:func:epochs.drop(idx) <mne.Epochs.drop>, or by using rejection or flat
thresholds with :func:epochs.drop_bad(reject, flat) <mne.Epochs.drop_bad>.
You can also inspect the reason why epochs were dropped by looking at the
list stored in epochs.drop_log or plot them with
:func:epochs.plot_drop_log() <mne.Epochs.plot_drop_log>. The indices
from the original set of events are stored in epochs.selection.
End of explanation
epochs_fname = op.join(data_path, 'MEG', 'sample', 'sample-epo.fif')
epochs.save(epochs_fname)
Explanation: If you wish to save the epochs as a file, you can do it with
:func:mne.Epochs.save. To conform to MNE naming conventions, the
epochs file names should end with '-epo.fif'.
End of explanation
epochs = mne.read_epochs(epochs_fname, preload=False)
Explanation: Later on you can read the epochs with :func:mne.read_epochs. For reading
EEGLAB epochs files see :func:mne.read_epochs_eeglab. We can also use
preload=False to save memory, loading the epochs from disk on demand.
End of explanation
ev_left = epochs['Auditory/Left'].average()
ev_right = epochs['Auditory/Right'].average()
f, axs = plt.subplots(3, 2, figsize=(10, 5))
_ = f.suptitle('Left / Right auditory', fontsize=20)
_ = ev_left.plot(axes=axs[:, 0], show=False, time_unit='s')
_ = ev_right.plot(axes=axs[:, 1], show=False, time_unit='s')
plt.tight_layout()
Explanation: If you wish to look at the average across trial types, then you may do so,
creating an :class:Evoked <mne.Evoked> object in the process. Instances
of Evoked are usually created by calling :func:mne.Epochs.average. For
creating Evoked from other data structures see :class:mne.EvokedArray and
tut_creating_data_structures.
End of explanation |
999 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the AI Platform APIs and Compute Engine APIs.
Enter your project ID and region in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Step2: Import libraries and define constants
Step3: Create a BigQuery dataset
In this notebook, you will need to create a dataset in your project called bqmlga4. To create it, run the following cell
Step4: The dataset
Using the sample gaming event data from Flood it!
The sample dataset contains raw event data, as shown in the next cell
Step5: It may be helpful to take a look at the overall schema used in Google Analytics 4. As mentioned earlier, Google Analytics 4 uses an event based measurement model and each row in this dataset is an event. Click here to view the complete schema and details about each column. As you can see above, certain columns are nested records and contain detailed information
Step6: Preparing the training data
You cannot simply use raw event data to train a machine learning model as it would not be in the right shape and format to use as training data. So in this section, you will learn how to pre-process the raw data into an appropriate format to use as training data for classification models.
To predict which user is going to churn or return, the ideal training data format for classification should look like the following
Step7: For the churned column, churned=0 if the user performs an action after 24 hours since their first touch, otherwise if their last action was only within the first 24 hours, then churned=1.
For the bounced column, bounced=1 if the user's last action was within the first ten minutes since their first touch with the app, otherwise bounced=0. We can use this column to filter our training data later on, by conditionally querying for users where bounced = 0.
You might wonder how many of these 15k users bounced and returned? You can run the following query to check
Step8: For the training data, you will only end up using data where bounced = 0. Based on the 15k users, you can see that 5,557 (\~41%) users bounced within the first ten minutes of their first engagement with the app, but of the remaining 8,031 users, 1,883 users (\~23%) churned after 24 hours.
Step9: Step 2. Extracting demographic data for each user
This section is focused on extracting the demographic information for each user. Different demographic information about the user is available in the dataset already, including app_info, device, ecommerce, event_params, geo. Demographic features can help the model predict whether users on certain devices or countries are more likely to churn.
For this notebook, you can start just with geo.country, device.operating_system, and device.language. If you are using your own dataset and have joinable first-party data, this section is a good opportunity to add any additional attributes for each user that may not be readily available in Google Analytics 4.
Note that a user's demographics may occasionally change (e.g. moving from one country to another). For simplicity, you will just use the demographic information that Google Analytics 4 provides when the user first engaged with the app as indicated by MIN(event_timestamp). This enables every unique user to be represented by a single row.
Step10: Step 3. Extracting behavioral data for each user
Behavioral data in the raw event data spans across multiple events -- and thus rows -- per user. The goal of this section is to aggregate and extract behavioral data for each user, resulting in one row of behavioral data per unique user.
But what kind of behavioral data will you need to prepare? Since the end goal of this notebook is to predict, based on a user's activity within the first 24 hrs since app installation, whether that user will churn or return thereafter, then you will want to use behavioral data from the first 24 hrs in your training data. Later on, we can also extract some extra time-related features from user_first_engagement, such as the month or day of the first engagement.
Google Analytics automatically collects certain events that you can use to analyze behavior. In addition, there are certain recommended events for games.
As a first step, you can explore all the unique events that exist in this dataset, based on event_name
Step11: For this notebook, to predict whether a user will churn or return, you can start by counting the number of times a user engages in the following event types
Step12: Note that in addition to frequency of performing an action, you can also include other behavioral features in this step such as the total amount of in-game currency they spent, or if they reached certain app-specific milestones that may be more relevant to your app (e.g., gained a certain threshold amount of XP or leveled up at least 5 times). This is an opportunity for you to extend this notebook to suit your needs.
Step 4
Step13: Training the propensity model with BigQuery ML
In this section, using the training data you prepared, you will now train machine learning models in SQL using BigQuery ML. The remainder of the notebook will only use logistic regression, but you can also follow the optional code below to train other model types.
Choosing the model
Step14: Train an XGBoost model (optional)
The following code trains an XGBoost model. This may take several minutes to train.
For more information on the default hyperparameters used, you can read the documentation
Step15: Train a deep neural network (DNN) model (optional)
The following code trains a deep neural network. This may take several minutes to train.
For more information on the default hyperparameters used, you can read the documentation
Step16: Train an AutoML Tables model (optional)
AutoML Tables enables you to automatically build state-of-the-art machine learning models on structured data at massively increased speed and scale. AutoML Tables automatically searches through Google’s model zoo for structured data to find the best model for your needs, ranging from linear/logistic regression models for simpler datasets to advanced deep, ensemble, and architecture-search methods for larger, more complex ones.
You can train an AutoML model directly with BigQuery ML, as in the code below.
Note that the BUDGET_HOURS parameter is for AutoML Tables training, specified in hours. The default value is 1.0 hour and must be between 1.0 and 72.0. The total query processing time can be greater than the budgeted hours specified in the query.
Note
Step17: Model Evaluation
To evaluate the model, you can run ML.EVALUATE on a model that has finished training to inspect some of the metrics.
The metrics are based on the test sample data that was automatically split during model creation (documentation).
Step18: ML.EVALUATE generates the precision, recall, accuracy and f1_score using the default classification threshold of 0.5, which can be modified by using the optional THRESHOLD parameter.
Generally speaking, you can use the log_loss and roc_auc metrics to compare model performance.
The log_loss ranges from 0 upward, and the closer the log_loss is to zero, the closer the predicted labels were to the actual labels.
The roc_auc ranges between 0 and 1.0, and the closer the roc_auc is to 1.0, the better the model is at distinguishing between the classes.
For more information on these metrics, you can read through the definitions on precision and recall, accuracy, f1-score, log_loss and roc_auc.
Confusion matrix
Step19: ROC Curve
You can plot the AUC-ROC curve by using ML.ROC_CURVE to return the metrics for different threshold values for the model (documentation).
Step20: Plot the AUC-ROC curve
Step21: Model prediction
You can run ML.PREDICT to make predictions on the propensity to churn. The following code returns all the information from ML.PREDICT.
Step22: For propensity modeling, the most important output is the probability of a behavior occurring. The following query returns the probability that the user will churn after the first 24 hours. The higher the probability and the closer it is to 1, the more likely the user is predicted to churn; the closer it is to 0, the more likely the user is predicted to return.
Step23: Exporting the predictions out of BigQuery
Reading the predictions directly from BigQuery
With the predictions from ML.PREDICT, you can export the data into a Pandas dataframe using the BigQuery Storage API (see documentation and code samples). You can also use other BigQuery client libraries.
Alternatively you can also export directly into pandas in a notebook using the %%bigquery <variable name> as in
Step24: Export predictions table to Google Cloud Storage
There are several ways to export the predictions table to Google Cloud Storage (GCS), so that you can use them in a separate service. Perhaps the easiest way is to export directly to GCS using SQL (documentation). | Python Code:
!pip install google-cloud-bigquery
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: <table align="left">
<td>
<a href="https://console.cloud.google.com/ai-platform/notebooks/deploy-notebook?name=Churn%20prediction%20for%20game%20developers%20using%20Google%20Analytics%204%20%28GA4%29%20and%20BigQuery%20ML%20Notebook&download_url=https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fanalytics-componentized-patterns%2Fmaster%2Fgaming%2Fpropensity-model%2Fbqml%2Fbqml_ga4_gaming_propensity_to_churn.ipynb">
<img src="https://cloud.google.com/images/products/ai/ai-solutions-icon.svg" alt="AI Platform Notebooks">Run on AI Platform Notebooks</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/analytics-componentized-patterns/blob/master/gaming/propensity-model/bqml/bqml_ga4_gaming_propensity_to_churn.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
Overview
This notebook shows you how you can train, evaluate, and deploy a propensity model in BigQuery ML to predict user retention on a mobile game, based on app measurement data from Google Analytics 4.
Propensity modeling in the mobile gaming industry
According to a 2019 study on 100K mobile games by the Mobile Gaming Industry Analysis, most mobile games only see a 25% retention rate for users after the first 24 hours, and any game "below 30% retention generally needs improvement". In light of this, using machine learning -- to identify the propensity that a user will churn after day 1 -- can allow app developers to incentivize users at higher risk of churning to return.
To predict the propensity (a.k.a. likelihood) that a user will return vs churn, you can use classification algorithms, like logistic regression, XGBoost, neural networks, or AutoML Tables, all of which are available with BigQuery ML.
Propensity modeling in BigQuery ML
With BigQuery ML, you can train, evaluate and deploy your models directly within BigQuery using SQL, which saves you the time of manually configuring ML infrastructure. You can train and deploy ML models directly where the data is already stored, which also helps to avoid potential issues around data governance.
Using classification models that you train and deploy in BigQuery ML, you can predict propensity from the model output: each prediction comes with a probability score between 0 and 1.0 that indicates how likely the predicted label is, based on the training data.
Using the probability (propensity) scores, you can then, for example, target users who may not return on their own, but could potentially return if they are provided with an incentive or notification.
Not just churn -- propensity modeling for any behavior
Propensity modeling is not limited to predicting churn. In fact, you can calculate a propensity score for any behavior you may want to predict. For example, you may want to predict the likelihood a user will spend money on in-app purchases. Or, perhaps you can predict the likelihood of a user performing "stickier" behaviors such as adding and playing with friends, which could lead to longer-term retention and organic user growth. Whichever the case, you can easily modify this notebook to suit your needs, as the overall workflow will still be the same.
Scope of this notebook
Dataset
This notebook uses this public BigQuery dataset, which contains raw event data from a real mobile gaming app called Flood It! (Android app, iOS app). The data schema originates from Google Analytics for Firebase, but is the same schema as Google Analytics 4; this notebook applies to use cases that use either Google Analytics for Firebase or Google Analytics 4 data.
Google Analytics 4 (GA4) uses an event-based measurement model. Events provide insight on what is happening in an app or on a website, such as user actions, system events, or errors. Every row in the dataset is an event, with various characteristics relevant to that event stored in a nested format within the row. While Google Analytics logs many types of events already by default, developers can also customize the types of events they also wish to log.
Note that as you cannot simply use the raw event data to train a machine learning model, in this notebook, you will also learn the important steps of how to pre-process the raw data into an appropriate format to use as training data for classification models.
Using your own GA4 data?
If you are already using a Google Analytics 4 property, follow [this guide](https://support.google.com/analytics/answer/9823238) to learn how to export your GA4 data to BigQuery. Once the GA4 data is in BigQuery, there will be two tables:
events_
events_intraday_
For this notebook, you can replace the table in the FROM clause in SQL queries with your events_ table that is updated daily. The events_intraday_ table contains streaming data for the current day.
Note that if you use your own GA4 data, you may need to slightly modify some of the scripts in this notebook to predict a different output behavior or the types events in the training data that are specific to your use case.
Using data from other non-Google Analytics data collection tools?
While this notebook provides code based on a Google Analytics dataset, you can also use your own dataset from other non-Google Analytics data collection tools. The overall concepts and process of propensity modeling will be the same, but you may need to customize the code in order to prepare your dataset into the training data format described in this notebook.
Objective and Problem Statement
The goal of this notebook is to provide an end-to-end solution for propensity modeling to predict user churn on GA4 data using BigQuery ML. Using the "Flood It!" dataset, based on a user's activity within the first 24 hrs of app installation, you will try various classification models to predict the propensity to churn (1) or not churn (0).
By the end of this notebook, you will know how to:
* Explore the export of Google Analytics 4 data on BigQuery
* Prepare the training data using demographic, behavioral data, and the label (churn/not-churn)
* Train classification models using BigQuery ML
* Evaluate classification models using BigQuery ML
* Make predictions on which users will churn using BigQuery ML
* Activate on model predictions
Costs
There is no cost associated with using the free version of Google Analytics and using the BigQuery Export feature. This tutorial uses billable components of Google Cloud Platform (GCP):
BigQuery
BigQuery ML
Learn about BigQuery pricing and BigQuery ML pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage.
Setup
PIP Install Packages and dependencies
End of explanation
PROJECT_ID = "YOUR-PROJECT-ID" #replace with your project id
REGION = 'US'
Explanation: Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the AI Platform APIs and Compute Engine APIs.
Enter your project ID and region in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
End of explanation
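Note that the cell above only sets Python variables. If you also want the gcloud command-line tool to point at the same project (optional, and assuming the Cloud SDK is installed in your notebook environment), you can additionally run:
# Optional: point the gcloud CLI at the same project (assumes the Cloud SDK is installed).
!gcloud config set project $PROJECT_ID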
from google.cloud import bigquery
import pandas as pd
pd.set_option('display.float_format', lambda x: '%.3f' % x)
Explanation: Import libraries and define constants
End of explanation
DATASET_NAME = "bqmlga4"
!bq mk --location=$REGION --dataset $PROJECT_ID:$DATASET_NAME
Explanation: Create a BigQuery dataset
In this notebook, you will need to create a dataset in your project called bqmlga4. To create it, run the following cell:
End of explanation
%%bigquery --project $PROJECT_ID
SELECT
*
FROM
`firebase-public-project.analytics_153293282.events_*`
TABLESAMPLE SYSTEM (1 PERCENT)
Explanation: The dataset
Using the sample gaming event data from Flood it!
The sample dataset contains raw event data, as shown in the next cell:
Note: Jupyter runs cells starting with %%bigquery as SQL queries
End of explanation
%%bigquery --project $PROJECT_ID
SELECT
COUNT(DISTINCT user_pseudo_id) as count_distinct_users,
COUNT(event_timestamp) as count_events
FROM
`firebase-public-project.analytics_153293282.events_*`
Explanation: It may be helpful to take a look at the overall schema used in Google Analytics 4. As mentioned earlier, Google Analytics 4 uses an event based measurement model and each row in this dataset is an event. Click here to view the complete schema and details about each column. As you can see above, certain columns are nested records and contain detailed information:
app_info
device
ecommerce
event_params
geo
traffic_source
user_properties
items*
web_info*
* present by default in GA4 datasets
As we can see below, there are 15K users and 5.7M events in this dataset:
End of explanation
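Because columns such as event_params are repeated records, you need UNNEST to query inside them. The following optional query (an illustration only, not required for the rest of the notebook) lists the parameter keys attached to one of the event types used later:
%%bigquery --project $PROJECT_ID
-- Optional illustration: event_params is a repeated record, so UNNEST it to inspect parameter keys.
SELECT
event_name,
params.key AS param_key,
COUNT(*) AS occurrences
FROM
`firebase-public-project.analytics_153293282.events_*`,
UNNEST(event_params) AS params
WHERE
event_name = 'level_complete_quickplay'
GROUP BY 1, 2
ORDER BY occurrences DESC
LIMIT 10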
%%bigquery --project $PROJECT_ID
CREATE OR REPLACE VIEW bqmlga4.returningusers AS (
WITH firstlasttouch AS (
SELECT
user_pseudo_id,
MIN(event_timestamp) AS user_first_engagement,
MAX(event_timestamp) AS user_last_engagement
FROM
`firebase-public-project.analytics_153293282.events_*`
WHERE event_name="user_engagement"
GROUP BY
user_pseudo_id
)
SELECT
user_pseudo_id,
user_first_engagement,
user_last_engagement,
EXTRACT(MONTH from TIMESTAMP_MICROS(user_first_engagement)) as month,
EXTRACT(DAYOFYEAR from TIMESTAMP_MICROS(user_first_engagement)) as julianday,
EXTRACT(DAYOFWEEK from TIMESTAMP_MICROS(user_first_engagement)) as dayofweek,
#add 24 hr to user's first touch
(user_first_engagement + 86400000000) AS ts_24hr_after_first_engagement,
#churned = 1 if last_touch within 24 hr of app installation, else 0
IF (user_last_engagement < (user_first_engagement + 86400000000),
1,
0 ) AS churned,
#bounced = 1 if last_touch within 10 min, else 0
IF (user_last_engagement <= (user_first_engagement + 600000000),
1,
0 ) AS bounced,
FROM
firstlasttouch
GROUP BY
1,2,3
);
SELECT
*
FROM
bqmlga4.returningusers
LIMIT 100;
Explanation: Preparing the training data
You cannot simply use raw event data to train a machine learning model as it would not be in the right shape and format to use as training data. So in this section, you will learn how to pre-process the raw data into an appropriate format to use as training data for classification models.
To predict which user is going to churn or return, the ideal training data format for classification should look like the following:
|User ID|User demographic data|User behavioral data|Churned|
|-|-|-|-|
|User1|(e.g., country, device_type)|(e.g., # of times they did something within a time period)|1
|User2|(e.g., country, device_type)|(e.g., # of times they did something within a time period)|0
|User3|(e.g., country, device_type)|(e.g., # of times they did something within a time period)|1
Characteristics of the training data:
- each row is a separate unique user ID
- feature(s) for demographic data
- feature(s) for behavioral data
- the actual label that you want to train the model to predict (e.g., 1 = churned, 0 = returned)
You can train a model with only demographic data or behavioral data, but having a combination of both will likely help you create a more predictive model. For this reason, in this section, you will learn how to pre-process the raw data to follow this training data format.
The following sections will walk you through preparing the demographic data, behavioral data, and the label before joining them all together as the training data.
Identifying the label for each user (churned or returned)
Extracting demographic data for each user
Extracting behavioral data for each user
Combining the label, demographic and behavioral data together as training data
Step 1: Identifying the label for each user
The raw dataset doesn't have a feature that simply identifies users as "churned" or "returned", so in this section, you will need to create this label based on some of the existing columns.
There are many ways to define user churn, but for the purposes of this notebook, you will predict 1-day churn as users who do not come back and use the app again after 24 hr of the user's first engagement.
In other words, after 24 hr of a user's first engagement with the app:
- if the user shows no event data thereafter, the user is considered churned.
- if the user does have at least one event datapoint thereafter, then the user is considered returned
You may also want to remove users who were unlikely to have ever returned anyway after spending just a few minutes with the app, which is sometimes referred to as "bouncing". For example, we can say we want to build our model only on users who spent at least 10 minutes with the app (users who didn't bounce).
So your updated definition of a churned user for this notebook is:
"any user who spent at least 10 minutes on the app, but after 24 hour from when they first engaged with the app, never used the app again"
In SQL, since the raw data contains all of the events for every user, from their first touch (app installation) to their last touch, you can use this information to create two columns: churned and bounced.
Take a look at the following SQL query and the results:
End of explanation
%%bigquery --project $PROJECT_ID
SELECT
bounced,
churned,
COUNT(churned) as count_users
FROM
bqmlga4.returningusers
GROUP BY 1,2
ORDER BY bounced
Explanation: For the churned column, churned=0 if the user performs an action after 24 hours since their first touch, otherwise if their last action was only within the first 24 hours, then churned=1.
For the bounced column, bounced=1 if the user's last action was within the first ten minutes since their first touch with the app, otherwise bounced=0. We can use this column to filter our training data later on, by conditionally querying for users where bounced = 0.
You might wonder how many of these 15k users bounced and returned? You can run the following query to check:
End of explanation
%%bigquery --project $PROJECT_ID
SELECT
COUNTIF(churned=1)/COUNT(churned) as churn_rate
FROM
bqmlga4.returningusers
WHERE bounced = 0
Explanation: For the training data, you will only end up using data where bounced = 0. Based on the 15k users, you can see that 5,557 (\~41%) users bounced within the first ten minutes of their first engagement with the app, but of the remaining 8,031 users, 1,883 users (\~23%) churned after 24 hours.
End of explanation
%%bigquery --project $PROJECT_ID
CREATE OR REPLACE VIEW bqmlga4.user_demographics AS (
WITH first_values AS (
SELECT
user_pseudo_id,
geo.country as country,
device.operating_system as operating_system,
device.language as language,
ROW_NUMBER() OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC) AS row_num #earliest event first, so row_num = 1 is the user's first engagement
FROM `firebase-public-project.analytics_153293282.events_*`
WHERE event_name="user_engagement"
)
SELECT * EXCEPT (row_num)
FROM first_values
WHERE row_num = 1
);
SELECT
*
FROM
bqmlga4.user_demographics
LIMIT 10
Explanation: Step 2. Extracting demographic data for each user
This section is focused on extracting the demographic information for each user. Different demographic information about the user is available in the dataset already, including app_info, device, ecommerce, event_params, geo. Demographic features can help the model predict whether users on certain devices or countries are more likely to churn.
For this notebook, you can start just with geo.country, device.operating_system, and device.language. If you are using your own dataset and have joinable first-party data, this section is a good opportunity to add any additional attributes for each user that may not be readily available in Google Analytics 4.
Note that a user's demographics may occasionally change (e.g. moving from one country to another). For simplicity, you will just use the demographic information that Google Analytics 4 provides when the user first engaged with the app as indicated by MIN(event_timestamp). This enables every unique user to be represented by a single row.
End of explanation
%%bigquery --project $PROJECT_ID
SELECT
event_name,
COUNT(event_name) as event_count
FROM
`firebase-public-project.analytics_153293282.events_*`
GROUP BY 1
ORDER BY
event_count DESC
Explanation: Step 3. Extracting behavioral data for each user
Behavioral data in the raw event data spans across multiple events -- and thus rows -- per user. The goal of this section is to aggregate and extract behavioral data for each user, resulting in one row of behavioral data per unique user.
But what kind of behavioral data will you need to prepare? Since the end goal of this notebook is to predict, based on a user's activity within the first 24 hrs since app installation, whether that user will churn or return thereafter, then you will want to use behavioral data from the first 24 hrs in your training data. Later on, we can also extract some extra time-related features from user_first_engagement, such as the month or day of the first engagement.
Google Analytics automatically collects certain events that you can use to analyze behavior. In addition, there are certain recommended events for games.
As a first step, you can explore all the unique events that exist in this dataset, based on event_name:
End of explanation
%%bigquery --project $PROJECT_ID
CREATE OR REPLACE VIEW bqmlga4.user_aggregate_behavior AS (
WITH
events_first24hr AS (
#select user data only from first 24 hr of using the app
SELECT
e.*
FROM
`firebase-public-project.analytics_153293282.events_*` e
JOIN
bqmlga4.returningusers r
ON
e.user_pseudo_id = r.user_pseudo_id
WHERE
e.event_timestamp <= r.ts_24hr_after_first_engagement
)
SELECT
user_pseudo_id,
SUM(IF(event_name = 'user_engagement', 1, 0)) AS cnt_user_engagement,
SUM(IF(event_name = 'level_start_quickplay', 1, 0)) AS cnt_level_start_quickplay,
SUM(IF(event_name = 'level_end_quickplay', 1, 0)) AS cnt_level_end_quickplay,
SUM(IF(event_name = 'level_complete_quickplay', 1, 0)) AS cnt_level_complete_quickplay,
SUM(IF(event_name = 'level_reset_quickplay', 1, 0)) AS cnt_level_reset_quickplay,
SUM(IF(event_name = 'post_score', 1, 0)) AS cnt_post_score,
SUM(IF(event_name = 'spend_virtual_currency', 1, 0)) AS cnt_spend_virtual_currency,
SUM(IF(event_name = 'ad_reward', 1, 0)) AS cnt_ad_reward,
SUM(IF(event_name = 'challenge_a_friend', 1, 0)) AS cnt_challenge_a_friend,
SUM(IF(event_name = 'completed_5_levels', 1, 0)) AS cnt_completed_5_levels,
SUM(IF(event_name = 'use_extra_steps', 1, 0)) AS cnt_use_extra_steps,
FROM
events_first24hr
GROUP BY
1
);
SELECT
*
FROM
bqmlga4.user_aggregate_behavior
LIMIT 10
Explanation: For this notebook, to predict whether a user will churn or return, you can start by counting the number of times a user engages in the following event types:
user_engagement
level_start_quickplay
level_end_quickplay
level_complete_quickplay
level_reset_quickplay
post_score
spend_virtual_currency
ad_reward
challenge_a_friend
completed_5_levels
use_extra_steps
In SQL, you can aggregate the behavioral data by calculating the total number of times when each of the above event_names occurred in the data set per user.
If you are using your own dataset, you may have different event types that you can aggregate and extract. Your app may be sending very different event_names to Google Analytics so be sure to use events most suitable to your scenario.
End of explanation
%%bigquery --project $PROJECT_ID
CREATE OR REPLACE VIEW bqmlga4.train AS (
SELECT
dem.*,
IFNULL(beh.cnt_user_engagement, 0) AS cnt_user_engagement,
IFNULL(beh.cnt_level_start_quickplay, 0) AS cnt_level_start_quickplay,
IFNULL(beh.cnt_level_end_quickplay, 0) AS cnt_level_end_quickplay,
IFNULL(beh.cnt_level_complete_quickplay, 0) AS cnt_level_complete_quickplay,
IFNULL(beh.cnt_level_reset_quickplay, 0) AS cnt_level_reset_quickplay,
IFNULL(beh.cnt_post_score, 0) AS cnt_post_score,
IFNULL(beh.cnt_spend_virtual_currency, 0) AS cnt_spend_virtual_currency,
IFNULL(beh.cnt_ad_reward, 0) AS cnt_ad_reward,
IFNULL(beh.cnt_challenge_a_friend, 0) AS cnt_challenge_a_friend,
IFNULL(beh.cnt_completed_5_levels, 0) AS cnt_completed_5_levels,
IFNULL(beh.cnt_use_extra_steps, 0) AS cnt_use_extra_steps,
ret.user_first_engagement,
ret.month,
ret.julianday,
ret.dayofweek,
ret.churned
FROM
bqmlga4.returningusers ret
LEFT OUTER JOIN
bqmlga4.user_demographics dem
ON
ret.user_pseudo_id = dem.user_pseudo_id
LEFT OUTER JOIN
bqmlga4.user_aggregate_behavior beh
ON
ret.user_pseudo_id = beh.user_pseudo_id
WHERE ret.bounced = 0
);
SELECT
*
FROM
bqmlga4.train
LIMIT 10
Explanation: Note that in addition to frequency of performing an action, you can also include other behavioral features in this step such as the total amount of in-game currency they spent, or if they reached certain app-specific milestones that may be more relevant to your app (e.g., gained a certain threshold amount of XP or leveled up at least 5 times). This is an opportunity for you to extend this notebook to suit your needs.
Step 4: Combining the label, demographic and behavioral data together as training data
In this section, you can now combine these three intermediary views (label, demographic, and behavioral data) into the final training data. Here you can also specify bounced = 0, in order to limit the training data only to users who did not "bounce" within the first 10 minutes of using the app.
End of explanation
%%bigquery --project $PROJECT_ID
CREATE OR REPLACE MODEL bqmlga4.churn_logreg
OPTIONS(
MODEL_TYPE="LOGISTIC_REG",
INPUT_LABEL_COLS=["churned"]
) AS
SELECT
* EXCEPT(user_pseudo_id) #exclude the user ID so it is not used as a model feature, matching the other models below
FROM
bqmlga4.train
Explanation: Training the propensity model with BigQuery ML
In this section, using the training data you prepared, you will now train machine learning models in SQL using BigQuery ML. The remainder of the notebook will only use logistic regression, but you can also follow the optional code below to train other model types.
Choosing the model:
As this is a binary classification task, for simplicity, you can start with logistic regression, but you can also train other classification models like XGBoost, deep neural networks and AutoML Tables in BigQuery ML to calculate propensity scores. Each of these models will output a probability score (propensity) between 0 and 1.0 of how likely the model prediction is based on the training data. In this notebook, the model predicts whether the user will churn (1) or return (0) after 24 hours of the user's first engagement with the app.
|Model| model_type| Advantages | Disadvantages|
|-|-|-|-|
|Logistic Regression| LOGISTIC_REG (documentation)| Fast to train vs. other model types | May not have the highest model performance |
|XGBoost| BOOSTED_TREE_CLASSIFIER (documentation)| Higher model performance. Can inspect feature importance. | Slower to train vs. LOGISTIC_REG.|
|Deep Neural Networks| DNN_CLASSIFIER (documentation)| Higher model performance | Slower to train vs. LOGISTIC_REG.|
|AutoML Tables| AUTOML_CLASSIFIER (documentation)| Very high model performance | May take at least a few hours to train, not easy to explain how the model works. |
There's no need to split your data into train/test:
- When you run the CREATE MODEL statement, BigQuery ML will automatically split your data into training and test, so you can evaluate your model immediately after training (see the documentation for more information or how to specify the split manually).
Hyperparameter tuning:
Note that you can also tune hyperparameters for each model, although it is beyond the scope of this notebook. See the BigQuery ML documentation for CREATE MODEL for further details on the available hyperparameters.
TRANSFORM():
It may also be useful to extract features from datetimes/timestamps as one simple example of additional feature preprocessing before training. For example, we can extract the month, day of year, and day of week from user_first_engagement. TRANSFORM() allows the model to remember the extracted values so you won't need to extract them again when making predictions using the model later on.
Train a logistic regression model
The following code trains a logistic regression model. This should only take a minute or two to train.
For more information on the default hyperparameters used, you can read the documentation:
CREATE MODEL statement
End of explanation
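As an optional variation (shown commented out like the other optional models, and only as a sketch of the TRANSFORM clause mentioned above; it is not used later in this notebook, and the model name churn_logreg_transform is just a placeholder), the time-based features could be derived inside the model itself so they do not have to be recomputed at prediction time:
# %%bigquery --project $PROJECT_ID
# CREATE OR REPLACE MODEL bqmlga4.churn_logreg_transform
# TRANSFORM(
#   EXTRACT(MONTH FROM TIMESTAMP_MICROS(user_first_engagement)) AS month,
#   EXTRACT(DAYOFYEAR FROM TIMESTAMP_MICROS(user_first_engagement)) AS julianday,
#   EXTRACT(DAYOFWEEK FROM TIMESTAMP_MICROS(user_first_engagement)) AS dayofweek,
#   * EXCEPT(user_first_engagement, month, julianday, dayofweek)
# )
# OPTIONS(
#   MODEL_TYPE="LOGISTIC_REG",
#   INPUT_LABEL_COLS=["churned"]
# ) AS
# SELECT
#   * EXCEPT(user_pseudo_id)
# FROM
#   bqmlga4.train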
# %%bigquery --project $PROJECT_ID
# CREATE OR REPLACE MODEL bqmlga4.churn_xgb
# OPTIONS(
# MODEL_TYPE="BOOSTED_TREE_CLASSIFIER",
# INPUT_LABEL_COLS=["churned"]
# ) AS
# SELECT
# * EXCEPT(user_pseudo_id)
# FROM
# bqmlga4.train
Explanation: Train an XGBoost model (optional)
The following code trains an XGBoost model. This may take several minutes to train.
For more information on the default hyperparameters used, you can read the documentation:
CREATE MODEL statement for Boosted Tree models using XGBoost
End of explanation
# %%bigquery --project $PROJECT_ID
# CREATE OR REPLACE MODEL bqmlga4.churn_dnn
# OPTIONS(
# MODEL_TYPE="DNN_CLASSIFIER",
# INPUT_LABEL_COLS=["churned"]
# ) AS
# SELECT
# * EXCEPT(user_pseudo_id)
# FROM
# bqmlga4.train
Explanation: Train a deep neural network (DNN) model (optional)
The following code trains a deep neural network. This may take several minutes to train.
For more information on the default hyperparameters used, you can read the documentation:
CREATE MODEL statement for Deep Neural Network (DNN) models
End of explanation
# %%bigquery --project $PROJECT_ID
# CREATE OR REPLACE MODEL bqmlga4.churn_automl
# OPTIONS(
# MODEL_TYPE="AUTOML_CLASSIFIER",
# INPUT_LABEL_COLS=["churned"],
# BUDGET_HOURS=1.0
# ) AS
# SELECT
# * EXCEPT(user_pseudo_id)
# FROM
# bqmlga4.train
Explanation: Train an AutoML Tables model (optional)
AutoML Tables enables you to automatically build state-of-the-art machine learning models on structured data at massively increased speed and scale. AutoML Tables automatically searches through Google’s model zoo for structured data to find the best model for your needs, ranging from linear/logistic regression models for simpler datasets to advanced deep, ensemble, and architecture-search methods for larger, more complex ones.
You can train an AutoML model directly with BigQuery ML, as in the code below.
Note that the BUDGET_HOURS parameter is for AutoML Tables training, specified in hours. The default value is 1.0 hour and must be between 1.0 and 72.0. The total query processing time can be greater than the budgeted hours specified in the query.
Note: This may take a few hours to train.
End of explanation
%%bigquery --project $PROJECT_ID
SELECT
*
FROM
ML.EVALUATE(MODEL bqmlga4.churn_logreg)
Explanation: Model Evaluation
To evaluate the model, you can run ML.EVALUATE on a model that has finished training to inspect some of the metrics.
The metrics are based on the test sample data that was automatically split during model creation (documentation).
End of explanation
%%bigquery --project $PROJECT_ID
SELECT
expected_label,
_0 AS predicted_0,
_1 AS predicted_1
FROM
ML.CONFUSION_MATRIX(MODEL bqmlga4.churn_logreg)
Explanation: ML.EVALUATE generates the precision, recall, accuracy and f1_score using the default classification threshold of 0.5, which can be modified by using the optional THRESHOLD parameter.
Generally speaking, you can use the log_loss and roc_auc metrics to compare model performance.
The log_loss ranges from 0 upward, and the closer the log_loss is to zero, the closer the predicted labels were to the actual labels.
The roc_auc ranges between 0 and 1.0, and the closer the roc_auc is to 1.0, the better the model is at distinguishing between the classes.
For more information on these metrics, you can read through the definitions on precision and recall, accuracy, f1-score, log_loss and roc_auc.
Confusion matrix: predicted vs actual values
In addition to model evaluation metrics, you may also want to use a confusion matrix to inspect how well the model predicted the labels, compared to the actual labels.
With the rows indicating the actual labels, and the columns as the predicted labels, the resulting format for ML.CONFUSION_MATRIX for binary classification looks like:
| | Predicted_0 | Predicted_1|
|-|-|-|
|Actual_0| True Negatives | False Positives|
|Actual_1| False Negatives | True Positives|
For more information on confusion matrices, you can read through a detailed explanation here.
End of explanation
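As an optional illustration of the THRESHOLD parameter mentioned above (this query is not required for the rest of the notebook, and the 0.6 value is arbitrary), you can re-evaluate the model at a non-default classification threshold by passing input data together with a threshold STRUCT:
%%bigquery --project $PROJECT_ID
SELECT
*
FROM
ML.EVALUATE(MODEL bqmlga4.churn_logreg,
(SELECT * FROM bqmlga4.train), #can be replaced with a proper test dataset
STRUCT(0.6 AS threshold))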
%%bigquery df_roc --project $PROJECT_ID
SELECT * FROM ML.ROC_CURVE(MODEL bqmlga4.churn_logreg)
df_roc
Explanation: ROC Curve
You can plot the AUC-ROC curve by using ML.ROC_CURVE to return the metrics for different threshold values for the model (documentation).
End of explanation
df_roc.plot(x="false_positive_rate", y="recall", title="AUC-ROC curve")
Explanation: Plot the AUC-ROC curve
End of explanation
%%bigquery --project $PROJECT_ID
SELECT
*
FROM
ML.PREDICT(MODEL bqmlga4.churn_logreg,
(SELECT * FROM bqmlga4.train)) #can be replaced with a test dataset
Explanation: Model prediction
You can run ML.PREDICT to make predictions on the propensity to churn. The following code returns all the information from ML.PREDICT.
End of explanation
%%bigquery --project $PROJECT_ID
SELECT
user_pseudo_id,
churned,
predicted_churned,
predicted_churned_probs[OFFSET(0)].prob as probability_churned
FROM
ML.PREDICT(MODEL bqmlga4.churn_logreg,
(SELECT * FROM bqmlga4.train)) #can be replaced with a proper test dataset
Explanation: For propensity modeling, the most important output is the probability of a behavior occurring. The following query returns the probability that the user will churn after the first 24 hours. The higher the probability and the closer it is to 1, the more likely the user is predicted to churn; the closer it is to 0, the more likely the user is predicted to return.
End of explanation
%%bigquery df --project $PROJECT_ID
SELECT
user_pseudo_id,
churned,
predicted_churned,
predicted_churned_probs[OFFSET(0)].prob as probability_churned
FROM
ML.PREDICT(MODEL bqmlga4.churn_logreg,
(SELECT * FROM bqmlga4.train)) #can be replaced with a proper test dataset
df.head()
Explanation: Exporting the predictions out of BigQuery
Reading the predictions directly from BigQuery
With the predictions from ML.PREDICT, you can export the data into a Pandas dataframe using the BigQuery Storage API (see documentation and code samples). You can also use other BigQuery client libraries.
Alternatively you can also export directly into pandas in a notebook using the %%bigquery <variable name> as in:
End of explanation
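If you prefer the client-library route mentioned above over the %%bigquery magic, a minimal sketch using the google-cloud-bigquery client imported earlier (and the same model and view names) could look like this:
# Minimal sketch: read the predictions with the BigQuery Python client instead of the %%bigquery magic.
client = bigquery.Client(project=PROJECT_ID)
sql = """
SELECT
  user_pseudo_id,
  churned,
  predicted_churned,
  predicted_churned_probs[OFFSET(0)].prob AS probability_churned
FROM
  ML.PREDICT(MODEL bqmlga4.churn_logreg,
    (SELECT * FROM bqmlga4.train))
"""
predictions_df = client.query(sql).to_dataframe()
predictions_df.head()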
%%bigquery --project $PROJECT_ID
EXPORT DATA OPTIONS (
uri="gs://mybucket/myfile/churnpredictions_*.csv",
format="CSV"
) AS
SELECT
user_pseudo_id,
churned,
predicted_churned,
predicted_churned_probs[OFFSET(0)].prob as probability_churned
FROM
ML.PREDICT(MODEL bqmlga4.churn_logreg,
(SELECT * FROM bqmlga4.train)) #can be replaced with a proper test dataset
Explanation: Export predictions table to Google Cloud Storage
There are several ways to export the predictions table to Google Cloud Storage (GCS), so that you can use them in a separate service. Perhaps the easiest way is to export directly to GCS using SQL (documentation).
End of explanation |
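Another option (a sketch only; the bucket name below is a placeholder and bqmlga4.churn_predictions is a new table created here) is to first materialize the predictions into a BigQuery table and then extract that table to GCS with the Python client:
# Sketch: materialize the predictions into a table, then extract that table to GCS.
# "mybucket" is a placeholder bucket name.
client = bigquery.Client(project=PROJECT_ID)
materialize_sql = """
CREATE OR REPLACE TABLE bqmlga4.churn_predictions AS
SELECT
  user_pseudo_id,
  churned,
  predicted_churned,
  predicted_churned_probs[OFFSET(0)].prob AS probability_churned
FROM
  ML.PREDICT(MODEL bqmlga4.churn_logreg,
    (SELECT * FROM bqmlga4.train))
"""
client.query(materialize_sql).result()
extract_job = client.extract_table(
    f"{PROJECT_ID}.bqmlga4.churn_predictions",
    "gs://mybucket/myfile/churnpredictions_client.csv",
)
extract_job.result()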