Given the following text description, write Python code to implement the functionality described below step by step Description: Fixed $Y_i$ & Fixed $\alpha_{MLT}$ Analysis for the case that the proto-stellar (i.e., initial) helium mass fraction is fixed to a value that is linearly proportional to the heavy element mass fraction, $Z_i$. Specifically, we assume that \begin{equation} Y_i = Y_{\textrm{prim}} + \frac{\Delta Y}{\Delta Z}\left(Z_i - Z_{\textrm{prim}}\right), \end{equation} where $Y_{\textrm{prim}}$ and $Z_{\textrm{prim}}$ are the primordial helium and heavy element abundances immediately following big bang nucleosynthesis, $Z_i$ is the proto-stellar heavy element abundance, and $\Delta Y / \Delta Z$ is a helium enrichment factor. Here, we assume $Z_{\textrm{prim}} = 0$, $Y_{\textrm{prim}} = 0.2488$ (Peimbert et al. 2007), and $\Delta Y / \Delta Z = 1.76$. The latter was determined from the enrichment factor required during our solar model calibration and is consistent with more rigorous studies that suggest $\Delta Y / \Delta Z = 2 \pm 1$ (e.g., Casagrande et al. 2007). We shall return to address the validity of the helium enrichment factor later. Three different comparisons with models were created Step1: The pre-processed data files have observational quantities embedded within them. However, specific star names are (for the moment) kept hidden for proprietary reasons. This means we are storing a bit more data in memory, since each data file contains the observed quantities, but no matter. Anyway, we can look at how well the MCMC does at recovering particular values. Starting with distance, Step2: ~~The first two cases accurately reproduce the observed distances. However, the third case (no $T_{\textrm{eff}}$ constraint) does not. This may suggest that the latter MCMC runs did not fully converge.~~ Scratch that -- there was a typo in the code that led to reasonable-looking, but incorrect, results. Distances are well recovered. Moving now to metallicity, Step3: Metallicities are not so well reproduced without a strong metallicity prior (small uncertainty in the observed value). Even then, there is an apparent preference for some stars to settle at metallicities around [Fe/H] $= -0.15$ dex. There is also a slight systematic shift of the inferred metallicities toward higher values. This is not necessarily indicative of a complete failure of the models to reproduce the observed values, as we must also be concerned with the present-day metallicity, not the proto-stellar metallicity, which is what the models infer. Due to gravitational settling and various diffusion processes, we should expect the proto-stellar values to be marginally higher than the present-day metallicity, by upward of 0.05 dex. Unfortunately, the expected offset is temperature dependent, as it is a strong function of the depth of the convection zone and the temporal evolution of the surface convection zone depth. Fully convective stars, for example, are not expected to be affected by diffusive processes, as the timescale for mixing by the convection zone is orders of magnitude shorter. Nevertheless, as can be seen in the left-most panel, the models (if given the opportunity) will prefer significantly higher metallicity values, which should be seen as a potential issue among the models. Masses inferred from the three data sets may also be compared. Step4: In general, mass estimates are consistent among all three trials.
Masses inferred in the weak metallicity prior trial are systematically higher than those in the other two trials, but not by a large margin. Issues arise in the vicinity of $0.80 M_{\odot}$, with the strong metallicity prior and no $T_{\textrm{eff}}$ trials showing a propensity for stars to have masses around $0.77 M_{\odot}$. This propensity does not appear to exist under the weak metallicity prior. While the effective temperature is a derived quantity, trials adopting an effective temperature prior appear to yield more reliable results that are devoid of a pile-up in the $0.80 M_{\odot}$ vicinity. Masses above that threshold are most certainly expected, particularly in the case of Gl 559 B ($\alpha$ Cen B), which has an expected mass of approximately $0.92 \pm 0.02 M_{\odot}$. Unfortunately, none of the various trials were able to recover this mass within the $1\sigma$ limit. How well do the various trials recover the observed properties? Let's start with the two primary observables Step5: Now we can compare results from all three trials at once, with relative discrepancies normalized to the observational uncertainties. Step6: Results are rather consistent, at least in the morphology of the scatter. Beyond the 68% confidence interval region (inner ellipse), there is very little scatter into quadrants III and IV, with only a handful of points lying in quadrant I. Most of the scatter resides in quadrant II, which can be interpreted to mean that models tend to infer radii that are too small compared to observations, along with bolometric fluxes that are slightly too large. While the scatter along the bolometric flux axis reveals a general consistency with the observations, there is still a systematic offset toward the left side of the zero-point. One may interpret this as the model attempting to produce larger radii while maintaining agreement with the bolometric flux and effective temperature measurements. Can we discern any correlations of these errors among the parameters present in the data? Let's start with potential correlations with the observables themselves. Step7: Note that this does not include the trials where the effective temperature was neglected in the likelihood estimator. There is good agreement between the two samples, despite changing the metallicity prior. Step8: Computing the mean and median offsets of bolometric flux and angular diameter from the zero-points, we find Step9: Rephrasing these as fractional relative errors, Step10: Errors on stellar radii (given that we've recovered measured distances) are on the order of 2%, depending on the strength of the metallicity prior. This is consistent with mean offsets observed among low-mass stars in detached double-lined eclipsing binaries (Feiden & Chaboyer 2012; Spada et al. 2013). However, it's quite possible that the models are compensating for true modeling errors with older ages and higher metallicities. Let's look at the distribution of ages. Step11: There is, in fact, a significant number of stars that are predicted to have ages comparable to the age of the Universe. While we cannot rule out the possibility that some stars have older ages consistent with the thick disk or halo populations, it is unlikely that such a large fraction of local field stars (nearly 50%) have ages older than 12 Gyr. This, in part, reflects the difficulty of acquiring ages for low-mass stars, particularly those in the M dwarf regime, where the stellar properties are not expected to evolve significantly over the stellar lifetime.
A preference for older ages then only reflects the fact that some evolution does occur in the models, leading to almost negligibly larger radii that are only slightly preferred given the systematic offset observed above. While considering final distributions, here is the final mass distribution from the weak metallicity prior trial. Step12: Exploring Correlations Step13: There do not appear to be trends among model-observation discrepancies as a function of mass. Instead, models tend to systematically under-predict stellar radii and over-predict effective temperatures. Below $0.5 M_{\odot}$, there is increased scatter owing to four significant outliers. One of these, Gl 388 (AD Leo), is a candidate young star, so we should perhaps not be surprised by the failure of models to reproduce its properties. It stands out in our above analysis of how well models recovered observed bolometric fluxes: Gl 388 shows the largest disagreement when it comes to bolometric flux, adding further evidence that the system may very well be young. Step14: As with stellar mass, there is no immediate correlation between observed effective temperatures and model-inferred fundamental properties. However, it is clear that the largest errors appear at the coolest temperatures, with the presence of the same four outliers as above. We also again note that, even for the hottest stars, there appears to be a systematic offset of model errors toward hotter temperatures and smaller radii. Individual errors are not necessarily significant, often located within $1\sigma$ of the zero-point. However, the systematic nature of the errors betrays the existence of possible errors among K dwarfs. A problem thought to be exclusive to M dwarfs appears to extend throughout the K dwarf regime. Step15: Comparison with Mann et al. (2015)
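As an aside (not part of the original notebook), here is a minimal sketch of the helium-enrichment relation quoted in the description, using the stated values $Y_{\textrm{prim}} = 0.2488$, $Z_{\textrm{prim}} = 0$, and $\Delta Y / \Delta Z = 1.76$; the example heavy element abundance passed in at the end is an arbitrary, roughly solar choice.

# Illustrative sketch of the assumed helium-enrichment relation (not in the original notebook)
Y_PRIM = 0.2488   # primordial helium mass fraction (quoted value)
Z_PRIM = 0.0      # primordial heavy element mass fraction (assumed zero)
DY_DZ = 1.76      # helium enrichment factor from the solar model calibration

def initial_helium(z_i):
    # proto-stellar helium mass fraction for a given proto-stellar heavy element abundance
    return Y_PRIM + DY_DZ * (z_i - Z_PRIM)

print(initial_helium(0.018))  # example: roughly solar heavy element abundance (assumed value)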
Python Code: %matplotlib inline import numpy as np import matplotlib.pyplot as plt FeH_weak = np.genfromtxt('data/run09_mle_props.txt') # weak metallicity prior FeH_strong = np.genfromtxt('data/run10_mle_props.txt') # strong metallicity prior no_Teff = np.genfromtxt('data/run11_mle_props.txt') # no Teff observable in likelihood function Explanation: Fixed $Y_i$ & Fixed $\alpha_{MLT}$ Analysis for the case that the proto-stellar (i.e., initial) helium mass fraction is fixed to a value that is linearly proportional to the heavy element mass fraction, $Z_i$. Specifically, we assume that \begin{equation} Y_i = Y_{\textrm{prim}} + \frac{\Delta Y}{\Delta Z}\left(Z_i - Z_{\textrm{prim}}\right), \end{equation} where $Y_{\textrm{prim}}$ and $Z_{\textrm{prim}}$ are the primordial helium and heavy element abundances immediately following big bang nucleosynthesis, $Z_i$ is the proto-stellar heavy element abundance, and $\Delta Y / \Delta Z$ is a helium enrichment factor. Here, we assume $Z_{\textrm{prim}} = 0$, $Y_{\textrm{prim}} = 0.2488$ (Piembert et al. 2007), and $\Delta Y / \Delta Z = 1.76$. The latter was determined based on the enrichment factor required during our solar model calibration and is consistent with more rigorous studies that suggest $\Delta Y / \Delta Z = 2 \pm 1$ (e.g., Casagrande et al. 2007). We shall return address the validity of the helium enrichment factor later. Three different comparisons with models were created: one with a weak metallicity constraint for stars that don't have quoted observational uncertainties (assumed $1\sigma = \pm 0.20$ dex), one with a strong metallicity contraint ($1\sigma = 0.05$ dex), and a run where we consider only contraints on the bolometric flux and angular diameter, since the observed effective temperature is a derived quantity. First, we must load the data from each comparison (note, data is pre-processed). We'll start by loading data representing the most likely values from the MCMC runs, End of explanation fig, axes = plt.subplots(1, 3, figsize=(15, 5)) # set axis labels and plot one-to-one line for ax in axes: ax.set_xlabel('Observed Distance (pc)', fontsize=22.) ax.plot( np.arange(0.0, 20.0, 1.0), np.arange(0.0, 20.0, 1.0), '--', lw=2, color='#444444') axes[0].set_ylabel('Inferred Distance (pc)', fontsize=22.) # plot distance data axes[0].plot(1.0/FeH_weak[:, 20], FeH_weak[:, 4], 'o', color='#4682B4') axes[1].plot(1.0/FeH_strong[:, 20], FeH_strong[:, 4], 'o', color='#4682B4') axes[2].plot(1.0/no_Teff[:, 20], no_Teff[:, 4], 'o', color='#4682B4') Explanation: The pre-processed data files have observational quantities embedded within them. However, specific star names are (for the moment) kept hidden for proprietary reasons. It means we're storing a bit more data in memeory, since each data file contains observed quantities, but no matter. Anyway, we can look how well the MCMC does recovering particular values. Starting with distance, End of explanation fig, axes = plt.subplots(1, 3, figsize=(15, 5)) # set axis labels and plot one-to-one line for ax in axes: ax.set_xlabel('Observed [Fe/H] (dex)', fontsize=22.) ax.set_xlim((-0.5, 0.5)) ax.set_ylim((-0.5, 0.5)) ax.plot( np.arange(-0.5, 0.51, 0.5), np.arange(-0.5, 0.51, 0.5), '--', lw=2, color='#444444') axes[0].set_ylabel('Inferred [M/H] (dex)', fontsize=22.) 
# plot metallicity data axes[0].plot(FeH_weak[:, 30], FeH_weak[:, 1], 'o', color='#4682B4') axes[1].plot(FeH_strong[:, 30], FeH_strong[:, 1], 'o', color='#4682B4') axes[2].plot(no_Teff[:, 30], no_Teff[:, 1], 'o', color='#4682B4') Explanation: ~~The first two cases accurately reproduce the observed distances. However, the third case (no $T_{\textrm{eff}}$ constraint) does not. This may suggest that the latter MCMC runs did not fully converge.~~ Scratch that -- there was a typo in the code that lead to reasonable looking, but incorrect results. Distances are well recovered. Moving now to metallicity, End of explanation fig, axes = plt.subplots(1, 2, figsize=(10, 5)) # set axis labels and plot one-to-one line for ax in axes: ax.set_xlabel('Inferred Mass ($M_{\odot}$), weak prior', fontsize=18.) ax.set_xlim((0.0, 1.0)) ax.set_ylim((0.0, 1.0)) ax.plot( np.arange(0.0, 1.01, 0.5), np.arange(0.0, 1.01, 0.5), '--', lw=2, color='#444444') axes[0].set_ylabel('Inferred Mass ($M_{\odot}$), strong prior', fontsize=18.) axes[1].set_ylabel('Inferred Mass ($M_{\odot}$), no $T_{\\rm eff}$', fontsize=18.) # plot masses axes[0].plot(FeH_weak[:, 0], FeH_strong[:, 0], 'o', color='#4682B4') axes[1].plot(FeH_weak[:, 0], no_Teff[:, 0], 'o', color='#4682B4') Explanation: Metallicities are not so well reproduced without a strong metallicity prior (small uncertainty in the observed value). Even then, there is an apparent preferences for some stars to prefer metallicities around [Fe/H] $= -0.15$ dex. There is also a slight systematic shift of the inferred metallicities toward higher values. This is not necessarily indicative of a complete failure of the models to reproduce the observed values, as we must also be concerned with the present day metallicity, not the proto-stellar metallicity, which is inferred from the models. Due to gravitational settling and various diffusion processes, we should expect a marginally higher metallicity value among the proto-stellar values by upward of 0.05 dex. Unfortunately, the expected offset is temperature dependent, as it is a strong function of the depth of the convection zone and the temporal evolution of the surface convection zone depth. Fully convective stars, for example, are not expected to be affected by diffusive process as the timescale for mixing by the convection zone is orders of magnitude shorter. Nevertheless, as can be seen in the left-most panel, the models (if given the opportunity) will prefer significantly higher metallicity values, which should be seen as a potential issue among the models. Masses inferred from the three data sets may also be compared. End of explanation # define in terms of (obs - model)/sigma_observed # # angular diameters dTheta_sigma_weak = (FeH_weak[:, 18] - FeH_weak[:, 8])/FeH_weak[:, 19] dTheta_sigma_strong = (FeH_strong[:, 18] - FeH_strong[:, 8])/FeH_strong[:, 19] dTheta_sigma_noTeff = (no_Teff[:, 18] - no_Teff[:, 8])/no_Teff[:, 19] # bolometric fluxes (note: strange units for model properties) dFbol_sigma_weak = (FeH_weak[:, 22] - 10**(FeH_weak[:, 7] + 8.0))/FeH_weak[:, 23] dFbol_sigma_strong = (FeH_strong[:, 22] - 10**(FeH_strong[:, 7] + 8.0))/FeH_strong[:, 23] dFbol_sigma_noTeff = (no_Teff[:, 22] - 10**(no_Teff[:, 7] + 8.0))/no_Teff[:, 23] # effective temperatures dTeff_sigma_weak = (FeH_weak[:, 24] - 10**FeH_weak[:, 6])/FeH_weak[:, 25] dTeff_sigma_strong = (FeH_strong[:, 24] - 10**FeH_strong[:, 6])/FeH_strong[:, 25] Explanation: In general, masses estimates are consistent among all three trials. 
Masses inferred in the weak metallicity prior trial are systematically higher than those in the other two trials, but not by a large margin. Issues arise in the vicinity of $0.80 M_{\odot}$, with the strong metallicity prior and no $T_{\textrm{eff}}$ trials showing a propensity for stars to have masses around $0.77 M_{\odot}$. This propensity does not appear to exist under the weak metallicity prior. While the effective temperature is a derived quantity, results of trials adopting an effective temperature prior appear to derive more reliable results that are devoid of a pile up in the $0.80 M_{\odot}$ vicinity. Masses above that threshold are most certainly expected, particularly in the case of Gl 559 B ($\alpha$ Cen B), which has an expected mass of approximately $0.92±0.02 M_{\odot}$. Unfortunately, none of the various trials were able to recover this mass within the $1\sigma$ limit. How well do the various trials recover the observed properties? Let's start with the two primary observables: angular diameter and bolometric flux. These are inextricably coupled to the star's distance when comparing with stellar evolution models, which only provide information on the stellar radius and bolometric luminosity. Nevertheless, we saw above that distances were well recovered, permitting a reliable comparison of the more fundamental observables. First we'll define these new properties in new arrays, End of explanation from matplotlib.patches import Ellipse fig, ax = plt.subplots(1, 1, figsize=(8, 8)) # set axis properties ax.set_xlabel('$\\Delta F_{\\rm bol} / \\sigma$', fontsize=22.) ax.set_ylabel('$\\Delta \\Theta / \\sigma$', fontsize=22.) # plot 68% and 99% confidence intervals ells = [Ellipse(xy=(0.0, 0.0), width=2.*x, height=2.*x, angle=0.0, lw=3, fill=False, linestyle='dashed', edgecolor='#333333') for x in [1.0, 3.0]] for e in ells: ax.add_artist(e) # plot results of trials ax.plot(dFbol_sigma_weak, dTheta_sigma_weak, 'o', color='#4682B4', markersize=9.0) ax.plot(dFbol_sigma_strong, dTheta_sigma_strong, 's', color='#800000', markersize=9.0) ax.plot(dFbol_sigma_noTeff, dTheta_sigma_noTeff, '^', color='#444444', markersize=9.0) Explanation: Now we can compare results from all three trials at once, with relative discrepancies normalized to the observational uncertainties. End of explanation fig, axes = plt.subplots(2, 1, figsize=(10, 8)) # set axis properties axes[1].set_xlabel('Bolometric Flux (erg s$^{-1}$ cm$^{-2}$)', fontsize=22.) axes[0].set_ylabel('$\\Delta F_{\\rm bol} / \\sigma$', fontsize=22.) axes[1].set_ylabel('$\\Delta \\Theta / \\sigma$', fontsize=22.) # plot zero points for ax in axes: ax.semilogx([1.e-1, 1.e3], [0.0, 0.0], '--', lw=2, color='#333333') # plot relative errors axes[0].semilogx(FeH_strong[:, 22], dFbol_sigma_strong, 's', color='#800000', markersize=7.0) axes[1].semilogx(FeH_strong[:, 22], dTheta_sigma_strong, 's', color='#800000', markersize=7.0) axes[0].semilogx(FeH_weak[:, 22], dFbol_sigma_weak, 'o', color='#4682B4', markersize=9.0) axes[1].semilogx(FeH_weak[:, 22], dTheta_sigma_weak, 'o', color='#4682B4', markersize=9.0) Explanation: Results are rather consistent, at least in the morphology of the scatter. Beyond the 68% confidence interval region (inner ellipse), there is very little scatter into quadrants III and IV with only a handful of points lying in quadrant I. 
Most of the scatter resids in quadrant II, which can be interpretted to mean that models tend to infer radii that are too small compared to observations with bolometric fluxes that are slightly too large. While the scatter along the bolometric flux axis is reveals a general consistency with the observations, there is still a systematic offset toward the left side of the zero-point. One may interpret this as the model attempting to produce larger radii while maintaining agreement with the bolometric flux and effective temperature measurements. Can we discern any correlations of these errors among the parameter present in the data? Let's start with potential correations with the observables themselves. End of explanation fig, axes = plt.subplots(2, 1, figsize=(10, 8)) # set axis properties axes[1].set_xlabel('Angular Diameter (mas)', fontsize=22.) axes[0].set_ylabel('$\\Delta F_{\\rm bol} / \\sigma$', fontsize=22.) axes[1].set_ylabel('$\\Delta \\Theta / \\sigma$', fontsize=22.) # plot zero points for ax in axes: ax.semilogx([1.e-1, 1.e1], [0.0, 0.0], '--', lw=2, color='#333333') # plot relative errors axes[0].semilogx(FeH_strong[:, 18], dFbol_sigma_strong, 's', color='#800000', markersize=7.0) axes[1].semilogx(FeH_strong[:, 18], dTheta_sigma_strong, 's', color='#800000', markersize=7.0) axes[0].semilogx(FeH_weak[:, 18], dFbol_sigma_weak, 'o', color='#4682B4', markersize=9.0) axes[1].semilogx(FeH_weak[:, 18], dTheta_sigma_weak, 'o', color='#4682B4', markersize=9.0) Explanation: Note that this does not include the trials where the effective temperature was neglected in the likelihood estimator. There is good agreement between the two samples, despite changing the metallicity prior. End of explanation # compute mean of the offsets print 'Weak Metallicity Prior' print 'Mean Bolometric Flux Error: {:+5.2f} sigma'.format(np.mean(dFbol_sigma_weak)) print 'Mean Angular Diameter Error: {:+5.2f} sigma\n'.format(np.mean(dTheta_sigma_weak)) print 'Strong Metallicity Prior' print 'Mean Bolometric Flux Error: {:+5.2f} sigma'.format(np.mean(dFbol_sigma_strong)) print 'Mean Angular Diameter Error: {:+5.2f} sigma\n'.format(np.mean(dTheta_sigma_strong)) # compute median of the offsets print 'Weak Metallicity Prior' print 'Median Bolometric Flux Error: {:+5.2f} sigma'.format(np.median(dFbol_sigma_weak)) print 'Median Angular Diameter Error: {:+5.2f} sigma\n'.format(np.median(dTheta_sigma_weak)) print 'Strong Metallicity Prior' print 'Median Bolometric Flux Error: {:+5.2f} sigma'.format(np.median(dFbol_sigma_strong)) print 'Median Angular Diameter Error: {:+5.2f} sigma\n'.format(np.median(dTheta_sigma_strong)) Explanation: Computing the mean and median offsets of bolometric flux and angular diameter from the zero-points, we find End of explanation # define in terms of (obs - model)/obs # # angular diameters dTheta_weak = (FeH_weak[:, 18] - FeH_weak[:, 8])/FeH_weak[:, 18] dTheta_strong = (FeH_strong[:, 18] - FeH_strong[:, 8])/FeH_strong[:, 18] # bolometric fluxes (note: strange units for model properties) dFbol_weak = (FeH_weak[:, 22] - 10**(FeH_weak[:, 7] + 8.0))/FeH_weak[:, 22] dFbol_strong = (FeH_strong[:, 22] - 10**(FeH_strong[:, 7] + 8.0))/FeH_strong[:, 22] # compute mean of the offsets print 'Weak Metallicity Prior' print 'Mean Bolometric Flux Error: {:+5.2f}%'.format(np.mean(dFbol_weak)*100.) print 'Mean Angular Diameter Error: {:+5.2f}%\n'.format(np.mean(dTheta_weak)*100.) print 'Strong Metallicity Prior' print 'Mean Bolometric Flux Error: {:+5.2f}%'.format(np.mean(dFbol_strong)*100.) 
print 'Mean Angular Diameter Error: {:+5.2f}%\n'.format(np.mean(dTheta_strong)*100.) # compute median of the offsets print 'Weak Metallicity Prior' print 'Median Bolometric Flux Error: {:+5.2f}%'.format(np.median(dFbol_weak)*100.) print 'Median Angular Diameter Error: {:+5.2f}%\n'.format(np.median(dTheta_weak)*100.) print 'Strong Metallicity Prior' print 'Median Bolometric Flux Error: {:+5.2f}%'.format(np.median(dFbol_strong)*100.) print 'Median Angular Diameter Error: {:+5.2f}%\n'.format(np.median(dTheta_strong)*100.) Explanation: Rephrasing these as fractional relative errors, End of explanation fig, ax = plt.subplots(1, 1, figsize=(8, 8)) # set axis properties ax.set_xlabel('$\\log_{10}(age / yr)$', fontsize=22.) ax.set_ylabel('Number', fontsize=22.) strong_hist = ax.hist(FeH_strong[:, 3], bins=20, facecolor='#800000', alpha=0.6) weak_hist = ax.hist(FeH_weak[:, 3], bins=20, facecolor='#4682B4', alpha=0.6) Explanation: Errors on stellar radii (given that we've recovered measured distances) are on the order of 2%, depending on the strength of the metallicity prior. This is consistent with mean offsets observed among low-mass stars in detached double-lined ecilpsing binaries (Feiden & Chaboyer 2012; [Spada et al. 2013). However, it's quite possible that the models are compensating for true modeling errors with older ages and higher metallicities. Let's look at the distribution of ages. End of explanation fig, ax = plt.subplots(1, 1, figsize=(8, 8)) # set axis properties ax.set_xlabel('Mass ($M_{\\odot}$)', fontsize=22.) ax.set_ylabel('Number', fontsize=22.) weak_hist = ax.hist(FeH_weak[:, 0], bins=10, facecolor='#4682B4', alpha=0.6) Explanation: There is, in fact, a significant number of stars that are predicted to have ages comparable to the age of the Universe. While we cannot rule out the possibility that some stars have older ages consistent with the thick disk or halo populations, it is unlikely that such a large fraction of local field stars (nearly 50%) have ages older than 12 Gyr. This, in part, reflects the difficult of acquiring ages for low-mass stars, particuarly those in the M dwarf regime, where the stellar properties are not expected to evolve significantly over the stellar lifetime. A preference for older ages then only reflects that fact that some evolution does occur in the models, leading to almost negligibly larger radii that are only slightly preferred given the systematic offset observed above. While considering final distributions, here is the final mass distribution from the weak metallicity prior trial. End of explanation fig, axes = plt.subplots(2, 1, figsize=(10, 8)) # set axis properties axes[1].set_xlabel('Mass ($M_{\\odot}$)', fontsize=22.) axes[0].set_ylabel('$\\Delta T_{\\rm eff} / \\sigma$', fontsize=22.) axes[1].set_ylabel('$\\Delta \\Theta / \\sigma$', fontsize=22.) # plot zero points for ax in axes: ax.plot([0.0, 1.0e0], [0.0, 0.0], '--', lw=2, color='#333333') # plot relative errors axes[0].plot(FeH_strong[:, 0], dTeff_sigma_strong, 's', color='#800000', markersize=7.0) axes[1].plot(FeH_strong[:, 0], dTheta_sigma_strong, 's', color='#800000', markersize=7.0) axes[0].plot(FeH_weak[:, 0], dTeff_sigma_weak, 'o', color='#4682B4', markersize=9.0) axes[1].plot(FeH_weak[:, 0], dTheta_sigma_weak, 'o', color='#4682B4', markersize=9.0) Explanation: Exploring Correlations End of explanation fig, axes = plt.subplots(2, 1, figsize=(10, 8)) # set axis properties axes[1].set_xlabel('Effective Temperature (K)', fontsize=22.) 
axes[0].set_ylabel('$\\Delta T_{\\rm eff} / \\sigma$', fontsize=22.) axes[1].set_ylabel('$\\Delta \\Theta / \\sigma$', fontsize=22.) # plot zero points for ax in axes: ax.plot([2500, 6000], [0.0, 0.0], '--', lw=2, color='#333333') # plot relative errors axes[0].plot(FeH_strong[:, 24], dTeff_sigma_strong, 's', color='#800000', markersize=7.0) axes[1].plot(FeH_strong[:, 24], dTheta_sigma_strong, 's', color='#800000', markersize=7.0) axes[0].plot(FeH_weak[:, 24], dTeff_sigma_weak, 'o', color='#4682B4', markersize=9.0) axes[1].plot(FeH_weak[:, 24], dTheta_sigma_weak, 'o', color='#4682B4', markersize=9.0) Explanation: There do not appear to be trends among model-observation discrepancies as a function of mass. Instead, models tend to systmatically under predict stellar radii and over predict effective temperatures. Below $0.5 M_{\odot}$, there is increased scatter owing to four significant outliers. One of these, Gl 388 (AD Leo), is a candidate young star, so we should perhaps not be surprised by the failure of models to reproduce its properties. It stands out in our above analysis of how well models recovered observed bolometric fluxes. Since the star is young, the models are not able to reproduce its bolometric flux; Gl 388 shows the largest disagreements when it comes to bolometric flux, adding further proof that the system may very well be young. End of explanation fig, axes = plt.subplots(2, 1, figsize=(10, 8)) # set axis properties axes[1].set_xlabel('Observed Metallicity (dex)', fontsize=22.) axes[0].set_ylabel('$\\Delta T_{\\rm eff} / \\sigma$', fontsize=22.) axes[1].set_ylabel('$\\Delta \\Theta / \\sigma$', fontsize=22.) # plot zero points for ax in axes: ax.set_xlim(-0.5, 0.5) ax.plot([-0.5, 0.5], [0.0, 0.0], '--', lw=2, color='#333333') # plot relative errors axes[0].plot(FeH_strong[:, 30], dTeff_sigma_strong, 's', color='#800000', markersize=7.0) axes[1].plot(FeH_strong[:, 30], dTheta_sigma_strong, 's', color='#800000', markersize=7.0) axes[0].plot(FeH_weak[:, 30], dTeff_sigma_weak, 'o', color='#4682B4', markersize=9.0) axes[1].plot(FeH_weak[:, 30], dTheta_sigma_weak, 'o', color='#4682B4', markersize=9.0) Explanation: As with stellar mass, there is no immediate correlation between observed effective temperatures and model inferred fundamental properties. However, it is clear that the largest errors appear at the coolest temperatures, with the presence of the same four outliers as above. We also again note that, even for the hottest stars, there appears to be a systematic offset of model errors toward hotter temperatures and smaller radii. Individual errors are not necessarily significant, often located within $1\sigma$ of the zero-point. However, the sysetmatic nature of the errors betrays the existence of possible errors among K dwarfs. A problem thought to be exclusive of M dwarfs appears to extend throughout the K dwarf regime. End of explanation # load Mann et al. (2015) data mann = np.genfromtxt('data/dmestar_model_mcmcTLD_Final_FeH.txt') # plot comparison fig, axes = plt.subplots(2, 1, figsize=(10, 8)) # set axis properties axes[1].set_xlabel('Effective Temperature (K)', fontsize=22.) axes[0].set_ylabel('$\\Delta T_{\\rm eff} / \\sigma$', fontsize=22.) axes[1].set_ylabel('$\\Delta \\Theta / \\sigma$', fontsize=22.) 
# plot zero points for ax in axes: ax.plot([2500, 6000], [0.0, 0.0], '--', lw=2, color='#333333') # plot relative errors axes[0].plot(mann[:, 32], (mann[:, 32] - mann[:, 18])/mann[:, 33], 'o', color='#555555', markersize=6.0) axes[1].plot(mann[:, 32], (mann[:, 36] - mann[:, 24])/mann[:, 37], 'o', color='#555555', markersize=6.0) axes[0].plot(FeH_weak[:, 24], dTeff_sigma_weak, 'o', color='#4682B4', markersize=9.0) axes[1].plot(FeH_weak[:, 24], dTheta_sigma_weak, 'o', color='#4682B4', markersize=9.0) fig, axes = plt.subplots(2, 1, figsize=(10, 8)) # set axis properties axes[1].set_xlabel('Mass ($M_{\\odot}$)', fontsize=22.) axes[0].set_ylabel('$\\Delta T_{\\rm eff} / \\sigma$', fontsize=22.) axes[1].set_ylabel('$\\Delta \\Theta / \\sigma$', fontsize=22.) # plot zero points for ax in axes: ax.plot([0.0, 1.0], [0.0, 0.0], '--', lw=2, color='#333333') # plot relative errors axes[0].plot(mann[:, 3], (mann[:, 32] - mann[:, 18])/mann[:, 33], 'o', color='#555555', markersize=6.0) axes[1].plot(mann[:, 3], (mann[:, 36] - mann[:, 24])/mann[:, 37], 'o', color='#555555', markersize=6.0) axes[0].plot(FeH_weak[:, 0], dTeff_sigma_weak, 'o', color='#4682B4', markersize=9.0) axes[1].plot(FeH_weak[:, 0], dTheta_sigma_weak, 'o', color='#4682B4', markersize=9.0) Explanation: Comparison with Mann et al. (2015) End of explanation
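As a quick sanity check (not in the original notebook) of the claim above that nearly 50% of the sample is assigned ages older than 12 Gyr, the fraction can be computed directly from the most-likely ages; column 3 of the loaded arrays holds $\log_{10}(\mathrm{age/yr})$, as used for the age histogram.

# Illustrative check: fraction of stars with most-likely ages above 12 Gyr (weak metallicity prior)
old_frac = np.mean(10**FeH_weak[:, 3] > 12.0e9)  # column 3 is log10(age / yr)
print('Fraction older than 12 Gyr (weak prior): {0:.2f}'.format(old_frac))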
Given the following text description, write Python code to implement the functionality described below step by step Description: Source Step1: Source
Python Code: flowData = pd.read_csv('../TableD_01110030-eng.csv') flowData.head() # Convert place names to unicode flowData['GEO'] = flowData['GEO'].map(u) flowData['GEODEST'] = flowData['GEODEST'].map(u) flowData.head() # Remove unneeded columns dropCols = ['Geographical classification', 'Geographical classification.1', 'Coordinate', 'Vector'] flowData = flowData.drop(dropCols, axis=1) flowData.head(10) # Rename columns flowData = flowData.rename(columns={"GEO": "Origin", "GEODEST": "Destination"}) # Filter for only the most recent data flowData2011 = flowData[flowData['Ref_Date'] == 2011].drop('Ref_Date', axis=1).reset_index(drop=True) flowData2011.head() # Convert that Value column to a numeric data type flowData2011['Value'] = flowData2011['Value'].convert_objects(convert_numeric=True) # Remove all the non-census areas so we can geocode the cities that qualify as CMAs flowData2011_cma = flowData2011[~flowData2011['Destination'].str.contains('Non-census')] flowData2011_cma = flowData2011_cma[~flowData2011_cma['Origin'].str.contains('Non-census')] flowData2011_cma.head() outMig = flowData2011_cma[flowData2011_cma['MIGMOVE'] == "Out-migration"].drop('MIGMOVE', axis=1).reset_index(drop=True) outMig.head() outMigPiv = outMig.pivot('Origin', 'Destination', 'Value') outMigPiv.head() # Since there is such a range in values, let's put this on a log scale log_scale = lambda x: np.log10(x) outMigPivLog = outMigPiv.applymap(log_scale).replace([np.inf, -np.inf], 0) sns.heatmap(outMigPivLog) Explanation: Source: In-, out- and net-migration estimates, by geographic regions of origin and destination, Terminated End of explanation # Get mapping of cities to centroids centroids = pd.read_csv('./canada_cities.csv', header=None, names=['Location', 'Province', 'Latitude', 'Longitude']) from titlecase import titlecase title_u = lambda x: u(x).title() centroids['Location'] = centroids['Location'].map(title_u) centroids.head(15) provAbbr = {'BC' : 'British Columbia', 'SK' : 'Saskatchewan', 'QC' : 'Quebec', 'AB' : 'Alberta', 'NB' : 'New Brunswick', 'NS' : 'Nova Scotia', 'ON' : 'Ontario', 'NL' : 'Newfoundland', 'PE' : 'PEI', 'MB' : 'Manitoba', 'NT' : 'Northwest Territories', 'YT' : 'Yukon', 'NU' : 'Nunavut'} centroids['Province'] = centroids['Province'].replace(provAbbr) centroids.head() centroids = centroids.drop_duplicates(subset=['Location', 'Province']) centroids.head() Explanation: Source: Geocoder.ca End of explanation
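A possible next step, sketched here only as an illustration (it does not appear in the notebook), is to attach the geocoded centroids to each origin-destination pair. This assumes the CMA names in the Origin and Destination columns can be matched directly to the Location strings in the centroid table, which in practice may need further name cleaning.

# Hypothetical join of centroid coordinates onto the out-migration flows (assumes matching names)
coords = centroids[['Location', 'Latitude', 'Longitude']]
outMigGeo = outMig.merge(coords, how='left', left_on='Origin', right_on='Location')
outMigGeo = outMigGeo.rename(columns={'Latitude': 'OriginLat', 'Longitude': 'OriginLon'}).drop('Location', axis=1)
outMigGeo = outMigGeo.merge(coords, how='left', left_on='Destination', right_on='Location')
outMigGeo = outMigGeo.rename(columns={'Latitude': 'DestLat', 'Longitude': 'DestLon'}).drop('Location', axis=1)
outMigGeo.head()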
Given the following text description, write Python code to implement the functionality described below step by step Description: Temporal-Comorbidity Adjusted Risk of Emergency Readmission (TCARER) <font style="font-weight Step1: 1.1. Initialise General Settings Step2: Common variables Step3: <br/><br/> 2.1. Initialise Step4: <br/><br/> 2.2. Load Features Load pre-processed features Step5: 2.3. Load Features Names Step6: 2.4. Load the fitted model <font style="font-weight Step7: Load the model Step8: <font style="font-weight Step9: <br/><br/> Performance Step10: <br/><br/> 2.5. Load the Extra Features for Benchmarking Read the extra features Step11: Replace NaN values that appear in the Charlson-Index feature Step12: Combine (join by PatientID) Step13: <font style="font-weight Step14: <br/><br/> 3. Charlson Index Model 3.1. Algorithm <font style="font-weight Step15: <font style="font-weight Step16: <br/><br/> 3.2. Initialise Step17: 3.3. Fit Fit Model Step18: Fit Performance Step19: 3.4. Predict Step20: 3.5. Cross-Validate Step21: 3.6. Save Step22: <br/><br/> 4. Features Statistics 4.1. Features Rank <i>It is produced during modelling</i> 4.2. Descriptive Statistics <i>It is produced during modelling</i> 4.3. Features Weights Step23: <br/><br/> 5. Model Performance 5.1. Performance Indicators Step24: 5.2. Population Statistics Step25: 5.2.1. Most Prevalent Diagnoses Groups Most prevalent diagnoses groups (30-day, 1-year readmission) Step26: 5.2.2. Major Comorbidity Groups Comorbidity diagnoses groups (30-day, 1-year readmission) Step27: 5.2.3. Charlson Comorbidity Groups Charlson diagnoses groups (30-day, 1-year readmission) Step28: 5.2.4. Most Prevalent Operations Most prevalent operations variables (30-day, 1-year readmission) Step29: 5.2.4. Most Prevalent Main Speciality Most prevalent main speciality variables (30-day, 1-year readmission) Step30: 5.2.5. Other Variables Other variables (30-day, 1-year readmission) Step31: <br/><br/> 5.3. Plots Step32: 5.3.1. ROC Step33: 5.3.2. Precision Recall Step34: 5.3.3. Learning Curve Step35: 5.3.4. Validation Curve Set the model's metadata
Python Code: # reload modules # Reload all modules (except those excluded by %aimport) every time before executing the Python code typed. %load_ext autoreload %autoreload 2 # import libraries import logging import os import sys import gc import pandas as pd import numpy as np import random import statistics from datetime import datetime from collections import OrderedDict from sklearn import preprocessing from scipy.stats import stats from IPython.display import display, HTML from pprint import pprint from pivottablejs import pivot_ui from IPython.display import clear_output import imblearn.over_sampling as oversampling import matplotlib.pyplot as plt # import local classes from Configs.CONSTANTS import CONSTANTS from Configs.Logger import Logger from Features.Variables import Variables from ReadersWriters.ReadersWriters import ReadersWriters from Stats.PreProcess import PreProcess from Stats.FeatureSelection import FeatureSelection from Stats.TrainingMethod import TrainingMethod from Stats.Plots import Plots from Stats.Stats import Stats # Check the interpreter print("\nMake sure the correct Python interpreter is used!") print(sys.version) print("\nMake sure sys.path of the Python interpreter is correct!") print(os.getcwd()) Explanation: Temporal-Comorbidity Adjusted Risk of Emergency Readmission (TCARER) <font style="font-weight:bold;color:gray">Summary Reports</font> 1. Initialise End of explanation # init paths & directories config_path = os.path.abspath("ConfigInputs/CONFIGURATIONS.ini") io_path = os.path.abspath("../../tmp/TCARER/Basic_prototype") schema = "parr_sample_prototype" app_name = "T-CARER" print("Output path:", io_path) # init logs if not os.path.exists(io_path): os.makedirs(io_path, exist_ok=True) logger = Logger(path=io_path, app_name=app_name, ext="log") logger = logging.getLogger(app_name) # init constants CONSTANTS.set(io_path, app_name) # initialise other classes readers_writers = ReadersWriters() plots = Plots() # other Constant variables submodel_name = "hesIp" submodel_input_name = "tcarer_model_features_ip" # set print settings pd.set_option('display.width', 1600, 'display.max_colwidth', 800) Explanation: 1.1. Initialise General Settings End of explanation # settings feature_table = 'tcarer_features' featureExtra_table = 'tcarer_featuresExtra' result = readers_writers.load_mysql_procedure("tcarer_set_featuresExtra", [feature_table, featureExtra_table], schema) Explanation: Common variables: * Readmission * 'label30', 'label365' * Admissions Methods: * 'admimeth_0t30d_prevalence_1_cnt', ... * Prior Spells: * 'prior_spells' * Male: * 'gender_1' * LoS: * 'trigger_los' * Age: * 'trigger_age' * Charlson Score: * 'trigger_charlsonFoster' * predictions score * score * Most prevalent diagnoses groups (0-30-day, 0-730-day): * 0-30-day: 'diagCCS_0t30d_prevalence_1_cnt', ... * 0-730-day: 'diagCCS_0t30d_prevalence_1_cnt' + 'diagCCS_30t90d_prevalence_1_cnt' + 'diagCCS_90t180d_prevalence_1_cnt' + 'diagCCS_180t365d_prevalence_1_cnt' + 'diagCCS_365t730d_prevalence_1_cnt', ... * Comorbidity diagnoses groups (0-730-day): * 'prior_admiOther', 'prior_admiAcute', 'prior_spells', 'prior_asthma', 'prior_copd', 'prior_depression', 'prior_diabetes', 'prior_hypertension', 'prior_cancer', 'prior_chd', 'prior_chf' * Charlson diagnoses groups (trigger): * 'diagCci_01_myocardial_freq__trigger',... <br/><br/> 2. 
Load the Saved Model Outputs <font style="font-weight:bold;color:red">Note: Make sure the following files are located at the input path</font> * Step_05_Features.bz2 * Step_07_Top_Features_... * Step_07_Model_Train_model_rank_summaries_... * Step_09_Model_... <font style="font-weight:bold;color:red">Note: Create features extra (Run only once)</font> End of explanation # select the target variable target_feature = "label365" # "label365", "label30" method_name = "rfc" # "rfc", "gbrt", "randLogit", "wdnn" rank_models = ["rfc"] # ["rfc", "gbrt", "randLogit"] Explanation: <br/><br/> 2.1. Initialise End of explanation file_name = "Step_07_Features" features = readers_writers.load_serialised_compressed(path=CONSTANTS.io_path, title=file_name) # print print("File size: ", os.stat(os.path.join(CONSTANTS.io_path, file_name + ".bz2")).st_size) print("Number of columns: ", len(features["train_indep"].columns)) print("features: {train: ", len(features["train_indep"]), ", test: ", len(features["test_indep"]), "}") Explanation: <br/><br/> 2.2. Load Features Load pre-processed features End of explanation file_name = "Step_07_Top_Features_rfc_adhoc" features_names_selected = readers_writers.load_csv(path=CONSTANTS.io_path, title=file_name, dataframing=False)[0] features_names_selected = [f.replace("\n", "") for f in features_names_selected] display(pd.DataFrame(features_names_selected)) Explanation: 2.3. Load Features Names End of explanation training_method = TrainingMethod(method_name) # file name file_name = "Step_09_Model_" + method_name + "_" + target_feature Explanation: 2.4. Load the fitted model <font style="font-weight:bold;color:blue">2.4.1. Basic Models</font> Initialise End of explanation training_method.load(path=CONSTANTS.io_path, title=file_name) Explanation: Load the model End of explanation class TrainingMethodTensorflow: def __init__(self, summaries, features_names, num_features, cut_off, train_size, test_size): self.model_predict = {"train": {'score': [], 'model_labels': []}, "test": {'score': [], 'model_labels': []}} self.__stats = Stats() # summaries["fit"]["get_variable_names"] # summaries["fit"]["get_variable_value"] # summaries["fit"]["get_params"] # summaries["fit"]["export"] # summaries["fit"]["get_variable_names()"] # summaries["fit"]["params"] # summaries["fit"]["dnn_bias_"] # summaries["fit"]["dnn_weights_"] # summaries["train"]["results"] # summaries["test"]["results"] self.model_predict["train"]['pred'] = np.asarray([1 if i[1] >= 0.5 else 0 for i in summaries["train"]["predict_proba"]][0:train_size]) self.model_predict["test"]['pred'] = np.asarray([1 if i[1] >= 0.5 else 0 for i in summaries["test"]["predict_proba"]][0:test_size]) self.model_predict["train"]['score'] = np.asarray([i[1] for i in summaries["train"]["predict_proba"]][0:train_size]) self.model_predict["test"]['score'] = np.asarray([i[1] for i in summaries["test"]["predict_proba"]][0:test_size]) self.model_predict["train"]['score_0'] = np.asarray([i[0] for i in summaries["train"]["predict_proba"]][0:train_size]) self.model_predict["test"]['score_0'] = np.asarray([i[0] for i in summaries["test"]["predict_proba"]][0:test_size]) def train_summaries(self): return {"feature_importances_": self.__weights} def predict_summaries(self, feature_target, sample_name): return self.__stats.predict_summaries(self.model_predict[sample_name], feature_target) file_name = "model_tensorflow_summaries_" + target_feature summaries = readers_writers.load_serialised_compressed(path=CONSTANTS.io_path, title=file_name) num_features = 300 
cut_off = 0.5 training_method = TrainingMethodTensorflow(summaries, features_names_selected, num_features, cut_off, len(features["train_indep"].index), len(features["test_indep"].index)) Explanation: <font style="font-weight:bold;color:blue">2.4.2. TensorFlow Models</font> End of explanation # train o_summaries = training_method.predict_summaries(features["train_target"][target_feature], "train") for k in o_summaries.keys(): print(k, o_summaries[k]) print("\n") # test o_summaries = training_method.predict_summaries(features["test_target"][target_feature], "test") for k in o_summaries.keys(): print(k, o_summaries[k]) Explanation: <br/><br/> Performance End of explanation table = 'tcarer_featuresExtra' features_extra_dtypes = {'patientID': 'U32', 'trigger_charlsonFoster': 'i4', 'trigger_los': 'i4', 'trigger_age': 'i4', 'prior_admiOther': 'i4', 'prior_admiAcute': 'i4', 'prior_spells': 'i4', 'prior_asthma': 'i4', 'prior_copd': 'i4', 'prior_depression': 'i4', 'prior_diabetes': 'i4', 'prior_hypertension': 'i4', 'prior_cancer': 'i4', 'prior_chd': 'i4', 'prior_chf': 'i4', 'diagCci_01_myocardial_freq': 'i4', 'diagCci_02_chf_freq': 'i4', 'diagCci_03_pvd_freq': 'i4', 'diagCci_04_cerebrovascular_freq': 'i4', 'diagCci_05_dementia_freq': 'i4', 'diagCci_06_cpd_freq': 'i4', 'diagCci_07_rheumatic_freq': 'i4', 'diagCci_08_ulcer_freq': 'i4', 'diagCci_09_liverMild_freq': 'i4', 'diagCci_10_diabetesNotChronic_freq': 'i4', 'diagCci_11_diabetesChronic_freq': 'i4', 'diagCci_12_hemiplegia_freq': 'i4', 'diagCci_13_renal_freq': 'i4', 'diagCci_14_malignancy_freq': 'i4', 'diagCci_15_liverSevere_freq': 'i4', 'diagCci_16_tumorSec_freq': 'i4', 'diagCci_17_aids_freq': 'i4', 'diagCci_18_depression_freq': 'i4', 'diagCci_19_cardiac_freq': 'i4', 'diagCci_20_valvular_freq': 'i4', 'diagCci_21_pulmonary_freq': 'i4', 'diagCci_22_vascular_freq': 'i4', 'diagCci_23_hypertensionNotComplicated_freq': 'i4', 'diagCci_24_hypertensionComplicated_freq': 'i4', 'diagCci_25_paralysis_freq': 'i4', 'diagCci_26_neuroOther_freq': 'i4', 'diagCci_27_pulmonaryChronic_freq': 'i4', 'diagCci_28_diabetesNotComplicated_freq': 'i4', 'diagCci_29_diabetesComplicated_freq': 'i4', 'diagCci_30_hypothyroidism_freq': 'i4', 'diagCci_31_renal_freq': 'i4', 'diagCci_32_liver_freq': 'i4', 'diagCci_33_ulcerNotBleeding_freq': 'i4', 'diagCci_34_psychoses_freq': 'i4', 'diagCci_35_lymphoma_freq': 'i4', 'diagCci_36_cancerSec_freq': 'i4', 'diagCci_37_tumorNotSec_freq': 'i4', 'diagCci_38_rheumatoid_freq': 'i4', 'diagCci_39_coagulopathy_freq': 'i4', 'diagCci_40_obesity_freq': 'i4', 'diagCci_41_weightLoss_freq': 'i4', 'diagCci_42_fluidDisorder_freq': 'i4', 'diagCci_43_bloodLoss_freq': 'i4', 'diagCci_44_anemia_freq': 'i4', 'diagCci_45_alcohol_freq': 'i4', 'diagCci_46_drug_freq': 'i4'} features_extra_name = features_extra_dtypes.keys() # Read features from the MySQL features_extra = dict() features_extra['train'] = readers_writers.load_mysql_table(schema, table, dataframing=True) features_extra['train'].astype(dtype=features_extra_dtypes) features_extra['test'] = features_extra['train'] print("Number of columns: ", len(features_extra['train'].columns), "; Total records: ", len(features_extra['train'].index)) Explanation: <br/><br/> 2.5. 
Load the Extra Features for Benchmarking Read the extra features End of explanation features_extra['train'].loc[:, "trigger_charlsonFoster"] = np.nan_to_num(features_extra['train']["trigger_charlsonFoster"]) features_extra['test'].loc[:, "trigger_charlsonFoster"] = np.nan_to_num(features_extra['test']["trigger_charlsonFoster"]) Explanation: Replace NaN appears in the Charlson-Index feature End of explanation features_extra['train'] = features_extra['train'].merge( pd.concat([features['train_id'], features['train_target'], pd.DataFrame({'score': training_method.model_predict["train"]['score']}), features['train_indep']], axis=1), how="inner", on="patientID") features_extra['test'] = features_extra['test'].merge( pd.concat([features['test_id'], features['test_target'], pd.DataFrame({'score': training_method.model_predict["test"]['score']}), features['test_indep']], axis=1), how="inner", on="patientID") Explanation: Combine (join by PatientID) End of explanation features = None gc.collect() Explanation: <font style="font-weight:bold;color:red">Clean-up</font> End of explanation charlson_method_name = "rfc" kwargs = {"n_estimators": 20, "criterion": 'gini', "max_depth": None, "min_samples_split": 100, "min_samples_leaf": 50, "min_weight_fraction_leaf": 0.0, "max_features": 'auto', "max_leaf_nodes": None, "bootstrap": True, "oob_score": False, "n_jobs": -1, "random_state": None, "verbose": 0, "warm_start": False, "class_weight": "balanced_subsample"} Explanation: <br/><br/> 3. Charlson Index Model 3.1. Algorithm <font style="font-weight:bold;color:brown">Algorithm 1</font>: Random Forest End of explanation charlson_method_name = "lr" kwargs = {"penalty": 'l2', "dual": False, "tol": 0.0001, "C": 1, "fit_intercept": True, "intercept_scaling": 1, "class_weight": None, "random_state": None, "solver": 'liblinear', "max_iter": 100, "multi_class": 'ovr', "verbose": 0, "warm_start": False, "n_jobs": -1} Explanation: <font style="font-weight:bold;color:brown">Algorithm 2</font>: Logistic Regression End of explanation # set features charlson_features_names = ['trigger_charlsonFoster'] # select the target variable charlson_target_feature = "label30" # "label30", "label365" # file name file_name = "report_Model_Charlson_" + charlson_method_name + "_" + charlson_target_feature # initialise charlson_training_method = TrainingMethod(charlson_method_name) Explanation: <br/><br/> 3.2. Initialise End of explanation o_summaries = dict() # Fit model = charlson_training_method.train(features_extra["train"][charlson_features_names], features_extra["train"][target_feature], **kwargs) charlson_training_method.save_model(path=CONSTANTS.io_path, title=file_name) # load model # charlson_training_method.load(path=CONSTANTS.io_path, title=file_name) # short summary o_summaries = charlson_training_method.train_summaries() Explanation: 3.3. 
Fit Fit Model End of explanation o_summaries = dict() model = charlson_training_method.predict(features_extra["train"][charlson_features_names], "train") # short summary o_summaries = charlson_training_method.predict_summaries(pd.Series(features_extra["train"][target_feature]), "train") print("ROC AUC:", o_summaries['roc_auc_score_1'], "\n", o_summaries['classification_report']) for k in o_summaries.keys(): print(k, o_summaries[k]) Explanation: Fit Performance End of explanation o_summaries = dict() model = charlson_training_method.predict(features_extra["test"][charlson_features_names], "test") # short summary o_summaries = charlson_training_method.predict_summaries(pd.Series(features_extra["test"][target_feature]), "test") print("ROC AUC:", o_summaries['roc_auc_score_1'], "\n", o_summaries['classification_report']) for k in o_summaries.keys(): print(k, o_summaries[k]) Explanation: 3.4. Predict End of explanation o_summaries = dict() score = charlson_training_method.cross_validate(features_extra["test"][charlson_features_names], features_extra["test"][target_feature], scoring="neg_mean_squared_error", cv=10) # short summary o_summaries = charlson_training_method.cross_validate_summaries() print("Scores: ", o_summaries) Explanation: 3.5. Cross-Validate End of explanation charlson_training_method.save_model(path=CONSTANTS.io_path, title=file_name) Explanation: 3.6. Save End of explanation def features_importance_rank(fitting_method, ranking_file_name=None, rank_models=["rfc", "gbrt", "randLogit"]): # Fitting weight o_summaries = pd.DataFrame({"Name": fitting_method.model_labels, "Fitting Weight": fitting_method.train_summaries()["feature_importances_"]}, index = fitting_method.model_labels) o_summaries = o_summaries.sort_values("Fitting Weight", ascending=False) o_summaries = o_summaries.reset_index(drop=True) # Ranking scores if ranking_file_name is not None: for rank_model in rank_models: o_summaries_ranks = readers_writers.load_serialised_compressed( path=CONSTANTS.io_path, title=ranking_file_name + rank_model) for trial in range(len(o_summaries_ranks)): o_summaries_rank = pd.DataFrame(o_summaries_ranks[trial]) o_summaries_rank.columns = ["Name", "Importance - " + rank_model + " - Trial_" + str(trial), "Order - " + rank_model + " - Trial_" + str(trial)] o_summaries = o_summaries.merge(o_summaries_rank, how="outer", on="Name") return o_summaries file_name = "Step_07_Model_Train_model_rank_summaries_" o_summaries = features_importance_rank(fitting_method=training_method, ranking_file_name=file_name, rank_models=rank_models) file_name = "report_weights_ranks" readers_writers.save_csv(path=CONSTANTS.io_path, title=file_name, data=o_summaries, append=False, extension="csv", header=o_summaries.columns) display(o_summaries.head()) Explanation: <br/><br/> 4. Features Statistics 4.1. Features Rank <i>It is produced during modelling</i> 4.2. Descriptive Statistics <i>It is produced during modelling</i> 4.3. 
Features Weigths End of explanation measures = ["accuracy_score", "precision_score", "recall_score", "roc_auc_score_1", "f1_score", "fbeta_score", "average_precision_score", "log_loss", "zero_one_loss", "hamming_loss", "jaccard_similarity_score", "matthews_corrcoef"] # train o_summaries = training_method.predict_summaries(features_extra["train"][target_feature], "train") o_summaries = np.array([(m, o_summaries[m]) for m in measures]) report_performance = pd.DataFrame({"Measure": o_summaries[:, 0], "Sample Train": o_summaries[:, 1], "Sample Test": [None] * len(measures)}) # test o_summaries = training_method.predict_summaries(features_extra["test"][target_feature], "test") o_summaries = np.array([(m, o_summaries[m]) for m in measures]) report_performance["Sample Test"] = o_summaries[:, 1] # print file_name = "report_performance_" + method_name + "_" + target_feature display(report_performance) readers_writers.save_csv(path=CONSTANTS.io_path, title=file_name, data=report_performance, append=False) Explanation: <br/><br/> 5. Model Performance 5.1. Performance Indicators End of explanation def population_statistics(df, diagnoses, cutpoints=[0.50, 0.60, 0.70, 0.80, 0.90]): o_summaries = pd.DataFrame(columns=['Name'], index=diagnoses) o_summaries['Name'] = diagnoses for diagnose in diagnoses: o_summaries.loc[diagnose, 'Total'] = len(df.index) if diagnose not in df: continue o_summaries.loc[diagnose, 'Total - diagnose'] = len(df.loc[(df[diagnose] > 0)].index) o_summaries.loc[diagnose, 'Total - diagnose - label_1'] = len(df.loc[(df[diagnose] > 0) & (df[target_feature] > 0)].index) o_summaries.loc[diagnose, 'Emergency Readmission Rate - cnt 1'] = len(df.loc[(df[diagnose] > 0) & (df['admimeth_0t30d_prevalence_1_cnt'] > 0) & (df[target_feature] > 0)].index) o_summaries.loc[diagnose, 'Emergency Readmission Rate - cnt 2'] = len(df.loc[(df[diagnose] > 0) & (df['admimeth_0t30d_prevalence_2_cnt'] > 0) & (df[target_feature] > 0)].index) o_summaries.loc[diagnose, 'Emergency Readmission Rate - cnt 3'] = len(df.loc[(df[diagnose] > 0) & (df['admimeth_0t30d_prevalence_3_cnt'] > 0) & (df[target_feature] > 0)].index) o_summaries.loc[diagnose, 'Prior Spells'] = len(df.loc[(df[diagnose] > 0) & (df['prior_spells'] > 0) & (df[target_feature] > 0)].index) o_summaries.loc[diagnose, 'Male - perc'] = len(df.loc[(df[diagnose] > 0) & (df['gender_1'] > 0) & (df[target_feature] > 0)].index) age = df.loc[(df[diagnose] > 0) & (df[target_feature] > 0)]['trigger_age'].describe(percentiles=[.25, .5, .75]) o_summaries.loc[diagnose, 'Age - IQR_min'] = age['min'] o_summaries.loc[diagnose, 'Age - IQR_25'] = age['25%'] o_summaries.loc[diagnose, 'Age - IQR_50'] = age['50%'] o_summaries.loc[diagnose, 'Age - IQR_75'] = age['75%'] o_summaries.loc[diagnose, 'Age - IQR_max'] = age['max'] los = df.loc[(df[diagnose] > 0) & (df[target_feature] > 0)]['trigger_los'].describe(percentiles=[.25, .5, .75]) o_summaries.loc[diagnose, 'LoS - IQR_min'] = los['min'] o_summaries.loc[diagnose, 'LoS - IQR_25'] = los['25%'] o_summaries.loc[diagnose, 'LoS - IQR_50'] = los['50%'] o_summaries.loc[diagnose, 'LoS - IQR_75'] = los['75%'] o_summaries.loc[diagnose, 'LoS - IQR_max'] = los['max'] for cutpoint in cutpoints: o_summaries.loc[diagnose, 'score - ' + str(cutpoint)] = len(df.loc[(df[diagnose] > 0) & (df['score'] > cutpoint)].index) o_summaries.loc[diagnose, 'TP - ' + str(cutpoint)] = len(df.loc[(df[diagnose] > 0) & (df[target_feature] > 0) & (df['score'] > cutpoint)].index) o_summaries.loc[diagnose, 'FP - ' + str(cutpoint)] = len(df.loc[(df[diagnose] > 0) & 
(df[target_feature] == 0) & (df['score'] > cutpoint)].index) o_summaries.loc[diagnose, 'FN - ' + str(cutpoint)] = len(df.loc[(df[diagnose] > 0) & (df[target_feature] > 0) & (df['score'] <= cutpoint)].index) o_summaries.loc[diagnose, 'TN - ' + str(cutpoint)] = len(df.loc[(df[diagnose] > 0) & (df[target_feature] == 0) & (df['score'] <= cutpoint)].index) o_summaries.loc[diagnose, 'Charlson - 0'] = len(df.loc[(df[diagnose] > 0) & (df["trigger_charlsonFoster"] == 0)].index) o_summaries.loc[diagnose, 'Charlson - 0 - label_1'] = len(df.loc[(df[diagnose] > 0) & (df["trigger_charlsonFoster"] == 0) & (df[target_feature] > 0)].index) o_summaries.loc[diagnose, 'Charlson - 1'] = len(df.loc[(df[diagnose] > 0) & (df["trigger_charlsonFoster"] == 1)].index) o_summaries.loc[diagnose, 'Charlson - 1 - label_1'] = len(df.loc[(df[diagnose] > 0) & (df["trigger_charlsonFoster"] == 1) & (df[target_feature] > 0)].index) o_summaries.loc[diagnose, 'Charlson - 2'] = len(df.loc[(df[diagnose] > 0) & (df["trigger_charlsonFoster"] == 2)].index) o_summaries.loc[diagnose, 'Charlson - 2 - label_1'] = len(df.loc[(df[diagnose] > 0) & (df["trigger_charlsonFoster"] == 2) & (df[target_feature] > 0)].index) o_summaries.loc[diagnose, 'Charlson - 3'] = len(df.loc[(df[diagnose] > 0) & (df["trigger_charlsonFoster"] == 3)].index) o_summaries.loc[diagnose, 'Charlson - 3 - label_1'] = len(df.loc[(df[diagnose] > 0) & (df["trigger_charlsonFoster"] == 3) & (df[target_feature] > 0)].index) o_summaries.loc[diagnose, 'Charlson - 4+'] = len(df.loc[(df[diagnose] > 0) & (df["trigger_charlsonFoster"] >= 4)].index) o_summaries.loc[diagnose, 'Charlson - 4+ - label_1'] = len(df.loc[(df[diagnose] > 0) & (df["trigger_charlsonFoster"] >= 4) & (df[target_feature] > 0)].index) for cutpoint in cutpoints: o_summaries.loc[diagnose, 'Charlson - 0 - label_1 - TP - ' + str(cutpoint)] = \ len(df.loc[(df[diagnose] > 0) & (df["trigger_charlsonFoster"] == 0) & (df[target_feature] > 0) & (df['score'] > cutpoint)].index) return o_summaries Explanation: 5.2. 
Population Statistics End of explanation diagnoses = ['diagCCS_0t30d_others_cnt', 'diagCCS_0t30d_prevalence_1_cnt', 'diagCCS_0t30d_prevalence_2_cnt', 'diagCCS_0t30d_prevalence_3_cnt', 'diagCCS_0t30d_prevalence_4_cnt', 'diagCCS_0t30d_prevalence_5_cnt', 'diagCCS_0t30d_prevalence_6_cnt', 'diagCCS_0t30d_prevalence_7_cnt', 'diagCCS_0t30d_prevalence_8_cnt', 'diagCCS_0t30d_prevalence_9_cnt', 'diagCCS_0t30d_prevalence_10_cnt', 'diagCCS_0t30d_prevalence_11_cnt', 'diagCCS_0t30d_prevalence_12_cnt', 'diagCCS_0t30d_prevalence_13_cnt', 'diagCCS_0t30d_prevalence_14_cnt', 'diagCCS_0t30d_prevalence_15_cnt', 'diagCCS_0t30d_prevalence_16_cnt', 'diagCCS_0t30d_prevalence_17_cnt', 'diagCCS_0t30d_prevalence_18_cnt', 'diagCCS_0t30d_prevalence_19_cnt', 'diagCCS_0t30d_prevalence_20_cnt', 'diagCCS_0t30d_prevalence_21_cnt', 'diagCCS_0t30d_prevalence_22_cnt', 'diagCCS_0t30d_prevalence_23_cnt', 'diagCCS_0t30d_prevalence_24_cnt', 'diagCCS_0t30d_prevalence_25_cnt', 'diagCCS_0t30d_prevalence_26_cnt', 'diagCCS_0t30d_prevalence_27_cnt', 'diagCCS_0t30d_prevalence_28_cnt', 'diagCCS_0t30d_prevalence_29_cnt', 'diagCCS_0t30d_prevalence_30_cnt' , 'diagCCS_30t90d_others_cnt', 'diagCCS_30t90d_prevalence_1_cnt', 'diagCCS_30t90d_prevalence_2_cnt', 'diagCCS_30t90d_prevalence_3_cnt', 'diagCCS_30t90d_prevalence_4_cnt', 'diagCCS_30t90d_prevalence_5_cnt', 'diagCCS_30t90d_prevalence_6_cnt', 'diagCCS_30t90d_prevalence_7_cnt', 'diagCCS_30t90d_prevalence_8_cnt', 'diagCCS_30t90d_prevalence_9_cnt', 'diagCCS_30t90d_prevalence_10_cnt', 'diagCCS_30t90d_prevalence_11_cnt', 'diagCCS_30t90d_prevalence_12_cnt', 'diagCCS_30t90d_prevalence_13_cnt', 'diagCCS_30t90d_prevalence_14_cnt', 'diagCCS_30t90d_prevalence_15_cnt', 'diagCCS_30t90d_prevalence_16_cnt', 'diagCCS_30t90d_prevalence_17_cnt', 'diagCCS_30t90d_prevalence_18_cnt', 'diagCCS_30t90d_prevalence_19_cnt', 'diagCCS_30t90d_prevalence_20_cnt', 'diagCCS_30t90d_prevalence_21_cnt', 'diagCCS_30t90d_prevalence_22_cnt', 'diagCCS_30t90d_prevalence_23_cnt', 'diagCCS_30t90d_prevalence_24_cnt', 'diagCCS_30t90d_prevalence_25_cnt', 'diagCCS_30t90d_prevalence_26_cnt', 'diagCCS_30t90d_prevalence_27_cnt', 'diagCCS_30t90d_prevalence_28_cnt', 'diagCCS_30t90d_prevalence_29_cnt', 'diagCCS_30t90d_prevalence_30_cnt' , 'diagCCS_90t180d_others_cnt', 'diagCCS_90t180d_prevalence_1_cnt', 'diagCCS_90t180d_prevalence_2_cnt', 'diagCCS_90t180d_prevalence_3_cnt', 'diagCCS_90t180d_prevalence_4_cnt', 'diagCCS_90t180d_prevalence_5_cnt', 'diagCCS_90t180d_prevalence_6_cnt', 'diagCCS_90t180d_prevalence_7_cnt', 'diagCCS_90t180d_prevalence_8_cnt', 'diagCCS_90t180d_prevalence_9_cnt', 'diagCCS_90t180d_prevalence_10_cnt', 'diagCCS_90t180d_prevalence_11_cnt', 'diagCCS_90t180d_prevalence_12_cnt', 'diagCCS_90t180d_prevalence_13_cnt', 'diagCCS_90t180d_prevalence_14_cnt', 'diagCCS_90t180d_prevalence_15_cnt', 'diagCCS_90t180d_prevalence_16_cnt', 'diagCCS_90t180d_prevalence_17_cnt', 'diagCCS_90t180d_prevalence_18_cnt', 'diagCCS_90t180d_prevalence_19_cnt', 'diagCCS_90t180d_prevalence_20_cnt', 'diagCCS_90t180d_prevalence_21_cnt', 'diagCCS_90t180d_prevalence_22_cnt', 'diagCCS_90t180d_prevalence_23_cnt', 'diagCCS_90t180d_prevalence_24_cnt', 'diagCCS_90t180d_prevalence_25_cnt', 'diagCCS_90t180d_prevalence_26_cnt', 'diagCCS_90t180d_prevalence_27_cnt', 'diagCCS_90t180d_prevalence_28_cnt', 'diagCCS_90t180d_prevalence_29_cnt', 'diagCCS_90t180d_prevalence_30_cnt' , 'diagCCS_180t365d_others_cnt', 'diagCCS_180t365d_prevalence_1_cnt', 'diagCCS_180t365d_prevalence_2_cnt', 'diagCCS_180t365d_prevalence_3_cnt', 'diagCCS_180t365d_prevalence_4_cnt', 
'diagCCS_180t365d_prevalence_5_cnt', 'diagCCS_180t365d_prevalence_6_cnt', 'diagCCS_180t365d_prevalence_7_cnt', 'diagCCS_180t365d_prevalence_8_cnt', 'diagCCS_180t365d_prevalence_9_cnt', 'diagCCS_180t365d_prevalence_10_cnt', 'diagCCS_180t365d_prevalence_11_cnt', 'diagCCS_180t365d_prevalence_12_cnt', 'diagCCS_180t365d_prevalence_13_cnt', 'diagCCS_180t365d_prevalence_14_cnt', 'diagCCS_180t365d_prevalence_15_cnt', 'diagCCS_180t365d_prevalence_16_cnt', 'diagCCS_180t365d_prevalence_17_cnt', 'diagCCS_180t365d_prevalence_18_cnt', 'diagCCS_180t365d_prevalence_19_cnt', 'diagCCS_180t365d_prevalence_20_cnt', 'diagCCS_180t365d_prevalence_21_cnt', 'diagCCS_180t365d_prevalence_22_cnt', 'diagCCS_180t365d_prevalence_23_cnt', 'diagCCS_180t365d_prevalence_24_cnt', 'diagCCS_180t365d_prevalence_25_cnt', 'diagCCS_180t365d_prevalence_26_cnt', 'diagCCS_180t365d_prevalence_27_cnt', 'diagCCS_180t365d_prevalence_28_cnt', 'diagCCS_180t365d_prevalence_29_cnt', 'diagCCS_180t365d_prevalence_30_cnt' , 'diagCCS_365t730d_others_cnt', 'diagCCS_365t730d_prevalence_1_cnt', 'diagCCS_365t730d_prevalence_2_cnt', 'diagCCS_365t730d_prevalence_3_cnt', 'diagCCS_365t730d_prevalence_4_cnt', 'diagCCS_365t730d_prevalence_5_cnt', 'diagCCS_365t730d_prevalence_6_cnt', 'diagCCS_365t730d_prevalence_7_cnt', 'diagCCS_365t730d_prevalence_8_cnt', 'diagCCS_365t730d_prevalence_9_cnt', 'diagCCS_365t730d_prevalence_10_cnt', 'diagCCS_365t730d_prevalence_11_cnt', 'diagCCS_365t730d_prevalence_12_cnt', 'diagCCS_365t730d_prevalence_13_cnt', 'diagCCS_365t730d_prevalence_14_cnt', 'diagCCS_365t730d_prevalence_15_cnt', 'diagCCS_365t730d_prevalence_16_cnt', 'diagCCS_365t730d_prevalence_17_cnt', 'diagCCS_365t730d_prevalence_18_cnt', 'diagCCS_365t730d_prevalence_19_cnt', 'diagCCS_365t730d_prevalence_20_cnt', 'diagCCS_365t730d_prevalence_21_cnt', 'diagCCS_365t730d_prevalence_22_cnt', 'diagCCS_365t730d_prevalence_23_cnt', 'diagCCS_365t730d_prevalence_24_cnt', 'diagCCS_365t730d_prevalence_25_cnt', 'diagCCS_365t730d_prevalence_26_cnt', 'diagCCS_365t730d_prevalence_27_cnt', 'diagCCS_365t730d_prevalence_28_cnt', 'diagCCS_365t730d_prevalence_29_cnt', 'diagCCS_365t730d_prevalence_30_cnt'] file_name = "report_population_prevalent_diagnoses_" + method_name + "_" + target_feature + "_" o_summaries = population_statistics(features_extra['train'], diagnoses) readers_writers.save_csv(path=CONSTANTS.io_path, title=file_name + "train", data=o_summaries, append=False, extension="csv", header=o_summaries.columns) o_summaries = population_statistics(features_extra['test'], diagnoses) readers_writers.save_csv(path=CONSTANTS.io_path, title=file_name + "test", data=o_summaries, append=False, extension="csv", header=o_summaries.columns) Explanation: 5.2.1. 
Most Prevalent Diagnoses Groups Most prevalent diagnoses groups (30-day, 1-year readmission): * Total, Admissions, Emergency Readmission Rate, Prior Spells, Male (%), Age (IQR), LoS (IQR), TP, FP, FN, TN End of explanation diagnoses = ['prior_admiOther', 'prior_admiAcute', 'prior_spells', 'prior_asthma', 'prior_copd', 'prior_depression', 'prior_diabetes', 'prior_hypertension', 'prior_cancer', 'prior_chd', 'prior_chf'] file_name = "report_population_comorbidity_diagnoses_" + method_name + "_" + target_feature + "_" o_summaries = population_statistics(features_extra['train'], diagnoses) readers_writers.save_csv(path=CONSTANTS.io_path, title=file_name + "train", data=o_summaries, append=False, extension="csv", header=o_summaries.columns) o_summaries = population_statistics(features_extra['test'], diagnoses) readers_writers.save_csv(path=CONSTANTS.io_path, title=file_name + "test", data=o_summaries, append=False, extension="csv", header=o_summaries.columns) Explanation: 5.2.2. Major Comorbidity Groups Comorbidity diagnoses groups (30-day, 1-year readmission): * Total, Admissions, Emergency Readmission Rate, Prior Spells, Male (%), Age (IQR), LoS (IQR), TP, FP, FN, TN End of explanation diagnoses = ['diagCci_01_myocardial_freq', 'diagCci_02_chf_freq', 'diagCci_03_pvd_freq', 'diagCci_04_cerebrovascular_freq', 'diagCci_05_dementia_freq', 'diagCci_06_cpd_freq', 'diagCci_07_rheumatic_freq', 'diagCci_08_ulcer_freq', 'diagCci_09_liverMild_freq', 'diagCci_10_diabetesNotChronic_freq', 'diagCci_11_diabetesChronic_freq', 'diagCci_12_hemiplegia_freq', 'diagCci_13_renal_freq', 'diagCci_14_malignancy_freq', 'diagCci_15_liverSevere_freq', 'diagCci_16_tumorSec_freq', 'diagCci_17_aids_freq', 'diagCci_18_depression_freq', 'diagCci_19_cardiac_freq', 'diagCci_20_valvular_freq', 'diagCci_21_pulmonary_freq', 'diagCci_22_vascular_freq', 'diagCci_23_hypertensionNotComplicated_freq', 'diagCci_24_hypertensionComplicated_freq', 'diagCci_25_paralysis_freq', 'diagCci_26_neuroOther_freq', 'diagCci_27_pulmonaryChronic_freq', 'diagCci_28_diabetesNotComplicated_freq', 'diagCci_29_diabetesComplicated_freq', 'diagCci_30_hypothyroidism_freq', 'diagCci_31_renal_freq', 'diagCci_32_liver_freq', 'diagCci_33_ulcerNotBleeding_freq', 'diagCci_34_psychoses_freq', 'diagCci_35_lymphoma_freq', 'diagCci_36_cancerSec_freq', 'diagCci_37_tumorNotSec_freq', 'diagCci_38_rheumatoid_freq', 'diagCci_39_coagulopathy_freq', 'diagCci_40_obesity_freq', 'diagCci_41_weightLoss_freq', 'diagCci_42_fluidDisorder_freq', 'diagCci_43_bloodLoss_freq', 'diagCci_44_anemia_freq', 'diagCci_45_alcohol_freq', 'diagCci_46_drug_freq'] file_name = "report_population_charlson_diagnoses_" + method_name + "_" + target_feature + "_" o_summaries = population_statistics(features_extra['train'], diagnoses) readers_writers.save_csv(path=CONSTANTS.io_path, title=file_name + "train", data=o_summaries, append=False, extension="csv", header=o_summaries.columns) o_summaries = population_statistics(features_extra['test'], diagnoses) readers_writers.save_csv(path=CONSTANTS.io_path, title=file_name + "test", data=o_summaries, append=False, extension="csv", header=o_summaries.columns) Explanation: 5.2.3. 
Charlson Comorbidity Groups Charlson diagnoses groups (30-day, 1-year readmission): * Total, Admissions, Emergency Readmission Rate, Prior Spells, Male (%), Age (IQR), LoS (IQR), TP, FP, FN, TN End of explanation diagnoses = ['operOPCSL1_0t30d_others_cnt', 'operOPCSL1_0t30d_prevalence_1_cnt', 'operOPCSL1_0t30d_prevalence_2_cnt', 'operOPCSL1_0t30d_prevalence_3_cnt', 'operOPCSL1_0t30d_prevalence_4_cnt', 'operOPCSL1_0t30d_prevalence_5_cnt', 'operOPCSL1_0t30d_prevalence_6_cnt', 'operOPCSL1_0t30d_prevalence_7_cnt', 'operOPCSL1_0t30d_prevalence_8_cnt', 'operOPCSL1_0t30d_prevalence_9_cnt', 'operOPCSL1_0t30d_prevalence_10_cnt', 'operOPCSL1_0t30d_prevalence_11_cnt', 'operOPCSL1_0t30d_prevalence_12_cnt', 'operOPCSL1_0t30d_prevalence_13_cnt', 'operOPCSL1_0t30d_prevalence_14_cnt', 'operOPCSL1_0t30d_prevalence_15_cnt', 'operOPCSL1_0t30d_prevalence_16_cnt', 'operOPCSL1_0t30d_prevalence_17_cnt', 'operOPCSL1_0t30d_prevalence_18_cnt', 'operOPCSL1_0t30d_prevalence_19_cnt', 'operOPCSL1_0t30d_prevalence_20_cnt', 'operOPCSL1_0t30d_prevalence_21_cnt', 'operOPCSL1_0t30d_prevalence_22_cnt', 'operOPCSL1_0t30d_prevalence_23_cnt', 'operOPCSL1_0t30d_prevalence_24_cnt', 'operOPCSL1_0t30d_prevalence_25_cnt', 'operOPCSL1_0t30d_prevalence_26_cnt', 'operOPCSL1_0t30d_prevalence_27_cnt', 'operOPCSL1_0t30d_prevalence_28_cnt', 'operOPCSL1_0t30d_prevalence_29_cnt', 'operOPCSL1_0t30d_prevalence_30_cnt' , 'operOPCSL1_30t90d_others_cnt', 'operOPCSL1_30t90d_prevalence_1_cnt', 'operOPCSL1_30t90d_prevalence_2_cnt', 'operOPCSL1_30t90d_prevalence_3_cnt', 'operOPCSL1_30t90d_prevalence_4_cnt', 'operOPCSL1_30t90d_prevalence_5_cnt', 'operOPCSL1_30t90d_prevalence_6_cnt', 'operOPCSL1_30t90d_prevalence_7_cnt', 'operOPCSL1_30t90d_prevalence_8_cnt', 'operOPCSL1_30t90d_prevalence_9_cnt', 'operOPCSL1_30t90d_prevalence_10_cnt', 'operOPCSL1_30t90d_prevalence_11_cnt', 'operOPCSL1_30t90d_prevalence_12_cnt', 'operOPCSL1_30t90d_prevalence_13_cnt', 'operOPCSL1_30t90d_prevalence_14_cnt', 'operOPCSL1_30t90d_prevalence_15_cnt', 'operOPCSL1_30t90d_prevalence_16_cnt', 'operOPCSL1_30t90d_prevalence_17_cnt', 'operOPCSL1_30t90d_prevalence_18_cnt', 'operOPCSL1_30t90d_prevalence_19_cnt', 'operOPCSL1_30t90d_prevalence_20_cnt', 'operOPCSL1_30t90d_prevalence_21_cnt', 'operOPCSL1_30t90d_prevalence_22_cnt', 'operOPCSL1_30t90d_prevalence_23_cnt', 'operOPCSL1_30t90d_prevalence_24_cnt', 'operOPCSL1_30t90d_prevalence_25_cnt', 'operOPCSL1_30t90d_prevalence_26_cnt', 'operOPCSL1_30t90d_prevalence_27_cnt', 'operOPCSL1_30t90d_prevalence_28_cnt', 'operOPCSL1_30t90d_prevalence_29_cnt', 'operOPCSL1_30t90d_prevalence_30_cnt' , 'operOPCSL1_90t180d_others_cnt', 'operOPCSL1_90t180d_prevalence_1_cnt', 'operOPCSL1_90t180d_prevalence_2_cnt', 'operOPCSL1_90t180d_prevalence_3_cnt', 'operOPCSL1_90t180d_prevalence_4_cnt', 'operOPCSL1_90t180d_prevalence_5_cnt', 'operOPCSL1_90t180d_prevalence_6_cnt', 'operOPCSL1_90t180d_prevalence_7_cnt', 'operOPCSL1_90t180d_prevalence_8_cnt', 'operOPCSL1_90t180d_prevalence_9_cnt', 'operOPCSL1_90t180d_prevalence_10_cnt', 'operOPCSL1_90t180d_prevalence_11_cnt', 'operOPCSL1_90t180d_prevalence_12_cnt', 'operOPCSL1_90t180d_prevalence_13_cnt', 'operOPCSL1_90t180d_prevalence_14_cnt', 'operOPCSL1_90t180d_prevalence_15_cnt', 'operOPCSL1_90t180d_prevalence_16_cnt', 'operOPCSL1_90t180d_prevalence_17_cnt', 'operOPCSL1_90t180d_prevalence_18_cnt', 'operOPCSL1_90t180d_prevalence_19_cnt', 'operOPCSL1_90t180d_prevalence_20_cnt', 'operOPCSL1_90t180d_prevalence_21_cnt', 'operOPCSL1_90t180d_prevalence_22_cnt', 'operOPCSL1_90t180d_prevalence_23_cnt', 
'operOPCSL1_90t180d_prevalence_24_cnt', 'operOPCSL1_90t180d_prevalence_25_cnt', 'operOPCSL1_90t180d_prevalence_26_cnt', 'operOPCSL1_90t180d_prevalence_27_cnt', 'operOPCSL1_90t180d_prevalence_28_cnt', 'operOPCSL1_90t180d_prevalence_29_cnt', 'operOPCSL1_90t180d_prevalence_30_cnt' , 'operOPCSL1_180t365d_others_cnt', 'operOPCSL1_180t365d_prevalence_1_cnt', 'operOPCSL1_180t365d_prevalence_2_cnt', 'operOPCSL1_180t365d_prevalence_3_cnt', 'operOPCSL1_180t365d_prevalence_4_cnt', 'operOPCSL1_180t365d_prevalence_5_cnt', 'operOPCSL1_180t365d_prevalence_6_cnt', 'operOPCSL1_180t365d_prevalence_7_cnt', 'operOPCSL1_180t365d_prevalence_8_cnt', 'operOPCSL1_180t365d_prevalence_9_cnt', 'operOPCSL1_180t365d_prevalence_10_cnt', 'operOPCSL1_180t365d_prevalence_11_cnt', 'operOPCSL1_180t365d_prevalence_12_cnt', 'operOPCSL1_180t365d_prevalence_13_cnt', 'operOPCSL1_180t365d_prevalence_14_cnt', 'operOPCSL1_180t365d_prevalence_15_cnt', 'operOPCSL1_180t365d_prevalence_16_cnt', 'operOPCSL1_180t365d_prevalence_17_cnt', 'operOPCSL1_180t365d_prevalence_18_cnt', 'operOPCSL1_180t365d_prevalence_19_cnt', 'operOPCSL1_180t365d_prevalence_20_cnt', 'operOPCSL1_180t365d_prevalence_21_cnt', 'operOPCSL1_180t365d_prevalence_22_cnt', 'operOPCSL1_180t365d_prevalence_23_cnt', 'operOPCSL1_180t365d_prevalence_24_cnt', 'operOPCSL1_180t365d_prevalence_25_cnt', 'operOPCSL1_180t365d_prevalence_26_cnt', 'operOPCSL1_180t365d_prevalence_27_cnt', 'operOPCSL1_180t365d_prevalence_28_cnt', 'operOPCSL1_180t365d_prevalence_29_cnt', 'operOPCSL1_180t365d_prevalence_30_cnt' , 'operOPCSL1_365t730d_others_cnt', 'operOPCSL1_365t730d_prevalence_1_cnt', 'operOPCSL1_365t730d_prevalence_2_cnt', 'operOPCSL1_365t730d_prevalence_3_cnt', 'operOPCSL1_365t730d_prevalence_4_cnt', 'operOPCSL1_365t730d_prevalence_5_cnt', 'operOPCSL1_365t730d_prevalence_6_cnt', 'operOPCSL1_365t730d_prevalence_7_cnt', 'operOPCSL1_365t730d_prevalence_8_cnt', 'operOPCSL1_365t730d_prevalence_9_cnt', 'operOPCSL1_365t730d_prevalence_10_cnt', 'operOPCSL1_365t730d_prevalence_11_cnt', 'operOPCSL1_365t730d_prevalence_12_cnt', 'operOPCSL1_365t730d_prevalence_13_cnt', 'operOPCSL1_365t730d_prevalence_14_cnt', 'operOPCSL1_365t730d_prevalence_15_cnt', 'operOPCSL1_365t730d_prevalence_16_cnt', 'operOPCSL1_365t730d_prevalence_17_cnt', 'operOPCSL1_365t730d_prevalence_18_cnt', 'operOPCSL1_365t730d_prevalence_19_cnt', 'operOPCSL1_365t730d_prevalence_20_cnt', 'operOPCSL1_365t730d_prevalence_21_cnt', 'operOPCSL1_365t730d_prevalence_22_cnt', 'operOPCSL1_365t730d_prevalence_23_cnt', 'operOPCSL1_365t730d_prevalence_24_cnt', 'operOPCSL1_365t730d_prevalence_25_cnt', 'operOPCSL1_365t730d_prevalence_26_cnt', 'operOPCSL1_365t730d_prevalence_27_cnt', 'operOPCSL1_365t730d_prevalence_28_cnt', 'operOPCSL1_365t730d_prevalence_29_cnt', 'operOPCSL1_365t730d_prevalence_30_cnt'] file_name = "report_population_operations_" + method_name + "_" + target_feature + "_" o_summaries = population_statistics(features_extra['train'], diagnoses) readers_writers.save_csv(path=CONSTANTS.io_path, title=file_name + "train", data=o_summaries, append=False, extension="csv", header=o_summaries.columns) o_summaries = population_statistics(features_extra['test'], diagnoses) readers_writers.save_csv(path=CONSTANTS.io_path, title=file_name + "test", data=o_summaries, append=False, extension="csv", header=o_summaries.columns) Explanation: 5.2.4. 
Most Prevalent Operations Most prevalent operations variables (30-day, 1-year readmission): * Total, Admissions, Emergency Readmission Rate, Prior Spells, Male (%), Age (IQR), LoS (IQR), TP, FP, FN, TN End of explanation diagnoses = ['mainspef_0t30d_others_cnt', 'mainspef_0t30d_prevalence_1_cnt', 'mainspef_0t30d_prevalence_2_cnt', 'mainspef_0t30d_prevalence_3_cnt', 'mainspef_0t30d_prevalence_4_cnt', 'mainspef_0t30d_prevalence_5_cnt', 'mainspef_0t30d_prevalence_6_cnt', 'mainspef_0t30d_prevalence_7_cnt', 'mainspef_0t30d_prevalence_8_cnt', 'mainspef_0t30d_prevalence_9_cnt', 'mainspef_0t30d_prevalence_10_cnt' , 'mainspef_30t90d_others_cnt', 'mainspef_30t90d_prevalence_1_cnt', 'mainspef_30t90d_prevalence_2_cnt', 'mainspef_30t90d_prevalence_3_cnt', 'mainspef_30t90d_prevalence_4_cnt', 'mainspef_30t90d_prevalence_5_cnt', 'mainspef_30t90d_prevalence_6_cnt', 'mainspef_30t90d_prevalence_7_cnt', 'mainspef_30t90d_prevalence_8_cnt', 'mainspef_30t90d_prevalence_9_cnt', 'mainspef_30t90d_prevalence_10_cnt' , 'mainspef_90t180d_others_cnt', 'mainspef_90t180d_prevalence_1_cnt', 'mainspef_90t180d_prevalence_2_cnt', 'mainspef_90t180d_prevalence_3_cnt', 'mainspef_90t180d_prevalence_4_cnt', 'mainspef_90t180d_prevalence_5_cnt', 'mainspef_90t180d_prevalence_6_cnt', 'mainspef_90t180d_prevalence_7_cnt', 'mainspef_90t180d_prevalence_8_cnt', 'mainspef_90t180d_prevalence_9_cnt', 'mainspef_90t180d_prevalence_10_cnt' , 'mainspef_180t365d_others_cnt', 'mainspef_180t365d_prevalence_1_cnt', 'mainspef_180t365d_prevalence_2_cnt', 'mainspef_180t365d_prevalence_3_cnt', 'mainspef_180t365d_prevalence_4_cnt', 'mainspef_180t365d_prevalence_5_cnt', 'mainspef_180t365d_prevalence_6_cnt', 'mainspef_180t365d_prevalence_7_cnt', 'mainspef_180t365d_prevalence_8_cnt', 'mainspef_180t365d_prevalence_9_cnt', 'mainspef_180t365d_prevalence_10_cnt' , 'mainspef_365t730d_others_cnt', 'mainspef_365t730d_prevalence_1_cnt', 'mainspef_365t730d_prevalence_2_cnt', 'mainspef_365t730d_prevalence_3_cnt', 'mainspef_365t730d_prevalence_4_cnt', 'mainspef_365t730d_prevalence_5_cnt', 'mainspef_365t730d_prevalence_6_cnt', 'mainspef_365t730d_prevalence_7_cnt', 'mainspef_365t730d_prevalence_8_cnt', 'mainspef_365t730d_prevalence_9_cnt', 'mainspef_365t730d_prevalence_10_cnt'] file_name = "report_population_operations_" + method_name + "_" + target_feature + "_" o_summaries = population_statistics(features_extra['train'], diagnoses) readers_writers.save_csv(path=CONSTANTS.io_path, title=file_name + "train", data=o_summaries, append=False, extension="csv", header=o_summaries.columns) o_summaries = population_statistics(features_extra['test'], diagnoses) readers_writers.save_csv(path=CONSTANTS.io_path, title=file_name + "test", data=o_summaries, append=False, extension="csv", header=o_summaries.columns) Explanation: 5.2.4.
Most Prevalent Main Speciality Most prevalent operations variables (30-day, 1-year readmission): * Total, Admissions, Emergency Readmission Rate, Prior Spells, Male (%), Age (IQR), LoS (IQR), TP, FP, FN, TN End of explanation diagnoses = ['gapDays_0t30d_avg', 'gapDays_30t90d_avg', 'gapDays_90t180d_avg', 'gapDays_180t365d_avg', 'gapDays_365t730d_avg', 'epidur_0t30d_avg', 'epidur_30t90d_avg', 'epidur_90t180d_avg', 'epidur_180t365d_avg', 'epidur_365t730d_avg', 'preopdur_0t30d_avg', 'preopdur_30t90d_avg', 'preopdur_90t180d_avg', 'preopdur_180t365d_avg', 'preopdur_365t730d_avg', 'posopdur_0t30d_avg', 'posopdur_30t90d_avg', 'posopdur_90t180d_avg', 'posopdur_180t365d_avg', 'posopdur_365t730d_avg'] file_name = "report_population_other_variables_" + method_name + "_" + target_feature + "_" o_summaries = population_statistics(features_extra['train'], diagnoses) readers_writers.save_csv(path=CONSTANTS.io_path, title=file_name + "train", data=o_summaries, append=False, extension="csv", header=o_summaries.columns) o_summaries = population_statistics(features_extra['test'], diagnoses) readers_writers.save_csv(path=CONSTANTS.io_path, title=file_name + "test", data=o_summaries, append=False, extension="csv", header=o_summaries.columns) Explanation: 5.2.5. Other Variables Other variables (30-day, 1-year readmission): * Total, Admissions, Emergency Readmission Rate, Prior Spells, Male (%), Age (IQR), LoS (IQR), TP, FP, FN, TN End of explanation file_name = "report_population_" + method_name + "_" + target_feature + "_" Explanation: <br/><br/> 5.3. Plots End of explanation fig, summaries = plots.roc(training_method.model_predict["test"], features_extra["test"][target_feature], title="ROC Curve", lw=2) display(fig) # save plt.savefig(os.path.join(CONSTANTS.io_path, file_name + "_roc" + ".pdf"), dpi=300, facecolor='w', edgecolor='w', orientation='portrait', papertype=None, format="pdf", transparent=False, bbox_inches=None, pad_inches=0.1, frameon=None) Explanation: 5.3.1. ROC End of explanation fig, summaries = plots.precision_recall(training_method.model_predict["test"], features_extra["test"][target_feature], title="Precision-Recall Curve", lw=2) display(fig) # save plt.savefig(os.path.join(CONSTANTS.io_path, file_name + "_precision_recall" + ".pdf"), dpi=300, facecolor='w', edgecolor='w', orientation='portrait', papertype=None, format="pdf", transparent=False, bbox_inches=None, pad_inches=0.1, frameon=None) Explanation: 5.3.2. Precision Recall End of explanation fig, summaries = plots.learning_curve(training_method.model_train, features_extra["test"][features_names_selected], features_extra["test"][target_feature], title="Learning Curve", ylim=None, cv=None, n_jobs=-1, train_sizes=np.linspace(.1, 1.0, 5)) display(fig) # save plt.savefig(os.path.join(CONSTANTS.io_path, file_name + "_learning_curve" + ".pdf"), dpi=300, facecolor='w', edgecolor='w', orientation='portrait', papertype=None, format="pdf", transparent=False, bbox_inches=None, pad_inches=0.1, frameon=None) Explanation: 5.3.3. 
Learning Curve End of explanation # method metadata if method_name == "lr": param_name = "clf__C" param_range = [0.001, 0.01, 0.1, 1.0, 10.0, 100.0] elif method_name == "rfc": param_name = "max_features" param_range = range(1, 4, 1) # range(1, 20, 1) elif method_name == "nn": param_name = "alpha" param_range = range(1e4, 1e6, 9e4) fig, summaries = plots.validation_curve(training_method.model_train, features_extra["test"][features_names_selected], features_extra["test"][target_feature], param_name, param_range, title="Learning Curve", ylim=None, cv=None, lw=2, n_jobs=-1) display(fig) # save plt.savefig(os.path.join(CONSTANTS.io_path, file_name + "_validation_curve" + ".pdf"), dpi=300, facecolor='w', edgecolor='w', orientation='portrait', papertype=None, format="pdf", transparent=False, bbox_inches=None, pad_inches=0.1, frameon=None) Explanation: 5.3.4. Validation Curve Set the model's metadata End of explanation
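As a cross-check of the validation-curve wrapper above, the same raw scores can be computed directly with scikit-learn. The snippet below is only a hedged sketch: it assumes training_method.model_train is a scikit-learn estimator (a pipeline when param_name is "clf__C") and reuses the param_name and param_range values set in the cell above; the project's own plots.validation_curve helper may differ in signature and scoring.
# Illustrative sketch only -- not part of the original pipeline.
import numpy as np
from sklearn.model_selection import validation_curve

train_scores, test_scores = validation_curve(
    training_method.model_train,
    features_extra["test"][features_names_selected],
    features_extra["test"][target_feature],
    param_name=param_name,
    param_range=param_range,
    cv=5,
    scoring="roc_auc",
    n_jobs=-1)

# One mean score per candidate parameter value.
print(np.mean(train_scores, axis=1))
print(np.mean(test_scores, axis=1))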
1,003
Given the following text description, write Python code to implement the functionality described below step by step Description: Image features exercise Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website. We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels. All of your work for this exercise will be done in this notebook. Step1: Load data Similar to previous exercises, we will load CIFAR-10 data from disk. Step2: Extract Features For each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for the bonus section. The hog_feature and color_histogram_hsv functions both operate on a single image and return a feature vector for that image. The extract_features function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each column is the concatenation of all feature vectors for a single image. Step3: Train SVM on features Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels. Step4: Inline question 1
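Before the solution code, here is a minimal, hypothetical sketch of the feature-extraction idea described above. It is not the cs231n implementation (hog_feature, color_histogram_hsv and extract_features live in cs231n/features.py), and for simplicity it stacks one feature row per image rather than one column as the course code does.
import numpy as np

def toy_extract_features(images, feature_fns):
    # Apply every feature function to every image and concatenate the results,
    # producing one feature row per image.
    rows = []
    for img in images:
        parts = [np.asarray(fn(img)).ravel() for fn in feature_fns]
        rows.append(np.concatenate(parts))
    return np.vstack(rows)

# Two toy "feature functions": mean intensity and a 4-bin histogram.
toy_fns = [lambda im: [im.mean()], lambda im: np.histogram(im, bins=4)[0]]
print(toy_extract_features(np.random.rand(5, 32, 32, 3), toy_fns).shape)  # (5, 5)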
Python Code: import random import numpy as np from cs231n.data_utils import load_CIFAR10 import matplotlib.pyplot as plt %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # for auto-reloading extenrnal modules # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 Explanation: Image features exercise Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website. We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels. All of your work for this exercise will be done in this notebook. End of explanation from cs231n.features import color_histogram_hsv, hog_feature def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000): # Load the raw CIFAR-10 data cifar10_dir = 'cs231n/datasets/cifar-10-batches-py' X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir) # Subsample the data mask = range(num_training, num_training + num_validation) X_val = X_train[mask] y_val = y_train[mask] mask = range(num_training) X_train = X_train[mask] y_train = y_train[mask] mask = range(num_test) X_test = X_test[mask] y_test = y_test[mask] return X_train, y_train, X_val, y_val, X_test, y_test X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data() Explanation: Load data Similar to previous exercises, we will load CIFAR-10 data from disk. End of explanation from cs231n.features import * num_color_bins = 10 # Number of bins in the color histogram feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)] X_train_feats = extract_features(X_train, feature_fns, verbose=True) X_val_feats = extract_features(X_val, feature_fns) X_test_feats = extract_features(X_test, feature_fns) # Preprocessing: Subtract the mean feature mean_feat = np.mean(X_train_feats, axis=0, keepdims=True) X_train_feats -= mean_feat X_val_feats -= mean_feat X_test_feats -= mean_feat # Preprocessing: Divide by standard deviation. This ensures that each feature # has roughly the same scale. std_feat = np.std(X_train_feats, axis=0, keepdims=True) X_train_feats /= std_feat X_val_feats /= std_feat X_test_feats /= std_feat # Preprocessing: Add a bias dimension X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))]) X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))]) X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))]) Explanation: Extract Features For each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. 
Verifying this assumption would be a good thing to try for the bonus section. The hog_feature and color_histogram_hsv functions both operate on a single image and return a feature vector for that image. The extract_features function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each column is the concatenation of all feature vectors for a single image. End of explanation # Use the validation set to tune the learning rate and regularization strength from cs231n.classifiers.linear_classifier import LinearSVM learning_rates = [1e-10, 5e-9, 1e-9, 5e-8, 1e-8, 5e-7, 1e-7, 5e-6] regularization_strengths = [1e6, 5e6, 1e7, 5e7, 1e8, 5e8, 1e9, 5e9, 1e10] results = {} best_val = -1 best_svm = None ################################################################################ # TODO: # # Use the validation set to set the learning rate and regularization strength. # # This should be identical to the validation that you did for the SVM; save # # the best trained classifier in best_svm. You might also want to play # # with different numbers of bins in the color histogram. If you are careful # # you should be able to get accuracy of near 0.44 on the validation set. # ################################################################################ for lr in learning_rates: for rs in regularization_strengths: svm = LinearSVM() svm.train(X_train_feats, y_train, learning_rate=lr, reg=rs, num_iters=1500, verbose=False) y_train_pred = svm.predict(X_train_feats) pred_train = np.mean(y_train == y_train_pred) y_val_pred = svm.predict(X_val_feats) pred_val = np.mean(y_val == y_val_pred) results[(lr, rs)] = (pred_train, pred_val) if pred_val > best_val: best_val = pred_val best_svm = svm print 'done' ################################################################################ # END OF YOUR CODE # ################################################################################ # Print out results. for lr, reg in sorted(results): train_accuracy, val_accuracy = results[(lr, reg)] print 'lr %e reg %e train accuracy: %f val accuracy: %f' % ( lr, reg, train_accuracy, val_accuracy) print 'best validation accuracy achieved during cross-validation: %f' % best_val # Evaluate your trained SVM on the test set y_test_pred = best_svm.predict(X_test_feats) test_accuracy = np.mean(y_test == y_test_pred) print test_accuracy # An important way to gain intuition about how an algorithm works is to # visualize the mistakes that it makes. In this visualization, we show examples # of images that are misclassified by our current system. The first column # shows images that our system labeled as "plane" but whose true label is # something other than "plane". examples_per_class = 8 classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'] for cls, cls_name in enumerate(classes): idxs = np.where((y_test != cls) & (y_test_pred == cls))[0] idxs = np.random.choice(idxs, examples_per_class, replace=False) for i, idx in enumerate(idxs): plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1) plt.imshow(X_test[idx].astype('uint8')) plt.axis('off') if i == 0: plt.title(cls_name) plt.show() Explanation: Train SVM on features Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
End of explanation print X_train_feats.shape from cs231n.classifiers.neural_net import TwoLayerNet input_dim = X_train_feats.shape[1] hidden_dim = 500 num_classes = 10 best_acc = -1 x = 10 tmp_X_train_feats = X_train_feats[0:10, :] tmp_y_train = y_train[0:10] #tmp_X_val_feats = X_val_feats[0:x, :] #tmp_y_val = y_val[0:x, :] learning_rates = [2e-1, 3e-1, 4e-1] regularization_strengths = [1e-7, 1e-6, 1e-5, 1e-4] for lr in learning_rates: for rs in regularization_strengths: net = TwoLayerNet(input_dim, hidden_dim, num_classes) # Train the network stats = net.train(X_train_feats, y_train, X_val_feats, y_val, num_iters=1000, batch_size=200, learning_rate=lr, learning_rate_decay=0.95, reg=rs, verbose=False) # Predict on the validation set val_acc = (net.predict(X_val_feats) == y_val).mean() if (val_acc > best_acc): best_net = net best_acc = val_acc print 'lr %f, res %f, Validation accuracy:%f ' % (lr, rs, val_acc) print 'done' ################################################################################ # TODO: Train a two-layer neural network on image features. You may want to # # cross-validate various parameters as in previous sections. Store your best # # model in the best_net variable. # ################################################################################ ################################################################################ # END OF YOUR CODE # ################################################################################ # Run your neural net classifier on the test set. You should be able to # get more than 55% accuracy. test_acc = (best_net.predict(X_test_feats) == y_test).mean() print test_acc Explanation: Inline question 1: Describe the misclassification results that you see. Do they make sense? Neural Network on image features Earlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy. End of explanation
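If the small grid above stalls below the 55% target, a common refinement is to sample hyperparameters log-uniformly instead of from a fixed grid. The sketch below is illustrative only; it reuses the TwoLayerNet training call exactly as in the cell above, but the sampling ranges are assumptions, not tuned values.
# Hedged sketch: random search over learning rate and regularization strength.
for _ in range(20):
    lr = 10 ** np.random.uniform(-1.0, 0.5)    # assumed range bracketing the grid above
    reg = 10 ** np.random.uniform(-7.0, -3.0)  # assumed range bracketing the grid above
    net = TwoLayerNet(input_dim, hidden_dim, num_classes)
    net.train(X_train_feats, y_train, X_val_feats, y_val,
              num_iters=1000, batch_size=200, learning_rate=lr,
              learning_rate_decay=0.95, reg=reg, verbose=False)
    val_acc = (net.predict(X_val_feats) == y_val).mean()
    if val_acc > best_acc:
        best_acc, best_net = val_acc, net
print 'best validation accuracy: %f' % best_acc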
1,004
Given the following text description, write Python code to implement the functionality described below step by step Description: LEARNING This notebook serves as supporting material for topics covered in Chapter 18 - Learning from Examples , Chapter 19 - Knowledge in Learning, Chapter 20 - Learning Probabilistic Models from the book Artificial Intelligence Step1: CONTENTS Machine Learning Overview Datasets Iris Visualization Distance Functions Plurality Learner k-Nearest Neighbours Decision Tree Learner Naive Bayes Learner Perceptron Learner Evaluation MACHINE LEARNING OVERVIEW In this notebook, we learn about agents that can improve their behavior through diligent study of their own experiences. An agent is learning if it improves its performance on future tasks after making observations about the world. There are three types of feedback that determine the three main types of learning Step2: Class Attributes examples Step3: To check that we imported the correct dataset, we can do the following Step4: Which correctly prints the first line in the csv file and the list of attribute indexes. When importing a dataset, we can specify to exclude an attribute (for example, at index 1) by setting the parameter exclude to the attribute index or name. Step5: Attributes Here we showcase the attributes. First we will print the first three items/examples in the dataset. Step6: Then we will print attrs, attrnames, target, input. Notice how attrs holds values in [0,4], but since the fourth attribute is the target, inputs holds values in [0,3]. Step7: Now we will print all the possible values for the first feature/attribute. Step8: Finally we will print the dataset's name and source. Keep in mind that we have not set a source for the dataset, so in this case it is empty. Step9: A useful combination of the above is dataset.values[dataset.target] which returns the possible values of the target. For classification problems, this will return all the possible classes. Let's try it Step10: Helper Functions We will now take a look at the auxiliary functions found in the class. First we will take a look at the sanitize function, which sets the non-input values of the given example to None. In this case we want to hide the class of the first example, so we will sanitize it. Note that the function doesn't actually change the given example; it returns a sanitized copy of it. Step11: Currently the iris dataset has three classes, setosa, virginica and versicolor. We want though to convert it to a binary class dataset (a dataset with two classes). The class we want to remove is "virginica". To accomplish that we will utilize the helper function remove_examples. Step12: We also have classes_to_numbers. For a lot of the classifiers in the module (like the Neural Network), classes should have numerical values. With this function we map string class names to numbers. Step13: As you can see "setosa" was mapped to 0. Finally, we take a look at find_means_and_deviations. It finds the means and standard deviations of the features for each class. Step14: IRIS VISUALIZATION Since we will use the iris dataset extensively in this notebook, below we provide a visualization tool that helps in comprehending the dataset and thus how the algorithms work. We plot the dataset in a 3D space using matplotlib and the function show_iris from notebook.py. The function takes as input three parameters, i, j and k, which are indicises to the iris features, "Sepal Length", "Sepal Width", "Petal Length" and "Petal Width" (0 to 3). 
By default we show the first three features. Step15: You can play around with the values to get a good look at the dataset. DISTANCE FUNCTIONS In a lot of algorithms (like the k-Nearest Neighbors algorithm), there is a need to compare items, finding how similar or close they are. For that we have many different functions at our disposal. Below are the functions implemented in the module Step16: Euclidean Distance (euclidean_distance) Probably the most popular distance function. It returns the square root of the sum of the squared differences between individual elements of two items. Step17: Hamming Distance (hamming_distance) This function counts the number of differences between single elements in two items. For example, if we have two binary strings "111" and "011" the function will return 1, since the two strings only differ at the first element. The function works the same way for non-binary strings too. Step18: Mean Boolean Error (mean_boolean_error) To calculate this distance, we find the ratio of different elements over all elements of two items. For example, if the two items are (1,2,3) and (1,4,5), the ration of different/all elements is 2/3, since they differ in two out of three elements. Step19: Mean Error (mean_error) This function finds the mean difference of single elements between two items. For example, if the two items are (1,0,5) and (3,10,5), their error distance is (3-1) + (10-0) + (5-5) = 2 + 10 + 0 = 12. The mean error distance therefore is 12/3=4. Step20: Mean Square Error (ms_error) This is very similar to the Mean Error, but instead of calculating the difference between elements, we are calculating the square of the differences. Step21: Root of Mean Square Error (rms_error) This is the square root of Mean Square Error. Step22: PLURALITY LEARNER CLASSIFIER Overview The Plurality Learner is a simple algorithm, used mainly as a baseline comparison for other algorithms. It finds the most popular class in the dataset and classifies any subsequent item to that class. Essentially, it classifies every new item to the same class. For that reason, it is not used very often, instead opting for more complicated algorithms when we want accurate classification. Let's see how the classifier works with the plot above. There are three classes named Class A (orange-colored dots) and Class B (blue-colored dots) and Class C (green-colored dots). Every point in this plot has two features (i.e. X<sub>1</sub>, X<sub>2</sub>). Now, let's say we have a new point, a red star and we want to know which class this red star belongs to. Solving this problem by predicting the class of this new red star is our current classification problem. The Plurality Learner will find the class most represented in the plot. Class A has four items, Class B has three and Class C has seven. The most popular class is Class C. Therefore, the item will get classified in Class C, despite the fact that it is closer to the other two classes. Implementation Below follows the implementation of the PluralityLearner algorithm Step23: It takes as input a dataset and returns a function. We can later call this function with the item we want to classify as the argument and it returns the class it should be classified in. The function first finds the most popular class in the dataset and then each time we call its "predict" function, it returns it. Note that the input ("example") does not matter. The function always returns the same class. 
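To make that behaviour concrete before the worked example, here is a tiny standalone sketch of a plurality-style predictor. It is an illustration only, not the aima-python PluralityLearner shown in the Step23 cell, and the animal labels are made up.
from collections import Counter

def plurality_predictor(labels):
    # Find the most common class once ...
    most_common = Counter(labels).most_common(1)[0][0]
    # ... and return a predictor that ignores its input entirely.
    def predict(example):
        return most_common
    return predict

predict = plurality_predictor(["mammal", "bird", "mammal", "fish", "mammal"])
print(predict([1, 0, 0, 1]))  # always "mammal", whatever the example is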
Example For this example, we will not use the Iris dataset, since each class is represented the same. This will throw an error. Instead we will use the zoo dataset. Step24: The output for the above code is "mammal", since that is the most popular and common class in the dataset. K-NEAREST NEIGHBOURS CLASSIFIER Overview The k-Nearest Neighbors algorithm is a non-parametric method used for classification and regression. We are going to use this to classify Iris flowers. More about kNN on Scholarpedia. Let's see how kNN works with a simple plot shown in the above picture. We have co-ordinates (we call them features in Machine Learning) of this red star and we need to predict its class using the kNN algorithm. In this algorithm, the value of k is arbitrary. k is one of the hyper parameters for kNN algorithm. We choose this number based on our dataset and choosing a particular number is known as hyper parameter tuning/optimising. We learn more about this in coming topics. Let's put k = 3. It means you need to find 3-Nearest Neighbors of this red star and classify this new point into the majority class. Observe that smaller circle which contains three points other than test point (red star). As there are two violet points, which form the majority, we predict the class of red star as violet- Class B. Similarly if we put k = 5, you can observe that there are three yellow points, which form the majority. So, we classify our test point as yellow- Class A. In practical tasks, we iterate through a bunch of values for k (like [1, 3, 5, 10, 20, 50, 100]), see how it performs and select the best one. Implementation Below follows the implementation of the kNN algorithm Step25: It takes as input a dataset and k (default value is 1) and it returns a function, which we can later use to classify a new item. To accomplish that, the function uses a heap-queue, where the items of the dataset are sorted according to their distance from example (the item to classify). We then take the k smallest elements from the heap-queue and we find the majority class. We classify the item to this class. Example We measured a new flower with the following values Step26: The output of the above code is "setosa", which means the flower with the above measurements is of the "setosa" species. DECISION TREE LEARNER Overview Decision Trees A decision tree is a flowchart that uses a tree of decisions and their possible consequences for classification. At each non-leaf node of the tree an attribute of the input is tested, based on which corresponding branch leading to a child-node is selected. At the leaf node the input is classified based on the class label of this leaf node. The paths from root to leaves represent classification rules based on which leaf nodes are assigned class labels. Decision Tree Learning Decision tree learning is the construction of a decision tree from class-labeled training data. The data is expected to be a tuple in which each record of the tuple is an attribute used for classification. The decision tree is built top-down, by choosing a variable at each step that best splits the set of items. There are different metrics for measuring the "best split". These generally measure the homogeneity of the target variable within the subsets. Gini Impurity Gini impurity of a set is the probability of a randomly chosen element to be incorrectly labeled if it was randomly labeled according to the distribution of labels in the set. 
$$I_G(p) = \sum_i{p_i(1 - p_i)} = 1 - \sum_i{p_i^2}$$ We select the split which minimizes the Gini impurity in the child nodes. Information Gain Information gain is based on the concept of entropy from information theory. Entropy is defined as $$H(p) = -\sum_i{p_i \log_2{p_i}}$$ and we prefer the split that maximizes the information gain (equivalently, the one that minimizes the entropy of the child nodes). Step27: Implementation The nodes of the tree constructed by our learning algorithm are stored using either DecisionFork or DecisionLeaf based on whether they are a parent node or a leaf node respectively. Step28: DecisionFork holds the attribute, which is tested at that node, and a dict of branches. The branches store the child nodes, one for each of the attribute's values. Calling an object of this class as a function with an input tuple as an argument returns the next node in the classification path based on the result of the attribute test. Step29: The leaf node stores the class label in result. All input tuples' classification paths end on a DecisionLeaf whose result attribute decides their class. Step30: The implementation of DecisionTreeLearner provided in learning.py uses information gain as the metric for selecting which attribute to test for splitting. The function builds the tree top-down in a recursive manner. Based on the input it makes one of the four choices Step31: First we found the different values for the classes (called targets here) and calculated their distribution. Next we initialized a dictionary of CountingProbDist objects, one for each class and feature. Finally, we iterated through the examples in the dataset and calculated the needed probabilities. Having calculated the different probabilities, we will move on to the predicting function. It will receive as input an item and output the most likely class. Using the above formula, it will multiply the probability of the class appearing with the probability of each feature value appearing in the class. It will return the max result. Step32: You can view the complete code by executing the next line Step33: Continuous In the implementation we use the Gaussian/Normal distribution function. To make it work, we need to find the means and standard deviations of features for each class. We make use of the find_means_and_deviations Dataset function. On top of that, we will also calculate the class probabilities as we did with the Discrete approach. Step34: You can see the means of the features for the "Setosa" class and the deviations for "Versicolor". The prediction function will work similarly to the Discrete algorithm. It will multiply the probability of the class occurring with the conditional probabilities of the feature values for the class. Since we are using the Gaussian distribution, we will input the value for each feature into the Gaussian function, together with the mean and deviation of the feature. This will return the probability of the particular feature value for the given class. We will repeat for each class and pick the max value. Step35: The complete code of the continuous algorithm Step36: Simple The simple classifier (chosen with the argument simple) does not learn from a dataset, instead it takes as input a dictionary of already calculated CountingProbDist objects and returns a predictor function. The dictionary is in the following form Step37: This classifier is useful when you already have calculated the distributions and you need to predict future items. Examples We will now use the Naive Bayes Classifier (Discrete and Continuous) to classify items Step38: Notice how the Discrete Classifier misclassified the second item, while the Continuous one had no problem.
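As a quick illustration of the continuous scoring rule described above, the sketch below multiplies a class prior by one Gaussian density per feature. It is a standalone toy, not the learning.py implementation, and the prior, means and deviations are made-up numbers.
import math

def gaussian(mean, st_dev, x):
    # Normal probability density function evaluated at x.
    return math.exp(-((x - mean) ** 2) / (2 * st_dev ** 2)) / (math.sqrt(2 * math.pi) * st_dev)

def continuous_nb_score(item, class_prior, means, deviations):
    # prior * product of per-feature Gaussian densities for a single class.
    score = class_prior
    for x, mu, sigma in zip(item, means, deviations):
        score *= gaussian(mu, sigma, x)
    return score

# Toy numbers only: compute this score for every class and pick the largest.
print(continuous_nb_score([5.0, 3.1, 1.4, 0.2], 1 / 3, [5.0, 3.4, 1.5, 0.2], [0.35, 0.38, 0.17, 0.10]))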
Let's now take a look at the simple classifier. First we will come up with a sample problem to solve. Say we are given three bags. Each bag contains three letters ('a', 'b' and 'c') of different quantities. We are given a string of letters and we are tasked with finding from which bag the string of letters came. Since we know the probability distribution of the letters for each bag, we can use the Naive Bayes classifier to make our prediction. Step39: Now that we have the CountingProbDist objects for each bag/class, we will create the dictionary. We assume that it is equally probable that we will pick from any bag. Step40: Now we can start making predictions Step41: The results make intuitive sense. The first bag has a high amount of 'a's, the second has a high amount of 'b's and the third has a high amount of 'c's. The classifier seems to confirm this intuition. Note that the simple classifier doesn't distinguish between discrete and continuous values. It just takes whatever it is given. Also, the simple option on the NaiveBayesLearner overrides the continuous argument. NaiveBayesLearner(d, simple=True, continuous=False) just creates a simple classifier. PERCEPTRON CLASSIFIER Overview The Perceptron is a linear classifier. It works the same way as a neural network with no hidden layers (just input and output). First it trains its weights given a dataset and then it can classify a new item by running it through the network. Its input layer consists of the item features, while the output layer consists of nodes (also called neurons). Each node in the output layer has n synapses (one for every item feature), each with its own weight. Then, the nodes find the dot product of the item features and the synapse weights. These values then pass through an activation function (usually a sigmoid). Finally, we pick the largest of the values and we return its index. Note that in classification problems each node represents a class. The final classification is the class/node with the max output value. Below you can see a single node/neuron in the outer layer. With f we denote the item features, with w the synapse weights, then inside the node we have the dot product and the activation function, g. Implementation First, we train (calculate) the weights given a dataset, using the BackPropagationLearner function of learning.py. We then return a function, predict, which we will use in the future to classify a new item. The function computes the (algebraic) dot product of the item with the calculated weights for each node in the outer layer. Then it picks the greatest value and classifies the item in the corresponding class. Step42: Note that the Perceptron is a one-layer neural network, without any hidden layers. So, in BackPropagationLearner, we will pass no hidden layers. From that function we get our network, which is just one layer, with the weights calculated. That function predict passes the input/example through the network, calculating the dot product of the input and the weights for each node, and returns the class with the max dot product. Example We will train the Perceptron on the iris dataset. However, because the BackPropagationLearner works with integer indexes and not strings, we need to convert class names to integers. Then, we will try and classify the item/flower with measurements of 5, 3, 1, 0.1. Step43: The correct output is 0, which means the item belongs in the first class, "setosa". Note that the Perceptron algorithm is not perfect and may produce false classifications.
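The prediction step just described reduces to one dot product per output node followed by an argmax (the sigmoid is monotonic, so it does not change which node wins). Below is a tiny standalone sketch with made-up weights; it is not the trained network from the Step42 and Step43 cells.
import numpy as np

def perceptron_predict(weights, example):
    # One row of synapse weights per output node/class; return the best-scoring class index.
    return int(np.argmax(np.dot(weights, example)))

W = np.array([[0.8, 0.3, -0.5, -0.4],   # hypothetical weights for class 0
              [0.1, 0.2, 0.4, 0.3],     # hypothetical weights for class 1
              [-0.6, -0.2, 0.7, 0.8]])  # hypothetical weights for class 2
print(perceptron_predict(W, np.array([5.0, 3.0, 1.0, 0.1])))  # -> 0 with these weights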
LEARNER EVALUATION In this section we will evaluate and compare algorithm performance. The dataset we will use will again be the iris one. Step44: Naive Bayes First up we have the Naive Bayes algorithm. First we will test how well the Discrete Naive Bayes works, and then how the Continuous fares. Step45: The error for the Naive Bayes algorithm is very, very low; close to 0. There is also very little difference between the discrete and continuous version of the algorithm. k-Nearest Neighbors Now we will take a look at kNN, for different values of k. Note that k should have odd values, to break any ties between two classes. Step46: Notice how the error became larger and larger as k increased. This is generally the case with datasets where classes are spaced out, as is the case with the iris dataset. If items from different classes were closer together, classification would be more difficult. Usually a value of 1, 3 or 5 for k suffices. Also note that since the training set is also the testing set, for k equal to 1 we get a perfect score, since the item we want to classify each time is already in the dataset and its closest neighbor is itself. Perceptron For the Perceptron, we first need to convert class names to integers. Let's see how it performs in the dataset.
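Because testing on the training data can give a misleadingly perfect score (as the k = 1 case above shows), a held-out split is the safer habit. The sketch below is generic illustration code for that idea; it is not one of the learning.py evaluation helpers used in the following cells.
import random
from collections import Counter

def holdout_error(train_fn, examples, test_fraction=0.3):
    # Shuffle, split, train on one part and measure the error rate on the rest.
    shuffled = examples[:]
    random.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    train, test = shuffled[:cut], shuffled[cut:]
    predict = train_fn(train)
    wrong = sum(1 for features, label in test if predict(features) != label)
    return wrong / len(test)

# Toy usage with a trivial majority-class learner.
def majority_learner(pairs):
    top = Counter(label for _, label in pairs).most_common(1)[0][0]
    return lambda features: top

data = [([i], "a" if i % 3 else "b") for i in range(30)]
print(holdout_error(majority_learner, data))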
Python Code: from learning import * from notebook import * Explanation: LEARNING This notebook serves as supporting material for topics covered in Chapter 18 - Learning from Examples , Chapter 19 - Knowledge in Learning, Chapter 20 - Learning Probabilistic Models from the book Artificial Intelligence: A Modern Approach. This notebook uses implementations from learning.py. Let's start by importing everything from the module: End of explanation %psource DataSet Explanation: CONTENTS Machine Learning Overview Datasets Iris Visualization Distance Functions Plurality Learner k-Nearest Neighbours Decision Tree Learner Naive Bayes Learner Perceptron Learner Evaluation MACHINE LEARNING OVERVIEW In this notebook, we learn about agents that can improve their behavior through diligent study of their own experiences. An agent is learning if it improves its performance on future tasks after making observations about the world. There are three types of feedback that determine the three main types of learning: Supervised Learning: In Supervised Learning the agent observes some example input-output pairs and learns a function that maps from input to output. Example: Let's think of an agent to classify images containing cats or dogs. If we provide an image containing a cat or a dog, this agent should output a string "cat" or "dog" for that particular image. To teach this agent, we will give a lot of input-output pairs like {cat image-"cat"}, {dog image-"dog"} to the agent. The agent then learns a function that maps from an input image to one of those strings. Unsupervised Learning: In Unsupervised Learning the agent learns patterns in the input even though no explicit feedback is supplied. The most common type is clustering: detecting potential useful clusters of input examples. Example: A taxi agent would develop a concept of good traffic days and bad traffic days without ever being given labeled examples. Reinforcement Learning: In Reinforcement Learning the agent learns from a series of reinforcements—rewards or punishments. Example: Let's talk about an agent to play the popular Atari game—Pong. We will reward a point for every correct move and deduct a point for every wrong move from the agent. Eventually, the agent will figure out its actions prior to reinforcement were most responsible for it. DATASETS For the following tutorials we will use a range of datasets, to better showcase the strengths and weaknesses of the algorithms. The datasests are the following: Fisher's Iris: Each item represents a flower, with four measurements: the length and the width of the sepals and petals. Each item/flower is categorized into one of three species: Setosa, Versicolor and Virginica. Zoo: The dataset holds different animals and their classification as "mammal", "fish", etc. The new animal we want to classify has the following measurements: 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 4, 1, 0, 1 (don't concern yourself with what the measurements mean). To make using the datasets easier, we have written a class, DataSet, in learning.py. The tutorials found here make use of this class. Let's have a look at how it works before we get started with the algorithms. Intro A lot of the datasets we will work with are .csv files (although other formats are supported too). We have a collection of sample datasets ready to use on aima-data. Two examples are the datasets mentioned above (iris.csv and zoo.csv). You can find plenty datasets online, and a good repository of such datasets is UCI Machine Learning Repository. 
In such files, each line corresponds to one item/measurement. Each individual value in a line represents a feature and usually there is a value denoting the class of the item. You can find the code for the dataset here: End of explanation iris = DataSet(name="iris") Explanation: Class Attributes examples: Holds the items of the dataset. Each item is a list of values. attrs: The indexes of the features (by default in the range of [0,f), where f is the number of features. For example, item[i] returns the feature at index i of item. attrnames: An optional list with attribute names. For example, item[s], where s is a feature name, returns the feature of name s in item. target: The attribute a learning algorithm will try to predict. By default the last attribute. inputs: This is the list of attributes without the target. values: A list of lists which holds the set of possible values for the corresponding attribute/feature. If initially None, it gets computed (by the function setproblem) from the examples. distance: The distance function used in the learner to calculate the distance between two items. By default mean_boolean_error. name: Name of the dataset. source: The source of the dataset (url or other). Not used in the code. exclude: A list of indexes to exclude from inputs. The list can include either attribute indexes (attrs) or names (attrnames). Class Helper Functions These functions help modify a DataSet object to your needs. sanitize: Takes as input an example and returns it with non-input (target) attributes replaced by None. Useful for testing. Keep in mind that the example given is not itself sanitized, but instead a sanitized copy is returned. classes_to_numbers: Maps the class names of a dataset to numbers. If the class names are not given, they are computed from the dataset values. Useful for classifiers that return a numerical value instead of a string. remove_examples: Removes examples containing a given value. Useful for removing examples with missing values, or for removing classes (needed for binary classifiers). Importing a Dataset Importing from aima-data Datasets uploaded on aima-data can be imported with the following line: End of explanation print(iris.examples[0]) print(iris.inputs) Explanation: To check that we imported the correct dataset, we can do the following: End of explanation iris2 = DataSet(name="iris",exclude=[1]) print(iris2.inputs) Explanation: Which correctly prints the first line in the csv file and the list of attribute indexes. When importing a dataset, we can specify to exclude an attribute (for example, at index 1) by setting the parameter exclude to the attribute index or name. End of explanation print(iris.examples[:3]) Explanation: Attributes Here we showcase the attributes. First we will print the first three items/examples in the dataset. End of explanation print("attrs:", iris.attrs) print("attrnames (by default same as attrs):", iris.attrnames) print("target:", iris.target) print("inputs:", iris.inputs) Explanation: Then we will print attrs, attrnames, target, input. Notice how attrs holds values in [0,4], but since the fourth attribute is the target, inputs holds values in [0,3]. End of explanation print(iris.values[0]) Explanation: Now we will print all the possible values for the first feature/attribute. End of explanation print("name:", iris.name) print("source:", iris.source) Explanation: Finally we will print the dataset's name and source. Keep in mind that we have not set a source for the dataset, so in this case it is empty. 
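As an aside, if you want source to be filled in, it can be supplied when the DataSet is built — a small sketch, assuming the constructor accepts the same keyword as the attribute listed above:

iris_with_source = DataSet(name="iris", source="https://archive.ics.uci.edu/ml/datasets/iris")
print("source:", iris_with_source.source)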
End of explanation print(iris.values[iris.target]) Explanation: A useful combination of the above is dataset.values[dataset.target] which returns the possible values of the target. For classification problems, this will return all the possible classes. Let's try it: End of explanation print("Sanitized:",iris.sanitize(iris.examples[0])) print("Original:",iris.examples[0]) Explanation: Helper Functions We will now take a look at the auxiliary functions found in the class. First we will take a look at the sanitize function, which sets the non-input values of the given example to None. In this case we want to hide the class of the first example, so we will sanitize it. Note that the function doesn't actually change the given example; it returns a sanitized copy of it. End of explanation iris2 = DataSet(name="iris") iris2.remove_examples("virginica") print(iris2.values[iris2.target]) Explanation: Currently the iris dataset has three classes, setosa, virginica and versicolor. We want though to convert it to a binary class dataset (a dataset with two classes). The class we want to remove is "virginica". To accomplish that we will utilize the helper function remove_examples. End of explanation print("Class of first example:",iris2.examples[0][iris2.target]) iris2.classes_to_numbers() print("Class of first example:",iris2.examples[0][iris2.target]) Explanation: We also have classes_to_numbers. For a lot of the classifiers in the module (like the Neural Network), classes should have numerical values. With this function we map string class names to numbers. End of explanation means, deviations = iris.find_means_and_deviations() print("Setosa feature means:", means["setosa"]) print("Versicolor mean for first feature:", means["versicolor"][0]) print("Setosa feature deviations:", deviations["setosa"]) print("Virginica deviation for second feature:",deviations["virginica"][1]) Explanation: As you can see "setosa" was mapped to 0. Finally, we take a look at find_means_and_deviations. It finds the means and standard deviations of the features for each class. End of explanation iris = DataSet(name="iris") show_iris() show_iris(0, 1, 3) show_iris(1, 2, 3) Explanation: IRIS VISUALIZATION Since we will use the iris dataset extensively in this notebook, below we provide a visualization tool that helps in comprehending the dataset and thus how the algorithms work. We plot the dataset in a 3D space using matplotlib and the function show_iris from notebook.py. The function takes as input three parameters, i, j and k, which are indicises to the iris features, "Sepal Length", "Sepal Width", "Petal Length" and "Petal Width" (0 to 3). By default we show the first three features. End of explanation def manhattan_distance(X, Y): return sum([abs(x - y) for x, y in zip(X, Y)]) distance = manhattan_distance([1,2], [3,4]) print("Manhattan Distance between (1,2) and (3,4) is", distance) Explanation: You can play around with the values to get a good look at the dataset. DISTANCE FUNCTIONS In a lot of algorithms (like the k-Nearest Neighbors algorithm), there is a need to compare items, finding how similar or close they are. For that we have many different functions at our disposal. Below are the functions implemented in the module: Manhattan Distance (manhattan_distance) One of the simplest distance functions. It calculates the difference between the coordinates/features of two items. To understand how it works, imagine a 2D grid with coordinates x and y. 
In that grid we have two items, at the squares positioned at (1,2) and (3,4). The difference between their two coordinates is 3-1=2 and 4-2=2. If we sum these up we get 4. That means to get from (1,2) to (3,4) we need four moves; two to the right and two more up. The function works similarly for n-dimensional grids. End of explanation def euclidean_distance(X, Y): return math.sqrt(sum([(x - y)**2 for x, y in zip(X,Y)])) distance = euclidean_distance([1,2], [3,4]) print("Euclidean Distance between (1,2) and (3,4) is", distance) Explanation: Euclidean Distance (euclidean_distance) Probably the most popular distance function. It returns the square root of the sum of the squared differences between individual elements of two items. End of explanation def hamming_distance(X, Y): return sum(x != y for x, y in zip(X, Y)) distance = hamming_distance(['a','b','c'], ['a','b','b']) print("Hamming Distance between 'abc' and 'abb' is", distance) Explanation: Hamming Distance (hamming_distance) This function counts the number of differences between single elements in two items. For example, if we have two binary strings "111" and "011" the function will return 1, since the two strings only differ at the first element. The function works the same way for non-binary strings too. End of explanation def mean_boolean_error(X, Y): return mean(int(x != y) for x, y in zip(X, Y)) distance = mean_boolean_error([1,2,3], [1,4,5]) print("Mean Boolean Error Distance between (1,2,3) and (1,4,5) is", distance) Explanation: Mean Boolean Error (mean_boolean_error) To calculate this distance, we find the ratio of different elements over all elements of two items. For example, if the two items are (1,2,3) and (1,4,5), the ration of different/all elements is 2/3, since they differ in two out of three elements. End of explanation def mean_error(X, Y): return mean([abs(x - y) for x, y in zip(X, Y)]) distance = mean_error([1,0,5], [3,10,5]) print("Mean Error Distance between (1,0,5) and (3,10,5) is", distance) Explanation: Mean Error (mean_error) This function finds the mean difference of single elements between two items. For example, if the two items are (1,0,5) and (3,10,5), their error distance is (3-1) + (10-0) + (5-5) = 2 + 10 + 0 = 12. The mean error distance therefore is 12/3=4. End of explanation def ms_error(X, Y): return mean([(x - y)**2 for x, y in zip(X, Y)]) distance = ms_error([1,0,5], [3,10,5]) print("Mean Square Distance between (1,0,5) and (3,10,5) is", distance) Explanation: Mean Square Error (ms_error) This is very similar to the Mean Error, but instead of calculating the difference between elements, we are calculating the square of the differences. End of explanation def rms_error(X, Y): return math.sqrt(ms_error(X, Y)) distance = rms_error([1,0,5], [3,10,5]) print("Root of Mean Error Distance between (1,0,5) and (3,10,5) is", distance) Explanation: Root of Mean Square Error (rms_error) This is the square root of Mean Square Error. End of explanation psource(PluralityLearner) Explanation: PLURALITY LEARNER CLASSIFIER Overview The Plurality Learner is a simple algorithm, used mainly as a baseline comparison for other algorithms. It finds the most popular class in the dataset and classifies any subsequent item to that class. Essentially, it classifies every new item to the same class. For that reason, it is not used very often, instead opting for more complicated algorithms when we want accurate classification. Let's see how the classifier works with the plot above. 
There are three classes named Class A (orange-colored dots) and Class B (blue-colored dots) and Class C (green-colored dots). Every point in this plot has two features (i.e. X<sub>1</sub>, X<sub>2</sub>). Now, let's say we have a new point, a red star and we want to know which class this red star belongs to. Solving this problem by predicting the class of this new red star is our current classification problem. The Plurality Learner will find the class most represented in the plot. Class A has four items, Class B has three and Class C has seven. The most popular class is Class C. Therefore, the item will get classified in Class C, despite the fact that it is closer to the other two classes. Implementation Below follows the implementation of the PluralityLearner algorithm: End of explanation zoo = DataSet(name="zoo") pL = PluralityLearner(zoo) print(pL([1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 4, 1, 0, 1])) Explanation: It takes as input a dataset and returns a function. We can later call this function with the item we want to classify as the argument and it returns the class it should be classified in. The function first finds the most popular class in the dataset and then each time we call its "predict" function, it returns it. Note that the input ("example") does not matter. The function always returns the same class. Example For this example, we will not use the Iris dataset, since each class is represented the same. This will throw an error. Instead we will use the zoo dataset. End of explanation psource(NearestNeighborLearner) Explanation: The output for the above code is "mammal", since that is the most popular and common class in the dataset. K-NEAREST NEIGHBOURS CLASSIFIER Overview The k-Nearest Neighbors algorithm is a non-parametric method used for classification and regression. We are going to use this to classify Iris flowers. More about kNN on Scholarpedia. Let's see how kNN works with a simple plot shown in the above picture. We have co-ordinates (we call them features in Machine Learning) of this red star and we need to predict its class using the kNN algorithm. In this algorithm, the value of k is arbitrary. k is one of the hyper parameters for kNN algorithm. We choose this number based on our dataset and choosing a particular number is known as hyper parameter tuning/optimising. We learn more about this in coming topics. Let's put k = 3. It means you need to find 3-Nearest Neighbors of this red star and classify this new point into the majority class. Observe that smaller circle which contains three points other than test point (red star). As there are two violet points, which form the majority, we predict the class of red star as violet- Class B. Similarly if we put k = 5, you can observe that there are three yellow points, which form the majority. So, we classify our test point as yellow- Class A. In practical tasks, we iterate through a bunch of values for k (like [1, 3, 5, 10, 20, 50, 100]), see how it performs and select the best one. Implementation Below follows the implementation of the kNN algorithm: End of explanation iris = DataSet(name="iris") kNN = NearestNeighborLearner(iris,k=3) print(kNN([5.1,3.0,1.1,0.1])) Explanation: It takes as input a dataset and k (default value is 1) and it returns a function, which we can later use to classify a new item. To accomplish that, the function uses a heap-queue, where the items of the dataset are sorted according to their distance from example (the item to classify). 
We then take the k smallest elements from the heap-queue and we find the majority class. We classify the item to this class. Example We measured a new flower with the following values: 5.1, 3.0, 1.1, 0.1. We want to classify that item/flower in a class. To do that, we write the following: End of explanation pseudocode("Decision Tree Learning") Explanation: The output of the above code is "setosa", which means the flower with the above measurements is of the "setosa" species. DECISION TREE LEARNER Overview Decision Trees A decision tree is a flowchart that uses a tree of decisions and their possible consequences for classification. At each non-leaf node of the tree an attribute of the input is tested, based on which corresponding branch leading to a child-node is selected. At the leaf node the input is classified based on the class label of this leaf node. The paths from root to leaves represent classification rules based on which leaf nodes are assigned class labels. Decision Tree Learning Decision tree learning is the construction of a decision tree from class-labeled training data. The data is expected to be a tuple in which each record of the tuple is an attribute used for classification. The decision tree is built top-down, by choosing a variable at each step that best splits the set of items. There are different metrics for measuring the "best split". These generally measure the homogeneity of the target variable within the subsets. Gini Impurity Gini impurity of a set is the probability of a randomly chosen element to be incorrectly labeled if it was randomly labeled according to the distribution of labels in the set. $$I_G(p) = \sum{p_i(1 - p_i)} = 1 - \sum{p_i^2}$$ We select split which minimizes the Gini impurity in childre nodes. Information Gain Information gain is based on the concept of entropy from information theory. Entropy is defined as: $$H(p) = -\sum{p_i \log_2{p_i}}$$ Information Gain is difference between entropy of the parent and weighted sum of entropy of children. The feature used for splitting is the one which provides the most information gain. Pseudocode You can view the pseudocode by running the cell below: End of explanation psource(DecisionFork) Explanation: Implementation The nodes of the tree constructed by our learning algorithm are stored using either DecisionFork or DecisionLeaf based on whether they are a parent node or a leaf node respectively. End of explanation psource(DecisionLeaf) Explanation: DecisionFork holds the attribute, which is tested at that node, and a dict of branches. The branches store the child nodes, one for each of the attribute's values. Calling an object of this class as a function with input tuple as an argument returns the next node in the classification path based on the result of the attribute test. End of explanation psource(DecisionTreeLearner) Explanation: The leaf node stores the class label in result. All input tuples' classification paths end on a DecisionLeaf whose result attribute decide their class. 
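To make the traversal concrete, here is a stripped-down sketch of the two node types and how classification walks from a fork down to a leaf (simplified stand-ins for the real DecisionFork and DecisionLeaf shown above, with made-up attribute values):

class MiniLeaf:
    def __init__(self, result):
        self.result = result  # class label stored at the leaf
    def __call__(self, example):
        return self.result

class MiniFork:
    def __init__(self, attr, branches):
        self.attr = attr          # index of the attribute tested at this node
        self.branches = branches  # maps attribute value -> child node
    def __call__(self, example):
        return self.branches[example[self.attr]](example)

# A toy tree that tests attribute 0 and routes straight to a leaf:
toy_tree = MiniFork(0, {'sunny': MiniLeaf('beach'), 'rainy': MiniLeaf('museum')})
print(toy_tree(['sunny']))  # -> 'beach'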
End of explanation dataset = iris target_vals = dataset.values[dataset.target] target_dist = CountingProbDist(target_vals) attr_dists = {(gv, attr): CountingProbDist(dataset.values[attr]) for gv in target_vals for attr in dataset.inputs} for example in dataset.examples: targetval = example[dataset.target] target_dist.add(targetval) for attr in dataset.inputs: attr_dists[targetval, attr].add(example[attr]) print(target_dist['setosa']) print(attr_dists['setosa', 0][5.0]) Explanation: The implementation of DecisionTreeLearner provided in learning.py uses information gain as the metric for selecting which attribute to test for splitting. The function builds the tree top-down in a recursive manner. Based on the input it makes one of the four choices: <ol> <li>If the input at the current step has no training data we return the mode of classes of input data recieved in the parent step (previous level of recursion).</li> <li>If all values in training data belong to the same class it returns a `DecisionLeaf` whose class label is the class which all the data belongs to.</li> <li>If the data has no attributes that can be tested we return the class with highest plurality value in the training data.</li> <li>We choose the attribute which gives the highest amount of entropy gain and return a `DecisionFork` which splits based on this attribute. Each branch recursively calls `decision_tree_learning` to construct the sub-tree.</li> </ol> NAIVE BAYES LEARNER Overview Theory of Probabilities The Naive Bayes algorithm is a probabilistic classifier, making use of Bayes' Theorem. The theorem states that the conditional probability of A given B equals the conditional probability of B given A multiplied by the probability of A, divided by the probability of B. $$P(A|B) = \dfrac{P(B|A)*P(A)}{P(B)}$$ From the theory of Probabilities we have the Multiplication Rule, if the events X are independent the following is true: $$P(X_{1} \cap X_{2} \cap ... \cap X_{n}) = P(X_{1})P(X_{2})...*P(X_{n})$$ For conditional probabilities this becomes: $$P(X_{1}, X_{2}, ..., X_{n}|Y) = P(X_{1}|Y)P(X_{2}|Y)...*P(X_{n}|Y)$$ Classifying an Item How can we use the above to classify an item though? We have a dataset with a set of classes (C) and we want to classify an item with a set of features (F). Essentially what we want to do is predict the class of an item given the features. For a specific class, Class, we will find the conditional probability given the item features: $$P(Class|F) = \dfrac{P(F|Class)*P(Class)}{P(F)}$$ We will do this for every class and we will pick the maximum. This will be the class the item is classified in. The features though are a vector with many elements. We need to break the probabilities up using the multiplication rule. Thus the above equation becomes: $$P(Class|F) = \dfrac{P(Class)P(F_{1}|Class)P(F_{2}|Class)...P(F_{n}|Class)}{P(F_{1})P(F_{2})...*P(F_{n})}$$ The calculation of the conditional probability then depends on the calculation of the following: a) The probability of Class in the dataset. b) The conditional probability of each feature occuring in an item classified in Class. c) The probabilities of each individual feature. For a), we will count how many times Class occurs in the dataset (aka how many items are classified in a particular class). For b), if the feature values are discrete ('Blue', '3', 'Tall', etc.), we will count how many times a feature value occurs in items of each class. If the feature values are not discrete, we will go a different route. 
We will use a distribution function to calculate the probability of values for a given class and feature. If we know the distribution function of the dataset, then great, we will use it to compute the probabilities. If we don't know the function, we can assume the dataset follows the normal (Gaussian) distribution without much loss of accuracy. In fact, it can be proven that any distribution tends to the Gaussian the larger the population gets (see Central Limit Theorem). NOTE: If the values are continuous but use the discrete approach, there might be issues if we are not lucky. For one, if we have two values, '5.0 and 5.1', with the discrete approach they will be two completely different values, despite being so close. Second, if we are trying to classify an item with a feature value of '5.15', if the value does not appear for the feature, its probability will be 0. This might lead to misclassification. Generally, the continuous approach is more accurate and more useful, despite the overhead of calculating the distribution function. The last one, c), is tricky. If feature values are discrete, we can count how many times they occur in the dataset. But what if the feature values are continuous? Imagine a dataset with a height feature. Is it worth it to count how many times each value occurs? Most of the time it is not, since there can be miscellaneous differences in the values (for example, 1.7 meters and 1.700001 meters are practically equal, but they count as different values). So as we cannot calculate the feature value probabilities, what are we going to do? Let's take a step back and rethink exactly what we are doing. We are essentially comparing conditional probabilities of all the classes. For two classes, A and B, we want to know which one is greater: $$\dfrac{P(F|A)P(A)}{P(F)} vs. \dfrac{P(F|B)P(B)}{P(F)}$$ Wait, P(F) is the same for both the classes! In fact, it is the same for every combination of classes. That is because P(F) does not depend on a class, thus being independent of the classes. So, for c), we actually don't need to calculate it at all. Wrapping It Up Classifying an item to a class then becomes a matter of calculating the conditional probabilities of feature values and the probabilities of classes. This is something very desirable and computationally delicious. Remember though that all the above are true because we made the assumption that the features are independent. In most real-world cases that is not true though. Is that an issue here? Fret not, for the the algorithm is very efficient even with that assumption. That is why the algorithm is called Naive Bayes Classifier. We (naively) assume that the features are independent to make computations easier. Implementation The implementation of the Naive Bayes Classifier is split in two; Learning and Simple. The learning classifier takes as input a dataset and learns the needed distributions from that. It is itself split into two, for discrete and continuous features. The simple classifier takes as input not a dataset, but already calculated distributions (a dictionary of CountingProbDist objects). Discrete The implementation for discrete values counts how many times each feature value occurs for each class, and how many times each class occurs. The results are stored in a CountinProbDist object. With the below code you can see the probabilities of the class "Setosa" appearing in the dataset and the probability of the first feature (at index 0) of the same class having a value of 5. 
Notice that the second probability is relatively small, even though if we observe the dataset we will find that a lot of values are around 5. The issue arises because the features in the Iris dataset are continuous, and we are assuming they are discrete. If the features were discrete (for example, "Tall", "3", etc.) this probably wouldn't have been the case and we would see a much nicer probability distribution. End of explanation def predict(example): def class_probability(targetval): return (target_dist[targetval] * product(attr_dists[targetval, attr][example[attr]] for attr in dataset.inputs)) return argmax(target_vals, key=class_probability) print(predict([5, 3, 1, 0.1])) Explanation: First we found the different values for the classes (called targets here) and calculated their distribution. Next we initialized a dictionary of CountingProbDist objects, one for each class and feature. Finally, we iterated through the examples in the dataset and calculated the needed probabilites. Having calculated the different probabilities, we will move on to the predicting function. It will receive as input an item and output the most likely class. Using the above formula, it will multiply the probability of the class appearing, with the probability of each feature value appearing in the class. It will return the max result. End of explanation psource(NaiveBayesDiscrete) Explanation: You can view the complete code by executing the next line: End of explanation means, deviations = dataset.find_means_and_deviations() target_vals = dataset.values[dataset.target] target_dist = CountingProbDist(target_vals) print(means["setosa"]) print(deviations["versicolor"]) Explanation: Continuous In the implementation we use the Gaussian/Normal distribution function. To make it work, we need to find the means and standard deviations of features for each class. We make use of the find_means_and_deviations Dataset function. On top of that, we will also calculate the class probabilities as we did with the Discrete approach. End of explanation def predict(example): def class_probability(targetval): prob = target_dist[targetval] for attr in dataset.inputs: prob *= gaussian(means[targetval][attr], deviations[targetval][attr], example[attr]) return prob return argmax(target_vals, key=class_probability) print(predict([5, 3, 1, 0.1])) Explanation: You can see the means of the features for the "Setosa" class and the deviations for "Versicolor". The prediction function will work similarly to the Discrete algorithm. It will multiply the probability of the class occuring with the conditional probabilities of the feature values for the class. Since we are using the Gaussian distribution, we will input the value for each feature into the Gaussian function, together with the mean and deviation of the feature. This will return the probability of the particular feature value for the given class. We will repeat for each class and pick the max value. End of explanation psource(NaiveBayesContinuous) Explanation: The complete code of the continuous algorithm: End of explanation psource(NaiveBayesSimple) Explanation: Simple The simple classifier (chosen with the argument simple) does not learn from a dataset, instead it takes as input a dictionary of already calculated CountingProbDist objects and returns a predictor function. The dictionary is in the following form: (Class Name, Class Probability): CountingProbDist Object. Each class has its own probability distribution. 
The classifier given a list of features calculates the probability of the input for each class and returns the max. The only pre-processing work is to create dictionaries for the distribution of classes (named targets) and attributes/features. The complete code for the simple classifier: End of explanation nBD = NaiveBayesLearner(iris, continuous=False) print("Discrete Classifier") print(nBD([5, 3, 1, 0.1])) print(nBD([6, 5, 3, 1.5])) print(nBD([7, 3, 6.5, 2])) nBC = NaiveBayesLearner(iris, continuous=True) print("\nContinuous Classifier") print(nBC([5, 3, 1, 0.1])) print(nBC([6, 5, 3, 1.5])) print(nBC([7, 3, 6.5, 2])) Explanation: This classifier is useful when you already have calculated the distributions and you need to predict future items. Examples We will now use the Naive Bayes Classifier (Discrete and Continuous) to classify items: End of explanation bag1 = 'a'*50 + 'b'*30 + 'c'*15 dist1 = CountingProbDist(bag1) bag2 = 'a'*30 + 'b'*45 + 'c'*20 dist2 = CountingProbDist(bag2) bag3 = 'a'*20 + 'b'*20 + 'c'*35 dist3 = CountingProbDist(bag3) Explanation: Notice how the Discrete Classifier misclassified the second item, while the Continuous one had no problem. Let's now take a look at the simple classifier. First we will come up with a sample problem to solve. Say we are given three bags. Each bag contains three letters ('a', 'b' and 'c') of different quantities. We are given a string of letters and we are tasked with finding from which bag the string of letters came. Since we know the probability distribution of the letters for each bag, we can use the naive bayes classifier to make our prediction. End of explanation dist = {('First', 0.5): dist1, ('Second', 0.3): dist2, ('Third', 0.2): dist3} nBS = NaiveBayesLearner(dist, simple=True) Explanation: Now that we have the CountingProbDist objects for each bag/class, we will create the dictionary. We assume that it is equally probable that we will pick from any bag. End of explanation print(nBS('aab')) # We can handle strings print(nBS(['b', 'b'])) # And lists! print(nBS('ccbcc')) Explanation: Now we can start making predictions: End of explanation psource(PerceptronLearner) Explanation: The results make intuitive sence. The first bag has a high amount of 'a's, the second has a high amount of 'b's and the third has a high amount of 'c's. The classifier seems to confirm this intuition. Note that the simple classifier doesn't distinguish between discrete and continuous values. It just takes whatever it is given. Also, the simple option on the NaiveBayesLearner overrides the continuous argument. NaiveBayesLearner(d, simple=True, continuous=False) just creates a simple classifier. PERCEPTRON CLASSIFIER Overview The Perceptron is a linear classifier. It works the same way as a neural network with no hidden layers (just input and output). First it trains its weights given a dataset and then it can classify a new item by running it through the network. Its input layer consists of the the item features, while the output layer consists of nodes (also called neurons). Each node in the output layer has n synapses (for every item feature), each with its own weight. Then, the nodes find the dot product of the item features and the synapse weights. These values then pass through an activation function (usually a sigmoid). Finally, we pick the largest of the values and we return its index. Note that in classification problems each node represents a class. The final classification is the class/node with the max output value. 
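As a rough numerical sketch of that forward pass (two output nodes, three input features, made-up weights):

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

features = [0.5, 1.0, -0.2]
weights = [[0.4, -0.1, 0.3],   # synapse weights feeding output node 0 (made up)
           [0.2, 0.6, -0.5]]   # synapse weights feeding output node 1 (made up)

outputs = [sigmoid(sum(w_i * f_i for w_i, f_i in zip(node_weights, features)))
           for node_weights in weights]
predicted_class = outputs.index(max(outputs))
print(outputs, predicted_class)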
Below you can see a single node/neuron in the outer layer. With f we denote the item features, with w the synapse weights, then inside the node we have the dot product and the activation function, g. Implementation First, we train (calculate) the weights given a dataset, using the BackPropagationLearner function of learning.py. We then return a function, predict, which we will use in the future to classify a new item. The function computes the (algebraic) dot product of the item with the calculated weights for each node in the outer layer. Then it picks the greatest value and classifies the item in the corresponding class. End of explanation iris = DataSet(name="iris") iris.classes_to_numbers() perceptron = PerceptronLearner(iris) print(perceptron([5, 3, 1, 0.1])) Explanation: Note that the Perceptron is a one-layer neural network, without any hidden layers. So, in BackPropagationLearner, we will pass no hidden layers. From that function we get our network, which is just one layer, with the weights calculated. That function predict passes the input/example through the network, calculating the dot product of the input and the weights for each node and returns the class with the max dot product. Example We will train the Perceptron on the iris dataset. Because though the BackPropagationLearner works with integer indexes and not strings, we need to convert class names to integers. Then, we will try and classify the item/flower with measurements of 5, 3, 1, 0.1. End of explanation iris = DataSet(name="iris") Explanation: The correct output is 0, which means the item belongs in the first class, "setosa". Note that the Perceptron algorithm is not perfect and may produce false classifications. LEARNER EVALUATION In this section we will evaluate and compare algorithm performance. The dataset we will use will again be the iris one. End of explanation nBD = NaiveBayesLearner(iris, continuous=False) print("Error ratio for Discrete:", err_ratio(nBD, iris)) nBC = NaiveBayesLearner(iris, continuous=True) print("Error ratio for Continuous:", err_ratio(nBC, iris)) Explanation: Naive Bayes First up we have the Naive Bayes algorithm. First we will test how well the Discrete Naive Bayes works, and then how the Continuous fares. End of explanation kNN_1 = NearestNeighborLearner(iris, k=1) kNN_3 = NearestNeighborLearner(iris, k=3) kNN_5 = NearestNeighborLearner(iris, k=5) kNN_7 = NearestNeighborLearner(iris, k=7) print("Error ratio for k=1:", err_ratio(kNN_1, iris)) print("Error ratio for k=3:", err_ratio(kNN_3, iris)) print("Error ratio for k=5:", err_ratio(kNN_5, iris)) print("Error ratio for k=7:", err_ratio(kNN_7, iris)) Explanation: The error for the Naive Bayes algorithm is very, very low; close to 0. There is also very little difference between the discrete and continuous version of the algorithm. k-Nearest Neighbors Now we will take a look at kNN, for different values of k. Note that k should have odd values, to break any ties between two classes. End of explanation iris2 = DataSet(name="iris") iris2.classes_to_numbers() perceptron = PerceptronLearner(iris2) print("Error ratio for Perceptron:", err_ratio(perceptron, iris2)) Explanation: Notice how the error became larger and larger as k increased. This is generally the case with datasets where classes are spaced out, as is the case with the iris dataset. If items from different classes were closer together, classification would be more difficult. Usually a value of 1, 3 or 5 for k suffices. 
Also note that because the training set is also used as the testing set, k equal to 1 gives a perfect score: each item we want to classify is already in the dataset, and its closest neighbor is itself. Perceptron For the Perceptron, we first need to convert class names to integers. Let's see how it performs on the dataset. End of explanation
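Since every error ratio above is computed on the very data the learners were trained on, the scores are optimistic. A quick held-out check could look like the following sketch (a plain manual split; it only relies on the DataSet attributes documented earlier):

import random

held_out = DataSet(name="iris")
random.seed(0)
random.shuffle(held_out.examples)
test_examples = [held_out.examples.pop() for _ in range(30)]  # keep 120 examples for training

kNN_holdout = NearestNeighborLearner(held_out, k=3)
wrong = sum(kNN_holdout(held_out.sanitize(e)) != e[held_out.target] for e in test_examples)
print("Held-out error ratio for k=3:", wrong / len(test_examples))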
1,005
Given the following text description, write Python code to implement the functionality described below step by step Description: Exporting Burst Data This notebook is part of a tutorial series for the FRETBursts burst analysis software. In this notebook, show a few example of how to export FRETBursts burst data to a file. <div class="alert alert-info"> Please <b>cite</b> FRETBursts in publications or presentations! </div> Loading the software We start by loading FRETBursts Step1: Downloading the data file The full list of smFRET measurements used in the FRETBursts tutorials can be found on Figshare. This is the file we will download Step2: <div class="alert alert-success"> You can change the <code>url</code> variable above to download your own data file. This is useful if you are executing FRETBursts online and you want to use your own data file. See <a href="1. First Steps - Start here if new to Jupyter Notebooks.ipynb">First Steps</a>. </div> Here, we download the data file and put it in a folder named data, inside the notebook folder Step3: NOTE Step4: Let's check that the file exists Step5: μs-ALEX parameters At this point, timestamps and detectors numbers are contained in the ph_times_t and det_t attributes of d. Let's print them Step6: We need to define some ALEX parameters Step7: Here the parameters are Step8: If the previous alternation histogram looks correct, the corresponding definitions of the excitation periods can be applied to the data using the following command Step9: If the previous histogram does not look right, the parameters in the d.add(...) cell can be modified and checked by running the histogram plot cell until everything looks fine. Don't forget to apply the parameters with loader.usalex_apply_period(d) as a last step. NOTE Step10: First we filter the bursts to avoid creating big files Step11: Exporting Burst Data By burst-data we mean all the scalar burst parameters, e.g. size, duration, background, etc... We can easily get a table (a pandas DataFrame) with all the burst data as follows Step12: Once we have the DataFrame, saving it to disk in CSV format is trivial Step13: Exporting Bursts Timestamps Exporting timestamps and other photon-data for each bursts is a little trickier because the data is less uniform (i.e. each burst has a different number of photons). In the following example, we will save a csv file with variable-length columns. Each burst is represented by to lines Step15: Now we define an header documenting the file format. Ww will also include the filename of the measurement. This is just an example including nanotimes Step17: And this is header we are going to use Step18: We can now save the data to disk Step19: Done! Read the file back For consistency check, we can read back the data we just saved. As an exercise we will put the results in a pandas DataFrame which is more convenient than an array for holding this data. Step20: We start reading the header and computing some file-specific constants. Step21: As a test, we load the data for the first burst into a dataframe, converting the numerical column "streams" into photon-stream names (strings). The new column is of type categorical, so it will take very little space Step22: For reading the whole file I use a different approach. First, I load the entire file in two lists of lists (one for timestamps and one for the stream). Next, I create a single DataFrame with a third column indicating the burst index.
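Before the full code, here is a minimal standalone sketch of the two-lines-per-burst layout described above (synthetic numbers, no FRETBursts dependency; only the file format is illustrated):

import numpy as np

# Two synthetic bursts: one line of timestamps, then one line of stream codes, per burst
bursts_ts = [np.array([100, 104, 109], dtype='int64'), np.array([250, 251], dtype='int64')]
bursts_stream = [np.array([0, 1, 3], dtype='uint8'), np.array([2, 0], dtype='uint8')]

with open('bursts_sketch.csv', 'wt') as f:
    f.write('# timestamps line, then stream line, for each burst\n')
    for times, streams in zip(bursts_ts, bursts_stream):
        times.tofile(f, sep=',')
        f.write('\n')
        streams.tofile(f, sep=',')
        f.write('\n')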
Python Code: from fretbursts import * sns = init_notebook() Explanation: Exporting Burst Data This notebook is part of a tutorial series for the FRETBursts burst analysis software. In this notebook, show a few example of how to export FRETBursts burst data to a file. <div class="alert alert-info"> Please <b>cite</b> FRETBursts in publications or presentations! </div> Loading the software We start by loading FRETBursts: End of explanation url = 'http://files.figshare.com/2182601/0023uLRpitc_NTP_20dT_0.5GndCl.hdf5' Explanation: Downloading the data file The full list of smFRET measurements used in the FRETBursts tutorials can be found on Figshare. This is the file we will download: End of explanation download_file(url, save_dir='./data') Explanation: <div class="alert alert-success"> You can change the <code>url</code> variable above to download your own data file. This is useful if you are executing FRETBursts online and you want to use your own data file. See <a href="1. First Steps - Start here if new to Jupyter Notebooks.ipynb">First Steps</a>. </div> Here, we download the data file and put it in a folder named data, inside the notebook folder: End of explanation filename = "./data/0023uLRpitc_NTP_20dT_0.5GndCl.hdf5" filename Explanation: NOTE: If you modified the url variable providing an invalid URL the previous command will fail. In this case, edit the cell containing the url and re-try the download. Loading the data file Here, we can directly define the file name to be loaded: End of explanation import os if os.path.isfile(filename): print("Perfect, file found!") else: print("Sorry, file:\n%s not found" % filename) d = loader.photon_hdf5(filename) Explanation: Let's check that the file exists: End of explanation d.ph_times_t, d.det_t Explanation: μs-ALEX parameters At this point, timestamps and detectors numbers are contained in the ph_times_t and det_t attributes of d. Let's print them: End of explanation d.add(det_donor_accept = (0, 1), alex_period = 4000, offset = 700, D_ON = (2180, 3900), A_ON = (200, 1800)) Explanation: We need to define some ALEX parameters: End of explanation bpl.plot_alternation_hist(d) Explanation: Here the parameters are: det_donor_accept: donor and acceptor channels alex_period: length of excitation period (in timestamps units) D_ON and A_ON: donor and acceptor excitation windows offset: the offset between the start of alternation and start of timestamping (see also Definition of alternation periods). To check that the above parameters are correct, we need to plot the histogram of timestamps (modulo the alternation period) and superimpose the two excitation period definitions to it: End of explanation loader.alex_apply_period(d) Explanation: If the previous alternation histogram looks correct, the corresponding definitions of the excitation periods can be applied to the data using the following command: End of explanation d.calc_bg(bg.exp_fit, time_s=30, tail_min_us='auto', F_bg=1.7) d.burst_search(L=10, m=10, F=6) Explanation: If the previous histogram does not look right, the parameters in the d.add(...) cell can be modified and checked by running the histogram plot cell until everything looks fine. Don't forget to apply the parameters with loader.usalex_apply_period(d) as a last step. NOTE: After applying the ALEX parameters a new array of timestamps containing only photons inside the excitation periods is created (name d.ph_times_m). To save memory, by default, the old timestamps array (d.ph_times_t) is deleted. 
Therefore, in the following, when we talk about all-photon selection we always refer to all photons inside both excitation periods. Background and burst search End of explanation ds = d.select_bursts(select_bursts.size, th1=60) Explanation: First we filter the bursts to avoid creating big files: End of explanation bursts = bext.burst_data(ds, include_bg=True, include_ph_index=True) bursts.head(5) # display first 5 bursts Explanation: Exporting Burst Data By burst-data we mean all the scalar burst parameters, e.g. size, duration, background, etc... We can easily get a table (a pandas DataFrame) with all the burst data as follows: End of explanation bursts.to_csv('%s_burst_data.csv' % filename.replace('.hdf5', '')) Explanation: Once we have the DataFrame, saving it to disk in CSV format is trivial: End of explanation ds.A_ex #{0: DexDem, 1:DexAem, 2: AexDem, 3: AemAem} (ds.A_ex[0].view('int8') << 1) + ds.A_em[0].view('int8') Explanation: Exporting Bursts Timestamps Exporting timestamps and other photon-data for each bursts is a little trickier because the data is less uniform (i.e. each burst has a different number of photons). In the following example, we will save a csv file with variable-length columns. Each burst is represented by to lines: one line for timestamps and one line for the photon-stream (excitation/emission period) the timestamps belongs to. Let's start by creating an array of photon streams containing one of the values 0, 1, 2 or 3 for each timestamp. These values will correspond to the DexDem, DexAem, AexDem, AemAem photon streams respectively. End of explanation header = \ # BPH2CSV: %s # Lines per burst: 3 # - timestamps (int64): in 12.5 ns units # - nanotimes (int16): in raw TCSPC unit (3.3ps) # - stream (uint8): the photon stream according to the mapping {0: DexDem, 1: DexAem, 2: AexDem, 3: AemAem} % filename print(header) Explanation: Now we define an header documenting the file format. Ww will also include the filename of the measurement. This is just an example including nanotimes: End of explanation header = \ # BPH2CSV: %s # Lines per burst: 2 # - timestamps (int64): in 12.5 ns units # - stream (uint8): the photon stream according to the mapping {0: DexDem, 1: DexAem, 2: AexDem, 3: AemAem} % filename print(header) Explanation: And this is header we are going to use: End of explanation out_fname = '%s_burst_timestamps.csv' % filename.replace('.hdf5', '') dx = ds ich = 0 bursts = dx.mburst[ich] timestamps = dx.ph_times_m[ich] stream = (dx.A_ex[ich].view('int8') << 1) + dx.A_em[ich].view('int8') with open(out_fname, 'wt') as f: f.write(header) for times, period in zip(bl.iter_bursts_ph(timestamps, bursts), bl.iter_bursts_ph(stream, bursts)): times.tofile(f, sep=',') f.write('\n') period.tofile(f, sep=',') f.write('\n') Explanation: We can now save the data to disk: End of explanation import pandas as pd Explanation: Done! Read the file back For consistency check, we can read back the data we just saved. As an exercise we will put the results in a pandas DataFrame which is more convenient than an array for holding this data. 
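Before parsing the file back, a quick sanity check one might run on what was just written — a sketch that only uses the header and out_fname variables defined above:

with open(out_fname) as f:
    n_lines = sum(1 for _ in f)
n_header_lines = header.count('\n')
# With two data lines per burst, this difference should be twice the number of saved bursts.
print('Data lines in file:', n_lines - n_header_lines)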
End of explanation with open(out_fname) as f: lines = [] lines.append(f.readline()) while lines[-1].startswith('#'): lines.append(f.readline()) header = ''.join(lines[:-1]) print(header) stream_map = {0: 'DexDem', 1: 'DexAem', 2: 'AexDem', 3: 'AemAem'} nrows = int(header.split('\n')[1].split(':')[1].strip()) header_len = len(header.split('\n')) - 1 header_len, nrows Explanation: We start reading the header and computing some file-specific constants. End of explanation burstph = (pd.read_csv(out_fname, skiprows=header_len, nrows=nrows, header=None).T .rename(columns={0: 'timestamp', 1: 'stream'})) burstph.stream = (burstph.stream .apply(lambda x: stream_map[pd.to_numeric(x)]) .astype('category', categories=['DexDem', 'DexAem', 'AexDem', 'AemAem'], ordered=True)) burstph Explanation: As a test, we load the data for the first burst into a dataframe, converting the numerical column "streams" into photon-stream names (strings). The new column is of type categorical, so it will take very little space: End of explanation import csv from builtins import int # python 2 workaround, can be removed on python 3 # Read data in two list of lists t_list, s_list = [], [] with open(out_fname) as f: for i in range(header_len): f.readline() csvreader = csv.reader(f) for row in csvreader: t_list.append([int(v) for v in row]) s_list.append([int(v) for v in next(csvreader)]) # Turn the inner list into pandas.DataFrame d_list = [] for ib, (t, s) in enumerate(zip(t_list, s_list)): d_list.append( pd.DataFrame({'timestamp': t, 'stream': s}, columns=['timestamp', 'stream']) .assign(iburst=ib) ) # Concatenate dataframes burstph = pd.concat(d_list, ignore_index=True) # Convert stream column into categorical burstph.stream = (burstph.stream .apply(lambda x: stream_map[pd.to_numeric(x)]) .astype('category', categories=['DexDem', 'DexAem', 'AexDem', 'AemAem'], ordered=True)) burstph burstph.dtypes Explanation: For reading the whole file I use a different approach. First, I load the entire file in two lists of lists (one for timestamps and one for the stream). Next, I create a single DataFrame with a third column indicating the burst index. End of explanation
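With the photon data in this tidy per-burst form, summaries are a single groupby away — for example, counting photons per stream in each burst (a sketch using the burstph columns built above):

photons_per_burst = (burstph.groupby(['iburst', 'stream'])['timestamp']
                            .count()
                            .unstack(fill_value=0))
print(photons_per_burst.head())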
1,006
Given the following text description, write Python code to implement the functionality described below step by step Description: ATM 623 Step1: Contents First section title <a id='section1'></a> 1. First section title Some text. <div class="alert alert-success"> [Back to ATM 623 notebook home](../index.ipynb) </div> Version information
Python Code: # Ensure compatibility with Python 2 and 3 from __future__ import print_function, division Explanation: ATM 623: Climate Modeling Brian E. J. Rose, University at Albany Lecture N: No title About these notes: This document uses the interactive Jupyter notebook format. The notes can be accessed in several different ways: The interactive notebooks are hosted on github at https://github.com/brian-rose/ClimateModeling_courseware The latest versions can be viewed as static web pages rendered on nbviewer A complete snapshot of the notes as of May 2017 (end of spring semester) is available on Brian's website. A legacy version from 2015 is also available here. Many of these notes make use of the climlab package, available at https://github.com/brian-rose/climlab End of explanation %load_ext version_information %version_information numpy, matplotlib, xarray, climlab Explanation: Contents First section title <a id='section1'></a> 1. First section title Some text. <div class="alert alert-success"> [Back to ATM 623 notebook home](../index.ipynb) </div> Version information End of explanation
1,007
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: In this notebook, we will use a multi-layer perceptron to develop time series forecasting models. The dataset used for the examples of this notebook is on air pollution measured by concentration of particulate matter (PM) of diameter less than or equal to 2.5 micrometers. There are other variables such as air pressure, air temparature, dewpoint and so on. Two time series models are developed - one on air pressure and the other on pm2.5. The dataset has been downloaded from UCI Machine Learning Repository. https Step2: To make sure that the rows are in the right order of date and time of observations, a new column datetime is created from the date and time related columns of the DataFrame. The new column consists of Python's datetime.datetime objects. The DataFrame is sorted in ascending order over this column. Step3: Gradient descent algorithms perform better (for example converge faster) if the variables are wihtin range [-1, 1]. Many sources relax the boundary to even [-3, 3]. The pm2.5 variable is mixmax scaled to bound the tranformed variable within [0,1]. Step6: Before training the model, the dataset is split in two parts - train set and validation set. The neural network is trained on the train set. This means computation of the loss function, back propagation and weights updated by a gradient descent algorithm is done on the train set. The validation set is used to evaluate the model and to determine the number of epochs in model training. Increasing the number of epochs will further decrease the loss function on the train set but might not neccesarily have the same effect for the validation set due to overfitting on the train set.Hence, the number of epochs is controlled by keeping a tap on the loss function computed for the validation set. We use Keras with Tensorflow backend to define and train the model. All the steps involved in model training and validation is done by calling appropriate functions of the Keras API. Step8: Now we need to generate regressors (X) and target variable (y) for train and validation. 2-D array of regressor and 1-D array of target is created from the original 1-D array of columm standardized_pm2.5 in the DataFrames. For the time series forecasting model, Past seven days of observations are used to predict for the next day. This is equivalent to a AR(7) model. We define a function which takes the original time series and the number of timesteps in regressors as input to generate the arrays of X and y. Step9: The input to convolution layers must be of shape (number of samples, number of timesteps, number of features per timestep). In this case we are modeling only pm2.5 hence number of features per timestep is one. Number of timesteps is seven and number of samples is same as the number of samples in X_train and X_val, which are reshaped to 3D arrays. Step10: Now we define the MLP using the Keras Functional API. In this approach a layer can be declared as the input of the following layer at the time of defining the next layer. Step11: ZeroPadding1D layer is added next to add zeros at the begining and end of each series. Zeropadding ensure that the downstream convolution layer does not reduce the dimension of the output sequences. Pooling layer, added after the convolution layer is used to downsampling the input. 
Step12: The first argument of Conv1D is the number of filters, which determines the number of features in the output. The second argument is the length of the 1D convolution window. The third argument is strides and represents the number of places to shift the convolution window. Lastly, setting use_bias to True adds a bias value during computation of an output feature. Here, the 1D convolution can be thought of as generating local AR models over a rolling window of three time units. Step13: AveragePooling1D is added next to downsample the input by taking the average over a pool size of three with a stride of one timestep. The average pooling in this case can be thought of as taking moving averages over a rolling window of three time units. We have used average pooling instead of max pooling to generate the moving averages. Step14: The preceding pooling layer returns a 3D output. Hence, before passing to the output layer, a Flatten layer is added. The Flatten layer reshapes the input to (number of samples, number of timesteps * number of features per timestep), which is then fed to the output layer. Step15: The input, dense and output layers will now be packed inside a Model, which is a wrapper class for training and making predictions. The box plot of pm2.5 shows the presence of outliers. Hence, mean absolute error (MAE) is used, as absolute deviations suffer less from fluctuations compared to squared deviations. The network's weights are optimized by the Adam algorithm. Adam stands for adaptive moment estimation and has been a popular choice for training deep neural networks. Unlike stochastic gradient descent, Adam uses a different learning rate for each weight and updates them separately as training progresses. The learning rate of a weight is updated based on exponentially weighted moving averages of the weight's gradients and squared gradients. Step16: The model is trained by calling the fit function on the model object and passing X_train and y_train. The training is done for a predefined number of epochs. Additionally, batch_size defines the number of train-set samples used for one instance of back propagation. The validation dataset is also passed to evaluate the model after every epoch completes. A ModelCheckpoint object tracks the loss function on the validation set and saves the model for the epoch at which the validation loss is lowest. Step17: Predictions are made for pm2.5 from the best saved model. The model's predictions, which are on the scaled pm2.5, are inverse transformed to get predictions on the original pm2.5 scale.
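As a small illustration of that final inverse-transform step (a standalone sketch with made-up values; the real code below applies it with the scaler fitted on pm2.5):

import numpy as np
from sklearn.preprocessing import MinMaxScaler

scaler_demo = MinMaxScaler(feature_range=(0, 1))
scaler_demo.fit(np.array([[3.0], [10.0], [994.0]]))           # made-up pm2.5-like readings
preds_scaled = np.array([[0.25], [0.50]])                     # pretend model outputs in [0, 1]
preds_original = scaler_demo.inverse_transform(preds_scaled)  # back to the original scale
print(preds_original)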
Python Code: from __future__ import print_function import os import sys import pandas as pd import numpy as np %matplotlib inline from matplotlib import pyplot as plt import seaborn as sns import datetime #set current working directory os.chdir('D:/Practical Time Series') #Read the dataset into a pandas.DataFrame df = pd.read_csv('datasets/PRSA_data_2010.1.1-2014.12.31.csv') print('Shape of the dataframe:', df.shape) #Let's see the first five rows of the DataFrame df.head() Rows having NaN values in column pm2.5 are dropped. df.dropna(subset=['pm2.5'], axis=0, inplace=True) df.reset_index(drop=True, inplace=True) Explanation: In this notebook, we will use a multi-layer perceptron to develop time series forecasting models. The dataset used for the examples of this notebook is on air pollution measured by concentration of particulate matter (PM) of diameter less than or equal to 2.5 micrometers. There are other variables such as air pressure, air temparature, dewpoint and so on. Two time series models are developed - one on air pressure and the other on pm2.5. The dataset has been downloaded from UCI Machine Learning Repository. https://archive.ics.uci.edu/ml/datasets/Beijing+PM2.5+Data End of explanation df['datetime'] = df[['year', 'month', 'day', 'hour']].apply(lambda row: datetime.datetime(year=row['year'], month=row['month'], day=row['day'], hour=row['hour']), axis=1) df.sort_values('datetime', ascending=True, inplace=True) #Let us draw a box plot to visualize the central tendency and dispersion of PRES plt.figure(figsize=(5.5, 5.5)) g = sns.boxplot(df['pm2.5']) g.set_title('Box plot of pm2.5') plt.figure(figsize=(5.5, 5.5)) g = sns.tsplot(df['pm2.5']) g.set_title('Time series of pm2.5') g.set_xlabel('Index') g.set_ylabel('pm2.5 readings') #Let's plot the series for six months to check if any pattern apparently exists. plt.figure(figsize=(5.5, 5.5)) g = sns.tsplot(df['pm2.5'].loc[df['datetime']<=datetime.datetime(year=2010,month=6,day=30)], color='g') g.set_title('pm2.5 during 2010') g.set_xlabel('Index') g.set_ylabel('pm2.5 readings') #Let's zoom in on one month. plt.figure(figsize=(5.5, 5.5)) g = sns.tsplot(df['pm2.5'].loc[df['datetime']<=datetime.datetime(year=2010,month=1,day=31)], color='g') g.set_title('pm2.5 during Jan 2010') g.set_xlabel('Index') g.set_ylabel('pm2.5 readings') Explanation: To make sure that the rows are in the right order of date and time of observations, a new column datetime is created from the date and time related columns of the DataFrame. The new column consists of Python's datetime.datetime objects. The DataFrame is sorted in ascending order over this column. End of explanation from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler(feature_range=(0, 1)) df['scaled_pm2.5'] = scaler.fit_transform(np.array(df['pm2.5']).reshape(-1, 1)) Explanation: Gradient descent algorithms perform better (for example converge faster) if the variables are wihtin range [-1, 1]. Many sources relax the boundary to even [-3, 3]. The pm2.5 variable is mixmax scaled to bound the tranformed variable within [0,1]. End of explanation Let's start by splitting the dataset into train and validation. The dataset's time period if from Jan 1st, 2010 to Dec 31st, 2014. The first fours years - 2010 to 2013 is used as train and 2014 is kept for validation. 
split_date = datetime.datetime(year=2014, month=1, day=1, hour=0) df_train = df.loc[df['datetime']<split_date] df_val = df.loc[df['datetime']>=split_date] print('Shape of train:', df_train.shape) print('Shape of test:', df_val.shape) #First five rows of train df_train.head() #First five rows of validation df_val.head() #Reset the indices of the validation set df_val.reset_index(drop=True, inplace=True) The train and validation time series of scaled pm2.5 is also plotted. plt.figure(figsize=(5.5, 5.5)) g = sns.tsplot(df_train['scaled_pm2.5'], color='b') g.set_title('Time series of scaled pm2.5 in train set') g.set_xlabel('Index') g.set_ylabel('Scaled pm2.5 readings') plt.figure(figsize=(5.5, 5.5)) g = sns.tsplot(df_val['scaled_pm2.5'], color='r') g.set_title('Time series of scaled pm2.5 in validation set') g.set_xlabel('Index') g.set_ylabel('Scaled pm2.5 readings') Explanation: Before training the model, the dataset is split in two parts - train set and validation set. The neural network is trained on the train set. This means computation of the loss function, back propagation and weights updated by a gradient descent algorithm is done on the train set. The validation set is used to evaluate the model and to determine the number of epochs in model training. Increasing the number of epochs will further decrease the loss function on the train set but might not neccesarily have the same effect for the validation set due to overfitting on the train set.Hence, the number of epochs is controlled by keeping a tap on the loss function computed for the validation set. We use Keras with Tensorflow backend to define and train the model. All the steps involved in model training and validation is done by calling appropriate functions of the Keras API. End of explanation def makeXy(ts, nb_timesteps): Input: ts: original time series nb_timesteps: number of time steps in the regressors Output: X: 2-D array of regressors y: 1-D array of target X = [] y = [] for i in range(nb_timesteps, ts.shape[0]): X.append(list(ts.loc[i-nb_timesteps:i-1])) y.append(ts.loc[i]) X, y = np.array(X), np.array(y) return X, y X_train, y_train = makeXy(df_train['scaled_pm2.5'], 7) print('Shape of train arrays:', X_train.shape, y_train.shape) X_val, y_val = makeXy(df_val['scaled_pm2.5'], 7) print('Shape of validation arrays:', X_val.shape, y_val.shape) Explanation: Now we need to generate regressors (X) and target variable (y) for train and validation. 2-D array of regressor and 1-D array of target is created from the original 1-D array of columm standardized_pm2.5 in the DataFrames. For the time series forecasting model, Past seven days of observations are used to predict for the next day. This is equivalent to a AR(7) model. We define a function which takes the original time series and the number of timesteps in regressors as input to generate the arrays of X and y. End of explanation #X_train and X_val are reshaped to 3D arrays X_train, X_val = X_train.reshape((X_train.shape[0], X_train.shape[1], 1)),\ X_val.reshape((X_val.shape[0], X_val.shape[1], 1)) print('Shape of arrays after reshaping:', X_train.shape, X_val.shape) Explanation: The input to convolution layers must be of shape (number of samples, number of timesteps, number of features per timestep). In this case we are modeling only pm2.5 hence number of features per timestep is one. Number of timesteps is seven and number of samples is same as the number of samples in X_train and X_val, which are reshaped to 3D arrays. 
End of explanation from keras.layers import Dense from keras.layers import Input from keras.layers import Dropout from keras.layers import Flatten from keras.layers.convolutional import ZeroPadding1D from keras.layers.convolutional import Conv1D from keras.layers.pooling import AveragePooling1D from keras.optimizers import SGD from keras.models import Model from keras.models import load_model from keras.callbacks import ModelCheckpoint #Define input layer which has shape (None, 7) and of type float32. None indicates the number of instances input_layer = Input(shape=(7,1), dtype='float32') Explanation: Now we define the MLP using the Keras Functional API. In this approach a layer can be declared as the input of the following layer at the time of defining the next layer. End of explanation #Add zero padding zeropadding_layer = ZeroPadding1D(padding=1)(input_layer) Explanation: ZeroPadding1D layer is added next to add zeros at the begining and end of each series. Zeropadding ensure that the downstream convolution layer does not reduce the dimension of the output sequences. Pooling layer, added after the convolution layer is used to downsampling the input. End of explanation #Add 1D convolution layers conv1D_layer1 = Conv1D(64, 3, strides=1, use_bias=True)(zeropadding_layer) conv1D_layer2 = Conv1D(32, 3, strides=1, use_bias=True)(conv1D_layer1) Explanation: The first argument of Conv1D is the number of filters, which determine the number of features in the output. Second argument indicates length of the 1D convolution window. The third argument is strides and represent the number of places to shift the convolution window. Lastly, setting use_bias as True, add a bias value during computation of an output feature. Here, the 1D convolution can be thought of as generating local AR models over rolling window of three time units. End of explanation #Add AveragePooling1D layer avgpooling_layer = AveragePooling1D(pool_size=3, strides=1)(conv1D_layer2) Explanation: AveragePooling1D is added next to downsample the input by taking average over pool size of three with stride of one timesteps. The average pooling in this case can be thought of as taking moving averages over a rolling window of three time units. We have used average pooling instead of max pooling to generate the moving averages. End of explanation #Add Flatten layer flatten_layer = Flatten()(avgpooling_layer) #A couple of Dense layers are also added dense_layer1 = Dense(32)(avgpooling_layer) dense_layer2 = Dense(16)(dense_layer1) dropout_layer = Dropout(0.2)(flatten_layer) #Finally the output layer gives prediction for the next day's air pressure. output_layer = Dense(1, activation='linear')(dropout_layer) Explanation: The preceeding pooling layer returns 3D output. Hence before passing to the output layer, a Flatten layer is added. The Flatten layer reshapes the input to (number of samples, number of timesteps*number of features per timestep), which is then fed to the output layer End of explanation ts_model = Model(inputs=input_layer, outputs=output_layer) ts_model.compile(loss='mean_absolute_error', optimizer='adam')#SGD(lr=0.001, decay=1e-5)) ts_model.summary() Explanation: The input, dense and output layers will now be packed inside a Model, which is wrapper class for training and making predictions. The box plot of pm2.5 shows the presence of outliers. Hence, mean absolute error (MAE) is used as absolute deviations suffer less fluctuations compared to squared deviations. The network's weights are optimized by the Adam algorithm. 
Adam stands for adaptive moment estimation and has been a popular choice for training deep neural networks. Unlike, stochastic gradient descent, adam uses different learning rates for each weight and separately updates the same as the training progresses. The learning rate of a weight is updated based on exponentially weighted moving averages of the weight's gradients and the squared gradients. End of explanation save_weights_at = os.path.join('keras_models', 'PRSA_data_PM2.5_1DConv_weights.{epoch:02d}-{val_loss:.4f}.hdf5') save_best = ModelCheckpoint(save_weights_at, monitor='val_loss', verbose=0, save_best_only=True, save_weights_only=False, mode='min', period=1) ts_model.fit(x=X_train, y=y_train, batch_size=16, epochs=20, verbose=1, callbacks=[save_best], validation_data=(X_val, y_val), shuffle=True) Explanation: The model is trained by calling the fit function on the model object and passing the X_train and y_train. The training is done for a predefined number of epochs. Additionally, batch_size defines the number of samples of train set to be used for a instance of back propagation.The validation dataset is also passed to evaluate the model after every epoch completes. A ModelCheckpoint object tracks the loss function on the validation set and saves the model for the epoch, at which the loss function has been minimum. End of explanation best_model = load_model(os.path.join('keras_models', 'PRSA_data_PM2.5_1DConv_weights.18-0.0128.hdf5')) preds = best_model.predict(X_val) pred_pm25 = scaler.inverse_transform(preds) pred_pm25 = np.squeeze(pred_pm25) from sklearn.metrics import mean_absolute_error mae = mean_absolute_error(df_val['pm2.5'].loc[7:], pred_pm25) print('MAE for the validation set:', round(mae, 4)) #Let's plot the first 50 actual and predicted values of pm2.5. plt.figure(figsize=(5.5, 5.5)) plt.plot(range(50), df_val['pm2.5'].loc[7:56], linestyle='-', marker='*', color='r') plt.plot(range(50), pred_pm25[:50], linestyle='-', marker='.', color='b') plt.legend(['Actual','Predicted'], loc=2) plt.title('Actual vs Predicted pm2.5') plt.ylabel('pm2.5') plt.xlabel('Index') Explanation: Prediction are made for the pm2.5 from the best saved model. The model's predictions, which are on the standardized pm2.5, are inverse transformed to get predictions of original pm2.5. End of explanation
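As a complementary check that is not part of the original notebook, one could compare the reported validation MAE against a naive persistence forecast (predict the previous hour's pm2.5). The sketch assumes the df_val DataFrame defined above is available.

# Hypothetical add-on: MAE of a persistence forecast on the validation period.
# If the convolutional model does not beat this figure, the extra machinery buys little.
persistence_mae = df_val['pm2.5'].diff().abs().mean()
print('MAE of a persistence forecast:', round(persistence_mae, 4))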
1,008
Given the following text description, write Python code to implement the functionality described below step by step Description: Evaluating Machine Learning Algorithms - Extended Examples Preparations Download Anaconda with Python 3.6 to install a nearly complete Python environment for data science projects, and install Keras. Step1: Setting Up the Experiment In this example, we will rely on the NIST MNIST data set, a data set for the recognition of hand-written digits. MNIST is derived from data collected by NIST, the organization that also runs evaluation campaigns such as the previously discussed TREC campaign. The following script will display some sample digits to give an example of the contents of the data set. Step2: Next, we define our machine-learning model with different layers. Roughly speaking, the function baseline_model() defines what the neural network looks like. For more details, see the documentation. Step3: Overfitting In the next cell, we will train on sample sizes ranging from very little training data up to the same amount of training data used before, in order to illustrate the overfitting phenomenon. ATTENTION! This will take some time. Step4: Next, we will illustrate our results. Step5: The graph clearly indicates that the baseline error decreases as the amount of training data increases. In other words, the overfitting effect is limited in relation to the amount of data the learning algorithm has seen. To end the example, we will check how well the model can predict new input. Step6: Accuracy and Error Rate The next cell illustrates how accuracy changes with respect to different distributions between two classes if the model always predicts that an element belongs to class A. $$ Accuracy=\frac{|tp|+|tn|}{|tp|+|tn|+|fp|+|fn|}\equiv\frac{|\mbox{correct predictions}|}{|\mbox{predictions}|} $$ Step7: Logarithmic Loss The $Logarithmic ~Loss=\frac{-1}{N}\sum_{i=1}^N\sum_{j=1}^M y_{ij}\log(p_{ij}) \rightarrow [0,\infty)$ penalizes wrong predictions. For the sake of simplicity, we simply use the function provided by sklearn, a machine-learning toolkit for Python. The manual will give you more details.
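As a hedged aside (a toy check, not part of the original exercise), the always-predict-A accuracy described in Step6 reduces to the prevalence of class A; a few lines of numpy over the same class splits used later in the notebook make that explicit.

# Illustrative sketch: for a constant "always class A" predictor, accuracy equals
# the fraction of samples that truly belong to class A.
import numpy as np

for n_a, n_b in [(90, 10), (55, 45), (70, 30), (50, 50), (20, 80)]:
    reality = np.concatenate([np.ones(n_a), np.zeros(n_b)])  # 1 encodes membership of class A
    prediction = np.ones_like(reality)                       # always predict class A
    accuracy = np.mean(prediction == reality)
    print(n_a, n_b, accuracy)                                # accuracy == n_a / (n_a + n_b)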
Python Code: # The %... is an iPython thing, and is not part of the Python language. # In this case we're just telling the plotting library to draw things on # the notebook, instead of on a separate window. %matplotlib inline # the import statements load differnt Python packages that we need for the tutorial # See all the "as ..." contructs? They're just aliasing the package names. # That way we can call methods like plt.plot() instead of matplotlib.pyplot.plot(). # packages for scientif computing and visualization import numpy as np import scipy as sp import matplotlib as mpl import matplotlib.cm as cm import matplotlib.pyplot as plt import pandas as pd import time # configuration of the notebook pd.set_option('display.width', 500) pd.set_option('display.max_columns', 100) pd.set_option('display.notebook_repr_html', True) import seaborn as sns sns.set_style("whitegrid") sns.set_context("notebook") # machine learning library imports from keras.datasets import mnist from keras.models import Sequential from keras.layers import Dense from keras.layers import Dropout from keras.utils import np_utils Explanation: Evaluating Machine Learning Algorithms - Extended Examples Preparations Download Anaconda with Python 3.6 to install a nearly complete Python enviroment for data science projects Install Keras: The Python Deep Learning Library and other missing packages with the following command: conda install keras Start your local Jupyter instance with jupyter notebook If you cannot see line numbers press Shift+Lto switch them on or check the View menu. End of explanation # load (download if needed) the MNIST dataset of handwritten numbers # we will get a training and test set consisting of bitmaps # in the X_* arrays and the associated labels in the y_* arrays (X_train, y_train), (X_test, y_test) = mnist.load_data() # plot 4 images as gray scale images using subplots without axis labels plt.subplot(221) plt.axis('off') # -1 inverts the image because of aesthetical reasons plt.imshow(X_train[0]*-1, cmap=plt.get_cmap('gray')) plt.subplot(222) plt.axis('off') plt.imshow(X_train[1]*-1, cmap=plt.get_cmap('gray')) plt.subplot(223) plt.axis('off') plt.imshow(X_train[2]*-1, cmap=plt.get_cmap('gray')) plt.subplot(224) plt.axis('off') plt.imshow(X_train[3]*-1, cmap=plt.get_cmap('gray')) # show the plot #plt.savefig("test.pdf",format="pdf") plt.show() Explanation: Setting Up the Experiment In this example, we will rely on the NIST MNIST data set, a data set for the recognition of hand-written digits. MNIST is a data set that has been used by the NIST such as the discussed TREC campaign. The following script will display some sample digits to give an example of the contents of the data set. 
End of explanation # define baseline model def baseline_model(): # create model model = Sequential() model.add(Dense(num_pixels, input_dim=num_pixels, kernel_initializer='normal', activation='relu')) model.add(Dense(num_classes, kernel_initializer='normal', activation='softmax')) # Compile model, use logarithmic loss for evaluation model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) return model # fix random seed for reproducibility seed = 7 np.random.seed(seed) # flatten 28*28 images from the MNIST data set to a 784 vector for each image num_pixels = X_train.shape[1] * X_train.shape[2] X_train = X_train.reshape(X_train.shape[0], num_pixels).astype('float32') X_test = X_test.reshape(X_test.shape[0], num_pixels).astype('float32') # normalize inputs from 0-255 to 0-1 X_train = X_train / 255 X_test = X_test / 255 # one hot encode outputs y_train = np_utils.to_categorical(y_train) y_test = np_utils.to_categorical(y_test) num_classes = y_test.shape[1] # build the model model = baseline_model() # fit the model, i.e., start the actual learning model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=200, verbose=2) # Final evaluation of the model scores = model.evaluate(X_test, y_test, verbose=0) # print the error rate of the algorithm print("Baseline Error: %.2f%%" % (100-scores[1]*100)) Explanation: Next, we define out machine learning model with different layers. Roughly speaking, the function baseline_model() defines how the neural network looks like. For more details, see the documentation. End of explanation # define baseline model def baseline_model(): # create model model = Sequential() model.add(Dense(num_pixels, input_dim=num_pixels, kernel_initializer='normal', activation='relu')) model.add(Dense(num_classes, kernel_initializer='normal', activation='softmax')) # Compile model model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) return model # the steps indicate the size of the training sample steps=[18,100,1000,5000,10000,20000,30000,40000,50000] # this dict (basically a hashmap) holds the error rate for each iteration errorPerStep=dict() # fix random seed for reproducibility seed = 7 np.random.seed(seed) for step in steps: # load data (X_train, y_train), (X_test, y_test) = mnist.load_data() # limit the training data size to the current step, the : means "from 0 to step" X_train=X_train[0:step] y_train=y_train[0:step] # flatten 28*28 images to a 784 vector for each image num_pixels = X_train.shape[1] * X_train.shape[2] X_train = X_train.reshape(X_train.shape[0], num_pixels).astype('float32') X_test = X_test.reshape(X_test.shape[0], num_pixels).astype('float32') # normalize inputs from 0-255 to 0-1 X_train = X_train / 255 X_test = X_test / 255 # one hot encode outputs y_train = np_utils.to_categorical(y_train) y_test = np_utils.to_categorical(y_test) num_classes = y_test.shape[1] # build the model model = baseline_model() # Fit the model model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=200, verbose=2) # Final evaluation of the model scores = model.evaluate(X_test, y_test, verbose=0) print("Baseline Error: %.2f%%" % (100-scores[1]*100)) errorPerStep[step]=(100-scores[1]*100) Explanation: Overfitting In the next cell, we will use very few training data up to the same amount of training data used before to illustrate the overfitting phenomenon. ATTENTION! This will take some time. 
End of explanation print(errorPerStep) x=[] y=[] for e in errorPerStep: x.append(e) y.append(errorPerStep[e]) plt.xlabel("Training Samples") plt.ylabel("Baseline Error (%)") plt.plot(x,y,'o-') plt.savefig("test.pdf",format="pdf") Explanation: Next, we will illustrate our results. End of explanation (X_train, y_train), (X_test, y_test) = mnist.load_data() # choose a random sample as our test image test_im = X_train[25] # display the image plt.imshow(test_im.reshape(28,28)*-1, cmap=plt.get_cmap('gray'), interpolation='none') plt.axis('off') num_pixels = X_train.shape[1] * X_train.shape[2] # as we are dealing with only one image, we have to restrict the array to a 1D * 784 test_im = test_im.reshape(1, num_pixels).astype('float32') # let the model predict the image r=model.predict(test_im) itemindex = np.where(r[0]==1) print("The model predicts: %i for the following image:"%itemindex[0]) Explanation: The graph indicates clearly that the baseline error decreases with the increase of training data. In other words, the overfitting effect is limited in relation to the amount of data the learning algorithm has seen. To end the example, we will check how well the model can predict new input. End of explanation # arrays for plotting x=[] # samples in A y=[] # samples in B accuracies=[] # calculated accuracies for each distribution # distributions between class A and B, first entry means 90% in A, 10% in B distributions=[[90,10],[55,45],[70,30],[50,50],[20,80]] for distribution in distributions: x.append(distribution[0]) y.append(distribution[1]) samplesA=np.ones((1,distribution[0])) # membership of class A is encoded as 1 samplesB=np.zeros((1,distribution[1])) # membership of class B is encoded as 0 # combine both arrays reality=np.concatenate((samplesA,samplesB),axis=None) # as said above, our model always associates the elements with class A (encoded by 1) prediction=np.ones((1,100)) tpCount=0 # count the true positives for (i,val) in enumerate(prediction[0]): if not reality[i]==val: pass else: tpCount+=1 # calculate the accuracy and add the to the accuracies array for later visualization acc=float(tpCount+tnCount)/100.0 accuracies.append(acc*1000) # the multiplication by 1000 is done for visualization purposes only print("Accuracy: %.2f"%(acc)) # plot the results as a bubble chart plt.xlim(0,100) plt.ylim(0,100) plt.xlabel("Samples in A") plt.ylabel("Samples in B") plt.title("Accuracy of a Always-A Predictor") plt.scatter(x, y, s=accuracies*100000,alpha=0.5) #plt.savefig("test.png",format="png") plt.show() Explanation: Accuracy and Error Rate The next cell illustrates how accuracy changes with respect to different distributions between two classes if the model always predict that an element belongs to class A. 
$$ Accuracy=\frac{|tp+tn|}{|tp|+|tn|+|fp|+|fn|}\equiv\frac{|\mbox{correct predictions}|}{|\mbox{predictions}|} $$ End of explanation from sklearn.metrics import log_loss # the correct cluster for each sample, i.e., sample 1 is in class 0 y_true = [0, 0, 1, 1,2] # the predictions: 1st sample is 90% predicted to be in class 0 y_pred = [[.9, .1,.0], [.8, .2,.0], [.3, .7,.0], [.01, .99,.0],[.0,.0,1.0]] print(log_loss(y_true, y_pred)) # perfect prediction y_perfect = [[1.0, .0,.0], [1.0, .0,.0], [.0, 1.0,.0], [0, 1.0,.0],[.0,.0,1.0]] print(log_loss(y_true, y_perfect)) x=[] y=[] # the for loop modifies the first prediction of an element belonging to class 0 from 0 to 1 # in other words, from a wrong to a correct prediction for i in range(1,11): r2=y_perfect r2[0][0]=float(i/10) x.append(r2[0][0]) y.append(log_loss(y_true,r2)) # plot the result plt.xlabel("Predicted Probability") plt.ylabel("Logarithmic Loss") plt.title("Does an object of class X belong do class X?") plt.plot(x,y,'o-') #plt.savefig("test.pdf",format="pdf") Explanation: Logarithmic Loss The $Logarithmic ~Loss=\frac{-1}{N}\sum_{i=1}^N\sum_{j=1}^M y_{ij}\log(p_{ij}) \rightarrow [0,\infty)$ penalizes wrong predicitions. For the sake of simplicity, we simply use the function provided by sklearn, a machine-learning toolkit for Python. The manual will give you more details. End of explanation
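To make the logarithmic-loss formula from the description concrete, here is a small hand-rolled version checked against sklearn's log_loss; the clipping constant mirrors what sklearn does internally and is an assumption of this sketch rather than something taken from the notebook.

# Hedged sketch: log loss as the mean negative log-probability assigned to the true class.
import numpy as np
from sklearn.metrics import log_loss

y_true = [0, 0, 1, 1, 2]
y_pred = np.array([[.9, .1, .0], [.8, .2, .0], [.3, .7, .0], [.01, .99, .0], [.0, .0, 1.0]])

eps = 1e-15  # guard against log(0)
p_true = np.clip(y_pred[np.arange(len(y_true)), y_true], eps, 1 - eps)
manual_loss = -np.mean(np.log(p_true))
print(manual_loss, log_loss(y_true, y_pred))  # the two values should agree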
1,009
Given the following text description, write Python code to implement the functionality described below step by step Description: 11 Dimensionality Reduction 11.1 Eigenvalues and Eigenvectors of Symmetric Matrices 11.1.1 Definitions $M e = \lambda e$, where $\lambda$ is an eigenvalue and $e$ is the corresponding eigenvector. to make the eigenvector unique, we require that Step1: 11.2.2 Using Eigenvectors for Dimensionality Reduction Since $M^T M e = \lambda e = e \lambda$, Let $E$ be the matrix whose columns are the eigenvectors, ordered as largest eigenvalue first. Define the matrix $L$ to have the eigenvalues of $M^T M$ along the diagonal, largest first, and 0's in all other entries. $M^T M E = E L$ Let $E_k$ be the first $k$ columns of $E$. Then $M E_k$ is a $k$-dimensional representation of $M$. 11.2.3 The Matrix of Distances \begin{align} M^T M e &= \lambda e \ M M^T (M e) &= M \lambda e = \lambda (M e) \end{align} the eigenvalues of $M M^T$ are the eigenvalues of $M^T M$ plus additional 0's, and their eigenvectors are shared. 11.2.4 Exercises for Section 11.2 11.3 Singular-Value Decomposition 11.3.1 Definition of SVD Let $M$ be an $m \times n$ matrix, and let the rank of $M$ be $r$. $$M = U \Sigma V^T$$ $U$ is an $m \times r$ column-orthnormal matrix (each of its columns is a unit vector and the dot product of any two columns is 0). $V$ is an $n \times r$ column-orthnormal maxtrix. $\Sigma$ is a diagonal matrix. The elements of $\Sigma$ are called the singular values of $M$. Step2: 11.3.2 Interpretation of SVD viewing the $r$ columns of $U$, $\Sigma$, and $V$ as representing concepts that are hidden in the original matrix $M$. In Fig 11.7 Step3: If we set the $s$ smallest singular values to 0, then we can also eliminate the corresponding $s$ rows of $U$ and $V$. Step4: 11.3.4 Why Zeroing Low Singular Values Works Let $M = P Q R$, $m_{i j} = \sum_k \sum_l p_{ik} q_{kl} r_{lj}$. Then \begin{align} \| M \|^2 &= \sum_i \sum_j (m_{ij})^2 \ &= \sum_i \sum_j (\sum_k \sum_l p_{ik} q_{kl} r_{lj})^2 \ &= \sum_i \sum_j \left ( \sum_k \sum_l \sum_n \sum_m p_{ik} q_{kl} r_{lj} p_{in} q_{nm} r_{mj} \right) \ &\text{as $Q$ is diagonal matrix, $q_{kl}$ and $q_{nm}$ will be 0 unless $k = l$ and $n = m$.} \ &= \sum_i \sum_j \sum_k \sum_n p_{ik} q_{kk} r_{kj} p_{in} q_{nn} r_{nj} \ &= \sum_j \sum_k \sum_n \color{blue}{\sum_i p_{ik} p_{in}} q_{kk} r_{kj} q_{nn} r_{nj} \ &\text{as } P = U, \sum_i p_{ik} p_{in} = 1 \text{ if } k = n \text{ and 0 otherwise} \ &= \sum_j \sum_k q_{kk} r_{kj} q_{kk} r_{kj} \ &= \sum_k (q_{kk})^2 \end{align} How many singular values should we retain? A useful rule of thumb is to retain enough singular values to make up 90\% of the energy in $\Sigma$. 11.3.5 Querying Using Concepts Let $q$ is the vector of user Quincy what movies he would like? "concept space" Step5: Eliminating Duplicate Rows and Columns it is possible that a single row or column is selected more than once, how to deal with it? let it go. or, merge same rows and/or columns $W$ will not be a sqaure matrix, then we need transpose the result to get $\Sigma^+$.
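A small numpy illustration of the $M E_k$ projection described in Section 11.2.2 above may help; the random matrix and the choice of $k$ are assumptions made purely for the sketch, not values from the notes.

# Hedged sketch: build E from the eigenvectors of M^T M (largest eigenvalue first)
# and keep the first k columns to obtain the k-dimensional representation M E_k.
import numpy as np

M = np.random.rand(50, 5)
eigvals, E = np.linalg.eigh(M.T @ M)   # eigh returns eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]      # reorder so the largest eigenvalue comes first
E = E[:, order]
k = 2
M_k = M @ E[:, :k]                     # the k-dimensional representation of M
print(M_k.shape)                       # (50, 2)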
Python Code: plt.imshow(plt.imread('./res/fig11_2.png')) Explanation: 11 Dimensionality Reduction 11.1 Eigenvalues and Eigenvectors of Symmetric Matrices 11.1.1 Definitions $M e = \lambda e$, where $\lambda$ is an eigenvalue and $e$ is the corresponding eigenvector. to make the eigenvector unique, we require that: every eigenvector is a unit vector. the first nonzero component of an eigenvector is positive. 11.1.2 Computing Eigenvalues and Eigenvectors To find principal eigenvector, we use power iteration method: start with any unit vector $v$, and compute $M^i v$ iteratively until it converges. When $M$ is a stochastic matrix, the limiting vector is the principal eigenvector, and its corresponding eigenvalue is 1. Another method $O(n^3)$: algebra solution. 11.1.3 Finding Eigenpairs by Power Iteration idea: Start by computing the pricipal eigenvector, then modify the matrix to, in effect, remove the principal eigenvector. To find the pricipal eigenpair power iteration: Start with any nonzero vector $x_0$ and then iterate: $$x_{k+1} := \frac{M x_k}{\| M x_k\|}$$ $x$ is the pricipal eigenvector when it reaches convergence, and $\lambda_1 = x^T M x$. Attention: Since power iteration will introduce small errors, inaccuracies accumulate when we try to compute all eigenpairs. To find the second eigenpair we create $M^* = M - \lambda_1 x x^T$, then use power iteration again. proof: the second eigenpair of $M^*$ is also that of $M$. 略 11.1.4 The Matrix of Eigenvectors the eigenvectors of a symmetric matrix are orthonormal 11.1.5 Exercises for Section 11.1 11.2 Pricipal-Component Analysis idea: treat the set of tuples as a matrix $M$ and find the eigenvectors for $M M^T$ or $M^T M$. End of explanation show_image('fig11_5.png') show_image('fig11_7.png', figsize=(8, 10)) Explanation: 11.2.2 Using Eigenvectors for Dimensionality Reduction Since $M^T M e = \lambda e = e \lambda$, Let $E$ be the matrix whose columns are the eigenvectors, ordered as largest eigenvalue first. Define the matrix $L$ to have the eigenvalues of $M^T M$ along the diagonal, largest first, and 0's in all other entries. $M^T M E = E L$ Let $E_k$ be the first $k$ columns of $E$. Then $M E_k$ is a $k$-dimensional representation of $M$. 11.2.3 The Matrix of Distances \begin{align} M^T M e &= \lambda e \ M M^T (M e) &= M \lambda e = \lambda (M e) \end{align} the eigenvalues of $M M^T$ are the eigenvalues of $M^T M$ plus additional 0's, and their eigenvectors are shared. 11.2.4 Exercises for Section 11.2 11.3 Singular-Value Decomposition 11.3.1 Definition of SVD Let $M$ be an $m \times n$ matrix, and let the rank of $M$ be $r$. $$M = U \Sigma V^T$$ $U$ is an $m \times r$ column-orthnormal matrix (each of its columns is a unit vector and the dot product of any two columns is 0). $V$ is an $n \times r$ column-orthnormal maxtrix. $\Sigma$ is a diagonal matrix. The elements of $\Sigma$ are called the singular values of $M$. End of explanation show_image('fig11_9.png', figsize=(8, 10)) Explanation: 11.3.2 Interpretation of SVD viewing the $r$ columns of $U$, $\Sigma$, and $V$ as representing concepts that are hidden in the original matrix $M$. In Fig 11.7: concepts: "science fiction" and "romance". $U$ connects peopel to concepts. $V$ connects movies to concepts. $\Sigma$ give the strength of each of the concepts. 
11.3.3 Dimensionality Reduction Using SVD End of explanation show_image('fig11_10.png', figsize=(8, 10)) Explanation: If we set the $s$ smallest singular values to 0, then we can also eliminate the corresponding $s$ rows of $U$ and $V$. End of explanation show_image('fig11_13.png') Explanation: 11.3.4 Why Zeroing Low Singular Values Works Let $M = P Q R$, $m_{i j} = \sum_k \sum_l p_{ik} q_{kl} r_{lj}$. Then \begin{align} \| M \|^2 &= \sum_i \sum_j (m_{ij})^2 \ &= \sum_i \sum_j (\sum_k \sum_l p_{ik} q_{kl} r_{lj})^2 \ &= \sum_i \sum_j \left ( \sum_k \sum_l \sum_n \sum_m p_{ik} q_{kl} r_{lj} p_{in} q_{nm} r_{mj} \right) \ &\text{as $Q$ is diagonal matrix, $q_{kl}$ and $q_{nm}$ will be 0 unless $k = l$ and $n = m$.} \ &= \sum_i \sum_j \sum_k \sum_n p_{ik} q_{kk} r_{kj} p_{in} q_{nn} r_{nj} \ &= \sum_j \sum_k \sum_n \color{blue}{\sum_i p_{ik} p_{in}} q_{kk} r_{kj} q_{nn} r_{nj} \ &\text{as } P = U, \sum_i p_{ik} p_{in} = 1 \text{ if } k = n \text{ and 0 otherwise} \ &= \sum_j \sum_k q_{kk} r_{kj} q_{kk} r_{kj} \ &= \sum_k (q_{kk})^2 \end{align} How many singular values should we retain? A useful rule of thumb is to retain enough singular values to make up 90\% of the energy in $\Sigma$. 11.3.5 Querying Using Concepts Let $q$ is the vector of user Quincy what movies he would like? "concept space": $q V$, select the one whose score is highest. find users similar to Quincy? measure the similarity of users by their cosine distance in concept space. 11.3.6 Computing the SVD of a Matrix The SVD of a matrix $M$ is strongly connected to the eigenvalues of the symmetric matrices $M^T M$and $M M^T$. $M^T = (U \Sigma V^T)^T = V \Sigma^T U^T = V \Sigma U^T$ \begin{align} M^T M &= V \Sigma U^T U \Sigma V^T \ &= V \Sigma^2 V^T \ M^T M V &= V \Sigma^2 V^T V \ & = V \Sigma^2 \end{align} similar, $M M^T U = U \Sigma^2$. 11.3.7 Exercises for Section 11.3 11.4 CUR Decomposition SVD: even if $M$ is sparse, $U$ and $V$ will be dense. CUR: if $M$ is sparse, $C$ and $R$ will be sparse. 11.4.2 Choosing Rows and Columns Properly Let $f = \sum_{i,j} m_{ij}^2$. find $C$ pick $r$ rows, and each row is picked with $p_i = \frac{\sum_j m_{ij}^2}{f}$. normailize: each row is divided by $\sqrt{r p_i}$. find $R$ selected in the analogous way. Counstructing $U$. find $M$, that is the intersection of the chosen columns of $C$ and $R$. compute the SVD of $W$: $W = X \Sigma Y^T$. compute $\Sigma^+$, the Moore-Penrose pseudoinverse of the diagonal matrix $\Sigma$: replace $\sigma$ by $1/\sigma$ if $\sigma \neq 0$. $U = Y (\Sigma^+)^2 X^T$. End of explanation show_image('ex11_17.png') #Exercise Explanation: Eliminating Duplicate Rows and Columns it is possible that a single row or column is selected more than once, how to deal with it? let it go. or, merge same rows and/or columns $W$ will not be a sqaure matrix, then we need transpose the result to get $\Sigma^+$. End of explanation
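The 90%-of-the-energy rule of thumb quoted in Section 11.3.4 is easy to apply numerically; the following sketch (with an arbitrary random matrix standing in for real data) keeps the smallest k whose squared singular values reach that threshold.

# Hedged sketch of the "retain 90% of the energy" heuristic using numpy's SVD.
import numpy as np

A = np.random.rand(20, 8)
U, s, Vt = np.linalg.svd(A, full_matrices=False)

energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.9)) + 1    # smallest k capturing >= 90% of the energy
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # rank-k reconstruction
print(k, np.linalg.norm(A - A_k) / np.linalg.norm(A))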
1,010
Given the following text description, write Python code to implement the functionality described below step by step Description: How many movies are listed in the titles dataframe? Step1: 212811 What are the earliest two films listed in the titles dataframe? Step2: Reproduction of the Corbett and Fitzimmons Fight, Miss Jerry How many movies have the title "Hamlet"? Step3: 19 How many movies are titled "North by Northwest"? Step4: 1 When was the first movie titled "Hamlet" made? Step5: 1910 List all of the "Treasure Island" movies from earliest to most recent. Step6: How many movies were made in the year 1950? Step7: 1033 How many movies were made in the year 1960? Step8: 1423 How many movies were made from 1950 through 1959? Step9: 12051 In what years has a movie titled "Batman" been released? Step10: How many roles were there in the movie "Inception"? Step11: 72 How many roles in the movie "Inception" are NOT ranked by an "n" value? Step12: 21 But how many roles in the movie "Inception" did receive an "n" value? Step13: 51 Display the cast of "North by Northwest" in their correct "n"-value order, ignoring roles that did not earn a numeric "n" value. Step14: Display the entire cast, in "n"-order, of the 1972 film "Sleuth". Step15: Now display the entire cast, in "n"-order, of the 2007 version of "Sleuth". Step16: How many roles were credited in the silent 1921 version of Hamlet? Step17: 9 How many roles were credited in Branagh’s 1996 Hamlet? Step18: 55 How many "Hamlet" roles have been listed in all film credits through history? Step19: 81 How many people have played an "Ophelia"? Step20: 96 How many people have played a role called "The Dude"? Step21: 16 How many people have played a role called "The Stranger"? Step22: 190 How many roles has Sidney Poitier played throughout his career? Step23: 43 How many roles has Judi Dench played? Step24: 51 List the supporting roles (having n=2) played by Cary Grant in the 1940s, in order by year. Step25: List the leading roles that Cary Grant played in the 1940s in order by year. Step26: How many roles were available for actors in the 1950s? Step27: 147404 How many roles were avilable for actresses in the 1950s? Step28: 106867 How many leading roles (n=1) were available from the beginning of film history through 1980? Step29: 61285 How many non-leading roles were available through from the beginning of film history through 1980? Step30: 630932 How many roles through 1980 were minor enough that they did not warrant a numeric "n" rank?
Python Code: titles.count() Explanation: How many movies are listed in the titles dataframe? End of explanation titles.sort('year').head() Explanation: 212811 What are the earliest two films listed in the titles dataframe? End of explanation t = titles t[t.title == 'Hamlet'].count() Explanation: Reproduction of the Corbett and Fitzimmons Fight, Miss Jerry How many movies have the title "Hamlet"? End of explanation t = titles t[t.title == "North by Northwest"] Explanation: 19 How many movies are titled "North by Northwest"? End of explanation t = titles t[t.title == 'Hamlet'].sort('year').head() Explanation: 1 When was the first movie titled "Hamlet" made? End of explanation t = titles t[t.title == "Treasure Island"].sort('year') Explanation: 1910 List all of the "Treasure Island" movies from earliest to most recent. End of explanation t = titles t[t.year == 1950].count() Explanation: How many movies were made in the year 1950? End of explanation t = titles t[t.year == 1960].count() Explanation: 1033 How many movies were made in the year 1960? End of explanation t = titles t[(t.year >= 1950) & (t.year <= 1959)].count() Explanation: 1423 How many movies were made from 1950 through 1959? End of explanation t = titles t[t.title == 'Batman'] Explanation: 12051 In what years has a movie titled "Batman" been released? End of explanation c = cast c = len(c[c.title == 'Inception']) c Explanation: How many roles were there in the movie "Inception"? End of explanation c = cast c = c[c.title == 'Inception'] c = c[c.n.isnull()] len(c) Explanation: 72 How many roles in the movie "Inception" are NOT ranked by an "n" value? End of explanation c = cast c = c[c.title == 'Inception'] c = c[c.n.notnull()] len(c) Explanation: 21 But how many roles in the movie "Inception" did receive an "n" value? End of explanation c = cast c = c[c.title == "North by Northwest"] c = c[c.n.notnull()] c.sort('n') Explanation: 51 Display the cast of "North by Northwest" in their correct "n"-value order, ignoring roles that did not earn a numeric "n" value. End of explanation c = cast c = c[c.title == "Sleuth"] c.sort(['n']) Explanation: Display the entire cast, in "n"-order, of the 1972 film "Sleuth". End of explanation c = cast c = c[(c.title == 'Sleuth') & (c.year == 2007)] c.sort(['n']) Explanation: Now display the entire cast, in "n"-order, of the 2007 version of "Sleuth". End of explanation c = cast c = c[(c.title == 'Hamlet') & (c.year == 1921)] len(c.n) Explanation: How many roles were credited in the silent 1921 version of Hamlet? End of explanation c = cast c = c[(c.title == 'Hamlet') & (c.year == 1996)] len(c.n) Explanation: 9 How many roles were credited in Branagh’s 1996 Hamlet? End of explanation c = cast c = c[c.character == 'Hamlet'] len(c) Explanation: 55 How many "Hamlet" roles have been listed in all film credits through history? End of explanation c = cast c = c[c.character == 'Ophelia'] len(c) Explanation: 81 How many people have played an "Ophelia"? End of explanation c = cast c = c[c.character == "The Dude"] len(c) Explanation: 96 How many people have played a role called "The Dude"? End of explanation c = cast c = c[c.character == 'The Stranger'] len(c) Explanation: 16 How many people have played a role called "The Stranger"? End of explanation c = cast c = c[c.name == "Sidney Poitier"] len(c) Explanation: 190 How many roles has Sidney Poitier played throughout his career? End of explanation c = cast c = c[c.name == "Judi Dench"] len(c) Explanation: 43 How many roles has Judi Dench played? 
End of explanation c = cast c = c[(c.name == 'Cary Grant')] c = c[(c.year >= 1940) & (c.year < 1950)] c = c[c.n == 2] c Explanation: 51 List the supporting roles (having n=2) played by Cary Grant in the 1940s, in order by year. End of explanation c = cast c = c[c.name == 'Cary Grant'] c = c[(c.year >= 1940) & (c.year < 1950)] c.sort('year') Explanation: List the leading roles that Cary Grant played in the 1940s in order by year. End of explanation c = cast c = c[(c.year >= 1950) & (c.year < 1960)] c = c[c.type == 'actor'] len(c.n) Explanation: How many roles were available for actors in the 1950s? End of explanation c = cast c = c[c.type == 'actress'] len(c.n) Explanation: 147404 How many roles were avilable for actresses in the 1950s? End of explanation c = cast c = c[c.year <= 1980] c = c[c.n == 1] c.count() Explanation: 106867 How many leading roles (n=1) were available from the beginning of film history through 1980? End of explanation c = cast c = c[c.year <= 1980] c = c[c.n > 1] c.count() Explanation: 61285 How many non-leading roles were available through from the beginning of film history through 1980? End of explanation c = cast c = c[c.n.isnull()] len(c) Explanation: 630932 How many roles through 1980 were minor enough that they did not warrant a numeric "n" rank? End of explanation
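One small caveat about the last snippet above: it counts every role with a missing "n" value, without the "through 1980" restriction the question asks for. A hedged sketch of the year-restricted query, assuming the same cast DataFrame, would be:

# Illustrative correction sketch: restrict to films through 1980 before counting
# the roles that never received a numeric "n" rank.
c = cast
c = c[(c.year <= 1980) & (c.n.isnull())]
len(c)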
1,011
Given the following text description, write Python code to implement the functionality described below step by step Description: We will use matplotlib.pyplot for plotting and scipy's netcdf package for reading the model output. The %pylab inline causes figures to appear in the page and conveniently aliases pyplot to plt (which is becoming a widely used alias). This analysis assumes you changed DAYMAX to some multiple of 5 so that there are multiple time records in the model output. To see this notebook with figures, see https://gist.github.com/adcroft/2a2b91d66625fd534372 Step1: We first create a netcdf object, or "handle", to the netcdf file. We'll also list all the objects in the netcdf object. Step2: Now we will create a variable object for the "e" variable in the file. Again, I'm labelling it as a handle to distinguish it from a numpy array or raw data. We'll also look at an "attribute" and print the shape of the data. Step3: "e" is 4-dimensional. netcdf files and objects are indexed [n,k,j,i] for the time-, vertical-, meridional-, and zonal-axes. Let's take a quick look at the first record [n=0] of the top interface [k=0]. Step4: The data looks OKish. No scale! And see that "<matplotlib...>" line? That's a handle returned by the matplotlib function. Hide it with a semicolon. Let's add a scale and change the colormap. Step5: We have 4D data but can only visualize by projecting onto a 2D medium (the page). Let's solve that by going interactive! Step6: We'll need to know the range to fix the color scale... Step7: We define a simple function that takes the record number as an argument and then plots the top interface (k=0) for that record. We then use the interact() function to do some magic! Step8: Unable to scroll the slider steadily enough? We'll use a loop to redraw for us...
Python Code: %pylab inline import scipy.io.netcdf Explanation: We will use matplotlib.pyplot for plotting and scipy's netcdf package for reading the model output. The %pylab inline causes figures to appear in the page and conveniently alias pyplot to plt (which is becoming a widely used alias). This analysis assumes you changed DAYMAX to some multiple of 5 so that there are multiple time records in the model output. To see this notebook with figures, see https://gist.github.com/adcroft/2a2b91d66625fd534372. End of explanation prog_file = scipy.io.netcdf_file('prog__0001_006.nc') prog_file.variables Explanation: We first create a netcdf object, or "handle", to the netcdf file. We'll also list all the objects in the netcdf object. End of explanation e_handle = prog_file.variables['e'] print('Description =', e_handle.long_name) print('Shape =',e_handle.shape) Explanation: Now we will create a variable object for the "e" variable in the file. Again, I'm labelling it as a handle to distinguish it from a numpy array or raw data. We'll also look at an "attribute" and print the shape of the data. End of explanation plt.pcolormesh( e_handle[0,0] ) Explanation: "e" is 4-dimensional. netcdf files and objects are index [n,k,j,i] for the time-, vertical-, meridional-, zonal-axes. Let's take a quick look at the first record [n=0] of the top interface [k=0]. End of explanation plt.pcolormesh( e_handle[0,0], cmap=cm.seismic ); plt.colorbar(); Explanation: The data looks OKish. No scale! And see that "&lt;matplotlib...&gt;" line? That's a handle returned by the matplotlib function. Hide it with a semicolon. Let's add a scale and change the colormap. End of explanation import ipywidgets Explanation: We have 4D data but can only visualize by projecting on a 2D medium (the page). Let's solve that by going interactive! End of explanation [e_handle[:,0].min(), e_handle[:,0].max()] Explanation: We'll need to know the range to fix the color scale... End of explanation def plot_ssh(record): plt.pcolormesh( e_handle[record,0], cmap=cm.spectral ) plt.clim(-.5,.8) # Fixed scale here plt.colorbar() ipywidgets.interact(plot_ssh, record=(0,e_handle.shape[0]-1,1)); Explanation: We define a simple function that takes the record number as an argument and then plots the top interface (k=0) for that record. We then use the interact() function to do some magic! End of explanation from IPython import display for n in range( e_handle.shape[0]): display.display(plt.gcf()) plt.clf() plot_ssh(n) display.clear_output(wait=True) Explanation: Unable to scroll the slider steadily enough? We'll use a loop to redraw for us... End of explanation
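If the redraw loop at the end feels clunky, matplotlib's animation machinery is a natural next step. The sketch below is an assumed extension (not part of the original notebook) that reuses the plot_ssh helper and e_handle defined above, and it presumes an ffmpeg writer is available.

# Hedged sketch: render the per-record views as a movie instead of redrawing inline.
import matplotlib.animation as animation

fig = plt.figure()

def draw_frame(n):
    plt.clf()
    plot_ssh(n)  # plotting helper defined earlier in the notebook

anim = animation.FuncAnimation(fig, draw_frame, frames=e_handle.shape[0])
anim.save('ssh.mp4', writer='ffmpeg', fps=5)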
1,012
Given the following text description, write Python code to implement the functionality described below step by step Description: Relationship between common similarity metrics Reference Step1: Inner product Step2: Covariance Average centered inner product Step3: Cosine Similarity Normalized (L2) inner product Step4: Pearson Correlation Normalized (L2) centered inner product Step5: OLS (univariate w/o intercept) Partially normalized inner product, where partially means applied to only one vector Step6: OLS (univariate w/ intercept) Centered, partially normalized inner product, where partially means applied to only one vector
Python Code: # Import some stuff import numpy as np import pandas as pd import scipy.spatial.distance as spd from pymer4.simulate import easy_multivariate_normal from pymer4.models import Lm import matplotlib.pyplot as plt % matplotlib inline # Prep some data X = easy_multivariate_normal(50,2,corrs=.2) a, b = X[:,0], X[:,1] Explanation: Relationship between common similarity metrics Reference End of explanation np.dot(a,b) Explanation: Inner product End of explanation a_centered = a - a.mean() b_centered = b - b.mean() np.dot(a_centered,b_centered) / len(a) # could have used len(b) instead # Check our work np.cov(a,b,ddof=0)[0][1] Explanation: Covariance Average centered inner product End of explanation # Euclidean/L2 norm = square root of sum of squared values # algebra form a_norm = np.sqrt(np.sum(np.power(a,2))) # matrix form b_norm = np.sqrt(np.dot(b,b.T)) # numpy short-cut # np.linalg.norm(a) np.dot(a,b) / (a_norm * b_norm) # Check our work (subract 1 because scipy returns distances) 1 - spd.cosine(a,b) Explanation: Cosine Similarity Normalized (L2) inner product End of explanation # Can think of this as normalized covariance OR centered cosine similarity a_centered_norm = np.linalg.norm(a_centered) b_centered_norm = np.linalg.norm(b_centered) np.dot(a_centered,b_centered) / (a_centered_norm * b_centered_norm) # Check our work 1 - spd.correlation(a,b) Explanation: Pearson Correlation Normalized (L2) centered inner product End of explanation # Can think of this as cosine similarity using only one vector np.dot(a,b) / (a_norm * a_norm) # Check our work model = Lm('B ~ 0 + A',data=pd.DataFrame({'A':a,'B':b})) model.fit(summarize=False) model.coefs.iloc[-1,0] Explanation: OLS (univariate w/o intercept) Partially normalized inner product, where partially means applied to only one vector End of explanation # In the numerator we could actually center a or b, or both. np.dot(a_centered,b) / (a_centered_norm * a_centered_norm) # Check our work model = Lm('B ~ A',data=pd.DataFrame({'A':a,'B':b})) model.fit(summarize=False) model.coefs.iloc[-1,0] Explanation: OLS (univariate w/ intercept) Centered, partially normalized inner product, where partially means applied to only one vector End of explanation
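One more identity ties these threads together and may be worth checking numerically: the OLS slope with an intercept equals cov(a, b) / var(a), which is just the correlation rescaled by the standard deviations. The sketch below assumes the a and b arrays simulated above and uses ddof=0 throughout so the estimators match.

# Hedged cross-check of slope = cov/var = r * (sd_b / sd_a).
import numpy as np

slope = np.cov(a, b, ddof=0)[0, 1] / np.var(a)
r = np.corrcoef(a, b)[0, 1]
print(slope, r * b.std() / a.std())  # the two printed values should match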
1,013
Given the following text description, write Python code to implement the functionality described below step by step Description: Easy unsupervised learning With Python, scikit-learn, and handwritten digits Loading the digits dataset It's a dataset that contains around 1700 images (8x8 pixels) of handwritten digits. Recognising handwriting is a hard task for a computer, but it can nowadays be done with good reliability. The dataset also contains labels that tell us the ground truth (what digit is contained in each image), but we're not using them (except at the end, for testing), so this is unsupervised learning. We'll pretend to only know that there are 10 classes of images (corresponding to 0, 1,..., 9). Step1: Display an example of these digits These are some of the images we're working with. Step2: Dimensionality reduction Our world is three-dimensional. Mathematically, this means that we need three numbers to identify a point in space Step3: Display the whole dataset with t-SNE t-SNE (t-distributed stochastic neighbor embedding) is a more sophisticated algorithm which performs dimensionality reduction in a non-linear way, as opposed to PCA, which is linear only. Step4: Clustering with k-means We can see in the plot above that the dimensionality reduction is so clever that it lumps together images that do look similar to us humans. We just need to identify those clusters and stick a label on them Step5: You can see that the colours are consistent in those groups, and each colour corresponds more or less to one digit. There are mistakes, of course... Evaluate performance We used unsupervised learning, which means that we didn't use the labels provided with the dataset at all. Here, I evaluate the performance using the labels as well Step6: Measure accuracy Here we compare to the ground truth. Step7: Is this good or bad? This question has a bit of a philosophical answer. Without labels, our algorithm cannot know that we, humans, consider these all the same digit. In a context of supervised learning, it could, because we tell it. But here, it can only rely on geometrical similarity. But geometrical similarity would imply that the second of the 1s above might be a slightly tilted 7 instead... Show example prediction
Python Code: from sklearn.datasets import load_digits digits_dataset = load_digits() print(digits_dataset.DESCR) digits = digits_dataset['images'] n_digits = digits.shape[0] # how many images are there? Explanation: Easy unsupervised learning With python, scikit-learn, and handwritten digits Loading the digits dataset It's a dataset that contains around 1700 images (8x8 pixels) of handwritten digits. Recognising handwriting is a hard task for a computer, but can nowadays be done with good reliability. The dataset also contains labels that tell us the ground truth (what digit is contained in each image), but we're not using them (except at the end, for testing), so this is unsupervised learning. We'll pretend to only know that there are 10 classes of images (corresponding to 0, 1,..., 9). End of explanation n = 20 sample_size = n**2 d = 10/n plt.figure(figsize=(10, 10)) random_sample = np.random.choice(n_digits, replace=False, size=sample_size) for i in range(sample_size): x = i//n y = i%n plt.imshow(digits[i], cmap=plt.cm.gray_r, extent=(x, x+d, y, y+d)) plt.xlim([-.5, 20]) plt.ylim([-.5, 20]) plt.xticks([]) plt.yticks([]); Explanation: Display an example of these digits These are some of the images we're working with. End of explanation from sklearn.decomposition import PCA pca_model = PCA(n_components=2) pca_embedding = pca_model.fit_transform(digits.reshape([-1, 64])) plt.figure(figsize=(10, 10)) plt.gca().set_aspect('equal') d = .6 for i in range(n_digits): plt.imshow(digits[i], cmap=plt.cm.gray_r, extent=(pca_embedding[i, 0], pca_embedding[i, 0]+d, pca_embedding[i, 1], pca_embedding[i, 1]+d)) plt.xlim([pca_embedding[:, 0].min()-1, pca_embedding[:, 0].max()+1]) plt.ylim([pca_embedding[:, 1].min()-1, pca_embedding[:, 1].max()+1]); Explanation: Dimensionality reduction Our world is three-dimensional. Mathematically, this means that we need three numbers to identify a point in space: for example the $x$, $y$, $z$ Cartesian coordinates, or, on Earth, latitude, longitude, and altitude. The "space of digits" is 64-dimensional. What does this mean? Simply, that we need 64 numbers to describe a digit. Each of these number represents a shade of gray for each of the 64 pixels (8x8) of the image. It's impossible to visualise 64 dimensions geometrically! So, to put them together, we try to put them in perspective in the same sense as a three dimensional object is drawn on a piece of paper in two dimensions. We try to do this in a clever way that keeps as much information as possible about all the 64 dimensions. Display the whole dataset with PCA Principal Component Analysis is a simple algorithm that picks the directions along which there is most variance in the data. We choose to keep the first two only, so that we can draw them down on a plane. End of explanation from sklearn.manifold import TSNE # try playing with perplexity, which is a free parameter tsne_model = TSNE(perplexity=30.) 
tsne_embedding = tsne_model.fit_transform(digits.reshape([-1, 64])) plt.figure(figsize=(10, 10)) plt.gca().set_aspect('equal') d = .4 for i in range(n_digits): plt.imshow(digits[i], cmap=plt.cm.gray_r, extent=(tsne_embedding[i, 0], tsne_embedding[i, 0]+d, tsne_embedding[i, 1], tsne_embedding[i, 1]+d)) plt.xlim([tsne_embedding[:, 0].min()-1, tsne_embedding[:, 0].max()+1]) plt.ylim([tsne_embedding[:, 1].min()-1, tsne_embedding[:, 1].max()+1]); Explanation: Display the whole dataset with t-SNE t-SNE (t-distributed stochastic neighbor embedding) is a more sophisticated algorithm which performs dimensionality reduction in a non-linear way, as opposed to PCA which is linear only. End of explanation from sklearn.cluster import KMeans n_classes = 10 # there are 10 digits 0...9 kmeans_model = KMeans(n_clusters=n_classes) predicted_labels = kmeans_model.fit_predict(digits.reshape([-1, 64])) plt.figure(figsize=(10, 10)) plt.gca().set_aspect('equal') for i in range(n_digits): plt.imshow(digits[i], cmap=plt.cm.gray_r, extent=(tsne_embedding[i, 0], tsne_embedding[i, 0]+d, tsne_embedding[i, 1], tsne_embedding[i, 1]+d)) plt.imshow([[predicted_labels[i]]], cmap=plt.cm.jet, extent=(tsne_embedding[i, 0], tsne_embedding[i, 0]+d, tsne_embedding[i, 1], tsne_embedding[i, 1]+d), vmin=0, alpha=.2, vmax=predicted_labels.max()) plt.xlim([tsne_embedding[:, 0].min()-1, tsne_embedding[:, 0].max()+1]) plt.ylim([tsne_embedding[:, 1].min()-1, tsne_embedding[:, 1].max()+1]); Explanation: Clustering with k-means We can see in the plot above that the dimensionality reduction is so clever that it lumps together images that do look similar to us humans. We just need to identify those clusters and stick a label on them: this group here, we call it "group of 1s", these others are "the 8s", and so on. "Clustering" means finding groups of points that are near each other. There is no reason to limit ourselves to the dimensionality-reduced space above: we can cluster based on the full 64-dimensional space, so that we don't lose information. End of explanation true_labels = digits_dataset['target'] consensus_for_cluster = np.empty(n_classes, dtype=int) for i in range(n_classes): digits_in_cluster = predicted_labels == i true_labels_cluster = true_labels[digits_in_cluster] consensus_for_cluster[i] = np.argmax(np.bincount(true_labels_cluster)) # translate "cluster id" description into "what number is this" predicted_digits = consensus_for_cluster[predicted_labels] Explanation: You can see that the colours are consistent in those groups, and each color corresponds more or less to one digits. There are mistakes, of course... Evaluate performance We used unsupervised learning. Which means that we didn't use at all the labels that were provided with the dataset. Here, I evaluate the performance also using labels: it is evident from the plot above that there is a correspondence between clusters and digits. So we can do a crude thing where we consider "correct" the given labels that coincide with those of the majority of the cluster, and wrong otherwise. Find what cluster corresponds to which actual digit End of explanation training_set_accuracy = np.mean(predicted_digits == true_labels) print("Accuracy on the training set is:", training_set_accuracy, "%") Explanation: Measure accuracy Here we compare to the ground truth. 
End of explanation random_digit = np.random.choice(n_digits) print("I chose image n.",random_digit, "and it looks as below.") plt.figure(figsize=(2,2)) plt.imshow(digits[random_digit], cmap=plt.cm.gray_r) plt.axis('off'); prediction = predicted_digits[random_digit] truth = true_labels[random_digit] print("I believe this image contains the digit", prediction) if prediction==truth: print("and it looks like I'm right :D") else: print("but unfortunately the dataset says it's a", truth, ":(") Explanation: Is this good or bad? This is question has a bit of a philosophical answer. Without labels, our algorithm cannot know that we, humans, consider these all the same digit. In a context of supervised learning, it could, because we tell it. But here, it can only rely on geometrical similarity. But geometrical similarity would imply that the second of the 1s above might be a slightly tilted 7 instead... Show example prediction End of explanation
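A hedged addition to the evaluation: scikit-learn ships label-permutation-invariant clustering scores that avoid the manual majority-vote relabelling step altogether. The snippet assumes the predicted_labels and true_labels arrays built above.

# Illustrative sketch: cluster-quality scores that do not care how clusters are numbered.
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

print('Adjusted Rand index:', adjusted_rand_score(true_labels, predicted_labels))
print('Normalized mutual information:', normalized_mutual_info_score(true_labels, predicted_labels))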
1,014
Given the following text description, write Python code to implement the functionality described below step by step Description: Which artist has the most songs listed? How many Billboard hits did the Rolling Stones have, versus the Beatles? How many songs are there in total, and over how many years? How many song lyrics contain the word 'love'? How many song lyrics include the word 'no'? Which song uses the word 'no' the most? Did any songs rank in more than one year? What is the most common word in #1-ranked songs? Are 'I'/'me' or 'you' more common? Which artist has the most number ones? Make a list of all the #1s Step1: What is the most common word in #1-ranked songs? Step2: Make a list of all the #1s Step3: Which artist has the most number ones?
Python Code: import matplotlib.pyplot as plt %matplotlib inline #Which artist has the most songs listed? print(df['Artist'].value_counts()[:1]) #How many hits did the Stones have? len(df[df['Artist'] == 'the rolling stones']) #How many did The Beatles have? len(df[df['Artist'] == 'the beatles']) #How many songs are there total? How many years? (df['Year'].value_counts()) #How many songs have the word 'love' in their lyrics? len(df[df['Lyrics'].str.contains(" love ", na=False)]) #How many songs have the word 'no' in their lyrics? len(df[df['Lyrics'].str.contains(" no ", na=False)]) #Did any songs rank for more than one year? df['Song'].value_counts()[:10] (df['Song'].value_counts()) Explanation: which artist has the most songs listed? how many billboard hits did the rolling stones have? vs the beatles? how many songs total? how many years? how many song lyrics have the word 'love'? how many song lyrics include the word 'no'? which song has the most uses of the word 'no'? did any songs rank for more than one year? what is the most common word in #1-ranked songs? i/me vs you more common? which artist has most number ones? make a list of all the #1s End of explanation (df['Lyrics'].value_counts()) Explanation: what is the most common word in #1-ranked songs? End of explanation (df['Artist'].value_counts()) Explanation: make a list of all the #1s End of explanation list(df['Rank'].groupby(df['Artist'])) #list(df['preTestScore'].groupby(df['company'])) Explanation: which artist has most number ones? End of explanation
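The final groupby above is an awkward way to answer the "most number ones" question. A hedged alternative, assuming the same df and that its 'Rank' column is numeric with 1 denoting a number-one entry, is a simple filter-then-count:

# Illustrative sketch: artists ranked by how many #1 entries they have.
number_ones = df[df['Rank'] == 1]
print(number_ones['Artist'].value_counts().head(10))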
1,015
Given the following text description, write Python code to implement the functionality described below step by step Description: Foodnet - Spanish cuisine analysis Author Step1: Graph building Step2: Graph analytics Step3: Visualitzations
Python Code: #imports import networkx as nx import pandas as pd from itertools import combinations import matplotlib.pyplot as plt from matplotlib import pylab import sys from itertools import combinations import operator from operator import itemgetter from scipy import integrate # Exploring data recipes_df = pd.read_csv('../data/clean_spanish_recipes.csv',sep='","') print recipes_df.keys() print "\n" print recipes_df.head() # Transforming data #recipes_df["ingredients"].apply(encode("latin-1")) recipes_df["ingredients"] = recipes_df["ingredients"].str.split("', '") print type(recipes_df["ingredients"][0]) Explanation: Foodnet - Spanish cuisine analysis Author: Marc Cadús García In this notebook I pretend to apply different analytics techniques over a graph representing the Spanish cuisine in order to extract new insights. It is expected that graph algorithms may help to extract new knowledge for helping to understand better the Spanish culinary culture. To do so, I a going to use Python networkX. I have scrapped near 3000 Spanish recipes from cookpad.com. These recipes and the scrapping code are available in this repository. Data exploration and transformation End of explanation def build_graph(nodes, graph): # Generate a new graph. Edges are nodes permutations in pairs edges = combinations(nodes, 2) graph.add_nodes_from(nodes) weighted_edges = list() for edge in edges: if graph.has_edge(edge[0],edge[1]): weighted_edges.append((edge[0],edge[1],graph[edge[0]][edge[1]]['weight']+1)) else: weighted_edges.append((edge[0],edge[1],1)) graph.add_weighted_edges_from(weighted_edges) def save_graph(graph,file_name): #initialze Figure plt.figure(num=None, figsize=(120, 120), dpi=60) plt.axis('off') fig = plt.figure(1) pos = nx.spring_layout(graph) d = nx.degree(graph) nx.draw_networkx_nodes(graph,pos, nodelist=d.keys(), node_size=[v * 10 for v in d.values()]) nx.draw_networkx_edges(graph,pos) nx.draw_networkx_labels(graph,pos) cut = 1.00 xmax = cut * max(xx for xx, yy in pos.values()) ymax = cut * max(yy for xx, yy in pos.values()) plt.xlim(0, xmax) plt.ylim(0, ymax) plt.savefig(file_name,bbox_inches="tight") pylab.close() del fig # Generating graph recipes_graph = nx.Graph() recipes_graph.clear() for val in recipes_df["ingredients"]: build_graph(val,recipes_graph) Explanation: Graph building End of explanation #Num of nodes print "Total num of nodes: "+str(len(recipes_graph.nodes())) print "Total num of edges: "+str(len(recipes_graph.edges())) # Top 20 higher degree nodes degrees = sorted(recipes_graph.degree_iter(),key=itemgetter(1),reverse=True) high_degree_nodes = list() for node in degrees[:20]: high_degree_nodes.append(node[0]) print node # Top 20 eigenvector centrality eigenvector_centrality = nx.eigenvector_centrality(recipes_graph) eigenvector_centrality_sorted = sorted(eigenvector_centrality.items(), key=itemgetter(1), reverse=True) for node in eigenvector_centrality_sorted[1:21]: print node # Top 20 pagerank centrality pagerank_centrality = nx.eigenvector_centrality(recipes_graph) pagerank_centrality_sorted = sorted(pagerank_centrality.items(), key=itemgetter(1), reverse=True) for node in pagerank_centrality_sorted[1:21]: print node # Conected components connected_component = list(nx.connected_component_subgraphs(recipes_graph)) print "There is "+str(len(connected_component))+" connected componentes" for component in connected_component: print "- Component of "+str(len(component))+ " nodes" if (len(component)==1): print "\t- Ingredient: "+str(component.nodes()) main_component = 
connected_component[0] # Graph diameter print "Nodes having minimum eccentricity\n"+str(nx.center(main_component)) print "Nodes having maximum eccentricity\n"+str(nx.periphery(main_component)) print "Minimum eccentricity "+str(nx.radius(main_component)) print "Maximum eccentricity "+str(nx.diameter(main_component)) # Minimum node cut print "Nodes to be removed to disconnect the graph: "+str(nx.minimum_node_cut(main_component)) Explanation: Graph analytics End of explanation # To avoid encoding problems reload(sys) sys.setdefaultencoding('utf8') # Original graph save_graph(main_component,"original_graph.jpg") def extract_backbone(g, alpha): backbone_graph = nx.Graph() for node in g: k_n = len(g[node]) if k_n > 1: sum_w = sum( g[node][neighbor]['weight'] for neighbor in g[node] ) for neighbor in g[node]: edgeWeight = g[node][neighbor]['weight'] pij = float(edgeWeight)/sum_w if (1-pij)**(k_n-1) < alpha: # equation 2 backbone_graph.add_edge( node,neighbor, weight = edgeWeight) return backbone_graph save_graph(extract_backbone(main_component,0.01),"backbone_graph.jpg") # Visualizing Higher degree nodes k = recipes_graph.subgraph(high_degree_nodes) save_graph(k,"high_degree_subgraph.jpg") Explanation: Visualitzations End of explanation
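Because the edge weights already encode how often two ingredients appear in the same recipe, the strongest pairings can be read straight off the graph. The sketch below builds a toy co-occurrence graph in the same spirit as build_graph so it runs on its own; with the real recipes_graph only the last two lines are needed.

import networkx as nx
from itertools import combinations
from collections import Counter

# Toy recipes standing in for the scraped data
recipes = [["tomato", "olive oil", "garlic"],
           ["tomato", "olive oil", "onion"],
           ["garlic", "olive oil", "onion"]]

pair_counts = Counter()
for ingredients in recipes:
    pair_counts.update(combinations(sorted(set(ingredients)), 2))

toy_graph = nx.Graph()
for (a, b), w in pair_counts.items():
    toy_graph.add_edge(a, b, weight=w)

# Edges sorted by co-occurrence count, strongest pairings first
strongest = sorted(toy_graph.edges(data="weight"), key=lambda e: e[2], reverse=True)
print(strongest[:5])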
1,016
Given the following text description, write Python code to implement the functionality described below step by step Description: The goal is to see how we can read the data contained in a netCDF file. Several possibilities will be examined. Reading a local file Let's assume we have downlowded a file from CMEMS. We define the directory and the file name. datafile have to be adapted according to your case. Step1: To read the file we need the netCDF4 interface for python. Step2: where the first argurment of the files and 'r' indicates that it's open for reading ('w' would be used for writing).<br/> ds contains all the information about the dataset Step3: We can access the global attributes individually Step4: Data Now we want to load some of the variables Step5: Let's examine the variable temperature Step6: This means that the variable depends on two dimensions Step7: To get the variable attributes Step8: Quality flags Just a quick plot to see everything is fine. More details about the plots will be given later. Step9: It seems that we have not taken into accound the quality flags of the data. We can load the corresponding variable TEMP_QC. Step10: The meaning of the quality flags is also stored in the file. Step11: Now we will generate a new plot of the time series using only data with QF = 1. Step12: The resulting plot now seems correct, with values ranging roughly between 10 and 28ºC. Last thing to remember
Python Code: datafile = "~/CMEMS_INSTAC/INSITU_MED_NRT_OBSERVATIONS_013_035/history/mooring/IR_TS_MO_61198.nc" import os datafile = os.path.expanduser(datafile) Explanation: The goal is to see how we can read the data contained in a netCDF file. Several possibilities will be examined. Reading a local file Let's assume we have downlowded a file from CMEMS. We define the directory and the file name. datafile have to be adapted according to your case. End of explanation import netCDF4 ds = netCDF4.Dataset(datafile, 'r') Explanation: To read the file we need the netCDF4 interface for python. End of explanation ds Explanation: where the first argurment of the files and 'r' indicates that it's open for reading ('w' would be used for writing).<br/> ds contains all the information about the dataset: * Metadata (global attributes) * Dimensions * Variables Metadata End of explanation print 'Institution: ' + ds.institution print 'Reference: ' + ds.institution_references Explanation: We can access the global attributes individually: End of explanation time = ds.variables['TIME'] temperature = ds.variables['TEMP'] Explanation: Data Now we want to load some of the variables: we use the ds.variables End of explanation temperature Explanation: Let's examine the variable temperature End of explanation temperature_values = temperature[:] time_values = time[:] Explanation: This means that the variable depends on two dimensions: time and depth. We also know the long_name, standard_name, units, and other useful pieces of information concerning the temperature. To get the values corresponding to the variables, the synthax is: End of explanation print 'Time units: ' + time.units print 'Temperature units: ' + temperature.units Explanation: To get the variable attributes: End of explanation %matplotlib inline import matplotlib.pyplot as plt plt.plot(temperature) plt.show() Explanation: Quality flags Just a quick plot to see everything is fine. More details about the plots will be given later. End of explanation temperatureQC = ds.variables['TEMP_QC'] plt.plot(temperatureQC[:]) plt.show() Explanation: It seems that we have not taken into accound the quality flags of the data. We can load the corresponding variable TEMP_QC. End of explanation print 'Flag values: ' + str(temperatureQC.flag_values) print 'Flag meanings: ' + temperatureQC.flag_meanings Explanation: The meaning of the quality flags is also stored in the file. End of explanation plt.plot(temperature[(temperatureQC[:, 0] == 1), 0]) plt.show() Explanation: Now we will generate a new plot of the time series using only data with QF = 1. End of explanation nc.close() Explanation: The resulting plot now seems correct, with values ranging roughly between 10 and 28ºC. Last thing to remember: close the netCDF file! End of explanation
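One step that usually helps with these files, sketched under the assumption that ds, temperature and temperatureQC are still the objects opened above: TIME is stored as a number (typically days since a reference date), so netCDF4.num2date turns it into real datetimes for plotting. Note, too, that the closing cell above calls nc.close() although the handle created earlier is ds, so ds.close() is what actually releases the file.

import netCDF4
import matplotlib.pyplot as plt

time_var = ds.variables['TIME']
dates = netCDF4.num2date(time_var[:], units=time_var.units,
                         calendar=getattr(time_var, 'calendar', 'standard'))

temp = temperature[:]
good = temperatureQC[:, 0] == 1   # keep only "good data" flags, as above
plt.plot(dates[good], temp[good, 0])
plt.ylabel('Temperature (' + temperature.units + ')')
plt.show()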
1,017
Given the following text description, write Python code to implement the functionality described below step by step Description: Data from http Step1: We can list the columns in the dataset Step2: Let's look at a sorted list of the 10 most frequent types of incidents Step3: Let's group the incidents by year Step4: We can then look at the frequency of incidents that occured per year
Python Code: import pandas as pd %matplotlib inline pd.set_option('display.max_rows', 1000) pd.set_option('display.max_columns', 1000) df = pd.read_csv('fire-incidents.csv') df.head(3) df.shape Explanation: Data from http://catalog.data.gov/dataset/baton-rouge-fire-incidents End of explanation df.columns df['DISPATCH DATE'] = pd.to_datetime(df['DISPATCH DATE']) df['DISPATCH TIME'] = pd.to_datetime(df['DISPATCH TIME']) df['DISPATCH DATE'].min() df['DISPATCH DATE'].max() Explanation: We can list the columns in the dataset: End of explanation pd.value_counts(df['INCIDENT DESCRIPTION']).head(10) pd.value_counts(df['INCIDENT DESCRIPTION'], normalize=True).head(10).plot(kind='bar') Explanation: Let's look at a sorted list of the 10 most frequent types of incidents: End of explanation incidents_by_year = df.groupby(df['DISPATCH DATE'].dt.year) incidents_by_year.size().plot(kind='bar') incidents_by_year.sum() Explanation: Let's group the incidents by year End of explanation incidents_by_type = incidents_by_year['INCIDENT DESCRIPTION'] incidents_by_type.value_counts() incidents_by_type.value_counts(normalize=True) Explanation: We can then look at the frequency of incidents that occured per year End of explanation
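A natural next step after the per-year counts, kept as a minimal sketch: cross-tabulating year against incident type shows how the mix of incidents changes over time. Column names follow the notebook; the small inline frame only stands in for the real fire-incidents.csv.

import pandas as pd

df = pd.DataFrame({
    "DISPATCH DATE": pd.to_datetime(["2013-01-05", "2013-07-19", "2014-03-02", "2014-11-23"]),
    "INCIDENT DESCRIPTION": ["Building fire", "False alarm", "Building fire", "Vehicle fire"],
})

# Rows are dispatch years, columns are incident types, cells are counts
by_year_type = pd.crosstab(df["DISPATCH DATE"].dt.year, df["INCIDENT DESCRIPTION"])
print(by_year_type)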
1,018
Given the following text description, write Python code to implement the functionality described below step by step Description: Version control for fun and profit Step1: A repository Step2: And this is pretty much the essence of Git! First Step3: Other settings Change how you will edit text files (it will often ask you to edit messages and other information, and thus wants to know how you like to edit your files) Step4: Password memory Set git to use the credential memory cache so we don't have to retype passwords too frequently. On Linux, you should run the following (note that this requires git version 1.7.10 or newer) Step5: Github offers in its help pages instructions on how to configure the credentials helper for Mac OSX and Windows. Double-checking the result Step6: Stage 1 Step7: git init Step8: Note Step9: Now let's edit our first file in the test directory with a text editor... I'm doing it programatically here for automation purposes, but you'd normally be editing by hand Step10: git add Step11: We can now ask git about what happened with status Step12: git commit Step13: In the commit above, we used the -m flag to specify a message at the command line. If we don't do that, git will open the editor we specified in our configuration above and require that we enter a message. By default, git refuses to record changes that don't have a message to go along with them (though you can obviously 'cheat' by using an empty or meaningless string Step14: git diff Step15: And now we can ask git what is different Step16: The cycle of git virtue Step17: git log revisited First, let's see what the log shows us now Step18: Sometimes it's handy to see a very summarized version of the log Step19: Defining an alias Git supports aliases Step20: git mv and rm Step21: Note that these changes must be committed too, to become permanent! In git's world, until something hasn't been committed, it isn't permanently recorded anywhere. Step22: And git rm works in a similar fashion. Exercise Add a new file file2.txt, commit it, make some changes to it, commit them again, and then remove it (and don't forget to commit this last step!). 2. Single Local user, branching What is a branch? Simply a label for the 'current' commit in a sequence of ongoing commits Step23: We are now going to try two different routes of development Step24: 3. Using remotes as a single user We are now going to introduce the concept of a remote repository Step25: Since the above cell didn't produce any output after the git remote -v call, it means we have no remote repositories configured. Configuring a remote Log into GitHub, go to the new repository page and make a repository called test. Do not check the box that says Initialize this repository with a README, since we already have an existing repository here. That option is useful when you're starting first at Github and don't have a repo made already on a local computer. We can now follow the instructions from the next page Step26: Let's see the remote situation again Step27: Pushing changes to a remote repository Now push the master branch to the remote named origin Step28: We can now see this repository publicly on github. Using Git to Sync Work Let's see how this can be useful for backup and syncing work between two different computers. I'll simulate a 2nd computer by working in a different directory... Step29: Let's now make some changes in one 'computer' and synchronize them on the second. 
Step30: Now we put this new work up on the github server so it's available from the internet Step31: Now let's fetch that work from machine #1 Step32: An important aside Step33: And now we go back to the master branch, where we change the same file Step34: The conflict... So now let's see what happens if we try to merge the trouble branch into master Step35: Let's see what git has put into our file Step36: At this point, we go into the file with a text editor, decide which changes to keep, and make a new commit that records our decision. I've now made the edits, in this case I decided that both pieces of text were useful, but integrated them with some changes Step37: Let's then make our new commit
Python Code: !ls Explanation: Version control for fun and profit: Git: the tool you didn't know you needed Sources of this material: This tutorial is adapted from "Version Control for Fun and Profit" by Fernando Perez For an excellent list of Git resources for scientists, see Fernando's Page. Fernando's original notebook specifically mentions two references he drew from: "Git for Scientists: A Tutorial" by John McDonnell Emanuele Olivetti's lecture notes and exercises from the G-Node summer school on Advanced Scientific Programming in Python. Via Fernando, some of the images below are copied from the Pro Git book Also see J.R. Johansson's tutorial on version control, part of his excellent series Lectures on Scientific Computing with Python What is Version Control? From Wikipedia: “Revision control, also known as version control, source control or software configuration management (SCM), is the management of changes to documents, programs, and other information stored as computer files.” Reproducibility? Tracking and recreating every step of your work In the software world: it's called Version Control! What do (good) version control tools give you? Peace of mind (backups) Freedom (exploratory branching) Collaboration (synchronization) Git is an enabling technology: Use version control for everything Paper writing (never get paper_v5_jake_final_oct22_9.tex by email again!) Grant writing Everyday research Teaching (never accept an emailed homework assignment again!) Code management Personal website history tracking The plan for this tutorial Overview of Git key concepts Hands-on work with Git 5 "stages" of using Git: Local, single-user, linear workflow Single local user, branching Using remotes as a single user Remotes for collaborating in a small team Full-contact github: distributed collaboration with large teams High level picture: overview of key concepts The commit: a snapshot of work at a point in time Credit: ProGit book, by Scott Chacon, CC License. Looking at my current directory: End of explanation import sha # Our first commit data1 = 'This is the start of my paper2.' meta1 = 'date: 1/1/12' hash1 = sha.sha(data1 + meta1).hexdigest() print('Hash:', hash1) # Our second commit, linked to the first data2 = 'Some more text in my paper...' meta2 = 'date: 1/2/12' # Note we add the parent hash here! hash2 = sha.sha(data2 + meta2 + hash1).hexdigest() print('Hash:', hash2) Explanation: A repository: a group of linked commits Note: these form a Directed Acyclic Graph (DAG), with nodes identified by their hash. A hash: a fingerprint of the content of each commit and its parent End of explanation %%bash git config --global user.name "John Doe" git config --global user.email "[email protected]" Explanation: And this is pretty much the essence of Git! First: Configuring Git The minimal amount of configuration for git to work without pestering you is to tell it who you are. All the commands here modify the .gitconfig file in your home directory. Modify these before running them: End of explanation %%bash # Put here your preferred editor. 
If this is not set, git will honor # the $EDITOR environment variable git config --global core.editor /usr/bin/nano # my preferred editor # On Windows Notepad will do in a pinch, # I recommend Notepad++ as a free alternative # On the mac, you can set nano or emacs as a basic option %%bash # And while we're at it, we also turn on the use of color, which is very useful git config --global color.ui "auto" Explanation: Other settings Change how you will edit text files (it will often ask you to edit messages and other information, and thus wants to know how you like to edit your files): End of explanation %%bash git config --global credential.helper cache # Set the cache to timeout after 2 hours (setting is in seconds) git config --global credential.helper 'cache --timeout=7200' Explanation: Password memory Set git to use the credential memory cache so we don't have to retype passwords too frequently. On Linux, you should run the following (note that this requires git version 1.7.10 or newer): End of explanation !cat ~/.gitconfig Explanation: Github offers in its help pages instructions on how to configure the credentials helper for Mac OSX and Windows. Double-checking the result: End of explanation !git Explanation: Stage 1: Local, single-user, linear workflow Simply type git to see a full list of all the 'core' commands. We'll now go through most of these via small practical exercises: End of explanation %%bash rm -rf test git init test Explanation: git init: create an empty repository End of explanation %%bash cd test ls %%bash cd test ls -la %%bash cd test ls -l .git Explanation: Note: all these cells below are meant to be run by you in a terminal where you change once to the test directory and continue working there. Since we are putting all of them here in a single notebook for the purposes of the tutorial, they will all be prepended with the first two lines: %%bash cd test that tell IPython to do that each time. But you should ignore those two lines and type the rest of each cell yourself in your terminal. Let's look at what git did: End of explanation %%bash cd test echo "My first bit of text" > file1.txt %%bash cd test ls -al Explanation: Now let's edit our first file in the test directory with a text editor... I'm doing it programatically here for automation purposes, but you'd normally be editing by hand End of explanation %%bash cd test git add file1.txt Explanation: git add: tell git about this new file End of explanation %%bash cd test git status Explanation: We can now ask git about what happened with status: End of explanation %%bash cd test git commit -a -m "This is our first commit" Explanation: git commit: permanently record our changes in git's database For now, we are always going to call git commit either with the -a option or with specific filenames (git commit file1 file2...). This delays the discussion of an aspect of git called the index (often referred to also as the 'staging area') that we will cover later. Most everyday work in regular scientific practice doesn't require understanding the extra moving parts that the index involves, so on a first round we'll bypass it. Later on we will discuss how to use it to achieve more fine-grained control of what and how git records our actions. End of explanation %%bash cd test git log Explanation: In the commit above, we used the -m flag to specify a message at the command line. If we don't do that, git will open the editor we specified in our configuration above and require that we enter a message. 
By default, git refuses to record changes that don't have a message to go along with them (though you can obviously 'cheat' by using an empty or meaningless string: git only tries to facilitate best practices, it's not your nanny). git log: what has been committed so far End of explanation %%bash cd test echo "And now some more text..." >> file1.txt Explanation: git diff: what have I changed? Let's do a little bit more work... Again, in practice you'll be editing the files by hand, here we do it via shell commands for the sake of automation (and therefore the reproducibility of this tutorial!) End of explanation %%bash cd test git diff Explanation: And now we can ask git what is different: End of explanation %%bash cd test git commit -a -m "I have made great progress on this critical matter." Explanation: The cycle of git virtue: work, commit, work, commit, ... End of explanation %%bash cd test git log Explanation: git log revisited First, let's see what the log shows us now: End of explanation %%bash cd test git log --oneline --topo-order --graph Explanation: Sometimes it's handy to see a very summarized version of the log: End of explanation %%bash cd test # We create our alias (this saves it in git's permanent configuration file): git config --global alias.slog "log --oneline --topo-order --graph" # And now we can use it git slog Explanation: Defining an alias Git supports aliases: new names given to command combinations. Let's make this handy shortlog an alias, so we only have to type git slog and see this compact log: End of explanation %%bash cd test git mv file1.txt file-newname.txt git status Explanation: git mv and rm: moving and removing files While git add is used to add fils to the list git tracks, we must also tell it if we want their names to change or for it to stop tracking them. In familiar Unix fashion, the mv and rm git commands do precisely this: End of explanation %%bash cd test git commit -a -m"I like this new name better" echo "Let's look at the log again:" git slog Explanation: Note that these changes must be committed too, to become permanent! In git's world, until something hasn't been committed, it isn't permanently recorded anywhere. End of explanation %%bash cd test git status ls Explanation: And git rm works in a similar fashion. Exercise Add a new file file2.txt, commit it, make some changes to it, commit them again, and then remove it (and don't forget to commit this last step!). 2. Single Local user, branching What is a branch? Simply a label for the 'current' commit in a sequence of ongoing commits: Mulitple Branches There can be multiple branches alive at any point in time; the working directory is the state of a special pointer called HEAD. In this example there are two branches, master and testing, and testing is the currently active branch since it's what HEAD points to: Once new commits are made on a branch, HEAD and the branch label move with the new commits: This allows the history of both branches to diverge: But based on this graph structure, git can compute the necessary information to merge the divergent branches back and continue with a unified line of development: Branching Example Let's now illustrate all of this with a concrete example. 
Let's get our bearings first: End of explanation %%bash cd test git branch experiment git checkout experiment %%bash cd test echo "Some crazy idea" > experiment.txt git add experiment.txt git commit -a -m"Trying something new" git slog %%bash cd test git checkout master git slog %%bash cd test echo "All the while, more work goes on in master..." >> file-newname.txt git commit -a -m"The mainline keeps moving" git slog %%bash cd test ls %%bash cd test git merge experiment git slog Explanation: We are now going to try two different routes of development: on the master branch we will add one file and on the experiment branch, which we will create, we will add a different one. We will then merge the experimental branch into master. End of explanation %%bash cd test ls echo "Let's see if we have any remote repositories here:" git remote -v Explanation: 3. Using remotes as a single user We are now going to introduce the concept of a remote repository: a pointer to another copy of the repository that lives on a different location. This can be simply a different path on the filesystem or a server on the internet. For this discussion, we'll be using remotes hosted on the GitHub.com service, but you can equally use other services like BitBucket or Gitorious as well as host your own. If you don't have a Github account, take a moment now to sign up git remote: view/modify remote repositories End of explanation %%bash cd test git remote add origin https://github.com/jakevdp/test.git Explanation: Since the above cell didn't produce any output after the git remote -v call, it means we have no remote repositories configured. Configuring a remote Log into GitHub, go to the new repository page and make a repository called test. Do not check the box that says Initialize this repository with a README, since we already have an existing repository here. That option is useful when you're starting first at Github and don't have a repo made already on a local computer. We can now follow the instructions from the next page: End of explanation %%bash cd test git remote -v Explanation: Let's see the remote situation again: End of explanation %%bash cd test git push origin master Explanation: Pushing changes to a remote repository Now push the master branch to the remote named origin: End of explanation %%bash # Here I clone my 'test' repo but with a different name, test2, to simulate a 2nd computer git clone https://github.com/jakevdp/test.git test2 cd test2 pwd git remote -v Explanation: We can now see this repository publicly on github. Using Git to Sync Work Let's see how this can be useful for backup and syncing work between two different computers. I'll simulate a 2nd computer by working in a different directory... End of explanation %%bash cd test2 # working on computer #2 echo "More new content on my experiment" >> experiment.txt git commit -a -m"More work, on machine #2" Explanation: Let's now make some changes in one 'computer' and synchronize them on the second. End of explanation %%bash cd test2 git push origin master Explanation: Now we put this new work up on the github server so it's available from the internet End of explanation %%bash cd test git pull origin master Explanation: Now let's fetch that work from machine #1: End of explanation %%bash cd test git branch trouble git checkout trouble echo "This is going to be a problem..." 
>> experiment.txt git commit -a -m"Changes in the trouble branch" Explanation: An important aside: conflict management While git is very good at merging, if two different branches modify the same file in the same location, it simply can't decide which change should prevail. At that point, human intervention is necessary to make the decision. Git will help you by marking the location in the file that has a problem, but it's up to you to resolve the conflict. Let's see how that works by intentionally creating a conflict. We start by creating a branch and making a change to our experiment file: End of explanation %%bash cd test git checkout master echo "More work on the master branch..." >> experiment.txt git commit -a -m"Mainline work" Explanation: And now we go back to the master branch, where we change the same file: End of explanation %%bash cd test git merge trouble Explanation: The conflict... So now let's see what happens if we try to merge the trouble branch into master: End of explanation %%bash cd test cat experiment.txt Explanation: Let's see what git has put into our file: End of explanation %%bash cd test cat experiment.txt Explanation: At this point, we go into the file with a text editor, decide which changes to keep, and make a new commit that records our decision. I've now made the edits, in this case I decided that both pieces of text were useful, but integrated them with some changes: End of explanation %%bash cd test git commit -a -m"Completed merge of trouble, fixing conflicts along the way" git slog Explanation: Let's then make our new commit: End of explanation
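For reference, the command sequence during a conflicted merge usually looks like the cell below (the file name matches the example above; nothing here is specific to this repository). git merge --abort is the escape hatch if you decide not to merge after all.

%%bash
cd test
git status      # lists experiment.txt as "both modified"
git diff        # shows the <<<<<<< / ======= / >>>>>>> conflict markers
# ...edit experiment.txt by hand, then record the resolution:
git add experiment.txt
git commit -m "Resolve merge conflict in experiment.txt"
# or abandon the merge entirely:
# git merge --abort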
1,019
Given the following text description, write Python code to implement the functionality described below step by step Description: Discerning Haggis 2016-ml-contest submission Author Step1: Convenience functions Step2: Load, treat and color data Step3: Condition dataset Step4: Test, train and cross-validate Up to here, there have been no secrets, just reusing the standard code to load the data. Now, instead of doing the usual test/train split, I create another dataset, the cross-validate set. The split will be 60% train, 20% cross-validate and 20% test. It which will be used as the "test set", to tune the neural network parameters. My actual test set will only be used to predict the performance of my neural network at the end. Step5: Tuning Selecting model size I create a number of model sizes, all with 3 hidden layers. The first and largest hidden layer is normally distributed between 1 to 500 nodes. The second ranges from 1 to the number of first nodes. The third ranges from 1 to the number of second nodes. These different sizes will be used to train several unregularized networks. Step6: Training with several model sizes This takes a few minutes. Step7: Plot performance of neural networks vs sum of nodes Step8: Choose best size from parabolic fit When I create neural network sizes, the first parameter $n_1$ normally distributed between 1 and 500. Its mean is ~250. The number of nodes in the second layer, $n_2$ depends on the first Step9: Choose regularization valus Here we will choose the regularization value using the same approach as before. This takes a few minutes. Step10: Plot performance of neural networks vs regularization Step11: Choose best regularization parameter from parabolic fit Step12: Predict accuracy Now I train a neural network with the obtained values and predict its accuracy using the test set. Step13: Save neural network parameters Step14: Retrain and predict Finally we train a neural network using all data available, and apply it to our blind test.
Python Code: %matplotlib inline import pandas as pd import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt import matplotlib.colors as colors from mpl_toolkits.axes_grid1 import make_axes_locatable import seaborn as sns sns.set(style='whitegrid', rc={'lines.linewidth': 2.5, 'figure.figsize': (10, 8), 'text.usetex': False, # 'font.family': 'sans-serif', # 'font.sans-serif': 'Optima LT Std', }) from pandas import set_option set_option("display.max_rows", 10) pd.options.mode.chained_assignment = None from sklearn import preprocessing from sklearn.model_selection import train_test_split from sklearn.neural_network import MLPClassifier from sklearn.metrics import confusion_matrix from scipy.stats import truncnorm Explanation: Discerning Haggis 2016-ml-contest submission Author: Carlos Alberto da Costa Filho, University of Edinburgh Load libraries End of explanation def make_facies_log_plot(logs, facies_colors): #make sure logs are sorted by depth logs = logs.sort_values(by='Depth') cmap_facies = colors.ListedColormap( facies_colors[0:len(facies_colors)], 'indexed') ztop=logs.Depth.min(); zbot=logs.Depth.max() cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1) f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12)) ax[0].plot(logs.GR, logs.Depth, '-g') ax[1].plot(logs.ILD_log10, logs.Depth, '-') ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5') ax[3].plot(logs.PHIND, logs.Depth, '-', color='r') ax[4].plot(logs.PE, logs.Depth, '-', color='black') im=ax[5].imshow(cluster, interpolation='none', aspect='auto', cmap=cmap_facies,vmin=1,vmax=9) divider = make_axes_locatable(ax[5]) cax = divider.append_axes("right", size="20%", pad=0.05) cbar=plt.colorbar(im, cax=cax) cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS', 'SiSh', ' MS ', ' WS ', ' D ', ' PS ', ' BS '])) cbar.set_ticks(range(0,1)); cbar.set_ticklabels('') for i in range(len(ax)-1): ax[i].set_ylim(ztop,zbot) ax[i].invert_yaxis() ax[i].grid() ax[i].locator_params(axis='x', nbins=3) ax[0].set_xlabel("GR") ax[0].set_xlim(logs.GR.min(),logs.GR.max()) ax[1].set_xlabel("ILD_log10") ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max()) ax[2].set_xlabel("DeltaPHI") ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max()) ax[3].set_xlabel("PHIND") ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max()) ax[4].set_xlabel("PE") ax[4].set_xlim(logs.PE.min(),logs.PE.max()) ax[5].set_xlabel('Facies') ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([]) ax[4].set_yticklabels([]); ax[5].set_yticklabels([]) ax[5].set_xticklabels([]) f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94) def accuracy(conf): total_correct = 0. nb_classes = conf.shape[0] for i in np.arange(0,nb_classes): total_correct += conf[i][i] acc = total_correct/sum(sum(conf)) return acc adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]]) def accuracy_adjacent(conf, adjacent_facies): nb_classes = conf.shape[0] total_correct = 0. 
for i in np.arange(0,nb_classes): total_correct += conf[i][i] for j in adjacent_facies[i]: total_correct += conf[i][j] return total_correct / sum(sum(conf)) Explanation: Convenience functions End of explanation # Loading Data validationFull = pd.read_csv('../validation_data_nofacies.csv') training_data = pd.read_csv('../facies_vectors.csv') # Treat Data training_data.fillna(training_data.mean(),inplace=True) training_data['Well Name'] = training_data['Well Name'].astype('category') training_data['Formation'] = training_data['Formation'].astype('category') training_data['Well Name'].unique() training_data.describe() # Color Data # 1=sandstone 2=c_siltstone 3=f_siltstone # 4=marine_silt_shale 5=mudstone 6=wackestone 7=dolomite # 8=packstone 9=bafflestone facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D'] facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D','PS', 'BS'] #facies_color_map is a dictionary that maps facies labels #to their respective colors facies_color_map = {} for ind, label in enumerate(facies_labels): facies_color_map[label] = facies_colors[ind] def label_facies(row, labels): return labels[ row['Facies'] -1] training_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1) make_facies_log_plot( training_data[training_data['Well Name'] == 'SHRIMPLIN'], facies_colors) Explanation: Load, treat and color data End of explanation correct_facies_labels = training_data['Facies'].values feature_vectors = training_data.drop(['Formation', 'Well Name', 'Depth','Facies','FaciesLabels'], axis=1) feature_vectors.describe() scaler = preprocessing.StandardScaler().fit(feature_vectors) scaled_features = scaler.transform(feature_vectors) Explanation: Condition dataset End of explanation X_train, X_cv_test, y_train, y_cv_test = train_test_split(scaled_features, correct_facies_labels, test_size=0.4, random_state=42) X_cv, X_test, y_cv, y_test = train_test_split(X_cv_test, y_cv_test, test_size=0.5, random_state=42) Explanation: Test, train and cross-validate Up to here, there have been no secrets, just reusing the standard code to load the data. Now, instead of doing the usual test/train split, I create another dataset, the cross-validate set. The split will be 60% train, 20% cross-validate and 20% test. It which will be used as the "test set", to tune the neural network parameters. My actual test set will only be used to predict the performance of my neural network at the end. End of explanation lower, upper = 1, 500 mu, sigma = (upper-lower)/2, (upper-lower)/2 sizes_rv = truncnorm((lower - mu) / sigma, (upper - mu) / sigma, loc=mu, scale=sigma) samples = 30 sizes_L1 = [ int(d) for d in sizes_rv.rvs(samples) ] sizes_L2 = [] sizes_L3 = [] for sL1 in sizes_L1: lower, upper = 1, sL1+1 mu, sigma = (upper-lower)/2+1, (upper-lower)/2+1 sizes_rv = truncnorm((lower - mu) / sigma, (upper - mu) / sigma, loc=mu, scale=sigma) sL2 = int(sizes_rv.rvs(1)[0]) sizes_L2.append(sL2) lower, upper = 1, sL2+1 mu, sigma = (upper-lower)/2+1, (upper-lower)/2+1 sizes_rv = truncnorm((lower - mu) / sigma, (upper - mu) / sigma, loc=mu, scale=sigma) sL3 = int(sizes_rv.rvs(1)[0]) sizes_L3.append(sL3) sizes = sorted(set(zip(sizes_L1, sizes_L2, sizes_L3)), key=lambda s: sum(s)) Explanation: Tuning Selecting model size I create a number of model sizes, all with 3 hidden layers. The first and largest hidden layer is normally distributed between 1 to 500 nodes. The second ranges from 1 to the number of first nodes. 
The third ranges from 1 to the number of second nodes. These different sizes will be used to train several unregularized networks. End of explanation train_error = np.array([]) cv_error = np.array([]) train_adj_error = np.array([]) cv_adj_error = np.array([]) minerr = 1 for i, s in enumerate(sizes): clf = MLPClassifier(solver='lbfgs', alpha=0, hidden_layer_sizes=s) clf.fit(X_train,y_train) # Compute errors conf_cv = confusion_matrix(y_cv, clf.predict(X_cv)) conf_tr = confusion_matrix(y_train, clf.predict(X_train)) train_error = np.append(train_error, 1-accuracy(conf_tr)) cv_error = np.append(cv_error, 1-accuracy(conf_cv)) train_adj_error = np.append(train_adj_error, 1-accuracy_adjacent(conf_tr, adjacent_facies)) cv_adj_error = np.append(cv_adj_error, 1-accuracy_adjacent(conf_cv, adjacent_facies)) print('[ %3d%% done ] ' % (100*(i+1)/len(sizes),), end="") if cv_error[-1] < minerr: minerr = cv_error[-1] print('CV error = %d%% with' % (100*minerr,), s) else: print() Explanation: Training with several model sizes This takes a few minutes. End of explanation sizes_sum = [ np.sum(s) for s in sizes ] p = np.poly1d(np.polyfit(sizes_sum, cv_error, 2)) f, ax = plt.subplots(figsize=(5,5)) ax.scatter(sizes_sum, cv_error, c='k', label='Cross-validate') ax.plot(range(1, max(sizes_sum)+1), p(range(1, max(sizes_sum)+1))) ax.set_ylim([min(cv_error)-.1, max(cv_error)+.1]) ax.set_xlabel('Sum of nodes') ax.set_ylabel('Error') plt.legend() Explanation: Plot performance of neural networks vs sum of nodes End of explanation minsum = range(1, max(sizes_sum)+1)[np.argmin(p(range(1, max(sizes_sum)+1)))] minsize = (int(minsum*4/7),int(minsum*2/7),int(minsum*1/7)) print(minsize) Explanation: Choose best size from parabolic fit When I create neural network sizes, the first parameter $n_1$ normally distributed between 1 and 500. Its mean is ~250. The number of nodes in the second layer, $n_2$ depends on the first: it is between 1 and $n_1+1$. Also, its mean is $n_1/2$. The third layer is analogous: between 1 and $n_2/+1$ and with mean $n_2/2$. This is an empirical relationship I use to loosely "parametrize" the number of nodes in each hidden layer. Knowing the optimal sum, I simply choose the number of nodes whose means would result in this sum, according to my empirical relationships. This gives the following optimal size: End of explanation alphas = np.append([0], np.sqrt(10)**np.arange(-10, 4.0, 1)) train_error = np.array([]) cv_error = np.array([]) train_adj_error = np.array([]) cv_adj_error = np.array([]) minerr = 1 for i, a in enumerate(alphas): clf = MLPClassifier(solver='lbfgs', alpha=a, hidden_layer_sizes=minsize) clf.fit(X_train,y_train) # Compute errors conf_cv = confusion_matrix(y_cv, clf.predict(X_cv)) conf_tr = confusion_matrix(y_train, clf.predict(X_train)) train_error = np.append(train_error, 1-accuracy(conf_tr)) cv_error = np.append(cv_error, 1-accuracy(conf_cv)) train_adj_error = np.append(train_adj_error, 1-accuracy_adjacent(conf_tr, adjacent_facies)) cv_adj_error = np.append(cv_adj_error, 1-accuracy_adjacent(conf_cv, adjacent_facies)) print('[ %3d%% done ] ' % (100*(i+1)/len(alphas),), end="") if cv_error[-1] < minerr: minerr = cv_error[-1] print('CV error = %d%% with %g' % (100*minerr, a)) else: print() Explanation: Choose regularization valus Here we will choose the regularization value using the same approach as before. This takes a few minutes. 
End of explanation p = np.poly1d(np.polyfit(np.log(alphas[1:]), cv_error[1:], 2)) f, ax = plt.subplots(figsize=(5,5)) ax.scatter(np.log(alphas[1:]), cv_error[1:], c='k', label='Cross-validate') ax.plot(np.arange(-12, 4.0, .1), p(np.arange(-12, 4.0, .1))) ax.set_xlabel(r'$\log(\alpha)$') ax.set_ylabel('Error') plt.legend() Explanation: Plot performance of neural networks vs regularization End of explanation minalpha = np.arange(-12, 4.0, .1)[np.argmin(p(np.arange(-12, 4.0, .1)))] # minalpha = np.log(alphas)[np.argmin(cv_error)] # This chooses the minimum minalpha = np.sqrt(10)**minalpha print(minalpha) Explanation: Choose best regularization parameter from parabolic fit End of explanation clf = MLPClassifier(solver='lbfgs', alpha=minalpha, hidden_layer_sizes=minsize) clf.fit(X_train,y_train) conf_te = confusion_matrix(y_test, clf.predict(X_test)) print('Predicted accuracy %.d%%' % (100*accuracy(conf_te),)) Explanation: Predict accuracy Now I train a neural network with the obtained values and predict its accuracy using the test set. End of explanation pd.DataFrame({'alpha':minalpha, 'layer1': minsize[0], 'layer2': minsize[1], 'layer3': minsize[2]}, index=[0]).to_csv('DHparams.csv') Explanation: Save neural network parameters End of explanation clf_final = MLPClassifier(solver='lbfgs', alpha=minalpha, hidden_layer_sizes=minsize) clf_final.fit(scaled_features,correct_facies_labels) validation_features = validationFull.drop(['Formation', 'Well Name', 'Depth'], axis=1) scaled_validation = scaler.transform(validation_features) validation_output = clf_final.predict(scaled_validation) validationFull['Facies']=validation_output validationFull.to_csv('well_data_with_facies_DH.csv') Explanation: Retrain and predict Finally we train a neural network using all data available, and apply it to our blind test. End of explanation
1,020
Given the following text description, write Python code to implement the functionality described below step by step Description: Recursive Images and Fractals Recursive images One of the cool thing about graphs is they can link to themselves. The reason why this is a graph and not a quad tree (as most maps are) is to create recursive images. Here we will see how to create a recursive image continuing from the getting started image. Let us create a recursive image where Mt. Tacoma is inside Seattle, and Seattle is inside Mt. Tacoma. Step1: Insert self So far this has been same as the getting started example. Now to create a recursive graph, we will take the created node link and insert itself in the quad key '1333'. Diagram What we are trying to create is something like this Step2: Result Thus we have created a recursive image. This contains an image of Seattle, which has image of Tacoma, which has the first image in it. If you zoom in you will see more details
Python Code: %pylab inline import sys import os sys.path.insert(0,'..') import graphmap from graphmap.graphmap_main import GraphMap from graphmap.memory_persistence import MemoryPersistence from graphmap.graph_helpers import NodeLink G = GraphMap(MemoryPersistence()) seattle_skyline_image_url = 'https://upload.wikimedia.org/wikipedia/commons/thumb/2/2f/Space_Needle002.jpg/640px-Space_Needle002.jpg' mt_tacoma_image_url = 'https://upload.wikimedia.org/wikipedia/commons/thumb/a/a2/Mount_Rainier_from_the_Silver_Queen_Peak.jpg/1024px-Mount_Rainier_from_the_Silver_Queen_Peak.jpg' seattle_node_link = NodeLink('seattle') mt_tacoma_node_link = NodeLink('tacoma') G.create_node(root_node_link=seattle_node_link, image_value_link=seattle_skyline_image_url) G.create_node(root_node_link=mt_tacoma_node_link, image_value_link=mt_tacoma_image_url) insert_quad_key = '13' created_node_link_result = G.connect_child(root_node_link=seattle_node_link, quad_key=insert_quad_key, child_node_link=mt_tacoma_node_link) created_node_link = created_node_link_result.value print(created_node_link_result) plt.imshow(G.get_image_at_quad_key(created_node_link, 256, '').value) Explanation: Recursive Images and Fractals Recursive images One of the cool thing about graphs is they can link to themselves. The reason why this is a graph and not a quad tree (as most maps are) is to create recursive images. Here we will see how to create a recursive image continuing from the getting started image. Let us create a recursive image where Mt. Tacoma is inside Seattle, and Seattle is inside Mt. Tacoma. End of explanation recursive_quad_key = '1333' recursive_node_link_result = G.connect_child(root_node_link=created_node_link, quad_key=recursive_quad_key, child_node_link=created_node_link) recursive_node_link = recursive_node_link_result.value print(recursive_node_link_result) Explanation: Insert self So far this has been same as the getting started example. Now to create a recursive graph, we will take the created node link and insert itself in the quad key '1333'. Diagram What we are trying to create is something like this End of explanation plt.imshow(G.get_image_at_quad_key(recursive_node_link, 256, '').value) plt.figure() plt.imshow(G.get_image_at_quad_key(recursive_node_link, 256, '1').value) plt.figure() plt.imshow(G.get_image_at_quad_key(recursive_node_link, 256, '13').value) Explanation: Result Thus we have created a recursive image. This contains an image of Seattle, which has image of Tacoma, which has the first image in it. If you zoom in you will see more details End of explanation
1,021
Given the following text description, write Python code to implement the functionality described below step by step Description: BayesianMarkovStateModel This example demonstrates the class BayesianMarkovStateModel, which uses Metropolis Markov chain Monte Carlo (MCMC) to sample over the posterior distribution of transition matrices, given the observed transitions in your dataset. This can be useful for evaluating the uncertainty due to sampling in your dataset. Step1: Load some double-well data Step2: We'll discretize the space using 10 states And the build one MSM using the MLE transition matrix estimator, and one with the Bayesian estimator Step3: Now lets try using 50 states The MCMC sampling is a lot harder to converge
Python Code: %matplotlib inline import numpy as np from matplotlib import pyplot as plt from mdtraj.utils import timing from msmbuilder.example_datasets import load_doublewell from msmbuilder.cluster import NDGrid from msmbuilder.msm import BayesianMarkovStateModel, MarkovStateModel Explanation: BayesianMarkovStateModel This example demonstrates the class BayesianMarkovStateModel, which uses Metropolis Markov chain Monte Carlo (MCMC) to sample over the posterior distribution of transition matrices, given the observed transitions in your dataset. This can be useful for evaluating the uncertainty due to sampling in your dataset. End of explanation trjs = load_doublewell(random_state=0)['trajectories'] plt.hist(np.concatenate(trjs), bins=50, log=True) plt.ylabel('Frequency') plt.show() Explanation: Load some double-well data End of explanation clusterer = NDGrid(n_bins_per_feature=10) mle_msm = MarkovStateModel(lag_time=100) b_msm = BayesianMarkovStateModel(lag_time=100, n_samples=10000, n_steps=1000) states = clusterer.fit_transform(trjs) with timing('running mcmc'): b_msm.fit(states) mle_msm.fit(states) plt.subplot(2, 1, 1) plt.plot(b_msm.all_transmats_[:, 0, 0]) plt.axhline(mle_msm.transmat_[0, 0], c='k') plt.ylabel('t_00') plt.subplot(2, 1, 2) plt.ylabel('t_23') plt.xlabel('MCMC Iteration') plt.plot(b_msm.all_transmats_[:, 2, 3]) plt.axhline(mle_msm.transmat_[2, 3], c='k') plt.show() plt.plot(b_msm.all_timescales_[:, 0], label='MCMC') plt.axhline(mle_msm.timescales_[0], c='k', label='MLE') plt.legend(loc='best') plt.ylabel('Longest timescale') plt.xlabel('MCMC iteration') plt.show() Explanation: We'll discretize the space using 10 states And the build one MSM using the MLE transition matrix estimator, and one with the Bayesian estimator End of explanation clusterer = NDGrid(n_bins_per_feature=50) mle_msm = MarkovStateModel(lag_time=100) b_msm = BayesianMarkovStateModel(lag_time=100, n_samples=1000, n_steps=100000) states = clusterer.fit_transform(trjs) with timing('running mcmc (50 states)'): b_msm.fit(states) mle_msm.fit(states) plt.plot(b_msm.all_timescales_[:, 0], label='MCMC') plt.axhline(mle_msm.timescales_[0], c='k', label='MLE') plt.legend(loc='best') plt.ylabel('Longest timescale') plt.xlabel('MCMC iteration') plt.plot(b_msm.all_transmats_[:, 0, 0], label='MCMC') plt.axhline(mle_msm.transmat_[0, 0], c='k', label='MLE') plt.legend(loc='best') plt.ylabel('t_00') plt.xlabel('MCMC iteration') Explanation: Now lets try using 50 states The MCMC sampling is a lot harder to converge End of explanation
1,022
Given the following text description, write Python code to implement the functionality described below step by step Description: Handwritten Number Recognition with TFLearn and MNIST In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9. This kind of neural network is used in a variety of real-world applications including Step1: Retrieving training and test data The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data. Each MNIST data point has Step2: Visualize the training data Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with it's corresponding label in the title. Step3: Building the network TFLearn lets you build the network by defining the layers in that network. For this example, you'll define Step4: Training the network Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely! Step5: Testing After you're satisified with the training output and accuracy, you can then run the network on the test data set to measure it's performance! Remember, only do this after you've done the training and are satisfied with the results. A good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy!
Python Code: # Import Numpy, TensorFlow, TFLearn, and MNIST data import numpy as np import tensorflow as tf import tflearn import tflearn.datasets.mnist as mnist Explanation: Handwritten Number Recognition with TFLearn and MNIST In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9. This kind of neural network is used in a variety of real-world applications including: recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the MNIST data set, which consists of images of handwritten numbers and their correct labels 0-9. We'll be using TFLearn, a high-level library built on top of TensorFlow to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network. End of explanation # Retrieve the training and test data trainX, trainY, testX, testY = mnist.load_data(one_hot=True) Explanation: Retrieving training and test data The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data. Each MNIST data point has: 1. an image of a handwritten digit and 2. a corresponding label (a number 0-9 that identifies the image) We'll call the images, which will be the input to our neural network, X and their corresponding labels Y. We're going to want our labels as one-hot vectors, which are vectors that holds mostly 0's and one 1. It's easiest to see this in a example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0]. Flattened data For this example, we'll be using flattened data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values. Flattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network. End of explanation # Visualizing the data import matplotlib.pyplot as plt %matplotlib inline # Function for displaying a training image by it's index in the MNIST set def show_digit(index): label = trainY[index].argmax(axis=0) # Reshape 784 array into 28x28 image image = trainX[index].reshape([28,28]) plt.title('Training data, index: %d, Label: %d' % (index, label)) plt.imshow(image, cmap='gray_r') plt.show() # Display the first (index 0) training image show_digit(0) Explanation: Visualize the training data Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with it's corresponding label in the title. 
End of explanation # Define the neural network def build_model(): with tf.device("/gpu:0"): # This resets all parameters and variables, leave this here tf.reset_default_graph() #### Your code #### # Include the input layer, hidden layer(s), and set how you want to train the model # This model assumes that your network is named "net" # Create input layer, sized to the shape of the 28x28 image net = tflearn.input_data([None, trainX.shape[1]]) # Create intermediate layers # First layer of 150 seems to make sense in context of 784 pixels per image ~1:5 ratio net = tflearn.fully_connected(net, 150, activation='ReLU') # Second hidden layer of 150 is again an approximate ~1:5 ratio net = tflearn.fully_connected(net, 30, activation='ReLU') # Create output layer net = tflearn.fully_connected(net, 10, activation='softmax') net = tflearn.regression(net, optimizer='sgd', learning_rate=0.01, loss='categorical_crossentropy') model = tflearn.DNN(net) return model # Build the model model = build_model() Explanation: Building the network TFLearn lets you build the network by defining the layers in that network. For this example, you'll define: The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data. Hidden layers, which recognize patterns in data and connect the input to the output layer, and The output layer, which defines how the network learns and outputs a label for a given image. Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example, net = tflearn.input_data([None, 100]) would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units. Adding layers To add new hidden layers, you use net = tflearn.fully_connected(net, n_units, activation='ReLU') This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call, it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling tflearn.fully_connected(net, n_units). Then, to set how you train the network, use: net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy') Again, this is passing in the network you've been building. The keywords: optimizer sets the training method, here stochastic gradient descent learning_rate is the learning rate loss determines how the network error is calculated. In this example, with categorical cross-entropy. Finally, you put all this together to create the model with tflearn.DNN(net). Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc. Hint: The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a softmax activation layer as your final output layer. End of explanation # Training model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=100) Explanation: Training the network Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. 
You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1, which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely! End of explanation # Compare the labels that our model predicts with the actual labels # Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample. predictions = np.array(model.predict(testX)).argmax(axis=1) # Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels actual = testY.argmax(axis=1) test_accuracy = np.mean(predictions == actual, axis=0) # Print out the result print("Test accuracy: ", test_accuracy) Explanation: Testing After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results. A good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy! End of explanation
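If a single overall accuracy number is not enough, the arrays computed above can give a per-digit breakdown as well. The sketch below assumes the predictions and actual arrays from the testing cell are still in memory and uses plain NumPy to build per-class accuracies and a simple confusion matrix.

import numpy as np

# Assumes `predictions` and `actual` from the testing code above are available.
# Per-digit accuracy: how often each true digit is classified correctly.
for digit in range(10):
    mask = (actual == digit)
    print("Digit %d: accuracy %.3f over %d test samples"
          % (digit, np.mean(predictions[mask] == digit), mask.sum()))

# A simple confusion matrix: rows are true digits, columns are predicted digits.
confusion = np.zeros((10, 10), dtype=int)
for true_label, pred_label in zip(actual, predictions):
    confusion[true_label, pred_label] += 1
print(confusion)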
1,023
Given the following text description, write Python code to implement the functionality described below step by step Description: Basic CNN part-of-speech tagger with Thinc This notebook shows how to implement a basic CNN for part-of-speech tagging model in Thinc (without external dependencies) and train the model on the Universal Dependencies AnCora corpus. The tutorial shows three different workflows Step1: We start by making sure the computation is performed on GPU if available. prefer_gpu should be called right after importing Thinc, and it returns a boolean indicating whether the GPU has been activated. Step2: We also define the following helper functions for loading the data, and training and evaluating a given model. Don't forget to call model.initialize with a batch of input and output data to initialize the model and fill in any missing shapes. Step3: 1. Composing the model in code Here's the model definition, using the &gt;&gt; operator for the chain combinator. The strings2arrays transform converts a sequence of strings to a list of arrays. with_array transforms sequences (the sequences of arrays) into a contiguous 2-dimensional array on the way into and out of the model it wraps. This means our model has the following signature Step5: Composing the model via a config file Thinc's config system lets describe arbitrary trees of objects. The config can include values like hyperparameters or training settings, or references to functions and the values of their arguments. Thinc will then construct the config bottom-up – so you can define one function with its arguments, and then pass the return value into another function. If we want to rebuild the model defined above in a config file, we first need to break down its structure Step6: When the config is loaded, it's first parsed as a dictionary and all references to values from other sections, e.g. ${hyper_params Step7: registry.resolve then creates the objects and calls the functions bottom-up. Step8: We now have a model, optimizer and training settings, built from the config, and can use them to train the model. Step9: Composing the model with code and config The @thinc.registry decorator lets you register your own layers and model definitions, which can then be referenced in config files. This approach gives you the most flexibility, while also keeping your config and model definitions concise. 💡 The function you register will be filled in by the config – e.g. the value of width defined in the config block will be passed in as the argument width. If arguments are missing, you'll see a validation error. If you're using type hints in the function, the values will be parsed to ensure they always have the right type. If they're invalid – e.g. if you're passing in a list as the value of width – you'll see an error. This makes it easier to prevent bugs caused by incorrect values lower down in the network. Step11: The config would then only need to define one model block with @layers = "cnn_tagger.v1" and the function arguments. Whether you move them out to a section like [hyper_params] or just hard-code them into the block is up to you. The advantage of a separate section is that the values are preserved in the parsed config object (and not just passed into the function), so you can always print and view them.
Python Code: !pip install "thinc>=8.0.0a0" "ml_datasets>=0.2.0a0" "tqdm>=4.41" Explanation: Basic CNN part-of-speech tagger with Thinc This notebook shows how to implement a basic CNN for part-of-speech tagging model in Thinc (without external dependencies) and train the model on the Universal Dependencies AnCora corpus. The tutorial shows three different workflows: Composing the model in code (basic usage) Composing the model via a config file only (mostly to demonstrate advanced usage of configs) Composing the model in code and configuring it via config (recommended) End of explanation from thinc.api import prefer_gpu prefer_gpu() Explanation: We start by making sure the computation is performed on GPU if available. prefer_gpu should be called right after importing Thinc, and it returns a boolean indicating whether the GPU has been activated. End of explanation import ml_datasets from tqdm.notebook import tqdm from thinc.api import fix_random_seed fix_random_seed(0) def train_model(model, optimizer, n_iter, batch_size): (train_X, train_y), (dev_X, dev_y) = ml_datasets.ud_ancora_pos_tags() model.initialize(X=train_X[:5], Y=train_y[:5]) for n in range(n_iter): loss = 0.0 batches = model.ops.multibatch(batch_size, train_X, train_y, shuffle=True) for X, Y in tqdm(batches, leave=False): Yh, backprop = model.begin_update(X) d_loss = [] for i in range(len(Yh)): d_loss.append(Yh[i] - Y[i]) loss += ((Yh[i] - Y[i]) ** 2).sum() backprop(d_loss) model.finish_update(optimizer) score = evaluate(model, dev_X, dev_y, batch_size) print(f"{n}\t{loss:.2f}\t{score:.3f}") def evaluate(model, dev_X, dev_Y, batch_size): correct = 0 total = 0 for X, Y in model.ops.multibatch(batch_size, dev_X, dev_Y): Yh = model.predict(X) for yh, y in zip(Yh, Y): correct += (y.argmax(axis=1) == yh.argmax(axis=1)).sum() total += y.shape[0] return float(correct / total) Explanation: We also define the following helper functions for loading the data, and training and evaluating a given model. Don't forget to call model.initialize with a batch of input and output data to initialize the model and fill in any missing shapes. End of explanation from thinc.api import Model, chain, strings2arrays, with_array, HashEmbed, expand_window, Relu, Softmax, Adam, warmup_linear width = 32 vector_width = 16 nr_classes = 17 learn_rate = 0.001 n_iter = 10 batch_size = 128 with Model.define_operators({">>": chain}): model = strings2arrays() >> with_array( HashEmbed(nO=width, nV=vector_width, column=0) >> expand_window(window_size=1) >> Relu(nO=width, nI=width * 3) >> Relu(nO=width, nI=width) >> Softmax(nO=nr_classes, nI=width) ) optimizer = Adam(learn_rate) train_model(model, optimizer, n_iter, batch_size) Explanation: 1. Composing the model in code Here's the model definition, using the &gt;&gt; operator for the chain combinator. The strings2arrays transform converts a sequence of strings to a list of arrays. with_array transforms sequences (the sequences of arrays) into a contiguous 2-dimensional array on the way into and out of the model it wraps. This means our model has the following signature: Model[Sequence[str], Sequence[Array2d]]. 
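To make the signature above concrete, the short check below pushes a couple of invented token sequences through the model and inspects the per-token score arrays. It assumes the model variable defined above has already been initialized (for instance after running train_model), and the Spanish-looking example tokens are purely illustrative.

# Assumes `model` from above has been initialized (e.g. after running train_model).
sample_docs = [
    ["Esta", "es", "una", "frase", "corta", "."],   # invented example tokens
    ["Otra", "frase", "."],
]
scores = model.predict(sample_docs)
for doc, doc_scores in zip(sample_docs, scores):
    # One row of class scores per token; argmax gives the predicted tag index.
    print(len(doc), doc_scores.shape, doc_scores.argmax(axis=1))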
End of explanation CONFIG = [hyper_params] width = 32 vector_width = 16 learn_rate = 0.001 [training] n_iter = 10 batch_size = 128 [model] @layers = "chain.v1" [model.*.strings2arrays] @layers = "strings2arrays.v1" [model.*.with_array] @layers = "with_array.v1" [model.*.with_array.layer] @layers = "chain.v1" [model.*.with_array.layer.*.hashembed] @layers = "HashEmbed.v1" nO = ${hyper_params:width} nV = ${hyper_params:vector_width} column = 0 [model.*.with_array.layer.*.expand_window] @layers = "expand_window.v1" window_size = 1 [model.*.with_array.layer.*.relu1] @layers = "Relu.v1" nO = ${hyper_params:width} nI = 96 [model.*.with_array.layer.*.relu2] @layers = "Relu.v1" nO = ${hyper_params:width} nI = ${hyper_params:width} [model.*.with_array.layer.*.softmax] @layers = "Softmax.v1" nO = 17 nI = ${hyper_params:width} [optimizer] @optimizers = "Adam.v1" learn_rate = ${hyper_params:learn_rate} Explanation: Composing the model via a config file Thinc's config system lets describe arbitrary trees of objects. The config can include values like hyperparameters or training settings, or references to functions and the values of their arguments. Thinc will then construct the config bottom-up – so you can define one function with its arguments, and then pass the return value into another function. If we want to rebuild the model defined above in a config file, we first need to break down its structure: chain (any number of positional arguments) strings2arrays (no arguments) with_array (one argument layer) layer: chain (any number of positional arguments) HashEmbed expand_window Relu Relu Softmax chain takes a variable number of positional arguments (the layers to compose). In the config, positional arguments can be expressed using * in the dot notation. For example, model.layer could describe a function passed to model as the argument layer, while model.*.relu defines a positional argument passed to model. The name of the argument, e.g. relu – doesn't matter in this case. It just needs to be unique. ⚠️ Important note: This example is mostly intended to show what's possible. We don't recommend "programming via config files" as shown here, since it doesn't really solve any problem and makes the model definition just as complicated. Instead, we recommend a hybrid approach: wrap the model definition in a registed function and configure it via the config. End of explanation from thinc.api import registry, Config config = Config().from_str(CONFIG) config Explanation: When the config is loaded, it's first parsed as a dictionary and all references to values from other sections, e.g. ${hyper_params:width} are replaced. The result is a nested dictionary describing the objects defined in the config. End of explanation C = registry.resolve(config) C Explanation: registry.resolve then creates the objects and calls the functions bottom-up. End of explanation model = C["model"] optimizer = C["optimizer"] n_iter = C["training"]["n_iter"] batch_size = C["training"]["batch_size"] train_model(model, optimizer, n_iter, batch_size) Explanation: We now have a model, optimizer and training settings, built from the config, and can use them to train the model. 
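One practical detail worth noting: the parsed config behaves like a nested dictionary, so a quick experiment with, say, a different learning rate can be done by editing a section and resolving it again. The sketch below assumes the config and registry objects from the cells above; because placeholders such as ${hyper_params:learn_rate} are interpolated when the string is parsed, the optimizer section is edited directly rather than the [hyper_params] block.

# Assumes `config` (the parsed Config) and `registry` from the cells above.
config["optimizer"]["learn_rate"] = 0.0005   # edit the already-interpolated value
C_alt = registry.resolve(config)
new_optimizer = C_alt["optimizer"]
print(config["optimizer"])                   # the block the new optimizer was built from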
End of explanation import thinc from thinc.api import Model, chain, strings2arrays, with_array, HashEmbed, expand_window, Relu, Softmax, Adam, warmup_linear @thinc.registry.layers("cnn_tagger.v1") def create_cnn_tagger(width: int, vector_width: int, nr_classes: int = 17): with Model.define_operators({">>": chain}): model = strings2arrays() >> with_array( HashEmbed(nO=width, nV=vector_width, column=0) >> expand_window(window_size=1) >> Relu(nO=width, nI=width * 3) >> Relu(nO=width, nI=width) >> Softmax(nO=nr_classes, nI=width) ) return model Explanation: Composing the model with code and config The @thinc.registry decorator lets you register your own layers and model definitions, which can then be referenced in config files. This approach gives you the most flexibility, while also keeping your config and model definitions concise. 💡 The function you register will be filled in by the config – e.g. the value of width defined in the config block will be passed in as the argument width. If arguments are missing, you'll see a validation error. If you're using type hints in the function, the values will be parsed to ensure they always have the right type. If they're invalid – e.g. if you're passing in a list as the value of width – you'll see an error. This makes it easier to prevent bugs caused by incorrect values lower down in the network. End of explanation CONFIG = [hyper_params] width = 32 vector_width = 16 learn_rate = 0.001 [training] n_iter = 10 batch_size = 128 [model] @layers = "cnn_tagger.v1" width = ${hyper_params:width} vector_width = ${hyper_params:vector_width} nr_classes = 17 [optimizer] @optimizers = "Adam.v1" learn_rate = ${hyper_params:learn_rate} C = registry.resolve(Config().from_str(CONFIG)) C model = C["model"] optimizer = C["optimizer"] n_iter = C["training"]["n_iter"] batch_size = C["training"]["batch_size"] train_model(model, optimizer, n_iter, batch_size) Explanation: The config would then only need to define one model block with @layers = "cnn_tagger.v1" and the function arguments. Whether you move them out to a section like [hyper_params] or just hard-code them into the block is up to you. The advantage of a separate section is that the values are preserved in the parsed config object (and not just passed into the function), so you can always print and view them. End of explanation
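After training, two follow-up steps are usually wanted: a held-out score and persistent weights. The sketch below reuses the evaluate helper and the data loader defined earlier, then round-trips the parameters through Thinc's byte serialization into a freshly built copy of the registered architecture. Treat the serialization part as a sketch under assumptions: the receiving model must have exactly the same layer structure, and it is initialized on a small batch first so its parameters are allocated.

# Assumes `model` has been trained and that evaluate, ml_datasets and
# create_cnn_tagger from the cells above are available.
(train_X, train_y), (dev_X, dev_y) = ml_datasets.ud_ancora_pos_tags()
print("dev accuracy:", evaluate(model, dev_X, dev_y, batch_size=128))

# Round-trip the trained parameters into an identically structured model.
model_bytes = model.to_bytes()
fresh_model = create_cnn_tagger(width=32, vector_width=16, nr_classes=17)
fresh_model.initialize(X=train_X[:5], Y=train_y[:5])   # allocate parameters first
fresh_model.from_bytes(model_bytes)
print("restored dev accuracy:", evaluate(fresh_model, dev_X, dev_y, batch_size=128))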
1,024
Given the following text description, write Python code to implement the functionality described below step by step Description: <!-- dom Step1: With $\boldsymbol{\beta}\in {\mathbb{R}}^{p\times 1}$, it means that we will hereafter write our equations for the approximation as $$ \boldsymbol{\tilde{y}}= \boldsymbol{X}\boldsymbol{\beta}, $$ throughout these lectures. Optimizing our parameters, more details With the above we use the design matrix to define the approximation $\boldsymbol{\tilde{y}}$ via the unknown quantity $\boldsymbol{\beta}$ as $$ \boldsymbol{\tilde{y}}= \boldsymbol{X}\boldsymbol{\beta}, $$ and in order to find the optimal parameters $\beta_i$ instead of solving the above linear algebra problem, we define a function which gives a measure of the spread between the values $y_i$ (which represent hopefully the exact values) and the parameterized values $\tilde{y}_i$, namely $$ C(\boldsymbol{\beta})=\frac{1}{n}\sum_{i=0}^{n-1}\left(y_i-\tilde{y}_i\right)^2=\frac{1}{n}\left{\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)^T\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)\right}, $$ or using the matrix $\boldsymbol{X}$ and in a more compact matrix-vector notation as $$ C(\boldsymbol{\beta})=\frac{1}{n}\left{\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)^T\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)\right}. $$ This function is one possible way to define the so-called cost function. It is also common to define the function $C$ as $$ C(\boldsymbol{\beta})=\frac{1}{2n}\sum_{i=0}^{n-1}\left(y_i-\tilde{y}_i\right)^2, $$ since when taking the first derivative with respect to the unknown parameters $\beta$, the factor of $2$ cancels out. Interpretations and optimizing our parameters The function $$ C(\boldsymbol{\beta})=\frac{1}{n}\left{\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)^T\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)\right}, $$ can be linked to the variance of the quantity $y_i$ if we interpret the latter as the mean value. When linking (see the discussion below) with the maximum likelihood approach below, we will indeed interpret $y_i$ as a mean value $$ y_{i}=\langle y_i \rangle = \beta_0x_{i,0}+\beta_1x_{i,1}+\beta_2x_{i,2}+\dots+\beta_{n-1}x_{i,n-1}+\epsilon_i, $$ where $\langle y_i \rangle$ is the mean value. Keep in mind also that till now we have treated $y_i$ as the exact value. Normally, the response (dependent or outcome) variable $y_i$ the outcome of a numerical experiment or another type of experiment and is thus only an approximation to the true value. It is then always accompanied by an error estimate, often limited to a statistical error estimate given by the standard deviation discussed earlier. In the discussion here we will treat $y_i$ as our exact value for the response variable. In order to find the parameters $\beta_i$ we will then minimize the spread of $C(\boldsymbol{\beta})$, that is we are going to solve the problem $$ {\displaystyle \min_{\boldsymbol{\beta}\in {\mathbb{R}}^{p}}}\frac{1}{n}\left{\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)^T\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)\right}. 
$$ In practical terms it means we will require $$ \frac{\partial C(\boldsymbol{\beta})}{\partial \beta_j} = \frac{\partial }{\partial \beta_j}\left[ \frac{1}{n}\sum_{i=0}^{n-1}\left(y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}\right)^2\right]=0, $$ which results in $$ \frac{\partial C(\boldsymbol{\beta})}{\partial \beta_j} = -\frac{2}{n}\left[ \sum_{i=0}^{n-1}x_{ij}\left(y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}\right)\right]=0, $$ or in a matrix-vector form as $$ \frac{\partial C(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{X}^T\left( \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right). $$ Interpretations and optimizing our parameters We can rewrite $$ \frac{\partial C(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{X}^T\left( \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right), $$ as $$ \boldsymbol{X}^T\boldsymbol{y} = \boldsymbol{X}^T\boldsymbol{X}\boldsymbol{\beta}, $$ and if the matrix $\boldsymbol{X}^T\boldsymbol{X}$ is invertible we have the solution $$ \boldsymbol{\beta} =\left(\boldsymbol{X}^T\boldsymbol{X}\right)^{-1}\boldsymbol{X}^T\boldsymbol{y}. $$ We note also that since our design matrix is defined as $\boldsymbol{X}\in {\mathbb{R}}^{n\times p}$, the product $\boldsymbol{X}^T\boldsymbol{X} \in {\mathbb{R}}^{p\times p}$. In the above case we have that $p \ll n$, in our case $p=5$ meaning that we end up with inverting a small $5\times 5$ matrix. This is a rather common situation, in many cases we end up with low-dimensional matrices to invert. The methods discussed here and for many other supervised learning algorithms like classification with logistic regression or support vector machines, exhibit dimensionalities which allow for the usage of direct linear algebra methods such as LU decomposition or Singular Value Decomposition (SVD) for finding the inverse of the matrix $\boldsymbol{X}^T\boldsymbol{X}$. Small question Step2: Alternatively, you can use the least squares functionality in Numpy as Step3: And finally we plot our fit with and compare with data Step4: Adding error analysis and training set up We can easily test our fit by computing the $R2$ score that we discussed in connection with the functionality of Scikit-Learn in the introductory slides. Since we are not using Scikit-Learn here we can define our own $R2$ function as Step5: and we would be using it as Step6: We can easily add our MSE score as Step7: and finally the relative error as Step8: The $\chi^2$ function Normally, the response (dependent or outcome) variable $y_i$ is the outcome of a numerical experiment or another type of experiment and is thus only an approximation to the true value. It is then always accompanied by an error estimate, often limited to a statistical error estimate given by the standard deviation discussed earlier. In the discussion here we will treat $y_i$ as our exact value for the response variable. Introducing the standard deviation $\sigma_i$ for each measurement $y_i$, we define now the $\chi^2$ function (omitting the $1/n$ term) as $$ \chi^2(\boldsymbol{\beta})=\frac{1}{n}\sum_{i=0}^{n-1}\frac{\left(y_i-\tilde{y}_i\right)^2}{\sigma_i^2}=\frac{1}{n}\left{\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)^T\frac{1}{\boldsymbol{\Sigma^2}}\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)\right}, $$ where the matrix $\boldsymbol{\Sigma}$ is a diagonal matrix with $\sigma_i$ as matrix elements. 
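Since the code cells behind the steps above are not reproduced here, a compact, self-contained sketch of the same workflow may be useful. It is not the exact code referred to by those steps, just an illustration on synthetic data of solving the normal equations for a low-order polynomial and scoring the fit with the R2, MSE and relative-error measures discussed above.

import numpy as np

# Synthetic data and a second-order polynomial design matrix (columns 1, x, x^2).
np.random.seed(2021)
n = 100
x = np.linspace(0.0, 1.0, n)
y = 2.0 + 5.0 * x**2 + 0.1 * np.random.randn(n)
X = np.column_stack([np.ones(n), x, x**2])

# Normal equations: beta = (X^T X)^{-1} X^T y (pinv used for numerical robustness).
beta = np.linalg.pinv(X.T @ X) @ (X.T @ y)
ytilde = X @ beta

def R2(y_data, y_model):
    return 1.0 - np.sum((y_data - y_model) ** 2) / np.sum((y_data - np.mean(y_data)) ** 2)

def MSE(y_data, y_model):
    return np.mean((y_data - y_model) ** 2)

def RelativeError(y_data, y_model):
    return np.abs((y_data - y_model) / y_data)

print("beta:", beta)
print("R2:", R2(y, ytilde), " MSE:", MSE(y, ytilde))
print("mean relative error:", np.mean(RelativeError(y, ytilde)))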
The $\chi^2$ function In order to find the parameters $\beta_i$ we will then minimize the spread of $\chi^2(\boldsymbol{\beta})$ by requiring $$ \frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_j} = \frac{\partial }{\partial \beta_j}\left[ \frac{1}{n}\sum_{i=0}^{n-1}\left(\frac{y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}}{\sigma_i}\right)^2\right]=0, $$ which results in $$ \frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_j} = -\frac{2}{n}\left[ \sum_{i=0}^{n-1}\frac{x_{ij}}{\sigma_i}\left(\frac{y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}}{\sigma_i}\right)\right]=0, $$ or in a matrix-vector form as $$ \frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{A}^T\left( \boldsymbol{b}-\boldsymbol{A}\boldsymbol{\beta}\right). $$ where we have defined the matrix $\boldsymbol{A} =\boldsymbol{X}/\boldsymbol{\Sigma}$ with matrix elements $a_{ij} = x_{ij}/\sigma_i$ and the vector $\boldsymbol{b}$ with elements $b_i = y_i/\sigma_i$. The $\chi^2$ function We can rewrite $$ \frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{A}^T\left( \boldsymbol{b}-\boldsymbol{A}\boldsymbol{\beta}\right), $$ as $$ \boldsymbol{A}^T\boldsymbol{b} = \boldsymbol{A}^T\boldsymbol{A}\boldsymbol{\beta}, $$ and if the matrix $\boldsymbol{A}^T\boldsymbol{A}$ is invertible we have the solution $$ \boldsymbol{\beta} =\left(\boldsymbol{A}^T\boldsymbol{A}\right)^{-1}\boldsymbol{A}^T\boldsymbol{b}. $$ The $\chi^2$ function If we then introduce the matrix $$ \boldsymbol{H} = \left(\boldsymbol{A}^T\boldsymbol{A}\right)^{-1}, $$ we have then the following expression for the parameters $\beta_j$ (the matrix elements of $\boldsymbol{H}$ are $h_{ij}$) $$ \beta_j = \sum_{k=0}^{p-1}h_{jk}\sum_{i=0}^{n-1}\frac{y_i}{\sigma_i}\frac{x_{ik}}{\sigma_i} = \sum_{k=0}^{p-1}h_{jk}\sum_{i=0}^{n-1}b_ia_{ik} $$ We state without proof the expression for the uncertainty in the parameters $\beta_j$ as (we leave this as an exercise) $$ \sigma^2(\beta_j) = \sum_{i=0}^{n-1}\sigma_i^2\left( \frac{\partial \beta_j}{\partial y_i}\right)^2, $$ resulting in $$ \sigma^2(\beta_j) = \left(\sum_{k=0}^{p-1}h_{jk}\sum_{i=0}^{n-1}a_{ik}\right)\left(\sum_{l=0}^{p-1}h_{jl}\sum_{m=0}^{n-1}a_{ml}\right) = h_{jj}! $$ The $\chi^2$ function The first step here is to approximate the function $y$ with a first-order polynomial, that is we write $$ y=y(x) \rightarrow y(x_i) \approx \beta_0+\beta_1 x_i. $$ By computing the derivatives of $\chi^2$ with respect to $\beta_0$ and $\beta_1$ show that these are given by $$ \frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_0} = -2\left[ \frac{1}{n}\sum_{i=0}^{n-1}\left(\frac{y_i-\beta_0-\beta_1x_{i}}{\sigma_i^2}\right)\right]=0, $$ and $$ \frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_1} = -\frac{2}{n}\left[ \sum_{i=0}^{n-1}x_i\left(\frac{y_i-\beta_0-\beta_1x_{i}}{\sigma_i^2}\right)\right]=0. $$ The $\chi^2$ function For a linear fit (a first-order polynomial) we don't need to invert a matrix!! 
Defining $$ \gamma = \sum_{i=0}^{n-1}\frac{1}{\sigma_i^2}, $$ $$ \gamma_x = \sum_{i=0}^{n-1}\frac{x_{i}}{\sigma_i^2}, $$ $$ \gamma_y = \sum_{i=0}^{n-1}\left(\frac{y_i}{\sigma_i^2}\right), $$ $$ \gamma_{xx} = \sum_{i=0}^{n-1}\frac{x_ix_{i}}{\sigma_i^2}, $$ $$ \gamma_{xy} = \sum_{i=0}^{n-1}\frac{y_ix_{i}}{\sigma_i^2}, $$ we obtain $$ \beta_0 = \frac{\gamma_{xx}\gamma_y-\gamma_x\gamma_y}{\gamma\gamma_{xx}-\gamma_x^2}, $$ $$ \beta_1 = \frac{\gamma_{xy}\gamma-\gamma_x\gamma_y}{\gamma\gamma_{xx}-\gamma_x^2}. $$ This approach (different linear and non-linear regression) suffers often from both being underdetermined and overdetermined in the unknown coefficients $\beta_i$. A better approach is to use the Singular Value Decomposition (SVD) method discussed below. Or using Lasso and Ridge regression. See below. Fitting an Equation of State for Dense Nuclear Matter Before we continue, let us introduce yet another example. We are going to fit the nuclear equation of state using results from many-body calculations. The equation of state we have made available here, as function of density, has been derived using modern nucleon-nucleon potentials with the addition of three-body forces. This time the file is presented as a standard csv file. The beginning of the Python code here is similar to what you have seen before, with the same initializations and declarations. We use also pandas again, rather extensively in order to organize our data. The difference now is that we use Scikit-Learn's regression tools instead of our own matrix inversion implementation. Furthermore, we sneak in Ridge regression (to be discussed below) which includes a hyperparameter $\lambda$, also to be explained below. The code Step9: The above simple polynomial in density $\rho$ gives an excellent fit to the data. We note also that there is a small deviation between the standard OLS and the Ridge regression at higher densities. We discuss this in more detail below. Splitting our Data in Training and Test data It is normal in essentially all Machine Learning studies to split the data in a training set and a test set (sometimes also an additional validation set). Scikit-Learn has an own function for this. There is no explicit recipe for how much data should be included as training data and say test data. An accepted rule of thumb is to use approximately $2/3$ to $4/5$ of the data as training data. We will postpone a discussion of this splitting to the end of these notes and our discussion of the so-called bias-variance tradeoff. Here we limit ourselves to repeat the above equation of state fitting example but now splitting the data into a training set and a test set. Step10: <!-- !split --> The Boston housing data example The Boston housing data set was originally a part of UCI Machine Learning Repository and has been removed now. The data set is now included in Scikit-Learn's library. There are 506 samples and 13 feature (predictor) variables in this data set. The objective is to predict the value of prices of the house using the features (predictors) listed here. The features/predictors are 1. CRIM Step11: and load the Boston Housing DataSet from Scikit-Learn Step12: Then we invoke Pandas Step13: and preprocess the data Step14: We can then visualize the data Step15: It is now useful to look at the correlation matrix Step16: From the above coorelation plot we can see that MEDV is strongly correlated to LSTAT and RM. 
We see also that RAD and TAX are stronly correlated, but we don't include this in our features together to avoid multi-colinearity Step17: Now we start training our model Step18: We split the data into training and test sets Step19: Then we use the linear regression functionality from Scikit-Learn Step20: Reducing the number of degrees of freedom, overarching view Many Machine Learning problems involve thousands or even millions of features for each training instance. Not only does this make training extremely slow, it can also make it much harder to find a good solution, as we will see. This problem is often referred to as the curse of dimensionality. Fortunately, in real-world problems, it is often possible to reduce the number of features considerably, turning an intractable problem into a tractable one. Later we will discuss some of the most popular dimensionality reduction techniques Step21: The singular value decomposition The examples we have looked at so far are cases where we normally can invert the matrix $\boldsymbol{X}^T\boldsymbol{X}$. Using a polynomial expansion as we did both for the masses and the fitting of the equation of state, leads to row vectors of the design matrix which are essentially orthogonal due to the polynomial character of our model. Obtaining the inverse of the design matrix is then often done via a so-called LU, QR or Cholesky decomposition. This may however not the be case in general and a standard matrix inversion algorithm based on say LU, QR or Cholesky decomposition may lead to singularities. We will see examples of this below. There is however a way to partially circumvent this problem and also gain some insights about the ordinary least squares approach, and later shrinkage methods like Ridge and Lasso regressions. This is given by the Singular Value Decomposition algorithm, perhaps the most powerful linear algebra algorithm. Let us look at a different example where we may have problems with the standard matrix inversion algorithm. Thereafter we dive into the math of the SVD. Linear Regression Problems One of the typical problems we encounter with linear regression, in particular when the matrix $\boldsymbol{X}$ (our so-called design matrix) is high-dimensional, are problems with near singular or singular matrices. The column vectors of $\boldsymbol{X}$ may be linearly dependent, normally referred to as super-collinearity. This means that the matrix may be rank deficient and it is basically impossible to to model the data using linear regression. As an example, consider the matrix $$ \begin{align} \mathbf{X} & = \left[ \begin{array}{rrr} 1 & -1 & 2 \ 1 & 0 & 1 \ 1 & 2 & -1 \ 1 & 1 & 0 \end{array} \right] \end{align} $$ The columns of $\boldsymbol{X}$ are linearly dependent. We see this easily since the the first column is the row-wise sum of the other two columns. The rank (more correct, the column rank) of a matrix is the dimension of the space spanned by the column vectors. Hence, the rank of $\mathbf{X}$ is equal to the number of linearly independent columns. In this particular case the matrix has rank 2. Super-collinearity of an $(n \times p)$-dimensional design matrix $\mathbf{X}$ implies that the inverse of the matrix $\boldsymbol{X}^T\boldsymbol{X}$ (the matrix we need to invert to solve the linear regression equations) is non-invertible. If we have a square matrix that does not have an inverse, we say this matrix singular. 
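The rank deficiency described above is easy to check numerically. The short sketch below simply types in the 4x3 matrix from the text and confirms that its rank is 2 and that X^T X is singular, which is exactly why the normal equations cannot be solved with a plain matrix inverse in such cases.

import numpy as np

# The rank-deficient design matrix from the text: the first column is the
# row-wise sum of the other two, so the columns are linearly dependent.
X = np.array([[1.0, -1.0,  2.0],
              [1.0,  0.0,  1.0],
              [1.0,  2.0, -1.0],
              [1.0,  1.0,  0.0]])

print("rank of X:", np.linalg.matrix_rank(X))            # 2, not 3
XtX = X.T @ X
print("det(X^T X):", np.linalg.det(XtX))                 # numerically zero
print("eigenvalues of X^T X:", np.linalg.eigvals(XtX))   # one eigenvalue is (close to) zero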
The example here demonstrates this $$ \begin{align} \boldsymbol{X} & = \left[ \begin{array}{rr} 1 & -1 \ 1 & -1 \end{array} \right]. \end{align} $$ We see easily that $\mbox{det}(\boldsymbol{X}) = x_{11} x_{22} - x_{12} x_{21} = 1 \times (-1) - 1 \times (-1) = 0$. Hence, $\mathbf{X}$ is singular and its inverse is undefined. This is equivalent to saying that the matrix $\boldsymbol{X}$ has at least an eigenvalue which is zero. Fixing the singularity If our design matrix $\boldsymbol{X}$ which enters the linear regression problem <!-- Equation labels as ordinary links --> <div id="_auto1"></div> $$ \begin{equation} \boldsymbol{\beta} = (\boldsymbol{X}^{T} \boldsymbol{X})^{-1} \boldsymbol{X}^{T} \boldsymbol{y}, \label{_auto1} \tag{1} \end{equation} $$ has linearly dependent column vectors, we will not be able to compute the inverse of $\boldsymbol{X}^T\boldsymbol{X}$ and we cannot find the parameters (estimators) $\beta_i$. The estimators are only well-defined if $(\boldsymbol{X}^{T}\boldsymbol{X})^{-1}$ exits. This is more likely to happen when the matrix $\boldsymbol{X}$ is high-dimensional. In this case it is likely to encounter a situation where the regression parameters $\beta_i$ cannot be estimated. A cheap ad hoc approach is simply to add a small diagonal component to the matrix to invert, that is we change $$ \boldsymbol{X}^{T} \boldsymbol{X} \rightarrow \boldsymbol{X}^{T} \boldsymbol{X}+\lambda \boldsymbol{I}, $$ where $\boldsymbol{I}$ is the identity matrix. When we discuss Ridge regression this is actually what we end up evaluating. The parameter $\lambda$ is called a hyperparameter. More about this later. Basic math of the SVD From standard linear algebra we know that a square matrix $\boldsymbol{X}$ can be diagonalized if and only it is a so-called normal matrix, that is if $\boldsymbol{X}\in {\mathbb{R}}^{n\times n}$ we have $\boldsymbol{X}\boldsymbol{X}^T=\boldsymbol{X}^T\boldsymbol{X}$ or if $\boldsymbol{X}\in {\mathbb{C}}^{n\times n}$ we have $\boldsymbol{X}\boldsymbol{X}^{\dagger}=\boldsymbol{X}^{\dagger}\boldsymbol{X}$. The matrix has then a set of eigenpairs $$ (\lambda_1,\boldsymbol{u}_1),\dots, (\lambda_n,\boldsymbol{u}_n), $$ and the eigenvalues are given by the diagonal matrix $$ \boldsymbol{\Sigma}=\mathrm{Diag}(\lambda_1, \dots,\lambda_n). $$ The matrix $\boldsymbol{X}$ can be written in terms of an orthogonal/unitary transformation $\boldsymbol{U}$ $$ \boldsymbol{X} = \boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{V}^T, $$ with $\boldsymbol{U}\boldsymbol{U}^T=\boldsymbol{I}$ or $\boldsymbol{U}\boldsymbol{U}^{\dagger}=\boldsymbol{I}$. Not all square matrices are diagonalizable. A matrix like the one discussed above $$ \boldsymbol{X} = \begin{bmatrix} 1& -1 \ 1& -1\ \end{bmatrix} $$ is not diagonalizable, it is a so-called defective matrix. It is easy to see that the condition $\boldsymbol{X}\boldsymbol{X}^T=\boldsymbol{X}^T\boldsymbol{X}$ is not fulfilled. The SVD, a Fantastic Algorithm However, and this is the strength of the SVD algorithm, any general matrix $\boldsymbol{X}$ can be decomposed in terms of a diagonal matrix and two orthogonal/unitary matrices. The Singular Value Decompostion (SVD) theorem states that a general $m\times n$ matrix $\boldsymbol{X}$ can be written in terms of a diagonal matrix $\boldsymbol{\Sigma}$ of dimensionality $m\times n$ and two orthognal matrices $\boldsymbol{U}$ and $\boldsymbol{V}$, where the first has dimensionality $m \times m$ and the last dimensionality $n\times n$. 
We have then $$ \boldsymbol{X} = \boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{V}^T $$ As an example, the above defective matrix can be decomposed as $$ \boldsymbol{X} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1& 1 \ 1& -1\ \end{bmatrix} \begin{bmatrix} 2& 0 \ 0& 0\ \end{bmatrix} \frac{1}{\sqrt{2}}\begin{bmatrix} 1& -1 \ 1& 1\ \end{bmatrix}=\boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{V}^T, $$ with eigenvalues $\sigma_1=2$ and $\sigma_2=0$. The SVD exits always! The SVD decomposition (singular values) gives eigenvalues $\sigma_i\geq\sigma_{i+1}$ for all $i$ and for dimensions larger than $i=p$, the eigenvalues (singular values) are zero. In the general case, where our design matrix $\boldsymbol{X}$ has dimension $n\times p$, the matrix is thus decomposed into an $n\times n$ orthogonal matrix $\boldsymbol{U}$, a $p\times p$ orthogonal matrix $\boldsymbol{V}$ and a diagonal matrix $\boldsymbol{\Sigma}$ with $r=\mathrm{min}(n,p)$ singular values $\sigma_i\geq 0$ on the main diagonal and zeros filling the rest of the matrix. There are at most $p$ singular values assuming that $n > p$. In our regression examples for the nuclear masses and the equation of state this is indeed the case, while for the Ising model we have $p > n$. These are often cases that lead to near singular or singular matrices. The columns of $\boldsymbol{U}$ are called the left singular vectors while the columns of $\boldsymbol{V}$ are the right singular vectors. Economy-size SVD If we assume that $n > p$, then our matrix $\boldsymbol{U}$ has dimension $n \times n$. The last $n-p$ columns of $\boldsymbol{U}$ become however irrelevant in our calculations since they are multiplied with the zeros in $\boldsymbol{\Sigma}$. The economy-size decomposition removes extra rows or columns of zeros from the diagonal matrix of singular values, $\boldsymbol{\Sigma}$, along with the columns in either $\boldsymbol{U}$ or $\boldsymbol{V}$ that multiply those zeros in the expression. Removing these zeros and columns can improve execution time and reduce storage requirements without compromising the accuracy of the decomposition. If $n > p$, we keep only the first $p$ columns of $\boldsymbol{U}$ and $\boldsymbol{\Sigma}$ has dimension $p\times p$. If $p > n$, then only the first $n$ columns of $\boldsymbol{V}$ are computed and $\boldsymbol{\Sigma}$ has dimension $n\times n$. The $n=p$ case is obvious, we retain the full SVD. In general the economy-size SVD leads to less FLOPS and still conserving the desired accuracy. Codes for the SVD Step22: The matrix $\boldsymbol{X}$ has columns that are linearly dependent. The first column is the row-wise sum of the other two columns. The rank of a matrix (the column rank) is the dimension of space spanned by the column vectors. The rank of the matrix is the number of linearly independent columns, in this case just $2$. We see this from the singular values when running the above code. Running the standard inversion algorithm for matrix inversion with $\boldsymbol{X}^T\boldsymbol{X}$ results in the program terminating due to a singular matrix. Mathematical Properties There are several interesting mathematical properties which will be relevant when we are going to discuss the differences between say ordinary least squares (OLS) and Ridge regression. We have from OLS that the parameters of the linear approximation are given by $$ \boldsymbol{\tilde{y}} = \boldsymbol{X}\boldsymbol{\beta} = \boldsymbol{X}\left(\boldsymbol{X}^T\boldsymbol{X}\right)^{-1}\boldsymbol{X}^T\boldsymbol{y}. 
$$ The matrix to invert can be rewritten in terms of our SVD decomposition as $$ \boldsymbol{X}^T\boldsymbol{X} = \boldsymbol{V}\boldsymbol{\Sigma}^T\boldsymbol{U}^T\boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{V}^T. $$ Using the orthogonality properties of $\boldsymbol{U}$ we have $$ \boldsymbol{X}^T\boldsymbol{X} = \boldsymbol{V}\boldsymbol{\Sigma}^T\boldsymbol{\Sigma}\boldsymbol{V}^T = \boldsymbol{V}\boldsymbol{D}\boldsymbol{V}^T, $$ with $\boldsymbol{D}$ being a diagonal matrix with values along the diagonal given by the singular values squared. This means that $$ (\boldsymbol{X}^T\boldsymbol{X})\boldsymbol{V} = \boldsymbol{V}\boldsymbol{D}, $$ that is the eigenvectors of $(\boldsymbol{X}^T\boldsymbol{X})$ are given by the columns of the right singular matrix of $\boldsymbol{X}$ and the eigenvalues are the squared singular values. It is easy to show (show this) that $$ (\boldsymbol{X}\boldsymbol{X}^T)\boldsymbol{U} = \boldsymbol{U}\boldsymbol{D}, $$ that is, the eigenvectors of $(\boldsymbol{X}\boldsymbol{X})^T$ are the columns of the left singular matrix and the eigenvalues are the same. Going back to our OLS equation we have $$ \boldsymbol{X}\boldsymbol{\beta} = \boldsymbol{X}\left(\boldsymbol{V}\boldsymbol{D}\boldsymbol{V}^T \right)^{-1}\boldsymbol{X}^T\boldsymbol{y}=\boldsymbol{U\Sigma V^T}\left(\boldsymbol{V}\boldsymbol{D}\boldsymbol{V}^T \right)^{-1}(\boldsymbol{U\Sigma V^T})^T\boldsymbol{y}=\boldsymbol{U}\boldsymbol{U}^T\boldsymbol{y}. $$ We will come back to this expression when we discuss Ridge regression. Ridge and LASSO Regression Let us remind ourselves about the expression for the standard Mean Squared Error (MSE) which we used to define our cost function and the equations for the ordinary least squares (OLS) method, that is our optimization problem is $$ {\displaystyle \min_{\boldsymbol{\beta}\in {\mathbb{R}}^{p}}}\frac{1}{n}\left{\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)^T\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)\right}. $$ or we can state it as $$ {\displaystyle \min_{\boldsymbol{\beta}\in {\mathbb{R}}^{p}}}\frac{1}{n}\sum_{i=0}^{n-1}\left(y_i-\tilde{y}_i\right)^2=\frac{1}{n}\vert\vert \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\vert\vert_2^2, $$ where we have used the definition of a norm-2 vector, that is $$ \vert\vert \boldsymbol{x}\vert\vert_2 = \sqrt{\sum_i x_i^2}. $$ By minimizing the above equation with respect to the parameters $\boldsymbol{\beta}$ we could then obtain an analytical expression for the parameters $\boldsymbol{\beta}$. We can add a regularization parameter $\lambda$ by defining a new cost function to be optimized, that is $$ {\displaystyle \min_{\boldsymbol{\beta}\in {\mathbb{R}}^{p}}}\frac{1}{n}\vert\vert \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\vert\vert_2^2+\lambda\vert\vert \boldsymbol{\beta}\vert\vert_2^2 $$ which leads to the Ridge regression minimization problem where we require that $\vert\vert \boldsymbol{\beta}\vert\vert_2^2\le t$, where $t$ is a finite number larger than zero. By defining $$ C(\boldsymbol{X},\boldsymbol{\beta})=\frac{1}{n}\vert\vert \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\vert\vert_2^2+\lambda\vert\vert \boldsymbol{\beta}\vert\vert_1, $$ we have a new optimization equation $$ {\displaystyle \min_{\boldsymbol{\beta}\in {\mathbb{R}}^{p}}}\frac{1}{n}\vert\vert \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\vert\vert_2^2+\lambda\vert\vert \boldsymbol{\beta}\vert\vert_1 $$ which leads to Lasso regression. 
Lasso stands for least absolute shrinkage and selection operator. Here we have defined the norm-1 as $$ \vert\vert \boldsymbol{x}\vert\vert_1 = \sum_i \vert x_i\vert. $$ More on Ridge Regression Using the matrix-vector expression for Ridge regression, $$ C(\boldsymbol{X},\boldsymbol{\beta})=\frac{1}{n}\left{(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta})^T(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta})\right}+\lambda\boldsymbol{\beta}^T\boldsymbol{\beta}, $$ by taking the derivatives with respect to $\boldsymbol{\beta}$ we obtain then a slightly modified matrix inversion problem which for finite values of $\lambda$ does not suffer from singularity problems. We obtain $$ \boldsymbol{\beta}^{\mathrm{Ridge}} = \left(\boldsymbol{X}^T\boldsymbol{X}+\lambda\boldsymbol{I}\right)^{-1}\boldsymbol{X}^T\boldsymbol{y}, $$ with $\boldsymbol{I}$ being a $p\times p$ identity matrix with the constraint that $$ \sum_{i=0}^{p-1} \beta_i^2 \leq t, $$ with $t$ a finite positive number. We see that Ridge regression is nothing but the standard OLS with a modified diagonal term added to $\boldsymbol{X}^T\boldsymbol{X}$. The consequences, in particular for our discussion of the bias-variance tradeoff are rather interesting. Furthermore, if we use the result above in terms of the SVD decomposition (our analysis was done for the OLS method), we had $$ (\boldsymbol{X}\boldsymbol{X}^T)\boldsymbol{U} = \boldsymbol{U}\boldsymbol{D}. $$ We can analyse the OLS solutions in terms of the eigenvectors (the columns) of the right singular value matrix $\boldsymbol{U}$ as $$ \boldsymbol{X}\boldsymbol{\beta} = \boldsymbol{X}\left(\boldsymbol{V}\boldsymbol{D}\boldsymbol{V}^T \right)^{-1}\boldsymbol{X}^T\boldsymbol{y}=\boldsymbol{U\Sigma V^T}\left(\boldsymbol{V}\boldsymbol{D}\boldsymbol{V}^T \right)^{-1}(\boldsymbol{U\Sigma V^T})^T\boldsymbol{y}=\boldsymbol{U}\boldsymbol{U}^T\boldsymbol{y} $$ For Ridge regression this becomes $$ \boldsymbol{X}\boldsymbol{\beta}^{\mathrm{Ridge}} = \boldsymbol{U\Sigma V^T}\left(\boldsymbol{V}\boldsymbol{D}\boldsymbol{V}^T+\lambda\boldsymbol{I} \right)^{-1}(\boldsymbol{U\Sigma V^T})^T\boldsymbol{y}=\sum_{j=0}^{p-1}\boldsymbol{u}_j\boldsymbol{u}_j^T\frac{\sigma_j^2}{\sigma_j^2+\lambda}\boldsymbol{y}, $$ with the vectors $\boldsymbol{u}_j$ being the columns of $\boldsymbol{U}$. Interpreting the Ridge results Since $\lambda \geq 0$, it means that compared to OLS, we have $$ \frac{\sigma_j^2}{\sigma_j^2+\lambda} \leq 1. $$ Ridge regression finds the coordinates of $\boldsymbol{y}$ with respect to the orthonormal basis $\boldsymbol{U}$, it then shrinks the coordinates by $\frac{\sigma_j^2}{\sigma_j^2+\lambda}$. Recall that the SVD has eigenvalues ordered in a descending way, that is $\sigma_i \geq \sigma_{i+1}$. For small eigenvalues $\sigma_i$ it means that their contributions become less important, a fact which can be used to reduce the number of degrees of freedom. Actually, calculating the variance of $\boldsymbol{X}\boldsymbol{v}_j$ shows that this quantity is equal to $\sigma_j^2/n$. With a parameter $\lambda$ we can thus shrink the role of specific parameters. More interpretations For the sake of simplicity, let us assume that the design matrix is orthonormal, that is $$ \boldsymbol{X}^T\boldsymbol{X}=(\boldsymbol{X}^T\boldsymbol{X})^{-1} =\boldsymbol{I}. 
$$ In this case the standard OLS results in $$ \boldsymbol{\beta}^{\mathrm{OLS}} = \boldsymbol{X}^T\boldsymbol{y}=\sum_{i=0}^{p-1}\boldsymbol{u}_j\boldsymbol{u}_j^T\boldsymbol{y}, $$ and $$ \boldsymbol{\beta}^{\mathrm{Ridge}} = \left(\boldsymbol{I}+\lambda\boldsymbol{I}\right)^{-1}\boldsymbol{X}^T\boldsymbol{y}=\left(1+\lambda\right)^{-1}\boldsymbol{\beta}^{\mathrm{OLS}}, $$ that is the Ridge estimator scales the OLS estimator by the inverse of a factor $1+\lambda$, and the Ridge estimator converges to zero when the hyperparameter goes to infinity. We will come back to more interpreations after we have gone through some of the statistical analysis part. For more discussions of Ridge and Lasso regression, Wessel van Wieringen's article is highly recommended. Similarly, Mehta et al's article is also recommended. <!-- !split --> A better understanding of regularization The parameter $\lambda$ that we have introduced in the Ridge (and Lasso as well) regression is often called a regularization parameter or shrinkage parameter. It is common to call it a hyperparameter. What does it mean mathemtically? Here we will first look at how to analyze the difference between the standard OLS equations and the Ridge expressions in terms of a linear algebra analysis using the SVD algorithm. Thereafter, we will link (see the material on the bias-variance tradeoff below) these observation to the statisical analysis of the results. In particular we consider how the variance of the parameters $\boldsymbol{\beta}$ is affected by changing the parameter $\lambda$. Decomposing the OLS and Ridge expressions We have our design matrix $\boldsymbol{X}\in {\mathbb{R}}^{n\times p}$. With the SVD we decompose it as $$ \boldsymbol{X} = \boldsymbol{U\Sigma V^T}, $$ with $\boldsymbol{U}\in {\mathbb{R}}^{n\times n}$, $\boldsymbol{\Sigma}\in {\mathbb{R}}^{n\times p}$ and $\boldsymbol{V}\in {\mathbb{R}}^{p\times p}$. The matrices $\boldsymbol{U}$ and $\boldsymbol{V}$ are unitary/orthonormal matrices, that is in case the matrices are real we have $\boldsymbol{U}^T\boldsymbol{U}=\boldsymbol{U}\boldsymbol{U}^T=\boldsymbol{I}$ and $\boldsymbol{V}^T\boldsymbol{V}=\boldsymbol{V}\boldsymbol{V}^T=\boldsymbol{I}$. Introducing the Covariance and Correlation functions Before we discuss the link between for example Ridge regression and the singular value decomposition, we need to remind ourselves about the definition of the covariance and the correlation function. These are quantities Suppose we have defined two vectors $\hat{x}$ and $\hat{y}$ with $n$ elements each. The covariance matrix $\boldsymbol{C}$ is defined as $$ \boldsymbol{C}[\boldsymbol{x},\boldsymbol{y}] = \begin{bmatrix} \mathrm{cov}[\boldsymbol{x},\boldsymbol{x}] & \mathrm{cov}[\boldsymbol{x},\boldsymbol{y}] \ \mathrm{cov}[\boldsymbol{y},\boldsymbol{x}] & \mathrm{cov}[\boldsymbol{y},\boldsymbol{y}] \ \end{bmatrix}, $$ where for example $$ \mathrm{cov}[\boldsymbol{x},\boldsymbol{y}] =\frac{1}{n} \sum_{i=0}^{n-1}(x_i- \overline{x})(y_i- \overline{y}). $$ With this definition and recalling that the variance is defined as $$ \mathrm{var}[\boldsymbol{x}]=\frac{1}{n} \sum_{i=0}^{n-1}(x_i- \overline{x})^2, $$ we can rewrite the covariance matrix as $$ \boldsymbol{C}[\boldsymbol{x},\boldsymbol{y}] = \begin{bmatrix} \mathrm{var}[\boldsymbol{x}] & \mathrm{cov}[\boldsymbol{x},\boldsymbol{y}] \ \mathrm{cov}[\boldsymbol{x},\boldsymbol{y}] & \mathrm{var}[\boldsymbol{y}] \ \end{bmatrix}. 
$$ The covariance takes values between zero and infinity and may thus lead to problems with loss of numerical precision for particularly large values. It is common to scale the covariance matrix by introducing instead the correlation matrix defined via the so-called correlation function $$ \mathrm{corr}[\boldsymbol{x},\boldsymbol{y}]=\frac{\mathrm{cov}[\boldsymbol{x},\boldsymbol{y}]}{\sqrt{\mathrm{var}[\boldsymbol{x}] \mathrm{var}[\boldsymbol{y}]}}. $$ The correlation function is then given by values $\mathrm{corr}[\boldsymbol{x},\boldsymbol{y}] \in [-1,1]$. This avoids eventual problems with too large values. We can then define the correlation matrix for the two vectors $\boldsymbol{x}$ and $\boldsymbol{y}$ as $$ \boldsymbol{K}[\boldsymbol{x},\boldsymbol{y}] = \begin{bmatrix} 1 & \mathrm{corr}[\boldsymbol{x},\boldsymbol{y}] \ \mathrm{corr}[\boldsymbol{y},\boldsymbol{x}] & 1 \ \end{bmatrix}, $$ In the above example this is the function we constructed using pandas. Correlation Function and Design/Feature Matrix In our derivation of the various regression algorithms like Ordinary Least Squares or Ridge regression we defined the design/feature matrix $\boldsymbol{X}$ as $$ \boldsymbol{X}=\begin{bmatrix} x_{0,0} & x_{0,1} & x_{0,2}& \dots & \dots x_{0,p-1}\ x_{1,0} & x_{1,1} & x_{1,2}& \dots & \dots x_{1,p-1}\ x_{2,0} & x_{2,1} & x_{2,2}& \dots & \dots x_{2,p-1}\ \dots & \dots & \dots & \dots \dots & \dots \ x_{n-2,0} & x_{n-2,1} & x_{n-2,2}& \dots & \dots x_{n-2,p-1}\ x_{n-1,0} & x_{n-1,1} & x_{n-1,2}& \dots & \dots x_{n-1,p-1}\ \end{bmatrix}, $$ with $\boldsymbol{X}\in {\mathbb{R}}^{n\times p}$, with the predictors/features $p$ refering to the column numbers and the entries $n$ being the row elements. We can rewrite the design/feature matrix in terms of its column vectors as $$ \boldsymbol{X}=\begin{bmatrix} \boldsymbol{x}0 & \boldsymbol{x}_1 & \boldsymbol{x}_2 & \dots & \dots & \boldsymbol{x}{p-1}\end{bmatrix}, $$ with a given vector $$ \boldsymbol{x}i^T = \begin{bmatrix}x{0,i} & x_{1,i} & x_{2,i}& \dots & \dots x_{n-1,i}\end{bmatrix}. $$ With these definitions, we can now rewrite our $2\times 2$ correaltion/covariance matrix in terms of a moe general design/feature matrix $\boldsymbol{X}\in {\mathbb{R}}^{n\times p}$. 
This leads to a $p\times p$ covariance matrix for the vectors $\boldsymbol{x}_i$ with $i=0,1,\dots,p-1$ $$ \boldsymbol{C}[\boldsymbol{x}] = \begin{bmatrix} \mathrm{var}[\boldsymbol{x}0] & \mathrm{cov}[\boldsymbol{x}_0,\boldsymbol{x}_1] & \mathrm{cov}[\boldsymbol{x}_0,\boldsymbol{x}_2] & \dots & \dots & \mathrm{cov}[\boldsymbol{x}_0,\boldsymbol{x}{p-1}]\ \mathrm{cov}[\boldsymbol{x}1,\boldsymbol{x}_0] & \mathrm{var}[\boldsymbol{x}_1] & \mathrm{cov}[\boldsymbol{x}_1,\boldsymbol{x}_2] & \dots & \dots & \mathrm{cov}[\boldsymbol{x}_1,\boldsymbol{x}{p-1}]\ \mathrm{cov}[\boldsymbol{x}2,\boldsymbol{x}_0] & \mathrm{cov}[\boldsymbol{x}_2,\boldsymbol{x}_1] & \mathrm{var}[\boldsymbol{x}_2] & \dots & \dots & \mathrm{cov}[\boldsymbol{x}_2,\boldsymbol{x}{p-1}]\ \dots & \dots & \dots & \dots & \dots & \dots \ \dots & \dots & \dots & \dots & \dots & \dots \ \mathrm{cov}[\boldsymbol{x}{p-1},\boldsymbol{x}_0] & \mathrm{cov}[\boldsymbol{x}{p-1},\boldsymbol{x}1] & \mathrm{cov}[\boldsymbol{x}{p-1},\boldsymbol{x}{2}] & \dots & \dots & \mathrm{var}[\boldsymbol{x}{p-1}]\ \end{bmatrix}, $$ and the correlation matrix $$ \boldsymbol{K}[\boldsymbol{x}] = \begin{bmatrix} 1 & \mathrm{corr}[\boldsymbol{x}0,\boldsymbol{x}_1] & \mathrm{corr}[\boldsymbol{x}_0,\boldsymbol{x}_2] & \dots & \dots & \mathrm{corr}[\boldsymbol{x}_0,\boldsymbol{x}{p-1}]\ \mathrm{corr}[\boldsymbol{x}1,\boldsymbol{x}_0] & 1 & \mathrm{corr}[\boldsymbol{x}_1,\boldsymbol{x}_2] & \dots & \dots & \mathrm{corr}[\boldsymbol{x}_1,\boldsymbol{x}{p-1}]\ \mathrm{corr}[\boldsymbol{x}2,\boldsymbol{x}_0] & \mathrm{corr}[\boldsymbol{x}_2,\boldsymbol{x}_1] & 1 & \dots & \dots & \mathrm{corr}[\boldsymbol{x}_2,\boldsymbol{x}{p-1}]\ \dots & \dots & \dots & \dots & \dots & \dots \ \dots & \dots & \dots & \dots & \dots & \dots \ \mathrm{corr}[\boldsymbol{x}{p-1},\boldsymbol{x}_0] & \mathrm{corr}[\boldsymbol{x}{p-1},\boldsymbol{x}1] & \mathrm{corr}[\boldsymbol{x}{p-1},\boldsymbol{x}_{2}] & \dots & \dots & 1\ \end{bmatrix}, $$ Covariance Matrix Examples The Numpy function np.cov calculates the covariance elements using the factor $1/(n-1)$ instead of $1/n$ since it assumes we do not have the exact mean values. The following simple function uses the np.vstack function which takes each vector of dimension $1\times n$ and produces a $2\times n$ matrix $\boldsymbol{W}$ $$ \boldsymbol{W} = \begin{bmatrix} x_0 & y_0 \ x_1 & y_1 \ x_2 & y_2\ \dots & \dots \ x_{n-2} & y_{n-2}\ x_{n-1} & y_{n-1} & \end{bmatrix}, $$ which in turn is converted into into the $2\times 2$ covariance matrix $\boldsymbol{C}$ via the Numpy function np.cov(). We note that we can also calculate the mean value of each set of samples $\boldsymbol{x}$ etc using the Numpy function np.mean(x). We can also extract the eigenvalues of the covariance matrix through the np.linalg.eig() function. Step23: Correlation Matrix The previous example can be converted into the correlation matrix by simply scaling the matrix elements with the variances. We should also subtract the mean values for each column. This leads to the following code which sets up the correlations matrix for the previous example in a more brute force way. Here we scale the mean values for each column of the design matrix, calculate the relevant mean values and variances and then finally set up the $2\times 2$ correlation matrix (since we have only two vectors). Step24: We see that the matrix elements along the diagonal are one as they should be and that the matrix is symmetric. 
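Because the code behind the steps above is not shown here, a small stand-alone version of the same construction may help: two correlated samples are stacked with np.vstack, np.cov returns the 2x2 covariance matrix, and dividing by the products of standard deviations (or calling np.corrcoef directly) gives the correlation matrix with ones on the diagonal. This is an illustrative sketch, not the exact code of those steps.

import numpy as np

# Two correlated sample sets stacked into a 2 x n matrix.
np.random.seed(100)
n = 1000
x = np.random.normal(size=n)
y = 0.8 * x + 0.2 * np.random.normal(size=n)
W = np.vstack((x, y))

C = np.cov(W)                     # 2 x 2 covariance matrix (uses the 1/(n-1) convention)
d = np.sqrt(np.diag(C))
K = C / np.outer(d, d)            # correlation matrix: ones on the diagonal, symmetric
print(C)
print(K)
print(np.corrcoef(W))             # NumPy's own routine gives the same correlation matrix
print(np.linalg.eigvals(K))       # both eigenvalues are positive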
Furthermore, diagonalizing this matrix we easily see that it is a positive definite matrix. The above procedure with numpy can be made more compact if we use pandas. Correlation Matrix with Pandas We whow here how we can set up the correlation matrix using pandas, as done in this simple code Step25: We expand this model to the Franke function discussed above. Correlation Matrix with Pandas and the Franke function Step26: We note here that the covariance is zero for the first rows and columns since all matrix elements in the design matrix were set to one (we are fitting the function in terms of a polynomial of degree $n$). This means that the variance for these elements will be zero and will cause problems when we set up the correlation matrix. We can simply drop these elements and construct a correlation matrix without these elements. Rewriting the Covariance and/or Correlation Matrix We can rewrite the covariance matrix in a more compact form in terms of the design/feature matrix $\boldsymbol{X}$ as $$ \boldsymbol{C}[\boldsymbol{x}] = \frac{1}{n}\boldsymbol{X}^T\boldsymbol{X}= \mathbb{E}[\boldsymbol{X}^T\boldsymbol{X}]. $$ To see this let us simply look at a design matrix $\boldsymbol{X}\in {\mathbb{R}}^{2\times 2}$ $$ \boldsymbol{X}=\begin{bmatrix} x_{00} & x_{01}\ x_{10} & x_{11}\ \end{bmatrix}=\begin{bmatrix} \boldsymbol{x}{0} & \boldsymbol{x}{1}\ \end{bmatrix}. $$ If we then compute the expectation value $$ \mathbb{E}[\boldsymbol{X}^T\boldsymbol{X}] = \frac{1}{n}\boldsymbol{X}^T\boldsymbol{X}=\begin{bmatrix} x_{00}^2+x_{01}^2 & x_{00}x_{10}+x_{01}x_{11}\ x_{10}x_{00}+x_{11}x_{01} & x_{10}^2+x_{11}^2\ \end{bmatrix}, $$ which is just $$ \boldsymbol{C}[\boldsymbol{x}_0,\boldsymbol{x}_1] = \boldsymbol{C}[\boldsymbol{x}]=\begin{bmatrix} \mathrm{var}[\boldsymbol{x}_0] & \mathrm{cov}[\boldsymbol{x}_0,\boldsymbol{x}_1] \ \mathrm{cov}[\boldsymbol{x}_1,\boldsymbol{x}_0] & \mathrm{var}[\boldsymbol{x}_1] \ \end{bmatrix}, $$ where we wrote $$\boldsymbol{C}[\boldsymbol{x}_0,\boldsymbol{x}_1] = \boldsymbol{C}[\boldsymbol{x}]$$ to indicate that this the covariance of the vectors $\boldsymbol{x}$ of the design/feature matrix $\boldsymbol{X}$. It is easy to generalize this to a matrix $\boldsymbol{X}\in {\mathbb{R}}^{n\times p}$. Linking with SVD See lecture september 11. More text to be added here soon. Where are we going? Before we proceed, we need to rethink what we have been doing. In our eager to fit the data, we have omitted several important elements in our regression analysis. In what follows we will 1. look at statistical properties, including a discussion of mean values, variance and the so-called bias-variance tradeoff introduce resampling techniques like cross-validation, bootstrapping and jackknife and more This will allow us to link the standard linear algebra methods we have discussed above to a statistical interpretation of the methods. Resampling methods Resampling methods are an indispensable tool in modern statistics. They involve repeatedly drawing samples from a training set and refitting a model of interest on each sample in order to obtain additional information about the fitted model. For example, in order to estimate the variability of a linear regression fit, we can repeatedly draw different samples from the training data, fit a linear regression to each new sample, and then examine the extent to which the resulting fits differ. Such an approach may allow us to obtain information that would not be available from fitting the model only once using the original training sample. 
Two resampling methods are often used in Machine Learning analyses, 1. The bootstrap method and Cross-Validation In addition there are several other methods such as the Jackknife and the Blocking methods. We will discuss in particular cross-validation and the bootstrap method. Resampling approaches can be computationally expensive Resampling approaches can be computationally expensive, because they involve fitting the same statistical method multiple times using different subsets of the training data. However, due to recent advances in computing power, the computational requirements of resampling methods generally are not prohibitive. In this chapter, we discuss two of the most commonly used resampling methods, cross-validation and the bootstrap. Both methods are important tools in the practical application of many statistical learning procedures. For example, cross-validation can be used to estimate the test error associated with a given statistical learning method in order to evaluate its performance, or to select the appropriate level of flexibility. The process of evaluating a model’s performance is known as model assessment, whereas the process of selecting the proper level of flexibility for a model is known as model selection. The bootstrap is widely used. Why resampling methods ? Statistical analysis. Our simulations can be treated as computer experiments. This is particularly the case for Monte Carlo methods The results can be analysed with the same statistical tools as we would use analysing experimental data. As in all experiments, we are looking for expectation values and an estimate of how accurate they are, i.e., possible sources for errors. Statistical analysis As in other experiments, many numerical experiments have two classes of errors Step27: Resampling methods Step28: <!-- !split --> Various steps in cross-validation When the repetitive splitting of the data set is done randomly, samples may accidently end up in a fast majority of the splits in either training or test set. Such samples may have an unbalanced influence on either model building or prediction evaluation. To avoid this $k$-fold cross-validation structures the data splitting. The samples are divided into $k$ more or less equally sized exhaustive and mutually exclusive subsets. In turn (at each split) one of these subsets plays the role of the test set while the union of the remaining subsets constitutes the training set. Such a splitting warrants a balanced representation of each sample in both training and test set over the splits. Still the division into the $k$ subsets involves a degree of randomness. This may be fully excluded when choosing $k=n$. This particular case is referred to as leave-one-out cross-validation (LOOCV). <!-- !split --> How to set up the cross-validation for Ridge and/or Lasso Define a range of interest for the penalty parameter. Divide the data set into training and test set comprising samples ${1, \ldots, n} \setminus i$ and ${ i }$, respectively. 
Fit the linear regression model by means of ridge estimation for each $\lambda$ in the grid using the training set, and the corresponding estimate of the error variance $\boldsymbol{\sigma}_{-i}^2(\lambda)$, as $$ \begin{align} \boldsymbol{\beta}_{-i}(\lambda) & = ( \boldsymbol{X}_{-i, \ast}^{T} \boldsymbol{X}_{-i, \ast} + \lambda \boldsymbol{I}_{pp})^{-1} \boldsymbol{X}_{-i, \ast}^{T} \boldsymbol{y}_{-i} \end{align} $$ Evaluate the prediction performance of these models on the test set by $\log{L[y_i, \boldsymbol{X}_{i, \ast}; \boldsymbol{\beta}_{-i}(\lambda), \boldsymbol{\sigma}_{-i}^2(\lambda)]}$. Or, by the prediction error $|y_i - \boldsymbol{X}_{i, \ast} \boldsymbol{\beta}_{-i}(\lambda)|$, the relative error, the error squared or the R2 score function. Repeat the first three steps such that each sample plays the role of the test set once. Average the prediction performances of the test sets at each grid point of the penalty bias/parameter. This is an estimate of the prediction performance of the model corresponding to this value of the penalty parameter on novel data. It is defined as $$ \begin{align} \frac{1}{n} \sum_{i = 1}^n \log{L[y_i, \mathbf{X}_{i, \ast}; \boldsymbol{\beta}_{-i}(\lambda), \boldsymbol{\sigma}_{-i}^2(\lambda)]}. \end{align} $$ Cross-validation in brief For the various values of $k$ shuffle the dataset randomly. Split the dataset into $k$ groups. For each unique group Step29: The bias-variance tradeoff We will discuss the bias-variance tradeoff in the context of continuous predictions such as regression. However, many of the intuitions and ideas discussed here also carry over to classification tasks. Consider a dataset $\mathcal{L}$ consisting of the data $\mathbf{X}_\mathcal{L}=\{(y_j, \boldsymbol{x}_j), j=0\ldots n-1\}$. Let us assume that the true data is generated from a noisy model $$ \boldsymbol{y}=f(\boldsymbol{x}) + \boldsymbol{\epsilon} $$ where $\epsilon$ is normally distributed with mean zero and variance $\sigma^2$. In our derivation of the ordinary least squares method we then defined an approximation to the function $f$ in terms of the parameters $\boldsymbol{\beta}$ and the design matrix $\boldsymbol{X}$ which embody our model, that is $\boldsymbol{\tilde{y}}=\boldsymbol{X}\boldsymbol{\beta}$. Thereafter we found the parameters $\boldsymbol{\beta}$ by optimizing the mean squared error via the so-called cost function $$ C(\boldsymbol{X},\boldsymbol{\beta}) =\frac{1}{n}\sum_{i=0}^{n-1}(y_i-\tilde{y}_i)^2=\mathbb{E}\left[(\boldsymbol{y}-\boldsymbol{\tilde{y}})^2\right]. $$ We can rewrite this as $$ \mathbb{E}\left[(\boldsymbol{y}-\boldsymbol{\tilde{y}})^2\right]=\frac{1}{n}\sum_i(f_i-\mathbb{E}\left[\boldsymbol{\tilde{y}}\right])^2+\frac{1}{n}\sum_i(\tilde{y}_i-\mathbb{E}\left[\boldsymbol{\tilde{y}}\right])^2+\sigma^2. $$ The first term represents the square of the bias of the learning method, which can be thought of as the error caused by the simplifying assumptions built into the method. The second term represents the variance of the chosen model and finally the last term is the variance of the error $\boldsymbol{\epsilon}$. To derive this equation, we need to recall that the variance of $\boldsymbol{y}$ and $\boldsymbol{\epsilon}$ are both equal to $\sigma^2$. The mean value of $\boldsymbol{\epsilon}$ is by definition equal to zero. Furthermore, the function $f$ is not a stochastic variable, and neither is $\boldsymbol{\tilde{y}}$.
We use a more compact notation in terms of the expectation value $$ \mathbb{E}\left[(\boldsymbol{y}-\boldsymbol{\tilde{y}})^2\right]=\mathbb{E}\left[(\boldsymbol{f}+\boldsymbol{\epsilon}-\boldsymbol{\tilde{y}})^2\right], $$ and adding and subtracting $\mathbb{E}\left[\boldsymbol{\tilde{y}}\right]$ we get $$ \mathbb{E}\left[(\boldsymbol{y}-\boldsymbol{\tilde{y}})^2\right]=\mathbb{E}\left[(\boldsymbol{f}+\boldsymbol{\epsilon}-\boldsymbol{\tilde{y}}+\mathbb{E}\left[\boldsymbol{\tilde{y}}\right]-\mathbb{E}\left[\boldsymbol{\tilde{y}}\right])^2\right], $$ which, using the abovementioned expectation values can be rewritten as $$ \mathbb{E}\left[(\boldsymbol{y}-\boldsymbol{\tilde{y}})^2\right]=\mathbb{E}\left[(\boldsymbol{y}-\mathbb{E}\left[\boldsymbol{\tilde{y}}\right])^2\right]+\mathrm{Var}\left[\boldsymbol{\tilde{y}}\right]+\sigma^2, $$ that is the rewriting in terms of the so-called bias, the variance of the model $\boldsymbol{\tilde{y}}$ and the variance of $\boldsymbol{\epsilon}$. Example code for Bias-Variance tradeoff Step30: Understanding what happens Step32: <!-- !split --> Summing up The bias-variance tradeoff summarizes the fundamental tension in machine learning, particularly supervised learning, between the complexity of a model and the amount of training data needed to train it. Since data is often limited, in practice it is often useful to use a less-complex model with higher bias, that is a model whose asymptotic performance is worse than another model because it is easier to train and less sensitive to sampling noise arising from having a finite-sized training dataset (smaller variance). The above equations tell us that in order to minimize the expected test error, we need to select a statistical learning method that simultaneously achieves low variance and low bias. Note that variance is inherently a nonnegative quantity, and squared bias is also nonnegative. Hence, we see that the expected test MSE can never lie below $Var(\epsilon)$, the irreducible error. What do we mean by the variance and bias of a statistical learning method? The variance refers to the amount by which our model would change if we estimated it using a different training data set. Since the training data are used to fit the statistical learning method, different training data sets will result in a different estimate. But ideally the estimate for our model should not vary too much between training sets. However, if a method has high variance then small changes in the training data can result in large changes in the model. In general, more flexible statistical methods have higher variance. You may also find this recent article of interest. Another Example from Scikit-Learn's Repository Step33: More examples on bootstrap and cross-validation and errors Step34: <!-- !split --> The same example but now with cross-validation Step35: Cross-validation with Ridge Step36: The Ising model The one-dimensional Ising model with nearest neighbor interaction, no external field and a constant coupling constant $J$ is given by <!-- Equation labels as ordinary links --> <div id="_auto2"></div> $$ \begin{equation} H = -J \sum_{k}^L s_k s_{k + 1}, \label{_auto2} \tag{2} \end{equation} $$ where $s_i \in {-1, 1}$ and $s_{N + 1} = s_1$. The number of spins in the system is determined by $L$. For the one-dimensional system there is no phase transition. We will look at a system of $L = 40$ spins with a coupling constant of $J = 1$. To get enough training data we will generate 10000 states with their respective energies. 
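As a hedged sketch of how such a training set could be generated (the actual code belongs to the step below; the seed and the function and variable names here are our own), one can draw random spin configurations and evaluate the nearest-neighbour energy with periodic boundaries:

```python
import numpy as np

np.random.seed(12)

L = 40          # number of spins
n_states = 10000

# Random spin configurations s_i in {-1, +1}
states = np.random.choice([-1, 1], size=(n_states, L))

def ising_energies(states, J=1.0):
    """Energy of each state for the 1D Ising model with periodic boundaries."""
    # np.roll implements s_{k+1} with s_{L+1} = s_1 (the ring)
    return -J * np.sum(states * np.roll(states, -1, axis=1), axis=1)

energies = ising_energies(states)
print(states.shape, energies.shape)
print("First five energies:", energies[:5])
```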
Step37: Here we use ordinary least squares regression to predict the energy for the nearest neighbor one-dimensional Ising model on a ring, i.e., the endpoints wrap around. We will use linear regression to fit a value for the coupling constant to achieve this. Reformulating the problem to suit regression A more general form for the one-dimensional Ising model is <!-- Equation labels as ordinary links --> <div id="_auto3"></div> $$ \begin{equation} H = - \sum_j^L \sum_k^L s_j s_k J_{jk}. \label{_auto3} \tag{3} \end{equation} $$ Here we allow for interactions beyond the nearest neighbors and a state dependent coupling constant. This latter expression can be formulated as a matrix-product <!-- Equation labels as ordinary links --> <div id="_auto4"></div> $$ \begin{equation} \boldsymbol{H} = \boldsymbol{X} J, \label{_auto4} \tag{4} \end{equation} $$ where $X_{jk} = s_j s_k$ and $J$ is a matrix which consists of the elements $-J_{jk}$. This form of writing the energy fits perfectly with the form utilized in linear regression, that is <!-- Equation labels as ordinary links --> <div id="_auto5"></div> $$ \begin{equation} \boldsymbol{y} = \boldsymbol{X}\boldsymbol{\beta} + \boldsymbol{\epsilon}, \label{_auto5} \tag{5} \end{equation} $$ We split the data in training and test data as discussed in the previous example Step38: Linear regression In the ordinary least squares method we choose the cost function <!-- Equation labels as ordinary links --> <div id="_auto6"></div> $$ \begin{equation} C(\boldsymbol{X}, \boldsymbol{\beta})= \frac{1}{n}\left{(\boldsymbol{X}\boldsymbol{\beta} - \boldsymbol{y})^T(\boldsymbol{X}\boldsymbol{\beta} - \boldsymbol{y})\right}. \label{_auto6} \tag{6} \end{equation} $$ We then find the extremal point of $C$ by taking the derivative with respect to $\boldsymbol{\beta}$ as discussed above. This yields the expression for $\boldsymbol{\beta}$ to be $$ \boldsymbol{\beta} = \frac{\boldsymbol{X}^T \boldsymbol{y}}{\boldsymbol{X}^T \boldsymbol{X}}, $$ which immediately imposes some requirements on $\boldsymbol{X}$ as there must exist an inverse of $\boldsymbol{X}^T \boldsymbol{X}$. If the expression we are modeling contains an intercept, i.e., a constant term, we must make sure that the first column of $\boldsymbol{X}$ consists of $1$. We do this here Step39: Singular Value decomposition Doing the inversion directly turns out to be a bad idea since the matrix $\boldsymbol{X}^T\boldsymbol{X}$ is singular. An alternative approach is to use the singular value decomposition. Using the definition of the Moore-Penrose pseudoinverse we can write the equation for $\boldsymbol{\beta}$ as $$ \boldsymbol{\beta} = \boldsymbol{X}^{+}\boldsymbol{y}, $$ where the pseudoinverse of $\boldsymbol{X}$ is given by $$ \boldsymbol{X}^{+} = \frac{\boldsymbol{X}^T}{\boldsymbol{X}^T\boldsymbol{X}}. $$ Using singular value decomposition we can decompose the matrix $\boldsymbol{X} = \boldsymbol{U}\boldsymbol{\Sigma} \boldsymbol{V}^T$, where $\boldsymbol{U}$ and $\boldsymbol{V}$ are orthogonal(unitary) matrices and $\boldsymbol{\Sigma}$ contains the singular values (more details below). where $X^{+} = V\Sigma^{+} U^T$. This reduces the equation for $\omega$ to <!-- Equation labels as ordinary links --> <div id="_auto7"></div> $$ \begin{equation} \boldsymbol{\beta} = \boldsymbol{V}\boldsymbol{\Sigma}^{+} \boldsymbol{U}^T \boldsymbol{y}. 
\label{_auto7} \tag{7} \end{equation} $$ Note that solving this equation by actually doing the pseudoinverse (which is what we will do) is not a good idea as this operation scales as $\mathcal{O}(n^3)$, where $n$ is the number of elements in a general matrix. Instead, doing $QR$-factorization and solving the linear system as an equation would reduce this down to $\mathcal{O}(n^2)$ operations. Step40: When extracting the $J$-matrix we need to make sure that we remove the intercept, as is done here Step41: A way of looking at the coefficients in $J$ is to plot the matrices as images. Step42: It is interesting to note that OLS considers both $J_{j, j + 1} = -0.5$ and $J_{j, j - 1} = -0.5$ as valid matrix elements for $J$. In our discussion below on hyperparameters and Ridge and Lasso regression we will see that this problem can be removed, partly, and only with Lasso regression. In this case our matrix inversion was actually possible. The obvious question now is what is the mathematics behind the SVD? The one-dimensional Ising model Let us bring back the Ising model again, but now with an additional focus on Ridge and Lasso regression as well. We repeat some of the basic parts of the Ising model and the setup of the training and test data. The one-dimensional Ising model with nearest neighbor interaction, no external field and a constant coupling constant $J$ is given by <!-- Equation labels as ordinary links --> <div id="_auto8"></div> $$ \begin{equation} H = -J \sum_{k}^L s_k s_{k + 1}, \label{_auto8} \tag{8} \end{equation} $$ where $s_i \in \{-1, 1\}$ and $s_{N + 1} = s_1$. The number of spins in the system is determined by $L$. For the one-dimensional system there is no phase transition. We will look at a system of $L = 40$ spins with a coupling constant of $J = 1$. To get enough training data we will generate 10000 states with their respective energies. Step43: A more general form for the one-dimensional Ising model is <!-- Equation labels as ordinary links --> <div id="_auto9"></div> $$ \begin{equation} H = - \sum_j^L \sum_k^L s_j s_k J_{jk}. \label{_auto9} \tag{9} \end{equation} $$ Here we allow for interactions beyond the nearest neighbors and a more adaptive coupling matrix. This latter expression can be formulated as a matrix-product of the form <!-- Equation labels as ordinary links --> <div id="_auto10"></div> $$ \begin{equation} H = X J, \label{_auto10} \tag{10} \end{equation} $$ where $X_{jk} = s_j s_k$ and $J$ is the matrix consisting of the elements $-J_{jk}$. This form of writing the energy fits perfectly with the form utilized in linear regression, viz. <!-- Equation labels as ordinary links --> <div id="_auto11"></div> $$ \begin{equation} \boldsymbol{y} = \boldsymbol{X}\boldsymbol{\beta} + \boldsymbol{\epsilon}. \label{_auto11} \tag{11} \end{equation} $$ We organize the data as we did above Step44: We will do all fitting with Scikit-Learn, Step45: When extracting the $J$-matrix we make sure to remove the intercept Step46: And then we plot the results Step47: The results agree perfectly with our previous discussion where we used our own code. Ridge regression Having explored the ordinary least squares we move on to ridge regression. In ridge regression we include a regularizer. This involves a new cost function which leads to a new estimate for the weights $\boldsymbol{\beta}$. This results in a penalized regression problem. The cost function is given by <!-- Equation labels as ordinary links --> <div id="_auto12"></div> $$ \begin{equation} C(\boldsymbol{X}, \boldsymbol{\beta}; \lambda) = (\boldsymbol{X}\boldsymbol{\beta} - \boldsymbol{y})^T(\boldsymbol{X}\boldsymbol{\beta} - \boldsymbol{y}) + \lambda \boldsymbol{\beta}^T\boldsymbol{\beta}. \label{_auto12} \tag{12} \end{equation} $$
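A minimal, hedged sketch of what such a ridge fit could look like with Scikit-Learn, assuming the pair-product features $X_{jk}=s_js_k$ and energies generated as in the sketch above (this is our own illustration with a smaller sample to keep it light, not the step code itself):

```python
import numpy as np
import sklearn.linear_model as skl

np.random.seed(12)
L, n_states = 40, 5000
states = np.random.choice([-1, 1], size=(n_states, L))
energies = -np.sum(states * np.roll(states, -1, axis=1), axis=1)

# Pair-product features X_{jk} = s_j s_k, flattened to shape (n_states, L*L)
X_pairs = np.einsum('ij,ik->ijk', states, states).reshape(n_states, -1)

# Ridge regression with a modest penalty; no intercept since H = X J has no constant term
ridge = skl.Ridge(alpha=0.1, fit_intercept=False).fit(X_pairs, energies)
J_matrix = ridge.coef_.reshape(L, L)

# The nearest-neighbour coupling should be split between the two off-diagonals,
# roughly -0.5 each, as discussed in the text
print("J[0, 1] =", J_matrix[0, 1], " J[1, 0] =", J_matrix[1, 0])
```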
Step48: LASSO regression In the Least Absolute Shrinkage and Selection Operator (LASSO)-method we get a third cost function. <!-- Equation labels as ordinary links --> <div id="_auto13"></div> $$ \begin{equation} C(\boldsymbol{X}, \boldsymbol{\beta}; \lambda) = (\boldsymbol{X}\boldsymbol{\beta} - \boldsymbol{y})^T(\boldsymbol{X}\boldsymbol{\beta} - \boldsymbol{y}) + \lambda \sqrt{\boldsymbol{\beta}^T\boldsymbol{\beta}}. \label{_auto13} \tag{13} \end{equation} $$ Finding the extremal point of this cost function is not so straight-forward as in least squares and ridge. We will therefore rely solely on the function Lasso from Scikit-Learn. Step49: It is quite striking how LASSO breaks the symmetry of the coupling constant as opposed to ridge and OLS. We get a sparse solution with $J_{j, j + 1} = -1$. Performance as function of the regularization parameter We see how the different models perform for a different set of values for $\lambda$. Step50: We see that LASSO reaches a good solution for low values of $\lambda$, but will "wither" when we increase $\lambda$ too much. Ridge is more stable over a larger range of values for $\lambda$, but eventually also fades away. Finding the optimal value of $\lambda$ To determine which value of $\lambda$ is best we plot the accuracy of the models when predicting the training and the testing set. We expect the accuracy of the training set to be quite good, but if the accuracy of the testing set is much lower this tells us that we might be subject to an overfit model. The ideal scenario is an accuracy on the testing set that is close to the accuracy of the training set.
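As an illustration of such a scan (with invented synthetic data and an invented grid of penalties, not the step code above), one could compare train and test $R^2$ scores for Ridge and LASSO over a range of $\lambda$ values:

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso
from sklearn.model_selection import train_test_split

np.random.seed(3155)
n = 200
x = np.random.rand(n)
y = 1.0 + 5.0 * x**2 + 0.2 * np.random.randn(n)

# Simple polynomial design matrix of degree 5
X = np.column_stack([x**p for p in range(6)])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Scan a grid of penalties and compare train/test scores
lambdas = np.logspace(-4, 2, 7)
for lmb in lambdas:
    ridge = Ridge(alpha=lmb).fit(X_train, y_train)
    lasso = Lasso(alpha=lmb, max_iter=100000).fit(X_train, y_train)
    print(f"lambda = {lmb:8.1e}  "
          f"Ridge R2 train/test: {ridge.score(X_train, y_train):.3f}/{ridge.score(X_test, y_test):.3f}  "
          f"Lasso R2 train/test: {lasso.score(X_train, y_train):.3f}/{lasso.score(X_test, y_test):.3f}")
```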
Python Code: %matplotlib inline # Common imports import numpy as np import pandas as pd import matplotlib.pyplot as plt from IPython.display import display import os # Where to save the figures and data files PROJECT_ROOT_DIR = "Results" FIGURE_ID = "Results/FigureFiles" DATA_ID = "DataFiles/" if not os.path.exists(PROJECT_ROOT_DIR): os.mkdir(PROJECT_ROOT_DIR) if not os.path.exists(FIGURE_ID): os.makedirs(FIGURE_ID) if not os.path.exists(DATA_ID): os.makedirs(DATA_ID) def image_path(fig_id): return os.path.join(FIGURE_ID, fig_id) def data_path(dat_id): return os.path.join(DATA_ID, dat_id) def save_fig(fig_id): plt.savefig(image_path(fig_id) + ".png", format='png') infile = open(data_path("MassEval2016.dat"),'r') # Read the experimental data with Pandas Masses = pd.read_fwf(infile, usecols=(2,3,4,6,11), names=('N', 'Z', 'A', 'Element', 'Ebinding'), widths=(1,3,5,5,5,1,3,4,1,13,11,11,9,1,2,11,9,1,3,1,12,11,1), header=39, index_col=False) # Extrapolated values are indicated by '#' in place of the decimal place, so # the Ebinding column won't be numeric. Coerce to float and drop these entries. Masses['Ebinding'] = pd.to_numeric(Masses['Ebinding'], errors='coerce') Masses = Masses.dropna() # Convert from keV to MeV. Masses['Ebinding'] /= 1000 # Group the DataFrame by nucleon number, A. Masses = Masses.groupby('A') # Find the rows of the grouped DataFrame with the maximum binding energy. Masses = Masses.apply(lambda t: t[t.Ebinding==t.Ebinding.max()]) A = Masses['A'] Z = Masses['Z'] N = Masses['N'] Element = Masses['Element'] Energies = Masses['Ebinding'] # Now we set up the design matrix X X = np.zeros((len(A),5)) X[:,0] = 1 X[:,1] = A X[:,2] = A**(2.0/3.0) X[:,3] = A**(-1.0/3.0) X[:,4] = A**(-1.0) # Then nice printout using pandas DesignMatrix = pd.DataFrame(X) DesignMatrix.index = A DesignMatrix.columns = ['1', 'A', 'A^(2/3)', 'A^(-1/3)', '1/A'] display(DesignMatrix) Explanation: <!-- dom:TITLE: Data Analysis and Machine Learning: Linear Regression and more Advanced Regression Analysis --> Data Analysis and Machine Learning: Linear Regression and more Advanced Regression Analysis <!-- dom:AUTHOR: Morten Hjorth-Jensen at Department of Physics, University of Oslo & Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University --> <!-- Author: --> Morten Hjorth-Jensen, Department of Physics, University of Oslo and Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University Date: Sep 11, 2020 Copyright 1999-2020, Morten Hjorth-Jensen. Released under CC Attribution-NonCommercial 4.0 license Why Linear Regression (aka Ordinary Least Squares and family) Fitting a continuous function with linear parameterization in terms of the parameters $\boldsymbol{\beta}$. * Method of choice for fitting a continuous function! Gives an excellent introduction to central Machine Learning features with understandable pedagogical links to other methods like Neural Networks, Support Vector Machines etc Analytical expression for the fitting parameters $\boldsymbol{\beta}$ Analytical expressions for statistical propertiers like mean values, variances, confidence intervals and more Analytical relation with probabilistic interpretations Easy to introduce basic concepts like bias-variance tradeoff, cross-validation, resampling and regularization techniques and many other ML topics Easy to code! 
And links well with classification problems and logistic regression and neural networks Allows for easy hands-on understanding of gradient descent methods and many more features For more discussions of Ridge and Lasso regression, Wessel van Wieringen's article is highly recommended. Similarly, Mehta et al's article is also recommended. Regression analysis, overarching aims Regression modeling deals with the description of the sampling distribution of a given random variable $y$ and how it varies as function of another variable or a set of such variables $\boldsymbol{x} =[x_0, x_1,\dots, x_{n-1}]^T$. The first variable is called the dependent, the outcome or the response variable while the set of variables $\boldsymbol{x}$ is called the independent variable, or the predictor variable or the explanatory variable. A regression model aims at finding a likelihood function $p(\boldsymbol{y}\vert \boldsymbol{x})$, that is the conditional distribution for $\boldsymbol{y}$ with a given $\boldsymbol{x}$. The estimation of $p(\boldsymbol{y}\vert \boldsymbol{x})$ is made using a data set with * $n$ cases $i = 0, 1, 2, \dots, n-1$ Response (target, dependent or outcome) variable $y_i$ with $i = 0, 1, 2, \dots, n-1$ $p$ so-called explanatory (independent or predictor) variables $\boldsymbol{x}i=[x{i0}, x_{i1}, \dots, x_{ip-1}]$ with $i = 0, 1, 2, \dots, n-1$ and explanatory variables running from $0$ to $p-1$. See below for more explicit examples. The goal of the regression analysis is to extract/exploit relationship between $\boldsymbol{y}$ and $\boldsymbol{x}$ in or to infer causal dependencies, approximations to the likelihood functions, functional relationships and to make predictions, making fits and many other things. Regression analysis, overarching aims II Consider an experiment in which $p$ characteristics of $n$ samples are measured. The data from this experiment, for various explanatory variables $p$ are normally represented by a matrix $\mathbf{X}$. The matrix $\mathbf{X}$ is called the design matrix. Additional information of the samples is available in the form of $\boldsymbol{y}$ (also as above). The variable $\boldsymbol{y}$ is generally referred to as the response variable. The aim of regression analysis is to explain $\boldsymbol{y}$ in terms of $\boldsymbol{X}$ through a functional relationship like $y_i = f(\mathbf{X}{i,\ast})$. When no prior knowledge on the form of $f(\cdot)$ is available, it is common to assume a linear relationship between $\boldsymbol{X}$ and $\boldsymbol{y}$. This assumption gives rise to the linear regression model where $\boldsymbol{\beta} = [\beta_0, \ldots, \beta{p-1}]^{T}$ are the regression parameters. Linear regression gives us a set of analytical equations for the parameters $\beta_j$. Examples In order to understand the relation among the predictors $p$, the set of data $n$ and the target (outcome, output etc) $\boldsymbol{y}$, consider the model we discussed for describing nuclear binding energies. There we assumed that we could parametrize the data using a polynomial approximation based on the liquid drop model. Assuming $$ BE(A) = a_0+a_1A+a_2A^{2/3}+a_3A^{-1/3}+a_4A^{-1}, $$ we have five predictors, that is the intercept, the $A$ dependent term, the $A^{2/3}$ term and the $A^{-1/3}$ and $A^{-1}$ terms. This gives $p=0,1,2,3,4$. Furthermore we have $n$ entries for each predictor. It means that our design matrix is a $p\times n$ matrix $\boldsymbol{X}$. Here the predictors are based on a model we have made. 
A popular data set which is widely encountered in ML applications is the so-called credit card default data from Taiwan. The data set contains data on $n=30000$ credit card holders with predictors like gender, marital status, age, profession, education, etc. In total there are $24$ such predictors or attributes leading to a design matrix of dimensionality $24 \times 30000$. This is however a classification problem and we will come back to it when we discuss Logistic Regression. General linear models Before we proceed let us study a case from linear algebra where we aim at fitting a set of data $\boldsymbol{y}=[y_0,y_1,\dots,y_{n-1}]$. We could think of these data as a result of an experiment or a complicated numerical experiment. These data are functions of a series of variables $\boldsymbol{x}=[x_0,x_1,\dots,x_{n-1}]$, that is $y_i = y(x_i)$ with $i=0,1,2,\dots,n-1$. The variables $x_i$ could represent physical quantities like time, temperature, position etc. We assume that $y(x)$ is a smooth function. Since obtaining these data points may not be trivial, we want to use these data to fit a function which can allow us to make predictions for values of $y$ which are not in the present set. The perhaps simplest approach is to assume we can parametrize our function in terms of a polynomial of degree $n-1$ with $n$ points, that is $$ y=y(x) \rightarrow y(x_i)=\tilde{y}i+\epsilon_i=\sum{j=0}^{n-1} \beta_j x_i^j+\epsilon_i, $$ where $\epsilon_i$ is the error in our approximation. Rewriting the fitting procedure as a linear algebra problem For every set of values $y_i,x_i$ we have thus the corresponding set of equations $$ \begin{align} y_0&=\beta_0+\beta_1x_0^1+\beta_2x_0^2+\dots+\beta_{n-1}x_0^{n-1}+\epsilon_0\ y_1&=\beta_0+\beta_1x_1^1+\beta_2x_1^2+\dots+\beta_{n-1}x_1^{n-1}+\epsilon_1\ y_2&=\beta_0+\beta_1x_2^1+\beta_2x_2^2+\dots+\beta_{n-1}x_2^{n-1}+\epsilon_2\ \dots & \dots \ y_{n-1}&=\beta_0+\beta_1x_{n-1}^1+\beta_2x_{n-1}^2+\dots+\beta_{n-1}x_{n-1}^{n-1}+\epsilon_{n-1}.\ \end{align} $$ Rewriting the fitting procedure as a linear algebra problem, more details Defining the vectors $$ \boldsymbol{y} = [y_0,y_1, y_2,\dots, y_{n-1}]^T, $$ and $$ \boldsymbol{\beta} = [\beta_0,\beta_1, \beta_2,\dots, \beta_{n-1}]^T, $$ and $$ \boldsymbol{\epsilon} = [\epsilon_0,\epsilon_1, \epsilon_2,\dots, \epsilon_{n-1}]^T, $$ and the design matrix $$ \boldsymbol{X}= \begin{bmatrix} 1& x_{0}^1 &x_{0}^2& \dots & \dots &x_{0}^{n-1}\ 1& x_{1}^1 &x_{1}^2& \dots & \dots &x_{1}^{n-1}\ 1& x_{2}^1 &x_{2}^2& \dots & \dots &x_{2}^{n-1}\ \dots& \dots &\dots& \dots & \dots &\dots\ 1& x_{n-1}^1 &x_{n-1}^2& \dots & \dots &x_{n-1}^{n-1}\ \end{bmatrix} $$ we can rewrite our equations as $$ \boldsymbol{y} = \boldsymbol{X}\boldsymbol{\beta}+\boldsymbol{\epsilon}. $$ The above design matrix is called a Vandermonde matrix. Generalizing the fitting procedure as a linear algebra problem We are obviously not limited to the above polynomial expansions. We could replace the various powers of $x$ with elements of Fourier series or instead of $x_i^j$ we could have $\cos{(j x_i)}$ or $\sin{(j x_i)}$, or time series or other orthogonal functions. 
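As a small illustration of the polynomial design matrix just described (with toy data of our own choosing), the Vandermonde matrix can be built explicitly or with NumPy's helper:

```python
import numpy as np

# Five data points and a polynomial of degree n-1 = 4
x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
n = len(x)

# Column j holds x_i^j, exactly the Vandermonde matrix defined above
X = np.zeros((n, n))
for j in range(n):
    X[:, j] = x**j

# NumPy provides the same matrix directly (powers increasing from left to right)
X_np = np.vander(x, N=n, increasing=True)
print(np.allclose(X, X_np))   # True
print(X)
```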
For every set of values $y_i,x_i$ we can then generalize the equations to $$ \begin{align} y_0&=\beta_0x_{00}+\beta_1x_{01}+\beta_2x_{02}+\dots+\beta_{n-1}x_{0n-1}+\epsilon_0\ y_1&=\beta_0x_{10}+\beta_1x_{11}+\beta_2x_{12}+\dots+\beta_{n-1}x_{1n-1}+\epsilon_1\ y_2&=\beta_0x_{20}+\beta_1x_{21}+\beta_2x_{22}+\dots+\beta_{n-1}x_{2n-1}+\epsilon_2\ \dots & \dots \ y_{i}&=\beta_0x_{i0}+\beta_1x_{i1}+\beta_2x_{i2}+\dots+\beta_{n-1}x_{in-1}+\epsilon_i\ \dots & \dots \ y_{n-1}&=\beta_0x_{n-1,0}+\beta_1x_{n-1,2}+\beta_2x_{n-1,2}+\dots+\beta_{n-1}x_{n-1,n-1}+\epsilon_{n-1}.\ \end{align} $$ Note that we have $p=n$ here. The matrix is symmetric. This is generally not the case! Generalizing the fitting procedure as a linear algebra problem We redefine in turn the matrix $\boldsymbol{X}$ as $$ \boldsymbol{X}= \begin{bmatrix} x_{00}& x_{01} &x_{02}& \dots & \dots &x_{0,n-1}\ x_{10}& x_{11} &x_{12}& \dots & \dots &x_{1,n-1}\ x_{20}& x_{21} &x_{22}& \dots & \dots &x_{2,n-1}\ \dots& \dots &\dots& \dots & \dots &\dots\ x_{n-1,0}& x_{n-1,1} &x_{n-1,2}& \dots & \dots &x_{n-1,n-1}\ \end{bmatrix} $$ and without loss of generality we rewrite again our equations as $$ \boldsymbol{y} = \boldsymbol{X}\boldsymbol{\beta}+\boldsymbol{\epsilon}. $$ The left-hand side of this equation is kwown. Our error vector $\boldsymbol{\epsilon}$ and the parameter vector $\boldsymbol{\beta}$ are our unknow quantities. How can we obtain the optimal set of $\beta_i$ values? Optimizing our parameters We have defined the matrix $\boldsymbol{X}$ via the equations $$ \begin{align} y_0&=\beta_0x_{00}+\beta_1x_{01}+\beta_2x_{02}+\dots+\beta_{n-1}x_{0n-1}+\epsilon_0\ y_1&=\beta_0x_{10}+\beta_1x_{11}+\beta_2x_{12}+\dots+\beta_{n-1}x_{1n-1}+\epsilon_1\ y_2&=\beta_0x_{20}+\beta_1x_{21}+\beta_2x_{22}+\dots+\beta_{n-1}x_{2n-1}+\epsilon_1\ \dots & \dots \ y_{i}&=\beta_0x_{i0}+\beta_1x_{i1}+\beta_2x_{i2}+\dots+\beta_{n-1}x_{in-1}+\epsilon_1\ \dots & \dots \ y_{n-1}&=\beta_0x_{n-1,0}+\beta_1x_{n-1,2}+\beta_2x_{n-1,2}+\dots+\beta_{n-1}x_{n-1,n-1}+\epsilon_{n-1}.\ \end{align} $$ As we noted above, we stayed with a system with the design matrix $\boldsymbol{X}\in {\mathbb{R}}^{n\times n}$, that is we have $p=n$. For reasons to come later (algorithmic arguments) we will hereafter define our matrix as $\boldsymbol{X}\in {\mathbb{R}}^{n\times p}$, with the predictors refering to the column numbers and the entries $n$ being the row elements. Our model for the nuclear binding energies In our introductory notes we looked at the so-called liquid drop model. Let us remind ourselves about what we did by looking at the code. We restate the parts of the code we are most interested in. End of explanation # matrix inversion to find beta beta = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(Energies) # and then make the prediction ytilde = X @ beta Explanation: With $\boldsymbol{\beta}\in {\mathbb{R}}^{p\times 1}$, it means that we will hereafter write our equations for the approximation as $$ \boldsymbol{\tilde{y}}= \boldsymbol{X}\boldsymbol{\beta}, $$ throughout these lectures. 
Optimizing our parameters, more details With the above we use the design matrix to define the approximation $\boldsymbol{\tilde{y}}$ via the unknown quantity $\boldsymbol{\beta}$ as $$ \boldsymbol{\tilde{y}}= \boldsymbol{X}\boldsymbol{\beta}, $$ and in order to find the optimal parameters $\beta_i$ instead of solving the above linear algebra problem, we define a function which gives a measure of the spread between the values $y_i$ (which represent hopefully the exact values) and the parameterized values $\tilde{y}_i$, namely $$ C(\boldsymbol{\beta})=\frac{1}{n}\sum_{i=0}^{n-1}\left(y_i-\tilde{y}_i\right)^2=\frac{1}{n}\left{\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)^T\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)\right}, $$ or using the matrix $\boldsymbol{X}$ and in a more compact matrix-vector notation as $$ C(\boldsymbol{\beta})=\frac{1}{n}\left{\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)^T\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)\right}. $$ This function is one possible way to define the so-called cost function. It is also common to define the function $C$ as $$ C(\boldsymbol{\beta})=\frac{1}{2n}\sum_{i=0}^{n-1}\left(y_i-\tilde{y}_i\right)^2, $$ since when taking the first derivative with respect to the unknown parameters $\beta$, the factor of $2$ cancels out. Interpretations and optimizing our parameters The function $$ C(\boldsymbol{\beta})=\frac{1}{n}\left{\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)^T\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)\right}, $$ can be linked to the variance of the quantity $y_i$ if we interpret the latter as the mean value. When linking (see the discussion below) with the maximum likelihood approach below, we will indeed interpret $y_i$ as a mean value $$ y_{i}=\langle y_i \rangle = \beta_0x_{i,0}+\beta_1x_{i,1}+\beta_2x_{i,2}+\dots+\beta_{n-1}x_{i,n-1}+\epsilon_i, $$ where $\langle y_i \rangle$ is the mean value. Keep in mind also that till now we have treated $y_i$ as the exact value. Normally, the response (dependent or outcome) variable $y_i$ the outcome of a numerical experiment or another type of experiment and is thus only an approximation to the true value. It is then always accompanied by an error estimate, often limited to a statistical error estimate given by the standard deviation discussed earlier. In the discussion here we will treat $y_i$ as our exact value for the response variable. In order to find the parameters $\beta_i$ we will then minimize the spread of $C(\boldsymbol{\beta})$, that is we are going to solve the problem $$ {\displaystyle \min_{\boldsymbol{\beta}\in {\mathbb{R}}^{p}}}\frac{1}{n}\left{\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)^T\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)\right}. $$ In practical terms it means we will require $$ \frac{\partial C(\boldsymbol{\beta})}{\partial \beta_j} = \frac{\partial }{\partial \beta_j}\left[ \frac{1}{n}\sum_{i=0}^{n-1}\left(y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}\right)^2\right]=0, $$ which results in $$ \frac{\partial C(\boldsymbol{\beta})}{\partial \beta_j} = -\frac{2}{n}\left[ \sum_{i=0}^{n-1}x_{ij}\left(y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}\right)\right]=0, $$ or in a matrix-vector form as $$ \frac{\partial C(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{X}^T\left( \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right). 
$$ Interpretations and optimizing our parameters We can rewrite $$ \frac{\partial C(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{X}^T\left( \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right), $$ as $$ \boldsymbol{X}^T\boldsymbol{y} = \boldsymbol{X}^T\boldsymbol{X}\boldsymbol{\beta}, $$ and if the matrix $\boldsymbol{X}^T\boldsymbol{X}$ is invertible we have the solution $$ \boldsymbol{\beta} =\left(\boldsymbol{X}^T\boldsymbol{X}\right)^{-1}\boldsymbol{X}^T\boldsymbol{y}. $$ We note also that since our design matrix is defined as $\boldsymbol{X}\in {\mathbb{R}}^{n\times p}$, the product $\boldsymbol{X}^T\boldsymbol{X} \in {\mathbb{R}}^{p\times p}$. In the above case we have that $p \ll n$, in our case $p=5$, meaning that we end up with inverting a small $5\times 5$ matrix. This is a rather common situation, in many cases we end up with low-dimensional matrices to invert. The methods discussed here, as well as many other supervised learning algorithms like classification with logistic regression or support vector machines, exhibit dimensionalities which allow for the usage of direct linear algebra methods such as LU decomposition or Singular Value Decomposition (SVD) for finding the inverse of the matrix $\boldsymbol{X}^T\boldsymbol{X}$. Small question: Do you think the example we have at hand here (the nuclear binding energies) can lead to problems in inverting the matrix $\boldsymbol{X}^T\boldsymbol{X}$? What kind of problems can we expect? Some useful matrix and vector expressions The following matrix and vector relations will be useful here and for the rest of the course. Vectors are always written as boldfaced lower case letters and matrices as upper case boldfaced letters. $$ \frac{\partial (\boldsymbol{b}^T\boldsymbol{a})}{\partial \boldsymbol{a}} = \boldsymbol{b}, $$ $$ \frac{\partial (\boldsymbol{a}^T\boldsymbol{A}\boldsymbol{a})}{\partial \boldsymbol{a}} = (\boldsymbol{A}+\boldsymbol{A}^T)\boldsymbol{a}, $$ $$ \frac{\partial \mathrm{tr}(\boldsymbol{B}\boldsymbol{A})}{\partial \boldsymbol{A}} = \boldsymbol{B}^T, $$ $$ \frac{\partial \log{\vert\boldsymbol{A}\vert}}{\partial \boldsymbol{A}} = (\boldsymbol{A}^{-1})^T. $$ Interpretations and optimizing our parameters The residuals $\boldsymbol{\epsilon}$ are in turn given by $$ \boldsymbol{\epsilon} = \boldsymbol{y}-\boldsymbol{\tilde{y}} = \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}, $$ and with $$ \boldsymbol{X}^T\left( \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)= 0, $$ we have $$ \boldsymbol{X}^T\boldsymbol{\epsilon}=\boldsymbol{X}^T\left( \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)= 0, $$ meaning that the solution for $\boldsymbol{\beta}$ is the one which minimizes the residuals. Later we will link this with the maximum likelihood approach. Let us now return to our nuclear binding energies and simply code the above equations. Own code for Ordinary Least Squares It is rather straightforward to implement the matrix inversion and obtain the parameters $\boldsymbol{\beta}$. After having defined the matrix $\boldsymbol{X}$ we simply need to write End of explanation
fit = np.linalg.lstsq(X, Energies, rcond=None)[0] ytildenp = np.dot(fit,X.T) Explanation: Alternatively, you can use the least squares functionality in Numpy as End of explanation
Masses['Eapprox'] = ytilde # Generate a plot comparing the experimental with the fitted values.
fig, ax = plt.subplots() ax.set_xlabel(r'$A = N + Z$') ax.set_ylabel(r'$E_\mathrm{bind}\,/\mathrm{MeV}$') ax.plot(Masses['A'], Masses['Ebinding'], alpha=0.7, lw=2, label='Ame2016') ax.plot(Masses['A'], Masses['Eapprox'], alpha=0.7, lw=2, c='m', label='Fit') ax.legend() save_fig("Masses2016OLS") plt.show() Explanation: And finally we plot our fit with and compare with data End of explanation def R2(y_data, y_model): return 1 - np.sum((y_data - y_model) ** 2) / np.sum((y_data - np.mean(y_data)) ** 2) Explanation: Adding error analysis and training set up We can easily test our fit by computing the $R2$ score that we discussed in connection with the functionality of Scikit-Learn in the introductory slides. Since we are not using Scikit-Learn here we can define our own $R2$ function as End of explanation print(R2(Energies,ytilde)) Explanation: and we would be using it as End of explanation def MSE(y_data,y_model): n = np.size(y_model) return np.sum((y_data-y_model)**2)/n print(MSE(Energies,ytilde)) Explanation: We can easily add our MSE score as End of explanation def RelativeError(y_data,y_model): return abs((y_data-y_model)/y_data) print(RelativeError(Energies, ytilde)) Explanation: and finally the relative error as End of explanation # Common imports import os import numpy as np import pandas as pd import matplotlib.pyplot as plt import matplotlib.pyplot as plt import sklearn.linear_model as skl from sklearn.metrics import mean_squared_error, r2_score, mean_absolute_error # Where to save the figures and data files PROJECT_ROOT_DIR = "Results" FIGURE_ID = "Results/FigureFiles" DATA_ID = "DataFiles/" if not os.path.exists(PROJECT_ROOT_DIR): os.mkdir(PROJECT_ROOT_DIR) if not os.path.exists(FIGURE_ID): os.makedirs(FIGURE_ID) if not os.path.exists(DATA_ID): os.makedirs(DATA_ID) def image_path(fig_id): return os.path.join(FIGURE_ID, fig_id) def data_path(dat_id): return os.path.join(DATA_ID, dat_id) def save_fig(fig_id): plt.savefig(image_path(fig_id) + ".png", format='png') infile = open(data_path("EoS.csv"),'r') # Read the EoS data as csv file and organize the data into two arrays with density and energies EoS = pd.read_csv(infile, names=('Density', 'Energy')) EoS['Energy'] = pd.to_numeric(EoS['Energy'], errors='coerce') EoS = EoS.dropna() Energies = EoS['Energy'] Density = EoS['Density'] # The design matrix now as function of various polytrops X = np.zeros((len(Density),4)) X[:,3] = Density**(4.0/3.0) X[:,2] = Density X[:,1] = Density**(2.0/3.0) X[:,0] = 1 # We use now Scikit-Learn's linear regressor and ridge regressor # OLS part clf = skl.LinearRegression().fit(X, Energies) ytilde = clf.predict(X) EoS['Eols'] = ytilde # The mean squared error print("Mean squared error: %.2f" % mean_squared_error(Energies, ytilde)) # Explained variance score: 1 is perfect prediction print('Variance score: %.2f' % r2_score(Energies, ytilde)) # Mean absolute error print('Mean absolute error: %.2f' % mean_absolute_error(Energies, ytilde)) print(clf.coef_, clf.intercept_) # The Ridge regression with a hyperparameter lambda = 0.1 _lambda = 0.1 clf_ridge = skl.Ridge(alpha=_lambda).fit(X, Energies) yridge = clf_ridge.predict(X) EoS['Eridge'] = yridge # The mean squared error print("Mean squared error: %.2f" % mean_squared_error(Energies, yridge)) # Explained variance score: 1 is perfect prediction print('Variance score: %.2f' % r2_score(Energies, yridge)) # Mean absolute error print('Mean absolute error: %.2f' % mean_absolute_error(Energies, yridge)) print(clf_ridge.coef_, clf_ridge.intercept_) fig, ax = 
plt.subplots() ax.set_xlabel(r'$\rho[\mathrm{fm}^{-3}]$') ax.set_ylabel(r'Energy per particle') ax.plot(EoS['Density'], EoS['Energy'], alpha=0.7, lw=2, label='Theoretical data') ax.plot(EoS['Density'], EoS['Eols'], alpha=0.7, lw=2, c='m', label='OLS') ax.plot(EoS['Density'], EoS['Eridge'], alpha=0.7, lw=2, c='g', label='Ridge $\lambda = 0.1$') ax.legend() save_fig("EoSfitting") plt.show() Explanation: The $\chi^2$ function Normally, the response (dependent or outcome) variable $y_i$ is the outcome of a numerical experiment or another type of experiment and is thus only an approximation to the true value. It is then always accompanied by an error estimate, often limited to a statistical error estimate given by the standard deviation discussed earlier. In the discussion here we will treat $y_i$ as our exact value for the response variable. Introducing the standard deviation $\sigma_i$ for each measurement $y_i$, we define now the $\chi^2$ function (omitting the $1/n$ term) as $$ \chi^2(\boldsymbol{\beta})=\frac{1}{n}\sum_{i=0}^{n-1}\frac{\left(y_i-\tilde{y}_i\right)^2}{\sigma_i^2}=\frac{1}{n}\left{\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)^T\frac{1}{\boldsymbol{\Sigma^2}}\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)\right}, $$ where the matrix $\boldsymbol{\Sigma}$ is a diagonal matrix with $\sigma_i$ as matrix elements. The $\chi^2$ function In order to find the parameters $\beta_i$ we will then minimize the spread of $\chi^2(\boldsymbol{\beta})$ by requiring $$ \frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_j} = \frac{\partial }{\partial \beta_j}\left[ \frac{1}{n}\sum_{i=0}^{n-1}\left(\frac{y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}}{\sigma_i}\right)^2\right]=0, $$ which results in $$ \frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_j} = -\frac{2}{n}\left[ \sum_{i=0}^{n-1}\frac{x_{ij}}{\sigma_i}\left(\frac{y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}}{\sigma_i}\right)\right]=0, $$ or in a matrix-vector form as $$ \frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{A}^T\left( \boldsymbol{b}-\boldsymbol{A}\boldsymbol{\beta}\right). $$ where we have defined the matrix $\boldsymbol{A} =\boldsymbol{X}/\boldsymbol{\Sigma}$ with matrix elements $a_{ij} = x_{ij}/\sigma_i$ and the vector $\boldsymbol{b}$ with elements $b_i = y_i/\sigma_i$. The $\chi^2$ function We can rewrite $$ \frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{A}^T\left( \boldsymbol{b}-\boldsymbol{A}\boldsymbol{\beta}\right), $$ as $$ \boldsymbol{A}^T\boldsymbol{b} = \boldsymbol{A}^T\boldsymbol{A}\boldsymbol{\beta}, $$ and if the matrix $\boldsymbol{A}^T\boldsymbol{A}$ is invertible we have the solution $$ \boldsymbol{\beta} =\left(\boldsymbol{A}^T\boldsymbol{A}\right)^{-1}\boldsymbol{A}^T\boldsymbol{b}. 
$$ The $\chi^2$ function If we then introduce the matrix $$ \boldsymbol{H} = \left(\boldsymbol{A}^T\boldsymbol{A}\right)^{-1}, $$ we have then the following expression for the parameters $\beta_j$ (the matrix elements of $\boldsymbol{H}$ are $h_{ij}$) $$ \beta_j = \sum_{k=0}^{p-1}h_{jk}\sum_{i=0}^{n-1}\frac{y_i}{\sigma_i}\frac{x_{ik}}{\sigma_i} = \sum_{k=0}^{p-1}h_{jk}\sum_{i=0}^{n-1}b_ia_{ik} $$ We state without proof the expression for the uncertainty in the parameters $\beta_j$ as (we leave this as an exercise) $$ \sigma^2(\beta_j) = \sum_{i=0}^{n-1}\sigma_i^2\left( \frac{\partial \beta_j}{\partial y_i}\right)^2, $$ resulting in $$ \sigma^2(\beta_j) = \left(\sum_{k=0}^{p-1}h_{jk}\sum_{i=0}^{n-1}a_{ik}\right)\left(\sum_{l=0}^{p-1}h_{jl}\sum_{m=0}^{n-1}a_{ml}\right) = h_{jj}! $$ The $\chi^2$ function The first step here is to approximate the function $y$ with a first-order polynomial, that is we write $$ y=y(x) \rightarrow y(x_i) \approx \beta_0+\beta_1 x_i. $$ By computing the derivatives of $\chi^2$ with respect to $\beta_0$ and $\beta_1$ show that these are given by $$ \frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_0} = -2\left[ \frac{1}{n}\sum_{i=0}^{n-1}\left(\frac{y_i-\beta_0-\beta_1x_{i}}{\sigma_i^2}\right)\right]=0, $$ and $$ \frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_1} = -\frac{2}{n}\left[ \sum_{i=0}^{n-1}x_i\left(\frac{y_i-\beta_0-\beta_1x_{i}}{\sigma_i^2}\right)\right]=0. $$ The $\chi^2$ function For a linear fit (a first-order polynomial) we don't need to invert a matrix!! Defining $$ \gamma = \sum_{i=0}^{n-1}\frac{1}{\sigma_i^2}, $$ $$ \gamma_x = \sum_{i=0}^{n-1}\frac{x_{i}}{\sigma_i^2}, $$ $$ \gamma_y = \sum_{i=0}^{n-1}\left(\frac{y_i}{\sigma_i^2}\right), $$ $$ \gamma_{xx} = \sum_{i=0}^{n-1}\frac{x_ix_{i}}{\sigma_i^2}, $$ $$ \gamma_{xy} = \sum_{i=0}^{n-1}\frac{y_ix_{i}}{\sigma_i^2}, $$ we obtain $$ \beta_0 = \frac{\gamma_{xx}\gamma_y-\gamma_x\gamma_y}{\gamma\gamma_{xx}-\gamma_x^2}, $$ $$ \beta_1 = \frac{\gamma_{xy}\gamma-\gamma_x\gamma_y}{\gamma\gamma_{xx}-\gamma_x^2}. $$ This approach (different linear and non-linear regression) suffers often from both being underdetermined and overdetermined in the unknown coefficients $\beta_i$. A better approach is to use the Singular Value Decomposition (SVD) method discussed below. Or using Lasso and Ridge regression. See below. Fitting an Equation of State for Dense Nuclear Matter Before we continue, let us introduce yet another example. We are going to fit the nuclear equation of state using results from many-body calculations. The equation of state we have made available here, as function of density, has been derived using modern nucleon-nucleon potentials with the addition of three-body forces. This time the file is presented as a standard csv file. The beginning of the Python code here is similar to what you have seen before, with the same initializations and declarations. We use also pandas again, rather extensively in order to organize our data. The difference now is that we use Scikit-Learn's regression tools instead of our own matrix inversion implementation. Furthermore, we sneak in Ridge regression (to be discussed below) which includes a hyperparameter $\lambda$, also to be explained below. 
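Before turning to the equation-of-state fit, a hedged sketch of the weighted least squares ($\chi^2$) solution derived above, with invented data points and invented error bars: we rescale the rows by $1/\sigma_i$ and solve the resulting normal equations.

```python
import numpy as np

np.random.seed(42)
n = 50
x = np.linspace(0, 1, n)
sigma = 0.05 + 0.1 * np.random.rand(n)          # per-point uncertainties (invented)
y = 1.0 + 2.0 * x + sigma * np.random.randn(n)  # straight line plus noise

# Design matrix for a first-order polynomial
X = np.column_stack([np.ones(n), x])

# A = X / Sigma and b = y / Sigma as defined above
A = X / sigma[:, None]
b = y / sigma

H = np.linalg.inv(A.T @ A)
beta = H @ A.T @ b
beta_err = np.sqrt(np.diag(H))   # sigma^2(beta_j) = h_jj

print("beta0, beta1          :", beta)
print("uncertainties on beta :", beta_err)
```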
The code End of explanation import os import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split # Where to save the figures and data files PROJECT_ROOT_DIR = "Results" FIGURE_ID = "Results/FigureFiles" DATA_ID = "DataFiles/" if not os.path.exists(PROJECT_ROOT_DIR): os.mkdir(PROJECT_ROOT_DIR) if not os.path.exists(FIGURE_ID): os.makedirs(FIGURE_ID) if not os.path.exists(DATA_ID): os.makedirs(DATA_ID) def image_path(fig_id): return os.path.join(FIGURE_ID, fig_id) def data_path(dat_id): return os.path.join(DATA_ID, dat_id) def save_fig(fig_id): plt.savefig(image_path(fig_id) + ".png", format='png') def R2(y_data, y_model): return 1 - np.sum((y_data - y_model) ** 2) / np.sum((y_data - np.mean(y_data)) ** 2) def MSE(y_data,y_model): n = np.size(y_model) return np.sum((y_data-y_model)**2)/n infile = open(data_path("EoS.csv"),'r') # Read the EoS data as csv file and organized into two arrays with density and energies EoS = pd.read_csv(infile, names=('Density', 'Energy')) EoS['Energy'] = pd.to_numeric(EoS['Energy'], errors='coerce') EoS = EoS.dropna() Energies = EoS['Energy'] Density = EoS['Density'] # The design matrix now as function of various polytrops X = np.zeros((len(Density),5)) X[:,0] = 1 X[:,1] = Density**(2.0/3.0) X[:,2] = Density X[:,3] = Density**(4.0/3.0) X[:,4] = Density**(5.0/3.0) # We split the data in test and training data X_train, X_test, y_train, y_test = train_test_split(X, Energies, test_size=0.2) # matrix inversion to find beta beta = np.linalg.inv(X_train.T.dot(X_train)).dot(X_train.T).dot(y_train) # and then make the prediction ytilde = X_train @ beta print("Training R2") print(R2(y_train,ytilde)) print("Training MSE") print(MSE(y_train,ytilde)) ypredict = X_test @ beta print("Test R2") print(R2(y_test,ypredict)) print("Test MSE") print(MSE(y_test,ypredict)) Explanation: The above simple polynomial in density $\rho$ gives an excellent fit to the data. We note also that there is a small deviation between the standard OLS and the Ridge regression at higher densities. We discuss this in more detail below. Splitting our Data in Training and Test data It is normal in essentially all Machine Learning studies to split the data in a training set and a test set (sometimes also an additional validation set). Scikit-Learn has an own function for this. There is no explicit recipe for how much data should be included as training data and say test data. An accepted rule of thumb is to use approximately $2/3$ to $4/5$ of the data as training data. We will postpone a discussion of this splitting to the end of these notes and our discussion of the so-called bias-variance tradeoff. Here we limit ourselves to repeat the above equation of state fitting example but now splitting the data into a training set and a test set. End of explanation import numpy as np import matplotlib.pyplot as plt import pandas as pd import seaborn as sns Explanation: <!-- !split --> The Boston housing data example The Boston housing data set was originally a part of UCI Machine Learning Repository and has been removed now. The data set is now included in Scikit-Learn's library. There are 506 samples and 13 feature (predictor) variables in this data set. The objective is to predict the value of prices of the house using the features (predictors) listed here. The features/predictors are 1. 
CRIM: Per capita crime rate by town ZN: Proportion of residential land zoned for lots over 25000 square feet INDUS: Proportion of non-retail business acres per town CHAS: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise) NOX: Nitric oxide concentration (parts per 10 million) RM: Average number of rooms per dwelling AGE: Proportion of owner-occupied units built prior to 1940 DIS: Weighted distances to five Boston employment centers RAD: Index of accessibility to radial highways TAX: Full-value property tax rate per USD10000 B: $1000(Bk - 0.63)^2$, where $Bk$ is the proportion of [people of African American descent] by town LSTAT: Percentage of lower status of the population MEDV: Median value of owner-occupied homes in USD 1000s Housing data, the code We start by importing the libraries End of explanation from sklearn.datasets import load_boston boston_dataset = load_boston() # boston_dataset is a dictionary # let's check what it contains boston_dataset.keys() Explanation: and load the Boston Housing DataSet from Scikit-Learn End of explanation boston = pd.DataFrame(boston_dataset.data, columns=boston_dataset.feature_names) boston.head() boston['MEDV'] = boston_dataset.target Explanation: Then we invoke Pandas End of explanation # check for missing values in all the columns boston.isnull().sum() Explanation: and preprocess the data End of explanation # set the size of the figure sns.set(rc={'figure.figsize':(11.7,8.27)}) # plot a histogram showing the distribution of the target values sns.distplot(boston['MEDV'], bins=30) plt.show() Explanation: We can then visualize the data End of explanation # compute the pair wise correlation for all columns correlation_matrix = boston.corr().round(2) # use the heatmap function from seaborn to plot the correlation matrix # annot = True to print the values inside the square sns.heatmap(data=correlation_matrix, annot=True) Explanation: It is now useful to look at the correlation matrix End of explanation plt.figure(figsize=(20, 5)) features = ['LSTAT', 'RM'] target = boston['MEDV'] for i, col in enumerate(features): plt.subplot(1, len(features) , i+1) x = boston[col] y = target plt.scatter(x, y, marker='o') plt.title(col) plt.xlabel(col) plt.ylabel('MEDV') Explanation: From the above coorelation plot we can see that MEDV is strongly correlated to LSTAT and RM. We see also that RAD and TAX are stronly correlated, but we don't include this in our features together to avoid multi-colinearity End of explanation X = pd.DataFrame(np.c_[boston['LSTAT'], boston['RM']], columns = ['LSTAT','RM']) Y = boston['MEDV'] Explanation: Now we start training our model End of explanation from sklearn.model_selection import train_test_split # splits the training and test data set in 80% : 20% # assign random_state to any value.This ensures consistency. 
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.2, random_state=5) print(X_train.shape) print(X_test.shape) print(Y_train.shape) print(Y_test.shape) Explanation: We split the data into training and test sets End of explanation from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error, r2_score lin_model = LinearRegression() lin_model.fit(X_train, Y_train) # model evaluation for training set y_train_predict = lin_model.predict(X_train) rmse = (np.sqrt(mean_squared_error(Y_train, y_train_predict))) r2 = r2_score(Y_train, y_train_predict) print("The model performance for training set") print("--------------------------------------") print('RMSE is {}'.format(rmse)) print('R2 score is {}'.format(r2)) print("\n") # model evaluation for testing set y_test_predict = lin_model.predict(X_test) # root mean square error of the model rmse = (np.sqrt(mean_squared_error(Y_test, y_test_predict))) # r-squared score of the model r2 = r2_score(Y_test, y_test_predict) print("The model performance for testing set") print("--------------------------------------") print('RMSE is {}'.format(rmse)) print('R2 score is {}'.format(r2)) # plotting the y_test vs y_pred # ideally should have been a straight line plt.scatter(Y_test, y_test_predict) plt.show() Explanation: Then we use the linear regression functionality from Scikit-Learn End of explanation # Common imports import os import numpy as np import pandas as pd import matplotlib.pyplot as plt import sklearn.linear_model as skl from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split from sklearn.preprocessing import MinMaxScaler, StandardScaler, Normalizer # Where to save the figures and data files PROJECT_ROOT_DIR = "Results" FIGURE_ID = "Results/FigureFiles" DATA_ID = "DataFiles/" if not os.path.exists(PROJECT_ROOT_DIR): os.mkdir(PROJECT_ROOT_DIR) if not os.path.exists(FIGURE_ID): os.makedirs(FIGURE_ID) if not os.path.exists(DATA_ID): os.makedirs(DATA_ID) def image_path(fig_id): return os.path.join(FIGURE_ID, fig_id) def data_path(dat_id): return os.path.join(DATA_ID, dat_id) def save_fig(fig_id): plt.savefig(image_path(fig_id) + ".png", format='png') def FrankeFunction(x,y): term1 = 0.75*np.exp(-(0.25*(9*x-2)**2) - 0.25*((9*y-2)**2)) term2 = 0.75*np.exp(-((9*x+1)**2)/49.0 - 0.1*(9*y+1)) term3 = 0.5*np.exp(-(9*x-7)**2/4.0 - 0.25*((9*y-3)**2)) term4 = -0.2*np.exp(-(9*x-4)**2 - (9*y-7)**2) return term1 + term2 + term3 + term4 def create_X(x, y, n ): if len(x.shape) > 1: x = np.ravel(x) y = np.ravel(y) N = len(x) l = int((n+1)*(n+2)/2) # Number of elements in beta X = np.ones((N,l)) for i in range(1,n+1): q = int((i)*(i+1)/2) for k in range(i+1): X[:,q+k] = (x**(i-k))*(y**k) return X # Making meshgrid of datapoints and compute Franke's function n = 5 N = 1000 x = np.sort(np.random.uniform(0, 1, N)) y = np.sort(np.random.uniform(0, 1, N)) z = FrankeFunction(x, y) X = create_X(x, y, n=n) # split in training and test data X_train, X_test, y_train, y_test = train_test_split(X,z,test_size=0.2) clf = skl.LinearRegression().fit(X_train, y_train) # The mean squared error and R2 score print("MSE before scaling: {:.2f}".format(mean_squared_error(clf.predict(X_test), y_test))) print("R2 score before scaling {:.2f}".format(clf.score(X_test,y_test))) scaler = StandardScaler() scaler.fit(X_train) X_train_scaled = scaler.transform(X_train) X_test_scaled = scaler.transform(X_test) print("Feature min values before scaling:\n {}".format(X_train.min(axis=0))) print("Feature 
max values before scaling:\n {}".format(X_train.max(axis=0))) print("Feature min values after scaling:\n {}".format(X_train_scaled.min(axis=0))) print("Feature max values after scaling:\n {}".format(X_train_scaled.max(axis=0))) clf = skl.LinearRegression().fit(X_train_scaled, y_train) print("MSE after scaling: {:.2f}".format(mean_squared_error(clf.predict(X_test_scaled), y_test))) print("R2 score for scaled data: {:.2f}".format(clf.score(X_test_scaled,y_test))) Explanation: Reducing the number of degrees of freedom, overarching view Many Machine Learning problems involve thousands or even millions of features for each training instance. Not only does this make training extremely slow, it can also make it much harder to find a good solution, as we will see. This problem is often referred to as the curse of dimensionality. Fortunately, in real-world problems, it is often possible to reduce the number of features considerably, turning an intractable problem into a tractable one. Later we will discuss some of the most popular dimensionality reduction techniques: the principal component analysis (PCA), Kernel PCA, and Locally Linear Embedding (LLE). Principal component analysis and its various variants deal with the problem of fitting a low-dimensional affine subspace to a set of of data points in a high-dimensional space. With its family of methods it is one of the most used tools in data modeling, compression and visualization. Preprocessing our data Before we proceed however, we will discuss how to preprocess our data. Till now and in connection with our previous examples we have not met so many cases where we are too sensitive to the scaling of our data. Normally the data may need a rescaling and/or may be sensitive to extreme values. Scaling the data renders our inputs much more suitable for the algorithms we want to employ. Scikit-Learn has several functions which allow us to rescale the data, normally resulting in much better results in terms of various accuracy scores. The StandardScaler function in Scikit-Learn ensures that for each feature/predictor we study the mean value is zero and the variance is one (every column in the design/feature matrix). This scaling has the drawback that it does not ensure that we have a particular maximum or minimum in our data set. Another function included in Scikit-Learn is the MinMaxScaler which ensures that all features are exactly between $0$ and $1$. The More preprocessing The Normalizer scales each data point such that the feature vector has a euclidean length of one. In other words, it projects a data point on the circle (or sphere in the case of higher dimensions) with a radius of 1. This means every data point is scaled by a different number (by the inverse of it’s length). This normalization is often used when only the direction (or angle) of the data matters, not the length of the feature vector. The RobustScaler works similarly to the StandardScaler in that it ensures statistical properties for each feature that guarantee that they are on the same scale. However, the RobustScaler uses the median and quartiles, instead of mean and variance. This makes the RobustScaler ignore data points that are very different from the rest (like measurement errors). These odd data points are also called outliers, and might often lead to trouble for other scaling techniques. 
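A hedged sketch (random toy features with one artificial outlier, not part of the Franke-function example) of how the other scalers mentioned above behave:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, RobustScaler, Normalizer

rng = np.random.RandomState(0)
X = rng.randn(10, 3)
X[0, 0] = 50.0   # artificial outlier

print("MinMaxScaler (all features mapped to [0, 1]):\n", MinMaxScaler().fit_transform(X)[:3])
print("RobustScaler (median and quartiles, less sensitive to the outlier):\n",
      RobustScaler().fit_transform(X)[:3])
print("Normalizer (each row scaled to unit Euclidean length):\n",
      Normalizer().fit_transform(X)[:3])
```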
Simple preprocessing examples, Franke function and regression End of explanation import numpy as np # SVD inversion def SVDinv(A): ''' Takes as input a numpy matrix A and returns inv(A) based on singular value decomposition (SVD). SVD is numerically more stable than the inversion algorithms provided by numpy and scipy.linalg at the cost of being slower. ''' U, s, VT = np.linalg.svd(A) # print('test U') # print( (np.transpose(U) @ U - U @np.transpose(U))) # print('test VT') # print( (np.transpose(VT) @ VT - VT @np.transpose(VT))) print(U) print(s) print(VT) D = np.zeros((len(U),len(VT))) for i in range(0,len(VT)): D[i,i]=s[i] UT = np.transpose(U); V = np.transpose(VT); invD = np.linalg.inv(D) return np.matmul(V,np.matmul(invD,UT)) X = np.array([ [1.0, -1.0, 2.0], [1.0, 0.0, 1.0], [1.0, 2.0, -1.0], [1.0, 1.0, 0.0] ]) print(X) A = np.transpose(X) @ X print(A) # Brute force inversion of super-collinear matrix #B = np.linalg.inv(A) #print(B) C = SVDinv(A) print(C) Explanation: The singular value decomposition The examples we have looked at so far are cases where we normally can invert the matrix $\boldsymbol{X}^T\boldsymbol{X}$. Using a polynomial expansion as we did both for the masses and the fitting of the equation of state, leads to row vectors of the design matrix which are essentially orthogonal due to the polynomial character of our model. Obtaining the inverse of the design matrix is then often done via a so-called LU, QR or Cholesky decomposition. This may however not the be case in general and a standard matrix inversion algorithm based on say LU, QR or Cholesky decomposition may lead to singularities. We will see examples of this below. There is however a way to partially circumvent this problem and also gain some insights about the ordinary least squares approach, and later shrinkage methods like Ridge and Lasso regressions. This is given by the Singular Value Decomposition algorithm, perhaps the most powerful linear algebra algorithm. Let us look at a different example where we may have problems with the standard matrix inversion algorithm. Thereafter we dive into the math of the SVD. Linear Regression Problems One of the typical problems we encounter with linear regression, in particular when the matrix $\boldsymbol{X}$ (our so-called design matrix) is high-dimensional, are problems with near singular or singular matrices. The column vectors of $\boldsymbol{X}$ may be linearly dependent, normally referred to as super-collinearity. This means that the matrix may be rank deficient and it is basically impossible to to model the data using linear regression. As an example, consider the matrix $$ \begin{align} \mathbf{X} & = \left[ \begin{array}{rrr} 1 & -1 & 2 \ 1 & 0 & 1 \ 1 & 2 & -1 \ 1 & 1 & 0 \end{array} \right] \end{align} $$ The columns of $\boldsymbol{X}$ are linearly dependent. We see this easily since the the first column is the row-wise sum of the other two columns. The rank (more correct, the column rank) of a matrix is the dimension of the space spanned by the column vectors. Hence, the rank of $\mathbf{X}$ is equal to the number of linearly independent columns. In this particular case the matrix has rank 2. Super-collinearity of an $(n \times p)$-dimensional design matrix $\mathbf{X}$ implies that the inverse of the matrix $\boldsymbol{X}^T\boldsymbol{X}$ (the matrix we need to invert to solve the linear regression equations) is non-invertible. If we have a square matrix that does not have an inverse, we say this matrix singular. 
The example here demonstrates this $$ \begin{align} \boldsymbol{X} & = \left[ \begin{array}{rr} 1 & -1 \ 1 & -1 \end{array} \right]. \end{align} $$ We see easily that $\mbox{det}(\boldsymbol{X}) = x_{11} x_{22} - x_{12} x_{21} = 1 \times (-1) - 1 \times (-1) = 0$. Hence, $\mathbf{X}$ is singular and its inverse is undefined. This is equivalent to saying that the matrix $\boldsymbol{X}$ has at least an eigenvalue which is zero. Fixing the singularity If our design matrix $\boldsymbol{X}$ which enters the linear regression problem <!-- Equation labels as ordinary links --> <div id="_auto1"></div> $$ \begin{equation} \boldsymbol{\beta} = (\boldsymbol{X}^{T} \boldsymbol{X})^{-1} \boldsymbol{X}^{T} \boldsymbol{y}, \label{_auto1} \tag{1} \end{equation} $$ has linearly dependent column vectors, we will not be able to compute the inverse of $\boldsymbol{X}^T\boldsymbol{X}$ and we cannot find the parameters (estimators) $\beta_i$. The estimators are only well-defined if $(\boldsymbol{X}^{T}\boldsymbol{X})^{-1}$ exits. This is more likely to happen when the matrix $\boldsymbol{X}$ is high-dimensional. In this case it is likely to encounter a situation where the regression parameters $\beta_i$ cannot be estimated. A cheap ad hoc approach is simply to add a small diagonal component to the matrix to invert, that is we change $$ \boldsymbol{X}^{T} \boldsymbol{X} \rightarrow \boldsymbol{X}^{T} \boldsymbol{X}+\lambda \boldsymbol{I}, $$ where $\boldsymbol{I}$ is the identity matrix. When we discuss Ridge regression this is actually what we end up evaluating. The parameter $\lambda$ is called a hyperparameter. More about this later. Basic math of the SVD From standard linear algebra we know that a square matrix $\boldsymbol{X}$ can be diagonalized if and only it is a so-called normal matrix, that is if $\boldsymbol{X}\in {\mathbb{R}}^{n\times n}$ we have $\boldsymbol{X}\boldsymbol{X}^T=\boldsymbol{X}^T\boldsymbol{X}$ or if $\boldsymbol{X}\in {\mathbb{C}}^{n\times n}$ we have $\boldsymbol{X}\boldsymbol{X}^{\dagger}=\boldsymbol{X}^{\dagger}\boldsymbol{X}$. The matrix has then a set of eigenpairs $$ (\lambda_1,\boldsymbol{u}_1),\dots, (\lambda_n,\boldsymbol{u}_n), $$ and the eigenvalues are given by the diagonal matrix $$ \boldsymbol{\Sigma}=\mathrm{Diag}(\lambda_1, \dots,\lambda_n). $$ The matrix $\boldsymbol{X}$ can be written in terms of an orthogonal/unitary transformation $\boldsymbol{U}$ $$ \boldsymbol{X} = \boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{V}^T, $$ with $\boldsymbol{U}\boldsymbol{U}^T=\boldsymbol{I}$ or $\boldsymbol{U}\boldsymbol{U}^{\dagger}=\boldsymbol{I}$. Not all square matrices are diagonalizable. A matrix like the one discussed above $$ \boldsymbol{X} = \begin{bmatrix} 1& -1 \ 1& -1\ \end{bmatrix} $$ is not diagonalizable, it is a so-called defective matrix. It is easy to see that the condition $\boldsymbol{X}\boldsymbol{X}^T=\boldsymbol{X}^T\boldsymbol{X}$ is not fulfilled. The SVD, a Fantastic Algorithm However, and this is the strength of the SVD algorithm, any general matrix $\boldsymbol{X}$ can be decomposed in terms of a diagonal matrix and two orthogonal/unitary matrices. The Singular Value Decompostion (SVD) theorem states that a general $m\times n$ matrix $\boldsymbol{X}$ can be written in terms of a diagonal matrix $\boldsymbol{\Sigma}$ of dimensionality $m\times n$ and two orthognal matrices $\boldsymbol{U}$ and $\boldsymbol{V}$, where the first has dimensionality $m \times m$ and the last dimensionality $n\times n$. 
We have then $$ \boldsymbol{X} = \boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{V}^T $$ As an example, the above defective matrix can be decomposed as $$ \boldsymbol{X} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1& 1 \ 1& -1\ \end{bmatrix} \begin{bmatrix} 2& 0 \ 0& 0\ \end{bmatrix} \frac{1}{\sqrt{2}}\begin{bmatrix} 1& -1 \ 1& 1\ \end{bmatrix}=\boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{V}^T, $$ with eigenvalues $\sigma_1=2$ and $\sigma_2=0$. The SVD exits always! The SVD decomposition (singular values) gives eigenvalues $\sigma_i\geq\sigma_{i+1}$ for all $i$ and for dimensions larger than $i=p$, the eigenvalues (singular values) are zero. In the general case, where our design matrix $\boldsymbol{X}$ has dimension $n\times p$, the matrix is thus decomposed into an $n\times n$ orthogonal matrix $\boldsymbol{U}$, a $p\times p$ orthogonal matrix $\boldsymbol{V}$ and a diagonal matrix $\boldsymbol{\Sigma}$ with $r=\mathrm{min}(n,p)$ singular values $\sigma_i\geq 0$ on the main diagonal and zeros filling the rest of the matrix. There are at most $p$ singular values assuming that $n > p$. In our regression examples for the nuclear masses and the equation of state this is indeed the case, while for the Ising model we have $p > n$. These are often cases that lead to near singular or singular matrices. The columns of $\boldsymbol{U}$ are called the left singular vectors while the columns of $\boldsymbol{V}$ are the right singular vectors. Economy-size SVD If we assume that $n > p$, then our matrix $\boldsymbol{U}$ has dimension $n \times n$. The last $n-p$ columns of $\boldsymbol{U}$ become however irrelevant in our calculations since they are multiplied with the zeros in $\boldsymbol{\Sigma}$. The economy-size decomposition removes extra rows or columns of zeros from the diagonal matrix of singular values, $\boldsymbol{\Sigma}$, along with the columns in either $\boldsymbol{U}$ or $\boldsymbol{V}$ that multiply those zeros in the expression. Removing these zeros and columns can improve execution time and reduce storage requirements without compromising the accuracy of the decomposition. If $n > p$, we keep only the first $p$ columns of $\boldsymbol{U}$ and $\boldsymbol{\Sigma}$ has dimension $p\times p$. If $p > n$, then only the first $n$ columns of $\boldsymbol{V}$ are computed and $\boldsymbol{\Sigma}$ has dimension $n\times n$. The $n=p$ case is obvious, we retain the full SVD. In general the economy-size SVD leads to less FLOPS and still conserving the desired accuracy. Codes for the SVD End of explanation # Importing various packages import numpy as np n = 100 x = np.random.normal(size=n) print(np.mean(x)) y = 4+3*x+np.random.normal(size=n) print(np.mean(y)) W = np.vstack((x, y)) C = np.cov(W) print(C) Explanation: The matrix $\boldsymbol{X}$ has columns that are linearly dependent. The first column is the row-wise sum of the other two columns. The rank of a matrix (the column rank) is the dimension of space spanned by the column vectors. The rank of the matrix is the number of linearly independent columns, in this case just $2$. We see this from the singular values when running the above code. Running the standard inversion algorithm for matrix inversion with $\boldsymbol{X}^T\boldsymbol{X}$ results in the program terminating due to a singular matrix. Mathematical Properties There are several interesting mathematical properties which will be relevant when we are going to discuss the differences between say ordinary least squares (OLS) and Ridge regression. 
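Before deriving these properties, a quick numerical check of the statements above may be useful; the following sketch simply recomputes the rank and the singular values of the $4\times 3$ matrix used in the code above.

import numpy as np

X = np.array([[1.0, -1.0, 2.0],
              [1.0,  0.0, 1.0],
              [1.0,  2.0, -1.0],
              [1.0,  1.0, 0.0]])

# the first column is the sum of the other two, so the column rank is 2
print(np.linalg.matrix_rank(X))
U, s, VT = np.linalg.svd(X)
print(s)   # the smallest singular value is zero to machine precision
# X^T X is then singular: np.linalg.inv may fail or return meaningless numbers,
# while the SVD-based pseudoinverse remains well defined
print(np.linalg.pinv(X.T @ X))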
We have from OLS that the parameters of the linear approximation are given by $$ \boldsymbol{\tilde{y}} = \boldsymbol{X}\boldsymbol{\beta} = \boldsymbol{X}\left(\boldsymbol{X}^T\boldsymbol{X}\right)^{-1}\boldsymbol{X}^T\boldsymbol{y}. $$ The matrix to invert can be rewritten in terms of our SVD decomposition as $$ \boldsymbol{X}^T\boldsymbol{X} = \boldsymbol{V}\boldsymbol{\Sigma}^T\boldsymbol{U}^T\boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{V}^T. $$ Using the orthogonality properties of $\boldsymbol{U}$ we have $$ \boldsymbol{X}^T\boldsymbol{X} = \boldsymbol{V}\boldsymbol{\Sigma}^T\boldsymbol{\Sigma}\boldsymbol{V}^T = \boldsymbol{V}\boldsymbol{D}\boldsymbol{V}^T, $$ with $\boldsymbol{D}$ being a diagonal matrix with values along the diagonal given by the singular values squared. This means that $$ (\boldsymbol{X}^T\boldsymbol{X})\boldsymbol{V} = \boldsymbol{V}\boldsymbol{D}, $$ that is the eigenvectors of $(\boldsymbol{X}^T\boldsymbol{X})$ are given by the columns of the right singular matrix of $\boldsymbol{X}$ and the eigenvalues are the squared singular values. It is easy to show (show this) that $$ (\boldsymbol{X}\boldsymbol{X}^T)\boldsymbol{U} = \boldsymbol{U}\boldsymbol{D}, $$ that is, the eigenvectors of $(\boldsymbol{X}\boldsymbol{X})^T$ are the columns of the left singular matrix and the eigenvalues are the same. Going back to our OLS equation we have $$ \boldsymbol{X}\boldsymbol{\beta} = \boldsymbol{X}\left(\boldsymbol{V}\boldsymbol{D}\boldsymbol{V}^T \right)^{-1}\boldsymbol{X}^T\boldsymbol{y}=\boldsymbol{U\Sigma V^T}\left(\boldsymbol{V}\boldsymbol{D}\boldsymbol{V}^T \right)^{-1}(\boldsymbol{U\Sigma V^T})^T\boldsymbol{y}=\boldsymbol{U}\boldsymbol{U}^T\boldsymbol{y}. $$ We will come back to this expression when we discuss Ridge regression. Ridge and LASSO Regression Let us remind ourselves about the expression for the standard Mean Squared Error (MSE) which we used to define our cost function and the equations for the ordinary least squares (OLS) method, that is our optimization problem is $$ {\displaystyle \min_{\boldsymbol{\beta}\in {\mathbb{R}}^{p}}}\frac{1}{n}\left{\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)^T\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)\right}. $$ or we can state it as $$ {\displaystyle \min_{\boldsymbol{\beta}\in {\mathbb{R}}^{p}}}\frac{1}{n}\sum_{i=0}^{n-1}\left(y_i-\tilde{y}_i\right)^2=\frac{1}{n}\vert\vert \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\vert\vert_2^2, $$ where we have used the definition of a norm-2 vector, that is $$ \vert\vert \boldsymbol{x}\vert\vert_2 = \sqrt{\sum_i x_i^2}. $$ By minimizing the above equation with respect to the parameters $\boldsymbol{\beta}$ we could then obtain an analytical expression for the parameters $\boldsymbol{\beta}$. We can add a regularization parameter $\lambda$ by defining a new cost function to be optimized, that is $$ {\displaystyle \min_{\boldsymbol{\beta}\in {\mathbb{R}}^{p}}}\frac{1}{n}\vert\vert \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\vert\vert_2^2+\lambda\vert\vert \boldsymbol{\beta}\vert\vert_2^2 $$ which leads to the Ridge regression minimization problem where we require that $\vert\vert \boldsymbol{\beta}\vert\vert_2^2\le t$, where $t$ is a finite number larger than zero. 
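To see the effect of the Ridge constraint in practice, a small sketch with Scikit-Learn's Ridge class can be used; its alpha parameter plays the role of $\lambda$ (up to the overall $1/n$ normalization of the cost function), and the toy data set below is an assumption made only for illustration.

import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.preprocessing import PolynomialFeatures

np.random.seed(3155)
n = 100
x = np.random.rand(n, 1)
y = 4.0 + 3.0*x[:, 0] + np.random.randn(n)

X = PolynomialFeatures(degree=5).fit_transform(x)

ols = LinearRegression(fit_intercept=False).fit(X, y)
print("OLS      ||beta||^2 =", np.sum(ols.coef_**2))
for lmb in (1e-3, 1e-1, 1e1, 1e3):
    ridge = Ridge(alpha=lmb, fit_intercept=False).fit(X, y)
    # the squared norm of the parameter vector shrinks as lambda increases
    print(f"lambda = {lmb:8.3f}  ||beta||^2 = {np.sum(ridge.coef_**2):10.4f}")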
By defining $$ C(\boldsymbol{X},\boldsymbol{\beta})=\frac{1}{n}\vert\vert \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\vert\vert_2^2+\lambda\vert\vert \boldsymbol{\beta}\vert\vert_1, $$ we have a new optimization equation $$ {\displaystyle \min_{\boldsymbol{\beta}\in {\mathbb{R}}^{p}}}\frac{1}{n}\vert\vert \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\vert\vert_2^2+\lambda\vert\vert \boldsymbol{\beta}\vert\vert_1 $$ which leads to Lasso regression. Lasso stands for least absolute shrinkage and selection operator. Here we have defined the norm-1 as $$ \vert\vert \boldsymbol{x}\vert\vert_1 = \sum_i \vert x_i\vert. $$ More on Ridge Regression Using the matrix-vector expression for Ridge regression, $$ C(\boldsymbol{X},\boldsymbol{\beta})=\frac{1}{n}\left{(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta})^T(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta})\right}+\lambda\boldsymbol{\beta}^T\boldsymbol{\beta}, $$ by taking the derivatives with respect to $\boldsymbol{\beta}$ we obtain then a slightly modified matrix inversion problem which for finite values of $\lambda$ does not suffer from singularity problems. We obtain $$ \boldsymbol{\beta}^{\mathrm{Ridge}} = \left(\boldsymbol{X}^T\boldsymbol{X}+\lambda\boldsymbol{I}\right)^{-1}\boldsymbol{X}^T\boldsymbol{y}, $$ with $\boldsymbol{I}$ being a $p\times p$ identity matrix with the constraint that $$ \sum_{i=0}^{p-1} \beta_i^2 \leq t, $$ with $t$ a finite positive number. We see that Ridge regression is nothing but the standard OLS with a modified diagonal term added to $\boldsymbol{X}^T\boldsymbol{X}$. The consequences, in particular for our discussion of the bias-variance tradeoff are rather interesting. Furthermore, if we use the result above in terms of the SVD decomposition (our analysis was done for the OLS method), we had $$ (\boldsymbol{X}\boldsymbol{X}^T)\boldsymbol{U} = \boldsymbol{U}\boldsymbol{D}. $$ We can analyse the OLS solutions in terms of the eigenvectors (the columns) of the right singular value matrix $\boldsymbol{U}$ as $$ \boldsymbol{X}\boldsymbol{\beta} = \boldsymbol{X}\left(\boldsymbol{V}\boldsymbol{D}\boldsymbol{V}^T \right)^{-1}\boldsymbol{X}^T\boldsymbol{y}=\boldsymbol{U\Sigma V^T}\left(\boldsymbol{V}\boldsymbol{D}\boldsymbol{V}^T \right)^{-1}(\boldsymbol{U\Sigma V^T})^T\boldsymbol{y}=\boldsymbol{U}\boldsymbol{U}^T\boldsymbol{y} $$ For Ridge regression this becomes $$ \boldsymbol{X}\boldsymbol{\beta}^{\mathrm{Ridge}} = \boldsymbol{U\Sigma V^T}\left(\boldsymbol{V}\boldsymbol{D}\boldsymbol{V}^T+\lambda\boldsymbol{I} \right)^{-1}(\boldsymbol{U\Sigma V^T})^T\boldsymbol{y}=\sum_{j=0}^{p-1}\boldsymbol{u}_j\boldsymbol{u}_j^T\frac{\sigma_j^2}{\sigma_j^2+\lambda}\boldsymbol{y}, $$ with the vectors $\boldsymbol{u}_j$ being the columns of $\boldsymbol{U}$. Interpreting the Ridge results Since $\lambda \geq 0$, it means that compared to OLS, we have $$ \frac{\sigma_j^2}{\sigma_j^2+\lambda} \leq 1. $$ Ridge regression finds the coordinates of $\boldsymbol{y}$ with respect to the orthonormal basis $\boldsymbol{U}$, it then shrinks the coordinates by $\frac{\sigma_j^2}{\sigma_j^2+\lambda}$. Recall that the SVD has eigenvalues ordered in a descending way, that is $\sigma_i \geq \sigma_{i+1}$. For small eigenvalues $\sigma_i$ it means that their contributions become less important, a fact which can be used to reduce the number of degrees of freedom. Actually, calculating the variance of $\boldsymbol{X}\boldsymbol{v}_j$ shows that this quantity is equal to $\sigma_j^2/n$. 
With a parameter $\lambda$ we can thus shrink the role of specific parameters. More interpretations For the sake of simplicity, let us assume that the design matrix is orthonormal, that is $$ \boldsymbol{X}^T\boldsymbol{X}=(\boldsymbol{X}^T\boldsymbol{X})^{-1} =\boldsymbol{I}. $$ In this case the standard OLS results in $$ \boldsymbol{\beta}^{\mathrm{OLS}} = \boldsymbol{X}^T\boldsymbol{y}=\sum_{i=0}^{p-1}\boldsymbol{u}_j\boldsymbol{u}_j^T\boldsymbol{y}, $$ and $$ \boldsymbol{\beta}^{\mathrm{Ridge}} = \left(\boldsymbol{I}+\lambda\boldsymbol{I}\right)^{-1}\boldsymbol{X}^T\boldsymbol{y}=\left(1+\lambda\right)^{-1}\boldsymbol{\beta}^{\mathrm{OLS}}, $$ that is the Ridge estimator scales the OLS estimator by the inverse of a factor $1+\lambda$, and the Ridge estimator converges to zero when the hyperparameter goes to infinity. We will come back to more interpreations after we have gone through some of the statistical analysis part. For more discussions of Ridge and Lasso regression, Wessel van Wieringen's article is highly recommended. Similarly, Mehta et al's article is also recommended. <!-- !split --> A better understanding of regularization The parameter $\lambda$ that we have introduced in the Ridge (and Lasso as well) regression is often called a regularization parameter or shrinkage parameter. It is common to call it a hyperparameter. What does it mean mathemtically? Here we will first look at how to analyze the difference between the standard OLS equations and the Ridge expressions in terms of a linear algebra analysis using the SVD algorithm. Thereafter, we will link (see the material on the bias-variance tradeoff below) these observation to the statisical analysis of the results. In particular we consider how the variance of the parameters $\boldsymbol{\beta}$ is affected by changing the parameter $\lambda$. Decomposing the OLS and Ridge expressions We have our design matrix $\boldsymbol{X}\in {\mathbb{R}}^{n\times p}$. With the SVD we decompose it as $$ \boldsymbol{X} = \boldsymbol{U\Sigma V^T}, $$ with $\boldsymbol{U}\in {\mathbb{R}}^{n\times n}$, $\boldsymbol{\Sigma}\in {\mathbb{R}}^{n\times p}$ and $\boldsymbol{V}\in {\mathbb{R}}^{p\times p}$. The matrices $\boldsymbol{U}$ and $\boldsymbol{V}$ are unitary/orthonormal matrices, that is in case the matrices are real we have $\boldsymbol{U}^T\boldsymbol{U}=\boldsymbol{U}\boldsymbol{U}^T=\boldsymbol{I}$ and $\boldsymbol{V}^T\boldsymbol{V}=\boldsymbol{V}\boldsymbol{V}^T=\boldsymbol{I}$. Introducing the Covariance and Correlation functions Before we discuss the link between for example Ridge regression and the singular value decomposition, we need to remind ourselves about the definition of the covariance and the correlation function. These are quantities Suppose we have defined two vectors $\hat{x}$ and $\hat{y}$ with $n$ elements each. The covariance matrix $\boldsymbol{C}$ is defined as $$ \boldsymbol{C}[\boldsymbol{x},\boldsymbol{y}] = \begin{bmatrix} \mathrm{cov}[\boldsymbol{x},\boldsymbol{x}] & \mathrm{cov}[\boldsymbol{x},\boldsymbol{y}] \ \mathrm{cov}[\boldsymbol{y},\boldsymbol{x}] & \mathrm{cov}[\boldsymbol{y},\boldsymbol{y}] \ \end{bmatrix}, $$ where for example $$ \mathrm{cov}[\boldsymbol{x},\boldsymbol{y}] =\frac{1}{n} \sum_{i=0}^{n-1}(x_i- \overline{x})(y_i- \overline{y}). 
$$

With this definition and recalling that the variance is defined as

$$
\mathrm{var}[\boldsymbol{x}]=\frac{1}{n} \sum_{i=0}^{n-1}(x_i- \overline{x})^2,
$$

we can rewrite the covariance matrix as

$$
\boldsymbol{C}[\boldsymbol{x},\boldsymbol{y}] = \begin{bmatrix} \mathrm{var}[\boldsymbol{x}] & \mathrm{cov}[\boldsymbol{x},\boldsymbol{y}] \\ \mathrm{cov}[\boldsymbol{x},\boldsymbol{y}] & \mathrm{var}[\boldsymbol{y}] \\ \end{bmatrix}.
$$

The covariance is not bounded and its magnitude may become very large, which can lead to loss of numerical precision. It is common to scale the covariance matrix by introducing instead the correlation matrix defined via the so-called correlation function

$$
\mathrm{corr}[\boldsymbol{x},\boldsymbol{y}]=\frac{\mathrm{cov}[\boldsymbol{x},\boldsymbol{y}]}{\sqrt{\mathrm{var}[\boldsymbol{x}] \mathrm{var}[\boldsymbol{y}]}}.
$$

The correlation function takes values $\mathrm{corr}[\boldsymbol{x},\boldsymbol{y}] \in [-1,1]$, which avoids possible problems with too large values. We can then define the correlation matrix for the two vectors $\boldsymbol{x}$ and $\boldsymbol{y}$ as

$$
\boldsymbol{K}[\boldsymbol{x},\boldsymbol{y}] = \begin{bmatrix} 1 & \mathrm{corr}[\boldsymbol{x},\boldsymbol{y}] \\ \mathrm{corr}[\boldsymbol{y},\boldsymbol{x}] & 1 \\ \end{bmatrix}.
$$

In the above example this is the function we constructed using pandas.

Correlation Function and Design/Feature Matrix

In our derivation of the various regression algorithms like Ordinary Least Squares or Ridge regression we defined the design/feature matrix $\boldsymbol{X}$ as

$$
\boldsymbol{X}=\begin{bmatrix}
x_{0,0} & x_{0,1} & x_{0,2} & \dots & \dots & x_{0,p-1}\\
x_{1,0} & x_{1,1} & x_{1,2} & \dots & \dots & x_{1,p-1}\\
x_{2,0} & x_{2,1} & x_{2,2} & \dots & \dots & x_{2,p-1}\\
\dots & \dots & \dots & \dots & \dots & \dots \\
x_{n-2,0} & x_{n-2,1} & x_{n-2,2} & \dots & \dots & x_{n-2,p-1}\\
x_{n-1,0} & x_{n-1,1} & x_{n-1,2} & \dots & \dots & x_{n-1,p-1}\\
\end{bmatrix},
$$

with $\boldsymbol{X}\in {\mathbb{R}}^{n\times p}$, the predictors/features $p$ referring to the column numbers and the entries $n$ being the row elements. We can rewrite the design/feature matrix in terms of its column vectors as

$$
\boldsymbol{X}=\begin{bmatrix} \boldsymbol{x}_0 & \boldsymbol{x}_1 & \boldsymbol{x}_2 & \dots & \dots & \boldsymbol{x}_{p-1}\end{bmatrix},
$$

with a given column vector

$$
\boldsymbol{x}_i^T = \begin{bmatrix}x_{0,i} & x_{1,i} & x_{2,i} & \dots & \dots & x_{n-1,i}\end{bmatrix}.
$$

With these definitions, we can now rewrite our $2\times 2$ correlation/covariance matrix in terms of a more general design/feature matrix $\boldsymbol{X}\in {\mathbb{R}}^{n\times p}$. This leads to a $p\times p$ covariance matrix for the vectors $\boldsymbol{x}_i$ with $i=0,1,\dots,p-1$,

$$
\boldsymbol{C}[\boldsymbol{x}] = \begin{bmatrix}
\mathrm{var}[\boldsymbol{x}_0] & \mathrm{cov}[\boldsymbol{x}_0,\boldsymbol{x}_1] & \mathrm{cov}[\boldsymbol{x}_0,\boldsymbol{x}_2] & \dots & \dots & \mathrm{cov}[\boldsymbol{x}_0,\boldsymbol{x}_{p-1}]\\
\mathrm{cov}[\boldsymbol{x}_1,\boldsymbol{x}_0] & \mathrm{var}[\boldsymbol{x}_1] & \mathrm{cov}[\boldsymbol{x}_1,\boldsymbol{x}_2] & \dots & \dots & \mathrm{cov}[\boldsymbol{x}_1,\boldsymbol{x}_{p-1}]\\
\mathrm{cov}[\boldsymbol{x}_2,\boldsymbol{x}_0] & \mathrm{cov}[\boldsymbol{x}_2,\boldsymbol{x}_1] & \mathrm{var}[\boldsymbol{x}_2] & \dots & \dots & \mathrm{cov}[\boldsymbol{x}_2,\boldsymbol{x}_{p-1}]\\
\dots & \dots & \dots & \dots & \dots & \dots \\
\mathrm{cov}[\boldsymbol{x}_{p-1},\boldsymbol{x}_0] & \mathrm{cov}[\boldsymbol{x}_{p-1},\boldsymbol{x}_1] & \mathrm{cov}[\boldsymbol{x}_{p-1},\boldsymbol{x}_2] & \dots & \dots & \mathrm{var}[\boldsymbol{x}_{p-1}]\\
\end{bmatrix},
$$

and the correlation matrix

$$
\boldsymbol{K}[\boldsymbol{x}] = \begin{bmatrix}
1 & \mathrm{corr}[\boldsymbol{x}_0,\boldsymbol{x}_1] & \mathrm{corr}[\boldsymbol{x}_0,\boldsymbol{x}_2] & \dots & \dots & \mathrm{corr}[\boldsymbol{x}_0,\boldsymbol{x}_{p-1}]\\
\mathrm{corr}[\boldsymbol{x}_1,\boldsymbol{x}_0] & 1 & \mathrm{corr}[\boldsymbol{x}_1,\boldsymbol{x}_2] & \dots & \dots & \mathrm{corr}[\boldsymbol{x}_1,\boldsymbol{x}_{p-1}]\\
\mathrm{corr}[\boldsymbol{x}_2,\boldsymbol{x}_0] & \mathrm{corr}[\boldsymbol{x}_2,\boldsymbol{x}_1] & 1 & \dots & \dots & \mathrm{corr}[\boldsymbol{x}_2,\boldsymbol{x}_{p-1}]\\
\dots & \dots & \dots & \dots & \dots & \dots \\
\mathrm{corr}[\boldsymbol{x}_{p-1},\boldsymbol{x}_0] & \mathrm{corr}[\boldsymbol{x}_{p-1},\boldsymbol{x}_1] & \mathrm{corr}[\boldsymbol{x}_{p-1},\boldsymbol{x}_2] & \dots & \dots & 1\\
\end{bmatrix}.
$$

Covariance Matrix Examples

The Numpy function np.cov calculates the covariance elements using the factor $1/(n-1)$ instead of $1/n$ since it assumes we do not have the exact mean values. The following simple example uses the np.vstack function, which stacks the two vectors of dimension $1\times n$ into a $2\times n$ matrix

$$
\boldsymbol{W} = \begin{bmatrix} x_0 & x_1 & x_2 & \dots & x_{n-2} & x_{n-1} \\ y_0 & y_1 & y_2 & \dots & y_{n-2} & y_{n-1} \end{bmatrix},
$$

which in turn is converted into the $2\times 2$ covariance matrix $\boldsymbol{C}$ via the Numpy function np.cov(). We note that we can also calculate the mean value of each set of samples $\boldsymbol{x}$ etc using the Numpy function np.mean(x). We can also extract the eigenvalues of the covariance matrix through the np.linalg.eig() function.
End of explanation

import numpy as np
n = 100
# define two vectors
x = np.random.random(size=n)
y = 4+3*x+np.random.normal(size=n)
# scaling the x and y vectors
x = x - np.mean(x)
y = y - np.mean(y)
variance_x = np.sum(x@x)/n
variance_y = np.sum(y@y)/n
print(variance_x)
print(variance_y)
cov_xy = np.sum(x@y)/n
cov_xx = np.sum(x@x)/n
cov_yy = np.sum(y@y)/n
C = np.zeros((2,2))
C[0,0] = cov_xx/variance_x
C[1,1] = cov_yy/variance_y
C[0,1] = cov_xy/np.sqrt(variance_y*variance_x)
C[1,0] = C[0,1]
print(C)

Explanation: Correlation Matrix
The previous example can be converted into the correlation matrix by simply scaling the matrix elements with the variances. We should also subtract the mean values for each column.
This leads to the following code which sets up the correlations matrix for the previous example in a more brute force way. Here we scale the mean values for each column of the design matrix, calculate the relevant mean values and variances and then finally set up the $2\times 2$ correlation matrix (since we have only two vectors). End of explanation import numpy as np import pandas as pd n = 10 x = np.random.normal(size=n) x = x - np.mean(x) y = 4+3*x+np.random.normal(size=n) y = y - np.mean(y) X = (np.vstack((x, y))).T print(X) Xpd = pd.DataFrame(X) print(Xpd) correlation_matrix = Xpd.corr() print(correlation_matrix) Explanation: We see that the matrix elements along the diagonal are one as they should be and that the matrix is symmetric. Furthermore, diagonalizing this matrix we easily see that it is a positive definite matrix. The above procedure with numpy can be made more compact if we use pandas. Correlation Matrix with Pandas We whow here how we can set up the correlation matrix using pandas, as done in this simple code End of explanation # Common imports import numpy as np import pandas as pd def FrankeFunction(x,y): term1 = 0.75*np.exp(-(0.25*(9*x-2)**2) - 0.25*((9*y-2)**2)) term2 = 0.75*np.exp(-((9*x+1)**2)/49.0 - 0.1*(9*y+1)) term3 = 0.5*np.exp(-(9*x-7)**2/4.0 - 0.25*((9*y-3)**2)) term4 = -0.2*np.exp(-(9*x-4)**2 - (9*y-7)**2) return term1 + term2 + term3 + term4 def create_X(x, y, n ): if len(x.shape) > 1: x = np.ravel(x) y = np.ravel(y) N = len(x) l = int((n+1)*(n+2)/2) # Number of elements in beta X = np.ones((N,l)) for i in range(1,n+1): q = int((i)*(i+1)/2) for k in range(i+1): X[:,q+k] = (x**(i-k))*(y**k) return X # Making meshgrid of datapoints and compute Franke's function n = 4 N = 100 x = np.sort(np.random.uniform(0, 1, N)) y = np.sort(np.random.uniform(0, 1, N)) z = FrankeFunction(x, y) X = create_X(x, y, n=n) Xpd = pd.DataFrame(X) # subtract the mean values and set up the covariance matrix Xpd = Xpd - Xpd.mean() covariance_matrix = Xpd.cov() print(covariance_matrix) Explanation: We expand this model to the Franke function discussed above. Correlation Matrix with Pandas and the Franke function End of explanation from numpy import * from numpy.random import randint, randn from time import time def jackknife(data, stat): n = len(data);t = zeros(n); inds = arange(n); t0 = time() ## 'jackknifing' by leaving out an observation for each i for i in range(n): t[i] = stat(delete(data,i) ) # analysis print("Runtime: %g sec" % (time()-t0)); print("Jackknife Statistics :") print("original bias std. error") print("%8g %14g %15g" % (stat(data),(n-1)*mean(t)/n, (n*var(t))**.5)) return t # Returns mean of data samples def stat(data): return mean(data) mu, sigma = 100, 15 datapoints = 10000 x = mu + sigma*random.randn(datapoints) # jackknife returns the data sample t = jackknife(x, stat) Explanation: We note here that the covariance is zero for the first rows and columns since all matrix elements in the design matrix were set to one (we are fitting the function in terms of a polynomial of degree $n$). This means that the variance for these elements will be zero and will cause problems when we set up the correlation matrix. We can simply drop these elements and construct a correlation matrix without these elements. 
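As a concrete sketch of the remark above, one can drop the constant intercept column before computing the correlations; a simplified polynomial design matrix is used here instead of the full Franke set-up, so the column names and degrees are assumptions made only for illustration.

import numpy as np
import pandas as pd

np.random.seed(2018)
x = np.random.uniform(0, 1, 100)
# a small design matrix with an explicit intercept column of ones
X = np.column_stack((np.ones(100), x, x**2, x**3))
Xpd = pd.DataFrame(X, columns=["ones", "x", "x^2", "x^3"])

# the intercept column has zero variance, so drop it before computing correlations
Xpd = Xpd.drop(columns=["ones"])
Xpd = Xpd - Xpd.mean()
print(Xpd.corr())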
Rewriting the Covariance and/or Correlation Matrix We can rewrite the covariance matrix in a more compact form in terms of the design/feature matrix $\boldsymbol{X}$ as $$ \boldsymbol{C}[\boldsymbol{x}] = \frac{1}{n}\boldsymbol{X}^T\boldsymbol{X}= \mathbb{E}[\boldsymbol{X}^T\boldsymbol{X}]. $$ To see this let us simply look at a design matrix $\boldsymbol{X}\in {\mathbb{R}}^{2\times 2}$ $$ \boldsymbol{X}=\begin{bmatrix} x_{00} & x_{01}\ x_{10} & x_{11}\ \end{bmatrix}=\begin{bmatrix} \boldsymbol{x}{0} & \boldsymbol{x}{1}\ \end{bmatrix}. $$ If we then compute the expectation value $$ \mathbb{E}[\boldsymbol{X}^T\boldsymbol{X}] = \frac{1}{n}\boldsymbol{X}^T\boldsymbol{X}=\begin{bmatrix} x_{00}^2+x_{01}^2 & x_{00}x_{10}+x_{01}x_{11}\ x_{10}x_{00}+x_{11}x_{01} & x_{10}^2+x_{11}^2\ \end{bmatrix}, $$ which is just $$ \boldsymbol{C}[\boldsymbol{x}_0,\boldsymbol{x}_1] = \boldsymbol{C}[\boldsymbol{x}]=\begin{bmatrix} \mathrm{var}[\boldsymbol{x}_0] & \mathrm{cov}[\boldsymbol{x}_0,\boldsymbol{x}_1] \ \mathrm{cov}[\boldsymbol{x}_1,\boldsymbol{x}_0] & \mathrm{var}[\boldsymbol{x}_1] \ \end{bmatrix}, $$ where we wrote $$\boldsymbol{C}[\boldsymbol{x}_0,\boldsymbol{x}_1] = \boldsymbol{C}[\boldsymbol{x}]$$ to indicate that this the covariance of the vectors $\boldsymbol{x}$ of the design/feature matrix $\boldsymbol{X}$. It is easy to generalize this to a matrix $\boldsymbol{X}\in {\mathbb{R}}^{n\times p}$. Linking with SVD See lecture september 11. More text to be added here soon. Where are we going? Before we proceed, we need to rethink what we have been doing. In our eager to fit the data, we have omitted several important elements in our regression analysis. In what follows we will 1. look at statistical properties, including a discussion of mean values, variance and the so-called bias-variance tradeoff introduce resampling techniques like cross-validation, bootstrapping and jackknife and more This will allow us to link the standard linear algebra methods we have discussed above to a statistical interpretation of the methods. Resampling methods Resampling methods are an indispensable tool in modern statistics. They involve repeatedly drawing samples from a training set and refitting a model of interest on each sample in order to obtain additional information about the fitted model. For example, in order to estimate the variability of a linear regression fit, we can repeatedly draw different samples from the training data, fit a linear regression to each new sample, and then examine the extent to which the resulting fits differ. Such an approach may allow us to obtain information that would not be available from fitting the model only once using the original training sample. Two resampling methods are often used in Machine Learning analyses, 1. The bootstrap method and Cross-Validation In addition there are several other methods such as the Jackknife and the Blocking methods. We will discuss in particular cross-validation and the bootstrap method. Resampling approaches can be computationally expensive Resampling approaches can be computationally expensive, because they involve fitting the same statistical method multiple times using different subsets of the training data. However, due to recent advances in computing power, the computational requirements of resampling methods generally are not prohibitive. In this chapter, we discuss two of the most commonly used resampling methods, cross-validation and the bootstrap. 
Both methods are important tools in the practical application of many statistical learning procedures. For example, cross-validation can be used to estimate the test error associated with a given statistical learning method in order to evaluate its performance, or to select the appropriate level of flexibility. The process of evaluating a model’s performance is known as model assessment, whereas the process of selecting the proper level of flexibility for a model is known as model selection. The bootstrap is widely used. Why resampling methods ? Statistical analysis. Our simulations can be treated as computer experiments. This is particularly the case for Monte Carlo methods The results can be analysed with the same statistical tools as we would use analysing experimental data. As in all experiments, we are looking for expectation values and an estimate of how accurate they are, i.e., possible sources for errors. Statistical analysis As in other experiments, many numerical experiments have two classes of errors: Statistical errors Systematical errors Statistical errors can be estimated using standard tools from statistics Systematical errors are method specific and must be treated differently from case to case. <!-- !split --> Linking the regression analysis with a statistical interpretation The advantage of doing linear regression is that we actually end up with analytical expressions for several statistical quantities. Standard least squares and Ridge regression allow us to derive quantities like the variance and other expectation values in a rather straightforward way. It is assumed that $\varepsilon_i \sim \mathcal{N}(0, \sigma^2)$ and the $\varepsilon_{i}$ are independent, i.e.: $$ \begin{align} \mbox{Cov}(\varepsilon_{i_1}, \varepsilon_{i_2}) & = \left{ \begin{array}{lcc} \sigma^2 & \mbox{if} & i_1 = i_2, \ 0 & \mbox{if} & i_1 \not= i_2. \end{array} \right. \end{align} $$ The randomness of $\varepsilon_i$ implies that $\mathbf{y}i$ is also a random variable. In particular, $\mathbf{y}_i$ is normally distributed, because $\varepsilon_i \sim \mathcal{N}(0, \sigma^2)$ and $\mathbf{X}{i,\ast} \, \boldsymbol{\beta}$ is a non-random scalar. To specify the parameters of the distribution of $\mathbf{y}_i$ we need to calculate its first two moments. Recall that $\boldsymbol{X}$ is a matrix of dimensionality $n\times p$. The notation above $\mathbf{X}_{i,\ast}$ means that we are looking at the row number $i$ and perform a sum over all values $p$. Assumptions made The assumption we have made here can be summarized as (and this is going to be useful when we discuss the bias-variance trade off) that there exists a function $f(\boldsymbol{x})$ and a normal distributed error $\boldsymbol{\varepsilon}\sim \mathcal{N}(0, \sigma^2)$ which describe our data $$ \boldsymbol{y} = f(\boldsymbol{x})+\boldsymbol{\varepsilon} $$ We approximate this function with our model from the solution of the linear regression equations, that is our function $f$ is approximated by $\boldsymbol{\tilde{y}}$ where we want to minimize $(\boldsymbol{y}-\boldsymbol{\tilde{y}})^2$, our MSE, with $$ \boldsymbol{\tilde{y}} = \boldsymbol{X}\boldsymbol{\beta}. 
$$

Expectation value and variance

We can calculate the expectation value of $\boldsymbol{y}$ for a given element $i$,

$$
\begin{align}
\mathbb{E}(y_i) & = \mathbb{E}(\mathbf{X}_{i, \ast} \, \boldsymbol{\beta}) + \mathbb{E}(\varepsilon_i) \, \, \, = \, \, \, \mathbf{X}_{i, \ast} \, \boldsymbol{\beta},
\end{align}
$$

while its variance is

$$
\begin{align}
\mbox{Var}(y_i) & = \mathbb{E} \{ [y_i - \mathbb{E}(y_i)]^2 \} \, \, \, = \, \, \, \mathbb{E} ( y_i^2 ) - [\mathbb{E}(y_i)]^2 \\
& = \mathbb{E} [ ( \mathbf{X}_{i, \ast} \, \boldsymbol{\beta} + \varepsilon_i )^2] - ( \mathbf{X}_{i, \ast} \, \boldsymbol{\beta})^2 \\
& = \mathbb{E} [ ( \mathbf{X}_{i, \ast} \, \boldsymbol{\beta})^2 + 2 \varepsilon_i \mathbf{X}_{i, \ast} \, \boldsymbol{\beta} + \varepsilon_i^2 ] - ( \mathbf{X}_{i, \ast} \, \boldsymbol{\beta})^2 \\
& = ( \mathbf{X}_{i, \ast} \, \boldsymbol{\beta})^2 + 2 \mathbb{E}(\varepsilon_i) \, \mathbf{X}_{i, \ast} \, \boldsymbol{\beta} + \mathbb{E}(\varepsilon_i^2 ) - ( \mathbf{X}_{i, \ast} \, \boldsymbol{\beta})^2 \\
& = \mathbb{E}(\varepsilon_i^2 ) \, \, \, = \, \, \, \mbox{Var}(\varepsilon_i) \, \, \, = \, \, \, \sigma^2.
\end{align}
$$

Hence, $y_i \sim \mathcal{N}( \mathbf{X}_{i, \ast} \, \boldsymbol{\beta}, \sigma^2)$, that is $\boldsymbol{y}$ follows a normal distribution with mean value $\boldsymbol{X}\boldsymbol{\beta}$ and variance $\sigma^2$ (not to be confused with the singular values of the SVD).

Expectation value and variance for $\boldsymbol{\beta}$

With the OLS expressions for the parameters $\boldsymbol{\beta}$ we can evaluate the expectation value

$$
\mathbb{E}(\boldsymbol{\beta}) = \mathbb{E}[ (\mathbf{X}^{T} \mathbf{X})^{-1}\mathbf{X}^{T} \mathbf{Y}]=(\mathbf{X}^{T} \mathbf{X})^{-1}\mathbf{X}^{T} \mathbb{E}[ \mathbf{Y}]=(\mathbf{X}^{T} \mathbf{X})^{-1} \mathbf{X}^{T}\mathbf{X}\boldsymbol{\beta}=\boldsymbol{\beta}.
$$

This means that the estimator of the regression parameters is unbiased.
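A quick numerical illustration of this result is to average the OLS estimate over many independent noise realizations; the dimensions and the true parameters below are arbitrary choices for the sketch.

import numpy as np

rng = np.random.default_rng(3155)
n, p = 100, 3
beta_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(n, p))

n_repeats = 1000
betas = np.zeros((n_repeats, p))
for i in range(n_repeats):
    # new noise realization for every repetition, same design matrix
    y = X @ beta_true + rng.normal(scale=0.5, size=n)
    betas[i] = np.linalg.pinv(X.T @ X) @ X.T @ y

print("true beta        :", beta_true)
print("mean of OLS fits :", betas.mean(axis=0))   # close to the true beta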
We can also calculate the variance

The variance of $\boldsymbol{\beta}$ is

$$
\begin{eqnarray}
\mbox{Var}(\boldsymbol{\beta}) & = & \mathbb{E} \{ [\boldsymbol{\beta} - \mathbb{E}(\boldsymbol{\beta})] [\boldsymbol{\beta} - \mathbb{E}(\boldsymbol{\beta})]^{T} \} \\
& = & \mathbb{E} \{ [(\mathbf{X}^{T} \mathbf{X})^{-1} \, \mathbf{X}^{T} \mathbf{Y} - \boldsymbol{\beta}] \, [(\mathbf{X}^{T} \mathbf{X})^{-1} \, \mathbf{X}^{T} \mathbf{Y} - \boldsymbol{\beta}]^{T} \} \\
& = & (\mathbf{X}^{T} \mathbf{X})^{-1} \, \mathbf{X}^{T} \, \mathbb{E} \{ \mathbf{Y} \, \mathbf{Y}^{T} \} \, \mathbf{X} \, (\mathbf{X}^{T} \mathbf{X})^{-1} - \boldsymbol{\beta} \, \boldsymbol{\beta}^{T} \\
& = & (\mathbf{X}^{T} \mathbf{X})^{-1} \, \mathbf{X}^{T} \, \{ \mathbf{X} \, \boldsymbol{\beta} \, \boldsymbol{\beta}^{T} \, \mathbf{X}^{T} + \sigma^2 \mathbf{I}_{nn} \} \, \mathbf{X} \, (\mathbf{X}^{T} \mathbf{X})^{-1} - \boldsymbol{\beta} \, \boldsymbol{\beta}^{T} \\
& = & \boldsymbol{\beta} \, \boldsymbol{\beta}^{T} + \sigma^2 \, (\mathbf{X}^{T} \mathbf{X})^{-1} - \boldsymbol{\beta} \, \boldsymbol{\beta}^{T} \, \, \, = \, \, \, \sigma^2 \, (\mathbf{X}^{T} \mathbf{X})^{-1},
\end{eqnarray}
$$

where we have used that $\mathbb{E} (\mathbf{Y} \mathbf{Y}^{T}) = \mathbf{X} \, \boldsymbol{\beta} \, \boldsymbol{\beta}^{T} \, \mathbf{X}^{T} + \sigma^2 \, \mathbf{I}_{nn}$. From $\mbox{Var}(\boldsymbol{\beta}) = \sigma^2 \, (\mathbf{X}^{T} \mathbf{X})^{-1}$, one obtains an estimate of the variance of the $j$-th regression coefficient, $\sigma^2 (\boldsymbol{\beta}_j ) = \sigma^2 \, [(\mathbf{X}^{T} \mathbf{X})^{-1}]_{jj}$, or equivalently of its standard deviation $\sigma (\boldsymbol{\beta}_j ) = \sigma \sqrt{ [(\mathbf{X}^{T} \mathbf{X})^{-1}]_{jj} }$. This may be used to construct a confidence interval for the estimates.

In a similar way, we can obtain analytical expressions for say the expectation values of the parameters $\boldsymbol{\beta}$ and their variance when we employ Ridge regression, allowing us again to define a confidence interval.

It is rather straightforward to show that

$$
\mathbb{E} \big[ \boldsymbol{\beta}^{\mathrm{Ridge}} \big]=(\mathbf{X}^{T} \mathbf{X} + \lambda \mathbf{I}_{pp})^{-1} (\mathbf{X}^{T} \mathbf{X})\boldsymbol{\beta}^{\mathrm{OLS}}.
$$

We see clearly that $\mathbb{E} \big[ \boldsymbol{\beta}^{\mathrm{Ridge}} \big] \not= \boldsymbol{\beta}^{\mathrm{OLS}}$ for any $\lambda > 0$. We say then that the Ridge estimator is biased.

We can also compute the variance as

$$
\mbox{Var}[\boldsymbol{\beta}^{\mathrm{Ridge}}]=\sigma^2 [ \mathbf{X}^{T} \mathbf{X} + \lambda \mathbf{I} ]^{-1} \mathbf{X}^{T} \mathbf{X} \, \{ [ \mathbf{X}^{T} \mathbf{X} + \lambda \mathbf{I} ]^{-1}\}^{T},
$$

and it is easy to see that if the parameter $\lambda$ goes to infinity then the variance of the Ridge parameters $\boldsymbol{\beta}$ goes to zero.
With this, we can compute the difference $$ \mbox{Var}[\boldsymbol{\beta}^{\mathrm{OLS}}]-\mbox{Var}(\boldsymbol{\beta}^{\mathrm{Ridge}})=\sigma^2 [ \mathbf{X}^{T} \mathbf{X} + \lambda \mathbf{I} ]^{-1}[ 2\lambda\mathbf{I} + \lambda^2 (\mathbf{X}^{T} \mathbf{X})^{-1} ] { [ \mathbf{X}^{T} \mathbf{X} + \lambda \mathbf{I} ]^{-1}}^{T}. $$ The difference is non-negative definite since each component of the matrix product is non-negative definite. This means the variance we obtain with the standard OLS will always for $\lambda > 0$ be larger than the variance of $\boldsymbol{\beta}$ obtained with the Ridge estimator. This has interesting consequences when we discuss the so-called bias-variance trade-off below. Resampling methods With all these analytical equations for both the OLS and Ridge regression, we will now outline how to assess a given model. This will lead us to a discussion of the so-called bias-variance tradeoff (see below) and so-called resampling methods. One of the quantities we have discussed as a way to measure errors is the mean-squared error (MSE), mainly used for fitting of continuous functions. Another choice is the absolute error. In the discussions below we will focus on the MSE and in particular since we will split the data into test and training data, we discuss the 1. prediction error or simply the test error $\mathrm{Err_{Test}}$, where we have a fixed training set and the test error is the MSE arising from the data reserved for testing. We discuss also the training error $\mathrm{Err_{Train}}$, which is the average loss over the training data. As our model becomes more and more complex, more of the training data tends to used. The training may thence adapt to more complicated structures in the data. This may lead to a decrease in the bias (see below for code example) and a slight increase of the variance for the test error. For a certain level of complexity the test error will reach minimum, before starting to increase again. The training error reaches a saturation. Resampling methods: Jackknife and Bootstrap Two famous resampling methods are the independent bootstrap and the jackknife. The jackknife is a special case of the independent bootstrap. Still, the jackknife was made popular prior to the independent bootstrap. And as the popularity of the independent bootstrap soared, new variants, such as the dependent bootstrap. The Jackknife and independent bootstrap work for independent, identically distributed random variables. If these conditions are not satisfied, the methods will fail. Yet, it should be said that if the data are independent, identically distributed, and we only want to estimate the variance of $\overline{X}$ (which often is the case), then there is no need for bootstrapping. Resampling methods: Jackknife The Jackknife works by making many replicas of the estimator $\widehat{\theta}$. The jackknife is a resampling method where we systematically leave out one observation from the vector of observed values $\boldsymbol{x} = (x_1,x_2,\cdots,X_n)$. Let $\boldsymbol{x}_i$ denote the vector $$ \boldsymbol{x}i = (x_1,x_2,\cdots,x{i-1},x_{i+1},\cdots,x_n), $$ which equals the vector $\boldsymbol{x}$ with the exception that observation number $i$ is left out. Using this notation, define $\widehat{\theta}_i$ to be the estimator $\widehat{\theta}$ computed using $\vec{X}_i$. 
Jackknife code example
End of explanation

from numpy import *
from numpy.random import randint, randn
from time import time
import matplotlib.pyplot as plt

# Returns mean of bootstrap samples
def stat(data):
    return mean(data)

# Bootstrap algorithm
def bootstrap(data, statistic, R):
    t = zeros(R); n = len(data); t0 = time()
    # non-parametric bootstrap: draw n indices with replacement, R times
    for i in range(R):
        t[i] = statistic(data[randint(0, n, n)])
    # analysis
    print("Runtime: %g sec" % (time()-t0)); print("Bootstrap Statistics :")
    print("original std(data) bootstrap mean std. error")
    print("%8g %8g %14g %15g" % (statistic(data), std(data), mean(t), std(t)))
    return t

mu, sigma = 100, 15
datapoints = 10000
x = mu + sigma*random.randn(datapoints)
# bootstrap returns the data sample
t = bootstrap(x, stat, datapoints)

# the histogram of the bootstrapped data (density=True normalizes the histogram)
n, binsboot, patches = plt.hist(t, 50, density=True, facecolor='red', alpha=0.75)

# add a 'best fit' Gaussian with the mean and standard deviation of the bootstrap sample
y = 1.0/(std(t)*sqrt(2*pi))*exp(-(binsboot - mean(t))**2/(2*std(t)**2))
lt = plt.plot(binsboot, y, 'r--', linewidth=1)
plt.xlabel('Smarts')
plt.ylabel('Probability')
plt.axis([99.5, 100.6, 0, 3.0])
plt.grid(True)
plt.show()

Explanation: Resampling methods: Bootstrap
Bootstrapping is a nonparametric approach to statistical inference that substitutes computation for more traditional distributional assumptions and asymptotic results. Bootstrapping offers a number of advantages:
1. The bootstrap is quite general, although there are some cases in which it fails.
2. Because it does not require distributional assumptions (such as normally distributed errors), the bootstrap can provide more accurate inferences when the data are not well behaved or when the sample size is small.
3. It is possible to apply the bootstrap to statistics with sampling distributions that are difficult to derive, even asymptotically.
4. It is relatively simple to apply the bootstrap to complex data-collection plans (such as stratified and clustered samples).

Resampling methods: Bootstrap background
Since $\widehat{\theta} = \widehat{\theta}(\boldsymbol{X})$ is a function of random variables, $\widehat{\theta}$ itself must be a random variable. Thus it has a pdf, call this function $p(\boldsymbol{t})$. The aim of the bootstrap is to estimate $p(\boldsymbol{t})$ by the relative frequency of $\widehat{\theta}$. You can think of this as using a histogram in the place of $p(\boldsymbol{t})$. If the relative frequency closely resembles $p(\boldsymbol{t})$, then using numerics, it is straightforward to estimate all the interesting parameters of $p(\boldsymbol{t})$ using point estimators.

Resampling methods: More Bootstrap background
In the case that $\widehat{\theta}$ has more than one component, and the components are independent, we use the same estimator on each component separately. If the probability density function of $X_i$, $p(x)$, had been known, then it would have been straightforward to do this by:
1. Drawing lots of numbers from $p(x)$; suppose we call one such set of numbers $(X_1^*, X_2^*, \cdots, X_n^*)$.
2. Then using these numbers, computing a replica of $\widehat{\theta}$ called $\widehat{\theta}^*$.
By repeated use of (1) and (2), many estimates of $\widehat{\theta}$ could have been obtained. The idea is to use the relative frequency of $\widehat{\theta}^*$ (think of a histogram) as an estimate of $p(\boldsymbol{t})$.

Resampling methods: Bootstrap approach
But unless there is enough information available about the process that generated $X_1,X_2,\cdots,X_n$, $p(x)$ is in general unknown.
Therefore, Efron in 1979 asked the question: What if we replace $p(x)$ by the relative frequency of the observation $X_i$; if we draw observations in accordance with the relative frequency of the observations, will we obtain the same result in some asymptotic sense? The answer is yes. Instead of generating the histogram for the relative frequency of the observation $X_i$, just draw the values $(X_1^,X_2^,\cdots,X_n^*)$ with replacement from the vector $\boldsymbol{X}$. Resampling methods: Bootstrap steps The independent bootstrap works like this: Draw with replacement $n$ numbers for the observed variables $\boldsymbol{x} = (x_1,x_2,\cdots,x_n)$. Define a vector $\boldsymbol{x}^*$ containing the values which were drawn from $\boldsymbol{x}$. Using the vector $\boldsymbol{x}^$ compute $\widehat{\theta}^$ by evaluating $\widehat \theta$ under the observations $\boldsymbol{x}^*$. Repeat this process $k$ times. When you are done, you can draw a histogram of the relative frequency of $\widehat \theta^$. This is your estimate of the probability distribution $p(t)$. Using this probability distribution you can estimate any statistics thereof. In principle you never draw the histogram of the relative frequency of $\widehat{\theta}^$. Instead you use the estimators corresponding to the statistic of interest. For example, if you are interested in estimating the variance of $\widehat \theta$, apply the etsimator $\widehat \sigma^2$ to the values $\widehat \theta ^*$. Code example for the Bootstrap method The following code starts with a Gaussian distribution with mean value $\mu =100$ and variance $\sigma=15$. We use this to generate the data used in the bootstrap analysis. The bootstrap analysis returns a data set after a given number of bootstrap operations (as many as we have data points). This data set consists of estimated mean values for each bootstrap operation. The histogram generated by the bootstrap method shows that the distribution for these mean values is also a Gaussian, centered around the mean value $\mu=100$ but with standard deviation $\sigma/\sqrt{n}$, where $n$ is the number of bootstrap samples (in this case the same as the number of original data points). The value of the standard deviation is what we expect from the central limit theorem. End of explanation import numpy as np import matplotlib.pyplot as plt from sklearn.model_selection import KFold from sklearn.linear_model import Ridge from sklearn.model_selection import cross_val_score from sklearn.preprocessing import PolynomialFeatures # A seed just to ensure that the random numbers are the same for every run. # Useful for eventual debugging. np.random.seed(3155) # Generate the data. 
nsamples = 100 x = np.random.randn(nsamples) y = 3*x**2 + np.random.randn(nsamples) ## Cross-validation on Ridge regression using KFold only # Decide degree on polynomial to fit poly = PolynomialFeatures(degree = 6) # Decide which values of lambda to use nlambdas = 500 lambdas = np.logspace(-3, 5, nlambdas) # Initialize a KFold instance k = 5 kfold = KFold(n_splits = k) # Perform the cross-validation to estimate MSE scores_KFold = np.zeros((nlambdas, k)) i = 0 for lmb in lambdas: ridge = Ridge(alpha = lmb) j = 0 for train_inds, test_inds in kfold.split(x): xtrain = x[train_inds] ytrain = y[train_inds] xtest = x[test_inds] ytest = y[test_inds] Xtrain = poly.fit_transform(xtrain[:, np.newaxis]) ridge.fit(Xtrain, ytrain[:, np.newaxis]) Xtest = poly.fit_transform(xtest[:, np.newaxis]) ypred = ridge.predict(Xtest) scores_KFold[i,j] = np.sum((ypred - ytest[:, np.newaxis])**2)/np.size(ypred) j += 1 i += 1 estimated_mse_KFold = np.mean(scores_KFold, axis = 1) ## Cross-validation using cross_val_score from sklearn along with KFold # kfold is an instance initialized above as: # kfold = KFold(n_splits = k) estimated_mse_sklearn = np.zeros(nlambdas) i = 0 for lmb in lambdas: ridge = Ridge(alpha = lmb) X = poly.fit_transform(x[:, np.newaxis]) estimated_mse_folds = cross_val_score(ridge, X, y[:, np.newaxis], scoring='neg_mean_squared_error', cv=kfold) # cross_val_score return an array containing the estimated negative mse for every fold. # we have to the the mean of every array in order to get an estimate of the mse of the model estimated_mse_sklearn[i] = np.mean(-estimated_mse_folds) i += 1 ## Plot and compare the slightly different ways to perform cross-validation plt.figure() plt.plot(np.log10(lambdas), estimated_mse_sklearn, label = 'cross_val_score') plt.plot(np.log10(lambdas), estimated_mse_KFold, 'r--', label = 'KFold') plt.xlabel('log10(lambda)') plt.ylabel('mse') plt.legend() plt.show() Explanation: <!-- !split --> Various steps in cross-validation When the repetitive splitting of the data set is done randomly, samples may accidently end up in a fast majority of the splits in either training or test set. Such samples may have an unbalanced influence on either model building or prediction evaluation. To avoid this $k$-fold cross-validation structures the data splitting. The samples are divided into $k$ more or less equally sized exhaustive and mutually exclusive subsets. In turn (at each split) one of these subsets plays the role of the test set while the union of the remaining subsets constitutes the training set. Such a splitting warrants a balanced representation of each sample in both training and test set over the splits. Still the division into the $k$ subsets involves a degree of randomness. This may be fully excluded when choosing $k=n$. This particular case is referred to as leave-one-out cross-validation (LOOCV). <!-- !split --> How to set up the cross-validation for Ridge and/or Lasso Define a range of interest for the penalty parameter. Divide the data set into training and test set comprising samples ${1, \ldots, n} \setminus i$ and ${ i }$, respectively. 
Fit the linear regression model by means of ridge estimation for each $\lambda$ in the grid using the training set, and the corresponding estimate of the error variance $\boldsymbol{\sigma}_{-i}^2(\lambda)$, as $$ \begin{align} \boldsymbol{\beta}{-i}(\lambda) & = ( \boldsymbol{X}{-i, \ast}^{T} \boldsymbol{X}{-i, \ast} + \lambda \boldsymbol{I}{pp})^{-1} \boldsymbol{X}{-i, \ast}^{T} \boldsymbol{y}{-i} \end{align} $$ Evaluate the prediction performance of these models on the test set by $\log{L[y_i, \boldsymbol{X}{i, \ast}; \boldsymbol{\beta}{-i}(\lambda), \boldsymbol{\sigma}{-i}^2(\lambda)]}$. Or, by the prediction error $|y_i - \boldsymbol{X}{i, \ast} \boldsymbol{\beta}_{-i}(\lambda)|$, the relative error, the error squared or the R2 score function. Repeat the first three steps such that each sample plays the role of the test set once. Average the prediction performances of the test sets at each grid point of the penalty bias/parameter. It is an estimate of the prediction performance of the model corresponding to this value of the penalty parameter on novel data. It is defined as $$ \begin{align} \frac{1}{n} \sum_{i = 1}^n \log{L[y_i, \mathbf{X}{i, \ast}; \boldsymbol{\beta}{-i}(\lambda), \boldsymbol{\sigma}_{-i}^2(\lambda)]}. \end{align} $$ Cross-validation in brief For the various values of $k$ shuffle the dataset randomly. Split the dataset into $k$ groups. For each unique group: a. Decide which group to use as set for test data b. Take the remaining groups as a training data set c. Fit a model on the training set and evaluate it on the test set d. Retain the evaluation score and discard the model Summarize the model using the sample of model evaluation scores Code Example for Cross-validation and $k$-fold Cross-validation The code here uses Ridge regression with cross-validation (CV) resampling and $k$-fold CV in order to fit a specific polynomial. End of explanation import matplotlib.pyplot as plt import numpy as np from sklearn.linear_model import LinearRegression, Ridge, Lasso from sklearn.preprocessing import PolynomialFeatures from sklearn.model_selection import train_test_split from sklearn.pipeline import make_pipeline from sklearn.utils import resample np.random.seed(2018) n = 500 n_boostraps = 100 degree = 18 # A quite high value, just to show. noise = 0.1 # Make data set. x = np.linspace(-1, 3, n).reshape(-1, 1) y = np.exp(-x**2) + 1.5 * np.exp(-(x-2)**2) + np.random.normal(0, 0.1, x.shape) # Hold out some test data that is never used in training. x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2) # Combine x transformation and model into one operation. # Not neccesary, but convenient. model = make_pipeline(PolynomialFeatures(degree=degree), LinearRegression(fit_intercept=False)) # The following (m x n_bootstraps) matrix holds the column vectors y_pred # for each bootstrap iteration. y_pred = np.empty((y_test.shape[0], n_boostraps)) for i in range(n_boostraps): x_, y_ = resample(x_train, y_train) # Evaluate the new model on the same test data each time. y_pred[:, i] = model.fit(x_, y_).predict(x_test).ravel() # Note: Expectations and variances taken w.r.t. different training # data sets, hence the axis=1. Subsequent means are taken across the test data # set in order to obtain a total value, but before this we have error/bias/variance # calculated per data point in the test set. # Note 2: The use of keepdims=True is important in the calculation of bias as this # maintains the column vector form. Dropping this yields very unexpected results. 
error = np.mean( np.mean((y_test - y_pred)**2, axis=1, keepdims=True) ) bias = np.mean( (y_test - np.mean(y_pred, axis=1, keepdims=True))**2 ) variance = np.mean( np.var(y_pred, axis=1, keepdims=True) ) print('Error:', error) print('Bias^2:', bias) print('Var:', variance) print('{} >= {} + {} = {}'.format(error, bias, variance, bias+variance)) plt.plot(x[::5, :], y[::5, :], label='f(x)') plt.scatter(x_test, y_test, label='Data points') plt.scatter(x_test, np.mean(y_pred, axis=1), label='Pred') plt.legend() plt.show() Explanation: The bias-variance tradeoff We will discuss the bias-variance tradeoff in the context of continuous predictions such as regression. However, many of the intuitions and ideas discussed here also carry over to classification tasks. Consider a dataset $\mathcal{L}$ consisting of the data $\mathbf{X}_\mathcal{L}={(y_j, \boldsymbol{x}_j), j=0\ldots n-1}$. Let us assume that the true data is generated from a noisy model $$ \boldsymbol{y}=f(\boldsymbol{x}) + \boldsymbol{\epsilon} $$ where $\epsilon$ is normally distributed with mean zero and standard deviation $\sigma^2$. In our derivation of the ordinary least squares method we defined then an approximation to the function $f$ in terms of the parameters $\boldsymbol{\beta}$ and the design matrix $\boldsymbol{X}$ which embody our model, that is $\boldsymbol{\tilde{y}}=\boldsymbol{X}\boldsymbol{\beta}$. Thereafter we found the parameters $\boldsymbol{\beta}$ by optimizing the means squared error via the so-called cost function $$ C(\boldsymbol{X},\boldsymbol{\beta}) =\frac{1}{n}\sum_{i=0}^{n-1}(y_i-\tilde{y}_i)^2=\mathbb{E}\left[(\boldsymbol{y}-\boldsymbol{\tilde{y}})^2\right]. $$ We can rewrite this as $$ \mathbb{E}\left[(\boldsymbol{y}-\boldsymbol{\tilde{y}})^2\right]=\frac{1}{n}\sum_i(f_i-\mathbb{E}\left[\boldsymbol{\tilde{y}}\right])^2+\frac{1}{n}\sum_i(\tilde{y}_i-\mathbb{E}\left[\boldsymbol{\tilde{y}}\right])^2+\sigma^2. $$ The three terms represent the square of the bias of the learning method, which can be thought of as the error caused by the simplifying assumptions built into the method. The second term represents the variance of the chosen model and finally the last terms is variance of the error $\boldsymbol{\epsilon}$. To derive this equation, we need to recall that the variance of $\boldsymbol{y}$ and $\boldsymbol{\epsilon}$ are both equal to $\sigma^2$. The mean value of $\boldsymbol{\epsilon}$ is by definition equal to zero. Furthermore, the function $f$ is not a stochastics variable, idem for $\boldsymbol{\tilde{y}}$. We use a more compact notation in terms of the expectation value $$ \mathbb{E}\left[(\boldsymbol{y}-\boldsymbol{\tilde{y}})^2\right]=\mathbb{E}\left[(\boldsymbol{f}+\boldsymbol{\epsilon}-\boldsymbol{\tilde{y}})^2\right], $$ and adding and subtracting $\mathbb{E}\left[\boldsymbol{\tilde{y}}\right]$ we get $$ \mathbb{E}\left[(\boldsymbol{y}-\boldsymbol{\tilde{y}})^2\right]=\mathbb{E}\left[(\boldsymbol{f}+\boldsymbol{\epsilon}-\boldsymbol{\tilde{y}}+\mathbb{E}\left[\boldsymbol{\tilde{y}}\right]-\mathbb{E}\left[\boldsymbol{\tilde{y}}\right])^2\right], $$ which, using the abovementioned expectation values can be rewritten as $$ \mathbb{E}\left[(\boldsymbol{y}-\boldsymbol{\tilde{y}})^2\right]=\mathbb{E}\left[(\boldsymbol{y}-\mathbb{E}\left[\boldsymbol{\tilde{y}}\right])^2\right]+\mathrm{Var}\left[\boldsymbol{\tilde{y}}\right]+\sigma^2, $$ that is the rewriting in terms of the so-called bias, the variance of the model $\boldsymbol{\tilde{y}}$ and the variance of $\boldsymbol{\epsilon}$. 
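As a quick numerical sanity check of the decomposition above, the following sketch (with an arbitrarily chosen target function, noise level and test point) fits a deliberately crude model, the training-set mean, many times and compares the average squared error with the sum of squared bias, variance and the irreducible error.
import numpy as np

np.random.seed(42)
sigma = 0.5            # noise level, chosen only for illustration
f = lambda x: x**2     # "true" function, also only for illustration
x0 = 1.0               # a single, fixed test point

n_train, n_repeats = 20, 5000
preds = np.empty(n_repeats)
errors = np.empty(n_repeats)
for r in range(n_repeats):
    # a fresh training set for every repetition
    x = np.random.uniform(-2, 2, n_train)
    y = f(x) + np.random.normal(0, sigma, n_train)
    yhat = y.mean()    # deliberately crude model: predict the training mean
    preds[r] = yhat
    errors[r] = (f(x0) + np.random.normal(0, sigma) - yhat)**2

bias2 = (f(x0) - preds.mean())**2
variance = preds.var()
print('mean squared error      :', errors.mean())
print('bias^2 + var + sigma^2  :', bias2 + variance + sigma**2)
The two printed numbers should agree to within Monte Carlo noise, which is the content of the decomposition derived above.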
Example code for Bias-Variance tradeoff End of explanation import matplotlib.pyplot as plt import numpy as np from sklearn.linear_model import LinearRegression, Ridge, Lasso from sklearn.preprocessing import PolynomialFeatures from sklearn.model_selection import train_test_split from sklearn.pipeline import make_pipeline from sklearn.utils import resample np.random.seed(2018) n = 400 n_boostraps = 100 maxdegree = 30 # Make data set. x = np.linspace(-3, 3, n).reshape(-1, 1) y = np.exp(-x**2) + 1.5 * np.exp(-(x-2)**2)+ np.random.normal(0, 0.1, x.shape) error = np.zeros(maxdegree) bias = np.zeros(maxdegree) variance = np.zeros(maxdegree) polydegree = np.zeros(maxdegree) x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2) for degree in range(maxdegree): model = make_pipeline(PolynomialFeatures(degree=degree), LinearRegression(fit_intercept=False)) y_pred = np.empty((y_test.shape[0], n_boostraps)) for i in range(n_boostraps): x_, y_ = resample(x_train, y_train) y_pred[:, i] = model.fit(x_, y_).predict(x_test).ravel() polydegree[degree] = degree error[degree] = np.mean( np.mean((y_test - y_pred)**2, axis=1, keepdims=True) ) bias[degree] = np.mean( (y_test - np.mean(y_pred, axis=1, keepdims=True))**2 ) variance[degree] = np.mean( np.var(y_pred, axis=1, keepdims=True) ) print('Polynomial degree:', degree) print('Error:', error[degree]) print('Bias^2:', bias[degree]) print('Var:', variance[degree]) print('{} >= {} + {} = {}'.format(error[degree], bias[degree], variance[degree], bias[degree]+variance[degree])) plt.plot(polydegree, error, label='Error') plt.plot(polydegree, bias, label='bias') plt.plot(polydegree, variance, label='Variance') plt.legend() plt.show() Explanation: Understanding what happens End of explanation ============================ Underfitting vs. Overfitting ============================ This example demonstrates the problems of underfitting and overfitting and how we can use linear regression with polynomial features to approximate nonlinear functions. The plot shows the function that we want to approximate, which is a part of the cosine function. In addition, the samples from the real function and the approximations of different models are displayed. The models have polynomial features of different degrees. We can see that a linear function (polynomial with degree 1) is not sufficient to fit the training samples. This is called **underfitting**. A polynomial of degree 4 approximates the true function almost perfectly. However, for higher degrees the model will **overfit** the training data, i.e. it learns the noise of the training data. We evaluate quantitatively **overfitting** / **underfitting** by using cross-validation. We calculate the mean squared error (MSE) on the validation set, the higher, the less likely the model generalizes correctly from the training data. 
print(__doc__) import numpy as np import matplotlib.pyplot as plt from sklearn.pipeline import Pipeline from sklearn.preprocessing import PolynomialFeatures from sklearn.linear_model import LinearRegression from sklearn.model_selection import cross_val_score def true_fun(X): return np.cos(1.5 * np.pi * X) np.random.seed(0) n_samples = 30 degrees = [1, 4, 15] X = np.sort(np.random.rand(n_samples)) y = true_fun(X) + np.random.randn(n_samples) * 0.1 plt.figure(figsize=(14, 5)) for i in range(len(degrees)): ax = plt.subplot(1, len(degrees), i + 1) plt.setp(ax, xticks=(), yticks=()) polynomial_features = PolynomialFeatures(degree=degrees[i], include_bias=False) linear_regression = LinearRegression() pipeline = Pipeline([("polynomial_features", polynomial_features), ("linear_regression", linear_regression)]) pipeline.fit(X[:, np.newaxis], y) # Evaluate the models using crossvalidation scores = cross_val_score(pipeline, X[:, np.newaxis], y, scoring="neg_mean_squared_error", cv=10) X_test = np.linspace(0, 1, 100) plt.plot(X_test, pipeline.predict(X_test[:, np.newaxis]), label="Model") plt.plot(X_test, true_fun(X_test), label="True function") plt.scatter(X, y, edgecolor='b', s=20, label="Samples") plt.xlabel("x") plt.ylabel("y") plt.xlim((0, 1)) plt.ylim((-2, 2)) plt.legend(loc="best") plt.title("Degree {}\nMSE = {:.2e}(+/- {:.2e})".format( degrees[i], -scores.mean(), scores.std())) plt.show() Explanation: <!-- !split --> Summing up The bias-variance tradeoff summarizes the fundamental tension in machine learning, particularly supervised learning, between the complexity of a model and the amount of training data needed to train it. Since data is often limited, in practice it is often useful to use a less-complex model with higher bias, that is a model whose asymptotic performance is worse than another model because it is easier to train and less sensitive to sampling noise arising from having a finite-sized training dataset (smaller variance). The above equations tell us that in order to minimize the expected test error, we need to select a statistical learning method that simultaneously achieves low variance and low bias. Note that variance is inherently a nonnegative quantity, and squared bias is also nonnegative. Hence, we see that the expected test MSE can never lie below $Var(\epsilon)$, the irreducible error. What do we mean by the variance and bias of a statistical learning method? The variance refers to the amount by which our model would change if we estimated it using a different training data set. Since the training data are used to fit the statistical learning method, different training data sets will result in a different estimate. But ideally the estimate for our model should not vary too much between training sets. However, if a method has high variance then small changes in the training data can result in large changes in the model. In general, more flexible statistical methods have higher variance. You may also find this recent article of interest. 
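To make the notion of variance concrete, the short sketch below fits the same polynomial degree on two independent training samples drawn from the function used earlier and measures how much the two fitted curves differ; the sample size and the degrees are arbitrary choices for illustration.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
x_plot = np.linspace(-3, 3, 200).reshape(-1, 1)

def fit_once(degree):
    # draw a fresh training set and fit a polynomial of the given degree
    x = rng.uniform(-3, 3, 50).reshape(-1, 1)
    y = np.exp(-x**2) + 1.5*np.exp(-(x - 2)**2) + rng.normal(0, 0.1, x.shape)
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    return model.fit(x, y).predict(x_plot)

for degree in (1, 15):
    p1, p2 = fit_once(degree), fit_once(degree)
    # large differences between the two fits indicate high variance
    print(f"degree {degree:2d}: mean |difference| between two fits = {np.mean(np.abs(p1 - p2)):.3f}")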
Another Example from Scikit-Learn's Repository End of explanation # Common imports import os import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.linear_model import LinearRegression, Ridge, Lasso from sklearn.model_selection import train_test_split from sklearn.utils import resample from sklearn.metrics import mean_squared_error # Where to save the figures and data files PROJECT_ROOT_DIR = "Results" FIGURE_ID = "Results/FigureFiles" DATA_ID = "DataFiles/" if not os.path.exists(PROJECT_ROOT_DIR): os.mkdir(PROJECT_ROOT_DIR) if not os.path.exists(FIGURE_ID): os.makedirs(FIGURE_ID) if not os.path.exists(DATA_ID): os.makedirs(DATA_ID) def image_path(fig_id): return os.path.join(FIGURE_ID, fig_id) def data_path(dat_id): return os.path.join(DATA_ID, dat_id) def save_fig(fig_id): plt.savefig(image_path(fig_id) + ".png", format='png') infile = open(data_path("EoS.csv"),'r') # Read the EoS data as csv file and organize the data into two arrays with density and energies EoS = pd.read_csv(infile, names=('Density', 'Energy')) EoS['Energy'] = pd.to_numeric(EoS['Energy'], errors='coerce') EoS = EoS.dropna() Energies = EoS['Energy'] Density = EoS['Density'] # The design matrix now as function of various polytrops Maxpolydegree = 30 X = np.zeros((len(Density),Maxpolydegree)) X[:,0] = 1.0 testerror = np.zeros(Maxpolydegree) trainingerror = np.zeros(Maxpolydegree) polynomial = np.zeros(Maxpolydegree) trials = 100 for polydegree in range(1, Maxpolydegree): polynomial[polydegree] = polydegree for degree in range(polydegree): X[:,degree] = Density**(degree/3.0) # loop over trials in order to estimate the expectation value of the MSE testerror[polydegree] = 0.0 trainingerror[polydegree] = 0.0 for samples in range(trials): x_train, x_test, y_train, y_test = train_test_split(X, Energies, test_size=0.2) model = LinearRegression(fit_intercept=True).fit(x_train, y_train) ypred = model.predict(x_train) ytilde = model.predict(x_test) testerror[polydegree] += mean_squared_error(y_test, ytilde) trainingerror[polydegree] += mean_squared_error(y_train, ypred) testerror[polydegree] /= trials trainingerror[polydegree] /= trials print("Degree of polynomial: %3d"% polynomial[polydegree]) print("Mean squared error on training data: %.8f" % trainingerror[polydegree]) print("Mean squared error on test data: %.8f" % testerror[polydegree]) plt.plot(polynomial, np.log10(trainingerror), label='Training Error') plt.plot(polynomial, np.log10(testerror), label='Test Error') plt.xlabel('Polynomial degree') plt.ylabel('log10[MSE]') plt.legend() plt.show() Explanation: More examples on bootstrap and cross-validation and errors End of explanation # Common imports import os import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.linear_model import LinearRegression, Ridge, Lasso from sklearn.metrics import mean_squared_error from sklearn.model_selection import KFold from sklearn.model_selection import cross_val_score # Where to save the figures and data files PROJECT_ROOT_DIR = "Results" FIGURE_ID = "Results/FigureFiles" DATA_ID = "DataFiles/" if not os.path.exists(PROJECT_ROOT_DIR): os.mkdir(PROJECT_ROOT_DIR) if not os.path.exists(FIGURE_ID): os.makedirs(FIGURE_ID) if not os.path.exists(DATA_ID): os.makedirs(DATA_ID) def image_path(fig_id): return os.path.join(FIGURE_ID, fig_id) def data_path(dat_id): return os.path.join(DATA_ID, dat_id) def save_fig(fig_id): plt.savefig(image_path(fig_id) + ".png", format='png') infile = open(data_path("EoS.csv"),'r') # Read the EoS data as csv 
file and organize the data into two arrays with density and energies EoS = pd.read_csv(infile, names=('Density', 'Energy')) EoS['Energy'] = pd.to_numeric(EoS['Energy'], errors='coerce') EoS = EoS.dropna() Energies = EoS['Energy'] Density = EoS['Density'] # The design matrix now as function of various polytrops Maxpolydegree = 30 X = np.zeros((len(Density),Maxpolydegree)) X[:,0] = 1.0 estimated_mse_sklearn = np.zeros(Maxpolydegree) polynomial = np.zeros(Maxpolydegree) k =5 kfold = KFold(n_splits = k) for polydegree in range(1, Maxpolydegree): polynomial[polydegree] = polydegree for degree in range(polydegree): X[:,degree] = Density**(degree/3.0) OLS = LinearRegression() # loop over trials in order to estimate the expectation value of the MSE estimated_mse_folds = cross_val_score(OLS, X, Energies, scoring='neg_mean_squared_error', cv=kfold) #[:, np.newaxis] estimated_mse_sklearn[polydegree] = np.mean(-estimated_mse_folds) plt.plot(polynomial, np.log10(estimated_mse_sklearn), label='Test Error') plt.xlabel('Polynomial degree') plt.ylabel('log10[MSE]') plt.legend() plt.show() Explanation: <!-- !split --> The same example but now with cross-validation End of explanation import numpy as np import matplotlib.pyplot as plt from sklearn.model_selection import KFold from sklearn.linear_model import Ridge from sklearn.model_selection import cross_val_score from sklearn.preprocessing import PolynomialFeatures # A seed just to ensure that the random numbers are the same for every run. np.random.seed(3155) # Generate the data. n = 100 x = np.linspace(-3, 3, n).reshape(-1, 1) y = np.exp(-x**2) + 1.5 * np.exp(-(x-2)**2)+ np.random.normal(0, 0.1, x.shape) # Decide degree on polynomial to fit poly = PolynomialFeatures(degree = 10) # Decide which values of lambda to use nlambdas = 500 lambdas = np.logspace(-3, 5, nlambdas) # Initialize a KFold instance k = 5 kfold = KFold(n_splits = k) estimated_mse_sklearn = np.zeros(nlambdas) i = 0 for lmb in lambdas: ridge = Ridge(alpha = lmb) estimated_mse_folds = cross_val_score(ridge, x, y, scoring='neg_mean_squared_error', cv=kfold) estimated_mse_sklearn[i] = np.mean(-estimated_mse_folds) i += 1 plt.figure() plt.plot(np.log10(lambdas), estimated_mse_sklearn, label = 'cross_val_score') plt.xlabel('log10(lambda)') plt.ylabel('MSE') plt.legend() plt.show() Explanation: Cross-validation with Ridge End of explanation import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import make_axes_locatable import seaborn as sns import scipy.linalg as scl from sklearn.model_selection import train_test_split import tqdm sns.set(color_codes=True) cmap_args=dict(vmin=-1., vmax=1., cmap='seismic') L = 40 n = int(1e4) spins = np.random.choice([-1, 1], size=(n, L)) J = 1.0 energies = np.zeros(n) for i in range(n): energies[i] = - J * np.dot(spins[i], np.roll(spins[i], 1)) Explanation: The Ising model The one-dimensional Ising model with nearest neighbor interaction, no external field and a constant coupling constant $J$ is given by <!-- Equation labels as ordinary links --> <div id="_auto2"></div> $$ \begin{equation} H = -J \sum_{k}^L s_k s_{k + 1}, \label{_auto2} \tag{2} \end{equation} $$ where $s_i \in {-1, 1}$ and $s_{N + 1} = s_1$. The number of spins in the system is determined by $L$. For the one-dimensional system there is no phase transition. We will look at a system of $L = 40$ spins with a coupling constant of $J = 1$. To get enough training data we will generate 10000 states with their respective energies. 
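As a small consistency check of the energy expression, the sketch below (with an arbitrarily small chain length) compares the np.roll construction used above with an explicit sum over nearest-neighbour pairs under periodic boundary conditions.
import numpy as np

rng = np.random.default_rng(0)
s = rng.choice([-1, 1], size=8)   # a small test configuration, size chosen arbitrarily
J = 1.0

# energy via the same np.roll trick used above
e_roll = -J * np.dot(s, np.roll(s, 1))

# energy via an explicit sum over nearest-neighbour pairs with periodic boundaries
e_loop = -J * sum(s[k] * s[(k + 1) % len(s)] for k in range(len(s)))

print(e_roll, e_loop)   # the two values should agree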
End of explanation X = np.zeros((n, L ** 2)) for i in range(n): X[i] = np.outer(spins[i], spins[i]).ravel() y = energies X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2) Explanation: Here we use ordinary least squares regression to predict the energy for the nearest neighbor one-dimensional Ising model on a ring, i.e., the endpoints wrap around. We will use linear regression to fit a value for the coupling constant to achieve this. Reformulating the problem to suit regression A more general form for the one-dimensional Ising model is <!-- Equation labels as ordinary links --> <div id="_auto3"></div> $$ \begin{equation} H = - \sum_j^L \sum_k^L s_j s_k J_{jk}. \label{_auto3} \tag{3} \end{equation} $$ Here we allow for interactions beyond the nearest neighbors and a state dependent coupling constant. This latter expression can be formulated as a matrix-product <!-- Equation labels as ordinary links --> <div id="_auto4"></div> $$ \begin{equation} \boldsymbol{H} = \boldsymbol{X} J, \label{_auto4} \tag{4} \end{equation} $$ where $X_{jk} = s_j s_k$ and $J$ is a matrix which consists of the elements $-J_{jk}$. This form of writing the energy fits perfectly with the form utilized in linear regression, that is <!-- Equation labels as ordinary links --> <div id="_auto5"></div> $$ \begin{equation} \boldsymbol{y} = \boldsymbol{X}\boldsymbol{\beta} + \boldsymbol{\epsilon}, \label{_auto5} \tag{5} \end{equation} $$ We split the data in training and test data as discussed in the previous example End of explanation X_train_own = np.concatenate( (np.ones(len(X_train))[:, np.newaxis], X_train), axis=1 ) X_test_own = np.concatenate( (np.ones(len(X_test))[:, np.newaxis], X_test), axis=1 ) def ols_inv(x: np.ndarray, y: np.ndarray) -> np.ndarray: return scl.inv(x.T @ x) @ (x.T @ y) beta = ols_inv(X_train_own, y_train) Explanation: Linear regression In the ordinary least squares method we choose the cost function <!-- Equation labels as ordinary links --> <div id="_auto6"></div> $$ \begin{equation} C(\boldsymbol{X}, \boldsymbol{\beta})= \frac{1}{n}\left\{(\boldsymbol{X}\boldsymbol{\beta} - \boldsymbol{y})^T(\boldsymbol{X}\boldsymbol{\beta} - \boldsymbol{y})\right\}. \label{_auto6} \tag{6} \end{equation} $$ We then find the extremal point of $C$ by taking the derivative with respect to $\boldsymbol{\beta}$ as discussed above. This yields the expression for $\boldsymbol{\beta}$ to be $$ \boldsymbol{\beta} = (\boldsymbol{X}^T \boldsymbol{X})^{-1}\boldsymbol{X}^T \boldsymbol{y}, $$ which immediately imposes some requirements on $\boldsymbol{X}$ as there must exist an inverse of $\boldsymbol{X}^T \boldsymbol{X}$. If the expression we are modeling contains an intercept, i.e., a constant term, we must make sure that the first column of $\boldsymbol{X}$ consists of $1$. We do this here End of explanation def ols_svd(x: np.ndarray, y: np.ndarray) -> np.ndarray: u, s, v = scl.svd(x) return v.T @ scl.pinv(scl.diagsvd(s, u.shape[0], v.shape[0])) @ u.T @ y beta = ols_svd(X_train_own,y_train) Explanation: Singular Value decomposition Doing the inversion directly turns out to be a bad idea since the matrix $\boldsymbol{X}^T\boldsymbol{X}$ is singular. An alternative approach is to use the singular value decomposition.
Using the definition of the Moore-Penrose pseudoinverse we can write the equation for $\boldsymbol{\beta}$ as $$ \boldsymbol{\beta} = \boldsymbol{X}^{+}\boldsymbol{y}, $$ where the pseudoinverse of $\boldsymbol{X}$ is given by $$ \boldsymbol{X}^{+} = \frac{\boldsymbol{X}^T}{\boldsymbol{X}^T\boldsymbol{X}}. $$ Using singular value decomposition we can decompose the matrix $\boldsymbol{X} = \boldsymbol{U}\boldsymbol{\Sigma} \boldsymbol{V}^T$, where $\boldsymbol{U}$ and $\boldsymbol{V}$ are orthogonal(unitary) matrices and $\boldsymbol{\Sigma}$ contains the singular values (more details below). where $X^{+} = V\Sigma^{+} U^T$. This reduces the equation for $\omega$ to <!-- Equation labels as ordinary links --> <div id="_auto7"></div> $$ \begin{equation} \boldsymbol{\beta} = \boldsymbol{V}\boldsymbol{\Sigma}^{+} \boldsymbol{U}^T \boldsymbol{y}. \label{_auto7} \tag{7} \end{equation} $$ Note that solving this equation by actually doing the pseudoinverse (which is what we will do) is not a good idea as this operation scales as $\mathcal{O}(n^3)$, where $n$ is the number of elements in a general matrix. Instead, doing $QR$-factorization and solving the linear system as an equation would reduce this down to $\mathcal{O}(n^2)$ operations. End of explanation J = beta[1:].reshape(L, L) Explanation: When extracting the $J$-matrix we need to make sure that we remove the intercept, as is done here End of explanation fig = plt.figure(figsize=(20, 14)) im = plt.imshow(J, **cmap_args) plt.title("OLS", fontsize=18) plt.xticks(fontsize=18) plt.yticks(fontsize=18) cb = fig.colorbar(im) cb.ax.set_yticklabels(cb.ax.get_yticklabels(), fontsize=18) plt.show() Explanation: A way of looking at the coefficients in $J$ is to plot the matrices as images. End of explanation import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import make_axes_locatable import seaborn as sns import scipy.linalg as scl from sklearn.model_selection import train_test_split import sklearn.linear_model as skl import tqdm sns.set(color_codes=True) cmap_args=dict(vmin=-1., vmax=1., cmap='seismic') L = 40 n = int(1e4) spins = np.random.choice([-1, 1], size=(n, L)) J = 1.0 energies = np.zeros(n) for i in range(n): energies[i] = - J * np.dot(spins[i], np.roll(spins[i], 1)) Explanation: It is interesting to note that OLS considers both $J_{j, j + 1} = -0.5$ and $J_{j, j - 1} = -0.5$ as valid matrix elements for $J$. In our discussion below on hyperparameters and Ridge and Lasso regression we will see that this problem can be removed, partly and only with Lasso regression. In this case our matrix inversion was actually possible. The obvious question now is what is the mathematics behind the SVD? The one-dimensional Ising model Let us bring back the Ising model again, but now with an additional focus on Ridge and Lasso regression as well. We repeat some of the basic parts of the Ising model and the setup of the training and test data. The one-dimensional Ising model with nearest neighbor interaction, no external field and a constant coupling constant $J$ is given by <!-- Equation labels as ordinary links --> <div id="_auto8"></div> $$ \begin{equation} H = -J \sum_{k}^L s_k s_{k + 1}, \label{_auto8} \tag{8} \end{equation} $$ where $s_i \in {-1, 1}$ and $s_{N + 1} = s_1$. The number of spins in the system is determined by $L$. For the one-dimensional system there is no phase transition. We will look at a system of $L = 40$ spins with a coupling constant of $J = 1$. 
To get enough training data we will generate 10000 states with their respective energies. End of explanation X = np.zeros((n, L ** 2)) for i in range(n): X[i] = np.outer(spins[i], spins[i]).ravel() y = energies X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.96) X_train_own = np.concatenate( (np.ones(len(X_train))[:, np.newaxis], X_train), axis=1 ) X_test_own = np.concatenate( (np.ones(len(X_test))[:, np.newaxis], X_test), axis=1 ) Explanation: A more general form for the one-dimensional Ising model is <!-- Equation labels as ordinary links --> <div id="_auto9"></div> $$ \begin{equation} H = - \sum_j^L \sum_k^L s_j s_k J_{jk}. \label{_auto9} \tag{9} \end{equation} $$ Here we allow for interactions beyond the nearest neighbors and a more adaptive coupling matrix. This latter expression can be formulated as a matrix-product of the form <!-- Equation labels as ordinary links --> <div id="_auto10"></div> $$ \begin{equation} H = X J, \label{_auto10} \tag{10} \end{equation} $$ where $X_{jk} = s_j s_k$ and $J$ is the matrix consisting of the elements $-J_{jk}$. This form of writing the energy fits perfectly with the form utilized in linear regression, viz. <!-- Equation labels as ordinary links --> <div id="_auto11"></div> $$ \begin{equation} \boldsymbol{y} = \boldsymbol{X}\boldsymbol{\beta} + \boldsymbol{\epsilon}. \label{_auto11} \tag{11} \end{equation} $$ We organize the data as we did above End of explanation clf = skl.LinearRegression().fit(X_train, y_train) Explanation: We will do all fitting with Scikit-Learn, End of explanation J_sk = clf.coef_.reshape(L, L) Explanation: When extracting the $J$-matrix we make sure to remove the intercept End of explanation fig = plt.figure(figsize=(20, 14)) im = plt.imshow(J_sk, **cmap_args) plt.title("LinearRegression from Scikit-learn", fontsize=18) plt.xticks(fontsize=18) plt.yticks(fontsize=18) cb = fig.colorbar(im) cb.ax.set_yticklabels(cb.ax.get_yticklabels(), fontsize=18) plt.show() Explanation: And then we plot the results End of explanation _lambda = 0.1 clf_ridge = skl.Ridge(alpha=_lambda).fit(X_train, y_train) J_ridge_sk = clf_ridge.coef_.reshape(L, L) fig = plt.figure(figsize=(20, 14)) im = plt.imshow(J_ridge_sk, **cmap_args) plt.title("Ridge from Scikit-learn", fontsize=18) plt.xticks(fontsize=18) plt.yticks(fontsize=18) cb = fig.colorbar(im) cb.ax.set_yticklabels(cb.ax.get_yticklabels(), fontsize=18) plt.show() Explanation: The results agree perfectly with our previous discussion where we used our own code. Ridge regression Having explored the ordinary least squares we move on to ridge regression. In ridge regression we include a regularizer. This involves a new cost function which leads to a new estimate for the weights $\boldsymbol{\beta}$. This results in a penalized regression problem. The cost function is given by <!-- Equation labels as ordinary links --> <div id="_auto12"></div> $$ \begin{equation} C(\boldsymbol{X}, \boldsymbol{\beta}; \lambda) = (\boldsymbol{X}\boldsymbol{\beta} - \boldsymbol{y})^T(\boldsymbol{X}\boldsymbol{\beta} - \boldsymbol{y}) + \lambda \boldsymbol{\beta}^T\boldsymbol{\beta}. \label{_auto12} \tag{12} \end{equation} $$ End of explanation clf_lasso = skl.Lasso(alpha=_lambda).fit(X_train, y_train) J_lasso_sk = clf_lasso.coef_.reshape(L, L) fig = plt.figure(figsize=(20, 14)) im = plt.imshow(J_lasso_sk, **cmap_args) plt.title("Lasso from Scikit-learn", fontsize=18) plt.xticks(fontsize=18) plt.yticks(fontsize=18) cb = fig.colorbar(im) cb.ax.set_yticklabels(cb.ax.get_yticklabels(), fontsize=18) plt.show() Explanation: LASSO regression In the Least Absolute Shrinkage and Selection Operator (LASSO)-method we get a third cost function.
<!-- Equation labels as ordinary links --> <div id="_auto13"></div> $$ \begin{equation} C(\boldsymbol{X}, \boldsymbol{\beta}; \lambda) = (\boldsymbol{X}\boldsymbol{\beta} - \boldsymbol{y})^T(\boldsymbol{X}\boldsymbol{\beta} - \boldsymbol{y}) + \lambda \sqrt{\boldsymbol{\beta}^T\boldsymbol{\beta}}. \label{_auto13} \tag{13} \end{equation} $$ Finding the extremal point of this cost function is not so straight-forward as in least squares and ridge. We will therefore rely solely on the function Lasso from Scikit-Learn. End of explanation lambdas = np.logspace(-4, 5, 10) train_errors = { "ols_sk": np.zeros(lambdas.size), "ridge_sk": np.zeros(lambdas.size), "lasso_sk": np.zeros(lambdas.size) } test_errors = { "ols_sk": np.zeros(lambdas.size), "ridge_sk": np.zeros(lambdas.size), "lasso_sk": np.zeros(lambdas.size) } plot_counter = 1 fig = plt.figure(figsize=(32, 54)) for i, _lambda in enumerate(tqdm.tqdm(lambdas)): for key, method in zip( ["ols_sk", "ridge_sk", "lasso_sk"], [skl.LinearRegression(), skl.Ridge(alpha=_lambda), skl.Lasso(alpha=_lambda)] ): method = method.fit(X_train, y_train) train_errors[key][i] = method.score(X_train, y_train) test_errors[key][i] = method.score(X_test, y_test) omega = method.coef_.reshape(L, L) plt.subplot(10, 5, plot_counter) plt.imshow(omega, **cmap_args) plt.title(r"%s, $\lambda = %.4f$" % (key, _lambda)) plot_counter += 1 plt.show() Explanation: It is quite striking how LASSO breaks the symmetry of the coupling constant as opposed to ridge and OLS. We get a sparse solution with $J_{j, j + 1} = -1$. Performance as function of the regularization parameter We see how the different models perform for a different set of values for $\lambda$. End of explanation fig = plt.figure(figsize=(20, 14)) colors = { "ols_sk": "r", "ridge_sk": "y", "lasso_sk": "c" } for key in train_errors: plt.semilogx( lambdas, train_errors[key], colors[key], label="Train {0}".format(key), linewidth=4.0 ) for key in test_errors: plt.semilogx( lambdas, test_errors[key], colors[key] + "--", label="Test {0}".format(key), linewidth=4.0 ) plt.legend(loc="best", fontsize=18) plt.xlabel(r"$\lambda$", fontsize=18) plt.ylabel(r"$R^2$", fontsize=18) plt.tick_params(labelsize=18) plt.show() Explanation: We see that LASSO reaches a good solution for low values of $\lambda$, but will "wither" when we increase $\lambda$ too much. Ridge is more stable over a larger range of values for $\lambda$, but eventually also fades away. Finding the optimal value of $\lambda$ To determine which value of $\lambda$ is best we plot the accuracy of the models when predicting the training and the testing set. We expect the accuracy of the training set to be quite good, but if the accuracy of the testing set is much lower this tells us that we might be subject to an overfit model. The ideal scenario is an accuracy on the testing set that is close to the accuracy of the training set. End of explanation
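If one wants to read off the best penalty value programmatically rather than from the plot, a minimal sketch could look as follows; it assumes the lambdas array and the test_errors dictionary from the cells above are still in memory.
import numpy as np

# pick, for each method, the lambda with the highest test-set R^2
for key, scores in test_errors.items():
    best = int(np.argmax(scores))
    print(f"{key}: best lambda = {lambdas[best]:.4g} with test R^2 = {scores[best]:.3f}")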
1,025
Given the following text description, write Python code to implement the functionality described below step by step Description: Loading the Book Ratings Dataset Step1: Loading the Books Dataset Step2: Some Books don't have unique ISBN, creating a 1:1 mapping between book titles and ISBNs Step3: Data Preparation/ Cleaning <br> Removing ratings equal to zero, since Book Crossing Dataset has rating scale from 1-10. Taking Inner Join with books dataframe to maintain books whose details exist. Step4: Sparsity, Number of Users and Items in Book Crossing Dataset Step5: Loading the Amazon Data Set Step6: Sparsity, Number of Users and Items in Amazon Dataset Step7: Combining the Datasets Step8: Sampling with each item being rated at least 53 times Step9: Baseline Naive Based Algorithm and Benchmarking with Traditional Collaborative Filtering Step10: Function for Evaluation Metrics Step11: -------------- Naive Baseline --------------- Our Naive Baseline for any user i, item j prediction is to assign it the average rating over the entire dataset (amean). <br><br> Step12: Following are the functions to calculate pairwise similarity between two items Step13: Function itemsimilar returns a matrix of pairwise similarity between all items based on the option provided. It also returns amean (global mean rating), umean (average rating of each user), imean (average rating of each item) Step14: Benchmark Traditional Item-Item Collaborative Filtering Step15: Predict function is used to get the recommended rating by user i for item j. Step16: Time to Test the recommendations Step17: get_results function sets up our cross-validation, and changing the parameter of this function will help to tune the hyperparameter k (nearest neighbours). Grid Search for best K for item-item CF using all the similarity metrics implemented. Step18: getmrec function is used to get the top m recommendations for a user_id based on the similarity matrix (option) and k neighbours. Step19: Coverage for Item - Item CF Step20: Coverage Results Step21: Content-Based CF Using Book Features Loading Book Features Step22: Similarity Matrix Using Only Book Features Step23: Coverage for Content Based CF
Python Code: ratings = pd.read_csv('../raw-data/BX-Book-Ratings.csv', encoding='iso-8859-1', sep = ';') ratings.columns = ['user_id', 'isbn', 'book_rating'] print(ratings.dtypes) print() print(ratings.head()) print() print("Data Points :", ratings.shape[0]) Explanation: Loading the Book Ratings Dataset End of explanation books = pd.read_csv('../raw-data/BX-Books.csv', sep=';', encoding = 'iso-8859-1', dtype =str) del books['Image-URL-L'] del books['Image-URL-M'] del books['Image-URL-S'] del books['Book-Author'] del books['Publisher'] Explanation: Loading the Books Dataset End of explanation print('Number of Books == Number of ISBN ? ', books["Book-Title"].nunique() == books["ISBN"].nunique()) book_dict = books[["Book-Title","ISBN"]].set_index("Book-Title").to_dict()["ISBN"] books['new_isbn'] = books["Book-Title"].apply(lambda x: book_dict[x]) print('Number of Books == Number of ISBN ? ', books["Book-Title"].nunique() == books["new_isbn"].nunique()) books['isbn'] = books['new_isbn'] del books['ISBN'] del books['new_isbn'] books.shape Explanation: Some Books don't have unique ISBN, creating a 1:1 maping between books-title and ISBN End of explanation newdf = ratings[ratings.book_rating>0] joined = books.merge(newdf, on ='isbn') print(joined.shape) Explanation: Data Preparation/ Cleaning <br> Removing ratings equal to zero, since Book Crossing Dataset has rating scale from 1-10. Taking Inner Join with books dataframe to maintain books whose details exist. End of explanation rows = joined.user_id.unique() cols = joined['Book-Title'].unique() print(joined.user_id.nunique(), joined.isbn.nunique()) print("Sparsity :", 100 - (joined.shape[0]/(joined.user_id.nunique()* joined.isbn.nunique()))) Explanation: Sparsity, Number of Users and Items in Book Crossing Dataset End of explanation data1 = pd.read_csv('../clean-data/ratings_Books.csv', ) data1.columns = ['user_id', 'isbn', 'book_rating', 'timestamp'] Explanation: Loading the Amazon Data Set End of explanation rows = data1.user_id.unique() cols = data1.isbn.unique() print(data1.user_id.nunique(), data1.isbn.nunique()) print("Sparsity :", 100 - (data1.shape[0]/(data1.user_id.nunique()* data1.isbn.nunique()))) data1 = data1[['user_id', 'isbn', 'book_rating']] data1.shape data2 = joined[['user_id', 'isbn', 'book_rating']] data2.book_rating = data2.book_rating / 2.0 data2.shape data2 = data2.drop_duplicates() data2.shape Explanation: Sparsity, Number of Users and Items in Amazon Dataset End of explanation data3 = pd.concat((data1, data2)) data3.shape Explanation: Combining the Datasets End of explanation temp = data3[data3['isbn'].isin(data3['isbn'].value_counts()[data3['isbn'].value_counts()>50].index)] # print(len(temp.user_id.unique())) # print(len(temp.isbn.unique())) temp1 = temp[temp['user_id'].isin(temp['user_id'].value_counts()[temp['user_id'].value_counts()>49].index)] # print(len(temp1.user_id.unique())) # print(len(temp1.isbn.unique())) temp2 = temp1[temp1['isbn'].isin(temp1['isbn'].value_counts()[temp1['isbn'].value_counts()>53].index)] print(len(temp2.user_id.unique())) print(len(temp2.isbn.unique())) print(temp2.groupby(['user_id']).count()['book_rating'].mean()) print(temp2.groupby(['isbn']).count()['book_rating'].mean()) Explanation: Sampling wiht each item being rated atleast 53 times End of explanation data = temp2 rows = data.user_id.unique() cols = data.isbn.unique() print(data.user_id.nunique(), data.isbn.nunique()) data = data[['user_id', 'isbn', 'book_rating']] data.to_csv('Combine.csv') print("Sparsity :", 100 - 
(data.shape[0]/(len(cols)*len(rows)) * 100)) idict = dict(zip(cols, range(len(cols)))) udict = dict(zip(rows, range(len(rows)))) data.user_id = [ udict[i] for i in data.user_id ] data['isbn'] = [ idict[i] for i in data['isbn'] ] nmat = data.as_matrix() nmat = nmat.astype(int) nmat.shape Explanation: Baseline Naive Based Algorithm and Benchmarking with Traditional Collaborative Filtering End of explanation def rmse(ypred, ytrue): ypred = ypred[ytrue.nonzero()].flatten() ytrue = ytrue[ytrue.nonzero()].flatten() return np.sqrt(mean_squared_error(ypred, ytrue)) def mae(ypred, ytrue): ypred = ypred[ytrue.nonzero()].flatten() ytrue = ytrue[ytrue.nonzero()].flatten() return mean_absolute_error(ypred, ytrue) Explanation: Function for Evaluation Metrics: MAE and RMSE End of explanation def predict_naive(user, item): return amean1 x1, x2 = train_test_split(nmat, test_size = 0.2, random_state =42) naive = np.zeros((len(rows),len(cols))) for row in x1: naive[row[0], row[1]] = row[2] predictions = [] targets = [] amean1 = np.mean(naive[naive!=0]) umean1 = sum(naive.T) / sum((naive!=0).T) imean1 = sum(naive) / sum((naive!=0)) umean1 = np.where(np.isnan(umean1), amean1, umean1) imean1 = np.where(np.isnan(imean1), amean1, imean1) print('Naive---') for row in x2: user, item, actual = row[0], row[1], row[2] predictions.append(predict_naive(user, item)) targets.append(actual) print('rmse %.4f' % rmse(np.array(predictions), np.array(targets))) print('mae %.4f' % mae(np.array(predictions), np.array(targets))) print() Explanation: -------------- Naive Baseline --------------- Our Naive Baseline for any user i, item j prediction is to assign it with average rating over entire dataset. (amean)) <br><br> End of explanation def cos(mat, a, b): if a == b: return 1 aval = mat.T[a].nonzero() bval = mat.T[b].nonzero() corated = np.intersect1d(aval, bval) if len(corated) == 0: return 0 avec = np.take(mat.T[a], corated) bvec = np.take(mat.T[b], corated) val = 1 - cosine(avec, bvec) if np.isnan(val): return 0 return val def adjcos(mat, a, b, umean): if a == b: return 1 aval = mat.T[a].nonzero() bval = mat.T[b].nonzero() corated = np.intersect1d(aval, bval) if len(corated) == 0: return 0 avec = np.take(mat.T[a], corated) bvec = np.take(mat.T[b], corated) avec1 = avec - umean[corated] bvec1 = bvec - umean[corated] val = 1 - cosine(avec1, bvec1) if np.isnan(val): return 0 return val def pr(mat, a, b, imean): if a == b: return 1 aval = mat.T[a].nonzero() bval = mat.T[b].nonzero() corated = np.intersect1d(aval, bval) if len(corated) < 2: return 0 avec = np.take(mat.T[a], corated) bvec = np.take(mat.T[b], corated) avec1 = avec - imean[a] bvec1 = bvec - imean[b] val = 1 - cosine(avec1, bvec1) if np.isnan(val): return 0 return val def euc(mat, a, b): if a == b: return 1 aval = mat.T[a].nonzero() bval = mat.T[b].nonzero() corated = np.intersect1d(aval, bval) if len(corated) == 0: return 0 avec = np.take(mat.T[a], corated) bvec = np.take(mat.T[b], corated) dist = np.sqrt(np.sum(a-b)**2) val = 1/(1+dist) if np.isnan(val): return 0 return val Explanation: Following are the functions to calculate pairwise similarity between two items : Cosine, Adjusted Cosine, Euclidean, Pearson Corelation. 
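As a quick sanity check of these similarity functions, the sketch below applies the cos function defined above to a tiny, hypothetical ratings matrix; it assumes the function and its scipy imports are available, and the numbers are made up purely for illustration.
import numpy as np

# a tiny toy ratings matrix (3 users x 3 items) purely for a sanity check
toy = np.array([[5, 5, 0],
                [3, 3, 4],
                [0, 1, 2]])

# identical rating patterns over the co-rating users -> similarity of 1
print(cos(toy, 0, 1))
# items with no co-rating users -> the function falls back to 0
print(cos(np.array([[5, 0], [0, 3]]), 0, 1))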
End of explanation def itemsimilar(mat, option): amean = np.mean(mat[mat!=0]) umean = sum(mat.T) / sum((mat!=0).T) imean = sum(mat) / sum((mat!=0)) umean = np.where(np.isnan(umean), amean, umean) imean = np.where(np.isnan(imean), amean, imean) n = mat.shape[1] sim_mat = np.zeros((n, n)) if option == 'pr': #print("PR") for i in range(n): for j in range(n): sim_mat[i][j] = pr(mat, i, j, imean) sim_mat = (sim_mat + 1)/2 elif option == 'cos': #print("COS") print(n) for i in range(n): if(i%100 == 0): print(i) for j in range(n): sim_mat[i][j] = cos(mat, i, j) elif option == 'adjcos': #print("ADJCOS") for i in range(n): for j in range(n): sim_mat[i][j] = adjcos(mat, i, j, umean) sim_mat = (sim_mat + 1)/2 elif option == 'euc': #print("EUCLIDEAN") for i in range(n): for j in range(n): sim_mat[i][j] = euc(mat, i, j) else: #print("Hello") sim_mat = cosine_similarity(mat.T) return sim_mat, amean, umean, imean Explanation: Function item similar returns matrix of pairwise similarity between all items based on the option provided. Also return amean (global mean rating), umean (average rating of each user), imean (Average rating of each item) End of explanation import time start = time.time() naive = np.zeros((len(rows),len(cols))) for row in x1: naive[row[0], row[1]] = row[2] items, amean, umean, imean = itemsimilar(naive,'cos') end = time.time() print(end-start) print(end - start) items.shape Explanation: Benchmark Traditional Item-Item Collaborative Filtering End of explanation def predict(user, item, mat, item_similarity, amean, umean, imean, k=20): nzero = mat[user].nonzero()[0] if len(nzero) == 0: return amean baseline = imean + umean[user] - amean choice = nzero[item_similarity[item, nzero].argsort()[::-1][1:k+1]] prediction = ((mat[user, choice] - baseline[choice]).dot(item_similarity[item, choice])/ sum(item_similarity[item, choice])) + baseline[item] if np.isnan(prediction): prediction = imean[item] + umean[user] - amean if prediction > 5: prediction = 5 if prediction < 1: prediction = 1 return prediction predict(0,1, naive, items, amean, umean, imean,5) def get_results1(X, rows, cols, folds, k, item_similarity, amean, umean, imean): kf = KFold(n_splits=folds, shuffle = True, random_state=95) count = 1 rmse_list = [] mae_list = [] trmse_list = [] tmae_list = [] for train_index, test_index in kf.split(X): print("---------- Fold ", count, "---------------") train_data, test_data = X[train_index], X[test_index] full_mat = np.zeros((rows, cols)) for row in train_data: full_mat[row[0], row[1]] = row[2] preds = [] real = [] for row in train_data: user_id, isbn, rating = row[0], row[1], row[2] preds.append(predict(user_id, isbn, full_mat, item_similarity, amean, umean, imean, k)) real.append(rating) err1 = rmse(np.array(preds), np.array(real)) err2 = mae(np.array(preds), np.array(real)) trmse_list.append(err1) tmae_list.append(err2) print('Train Errors') print('RMSE : %.4f' % err1) print('MAE : %.4f' % err2) preds = [] real = [] for row in test_data: user_id, isbn, rating = row[0], row[1], row[2] preds.append(predict(user_id, isbn, full_mat, item_similarity, amean, umean, imean, k)) real.append(rating) err1 = rmse(np.array(preds), np.array(real)) err2 = mae(np.array(preds), np.array(real)) rmse_list.append(err1) mae_list.append(err2) print('Test Errors') print('RMSE : %.4f' % err1) print('MAE : %.4f' % err2) count+=1 print("-------------------------------------") print("Training Avg Error:") print("AVG RMSE :", str(np.mean(trmse_list))) print("AVG MAE :", str(np.mean(tmae_list))) print() print("Testing 
Avg Error:") print("AVG RMSE :", str(np.mean(rmse_list))) print("AVG MAE :", str(np.mean(mae_list))) print(" ") return np.mean(mae_list), np.mean(rmse_list) Explanation: Predict function is used to get recommended rating by user i for item j. End of explanation s = time.time() get_results1(nmat, len(rows), len(cols), 5 ,20,items, amean,umean, imean) e=time.time() print("Time to test the recommendation over 5 fold cross validation of the data", (e-s)/5, "seconds") Explanation: Time to Test the recommendations End of explanation each_sims = [] each_sims_rmse = [] for k in [5, 10, 15, 20, 25]: print("Nearest Neighbors: ",k) ans1, ans2 = get_results1(nmat, len(rows), len(cols), 5 ,k,items, amean,umean, imean) each_sims.append(ans1) each_sims_rmse.append(ans2) print() print("Best K Value for") print() print("Min MAE") print(np.min(each_sims), np.argmin(each_sims)) print("Min RMSE") print(np.min(each_sims_rmse), np.argmin(each_sims_rmse)) print() print(each_sims[2], each_sims_rmse[2]) results_df1 = pd.DataFrame({'Nearest Neighbors': [5, 10, 15, 20, 25], 'MAE': each_sims, 'RMSE': each_sims_rmse }) plot1 = results_df1.plot(x='Nearest Neighbors', y=['MAE', 'RMSE'], ylim=(0.5,0.85), title = 'Item-Item CF: Metrics over different K') fig = plot1.get_figure() fig.savefig('MetricsCFK.png') Explanation: get_results function is our function to cross_val setup and changing the parameter of this function will help to tune hyperparameter k (nearest neighbours) Grid Search for best K for item-item CF using all the similarity metric implemented. End of explanation full_mat = np.zeros((len(rows),len(cols))) for row in nmat: full_mat[row[0], row[1]] = row[2] #item_similarity, amean, umean, imean = itemsimilar(full_mat, 'euc') def getmrec(full_mat, user_id, item_similarity, k, m, idict, cov = False): n = item_similarity.shape[0] nzero = full_mat[user_id].nonzero()[0] preds = {} for row in range(n): preds[row] = predict(user_id, row, full_mat, item_similarity, amean, umean, imean, k) flipped_dict = dict(zip(idict.values(), idict.keys())) if not cov: print("Books Read -----") for i in nzero: print(flipped_dict[i]) del preds[i] res = sorted(preds.items(), key=lambda x: x[1], reverse = True) ans = [flipped_dict[i[0]] for i in res[:m]] return ans flipped_dict = dict(zip(idict.values(), idict.keys())) Explanation: getmrec function is used to get top m recommendation for a user_id based on the similarity matrix (option), k neighbours. 
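A minimal usage sketch, assuming the objects built above (full_mat, items, idict and the global means) are still in memory; the user index is a hypothetical choice.
# top-5 recommendations for one user via getmrec
sample_user = 0   # hypothetical user index, chosen arbitrarily
top5 = getmrec(full_mat, sample_user, items, k=15, m=5, idict=idict)
for title in top5:
    print(title)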
End of explanation def coverage(full_mat, user_id, item_similarity, k, mlist, flipped_dict, cov = False): n = item_similarity.shape[0] nzero = full_mat[user_id].nonzero()[0] preds = {} for row in range(n): preds[row] = predict(user_id, row, full_mat, item_similarity, amean, umean, imean, k) if not cov: print("Books Read -----") for i in nzero: print(flipped_dict[i]) del preds[i] res = sorted(preds.items(), key=lambda x: x[1], reverse = True) ret_tup = [] ans = [flipped_dict[i[0]] for i in res[:mlist[-1]]] for i in mlist: ret_tup.append(ans[:i]) return ret_tup cov1 = [] cov2 = [] cov3 = [] cov4 = [] cov5 = [] mlist = [5,10,15,20,25] for i in range(len(rows)): if(i%100 == 0): print(i) ans = coverage(full_mat, i, items, 10, mlist, flipped_dict, True) cov1.extend(ans[0]) cov2.extend(ans[1]) cov3.extend(ans[2]) cov4.extend(ans[3]) cov5.extend(ans[4]) Explanation: Coverage for Item - Item CF End of explanation print("Coverage with recommending 5 books", len(set(cov1))/4959 *100 ,"%") print("Coverage with recommending 10 books", len(set(cov2))/4959 *100 ,"%") print("Coverage with recommending 15 books", len(set(cov3))/4959 *100 ,"%") print("Coverage with recommending 20 books", len(set(cov4))/4959 *100 ,"%") print("Coverage with recommending 25 books", len(set(cov5))/4959 *100 ,"%") Explanation: Coverage Results End of explanation feats = pd.read_csv('../book_features.csv') feats.shape feats.head() scores = feats.iloc[:,1:15] scores1 = scores.as_matrix() scores1.shape inputscores = scores1.T Explanation: Content-Based CF Using Book Features Loading Book Features End of explanation naive = np.zeros((len(rows),len(cols))) for row in x1: naive[row[0], row[1]] = row[2] items_features, temple1, temple2, temple3 = itemsimilar(inputscores,'') s1 = time.time() get_results1(nmat, len(rows), len(cols), 5 ,20,items_features, amean,umean, imean) e1 = time.time() print("Time to test the recommendation over 5 folds cross validation of the data", (e1-s1)/5, "seconds") each_sims_con = [] each_sims_rmse_con = [] for k in [5, 10, 15, 20, 25]: print("Nearest Neighbors: ",k) ans1, ans2 = get_results1(nmat, len(rows), len(cols), 5 ,k,items_features, amean,umean, imean) each_sims_con.append(ans1) each_sims_rmse_con.append(ans2) print() print("Best K Value for") print() print("Min MAE") print(np.min(each_sims_con), np.argmin(each_sims_con)) print("Min RMSE") print(np.min(each_sims_rmse_con), np.argmin(each_sims_rmse_con)) print() results_df2 = pd.DataFrame({'Nearest Neighbors': [5, 10, 15, 20, 25], 'MAE': each_sims_con, 'RMSE': each_sims_rmse_con }) plot2 = results_df2.plot(x='Nearest Neighbors', y=['MAE', 'RMSE'], ylim=(0.5,0.9), title = 'Content Based Item-Item CF: Metrics over different K') fig = plot2.get_figure() fig.savefig('MetricsContentCFK.png') Explanation: Similarity Matrix Using Only Book Features End of explanation covcon1 = [] covcon2 = [] covcon3 = [] covcon4 = [] covcon5 = [] mlist = [5,10,15,20,25] for i in range(len(rows)): if(i%100 == 0): print(i) ans = coverage(full_mat, i, items_features, 10, mlist, flipped_dict, True) covcon1.extend(ans[0]) covcon2.extend(ans[1]) covcon3.extend(ans[2]) covcon4.extend(ans[3]) covcon5.extend(ans[4]) print("Coverage with recommending 5 books", len(set(covcon1))/4959 *100 ,"%") print("Coverage with recommending 10 books", len(set(covcon2))/4959 *100 ,"%") print("Coverage with recommending 15 books", len(set(covcon3))/4959 *100 ,"%") print("Coverage with recommending 20 books", len(set(covcon4))/4959 *100 ,"%") print("Coverage with recommending 25 books", 
len(set(covcon5))/4959 *100 ,"%") Explanation: Coverage for Content Based CF End of explanation
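A possible way to put the two coverage runs side by side, assuming the cov and covcon lists from the cells above are still in memory and reusing the catalogue size of 4959 items quoted above.
import pandas as pd

n_items = 4959
summary = pd.DataFrame({
    'top-m': [5, 10, 15, 20, 25],
    'item-item CF (%)': [len(set(c)) / n_items * 100 for c in (cov1, cov2, cov3, cov4, cov5)],
    'content-based (%)': [len(set(c)) / n_items * 100 for c in (covcon1, covcon2, covcon3, covcon4, covcon5)],
})
print(summary)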
1,026
Given the following text description, write Python code to implement the functionality described below step by step Description: ギブスサンプリングについて $p(\bf{x}|\theta) = \frac{1}{Z(\theta)} \exp(-\Phi(\bf{x}, \theta))$ $\bf{x} = \left{ x_1, x_2, x_3, \dots, x_N \right}$ についての同時確率分布 $\bf{x} = \left{ x_1, x_2, x_3, \dots, x_N \right}$ からのサンプリング たとえば,ボルツマンマシンではそれぞれのユニットが2値ユニットなので,$O(2^n)$ ギブスサンプリング $x_i$以外の確率変数を固定して,確率変数$x_i$に関してだけのサンプリングをすれば良い. $p(\bf{x})$からサンプリングする代わりに$p(x_i|\bf{x^{\backslash i}})$を別々にサンプリングする. ただし,$\bf{x^{\backslash i}}$は$\bf{x}$から${x_i}$を除いた集合 t回目のサンプリングは(t-1)回目のサンプリング結果を使って行う 任意回数(tとする)サンプリングを行う $ p(x_i^{(0)}) $ はランダムに初期化する $ p(x_i) \ ( i \in \left{ 1, 2, \dots, M \right} )$ を順番にサンプリングしていく $t$回サンプリングする $ x_i^{(t)} \sim p(x_i^{(t)} | x_1^{(t)}, x_2^{(t)}, \dots, x_{i-1}^{(t)}, x_{i+1}^{(t-1)}, \dots, x_M^{(t-1)}) $ Burn-inという概念 tが小さい時のサンプリング結果は,ランダムに初期化した各変数に依存している. 初期値に依存したサンプルが得られる期間をburn-inといい,この期間のサンプルは破棄すべきである. 正規分布での実験 前提 多変数の正規分布は1変数の正規分布の積で表せる 詳しくはPRMLを参照してください 多変量の正規分布 $ \sum $ は分散共分散行列 $ \mu $ は平均ベクトル $ p(x) = \frac{1}{2\pi \left| \sum \right|^{\frac{1}{2}}} exp \Biggl[- \frac{1}{2} (\bf{x}-\bf{\mu})^T \sum^{-1} (\bf{x}-\bf{\mu}) \Bigg] $ パラメータの値 $ a = 0.8 $ $ \mu = \bf{0} $ $ \sum ^ {-1} = \begin{bmatrix}1 & -a\-a & 1\end{bmatrix}$ Step1: 棄却サンプリング 一様分布から乱数を発生させる 分布の形がめちゃくちゃ尖ってたりすると十分な近似を得るのにかなりのサンプリングをしないといけない たくさんのサンプルが棄却されてしまう Step2: 多変量の正規分布の棄却サンプリング $ p(x) = \frac{1}{\sqrt{2\pi} \sqrt{|\sum\|}}{ \exp{-\frac{1}{2} (\bf{x} - \bf{\mu})^T \sum^{-1} (\bf{x} - \bf{\mu})} }$ Step3: パラメータ $ \sum = \biggl( \matrix{2 & 0 & 0 \ 0 & 4 & 0 \ 0 & 0 & 10} \biggr) $ $ \mu = \left( 0, 0, 0 \right)^T $
Python Code: import numpy from matplotlib import pyplot %matplotlib inline pyplot.style.use('ggplot') a = 0.8 num_iter = 30000 cov_inv = [[ 1, -a],[-a, 1]] mu_x = numpy.array([0, 0]) cov = numpy.linalg.pinv(cov_inv) %%time # normal sampling x_1 = [] x_2 = [] for _ in range(num_iter): data = numpy.random.multivariate_normal(mu_x, cov, 1) x_1.append(data[0][0]) x_2.append(data[0][1]) pyplot.scatter(list(x_1), list(x_2)) %%time # gibbs sampling x_1 = [10] x_2 = [-10] for _ in range(num_iter): x_1_new = numpy.random.normal(a*x_1[-1], 1) x_1.append(x_1_new) x_2_new = numpy.random.normal(a*x_1_new,1) x_2.append(x_2_new) pyplot.scatter(x_1, x_2) Explanation: ギブスサンプリングについて $p(\bf{x}|\theta) = \frac{1}{Z(\theta)} \exp(-\Phi(\bf{x}, \theta))$ $\bf{x} = \left{ x_1, x_2, x_3, \dots, x_N \right}$ についての同時確率分布 $\bf{x} = \left{ x_1, x_2, x_3, \dots, x_N \right}$ からのサンプリング たとえば,ボルツマンマシンではそれぞれのユニットが2値ユニットなので,$O(2^n)$ ギブスサンプリング $x_i$以外の確率変数を固定して,確率変数$x_i$に関してだけのサンプリングをすれば良い. $p(\bf{x})$からサンプリングする代わりに$p(x_i|\bf{x^{\backslash i}})$を別々にサンプリングする. ただし,$\bf{x^{\backslash i}}$は$\bf{x}$から${x_i}$を除いた集合 t回目のサンプリングは(t-1)回目のサンプリング結果を使って行う 任意回数(tとする)サンプリングを行う $ p(x_i^{(0)}) $ はランダムに初期化する $ p(x_i) \ ( i \in \left{ 1, 2, \dots, M \right} )$ を順番にサンプリングしていく $t$回サンプリングする $ x_i^{(t)} \sim p(x_i^{(t)} | x_1^{(t)}, x_2^{(t)}, \dots, x_{i-1}^{(t)}, x_{i+1}^{(t-1)}, \dots, x_M^{(t-1)}) $ Burn-inという概念 tが小さい時のサンプリング結果は,ランダムに初期化した各変数に依存している. 初期値に依存したサンプルが得られる期間をburn-inといい,この期間のサンプルは破棄すべきである. 正規分布での実験 前提 多変数の正規分布は1変数の正規分布の積で表せる 詳しくはPRMLを参照してください 多変量の正規分布 $ \sum $ は分散共分散行列 $ \mu $ は平均ベクトル $ p(x) = \frac{1}{2\pi \left| \sum \right|^{\frac{1}{2}}} exp \Biggl[- \frac{1}{2} (\bf{x}-\bf{\mu})^T \sum^{-1} (\bf{x}-\bf{\mu}) \Bigg] $ パラメータの値 $ a = 0.8 $ $ \mu = \bf{0} $ $ \sum ^ {-1} = \begin{bmatrix}1 & -a\-a & 1\end{bmatrix}$ End of explanation # gaussian distribution (mean=0, variance=1) from math import pi def p(x): return numpy.exp(-x**2/2) / (numpy.sqrt(2*pi)) # rejection sampling def rejection_sampler(num_iter): MAX = p(0) samples = list() for _ in range(num_iter): sample = (numpy.random.random() - 0.5) * 20.0 if (p(sample) / MAX) > numpy.random.random(): samples.append(sample) return samples _ = pyplot.hist(rejection_sampler(10000)) Explanation: 棄却サンプリング 一様分布から乱数を発生させる 分布の形がめちゃくちゃ尖ってたりすると十分な近似を得るのにかなりのサンプリングをしないといけない たくさんのサンプルが棄却されてしまう End of explanation def costum_random(): return (numpy.random.random()-0.5) * 20 def p(cov, mu, x): numerator = numpy.exp(-(x-mu).T.dot(numpy.linalg.pinv(cov)).dot(x-mu)/2) denominator = numpy.sqrt(2*pi) * numpy.sqrt(numpy.linalg.norm(cov)) return numerator / denominator def p_given_x_i(cov, mu, x_i, x): 1 Explanation: 多変量の正規分布の棄却サンプリング $ p(x) = \frac{1}{\sqrt{2\pi} \sqrt{|\sum\|}}{ \exp{-\frac{1}{2} (\bf{x} - \bf{\mu})^T \sum^{-1} (\bf{x} - \bf{\mu})} }$ End of explanation num_iter = 10000 cov = numpy.array([[2,0,0],[0,4,0], [0,0,10]]) mu = numpy.array([0,0,0]) # cov = numpy.array([[1,0],[0,1]]) # mu = numpy.array([0,0]) MAX = p(cov, mu, mu) from mpl_toolkits.mplot3d import Axes3D x_1s = list() x_2s = list() x_3s = list() ys = list() for _ in range(num_iter): x_1, x_2, x_3 = costum_random(), costum_random(), costum_random() x = numpy.array([x_1, x_2, x_3]) y = p(cov, mu, x) if (y/MAX) > numpy.random.random(): x_1s.append(x_1) x_2s.append(x_2) x_3s.append(x_3) ys.append(p(cov, mu, x)) fig = pyplot.figure() ax = Axes3D(fig) plot = ax.scatter(x_1s, x_2s, ys) print(1 - (len(ys)/num_iter)) Explanation: パラメータ $ \sum = \biggl( \matrix{2 & 0 & 0 \ 0 & 4 & 0 \ 0 & 0 & 10} \biggr) $ $ \mu = \left( 0, 0, 0 
\right)^T $ End of explanation
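The discussion above mentions burn-in, but the Gibbs chain keeps every sample; the sketch below shows one way to discard an initial burn-in period for the same bivariate example, using the conditionals x1 | x2 ~ N(a x2, 1) and x2 | x1 ~ N(a x1, 1) implied by the precision matrix, with the burn-in length chosen arbitrarily.
import numpy
from matplotlib import pyplot

a = 0.8
num_iter = 30000
burn_in = 1000   # arbitrary burn-in length for illustration

x_1, x_2 = [10.0], [-10.0]   # deliberately bad starting point, as above
for _ in range(num_iter):
    # sample x1 given the current x2, then x2 given the new x1
    x_1.append(numpy.random.normal(a * x_2[-1], 1))
    x_2.append(numpy.random.normal(a * x_1[-1], 1))

# keep only the samples after the burn-in period
pyplot.scatter(x_1[burn_in:], x_2[burn_in:])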
1,027
Given the following text description, write Python code to implement the functionality described below step by step Description: Homework 2 Taylor Patti 2/15/2015 Excercises Completed Exercise 5.18 (fit_pendulum_data.py) Exercise 5.22 (midpoint_vec.py) Exercise 5.23 (Lagrange_poly1.py) Exercise 5.24 (Lagrange_poly2.py) Exercise 5.25 (Lagrange_poly2b.py) fit_pendulum_data Provides various functions which facilitate the harvesting of data from a separate file, the plotting of this data, and the creation of composite plots consisting of this data and polynomial fits thereof of various orders. Ultimately, the third order polynomial had the best fit, although only marginally so over the 2nd order. This seems pretty common though; that a higher order polynomial usually beats out its competitors, especially over small data sets, even if they are more physically relevant. Step1: midpoint_vec This module called for the design of a midpoint integration function in three separate implementations, one list iterative, one semi-vectorized, and one fully vectorized. The supreme accuracy of this algorithm is illustrated in the sine integration below. The purpose of devising 3 different implementations was to be able to compare their relative efficiency. As denoted below, the list iterative midpointint function faired the fastest, most likely due to the relatively small number of iterations required. Moreover, the fully vectorized midpointvec was faster than the semi-vectorized midpointvecdefault, most likely due to the superior efficiency of the numpy sum function. Had the number of elements in these iterations been greater, the fully vectorized version would have faired better than both of its counterpart functions. Step2: Lagrange_poly1 In this module, the Lagrange Interpolation Formula was used to construct an interpolating function; a function who takes in n data points and devises an nth order polynomial which it then implements to extrapolate the value of all points within the boundaries of the data set, whether included or not. This first excercise shows how accurate this process is, with the interpolating function returning a nearly perfect value for the sine function when the input is given at the midpoint between two of the data points provided. Step3: Lagrange_poly2 A graphing function was constructed that permitted the simultaneous plotting of the actual function and the interpolating function, both with indicated interval perpherations. Step4: Lagrange_poly3 The oscillatory shortcomings of high peripheration interpolating functions is demonstrated by the creation of two plots. The first shows how the general fit quality increases with greater peripheration frequency while this quantity is reasonably small. The later indicates that too many terms can result in wild oscillations at the end points of the function indicated.
Python Code: p1.pendulum_plotter() p1.poly_plotter() Explanation: Homework 2 Taylor Patti 2/15/2015 Excercises Completed Exercise 5.18 (fit_pendulum_data.py) Exercise 5.22 (midpoint_vec.py) Exercise 5.23 (Lagrange_poly1.py) Exercise 5.24 (Lagrange_poly2.py) Exercise 5.25 (Lagrange_poly2b.py) fit_pendulum_data Provides various functions which facilitate the harvesting of data from a separate file, the plotting of this data, and the creation of composite plots consisting of this data and polynomial fits thereof of various orders. Ultimately, the third order polynomial had the best fit, although only marginally so over the 2nd order. This seems pretty common though; that a higher order polynomial usually beats out its competitors, especially over small data sets, even if they are more physically relevant. End of explanation p2.midpointvec(np.sin, 0, np.pi/2, 1000) %timeit p2.midpointint(p2.x_func, 1, 3, 1000) %timeit p2.midpointvecdefault(p2.x_func, 1, 3, 1000) %timeit p2.midpointvec(p2.x_func, 1, 3, 1000) Explanation: midpoint_vec This module called for the design of a midpoint integration function in three separate implementations, one list iterative, one semi-vectorized, and one fully vectorized. The supreme accuracy of this algorithm is illustrated in the sine integration below. The purpose of devising 3 different implementations was to be able to compare their relative efficiency. As denoted below, the list iterative midpointint function faired the fastest, most likely due to the relatively small number of iterations required. Moreover, the fully vectorized midpointvec was faster than the semi-vectorized midpointvecdefault, most likely due to the superior efficiency of the numpy sum function. Had the number of elements in these iterations been greater, the fully vectorized version would have faired better than both of its counterpart functions. End of explanation xp = np.linspace(0, math.pi, 5) yp = np.sin(xp) middle = xp[2] - xp[1] / 2 print 'Function interpolates: ', print p3.p_L(middle, xp, yp) print 'Actual sine result is: ', print math.sin(middle) Explanation: Lagrange_poly1 In this module, the Lagrange Interpolation Formula was used to construct an interpolating function; a function who takes in n data points and devises an nth order polynomial which it then implements to extrapolate the value of all points within the boundaries of the data set, whether included or not. This first excercise shows how accurate this process is, with the interpolating function returning a nearly perfect value for the sine function when the input is given at the midpoint between two of the data points provided. End of explanation p4.graph(np.sin, 5, 0, math.pi) Explanation: Lagrange_poly2 A graphing function was constructed that permitted the simultaneous plotting of the actual function and the interpolating function, both with indicated interval perpherations. End of explanation p5.multigrapher() Explanation: Lagrange_poly3 The oscillatory shortcomings of high peripheration interpolating functions is demonstrated by the creation of two plots. The first shows how the general fit quality increases with greater peripheration frequency while this quantity is reasonably small. The later indicates that too many terms can result in wild oscillations at the end points of the function indicated. End of explanation
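The helper modules p1 to p5 are not shown, so the following is only a guess at what the Lagrange interpolation routine p_L called above might look like, together with a small check against the sine example used in the notebook.
import numpy as np

def p_L(x, xp, yp):
    # Lagrange's interpolation formula: sum_k yp[k] * L_k(x)
    total = 0.0
    n = len(xp)
    for k in range(n):
        L_k = 1.0
        for i in range(n):
            if i != k:
                L_k *= (x - xp[i]) / (xp[k] - xp[i])
        total += yp[k] * L_k
    return total

# quick check mirroring the notebook's usage: interpolate sin on 5 nodes
xp = np.linspace(0, np.pi, 5)
yp = np.sin(xp)
print(p_L(xp[2] - xp[1] / 2, xp, yp), np.sin(xp[2] - xp[1] / 2))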
1,028
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction to Jupyter notebooks The first thing we'll do, discussed later, is import all the modules we'll need. You should in general do this at the very beginning of each notebook, and in fact each .py file you write. Step1: In this tutorial, you will learn the basics of how to use Jupyter notebooks. Most of your homework will be submitted as Jupyter notebooks, so this is something you will need to master. It will be useful for you to go over Tutorial 2 to learn how to use $\LaTeX$ to write mathematical notations and statements in your Jupyter notebooks. You should, of course, read the official Jupyter documentation as well. Contents What is Jupyter Launching a Jupyter notebook Cells Code cells Display of graphics Proper formatting of cells Best practices for code cells Markdown cells Styling your notebook Collaborating with Google Drive What is Jupyter? Jupyter is a way to combine text (with math!) and code (which runs and can display graphic output!) in an easy-to-read document that renders in a web browser. The notebook itself is stored as a text file in JSON format. This text file is what you will email the course instructor when submitting your homework. It is language agnostic as its name suggests. The name "Jupyter" is a combination of Julia (a new language for scientific computing), Python (which you know and love, or at least will when the course is over), and R (the dominant tool for statistical computation). However, you currently can run over 40 different languages in a Jupyter notebook, not just Julia, Python, and R. Launching a Jupyter notebook Jupyter was spawned from the IPython project. To launch a Jupyter notebook, you can do the following. * Mac Step2: If you evaluate a Python expression that returns a value, that value is displayed as output of the code cell. This only happens, however, for the last line of the code cell. Step3: Note, however, if the last line does not return a value, such as if we assigned a variable, there is no visible output from the code cell. Step4: Output is asynchronous All output is displayed asynchronously as it is generated in the Kernel. If you execute the next cell, you will see the output one piece at a time, not all at the end. Step5: Large outputs To better handle large outputs, the output area can be collapsed. Run the following cell and then single- or double-click on the active area to the left of the output Step6: Beyond a certain point, output will scroll automatically Step7: Display of graphics When displaying graphics, you should have them inline, meaning that they are displayed directly in the Jupyter notebook and not in a separate window. You can specify that, as I did at the top of this document, using the %matplotlib inline magic function. Below is an example of graphics displayed inline. Generally, I prefer presenting graphics as scalable vector graphics (SVG). Vector graphics are infinitely zoom-able; i.e., the graphics are represented as points, lines, curves, etc., in space, not as a set of pixel values as is the case with raster graphics (such as PNG). By default, graphics are displayed as PNGs, but you can specify SVG as I have at the top of this document in the first code cell. %config InlineBackend.figure_formats = {'svg',} Unfortunately, there seems to be a bug, at least when I render in Safari, where vertical and horizontal lines are not properly rendered when using SVG.
For some reason, when I select the next cell and convert it to a code cell and back to markdown, the lines are then (sometimes) properly rendered. This is annoying, but I tend to think it is worth it to have nice SVG graphics. On the other hand, PNG graphics will usually suffice if you want to use them in your homework. To specify the PNG graphics to be high resolution, include %config InlineBackend.figure_formats = {'png', 'retina'} at the top of your file, as we have here. Step9: The plot is included inline with the styling we specified using Seaborn at the beginning of the document. Proper formatting of cells Generally, it is a good idea to keep cells simple. You can define one function, or maybe two or three closely related functions, in a single cell, and that's about it. When you define a function, you should make sure it is properly commented with descriptive doc strings. Below is an example of how I might generate a plot of the Lorenz attractor (which I choose just because it is fun) with code cells and markdown cells with discussion of what I am doing. We will use scipy.integrate.odeint to numerically integrate the Lorenz attractor. We therefore first define a function that returns the right hand side of the system of ODEs that define the Lorenz attractor. Step10: With this function in hand, we just have to pick our initial conditions and time points, run the numerical integration, and then plot the result.
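For reference, the system integrated in the cells that follow is the standard Lorenz system,
\begin{align}
\dot{x} &= s\,(y - x), \\
\dot{y} &= x\,(\rho - z) - y, \\
\dot{z} &= x\,y - b\,z,
\end{align}
and the code uses the classic chaotic parameter choice $(s, \rho, b) = (10, 28, 8/3)$ with initial condition $(x, y, z) = (0.1, 0, 0)$.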
Python Code: # Import numerical packages import numpy as np import scipy.integrate # Import pyplot for plotting import matplotlib.pyplot as plt # Seaborn, useful for graphics import seaborn as sns # Magic function to make matplotlib inline; other style specs must come AFTER %matplotlib inline # This enables SVG graphics inline. There is a bug, so uncomment if it works. #%config InlineBackend.figure_formats = {'svg',} # This enables high resolution PNGs. SVG is preferred, but has problems # rendering vertical and horizontal lines %config InlineBackend.figure_formats = {'png', 'retina'} # JB's favorite Seaborn settings for notebooks rc = {'lines.linewidth': 2, 'axes.labelsize': 18, 'axes.titlesize': 18, 'axes.facecolor': 'DFDFE5'} sns.set_context('notebook', rc=rc) sns.set_style('darkgrid', rc=rc) Explanation: Introduction to Jupyter notebooks The first thing we'll do, discussed later, is import all the modules we'll need. You should in general do this at the very beginning of each notebook, and in fact each .py file you write. End of explanation # Say hello to the world. print('hello, world.') Explanation: In this tutorial, you will learn the basics on how to use Jupyter notebooks. Most of your homework will be submitted as Jupyter notebooks, so this is something you will need to master. It will be useful for you to go over Tutorial 2 to learn how to use $\LaTeX$ to write mathematical notations and statements in your Jupyter notebooks. You should, of course, read the official Jupyter documentation as well. Contents What is Jupyter Launching a Jupyter notebook Cells Code cells Display of graphics Proper formatting of cells Best practices for code cells Markdown cells Styling your notebook Collaborating with Google Drive What is Jupyter? Jupyter is a way to combine text (with math!) and code (which runs and can display graphic output!) in an easy-to-read document that renders in a web browser. The notebook itself is stored as a text file in JSON format. This text file is what you will email the course instructor when submitting your homework. It is language agnostic as its name suggests. The name "Jupyter" is a combination of Julia (a new language for scientific computing), Python (which you know and love, or at least will when the course is over), and R (the dominant tool for statistical computation). However, you currently can run over 40 different languages in a Jupyter notebook, not just Julia, Python, and R. Launching a Jupyter notebook A Jupyter was spawned from the IPython project. To launch a Jupyter notebook, you can do the following. * Mac: Use the Anaconda launcher and select Jupyter notebook. * Windows: Under "Search programs and files" from the Start menu, type jupyter notebook and select "Jupyter notebook." A Jupyter notebook will then launch in your default web browser. You can also launch Jupyter from the command line. To do this, simply enter jupyter notebook on the command line and hit enter. This also allows for greater flexibility, as you can launch Jupyter with command line flags. For example, I launch Jupyter using jupyter notebook --browser=safari This fires up Jupyter with Safari as the browser. If you launch Jupyter from the command line, your shell will be occupied with Jupyter and will occasionally print information to the screen. After you are finished with your Jupyter session (and have saved everything), you can kill Jupyter by hitting "ctrl + C" in the terminal/PowerShell window. 
When you launch Jupyter, you will be presented with a menu of files in your current working directory to choose to edit. You can also navigate around the files on your computer to find a file you wish to edit by clicking the "Upload" button in the upper right corner. You can also click "New" in the upper right corner to get a new Jupyter notebook. After selecting the file you wish to edit, it will appear in a new window in your browser, beautifully formatted and ready to edit. Cells A Jupyter notebook consists of cells. The two main types of cells you will use are code cells and markdown cells, and we will go into their properties in depth momentarily. First, an overview. A code cell contains actual code that you want to run. You can specify a cell as a code cell using the pulldown menu in the toolbar in your Jupyter notebook. Otherwise, you can can hit esc and then y (denoted "esc, y") while a cell is selected to specify that it is a code cell. Note that you will have to hit enter after doing this to start editing it. If you want to execute the code in a code cell, hit "shift + enter." Note that code cells are executed in the order you execute them. That is to say, the ordering of the cells for which you hit "shift + enter" is the order in which the code is executed. If you did not explicitly execute a cell early in the document, its results are now known to the Python interpreter. Markdown cells contain text. The text is written in markdown, a lightweight markup language. You can read about its syntax here. Note that you can also insert HTML into markdown cells, and this will be rendered properly. As you are typing the contents of these cells, the results appear as text. Hitting "shift + enter" renders the text in the formatting you specify. You can specify a cell as being a markdown cell in the Jupyter toolbar, or by hitting "esc, m" in the cell. Again, you have to hit enter after using the quick keys to bring the cell into edit mode. In general, when you want to add a new cell, you can use the "Insert" pulldown menu from the Jupyter toolbar. The shortcut to insert a cell below is "esc, b" and to insert a cell above is "esc, a." Alternatively, you can execute a cell and automatically add a new one below it by hitting "alt + enter." There is another shot cut, "ctrl+enter", which execute a cell but not add a new line below. Code cells Below is an example of a code cell printing hello, world. Notice that the output of the print statement appears in the same cell, though separate from the code block. End of explanation # Would show 9 if this were the last line, but it is not, so shows nothing 4 + 5 # I hope we see 11. 5 + 6 Explanation: If you evaluate a Python expression that returns a value, that value is displayed as output of the code cell. This only happens, however, for the last line of the code cell. End of explanation # Variable assignment, so no visible output. a = 5 + 6 # However, now if we ask for a, its value will be displayed a Explanation: Note, however, if the last line does not return a value, such as if we assigned a variable, there is no visible output from the code cell. End of explanation import time, sys for i in range(8): print(i) time.sleep(0.5) Explanation: Output is asynchronous All output is displayed asynchronously as it is generated in the Kernel. If you execute the next cell, you will see the output one piece at a time, not all at the end. 
End of explanation for i in range(50): print(i) Explanation: Large outputs To better handle large outputs, the output area can be collapsed. Run the following cell and then single- or double- click on the active area to the left of the output: End of explanation for i in range(500): print(2**i - 1) Explanation: Beyond a certain point, output will scroll automatically: End of explanation # Generate data to plot x = np.linspace(0, 2 * np.pi, 200) y = np.exp(np.sin(np.sin(x))) # Make plot plt.plot(x, y) plt.xlim((0, 2 * np.pi)) plt.xlabel(r'$x$') plt.ylabel(r'$\mathrm{e}^{\sin{x}}$') Explanation: Display of graphics When displaying graphics, you should have them inline, meaning that they are displayed directly in the Jupyter notebook and not in a separate window. You can specify that, as I did at the top of this document, using the %matplotlib inline magic function. Below is an example of graphics displayed inline. Generally, I prefer presenting graphics as scalable vector graphics (SVG). Vector graphics are infinitely zoom-able; i.e., the graphics are represented as points, lines, curves, etc., in space, not as a set of pixel values as is the case with raster graphics (such as PNG). By default, graphics are displayed as PNGs, but you can specify SVG as I have at the top of this document in the first code cell. %config InlineBackend.figure_formats = {'svg',} Unfortunately, there seems to be a bug, at least when I render in Safari, where vertical and horizontal lines are not properly rendered when using SVG. For some reason, when I select next cell and convert it to a code cell and back to markdown, the lines are then (sometimes) properly rendered. This is annoying, but I tend to think it is worth it to have nice SVG graphics. On the other hand, PNG graphics will usually suffice if you want to use them in your homework. To specify the ONG graphics to be high resolution, include %config InlineBackend.figure_formats = {'png', 'retina'} at the top of your file, as we have here. End of explanation def lorenz_attractor(r, t, p): Compute the right hand side of system of ODEs for Lorenz attractor. Parameters ---------- r : array_like, shape (3,) (x, y, z) position of trajectory. t : dummy_argument Dummy argument, necessary to pass function into scipy.integrate.odeint p : array_like, shape (3,) Parameters (s, k, b) for the attractor. Returns ------- output : ndarray, shape (3,) Time derivatives of Lorenz attractor. Notes ----- .. Returns the right hand side of the system of ODEs describing the Lorenz attractor. x' = s * (y - x) y' = x * (k - z) - y z' = x * y - b * z # Unpack variables and parameters x, y, z = r s, p, b = p return np.array([s * (y - x), x * (p - z) - y, x * y - b * z]) Explanation: The plot is included inline with the styling we specified using Seaborn at the beginning of the document. Proper formatting of cells Generally, it is a good idea to keep cells simple. You can define one function, or maybe two or three closely related functions, in a single cell, and that's about it. When you define a function, you should make sure it is properly commented with descriptive doc strings. Below is an example of how I might generate a plot of the Lorenz attractor (which I choose just because it is fun) with code cells and markdown cells with discussion of what I am doing. We will use scipy.integrate.odeint to numerically integrate the Lorenz attractor. We therefore first define a function that returns the right hand side of the system of ODEs that define the Lorentz attractor. 
End of explanation # Parameters to use p = np.array([10.0, 28.0, 8.0 / 3.0]) # Initial condition r0 = np.array([0.1, 0.0, 0.0]) # Time points to sample t = np.linspace(0.0, 80.0, 10000) # Use scipy.integrate.odeint to integrate Lorentz attractor r = scipy.integrate.odeint(lorenz_attractor, r0, t, args=(p,)) # Unpack results into x, y, z. x, y, z = r.transpose() # Plot the result plt.plot(x, z, '-', linewidth=0.5) plt.xlabel(r'$x(t)$', fontsize=18) plt.ylabel(r'$z(t)$', fontsize=18) plt.title(r'$x$-$z$ proj. of Lorenz attractor traj.') Explanation: With this function in hand, we just have to pick our initial conditions and time points, run the numerical integration, and then plot the result. End of explanation
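As an optional aside that is not part of the original tutorial, the same trajectory could also be computed with scipy.integrate.solve_ivp, SciPy's newer initial-value-problem interface. Note that solve_ivp expects the right-hand side as f(t, y), the reverse of odeint's f(y, t) convention, so the lorenz_attractor function defined above is wrapped in a small lambda here.
import numpy as np
import scipy.integrate

# Same parameters, initial condition, and time points as in the cell above.
p = np.array([10.0, 28.0, 8.0 / 3.0])
r0 = np.array([0.1, 0.0, 0.0])
t = np.linspace(0.0, 80.0, 10000)

# Wrap the odeint-style RHS so it matches solve_ivp's f(t, y) signature.
sol = scipy.integrate.solve_ivp(
    lambda t_, r_: lorenz_attractor(r_, t_, p),
    (t[0], t[-1]),
    r0,
    t_eval=t,
)

# sol.y has shape (3, len(t)); unpack the trajectory as before.
x, y, z = sol.y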
1,029
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Land MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Conservation Properties 3. Key Properties --&gt; Timestepping Framework 4. Key Properties --&gt; Software Properties 5. Grid 6. Grid --&gt; Horizontal 7. Grid --&gt; Vertical 8. Soil 9. Soil --&gt; Soil Map 10. Soil --&gt; Snow Free Albedo 11. Soil --&gt; Hydrology 12. Soil --&gt; Hydrology --&gt; Freezing 13. Soil --&gt; Hydrology --&gt; Drainage 14. Soil --&gt; Heat Treatment 15. Snow 16. Snow --&gt; Snow Albedo 17. Vegetation 18. Energy Balance 19. Carbon Cycle 20. Carbon Cycle --&gt; Vegetation 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality 26. Carbon Cycle --&gt; Litter 27. Carbon Cycle --&gt; Soil 28. Carbon Cycle --&gt; Permafrost Carbon 29. Nitrogen Cycle 30. River Routing 31. River Routing --&gt; Oceanic Discharge 32. Lakes 33. Lakes --&gt; Method 34. Lakes --&gt; Wetlands 1. Key Properties Land surface key properties 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Description Is Required Step7: 1.4. Land Atmosphere Flux Exchanges Is Required Step8: 1.5. Atmospheric Coupling Treatment Is Required Step9: 1.6. Land Cover Is Required Step10: 1.7. Land Cover Change Is Required Step11: 1.8. Tiling Is Required Step12: 2. Key Properties --&gt; Conservation Properties TODO 2.1. Energy Is Required Step13: 2.2. Water Is Required Step14: 2.3. Carbon Is Required Step15: 3. Key Properties --&gt; Timestepping Framework TODO 3.1. Timestep Dependent On Atmosphere Is Required Step16: 3.2. Time Step Is Required Step17: 3.3. Timestepping Method Is Required Step18: 4. Key Properties --&gt; Software Properties Software properties of land surface code 4.1. Repository Is Required Step19: 4.2. Code Version Is Required Step20: 4.3. Code Languages Is Required Step21: 5. Grid Land surface grid 5.1. Overview Is Required Step22: 6. Grid --&gt; Horizontal The horizontal grid in the land surface 6.1. Description Is Required Step23: 6.2. Matches Atmosphere Grid Is Required Step24: 7. Grid --&gt; Vertical The vertical grid in the soil 7.1. Description Is Required Step25: 7.2. Total Depth Is Required Step26: 8. Soil Land surface soil 8.1. Overview Is Required Step27: 8.2. Heat Water Coupling Is Required Step28: 8.3. Number Of Soil layers Is Required Step29: 8.4. Prognostic Variables Is Required Step30: 9. Soil --&gt; Soil Map Key properties of the land surface soil map 9.1. Description Is Required Step31: 9.2. Structure Is Required Step32: 9.3. Texture Is Required Step33: 9.4. Organic Matter Is Required Step34: 9.5. Albedo Is Required Step35: 9.6. Water Table Is Required Step36: 9.7. Continuously Varying Soil Depth Is Required Step37: 9.8. Soil Depth Is Required Step38: 10. Soil --&gt; Snow Free Albedo TODO 10.1. Prognostic Is Required Step39: 10.2. Functions Is Required Step40: 10.3. Direct Diffuse Is Required Step41: 10.4. Number Of Wavelength Bands Is Required Step42: 11. 
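Before the generated property cells below, here is an illustrative helper that is my own assumption rather than part of the ES-DOC-generated notebook: it drives the repetitive DOC.set_id / DOC.set_value pattern from a plain dictionary. It assumes, as the generated comments suggest, that calling set_value repeatedly records multiple values for properties with 0.N or 1.N cardinality, and the example property values are hypothetical placeholders.
def fill_properties(doc, answers):
    # answers maps a property id (as passed to DOC.set_id in the cells below)
    # to a single value, or to a list of values for N-cardinality properties.
    for prop_id, value in answers.items():
        doc.set_id(prop_id)
        if isinstance(value, (list, tuple)):
            for v in value:
                doc.set_value(v)
        else:
            doc.set_value(value)

# Hypothetical usage once DOC has been initialised by the setup cell below:
# fill_properties(DOC, {
#     'cmip6.land.key_properties.model_name': 'EXAMPLE-LAND-1.0',  # placeholder name
#     'cmip6.land.key_properties.land_cover': ['bare soil', 'vegetated'],
# })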
Soil --&gt; Hydrology Key properties of the land surface soil hydrology 11.1. Description Is Required Step43: 11.2. Time Step Is Required Step44: 11.3. Tiling Is Required Step45: 11.4. Vertical Discretisation Is Required Step46: 11.5. Number Of Ground Water Layers Is Required Step47: 11.6. Lateral Connectivity Is Required Step48: 11.7. Method Is Required Step49: 12. Soil --&gt; Hydrology --&gt; Freezing TODO 12.1. Number Of Ground Ice Layers Is Required Step50: 12.2. Ice Storage Method Is Required Step51: 12.3. Permafrost Is Required Step52: 13. Soil --&gt; Hydrology --&gt; Drainage TODO 13.1. Description Is Required Step53: 13.2. Types Is Required Step54: 14. Soil --&gt; Heat Treatment TODO 14.1. Description Is Required Step55: 14.2. Time Step Is Required Step56: 14.3. Tiling Is Required Step57: 14.4. Vertical Discretisation Is Required Step58: 14.5. Heat Storage Is Required Step59: 14.6. Processes Is Required Step60: 15. Snow Land surface snow 15.1. Overview Is Required Step61: 15.2. Tiling Is Required Step62: 15.3. Number Of Snow Layers Is Required Step63: 15.4. Density Is Required Step64: 15.5. Water Equivalent Is Required Step65: 15.6. Heat Content Is Required Step66: 15.7. Temperature Is Required Step67: 15.8. Liquid Water Content Is Required Step68: 15.9. Snow Cover Fractions Is Required Step69: 15.10. Processes Is Required Step70: 15.11. Prognostic Variables Is Required Step71: 16. Snow --&gt; Snow Albedo TODO 16.1. Type Is Required Step72: 16.2. Functions Is Required Step73: 17. Vegetation Land surface vegetation 17.1. Overview Is Required Step74: 17.2. Time Step Is Required Step75: 17.3. Dynamic Vegetation Is Required Step76: 17.4. Tiling Is Required Step77: 17.5. Vegetation Representation Is Required Step78: 17.6. Vegetation Types Is Required Step79: 17.7. Biome Types Is Required Step80: 17.8. Vegetation Time Variation Is Required Step81: 17.9. Vegetation Map Is Required Step82: 17.10. Interception Is Required Step83: 17.11. Phenology Is Required Step84: 17.12. Phenology Description Is Required Step85: 17.13. Leaf Area Index Is Required Step86: 17.14. Leaf Area Index Description Is Required Step87: 17.15. Biomass Is Required Step88: 17.16. Biomass Description Is Required Step89: 17.17. Biogeography Is Required Step90: 17.18. Biogeography Description Is Required Step91: 17.19. Stomatal Resistance Is Required Step92: 17.20. Stomatal Resistance Description Is Required Step93: 17.21. Prognostic Variables Is Required Step94: 18. Energy Balance Land surface energy balance 18.1. Overview Is Required Step95: 18.2. Tiling Is Required Step96: 18.3. Number Of Surface Temperatures Is Required Step97: 18.4. Evaporation Is Required Step98: 18.5. Processes Is Required Step99: 19. Carbon Cycle Land surface carbon cycle 19.1. Overview Is Required Step100: 19.2. Tiling Is Required Step101: 19.3. Time Step Is Required Step102: 19.4. Anthropogenic Carbon Is Required Step103: 19.5. Prognostic Variables Is Required Step104: 20. Carbon Cycle --&gt; Vegetation TODO 20.1. Number Of Carbon Pools Is Required Step105: 20.2. Carbon Pools Is Required Step106: 20.3. Forest Stand Dynamics Is Required Step107: 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis TODO 21.1. Method Is Required Step108: 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration TODO 22.1. Maintainance Respiration Is Required Step109: 22.2. Growth Respiration Is Required Step110: 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation TODO 23.1. Method Is Required Step111: 23.2. Allocation Bins Is Required Step112: 23.3. 
Allocation Fractions Is Required Step113: 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology TODO 24.1. Method Is Required Step114: 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality TODO 25.1. Method Is Required Step115: 26. Carbon Cycle --&gt; Litter TODO 26.1. Number Of Carbon Pools Is Required Step116: 26.2. Carbon Pools Is Required Step117: 26.3. Decomposition Is Required Step118: 26.4. Method Is Required Step119: 27. Carbon Cycle --&gt; Soil TODO 27.1. Number Of Carbon Pools Is Required Step120: 27.2. Carbon Pools Is Required Step121: 27.3. Decomposition Is Required Step122: 27.4. Method Is Required Step123: 28. Carbon Cycle --&gt; Permafrost Carbon TODO 28.1. Is Permafrost Included Is Required Step124: 28.2. Emitted Greenhouse Gases Is Required Step125: 28.3. Decomposition Is Required Step126: 28.4. Impact On Soil Properties Is Required Step127: 29. Nitrogen Cycle Land surface nitrogen cycle 29.1. Overview Is Required Step128: 29.2. Tiling Is Required Step129: 29.3. Time Step Is Required Step130: 29.4. Prognostic Variables Is Required Step131: 30. River Routing Land surface river routing 30.1. Overview Is Required Step132: 30.2. Tiling Is Required Step133: 30.3. Time Step Is Required Step134: 30.4. Grid Inherited From Land Surface Is Required Step135: 30.5. Grid Description Is Required Step136: 30.6. Number Of Reservoirs Is Required Step137: 30.7. Water Re Evaporation Is Required Step138: 30.8. Coupled To Atmosphere Is Required Step139: 30.9. Coupled To Land Is Required Step140: 30.10. Quantities Exchanged With Atmosphere Is Required Step141: 30.11. Basin Flow Direction Map Is Required Step142: 30.12. Flooding Is Required Step143: 30.13. Prognostic Variables Is Required Step144: 31. River Routing --&gt; Oceanic Discharge TODO 31.1. Discharge Type Is Required Step145: 31.2. Quantities Transported Is Required Step146: 32. Lakes Land surface lakes 32.1. Overview Is Required Step147: 32.2. Coupling With Rivers Is Required Step148: 32.3. Time Step Is Required Step149: 32.4. Quantities Exchanged With Rivers Is Required Step150: 32.5. Vertical Grid Is Required Step151: 32.6. Prognostic Variables Is Required Step152: 33. Lakes --&gt; Method TODO 33.1. Ice Treatment Is Required Step153: 33.2. Albedo Is Required Step154: 33.3. Dynamics Is Required Step155: 33.4. Dynamic Lake Extent Is Required Step156: 33.5. Endorheic Basins Is Required Step157: 34. Lakes --&gt; Wetlands TODO 34.1. Description Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'uhh', 'sandbox-1', 'land') Explanation: ES-DOC CMIP6 Model Properties - Land MIP Era: CMIP6 Institute: UHH Source ID: SANDBOX-1 Topic: Land Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. Properties: 154 (96 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:41 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Conservation Properties 3. Key Properties --&gt; Timestepping Framework 4. Key Properties --&gt; Software Properties 5. Grid 6. Grid --&gt; Horizontal 7. Grid --&gt; Vertical 8. Soil 9. Soil --&gt; Soil Map 10. Soil --&gt; Snow Free Albedo 11. Soil --&gt; Hydrology 12. Soil --&gt; Hydrology --&gt; Freezing 13. Soil --&gt; Hydrology --&gt; Drainage 14. Soil --&gt; Heat Treatment 15. Snow 16. Snow --&gt; Snow Albedo 17. Vegetation 18. Energy Balance 19. Carbon Cycle 20. Carbon Cycle --&gt; Vegetation 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality 26. Carbon Cycle --&gt; Litter 27. Carbon Cycle --&gt; Soil 28. Carbon Cycle --&gt; Permafrost Carbon 29. Nitrogen Cycle 30. River Routing 31. River Routing --&gt; Oceanic Discharge 32. Lakes 33. Lakes --&gt; Method 34. Lakes --&gt; Wetlands 1. Key Properties Land surface key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of land surface model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of land surface model code (e.g. MOSES2.2) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.3. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "water" # "energy" # "carbon" # "nitrogen" # "phospherous" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.4. Land Atmosphere Flux Exchanges Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Fluxes exchanged with the atmopshere. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.5. Atmospheric Coupling Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.land_cover') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bare soil" # "urban" # "lake" # "land ice" # "lake ice" # "vegetated" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.6. Land Cover Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Types of land cover defined in the land surface model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.land_cover_change') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.7. Land Cover Change Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how land cover change is managed (e.g. the use of net or gross transitions) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.8. Tiling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.energy') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Conservation Properties TODO 2.1. Energy Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.water') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.2. Water Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how water is conserved globally and to what level (e.g. within X [units]/year) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.3. 
Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Timestepping Framework TODO 3.1. Timestep Dependent On Atmosphere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is a time step dependent on the frequency of atmosphere coupling? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overall timestep of land surface model (i.e. time between calls) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. Timestepping Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of time stepping method and associated time step(s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Software Properties Software properties of land surface code 4.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Grid Land surface grid 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the grid in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.horizontal.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Grid --&gt; Horizontal The horizontal grid in the land surface 6.1. 
Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the horizontal grid (not including any tiling) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.2. Matches Atmosphere Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the horizontal grid match the atmosphere? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.vertical.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Grid --&gt; Vertical The vertical grid in the soil 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the vertical grid in the soil (not including any tiling) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.vertical.total_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 7.2. Total Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The total depth of the soil (in metres) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Soil Land surface soil 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of soil in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_water_coupling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.2. Heat Water Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the coupling between heat and water in the soil End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.number_of_soil layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 8.3. Number Of Soil layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of soil layers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the soil scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Soil --&gt; Soil Map Key properties of the land surface soil map 9.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of soil map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.structure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.2. 
Structure Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil structure map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.texture') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.3. Texture Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil texture map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.organic_matter') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.4. Organic Matter Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil organic matter map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.5. Albedo Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil albedo map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.water_table') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.6. Water Table Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil water table map, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 9.7. Continuously Varying Soil Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the soil properties vary continuously with depth? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.soil_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.8. Soil Depth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil depth map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 10. Soil --&gt; Snow Free Albedo TODO 10.1. Prognostic Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is snow free albedo prognostic? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.functions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation type" # "soil humidity" # "vegetation state" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10.2. Functions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If prognostic, describe the dependancies on snow free albedo calculations End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "distinction between direct and diffuse albedo" # "no distinction between direct and diffuse albedo" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10.3. Direct Diffuse Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, describe the distinction between direct and diffuse albedo End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 10.4. Number Of Wavelength Bands Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, enter the number of wavelength bands used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11. Soil --&gt; Hydrology Key properties of the land surface soil hydrology 11.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of the soil hydrological model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 11.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of river soil hydrology in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.3. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil hydrology tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.4. Vertical Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the typical vertical discretisation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 11.5. Number Of Ground Water Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of soil layers that may contain water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "perfect connectivity" # "Darcian flow" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.6. Lateral Connectivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe the lateral connectivity between tiles End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.hydrology.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Bucket" # "Force-restore" # "Choisnel" # "Explicit diffusion" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.7. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The hydrological dynamics scheme in the land surface model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 12. Soil --&gt; Hydrology --&gt; Freezing TODO 12.1. Number Of Ground Ice Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How many soil layers may contain ground ice End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.2. Ice Storage Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method of ice storage End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.3. Permafrost Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of permafrost, if any, within the land surface scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.drainage.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 13. Soil --&gt; Hydrology --&gt; Drainage TODO 13.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General describe how drainage is included in the land surface scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.drainage.types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Gravity drainage" # "Horton mechanism" # "topmodel-based" # "Dunne mechanism" # "Lateral subsurface flow" # "Baseflow from groundwater" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Different types of runoff represented by the land surface model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14. Soil --&gt; Heat Treatment TODO 14.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of how heat treatment properties are defined End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of soil heat scheme in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.heat_treatment.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.3. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil heat treatment tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.4. Vertical Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the typical vertical discretisation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Force-restore" # "Explicit diffusion" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.5. Heat Storage Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the method of heat storage End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "soil moisture freeze-thaw" # "coupling with snow temperature" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.6. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe processes included in the treatment of soil heat End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15. Snow Land surface snow 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of snow in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.number_of_snow_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.3. Number Of Snow Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of snow levels used in the land surface scheme/model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.density') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "constant" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.4. Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow density End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.water_equivalent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.5. 
Water Equivalent Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of the snow water equivalent End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.heat_content') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.6. Heat Content Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of the heat content of snow End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.temperature') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.7. Temperature Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow temperature End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.liquid_water_content') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.8. Liquid Water Content Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow liquid water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.snow_cover_fractions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "ground snow fraction" # "vegetation snow fraction" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.9. Snow Cover Fractions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify cover fractions used in the surface snow scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "snow interception" # "snow melting" # "snow freezing" # "blowing snow" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.10. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Snow related processes in the land surface scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.11. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the snow scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.snow_albedo.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "prescribed" # "constant" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16. Snow --&gt; Snow Albedo TODO 16.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of snow-covered land albedo End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.snow.snow_albedo.functions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation type" # "snow age" # "snow density" # "snow grain type" # "aerosol deposition" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16.2. Functions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N *If prognostic, * End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17. Vegetation Land surface vegetation 17.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of vegetation in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 17.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of vegetation scheme in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.dynamic_vegetation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 17.3. Dynamic Vegetation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there dynamic evolution of vegetation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.4. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the vegetation tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation types" # "biome types" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.5. Vegetation Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Vegetation classification used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "broadleaf tree" # "needleleaf tree" # "C3 grass" # "C4 grass" # "vegetated" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.6. Vegetation Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of vegetation types in the classification, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biome_types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "evergreen needleleaf forest" # "evergreen broadleaf forest" # "deciduous needleleaf forest" # "deciduous broadleaf forest" # "mixed forest" # "woodland" # "wooded grassland" # "closed shrubland" # "opne shrubland" # "grassland" # "cropland" # "wetlands" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.7. 
Biome Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of biome types in the classification, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_time_variation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed (not varying)" # "prescribed (varying from files)" # "dynamical (varying from simulation)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.8. Vegetation Time Variation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How the vegetation fractions in each tile are varying with time End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_map') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.9. Vegetation Map Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.interception') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 17.10. Interception Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is vegetation interception of rainwater represented? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.phenology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic (vegetation map)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.11. Phenology Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation phenology End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.phenology_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.12. Phenology Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation phenology End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.leaf_area_index') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prescribed" # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.13. Leaf Area Index Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation leaf area index End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.leaf_area_index_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.14. Leaf Area Index Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of leaf area index End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biomass') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.15. 
Biomass Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 *Treatment of vegetation biomass * End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biomass_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.16. Biomass Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation biomass End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biogeography') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.17. Biogeography Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation biogeography End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biogeography_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.18. Biogeography Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation biogeography End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.stomatal_resistance') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "light" # "temperature" # "water availability" # "CO2" # "O3" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.19. Stomatal Resistance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify what the vegetation stomatal resistance depends on End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.20. Stomatal Resistance Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation stomatal resistance End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.21. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the vegetation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18. Energy Balance Land surface energy balance 18.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of energy balance in land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the energy balance tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 18.3. Number Of Surface Temperatures Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.evaporation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "alpha" # "beta" # "combined" # "Monteith potential evaporation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18.4. Evaporation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify the formulation method for land surface evaporation, from soil and vegetation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "transpiration" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18.5. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe which processes are included in the energy balance scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19. Carbon Cycle Land surface carbon cycle 19.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of carbon cycle in land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the carbon cycle tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 19.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of carbon cycle in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "grand slam protocol" # "residence time" # "decay time" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19.4. Anthropogenic Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Describe the treatment of the anthropogenic carbon pool End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the carbon scheme End of explanation # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 20. Carbon Cycle --&gt; Vegetation TODO 20.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20.3. Forest Stand Dynamics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the treatment of forest stand dynamics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis TODO 21.1. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration TODO 22.1. Maintainance Respiration Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for maintenance respiration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.2. Growth Respiration Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for growth respiration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation TODO 23.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the allocation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "leaves + stems + roots" # "leaves + stems + roots (leafy + woody)" # "leaves + fine roots + coarse roots + stems" # "whole plant (no distinction)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.2.
Allocation Bins Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify distinct carbon bins used in allocation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "function of vegetation type" # "function of plant allometry" # "explicitly calculated" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.3. Allocation Fractions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how the fractions of allocation are calculated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology TODO 24.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the phenology scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality TODO 25.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the mortality scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 26. Carbon Cycle --&gt; Litter TODO 26.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.4. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the general method used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 27. Carbon Cycle --&gt; Soil TODO 27.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.4. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the general method used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 28. Carbon Cycle --&gt; Permafrost Carbon TODO 28.1. Is Permafrost Included Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is permafrost included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28.2. Emitted Greenhouse Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the GHGs emitted End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28.4. Impact On Soil Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the impact of permafrost on soil properties End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29. Nitrogen Cycle Land surface nitrogen cycle 29.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the nitrogen cycle in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the nitrogen cycle tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 29.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of nitrogen cycle in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the nitrogen scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30. River Routing Land surface river routing 30.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of river routing in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the river routing, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of river routing scheme in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 30.4. Grid Inherited From Land Surface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the grid inherited from land surface? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.grid_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.5. Grid Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of grid, if not inherited from land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.number_of_reservoirs') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.6. Number Of Reservoirs Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of reservoirs End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.water_re_evaporation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "flood plains" # "irrigation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30.7. Water Re Evaporation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N TODO End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 30.8. Coupled To Atmosphere Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is river routing coupled to the atmosphere model component? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.coupled_to_land') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.9. Coupled To Land Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the coupling between land and rivers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30.10. Quantities Exchanged With Atmosphere Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If coupled to atmosphere, which quantities are exchanged between river routing and the atmosphere model components? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "present day" # "adapted for other periods" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30.11. Basin Flow Direction Map Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What type of basin flow direction map is being used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.flooding') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.12. Flooding Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the representation of flooding, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.13. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the river routing End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "direct (large rivers)" # "diffuse" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31. River Routing --&gt; Oceanic Discharge TODO 31.1. Discharge Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify how rivers are discharged to the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.2.
Quantities Transported Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Quantities that are exchanged from river-routing to the ocean model component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32. Lakes Land surface lakes 32.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lakes in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.coupling_with_rivers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 32.2. Coupling With Rivers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are lakes coupled to the river routing model component? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 32.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of lake scheme in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32.4. Quantities Exchanged With Rivers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If coupling with rivers, which quantities are exchanged between the lakes and rivers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.vertical_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32.5. Vertical Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the vertical grid of lakes End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32.6. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the lake scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.ice_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 33. Lakes --&gt; Method TODO 33.1. Ice Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is lake ice included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 33.2. Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of lake albedo End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.lakes.method.dynamics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "No lake dynamics" # "vertical" # "horizontal" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 33.3. Dynamics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which dynamics of lakes are treated? horizontal, vertical, etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 33.4. Dynamic Lake Extent Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is a dynamic lake extent scheme included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.endorheic_basins') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 33.5. Endorheic Basins Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basins not flowing to ocean included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.wetlands.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34. Lakes --&gt; Wetlands TODO 34.1. Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the treatment of wetlands, if any End of explanation
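For reference, a minimal sketch of how the DOC.set_value pattern used throughout this section might be completed; the property ids come from the cells above, but the chosen values are illustrative assumptions, not answers from any real CMIP6 submission.

DOC.set_id('cmip6.land.lakes.method.ice_treatment')
DOC.set_value(True)             # BOOLEAN property: lake ice is included
DOC.set_id('cmip6.land.lakes.method.albedo')
DOC.set_value("prognostic")     # ENUM property: pick one of the listed valid choices
DOC.set_id('cmip6.land.lakes.time_step')
DOC.set_value(1800)             # INTEGER property: lake scheme time step in seconds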
1,030
Given the following text description, write Python code to implement the functionality described below step by step Description: An example of using the BigARTM library for topic modeling. For BigARTM v0.8.0. Edited by Maxim Churilin. Importing BigARTM Step1: First read of the data (we convert the human-friendly format into the format the model uses) Step2: Next time the data can be read directly from the batches Step3: We create the model object Step4: We create a dictionary and initialize the model with it Step5: We fit the model. Offline means it makes many passes over the whole collection, which is convenient when the collection is small. Step6: The required number of iterations can be tracked on the perplexity plot. Once it stops changing, the model has converged. Step7: Let us print the top words Step8: Let us also look at the sparsity of the matrices Step9: The topics contain many common words (so-called background vocabulary). To avoid this, we will use a sparsifying regularizer for the phi matrix. It will suppress words that have a high frequency across the whole collection. Step10: Let us try changing the regularization coefficient Step11: Note that sparsifying the model is recommended only after it has converged without regularization. Saving and loading the model Step12: The matrices can be requested in raw form Step13: The matrix of topic probabilities in the documents. Step14: If we had new batches that the model was not fitted on and we only wanted to obtain the theta matrix, we could use the transform method.
Python Code: from matplotlib import pyplot as plt %matplotlib inline import artm Explanation: An example of using the BigARTM library for topic modeling. For BigARTM v0.8.0. Edited by Maxim Churilin. Importing BigARTM: End of explanation batch_vectorizer = artm.BatchVectorizer(data_path="school.txt", data_format="vowpal_wabbit", target_folder="school_batches", batch_size=100) Explanation: First read of the data (we convert the human-friendly format into the format the model uses): End of explanation batch_vectorizer = artm.BatchVectorizer(data_path="school_batches", data_format='batches') Explanation: Next time the data can be read directly from the batches: End of explanation T = 10 # number of topics model_artm = artm.ARTM(num_topics=T, topic_names=["sbj"+str(i) for i in range(T)], class_ids={"text":1}, num_document_passes=1, reuse_theta=True, cache_theta=True, seed=-1) # the number after a modality name is its weight Explanation: We create the model object: End of explanation dictionary = artm.Dictionary('dictionary') dictionary.gather(batch_vectorizer.data_path) model_artm.scores.add(artm.PerplexityScore(name='PerplexityScore', use_unigram_document_model=False, dictionary='dictionary')) model_artm.scores.add(artm.SparsityPhiScore(name='SparsityPhiScore', class_id="text")) model_artm.scores.add(artm.SparsityThetaScore(name='SparsityThetaScore')) model_artm.scores.add(artm.TopTokensScore(name="top_words", num_tokens=15, class_id="text")) model_artm.initialize('dictionary') Explanation: We create a dictionary and initialize the model with it End of explanation model_artm.fit_offline(batch_vectorizer=batch_vectorizer, num_collection_passes=40) Explanation: We fit the model. Offline means it makes many passes over the whole collection, which is convenient when the collection is small. End of explanation plt.plot(model_artm.score_tracker["PerplexityScore"].value) Explanation: The required number of iterations can be tracked on the perplexity plot. Once it stops changing, the model has converged. End of explanation for topic_name in model_artm.topic_names: print topic_name + ': ', tokens = model_artm.score_tracker["top_words"].last_tokens for word in tokens[topic_name]: print word, print Explanation: Let us print the top words: End of explanation print model_artm.score_tracker["SparsityPhiScore"].last_value print model_artm.score_tracker["SparsityThetaScore"].last_value Explanation: Let us also look at the sparsity of the matrices: End of explanation model_artm.regularizers.add(artm.SmoothSparsePhiRegularizer(name='SparsePhi', tau=-100, dictionary=dictionary)) # if you want to apply the regularizer only to some modalities, specify this in the class_ids parameter: class_ids=["text"] model_artm.fit_offline(batch_vectorizer=batch_vectorizer, num_collection_passes=15) for topic_name in model_artm.topic_names: print topic_name + ': ', tokens = model_artm.score_tracker["top_words"].last_tokens for word in tokens[topic_name]: print word, print print model_artm.score_tracker["SparsityPhiScore"].last_value print model_artm.score_tracker["SparsityThetaScore"].last_value Explanation: The topics contain many common words (so-called background vocabulary). To avoid this, we will use a sparsifying regularizer for the phi matrix. It will suppress words that have a high frequency across the whole collection.
End of explanation model_artm.regularizers['SparsePhi'].tau = -5*1e4 model_artm.fit_offline(batch_vectorizer=batch_vectorizer, num_collection_passes=15) for topic_name in model_artm.topic_names: print topic_name + ': ', tokens = model_artm.score_tracker["top_words"].last_tokens for word in tokens[topic_name]: print word, print # look at the sparsity once again print model_artm.score_tracker["SparsityPhiScore"].last_value print model_artm.score_tracker["SparsityThetaScore"].last_value Explanation: Let us try changing the regularization coefficient: End of explanation model_artm.save("my_model") model_artm.load("my_model") Explanation: Note that sparsifying the model is recommended only after it has converged without regularization. Saving and loading the model: End of explanation phi = model_artm.get_phi() phi Explanation: The matrices can be requested in raw form: End of explanation theta = model_artm.get_theta() theta Explanation: The matrix of topic probabilities in the documents. End of explanation theta_test = model_artm.transform(batch_vectorizer) Explanation: If we had new batches that the model was not fitted on and we only wanted to obtain the theta matrix, we could use the transform method. End of explanation
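As a brief illustration of that last step, a hedged sketch of how previously unseen documents might be scored with the fitted model: build a new BatchVectorizer for them and pass it to transform. The file and folder names here are hypothetical; the API calls mirror the ones already used above.

new_batch_vectorizer = artm.BatchVectorizer(data_path="new_docs.txt",
                                            data_format="vowpal_wabbit",
                                            target_folder="new_docs_batches",
                                            batch_size=100)
theta_new = model_artm.transform(batch_vectorizer=new_batch_vectorizer)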
1,031
Given the following text description, write Python code to implement the functionality described below step by step Description: Assignment 7 Lorenzo Biasi, Julius Vernie Step1: We load the variables and initialize the parameters we need Step2: We run the filter Step3: We can see a slight offset; we would expect that to be resolved by the smoother step Step4: We can see that the offset is still present and slightly worse Step5: The prediction is clearly following the data more or less correctly, but there is a problem with the offset, which makes $\tilde{\mu}$ worse than our $\mu$. This should not happen; we would rather expect the opposite. Step6: After checking the algorithm many times I decided to look at our x to see if there was anything strange. And if you look closely at the first time steps there is some oddity. Step7: If you look at how x varies at the first time step you will see that it is almost constant and then it starts changing. This could explain the offset in our predictions. Step8: To test my hunch I decided to remove one time step from the data, to make sure that $x_1$ was not used in the prediction.
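For reference, the filter and smoother recursions implemented in the code that follows can be written compactly, with the symbols matching the variables $A$, $B$, $C$, $\Gamma$, $\Sigma$, $K$, $V$, $L$ used there:

$$
\begin{aligned}
K_t &= L_{t-1} B^\top \left(B L_{t-1} B^\top + \Gamma\right)^{-1},\\
\mu_t &= A\mu_{t-1} + K_t\left(x_t - B A \mu_{t-1}\right) + C u_t,\\
V_t &= \left(I - K_t B\right) L_{t-1}, \qquad L_t = A V_t A^\top + \Sigma,
\end{aligned}
$$

followed by the backward (smoothing) pass

$$
\begin{aligned}
W_t &= V_t A^\top L_t^{-1},\\
\tilde V_t &= V_t + W_t\left(\tilde V_{t+1} - L_t\right) W_t^\top,\\
\tilde\mu_t &= \mu_t + W_t\left(\tilde\mu_{t+1} - A\mu_t\right).
\end{aligned}
$$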
Python Code: import numpy as np import matplotlib.pyplot as plt from scipy.io import loadmat from numpy.linalg import inv %matplotlib inline Explanation: Assignment 7 Lorenzo Biasi, Julius Vernie End of explanation data = loadmat('data_files/Tut7_file1.mat') locals().update(data) data.keys() p, T = z.shape mu = np.zeros(z.shape) K = np.zeros((4, 4, T)) V = np.zeros((4, 4, T)) L = np.zeros((4, 4, T)) K[...,0] = L0.dot(B.T.dot(inv(B.dot(L0.dot(B.T)) + Gamma))) mu[..., [0]] = A.dot(mu0) + K[..., 0].dot(x[:, [0]] - B.dot(A.dot(mu0))) + C.dot(u[..., [0]]) V[..., 0] = (np.eye(4) - K[..., 0].dot(B)).dot(L0) L[..., 0] = A.dot(V[..., 0].dot(A.T)) + Sigma Explanation: We load the variables and initialize the parameters we need End of explanation for t in range(1, T): K[...,t] = L[..., t - 1].dot(B.T.dot(inv(B.dot(L[..., t - 1].dot(B.T)) + Gamma))) mu[..., [t]] = A.dot(mu[..., [t-1]]) + K[..., t].dot(x[:, [t]] - B.dot(A.dot(mu[..., [t-1]]))) + C.dot(u[..., [t]]) V[..., t] = (np.eye(4) - K[..., t].dot(B)).dot(L[..., t-1]) L[..., t] = A.dot(V[..., t].dot(A.T)) + Sigma Explanation: We run the filter End of explanation plt.plot(mu.T) plt.plot(z.T, color='red') V_tilde = np.zeros(V.shape) mu_tilde = np.zeros(mu.shape) V_tilde[..., -1] = V[..., -1] mu_tilde[..., [-1]] = mu[..., [-1]] for t in range(T - 2, -1, -1): #print(t) W = V[..., t].dot(A.T.dot(inv(L[..., t]))) V_tilde[..., t] = V[..., t] + W.dot(V_tilde[..., t+1] - L[..., t]).dot(W.T) mu_tilde[..., [t]] = mu[..., [t]] + W.dot(mu_tilde[..., [t+1]] - A.dot(mu[..., [t]])) Explanation: We can see a slight offset; we would expect that to be resolved by the smoother step End of explanation plt.plot(mu_tilde.T) plt.plot(z.T, color='red') Explanation: We can see that the offset is still present and slightly worse End of explanation print ('Non smoothed result:', np.sum((mu - z).T ** 2)) print('Smoothed result:', np.sum((mu_tilde - z).T ** 2)) print('Ratio, \n', np.sum((mu_tilde - z).T ** 2) / np.sum((mu - z).T ** 2)) Explanation: The prediction is clearly following the data more or less correctly, but there is a problem with the offset, which makes $\tilde{\mu}$ worse than our $\mu$. This should not happen; we would rather expect the opposite. End of explanation plt.plot(x.T) Explanation: After checking the algorithm many times I decided to look at our x to see if there was anything strange. And if you look closely at the first time steps there is some oddity. End of explanation #plt.plot(x.T[:4, :]) plt.plot(np.diff(x[..., :10]).T) np.diff(x[..., :4]) Explanation: If you look at how x varies at the first time step you will see that it is almost constant and then it starts changing. This could explain the offset in our predictions.
End of explanation T = 99 z = z[:, :-1] mu = np.zeros(z.shape) K = np.zeros((4, 4, T)) V = np.zeros((4, 4, T)) L = np.zeros((4, 4, T)) K[...,0] = L0.dot(B.T.dot(inv(B.dot(L0.dot(B.T)) + Gamma))) mu[..., [0]] = mu0 V[..., 0] = 0 L[..., 0] = L0 for t in range(1, T): #print(t) K[...,t] = L[..., t - 1].dot(B.T.dot(inv(B.dot(L[..., t - 1].dot(B.T)) + Gamma))) mu[..., [t]] = A.dot(mu[..., [t-1]]) + K[..., t].dot(x[:, [t + 1]] - B.dot(A.dot(mu[..., [t-1]]))) + C.dot(u[..., [t]]) V[..., t] = (np.eye(4) - K[..., t].dot(B)).dot(L[..., t-1]) L[..., t] = A.dot(V[..., t].dot(A.T)) + Sigma plt.plot(mu.T) plt.plot(z.T, color='red') np.sum((mu - z)**2) A.dot(mu[..., [t-1]]) + K[..., t].dot(x[:, [t + 1]] - B.dot(A.dot(mu[..., [t-1]]))) + C.dot(u[..., [t]]) V_tilde = np.zeros(V.shape) mu_tilde = np.zeros(mu.shape) V_tilde[..., -1] = V[..., -1] mu_tilde[..., [-1]] = mu[..., [-1]] for t in range(T - 2, -1, -1): W = V[..., t].dot(A.T.dot(inv(L[..., t]))) V_tilde[..., t] = V[..., t] + W.dot(V_tilde[..., t+1] - L[..., t]).dot(W.T) mu_tilde[..., [t]] = mu[..., [t]] + W.dot(mu_tilde[..., [t+1]] - A.dot(mu[..., [t]])) plt.plot(mu_tilde.T) plt.plot(z.T) print ('Non smoothed result:', np.sum((mu - z).T ** 2)) print('Smoothed result:', np.sum((mu_tilde - z).T ** 2)) print('Ratio, \n', np.sum((mu_tilde - z).T ** 2) / np.sum((mu - z).T ** 2)) Explanation: To test my hunch I decided to remove one time step from the data, to make sure that $x_1$ was not used in the prediction. End of explanation
1,032
Given the following text description, write Python code to implement the functionality described below step by step Description: This notebook explains how to add batch normalization to VGG. The code shown here is implemented in vgg_bn.py, and there is a version of vgg_ft (our fine tuning function) with batch norm called vgg_ft_bn in utils.py. Step1: The problem, and the solution The problem The problem that we faced in lesson 3 is that when we wanted to add batch normalization, we initialized all the dense layers of the model to random weights, and then tried to train them with our cats v dogs dataset. But that's a lot of weights to initialize to random values - out of 134m params, around 119m are in the dense layers! Take a moment to think about why this is, and convince yourself that dense layers are where most of the weights will be. Also, think about whether this implies that most of the time will be spent training these weights. What do you think? Trying to train 120m params using just 23k images is clearly an unreasonable expectation. The reason we haven't had this problem before is that the dense layers were not random, but were trained to recognize imagenet categories (other than the very last layer, which only has 8194 params). The solution The solution, obviously enough, is to add batch normalization to the VGG model! To do so, we have to be careful - we can't just insert batchnorm layers, since their parameters (gamma - which is used to multiply by each activation, and beta - which is used to add to each activation) will not be set correctly. Without setting these correctly, the new batchnorm layers will normalize the previous layer's activations, meaning that the next layer will receive totally different activations to what it would have without the new batchnorm layer. And that means that all the pre-trained weights are no longer of any use! So instead, we need to figure out what beta and gamma to choose when we insert the layers. The answer to this turns out to be pretty simple - we need to calculate what the mean and standard deviation of the activations for that layer are when calculated on all of imagenet, and then set beta and gamma to these values. That means that the new batchnorm layer will normalize the data with the mean and standard deviation, and then immediately un-normalize the data using the beta and gamma parameters we provide. So the output of the batchnorm layer will be identical to its input - which means that all the pre-trained weights will continue to work just as well as before. The benefit of this is that when we wish to fine-tune our own networks, we will have all the benefits of batch normalization (higher learning rates, more resilient training, and less need for dropout) plus all the benefits of a pre-trained network. To calculate the mean and standard deviation of the activations on imagenet, we need to download imagenet. You can download imagenet from http://www.image-net.org/download-images Step2: Data setup We set up our paths, data, and labels in the usual way. Note that we don't try to read all of Imagenet into memory! We only load the sample into memory. Step3: Model setup Since we're just working with the dense layers, we should pre-compute the output of the convolutional layers. Step4: This is our usual Vgg network just covering the dense layers Step5: Check model It's a good idea to check that your models are giving reasonable answers, before using them.
Step6: Adding our new layers Calculating batchnorm params To calculate the output of a layer in a Keras sequential model, we have to create a function that defines the input layer and the output layer, like this Step7: Then we can call the function to get our layer activations Step8: Now that we've got our activations, we can calculate the mean and standard deviation for each (note that due to a bug in keras, it's actually the variance that we'll need). Step9: Creating batchnorm model Now we're ready to create and insert our layers just after each dense layer. Step10: After inserting the layers, we can set their weights to the variance and mean we just calculated. Step11: We should find that the new model gives identical results to those provided by the original VGG model. Step12: Optional - additional fine-tuning Now that we have a VGG model with batchnorm, we might expect that the optimal weights would be a little different to what they were when originally created without batchnorm. So we fine tune the weights for one epoch. Step13: The results look quite encouraging! Note that these VGG weights are now specific to how keras handles image scaling - that is, it squashes and stretches images, rather than adding black borders. So this model is best used on images created in that way. Step14: Create combined model Our last step is simply to copy our new dense layers on to the end of the convolutional part of the network, and save the new complete set of weights, so we can use them in the future when using VGG. (Of course, we'll also need to update our VGG architecture to add the batchnorm layers).
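To make the beta/gamma choice described above concrete: a batchnorm layer computes, per activation, $y = \gamma\,\frac{x - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}} + \beta$. If $\mu_B$ and $\sigma_B^2$ are the mean and variance of that layer's activations over the imagenet sample, then choosing $\gamma = \sigma_B$ and $\beta = \mu_B$ gives $y \approx x$ (exactly so up to the small $\epsilon$ term), which is why the downstream pre-trained weights remain valid. This is only a restatement of the idea above; the keras-specific detail of supplying the variance rather than the standard deviation is handled in the code that follows.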
Python Code: from __future__ import division, print_function %matplotlib inline from importlib import reload import utils; reload(utils) from utils import * Explanation: This notebook explains how to add batch normalization to VGG. The code shown here is implemented in vgg_bn.py, and there is a version of vgg_ft (our fine tuning function) with batch norm called vgg_ft_bn in utils.py. End of explanation # %pushd data/imagenet %pushd data/imagenet %cd train %mkdir ../sample %mkdir ../sample/train %mkdir ../sample/valid from shutil import copyfile g = glob('*') for d in g: os.mkdir('../sample/train/'+d) os.mkdir('../sample/valid/'+d) g = glob('*/*.JPEG') shuf = np.random.permutation(g) for i in range(25000): copyfile(shuf[i], '../sample/train/' + shuf[i]) %cd ../valid g = glob('*/*.JPEG') shuf = np.random.permutation(g) for i in range(5000): copyfile(shuf[i], '../sample/valid/' + shuf[i]) %cd .. %mkdir sample/results %popd Explanation: The problem, and the solution The problem The problem that we faced in lesson 3 is that when we wanted to add batch normalization, we initialized all the dense layers of the model to random weights, and then tried to train them with our cats v dogs dataset. But that's a lot of weights to initialize to random values - out of 134m params, around 119m are in the dense layers! Take a moment to think about why this is, and convince yourself that dense layers are where most of the weights will be. Also, think about whether this implies that most of the time will be spent training these weights. What do you think? Trying to train 120m params using just 23k images is clearly an unreasonable expectation. The reason we haven't had this problem before is that the dense layers were not random, but were trained to recognize imagenet categories (other than the very last layer, which only has 8194 params). The solution The solution, obviously enough, is to add batch normalization to the VGG model! To do so, we have to be careful - we can't just insert batchnorm layers, since their parameters (gamma - which is used to multiply by each activation, and beta - which is used to add to each activation) will not be set correctly. Without setting these correctly, the new batchnorm layers will normalize the previous layer's activations, meaning that the next layer will receive totally different activations to what it would have without the new batchnorm layer. And that means that all the pre-trained weights are no longer of any use! So instead, we need to figure out what beta and gamma to choose when we insert the layers. The answer to this turns out to be pretty simple - we need to calculate what the mean and standard deviation of the activations for that layer are when calculated on all of imagenet, and then set beta and gamma to these values. That means that the new batchnorm layer will normalize the data with the mean and standard deviation, and then immediately un-normalize the data using the beta and gamma parameters we provide. So the output of the batchnorm layer will be identical to its input - which means that all the pre-trained weights will continue to work just as well as before. The benefit of this is that when we wish to fine-tune our own networks, we will have all the benefits of batch normalization (higher learning rates, more resilient training, and less need for dropout) plus all the benefits of a pre-trained network. To calculate the mean and standard deviation of the activations on imagenet, we need to download imagenet.
You can download imagenet from http://www.image-net.org/download-images . The file you want is the one titled Download links to ILSVRC2013 image data. You'll need to request access from the imagenet admins for this, although it seems to be an automated system - I've always found that access is provided instantly. Once you're logged in and have gone to that page, look for the CLS-LOC dataset section. Both training and validation images are available, and you should download both. There's not much reason to download the test images, however. Note that this will not be the entire imagenet archive, but just the 1000 categories that are used in the annual competition. Since that's what VGG16 was originally trained on, that seems like a good choice - especially since the full dataset is 1.1 terabytes, whereas the 1000 category dataset is 138 gigabytes. Adding batchnorm to Imagenet Setup Sample As per usual, we create a sample so we can experiment more rapidly. End of explanation sample_path = "data/imagenet/sample/" path = "data/imagenet/" #sample_path = 'data/jhoward/imagenet/sample/' # This is the path to my fast SSD - I put datasets there when I can to get the speed benefit #fast_path = '/home/jhoward/ILSVRC2012_img_proc/' #path = '/data/jhoward/imagenet/sample/' #path = 'data/jhoward/imagenet/' batch_size=64 samp_trn = get_data(sample_path+'train') samp_val = get_data(sample_path+'valid') save_array(sample_path+'results/trn.dat', samp_trn) save_array(sample_path+'results/val.dat', samp_val) samp_trn = load_array(sample_path+'results/trn.dat') samp_val = load_array(sample_path+'results/val.dat') (val_classes, trn_classes, val_labels, trn_labels, val_filenames, filenames, test_filenames) = get_classes(path) (samp_val_classes, samp_trn_classes, samp_val_labels, samp_trn_labels, samp_val_filenames, samp_filenames, samp_test_filenames) = get_classes(sample_path) Explanation: Data setup We set up our paths, data, and labels in the usual way. Note that we don't try to read all of Imagenet into memory! We only load the sample into memory. End of explanation vgg = Vgg16() model = vgg.model layers = model.layers last_conv_idx = [index for index,layer in enumerate(layers) if type(layer) is Conv2D][-1] conv_layers = layers[:last_conv_idx+1] dense_layers = layers[last_conv_idx+1:] conv_model = Sequential(conv_layers) samp_conv_val_feat = conv_model.predict(samp_val, batch_size=batch_size*2) samp_conv_feat = conv_model.predict(samp_trn, batch_size=batch_size*2) save_array(sample_path+'results/conv_val_feat.dat', samp_conv_val_feat) save_array(sample_path+'results/conv_feat.dat', samp_conv_feat) samp_conv_feat = load_array(sample_path+'results/conv_feat.dat') samp_conv_val_feat = load_array(sample_path+'results/conv_val_feat.dat') samp_conv_val_feat.shape Explanation: Model setup Since we're just working with the dense layers, we should pre-compute the output of the convolutional layers. 
End of explanation def get_dense_layers(): return [ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dense(4096, activation='relu'), Dropout(0.5), Dense(4096, activation='relu'), Dropout(0.5), # Dense(1000, activation='softmax') Dense(1000, activation='relu') ] dense_model = Sequential(get_dense_layers()) for l1, l2 in zip(dense_layers, dense_model.layers): l2.set_weights(l1.get_weights()) dense_model.add(Dense(763, activation='softmax')) Explanation: This is our usual Vgg network just covering the dense layers: End of explanation dense_model.compile(Adam(), 'categorical_crossentropy', ['accuracy']) dense_model.evaluate(samp_conv_val_feat, samp_val_labels) model.compile(Adam(), 'categorical_crossentropy', ['accuracy']) # should be identical to above # model.evaluate(val, val_labels) # should be a little better than above, since VGG authors overfit # dense_model.evaluate(conv_feat, trn_labels) Explanation: Check model It's a good idea to check that your models are giving reasonable answers, before using them. End of explanation k_layer_out = K.function([dense_model.layers[0].input, K.learning_phase()], [dense_model.layers[2].output]) Explanation: Adding our new layers Calculating batchnorm params To calculate the output of a layer in a Keras sequential model, we have to create a function that defines the input layer and the output layer, like this: End of explanation d0_out = k_layer_out([samp_conv_val_feat, 0])[0] k_layer_out = K.function([dense_model.layers[0].input, K.learning_phase()], [dense_model.layers[4].output]) d2_out = k_layer_out([samp_conv_val_feat, 0])[0] Explanation: Then we can call the function to get our layer activations: End of explanation mu0,var0 = d0_out.mean(axis=0), d0_out.var(axis=0) mu2,var2 = d2_out.mean(axis=0), d2_out.var(axis=0) Explanation: Now that we've got our activations, we can calculate the mean and standard deviation for each (note that due to a bug in keras, it's actually the variance that we'll need). End of explanation nl1 = BatchNormalization() nl2 = BatchNormalization() bn_model = insert_layer(dense_model, nl2, 5) bn_model = insert_layer(bn_model, nl1, 3) bnl1 = bn_model.layers[3] bnl4 = bn_model.layers[6] Explanation: Creating batchnorm model Now we're ready to create and insert our layers just after each dense layer. End of explanation bnl1.set_weights([var0, mu0, mu0, var0]) bnl4.set_weights([var2, mu2, mu2, var2]) bn_model.compile(Adam(1e-5), 'categorical_crossentropy', ['accuracy']) Explanation: After inserting the layers, we can set their weights to the variance and mean we just calculated. End of explanation bn_model.evaluate(samp_conv_val_feat, samp_val_labels) bn_model.evaluate(samp_conv_feat, samp_trn_labels) Explanation: We should find that the new model gives identical results to those provided by the original VGG model. End of explanation feat_bc = bcolz.open(fast_path+'trn_features.dat') labels = load_array(fast_path+'trn_labels.dat') val_feat_bc = bcolz.open(fast_path+'val_features.dat') val_labels = load_array(fast_path+'val_labels.dat') bn_model.fit(feat_bc, labels, nb_epoch=1, batch_size=batch_size, validation_data=(val_feat_bc, val_labels)) Explanation: Optional - additional fine-tuning Now that we have a VGG model with batchnorm, we might expect that the optimal weights would be a little different to what they were when originally created without batchnorm. So we fine tune the weights for one epoch. 
End of explanation bn_model.save_weights(path+'models/bn_model2.h5') bn_model.load_weights(path+'models/bn_model2.h5') Explanation: The results look quite encouraging! Note that these VGG weights are now specific to how keras handles image scaling - that is, it squashes and stretches images, rather than adding black borders. So this model is best used on images created in that way. End of explanation new_layers = copy_layers(bn_model.layers) for layer in new_layers: conv_model.add(layer) copy_weights(bn_model.layers, new_layers) conv_model.compile(Adam(1e-5), 'categorical_crossentropy', ['accuracy']) conv_model.evaluate(samp_val, samp_val_labels) conv_model.save_weights(path+'models/inet_224squash_bn.h5') Explanation: Create combined model Our last step is simply to copy our new dense layers on to the end of the convolutional part of the network, and save the new complete set of weights, so we can use them in the future when using VGG. (Of course, we'll also need to update our VGG architecture to add the batchnorm layers). End of explanation
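A closing note on the batchnorm initialisation used above: the key idea is to set the new layers' parameters from the observed activation statistics so that, at insertion time, each BatchNormalization layer acts (approximately) as the identity and the network's predictions are unchanged. The NumPy sketch below only illustrates that arithmetic; the exact weight ordering expected by a particular Keras version may differ, which is why the notebook itself works around a Keras quirk by passing the variance in the first weight slot.

import numpy as np

def identity_batchnorm_params(activations, eps=1e-3):
    # Per-feature statistics of the activations feeding the new layer.
    mu = activations.mean(axis=0)
    var = activations.var(axis=0)
    # gamma = sqrt(var + eps) and beta = mu undo the normalisation on data
    # with these statistics, so the freshly inserted layer changes nothing.
    return np.sqrt(var + eps), mu, mu, var

def batchnorm_inference(x, gamma, beta, moving_mean, moving_var, eps=1e-3):
    return gamma * (x - moving_mean) / np.sqrt(moving_var + eps) + beta

acts = 3.0 * np.random.randn(256, 8) + 1.5
gamma, beta, mu, var = identity_batchnorm_params(acts)
assert np.allclose(batchnorm_inference(acts, gamma, beta, mu, var), acts)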
1,033
Given the following text description, write Python code to implement the functionality described below step by step Description: This is exactly the same as the OpenMM-based alanine dipeptide example, but this one uses Gromacs! Imports Step1: Setting up the engine Now we set things up for the Gromacs simulation. Note that all the details are in the mdp file, just as always with Gromacs. Currently, we need to define a few options that reproduce some of the mdp file; in the future, that information may be read from the mdp. Step2: We set several entries in the options dictionary. Like all OPS engines, the options dictionary for Gromacs includes some important ones that you are likely to set Step3: There are several arguments for the engine as well. In addition to the options dictionary above, you'll need the gro argument (used for grompp's -c; can be a gro, pdb, etc.), the mdp, and the top. There are two other arguments as well Step4: The storage file will need a template snapshot. Step5: Defining states First we define the CVs using the md.compute_dihedrals function. Then we define our states using PeriodicCVDefinedVolume (since our CVs are periodic.) Step6: Getting a first trajectory Here we'll use the VisitAllStatesEnsemble to create a trajectory that has visited all states, using the high temperature engine. This approach is reasonable for 2-state TPS and multiple state TIS simulations. VisitAllStatesEnsemble is more than is needed for multiple state TPS, and isn't guaranteed to provide all the needed initial conditions for multiple interface set TIS. The underlying theory of the VisitAllStatesEnsemble is described in the OpenMM alanine dipeptide TPS example. Step7: Plotting the trajectory Step8: Setting up another engine We'll create another engine that uses a 300K integrator, and equilibrate to a 300K path from the 500K path. Step9: Equilibrate TPS This is, again, a simple path sampling setup. We use the same TPSNetwork we'll use later, and only shooting moves. One the initial conditions are correctly set up, we run one step at a time until the initial trajectory is decorrelated. This setup of a path sampler always consists of defining a network and a move_scheme. See toy model notebooks for further discussion. Step10: From here, you can either extend this to a longer trajectory for the fixed length TPS in the alanine_dipeptide_fixed_tps_traj.ipynb notebook, or go straight to flexible length TPS in the alanine_dipeptide_tps_run.ipynb notebook.
Python Code: from __future__ import print_function %matplotlib inline import matplotlib.pyplot as plt import openpathsampling as paths from openpathsampling.engines import gromacs as ops_gmx import mdtraj as md import numpy as np Explanation: This is exactly the same as the OpenMM-based alanine dipeptide example, but this one uses Gromacs! Imports End of explanation %%bash # remove files created by previous run of this notebook rm -rf hi_T* rm -rf equil_* rm -rf \#* rm -rf initial_*.trr options = { 'gmx_executable': 'gmx -nobackup ', # run gmx how you like it! 'snapshot_timestep': 0.02, 'n_frames_max': 10000, } Explanation: Setting up the engine Now we set things up for the Gromacs simulation. Note that all the details are in the mdp file, just as always with Gromacs. Currently, we need to define a few options that reproduce some of the mdp file; in the future, that information may be read from the mdp. End of explanation hi_T_engine = ops_gmx.Engine(gro="conf.gro", mdp="hi_temp.mdp", top="topol.top", options=options, base_dir=".", prefix="hi_T").named("500K") Explanation: We set several entries in the options dictionary. Like all OPS engines, the options dictionary for Gromacs includes some important ones that you are likely to set: 'snapshot_timestep': Time between output frames in the TRR. Defaults to 1 (setting unit of time to "frames"), but you probably want to set this to dt * nstxout. Setting this is optional, but can assist in several analysis routines. 'n_frames_max': Maximum number of frames. This must be less than the corresponding nsteps entry in your Gromacs mdp file, otherwise Gromacs might end the trajectory before OPS tells it to, and this will leave OPS hanging. Don't forget that the mdp's nsteps is in units of the inner timestep, whereas OPS's n_frames_max is in unit of saved frames. So n_frames_max should be less than nsteps / nstxout. (Usually, you set the max number of frames in OPS first, and make sure your nsteps corresponds.) There are also several options specific to Gromacs: 'gmx_executable': This is the Gromacs command exactly as you need to call it. This allows you to, for example, use Gromacs in some specific path, or to use gmx_mpi instead of gmx. Note that, for modern Gromacs, this command should end in a space -- the subcommands grompp and mdrun do not automatically include a space. 'grompp_args': A string with additional arguments for grompp. 'mdrun_args': A string with additional arguments for mdrun. Finally, there are a few restrictions on your mdp file that you should be careful about: nsteps: See discussion of 'n_frames_max' above. nstxout, nstvout, nstenergy: All of these should be equal to each other. integrator: Path sampling should always use a reversible integrator; leapfrog-style integrators may be unstable. End of explanation print(hi_T_engine.grompp_command) print(hi_T_engine.engine_command()) Explanation: There are several arguments for the engine as well. In addition to the options dictionary above, you'll need the gro argument (used for grompp's -c; can be a gro, pdb, etc.), the mdp, and the top. There are two other arguments as well: base_dir sets the working directory for where to find the input files and place the output files, and prefix sets a prefix for the subdirectories where the output files goes (trr, edr, and log). Internally, the OPS Gromacs engine will fork off a Gromacs process, just as you would on the command line. 
You can see the exact commands that it will use (now with a few placeholder arguments for input/output filenames, but once the engine is running, this will show the exact command being used): End of explanation template = hi_T_engine.current_snapshot template.topology Explanation: The storage file will need a template snapshot. End of explanation # define the CVs psi = paths.MDTrajFunctionCV("psi", md.compute_dihedrals, template.topology, indices=[[6,8,14,16]]) phi = paths.MDTrajFunctionCV("phi", md.compute_dihedrals, template.topology, indices=[[4,6,8,14]]) # define the states deg = 180.0/np.pi C_7eq = (paths.PeriodicCVDefinedVolume(phi, lambda_min=-180/deg, lambda_max=0/deg, period_min=-np.pi, period_max=np.pi) & paths.PeriodicCVDefinedVolume(psi, lambda_min=100/deg, lambda_max=200/deg, period_min=-np.pi, period_max=np.pi) ).named("C_7eq") # similarly, without bothering with the labels: alpha_R = (paths.PeriodicCVDefinedVolume(phi, -180/deg, 0/deg, -np.pi, np.pi) & paths.PeriodicCVDefinedVolume(psi, -100/deg, 0/deg, -np.pi, np.pi)).named("alpha_R") Explanation: Defining states First we define the CVs using the md.compute_dihedrals function. Then we define our states using PeriodicCVDefinedVolume (since our CVs are periodic.) End of explanation visit_all = paths.VisitAllStatesEnsemble(states=[C_7eq, alpha_R], timestep=0.02) trajectory = hi_T_engine.generate(hi_T_engine.current_snapshot, [visit_all.can_append]) # create a network so we can use its ensemble to obtain an initial trajectory # use all-to-all because we don't care if initial traj is A->B or B->A: it can be reversed tmp_network = paths.TPSNetwork.from_states_all_to_all([C_7eq, alpha_R]) # take the subtrajectory matching the ensemble (for TPS only one ensemble, so only one subtraj) subtrajectories = [] for ens in tmp_network.analysis_ensembles: subtrajectories += ens.split(trajectory) print(subtrajectories) Explanation: Getting a first trajectory Here we'll use the VisitAllStatesEnsemble to create a trajectory that has visited all states, using the high temperature engine. This approach is reasonable for 2-state TPS and multiple state TIS simulations. VisitAllStatesEnsemble is more than is needed for multiple state TPS, and isn't guaranteed to provide all the needed initial conditions for multiple interface set TIS. The underlying theory of the VisitAllStatesEnsemble is described in the OpenMM alanine dipeptide TPS example. End of explanation plt.plot(phi(trajectory), psi(trajectory), 'k.-') plt.plot(phi(subtrajectories[0]), psi(subtrajectories[0]), 'r') Explanation: Plotting the trajectory End of explanation engine = ops_gmx.Engine(gro="conf.gro", mdp="md.mdp", top="topol.top", options=options, base_dir=".", prefix="equil").named("tps_equil") Explanation: Setting up another engine We'll create another engine that uses a 300K integrator, and equilibrate to a 300K path from the 500K path. 
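Since this second engine reads its own mdp file (md.mdp here), it can be worth re-checking the mdp restrictions described earlier before running. The helper below is a hypothetical sketch, not part of OpenPathSampling; it assumes a plain "key = value" mdp format with ';' comments.

def check_mdp_against_options(mdp_path, options):
    # Parse simple "key = value" lines, ignoring comments.
    mdp = {}
    with open(mdp_path) as f:
        for line in f:
            line = line.split(';')[0].strip()
            if '=' in line:
                key, val = (s.strip() for s in line.split('=', 1))
                mdp[key] = val
    nstxout = int(mdp.get('nstxout', 0))
    assert nstxout == int(mdp.get('nstvout', 0)) == int(mdp.get('nstenergy', 0)), \
        "nstxout, nstvout and nstenergy should all be equal"
    assert options['n_frames_max'] * nstxout < int(mdp.get('nsteps', 0)), \
        "n_frames_max * nstxout must stay below nsteps, or Gromacs ends the run first"

# Hypothetical usage:
# check_mdp_against_options("md.mdp", options)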
End of explanation network = paths.TPSNetwork(initial_states=C_7eq, final_states=alpha_R).named("tps_network") scheme = paths.OneWayShootingMoveScheme(network, selector=paths.UniformSelector(), engine=engine) # make subtrajectories into initial conditions (trajectories become a sampleset) initial_conditions = scheme.initial_conditions_from_trajectories(subtrajectories) # check that initial conditions are valid and complete (raise AssertionError otherwise) scheme.assert_initial_conditions(initial_conditions) # use an empty background (fig, ax) = plt.subplots() plt.xlim(-np.pi, np.pi) plt.ylim(-np.pi, np.pi); sampler = paths.PathSampling(storage=paths.Storage("alanine_dipeptide_tps_equil.nc", "w", template), move_scheme=scheme, sample_set=initial_conditions) sampler.live_visualizer = paths.StepVisualizer2D(network, phi, psi, [-np.pi, np.pi], [-np.pi, np.pi]) sampler.live_visualizer.background = fig # initially, these trajectories are correlated (actually, identical) # once decorrelated, we have a (somewhat) reasonable 300K trajectory initial_conditions[0].trajectory.is_correlated(sampler.sample_set[0].trajectory) # this is a trick to take the first decorrelated trajectory sampler.run_until_decorrelated() # run an extra 10 to decorrelate a little futher sampler.run(10) Explanation: Equilibrate TPS This is, again, a simple path sampling setup. We use the same TPSNetwork we'll use later, and only shooting moves. One the initial conditions are correctly set up, we run one step at a time until the initial trajectory is decorrelated. This setup of a path sampler always consists of defining a network and a move_scheme. See toy model notebooks for further discussion. End of explanation sampler.storage.close() Explanation: From here, you can either extend this to a longer trajectory for the fixed length TPS in the alanine_dipeptide_fixed_tps_traj.ipynb notebook, or go straight to flexible length TPS in the alanine_dipeptide_tps_run.ipynb notebook. End of explanation
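One more aside on the state definitions used above: the states are built from PeriodicCVDefinedVolume objects, whose job is essentially a wrap-around interval test on the dihedral angles. The snippet below is only an illustration of that logic in plain NumPy (it is not how OPS implements it internally), using the same psi window as the C_7eq state.

import numpy as np

def in_periodic_window(angle, lambda_min, lambda_max, period=2.0 * np.pi):
    # Work modulo the period, measured from lambda_min, so that windows
    # which cross the +/- pi boundary are handled correctly.
    span = (lambda_max - lambda_min) % period
    return (angle - lambda_min) % period <= span

deg = 180.0 / np.pi
assert in_periodic_window(170.0 / deg, 100.0 / deg, 200.0 / deg)
assert in_periodic_window(-170.0 / deg, 100.0 / deg, 200.0 / deg)  # same angle as +190 deg
assert not in_periodic_window(0.0, 100.0 / deg, 200.0 / deg)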
1,034
Given the following text description, write Python code to implement the functionality described below step by step Description: CAPITOLO 1.1 Step1: Iterazione nelle liste e cicli for su indice Step2: DIZIONARI Step3: Iterazione nei dizionari ATTENZIONE Step4: DYI Step5: WARNING / DANGER / EXPLOSION / ATTENZIONE! Cosa succede se inizializzo liste e dizionari nei parametri nominali di una funzione? Step7: Funzioni con parametri posizionali, nominali, e arbitrari Una funzione si definisce con def &lt;nomefunzione&gt;([parametri]) dove i parametri possono essere Step8: DYI
Python Code: # creazione l = [1,2,3,10,"a", -12.333, 1024, 768, "pippo"] # concatenazione l += ["la", "concatenazione", "della", "lista"] # aggiunta elementi in fondo l.append(32) l.append(3) print(u"la lista è {}".format(l)) l.remove(3) # rimuove la prima occorrenza print(u"la lista è {}".format(l)) i = l.index(10) # restituisce l'indice della prima occorrenza del valore 10 print(u"l'indice di 10 è {}".format(i)) print(u"il valore all'indice 3 è {}".format(l[3])) print(u"** vediamo come funziona lo SLICING delle liste") print(u"Ecco i primi 3 valori della lista {}".format(l[:3])) print(u"e poi i valori dal 3o al penultimo {}".format(l[3:-1])) print(u"e poi i valori dal 3o al penultimo, ma ogni 2 {}".format(l[3:-1:2])) print("\n***FUNZIONI RANGE e XRANGE***\n") l2 = range(1000) # questi sono i primi 1000 valori da 0 a 999 print(u"ecco la lista ogni 50 elementi di n <=1000: {}".format(l2[::50])) # LA FUNZIONE xrange è comoda per ottenere un oggetto tipo (ma non = a ) un generatore # da cui i numeri vengono appunto generati al momento dell'accesso all'elemento stesso # della sequenza # Il codice di prima dà errore try: l2 = xrange(1000) # questi sono i primi 1000 valori da 0 a 999 ma senza occupare RAM print(u"ecco la lista ogni 50 elementi di n <= 1000: {}".format(l2[::50])) except Exception as e: print("ECCEZIONE {}: {}".format(type(e), e)) # Il codice che funziona con lo slice valuta xrange in una lista quindi # risulta inutile l2 = list(xrange(1000)) # questi sono i primi 1000 valori da 0 a 999 ma senza occupare RAM print(u"ecco la lista ogni 50 elementi di n <= 1000: {}\n".format(l2[::50])) ## ma si può fare direttamente con range o xrange! print(u"[OK] lista ogni 50 elementi <= 1000: {}".format(range(0,1000,50))) Explanation: CAPITOLO 1.1: liste, dizionari e modello dati LISTE: Operazioni e metodi End of explanation print("***PER FARE UN CICLO FOR CON INDICE INCREMENTALE SI USA XRANGE!") for el in xrange(1,21): print("numero {}".format(el)) print("***PER NUMERARE GLI ELEMENTI DI UNA LISTA SI USA ENUMERATE!") for i, el in enumerate(l, start=10): # numero partendo da 10, se start non specificato parto da 0 print("Il contenuto {} si trova all'indice {}".format(el, i)) Explanation: Iterazione nelle liste e cicli for su indice End of explanation # -*- coding: utf-8 -*- # definizione d = {"nome": "Luca", "cognome": "Ferroni", "classe": 1980, 2: "figli", (2017, 1, 23): "corso python", "classe": 1979} print(d) # aggiornamento d.update({ "professioni" : ["docente", "lavoratore autonomo"] }) d["pranzo"] = "ritrovo degli artisti" # recupero valore per chiave certa (__getitem__) print(u"Il nome del personaggio è {}".format(d["nome"])) # sfrutto il mini-formato di template per le stringhe # https://docs.python.org/2.7/library/string.html#formatspec print(u"Il personaggio è {nome} {cognome} nato nel {classe}".format(**d)) # Recupero di un valore per una chiave opzionale print(u"'nome' è una chiave che esiste con valore = {}, 'codiceiban' invece non esiste = {}".format( d.get('nome'), d.get('codiceiban'))) print(u"Se avessi usato la __getitem__ avrei avuto un KeyError") # rimozione di una chiave dal dizionario print(u"Rimuovo il nome dal dizionario con d.pop('nome')") d.pop('nome') print(u"'nome' ora non esiste con valore = {}, come 'codiceiban' = {}".format( d.get('nome'), d.get('codiceiban'))) print(u"Allora, se non trovi la chiave 'nome' allora dimmi 'Pippo'. 
Cosa dici?") print(d.get('nome', 'Pippo')) Explanation: DIZIONARI: Operazioni e metodi End of explanation print("\n***PER ITERARE SU TUTTI GLI ELEMENTI DI UN DIZIONARIO SI USA .iteritems()***\n") for key, value in d.iteritems(): print("Alla chiave {} corrisponde il valore {}".format(key,value)) print("\n***DIZIONARI E ORDINAMENTO***\n") data_input = [('a', 1), ('b', 2), ('l', 10), ('c', 3)] d1 = dict(data_input) import collections d2_ord = collections.OrderedDict(data_input) print("input = {}".format(data_input)) print("dizionario non ordinato = {}".format(d1)) print("dizionario ordinato = {}".format(d2_ord)) print("lista di coppie da diz NON ordinato = {}".format(d1.items())) print("lista di coppie da diz ordinato = {}".format(d2_ord.items())) Explanation: Iterazione nei dizionari ATTENZIONE: Il contenuto del dizionario non è ordinato! Non c'è alcuna garanzia sull'ordinamento. Per avere garanzia bisogna usare la classe collections.OrderedDict End of explanation def foo(bar): bar.append(41) print(bar) # >> [41] answer_list = [] foo(answer_list) print(answer_list) # >> [41] def foo(bar): bar = 'new value' print (bar) # >> 'new value' answer_list = 'old value' foo(answer_list) print(answer_list) # >> 'old value' Explanation: DYI: Fibonacci optimized Salvare i calcoli intermedi della funzione di Fibonacci in un dizionario da usare come cache. Eseguire i test per essere sicuri di non aver rotto l'algoritmo Caratteristiche del modello dati di Python Tipi di dato "mutable" e "immutable" Python Data Model Ogni oggetto ha: * identità -> non cambia mai e si può pensare come l'indirizzo in memoria * tipo -> non cambia mai e rappresenta le operazioni che l'oggetto supporta * valore -> può cambiare se il tipo è mutable, non se è immutable Tipi di dato immutable sono: interi stringhe tuple set Tipi di dato mutable sono: liste dizionari Tipizzazione forte e dinamica Da http://stackoverflow.com/questions/11328920/is-python-strongly-typed/11328980#11328980 (v. anche i commenti) Python is strongly, dynamically typed. Strong typing means that the type of a value doesn't suddenly change. A string containing only digits doesn't magically become a number, as may happen in Perl. Every change of type requires an explicit conversion. Dynamic typing means that runtime objects (values) have a type, as opposed to static typing where variables have a type. As for example bob = 1 bob = "bob" This works because the variable does not have a type; it can name any object. After bob=1, you'll find that type(bob) returns int, but after bob="bob", it returns str. (Note that type is a regular function, so it evaluates its argument, then returns the type of the value.) Passaggio di parametro per valore o riferimento? Nessuno dei due! V. https://jeffknupp.com/blog/2012/11/13/is-python-callbyvalue-or-callbyreference-neither/ Call by object, or call by object reference. Concetto base: in python una variabile è solo un nome per un oggetto (= la tripla id,tipo,valore) In sostanza il comportamento dipende dal fatto che gli oggetti nominati dalle variabili sono mutable o immutable. Seguono esempi: End of explanation def ciao(n, l=[], d={}): if n > 5: return l.append(n) d[n] = n print("la lista è {}".format(l)) # print("il diz è {}".format(d)) ciao(1) # [1] ciao(4) # [1, 4] ciao(2) print("----") ciao(2, l=[1]) ciao(5) Explanation: WARNING / DANGER / EXPLOSION / ATTENZIONE! Cosa succede se inizializzo liste e dizionari nei parametri nominali di una funzione? 
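One common way to avoid the surprise hinted at above is to use None as a sentinel default and create the list or dictionary inside the function; the variant below is a sketch of that idiom, not part of the original lesson code.

def ciao_safe(n, l=None, d=None):
    # Mutable defaults are evaluated once, when the function is defined, and
    # then shared between calls; None defaults give every call fresh objects.
    if l is None:
        l = []
    if d is None:
        d = {}
    if n > 5:
        return l
    l.append(n)
    d[n] = n
    return l

print(ciao_safe(1))  # [1]
print(ciao_safe(4))  # [4], not [1, 4]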
End of explanation # -*- coding: utf-8 -*- # This is hello_who_3.py import sys # <-- importo un modulo def compose_hello(who, force=False): # <-- valore di default Get the hello message. try: # <-- gestione eccezioni `Duck Typing` message = "Hello " + who + "!" except TypeError: # <-- eccezione specifica # except TypeError as e: # <-- eccezione specifica su parametro e print("[WARNING] Il parametro `who` dovrebbe essere una stringa") if force: # <-- controllo "if" message = "Hello {}!".format(who) else: raise # <-- solleva eccezione originale except Exception: print("Verificatasi eccezione non prevista") else: print("nessuna eccezione") finally: print("Bye") return message def hello(who='world'): # <-- valore di default print(compose_hello(who)) if __name__ == "__main__": hello("mamma") hello("pippo") hello(1) ret = compose_hello(1, force=True) print("Ha composto {}".format(ret)) try: hello(1) except TypeError as e: print("{}: {}".format(type(e).__name__, e)) print("Riprova") Explanation: Funzioni con parametri posizionali, nominali, e arbitrari Una funzione si definisce con def &lt;nomefunzione&gt;([parametri]) dove i parametri possono essere: posizionali. Ad es: def hello(who) nominali. Ad es: def hello(who='') o who=None o who='default' entrambi, ma i nominali devono essere messi dopo i posizionali. Ad es: def hello(who, say="How are you?") arbitrari sia posizionali con il simbolo * o nominali con **. Come convenzione si utilizzano i nomi args e kw o kwargs. Ad es: def hello(who, say="How are you?", *args, **kw) I simboli * e ** indicano rispettivamente la rappresentazione di una lista come una sequenza di elementi, e di un dizionario come una sequenza di parametri &lt;chiave&gt;=&lt;valore&gt; Scope delle variabili http://www.saltycrane.com/blog/2008/01/python-variable-scope-notes/ e ricordatevi che: for i in [1,2,3]: print(i) print("Sono fuori dal ciclo e posso vedere che i={}".format(i)) Namespace I namespace in python sono raccoglitori di nomi e posson essere impliciti o espliciti. Sono impliciti lo spazio dei nomi __builtin__ e __main__. Sono espliciti, le classi, gli oggetti, le funzioni e in particolare i moduli. Posso importare un modulo che mi va a costituire un namespace con import &lt;nomemodulo&gt; e accedere a tutti i simboli top-level inseriti nel modulo come &lt;nomemodulo&gt;.&lt;simbolo&gt;. L'importazione di simboli singoli all'interno di un modulo in un altro namespace si può fare con from &lt;nomemodulo&gt; import &lt;simbolo&gt;. Quello che non si dovrebbe fare è importare tutti i simboli di un modulo dentro un altro nella forma: from &lt;nomemodulo&gt; import *. Non fatelo, a meno che non strettamente necessario. Stack delle eccezioni e loro gestione Lo stack delle eccezioni builtin, ossia già comprese nel linguaggio python sono al link: https://docs.python.org/2/library/exceptions.html#exception-hierarchy Derivando da essere facilmente se ne possono definire di proprie. La gestione delle eccezioni avviene in blocchi: try: ... except [eccezione] [as variabile]: ... else: ... finally: ... Pratica del Duck Typing! « If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck. » Segue esempio di composizione del saluto con la gestione delle eccezioni: End of explanation def main(): # Step 1. Finché l'utente non scrive STOP - si fa tutto in un while True con un break quando occorre # Step 2. L'utente inserisce il nome # Usa raw_input("Inserisci ...") per chiedere le info all'utente # Step 3. L'utente inserisce la città # Step 4. 
L'utente inserisce lo stipendio # Step 5. Inserisci il dizionario con chiavi # 'name', 'city', 'salary', 'genfibo' # nella lista PEOPLE = [] PEOPLE.append(person_d) # Step 6. Stampa a video PEOPLE nel modo che ti piace # Step 7. Riinizia da Step 1 # FINE # ---- BONUS ---- # Step 8. Quando l'utente smette -> scrivi i dati in un file # se vuoi Step 8.1 in formato json # se vuoi Step 8.2 in formato csv # se vuoi Step 8.3 in formato xml # Step 9. Fallo anche se l'utente preme CTRL+C o CTRL+Z Explanation: DYI: Anagrafica e Fibonacci Sapendo che l'input all'utente si chiede con la funzione raw_input(&lt;richiesta&gt;), scrivere una funzione che richieda nome, città, stipendio, e generazione di Fibonacci dei propri conigli. End of explanation
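As one possible solution sketch for the "DYI: Fibonacci optimized" exercise mentioned earlier: cache intermediate results in a dictionary so each value is computed only once. The base cases fib(0) = 0 and fib(1) = 1 are an assumption here, since the original fibonacci function is not shown in this excerpt.

_fib_cache = {0: 0, 1: 1}

def fib(n):
    # Look the value up in the cache, computing and storing it on a miss.
    if n not in _fib_cache:
        _fib_cache[n] = fib(n - 1) + fib(n - 2)
    return _fib_cache[n]

assert fib(10) == 55
assert fib(30) == 832040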
1,035
Given the following text description, write Python code to implement the functionality described below step by step Description: OpenStreetMap is an open project, which means it is free and everyone can use it and edit it as they like. OpenStreetMap is a direct competitor of Google Maps. How can OpenStreetMap compete with such a giant? It depends entirely on crowdsourcing: many people around the world willingly update the map, most of them fixing the map of their own country. OpenStreetMap is powerful, but it relies heavily on human input, and that strength is also its downfall: wherever there is human input there will be human error, so the data is very error prone. <!-- TEASER_END --> Take street names, for example. People like to abbreviate the street type, so Street becomes St., but in Ireland Saint is also mapped to ST. For example, Saint Stephen's Green may be written as ST Stephen's Green. This project aims to fix that: it expands abbreviated names so that a more general, consistent form is used. This does not only benefit professionals; it also gives everyone more consistently structured names. In this project I want to show you how to fix one type of error, the street address, and I chose the whole of Ireland as the dataset. Step1: To audit the OSM file, we first need an overview of the data. To get that overview, we count the tags in the file. Step2: Update the file data, fixing a few abbreviations, and create a new file with the updated data. Step3: This will save the audited Ireland OSM data into map_audit.osm. Now let's prepare the audited file to be used as input to the MongoDB instance. Step4: The processed map has been saved to map_audit.osm.json. Now that we have processed the audited map file into an array of JSON documents, let's put it into the MongoDB instance. First we load the script that inserts the map. Step5: Okay, let's test whether the data is what we expect. Step6: The data seems about right. After verifying the data is ready, let's put it into MongoDB. Step7: Okay, it seems that we have successfully inserted all of our data into the MongoDB instance. Let's test this. Step8: Show 5 documents that have a street. Step9: Show the top 5 contributing users. Step10: Show the restaurants' names, the food they serve, and their contact numbers.
Python Code: OSM_FILE = 'data/map.osm' Explanation: OpenStreetMap is an open project, which means it's free and everyone can use it and edit as they like. OpenStreetMap is direct competitor of Google Maps. How OpenStreetMap can compete with the giant you ask? It's depend completely on crowd sourcing. There's lot of people willingly update the map around the world, most of them fix their map country. Openstreetmap is so powerful, and rely heavily on the human input. But its strength also the downfall. Everytime there's human input, there's always be human error.It's very error prone. <!-- TEASER_END --> Take the name of the street for example. People like to abbreviate the type of the street. Street become St. but in Ireland Saint is also mapped to ST. Line few example Saint Stephen's Green will be written as ST Stephen's Green This project tends to fix that, it fix abbreviate name, so it can use more generalize type. Not only it's benefit for professional, But we can also can see more structured words. In this project, i want to show you to fix one type of error, that is the address of the street. I choose whole places of Ireland. End of explanation # %%writefile mapparser.py #!/usr/bin/env python import xml.etree.ElementTree as ET import pprint def count_tags(filename): count tags in filename. Init 1 in dict if the key not exist, increment otherwise. tags = {} for ev,elem in ET.iterparse(filename): tag = elem.tag if tag not in tags.keys(): tags[tag] = 1 else: tags[tag]+=1 return tags def test(): tags = count_tags(OSM_FILE) pprint.pprint(len(tags)) if __name__ == "__main__": test() # %load tags.py import xml.etree.ElementTree as ET import pprint import re lower = re.compile(r'^([a-z]|_)*$') lower_colon = re.compile(r'^([a-z]|_)*:([a-z]|_)*$') problemchars = re.compile(r'[=\+/&<>;\'"\?%#$@\,\. \t\r\n]') def key_type(element, keys): Count the criteria in dictionary for the content of the tag. if element.tag == "tag": if lower.search(element.attrib['k']): keys['lower'] +=1 elif lower_colon.search(element.attrib['k']): keys['lower_colon']+=1 elif problemchars.search(element.attrib['k']): keys['problemchars']+=1 else: keys['other']+=1 return keys def process_map(filename): keys = {"lower": 0, "lower_colon": 0, "problemchars": 0, "other": 0} for _, element in ET.iterparse(filename): keys = key_type(element, keys) return keys def test(): keys = process_map(OSM_FILE) pprint.pprint(keys) if __name__ == "__main__": test() Find all Unique user In OSM File # %load users.py # -*- coding: utf-8 -*- import xml.etree.ElementTree as ET import pprint import re Your task is to explore the data a bit more. The first task is a fun one - find out how many unique users have contributed to the map in this particular area! The function process_map should return a set of unique user IDs ("uid") def get_user(element): return def process_map(filename): Count the user id in the filename. users = set() for _, element in ET.iterparse(filename): try: users.add(element.attrib['uid']) except KeyError: continue return users def test(): users = process_map(OSM_FILE) print ' No of user found in file ' , len( users) if __name__ == "__main__": test() import xml.etree.cElementTree as ET Explanation: To audit the osm file, first we need to know the overview of the data. To get an overview of the data, we count the tag content of the data. 
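For a country-sized extract, a slightly more memory-friendly variant of the tag count clears each element once it has been seen; this sketch gives the same totals as count_tags above but avoids holding every element's contents in memory at once.

import xml.etree.cElementTree as ET
from collections import Counter

def count_tags_lowmem(filename):
    tags = Counter()
    for _, elem in ET.iterparse(filename):
        # Count the tag, then drop its contents so memory stays roughly flat.
        tags[elem.tag] += 1
        elem.clear()
    return tags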
End of explanation # %load audit.py import xml.etree.cElementTree as ET from collections import defaultdict import re import pprint from optparse import OptionParser street_type_re = re.compile(r'^\b\S+\.?', re.IGNORECASE) expected = ["Street", "Avenue", "Boulevard", "Drive", "Court", "Place", "Square", "Lane", "Road", "Trail", "Parkway", "Commons"] mapping = { "Ave":"Avenue", "Rd." : "Road", "N.":"North", } def audit_street_type(street_types, street_name): m = street_type_re.search(street_name) if m: street_type = m.group() if street_type not in expected: street_types[street_type].add(street_name) #return True if need to be updated return True return False def is_street_name(elem): Perhaps the addr:full should also included to be fixed return (elem.attrib['k'] == "addr:street") or (elem.attrib['k'] == "addr:full") def is_name_is_street(elem): Some people fill the name of the street in k=name. Should change this s = street_type_re.search(elem.attrib['v']) #print s return (elem.attrib['k'] == "name") and s and s.group() in mapping.keys() def audit(osmfile): osm_file = open(osmfile, "r") street_types = defaultdict(set) # tree = ET.parse(osm_file, events=("start",)) tree = ET.parse(osm_file) listtree = list(tree.iter()) for elem in listtree: if elem.tag == "node" or elem.tag == "way": n_add = None for tag in elem.iter("tag"): if is_street_name(tag): if audit_street_type(street_types, tag.attrib['v']): #Update the tag attribtue tag.attrib['v'] = update_name(tag.attrib['v'],mapping) elif is_name_is_street(tag): tag.attrib['v'] = update_name(tag.attrib['v'],mapping) n_add = tag.attrib['v'] if n_add: elem.append(ET.Element('tag',{'k':'addr:street', 'v':n_add})) #write the to the file we've been audit tree.write(osmfile[:osmfile.find('.osm')]+'_audit.osm') return street_types def update_name(name, mapping): Fixed abreviate name so the name can be uniform. The reason why mapping in such particular order, is to prevent the shorter keys get first. dict_map = sorted(mapping.keys(), key=len, reverse=True) for key in dict_map: if name.find(key) != -1: name = name.replace(key,mapping[key]) return name return name def test(): st_types = audit(OSM_FILE) pprint.pprint(dict(st_types)) for st_type, ways in st_types.iteritems(): for name in ways: better_name = update_name(name, mapping) print name, "=>", better_name if __name__ == '__main__': #test() st_types = audit(OSM_FILE) Explanation: Update file data with few abbrivate and create a new file With updated data. End of explanation OSM_FILE = "data/map_audit.osm" # %%writefile data.py import xml.etree.ElementTree as ET import pprint import re import codecs import json lower = re.compile(r'^([a-z]|_)*$') lower_colon = re.compile(r'^([a-z]|_)*:([a-z]|_)*$') problemchars = re.compile(r'[=\+/&<>;\'"\?%#$@\,\. \t\r\n]') addresschars = re.compile(r'addr:(\w+)') CREATED = [ "version", "changeset", "timestamp", "user", "uid"] def shape_element(element): #node = defaultdict(set) node = {} if element.tag == "node" or element.tag == "way" : #create the dictionary based on exaclty the value in element attribute. 
node = {'created':{}, 'type':element.tag} for k in element.attrib: try: v = element.attrib[k] except KeyError: continue if k == 'lat' or k == 'lon': continue if k in CREATED: node['created'][k] = v else: node[k] = v try: node['pos']=[float(element.attrib['lat']),float(element.attrib['lon'])] except KeyError: pass if 'address' not in node.keys(): node['address'] = {} #Iterate the content of the tag for stag in element.iter('tag'): #Init the dictionry k = stag.attrib['k'] v = stag.attrib['v'] #Checking if indeed prefix with 'addr' and no ':' afterwards if k.startswith('addr:'): if len(k.split(':')) == 2: content = addresschars.search(k) if content: node['address'][content.group(1)] = v else: node[k]=v if not node['address']: node.pop('address',None) #Special case when the tag == way, scrap all the nd key if element.tag == "way": node['node_refs'] = [] for nd in element.iter('nd'): node['node_refs'].append(nd.attrib['ref']) # if 'address' in node.keys(): # pprint.pprint(node['address']) return node else: return None def process_map(file_in, pretty = False): Process the osm file to json file to be prepared for input file to monggo file_out = "{0}.json".format(file_in) data = [] with codecs.open(file_out, "w") as fo: for _, element in ET.iterparse(file_in): el = shape_element(element) if el: data.append(el) if pretty: fo.write(json.dumps(el, indent=2)+"\n") else: fo.write(json.dumps(el) + "\n") return data def test(): data = process_map(OSM_FILE) # pprint.pprint(data[500]) # if __name__ == "__main__": # data = process_map(OSM_FILE) Explanation: This will save the Ireland osm that has been audited into map_audit.osm Not let's prepare the audited file to be input to the MongoDB instance. End of explanation from data import * import pprint data = process_map('data/map_audit.osm') Explanation: The processed map has ben saved to map_audit.osm.json Now that we have process the audited map file into array of JSON, let's put it into mongodb instance. this will take the map that we have been audited. First we load the script to insert the map End of explanation pprint.pprint(data[0:1]) Explanation: Okay let's test if the data is something that we expect End of explanation from pymongo import MongoClient client = MongoClient('mongodb://localhost:3160') db = client.examples [db.ireland.insert(e) for e in data] Explanation: The data seems about right. After we verified the data is ready, let's put it into MongoDB End of explanation pipeline = [{'$limit' : 3}] for doc in db.ireland.aggregate((pipeline)): print(doc) Explanation: Okay, it seems that we have sucessfully insert all of our data into MongoDB instance. Let's test this End of explanation pipeline = [ {'$match': {'address.street':{'$exists':1}}}, {'$limit' : 5} ] for doc in db.ireland.aggregate((pipeline)): print(doc) Explanation: Show 5 data that have street End of explanation pipeline = [ {'$match': {'created.user':{'$exists':1}}}, {'$group': {'_id':'$created.user', 'count':{'$sum':1}}}, {'$sort': {'count':-1}}, {'$limit' : 5} ] for doc in db.ireland.aggregate((pipeline)): print(doc) Explanation: Show the top 5 of contributed users End of explanation pipeline = [ {'$match': {'amenity':'pub', 'name':{'$exists':1}}}, {'$project':{'_id':'$name', 'cuisine':'$cuisine', 'contact':'$phone'}} ] doc= db.ireland.aggregate((pipeline)): print(len(doc)) Explanation: Show the restaurant's name, the food they serve, and contact number End of explanation
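The final aggregation snippet above has a stray colon and will not run as written; a runnable sketch of the intended query follows. Note that it matches amenity 'restaurant' to fit the stated goal, whereas the original filtered on 'pub'; adjust to taste.

pipeline = [
    {'$match': {'amenity': 'restaurant', 'name': {'$exists': 1}}},
    {'$project': {'_id': '$name', 'cuisine': '$cuisine', 'contact': '$phone'}},
]
results = list(db.ireland.aggregate(pipeline))
print(len(results))
for doc in results[:5]:
    print(doc)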
1,036
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction to Machine Learning with scikit-learn Lab 5 Step1: As always, we need to start with some data. Let's first generate a set of outputs $y$ and predicted outputs $\hat{y}$ to illustrate a few typical cases. Step3: Now let's define a function that will display the confusion matrix. The following is inspired from this example.
Python Code: %matplotlib inline import warnings warnings.filterwarnings("ignore", category=DeprecationWarning) Explanation: Introduction to Machine Learning with scikit-learn Lab 5: Model evaluation and selection In this lab, we will apply a few model evaluation metrics we've seen in the lecture. End of explanation from sklearn.metrics import confusion_matrix y_true = [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0] y_pred = [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0] cnf_matrix = confusion_matrix(y_true, y_pred) print(cnf_matrix) Explanation: As always, we need to start with some data. Let's first generate a set of outputs $y$ and predicted outputs $\hat{y}$ to illustrate a few typical cases. End of explanation import itertools import numpy as np import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split from sklearn.metrics import confusion_matrix def plot_confusion_matrix(cm, classes, title='Confusion matrix', cmap=plt.cm.Blues): This function prints and plots the confusion matrix. Normalization can be applied by setting `normalize=True`. plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes) plt.yticks(tick_marks, classes) plt.tight_layout() plt.ylabel('True label') plt.xlabel('Predicted label') plt.figure() plot_confusion_matrix(cnf_matrix, ['0', '1']) Explanation: Now let's define a function that will display the confusion matrix. The following is inspired from this example. End of explanation
1,037
Given the following text description, write Python code to implement the functionality described below step by step Description: PyMotW - IceCream <h1 align="center"> <img src="https Step1: If you have several print calls it can quickly become unclear what is being printed where. So many of us may have ended up doing something like this before Step2: That's where IceCream comes in Step3: It tells you that this is IceCream issuing a print and which function with which arguments returned the value that is being printed. Pretty neat, right? This doesn't only work with functions Step4: If you would like to use print statements to determine the execution order of your program during runtime IceCream also has you covered Step5: When called without arguments it tells you the line and the context of the ic() printing the current message. As you may have noticed due to the superfluous prints above, ic() actually returns its arguments. This means that we can use it inside of an expression and we don't have to always invoke it separately Step6: You can even have it print nothing and return the message that would be printed as a string Step7: ic()'s output can be disabled and reenabled as follows Step8: Some configuration options
Python Code: def foo(i): return i + 333 print(foo(123)) Explanation: PyMotW - IceCream <h1 align="center"> <img src="https://github.com/gruns/icecream/raw/master/logo.svg" width="220px" height="370px" alt="icecream"> </h1> https://github.com/gruns/icecream available via conda-forge and pypi Never use print() to debug again! Many of us have used print() or log() for debugging purposes before. IceCream, or ic for short, makes print debugging a little cooler and sweeter. ic() is like print(), but better: It prints both expressions/variable names and their values. It's 40% faster to type. Data structures are pretty printed. Output is syntax highlighted. It optionally includes program context: filename, line number, and parent function. End of explanation print("foo(123)", foo(123)) Explanation: If you have several print calls it can quickly become unclear what is being printed where. So many of us may have ended up doing something like this before: End of explanation from icecream import ic ic(foo(123)) Explanation: That's where IceCream comes in: End of explanation d = {'key': {1: 'one'}} ic(d['key'][1]) class klass(): attr = 'yep' ic(klass.attr) Explanation: It tells you that this is IceCream issuing a print and which function with which arguments returned the value that is being printed. Pretty neat, right? This doesn't only work with functions: End of explanation def foo(): ic() pass if 'parrot' in 'bird': ic() pass else: ic() pass foo() Explanation: If you would like to use print statements to determine the execution order of your program during runtime IceCream also has you covered: End of explanation a = 6 def half(i): return i / 2 b = half(ic(a)) # <-- where the magic happens ic(b) Explanation: When called without arguments it tells you the line and the context of the ic() printing the current message. As you may have noticed due to the superfluous prints above, ic() actually returns its arguments. This means that we can use it inside of an expression and we don't have to always invoke it separately: End of explanation s = 'sup' out = ic.format(s) print(out) Explanation: You can even have it print nothing and return the message that would be printed as a string: End of explanation ic(1) ic.disable() ic(2) ic.enable() ic(3) Explanation: ic()'s output can be disabled and reenabled as follows: End of explanation ic.configureOutput(prefix='hello -> ') ic('world') import time def unixTimestamp(): return '%i |> ' % int(time.time()) ic.configureOutput(prefix=unixTimestamp) ic('world') import logging def warn(s): logging.warning(s) ic.configureOutput(outputFunction=warn) ic('eep') ic.configureOutput(includeContext=True) def foo(): ic('str') foo() Explanation: Some configuration options: End of explanation
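A common defensive pattern when sprinkling ic() calls through a codebase is to fall back to a no-op stub if icecream is not installed; the sketch below mirrors ic()'s pass-through behaviour (it returns its arguments) but prints nothing.

try:
    from icecream import ic
except ImportError:
    def ic(*args):
        # Minimal stand-in: no output, same return convention as ic().
        if not args:
            return None
        return args[0] if len(args) == 1 else args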
1,038
Given the following text description, write Python code to implement the functionality described below step by step Description: O$_2$scl library linking example for O$_2$sclpy See the O$_2$sclpy documentation at https Step1: This code dynamically links the O$_2$scl library. Environment variables can be used to specify the location of various libraries which need to be added. These values can also be set directly in the linker class (and then they override the environment variables). See http Step2: To test that the link worked, obtain the O$_2$scl version from the DLL
Python Code: import sys print(sys.path) import o2sclpy import sys plots=True if 'pytest' in sys.modules: plots=False Explanation: O$_2$scl library linking example for O$_2$sclpy See the O$_2$sclpy documentation at https://neutronstars.utk.edu/code/o2sclpy for more information. End of explanation link=o2sclpy.linker() link.verbose=1 link.link_o2scl() Explanation: This code dynamically links the O$_2$scl library. Environment variables can be used to specify the location of various libraries which need to be added. These values can also be set directly in the linker class (and then they override the environment variables). See http://neutronstars.utk.edu/code/o2sclpy/link_cpp.html#linking-with-o2scl for more detail. We set the verbose parameter to 1 to output more information about which libraries are being linked. End of explanation print(link.o2scl_settings.o2scl_version()) def test_fun(): assert link.o2scl_settings.o2scl_version()==b'0.927a1' return Explanation: To test that the link worked, obtain the O$_2$scl version from the DLL: End of explanation
1,039
Given the following text description, write Python code to implement the functionality described below step by step Description: Gaussian Process regression tutorial 2 Step1: Problem 0 Step2: Problem 1a Step3: Now you will need to tell the GP object what inputs the covariance matrix is to be evaluated at. This is done using the compute method. 2-D inputs need to be passed as an $N \times 2$ array, which you will need to construct from the two 1-D arrays of $x$- and $y$-values we generated earlier. The second argument of compute should be the white noise standard deviation. Step4: Problem 1b Step5: Note that the parameters which are accessed through the set_parameter_vector method are the logarithms of the values used in building the kernel. The optimization is thus done in terms of the log parameters. Again following the same example, find the hyper-parameters that maximise the likelihood, using scipy.optimize's minimize function, and print the results. Step6: Now assign those best-fit values to the parameter vector Step7: Generate a grid of regularly spaced $x$ and $y$ locations, spanning the range of the observations, where we will evaluate the predictive distribution. Store these in 2-D arrays called X2D and Y2D. Then convert them into a single 2-D array of shape $N_{\mathrm{pred}} \times 2$, which will be passed to the GP's predict method. Hint Step8: Using the best-fit hyper-parameters, evaluate the mean of the predictive distribution at the grid locations. The output will be a 1-D array, which you will need to reshape so it has the same shape as X2D and Y2D for plotting. Step9: Execute the cell below to plot contours of the predictive mean alongside the data. Step10: Visualising the confidence intervals is a bit tricky in 3-D so we'll skip that. We could use emcee to explore the posterior distribution of the hyper-parameters, but we will leave that for a more realistic example. Problem 2 Step11: Problem 2a Step12: Next we want to optimize the likelihood. Luckily we can re-use the neg log likelihood and gradient functions from the previous problem. Start by packaging up the two inputs into a single 2-D vector, as in Problem 1, then use the minimize function to evaluate the max. likelihood hyper-parameters. Step13: Now let's plot the predictive distribution to check it worked ok. You can just copy and paste code from Problem 1. Step14: Problem 2b Step15: Now execute the cell below to plot the results. Step16: Problem 3 Step17: Problem 3a Step18: Now we need to fit each time-series in turn, and compute the mean of the predictive distribution over a tightly sampled, regular grid of time values. If you take care to name our variables right, you can reuse the neg log likelihood and associated gradient functions from Problem 1. Complete the code below and run it Step19: Now we are ready to cross-correlate the interpolated time-series. The easiest way to do this is using the function xcorr from matplotlib.pyplot. This function returns a tuple of 4 variables, the first two of which are the lags and corresponding cross-correlation values. Step20: As you can see, the delays estimated in this way aren't too far off. To get initial guesses for the GP hyper-parameters, we can take the mean of the best-fit values from the three individual time-series. Do this in the cell below. Step21: The GP HPs aren't too far off either. Problem 3b Step22: Now we are ready to define the likelihood function itself. 
The likelihood should accept a parameter array consisting of the shifts first, and then the GP hyper-parameters, and make use of the output of apply_delays to return a very high number if the time delays are unreasonable. Complete the definition below. Step23: There is no simple analytical way to evaluate the gradient of the log likelihood with respect to the time delays, so we will not define a grad_neg_log_like function for this problem. The gradient descent optimizer will be slower, since it will have to evaluate the gradients numerically, but for such a small dataset it doesn't matter. Ok, now we are ready to run the optimizer. Like before, we can use the minimize function from scipy.optimize. Step24: As you can see, the optimization further improved our estimates of the time delays and the GP HPs. But how much can we trust these? Let's evaluate posterior uncertainties using MCMC. Hyper-parameter marginalisation. We now use MCMC to obtain uncertainty estimates, or confidence intervals, for the model hyper-parameters. First we need to define the posterior function to pass to the emcee sampler. We will use improper, flat priors over all the parameters, so the posterior probability is just a trivial wrapper around our neg_ln_like_delays function. Complete the definition below Step25: Next, we set up the sampler. We will use 32 walkers, and initialise each set of walkers using the maximum likelihood estimates of the parameters plus a small random offset. Complete the code below, using the second george tutorial as an example. Step26: Now we are ready to run the MCMC, starting with a burn-in chain of 500 steps, after which we reset the sampler, and run the sampler again for 100 iterations. Complete the code below. Step27: Next we use the corner function from the corner module to plot the posterior distributions over the parameters. Complete the code below. Step28: Hopefully the distributions should look reasonable and be consistent with the true values. We need to extract confidence intervals for the parameters from the MCMC chain, which we can access through sampler.flatchain Step29: Hopefully, the MCMC estimates should be consistent with the true values... Challenge problem Step30: Your task is to devise and implement an algorithm that will schedule observations, based on the data to date, so as to ensure the uncertainty on the value of the function at any one time never exceeds 0.1. At each step, the aglorithm should
Python Code: %matplotlib inline import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D import george, emcee, corner from scipy.optimize import minimize Explanation: Gaussian Process regression tutorial 2: Solutions In this tutorial, we are to explore some slightly more realistic applications of GPs to astrophysical (or at least, astronomy-like) datasets. We will do this using the popular george package by Daniel Foreman-Mackey. By S Aigrain (University of Oxford) In this tutorial, we are to explore some slightly more realistic applications of GPs to astrophysical (or at least, astronomy-like) datasets. We will do this using the popular george package by Daniel Foreman-Mackey. george doesn't have all the functionality of more general packages such as GPy and scikit-learn, but it still has a nice modelling interface, is easy to use, and is faster than either of the other two. We will also use another of Dan's packages, emcee to explore posterior probabilities using MCMC, and his corner.py module to plot the resulting parameter samples. Required packages Why george? george doesn't have all the functionality of GPy, but it is easy to use, and is faster than either of the other two. And I'm more familiar with it. We will also use another of Dan's packages, emcee to explore posterior probabilities using MCMC, and his corner.py module to plot the resulting parameter samples. Before you start, make sure you have the latest stable version of these packages installed. If you used george before, note the API has changed significantly between versions 0.2.x and 0.3.0. The easiest way to install all three packages is with pip: pip install emcee pip install george pip install corner Full documentation is available here: - https://george.readthedocs.io/ - https://emcee.readthedocs.io/ - https://corner.readthedocs.io/ End of explanation N = 100 xobs = np.random.uniform(-5,5,N) yobs = np.random.uniform(-5,5,N) zobs = - 0.05 * xobs**2 + 0.03 * yobs**2 - 0.02 * xobs * yobs eobs = 0.01 zobs += np.random.normal(0,eobs,len(xobs)) plt.scatter(xobs, yobs, c=zobs, s=20, marker='.') plt.xlabel(r'$x$') plt.ylabel(r'$y$') cb = plt.colorbar() cb.set_label(r'$z$'); Explanation: Problem 0: working through the george introductory tutorials The george documentation includes some nice tutorials, which you'll need to run through before being able to tackle the problems below. Download and run the notebooks, making sure you understand what's going on at each step, and don't hesitate to ask questions! A gentle introduction to Gaussian Process regression: essentially does the same thing as problem 3 from Tutorial 1, but without a mean function. Model fitting with correlated noise: includes a mean function, and uses MCMC to explore the dependence of the posterior on the hyper-parameters. The same dataset is also analysed using a model with white noise only, to show how ignoring the correlations in the noise leads to over-confident estimates of the mean function parameters. Now you should have an idea of how to set up a basic GP model using george, how to make predictions, and how to evaluate the likelihood, optimize it, and explore the posterior using MCMC. I would also encourage you to try out the other tutorials, but they are not pre-requisites for this one. Problem 1: A simple 2-D problem So far we have looked only at 1-D inputs, like time. Let's introduce a simple 2-d input case. We will generate some data using a 2-D polynomial and model it using a squared exponential GP. 
Run the cell below to generate and plot the data. End of explanation k = 1.0 * george.kernels.ExpSquaredKernel(1.0, ndim = 2, axes = 0) * george.kernels.ExpSquaredKernel(1.0, ndim = 2, axes = 1) gp = george.GP(k) Explanation: Problem 1a: setting up the GP Now we will construct the GP model using george. We will use a with different length scales in each of the two dimensions. To set this up in george, you have to multiply two individual kernels together, like that: k = a * KernelName(b, ndim = 2, axes = 0) * KernelName(c, ndim = 2, axes = 1) Here KernelName stands for the name of the kernel used (in george, the squared exponential kernel is called ExpSquaredKernel), a is the output variance, b is the metric, or length scale, applied to the first input dimension, and c to the second. Note this is equivalent to the parametrisation used in the lectures: $$ k(x,x') = A \exp \left[ - \Gamma (x-x')^2\right] = A \exp \left[ - (x-x')^2/m^2\right] $$ with $\Gamma=1/m^2$. Go ahead and define the kernel in the cell below, with some ball park values for the hyper-parameters (by ball-park, I mean not too many orders of magnitudes off). Then create a GP object using that kernel. End of explanation Xobs = np.concatenate([[xobs],[yobs]]).T gp.compute(Xobs, yerr=eobs) Explanation: Now you will need to tell the GP object what inputs the covariance matrix is to be evaluated at. This is done using the compute method. 2-D inputs need to be passed as an $N \times 2$ array, which you will need to construct from the two 1-D arrays of $x$- and $y$-values we generated earlier. The second argument of compute should be the white noise standard deviation. End of explanation def neg_ln_like(p): gp.set_parameter_vector(p) return -gp.log_likelihood(zobs) def grad_neg_ln_like(p): gp.set_parameter_vector(p) return -gp.grad_log_likelihood(zobs) Explanation: Problem 1b: Optimizing the likelihood Following the example in the first george tutorial, define a simple neg log likelihood function, and a function to evaluate its gradient. End of explanation from scipy.optimize import minimize result = minimize(neg_ln_like, gp.get_parameter_vector(), jac=grad_neg_ln_like) print(result) Explanation: Note that the parameters which are accessed through the set_parameter_vector method are the logarithms of the values used in building the kernel. The optimization is thus done in terms of the log parameters. Again following the same example, find the hyper-parameters that maximise the likelihood, using scipy.optimize's minimize function, and print the results. End of explanation gp.set_parameter_vector(result.x) Explanation: Now assign those best-fit values to the parameter vector End of explanation X2D,Y2D = np.mgrid[-6:6:0.5,-6:6:0.5] Xpred = np.concatenate([[X2D.flatten()],[Y2D.flatten()]]).T Explanation: Generate a grid of regularly spaced $x$ and $y$ locations, spanning the range of the observations, where we will evaluate the predictive distribution. Store these in 2-D arrays called X2D and Y2D. Then convert them into a single 2-D array of shape $N_{\mathrm{pred}} \times 2$, which will be passed to the GP's predict method. Hint: use numpy's mrid function. End of explanation zpred = gp.predict(zobs, Xpred, return_var=False, return_cov=False) Z2D = zpred.reshape(X2D.shape) Explanation: Using the best-fit hyper-parameters, evaluate the mean of the predictive distribution at the grid locations. The output will be a 1-D array, which you will need to reshape so it has the same shape as X2D and Y2D for plotting. 
End of explanation plt.scatter(xobs, yobs, c=zobs, s=20, marker='.') plt.xlabel(r'$x$') plt.ylabel(r'$y$') cb = plt.colorbar() cb.set_label(r'$z$'); plt.contour(X2D,Y2D,Z2D); Explanation: Execute the cell below to plot contours of the predictive mean alongside the data. End of explanation N = 100 xobs = np.random.uniform(-5,5,N) yobs = np.random.uniform(-5,5,N) zobs = -0.05 * xobs**2 + np.sin(yobs) eobs = 0.01 zobs += np.random.normal(0,eobs,len(xobs)) plt.scatter(xobs, yobs, c=zobs, s=20, marker='.') plt.xlabel(r'$x$') plt.ylabel(r'$y$') cb = plt.colorbar() cb.set_label(r'$z$'); Explanation: Visualising the confidence intervals is a bit tricky in 3-D so we'll skip that. We could use emcee to explore the posterior distribution of the hyper-parameters, but we will leave that for a more realistic example. Problem 2: Separable functions In the above problem we were modelling a non-separable function of $x$ and $y$ (because of the cross-term in the polynomial). Now we will model a separable function, and use a GP with a sum rather than a product of kernels to separate the dependence on each of the input variable. This exploits the fact that GPs preserve additivity. In other words, a GP with a sum of kernels, each depending on a disjoint subset of the inputs, sets up a probability distribution over functions that are sums of functions of the individual subsets of inputs. This is how the K2SC pipeline (for removing pointing systematics in K2 data) discussed in the lectures works. As ever, we start by simulating a dataset. Execute the cell below. End of explanation k1 = 1.0 * george.kernels.ExpSquaredKernel(1.0, ndim = 2, axes = 0) k2 = 1.0 * george.kernels.ExpSquaredKernel(1.0, ndim = 2, axes = 1) k = k1 + k2 gp = george.GP(k) Xobs = np.concatenate([[xobs],[yobs]]).T Explanation: Problem 2a: Joint model We start, once again, by defining the GP object. The kernel will consist of a sum of 2 squared exponentials, one applied to each dimension. It will be useful to be able to access each of the kernel objects separately later, so start by defining each of the component kernel, assigning them to variables k1 and k2, and then define the overal kernel k as the sum of the two. Then define the GP object itself. End of explanation gp.compute(Xobs, yerr=eobs) result = minimize(neg_ln_like, gp.get_parameter_vector(), jac=grad_neg_ln_like) print(result) Explanation: Next we want to optimize the likelihood. Luckily we can re-use the neg log likelihood and gradient functions from the previous problem. Start by packaging up the two inputs into a single 2-D vector, as in Problem 1, then use the minimize function to evaluate the max. likelihood hyper-parameters. End of explanation zpred = gp.predict(zobs, Xpred, return_var=False, return_cov=False) Z2D = zpred.reshape(X2D.shape) plt.scatter(xobs, yobs, c=zobs, s=20, marker='.') plt.xlabel(r'$x$') plt.ylabel(r'$y$') cb = plt.colorbar() cb.set_label(r'$z$'); plt.contour(X2D,Y2D,Z2D); Explanation: Now let's plot the predictive distribution to check it worked ok. You can just copy and paste code from Problem 1. End of explanation b = np.copy(zobs) gp.apply_inverse(b) K1 = k1.get_value(Xobs) fx = np.dot(K1,b) K2 = k2.get_value(Xobs) fy = np.dot(K2,b) Explanation: Problem 2b: Separating the components We now come to evaluating the predictive means for the individual components. 
The standard expression for the predictive mean is: $$ \overline{\boldsymbol{y}} = K(\boldsymbol{x}_,\boldsymbol{x}) K(\boldsymbol{x},\boldsymbol{x})^{-1} \boldsymbol{y} $$ The predictive mean for a given component of the kernel is obtained simply by replacing the first instance of the covariance matrix between test and training points, $K(\boldsymbol{x},\boldsymbol{x})$, by the corresponding matrix for the component in question only: $$ \overline{\boldsymbol{y}}_{1,} = K_1(\boldsymbol{x}_*,\boldsymbol{x}) K(\boldsymbol{x},\boldsymbol{x})^{-1} \boldsymbol{y}. $$ george doesn't provide a built-in function to do this, but - the GP object has a method apply_inverse, which evaluates and returns the product $K(\boldsymbol{x},\boldsymbol{x})^{-1} \boldsymbol{y}$ for a given vector of training set outputs $\boldsymbol{y}$, - the kernel object has a method get_value, which evaluates the covariance matrix for a given set of inputs. Use these two functions to evaluate the two components of the best-fit GP model in our problem. Store the $x$- and $y$ components in variables fx and fy, respectively. Hint: The apply_inverse method does what it says in the name, i.e. it modifies its argument by pre-multiplying it by the inverse of the covariance matrix. Therefore, you need to pass it a copy of the vector of obserced outputs, not the original. End of explanation plt.figure(figsize=(12,5)) plt.subplot(121) plt.plot(xobs,zobs,'.',c='grey') plt.plot(xobs,zobs-fy,'k.') s = np.argsort(xobs) plt.plot(xobs[s],fx[s],'r-') plt.subplot(122) plt.plot(yobs,zobs,'.',c='grey') plt.plot(yobs,zobs-fx,'k.') s = np.argsort(yobs) plt.plot(yobs[s],fy[s],'r-'); Explanation: Now execute the cell below to plot the results. End of explanation N = 50 M = 3 t2d = np.tile(np.linspace(0,10,N),(M,1)) for i in range(M): t2d[i,:] += np.random.uniform(-5./N,5./N,N) delays_true = [-1.5,3] t_delayed = np.copy(t2d) for i in range(M-1): t_delayed[i+1,:] = t2d[i,:] + delays_true[i] gp = george.GP(1.0 * george.kernels.Matern52Kernel(3.0)) gppar_true = gp.get_parameter_vector() y2d = gp.sample(t_delayed.flatten()).reshape((M,N)) wn = 0.1 y2d += np.random.normal(0,wn,(M,N)) for i in range(M): plt.errorbar(t2d[i,:],y2d[i,:].flatten(),yerr=wn,capsize=0,fmt='.') plt.xlabel('t') plt.ylabel('y'); Explanation: Problem 3: Multiple time-series with delays Consider a situation where we have several time-series, which we expect to display the same behaviour (up to observational noise), except for a time-delay. We don't know the form of the behaviour, but we want to measure the time-delay between each pair of time-series. Something like this might arise in AGN reverberation mapping, for example. We can do this by modelling the time-series as observations of the same GP, with shifted inputs, and marginalising over the GP hyper-parameters to obtain posterior distribution over the time shifts. First, let's simulate some data. We will cheat by doing this using a GP, so we know it will work. Execute the cell below. End of explanation k = 1.0 * george.kernels.Matern52Kernel(3.0) gp = george.GP(k) Explanation: Problem 3a: Initial guesses Because the function goes up an down, you can probably guess that the likelihood surface is going to be highly multi-modal. So it's important to have a decent initial guess for the time delays. 
A simple way to do obtain one would be by cross-correlation, but since the time-series are not regularly sampled (because of the small random term we added to each of the time arrays), we need to interpolate them onto a regular grid first. What better way to do this than with a GP? This will have the added advantage of giving us an initial estimate of the GP hyper-parameters too (we're assuming we don't know them either, though we will assume we know the white noise standard deviation). First we need to define a GP object, based on a Matern 3/2 kernel with variable input scale and variance. Do this in the cell below. End of explanation p0 = gp.get_parameter_vector() # 2-D array to hold the best-fit GP HPs for each time-series p1 = np.tile(p0,(3,1)) # Regularly sampled time array treg = np.linspace(0,10,100) # 2-D array to hold the interpolated time-series yreg = np.zeros((3,100)) c = ['r','g','b'] for i in range(M): # Compute the gp on the relevant subset of the 2-D time array t2d gp.compute(t2d[i,:].flatten(),yerr=wn) # Assign the corresponding y values to the variable zobs # (this is the one that neg_ln_like uses to condition the GP) zobs = y2d[i,:].flatten() # Optimize the likelihood using minimize result = minimize(neg_ln_like, p0, jac=grad_neg_ln_like) # Save the best-fit GP HPs in p1 p1[i,:] = result.x # update the GP parameter vector with the best fit values gp.set_parameter_vector(result.x) # evaluate the predictive mean conditioned on zobs at locations treg and save in yreg yreg[i,:] = gp.predict(zobs,treg,return_var=False,return_cov=False) # you might want to plot the results to check it worked plt.plot(t2d[i,:],y2d[i,:],'.',c=c[i]) plt.plot(treg,yreg[i,:],'-',c=c[i]) # And let's print the GP HPs to see if they were sensible. print('Individual GP fits: best-fit HPs') print(p1) Explanation: Now we need to fit each time-series in turn, and compute the mean of the predictive distribution over a tightly sampled, regular grid of time values. If you take care to name our variables right, you can reuse the neg log likelihood and associated gradient functions from Problem 1. Complete the code below and run it End of explanation dt = treg[1] - treg[0] # Array to hold estimates of the time-delays delays_0 = np.zeros(M-1) for i in range(M-1): # use pyplot's xcorr function to cross-correlate yreg[i+1] with yreg[0] lags, corr, _, _ = plt.xcorr(yreg[0,:],yreg[i+1,:],maxlags=49,usevlines=False,marker='.',color=c[i+1]) # find the lag that maximises the CCF, convert it to time delay, save in delays_0 array lmax = lags[np.argmax(corr)] plt.axvline(lmax,color=c[i+1]) delays_0[i] = dt * lmax plt.xlabel('lag') plt.ylabel('x-correlation'); # Compare estimated to true delays print('Estimated time delays from cross-correlation') print(delays_0) print('True delays') print(delays_true) Explanation: Now we are ready to cross-correlate the interpolated time-series. The easiest way to do this is using the function xcorr from matplotlib.pyplot. This function returns a tuple of 4 variables, the first two of which are the lags and corresponding cross-correlation values. End of explanation gppar_0 = np.mean(p1,axis=0) print('Estimated GP HPs') print(gppar_0) print('True GP HPs') print(gppar_true) Explanation: As you can see, the delays estimated in this way aren't too far off. To get initial guesses for the GP hyper-parameters, we can take the mean of the best-fit values from the three individual time-series. Do this in the cell below. 
End of explanation def apply_delays(delays,t2d): t_delayed = np.copy(t2d) for i, delay in enumerate(delays): t_delayed[i+1,:] += delay ok = True M = len(delays) + 1 for i in range(M): tc = t_delayed[i,:] to = t_delayed[np.arange(M)!=i,:] if (tc.min() > to.max()) + (tc.max() < to.min()): ok = False return t_delayed, ok Explanation: The GP HPs aren't too far off either. Problem 3b: Optimization Now we have some initial guesses for the time-delays and the GP hyper-parameters, we're ready to model the time-series simultaneously, using a single GP. We need to write a new likelihood function to do this. The function will need to apply the delays to the times, before passing these times to george to evaluate the likelihood itself. First let's define a function apply_delays, which will take the delays and the time array t as inputs, and return an $M \times N$ array of delayed times. This function will be called by the likelihood function, but it might be useful later for plotting the results too. It would also be useful for this function to warn us if the time-delays are such that one of the time-series no longer overlaps with the others at all, for example by returning a boolean variable that is true if all is well, but false if not. Complete the definition below. End of explanation def neg_ln_like_delays(p): delays = p[:-2] t_delayed, ok = apply_delays(delays,t2d) if not ok: return 1e25 gp.set_parameter_vector(p[-2:]) gp.compute(t_delayed.flatten(), yerr=wn) return -gp.log_likelihood(y2d.flatten()) Explanation: Now we are ready to define the likelihood function itself. The likelihood should accept a parameter array consisting of the shifts first, and then the GP hyper-parameters, and make use of the output of apply_delays to return a very high number if the time delays are unreasonable. Complete the definition below. End of explanation ptrue = np.concatenate([delays_true,gppar_true]) p0 = np.concatenate([delays_0,gppar_0]) print('Initial guesses') print(p0) result = minimize(neg_ln_like_delays, p0) p1 = np.array(result.x) print('ML parameters') print(p1) print('True parameters') print(ptrue) Explanation: There is no simple analytical way to evaluate the gradient of the log likelihood with respect to the time delays, so we will not define a grad_neg_log_like function for this problem. The gradient descent optimizer will be slower, since it will have to evaluate the gradients numerically, but for such a small dataset it doesn't matter. Ok, now we are ready to run the optimizer. Like before, we can use the minimize function from scipy.optimize. End of explanation def lnprob(p): return -neg_ln_like_delays(p) Explanation: As you can see, the optimization further improved our estimates of the time delays and the GP HPs. But how much can we trust these? Let's evaluate posterior uncertainties using MCMC. Hyper-parameter marginalisation. We now use MCMC to obtain uncertainty estimates, or confidence intervals, for the model hyper-parameters. First we need to define the posterior function to pass to the emcee sampler. We will use improper, flat priors over all the parameters, so the posterior probability is just a trivial wrapper around our neg_ln_like_delays function. Complete the definition below: End of explanation ndim, nwalkers = len(p1), 32 p2 = p1 + 1e-4 * np.random.randn(nwalkers, ndim) sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob) Explanation: Next, we set up the sampler. 
We will use 32 walkers, and initialise each set of walkers using the maximum likelihood estimates of the parameters plus a small random offset. Complete the code below, using the second george tutorial as an example. End of explanation print("Running burn-in...") p2, _, _ = sampler.run_mcmc(p2, 50) sampler.reset() print("Running production...") sampler.run_mcmc(p2, 500); Explanation: Now we are ready to run the MCMC, starting with a burn-in chain of 500 steps, after which we reset the sampler, and run the sampler again for 100 iterations. Complete the code below. End of explanation labels = [r"$\Delta_1$", r"$\Delta_2$", r"$\ln A$", r"$\ln\l$"] truths = ptrue corner.corner(sampler.flatchain, truths=truths, labels=labels); Explanation: Next we use the corner function from the corner module to plot the posterior distributions over the parameters. Complete the code below. End of explanation samples = sampler.flatchain[:] # The GP parameters were explored in log space, return them to linear space #samples[:, -2:] = np.exp(samples[:, -2:]) # This handy bit of code will extract median and +/- 1 sigma intervals for each parameter pv = map(lambda v: (v[1], v[2]-v[1], v[1]-v[0]), zip(*np.percentile(samples, [16, 50, 84], axis=0))) # Print the results for i in range(ndim): pval = pv[i] print("Param {}: {:5.2f} +{:4.2f} -{:4.2f} (true: {:5.2f})".format(i+1,pval[0], pval[1], pval[2], ptrue[i])) Explanation: Hopefully the distributions should look reasonable and be consistent with the true values. We need to extract confidence intervals for the parameters from the MCMC chain, which we can access through sampler.flatchain End of explanation xtrue = np.linspace(0,100,1000) k = george.kernels.CosineKernel(np.log(12.3)) * george.kernels.ExpSquaredKernel(1000.0) ytrue = george.GP(k).sample(xtrue) xobs = xtrue[:200:10] eobs = 10.0**(np.random.uniform(-1.5,-1,20)) yobs = ytrue[:200:10] + np.random.normal(0,1,20) * eobs plt.plot(xtrue,ytrue) plt.errorbar(xobs,yobs,yerr=eobs,fmt='.',capsize=0); Explanation: Hopefully, the MCMC estimates should be consistent with the true values... Challenge problem: Active scheduling Imagine you are monitoring a particular variable, you want to know its value to a given precision at anyone time, but each observation is costly, so you don't want to take any more than you have to. You can train a GP on the first few observations, then use the predictive distribution to work out when your uncertainty about the current value of the variable is so large that you need to take a new observation. Use the new observation to update the GP hyper parameters and the predictive distribution, and repeat the process... First we generate a tightly sampled time series over 100 days. This will represent the "true" value of the variable. We will include some periodic behaviour as that makes the problem more interesting. Then we will "observe" 1 point per day for the first 20 days. 
End of explanation gp = george.GP(k) gp.set_parameter_vector([np.log(10),np.log(1000)]) gp.compute(xobs,yerr=eobs) def nll(p): gp.set_parameter_vector(p) return -gp.log_likelihood(yobs) def gnll(p): gp.set_parameter_vector(p) return -gp.grad_log_likelihood(yobs) result = minimize(nll, gp.get_parameter_vector(), jac=gnll) print(result) gp.set_parameter_vector(result.x) ypred, epred = gp.predict(yobs, xtrue, return_var=True) plt.plot(xtrue,ytrue) plt.errorbar(xobs,yobs,yerr=eobs,fmt='.',capsize=0); plt.fill_between(xtrue,ypred + epred, ypred-epred,alpha=0.2,edgecolor='none') Explanation: Your task is to devise and implement an algorithm that will schedule observations, based on the data to date, so as to ensure the uncertainty on the value of the function at any one time never exceeds 0.1. At each step, the aglorithm should: - train a GP on the data acquired so far. You may assume the form of the covariance function is known, as is the output variance, so there are only two hyper-parameters to fit (the log period of the cosine kernel and the metric of the squared exponential term). - make predictions for future values. If you're being clever, you can do this sequentially so you only look ahead a small time interval at a time, and stop as soon as the uncertainty exceeds the desired bound. - use this to decide when to take the next observation - add the next observation (by sampling the "true" values at the appropriate time and adding noise with the same distribution as above) - repeat till the end time is reached. Of course you will need to test your algorithm by comparing the predictions to the true values. End of explanation
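The challenge above is left open in the notebook, so here is one possible, untested sketch of the scheduling loop. It reuses xtrue, ytrue, xobs, yobs, eobs and gp from the cells above; the 0.1 threshold comes from the task description, while the choice to re-optimise the hyper-parameters only when a new observation is added, and the way new observations are simulated, are assumptions made purely for illustration.

threshold = 0.1
xs, ys, es = list(xobs), list(yobs), list(eobs)

def neg_ln_like_current(p, ycur):
    # negative log-likelihood of the data gathered so far, for the current gp
    gp.set_parameter_vector(p)
    return -gp.log_likelihood(ycur)

# condition and optimise the GP on the initial 20 observations
gp.compute(np.array(xs), yerr=np.array(es))
res = minimize(neg_ln_like_current, gp.get_parameter_vector(), args=(np.array(ys),))
gp.set_parameter_vector(res.x)

for t in xtrue[xtrue > xs[-1]]:
    # predictive standard deviation at the candidate time, given the data so far
    _, var = gp.predict(np.array(ys), np.array([t]), return_var=True)
    if np.sqrt(var[0]) > threshold:
        # uncertainty too large: "take" an observation by sampling the true curve plus noise
        e_new = 10.0**np.random.uniform(-1.5, -1)
        y_new = ytrue[np.argmin(np.abs(xtrue - t))] + np.random.normal(0, e_new)
        xs.append(t); ys.append(y_new); es.append(e_new)
        # update the GP with the enlarged data set before moving on
        gp.compute(np.array(xs), yerr=np.array(es))
        res = minimize(neg_ln_like_current, gp.get_parameter_vector(), args=(np.array(ys),))
        gp.set_parameter_vector(res.x)

Comparing the scheduled xs, ys against xtrue, ytrue afterwards is then a direct test of whether the 0.1 bound was respected.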
1,040
Given the following text description, write Python code to implement the functionality described below step by step Description: Get the data 2MASS => J, H K, angular resolution ~4" WISE => 3.4, 4.6, 12, and 22 μm (W1, W2, W3, W4) with an angular resolution of 6.1", 6.4", 6.5", & 12.0" GALEX imaging => Five imaging surveys in a Far UV band (1350-1750Å) and Near UV band (1750-2800Å) with 6-8 arcsecond resolution (80% encircled energy) and 1 arcsecond astrometry, and a cosmic UV background map. Step1: Try GAIA with 2MASS in gaiadr1 and gaiadr2 they also provide 2mass, allwise, sdss "best neigbour" pairs catalogs provided by GAIA Step2: Try GAIA and WISE Step3: Try GAIA + WISE with ProperMotion limit Step4: Try GAIA-WISE-2MASS directly I check there is tmass_key in gaiadr1.allwise_original_valid (ALLWISE catalog) Desc of tmass_key
Python Code: from astroquery.gaia import Gaia tables = Gaia.load_tables(only_names=True) for table in (tables): print (table.get_qualified_name()) #obj = ["3C 454.3", 343.49062, 16.14821, 1.0] obj = ["PKS J0006-0623", 1.55789, -6.39315, 1] #obj = ["M87", 187.705930, 12.391123, 1.0] #### name, ra, dec, radius of cone (in deg) obj_name = obj[0] obj_ra = obj[1] obj_dec = obj[2] cone_radius = obj[3] obj_coord = coordinates.SkyCoord(ra=obj_ra, dec=obj_dec, unit=(u.deg, u.deg), frame="icrs") Explanation: Get the data 2MASS => J, H K, angular resolution ~4" WISE => 3.4, 4.6, 12, and 22 μm (W1, W2, W3, W4) with an angular resolution of 6.1", 6.4", 6.5", & 12.0" GALEX imaging => Five imaging surveys in a Far UV band (1350-1750Å) and Near UV band (1750-2800Å) with 6-8 arcsecond resolution (80% encircled energy) and 1 arcsecond astrometry, and a cosmic UV background map. End of explanation # cmd = "SELECT * \ # FROM gaiadr2.gaia_source \ # WHERE CONTAINS(POINT('ICRS',gaiadr2.gaia_source.ra,gaiadr2.gaia_source.dec), \ # CIRCLE('ICRS'," + str(obj_ra) + "," + str(obj_dec) + "," + str(cone_radius) + "))=1;" cmd = "SELECT * FROM gaiadr2.gaia_source AS g, \ gaiadr2.tmass_best_neighbour AS tbest, \ gaiadr1.tmass_original_valid AS tmass \ WHERE g.source_id = tbest.source_id AND tbest.tmass_oid = tmass.tmass_oid AND CONTAINS(POINT('ICRS',g.ra,g.dec),\ CIRCLE('ICRS'," + str(obj_ra) + "," + str(obj_dec) + "," + str(cone_radius) + "))=1;" print(cmd) job = Gaia.launch_job_async(cmd, dump_to_file=True) print (job) # GAIA r = job.get_results() print(len(r['source_id'])) print(r['phot_g_mean_mag', 'phot_bp_mean_mag', 'phot_rp_mean_mag', 'j_m', 'h_m', 'ks_m', 'tmass_oid']) Explanation: Try GAIA with 2MASS in gaiadr1 and gaiadr2 they also provide 2mass, allwise, sdss "best neigbour" pairs catalogs provided by GAIA: gaiadr1.gaiadr1.allwise_original_valid gaiadr1.gaiadr1.gsc23_original_valid gaiadr1.gaiadr1.ppmxl_original_valid gaiadr1.gaiadr1.sdssdr9_original_valid gaiadr1.gaiadr1.tmass_original_valid gaiadr1.gaiadr1.ucac4_original_valid gaiadr1.gaiadr1.urat1_original_valid End of explanation cmd = "SELECT * FROM gaiadr2.gaia_source AS g, \ gaiadr2.allwise_best_neighbour AS wbest, \ gaiadr1.allwise_original_valid AS allwise \ WHERE g.source_id = wbest.source_id AND wbest.allwise_oid = allwise.allwise_oid AND CONTAINS(POINT('ICRS',g.ra,g.dec),\ CIRCLE('ICRS'," + str(obj_ra) + "," + str(obj_dec) + "," + str(cone_radius) + "))=1;" print(cmd) job = Gaia.launch_job_async(cmd, dump_to_file=True) print(job) r = job.get_results() print(len(r['source_id'])) print(r['w1mpro', 'w2mpro', 'w3mpro', 'w4mpro']) Explanation: Try GAIA and WISE End of explanation cmd = "SELECT * FROM gaiadr2.gaia_source AS g, \ gaiadr2.allwise_best_neighbour AS wbest, \ gaiadr1.allwise_original_valid AS allwise \ WHERE g.source_id = wbest.source_id AND wbest.allwise_oid = allwise.allwise_oid AND CONTAINS(POINT('ICRS',g.ra,g.dec),\ CIRCLE('ICRS'," + str(obj_ra) + "," + str(obj_dec) + "," + str(cone_radius) + "))=1 \ AND pmra IS NOT NULL AND abs(pmra)<10 \ AND pmdec IS NOT NULL AND abs(pmdec)<10;" print(cmd) job = Gaia.launch_job_async(cmd, dump_to_file=True) print(job) r = job.get_results() print(len(r['source_id'])) print(r['pmra', 'pmdec', 'w1mpro']) Explanation: Try GAIA + WISE with ProperMotion limit End of explanation cmd = "SELECT * FROM gaiadr2.gaia_source AS g, \ gaiadr2.allwise_best_neighbour AS wbest, \ gaiadr1.allwise_original_valid AS allwise, \ gaiadr1.tmass_original_valid AS tmass \ WHERE g.source_id = wbest.source_id AND wbest.allwise_oid 
= allwise.allwise_oid AND CONTAINS(POINT('ICRS',g.ra,g.dec),\ CIRCLE('ICRS'," + str(obj_ra) + "," + str(obj_dec) + "," + str(cone_radius) + "))=1\ AND allwise.tmass_key IS NOT NULL \ AND allwise.tmass_key = tmass.tmass_oid;" print(cmd) job = Gaia.launch_job_async(cmd, dump_to_file=True) print(job) r = job.get_results() print(len(r['source_id'])) print(r.colnames) r['ra', 'dec', 'ra_2', 'dec_2', 'ra_3', 'dec_3', 'phot_g_mean_mag', 'j_m', 'w1mpro', 'tmass_key', 'tmass_oid'] Explanation: Try GAIA-WISE-2MASS directly I check there is tmass_key in gaiadr1.allwise_original_valid (ALLWISE catalog) Desc of tmass_key: 2MASS PSC association. Unique identifier of the closest source in the 2MASS Point Source Catalog (PSC) that falls within 3 arcsec of the non-motion fit position of this WISE source. This is equivalent to the pts_key in the 2MASS PSC entry. This column is “null” if there is no 2MASS PSC source within 3 arcsec of the WISE source position. End of explanation
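A small optional follow-up, not in the original notebook: with Gaia, 2MASS and ALLWISE photometry joined into a single table, an infrared colour-colour diagram takes only a few lines. The column names below (j_m, ks_m, w1mpro, w2mpro) are assumed to be present in the joined result r, as suggested by the column listings printed above; adjust them if the actual table differs.

import matplotlib.pyplot as plt
jk = r['j_m'] - r['ks_m']        # 2MASS J - Ks colour
w12 = r['w1mpro'] - r['w2mpro']  # WISE W1 - W2 colour
plt.scatter(jk, w12, s=5)
plt.xlabel('J - Ks')
plt.ylabel('W1 - W2')
plt.show()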
1,041
Given the following text description, write Python code to implement the functionality described below step by step Description: Photo-z Determination for SpIES High-z Candidates Notebook that actually applies the algorithms from SpIESHighzQuasarPhotoz.ipynb to the quasar candidates. Step1: Since we are running on separate test data, we don't need to do a train_test_split here. But we will scale the data. Need to remember to scale the test data later! Step2: Applying to Quasars Candidates Quasars candidates from the legacy KDE algorithm are in<br> GTR-ADM-QSO-ir-testhighz_kdephotoz_lup_2016_quasar_candidates.dat Quasars candidates from the Random Forest Algorithm are in<br> GTR-ADM-QSO-ir_good_test_2016_out.fits Quasar candidates from the RF, SVM, and/or bagging algorithms are in<br> GTR-ADM-QSO-ir_good_test_2016_out_Stripe82all.fits<br> In the case of the latter file, this includes Stripe82 only. If we run on the other files, we might want to limit to Stripe 82 to keep the computing time reasonable. Step3: If you want to compare ZSPEC to ZPHOT, use the cells below for test set Step4: Scale the test data Step5: Not currently executing the next 2 cells, but putting the code here in case we want to do it later. Step6: Instantiate Photo-z Algorithm of Choice Here using Nadaraya-Watson and Random Forests Step7: Apply Photo-z Algorithm(s) Random Forest Step8: Nadaraya-Watson Step9: Only need this if Xtest is too big
Python Code: ## Read in the Training Data and Instantiating the Photo-z Algorithm %matplotlib inline from astropy.table import Table import numpy as np import matplotlib.pyplot as plt #data = Table.read('GTR-ADM-QSO-ir-testhighz_findbw_lup_2016_starclean.fits') #JT PATH ON TRITON to training set after classification #data = Table.read('/Users/johntimlin/Catalogs/QSO_candidates/Training_set/GTR-ADM-QSO-ir-testhighz_findbw_lup_2016_starclean_with_shenlabel.fits') data = Table.read('/Users/johntimlin/Catalogs/QSO_candidates/Training_set/GTR-ADM-QSO-Trainingset-with-McGreer-VVDS-DR12Q_splitlabel_VCVcut_best.fits') #JT PATH HOME USE SHEN ZCUT #data = Table.read('/home/john/Catalogs/QSO_Candidates/Training_set/GTR-ADM-QSO-ir-testhighz_findbw_lup_2016_starclean_with_shenlabel.fits') #data = data.filled() # Remove stars qmask = (data['zspec']>0) qdata = data[qmask] print len(qdata) # X is in the format need for all of the sklearn tools, it just has the colors #Xtrain = np.vstack([ qdata['ug'], qdata['gr'], qdata['ri'], qdata['iz'], qdata['zs1'], qdata['s1s2']]).T Xtrain = np.vstack([np.asarray(qdata[name]) for name in ['ug', 'gr', 'ri', 'iz', 'zs1', 's1s2']]).T #y = np.array(data['labels']) ytrain = np.array(qdata['zspec']) Explanation: Photo-z Determination for SpIES High-z Candidates Notebook that actually applies the algorithms from SpIESHighzQuasarPhotoz.ipynb to the quasar candidates. End of explanation # For algorithms that need scaled data: from sklearn.preprocessing import StandardScaler scaler = StandardScaler() scaler.fit(Xtrain) # Don't cheat - fit only on training data Explanation: Since we are running on separate test data, we don't need to do a train_test_split here. But we will scale the data. Need to remember to scale the test data later! End of explanation #testdata = Table.read('GTR-ADM-QSO-ir_good_test_2016_out_Stripe82all.fits') # TEST DATA USING 3.5<z<5 zrange ON TRITON #testdata = Table.read('/Users/johntimlin/Catalogs/QSO_candidates/Final_S82_candidates_full/GTR-ADM-QSO-ir_good_test_2016_out_Stripe82all.fits') # TEST DATA USING 2.9<z<5.4 zrange ON HOME #testdata = Table.read('/Users/johntimlin/Catalogs/QSO_Candidates/photoz/SpIES_SHELA_Quasar_Canidates_Shen_zrange_JTmultiproc.fits') #testdata = Table.read('./catalogs/HZ_forphotoz.fits') testdata = Table.read('/Users/johntimlin/Catalogs/QSO_candidates/New_training_candidates/Test_point_source_classifier/Final_sets/HZLZ_combined_all_wphotoz_alldata_allclassifiers.fits') #Limit to objects that have been classified as quasars #qsocandmask = ((testdata['ypredRFC']==0) | (testdata['ypredSVM']==0) | (testdata['ypredBAG']==0)) testdatacand = testdata#[qsocandmask] print len(testdata),len(testdatacand) Explanation: Applying to Quasars Candidates Quasars candidates from the legacy KDE algorithm are in<br> GTR-ADM-QSO-ir-testhighz_kdephotoz_lup_2016_quasar_candidates.dat Quasars candidates from the Random Forest Algorithm are in<br> GTR-ADM-QSO-ir_good_test_2016_out.fits Quasar candidates from the RF, SVM, and/or bagging algorithms are in<br> GTR-ADM-QSO-ir_good_test_2016_out_Stripe82all.fits<br> In the case of the latter file, this includes Stripe82 only. If we run on the other files, we might want to limit to Stripe 82 to keep the computing time reasonable. 
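As a rough sketch of that last point (not part of the original code), a Stripe 82 footprint cut could look like the following. The column names 'ra' and 'dec' and the approximate footprint limits (|Dec| < 1.25 deg, RA above 300 deg or below 60 deg) are assumptions; substitute whatever the candidate table actually provides.

# hypothetical footprint cut; column names and limits are placeholders
ra = np.asarray(testdata['ra'])
dec = np.asarray(testdata['dec'])
s82 = ((ra > 300.0) | (ra < 60.0)) & (np.abs(dec) < 1.25)
testdata_s82 = testdata[s82]
print(len(testdata), len(testdata_s82))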
End of explanation ## Test zspec objects with zspec >=2.9 and see how well the zphot matches with zspec #testdata = Table.read('/Users/johntimlin/Catalogs/QSO_candidates/Final_S82_candidates_full/QSOs_S82_wzspec_wcolors.fits') #Limit to objects that have been classified as quasars #qsocandmask = ((testdata['ypredRFC']==0) | (testdata['ypredSVM']==0) | (testdata['ypredBAG']==0)) #qsocandmask = (testdata['ZSPEC'] >= 2.9) #testdatacand = testdata#[qsocandmask] #print len(testdata),len(testdatacand) Explanation: If you want to compare ZSPEC to ZPHOT, use the cells below for test set End of explanation #Xtest = np.vstack([ testdatacand['ug'], testdatacand['gr'], testdatacand['ri'], testdatacand['iz'], testdatacand['zs1'], testdatacand['s1s2']]).T Xtest = np.vstack([np.asarray(testdatacand[name]) for name in ['ug', 'gr', 'ri', 'iz', 'zs1', 's1s2']]).T XStest = scaler.transform(Xtest) # apply same transformation to test data Explanation: Scale the test data End of explanation # Read in KDE candidates dataKDE = Table.read('GTR-ADM-QSO-ir-testhighz_kdephotoz_lup_2016_quasar_candidates.dat', format='ascii') print dataKDE.keys() print len(XKDE) XKDE = np.vstack([ dataKDE['ug'], dataKDE['gr'], dataKDE['ri'], dataKDE['iz'], dataKDE['zch1'], dataKDE['ch1ch2'] ]).T # Read in RF candidates dataRF = Table.read('GTR-ADM-QSO-ir_good_test_2016_out.fits') print dataRF.keys() print len(dataRF) # Canidates only maskRF = (dataRF['ypred']==0) dataRF = dataRF[maskRF] print len(dataRF) # X is in the format need for all of the sklearn tools, it just has the colors XRF = np.vstack([ dataRF['ug'], dataRF['gr'], dataRF['ri'], dataRF['iz'], dataRF['zs1'], dataRF['s1s2']]).T Explanation: Not currently executing the next 2 cells, but putting the code here in case we want to do it later. End of explanation import numpy as np from astroML.linear_model import NadarayaWatson model = NadarayaWatson('gaussian', 0.05) model.fit(Xtrain,ytrain) from sklearn.ensemble import RandomForestRegressor modelRF = RandomForestRegressor() modelRF.fit(Xtrain,ytrain) Explanation: Instantiate Photo-z Algorithm of Choice Here using Nadaraya-Watson and Random Forests End of explanation zphotRF = modelRF.predict(Xtest) Explanation: Apply Photo-z Algorithm(s) Random Forest End of explanation zphotNW = model.predict(Xtest) Explanation: Nadaraya-Watson End of explanation from dask import compute, delayed def process(Xin): return model.predict(Xin) # Create dask objects dobjs = [delayed(process)(x.reshape(1,-1)) for x in Xtest] import dask.threaded ypred = compute(*dobjs, get=dask.threaded.get) # The dask output needs to be reformatted. 
zphotNW = np.array(ypred).reshape(1,-1)[0] testdatacand['zphotNW'] = zphotNW testdatacand['zphotRF'] = zphotRF #TRITON PATH #testdatacand.write('/Users/johntimlin/Catalogs/QSO_candidates/photoz/Candidates_photoz_S82_shenzrange.fits', format='fits') #HOME PATH #testdatacand.write('/home/john/Catalogs/QSO_Candidates/photoz/Candidates_photoz_S82_shenzrange.fits', format='fits') testdatacand.write('./HZLZ_combined_all_hzclassifiers_wphotoz_new.fits') from densityplot import * from pylab import * fig = plt.figure(figsize=(5,5)) hex_scatter(testdatacand['zphotNW'],testdatacand['ug'], min_cnt=10, levels=2, std=True, smoothing=1, hkwargs={'gridsize': 100, 'cmap': plt.cm.Blues}, skwargs={'color': 'k'}) plt.xlabel('zphot') plt.ylabel('u-g') #plt.xlim([-0.1,5.5]) #plt.ylim([-0.1,5.5]) plt.show() from astroML.plotting import hist as fancyhist fancyhist(testdatacand['zphotRF'], bins="freedman", histtype="step") Explanation: Only need this if Xtest is too big End of explanation
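If the spectroscopic variant a few cells above is used instead (the commented-out S82 table with a 'ZSPEC' column), the usual photo-z quality numbers are quick to compute. The snippet below is only a sketch: the 'ZSPEC' column name, the use of zphotRF, the 1.48 x MAD convention for sigma_NMAD and the |dz| > 0.15 outlier cut are common conventions or assumptions rather than anything prescribed by the notebook.

# assumes testdatacand carries both a spectroscopic redshift and the new photo-z columns
zspec = np.asarray(testdatacand['ZSPEC'])
zphot = np.asarray(testdatacand['zphotRF'])
dz = (zphot - zspec) / (1.0 + zspec)
sigma_nmad = 1.48 * np.median(np.abs(dz - np.median(dz)))
outlier_frac = np.mean(np.abs(dz) > 0.15)
print(sigma_nmad, outlier_frac)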
1,042
Given the following text description, write Python code to implement the functionality described below step by step Description: Machine Learning Engineer Nanodegree Model Evaluation & Validation Project Step1: Data Exploration In this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results. Since the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into features and the target variable. The features, 'RM', 'LSTAT', and 'PTRATIO', give us quantitative information about each data point. The target variable, 'MEDV', will be the variable we seek to predict. These are stored in features and prices, respectively. Implementation Step2: Question 1 - Feature Observation As a reminder, we are using three features from the Boston housing dataset Step4: Answer Step5: Question 2 - Goodness of Fit Assume that a dataset contains five data points and a model made the following predictions for the target variable Step6: Answer Step7: Question 3 - Training and Testing What is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm? Hint Step8: Question 4 - Learning the Data Choose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model? Hint Step10: Question 5 - Bias-Variance Tradeoff When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions? Hint Step11: Making Predictions Once a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a decision tree regressor, the model has learned what the best questions to ask about the input data are, and can respond with a prediction for the target variable. You can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on. Question 9 - Optimal Model What maximum depth does the optimal model have? How does this result compare to your guess in Question 6? Run the code block below to fit the decision tree regressor to the training data and produce an optimal model. Step12: Answer Step13: Answer
Python Code: # Import libraries necessary for this project import numpy as np import pandas as pd from sklearn.cross_validation import ShuffleSplit # Import supplementary visualizations code visuals.py import visuals as vs # Pretty display for notebooks import matplotlib.pyplot as plt %matplotlib inline # Load the Boston housing dataset data = pd.read_csv('housing.csv') prices = data['MEDV'] features = data.drop('MEDV', axis = 1) # Success print "Boston housing dataset has {} data points with {} variables each.".format(*data.shape) Explanation: Machine Learning Engineer Nanodegree Model Evaluation & Validation Project: Predicting Boston Housing Prices Welcome to the first project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully! In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode. Getting Started In this project, you will evaluate the performance and predictive power of a model that has been trained and tested on data collected from homes in suburbs of Boston, Massachusetts. A model trained on this data that is seen as a good fit could then be used to make certain predictions about a home — in particular, its monetary value. This model would prove to be invaluable for someone like a real estate agent who could make use of such information on a daily basis. The dataset for this project originates from the UCI Machine Learning Repository. The Boston housing data was collected in 1978 and each of the 506 entries represent aggregated data about 14 features for homes from various suburbs in Boston, Massachusetts. For the purposes of this project, the following preprocessing steps have been made to the dataset: - 16 data points have an 'MEDV' value of 50.0. These data points likely contain missing or censored values and have been removed. - 1 data point has an 'RM' value of 8.78. This data point can be considered an outlier and has been removed. - The features 'RM', 'LSTAT', 'PTRATIO', and 'MEDV' are essential. The remaining non-relevant features have been excluded. - The feature 'MEDV' has been multiplicatively scaled to account for 35 years of market inflation. Run the code cell below to load the Boston housing dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported. 
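One optional extra beyond what the template asks for: the later discussion of the client homes refers to quartiles of the price distribution, and those are a single NumPy call, for example:

# not required by the project template; quartiles used informally in the later discussion
q1, q2, q3 = np.percentile(prices, [25, 50, 75])
print("Quartiles of MEDV: {:,.1f} / {:,.1f} / {:,.1f}".format(q1, q2, q3))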
End of explanation # TODO: Minimum price of the data minimum_price = np.min(prices) # TODO: Maximum price of the data maximum_price = np.max(prices) # TODO: Mean price of the data mean_price = np.mean(prices) # TODO: Median price of the data median_price = np.median(prices) # TODO: Standard deviation of prices of the data std_price = np.std(prices) # Show the calculated statistics print "Statistics for Boston housing dataset:\n" print "Minimum price: ${:,.2f}".format(minimum_price) print "Maximum price: ${:,.2f}".format(maximum_price) print "Mean price: ${:,.2f}".format(mean_price) print "Median price ${:,.2f}".format(median_price) print "Standard deviation of prices: ${:,.2f}".format(std_price) Explanation: Data Exploration In this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results. Since the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into features and the target variable. The features, 'RM', 'LSTAT', and 'PTRATIO', give us quantitative information about each data point. The target variable, 'MEDV', will be the variable we seek to predict. These are stored in features and prices, respectively. Implementation: Calculate Statistics For your very first coding implementation, you will calculate descriptive statistics about the Boston housing prices. Since numpy has already been imported for you, use this library to perform the necessary calculations. These statistics will be extremely important later on to analyze various prediction results from the constructed model. In the code cell below, you will need to implement the following: - Calculate the minimum, maximum, mean, median, and standard deviation of 'MEDV', which is stored in prices. - Store each calculation in their respective variable. End of explanation plt.figure(figsize=(40, 10)) for k, column in enumerate(features.columns): plt.subplot(1, 3, k+1) plt.plot(data[column], prices,'o') plt.title(column) plt.xlabel(column) Explanation: Question 1 - Feature Observation As a reminder, we are using three features from the Boston housing dataset: 'RM', 'LSTAT', and 'PTRATIO'. For each data point (neighborhood): - 'RM' is the average number of rooms among homes in the neighborhood. - 'LSTAT' is the percentage of homeowners in the neighborhood considered "lower class" (working poor). - 'PTRATIO' is the ratio of students to teachers in primary and secondary schools in the neighborhood. Using your intuition, for each of the three features above, do you think that an increase in the value of that feature would lead to an increase in the value of 'MEDV' or a decrease in the value of 'MEDV'? Justify your answer for each. Hint: Would you expect a home that has an 'RM' value of 6 be worth more or less than a home that has an 'RM' value of 7? End of explanation # TODO: Import 'r2_score' from sklearn.metrics import r2_score def performance_metric(y_true, y_predict): Calculates and returns the performance score between true and predicted values based on the metric chosen. # TODO: Calculate the performance score between 'y_true' and 'y_predict' score = r2_score(y_true, y_predict) # Return the score return score Explanation: Answer: MEDV should increase when the number of rooms (RM) increases. 
I would expect MEDV to decrease when LSTAT increases (poorer neighbourhood), and to decrease when PTRATION increases (higher ratio of students should correlate negatively with quality of education and rating of schools, and therefore correlate with less attractive residential neighborhoods). Developing a Model In this second section of the project, you will develop the tools and techniques necessary for a model to make a prediction. Being able to make accurate evaluations of each model's performance through the use of these tools and techniques helps to greatly reinforce the confidence in your predictions. Implementation: Define a Performance Metric It is difficult to measure the quality of a given model without quantifying its performance over training and testing. This is typically done using some type of performance metric, whether it is through calculating some type of error, the goodness of fit, or some other useful measurement. For this project, you will be calculating the coefficient of determination, R<sup>2</sup>, to quantify your model's performance. The coefficient of determination for a model is a useful statistic in regression analysis, as it often describes how "good" that model is at making predictions. The values for R<sup>2</sup> range from 0 to 1, which captures the percentage of squared correlation between the predicted and actual values of the target variable. A model with an R<sup>2</sup> of 0 is no better than a model that always predicts the mean of the target variable, whereas a model with an R<sup>2</sup> of 1 perfectly predicts the target variable. Any value between 0 and 1 indicates what percentage of the target variable, using this model, can be explained by the features. A model can be given a negative R<sup>2</sup> as well, which indicates that the model is arbitrarily worse than one that always predicts the mean of the target variable. For the performance_metric function in the code cell below, you will need to implement the following: - Use r2_score from sklearn.metrics to perform a performance calculation between y_true and y_predict. - Assign the performance score to the score variable. End of explanation # Calculate the performance of this model score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3]) print "Model has a coefficient of determination, R^2, of {:.3f}.".format(score) Explanation: Question 2 - Goodness of Fit Assume that a dataset contains five data points and a model made the following predictions for the target variable: | True Value | Prediction | | :-------------: | :--------: | | 3.0 | 2.5 | | -0.5 | 0.0 | | 2.0 | 2.1 | | 7.0 | 7.8 | | 4.2 | 5.3 | Would you consider this model to have successfully captured the variation of the target variable? Why or why not? Run the code cell below to use the performance_metric function and calculate this model's coefficient of determination. End of explanation # TODO: Import 'train_test_split' from sklearn.cross_validation import train_test_split # TODO: Shuffle and split the data into training and testing subsets X_train, X_test, y_train, y_test = train_test_split(features, prices, test_size=0.2, random_state=0) # Success print "Training and testing split was successful." Explanation: Answer: A R² of 0.92 indicates that 92% of the variance in the sample is explained. We can consider that the model has successfully captured the variation of the target variable. 
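As a quick sanity check on that 0.92 figure (not part of the original project), R^2 can also be computed by hand for the five points in the table, using R^2 = 1 - SS_res / SS_tot:

# hand computation of R^2 for the five-point example above
y_true = np.array([3.0, -0.5, 2.0, 7.0, 4.2])
y_pred = np.array([2.5, 0.0, 2.1, 7.8, 5.3])
ss_res = np.sum((y_true - y_pred)**2)
ss_tot = np.sum((y_true - np.mean(y_true))**2)
print(1.0 - ss_res / ss_tot)   # about 0.923, matching r2_score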
Implementation: Shuffle and Split Data Your next implementation requires that you take the Boston housing dataset and split the data into training and testing subsets. Typically, the data is also shuffled into a random order when creating the training and testing subsets to remove any bias in the ordering of the dataset. For the code cell below, you will need to implement the following: - Use train_test_split from sklearn.cross_validation to shuffle and split the features and prices data into training and testing sets. - Split the data into 80% training and 20% testing. - Set the random_state for train_test_split to a value of your choice. This ensures results are consistent. - Assign the train and testing splits to X_train, X_test, y_train, and y_test. End of explanation # Produce learning curves for varying training set sizes and maximum depths vs.ModelLearning(features, prices) Explanation: Question 3 - Training and Testing What is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm? Hint: What could go wrong with not having a way to test your model? Answer: Not splitting the dataset into training and testing subsets would mean that we would test model on the training set, with the risk that the predictive power out-of-sample was poor despite a good in-sample performance (risk of overfitting). Analyzing Model Performance In this third section of the project, you'll take a look at several models' learning and testing performances on various subsets of training data. Additionally, you'll investigate one particular algorithm with an increasing 'max_depth' parameter on the full training set to observe how model complexity affects performance. Graphing your model's performance based on varying criteria can be beneficial in the analysis process, such as visualizing behavior that may not have been apparent from the results alone. Learning Curves The following code cell produces four graphs for a decision tree model with different maximum depths. Each graph visualizes the learning curves of the model for both training and testing as the size of the training set is increased. Note that the shaded region of a learning curve denotes the uncertainty of that curve (measured as the standard deviation). The model is scored on both the training and testing sets using R<sup>2</sup>, the coefficient of determination. Run the code cell below and use these graphs to answer the following question. End of explanation vs.ModelComplexity(X_train, y_train) Explanation: Question 4 - Learning the Data Choose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model? Hint: Are the learning curves converging to particular scores? Answer: Looking at the 2nd graph (max_depth = 3) which leads to the highest score: we can see that initially the training score is very high then decreases as the number of training points increases. Conversely, the cross-validation score is very low initially and increases when the number of training points increases. The training and cross-validation scores converge to 0.8. After a certain level, adding more training points does not improve the cross-validation score (score stable over 250-300 training points). 
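For reference only (the project itself uses the visuals.py helper): similar learning curves can be generated directly with scikit-learn. The import location depends on the scikit-learn version; newer releases provide sklearn.model_selection.learning_curve, while releases as old as the sklearn.cross_validation one used above expose it as sklearn.learning_curve.learning_curve. The max_depth of 3, eight training sizes and 10-fold CV below are arbitrary illustration choices.

# illustrative only; on old scikit-learn use `from sklearn.learning_curve import learning_curve`
from sklearn.model_selection import learning_curve
from sklearn.tree import DecisionTreeRegressor
sizes, train_scores, test_scores = learning_curve(
    DecisionTreeRegressor(max_depth=3), features, prices,
    train_sizes=np.linspace(0.1, 1.0, 8), cv=10, scoring='r2')
print(np.mean(train_scores, axis=1))
print(np.mean(test_scores, axis=1))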
Complexity Curves The following code cell produces a graph for a decision tree model that has been trained and validated on the training data using different maximum depths. The graph produces two complexity curves — one for training and one for validation. Similar to the learning curves, the shaded regions of both the complexity curves denote the uncertainty in those curves, and the model is scored on both the training and validation sets using the performance_metric function. Run the code cell below and use this graph to answer the following two questions. End of explanation # TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV' from sklearn.metrics import make_scorer from sklearn.tree import DecisionTreeRegressor from sklearn.grid_search import GridSearchCV def fit_model(X, y): Performs grid search over the 'max_depth' parameter for a decision tree regressor trained on the input data [X, y]. # Create cross-validation sets from the training data cv_sets = ShuffleSplit(X.shape[0], n_iter = 10, test_size = 0.20, random_state = 0) # TODO: Create a decision tree regressor object regressor = DecisionTreeRegressor() # TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10 params = {'max_depth': range(1,11)} # TODO: Transform 'performance_metric' into a scoring function using 'make_scorer' scoring_fnc = make_scorer(performance_metric, greater_is_better=True) # TODO: Create the grid search object grid = GridSearchCV(regressor, params, scoring=scoring_fnc, cv=cv_sets) # Fit the grid search object to the data to compute the optimal model grid = grid.fit(X, y) # Return the optimal model after fitting the data return grid.best_estimator_ Explanation: Question 5 - Bias-Variance Tradeoff When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions? Hint: How do you know when a model is suffering from high bias or high variance? Answer: With a depth of 1, the training score is pretty low (large error) and the model suffers from high bias. When the model is trained with a maximum depth of 10, it is over-fitting the data and suffers from high variance. This can be seen from the graph based on the large difference between training and validation scores. Question 6 - Best-Guess Optimal Model Which maximum depth do you think results in a model that best generalizes to unseen data? What intuition lead you to this answer? Answer: A maximum depth of 4 seems to result in a model that best generalizes to unseen data, since it leads to the highest validation score, while not having over-fitting/high variance issue. Evaluating Model Performance In this final section of the project, you will construct a model and make a prediction on the client's feature set using an optimized model from fit_model. Question 7 - Grid Search What is the grid search technique and how it can be applied to optimize a learning algorithm? Answer: In the grid search technique, one applies cross-validation to multiple models that differ from each other by their combination of parameter values. Then, one determines which combination of parameters and therefore which model performs best. Question 8 - Cross-Validation What is the k-fold cross-validation training technique? What benefit does this technique provide for grid search when optimizing a model? 
Hint: Much like the reasoning behind having a testing set, what could go wrong with using grid search without a cross-validated set? Answer: The k-fold cross-validation training technique consists in splitting the data set in k sub-sets and using one of the subsets for testing and the k-1 others for training. The test scores obtained are averaged over the k tests. One of the benefits of this technique is that all data points are used for training, not an (arbitrary) subset, and all data points are used once for testing. By doing so, we lower the probability that the training and test sets are very different (i.e. corresponding to different set of events), which would lead to the model having a good fit on the training set but not having good predictive power on the test set (risk of overfitting on the training set). Besides, the larger k, the lower the variance of the estimates. <br/> One of the disadvantages is the compute time, which may be long given that learning needs to run k times. Implementation: Fitting a Model Your final implementation requires that you bring everything together and train a model using the decision tree algorithm. To ensure that you are producing an optimized model, you will train the model using the grid search technique to optimize the 'max_depth' parameter for the decision tree. The 'max_depth' parameter can be thought of as how many questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are part of a class of algorithms called supervised learning algorithms. In addition, you will find your implementation is using ShuffleSplit() for an alternative form of cross-validation (see the 'cv_sets' variable). While it is not the K-Fold cross-validation technique you describe in Question 8, this type of cross-validation technique is just as useful!. The ShuffleSplit() implementation below will create 10 ('n_splits') shuffled sets, and for each shuffle, 20% ('test_size') of the data will be used as the validation set. While you're working on your implementation, think about the contrasts and similarities it has to the K-fold cross-validation technique. For the fit_model function in the code cell below, you will need to implement the following: - Use DecisionTreeRegressor from sklearn.tree to create a decision tree regressor object. - Assign this object to the 'regressor' variable. - Create a dictionary for 'max_depth' with the values from 1 to 10, and assign this to the 'params' variable. - Use make_scorer from sklearn.metrics to create a scoring function object. - Pass the performance_metric function as a parameter to the object. - Assign this scoring function to the 'scoring_fnc' variable. - Use GridSearchCV from sklearn.grid_search to create a grid search object. - Pass the variables 'regressor', 'params', 'scoring_fnc', and 'cv_sets' as parameters to the object. - Assign the GridSearchCV object to the 'grid' variable. End of explanation # Fit the training data to the model using grid search reg = fit_model(X_train, y_train) # Produce the value for 'max_depth' print "Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth']) Explanation: Making Predictions Once a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a decision tree regressor, the model has learned what the best questions to ask about the input data are, and can respond with a prediction for the target variable. 
You can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on. Question 9 - Optimal Model What maximum depth does the optimal model have? How does this result compare to your guess in Question 6? Run the code block below to fit the decision tree regressor to the training data and produce an optimal model. End of explanation # Produce a matrix for client data client_data = [[5, 17, 15], # Client 1 [4, 32, 22], # Client 2 [8, 3, 12]] # Client 3 # Show predictions for i, price in enumerate(reg.predict(client_data)): print "Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price) features.describe() prices.describe() Explanation: Answer: The optimal model has a maximum depth of 4. Question 10 - Predicting Selling Prices Imagine that you were a real estate agent in the Boston area looking to use this model to help price homes owned by your clients that they wish to sell. You have collected the following information from three of your clients: | Feature | Client 1 | Client 2 | Client 3 | | :---: | :---: | :---: | :---: | | Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms | | Neighborhood poverty level (as %) | 17% | 32% | 3% | | Student-teacher ratio of nearby schools | 15-to-1 | 22-to-1 | 12-to-1 | What price would you recommend each client sell his/her home at? Do these prices seem reasonable given the values for the respective features? Hint: Use the statistics you calculated in the Data Exploration section to help justify your response. Run the code block below to have your optimized model make predictions for each client's home. End of explanation vs.PredictTrials(features, prices, fit_model, client_data) Explanation: Answer: The predicted selling prices are respectively around USD 391k, USD 189k and USD 943k. Client 1's house has a number of rooms in the first quartile and a poverty level in the top quartile. This is expected to lower the value of the house well below the average. This is however partly compensated by a low ratio of students to teacher (in the first quartile). The estimated price (in the 2nd quartile) seems reasonable for this house. Client 2's house has a small number of rooms and is in a poorer neighborhood than the first house. Additionally, the students to teacher ratio is high (equal to max). It seems therefore reasonable to have an estimated price in the first quartile, and close to the minimum price. Client 3's house has a large number of rooms (close to the max), is located in a neighborhood with a very low poverty level (3%, close to the minimum of 2%). And the students to teacher ratio is lower than the min in the initial dataset. It seems therefore reasonable for the estimated price to be close to the maximum price in the dataset. Sensitivity An optimal model is not necessarily a robust model. Sometimes, a model is either too complex or too simple to sufficiently generalize to new data. Sometimes, a model could use a learning algorithm that is not appropriate for the structure of the data given. Other times, the data itself could be too noisy or contain too few samples to allow a model to adequately capture the target variable — i.e., the model is underfitted. Run the code cell below to run the fit_model function ten times with different training and testing sets to see how the prediction for a specific client changes with the data it's trained on. End of explanation
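A hand-rolled variant of that sensitivity check, shown here only to illustrate what the described ten-trial procedure amounts to: refit the model on several different train/test splits and look at the spread of the price predicted for Client 1. The ten trials and the choice of Client 1 are arbitrary, and this reuses features, prices, fit_model, train_test_split and client_data defined above.

trial_prices = []
for seed in range(10):
    # new random split and a freshly tuned model for each trial
    X_tr, X_te, y_tr, y_te = train_test_split(features, prices, test_size=0.2, random_state=seed)
    reg_trial = fit_model(X_tr, y_tr)
    trial_prices.append(reg_trial.predict([client_data[0]])[0])
print("Range of Client 1 predictions: ${:,.2f}".format(max(trial_prices) - min(trial_prices)))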
1,043
Given the following text description, write Python code to implement the functionality described below step by step Description: Clustering analysis In this first notebook, we conduct a k-means clustering analysis on using an included MFC mask. Fortunately, Neurosynth includes a set of functions to make this rather easy. In this case I will use the simple neurosynth.analysis.cluster.magic function that takes care of most of the heavy lifting for you. By default, the magic() function performs a k-means co-activation based analysis using the same methods as our manuscript. First lets import some basic necessities Step1: Next, I load a previously generated Neurosynth dataset. This dataset was generated using version 0.4 of Neurosynth and the features are 60 topics generated using latent dietrich allocation (LDA). Feel free to generate a dataset using the latest version of Neurosynth and plug it into this analysis. Step2: Here, I use the magic function to perform the clustering analaysis. For each N I specify, an image is generated and placed in images/. Note that I specifiy that at least 80 studies must activate each voxel to be included in the analysis to ensure a robust classification. The following analysis will compute co-activation between each MFC voxel and the rest of the brain (reduced into 100 PCA components), and use the resulting distance matrix for classification. Note that this step may be computationally intensive. You may use the Clusterable class in neurosynth.analysis.cluster for a more custom analysis to avoid repeating the PCA for each classification if you desire. &gt; Note Step3: Next, I use nilearn's useful plotting functions to display the results on saggital and coronal slices Step4: For k = 12, it's hard to find a consistent color scheme, so I'm displaying the resulting using a randomly shuffled hls colors. Shuffle repeatedly until you can clearly perceive the different clusters
Python Code: import seaborn as sns from nilearn import plotting as niplt from matplotlib.colors import ListedColormap import numpy as np Explanation: Clustering analysis In this first notebook, we conduct a k-means clustering analysis using an included MFC mask. Fortunately, Neurosynth includes a set of functions to make this rather easy. In this case I will use the simple neurosynth.analysis.cluster.magic function that takes care of most of the heavy lifting for you. By default, the magic() function performs a k-means co-activation-based analysis using the same methods as our manuscript. First let's import some basic necessities: End of explanation from neurosynth.base.dataset import Dataset dataset = Dataset.load("data/neurosynth_60_0.4.pkl") Explanation: Next, I load a previously generated Neurosynth dataset. This dataset was generated using version 0.4 of Neurosynth and the features are 60 topics generated using latent Dirichlet allocation (LDA). Feel free to generate a dataset using the latest version of Neurosynth and plug it into this analysis. End of explanation # from neurosynth.analysis.cluster import magic # magic(dataset, roi_mask='data/mfc_mask.nii.gz', min_studies_per_voxel=80, output_dir='images/', n_clusters=3) # magic(dataset, roi_mask='data/mfc_mask.nii.gz', min_studies_per_voxel=80, output_dir='images/', n_clusters=9) # magic(dataset, roi_mask='data/mfc_mask.nii.gz', min_studies_per_voxel=80, output_dir='images/', n_clusters=12) Explanation: Here, I use the magic function to perform the clustering analysis. For each N I specify, an image is generated and placed in images/. Note that I specify that at least 80 studies must activate each voxel to be included in the analysis to ensure a robust classification. The following analysis will compute co-activation between each MFC voxel and the rest of the brain (reduced into 100 PCA components), and use the resulting distance matrix for classification. Note that this step may be computationally intensive. You may use the Clusterable class in neurosynth.analysis.cluster for a more custom analysis to avoid repeating the PCA for each classification if you desire. > Note: The following cell will crash in binder, and will take a long time on your local computer. Uncomment if you really want to run it; otherwise, we'll just use the precomputed images End of explanation # Generate color palette colors = sns.color_palette('Set1', 3) niplt.plot_roi('images/cluster_labels_k3.nii.gz', cut_coords=[4], display_mode='x', draw_cross=False, cmap = ListedColormap(colors), alpha=0.8) from plotting import nine_colors niplt.plot_roi('images/cluster_labels_k9.nii.gz', cut_coords=[4], display_mode='x', draw_cross=False, cmap = ListedColormap(nine_colors), alpha=0.8) niplt.plot_roi('images/cluster_labels_k9.nii.gz', display_mode='y', cut_coords=np.arange(-18, 62, 24), draw_cross=False, cmap = ListedColormap(nine_colors), alpha=0.7) Explanation: Next, I use nilearn's useful plotting functions to display the results on sagittal and coronal slices End of explanation from random import shuffle colors = sns.color_palette("hls", 12) shuffle(colors) niplt.plot_roi('images/cluster_labels_k12.nii.gz', cut_coords=[4], display_mode='x', draw_cross=False, cmap = ListedColormap(colors), alpha=0.95) Explanation: For k = 12, it's hard to find a consistent color scheme, so I'm displaying the results using randomly shuffled hls colors. Shuffle repeatedly until you can clearly perceive the different clusters End of explanation
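If a more data-driven guide for choosing between k = 3, 9 and 12 is wanted, silhouette scores on the reduced co-activation matrix can complement the visual inspection above. This is only a sketch: X_reduced stands for the (voxels x components) matrix produced during clustering and is not a variable defined in this notebook.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def silhouette_by_k(X_reduced, k_values=(3, 9, 12), seed=42):
    # Higher silhouette = tighter, better-separated clusters for that choice of k.
    scores = {}
    for k in k_values:
        labels = KMeans(n_clusters=k, random_state=seed, n_init=10).fit_predict(X_reduced)
        scores[k] = silhouette_score(X_reduced, labels)
    return scores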
1,044
Given the following text description, write Python code to implement the functionality described below step by step Description: ATLeS - Descriptive Statistics This script is designed to provide a general purpose tool for producing descriptive statistics and visualizations for ATLES data. The intent is that this notebook will provide a basic framework for you to build on. Instructions Provide experiment details in the 'Parameters' section below, then execute notebook to generate stats. General Information Everytime an experiment is run ATLeS generates three files. 1. date-time-experimentname.txt (log of tracking activity/issues) 2. date-time-experimentname-setup.txt (details of experimental setup) 3. date-time-experimentname-track.csv (track files; raw tracking data) Broadly this notebook will Step1: Parameters Input experiment details here Step2: Set analysis options here Step3: Globals Step4: Identify the Data Files Finds track and settingsfiles within the trackdirectory that match the experiment names and creates lists of track and settings files. Step5: Identify and Store Experimental Settings The number of experimental phases varies across experiments. This block identifies the phases used for the current experiment and verfies that all tracks have the same phase information. The settings may vary between tracks within an experiment. This block also identifies the settings for each track and writes them to a dictionary. Step6: Identify Phasetimes and Create Phase Dataframe This block extracts phase info from settings w. trackname and calculates phasetimes. This code currently assummes all phase time are the same across tracks within the experiment. This will need to be rewritten if we want to start running analyses across multiple studies with different phase times. Step7: Generate Basic Stats Step8: Generate Extinction Stats Step9: Combine Dataframes Combines settings, stim, phase, and with dataframe of basic descriptive stats. Step10: Cleaning Step11: Cleaning Step12: Cleaning Step13: Cleaning Step14: Cleaning Step15: Cleaning Step16: Cleaning Step17: Analysis - Preliminary Vizualizations - X and Y by Box Step18: Analysis - Preliminary Vizualizations - X and Y by Phase Step19: Analysis - Preliminary Vizualizations - Heatmaps Per Phase
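As a small illustration of the file layout described above, the three files written for a single run share a date-time-experimentname prefix, so they can be grouped per run before any processing. The helper below is a sketch under that naming assumption and is not part of the analysis code that follows.

from pathlib import Path

def group_run_files(trackdirectory, experimentname):
    # Map each run prefix to its log (.txt), setup (-setup.txt) and track (-track.csv) files.
    runs = {}
    for f in Path(trackdirectory).glob(f'**/*{experimentname}*'):
        prefix = f.name.replace('-track.csv', '').replace('-setup.txt', '').replace('.txt', '')
        runs.setdefault(prefix, []).append(f)
    return runs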
Python Code: from pathlib import Path import configparser import numpy as np import pandas as pd import seaborn import matplotlib.pyplot as plt import pingouinparametrics as pp # add src/ directory to path to import ATLeS code import os import sys module_path = os.path.abspath(os.path.join('..', 'src')) if module_path not in sys.path: sys.path.append(module_path) # imported from ATLeS from analysis.process import TrackProcessor from analysis.plot import TrackPlotter # displays plots in notebook output %matplotlib inline Explanation: ATLeS - Descriptive Statistics This script is designed to provide a general purpose tool for producing descriptive statistics and visualizations for ATLES data. The intent is that this notebook will provide a basic framework for you to build on. Instructions Provide experiment details in the 'Parameters' section below, then execute notebook to generate stats. General Information Everytime an experiment is run ATLeS generates three files. 1. date-time-experimentname.txt (log of tracking activity/issues) 2. date-time-experimentname-setup.txt (details of experimental setup) 3. date-time-experimentname-track.csv (track files; raw tracking data) Broadly this notebook will: 1. grab the relevant data sources (see above) and integrate them 2. clean up the data a bit 3. summarize the data a bit 4. vizualize the data a bit To do: Function to check for duplicates, remove empty rows from df Import Libraries End of explanation experimentname = 'ACTEST2' trackdirectory = '../data/tracks' experimenttype = 'extinction' # Set to 'extinction' or 'none'. Supplemental analyses are generated for extinction experiments. Explanation: Parameters Input experiment details here: End of explanation acquisitionlevel = .85 # Sets cut off level for excluding tracks based on poor tracking. notriggerexclude = True # If True, excludes tracks where the trigger was never triggered. If False, includes tracks where no trigger occurred Explanation: Set analysis options here: End of explanation framelist = [] # Collects frames generated for eventual combination Explanation: Globals End of explanation trackfiles = list(Path(trackdirectory).glob(f'**/*{experimentname}*track.csv')) settingsfiles = list(Path(trackdirectory).glob(f'**/*{experimentname}*setup.txt')) print(f'{len(trackfiles)} track files were found with the name {experimentname}') print(f'{len(settingsfiles)} settings files were found with the name {experimentname}\n') if len(trackfiles) != len(settingsfiles): print('WARNING: Mismatched track and settings files.') Explanation: Identify the Data Files Finds track and settingsfiles within the trackdirectory that match the experiment names and creates lists of track and settings files. End of explanation Config = configparser.ConfigParser() settingsdic ={} # Dictionary used to store all settings information. phaselist = [] # List of phases used to verify phases are consistent across tracks. 
# reads and organizes information from each settings file for file in settingsfiles: Config.read(file) # generate clean list of stimuli stiminfo = Config.get('experiment', 'stimulus') #gets stim info stiminfo = stiminfo.replace('(', ',').replace(')', '').replace(' ', '').split(',')[1:] #cleans stim list # generate clean list of phases phaselisttemp = Config.get('phases', 'phases_argstrings') # gets phase info phaselisttemp = phaselisttemp.replace('-p ', '').replace(' ', '').split(',')[:-1] #cleans phase list # compare each phase list with the list from the previous settings file if len(phaselist) == 0: phaselist = phaselisttemp elif phaselist != phaselisttemp: print('Warning: Inconsistent phases between settings files.') else: pass # counts phases and generates phase variable names phasenumber = len(phaselist)//2 phasenames = [] for i in range(phasenumber): p, t, s = 'phase', 'time', 'stim' phase = p+str(i+1) phasetime = phase + t phasestim = phase + s phasenames.extend((phasetime, phasestim)) # gets settings info from filename (track/box) trackname = file.parts[-1].replace("-setup.txt", "") box = file.parts[-2] # gets settings info from setting file controller = Config.get('experiment', 'controller') trigger = Config.get('experiment', 'trigger') settings = [phaselisttemp, controller, trigger, stiminfo, box, str(file)] # puts all settings in dic keyed to trackname settingsdic[trackname] = settings # creates settings dataframe from settingsdic dfsettings = pd.DataFrame(settingsdic).transpose() dfsettings.columns = ['phases', 'controller', 'trigger', 'stimulus', 'box', 'file'] dfsettings['track'] = dfsettings.index # creates stimulus dataframe, splits up and names stims dfstim = pd.DataFrame(dfsettings.stimulus.values.tolist(), index=dfsettings.index).fillna('-') for col in range(dfstim.shape[1]): dfstim=dfstim.rename(columns = {col:('stim_setting' + str(col))}) framelist.append(dfsettings) dfsettings.head(3) Explanation: Identify and Store Experimental Settings The number of experimental phases varies across experiments. This block identifies the phases used for the current experiment and verfies that all tracks have the same phase information. The settings may vary between tracks within an experiment. This block also identifies the settings for each track and writes them to a dictionary. End of explanation phaseinfo = settingsdic.get(trackname)[0] phaseinfo = [x for x in phaseinfo if any(c.isdigit() for c in x)] phaseinfo = list(map(int, phaseinfo)) phaseinfo = [i * 60 for i in phaseinfo] phaselen = len(phaseinfo) phaset = [] for i in range(phaselen): times = sum(phaseinfo[0:i+1]) phaset.append(times) # moves 0 to the first entry of phaset (works, but find a cleaner way to do this) a = 0 phaset[0:0] = [a] phasedic = {} for i in range(phaselen): phasedic[i+1] = [phaset[i], phaset[i+1]] # splits up and names the phases dfphase = pd.DataFrame(dfsettings.phases.values.tolist(), index=dfsettings.index).fillna('-') dfphase.columns = phasenames phasenum = len(dfphase.columns)//2 framelist.append(dfphase) dfphase.head(3) Explanation: Identify Phasetimes and Create Phase Dataframe This block extracts phase info from settings w. trackname and calculates phasetimes. This code currently assummes all phase time are the same across tracks within the experiment. This will need to be rewritten if we want to start running analyses across multiple studies with different phase times. 
End of explanation dfstats = pd.DataFrame() for track in trackfiles: # gets track from file name trackname = track.parts[-1].replace("-track.csv", "") # gets stats from TrackProcessor (ATLeS analysis class) processor = TrackProcessor(str(track), normalize_x_with_trigger='xpos < 0.50') tempstatsdic = processor.get_stats(include_phases=True) # gets stats from track object # flattens dictionary into dataframe, from https://stackoverflow.com/questions/13575090/ dftemp = pd.DataFrame.from_dict({(i,j): tempstatsdic[i][j] for i in tempstatsdic.keys() for j in tempstatsdic[i].keys()}, orient='index') #transposes dataframe and adds track as index dftemp = dftemp.transpose() dftemp['track'] = trackname dftemp.set_index('track', inplace=True) dfstats = dfstats.append(dftemp, sort=True) if 'phase 0' in dfstats.columns: dfstats.rename({'phase 0': 'p1', 'phase 1': 'p2', 'phase 2': 'p3'}, axis='columns', inplace = True) dfstats.columns = dfstats.columns.map('|'.join) framelist.append(dfstats) dfstats.head(3) Explanation: Generate Basic Stats End of explanation if experimenttype == 'extinction': dfextstats = pd.DataFrame() for track in trackfiles: # gets track from file name trackname = track.parts[-1].replace("-track.csv", "") # gets advances stats from TrackProcessor (ATLeS analysis class) processor = TrackProcessor(str(track)) # passes track to track processor and returns track object tempstatsdic = processor.get_exp_stats('extinction') # gets stats from track object dftemp3 = pd.DataFrame(tempstatsdic, index=[0]) dftemp3['track'] = trackname dftemp3.set_index('track', inplace=True) dfextstats = dfextstats.append(dftemp3, sort=True) framelist.append(dfextstats) else: print('Extinction experiment not selected in Parameters section.') dfextstats.head(3) Explanation: Generate Extinction Stats End of explanation df = pd.concat(framelist, axis=1, sort=False) # combines all frames df.dropna(axis=0, how='all', inplace=True) # drops any rows where all values are missing df.head(3) Explanation: Combine Dataframes Combines settings, stim, phase, and with dataframe of basic descriptive stats. 
End of explanation print(f'Dataframe Shape:{df.shape}') print() print('Column Names by DataType') for dt in df.dtypes.unique(): print(f'Data Type, {dt}:') print(*list(df.select_dtypes(include=[dt]).columns), sep = ', ') print() # print('Number of Tracks with Null Data by Column:') #fix this # print(df[df.isnull().any(axis=1)][df.columns[df.isnull().any()]].count()) # print() Explanation: Cleaning: Dataframe Characteristics End of explanation print(f'''Track Times: Mean {df['all|Total time (sec)'].mean()}, Minimum {df['all|Total time (sec)'].min()}, Maximum {df['all|Total time (sec)'].max()}, Count {df['all|Total time (sec)'].count()}''') fig, ax = plt.subplots(1, 1, figsize=(6, 6)) ax.ticklabel_format(useOffset=False) # prevents appearance of scientific notation on y axis df.boxplot(column='all|Total time (sec)', by='box', ax=ax) Explanation: Cleaning: Early Termination Check End of explanation print(f'''Valid Datapoints: Mean {df['all|%Valid datapoints'].mean()}, Minimum {df['all|%Valid datapoints'].min()}, Maximum {df['all|%Valid datapoints'].max()}, Count {df['all|%Valid datapoints'].count()}''') fig, ax = plt.subplots(1, 1, figsize=(6, 6)) df.boxplot(column='all|%Valid datapoints', by='box', ax=ax) Explanation: Cleaning: Poor Tracking Check End of explanation print(f'''Number of Triggers: Mean {df['phase 2|#Triggers'].mean()}, Minimum {df['all|#Triggers'].min()}, Maximum {df['all|#Triggers'].max()}, Count {df['all|#Triggers'].count()}''') fig, ax = plt.subplots(1, 1, figsize=(6, 6)) df.boxplot(column='phase 2|#Triggers', by='box', ax=ax) Explanation: Cleaning: No Trigger Check End of explanation print(f'Raw Track Number: {df.shape[0]}') df = df.drop(df[df['all|Total time (sec)'] < (df['all|Total time (sec)'].mean())* .75].index) # drops rows if any data is missing, this will remove early termination tracks print(f'Modified Track Number: {df.shape[0]} (following removal of tracks less than 75% the length of the experiment mean)') df = df.drop(df[df['all|%Valid datapoints'] < acquisitionlevel].index) print(f'Modified Track Number: {df.shape[0]} (following removal for poor tracking set at less than {acquisitionlevel}% valid datapoints)') if notriggerexclude == True: df = df.drop(df[df['phase 2|#Triggers'] == 0].index) # drops rows if there was no trigger during phase 2; NOTE: fix this so it works if learning phase is not 2 print(f'Modified Track Number: {df.shape[0]} (following removal of tracks with no triggers during the learning)') Explanation: Cleaning: Removing Tracks for Early Termination, Poor Tracking, No Trigger End of explanation dftrig = df.groupby('box')['trigger'].describe() dftrig boxlist = df.box.unique().tolist() #creates a list of all boxes in the experiment onetriglist = dftrig.index[dftrig.unique < 2].tolist() # creates a list of boxes with less than 2 trigger conditions boxlist = [x for x in boxlist if x not in onetriglist] # removes boxes with less than 2 trigger conditions if len(onetriglist) > 0: print(f'WARNING: The following boxes had only one trigger condition: {onetriglist}. 
These boxes removed from trigger analyses below.') else: pass print(f'Trigger Conditions: {df.trigger.unique()}') print() from scipy.stats import ttest_ind # performs welch's t-test (does not assume equal variances) on all floats and prints any that are signficantly different as a function of trigger for i in df.select_dtypes(include=['float64']).columns: for b in boxlist: dfbox = df[df.box == b] ttest_result = ttest_ind(dfbox[dfbox.trigger == dfbox.trigger.unique()[0]][i], dfbox[dfbox.trigger == dfbox.trigger.unique()[1]][i], equal_var=False, nan_policy='omit') if ttest_result.pvalue < (.05/len(df.select_dtypes(include=['float64']).columns)): print(i) print(f' {b}: Welchs T-Test indicates significant difference by trigger condition, p = {ttest_result.pvalue}') print(f' Trigger Condition 1 Mean: {dfbox[dfbox.trigger == dfbox.trigger.unique()[0]][i].mean()}') print(f' Trigger Condition 2 Mean: {dfbox[dfbox.trigger == dfbox.trigger.unique()[1]][i].mean()}') print() Explanation: Cleaning: Checking Randomization of Trigger Condition End of explanation def betweensubjectANOVA (dependentvar, betweenfactor, suppress): try: anovaresult = pp.anova(dv=dependentvar, between=betweenfactor, data=df, detailed=True, export_filename=None) pvalue = anovaresult.loc[anovaresult.Source==betweenfactor]['p-unc'].values[0] if pvalue >= .05/len(df.select_dtypes(include=['float64']).columns): if suppress == False: print(f'{dependentvar}') print(f' NOT significant: One-way ANOVA conducted testing {betweenfactor} as significant predictor of {dependentvar}. P = {pvalue}') print() else: pass else: print(f'{dependentvar}') print(f' SIGNIFICANT: One-way ANOVA conducted testing {betweenfactor} as significant predictor of {dependentvar}. P = {pvalue}') fig, ax = plt.subplots(1, 1, figsize=(6, 6)) df.boxplot(column=dependentvar, by=betweenfactor, ax=ax) print() except: print(f'{dependentvar} analysis failed. Check descriptives.') for col in df.select_dtypes(include=['float64']).columns: betweensubjectANOVA(col,'box', True) Explanation: Cleaning: Checking for Box Variations Conducts one-way ANOVAs using box as an independent variable and all floats as dependent variables. Uses a Bonferroni correction. End of explanation fig, ax = plt.subplots(1, 3, figsize=(15, 6), sharey=True) df.boxplot(column=['phase 1|Avg. normed x coordinate', 'phase 2|Avg. normed x coordinate', 'phase 3|Avg. normed x coordinate'], by='box', ax=ax) fig, ax = plt.subplots(1, 3, figsize=(15, 6), sharey=True) df.boxplot(column=['phase 1|Avg. y coordinate', 'phase 2|Avg. y coordinate', 'phase 3|Avg. y coordinate'], by='box', ax=ax) Explanation: Analysis - Preliminary Vizualizations - X and Y by Box End of explanation fig, ax = plt.subplots(1, 1, figsize=(15, 6)) df.boxplot(column=['phase 1|Avg. normed x coordinate', 'phase 2|Avg. normed x coordinate', 'phase 3|Avg. normed x coordinate'], ax=ax) fig, ax = plt.subplots(1, 1, figsize=(15, 6)) df.boxplot(column=['phase 1|Avg. y coordinate', 'phase 2|Avg. y coordinate', 'phase 3|Avg. y coordinate'], ax=ax) Explanation: Analysis - Preliminary Vizualizations - X and Y by Phase End of explanation plotter = TrackPlotter(processor) plotter.plot_heatmap(plot_type='per-phase') # 'phase 1|Avg. normed x coordinate', 'phase 2|Avg. normed x coordinate', 'phase 3|Avg. normed x coordinate' # aov = rm_anova(dv='DV', within='Time', data=df, correction='auto', remove_na=True, detailed=True, export_filename=None) # print_table(aov) phasenumcount = 1 dependentvar = 'Avg. 
normed x coordinate' dfanova = pd.DataFrame() while phasenumcount <= phasenum: colname = f'phase {str(phasenumcount)}|{dependentvar}' dftemp = df[[colname]].copy() dftemp.columns.values[0] = dependentvar dftemp['phase'] = phasenumcount dfanova = dfanova.append(dftemp) phasenumcount +=1 pp.rm_anova(dv='Avg. normed x coordinate', within='phase', data=dfanova, correction='auto', remove_na=True, detailed=False, export_filename=None) Explanation: Analysis - Preliminary Vizualizations - Heatmaps Per Phase End of explanation
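A natural follow-up to the repeated-measures ANOVA above, if it comes back significant, is a set of paired comparisons between phases with a Bonferroni correction. The sketch below uses scipy rather than the stats wrapper imported at the top, and it assumes the 'phase N|Avg. normed x coordinate' column naming used throughout this notebook.

from itertools import combinations
from scipy.stats import ttest_rel

def pairwise_phase_tests(df, dependentvar='Avg. normed x coordinate', n_phases=3, alpha=0.05):
    pairs = list(combinations(range(1, n_phases + 1), 2))
    corrected_alpha = alpha / len(pairs)          # Bonferroni correction across phase pairs
    results = {}
    for a, b in pairs:
        col_a, col_b = f'phase {a}|{dependentvar}', f'phase {b}|{dependentvar}'
        paired = df[[col_a, col_b]].dropna()
        t, p = ttest_rel(paired[col_a], paired[col_b])
        results[(a, b)] = {'t': t, 'p': p, 'significant': p < corrected_alpha}
    return results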
1,045
Given the following text description, write Python code to implement the functionality described below step by step Description: HgTe edge in proximity to an s-wave superconductor Step1: Let us define a simple real, and hence time reversal invariant lattice model that can serve as a good description to a 1D chiral edge channel. We start from the SSH model and relabel the sublattice degrees of freedom as spins, and we introduce an extra onsite magnetic field $B$ in the z direction. We also introduce s-wave superconductivity in the system, with the usual $i\sigma_y$-type pairing. The onsite $U$ and hopping matrices $T$ are defined below as sympy.Matrix objects since we want to perform some analytic calculations with them Step2: The $k$-dependent Bogoliubov–de Gennes matrix is defined below Step3: We can diagonalize this yealding the eigenvalues of the system Step4: Since we have an explicitely real Bogoliubov-de Gennes matrix we can define a chiral symmetry as $\tau_1\sigma_0$. Whit the help of this symmetry we can transform the Bogoliubov-de Gennes matrix to a block off-diagonal form. Step5: This is the Chiral symmetry operator Step6: The eigenvectors of this matrix give the unitary transformation necessary to block off-diagonalize the $\mathcal{H}$ matrix Step7: The Winding number of the determinant of the nonzero subblock will serve as a topological invariant for this model Step8: Looking at the imaginary and real part of this quantity we recognize that it describes an ellipse Step9: From the above expressions we can infer the topological phase diagram of the system. While keeping a finite value of $\Delta$ tuning $\mu^2+\Delta^2-B^2$ in the interval $[0,4\gamma^2]$ the system has a Winding number of one and hence is topological, otherwhise it is trivial. Let us examine the winding and the spectrum as we tune the parameters of the system Step10: Finally let us calculate the spectrum of a wire of finite length and let us look for zero energy excitations!
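Before working through the symbolic construction, it can help to see the same Bogoliubov-de Gennes matrix built numerically. The sketch below uses the onsite and hopping blocks written out in the code that follows; the parameter values are illustrative assumptions only.

import numpy as np

def bdg_spectrum(k, mu=0.0, B=0.5, gamma=1.0, Delta=0.3):
    # Bulk BdG matrix H(k) = U + exp(ik) T + exp(-ik) T^T for the blocks defined below.
    U = np.array([[-mu + B, gamma, 0, Delta],
                  [gamma, -mu - B, -Delta, 0],
                  [0, -Delta, mu - B, -gamma],
                  [Delta, 0, -gamma, mu + B]], dtype=complex)
    T = np.zeros((4, 4), dtype=complex)
    T[1, 0] = gamma
    T[3, 2] = -gamma
    Hk = U + np.exp(1j * k) * T + np.exp(-1j * k) * T.conj().T
    return np.linalg.eigvalsh(Hk)

ks = np.linspace(0, 2 * np.pi, 200)
bands = np.array([bdg_spectrum(k) for k in ks])   # four BdG bands at each k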
Python Code: #here we define sympy symbols to be used in the analytic calculations g,mu,b,D,k=sympy.symbols('gamma mu B Delta k',real=True) Explanation: HgTe edge in proximity to an s-wave superconductor End of explanation # onsite and hopping terms U=sympy.Matrix([[-mu+b,g,0,D], [g,-mu-b,-D,0], [0,-D,mu-b,-g], [D,0,-g,mu+b]]) T=sympy.Matrix([[0,0,0,0], [g,0,0,0], [0,0,0,0], [0,0,-g,0]]) Explanation: Let us define a simple real, and hence time reversal invariant lattice model that can serve as a good description to a 1D chiral edge channel. We start from the SSH model and relabel the sublattice degrees of freedom as spins, and we introduce an extra onsite magnetic field $B$ in the z direction. We also introduce s-wave superconductivity in the system, with the usual $i\sigma_y$-type pairing. The onsite $U$ and hopping matrices $T$ are defined below as sympy.Matrix objects since we want to perform some analytic calculations with them: End of explanation Hk=sympy.exp(sympy.I*k)*T+sympy.exp(-sympy.I*k)*T.transpose()+U Hk Explanation: The $k$-dependent Bogoliubov–de Gennes matrix is defined below End of explanation # this is where we will keep the eigenvalues of the BdG matrix bdgspectr=list(Hk.eigenvals()) # in theis list we keep the eigenvalues of the particle block wspect=list( ((sympy.exp(sympy.I*k).rewrite(sin)*T+ sympy.exp(-sympy.I*k).rewrite(sin)*T.transpose()+ U)[:2,:2]).eigenvals() ) wspect Explanation: We can diagonalize this yealding the eigenvalues of the system: End of explanation #Pauli matrices to be used in symbolic calculations S1=sympy.physics.matrices.msigma(1) S2=sympy.physics.matrices.msigma(2) S3=sympy.physics.matrices.msigma(3) S0=S1*S1 Explanation: Since we have an explicitely real Bogoliubov-de Gennes matrix we can define a chiral symmetry as $\tau_1\sigma_0$. Whit the help of this symmetry we can transform the Bogoliubov-de Gennes matrix to a block off-diagonal form. 
End of explanation Kron(S1,S0) Explanation: This is the Chiral symmetry operator End of explanation P,D=Kron(S1,S0).diagonalize() P*Hk*P.inv() Explanation: The eigenvectors of this matrix give the unitary transformation necessary to block off-diagonalize the $\mathcal{H}$ matrix End of explanation detblock=sympy.simplify(( ((( P*( Hk )*P.inv() )[:2,2:]).det()) ).rewrite(sin)) Explanation: The Winding number of the determinant of the nonzero subblock will serve as a topological invariant for this model: End of explanation sympy.re(detblock) sympy.im(detblock) Explanation: Looking at the imaginary and real part of this quantity we recognize that it describes an ellipse: End of explanation figsize(12,6) @interact(mu=(-3,3,0.1),B=(0,2,0.1),gamma=fixed(1),Delta=(0,2,0.1)) def spectr_wire(mu=0,B=0,gamma=1,Delta=0): # this part produces the spectrum subplot(121) k=linspace(0,2*pi,100) I=1j # evaluating the BdG spectra plot(k,real(eval(str(bdgspectr[0]))), 'o',lw=3,label='BdG',mec='green',mfc='green',alpha=0.5,mew=0) for i in [1,2,3]: plot(k,real(eval(str(bdgspectr[i]))), 'o',lw=3,mec='green',mfc='green',alpha=0.5,mew=0) # evaluating the particle and the hole spectra without superconductivity plot(k,eval(str(wspect[0])),'r-',lw=3,label='particle') plot(k,eval(str(wspect[1])),'r-',lw=3) plot(k,-eval(str(wspect[0])),'b--',lw=3,label='hole') plot(k,-eval(str(wspect[1])),'b--',lw=3) plot(k,0*k,'k-',lw=4) grid() xlim(0,2*pi) ylim(-3,3) xlabel(r'$k$',fontsize=20) ylabel(r'$E$',fontsize=20) legend(fontsize=20) # this part produces the winding plot subplot(122) plot(0,0,'ko',ms=8) plot(eval(str(sympy.re(detblock))), eval(str(sympy.im(detblock))),lw=3) xlabel(r'$\mathrm{Re}(\mathrm{det}(h))$',fontsize=20) ylabel(r'$\mathrm{Im}(\mathrm{det}(h))$',fontsize=20) grid() xlim(-5,5) ylim(-5,5) Explanation: From the above expressions we can infer the topological phase diagram of the system. While keeping a finite value of $\Delta$ tuning $\mu^2+\Delta^2-B^2$ in the interval $[0,4\gamma^2]$ the system has a Winding number of one and hence is topological, otherwhise it is trivial. Let us examine the winding and the spectrum as we tune the parameters of the system: End of explanation # this builds a BdG matrix of a finite system def HgTe_wire_BDG_Ham(N=10,mu=0,B=0,gamma=1,Delta=0): idL=eye(N); # identity matrix of dimension L odL=diag(ones(N-1),1);# upper off diagonal matrix with ones of size L U=matrix([[-mu+B,gamma,0,Delta], [gamma,-mu-B,-Delta,0], [0,-Delta,mu-B,-gamma], [Delta,0,-gamma,mu+B]]) T=matrix([[0,0,0,0], [gamma,0,0,0], [0,0,0,0], [0,0,-gamma,0]]) return kron(idL,U)+kron(odL,T)+kron(odL,T).H # calculate the spectrum as the function of chemical potential for different values of N,B and Delta figsize(12,6) uran=linspace(-3,3,50) @interact(N=(10,20,1),B=(0,3,.1),Delta=(0,1,0.1)) def playBdG(N=10,B=0,Delta=0.2,): dat=[] for mu in uran: dat.append(eigvalsh(HgTe_wire_BDG_Ham(N,mu,B,1,Delta))) plot(uran,dat,'r',lw=3); xlabel(r'$\mu$',fontsize=20) ylabel(r'$E^{BdG}_n$',fontsize=20) Explanation: Finally let us calculate the spectrum of a wire of finite length and let us look for zero energy excitations! End of explanation
1,046
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2019 DeepMind Technologies Limited. Step1: Environments <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https Step2: The code below defines a dummy RL environment for use in the examples below. Step3: Creating a Server and Client Step4: For details on customizing the sampler, remover, and rate limiter, see below. Example 1 Step5: The animation illustrates the state of the server at each step in the above code block. Although each item is being set to have the same priority value of 1.5, items do not need to have the same priority values. In real world scenarios, items would have differing and dynamically-calculated priority values. <img src="https Step6: Example 2 Step7: Inserting Complete Episodes Step8: Sampling Complete Episodes in TensorFlow Step9: Example 3 Step10: Inserting Sequences of Varying Length into Multiple Priority Tables Step11: This diagram shows the state of the server after executing the above cell. <img src="https Step12: Creating a Server with a MaxHeap Sampler and a MinHeap Remover Setting max_times_sampled=1 causes each item to be removed after it is sampled once. The end result is a priority table that essentially functions as a max priority queue. Step13: Creating a Server with One Queue and One Circular Buffer Behavior of canonical data structures such as circular buffer or a max priority queue can be implemented in Reverb by modifying the sampler and remover or by using the PriorityTable queue initializer. Step14: Example 5
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Copyright 2019 DeepMind Technologies Limited. End of explanation !pip install dm-tree !pip install dm-reverb[tensorflow] import reverb import tensorflow as tf Explanation: Environments <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/deepmind/reverb/blob/master/examples/demo.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/deepmind/reverb/blob/master/examples/demo.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a> </td> </table> Introduction This colab is a demonstration of how to use Reverb through examples. Setup Installs the stable build of Reverb (dm-reverb) and TensorFlow (tf) to match. End of explanation OBSERVATION_SPEC = tf.TensorSpec([10, 10], tf.uint8) ACTION_SPEC = tf.TensorSpec([2], tf.float32) def agent_step(unused_timestep) -> tf.Tensor: return tf.cast(tf.random.uniform(ACTION_SPEC.shape) > .5, ACTION_SPEC.dtype) def environment_step(unused_action) -> tf.Tensor: return tf.cast(tf.random.uniform(OBSERVATION_SPEC.shape, maxval=256), OBSERVATION_SPEC.dtype) Explanation: The code below defines a dummy RL environment for use in the examples below. End of explanation # Initialize the reverb server. simple_server = reverb.Server( tables=[ reverb.Table( name='my_table', sampler=reverb.selectors.Prioritized(priority_exponent=0.8), remover=reverb.selectors.Fifo(), max_size=int(1e6), # Sets Rate Limiter to a low number for the examples. # Read the Rate Limiters section for usage info. rate_limiter=reverb.rate_limiters.MinSize(2), # The signature is optional but it is good practice to set it as it # enables data validation and easier dataset construction. Note that # we prefix all shapes with a 3 as the trajectories we'll be writing # consist of 3 timesteps. signature={ 'actions': tf.TensorSpec([3, *ACTION_SPEC.shape], ACTION_SPEC.dtype), 'observations': tf.TensorSpec([3, *OBSERVATION_SPEC.shape], OBSERVATION_SPEC.dtype), }, ) ], # Sets the port to None to make the server pick one automatically. # This can be omitted as it's the default. port=None) # Initializes the reverb client on the same port as the server. client = reverb.Client(f'localhost:{simple_server.port}') Explanation: Creating a Server and Client End of explanation # Dynamically adds trajectories of length 3 to 'my_table' using a client writer. with client.trajectory_writer(num_keep_alive_refs=3) as writer: timestep = environment_step(None) for step in range(4): action = agent_step(timestep) writer.append({'action': action, 'observation': timestep}) timestep = environment_step(action) if step >= 2: # In this example, the item consists of the 3 most recent timesteps that # were added to the writer and has a priority of 1.5. 
writer.create_item( table='my_table', priority=1.5, trajectory={ 'actions': writer.history['action'][-3:], 'observations': writer.history['observation'][-3:], } ) Explanation: For details on customizing the sampler, remover, and rate limiter, see below. Example 1: Overlapping Trajectories Inserting Overlapping Trajectories End of explanation # Dataset samples sequences of length 3 and streams the timesteps one by one. # This allows streaming large sequences that do not necessarily fit in memory. dataset = reverb.TrajectoryDataset.from_table_signature( server_address=f'localhost:{simple_server.port}', table='my_table', max_in_flight_samples_per_worker=10) # Batches 2 sequences together. # Shapes of items is now [2, 3, 10, 10]. batched_dataset = dataset.batch(2) for sample in batched_dataset.take(1): # Results in the following format. print(sample.info.key) # ([2], uint64) print(sample.info.probability) # ([2], float64) print(sample.data['observations']) # ([2, 3, 10, 10], uint8) print(sample.data['actions']) # ([2, 3, 2], float32) Explanation: The animation illustrates the state of the server at each step in the above code block. Although each item is being set to have the same priority value of 1.5, items do not need to have the same priority values. In real world scenarios, items would have differing and dynamically-calculated priority values. <img src="https://raw.githubusercontent.com/deepmind/reverb/master/docs/animations/diagram1.svg" /> Sampling Overlapping Trajectories in TensorFlow End of explanation EPISODE_LENGTH = 150 complete_episode_server = reverb.Server(tables=[ reverb.Table( name='my_table', sampler=reverb.selectors.Prioritized(priority_exponent=0.8), remover=reverb.selectors.Fifo(), max_size=int(1e6), # Sets Rate Limiter to a low number for the examples. # Read the Rate Limiters section for usage info. rate_limiter=reverb.rate_limiters.MinSize(2), # The signature is optional but it is good practice to set it as it # enables data validation and easier dataset construction. Note that # the number of observations is larger than the number of actions. # The extra observation is the terminal state where no action is # taken. signature={ 'actions': tf.TensorSpec([EPISODE_LENGTH, *ACTION_SPEC.shape], ACTION_SPEC.dtype), 'observations': tf.TensorSpec([EPISODE_LENGTH + 1, *OBSERVATION_SPEC.shape], OBSERVATION_SPEC.dtype), }, ), ]) # Initializes the reverb client on the same port. client = reverb.Client(f'localhost:{complete_episode_server.port}') Explanation: Example 2: Complete Episodes Create a new server for this example to keep the elements of the priority table consistent. End of explanation # Writes whole episodes of varying length to a Reverb server. NUM_EPISODES = 10 # We know that episodes are at most 150 steps so we set the writer buffer size # to 151 (to capture the final observation). with client.trajectory_writer(num_keep_alive_refs=151) as writer: for _ in range(NUM_EPISODES): timestep = environment_step(None) for _ in range(EPISODE_LENGTH): action = agent_step(timestep) writer.append({'action': action, 'observation': timestep}) timestep = environment_step(action) # The astute reader will recognize that the final timestep has not been # appended to the writer. We'll go ahead and add it WITHOUT an action. The # writer will automatically fill in the gap with `None` for the action # column. writer.append({'observation': timestep}) # Now that the entire episode has been added to the writer buffer we can an # item with a trajectory that spans the entire episode. 
Note that the final # action must not be included as it is None and the trajectory would be # rejected if we tried to include it. writer.create_item( table='my_table', priority=1.5, trajectory={ 'actions': writer.history['action'][:-1], 'observations': writer.history['observation'][:], }) # This call blocks until all the items (in this case only one) have been # sent to the server, inserted into respective tables and confirmations # received by the writer. writer.end_episode(timeout_ms=1000) # Ending the episode also clears the history property which is why we are # able to use `[:]` in when defining the trajectory above. assert len(writer.history['action']) == 0 assert len(writer.history['observation']) == 0 Explanation: Inserting Complete Episodes End of explanation # Each sample is an entire episode. # Adjusts the expected shapes to account for the whole episode length. dataset = reverb.TrajectoryDataset.from_table_signature( server_address=f'localhost:{complete_episode_server.port}', table='my_table', max_in_flight_samples_per_worker=10, rate_limiter_timeout_ms=10) # Batches 128 episodes together. # Each item is an episode of the format (observations, actions) as above. # Shape of items are now ([128, 151, 10, 10], [128, 150, 2]). dataset = dataset.batch(128) # Sample has type reverb.ReplaySample. for sample in dataset.take(1): # Results in the following format. print(sample.info.key) # ([128], uint64) print(sample.info.probability) # ([128], float64) print(sample.data['observations']) # ([128, 151, 10, 10], uint8) print(sample.data['actions']) # ([128, 150, 2], float32) Explanation: Sampling Complete Episodes in TensorFlow End of explanation multitable_server = reverb.Server( tables=[ reverb.Table( name='my_table_a', sampler=reverb.selectors.Prioritized(priority_exponent=0.8), remover=reverb.selectors.Fifo(), max_size=int(1e6), # Sets Rate Limiter to a low number for the examples. # Read the Rate Limiters section for usage info. rate_limiter=reverb.rate_limiters.MinSize(1)), reverb.Table( name='my_table_b', sampler=reverb.selectors.Prioritized(priority_exponent=0.8), remover=reverb.selectors.Fifo(), max_size=int(1e6), # Sets Rate Limiter to a low number for the examples. # Read the Rate Limiters section for usage info. rate_limiter=reverb.rate_limiters.MinSize(1)), ]) client = reverb.Client('localhost:{}'.format(multitable_server.port)) Explanation: Example 3: Multiple Priority Tables Create a server that maintains multiple priority tables. End of explanation with client.trajectory_writer(num_keep_alive_refs=3) as writer: timestep = environment_step(None) for step in range(4): writer.append({'timestep': timestep}) action = agent_step(timestep) timestep = environment_step(action) if step >= 1: writer.create_item( table='my_table_b', priority=4-step, trajectory=writer.history['timestep'][-2:]) if step >= 2: writer.create_item( table='my_table_a', priority=4-step, trajectory=writer.history['timestep'][-3:]) Explanation: Inserting Sequences of Varying Length into Multiple Priority Tables End of explanation reverb.Server(tables=[ reverb.Table( name='my_table', sampler=reverb.selectors.Prioritized(priority_exponent=0.8), remover=reverb.selectors.Fifo(), max_size=int(1e6), rate_limiter=reverb.rate_limiters.MinSize(100)), ]) Explanation: This diagram shows the state of the server after executing the above cell. 
<img src="https://raw.githubusercontent.com/deepmind/reverb/master/docs/animations/diagram2.svg" /> Example 4: Samplers and Removers Creating a Server with a Prioritized Sampler and a FIFO Remover End of explanation max_size = 1000 reverb.Server(tables=[ reverb.Table( name='my_priority_queue', sampler=reverb.selectors.MaxHeap(), remover=reverb.selectors.MinHeap(), max_size=max_size, rate_limiter=reverb.rate_limiters.MinSize(int(0.95 * max_size)), max_times_sampled=1, ) ]) Explanation: Creating a Server with a MaxHeap Sampler and a MinHeap Remover Setting max_times_sampled=1 causes each item to be removed after it is sampled once. The end result is a priority table that essentially functions as a max priority queue. End of explanation reverb.Server( tables=[ reverb.Table.queue(name='my_queue', max_size=10000), reverb.Table( name='my_circular_buffer', sampler=reverb.selectors.Fifo(), remover=reverb.selectors.Fifo(), max_size=10000, max_times_sampled=1, rate_limiter=reverb.rate_limiters.MinSize(1)), ]) Explanation: Creating a Server with One Queue and One Circular Buffer Behavior of canonical data structures such as circular buffer or a max priority queue can be implemented in Reverb by modifying the sampler and remover or by using the PriorityTable queue initializer. End of explanation reverb.Server( tables=[ reverb.Table( name='my_table', sampler=reverb.selectors.Prioritized(priority_exponent=0.8), remover=reverb.selectors.Fifo(), max_size=int(1e6), rate_limiter=reverb.rate_limiters.SampleToInsertRatio( samples_per_insert=3.0, min_size_to_sample=3, error_buffer=3.0)), ]) Explanation: Example 5: Rate Limiters Creating a Server with a SampleToInsertRatio Rate Limiter End of explanation
1,047
Given the following text description, write Python code to implement the functionality described below step by step Description: Issue #42 I am a new user of toytree tool, could you tell me please how can I change the position of inner nodes added custom labels. For examble I want to plot some new_added features above or below or right the branches . Could you help me with this please? Please see the documentation Step1: Get a random tree and show node indices Step2: Set custom node labels Step3: Show a node feature (e.g., int or str) next to the node
Python Code: import toytree toytree.__version__ Explanation: Issue #42 I am a new user of toytree tool, could you tell me please how can I change the position of inner nodes added custom labels. For examble I want to plot some new_added features above or below or right the branches . Could you help me with this please? Please see the documentation: https://toytree.readthedocs.io End of explanation tree = toytree.rtree.unittree(10) tree.draw(ts='s'); Explanation: Get a random tree and show node indices End of explanation tree = tree.set_node_values( feature="custom", values={16: "black", 13: "black", 11: "black"}, default="goldenrod", ) tree.draw( node_sizes=14, node_colors=tree.get_node_values("custom", 1, 1), ); Explanation: Set custom node labels End of explanation tree.draw( # show 'custom' label at all nodes node_labels=tree.get_node_values("custom", True, True), # offset the node labels to the left and up node_labels_style={ "-toyplot-anchor-shift": "5px", "baseline-shift": "5px", "fill": "red", "font-size": "12px" } ); Explanation: Show a node feature (e.g., int or str) next to the node End of explanation
1,048
Given the following text description, write Python code to implement the functionality described below step by step Description: LogGabor user guide Table of content What is the LogGabor package? Installing Importing the library Properties of log-Gabor filters Testing filter generation Testing on a sample image Building a pyramid An example of fitting images with log-Gabor filters Importing the library Step1: To install the dependencies related to running this notebook, see Installing notebook dependencies. Back to top Step2: Perspectives Step3: Back to top performing a fit Step4: With periodic boundaries, check that the filter "re-enters" the image from the other border Step5: Back to top TODO
Python Code: %load_ext autoreload %autoreload 2 from LogGabor import LogGabor parameterfile = 'https://raw.githubusercontent.com/bicv/LogGabor/master/default_param.py' lg = LogGabor(parameterfile) lg.set_size((32, 32)) Explanation: LogGabor user guide Table of content What is the LogGabor package? Installing Importing the library Properties of log-Gabor filters Testing filter generation Testing on a sample image Building a pyramid An example of fitting images with log-Gabor filters Importing the library End of explanation import os import numpy as np np.set_printoptions(formatter={'float': '{: 0.3f}'.format}) %matplotlib inline import matplotlib.pyplot as plt fig_width = 12 figsize=(fig_width, .618*fig_width) Explanation: To install the dependencies related to running this notebook, see Installing notebook dependencies. Back to top End of explanation def twoD_Gaussian(xy, x_pos, y_pos, theta, sf_0): FT_lg = lg.loggabor(x_pos, y_pos, sf_0=np.absolute(sf_0), B_sf=lg.pe.B_sf, theta=theta, B_theta=lg.pe.B_theta) return lg.invert(FT_lg).ravel() # Create x and y indices x = np.arange(lg.pe.N_X) y = np.arange(lg.pe.N_Y) x, y = xy = np.meshgrid(x, y) #create data x_pos, y_pos, theta, sf_0 = 14.6, 8.5, 12 * np.pi / 180., .1 data = twoD_Gaussian(xy, x_pos, y_pos, theta=theta, sf_0=sf_0) # plot twoD_Gaussian data generated above #plt.figure() #plt.imshow(data.reshape(lg.pe.N_X, lg.pe.N_Y)) #plt.colorbar() # add some noise to the data and try to fit the data generated beforehand data /= np.abs(data).max() data_noisy = data + .25*np.random.normal(size=data.shape) # getting best match C = lg.linear_pyramid(data_noisy.reshape(lg.pe.N_X, lg.pe.N_Y)) idx = lg.argmax(C) initial_guess = [idx[0], idx[1], lg.theta[idx[2]], lg.sf_0[idx[3]]] print ('initial_guess :', initial_guess, ', idx :', idx) import scipy.optimize as opt popt, pcov = opt.curve_fit(twoD_Gaussian, xy, data_noisy, p0=initial_guess) data_fitted = twoD_Gaussian(xy, *popt) extent = (0, lg.pe.N_X, 0, lg.pe.N_Y) print ('popt :', popt, ', true : ', x_pos, y_pos, theta, sf_0) fig, axs = plt.subplots(1, 3, figsize=(15, 5)) _ = axs[0].contourf(data.reshape(lg.pe.N_X, lg.pe.N_Y), 8, extent=extent, cmap=plt.cm.viridis, origin='upper') _ = axs[1].imshow(data_noisy.reshape(lg.pe.N_X, lg.pe.N_Y), cmap=plt.cm.viridis, extent=extent) _ = axs[2].contourf(data_fitted.reshape(lg.pe.N_X, lg.pe.N_Y), 8, extent=extent, cmap=plt.cm.viridis, origin='upper') for ax in axs: ax.axis('equal') Explanation: Perspectives: Better fits of the filters Basically, it is possible to infer the best possible log-Gabor function, even if it's parameters do not fall on the grid Defining a reference log-gabor (look in the corners!) 
End of explanation from LogGabor import LogGaborFit lg = LogGaborFit(parameterfile) lg.set_size((32, 32)) x_pos, y_pos, theta, sf_0 = 14.6, 8.5, 12 * np.pi / 180., .1 data = lg.invert(lg.loggabor(x_pos, y_pos, sf_0=np.absolute(sf_0), B_sf=lg.pe.B_sf, theta=theta, B_theta=lg.pe.B_theta)) data /= np.abs(data).max() data_noisy = data + .25*np.random.normal(size=data.shape) data_fitted, params = lg.LogGaborFit(data_noisy.reshape(lg.pe.N_X, lg.pe.N_Y)) data_fitted.shape params.pretty_print() extent = (0, lg.pe.N_X, 0, lg.pe.N_Y) print ('params :', params, ', true : ', x_pos, y_pos, theta, sf_0) fig, axs = plt.subplots(1, 3, figsize=(15, 5)) _ = axs[0].contourf(data.reshape(lg.pe.N_X, lg.pe.N_Y), 8, extent=extent, cmap=plt.cm.viridis, origin='upper') _ = axs[1].imshow(data_noisy.reshape(lg.pe.N_X, lg.pe.N_Y), cmap=plt.cm.viridis, extent=extent) _ = axs[2].contourf(data_fitted.reshape(lg.pe.N_X, lg.pe.N_Y), 8, extent=extent, cmap=plt.cm.viridis, origin='upper') for ax in axs: ax.axis('equal') Explanation: Back to top performing a fit End of explanation data_fitted, params = lg.LogGaborFit(data_noisy.reshape(lg.pe.N_X, lg.pe.N_Y), do_border=False) extent = (0, lg.pe.N_X, 0, lg.pe.N_Y) print ('params :', params, ', true : ', x_pos, y_pos, theta, sf_0) fig, axs = plt.subplots(1, 3, figsize=(15, 5)) _ = axs[0].contourf(data.reshape(lg.pe.N_X, lg.pe.N_Y), 8, extent=extent, cmap=plt.cm.viridis, origin='upper') _ = axs[1].imshow(data_noisy.reshape(lg.pe.N_X, lg.pe.N_Y), cmap=plt.cm.viridis, extent=extent) _ = axs[2].contourf(data_fitted.reshape(lg.pe.N_X, lg.pe.N_Y), 8, extent=extent, cmap=plt.cm.viridis, origin='upper') for ax in axs: ax.axis('equal') Explanation: With periodic boundaries, check that the filter "re-enters" the image from the other border: End of explanation %load_ext watermark %watermark -i -h -m -v -p numpy,matplotlib,scipy,imageio,SLIP,LogGabor -r -g -b Explanation: Back to top TODO: validation of fits Back to top more book keeping End of explanation
1,049
Given the following text description, write Python code to implement the functionality described below step by step Description: Crash Course in Supervised Learning with scikit-learn Machine learning, like all fields of study, have a broad array of naming conventions and terminology. Most of these conventions you naturally pick up as you go, but it's best to expose yourself to them as early as possible. I've bolded every first mention in this notebook of some naming conventions and terminology that you'll see again and again if you continue in this field of study. Step1: We often use $\boldsymbol{X}$ to represent a dataset of input vectors. The $i^{th}$ input vector in $X$ is notated $X_i$, though often times when iterating through our dataset (like in a summation) we will call our datapoints $x \in X$ and write the the $i^{th}$ input vector as $x^{(i)}$. The $j^{th}$ component of the $i^{th}$ input vector is written $x^{(i)}_j$. The number of input vectors, samples, data points, instances, etc, in $X$ is $\boldsymbol{m}$. The dimensionality (number of features) of each data point is $\boldsymbol{n}$. We use this notation when talking about datasets in general (like in proofs). This should make some sense if you've taken linear algebra - a matrix is said to be $m \times n$ if it has $m$ rows and $n$ columns. $X$ is a matrix that has $m$ samples (rows) and $n$ features (columns). $\boldsymbol{y}$ is the vector containing the labels (or classes) of the $x \in X$. Step2: Supervised learning is an area of study within machine learning that entails passing an input vector into a model and outputting a label. Supervised learning is further broken down into classification tasks, in which the label $y$ is taken from some finite set of objects like {red, green blue} or {0, 1, 2, ..., 9} and regression tasks, in which the label $y$ is taken from an infinite set, usually the set of real numbers $\mathbb{R}$. We do this by training our model on $X$, given the correct labels $y$. When we train our model, our model is learning a function that maps from input vectors $x$ to output labels $y$ - hence the name machine learning. Let's train a binary classifier that is able to correctly predict the label of the vectors in our two-label dataset both, using the class labels in labels. A binary classifier is to be contrasted with a multiclass classifier, which predicts a label within a set of two or more classes. Step3: A lot just happened in those three short lines. Let's step through it line by line Step4: Amazing! Our predictor was able to predict the labels of the test set with 100% accuracy! Okay, maybe not that amazing. Remember when we projected the ones and zeroes into $\mathbb{R}^2$ in our PCA notebook? They looked like they might be linearly seperable. And that was only in two dimensions. Our classifier can take advantage of the full 64 dimensions of our data to make its predictions. Before we move on to training a classifier on the entire digits dataset, here's a few more ways to get a sense for how well our predictor is doing its job. Step5: clf.predict tells us the actual predictions made on the test set. Step6: clf.predict_proba tells us how confident our predictor is for each label that that is the correct label for the input. The above table, along with the score, tells us that this was a very easy classification task for our predictor. How effective do you think logistic regression will be on the entire digits dataset? Step7: Here's a 2D projection of the entire digits dataset using PCA, yikes! 
By the way, PCA is a linear dimensionality reduction technique, so it gives us a rough idea of what a linear classifier like logistic regression has to deal with. There also exist non-linear dimensionality reduction techniques, which let you project on non-linear manifolds like spheres, instead of linear manifolds like hyperplanes. Step8: Not so easy now, is it? But is 94.8% accuracy good "enough"? Depends on your application. Step9: From this table we can tell that for a good portion of our digits our classifier had very high confidence in their class label, even with 10 different classes to choose from. But some digits were able to steal at least a tenth of a percent of confidence from our predictor across four different digits. And from clf.score we know that our predictor got roughly one digit wrong for every 20 digits predicted. We can look at some of the digits where our predictor had high uncertainty. $\boldsymbol{\hat{y}}$ is the prediction our model made and $y$ is the actual label. Would you (a human) have done better than logistic regression?
Python Code: from __future__ import print_function %matplotlib inline from sklearn.datasets import load_digits from matplotlib import pyplot as plt import numpy as np np.random.seed(42) # for reproducibility digits = load_digits() X = digits.data y = digits.target Explanation: Crash Course in Supervised Learning with scikit-learn Machine learning, like all fields of study, have a broad array of naming conventions and terminology. Most of these conventions you naturally pick up as you go, but it's best to expose yourself to them as early as possible. I've bolded every first mention in this notebook of some naming conventions and terminology that you'll see again and again if you continue in this field of study. End of explanation zeroes = [X[i] for i in range(len(y)) if y[i] == 0] # all 64-dim lists with label '0' ones = [X[i] for i in range(len(y)) if y[i] == 1] # all 64-dim lists with label '1' both = zeroes + ones labels = [0] * len(zeroes) + [1] * len(ones) Explanation: We often use $\boldsymbol{X}$ to represent a dataset of input vectors. The $i^{th}$ input vector in $X$ is notated $X_i$, though often times when iterating through our dataset (like in a summation) we will call our datapoints $x \in X$ and write the the $i^{th}$ input vector as $x^{(i)}$. The $j^{th}$ component of the $i^{th}$ input vector is written $x^{(i)}_j$. The number of input vectors, samples, data points, instances, etc, in $X$ is $\boldsymbol{m}$. The dimensionality (number of features) of each data point is $\boldsymbol{n}$. We use this notation when talking about datasets in general (like in proofs). This should make some sense if you've taken linear algebra - a matrix is said to be $m \times n$ if it has $m$ rows and $n$ columns. $X$ is a matrix that has $m$ samples (rows) and $n$ features (columns). $\boldsymbol{y}$ is the vector containing the labels (or classes) of the $x \in X$. End of explanation from sklearn.linear_model import LogisticRegression clf = LogisticRegression() # clf is code speak for 'classifier' clf.fit(X=both, y=labels) Explanation: Supervised learning is an area of study within machine learning that entails passing an input vector into a model and outputting a label. Supervised learning is further broken down into classification tasks, in which the label $y$ is taken from some finite set of objects like {red, green blue} or {0, 1, 2, ..., 9} and regression tasks, in which the label $y$ is taken from an infinite set, usually the set of real numbers $\mathbb{R}$. We do this by training our model on $X$, given the correct labels $y$. When we train our model, our model is learning a function that maps from input vectors $x$ to output labels $y$ - hence the name machine learning. Let's train a binary classifier that is able to correctly predict the label of the vectors in our two-label dataset both, using the class labels in labels. A binary classifier is to be contrasted with a multiclass classifier, which predicts a label within a set of two or more classes. End of explanation from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(both, labels, test_size=0.3) clf = LogisticRegression() clf.fit(X_train, y_train) clf.score(X_test, y_test) Explanation: A lot just happened in those three short lines. Let's step through it line by line: from sklearn.linear_model import LogisticRegression From the sklearn (scikit-learn) linear_model module, we import a classifier called Logistic Regression. 
Linear models are models (or predictors) that attempt to separate vector inputs of different classes using a linear function. Geometrically, this means our model tries to draw a separating hyperplane between classes, as opposed to a curved (non-linear) manifold. Logistic regression is a classifier, meaning it can only predict categorical labels, but it is not limited to binary classification, as we'll see later. clf = LogisticRegression() LogisticRegression is just a Python object, so we instantiate it and assign it to the variable clf. clf.fit(both, labels) fit in sklearn is the name of the method call that trains a model on a dataset $X$ given the correct labels $y$. Both LogisticRegression and fit have additional parameters for fine-tuning the training process, but the above calls demonstrate model training at its simplest. So now what? We have a classifier that we can pass an unlabeled input vector to, and have it predict whether that input represents a one or a zero - but in doing so we have run into a big problem. A Really Big Problem A natural question to ask about our predictor is "how accurate is it?". We could pass in each $x \in X$ to our predictor, have it predict the label, and compare its prediction to the answer in labels. But this would give us a false sense of confidence in how accurate our predictor actually is. Because we trained our predictor on $X$, we have effectively already given it the answer key to the test. This is not a good way to test how well our predictor can predict never before seen data. To get around this problem, we split our dataset into a training set and a test set before training our model. This way we can train the model on the training set, then test how well it extrapolates to never before seen data on the test set. This is such a common task when training models that scikit-learn has a built-in function for sampling a test/training set. End of explanation clf.predict(X_test) Explanation: Amazing! Our predictor was able to predict the labels of the test set with 100% accuracy! Okay, maybe not that amazing. Remember when we projected the ones and zeroes into $\mathbb{R}^2$ in our PCA notebook? They looked like they might be linearly separable. And that was only in two dimensions. Our classifier can take advantage of the full 64 dimensions of our data to make its predictions. Before we move on to training a classifier on the entire digits dataset, here are a few more ways to get a sense of how well our predictor is doing its job. End of explanation def print_proba_table(prob_list, stride=1): mnist_classes = [i for i in range(len(prob_list[0]))] print("Class:", *mnist_classes, sep="\t") print("index", *["---" for i in range(len(mnist_classes))], sep="\t") counter = 0 for prob in prob_list[::stride]: print(counter*stride, *[round(prob[i], 3) for i in range(len(mnist_classes))], sep="\t") counter += 1 print_proba_table(clf.predict_proba(X_test), stride=4) Explanation: clf.predict tells us the actual predictions made on the test set. End of explanation from sklearn.decomposition import PCA pca = PCA(2) Xproj = pca.fit_transform(X) plt.scatter(Xproj.T[0], Xproj.T[1], c=y, alpha=0.5) Explanation: clf.predict_proba tells us how confident our predictor is, for each label, that it is the correct label for the input. The above table, along with the score, tells us that this was a very easy classification task for our predictor. How effective do you think logistic regression will be on the entire digits dataset?
End of explanation X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3) clf = LogisticRegression() clf.fit(X_train, y_train) clf.score(X_test, y_test) Explanation: Here's a 2D projection of the entire digits dataset using PCA, yikes! By the way, PCA is a linear dimensionality reduction technique, so it gives us a rough idea of what a linear classifier like logistic regression has to deal with. There also exist non-linear dimensionality reduction techniques, which let you project on non-linear manifolds like spheres, instead of linear manifolds like hyperplanes. End of explanation print_proba_table(clf.predict_proba(X_test), stride=10) Explanation: Not so easy now, is it? But is 94.8% accuracy good "enough"? Depends on your application. End of explanation uncertain_indices = [] prob = clf.predict_proba(X_test) for i in range(len(prob)): # number of classes with > 0.45 confidence contender_count = sum([1 if p > 0.45 else 0 for p in prob[i]]) if contender_count == 2: uncertain_indices.append(i) f, ax = plt.subplots(5, 3, sharex=False, sharey=True) f.set_size_inches(9, 15) predictions = clf.predict(X_test) for i in range(5): for j in range(3): ax[i, j].set_xlabel(r"$\^y = $"+str(predictions[uncertain_indices[3*i + j]]) + r", $y = $"+str(y_test[uncertain_indices[3*i+j]]), size='large') ax[i, j].imshow(X_test[uncertain_indices[3*i + j]].reshape(8, 8), cmap='gray', interpolation='none') f.tight_layout() Explanation: From this table we can tell that for a good portion of our digits our classifier had very high confidence in their class label, even with 10 different classes to choose from. But some digits were able to steal at least a tenth of a percent of confidence from our predictor across four different digits. And from clf.score we know that our predictor got roughly one digit wrong for every 20 digits predicted. We can look at some of the digits where our predictor had high uncertainty. $\boldsymbol{\hat{y}}$ is the prediction our model made and $y$ is the actual label. Would you (a human) have done better than logistic regression? End of explanation
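To complement the per-digit inspection above ("roughly one digit wrong for every 20 predicted"), here is a short self-contained sketch — an editorial addition, with the random seed and max_iter chosen for convenience — that summarizes where a logistic regression classifier errs on the full digits data using a confusion matrix.

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=42)

# max_iter is raised only so the default solver converges cleanly on 64 features.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Rows are true digits, columns are predicted digits; off-diagonal counts are the mistakes.
print(confusion_matrix(y_test, clf.predict(X_test)))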
1,050
Given the following text description, write Python code to implement the functionality described below step by step Description: Double 7's (Short Term Trading Strategies that Work) 1. The Security is above its 200-day moving average or X-day ma 2. The Security closes at a 7-day low, buy. 3. If the Security closes at a 7-day high, sell your long position. (Scale in and out of trades). 'strategy.py' uses adjust_percent() approach 'scaling_in_out.py' uses lower level pinkfish functions Step1: Some global data Step2: Run Strategy Step3: View logs Step4: Run Benchmark, Retrieve benchmark logs, and Generate benchmark stats Step5: Plot Equity Curves Step6: Plot Trades Step7: Bar Graph Step8: Plot Instrument Risk vs Strategy Risk Step9: Prettier Graphs
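The rules listed above translate directly into rolling-window conditions. The sketch below is an added illustration using plain pandas on a synthetic price series — not the pinkfish API used in the code that follows — showing one way to express the entry and exit signals.

import numpy as np
import pandas as pd

# Hypothetical price series purely for demonstration.
idx = pd.date_range("2020-01-01", periods=300, freq="B")
close = pd.Series(100 + np.cumsum(np.random.default_rng(0).normal(0, 1, len(idx))), index=idx)

sma200 = close.rolling(200).mean()   # rule 1: trend filter
low7 = close.rolling(7).min()
high7 = close.rolling(7).max()

buy = (close > sma200) & (close <= low7)   # rule 2: close at a 7-day low while above the 200-day MA
sell = close >= high7                      # rule 3: close at a 7-day high exits the long
print(pd.DataFrame({"close": close, "buy": buy, "sell": sell}).tail())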
Python Code: import datetime import matplotlib.pyplot as plt import pandas as pd import pinkfish as pf # Format price data. pd.options.display.float_format = '{:0.2f}'.format %matplotlib inline # Set size of inline plots. '''note: rcParams can't be in same cell as import matplotlib or %matplotlib inline %matplotlib notebook: will lead to interactive plots embedded within the notebook, you can zoom and resize the figure %matplotlib inline: only draw static images in the notebook ''' plt.rcParams["figure.figsize"] = (10, 7) Explanation: Double 7's (Short Term Trading Strategies that Work) 1. The Security is above its 200-day moving average or X-day ma 2. The Security closes at a 7-day low, buy. 3. If the Security closes at a 7-day high, sell your long position. (Scale in and out of trades). 'strategy.py' uses adjust_percent() approach 'scaling_in_out.py' uses lower level pinkfish functions End of explanation #symbol = '^GSPC' symbol = 'SPY' #symbol = 'DIA' #symbol = 'QQQ' #symbol = 'IWM' #symbol = 'TLT' #symbol = 'GLD' #symbol = 'AAPL' #symbol = 'BBRY' #symbol = 'GDX' #symbol = 'OIH' #symbol = 'NLY' capital = 10000 #start = datetime.datetime(2015, 1, 1) start = datetime.datetime(*pf.SP500_BEGIN) end = datetime.datetime.now() # ************** IMPORT ONLY ONE OF THESE ************* import strategy #import scaling_in_out as strategy Explanation: Some global data End of explanation options = { 'use_adj' : False, 'use_cache' : True, 'stop_loss_pct' : 1.0, 'margin' : 1, 'period' : 7, 'max_open_trades' : 4, 'enable_scale_in' : True, 'enable_scale_out' : True } s = strategy.Strategy(symbol, capital, start, end, options) s.run() Explanation: Run Strategy End of explanation s.rlog.head(50) s.tlog.tail() s.dbal.tail() Explanation: View logs End of explanation benchmark = pf.Benchmark(symbol, capital, s.start, s.end, use_adj=False) benchmark.run() Explanation: Run Benchmark, Retrieve benchmark logs, and Generate benchmark stats End of explanation pf.plot_equity_curve(s.dbal, benchmark=benchmark.dbal) Explanation: Plot Equity Curves: Strategy vs Benchmark End of explanation pf.plot_trades(s.dbal, benchmark=benchmark.dbal) Explanation: Plot Trades End of explanation df = pf.plot_bar_graph(s.stats, benchmark.stats) df Explanation: Bar Graph: Strategy vs Benchmark End of explanation df = pf.volatility_graphs([s.ts, s.dbal], [symbol, 'Strategy'], points_to_plot=5000) df Explanation: Plot Instrument Risk vs Strategy Risk End of explanation returns = s.dbal['close'] benchmark_returns = benchmark.dbal['close'] pf.prettier_graphs(returns, benchmark_returns, dbal_label='Strategy', benchmark_label='Benchmark', points_to_plot=5000) pf.kelly_criterion(s.stats, benchmark.stats) Explanation: Prettier Graphs End of explanation
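Several of the steps above (equity curves, instrument versus strategy risk) reduce to simple transforms of a return series. As a rough, generic follow-up — synthetic data and plain pandas, not pinkfish's own statistics — the sketch below computes an equity curve, its maximum drawdown, and annualized volatility.

import numpy as np
import pandas as pd

# Synthetic daily returns standing in for a backtest's output.
idx = pd.date_range("2020-01-01", periods=252, freq="B")
returns = pd.Series(np.random.default_rng(1).normal(0.0005, 0.01, len(idx)), index=idx)

equity = 10000 * (1 + returns).cumprod()    # compounded equity curve
drawdown = equity / equity.cummax() - 1     # running drawdown relative to the peak so far

print("Final equity   : {:,.2f}".format(equity.iloc[-1]))
print("Max drawdown   : {:.2%}".format(drawdown.min()))
print("Ann. volatility: {:.2%}".format(returns.std() * np.sqrt(252)))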
1,051
Given the following text description, write Python code to implement the functionality described below step by step Description: Compute envelope correlations in volume source space Compute envelope correlations of orthogonalized activity Step1: Here we do some things in the name of speed, such as crop (which will hurt SNR) and downsample. Then we compute SSP projectors and apply them. Step2: Now we band-pass filter our data and create epochs. Step3: Compute the forward and inverse Step4: Compute label time series and do envelope correlation Step5: Compute the degree and plot it
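The final step above ("compute the degree and plot it") counts, for each source, how many of its strongest connections survive a proportional threshold. The sketch below is a conceptual stand-in on a random symmetric matrix with plain NumPy; the real MNE function applies the same idea to the envelope-correlation matrix, with implementation details that differ.

import numpy as np

rng = np.random.default_rng(0)
n_nodes = 6

# Symmetric, non-negative stand-in for an envelope-correlation matrix.
corr = np.abs(rng.standard_normal((n_nodes, n_nodes)))
corr = (corr + corr.T) / 2
np.fill_diagonal(corr, 0)

# Keep roughly the strongest 15% of connections, then count survivors per node.
upper = corr[np.triu_indices(n_nodes, k=1)]
threshold = np.quantile(upper, 1 - 0.15)
degree = (corr > threshold).sum(axis=1)
print("Node degrees:", degree)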
Python Code: # Authors: Eric Larson <[email protected]> # Sheraz Khan <[email protected]> # Denis Engemann <[email protected]> # # License: BSD (3-clause) import os.path as op import mne from mne.beamformer import make_lcmv, apply_lcmv_epochs from mne.connectivity import envelope_correlation from mne.preprocessing import compute_proj_ecg, compute_proj_eog data_path = mne.datasets.brainstorm.bst_resting.data_path() subjects_dir = op.join(data_path, 'subjects') subject = 'bst_resting' trans = op.join(data_path, 'MEG', 'bst_resting', 'bst_resting-trans.fif') bem = op.join(subjects_dir, subject, 'bem', subject + '-5120-bem-sol.fif') raw_fname = op.join(data_path, 'MEG', 'bst_resting', 'subj002_spontaneous_20111102_01_AUX.ds') crop_to = 60. Explanation: Compute envelope correlations in volume source space Compute envelope correlations of orthogonalized activity :footcite:HippEtAl2012,KhanEtAl2018 in source space using resting state CTF data in a volume source space. End of explanation raw = mne.io.read_raw_ctf(raw_fname, verbose='error') raw.crop(0, crop_to).pick_types(meg=True, eeg=False).load_data().resample(80) raw.apply_gradient_compensation(3) projs_ecg, _ = compute_proj_ecg(raw, n_grad=1, n_mag=2) projs_eog, _ = compute_proj_eog(raw, n_grad=1, n_mag=2, ch_name='MLT31-4407') raw.info['projs'] += projs_ecg raw.info['projs'] += projs_eog raw.apply_proj() cov = mne.compute_raw_covariance(raw) # compute before band-pass of interest Explanation: Here we do some things in the name of speed, such as crop (which will hurt SNR) and downsample. Then we compute SSP projectors and apply them. End of explanation raw.filter(14, 30) events = mne.make_fixed_length_events(raw, duration=5.) epochs = mne.Epochs(raw, events=events, tmin=0, tmax=5., baseline=None, reject=dict(mag=8e-13), preload=True) del raw Explanation: Now we band-pass filter our data and create epochs. End of explanation # This source space is really far too coarse, but we do this for speed # considerations here pos = 15. # 1.5 cm is very broad, done here for speed! src = mne.setup_volume_source_space('bst_resting', pos, bem=bem, subjects_dir=subjects_dir, verbose=True) fwd = mne.make_forward_solution(epochs.info, trans, src, bem) data_cov = mne.compute_covariance(epochs) filters = make_lcmv(epochs.info, fwd, data_cov, 0.05, cov, pick_ori='max-power', weight_norm='nai') del fwd Explanation: Compute the forward and inverse End of explanation epochs.apply_hilbert() # faster to do in sensor space stcs = apply_lcmv_epochs(epochs, filters, return_generator=True) corr = envelope_correlation(stcs, verbose=True) Explanation: Compute label time series and do envelope correlation End of explanation degree = mne.connectivity.degree(corr, 0.15) stc = mne.VolSourceEstimate(degree, [src[0]['vertno']], 0, 1, 'bst_resting') brain = stc.plot( src, clim=dict(kind='percent', lims=[75, 85, 95]), colormap='gnuplot', subjects_dir=subjects_dir, mode='glass_brain') Explanation: Compute the degree and plot it End of explanation
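Since the pipeline above hinges on correlating amplitude envelopes rather than raw signals, here is a self-contained conceptual sketch on synthetic data — an editorial addition that uses SciPy's Hilbert transform and omits the orthogonalization step the MNE implementation applies.

import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
t = np.linspace(0, 5, 2000)

# A shared slow amplitude modulation drives two fast oscillations.
shared = 1 + 0.5 * np.sin(2 * np.pi * 0.5 * t)
x = shared * np.sin(2 * np.pi * 20 * t) + 0.05 * rng.standard_normal(t.size)
y = shared * np.sin(2 * np.pi * 20 * t + 1.3) + 0.05 * rng.standard_normal(t.size)

# Envelope correlation: correlate the amplitude envelopes, not the band-passed signals.
env_x = np.abs(hilbert(x))
env_y = np.abs(hilbert(y))
print("Envelope correlation:", np.corrcoef(env_x, env_y)[0, 1])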
1,052
Given the following text description, write Python code to implement the functionality described below step by step Description: Limpieza de dataset de la Encuesta Intercensal 2015 - Modulo Movilidad Cotidiana 1 . Introduccion Para la construcción de indicadores de la Plataforma de Conocimiento de Ciudades Sustentables se han considerado los siguientes datos disponibles desde el módulo de Movilidad Cotidiana de la Encuesta Intercensal 2015 del INEGI Step1: La descarga de datos se realiza desde el sitio Beta de INEGI. Los datos de la Encuesta Intercensal 2015 se encuentran en http Step2: Las ligas quedan almacenadas en un diccionario de python en el que key = 'Clave Geoestadística Estatal'; y value = 'liga para descarga'. Por ejemplo, '09' es la clave geoestadística para la Ciudad de México. Si al diccionario links le solicitamos el valor de la key '09', nos regresa la liga para descargar los indicadores de vivienda de la Ciudad de México, como se muestra a continuación Step3: Con el diccionario de ligas ya es posible descargar los archivos en una carpeta local para poder procesarlos. Step4: Cada archivo tiene la misma estructura y contiene los datos de vivienda de 2015 levantados en la encuesta intercensal. La primera hoja, 'Índice', incluye un listado de las hojas y datos que contiene cada libro. Este índice se tomará como referencia para la minería de datos Step5: La columna 'Tabulado' contiene el nombre de la hoja, mientras que 'Titulo' describe los datos de la misma. Para la construcción de parámetros de la PCCS se utilizarán las siguientes hojas Step6: 2 . Correr funcion sobre todos los archivos de excel para extraer datos de la hoja 02 Step7: Los datos fueron guardados como diccionario de Python, es necesario convertirlos en un DataFrame unico antes de hacer la limpieza final. Step8: 3 . Limpieza final del Dataframe 'Pisos' Step9: HOJAS 08, 09, 16, 19, 20, 21 23, 24, 25, 26 Como se mencionó antes, todas las hojas siguen un proceso similar para la extraccion de datos, con ligeras variaciones. 1 . Funcion para extraer datos de hoja tipo Para el resto de los archivos reutilizaremos la función "cargahoja" definida anteriormente 2 . Correr función sobre archivos de excel Para que la función "cargahoja" pueda iterar de manera adecuada sobre todos los archivos, es necesario especificar cuales son las variaciones para cada hoja que la función va a leer. Las principales variaciones son los nombres de las columnas y la ubicación de los encabezados en cada hoja, por lo que los datos de cada hoja pueden extraerse de manera automatizada una vez que se identifique cómo va a tratar el script a cada una de las variaciones. --- 2.1 . A continuación se definen las columnas para cada hoja Step10: --- 2.2 . Además de definir las columnas, es necesario definir cuántos renglones tiene que ignorar el script antes de encontrar los encabezados. Estos renglones se definen a continuación en un diccionario Step11: 2 . Correr funcion sobre todos los archivos de excel Una vez que se tienen definidos los tratamientos que el script dará a cada variación, se extraen los datos por medio de una función iterativa hacia un diccionario de Python. Step12: El diccionario resultante contiene los datos de cada hoja clasificados por estado. Sin embargo, la estructura de diccionarios aun requiere ser procesada para obtener dataframes estandar. Step13: 3 . 
Limpieza final de los Dataframe Step14: Extraccion de notas específicas para cada hoja Cada hoja en el dataset original contiene notas al final del bloque de datos que es conveniente tener en cuenta en el momento de la interpretacion de los datos. Las notas están específicadas para cada municipio con un número en superíndice, uno o dos asteriscos. Debido a que cada hoja contiene diferente información, es conveniente verificar la nomenclatura que tienen las notas para cada hoja. Si la nomenclatura llegara a ser diferente para una misma nota en diferentes estados de una misma hoja, sería necesario crear una nueva columna en los dataset estándar para especificar la nota que corresponde a cada caso. Para verificar las notas de cada hoja se utilizará el siguiente proceso Step15: 2 . Correr la función sobre cada hoja y extraer las notas Step16: Una vez que se han extraido todas las notas de las hojas, se corre un script iterativo para verificar en cuáles notas varía la nomenclatura. Este script funciona de la siguiente manera Step17: Gracias a este script podemos ver que la nomenclatura en todas las hojas es estándar, y puede agregarse como nota en los metadatos de todos los dataset estándar. Guardado de datasets estándar Los dataframes obtenidos a través de los procesos anteriores se guardan en libros de formato OpenXML (.xlsx) para facilitar su lectura tanto desde sistemas informáticos como por personas. Cada libro contiene 2 hojas Step18: El dataset que se procesó al principio de este estudio (Correspondiente a la hoja 02), se agrega junto con los demás datasets al diccionario de datasets estándar. Step19: La función para la escritura de datasets estándar es la siguiente Step20: Una vez definida la función para la escritura de datasets, se ejecuta de manera iterativa sobre los datos Step21: Al final del proceso se generaron 10 datasets estándar
Python Code: # Librerias utilizadas import pandas as pd import sys import urllib import os import numpy as np # Configuracion del sistema print('Python {} on {}'.format(sys.version, sys.platform)) print('Pandas version: {}'.format(pd.__version__)) import platform; print('Running on {} {}'.format(platform.system(), platform.release())) Explanation: Limpieza de dataset de la Encuesta Intercensal 2015 - Modulo Movilidad Cotidiana 1 . Introduccion Para la construcción de indicadores de la Plataforma de Conocimiento de Ciudades Sustentables se han considerado los siguientes datos disponibles desde el módulo de Movilidad Cotidiana de la Encuesta Intercensal 2015 del INEGI: ID |Descripción ---|:---------- P0715|Tiempo de traslado al trabajo P0716|Tiempo de traslado a la escuela En este documento se describen los pasos llevados a cabo para estandarizar los datos disponibles desde la encuesta intercensal y utilizarlos para la construcción de parámetros. Las limpieza se realiza utilizando Python 3. De los parámetros listados, no es posible obtener P0602 dentro del alcance de este proceso, debido a que los datos de focos ahorradores no fueron incluidos por INEGI en su agregación municipal de datos tabulados. Sería posible construir este parámetro desde los microdatos, para lo cual se requiere un proceso individual que recree la metodología utilizada por INEGI. 2 . Definiciones PCCS : Plataforma de Conocimiento de Ciudades Sustentables Dataset : Conjunto de datos que tratan acerca de un tema. Dataset fuente : Dataset como se encuentra disponible para su descarga en la página de la fuente de información. Dataframe : Estructura bidimensional de datos, compuesta por filas que contienen casos y columnas que contienen variables. Dataset estandar : Dataframe procesado para el uso de la PCCS, etiquetado con la clave geoestadística Municipal de 5 dígitos de INEGI. 3 . Descarga de Datos End of explanation # LIGAS PARA DESCARGA DE ARCHIVOS # Las ligas para descarga tienen una raiz URL común que cambia # dependiendo del indicador y estado que se busque descargar url = r'http://www.beta.inegi.org.mx/contenidos/Proyectos/enchogares/especiales/intercensal/2015/tabulados/' indicador = r'14_vivienda_' raiz = url+indicador links = { '01' : raiz+'ags.xls', '02' : raiz+'bc.xls', '03' : raiz+'bcs.xls', '04' : raiz+'cam.xls', '05' : raiz+'coah.xls', '06' : raiz+'col.xls', '07' : raiz+'chis.xls', '08' : raiz+'chih.xls', '09' : raiz+'cdmx.xls', '10' : raiz+'dgo.xls', '11' : raiz+'gto.xls', '12' : raiz+'gro.xls', '13' : raiz+'hgo.xls', '14' : raiz+'jal.xls', '15' : raiz+'mex.xls', '16' : raiz+'mich.xls', '17' : raiz+'mor.xls', '18' : raiz+'nay.xls', '19' : raiz+'nl.xls', '20' : raiz+'oax.xls', '21' : raiz+'pue.xls', '22' : raiz+'qro.xls', '23' : raiz+'qroo.xls', '24' : raiz+'slp.xls', '25' : raiz+'sin.xls', '26' : raiz+'son.xls', '27' : raiz+'tab.xls', '28' : raiz+'tamps.xlsz', '29' : raiz+'tlax.xls', '30' : raiz+'ver.xls', '31' : raiz+'yuc.xls', '32' : raiz+'zac.xls' } Explanation: La descarga de datos se realiza desde el sitio Beta de INEGI. Los datos de la Encuesta Intercensal 2015 se encuentran en http://www.beta.inegi.org.mx/proyectos/enchogares/especiales/intercensal/ Existen tres maneras de descargar la información: Datos para la República Mexicana, con la ventaja de que es un solo archivo con variables procesadas y con la desventaja de que su nivel de desagregación es estatal. 
Datos estatales, con la ventaja de que cuentan con desagregacion a nivel municipal con variables interpretadas y con la desventaja de que la información está fragmentada en muchos archivos pues hay un archivo por variable por estado. Microdatos, con la ventaja de que contienen toda la información del Proyecto en pocos archivos y con la desventaja de que tienen que interpretarse antes de obtener valores útiles para la PCCS. La manera más conveniente es descargar los datos estatales, pues la primera no entregaría datos relevantes para la construccion de indicadores de la PCCS y la segunda requeriría dedicar una gran cantidad de tiempo y esfuerzo para recrear la interpretacion realizada por INEGI. Todos los indicadores que se utilizarán para la construccion de la PCCS se encuentran en la encuesta de Vivienda, por lo que únicamente se descargará el paquete de datos de esta encuesta End of explanation print(links['09']) Explanation: Las ligas quedan almacenadas en un diccionario de python en el que key = 'Clave Geoestadística Estatal'; y value = 'liga para descarga'. Por ejemplo, '09' es la clave geoestadística para la Ciudad de México. Si al diccionario links le solicitamos el valor de la key '09', nos regresa la liga para descargar los indicadores de vivienda de la Ciudad de México, como se muestra a continuación: End of explanation # Descarga de archivos a carpeta local destino = r'D:\PCCS\00_RawData\01_CSV\Intercensal2015\estatal\14. Vivienda' archivos = {} # Diccionario para guardar memoria de descarga for k,v in links.items(): archivo_local = destino + r'\{}.xls'.format(k) if os.path.isfile(archivo_local): print('Ya existe el archivo: {}'.format(archivo_local)) archivos[k] = archivo_local else: print('Descargando {} ... ... ... ... ... '.format(archivo_local)) urllib.request.urlretrieve(v, archivo_local) # archivos[k] = archivo_local print('se descargó {}'.format(archivo_local)) Explanation: Con el diccionario de ligas ya es posible descargar los archivos en una carpeta local para poder procesarlos. End of explanation pd.options.display.max_colwidth = 150 df = pd.read_excel(archivos['01'], sheetname = 'Índice', skiprows = 6, usecols = ['Tabulado', 'Título'], dtype = {'Tabulado' : 'str'}, ).set_index('Tabulado') df Explanation: Cada archivo tiene la misma estructura y contiene los datos de vivienda de 2015 levantados en la encuesta intercensal. La primera hoja, 'Índice', incluye un listado de las hojas y datos que contiene cada libro. Este índice se tomará como referencia para la minería de datos: End of explanation # Funcion para extraer datos de hoja tipo # La funcion espera los siguientes valores: # --- entidad: [str] clave geoestadistica de entidad de 2 digitos # --- ruta: [str] ruta al archivo de excel que contiene la información # --- hoja: [str] numero de hoja dentro del archivo de excel que se pretende procesar # --- colnames: [list] nombres para las columnas de datos (Las columnas en los archivos de este # dataset requieren ser nombradas manualmente por la configuración de los # encabezados en los archivo fuente) # --- skip: [int] El numero de renglones en la hoja que el script tiene que ignorar para encontrar # el renglon de encabezados. 
def cargahoja(entidad, ruta, hoja, colnames, skip): # Abre el archivo de excel raw_data = pd.read_excel(ruta, sheetname=hoja, skiprows=skip).dropna() # renombra las columnas raw_data.columns = colnames # Obten Unicamente las filas con valores estimativos raw_data = raw_data[raw_data['Estimador'] == 'Valor'] # Crea la columna CVE_MUN raw_data['CVE_ENT'] = entidad raw_data['ID_MUN'] = raw_data.Municipio.str.split(' ', n=1).apply(lambda x: x[0]) raw_data['CVE_MUN'] = raw_data['CVE_ENT'].map(str) + raw_data['ID_MUN'] # Borra columnas con informacion irrelevante o duplicada del (raw_data['CVE_ENT']) del (raw_data['ID_MUN']) del (raw_data['Entidad federativa']) del (raw_data['Estimador']) raw_data.set_index('CVE_MUN', inplace=True) return raw_data Explanation: La columna 'Tabulado' contiene el nombre de la hoja, mientras que 'Titulo' describe los datos de la misma. Para la construcción de parámetros de la PCCS se utilizarán las siguientes hojas: HOJA | PARAMETRO | DESCRIPCION ----|----------|:----------- 24/25|P0101|Porcentaje de viviendas con agua entubada 26|P0102|Porcentaje de viviendas que cuentan con descarga a una red de alcantarillado. 26|P0403|Viviendas con drenaje 02|P0404|Viviendas con piso de tierra 23|P0603|Viviendas particulares habitadas con calentador de agua (boiler) 08|P0611|Viviendas que utilizan leña o carbón para cocinar 09|P0612|Viviendas que utilizan leña o carbón para cocinar, que disponen de estufa o fogón con chimenea 08|P0613|Viviendas habitadas que utilizan gas para cocinar 19|P1004|Forma de eliminación de residuos 21|P1010|Porcentaje de viviendas de reutilización de residuos 20|P1011|Porcentaje de viviendas que separan sus residuos en orgánicos e inorgánicos 16|P0601|Viviendas particulares habitadas con electricidad Los siguientes parámetros se pueden obtener desde otras fuentes, pero se incluirán en esta minería por encontrarse también disponibles para 2015 en este dataset. HOJA | PARAMETRO | DESCRIPCION ---- | ---------- | :----------- 23 | P0604 | Viviendas particulares habitadas con calentador solar 23 | P0605 | Viviendas particulares habitadas con panel fotovoltaico 4 . Estandarización de Dataset A partir de las hojas identificadas y asociadas con parámetros, es necesario crear un dataframe estándar que sea de fácil lectura para sistemas informáticos y permita la creación de parámetros para la PCCS. Cada hoja de las descritas anteriormente tiene un acomodo distinto de las variables y requiere un proceso diferente, aunque la secuencia general para todas las hojas será la siguiente: 1. Crear una función que sirva para extraer los datos de una hoja "tipo" 2. Correr la función sobre cada archivo de excel y juntar los datos recopilados en un solo DataFrame 3. Limpieza final al dataframe y guardado HOJA 02: Estimadores de las viviendas particulares habitadas y su distribución porcentual según material en pisos por tamaño de localidad 1 . Funcion para extraer datos de hoja tipo End of explanation # correr funcion sobre todos los archivos colnames = ['Entidad federativa', 'Municipio', 'Estimador', 'Viviendas particulares habitadas', 'Pisos_Tierra', 'Pisos_Cemento o firme', 'Pisos_Mosaico, madera u otro recubrimiento', 'Pisos_No especificado'] DatosPiso = {} for k,v in archivos.items(): print('Procesando {}'.format(v)) hoja = cargahoja(k, v, '02', colnames, 7) DatosPiso[k] = hoja Explanation: 2 . 
Correr funcion sobre todos los archivos de excel para extraer datos de la hoja 02 End of explanation PisosDF = pd.DataFrame() for k,v in DatosPiso.items(): PisosDF = PisosDF.append(v) Explanation: Los datos fueron guardados como diccionario de Python, es necesario convertirlos en un DataFrame unico antes de hacer la limpieza final. End of explanation PisosDF = PisosDF[PisosDF['Municipio'] != 'Total'] PisosDF.describe() Explanation: 3 . Limpieza final del Dataframe 'Pisos': El dataframe está casi listo para ser utilizado en la construcción de indicadores, únicamente hace falta quitar algunas lineas de "basura" que tienen los datos de totales por Municipio. End of explanation # Se define un diccionario con la siguiente sintaxis: 'NUMERO DE HOJA' : [LISTA DE COLUMNAS] dicthojas = { '08' : [ # Combustible utilizado para cocinar 'Entidad federativa', 'Municipio', 'Estimador', 'Viviendas particulares habitadas', 'Cocina_con_Lena o carbon', 'Cocina_con_Gas', 'Cocina_con_Electricidad', 'Cocina_con_Otro_Combustible', 'Cocina_con_Los_ocupantes_no_cocinan', 'Cocina_con_no_especificado' ], '09' : [ # Utilizan leña o carbón para cocinar y distribucion porcentual segun disponibilidad de estufa o fogon 'Entidad federativa', 'Municipio', 'Estimador', 'Viviendas particulares habitadas en las que sus ocupantes utilizan leña o carbon para cocinar', 'Dispone_de_estufa_o_fogon_con_chimenea', 'No dispone_de_estufa_o_fogon_con_chimenea', 'Estufa_o_fogon_no_especificado' ], '16' : [ # Viviendas con electricidad 'Entidad federativa', 'Municipio', 'Estimador', 'Viviendas particulares habitadas', 'Disponen_de_electricidad', 'No_disponen_de_electricidad', 'No_especificado_de_electricidad' ], '19' : [ # Forma de eliminación de residuos 'Entidad federativa', 'Municipio', 'Estimador', 'Viviendas particulares habitadas', 'Entregan_residuos_a_servicio_publico_de_recoleccion', 'Tiran_residuos_en_basurero_publico_colocan_en_contenedor_o_deposito', 'Queman_residuos', 'Entierran_residuos_o_tiran_en_otro_lugar', 'Eliminacion_de_residuos_no_especificado', ], '20' : [ # Viviendas que entregan sus residuos al servicio publico y distribucion porcentual por condición de separacion 'Entidad federativa', 'Municipio', 'Estimador', 'Viviendas particulares habitadas en las que entregan los residuos al servicio publico', 'Separan_organicos_inorganicos', 'No_separan_organicos_inorganicos', 'Separan_residuos_No_especificado' ], '21' : [ # Separación y reutilización de residuos 'Entidad federativa', 'Municipio', 'Forma de reutilizacion de residuos', 'Estimador', 'Viviendas particulares habitadas', 'Reutilizan_residuos', 'No_reutilizan_residuos', 'No_especificado_reutilizan_residuos', ], '23' : [ # Disponibilidad y tipo de equipamiento 'Entidad federativa', 'Municipio', 'Tipo de equipamiento', 'Estimador', 'Viviendas particulares habitadas', 'Dispone_de_Equipamiento', 'No_dispone_de_Equipamiento', 'No_especificado_dispone_de_Equipamiento' ], '24' : [ # Disponibilidad de agua entubada según disponibilidad y acceso 'Entidad federativa', 'Municipio', 'Estimador', 'Viviendas particulares habitadas', 'Entubada_Total', 'Entubada_Dentro_de_la_vivienda', 'Entubada_Fuera_de_la_vivienda,_pero_dentro_del_terreno', 'Acarreo_Total', 'Acarreo_De_llave_comunitaria', 'Acarreo_De_otra_vivienda', 'Acarreo_De_una_pipa', 'Acarreo_De_un_pozo', 'Acarreo_De_un_río_arroyo_o_lago', 'Acarreo_De_la_recolección_de_lluvia', 'Acarreo_Fuente_No_especificada', 'Entubada_o_Acarreo_No_especificado' ], '25' : [ # Disponibilidad de agua entubada según fuente de abastecimiento 
'Entidad federativa', 'Municipio', 'Estimador', 'Viviendas particulares que disponen de agua entubada', 'Agua_entubada_de_Servicio_Publico', 'Agua_entubada_de_Pozo_comunitario', 'Agua_entubada_de_Pozo_particular', 'Agua_entubada_de_Pipa', 'Agua_entubada_de_Otra_Vivienda', 'Agua_entubada_de_Otro_lugar', 'Agua_entubada_de_No_especificado' ], '26' : [ # Disponibilidad de drenaje y lugar de desalojo 'Entidad federativa', 'Municipio', 'Estimador', 'Viviendas particulares habitadas', 'Drenaje_Total', 'Drenaje_desaloja_a_Red_publica', 'Drenaje_desaloja_a_Fosa_Septica_o_Tanque_Septico', 'Drenaje_desaloja_a_Barranca_o_Grieta', 'Drenaje_desaloja_a_Rio_lago_o_mar', 'No_Dispone_de_drenaje', 'Dispone_drenaje_No_especificado', ] } Explanation: HOJAS 08, 09, 16, 19, 20, 21 23, 24, 25, 26 Como se mencionó antes, todas las hojas siguen un proceso similar para la extraccion de datos, con ligeras variaciones. 1 . Funcion para extraer datos de hoja tipo Para el resto de los archivos reutilizaremos la función "cargahoja" definida anteriormente 2 . Correr función sobre archivos de excel Para que la función "cargahoja" pueda iterar de manera adecuada sobre todos los archivos, es necesario especificar cuales son las variaciones para cada hoja que la función va a leer. Las principales variaciones son los nombres de las columnas y la ubicación de los encabezados en cada hoja, por lo que los datos de cada hoja pueden extraerse de manera automatizada una vez que se identifique cómo va a tratar el script a cada una de las variaciones. --- 2.1 . A continuación se definen las columnas para cada hoja: End of explanation skiprows = { '02' : 7, # Tipo de piso '08' : 7, # Combustible utilizado para cocinar '09' : 7, # Utilizan leña o carbón para cocinar y distribucion porcentual segun disponibilidad de estufa o fogon '16' : 7, # disponibilidad de energía eléctrica '19' : 7, # Forma de eliminación de residuos '20' : 8, # Viviendas que entregan sus residuos al servicio publico y distribucion porcentual por condición de separacion '21' : 7, # Separación y reutilización de residuos '23' : 7, # Disponibilidad y tipo de equipamiento '24' : 8, # Disponibilidad de agua entubada según disponibilidad y acceso '25' : 7, # Disponibilidad de agua entubada según fuente de abastecimiento '26' : 8, # Disponibilidad de drenaje y lugar de desalojo } Explanation: --- 2.2 . Además de definir las columnas, es necesario definir cuántos renglones tiene que ignorar el script antes de encontrar los encabezados. Estos renglones se definen a continuación en un diccionario: End of explanation HojasDatos = {} for estado, archivo in archivos.items(): print('Procesando {}'.format(archivo)) hojas = {} for hoja, columnas in dicthojas.items(): print('---Procesando hoja {}'.format(hoja)) dataset = cargahoja(estado, archivo, hoja, columnas, skiprows[hoja]) if hoja not in HojasDatos.keys(): HojasDatos[hoja] = {} HojasDatos[hoja][estado] = dataset Explanation: 2 . Correr funcion sobre todos los archivos de excel Una vez que se tienen definidos los tratamientos que el script dará a cada variación, se extraen los datos por medio de una función iterativa hacia un diccionario de Python. 
End of explanation # Procesado de diccionarios para obtener datasets estándar DSstandar = {} for hoja, estado in HojasDatos.items(): print('Procesando hoja {}'.format(hoja)) tempDS = pd.DataFrame() for cve_edo, datos in estado.items(): tempDS = tempDS.append(datos) print('---Se agregó CVE_EDO {} a dataframe estandar'.format(cve_edo)) DSstandar[hoja] = tempDS Explanation: El diccionario resultante contiene los datos de cada hoja clasificados por estado. Sin embargo, la estructura de diccionarios aun requiere ser procesada para obtener dataframes estandar. End of explanation for hoja in DSstandar.keys(): temphoja = DSstandar[hoja] temphoja = temphoja[temphoja['Municipio'] != 'Total'] DSstandar[hoja] = temphoja Explanation: 3 . Limpieza final de los Dataframe: Antes de habilitar los dataframes para ser utilizados en la construcción de indicadores, hace falta quitar algunas lineas de "basura" que contienen datos de totales por Municipio. End of explanation # Funcion para extraccion de notas de una hoja # Espera los siguientes input: # --- ruta: [str] Ruta al archivo de datos del dataset fuente # --- skip: [str] El numero de renglones en la hoja que el script tiene que ignorar para encontrar # el renglon de encabezados. def getnotes(ruta, skip): tempDF = pd.read_excel(ruta, sheetname=hoja, skiprows=skip) # Carga el dataframe de manera temporal c1 = tempDF['Unnamed: 0'].dropna() # Carga únicamente la columna 1, que contiene las notas, sin valores NaN c1.index = range(len(c1)) # Reindexa la serie para compensar los NaN eliminados en el comando anterior indice = c1[c1.str.contains('Nota')].index[0] # Encuentra el renglon donde inician las notas rows = range(indice, len(c1)) # Crea una lista de los renglones que contienen notas templist = c1.loc[rows].tolist() # Crea una lista con las notas notas = [] for i in templist: notas.append(i.replace('\xa0', ' ')) # Guarda cada nota y reemplaza caracteres especiales por espacios simples return notas Explanation: Extraccion de notas específicas para cada hoja Cada hoja en el dataset original contiene notas al final del bloque de datos que es conveniente tener en cuenta en el momento de la interpretacion de los datos. Las notas están específicadas para cada municipio con un número en superíndice, uno o dos asteriscos. Debido a que cada hoja contiene diferente información, es conveniente verificar la nomenclatura que tienen las notas para cada hoja. Si la nomenclatura llegara a ser diferente para una misma nota en diferentes estados de una misma hoja, sería necesario crear una nueva columna en los dataset estándar para especificar la nota que corresponde a cada caso. Para verificar las notas de cada hoja se utilizará el siguiente proceso: 1. Crear una funcion para extraer las notas 2. Correr la función sobre cada hoja y extraer las notas 3. Verificar en qué hojas varía la nomenclatura de notas 1 . Funcion para extraer notas End of explanation listanotas = {} for archivo, ruta in archivos.items(): print('Procesando {} desde {}'.format(archivo, ruta)) for hoja in skiprows.keys(): # Los keys del diccionario 'skiprows' son una lista de las hojas a procesar if hoja not in listanotas.keys(): listanotas[hoja] = {} listanotas[hoja][archivo] = getnotes(ruta, skiprows[hoja]) Explanation: 2 . 
Correr la función sobre cada hoja y extraer las notas End of explanation notasunicas = [] # Inicia con una lista vacía for hoja, dict in listanotas.items(): # Itera sobre el diccionario con todas las notas for estado, notas in dict.items(): # Itera sobre el diccionario de estados de cada hoja for nota in notas: # Itera sobre la lista de notas que tiene cada estado if nota not in notasunicas: # Si la nota no existe en la lista: print('Estado: {} / Hoja {} / : Nota: {}'.format(estado, hoja, nota)) # Imprime la nota y donde se encontró notasunicas.append(nota) # Agrega la nota al diccionario for nota in notasunicas: print(nota) Explanation: Una vez que se han extraido todas las notas de las hojas, se corre un script iterativo para verificar en cuáles notas varía la nomenclatura. Este script funciona de la siguiente manera: 1. Comienza con una lista vacía de notas 2. Revisa cada nota y la compara con la lista de notas. 3. Si la nota no existe en la lista, la agrega a la lista End of explanation # Creacion de metadatos comunes metadatos = { 'Nombre del Dataset': 'Encuesta Intercensal 2015 - Tabulados de Vivienda', 'Descripcion del dataset': np.nan, 'Disponibilidad Temporal': '2015', 'Periodo de actualizacion': 'No Determinada', 'Nivel de Desagregacion': 'Municipal', 'Notas': 'Los límites de confianza se calculan al 90 por ciento.' \ '\n1 Excluye las siguientes clases de vivienda: locales no construidos para habitación, viviendas móviles y refugios.' \ '\n* Municipio censado.' \ '\n** Municipio con muestra insuficiente.', 'Fuente': 'INEGI (Microdatos)', 'URL_Fuente': 'http://www.beta.inegi.org.mx/proyectos/enchogares/especiales/intercensal/', 'Dataset base': np.nan, } Explanation: Gracias a este script podemos ver que la nomenclatura en todas las hojas es estándar, y puede agregarse como nota en los metadatos de todos los dataset estándar. Guardado de datasets estándar Los dataframes obtenidos a través de los procesos anteriores se guardan en libros de formato OpenXML (.xlsx) para facilitar su lectura tanto desde sistemas informáticos como por personas. Cada libro contiene 2 hojas: 1. Hoja de metadatos 2. Hoja de datos, con estimadores de las viviendas particulares habitadas y su distribución porcentual según: HOJA | DESCRIPCION --- | :--- 02 | Material en pisos por municipio 08 | Combustible utilizado para cocinar por municipio 09 | Viviendas en las que sus ocupantes utilizan leña o carbón para cocinar y su distribución porcentual según disponibilidad de estufa o fogón con chimenea por municipio 19 | Forma de eliminación de residuos por municipio 16 | Disponibilidad de energia electrica por municipio 20 | Viviendas en las que sus ocupantes entregan los residuos al servicio público de recolección o los colocan en un contenedor y su distribución porcentual 21 | Condición de separación y reutilización de residuos por municipio y forma de reutilización de los residuos 23 | Disponibilidad de equipamiento por municipio y tipo de equipamiento 24 | Disponibilidad de agua entubada según disponibilidad y acceso 25 | Disponibilidad de agua entubada según fuente del abastecimiento 26 | Disponibilidad de drenaje y lugar de desalojo por municipio Al ser datasets que provienen de una misma fuente, comparten varios campos de metadatos por lo que los campos en común se definen una sola vez y los campos particulares serán definidos a través de una función iterativa. 
End of explanation DSstandar['02'] = PisosDF Explanation: El dataset que se procesó al principio de este estudio (Correspondiente a la hoja 02), se agrega junto con los demás datasets al diccionario de datasets estándar. End of explanation # Script para escritura de datasets estándar. # La funcion espera los siguientes valores: # --- hoja: (str) numero de hoja # --- dataset: (Pandas DataFrame) datos que lleva la hoja # --- metadatos: (dict) metadatos comunes para todas las hojas # --- desc_hoja: (str) descripcion del contenido de la hoja def escribedataset(hoja, dataset, metadatos, desc_hoja): # Compilación de la información datasetbaseurl = r'https://github.com/INECC-PCCS/01_Dmine/tree/master/Datasets/EI2015' directoriolocal = r'D:\PCCS\01_Dmine\Datasets\EI2015' archivo = hoja + '.xlsx' tempmeta = metadatos tempmeta['Descripcion del dataset'] = desc_hoja tempmeta['Dataset base'] = '"' + archivo + '" disponible en \n' + datasetbaseurl tempmeta = pd.DataFrame.from_dict(tempmeta, orient='index') tempmeta.columns = ['Descripcion'] tempmeta = tempmeta.rename_axis('Metadato') # Escritura de dataset estándar destino = directoriolocal + '\\' + archivo writer = pd.ExcelWriter(destino) tempmeta.to_excel(writer, sheet_name ='METADATOS') dataset.to_excel(writer, sheet_name = hoja) writer.save() print('Se guardó: "{}" en \n{}'.format(desc_hoja, destino)) Explanation: La función para la escritura de datasets estándar es la siguiente: End of explanation for hoja, dataset in DSstandar.items(): print('Procesando hoja '+hoja) escribedataset(hoja, dataset, metadatos, df.loc[hoja][0]) Explanation: Una vez definida la función para la escritura de datasets, se ejecuta de manera iterativa sobre los datos: End of explanation for hoja in DSstandar.keys(): print('**{}.xlsx**|{}'.format(hoja, df.loc[hoja][0])) Explanation: Al final del proceso se generaron 10 datasets estándar: End of explanation
1,053
Given the following text description, write Python code to implement the functionality described below step by step Description: <a href="https Step1: We show some very basic plots with matplotlib. Step2: Simplest line chart. Step3: Scatter plot. Step4: In the process of learning data visualization we will try to construct the famous data visualization by Swedish professor Hans Rosling about global development. This is a beautiful data visualization that tells its own story. Step5: That's how NOT to do visualization, as it doesn't make any sense ... Let's try something better.
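One claim in the code below is that putting GDP on a logarithmic axis makes its relationship with life expectancy clear. As a quick, hedged check — using a handful of values rounded from the start of the lists that follow — the sketch below compares the Pearson correlation on raw and log10-scaled GDP.

import numpy as np

# Assumed sample, rounded from the first entries of the gdp / life_exp lists below.
gdp = np.array([975.0, 5937.0, 6223.0, 4797.0, 12779.0, 34435.0, 36126.0, 1391.0])
life_exp = np.array([43.8, 76.4, 72.3, 42.7, 75.3, 81.2, 79.8, 64.1])

print("corr(gdp, life_exp)        =", round(np.corrcoef(gdp, life_exp)[0, 1], 3))
print("corr(log10(gdp), life_exp) =", round(np.corrcoef(np.log10(gdp), life_exp)[0, 1], 3))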
Python Code: import matplotlib.pyplot as plt Explanation: <a href="https://colab.research.google.com/github/subarnop/AMachineLearningWalkThrough/blob/master/learning_to_plot.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Data visualization is a very important part of data analysis. It helps to explore data and thus understands the insights of the data. End of explanation year = [1950, 1970, 1990, 2010] pop = [2.519, 3.692, 5.263, 6.972] #Population expressed in Billions Explanation: We show some very basic plots with matplotlib. End of explanation plt.plot(year,pop) plt.show() Explanation: Simplest line chart. End of explanation plt.scatter(year,pop) plt.show() Explanation: Scatter plot. End of explanation #@title gdp = [974.5803384, 5937.029525999998, 6223.367465, 4797.231267, 12779.37964, 34435.367439999995, 36126.4927, 29796.04834, 1391.253792, 33692.60508, 1441.284873, 3822.137084, 7446.298803, 12569.85177, 9065.800825, 10680.79282, 1217.032994, 430.0706916, 1713.778686, 2042.09524, 36319.23501, 706.016537, 1704.063724, 13171.63885, 4959.114854, 7006.580419, 986.1478792, 277.5518587, 3632.557798, 9645.06142, 1544.750112, 14619.222719999998, 8948.102923, 22833.30851, 35278.41874, 2082.4815670000007, 6025.3747520000015, 6873.262326000001, 5581.180998, 5728.353514, 12154.08975, 641.3695236000002, 690.8055759, 33207.0844, 30470.0167, 13206.48452, 752.7497265, 32170.37442, 1327.60891, 27538.41188, 5186.050003, 942.6542111, 579.2317429999998, 1201.637154, 3548.3308460000007, 39724.97867, 18008.94444, 36180.78919, 2452.210407, 3540.651564, 11605.71449, 4471.061906, 40675.99635, 25523.2771, 28569.7197, 7320.8802620000015, 31656.06806, 4519.461171, 1463.249282, 1593.06548, 23348.139730000006, 47306.98978, 10461.05868, 1569.331442, 414.5073415, 12057.49928, 1044.770126, 759.3499101, 12451.6558, 1042.581557, 1803.151496, 10956.99112, 11977.57496, 3095.7722710000007, 9253.896111, 3820.17523, 823.6856205, 944.0, 4811.060429, 1091.359778, 36797.93332, 25185.00911, 2749.320965, 619.6768923999998, 2013.977305, 49357.19017, 22316.19287, 2605.94758, 9809.185636, 4172.838464, 7408.905561, 3190.481016, 15389.924680000002, 20509.64777, 19328.70901, 7670.122558, 10808.47561, 863.0884639000002, 1598.435089, 21654.83194, 1712.472136, 9786.534714, 862.5407561000002, 47143.17964, 18678.31435, 25768.25759, 926.1410683, 9269.657808, 28821.0637, 3970.095407, 2602.394995, 4513.480643, 33859.74835, 37506.41907, 4184.548089, 28718.27684, 1107.482182, 7458.396326999998, 882.9699437999999, 18008.50924, 7092.923025, 8458.276384, 1056.380121, 33203.26128, 42951.65309, 10611.46299, 11415.80569, 2441.576404, 3025.349798, 2280.769906, 1271.211593, 469.70929810000007] life_exp = [43.828, 76.423, 72.301, 42.731, 75.32, 81.235, 79.829, 75.635, 64.062, 79.441, 56.728, 65.554, 74.852, 50.728, 72.39, 73.005, 52.295, 49.58, 59.723, 50.43, 80.653, 44.74100000000001, 50.651, 78.553, 72.961, 72.889, 65.152, 46.462, 55.322, 78.782, 48.328, 75.748, 78.273, 76.486, 78.332, 54.791, 72.235, 74.994, 71.33800000000002, 71.878, 51.57899999999999, 58.04, 52.947, 79.313, 80.657, 56.735, 59.448, 79.406, 60.022, 79.483, 70.259, 56.007, 46.38800000000001, 60.916, 70.19800000000001, 82.208, 73.33800000000002, 81.757, 64.69800000000001, 70.65, 70.964, 59.545, 78.885, 80.745, 80.546, 72.567, 82.603, 72.535, 54.11, 67.297, 78.623, 77.58800000000002, 71.993, 42.592, 45.678, 73.952, 59.44300000000001, 48.303, 74.241, 54.467, 64.164, 72.801, 76.195, 66.803, 74.543, 71.164, 42.082, 
62.069, 52.90600000000001, 63.785, 79.762, 80.204, 72.899, 56.867, 46.859, 80.196, 75.64, 65.483, 75.53699999999998, 71.752, 71.421, 71.688, 75.563, 78.098, 78.74600000000002, 76.442, 72.476, 46.242, 65.528, 72.777, 63.062, 74.002, 42.56800000000001, 79.972, 74.663, 77.926, 48.159, 49.339, 80.941, 72.396, 58.556, 39.613, 80.884, 81.70100000000002, 74.143, 78.4, 52.517, 70.616, 58.42, 69.819, 73.923, 71.777, 51.542, 79.425, 78.242, 76.384, 73.747, 74.249, 73.422, 62.698, 42.38399999999999, 43.487] plt.plot(gdp,life_exp) plt.show() Explanation: In the process of learning data visualization we will try to construct the famous data visualization by Swidish prof hans Rosling about global development. This is a beautiful data visualization that tells its own story. End of explanation plt.scatter(gdp, life_exp) # Put the x-axis on a logarithmic scale and the correlation will become clear plt.xscale('log') plt.show() pop = [31.889923, 3.600523, 33.333216, 12.420476, 40.301927, 20.434176, 8.199783, 0.708573, 150.448339, 10.392226, 8.078314, 9.119152, 4.552198, 1.639131, 190.010647, 7.322858, 14.326203, 8.390505, 14.131858, 17.696293, 33.390141, 4.369038, 10.238807, 16.284741, 1318.683096, 44.22755, 0.71096, 64.606759, 3.80061, 4.133884, 18.013409, 4.493312, 11.416987, 10.228744, 5.46812, 0.496374, 9.319622, 13.75568, 80.264543, 6.939688, 0.551201, 4.906585, 76.511887, 5.23846, 61.083916, 1.454867, 1.688359, 82.400996, 22.873338, 10.70629, 12.572928, 9.947814, 1.472041, 8.502814, 7.483763, 6.980412, 9.956108, 0.301931, 1110.396331, 223.547, 69.45357, 27.499638, 4.109086, 6.426679, 58.147733, 2.780132, 127.467972, 6.053193, 35.610177, 23.301725, 49.04479, 2.505559, 3.921278, 2.012649, 3.193942, 6.036914, 19.167654, 13.327079, 24.821286, 12.031795, 3.270065, 1.250882, 108.700891, 2.874127, 0.684736, 33.757175, 19.951656, 47.76198, 2.05508, 28.90179, 16.570613, 4.115771, 5.675356, 12.894865, 135.031164, 4.627926, 3.204897, 169.270617, 3.242173, 6.667147, 28.674757, 91.077287, 38.518241, 10.642836, 3.942491, 0.798094, 22.276056, 8.860588, 0.199579, 27.601038, 12.267493, 10.150265, 6.144562, 4.553009, 5.447502, 2.009245, 9.118773, 43.997828, 40.448191, 20.378239, 42.292929, 1.133066, 9.031088, 7.554661, 19.314747, 23.174294, 38.13964, 65.068149, 5.701579, 1.056608, 10.276158, 71.158647, 29.170398, 60.776238, 301.139947, 3.447496, 26.084662, 85.262356, 4.018332, 22.211743, 11.746035, 12.311143] col = ['red', 'green', 'blue', 'blue', 'yellow', 'black', 'green', 'red', 'red', 'green', 'blue', 'yellow', 'green', 'blue', 'yellow', 'green', 'blue', 'blue', 'red', 'blue', 'yellow', 'blue', 'blue', 'yellow', 'red', 'yellow', 'blue', 'blue', 'blue', 'yellow', 'blue', 'green', 'yellow', 'green', 'green', 'blue', 'yellow', 'yellow', 'blue', 'yellow', 'blue', 'blue', 'blue', 'green', 'green', 'blue', 'blue', 'green', 'blue', 'green', 'yellow', 'blue', 'blue', 'yellow', 'yellow', 'red', 'green', 'green', 'red', 'red', 'red', 'red', 'green', 'red', 'green', 'yellow', 'red', 'red', 'blue', 'red', 'red', 'red', 'red', 'blue', 'blue', 'blue', 'blue', 'blue', 'red', 'blue', 'blue', 'blue', 'yellow', 'red', 'green', 'blue', 'blue', 'red', 'blue', 'red', 'green', 'black', 'yellow', 'blue', 'blue', 'green', 'red', 'red', 'yellow', 'yellow', 'yellow', 'red', 'green', 'green', 'yellow', 'blue', 'green', 'blue', 'blue', 'red', 'blue', 'green', 'blue', 'red', 'green', 'green', 'blue', 'blue', 'green', 'red', 'blue', 'blue', 'green', 'green', 'red', 'red', 'blue', 'red', 'blue', 'yellow', 'blue', 'green', 'blue', 'green', 
'yellow', 'yellow', 'yellow', 'red', 'red', 'red', 'blue', 'blue'] # Import numpy as np import numpy as np # Update: set s argument to np_pop plt.scatter(gdp, life_exp, s = np.array(pop)*2, c= col, alpha=0.6) # Previous customizations plt.xscale('log') plt.xlabel('GDP per Capita [in USD]') plt.ylabel('Life Expectancy [in years]') plt.title('World Development in 2007') plt.xticks([1000, 10000, 100000],['1k', '10k', '100k']) # Additional customizations plt.text(1550, 71, 'India') plt.text(5700, 80, 'China') # Add grid() call plt.grid(True) # Show the plot plt.show() mu, sigma = 100, 15 values = mu + sigma * np.random.randn(10000) plt.hist(values, bins=50, color='green') plt.show() x = np.random.rand(10) y = np.random.rand(10) z = np.sqrt(x**2 + y**2) plt.subplot(321) plt.scatter(x, y, s=80, c=z, marker=">") plt.subplot(322) plt.scatter(x, y, s=80, c=z, marker=(5, 0)) verts = np.array([[-1, -1], [1, -1], [1, 1], [-1, -1]]) plt.subplot(323) plt.scatter(x, y, s=80, c=z, marker=verts) plt.subplot(324) plt.scatter(x, y, s=80, c=z, marker=(5, 1)) plt.subplot(325) plt.scatter(x, y, s=80, c=z, marker='+') plt.subplot(326) plt.scatter(x, y, s=80, c=z, marker=(5, 2)) plt.suptitle('Scatter Star Ploy') plt.show() Explanation: That's how NOT to do visualization as it don't make any sense ... Lets try something better. End of explanation
1,054
Given the following text description, write Python code to implement the functionality described below step by step Description: QC Configuration Objective Step1: load_cfg(), just for demonstration Here we will import the load_cfg() function to illustrate different procedures. This is typically not necessary since ProfileQC does that for us. The cfgname argument for load_cfg is the same for ProfileQC, thus when we call ProfileQC(dataset, cfgname='argo') the procedure applied to dataset is the same shown by load_cfg(cfgname='argo') We will take advantage on that and simplify this notebook by inspecting only the configuration without actually applying it. Step2: Built-in tests The easiest way to configure a QC procedure is by using one of the built-in tests, for example the GTSPP procedure for realtime data, here named 'gtspp_realtime'. Step3: The output cfg is a dictionary type of object, more specifically it is an ordered dictionary. The configuration has Step4: So, for GTSSP realtime assessement, all variables must be associated with a valid time and a valid location that is at sea. Step5: GTSPP evaluates temperature and salinity. Here we use CF standard names, so temperature is sea_water_temperature. But which tests are applied on temperature measurements? Step6: Let's inspect the spike test. Step7: There is one single item, the threshold, here defined as 2, so that any measured temperature with a spike greater than this threshold will fail on this spike test. Let's check the global range test. Step8: Here there are two limit values, the minimum acceptable value and the maximum one. Anything beyond these limits will fail this test. Check CoTeDe's manual to see what each test does and the possible parameters for each one. Explicit inline A QC procedure can also be explicitly defined with a dictionary. For instance, let's consider that we want to evaluate the temperature of a dataset with a single test, the spike test, using a threshold equal to one, Step9: Note that load_cfg took care for us to format it with the 0.21 standard, adding the revision and variables. If a revision is not defined, it is assumed a pre-0.21. Compound procedure Many of the recommended QC procedures share several tests in common. One way to simplify a QC procedure definition is by using inheritance to define a QC procedure to be used as a template. For example, let's create a new QC procedure that is based on GTSPP realtime and add a new test to that, the World Ocean Atlas Climatology comparison for temperature, with a threshold of 3 standard deviations. Step10: There is a new item, inherit Step11: And now sea_water_temperature has all the GTSPP realtime tests plus the WOA comparison, Step12: This new definition is actually the GTSPP recommended procedure for non-realtime data, i.e. the delayed mode. The built-in GTSPP procedure is actually written by inheriting the GTSPP realtime. Step13: The inheritance can also be used to modify any parameter from the parent template procedure. For example, let's use the GTSPP recommended procedure but with a more restricted threshold, equal to 1,
Python Code: # A different version of CoTeDe might give slightly different outputs. # Please let me know if you see something that I should update. import cotede print("CoTeDe version: {}".format(cotede.__version__)) Explanation: QC Configuration Objective: Show different ways to configure a quality control (QC) procedure - explicit inline or calling a pre-set configuration. For CoTeDe, the most important component is the human operator, hence it should be easy to control which tests to apply and the specific parameters of each test. CoTeDe is based on the principle of a single engine for multiple applications by using a dictionary to describe the QC procedure to be used, since 2011. End of explanation from cotede.utils import load_cfg Explanation: load_cfg(), just for demonstration Here we will import the load_cfg() function to illustrate different procedures. This is typically not necessary since ProfileQC does that for us. The cfgname argument for load_cfg is the same for ProfileQC, thus when we call ProfileQC(dataset, cfgname='argo') the procedure applied to dataset is the same shown by load_cfg(cfgname='argo') We will take advantage on that and simplify this notebook by inspecting only the configuration without actually applying it. End of explanation cfg = load_cfg('gtspp_realtime') print(list(cfg.keys())) Explanation: Built-in tests The easiest way to configure a QC procedure is by using one of the built-in tests, for example the GTSPP procedure for realtime data, here named 'gtspp_realtime'. End of explanation cfg['revision'] print(list(cfg['common'].keys())) Explanation: The output cfg is a dictionary type of object, more specifically it is an ordered dictionary. The configuration has: A revision to help to determine how to handle this configuration. A common item with the common tests for the whole dataset, i.e. the tests that are valid for all variables. For instance, a valid date and time is the same if we are evaluating temperature, salinity, or chlorophyll fluorescence. A variables, with a list of the variables to evaluate. Let's check each item: End of explanation print(list(cfg['variables'].keys())) Explanation: So, for GTSSP realtime assessement, all variables must be associated with a valid time and a valid location that is at sea. End of explanation print(list(cfg['variables']['sea_water_temperature'].keys())) Explanation: GTSPP evaluates temperature and salinity. Here we use CF standard names, so temperature is sea_water_temperature. But which tests are applied on temperature measurements? End of explanation print(cfg['variables']['sea_water_temperature']['spike']) Explanation: Let's inspect the spike test. End of explanation print(list(cfg['variables']['sea_water_temperature']['global_range'])) Explanation: There is one single item, the threshold, here defined as 2, so that any measured temperature with a spike greater than this threshold will fail on this spike test. Let's check the global range test. End of explanation my_config = {"sea_water_temperature": {"spike": { "threshold": 1 } } } cfg = load_cfg(my_config) print(cfg) Explanation: Here there are two limit values, the minimum acceptable value and the maximum one. Anything beyond these limits will fail this test. Check CoTeDe's manual to see what each test does and the possible parameters for each one. Explicit inline A QC procedure can also be explicitly defined with a dictionary. 
For instance, let's consider that we want to evaluate the temperature of a dataset with a single test, the spike test, using a threshold equal to one, End of explanation my_config = {"inherit": "gtspp_realtime", "sea_water_temperature": {"woa_normbias": { "threshold": 3 } } } cfg = load_cfg(my_config) print(cfg.keys()) Explanation: Note that load_cfg took care for us to format it with the 0.21 standard, adding the revision and variables. If a revision is not defined, it is assumed a pre-0.21. Compound procedure Many of the recommended QC procedures share several tests in common. One way to simplify a QC procedure definition is by using inheritance to define a QC procedure to be used as a template. For example, let's create a new QC procedure that is based on GTSPP realtime and add a new test to that, the World Ocean Atlas Climatology comparison for temperature, with a threshold of 3 standard deviations. End of explanation print(cfg['inherit']) Explanation: There is a new item, inherit End of explanation print(cfg['variables']['sea_water_temperature'].keys()) Explanation: And now sea_water_temperature has all the GTSPP realtime tests plus the WOA comparison, End of explanation cfg = load_cfg('gtspp') print(cfg['inherit']) Explanation: This new definition is actually the GTSPP recommended procedure for non-realtime data, i.e. the delayed mode. The built-in GTSPP procedure is actually written by inheriting the GTSPP realtime. End of explanation my_config = {"inherit": "gtspp_realtime", "sea_water_temperature": {"spike": { "threshold": 1 } } } cfg = load_cfg(my_config) print(cfg['variables']['sea_water_temperature']['spike']) Explanation: The inheritance can also be used to modify any parameter from the parent template procedure. For example, let's use the GTSPP recommended procedure but with a more restricted threshold, equal to 1, End of explanation
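A small follow-on sketch that is not part of the original notebook: it simply combines the two inheritance examples above into one delayed-mode-style configuration and sanity-checks the merged result, assuming load_cfg merges the inherited template with the overrides exactly as described above.

from cotede.utils import load_cfg

combined = {
    "inherit": "gtspp_realtime",
    "sea_water_temperature": {
        "spike": {"threshold": 1},         # tighter than the GTSPP default of 2
        "woa_normbias": {"threshold": 3},  # WOA climatology check, 3 standard deviations
    },
}

cfg = load_cfg(combined)

# The inherited GTSPP realtime tests should still be present, with the spike
# threshold overridden by the value given above.
assert cfg["variables"]["sea_water_temperature"]["spike"]["threshold"] == 1
print(list(cfg["variables"]["sea_water_temperature"].keys()))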
1,055
Given the following text description, write Python code to implement the functionality described below step by step Description: vn.past.demo - Welcome! 1. Preface past 是一个从属于vn.trader的市场历史数据解决方案模块。主要功能为: 从datayes(通联数据)等web数据源高效地爬取、更新历史数据。 基于MongoDB的数据库管理、快速查询,各种输出格式的转换。 基于Matplotlib快速绘制K线图等可视化对象。 主要依赖:pymongo,pandas,requests,json 开发测试环境: * OS X 10.10 / Windows 7 * Anaconda.Python 2.7 2. Get Started 2.1 使用前 安装MongoDB Step1: 3. Methods 3.1 fetch DataGenerator.fetch( ticker, start, end, field=-1, output='list' ) ticker Step2: 3.2 update DataGenerator.update() 从数据库中获取存在的最新日期,然后自动更新数据库到今日。 根据网速的不同,更新一到五个交易日所需时间为1分钟到200秒不等。 Step3: 3.3 绘图相关 Bar.get_candlist( ) 我们知道matplotlib.finance.candlestick_ochl要求严格的input形式。为[(t,o,c,h,l),...]这样的数组列表。 内建Bar DataFrame加入了一个方法自己形成这种格式输出,方便作K线图。 Step4: Resampler.rspfbar_date(self, rate) 对Bar数据进行再取样。rate=取样率。 仍在测试中。
Python Code: # init.py from base import * if __name__ == '__main__': ds = DataGenerator() ds.download() Explanation: vn.past.demo - Welcome! 1. Preface past 是一个从属于vn.trader的市场历史数据解决方案模块。主要功能为: 从datayes(通联数据)等web数据源高效地爬取、更新历史数据。 基于MongoDB的数据库管理、快速查询,各种输出格式的转换。 基于Matplotlib快速绘制K线图等可视化对象。 主要依赖:pymongo,pandas,requests,json 开发测试环境: * OS X 10.10 / Windows 7 * Anaconda.Python 2.7 2. Get Started 2.1 使用前 安装MongoDB: https://www.mongodb.org/downloads 更新pymongo至3.0以上版本: ~$ pip install pymongo --upgrade 安装或更新requests: ~$ pip install requests --upgrade 启动MongoDB: ~$ mongod 2.2 首次使用 Demo中目前加载了使用通联数据Api下载股票日线数据和期货合约日线数据的方法。 首次使用时: 用文本编辑器打开base.py,填写通联数据的用户token。 执行init.py初始化MongoDB数据库。即下载全部股票与期货合约日线数据并储存至MongoDB。默认的初始化数据跨度为:股票从2013年1月1日至2015年7月20日;期货合约从2015年1月1日至2015年7月20日。各使用最大30个CPU线程。根据网速的不同,下载会花费大概8到15分钟。 End of explanation import time start_time = time.time() l = ds.fetch('000001','20150101','20150701',field=['closePrice','openPrice'],output='list') # 输出字典列表,股票代码为000001,选择closePrice和openPrice print 'Finished in',time.time()-start_time,'seconds' # 查询时间(秒) l[0:5] ds.fetch('IF1512','20150101','20150701',output='df').head() # 输出dataframe,期货合约为IF1512,包含所有键 bar = ds.fetch('000001','20150101','20150701',output='bar') # 输出Bar print type(bar) bar.head() Explanation: 3. Methods 3.1 fetch DataGenerator.fetch( ticker, start, end, field=-1, output='list' ) ticker: 字符串, 股票或期货合约代码。 start, end: ‘yyyymmdd’ 格式字符串;查询时间起止点。 field: 字符串列表,所选取的key。默认值为-1,选取所有key。 output: 字符串,输出格式。默认为'list',输出字典列表。可选的类型为: 'list': 输出字典列表。 'df': 输出pandas DataFrame。 'bar': 输出本模块内建的Bar数据结构,为一个包含日期,开、收、高、低价格以及成交量的DataFrame,之后详细介绍。注意若选择输出bar,则参数field的值会被忽略。 End of explanation ds.fetch('000001','20150701','20150723',field=['closePrice','openPrice'],output='list')[0] # 由于我们按照默认时间跨度下载,最后记录就是7月20日。(本文档写作时间为7月23日) # 这里手贱在update完之后又敲了一下build。。 ds.update() ds.fetch('000001','20150701','20150723',field=['closePrice','openPrice'],output='list') # 7月23日数据已更新。 Explanation: 3.2 update DataGenerator.update() 从数据库中获取存在的最新日期,然后自动更新数据库到今日。 根据网速的不同,更新一到五个交易日所需时间为1分钟到200秒不等。 End of explanation bar.head() candle = bar.get_candlist() candle[0:5] start_time = time.time() %matplotlib inline import matplotlib.pyplot as plt from matplotlib.finance import candlestick_ochl bar = ds.fetch('000100','20141001','20150601',output='bar') quotes = bar.get_candlist() fig = plt.figure(figsize=(16,10)) ax = plt.subplot(111) candlestick_ochl(ax, quotes, width=0.7, colorup='#5998ff', colordown='#07000d') ax.set_xlim([0, len(quotes)]) print 'Finished in',time.time()-start_time,'seconds.' Explanation: 3.3 绘图相关 Bar.get_candlist( ) 我们知道matplotlib.finance.candlestick_ochl要求严格的input形式。为[(t,o,c,h,l),...]这样的数组列表。 内建Bar DataFrame加入了一个方法自己形成这种格式输出,方便作K线图。 End of explanation rs = Resampler() rs.load_bars(bar) newbar1 = Bar(rs.rspfbar_date(3)) newbar2 = Bar(rs.rspfbar_date(7)) quotes1 = newbar1.get_candlist() quotes2 = newbar2.get_candlist() fig = plt.figure(figsize=(10,10)) ax1 = plt.subplot(211) ax2 = plt.subplot(212) candlestick_ochl(ax1, quotes1, width=0.7, colorup='#5998ff', colordown='#07000d') candlestick_ochl(ax2, quotes2, width=0.7, colorup='#5998ff', colordown='#07000d') ax1.set_xlim([0, len(quotes1)]) ax2.set_xlim([0, len(quotes2)]) Explanation: Resampler.rspfbar_date(self, rate) 对Bar数据进行再取样。rate=取样率。 仍在测试中。 End of explanation
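A hedged convenience sketch, not from the original demo: it only chains together the calls documented above (DataGenerator.fetch, Resampler.load_bars, Resampler.rspfbar_date, Bar, Bar.get_candlist) and assumes they behave as described. Importing the three classes by name from base, and the availability of matplotlib.finance (removed in newer matplotlib releases), are both assumptions.

import matplotlib.pyplot as plt
from matplotlib.finance import candlestick_ochl
from base import DataGenerator, Resampler, Bar  # assumed to be exposed by base.py

def plot_daily_candles(ticker, start, end, resample_rate=None):
    # Fetch daily bars for one ticker and draw a candlestick chart,
    # optionally resampling with the (still experimental) Resampler.
    ds = DataGenerator()
    bar = ds.fetch(ticker, start, end, output='bar')
    if resample_rate:
        rs = Resampler()
        rs.load_bars(bar)
        bar = Bar(rs.rspfbar_date(resample_rate))
    quotes = bar.get_candlist()  # [(t, o, c, h, l), ...]
    fig = plt.figure(figsize=(12, 6))
    ax = plt.subplot(111)
    candlestick_ochl(ax, quotes, width=0.7, colorup='#5998ff', colordown='#07000d')
    ax.set_xlim([0, len(quotes)])
    plt.show()

plot_daily_candles('000001', '20150101', '20150701', resample_rate=5)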
1,056
Given the following text description, write Python code to implement the functionality described below step by step Description: Comparing Python and Julia This is a brute-force attempt at solving Project Euler problem 14 with Python. The implementation is such that it is one-to-one with the corresponding Julia code. Step1: Python code here is slow, so we would rather not execute it multiple times as with the %timeit magic. Step2: Next we consider the problem of drawing a Julia fractal. In Python we try to implement a rather clever, heavily vectorized algorithm.
Python Code: # Collatz def collatz_chain(n): 'Compute the Collatz chain for number n.' k = 1 while n > 1: n = 3*n+1 if (n % 2) else n >> 1 k += 1 # print n return k def solve_euler(stop): 'Which of the number [1, stop) has the longest Collatz chain.' n, N, N_max = 1, 0, 0 while n < stop: value = collatz_chain(n) if value > N_max: N = n N_max = value n += 1 return N, N_max Explanation: Comparing Python and Julia This is a bruteforce attempt at solving Project Euler problem 14 with Python. The implementation is such that it is one-to-one with corresponding Julia code. End of explanation import time N = 1000000 t0 = time.time() ans = solve_euler(N) t1 = time.time() - t0 ans, t1 Explanation: Python code here is slow so we rather not exacute it multiple times as with %timeit magic. End of explanation import numpy as np # Adopted from https://thesamovar.wordpress.com/2009/03/22/fast-fractals-with-python-and-numpy/ def julia(x, y, c): X, Y = np.meshgrid(x, y) Z = X + complex(0, 1)*Y del X, Y C = c*np.ones(Z.shape, dtype=complex) img = 80*np.ones(C.shape, dtype=int) # We will shrink Z, C inside the loop if certain point is found unbounded ix, iy = np.mgrid[0:Z.shape[0], 0:Z.shape[1]] Z, C, ix, iy = map(lambda mat: mat.flatten(), (Z, C, ix, iy)) for i in xrange(80): if not len(Z): break np.multiply(Z, Z, Z) # z**2 + c np.add(Z, C, Z) rem = abs(Z) > 2.0 # Unbounded - definite color img[ix[rem], iy[rem]] = i + 1 rem = ~rem # Bounded - keep for next round Z = Z[rem] # Update variables for next round ix, iy = ix[rem], iy[rem] C = C[rem] return img cs = (complex(-0.06, 0.67), complex(0.279, 0), complex(-0.4, 0.6), complex(0.285, 0.01)) x = np.arange(-1.5, 1.5, 0.002) y = np.arange(1, -1, -0.002) Js = [] # Evaluate fractal generation t0 = time.time() for c in cs: Js.append(julia(x, y, c)) t1 = time.time() - t0 print 'Generated in %.4f s' % t1 print 'Image size %d x %d' % Js[0].shape %matplotlib inline import matplotlib.pyplot as plt for J in Js: plt.figure(figsize=(12, 8)) plt.imshow(J, cmap="viridis", extent=[-1.5, 1.5, -1, 1]) plt.show() Explanation: Next we consider a problem of drawing a Julia fractal. In Python we try to implement a rather clever heavily vectorized algorithm End of explanation
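An aside that is not part of the one-to-one comparison above: the usual way to speed up the Problem 14 search, in either language, is to cache chain lengths so that shared tails are not recomputed. A minimal memoized variant, using the same chain-length convention as collatz_chain, might look like this.

def solve_euler_memo(stop):
    'Memoized variant: cache chain lengths so shared tails are computed only once.'
    cache = {1: 1}
    best_n, best_len = 1, 1
    for start in range(2, stop):
        n, path = start, []
        while n not in cache:
            path.append(n)
            n = 3*n+1 if (n % 2) else n >> 1
        length = cache[n]
        for m in reversed(path):
            length += 1
            cache[m] = length
        if cache[start] > best_len:
            best_n, best_len = start, cache[start]
    return best_n, best_len

print(solve_euler_memo(1000000))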
1,057
Given the following text description, write Python code to implement the functionality described below step by step Description: Working with data 2017. Class 2 Contact Javier Garcia-Bernardo [email protected] 0. Structure Data types, structures and code II Merging and concatenating dataframes My second plots Summary Step1: PYTHON Step2: OPERATIONS IN DICT Get Step3: Add Step4: Remove Step5: Creating a dictionary from two lists Step7: Why to use dict? Because it's much much faster than a list, it always takes the same time to find an element in a dict, that's not the case in a list - With 10 elements, finding an element in a dict is 2x faster than finding it in a list - With 1000 elements, finding an element in adict is 100x faster than finding it in a list - With 1 million elements, finding an element in a dict is 100000x faster than finding it in a list Useful to assing values to words for instance 1.2-3 Code Step8: How the arguments of a function work If there are many arguments, the first value that you pass is matched to the first argument of the function, the second to the second, etc. For instance, these are the arguments of the function pd.read_csv() `pd.read_csv(filepath_or_buffer, sep=',', delimiter=None, header='infer',...)` Writing `pd.read_csv("data/ams_green.csv","\t",None,0)` matches `filepath_or_buffer TO "data/ams_green.csv", sep TO "\t", delimiter TO None, header TO 0` You can also pass the arguments by name. For instance `pd.read_csv("data/ams_green.csv",header= 0, sep="\t",delimiter=None)` is identical to the line before. In this case the values you pass do not have to be in the same order as the arguments. 1.3 Scope Step9: Variables created inside functions are only seen within the function Step10: Variables created outside functions are seen by all the code (be careful!) Step11: 1.4 For-Loops Iterate over a list (or an array, or a set, or the keys of a dictionary..), like this Step12: what if we want to stop a loop? Then we can use break Step13: what if we want to skip some rounds? Then we use continue Step14: 1.5 Control flow = if-else statements Controls the flow. If something happens, then do something. Else, do another thing. Like this ` article = "Trump is going to make America great" if "python" in article Step16: 1.x1 Let's combine all we learned so far Write a function that prints if an article is related to "python" or "Trump", and run it for many cases We can wrap it into a function Step17: Now we do it for many articles Step19: 1.x2 Let's combine all we learned so far Write a function that counts the number of articles with the word python and "Trump" Step20: Let's make it a bit more flexible Step21: what if we want a loop but we don't know when we need to stop? Then we can use the while loop Step22: 2. Writing and reading from disk You can also write files line by line "r" Step23: But remember to add a "return character" (\n) Step24: There are 3 ways to read a file We won't be reading the files like this too often, but sometimes you need to read them line by line (instead of loading all the files like we do with pandas) Read it all as a string Step25: Read it breaking in the "\n" Step26: Read it line by line Step27: you can delete the "\n" at the end of the string with .rstrip() Step28: In-class exercises Step29: 1. Use a loop to do the same than above (write 5 lines to a file) Step30: 2. Use an if-else statement to write only if the number is larger than 3 Step31: 3. 
Encapsulate everything in a function, and call the function Step32: Everything in your computer/phone/online is based on these things you have already know
Python Code: ##Some code to run at the beginning of the file, to be able to show images in the notebook ##Don't worry about this cell #Print the plots in this screen %matplotlib inline #Be able to plot images saved in the hard drive from IPython.display import Image #Make the notebook wider from IPython.core.display import display, HTML display(HTML("<style>.container { width:90% !important; }</style>")) #Usual imports import pandas as pd import numpy as np import pylab as plt Explanation: Working with data 2017. Class 2 Contact Javier Garcia-Bernardo [email protected] 0. Structure Data types, structures and code II Merging and concatenating dataframes My second plots Summary End of explanation #Dictionary this_is_a_dict = {"Javier": "[email protected]", "Friend1": "[email protected]", "Friend2": "[email protected]"} print(this_is_a_dict) print(type(this_is_a_dict)) Explanation: PYTHON: Variables and code Python uses variables and code. Variables Variables tell the computer to save something (a number, a string, a spreadsheet) with a name. For instance, if you write variable_name = 3, the computer knows that variable_name is 3. - Data types: Numbers, strings and others - 1.1 Data structures: - Lists, tables... (full of data types) Code Instructions to modify variables 1.2 Can be organized in functions Variables can be seen for all or part of the code: 1.3 Scope of variables 1.4. For loops: Repeat a similar statement many times 1.5 Control-flow: if-else statements, try-except statement and for-loops 1.6 Try-except: error catching 1.1 Dictionary (type of data structure) Like in a index, finds a page in the book very very fast. It combiens keys (word in the index) with values (page number associated to the word): {key1: value2, key2: value2} The keys can be numbers, strings or tuples, but NOT lists (if you try Python will give the error unhashable key) End of explanation #Get an element print(this_is_a_dict["Friend2"]) print(this_is_a_dict.get("Friend2")) #The difference between the two is that while the first line gives an error if "Friends2" #is not part of the dictionary, the second one answers with None** print(this_is_a_dict.get("Friend5")) #not enough friends Explanation: OPERATIONS IN DICT Get End of explanation #Create an element this_is_a_dict["Friend3"] = "[email protected]" this_is_a_dict #Print the keys print(this_is_a_dict.keys()) #Print the values print(this_is_a_dict.values()) Explanation: Add End of explanation del this_is_a_dict["Friend3"] print(this_is_a_dict) Explanation: Remove End of explanation #Creating dictionary using two lists list_names = ["Javier", "Friend1", "Friend2"] list_numbers = ["[email protected]","[email protected]","[email protected]"] #Put both together using zip this_is_a_dict = dict(zip(list_names,list_numbers)) print(this_is_a_dict) #The zip object is another strange data structure that we cannot see (like range) print(zip(list_names,list_numbers)) #But we can convert it to a list to see how it looks (like range) print(list(zip(list_names,list_numbers))) Explanation: Creating a dictionary from two lists: ZIP End of explanation ## Our own functions def mean_ours(list_numbers): #list_numbers is the arguments This is called the docstring, it is a comment describing the function. In this case the function calculates the mean of a list of numbers. input list_numbers: a list of numbers output: the mean of the input #what gives back return sum(list_numbers)/len(list_numbers) ##INDENTATION!! ##Two points after the "def" mean_ours? 
aList = [2,3,4] print(mean_ours(aList)) #this is how you call the funciton Explanation: Why to use dict? Because it's much much faster than a list, it always takes the same time to find an element in a dict, that's not the case in a list - With 10 elements, finding an element in a dict is 2x faster than finding it in a list - With 1000 elements, finding an element in adict is 100x faster than finding it in a list - With 1 million elements, finding an element in a dict is 100000x faster than finding it in a list Useful to assing values to words for instance 1.2-3 Code: Operations, functions, control flow and loops We have the data in data structures, composed of several data types. We need code to edit everything 1.2 Functions A fragment of code that takes some standard input (arguments) and returns some standard output. Example: The mean function. Gets a list of numbers as input, gives the mean as output. Gives an error if you try to calculate the mean of some strings. We have already seen many functions. Add, mean... End of explanation def f(): local_var1 = 2 local_var2 = 3 local_var = local_var1*local_var2 print(local_var) #Call the function f() Explanation: How the arguments of a function work If there are many arguments, the first value that you pass is matched to the first argument of the function, the second to the second, etc. For instance, these are the arguments of the function pd.read_csv() `pd.read_csv(filepath_or_buffer, sep=',', delimiter=None, header='infer',...)` Writing `pd.read_csv("data/ams_green.csv","\t",None,0)` matches `filepath_or_buffer TO "data/ams_green.csv", sep TO "\t", delimiter TO None, header TO 0` You can also pass the arguments by name. For instance `pd.read_csv("data/ams_green.csv",header= 0, sep="\t",delimiter=None)` is identical to the line before. In this case the values you pass do not have to be in the same order as the arguments. 1.3 Scope: Global vs local variables Variables inside functions are only seen by that function Variables outside functions are seen and can be modified by all functions (dangerous) End of explanation def f(): local_var1 = 2 local_var2 = 2 local_var = local_var1*local_var2 #Call the function f() #We haven't created local_var print(local_var) def f(): local_var1 = 2 local_var2 = 2 local_var = local_var1*local_var2 return local_var #Call the function gvar = f() #Now we have local_var (but generally it is not a good idea to use the same name) print(gvar) Explanation: Variables created inside functions are only seen within the function End of explanation local_var = "python" def f(): print(local_var) #this can read the variable outside, but NOT CHANGE IT (except .pop() and .append()) #it's okay for functions not to return anything, by default they return None #Call the function f() #We can also see it from outside the function print(local_var) Explanation: Variables created outside functions are seen by all the code (be careful!) 
End of explanation for x in ["Adam","Utercht"]: print(x) for i,x in enumerate(["Adam","Utercht"]): print(i,x) i = 0 for x in ["Adam","Utercht"]: print(i,x) i = i + 1 print("python" in list_articles[1]) #Imagine we want to find what some articles are talking about, we could do it like this, #but it's unfeasible when you have more than a dozen articles list_articles = ["article 1: blah python", "article 2: blah Trump", "article 3: blah Trump", "article 4: blah Trump"]#many article print("python" in list_articles[0]) print("python" in list_articles[1]) print("python" in list_articles[2]) print("python" in list_articles[3]) #... #but we can use for loops for a in list_articles: print("python" in a) #this is very common as well (especially in other programming languages) for index in [0,1,2,3]: print("python" in list_articles[index]) list(enumerate(list_articles)) #this is sometimes useful when we want both the article and the index for index,article in enumerate(list_articles): print(index, "python" in article) Explanation: 1.4 For-Loops Iterate over a list (or an array, or a set, or the keys of a dictionary..), like this: for element in [1,2,3,4,5]: print(element) The computer: - Reads the first line (for element in [1,2,3,4,5]) and realizes it is a for loop - It then assigns element = 1 (the first element in the list) and does whatever it is inside the for loop (print(element)) - Then it assigns element = 2 (the second element) and does whatever it is inside the loop - It continues like this until the last element - When there are no more elements in the list it exits the loop and continues reading the code behind the loop (if any) You can write anything instead of element (for i in range(10) for instance) The indentation and the colon are important, you get SyntaxError without them. End of explanation for index,article in enumerate(list_articles): if index == 2: break print(index, "python" in article) Explanation: what if we want to stop a loop? Then we can use break End of explanation for index,article in enumerate(list_articles): if index%2 == 0: continue #this skips the rest of the code below if the number is even print(index, "python" in article) Explanation: what if we want to skip some rounds? Then we use continue End of explanation article = "article 2: blah Trump python" if "python" in article: print("Article refering to Python") if "Trump" in article: print("Article refering to Trump") Explanation: 1.5 Control flow = if-else statements Controls the flow. If something happens, then do something. Else, do another thing. Like this ` article = "Trump is going to make America great" if "python" in article: print("python",article) elif "climate change" in article: print("climate change",article) else: print("no python", article) ` The computer: - Reads the first line (if "python" in article) and realizes it is an if-else statement - It then checks if python" in article is True. - If it is True, it reads whatever is inside the if statement (in this case print("python",article)) and goes to the end of all the if-elif-else. - If it is False, it goes to the elif (else if), and checks if elif "climate change" in article is True. - If it is True, it reads whatever it is inside and goes to the end - If it is False, it goes to the else and prints whatever it is inside You only need the if, the elif and else are optional. For instance without else the code above wouldn't print anything. You can have as many elifs as you want. 
The indentation and the colon are important, you get SyntaxError without them. Let's write code that tells us if an article is about python or Trump End of explanation def python_or_trump(article): prints if an article is related to python or trump input article: string with words if "python" in article: print("Article refering to Python") elif "Trump" in article: print("Article refering to Trump") else: print("Article not refering to Python or Trump") article = "article 2: blah Trump" print(article) #this is how you call the function python_or_trump(article) #stops when python is found, never check for trump article = "article 2: blah Trump python" print(article) python_or_trump(article) article = "article 2: blah blah" print(article) python_or_trump(article) Explanation: 1.x1 Let's combine all we learned so far Write a function that prints if an article is related to "python" or "Trump", and run it for many cases We can wrap it into a function End of explanation list_articles = ["article 1: blah python", "article 2: blah Trump", "article 3: blah Trump", "article 4: blah Trump"]#many articles for article in list_articles: python_or_trump(article) Explanation: Now we do it for many articles End of explanation def count_words(list_articles): input: list of articles output: number of articles with the word trump and with the word pythoon count_trump = 0 count_python = 0 for article in list_articles: if "python" in article.lower(): count_python = count_python + 1 #count_python += 1 if "trump" in article.lower(): count_trump = count_trump + 1 #count_trump += 1 return count_trump,count_python import numpy as np list_articles = ["article 1: blah python", "article 2: blah Trump", "article 3: blah Trump", "article 4: blah Trump"]#many articles g_count_trump,g_count_python = count_words(list_articles) print(g_count_python) print(g_count_trump) print("python articles: ", g_count_python) print("trump_articles: ", g_count_trump) [0]*10 Explanation: 1.x2 Let's combine all we learned so far Write a function that counts the number of articles with the word python and "Trump" End of explanation #Let's use a list of numbers instead of two separate variables for the counter list_articles = ["article 1: blah python", "article 2: blah Trump", "article 3: blah Trump", "article 4: blah Trump"]#many articles def count_words(list_articles): counters = [0]*2 # [0,0] for article in list_articles: if "python" in article: counters[0] += 1 #count_python += 1 #counters[0] = counters[0] + 1 if "Trump" in article: counters[1] += 1 #count_python += 1 return counters counters = count_words(list_articles) print("python articles: ") print(counters[0]) print("trump_articles: ") print(counters[1]) # And allow for any two words, not just python or Trump list_articles = ["article 1: blah python", "article 2: blah Trump", "article 3: blah Trump", "article 4: blah Trump"]#many articles def count_words(list_articles,words): counters = [0]*2 for article in list_articles: if words[0] in article: counters[0] += 1 #count_python += 1 if words[1] in article: counters[1] += 1 #count_python += 1 return counters counters = count_words(list_articles,words=["python","blah"]) print("python articles: ", counters[0]) print("blah_articles: ", counters[1]) words = ["python","Trump","blah"] list(enumerate(words)) list(range(len(words))),words enumerate(words) zip(range(len(words)),words) # And allow for any number of words, not just two list_articles = ["article 1: blah python", "article 2: blah Trump", "article 3: blah Trump", "article 4: blah 
Trump"]#many articles def count_words(list_articles,words): counters = [0] * len(words) for article in list_articles: for i,word in enumerate(words): if word in article: counters[i] += 1 return counters words = ["python","Trump","blah"] counters = count_words(list_articles,words) print(words) print(counters) #We can make a dictionary out of it d_word2counter = dict(zip(words,counters)) d_word2counter["Trump"] Explanation: Let's make it a bit more flexible End of explanation #For instance this fails, because we don't have more than 2 friends this_is_a_dict = {"Javier": "[email protected]", "Friend1": "[email protected]", "Friend2": "[email protected]"} print(this_is_a_dict["Friend5"]) f5 = this_is_a_dict.get("Friend5") if f5 is None: #f5 == None print("Not enough friends") #example how to fix it #the indents are important, as well as the colons try: print(this_is_a_dict["Friend5"]) except KeyError: print("Not enough friends") #but this one is very common and we have a function that does it for us print(this_is_a_dict.get("Friend5")) Explanation: what if we want a loop but we don't know when we need to stop? Then we can use the while loop: while condition: do something update condition #otherwise the loop is infinitei However in python is not too common. 1.6 Try - except Exception handling. Sometimes the code tries something that can result in an error, and you want to catch the error and react to it. End of explanation with open("data/file_to_write.csv","w+") as f: f.write("I'm line number {}".format(0)) f.write("I'm line number {}".format(1)) f.write("I'm line number {}".format(2)) f.write("I'm line number {}".format(3)) f.write("I'm line number {}".format(4)) Explanation: 2. Writing and reading from disk You can also write files line by line "r": read "w": write "w+": write and if doesn't exist, create it End of explanation with open("data/file_to_write.csv","w+") as f: f.write("I'm line number {}\n".format(0)) f.write("I'm line number {}\n".format(1)) f.write("I'm line number {}\n".format(2)) f.write("I'm line number {}\n".format(3)) f.write("I'm line number {}\n".format(4)) Explanation: But remember to add a "return character" (\n) End of explanation #Ways to read files with open("data/file_to_write.csv","r") as f: #way 1 all_file = f.read() print(all_file) Explanation: There are 3 ways to read a file We won't be reading the files like this too often, but sometimes you need to read them line by line (instead of loading all the files like we do with pandas) Read it all as a string End of explanation with open("data/file_to_write.csv") as f: #way 2 all_file_by_line = f.readlines() print(all_file_by_line) Explanation: Read it breaking in the "\n" End of explanation with open("data/file_to_write.csv") as f: #way 3 for line in f: print(line) print("Hi") print("Hi again") Explanation: Read it line by line End of explanation with open("data/file_to_write.csv") as f: #way 3 for line in f: print(line.rstrip()) Explanation: you can delete the "\n" at the end of the string with .rstrip() End of explanation with open("data/file_to_write.csv","w+") as f: f.write("I'm line number {}\n".format(0)) f.write("I'm line number {}\n".format(1)) f.write("I'm line number {}\n".format(2)) f.write("I'm line number {}\n".format(3)) f.write("I'm line number {}\n".format(4)) Explanation: In-class exercises End of explanation with open("data/file_to_write.csv","w+") as f: for test in range(5): f.write("I'm line number {}\n".format(test)) Explanation: 1. 
Use a loop to do the same than above (write 5 lines to a file) End of explanation with open("data/file_to_write.csv","w+") as f: for test in range(5): if test > 3: f.write("I'm line number {}\n".format(test)) with open("data/file_to_write.csv","r") as f: print(f.read()) Explanation: 2. Use an if-else statement to write only if the number is larger than 3 End of explanation def makesomethingup(): with open("data/file_to_write.csv","w+") as f: for test in range(5): if test > 3: f.write("I'm line number {}\n".format(test)) return None makesomethingup() Explanation: 3. Encapsulate everything in a function, and call the function End of explanation #A character is a special type of number ord("b") #A string is very similar to a list of characters "abdc"[3] #A boolean is a number print(True == 1) #A numpy array is a special type of list #A pandas dataframe is a list of numpy arrays #A set is a dictionary without values {"d":1,"e":3} vs {"d","e"} Explanation: Everything in your computer/phone/online is based on these things you have already know: data types: numbers data variables: lists and dictionaries code: if-else and for-loops Using these blocks you can create anything End of explanation
1,058
Given the following text description, write Python code to implement the functionality described below step by step Description: Select, Add, Delete, Columns Step1: dictionary like operations dictionary selection with string index
Python Code: import pandas as pd import numpy as np Explanation: Select, Add, Delete, Columns End of explanation cookbook_df = pd.DataFrame({'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40],'CCC' : [100,50,-30,-50]}) cookbook_df['BBB'] Explanation: dictionary like operations dictionary selection with string index End of explanation
1,059
Given the following text description, write Python code to implement the functionality described below step by step Description: 2/ Linearity Step1: Simplest linear function Step3: What about vector inputs? Step5: Linear transformations A linear transformation is function that takes vectors as inputs, and produces vectors as outputs Step7: Linear transformations as matrix-vector products See page 133 in v2.2 of the book.
Python Code: # setup SymPy from sympy import * init_printing() x, y, z, t = symbols('x y z t') alpha, beta = symbols('alpha beta') Explanation: 2/ Linearity End of explanation b, m = symbols('b m') def f(x): return m*x f(1) f(2) f(1+2) f(1) + f(2) expand(f(x+y)) == f(x) + f(y) Explanation: Simplest linear function End of explanation m_1, m_2 = symbols('m_1 m_2') def T(vec): A function that takes a 2D vector and returns a number. return m_1*vec[0] + m_2*vec[1] u_1, u_2 = symbols('u_1 u_2') u = Matrix([u_1,u_2]) v_1, v_2 = symbols('v_1 v_2') v = Matrix([v_1,v_2]) T(u) T(v) T(u) + T(v) expand( T(u+v) ) simplify( T(alpha*u + beta*v) - alpha*T(u) - beta*T(v) ) Explanation: What about vector inputs? End of explanation m_11, m_12, m_21, m_22 = symbols('m_11 m_12 m_21 m_22') def T(vec): A linear transformations R^2 --> R^2. out_1 = m_11*vec[0] + m_12*vec[1] out_2 = m_21*vec[0] + m_22*vec[1] return Matrix([out_1, out_2]) T(u) T(v) T(u+v) Explanation: Linear transformations A linear transformation is function that takes vectors as inputs, and produces vectors as outputs: $$ T: \mathbb{R}^n \to \mathbb{R}^m. $$ See page 136 in v2.2 of the book. End of explanation def T_impl(vec): A linear transformations implemented as matrix-vector product. M_T = Matrix([[m_11, m_12], [m_21, m_22]]) return M_T*vec T_impl(u) Explanation: Linear transformations as matrix-vector products See page 133 in v2.2 of the book. End of explanation
1,060
Given the following text description, write Python code to implement the functionality described below step by step Description: Import and Preprocessing We import Yelp business data from our trimmed cleaned.csv file. For further processing, we only analyze American cities for now. Since this particular dataset has data from only one metropolitan area in each state, we don't need to look up MSA information on a per-city basis for now. Our data cleaning is rather light for now Step1: Data Analysis We define four functions to perform our data analysis Step2: Scoring For now, we choose categories with at least 350 restaurants in the nation. We create a DataFrame called scores which uses the metropolitan areas as its index, and the chosen categories as its columns. We fill the competitiveness scores of each category, by metropolitan area, and translate so each city has the same average score. Then, each category has its own scores (within the city) normalized. Finally, we create some plots of the restaurants of the least competitive categories.
Python Code: from collections import Counter from ast import literal_eval import pandas as pd import numpy as np # import and cleaning rests = pd.read_csv('cleaned.csv') rests['categories'] = rests['categories'].apply(literal_eval) # American cities only rests = rests[rests['state'].isin(['PA', 'NC', 'IL', 'AZ', 'NV', 'WI', 'OH'])] # add metropolitan data rests = rests.assign(metro=df['state'].map({'PA': 'Pittsburgh', 'NC': 'Charlotte', 'IL': 'Urbana-Champaign', 'AZ': 'Phoenix', 'NV': 'Las Vegas', 'WI': 'Madison', 'OH': 'Cleveland'})) # light cleaning rests = rests[~rests['name'].str.contains('Airport')] rests['cat_length'] = rests['categories'].apply(len) # mean + 2 * std = 7.89 rests = rests[rests['cat_length'] < 8] Explanation: Import and Preprocessing We import Yelp business data from our trimmed cleaned.csv file. For further processing, we only analyze American cities for now. Since this particular dataset has data from only one metropolitan area in each state, we don't need to look up MSA information on a per-city basis for now. Our data cleaning is rather light for now: * We filter any restaurant with the name 'Airport' since airport restaurants don't compete with local restaurants. * We filter any business with too many categories (2 standard deviations above mean), since they are likely to be falsely labeled as restaurants. To-do: * Shorten the 'Event Planning & Service' string to just 'Event Planning' for display space. * Scrub out more fake restaurants. End of explanation def categorize(df, n): # creates a categories DataFrame from input dataframe, taking categories with over n restaurants df_cats = pd.DataFrame.from_dict(Counter(df['categories'].sum()), orient='index') \ .reset_index().rename(columns={'index': 'category', 0: 'restaurant_count'}) # delete categories 'Restaurants' and 'Food' df_cats = df_cats[(df_cats['category'] != 'Restaurants') & (df_cats['category'] != 'Food')] # a more complete method would be to cluster categories # as proof of concept, we'll just take the top ones for now return df_cats[df_cats['restaurant_count'] >= n] def cat_score(df, category, plot): # searches input DataFrame for restaurants in given category cat_search = df[df['categories'].apply(lambda x: category in x)] # compute individual restaurant scores cat_search = cat_search.assign(score=score(cat_search)) # plots scores if plot = True if plot: cat_search['score'].plot(kind='box') # return the category score return np.sqrt(cat_search['score'].sum()) def analyze_metro(m): # creates data for city restaurants and categories city_rests = rests[rests['metro'] == m] city_cats = categorize(city_rests, 0) # merge this categorical data with our original analysis city_cats = pd.merge(city_cats, cats, how='right', on=['category']).fillna(0) # find the ratio of restaurant_count in given metro versus the dataset, normalize by max city_cats['ratio'] = city_cats['restaurant_count_x'].div(city_cats['restaurant_count_y']) * rests.size / city_rests.size # return scored categories in the metro return city_cats.assign(score=city_cats['category'].apply(lambda x: cat_score(city_rests, x, False))) \ .set_index('category') def score(df): # creates a score series based on input category-city dataframe # current score metric: square of average rating * percent share of ratings return (df['stars'] ** 2) * df['review_count'] / df['review_count'].sum() Explanation: Data Analysis We define four functions to perform our data analysis: categorize(df, n) takes an input DataFrame df of restaurants and creates a DataFrame of 
categories. * The 'Restaurant' and 'Food' categories are deleted. * We only return categories with n restaurants. * Future: Create groupings of categories (i.e. 'Tex-Mex' versus 'Mexican') for finer analysis. cat_score(df, category, plot) scores all of the restaurants in a given df and category. * Individual restaurants are scored using the score function. * A box plot of the individual restaurant scores is also created if plot=True is passed. * The category score is the sum of the individual restaurant scores. analyze_metro(m) returns a DataFrame of categories and category scores from the metropolitan area m. * Future: We also compute ratio of restaurant count in the metropolitan versus in the nation. score(df) creates a score of all restaurants in the passed DataFrame df. * The current scoring function is square of average rating * the percentage share of total reviews by the restaurant. * When summed, categories are scored by a square average of average rating, weighted by number of reviews. End of explanation # take categories with 350+ restaurants cats = categorize(rests, 350) scores = pd.DataFrame(index=rests['metro'].unique(), columns=cats['category']) for metro in scores.index: metro_scores = analyze_metro(metro)['score'] scores.loc[metro] = metro_scores # normalize so each city has the same average category score # adjust for behavior in each city scores = scores.sub(scores.mean(axis=1), axis=0) + scores.mean().mean() # normalize by mean, divide by standard deviation - for each category # for each category, compare score to the nationwide average scores_std = (scores - scores.mean()) / scores.std() from matplotlib import pyplot as plt %matplotlib notebook def plot_score(df_score, city): # creates a bar chart of category scores in given city from input scores dataframe plt.figure() df_sorted = df_score.loc[city].sort_values() score_title = 'Competition Strength for Restaurant Categories in ' + city score_plot = df_sorted.plot(legend=False, title=score_title, style='o-', xticks=np.arange(len(scores.columns)), rot=90) score_plot.set(xlabel = 'Restaurant Category', ylabel = 'Competition Strength') score_plot.set_xticklabels(df_sorted.index) plt.tight_layout() return score_plot def category_plot(cat, city): city_data = df[df['metro'] == city] cat_list = city_data[city_data['categories'].apply(lambda x: cat in x)] cat_title = 'Aggregated Reviews for ' + cat + ' Restaurants in ' + city cat_plot = cat_list.plot(x=['stars'], y=['review_count'], style='o', xlim=(0, 5), legend=False, title=cat_title) cat_plot.set(xlabel='Stars Given', ylabel='Number of Reviews') return cat_plot plot_score(scores_std, 'Urbana-Champaign') category_plot('Seafood', 'Urbana-Champaign') Explanation: Scoring For now, we choose categories with at least 350 restaurants in the nation. We create a DataFrame called scores which uses the metropolitan areas as its index, and the chosen categories as its columns. We fill the competitiveness scores of each category, by metropolitan area, and translate so each city has the same average score. Then, each category has its own scores (within the city) normalized. Finally, we create some plots of the restaurants of the least competitive categories. End of explanation
1,061
Given the following text description, write Python code to implement the functionality described below step by step Description: Linear Regression By Evgenia "Jenny" Nitishinskaya and Delaney Granizo-Mackenzie with example algorithms by David Edwards Part of the Quantopian Lecture Series Step1: First we'll define a function that performs linear regression and plots the results. Step2: Now we'll get pricing data on TSLA and SPY and perform a regression. Step3: Each point on the above graph represents a day, with the x-coordinate being the return of SPY, and the y-coordinate being the return of TSLA. As we can see, the line of best fit tells us that for every 1% increased return we see from the SPY, we should see an extra 1.92% from TSLA. This is expressed by the parameter $\beta$, which is 1.9271 as estimated. Of course, for decresed return we will also see about double the loss in TSLA, so we haven't gained anything, we are just more volatile. Linear Regression vs. Correlation Linear regression gives us a specific linear model, but is limited to cases of linear dependence. Correlation is general to linear and non-linear dependencies, but doesn't give us an actual model. Both are measures of covariance. Linear regression can give us relationship between Y and many independent variables by making X multidimensional. Knowing Parameters vs. Estimates It is very important to keep in mind that all $\alpha$ and $\beta$ parameters estimated by linear regression are just that - estimates. You can never know the underlying true parameters unless you know the physical process producing the data. The parameters you estimate today may not be the same analysis done including tomorrow's data, and the underlying true parameters may be moving. As such it is very important when doing actual analysis to pay attention to the standard error of the parameter estimates. More material on the standard error will be presented in a later lecture. One way to get a sense of how stable your parameter estimates are is to estimate them using a rolling window of data and see how much variance there is in the estimates. Example case Now let's see what happens if we regress two purely random variables. Step4: The above shows a fairly uniform cloud of points. It is important to note that even with 100 samples, the line has a visible slope due to random chance. This is why it is crucial that you use statistical tests and not visualizations to verify your results. Now let's make Y dependent on X plus some random noise. Step5: In a situation like the above, the line of best fit does indeed model the dependent variable Y quite well (with a high $R^2$ value). Evaluating and reporting results The regression model relies on several assumptions
Python Code: # Import libraries import numpy as np from statsmodels import regression import statsmodels.api as sm import matplotlib.pyplot as plt import math Explanation: Linear Regression By Evgenia "Jenny" Nitishinskaya and Delaney Granizo-Mackenzie with example algorithms by David Edwards Part of the Quantopian Lecture Series: www.quantopian.com/lectures github.com/quantopian/research_public Linear regression is a technique that measures the relationship between two variables. If we have an independent variable $X$, and a dependent outcome variable $Y$, linear regression allows us to determine which linear model $Y = \alpha + \beta X$ best explains the data. As an example, let's consider TSLA and SPY. We would like to know how TSLA varies as a function of how SPY varies, so we will take the daily returns of each and regress them against each other. Python's statsmodels library has a built-in linear fit function. Note that this will give a line of best fit; whether or not the relationship it shows is significant is for you to determine. The output will also have some statistics about the model, such as R-squared and the F value, which may help you quantify how good the fit actually is. End of explanation def linreg(X,Y): # Running the linear regression X = sm.add_constant(X) model = regression.linear_model.OLS(Y, X).fit() a = model.params[0] b = model.params[1] X = X[:, 1] # Return summary of the regression and plot results X2 = np.linspace(X.min(), X.max(), 100) Y_hat = X2 * b + a plt.scatter(X, Y, alpha=0.3) # Plot the raw data plt.plot(X2, Y_hat, 'r', alpha=0.9); # Add the regression line, colored in red plt.xlabel('X Value') plt.ylabel('Y Value') return model.summary() Explanation: First we'll define a function that performs linear regression and plots the results. End of explanation start = '2014-01-01' end = '2015-01-01' asset = get_pricing('TSLA', fields='price', start_date=start, end_date=end) benchmark = get_pricing('SPY', fields='price', start_date=start, end_date=end) # We have to take the percent changes to get to returns # Get rid of the first (0th) element because it is NAN r_a = asset.pct_change()[1:] r_b = benchmark.pct_change()[1:] linreg(r_b.values, r_a.values) Explanation: Now we'll get pricing data on TSLA and SPY and perform a regression. End of explanation X = np.random.rand(100) Y = np.random.rand(100) linreg(X, Y) Explanation: Each point on the above graph represents a day, with the x-coordinate being the return of SPY, and the y-coordinate being the return of TSLA. As we can see, the line of best fit tells us that for every 1% increased return we see from the SPY, we should see an extra 1.92% from TSLA. This is expressed by the parameter $\beta$, which is 1.9271 as estimated. Of course, for decresed return we will also see about double the loss in TSLA, so we haven't gained anything, we are just more volatile. Linear Regression vs. Correlation Linear regression gives us a specific linear model, but is limited to cases of linear dependence. Correlation is general to linear and non-linear dependencies, but doesn't give us an actual model. Both are measures of covariance. Linear regression can give us relationship between Y and many independent variables by making X multidimensional. Knowing Parameters vs. Estimates It is very important to keep in mind that all $\alpha$ and $\beta$ parameters estimated by linear regression are just that - estimates. You can never know the underlying true parameters unless you know the physical process producing the data. 
The parameters you estimate today may not be the same analysis done including tomorrow's data, and the underlying true parameters may be moving. As such it is very important when doing actual analysis to pay attention to the standard error of the parameter estimates. More material on the standard error will be presented in a later lecture. One way to get a sense of how stable your parameter estimates are is to estimate them using a rolling window of data and see how much variance there is in the estimates. Example case Now let's see what happens if we regress two purely random variables. End of explanation # Generate ys correlated with xs by adding normally-destributed errors Y = X + 0.2*np.random.randn(100) linreg(X,Y) Explanation: The above shows a fairly uniform cloud of points. It is important to note that even with 100 samples, the line has a visible slope due to random chance. This is why it is crucial that you use statistical tests and not visualizations to verify your results. Now let's make Y dependent on X plus some random noise. End of explanation import seaborn start = '2014-01-01' end = '2015-01-01' asset = get_pricing('TSLA', fields='price', start_date=start, end_date=end) benchmark = get_pricing('SPY', fields='price', start_date=start, end_date=end) # We have to take the percent changes to get to returns # Get rid of the first (0th) element because it is NAN r_a = asset.pct_change()[1:] r_b = benchmark.pct_change()[1:] seaborn.regplot(r_b.values, r_a.values); Explanation: In a situation like the above, the line of best fit does indeed model the dependent variable Y quite well (with a high $R^2$ value). Evaluating and reporting results The regression model relies on several assumptions: * The independent variable is not random. * The variance of the error term is constant across observations. This is important for evaluating the goodness of the fit. * The errors are not autocorrelated. The Durbin-Watson statistic detects this; if it is close to 2, there is no autocorrelation. * The errors are normally distributed. If this does not hold, we cannot use some of the statistics, such as the F-test. If we confirm that the necessary assumptions of the regression model are satisfied, we can safely use the statistics reported to analyze the fit. For example, the $R^2$ value tells us the fraction of the total variation of $Y$ that is explained by the model. When making a prediction based on the model, it's useful to report not only a single value but a confidence interval. The linear regression reports 95% confidence intervals for the regression parameters, and we can visualize what this means using the seaborn library, which plots the regression line and highlights the 95% (by default) confidence interval for the regression line: End of explanation
1,062
Given the following text description, write Python code to implement the functionality described below step by step Description: SparkSQL Lab Step1: HiveContext, a superset of SQLContext, was recommended for most use cases. Please make sure you are using HiveContext now! Part 2 Step2: Show time Step3: (2b) Read from Hive Don't forget the configuration of Hive should be done by placing your hive-site.xml file in conf/. Step4: In this class, you will use pixnet_user_log_1000 for further works Print the schema! What do we have? Step5: How many rows in pixnet_user_log Step6: Part 3 Step7: Part 4 Step8: Part 5 Step9: Don't forget to stop sc
Python Code: from pyspark.sql import SQLContext, Row sqlContext = SQLContext(sc) sqlContext from pyspark.sql import HiveContext, Row sqlContext= HiveContext(sc) sqlContext Explanation: SparkSQL Lab: From this lab, you would write code to execute SQL query in Spark. Makes your analytic life simpler and faster. During this lab we will cover: Part 1: Linking with SparkSQL Part 2: Loading data programmatically Part 3: User-Defined Functions Part 4: Caching for performance Part 5: Your show time - How many authors tagged as spam? Reference for Spark RDD Spark's Python API Part 1: Linking with SparkSQL End of explanation jsonfile = "file:///opt/spark-1.4.1-bin-hadoop2.6/examples/src/main/resources/people.json" df = sqlContext.read.load(jsonfile, format="json") Explanation: HiveContext, a superset of SQLContext, was recommended for most use cases. Please make sure you are using HiveContext now! Part 2: Loading data programmatically (2a) Read local JSON file to DataFrame Now, try to read json file from Spark Example. Thank for the hashed spam data from PIXNET PIXNET HACKATHON 2015 End of explanation # TODO: Replace <FILL IN> with appropriate code df.<FILL IN> #print df's schema df.printSchema() Explanation: Show time: Query top 2 row End of explanation sqlContext.sql("SHOW TABLES").show() Explanation: (2b) Read from Hive Don't forget the configuration of Hive should be done by placing your hive-site.xml file in conf/. End of explanation sqlContext.sql("SELECT * FROM pixnet_user_log_1000").printSchema() Explanation: In this class, you will use pixnet_user_log_1000 for further works Print the schema! What do we have? End of explanation from datetime import datetime start_time = datetime.now() df2 = sqlContext.sql("SELECT * FROM pixnet_user_log_1000") end_time = datetime.now() print df2.count() print('Duration: {}'.format(end_time - start_time)) df2.select('time').show(2) Explanation: How many rows in pixnet_user_log End of explanation #registers this RDD as a temporary table using the given name. df2.registerTempTable("people") # Create an UDF for how long some text is # example from user guide, length function sqlContext.registerFunction("strLenPython", lambda x: len(x)) # split function for parser sqlContext.registerFunction("strDate", lambda x: x.split("T")[0]) # put udf with expected columns results = sqlContext.sql("SELECT author, \ strDate(time) AS dt, \ strLenPython(action) AS lenAct \ FROM people") # print top 5 results results.show(5) Explanation: Part 3: User-Defined Functions In part 3, you will create your first UDF in Spark SQL with elegant lambda End of explanation sqlContext.cacheTable("people") start_time = datetime.now() sqlContext.sql("SELECT * FROM people").count() end_time = datetime.now() print('Duration: {}'.format(end_time - start_time)) sqlContext.sql("SELECT strDate(time) AS dt,\ count(distinct author) AS cnt \ FROM people \ GROUP BY strDate(time)").show(5) sqlContext.uncacheTable("people") Explanation: Part 4: Caching for performance Saving to persistent tables saveAsTable : Saves the contents of this DataFrame to a data source as a table. End of explanation # TODO: Replace <FILL IN> with appropriate code result = <FILL IN> Explanation: Part 5: Your show time - How many authors are tagged as spam from pixnet_user_log? Here are two hive tables you will need: (1) author with action pixnet_user_spam (2) author with spam tag pixnet_user_log_1000 End of explanation sc.stop() Explanation: Don't forget to stop sc End of explanation
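One possible, hedged answer to the Part 5 exercise above (counting authors tagged as spam). The join below assumes that both Hive tables expose an author column; that column name is an assumption based on the lab description rather than something confirmed by the notebook.

result = sqlContext.sql("""
    SELECT COUNT(DISTINCT log.author) AS spam_author_count
    FROM pixnet_user_log_1000 log
    JOIN pixnet_user_spam spam
      ON log.author = spam.author
""")
result.show()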
1,063
Given the following text description, write Python code to implement the functionality described below step by step Description: E2E ML on GCP Step1: Restart the kernel Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages. Step2: Before you begin Set up your Google Cloud project The following steps are required, regardless of your notebook environment. Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the Vertex AI API. If you are running this notebook locally, you will need to install the Cloud SDK. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note Step3: Region You can also change the REGION variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you. Americas Step4: Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial. Step5: Authenticate your Google Cloud account If you are using Vertex AI Workbench Notebooks, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. Otherwise, follow these steps Step6: Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions. Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization. Step7: Only if your bucket doesn't already exist Step8: Finally, validate access to your Cloud Storage bucket by examining its contents Step9: Set up variables Next, set up some variables used throughout the tutorial. Import libraries and define constants Step10: Initialize Vertex AI SDK for Python Initialize the Vertex AI SDK for Python for your project and corresponding bucket. Step11: Vertex AI constants Setup up the following constants for Vertex AI Step12: Set hardware accelerators You can set hardware accelerators for training. Set the variable TRAIN_GPU/TRAIN_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify Step13: Set pre-built containers Set the pre-built Docker container image for training. Set the variable TF to the TensorFlow version of the container image. For example, 2-1 would be version 2.1, and 1-15 would be version 1.15. The following list shows some of the pre-built images available Step14: Set machine type Next, set the machine type to use for training. Set the variable TRAIN_COMPUTE to configure the compute resources for the VMs you will use for for training. 
machine type n1-standard Step15: Standalone Vertex AI Vizer service The Vizier service can be used as a standalone service for selecting the next set of parameters for a trial. Note Step16: Task.py contents In the next cell, you write the contents of the hyperparameter tuning script task.py. I won't go into detail, it's just there for you to browse. In summary Step17: Store hyperparameter tuning script on your Cloud Storage bucket Next, you package the hyperparameter tuning folder into a compressed tar ball, and then store it in your Cloud Storage bucket. Step18: Prepare your machine specification Now define the machine specification for your custom hyperparameter tuning job. This tells Vertex what type of machine instance to provision for the hyperparameter tuning. - machine_type Step19: Prepare your disk specification (optional) Now define the disk specification for your custom hyperparameter tuning job. This tells Vertex what type and size of disk to provision in each machine instance for the hyperparameter tuning. boot_disk_type Step20: Define the worker pool specification Next, you define the worker pool specification for your custom hyperparameter tuning job. The worker pool specification will consist of the following Step21: Create a custom job Use the class CustomJob to create a custom job, such as for hyperparameter tuning, with the following parameters Step22: Create a hyperparameter tuning job Use the class HyperparameterTuningJob to create a hyperparameter tuning job, with the following parameters Step23: Run the hyperparameter tuning job Use the run() method to execute the hyperparameter tuning job. Step24: Display the hyperparameter tuning job trial results After the hyperparameter tuning job has completed, the property trials will return the results for each trial. Step25: Best trial Now look at which trial was the best Step26: Get the Best Model If you used the method of having the service tell the tuning script where to save the model artifacts (DIRECT = False), then the model artifacts for the best model are saved at Step27: Delete the hyperparameter tuning job The method 'delete()' will delete the hyperparameter tuning job. Step28: Vertex AI Hyperparameter Tuning and Vertex AI Vizer service combined The following example demonstrates how to setup, execute and evaluate trials using the Vertex AI Hyperparameter Tuning service with Vizier search service. Create a custom job Use the class CustomJob to create a custom job, such as for hyperparameter tuning, with the following parameters Step29: Create a hyperparameter tuning job Use the class HyperparameterTuningJob to create a hyperparameter tuning job, with the following parameters Step30: Run the hyperparameter tuning job Use the run() method to execute the hyperparameter tuning job. Step31: Display the hyperparameter tuning job trial results After the hyperparameter tuning job has completed, the property trials will return the results for each trial. Step32: Best trial Now look at which trial was the best Step33: Delete the hyperparameter tuning job The method 'delete()' will delete the hyperparameter tuning job. Step34: Standalone Vertex AI Vizer service The Vizier service can be used as a standalone service for selecting the next set of parameters for a trial. Note Step35: Create a study A study is a series of experiments, or trials, that help you optimize your hyperparameters or parameters. In the following example, the goal is to maximize y = x^2 with x in the range of [-10. 10]. 
This example has only one parameter and uses an easily calculated function to help demonstrate how to use Vizier. First, you will create the study using the create_study() method. Step36: Get Vizier study You can get a study using the method get_study(), with the following key/value pairs Step37: Get suggested trial Next, query the Vizier service for a suggested trial(s) using the method suggest_trials, with the following key/value pairs Step38: Evaluate the results After receiving your trial suggestions, evaluate each trial and record each result as a measurement. For example, if the function you are trying to optimize is y = x^2, then you evaluate the function using the trial's suggested value of x. Using a suggested value of 0.1, the function evaluates to y = 0.1 * 0.1, which results in 0.01. Add a measurement After evaluating your trial suggestion to get a measurement, add this measurement to your trial. Use the following commands to store your measurement and send the request. In this example, replace RESULT with the measurement. If the function you are optimizing is y = x^2, and the suggested value of x is 0.1, the result is 0.01. Step39: Delete the Vizier study The method 'delete_study()' will delete the study. Step40: Cleaning up To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial
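To make the evaluation step described above concrete before the full code, here is a tiny sketch of how a suggested value of x could be turned into the measurement payload used later. The y = x^2 objective and the payload shape mirror the add_trial_measurement call in the walkthrough below; the helper name is just an illustration.

# Hedged sketch of the evaluation step: compute the toy objective and package the result
# in the same shape the add_trial_measurement call below expects.
def evaluate_trial(suggested_x):
    y = suggested_x ** 2  # the objective being maximized
    return {"metrics": [{"metric_id": "y", "value": y}]}

measurement = evaluate_trial(0.1)  # -> value 0.01, matching the worked example above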
Python Code: import os # The Vertex AI Workbench Notebook product has specific requirements IS_WORKBENCH_NOTEBOOK = os.getenv("DL_ANACONDA_HOME") IS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists( "/opt/deeplearning/metadata/env_version" ) # Vertex AI Notebook requires dependencies to be installed with '--user' USER_FLAG = "" if IS_WORKBENCH_NOTEBOOK: USER_FLAG = "--user" ! pip3 install --upgrade google-cloud-aiplatform[tensorboard] $USER_FLAG Explanation: E2E ML on GCP: MLOps stage 2 : experimentation: get started with Vertex AI Vizier <table align="left"> <td> <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage2/get_started_vertex_vizier.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage2/get_started_vertex_vizier.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/main/notebooks/community/ml_ops/stage2/get_started_vertex_vizier.ipynb"> <img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo"> Open in Vertex AI Workbench </a> </td> </table> <br/><br/><br/> Overview This tutorial demonstrates how to use Vertex AI for E2E MLOps on Google Cloud in production. This tutorial covers stage 2 : experimentation: get started with Vertex Vizier. Dataset The dataset used for this tutorial is the Boston Housing Prices dataset. The version of the dataset you will use in this tutorial is built into TensorFlow. The trained model predicts the median price of a house in units of 1K USD. Objective In this tutorial, you learn how to use Vertex AI Vizier for when training with Vertex AI. This tutorial uses the following Google Cloud ML services: Vertex AI Training Vertex AI Hyperparameter Tuning Vertex AI Vizier The steps performed include: Hyperparameter tuning with Random algorithm. Hyperparameter tuning with Vizier (Bayesian) algorithm. Recommendations When doing E2E MLOps on Google Cloud, the following are best practices for when to use Vertex AI Vizier for hyperparameter tuning: Grid Search You have a small number of combinations of discrete values. For example, you have the following two hyperparameters and discrete values: batch size: [ 16, 32, 64] lr: [ 0.001, 0.01. 0.1] The total number of combinations is 9 (3 x 3). Random Search You have a small number of hyperparameters, where at least one is a continuous value. For example, you have the following two hyperparameters and ranges: batch size: [ 16, 32, 64] lr: 0.001 .. 0.1 Vizier Search You have either a: large number of hyperparameters and discrete values vast continuous search space multiple of objectives Installations Install the packages required for executing this notebook. End of explanation import os if not os.getenv("IS_TESTING"): # Automatically restart kernel after installs import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) Explanation: Restart the kernel Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages. 
End of explanation PROJECT_ID = "[your-project-id]" # @param {type:"string"} if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]": # Get your GCP project id from gcloud shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID:", PROJECT_ID) ! gcloud config set project $PROJECT_ID Explanation: Before you begin Set up your Google Cloud project The following steps are required, regardless of your notebook environment. Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the Vertex AI API. If you are running this notebook locally, you will need to install the Cloud SDK. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands. Set your project ID If you don't know your project ID, you may be able to get your project ID using gcloud. End of explanation REGION = "[your-region]" # @param {type:"string"} if REGION == "[your-region]": REGION = "us-central1" Explanation: Region You can also change the REGION variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you. Americas: us-central1 Europe: europe-west4 Asia Pacific: asia-east1 You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services. Learn more about Vertex AI regions. End of explanation from datetime import datetime TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") Explanation: Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial. End of explanation # If you are running this notebook in Colab, run this cell and follow the # instructions to authenticate your GCP account. This provides access to your # Cloud Storage bucket and lets you submit training jobs and prediction # requests. import os import sys # If on Vertex AI Workbench, then don't execute this code IS_COLAB = False if not os.path.exists("/opt/deeplearning/metadata/env_version") and not os.getenv( "DL_ANACONDA_HOME" ): if "google.colab" in sys.modules: IS_COLAB = True from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this notebook locally, replace the string below with the # path to your service account key and run this cell to authenticate your GCP # account. elif not os.getenv("IS_TESTING"): %env GOOGLE_APPLICATION_CREDENTIALS '' Explanation: Authenticate your Google Cloud account If you are using Vertex AI Workbench Notebooks, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. Otherwise, follow these steps: In the Cloud Console, go to the Create service account key page. Click Create service account. In the Service account name field, enter a name, and click Create. 
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin. Click Create. A JSON file that contains your key downloads to your local environment. Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell. End of explanation BUCKET_URI = "gs://[your-bucket-name]" # @param {type:"string"} if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://[your-bucket-name]": BUCKET_URI = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP Explanation: Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions. Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization. End of explanation ! gsutil mb -l $REGION $BUCKET_URI Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket. End of explanation ! gsutil ls -al $BUCKET_URI Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents: End of explanation import google.cloud.aiplatform as aip Explanation: Set up variables Next, set up some variables used throughout the tutorial. Import libraries and define constants End of explanation aip.init(project=PROJECT_ID, staging_bucket=BUCKET_URI) Explanation: Initialize Vertex AI SDK for Python Initialize the Vertex AI SDK for Python for your project and corresponding bucket. End of explanation # API service endpoint API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION) # Vertex location root path for your dataset, model and endpoint resources PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION Explanation: Vertex AI constants Setup up the following constants for Vertex AI: API_ENDPOINT: The Vertex AI API service endpoint for Dataset, Model, Job, Pipeline and Endpoint services. PARENT: The Vertex AI location root path for Dataset, Model, Job, Pipeline and Endpoint resources. End of explanation if os.getenv("IS_TESTING_TRAIN_GPU"): TRAIN_GPU, TRAIN_NGPU = ( aip.gapic.AcceleratorType.NVIDIA_TESLA_K80, int(os.getenv("IS_TESTING_TRAIN_GPU")), ) else: TRAIN_GPU, TRAIN_NGPU = (aip.gapic.AcceleratorType.NVIDIA_TESLA_K80, 1) Explanation: Set hardware accelerators You can set hardware accelerators for training. Set the variable TRAIN_GPU/TRAIN_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify: (aip.AcceleratorType.NVIDIA_TESLA_K80, 4) Otherwise specify (None, None) to use a container image to run on a CPU. 
Learn more here hardware accelerator support for your region End of explanation if os.getenv("IS_TESTING_TF"): TF = os.getenv("IS_TESTING_TF") else: TF = "2.1".replace(".", "-") if TF[0] == "2": if TRAIN_GPU: TRAIN_VERSION = "tf-gpu.{}".format(TF) else: TRAIN_VERSION = "tf-cpu.{}".format(TF) else: if TRAIN_GPU: TRAIN_VERSION = "tf-gpu.{}".format(TF) else: TRAIN_VERSION = "tf-cpu.{}".format(TF) TRAIN_IMAGE = "{}-docker.pkg.dev/vertex-ai/training/{}:latest".format( REGION.split("-")[0], TRAIN_VERSION ) print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU) Explanation: Set pre-built containers Set the pre-built Docker container image for training. Set the variable TF to the TensorFlow version of the container image. For example, 2-1 would be version 2.1, and 1-15 would be version 1.15. The following list shows some of the pre-built images available: For the latest list, see Pre-built containers for training. End of explanation if os.getenv("IS_TESTING_TRAIN_MACHINE"): MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE") else: MACHINE_TYPE = "n1-standard" VCPU = "4" TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU print("Train machine type", TRAIN_COMPUTE) Explanation: Set machine type Next, set the machine type to use for training. Set the variable TRAIN_COMPUTE to configure the compute resources for the VMs you will use for for training. machine type n1-standard: 3.75GB of memory per vCPU. n1-highmem: 6.5GB of memory per vCPU n1-highcpu: 0.9 GB of memory per vCPU vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ] Note: The following is not supported for training: standard: 2 vCPUs highcpu: 2, 4 and 8 vCPUs Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs. End of explanation # Make folder for Python hyperparameter tuning script ! rm -rf custom ! mkdir custom # Add package information ! touch custom/README.md setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0" ! echo "$setup_cfg" > custom/setup.cfg setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow==2.5.0',\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())" ! echo "$setup_py" > custom/setup.py pkg_info = "Metadata-Version: 1.0\n\nName: Boston Housing tabular regression\n\nVersion: 0.0.0\n\nSummary: Demostration hyperparameter tuning script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: [email protected]\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex" ! echo "$pkg_info" > custom/PKG-INFO # Make the training subfolder ! mkdir custom/trainer ! touch custom/trainer/__init__.py Explanation: Standalone Vertex AI Vizer service The Vizier service can be used as a standalone service for selecting the next set of parameters for a trial. Note: The service does not execute trials. You create your own trial and execution. Learn more about Using Vizier Vertex AI Hyperparameter Tuning service The following example demonstrates how to setup, execute and evaluate trials using the Vertex AI Hyperparameter Tuning service with random search algorithm. Learn more about Overview of hyperparameter tuning Examine the hyperparameter tuning package Package layout Before you start the hyperparameter tuning, you will look at how a Python package is assembled for a custom hyperparameter tuning job. When unarchived, the package contains the following directory/file layout. 
PKG-INFO README.md setup.cfg setup.py trainer __init__.py task.py The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image. The file trainer/task.py is the Python script for executing the custom hyperparameter tuning job. Note, when we referred to it in the worker pool specification, we replace the directory slash with a dot (trainer.task) and dropped the file suffix (.py). Package Assembly In the following cells, you will assemble the training package. End of explanation %%writefile custom/trainer/task.py # Custom Training for Boston Housing import tensorflow_datasets as tfds import tensorflow as tf from tensorflow.python.client import device_lib from hypertune import HyperTune import numpy as np import argparse import os import sys tfds.disable_progress_bar() parser = argparse.ArgumentParser() parser.add_argument('--model-dir', dest='model_dir', default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.') parser.add_argument('--lr', dest='lr', default=0.001, type=float, help='Learning rate.') parser.add_argument('--decay', dest='decay', default=0.98, type=float, help='Decay rate') parser.add_argument('--units', dest='units', default=64, type=int, help='Number of units.') parser.add_argument('--epochs', dest='epochs', default=20, type=int, help='Number of epochs.') parser.add_argument('--steps', dest='steps', default=200, type=int, help='Number of steps per epoch.') parser.add_argument('--param-file', dest='param_file', default='/tmp/param.txt', type=str, help='Output file for parameters') parser.add_argument('--distribute', dest='distribute', type=str, default='single', help='distributed training strategy') args = parser.parse_args() print('Python Version = {}'.format(sys.version)) print('TensorFlow Version = {}'.format(tf.__version__)) print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found'))) def make_dataset(): # Scaling Boston Housing data features def scale(feature): max = np.max(feature) feature = (feature / max).astype(np.float) return feature, max (x_train, y_train), (x_test, y_test) = tf.keras.datasets.boston_housing.load_data( path="boston_housing.npz", test_split=0.2, seed=113 ) params = [] for _ in range(13): x_train[_], max = scale(x_train[_]) x_test[_], _ = scale(x_test[_]) params.append(max) # store the normalization (max) value for each feature with tf.io.gfile.GFile(args.param_file, 'w') as f: f.write(str(params)) return (x_train, y_train), (x_test, y_test) # Build the Keras model def build_and_compile_dnn_model(): model = tf.keras.Sequential([ tf.keras.layers.Dense(args.units, activation='relu', input_shape=(13,)), tf.keras.layers.Dense(args.units, activation='relu'), tf.keras.layers.Dense(1, activation='linear') ]) model.compile( loss='mse', optimizer=tf.keras.optimizers.RMSprop(learning_rate=args.lr, decay=args.decay)) return model model = build_and_compile_dnn_model() # Instantiate the HyperTune reporting object hpt = HyperTune() # Reporting callback class HPTCallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs=None): global hpt hpt.report_hyperparameter_tuning_metric( hyperparameter_metric_tag='val_loss', metric_value=logs['val_loss'], global_step=epoch) # Train the model BATCH_SIZE = 16 (x_train, y_train), (x_test, y_test) = make_dataset() model.fit(x_train, y_train, epochs=args.epochs, batch_size=BATCH_SIZE, validation_split=0.1, callbacks=[HPTCallback()]) model.save(args.model_dir) Explanation: Task.py contents In the next cell, you write the contents 
of the hyperparameter tuning script task.py. I won't go into detail, it's just there for you to browse. In summary: Parse the command line arguments for the hyperparameter settings for the current trial. Get the directory where to save the model artifacts from the command line (--model_dir), and if not specified, then from the environment variable AIP_MODEL_DIR. Download and preprocess the Boston Housing dataset. Build a DNN model. The number of units per dense layer and learning rate hyperparameter values are used during the build and compile of the model. A definition of a callback HPTCallback which obtains the validation loss at the end of each epoch (on_epoch_end()) and reports it to the hyperparameter tuning service using hpt.report_hyperparameter_tuning_metric(). Train the model with the fit() method and specify a callback which will report the validation loss back to the hyperparameter tuning service. End of explanation ! rm -f custom.tar custom.tar.gz ! tar cvf custom.tar custom ! gzip custom.tar ! gsutil cp custom.tar.gz $BUCKET_URI/trainer_boston.tar.gz Explanation: Store hyperparameter tuning script on your Cloud Storage bucket Next, you package the hyperparameter tuning folder into a compressed tar ball, and then store it in your Cloud Storage bucket. End of explanation if TRAIN_GPU: machine_spec = { "machine_type": TRAIN_COMPUTE, "accelerator_type": TRAIN_GPU, "accelerator_count": TRAIN_NGPU, } else: machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0} Explanation: Prepare your machine specification Now define the machine specification for your custom hyperparameter tuning job. This tells Vertex what type of machine instance to provision for the hyperparameter tuning. - machine_type: The type of GCP instance to provision -- e.g., n1-standard-8. - accelerator_type: The type, if any, of hardware accelerator. In this tutorial if you previously set the variable TRAIN_GPU != None, you are using a GPU; otherwise you will use a CPU. - accelerator_count: The number of accelerators. End of explanation DISK_TYPE = "pd-ssd" # [ pd-ssd, pd-standard] DISK_SIZE = 200 # GB disk_spec = {"boot_disk_type": DISK_TYPE, "boot_disk_size_gb": DISK_SIZE} Explanation: Prepare your disk specification (optional) Now define the disk specification for your custom hyperparameter tuning job. This tells Vertex what type and size of disk to provision in each machine instance for the hyperparameter tuning. boot_disk_type: Either SSD or Standard. SSD is faster, and Standard is less expensive. Defaults to SSD. boot_disk_size_gb: Size of disk in GB. End of explanation JOB_NAME = "custom_job_" + TIMESTAMP MODEL_DIR = "{}/{}".format(BUCKET_URI, JOB_NAME) if not TRAIN_NGPU or TRAIN_NGPU < 2: TRAIN_STRATEGY = "single" else: TRAIN_STRATEGY = "mirror" EPOCHS = 20 STEPS = 100 DIRECT = False if DIRECT: CMDARGS = [ "--model-dir=" + MODEL_DIR, "--epochs=" + str(EPOCHS), "--steps=" + str(STEPS), "--distribute=" + TRAIN_STRATEGY, ] else: CMDARGS = [ "--epochs=" + str(EPOCHS), "--steps=" + str(STEPS), "--distribute=" + TRAIN_STRATEGY, ] worker_pool_spec = [ { "replica_count": 1, "machine_spec": machine_spec, "disk_spec": disk_spec, "python_package_spec": { "executor_image_uri": TRAIN_IMAGE, "package_uris": [BUCKET_URI + "/trainer_boston.tar.gz"], "python_module": "trainer.task", "args": CMDARGS, }, } ] Explanation: Define the worker pool specification Next, you define the worker pool specification for your custom hyperparameter tuning job. 
The worker pool specification will consist of the following: replica_count: The number of instances to provision of this machine type. machine_spec: The hardware specification. disk_spec : (optional) The disk storage specification. python_package: The Python training package to install on the VM instance(s) and which Python module to invoke, along with command line arguments for the Python module. Let's dive deeper now into the python package specification: -executor_image_spec: This is the docker image which is configured for your custom hyperparameter tuning job. -package_uris: This is a list of the locations (URIs) of your python training packages to install on the provisioned instance. The locations need to be in a Cloud Storage bucket. These can be either individual python files or a zip (archive) of an entire package. In the later case, the job service will unzip (unarchive) the contents into the docker image. -python_module: The Python module (script) to invoke for running the custom hyperparameter tuning job. In this example, you will be invoking trainer.task.py -- note that it was not neccessary to append the .py suffix. -args: The command line arguments to pass to the corresponding Pythom module. In this example, you will be setting: - "--model-dir=" + MODEL_DIR : The Cloud Storage location where to store the model artifacts. There are two ways to tell the hyperparameter tuning script where to save the model artifacts: - direct: You pass the Cloud Storage location as a command line argument to your training script (set variable DIRECT = True), or - indirect: The service passes the Cloud Storage location as the environment variable AIP_MODEL_DIR to your training script (set variable DIRECT = False). In this case, you tell the service the model artifact location in the job specification. - "--epochs=" + EPOCHS: The number of epochs for training. - "--steps=" + STEPS: The number of steps (batches) per epoch. - "--distribute=" + TRAIN_STRATEGY" : The hyperparameter tuning distribution strategy to use for single or distributed hyperparameter tuning. - "single": single device. - "mirror": all GPU devices on a single compute instance. - "multi": all GPU devices on all compute instances. End of explanation job = aip.CustomJob( display_name="boston_" + TIMESTAMP, worker_pool_specs=worker_pool_spec ) Explanation: Create a custom job Use the class CustomJob to create a custom job, such as for hyperparameter tuning, with the following parameters: display_name: A human readable name for the custom job. worker_pool_specs: The specification for the corresponding VM instances. End of explanation from google.cloud.aiplatform import hyperparameter_tuning as hpt hpt_job = aip.HyperparameterTuningJob( display_name="boston_" + TIMESTAMP, custom_job=job, metric_spec={ "val_loss": "minimize", }, parameter_spec={ "lr": hpt.DoubleParameterSpec(min=0.001, max=0.1, scale="log"), "units": hpt.IntegerParameterSpec(min=4, max=128, scale="linear"), }, search_algorithm="random", max_trial_count=6, parallel_trial_count=1, ) Explanation: Create a hyperparameter tuning job Use the class HyperparameterTuningJob to create a hyperparameter tuning job, with the following parameters: display_name: A human readable name for the custom job. custom_job: The worker pool spec from this custom job applies to the CustomJobs created in all the trials. metrics_spec: The metrics to optimize. 
The dictionary key is the metric_id, which is reported by your training job, and the dictionary value is the optimization goal of the metric('minimize' or 'maximize'). parameter_spec: The parameters to optimize. The dictionary key is the metric_id, which is passed into your training job as a command line key word argument, and the dictionary value is the parameter specification of the metric. search_algorithm: The search algorithm to use: grid, random and None. If None is specified, the Vizier service (Bayesian) is used. max_trial_count: The maximum number of trials to perform. End of explanation hpt_job.run() Explanation: Run the hyperparameter tuning job Use the run() method to execute the hyperparameter tuning job. End of explanation print(hpt_job.trials) Explanation: Display the hyperparameter tuning job trial results After the hyperparameter tuning job has completed, the property trials will return the results for each trial. End of explanation best = (None, None, None, 0.0) for trial in hpt_job.trials: # Keep track of the best outcome if float(trial.final_measurement.metrics[0].value) > best[3]: try: best = ( trial.id, float(trial.parameters[0].value), float(trial.parameters[1].value), float(trial.final_measurement.metrics[0].value), ) except: best = ( trial.id, float(trial.parameters[0].value), None, float(trial.final_measurement.metrics[0].value), ) print(best) Explanation: Best trial Now look at which trial was the best: End of explanation BEST_MODEL_DIR = MODEL_DIR + "/" + best[0] + "/model" Explanation: Get the Best Model If you used the method of having the service tell the tuning script where to save the model artifacts (DIRECT = False), then the model artifacts for the best model are saved at: MODEL_DIR/&lt;best_trial_id&gt;/model End of explanation hpt_job.delete() Explanation: Delete the hyperparameter tuning job The method 'delete()' will delete the hyperparameter tuning job. End of explanation job = aip.CustomJob( display_name="boston_" + TIMESTAMP, worker_pool_specs=worker_pool_spec ) Explanation: Vertex AI Hyperparameter Tuning and Vertex AI Vizer service combined The following example demonstrates how to setup, execute and evaluate trials using the Vertex AI Hyperparameter Tuning service with Vizier search service. Create a custom job Use the class CustomJob to create a custom job, such as for hyperparameter tuning, with the following parameters: display_name: A human readable name for the custom job. worker_pool_specs: The specification for the corresponding VM instances. End of explanation from google.cloud.aiplatform import hyperparameter_tuning as hpt hpt_job = aip.HyperparameterTuningJob( display_name="boston_" + TIMESTAMP, custom_job=job, metric_spec={ "val_loss": "minimize", }, parameter_spec={ "lr": hpt.DoubleParameterSpec(min=0.0001, max=0.1, scale="log"), "units": hpt.IntegerParameterSpec(min=4, max=512, scale="linear"), }, search_algorithm=None, max_trial_count=12, parallel_trial_count=1, ) Explanation: Create a hyperparameter tuning job Use the class HyperparameterTuningJob to create a hyperparameter tuning job, with the following parameters: display_name: A human readable name for the custom job. custom_job: The worker pool spec from this custom job applies to the CustomJobs created in all the trials. metrics_spec: The metrics to optimize. The dictionary key is the metric_id, which is reported by your training job, and the dictionary value is the optimization goal of the metric('minimize' or 'maximize'). parameter_spec: The parameters to optimize. 
The dictionary key is the metric_id, which is passed into your training job as a command line key word argument, and the dictionary value is the parameter specification of the metric. search_algorithm: The search algorithm to use: grid, random and None. If None is specified, the Vizier service (Bayesian) is used. max_trial_count: The maximum number of trials to perform. End of explanation hpt_job.run() Explanation: Run the hyperparameter tuning job Use the run() method to execute the hyperparameter tuning job. End of explanation print(hpt_job.trials) Explanation: Display the hyperparameter tuning job trial results After the hyperparameter tuning job has completed, the property trials will return the results for each trial. End of explanation best = (None, None, None, 0.0) for trial in hpt_job.trials: # Keep track of the best outcome if float(trial.final_measurement.metrics[0].value) > best[3]: try: best = ( trial.id, float(trial.parameters[0].value), float(trial.parameters[1].value), float(trial.final_measurement.metrics[0].value), ) except: best = ( trial.id, float(trial.parameters[0].value), None, float(trial.final_measurement.metrics[0].value), ) print(best) Explanation: Best trial Now look at which trial was the best: End of explanation hpt_job.delete() Explanation: Delete the hyperparameter tuning job The method 'delete()' will delete the hyperparameter tuning job. End of explanation vizier_client = aip.gapic.VizierServiceClient( client_options=dict(api_endpoint=API_ENDPOINT) ) Explanation: Standalone Vertex AI Vizer service The Vizier service can be used as a standalone service for selecting the next set of parameters for a trial. Note: The service does not execute trials. You create your own trial and execution. Learn more about Using Vizier Create Vizier client Create a client side connection to the Vertex AI Vizier service. End of explanation STUDY_DISPLAY_NAME = "xpow2" + TIMESTAMP param_x = { "parameter_id": "x", "double_value_spec": {"min_value": -10.0, "max_value": 10.0}, } metric_y = {"metric_id": "y", "goal": "MAXIMIZE"} study = { "display_name": STUDY_DISPLAY_NAME, "study_spec": { "algorithm": "RANDOM_SEARCH", "parameters": [param_x], "metrics": [metric_y], }, } study = vizier_client.create_study(parent=PARENT, study=study) STUDY_NAME = study.name print(STUDY_NAME) Explanation: Create a study A study is a series of experiments, or trials, that help you optimize your hyperparameters or parameters. In the following example, the goal is to maximize y = x^2 with x in the range of [-10. 10]. This example has only one parameter and uses an easily calculated function to help demonstrate how to use Vizier. First, you will create the study using the create_study() method. End of explanation study = vizier_client.get_study({"name": STUDY_NAME}) print(study) Explanation: Get Vizier study You can get a study using the method get_study(), with the following key/value pairs: name: The name of the study. End of explanation SUGGEST_COUNT = 1 CLIENT_ID = "1001" response = vizier_client.suggest_trials( {"parent": STUDY_NAME, "suggestion_count": SUGGEST_COUNT, "client_id": CLIENT_ID} ) trials = response.result().trials print(trials) # Get the trial ID of the first trial TRIAL_ID = trials[0].name Explanation: Get suggested trial Next, query the Vizier service for a suggested trial(s) using the method suggest_trials, with the following key/value pairs: parent: The name of the study. suggestion_count: The number of trials to suggest. client_id: blah This call is a long running operation. 
The method result() from the response object will wait until the call has completed. End of explanation RESULT = 0.01 vizier_client.add_trial_measurement( { "trial_name": TRIAL_ID, "measurement": {"metrics": [{"metric_id": "y", "value": RESULT}]}, } ) Explanation: Evaluate the results After receiving your trial suggestions, evaluate each trial and record each result as a measurement. For example, if the function you are trying to optimize is y = x^2, then you evaluate the function using the trial's suggested value of x. Using a suggested value of 0.1, the function evaluates to y = 0.1 * 0.1, which results in 0.01. Add a measurement After evaluating your trial suggestion to get a measurement, add this measurement to your trial. Use the following commands to store your measurement and send the request. In this example, replace RESULT with the measurement. If the function you are optimizing is y = x^2, and the suggested value of x is 0.1, the result is 0.01. End of explanation vizier_client.delete_study({"name": STUDY_NAME}) Explanation: Delete the Vizier study The method 'delete_study()' will delete the study. End of explanation delete_bucket = False if os.getenv("IS_TESTING"): ! gsutil rm -r $BUCKET_URI Explanation: Cleaning up To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial: Cloud Storage Bucket End of explanation
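One step the standalone walkthrough stops short of is closing out the trial once its measurement is recorded; it would slot in just before the delete_study call above. The sketch below is one plausible way to do that with the same vizier_client. The complete_trial and list_optimal_trials request fields are assumptions modeled on the calls already shown, so double-check them against the current Vizier API reference.

# Hedged sketch: mark the trial as finished and ask Vizier for the best trials seen so far.
# Request fields follow the pattern of the calls above and are not a verified spec.
vizier_client.complete_trial(
    {
        "name": TRIAL_ID,
        "final_measurement": {"metrics": [{"metric_id": "y", "value": RESULT}]},
    }
)
best_trials = vizier_client.list_optimal_trials({"parent": STUDY_NAME})
print(best_trials)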
1,064
Given the following text description, write Python code to implement the functionality described below step by step Description: Plotting Every region has two plotting functions, which draw the outlines of all regions Step1: We use the srex regions to illustrate the plotting Step2: Plot all regions Calling plot() on any region without any arguments draws the default map with a PlateCarree() projection and includes the coastlines Step3: Plot options The plot method has a large number of arguments to adjust the layout of the axes. For example, you can pass a custom projection, the labels can display the abbreviation insead of the region number, the ocean can be colored, etc.. This example also shows how to use matplotlib.patheffects to ensure the labels are easily readable without covering too much of the map (compare to the map above) Step4: Plot only a Subset of Regions To plot a selection of regions subset them using indexing Step5: Plotting the region polygons only (no map) Step6: To achieve this, you need to explicitly create the axes
Python Code: import regionmask regionmask.__version__ Explanation: Plotting Every region has two plotting functions, which draw the outlines of all regions: plot: draws the region polygons on a cartopy GeoAxes (map) plot_regions: draws the the region polygons only Import regionmask and check the version: End of explanation srex = regionmask.defined_regions.srex srex Explanation: We use the srex regions to illustrate the plotting: End of explanation srex.plot(); Explanation: Plot all regions Calling plot() on any region without any arguments draws the default map with a PlateCarree() projection and includes the coastlines: End of explanation import cartopy.crs as ccrs import matplotlib.patheffects as pe text_kws = dict( bbox=dict(color="none"), path_effects=[pe.withStroke(linewidth=2, foreground="w")], color="#67000d", fontsize=8, ) ax = srex.plot( projection=ccrs.Robinson(), label="abbrev", add_ocean=True, text_kws=text_kws ) ax.set_global() Explanation: Plot options The plot method has a large number of arguments to adjust the layout of the axes. For example, you can pass a custom projection, the labels can display the abbreviation insead of the region number, the ocean can be colored, etc.. This example also shows how to use matplotlib.patheffects to ensure the labels are easily readable without covering too much of the map (compare to the map above): End of explanation # regions can be selected by number, abbreviation or long name regions = [11, "CEU", "S. Europe/Mediterranean"] # choose a good projection for regional maps proj = ccrs.LambertConformal(central_longitude=15) ax = srex[regions].plot( add_ocean=True, resolution="50m", proj=proj, label="abbrev", text_kws=text_kws, ) # fine tune the extent ax.set_extent([-15, 45, 28, 76], crs=ccrs.PlateCarree()) Explanation: Plot only a Subset of Regions To plot a selection of regions subset them using indexing: End of explanation srex.plot_regions(); Explanation: Plotting the region polygons only (no map) End of explanation import matplotlib.pyplot as plt f, ax = plt.subplots(subplot_kw=dict(projection=ccrs.Robinson())) srex.plot_regions(ax=ax, line_kws=dict(lw=1), text_kws=text_kws) ax.coastlines() ax.set_global() Explanation: To achieve this, you need to explicitly create the axes: End of explanation
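To round off the plotting examples, here is a small sketch that combines the pieces used above (subsetting, explicitly created axes, and plot_regions styling) and writes the result to a file. The filename, extent, and line width are arbitrary choices, and text_kws is the dictionary defined earlier in this example.

# Hedged sketch: outlines for the same European subset on explicitly created axes, saved to disk.
import cartopy.crs as ccrs
import matplotlib.pyplot as plt

fig, ax = plt.subplots(subplot_kw=dict(projection=ccrs.LambertConformal(central_longitude=15)))
srex[[11, "CEU", "S. Europe/Mediterranean"]].plot_regions(
    ax=ax, line_kws=dict(lw=1.5), text_kws=text_kws
)
ax.coastlines(resolution="50m")
ax.set_extent([-15, 45, 28, 76], crs=ccrs.PlateCarree())
fig.savefig("srex_region_outlines.png", dpi=150, bbox_inches="tight")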
1,065
Given the following text description, write Python code to implement the functionality described below step by step Description: Chopsticks! A few researchers set out to determine the optimal length of chopsticks for children and adults. They came up with a measure of how effective a pair of chopsticks performed, called the "Food Pinching Performance." The "Food Pinching Performance" was determined by counting the number of peanuts picked and placed in a cup (PPPC). An investigation for determining the optimum length of chopsticks. Link to Abstract and Paper the abstract below was adapted from the link Chopsticks are one of the most simple and popular hand tools ever invented by humans, but have not previously been investigated by ergonomists. Two laboratory studies were conducted in this research, using a randomised complete block design, to evaluate the effects of the length of the chopsticks on the food-serving performance of adults and children. Thirty-one male junior college students and 21 primary school pupils served as subjects for the experiment to test chopsticks lengths of 180, 210, 240, 270, 300, and 330 mm. The results showed that the food-pinching performance was significantly affected by the length of the chopsticks, and that chopsticks of about 240 and 180 mm long were optimal for adults and pupils, respectively. Based on these findings, the researchers suggested that families with children should provide both 240 and 180 mm long chopsticks. In addition, restaurants could provide 210 mm long chopsticks, considering the trade-offs between ergonomics and cost. For the rest of this project, answer all questions based only on the part of the experiment analyzing the thirty-one adult male college students. Download the data set for the adults, then answer the following questions based on the abstract and the data set. If you double click on this cell, you will see the text change so that all of the formatting is removed. This allows you to edit this block of text. This block of text is written using Markdown, which is a way to format text using headers, links, italics, and many other options. You will learn more about Markdown later in the Nanodegree Program. Hit shift + enter or shift + return to show the formatted text. 1. What is the independent variable in the experiment? Chopstick length is the independent variable 2. What is the dependent variable in the experiment? Food Pinching Efficiency 3. How is the dependent variable operationally defined? The Food Pinching Efficiency was defined as the amount of food that the chopsticks could grasp based on the effort applied. 4. Based on the description of the experiment and the data set, list at least two variables that you know were controlled. The participants of the experiment - 31 male junior college students and 21 primary school pupils The food used to measure the pinching efficiency - peanuts One great advantage of ipython notebooks is that you can document your data analysis using code, add comments to the code, or even add blocks of text using Markdown. These notebooks allow you to collaborate with others and share your work. For now, let's see some code for doing statistics. Step1: Let's do a basic statistical calculation on the data using code! Run the block of code below to calculate the average "Food Pinching Efficiency" for all 31 participants and all chopstick lengths. 
Step2: This number is helpful, but it doesn't tell us which of the chopstick lengths performed best for the thirty-one male junior college students. Let's break down the data by chopstick length. The next block of code will generate the average "Food Pinching Efficiency" for each chopstick length. Run the block of code below. Step3: 5. Which chopstick length performed the best for the group of thirty-one male junior college students? 240
Python Code: import pandas as pd # pandas is a software library for data manipulation and analysis # We commonly use shorter nicknames for certain packages. Pandas is often abbreviated to pd. # hit shift + enter to run this cell or block of code path = r'~/Downloads/chopstick-effectiveness.csv' # Change the path to the location where the chopstick-effectiveness.csv file is located on your computer. # If you get an error when running this block of code, be sure the chopstick-effectiveness.csv is located at the path on your computer. dataFrame = pd.read_csv(path) dataFrame Explanation: Chopsticks! A few researchers set out to determine the optimal length of chopsticks for children and adults. They came up with a measure of how effective a pair of chopsticks performed, called the "Food Pinching Performance." The "Food Pinching Performance" was determined by counting the number of peanuts picked and placed in a cup (PPPC). An investigation for determining the optimum length of chopsticks. Link to Abstract and Paper the abstract below was adapted from the link Chopsticks are one of the most simple and popular hand tools ever invented by humans, but have not previously been investigated by ergonomists. Two laboratory studies were conducted in this research, using a randomised complete block design, to evaluate the effects of the length of the chopsticks on the food-serving performance of adults and children. Thirty-one male junior college students and 21 primary school pupils served as subjects for the experiment to test chopsticks lengths of 180, 210, 240, 270, 300, and 330 mm. The results showed that the food-pinching performance was significantly affected by the length of the chopsticks, and that chopsticks of about 240 and 180 mm long were optimal for adults and pupils, respectively. Based on these findings, the researchers suggested that families with children should provide both 240 and 180 mm long chopsticks. In addition, restaurants could provide 210 mm long chopsticks, considering the trade-offs between ergonomics and cost. For the rest of this project, answer all questions based only on the part of the experiment analyzing the thirty-one adult male college students. Download the data set for the adults, then answer the following questions based on the abstract and the data set. If you double click on this cell, you will see the text change so that all of the formatting is removed. This allows you to edit this block of text. This block of text is written using Markdown, which is a way to format text using headers, links, italics, and many other options. You will learn more about Markdown later in the Nanodegree Program. Hit shift + enter or shift + return to show the formatted text. 1. What is the independent variable in the experiment? Chopstick length is the independent variable 2. What is the dependent variable in the experiment? Food Pinching Efficiency 3. How is the dependent variable operationally defined? The Food Pinching Efficiency was defined as the amount of food that the chopsticks could grasp based on the effort applied. 4. Based on the description of the experiment and the data set, list at least two variables that you know were controlled. The participants of the experiment - 31 male junior college students and 21 primary school pupils The food used to measure the pinching efficiency - peanuts One great advantage of ipython notebooks is that you can document your data analysis using code, add comments to the code, or even add blocks of text using Markdown. 
These notebooks allow you to collaborate with others and share your work. For now, let's see some code for doing statistics. End of explanation dataFrame['Food.Pinching.Efficiency'].mean() Explanation: Let's do a basic statistical calculation on the data using code! Run the block of code below to calculate the average "Food Pinching Efficiency" for all 31 participants and all chopstick lengths. End of explanation meansByChopstickLength = dataFrame.groupby('Chopstick.Length')['Food.Pinching.Efficiency'].mean().reset_index() meansByChopstickLength # reset_index() changes Chopstick.Length from an index to column. Instead of the index being the length of the chopsticks, the index is the row numbers 0, 1, 2, 3, 4, 5. Explanation: This number is helpful, but the number doesn't let us know which of the chopstick lengths performed best for the thirty-one male junior college students. Let's break down the data by chopstick length. The next block of code will generate the average "Food Pinching Effeciency" for each chopstick length. Run the block of code below. End of explanation # Causes plots to display within the notebook rather than in a new window %pylab inline import matplotlib.pyplot as plt plt.scatter(x=meansByChopstickLength['Chopstick.Length'], y=meansByChopstickLength['Food.Pinching.Efficiency']) # title="") plt.xlabel("Length in mm") plt.ylabel("Efficiency in PPPC") plt.title("Average Food Pinching Efficiency by Chopstick Length") plt.show() Explanation: 5. Which chopstick length performed the best for the group of thirty-one male junior college students? 240 End of explanation
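The answer above is read off the table and plot by eye; if you prefer to compute it, a short pandas sketch using only the meansByChopstickLength frame built above is:

# Hedged sketch: pick the chopstick length with the highest mean Food Pinching Efficiency.
best_row = meansByChopstickLength.loc[meansByChopstickLength['Food.Pinching.Efficiency'].idxmax()]
print("Best length: {:.0f} mm (mean efficiency {:.2f} PPPC)".format(
    best_row['Chopstick.Length'], best_row['Food.Pinching.Efficiency']))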
1,066
Given the following text description, write Python code to implement the functionality described below step by step Description: Your first neural network In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more. Step1: Load and prepare the data A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon! Step2: Checking out the data This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above. Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model. Step3: Dummy variables Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies(). Step4: Scaling target variables To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1. The scaling factors are saved so we can go backwards when we use the network for predictions. Step5: Splitting the data into training, testing, and validation sets We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders. Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set). Step7: Time to build the network Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters Step8: Training the network Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops. You'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. 
You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later. Choose the number of epochs This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting. Choose the learning rate This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge. Choose the number of hidden nodes The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. Step9: Check out your predictions Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. Step10: Thinking about your results Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does? Note
Python Code: %matplotlib inline %config InlineBackend.figure_format = 'retina' import numpy as np import pandas as pd import matplotlib.pyplot as plt Explanation: Your first neural network In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more. End of explanation data_path = 'Bike-Sharing-Dataset/hour.csv' rides = pd.read_csv(data_path) rides.head() Explanation: Load and prepare the data A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon! End of explanation rides[:24*10].plot(x='dteday', y='cnt') Explanation: Checking out the data This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above. Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model. End of explanation dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday'] for each in dummy_fields: dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False) rides = pd.concat([rides, dummies], axis=1) fields_to_drop = ['instant', 'dteday', 'season', 'weathersit', 'weekday', 'atemp', 'mnth', 'workingday', 'hr'] data = rides.drop(fields_to_drop, axis=1) data.head() Explanation: Dummy variables Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies(). End of explanation quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed'] # Store scalings in a dictionary so we can convert back later scaled_features = {} for each in quant_features: mean, std = data[each].mean(), data[each].std() scaled_features[each] = [mean, std] data.loc[:, each] = (data[each] - mean)/std Explanation: Scaling target variables To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1. The scaling factors are saved so we can go backwards when we use the network for predictions. End of explanation # Save the last 21 days test_data = data[-21*24:] data = data[:-21*24] # Separate the data into features and targets target_fields = ['cnt', 'casual', 'registered'] features, targets = data.drop(target_fields, axis=1), data[target_fields] test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields] Explanation: Splitting the data into training, testing, and validation sets We'll save the last 21 days of the data to use as a test set after we've trained the network. 
We'll use this set to make predictions and compare them with the actual number of riders. End of explanation # Hold out the last 60 days of the remaining data as a validation set train_features, train_targets = features[:-60*24], targets[:-60*24] val_features, val_targets = features[-60*24:], targets[-60*24:] Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set). End of explanation class NeuralNetwork(object): def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Set number of nodes in input, hidden and output layers. self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes # Initialize weights self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5, (self.hidden_nodes, self.input_nodes)) self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5, (self.output_nodes, self.hidden_nodes)) self.lr = learning_rate #### Set this to your implemented sigmoid function #### # Activation function is the sigmoid function self.activation_function = lambda x: 1/(1+np.exp(-x)) def train(self, inputs_list, targets_list): # Convert inputs list to 2d array inputs = np.array(inputs_list, ndmin=2).T targets = np.array(targets_list, ndmin=2).T #### Implement the forward pass here #### ### Forward pass ### # TODO: Hidden layer hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) hidden_outputs = self.activation_function(hidden_inputs) # TODO: Output layer final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) # signals into final output layer final_outputs = final_inputs # signals from final output layer #### Implement the backward pass here #### ### Backward pass ### # TODO: Output error output_errors = targets - final_outputs # Output layer error is the difference between desired target and actual output. # TODO: Backpropagated error hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors) # errors propagated to the hidden layer hidden_grad = hidden_outputs*(1-hidden_outputs) # TODO: Update the weights self.weights_hidden_to_output += self.lr * np.dot(output_errors, hidden_outputs.T)# update hidden-to-output weights with gradient descent step self.weights_input_to_hidden += self.lr * np.dot((hidden_grad*hidden_errors), inputs.T) # update input-to-hidden weights with gradient descent step def run(self, inputs_list): # Run a forward pass through the network inputs = np.array(inputs_list, ndmin=2).T #### Implement the forward pass here #### # TODO: Hidden layer hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer # TODO: Output layer final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)# signals into final output layer final_outputs = final_inputs # signals from final output layer return final_outputs def MSE(y, Y): return np.mean((y-Y)**2) Explanation: Time to build the network Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes. The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. 
The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation. We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation. Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$. Below, you have these tasks: 1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function. 2. Implement the forward pass in the train method. 3. Implement the backpropagation algorithm in the train method, including calculating the output error. 4. Implement the forward pass in the run method. End of explanation import sys ### Set the hyperparameters here ### epochs = 3000 learning_rate = 0.01 hidden_nodes = 15 output_nodes = 1 N_i = train_features.shape[1] network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate) losses = {'train':[], 'validation':[]} for e in range(epochs): # Go through a random batch of 128 records from the training data set batch = np.random.choice(train_features.index, size=128) for record, target in zip(train_features.ix[batch].values, train_targets.ix[batch]['cnt']): network.train(record, target) # Printing out the training progress train_loss = MSE(network.run(train_features), train_targets['cnt'].values) val_loss = MSE(network.run(val_features), val_targets['cnt'].values) sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \ + "% ... Training loss: " + str(train_loss)[:5] \ + " ... Validation loss: " + str(val_loss)[:5]) losses['train'].append(train_loss) losses['validation'].append(val_loss) plt.plot(losses['train'], label='Training loss') plt.plot(losses['validation'], label='Validation loss') plt.legend() plt.ylim(ymax=0.5) Explanation: Training the network Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops. You'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later. Choose the number of epochs This is the number of times the dataset will pass through the network, each time updating the weights. 
As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting. Choose the learning rate This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge. Choose the number of hidden nodes The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. End of explanation fig, ax = plt.subplots(figsize=(8,4)) mean, std = scaled_features['cnt'] predictions = network.run(test_features)*std + mean ax.plot(predictions[0], label='Prediction') ax.plot((test_targets['cnt']*std + mean).values, label='Data') ax.set_xlim(right=len(predictions)) ax.legend() dates = pd.to_datetime(rides.ix[test_data.index]['dteday']) dates = dates.apply(lambda d: d.strftime('%b %d')) ax.set_xticks(np.arange(len(dates))[12::24]) _ = ax.set_xticklabels(dates[12::24], rotation=45) Explanation: Check out your predictions Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. 
End of explanation import unittest inputs = [0.5, -0.2, 0.1] targets = [0.4] test_w_i_h = np.array([[0.1, 0.4, -0.3], [-0.2, 0.5, 0.2]]) test_w_h_o = np.array([[0.3, -0.1]]) class TestMethods(unittest.TestCase): ########## # Unit tests for data loading ########## def test_data_path(self): # Test that file path to dataset has been unaltered self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv') def test_data_loaded(self): # Test that data frame loaded self.assertTrue(isinstance(rides, pd.DataFrame)) ########## # Unit tests for network functionality ########## def test_activation(self): network = NeuralNetwork(3, 2, 1, 0.5) # Test that the activation function is a sigmoid self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5)))) def test_train(self): # Test that weights are updated correctly on training network = NeuralNetwork(3, 2, 1, 0.5) network.weights_input_to_hidden = test_w_i_h.copy() network.weights_hidden_to_output = test_w_h_o.copy() network.train(inputs, targets) self.assertTrue(np.allclose(network.weights_hidden_to_output, np.array([[ 0.37275328, -0.03172939]]))) self.assertTrue(np.allclose(network.weights_input_to_hidden, np.array([[ 0.10562014, 0.39775194, -0.29887597], [-0.20185996, 0.50074398, 0.19962801]]))) def test_run(self): # Test correctness of run method network = NeuralNetwork(3, 2, 1, 0.5) network.weights_input_to_hidden = test_w_i_h.copy() network.weights_hidden_to_output = test_w_h_o.copy() self.assertTrue(np.allclose(network.run(inputs), 0.09998924)) suite = unittest.TestLoader().loadTestsFromModule(TestMethods()) unittest.TextTestRunner().run(suite) Explanation: Thinking about your results Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does? Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter ANSWER: The model can predict really well except for abnormal days, such as holyday season, which is the case between DEC22 and New year, that happens probably because people get together or travel in these times, being mostly inside their houses. Unit tests Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project. End of explanation
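One small numerical aside on the backward pass implemented above: the hidden-layer term hidden_outputs*(1-hidden_outputs) is the derivative of the sigmoid, and the output activation f(x)=x has derivative 1, which is why the output error is used directly. The check below is a standalone sketch (it does not touch the NeuralNetwork class) that confirms the sigmoid identity against a finite difference.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-4.0, 4.0, 9)
analytic = sigmoid(z) * (1.0 - sigmoid(z))             # sigma'(z) = sigma(z) * (1 - sigma(z))
h = 1e-6
numeric = (sigmoid(z + h) - sigmoid(z - h)) / (2 * h)  # central finite difference
print(np.allclose(analytic, numeric, atol=1e-6))       # True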
1,067
Given the following text description, write Python code to implement the functionality described below step by step Description: Temporal sampling Sampling nothing Let's evolve 40 populations to mutation-drift equilibrium Step1: Take samples from population Step2: The output from this particular sampler type is a generator. Let's look at the first element of the first sample Step3: These "genotypes" blocks can be used to calculate summary statistics. See the example on using pylibseq for that task. Step4: Each element in d[0] is a tuple
Python Code: import fwdpy as fp import numpy as np import pandas as pd nregions=[fp.Region(0,1,1)] sregions=[fp.GammaS(0,1,0.1,0.1,0.1,1.0), fp.GammaS(0,1,0.9,-0.2,9.0,0.0) ] recregions=nregions N=1000 nlist=np.array([N]*(10*N),dtype=np.uint32) mutrate_neutral=50.0/float(4*N) recrate=mutrate_neutral mutrate_sel=mutrate_neutral*0.2 rng=fp.GSLrng(101) pops=fp.SpopVec(40,1000) sampler=fp.NothingSampler(len(pops)) #This function implicitly uses a "nothing sampler" fp.evolve_regions_sampler(rng,pops,sampler,nlist, mutrate_neutral, 0.0, #No selected mutations.... recrate, nregions,sregions,recregions, #Only sample every 10N generations, #which is fine b/c we're not sampling anything 10*N) Explanation: Temporal sampling Sampling nothing Let's evolve 40 populations to mutation-drift equilibrium: End of explanation #Take sample of size n=20 sampler=fp.PopSampler(len(pops),20,rng) fp.evolve_regions_sampler(rng,pops,sampler, nlist[:N], #Evolve for N generations mutrate_neutral, mutrate_sel, recrate, nregions,sregions,recregions, #Sampler every 100 generations 100) Explanation: Take samples from population End of explanation data=sampler[0] print data[0] Explanation: The output from this particular sampler type is a generator. Let's look at the first element of the first sample: End of explanation print data[1] Explanation: These "genotypes" blocks can be used to caculate summary statistics. See the example on using pylibseq for that task. End of explanation #The first element are the genotypes data[0][0] #The first element in the genotypes are the neutral variants. #The first value is the position. The second value is a string #of genotypes for chromosomes 1 through n. 0 = ancestral/1=derived data[0][0][0] #Same format for selected variants data[0][0][1] #This is a dict relating to info re: #the selected variants. #dcount = derived freq in sample #ftime = fixation time. 2^32-1 = has not fixed #generation = generation when sampling occurred #h = dominance #origin = generation when mutation entered population #p = population frequency #s = effect size/selection coefficient data[0][1] Explanation: Each element in d[0] is a tuple: End of explanation
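A short standalone aside on the sample format described above: each neutral or selected block is a list of (position, genotype-string) pairs, so basic summaries can be computed with plain Python. The small block below is made-up illustration data, not actual fwdpy output.
neutral_block = [(0.12, "01001"), (0.35, "11011"), (0.80, "00010")]   # hypothetical (position, genotypes) pairs
for pos, genotypes in neutral_block:
    derived = genotypes.count("1")                   # '1' marks the derived allele, '0' the ancestral one
    print(pos, derived, derived / len(genotypes))    # position, derived count, derived-allele frequency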
1,068
Given the following text description, write Python code to implement the functionality described below step by step Description: Extra 3.1 - Historical Provenance - Application 2 Step1: Labelling data Based on its trust value, we categorise the data entity into two sets Step2: Having used the trust valuue to label all the data entities, we remove the trust_value column from the data frame. Step3: Filtering data We split the dataset into three Step4: Balancing Data This section explore the balance of each of the three datasets and balance them using the SMOTE Oversampling Method. Step5: Buildings Step6: Balancing the building dataset Step7: Routes Step8: Balancing the route dataset Step9: Route Sets Step10: Balancing the route set dataset Step11: Cross Validation We now run the cross validation tests on the three balanaced datasets (df_buildings, df_routes, and df_routesets) using all the features (combined), only the generic network metrics (generic), and only the provenance-specific network metrics (provenance). Please refer to Cross Validation Code.ipynb for the detailed description of the cross validation code. Step12: Building Classification We test the classification of buildings, collect individual accuracy scores results and the importance of every feature in each test in importances (both are Pandas Dataframes). These two tables will also be used to collect data from testing the classification of routes and route sets later. Step13: Route Classification Step14: Route Set Classification Step15: Charting the accuracy scores Step16: Converting the accuracy score from [0, 1] to percentage, i.e [0, 100]
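As a small aside before the code: the trust-value thresholding in Step1 can also be written in a vectorized form with numpy.where. The tiny frame below is made up purely to illustrate the rule; the real data is read from the CSV file in the code that follows.
import numpy as np
import pandas as pd

toy = pd.DataFrame({"trust_value": [0.90, 0.50, 0.76, 0.20]})   # made-up trust values
trust_threshold = 0.75
toy["label"] = np.where(toy.trust_value >= trust_threshold, "Trusted", "Uncertain")
print(toy)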
Python Code: import pandas as pd df = pd.read_csv("collabmap/ancestor-graphs.csv", index_col='id') df.head() df.describe() Explanation: Extra 3.1 - Historical Provenance - Application 2: CollabMap Data Quality Assessing the quality of crowdsourced data in CollabMap from their provenance. In this notebook, we explore the performance of classification using the provenance of a data entity instead of its dependencies (as shown here and in the paper). In order to distinguish between the two, we call the former historical provenance and the latter forward provenance. Apart from using the historical provenance, all other steps are the same as the original experiments. Goal: To determine if the provenance network analytics method can identify trustworthy data (i.e. buildings, routes, and route sets) contributed by crowd workers in CollabMap. Classification labels: $\mathcal{L} = \left{ \textit{trusted}, \textit{uncertain} \right} $. Training data: Buildings: 5175 Routes: 4710 Route sets: 4997 Reading data [Changed] The CollabMap dataset based on historical provenance is provided in the collabmap/ancestor-graphs.csv file, each row corresponds to a building, route, or route sets created in the application: * id: the identifier of the data entity (i.e. building/route/route set). * trust_value: the beta trust value calculated from the votes for the data entity. * The remaining columns provide the provenance network metrics calculated from the historical provenance graph of the entity. End of explanation trust_threshold = 0.75 df['label'] = df.apply(lambda row: 'Trusted' if row.trust_value >= trust_threshold else 'Uncertain', axis=1) df.head() # The new label column is the last column below Explanation: Labelling data Based on its trust value, we categorise the data entity into two sets: trusted and uncertain. Here, the threshold for the trust value, whose range is [0, 1], is chosen to be 0.75. End of explanation # We will not use trust value from now on df.drop('trust_value', axis=1, inplace=True) df.shape # the dataframe now have 23 columns (22 metrics + label) Explanation: Having used the trust valuue to label all the data entities, we remove the trust_value column from the data frame. End of explanation df_buildings = df.filter(like="Building", axis=0) df_routes = df.filter(regex="^Route\d", axis=0) df_routesets = df.filter(like="RouteSet", axis=0) df_buildings.shape, df_routes.shape, df_routesets.shape # The number of data points in each dataset Explanation: Filtering data We split the dataset into three: buildings, routes, and route sets. End of explanation from analytics import balance_smote Explanation: Balancing Data This section explore the balance of each of the three datasets and balance them using the SMOTE Oversampling Method. 
End of explanation df_buildings.label.value_counts() Explanation: Buildings End of explanation df_buildings = balance_smote(df_buildings) Explanation: Balancing the building dataset: End of explanation df_routes.label.value_counts() Explanation: Routes End of explanation df_routes = balance_smote(df_routes) Explanation: Balancing the route dataset: End of explanation df_routesets.label.value_counts() Explanation: Route Sets End of explanation df_routesets = balance_smote(df_routesets) Explanation: Balancing the route set dataset: End of explanation from analytics import test_classification Explanation: Cross Validation We now run the cross validation tests on the three balanaced datasets (df_buildings, df_routes, and df_routesets) using all the features (combined), only the generic network metrics (generic), and only the provenance-specific network metrics (provenance). Please refer to Cross Validation Code.ipynb for the detailed description of the cross validation code. End of explanation # Cross validation test on building classification res, imps = test_classification(df_buildings) # adding the Data Type column res['Data Type'] = 'Building' imps['Data Type'] = 'Building' # storing the results and importance of features results = res importances = imps # showing a few newest rows results.tail() Explanation: Building Classification We test the classification of buildings, collect individual accuracy scores results and the importance of every feature in each test in importances (both are Pandas Dataframes). These two tables will also be used to collect data from testing the classification of routes and route sets later. End of explanation # Cross validation test on route classification res, imps = test_classification(df_routes) # adding the Data Type column res['Data Type'] = 'Route' imps['Data Type'] = 'Route' # storing the results and importance of features results = results.append(res, ignore_index=True) importances = importances.append(imps, ignore_index=True) # showing a few newest rows results.tail() Explanation: Route Classification End of explanation # Cross validation test on route classification res, imps = test_classification(df_routesets) # adding the Data Type column res['Data Type'] = 'Route Set' imps['Data Type'] = 'Route Set' # storing the results and importance of features results = results.append(res, ignore_index=True) importances = importances.append(imps, ignore_index=True) # showing a few newest rows results.tail() Explanation: Route Set Classification End of explanation %matplotlib inline import seaborn as sns sns.set_style("whitegrid") sns.set_context("paper", font_scale=1.4) Explanation: Charting the accuracy scores End of explanation results.Accuracy = results.Accuracy * 100 results.head() from matplotlib.font_manager import FontProperties fontP = FontProperties() fontP.set_size(12) pal = sns.light_palette("seagreen", n_colors=3, reverse=True) plot = sns.barplot(x="Data Type", y="Accuracy", hue='Metrics', palette=pal, errwidth=1, capsize=0.02, data=results) plot.set_ylim(40, 90) plot.legend(loc='upper center', bbox_to_anchor=(0.5, 1.0), ncol=3) plot.set_ylabel('Accuracy (%)') Explanation: Converting the accuracy score from [0, 1] to percentage, i.e [0, 100]: End of explanation
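A note on the balance_smote helper imported from the local analytics module above: its implementation is not shown here. A minimal helper with the same behaviour could plausibly be built on the third-party imbalanced-learn package as sketched below; this is an assumption about what it does (oversample the minority class to parity), not the authors' actual code.
from imblearn.over_sampling import SMOTE     # pip install imbalanced-learn
import pandas as pd

def balance_smote_sketch(df, label_col="label"):
    # Oversample the minority class with SMOTE so both labels end up with equal counts.
    features = df.drop(label_col, axis=1)
    labels = df[label_col]
    features_res, labels_res = SMOTE().fit_resample(features, labels)
    balanced = pd.DataFrame(features_res, columns=features.columns)
    balanced[label_col] = labels_res
    return balanced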
1,069
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2021 The TF-Agents Authors. Step1: DQN C51/Rainbow <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https Step2: Hyperparameters Step3: Environment Load the environment as before, with one for training and one for evaluation. Here we use CartPole-v1 (vs. CartPole-v0 in the DQN tutorial), which has a larger max reward of 500 rather than 200. Step4: Agent C51 is a Q-learning algorithm based on DQN. Like DQN, it can be used on any environment with a discrete action space. The main difference between C51 and DQN is that rather than simply predicting the Q-value for each state-action pair, C51 predicts a histogram model for the probability distribution of the Q-value Step5: We also need an optimizer to train the network we just created, and a train_step_counter variable to keep track of how many times the network was updated. Note that one other significant difference from vanilla DqnAgent is that we now need to specify min_q_value and max_q_value as arguments. These specify the most extreme values of the support (in other words, the most extreme of the 51 atoms on either side). Make sure to choose these appropriately for your particular environment. Here we use -20 and 20. Step6: One last thing to note is that we also added an argument to use n-step updates with $n$ = 2. In single-step Q-learning ($n$ = 1), we only compute the error between the Q-values at the current time step and the next time step using the single-step return (based on the Bellman optimality equation). The single-step return is defined as Step7: Data Collection As in the DQN tutorial, set up the replay buffer and the initial data collection with the random policy. Step8: Training the agent The training loop involves both collecting data from the environment and optimizing the agent's networks. Along the way, we will occasionally evaluate the agent's policy to see how we are doing. The following will take ~7 minutes to run. Step9: Visualization Plots We can plot return vs global steps to see the performance of our agent. In Cartpole-v1, the environment gives a reward of +1 for every time step the pole stays up, and since the maximum number of steps is 500, the maximum possible return is also 500. Step11: Videos It is helpful to visualize the performance of an agent by rendering the environment at each step. Before we do that, let us first create a function to embed videos in this colab. Step12: The following code visualizes the agent's policy for a few episodes
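Before the code, a standalone numeric aside on the idea behind C51 described in the steps above: the network outputs a categorical distribution over a fixed set of support points (atoms), and a scalar Q-value is recovered as the probability-weighted sum of those atoms. The logits below are random stand-ins for a network output; nothing here uses the TF-Agents API.
import numpy as np

num_atoms, min_q_value, max_q_value = 51, -20.0, 20.0
atoms = np.linspace(min_q_value, max_q_value, num_atoms)   # fixed support z_i
logits = np.random.randn(num_atoms)                        # stand-in for the network's output for one action
probs = np.exp(logits) / np.exp(logits).sum()              # softmax over the atoms
q_value = float(np.sum(probs * atoms))                     # E[Z] = sum_i p_i * z_i
print(q_value)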
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Copyright 2021 The TF-Agents Authors. End of explanation !sudo apt-get update !sudo apt-get install -y xvfb ffmpeg freeglut3-dev !pip install 'imageio==2.4.0' !pip install pyvirtualdisplay !pip install tf-agents !pip install pyglet from __future__ import absolute_import from __future__ import division from __future__ import print_function import base64 import imageio import IPython import matplotlib import matplotlib.pyplot as plt import PIL.Image import pyvirtualdisplay import tensorflow as tf from tf_agents.agents.categorical_dqn import categorical_dqn_agent from tf_agents.drivers import dynamic_step_driver from tf_agents.environments import suite_gym from tf_agents.environments import tf_py_environment from tf_agents.eval import metric_utils from tf_agents.metrics import tf_metrics from tf_agents.networks import categorical_q_network from tf_agents.policies import random_tf_policy from tf_agents.replay_buffers import tf_uniform_replay_buffer from tf_agents.trajectories import trajectory from tf_agents.utils import common # Set up a virtual display for rendering OpenAI gym environments. display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start() Explanation: DQN C51/Rainbow <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/agents/tutorials/9_c51_tutorial"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png" /> View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/agents/blob/master/docs/tutorials/9_c51_tutorial.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/agents/blob/master/docs/tutorials/9_c51_tutorial.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/agents/docs/tutorials/9_c51_tutorial.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Introduction This example shows how to train a Categorical DQN (C51) agent on the Cartpole environment using the TF-Agents library. Make sure you take a look through the DQN tutorial as a prerequisite. This tutorial will assume familiarity with the DQN tutorial; it will mainly focus on the differences between DQN and C51. 
Setup If you haven't installed tf-agents yet, run: End of explanation env_name = "CartPole-v1" # @param {type:"string"} num_iterations = 15000 # @param {type:"integer"} initial_collect_steps = 1000 # @param {type:"integer"} collect_steps_per_iteration = 1 # @param {type:"integer"} replay_buffer_capacity = 100000 # @param {type:"integer"} fc_layer_params = (100,) batch_size = 64 # @param {type:"integer"} learning_rate = 1e-3 # @param {type:"number"} gamma = 0.99 log_interval = 200 # @param {type:"integer"} num_atoms = 51 # @param {type:"integer"} min_q_value = -20 # @param {type:"integer"} max_q_value = 20 # @param {type:"integer"} n_step_update = 2 # @param {type:"integer"} num_eval_episodes = 10 # @param {type:"integer"} eval_interval = 1000 # @param {type:"integer"} Explanation: Hyperparameters End of explanation train_py_env = suite_gym.load(env_name) eval_py_env = suite_gym.load(env_name) train_env = tf_py_environment.TFPyEnvironment(train_py_env) eval_env = tf_py_environment.TFPyEnvironment(eval_py_env) Explanation: Environment Load the environment as before, with one for training and one for evaluation. Here we use CartPole-v1 (vs. CartPole-v0 in the DQN tutorial), which has a larger max reward of 500 rather than 200. End of explanation categorical_q_net = categorical_q_network.CategoricalQNetwork( train_env.observation_spec(), train_env.action_spec(), num_atoms=num_atoms, fc_layer_params=fc_layer_params) Explanation: Agent C51 is a Q-learning algorithm based on DQN. Like DQN, it can be used on any environment with a discrete action space. The main difference between C51 and DQN is that rather than simply predicting the Q-value for each state-action pair, C51 predicts a histogram model for the probability distribution of the Q-value: By learning the distribution rather than simply the expected value, the algorithm is able to stay more stable during training, leading to improved final performance. This is particularly true in situations with bimodal or even multimodal value distributions, where a single average does not provide an accurate picture. In order to train on probability distributions rather than on values, C51 must perform some complex distributional computations in order to calculate its loss function. But don't worry, all of this is taken care of for you in TF-Agents! To create a C51 Agent, we first need to create a CategoricalQNetwork. The API of the CategoricalQNetwork is the same as that of the QNetwork, except that there is an additional argument num_atoms. This represents the number of support points in our probability distribution estimates. (The above image includes 10 support points, each represented by a vertical blue bar.) As you can tell from the name, the default number of atoms is 51. End of explanation optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=learning_rate) train_step_counter = tf.Variable(0) agent = categorical_dqn_agent.CategoricalDqnAgent( train_env.time_step_spec(), train_env.action_spec(), categorical_q_network=categorical_q_net, optimizer=optimizer, min_q_value=min_q_value, max_q_value=max_q_value, n_step_update=n_step_update, td_errors_loss_fn=common.element_wise_squared_loss, gamma=gamma, train_step_counter=train_step_counter) agent.initialize() Explanation: We also need an optimizer to train the network we just created, and a train_step_counter variable to keep track of how many times the network was updated. 
Note that one other significant difference from vanilla DqnAgent is that we now need to specify min_q_value and max_q_value as arguments. These specify the most extreme values of the support (in other words, the most extreme of the 51 atoms on either side). Make sure to choose these appropriately for your particular environment. Here we use -20 and 20. End of explanation #@test {"skip": true} def compute_avg_return(environment, policy, num_episodes=10): total_return = 0.0 for _ in range(num_episodes): time_step = environment.reset() episode_return = 0.0 while not time_step.is_last(): action_step = policy.action(time_step) time_step = environment.step(action_step.action) episode_return += time_step.reward total_return += episode_return avg_return = total_return / num_episodes return avg_return.numpy()[0] random_policy = random_tf_policy.RandomTFPolicy(train_env.time_step_spec(), train_env.action_spec()) compute_avg_return(eval_env, random_policy, num_eval_episodes) # Please also see the metrics module for standard implementations of different # metrics. Explanation: One last thing to note is that we also added an argument to use n-step updates with $n$ = 2. In single-step Q-learning ($n$ = 1), we only compute the error between the Q-values at the current time step and the next time step using the single-step return (based on the Bellman optimality equation). The single-step return is defined as: $G_t = R_{t + 1} + \gamma V(s_{t + 1})$ where we define $V(s) = \max_a{Q(s, a)}$. N-step updates involve expanding the standard single-step return function $n$ times: $G_t^n = R_{t + 1} + \gamma R_{t + 2} + \gamma^2 R_{t + 3} + \dots + \gamma^n V(s_{t + n})$ N-step updates enable the agent to bootstrap from further in the future, and with the right value of $n$, this often leads to faster learning. Although C51 and n-step updates are often combined with prioritized replay to form the core of the Rainbow agent, we saw no measurable improvement from implementing prioritized replay. Moreover, we find that when combining our C51 agent with n-step updates alone, our agent performs as well as other Rainbow agents on the sample of Atari environments we've tested. Metrics and Evaluation The most common metric used to evaluate a policy is the average return. The return is the sum of rewards obtained while running a policy in an environment for an episode, and we usually average this over a few episodes. We can compute the average return metric as follows. End of explanation #@test {"skip": true} replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer( data_spec=agent.collect_data_spec, batch_size=train_env.batch_size, max_length=replay_buffer_capacity) def collect_step(environment, policy): time_step = environment.current_time_step() action_step = policy.action(time_step) next_time_step = environment.step(action_step.action) traj = trajectory.from_transition(time_step, action_step, next_time_step) # Add trajectory to the replay buffer replay_buffer.add_batch(traj) for _ in range(initial_collect_steps): collect_step(train_env, random_policy) # This loop is so common in RL, that we provide standard implementations of # these. For more details see the drivers module. # Dataset generates trajectories with shape [BxTx...] where # T = n_step_update + 1. 
dataset = replay_buffer.as_dataset( num_parallel_calls=3, sample_batch_size=batch_size, num_steps=n_step_update + 1).prefetch(3) iterator = iter(dataset) Explanation: Data Collection As in the DQN tutorial, set up the replay buffer and the initial data collection with the random policy. End of explanation #@test {"skip": true} try: %%time except: pass # (Optional) Optimize by wrapping some of the code in a graph using TF function. agent.train = common.function(agent.train) # Reset the train step agent.train_step_counter.assign(0) # Evaluate the agent's policy once before training. avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes) returns = [avg_return] for _ in range(num_iterations): # Collect a few steps using collect_policy and save to the replay buffer. for _ in range(collect_steps_per_iteration): collect_step(train_env, agent.collect_policy) # Sample a batch of data from the buffer and update the agent's network. experience, unused_info = next(iterator) train_loss = agent.train(experience) step = agent.train_step_counter.numpy() if step % log_interval == 0: print('step = {0}: loss = {1}'.format(step, train_loss.loss)) if step % eval_interval == 0: avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes) print('step = {0}: Average Return = {1:.2f}'.format(step, avg_return)) returns.append(avg_return) Explanation: Training the agent The training loop involves both collecting data from the environment and optimizing the agent's networks. Along the way, we will occasionally evaluate the agent's policy to see how we are doing. The following will take ~7 minutes to run. End of explanation #@test {"skip": true} steps = range(0, num_iterations + 1, eval_interval) plt.plot(steps, returns) plt.ylabel('Average Return') plt.xlabel('Step') plt.ylim(top=550) Explanation: Visualization Plots We can plot return vs global steps to see the performance of our agent. In Cartpole-v1, the environment gives a reward of +1 for every time step the pole stays up, and since the maximum number of steps is 500, the maximum possible return is also 500. End of explanation def embed_mp4(filename): Embeds an mp4 file in the notebook. video = open(filename,'rb').read() b64 = base64.b64encode(video) tag = ''' <video width="640" height="480" controls> <source src="data:video/mp4;base64,{0}" type="video/mp4"> Your browser does not support the video tag. </video>'''.format(b64.decode()) return IPython.display.HTML(tag) Explanation: Videos It is helpful to visualize the performance of an agent by rendering the environment at each step. Before we do that, let us first create a function to embed videos in this colab. End of explanation num_episodes = 3 video_filename = 'imageio.mp4' with imageio.get_writer(video_filename, fps=60) as video: for _ in range(num_episodes): time_step = eval_env.reset() video.append_data(eval_py_env.render()) while not time_step.is_last(): action_step = agent.policy.action(time_step) time_step = eval_env.step(action_step.action) video.append_data(eval_py_env.render()) embed_mp4(video_filename) Explanation: The following code visualizes the agent's policy for a few episodes: End of explanation
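As a closing aside on the n-step return used above, the quantity G_t^n = R_{t+1} + gamma*R_{t+2} + ... + gamma^(n-1)*R_{t+n} + gamma^n * V(s_{t+n}) can be checked with a few lines of plain NumPy. The rewards and bootstrap value below are made up; this is not the agent's internal implementation.
import numpy as np

def n_step_return(rewards, bootstrap_value, gamma=0.99):
    n = len(rewards)
    discounts = gamma ** np.arange(n)                       # 1, gamma, gamma^2, ...
    return float(np.sum(discounts * np.asarray(rewards)) + gamma ** n * bootstrap_value)

print(n_step_return([1.0, 1.0], bootstrap_value=10.0))      # n = 2, matching n_step_update above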
1,070
Given the following text description, write Python code to implement the functionality described below step by step Description: Indexing Okay guys today's lecture is indexing. What is indexing? At heart, indexing is the ability to inspect a value inside a object. So basically if we have a list, X, of 100 items and our index is 'i' then 'i of X' returns the ith value inside the list (p.s. we can index strings too). Okay, so what is the Syntax for this? Glad you asked Step1: Now, those bounds I have just given might sound a bit arbitrary, but actually I can explain exactly how they work. Consider the following picture Step2: So that explains the first row of numbers in the image. What about the second row? Well, in Python not only can you index forwards you can also index backwards. Readabily counts... So basically index [0] will always be the start of the list/string and an index of [-1] will always be the end. If you wanted the middle "l" in "hello" have a choice; either [2] or [-3] will work. And, as a general rule, if code ends up being equivalent your choice should be to go with whatever is more readable. There should be one-- and preferably only one --obvious way to do it. ~ Zen of Python For example Step3: You might wonder what is wrong with index[4] to reference the end of the list. The problem with using index[4] instead of [-1] is that the former way of doing things is considerably less readable. Without actually checking the length of the input the meaning of index[4] is somewhat ambiguous; is this the end? Near the beginning/middle? Meanwhile [-1] always refers to the end regardless of input size, and so therefore its meaning is always clear even when we don’t know the size of the input. Index[len(a_string)-1] meanwhile always refers to the end of the list but it is considerably more verbose and less readable than the simple [-1]. The Index Method The string class AND the list class both have an index method, and now that we have just covered indexing we are in a position to understand its output. Basically, we ask if an item is in a string/list. And if it is, the method returns an index for that item. For example Step4: What can we do with indexing? Obviously we can do a lot with indexing, in the cases of lists, for example, we change the value of the list at position ‘i’. Its simple to do that Step5: Can we change the values inside strings? Lets try! Step6: In python strings are immutable, which is a fancy way of saying that they are set in stone; once created you just can't change them. Your only option is to create new strings with data you want. If we create a new string we can use the old variable if we want. But in this case, you didn't change the value of the string. Rather what you did was create a new string and give it a variable name, and thats allowed. Here is one way we can change the value of 'a_string' Step7: Making Grids "Flat is better than nested". ~ Zen of Python Talking of lists, remember that we can go all "inception-like" with lists and shove lists inside lists inside lists. How can we index a beast like that? Well, with difficulty... Step9: To index a list inside a list the syntax is to add another [{integer}] on the end. Repeat until you get to the required depth. list[{integer}][{integer}] In the case of the above the value 100 was nested inside so many lists that it took a lot of effort to tease it out. Structures like this are hard to work with, which is why the usual advice is to 'flatten' your lists wherever possible. 
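Before the worked code, a tiny standalone check of the two index rows described above: for any position i, the positive index i and the negative index i - len(word) name the same character, and a step of -1 reverses the sequence. The word used below is just the 'hello' example from the text.
word = "hello"
for i in range(len(word)):
    assert word[i] == word[i - len(word)]   # e.g. word[0] == word[-5], word[4] == word[-1]
print(word[::-1])    # 'olleh', a step of -1 walks the string backwards
print(word[1:4])     # 'ell', the start index is inclusive and the end index is exclusive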
With this said, nested structures are not all bad. A really common way of representing a grid in Python is to use nested lists. In which case, we can index any square we want by first indexing the 'row' and then the 'column'. Like so Step11: Anyway, thats enough about indexing for now, let's move onto the topic of slicing... Slicing What is slicing? Well it is a bit like indexing, only instead of returning point 'X' we return all the values between the points (x, y). Just as with indexing, you can slice strings as well as lists. Note Step12: Alright, so that's the basics of slicing covered, the only remaining question is what the final "step" argument does. Well basically, the step allows us to 'skip' every nth element of the list/string. For example, suppose that I have (just as before) a list of numbers 1-to-20, but this time I want to return the EVEN numbers between 15 and 19. Intuitively we know that the result should be [16,18] but how can we do this in code? Step13: How does this work? Well, index 15 is the number 16 (remember we count from 0 in Python), and then we skip index 16 (an odd number) and go straight to index 17 (which is the number 18). The next index to look at is 20, but since that is larger than our end step (19) we terminate. On last thing I'd like to note is that we got even numbers in this case because we started with an even number (index 15= 16). Had we of started with an odd number, this process would have returned odd numbers. For example Step14: In both of the above cases we are using a step of size 10. If we start at 0 that means we get Step15: The Range Function You maybe have observed that I use the 'range' function in some of the above examples. This function doesn't have anything to do with indexing or slicing, but I thought I would briefly talk about it here because although the syntax is different this function works in a very similar way to slicing. More specifically, the range function takes 3 arguments; start, end, step (optional). And these arguments work in a similar way to how start, end and step work with regards slicing. Allow me to demonstrate Step16: You will note a small difference between the two ways of doing things. When we slice we start the the count at 2 whereas with range we start the count at 3. The difference is the result of the fact the range function is dealing with numbers, whereas the slice is using indexing (e.g. list_1[2] is the number 3). And just as with slicing, a step of -1 counts backwards...
Python Code: # flipping signs of numbers... a = 5 b = -5 print(-a, -b) # len function x1 = [] x2 = "12" x3 = [1,2,3] print(len(x1), len(x2), len(x3)) x = [1,2,3] print(x[100]) # <--- IndexError! 100 is waayyy out of bounds Explanation: Indexing Okay guys today's lecture is indexing. What is indexing? At heart, indexing is the ability to inspect a value inside a object. So basically if we have a list, X, of 100 items and our index is 'i' then 'i of X' returns the ith value inside the list (p.s. we can index strings too). Okay, so what is the Syntax for this? Glad you asked: {variable} [{integer}] So if we wanted to index into something called "a_string" in code it would look something like: a_string[integer] Now, the integer in question cannot be any number from -infinity to +infinity. Rather, it is bounded by the size of the variable. For example, if the size of the variable is 5 that means our integer has to be in the range -5 to 4. Or more generally: Index Range :: Lower Bound = -len(variable) Upper Bound = len(variable) - 1 Anything outside this range = IndexError Just as a quick explanation, len() is a built-in command that gets the size of the object and adding a "-" sign infront of an integer 'flips' its sign: End of explanation string = "hello" print(string[0]) # first item print(string[len(string)-1]) # last item Explanation: Now, those bounds I have just given might sound a bit arbitrary, but actually I can explain exactly how they work. Consider the following picture: So in this picture we have the string ‘hello’. The two rows of numbers represent the indexes of this string. In Python we start counting from 0 which means the first item in a list/string always has an index of 0. And since we start counting at zero then that means the last item in the list/string is len(item)-1 like so: End of explanation a_string = "Hello" # indexing first item... print(a_string[0]) # Readable print(a_string[-len(a_string)]) # Less readable print(a_string[-1]) # Readable print(a_string[len(a_string)-1]) # Less readable print(a_string[4]) # Avoid this whereever possible! BAD BAD BAD!! Explanation: So that explains the first row of numbers in the image. What about the second row? Well, in Python not only can you index forwards you can also index backwards. Readabily counts... So basically index [0] will always be the start of the list/string and an index of [-1] will always be the end. If you wanted the middle "l" in "hello" have a choice; either [2] or [-3] will work. And, as a general rule, if code ends up being equivalent your choice should be to go with whatever is more readable. There should be one-- and preferably only one --obvious way to do it. ~ Zen of Python For example: End of explanation a_list = ["qwerty", "dave", "magic johnson", "qwerty"] a_string = "Helllllllo how ya doin fam?" # notice that Python returns the index of the first match. print(a_list.index("qwerty")) print(a_string.index("l")) # if item is not in the list, you get an value error: print(a_list.index("chris")) Explanation: You might wonder what is wrong with index[4] to reference the end of the list. The problem with using index[4] instead of [-1] is that the former way of doing things is considerably less readable. Without actually checking the length of the input the meaning of index[4] is somewhat ambiguous; is this the end? Near the beginning/middle? Meanwhile [-1] always refers to the end regardless of input size, and so therefore its meaning is always clear even when we don’t know the size of the input. 
Index[len(a_string)-1] meanwhile always refers to the end of the list but it is considerably more verbose and less readable than the simple [-1]. The Index Method The string class AND the list class both have an index method, and now that we have just covered indexing we are in a position to understand its output. Basically, we ask if an item is in a string/list. And if it is, the method returns an index for that item. For example: End of explanation a_list = [1,2,3] print(a_list) a_list[-1] = "a" print(a_list) a_list[0] = "c" print(a_list) a_list[1] = "b" print(a_list) Explanation: What can we do with indexing? Obviously we can do a lot with indexing, in the cases of lists, for example, we change the value of the list at position ‘i’. Its simple to do that: End of explanation a_string = "123" a_string[0] = "a" # <-- Error; strings are an "immutable" data type in Python. Explanation: Can we change the values inside strings? Lets try! End of explanation a_string = "123" a_string = "a" + a_string[1:] # slicing, see below. print(a_string) Explanation: In python strings are immutable, which is a fancy way of saying that they are set in stone; once created you just can't change them. Your only option is to create new strings with data you want. If we create a new string we can use the old variable if we want. But in this case, you didn't change the value of the string. Rather what you did was create a new string and give it a variable name, and thats allowed. Here is one way we can change the value of 'a_string': End of explanation this_is_insane = [ [[[[[[[[[[[[100]]]]]]]]]]]] ] # WTF !!?? print(this_is_insane[0][0][0][0][0][0][0][0][0][0][0][0][0]) Explanation: Making Grids "Flat is better than nested". ~ Zen of Python Talking of lists, remember that we can go all "inception-like" with lists and shove lists inside lists inside lists. How can we index a beast like that? Well, with difficulty... End of explanation grid = [ ["0"] * 5 for _ in range(5) ] # building a nested list, in style. 'List Comprehensions' are not covered in this course. print("The Grid looks like this...:", grid[2:], "\n") # Note: "grid[2:]" above is a 'slice' (more on slicing below), in this case I'm using slicing to truncate the results, # observe that three lists get printed, not five. def print_grid(): This function simply prints grid, row by row. for row in grid: # This is a for-loop, more on these later! print(row) print_grid() print("\n") grid[0][0] = "X" # Top-left corner grid[0][-1] = "Y" # Top-right corner grid[-1][0] = "W" # Bottom-left corner grid[-1][-1] = "Z" # Bottom-right corner grid[2][2] = "A" # Somewhere near the middle print_grid() # Quick note, since the corners index are defined by 0 and -1, these numbers should work for all nxn grids. Explanation: To index a list inside a list the syntax is to add another [{integer}] on the end. Repeat until you get to the required depth. list[{integer}][{integer}] In the case of the above the value 100 was nested inside so many lists that it took a lot of effort to tease it out. Structures like this are hard to work with, which is why the usual advice is to 'flatten' your lists wherever possible. With this said, nested structures are not all bad. A really common way of representing a grid in Python is to use nested lists. In which case, we can index any square we want by first indexing the 'row' and then the 'column'. Like so: grid[row][column] If you ever want to build simple board games (chess, connect 4, etc) you might find the representation useful. 
In code: End of explanation lst = list(range(1,21)) # list(range) just makes a list of numbers 1 to 20 # The below function just makes it faster for me to type out the test cases below. def printer(start, end, lst): Helper function, takes two integers (start, end) and a list/string. Function returns a formated string that contains: start, end and lst[start:end] if start: if end: sliced = lst[start:end] else: sliced = lst[start:] elif end: sliced = lst[:end] else: sliced = lst[:] return "slice is '[{}:{}]', which returns: {}".format(start, end, sliced) print("STARTING LIST IS:", lst) print("") # Test cases print("SLICING LISTS...") print(printer("","", lst)) # [:] is sometimes called a 'shallow copy' of a list. print(printer("", 5, lst )) # first 5 items. print(printer(14,"", lst)) # starting at index 14, go to the end. print(printer(200,500,lst)) # No errors for indexes that should be "out of bounds". print(printer(5,10, lst)) print(printer(4,5, lst)) # Negative numbers work too. In the case below we start at the 5th last item and move toward the 2nd to last item. print(printer(-5,-2, lst)) print(printer(-20,-1, lst)) # note that this list finishes at 19, not 20. # and for good measure, a few strings: print("\nSLICING STRINGS...") a_string = "Hello how are you?" print(printer("","", a_string)) # The whole string aka a 'shallow copy' print(printer(0,5, a_string)) print(printer(6,9, a_string)) print(printer(10,13, a_string)) print(printer(14, 17, a_string)) print(printer(17, "", a_string)) Explanation: Anyway, thats enough about indexing for now, let's move onto the topic of slicing... Slicing What is slicing? Well it is a bit like indexing, only instead of returning point 'X' we return all the values between the points (x, y). Just as with indexing, you can slice strings as well as lists. Note: start points are inclusive and endpoints are exclusive. {variable} [{start} : {end} : {step}] * Where start, end and step are all integer values. It is also worth noting that each of start, end and step are optional arguments, when nothing is given they default to the start of the list, end of the list and the default step is 1. If you give start/step an integer Python will treat that number as an index. Thus, a_list[2:10] says "Hey Python, go fetch me all the values in 'a_list' starting at index 2 up-to (but not including) index 10. Unlike indexing however, if you try to slice outside of range you won't get an error message. If you have a list of length five and try to slice with values 0 and 100 Python will just return the whole list. If you try to slice the list at 100 and 200 an empty list '[]' will be the result. Lets see a few examples: End of explanation a_list = list(range(1,21)) sliced_list = a_list[15:19:2] print(sliced_list) print(a_list[17]) Explanation: Alright, so that's the basics of slicing covered, the only remaining question is what the final "step" argument does. Well basically, the step allows us to 'skip' every nth element of the list/string. For example, suppose that I have (just as before) a list of numbers 1-to-20, but this time I want to return the EVEN numbers between 15 and 19. Intuitively we know that the result should be [16,18] but how can we do this in code? End of explanation a_list = list(range(0,206)) slice1 = a_list[::10] # every 10th element starting from zero = [0, 10, 20, ...] slice2 = a_list[5::10] # every 10th element starting from 5 = [5, 15, 25,...] a_string = "a123a123a123a123a123a123a123" # this pattern has a period of 4. 
slice3 = a_string[::4] # starts at a, returns aaaaaa slice4 = a_string[3::4] # starts at 3, returns 333333 print(slice1, slice2, slice3, slice4, sep="\n") Explanation: How does this work? Well, index 15 is the number 16 (remember we count from 0 in Python), and then we skip index 16 (an odd number) and go straight to index 17 (which is the number 18). The next index to look at is 20, but since that is larger than our end step (19) we terminate. On last thing I'd like to note is that we got even numbers in this case because we started with an even number (index 15= 16). Had we of started with an odd number, this process would have returned odd numbers. For example: End of explanation a_list = list(range(1, 11)) print(a_list) print(a_list[::-1]) # reverses the list Explanation: In both of the above cases we are using a step of size 10. If we start at 0 that means we get: 10,20,30... but if we start at 5 then the sequence we get is 5, 15, 25... In the case of the string example above, the patten has a length of four and then repeats. Thus, if we start with n charater and have a step of 4 the resulting pattern with be "nnnnnn". Reversing lists with step The very last thing I want to show you about a the step argument is that if you set step to -1 it will reverse the string/list. For example: End of explanation list_1 = list(range(1,21)) list_1 = list_1[2::3] print(list_1) # The above 3 lines can be refactored to: list_2 = list(range(3, 21, 3)) print(list_2) Explanation: The Range Function You maybe have observed that I use the 'range' function in some of the above examples. This function doesn't have anything to do with indexing or slicing, but I thought I would briefly talk about it here because although the syntax is different this function works in a very similar way to slicing. More specifically, the range function takes 3 arguments; start, end, step (optional). And these arguments work in a similar way to how start, end and step work with regards slicing. Allow me to demonstrate: End of explanation list_3 = list(range(10, -1, -1)) # this says: "start at the number 10 and count backwards to 0 # please remember that start points are inclusive BUT endpoints are exclusive, # if we want to include 0 in the results we must have an endpoint +1 of our target. # in this case the number one past zero (when counting backwards) is -1. print(list_3) Explanation: You will note a small difference between the two ways of doing things. When we slice we start the the count at 2 whereas with range we start the count at 3. The difference is the result of the fact the range function is dealing with numbers, whereas the slice is using indexing (e.g. list_1[2] is the number 3). And just as with slicing, a step of -1 counts backwards... End of explanation
1,071
Given the following text description, write Python code to implement the functionality described below step by step Description: Notes on Numpy Arrays and Panda's Series and DataFrames We need to import the numpy and pandas libraries before using them in this notebook Step1: Intro to Numpy Arrays Here create a 1-dimension array with floating numbers. Note we did not have to type print before array if it is the last line of code Step2: Here we create an 2x3 array Step3: Now let us show how to perform certain slicing and indexing operations on numpy Arrays.If we want to print from the 3rd element until the end on a vector array Step4: In this line of code, we print all elements from the 2nd element until the end of the 1st row. Step5: In this line, we print the 2nd row of the matrix Step6: In this line, we print the 2nd column Step7: In this line, I wanted to experiment a bit with slicing operations with the arithmetic operations. Here I print out the results of subtracting the 2nd row of the matrix from 1st three elements of the vector array Step8: Here we create a 2x2 array display it's output. Next, we print out the results of the 1st two elements from the vector array Step9: This operation I found to be kind of weird since I would have assumed the multiplication of two arrays to be the dot product, but that is not what occurs here. In this operation, The first element of array is multiplied by each element of the first column in two_by_two_array and the 2nd element of the array is multiplied by each element of the 2nd column of two_by_two_array. Step10: If we want to compute the dot product, you have to use the dot member function of the numpy library. Here, we create a new array and perform the dot product on two_by_two_array Step11: Intro to Panda Series and DataFrames The Panda Series allows you to create a Panda Column in which you can put elements of any type within the same column. Here I create Panda Series with information about my father. I have modified the index so that it will print out the standard number indices Step12: Here we create a Python Dictionary and then convert that into a Panda DataFrame. Here we create a dictionary named family and create dataframe df from family. Step13: Notice that when we print out the dictionary it prints out in regular text, But when we print out the dataframe, it gives us a nice table in IPython. Pretty cool!!! Now we are going to create a seperate dataframe and play around with some of it's member functions. Step14: Notice here that we can index Dataframes by the index name just like in Python dictionaries. Panda's dataframes have a function called describe which generates some interesting statiscal information. This function is very helpful for doing some initial sanity checking on the dataframe's columns. Here we are given the number of entries (given by the count row), the mean, standard deviation and the Interquartile Range (IQR). Step15: Here is a way to examine the DataFrame without printing the entire thing. The printout shows the printing of the first 2 rows of the subject column. Note that you can specify how rows you would like to print by passing the number as a parameter to the head function. The second printout shows the function being called on the entire dataframe object. Step16: Here we perform the tail on the DataFrame object.
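Before the code, one clarifying standalone example for the multiplication discussed in Step9 and Step10: the '*' operator is elementwise (NumPy broadcasts the 1-D array across the rows of the 2x2 array), while np.dot (or the '@' operator) gives the actual dot product. The arrays below mirror the values used later in the notebook.
import numpy as np

m = np.array([[1., 4.], [2., 5.]])   # same values as two_by_two_array below
v = np.array([1., 2.])

print(m * v)          # elementwise with broadcasting: [[ 1.  8.] [ 2. 10.]]
print(np.dot(v, m))   # dot product: [ 5. 14.]
print(v @ m)          # same dot product via the @ operator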
Python Code: import numpy as np import pandas as pd Explanation: Notes on Numpy Arrays and Panda's Series and DataFrames We need to import the numpy and pandas libraries before using them in this notebook End of explanation array = np.array([1,2,3,4],float) array Explanation: Intro to Numpy Arrays Here create a 1-dimension array with floating numbers. Note we did not have to type print before array if it is the last line of code End of explanation two_dimen_array = np.array([[1,2,3],[4,5,6]], float) two_dimen_array Explanation: Here we create an 2x3 array End of explanation array[2:] Explanation: Now let us show how to perform certain slicing and indexing operations on numpy Arrays.If we want to print from the 3rd element until the end on a vector array End of explanation two_dimen_array[0][1:] Explanation: In this line of code, we print all elements from the 2nd element until the end of the 1st row. End of explanation two_dimen_array[1, :] Explanation: In this line, we print the 2nd row of the matrix End of explanation two_dimen_array[:,1] Explanation: In this line, we print the 2nd column End of explanation array[:3] - two_dimen_array[1,:] Explanation: In this line, I wanted to experiment a bit with slicing operations with the arithmetic operations. Here I print out the results of subtracting the 2nd row of the matrix from 1st three elements of the vector array End of explanation two_by_two_array = np.array([[1,4],[2,5]],float) two_by_two_array array[:2] Explanation: Here we create a 2x2 array display it's output. Next, we print out the results of the 1st two elements from the vector array End of explanation two_by_two_array * array[:2] Explanation: This operation I found to be kind of weird since I would have assumed the multiplication of two arrays to be the dot product, but that is not what occurs here. In this operation, The first element of array is multiplied by each element of the first column in two_by_two_array and the 2nd element of the array is multiplied by each element of the 2nd column of two_by_two_array. End of explanation array2 = np.array([1,2],float) array2 np.dot(array2,two_by_two_array) Explanation: If we want to compute the dot product, you have to use the dot member function of the numpy library. Here, we create a new array and perform the dot product on two_by_two_array End of explanation series = pd.Series(['Ransford', "Hyman Sr.", 'January', 1941], index=['First Name', 'Last Name', 'Birth Month', 'Birth Year']) series Explanation: Intro to Panda Series and DataFrames The Panda Series allows you to create a Panda Column in which you can put elements of any type within the same column. Here I create Panda Series with information about my father. I have modified the index so that it will print out the standard number indices End of explanation family = {'name': ['Ransford','Denzel'], 'Birth year': [1984, 2004], 'favorite subject': ['Math','Science']} family df = pd.DataFrame(family) df Explanation: Here we create a Python Dictionary and then convert that into a Panda DataFrame. Here we create a dictionary named family and create dataframe df from family. End of explanation frank_grades = {'subject':['Math','English','Social Studies','Science','Music','Art'], 'grades': [95,87,80,96,98,70]} df2 = pd.DataFrame(frank_grades) df2 Explanation: Notice that when we print out the dictionary it prints out in regular text, But when we print out the dataframe, it gives us a nice table in IPython. Pretty cool!!! 
Now we are going to create a separate dataframe and play around with some of its member functions. End of explanation df2['grades'].describe() Explanation: Notice here that we can index DataFrames by the index name just like in Python dictionaries. Panda's dataframes have a function called describe which generates some interesting statistical information. This function is very helpful for doing some initial sanity checking on the dataframe's columns. Here we are given the number of entries (given by the count row), the mean, standard deviation and the Interquartile Range (IQR). End of explanation print(df2['subject'].head(2)) df2.head() Explanation: Here is a way to examine the DataFrame without printing the entire thing. The first printout shows the first 2 rows of the subject column. Note that you can specify how many rows you would like to print by passing the number as a parameter to the head function. The second printout shows the function being called on the entire dataframe object. End of explanation print(df2['grades'].tail()) df2.tail() Explanation: Here we perform the tail on the DataFrame object. End of explanation
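To make the broadcasting-versus-dot-product point above concrete, here is a compact, self-contained sketch that mirrors the values used in the cells above (treat it as an illustrative aside rather than part of the original notebook):

```python
import numpy as np

a = np.array([[1.0, 4.0],
              [2.0, 5.0]])   # same values as two_by_two_array above
v = np.array([1.0, 2.0])     # same values as array[:2] / array2 above

# '*' broadcasts: column j of `a` is scaled by v[j], giving [[1, 8], [2, 10]].
print(a * v)

# np.dot (or the @ operator) is the true matrix product: [5, 14].
print(np.dot(v, a))
print(v @ a)
```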
1,072
Given the following text description, write Python code to implement the functionality described below step by step Description: Read in the Kobe Bryant shooting data [https://www.kaggle.com/c/kobe-bryant-shot-selection] Step1: For now, use just the numerical datatypes. They are below as num_columns Step2: The shot_made_flag is the result (0 or 1) of the shot that Kobe took. Some of the values are missing (e.g. NaN). Drop them. Step3: Use num_columns and the kobe dataframe to fit() the models. Choose one or more of the entries in num_columns as features. These models are used to predict whether Kobe will make or miss a shot given the input parameters provided. Get the accuracy of each model with respect to the data used to fit the model. Step4: The following is a reminder of how the scikit-learn models can be interfaced
Python Code: kobe = pd.read_csv('../data/kobe.csv') Explanation: Read in the Kobe Bryant shooting data [https://www.kaggle.com/c/kobe-bryant-shot-selection] End of explanation [(col, dtype) for col, dtype in zip(kobe.columns, kobe.dtypes) if dtype != 'object'] num_columns = [col for col, dtype in zip(kobe.columns, kobe.dtypes) if dtype != 'object'] num_columns Explanation: For now, use just the numerical datatypes. They are below as num_columns End of explanation kobe = kobe Explanation: The shot_made_flag is the result (0 or 1) of the shot that Kobe took. Some of the values are missing (e.g. NaN). Drop them. End of explanation import seaborn as sns import matplotlib.pyplot as plt %matplotlib inline sns.set(font_scale=1.5) Explanation: Use the num_columns, the kobe dataframe to fit() the models. Choose one or more of the entries in num_columns as features. These models are used to predict whether Kobe will make or miss a shot given the certain input parameters provided. Get the accuracy of each model with respect to the data used to fit the model. End of explanation # fit a linear regression model and store the predictions example = pd.DataFrame({'a':[1,2,3,4,5,6], 'b':[1,1,0,0,0,1]}) feature_cols = ['a'] X = example[feature_cols] y = example.b from sklearn.linear_model import LinearRegression linreg = LinearRegression() linreg.fit(X, y) example['pred'] = linreg.predict(X) # scatter plot that includes the regression line plt.scatter(example.a, example.b) plt.plot(example.a, example.pred, color='red') plt.xlabel('a') plt.ylabel('b') from sklearn.metrics import accuracy_score accuracy_score(example.b, example.pred.astype(int)) Explanation: The following is a reminder of how the SciKit-Learn Models can be interfaced End of explanation
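The kobe = kobe line in the cell above reads like an unfinished placeholder for the "drop the missing shot_made_flag rows" step. One possible completion, together with a simple classifier fit in the style of the linear-regression reminder, is sketched below — the feature column and the KNN model are illustrative choices, not part of the original exercise:

```python
# Hypothetical completion of the steps above (assumes '../data/kobe.csv' and
# a numeric 'shot_distance' column exist in the file).
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

kobe = pd.read_csv('../data/kobe.csv')
kobe = kobe.dropna(subset=['shot_made_flag'])   # drop shots with no recorded result

X = kobe[['shot_distance']]                     # one assumed entry from num_columns
y = kobe.shot_made_flag

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X, y)
print(accuracy_score(y, knn.predict(X)))        # accuracy on the data used to fit
```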
1,073
Given the following text description, write Python code to implement the functionality described below step by step Description: Re-referencing the EEG signal This example shows how to load raw data and apply some EEG referencing schemes. Step1: We will now apply different EEG referencing schemes and plot the resulting evoked potentials. Note that when we construct epochs with mne.Epochs, we supply the proj=True argument. This means that any available projectors are applied automatically. Specifically, if there is an average reference projector set by raw.set_eeg_reference('average', projection=True), MNE applies this projector when creating epochs.
Python Code: # Authors: Marijn van Vliet <[email protected]> # Alexandre Gramfort <[email protected]> # # License: BSD (3-clause) import mne from mne.datasets import sample from matplotlib import pyplot as plt print(__doc__) # Setup for reading the raw data data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' event_id, tmin, tmax = 1, -0.2, 0.5 # Read the raw data raw = mne.io.read_raw_fif(raw_fname, preload=True) events = mne.read_events(event_fname) # The EEG channels will be plotted to visualize the difference in referencing # schemes. picks = mne.pick_types(raw.info, meg=False, eeg=True, eog=True, exclude='bads') Explanation: Re-referencing the EEG signal This example shows how to load raw data and apply some EEG referencing schemes. End of explanation reject = dict(eog=150e-6) epochs_params = dict(events=events, event_id=event_id, tmin=tmin, tmax=tmax, picks=picks, reject=reject, proj=True) fig, (ax1, ax2, ax3) = plt.subplots(nrows=3, ncols=1, sharex=True) # We first want to plot the data without any added reference (i.e., using only # the reference that was applied during recording of the data). # However, this particular data already has an average reference projection # applied that we now need to remove again using :func:`mne.set_eeg_reference` raw, _ = mne.set_eeg_reference(raw, []) # use [] to remove average projection evoked_no_ref = mne.Epochs(raw, **epochs_params).average() evoked_no_ref.plot(axes=ax1, titles=dict(eeg='Original reference'), show=False, time_unit='s') # Now we want to plot the data with an average reference, so let's add the # projection we removed earlier back to the data. Note that we can use # "set_eeg_reference" as a method on the ``raw`` object as well. raw.set_eeg_reference('average', projection=True) evoked_car = mne.Epochs(raw, **epochs_params).average() evoked_car.plot(axes=ax2, titles=dict(eeg='Average reference'), show=False, time_unit='s') # Re-reference from an average reference to the mean of channels EEG 001 and # EEG 002. raw.set_eeg_reference(['EEG 001', 'EEG 002']) evoked_custom = mne.Epochs(raw, **epochs_params).average() evoked_custom.plot(axes=ax3, titles=dict(eeg='Custom reference'), time_unit='s') Explanation: We will now apply different EEG referencing schemes and plot the resulting evoked potentials. Note that when we construct epochs with mne.Epochs, we supply the proj=True argument. This means that any available projectors are applied automatically. Specifically, if there is an average reference projector set by raw.set_eeg_reference('average', projection=True), MNE applies this projector when creating epochs. End of explanation
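For intuition about what the referencing schemes above do to the signal itself, the NumPy-only sketch below applies the same arithmetic to a toy array; the real MNE calls additionally handle projector bookkeeping, channel picking, and plotting. The values are arbitrary placeholders:

```python
import numpy as np

# Toy "EEG": 3 channels x 5 time samples.
data = np.array([[1.0, 2.0, 3.0, 4.0, 5.0],
                 [0.5, 1.5, 2.5, 3.5, 4.5],
                 [2.0, 2.0, 2.0, 2.0, 2.0]])

# Average reference: subtract the instantaneous mean across channels.
car = data - data.mean(axis=0, keepdims=True)
print(car.sum(axis=0))          # ~0 at every sample

# Custom reference (mean of the first two channels), like ['EEG 001', 'EEG 002'] above.
ref = data[:2].mean(axis=0, keepdims=True)
custom = data - ref
print(custom[:2].sum(axis=0))   # the two reference channels now cancel out
```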
1,074
Given the following text description, write Python code to implement the functionality described below step by step Description: Digital Comics Data Analysis (Python) - Marvel or DC? Introduction After having done the analysis of the website (post here and the web scraping of the data from the Comixology website (post here, we will analyze the data that we got using Python (Pandas). Let's find out what publisher have the best average ratings and prices, the average page count of the Vamos descobrir quais editoras tem os melhores preços relativos à quantidade de páginas de seus comics, as editoras com as melhores avaliações médias, além de uma análise mais profunda do duelo das gigantes Step1: Now, let's create a new column, price per page. This column will help us compare the price of comics that have a different number of pages, and, therefore, should have a bigger price. But how much bigger? For some comics, the page count information is not available, and so, for these cases, Pandas will return inf as the value of the column, representing an infinite value. For these comics, we will set the price per page as NaN Step2: Now, let's use the iterrows() function of the DataFrame to extract the publishing year of the print version of the comic. This function creates a for loop that iterates over each row of the DataFrame. Let's use the split() function to turn the string that contains the print release date into a list of values, and the third one will be the year. In some cases, this will return a value bigger than 2016, and since this is impossible, we will define these cases as NaN Step3: To the analysis (and beyond!) The first analysis we'll do is the calculation of some average values of the website, like average price of comics, average page count, among others. We'll use the nanmean() function from numpy. This function calculates the mean of a series os values, not considering NaN cases. Step4: Now, we will define the maximum number of columns for each field in the printing of the table to 40 columns. We'll do that because the name of some comics is long, and the printing of the table can get a little strange. With this configuration we can see more information in one row. After that, let's list comics with an average rating of 5 stars, that have more than 20 ratings (to consider only the more representative comics; comics with an average rating of 5 stars but with only one rating are not a very good metric), and let's sort it by price per page. In the top, we will have some free comics (the 6 first ones). Then, we will have great comics, in the eyes of the users, that have a very good price.</p> Step5: In the next analysis, we will use only comics with more than 5 ratings. For that, we will filter the DataFrame. Then, we'll create a Pandas pivot table, so that we can visualize the quantity of comics with ratings and the average rating of this publisher. Then, we will consider as representative publishers those that have at least 20 comics with ratings. To do that, we will filter the pivot table. And finally, we will sort this table by average rating, going from the highest to the lowest. This means that the publishers on the top of the table will be the ones that have the best average rating from its comics. Step6: Note that the giants, Marvel and DC Comics, are not among the ones in the top. If we see the complete table, they are between the middle and the bottom of the table. 
To help in the visualization, let's create a matplotlib chart that represents the table above Step7: To simplify and have a better table and chart, let's consider now only the publishers that have 300 comics with ratings. First, the table Step8: One thing that I believed that could make a difference in the ratings of a comic was the age classification. Were comics made to the adults rated better? Or worse? Let's check that making another pivot table Step9: And below, the corresponding chart Step10: As we can see, the height of the bars is quite similar. It seems that the age classification does not make a significant effect on the ratings of a comic. If we see it with a purely mathematical view, comics with an age classification for 9+ years or for all ages get the best ratings, by a small margin. But it is not possible to view a strong relation, since it does not varies in the same way as the age classification increases or decreases. Our next step is to see how the release of comics evolved (considering print versions) over the years. Remember that we already created a column with the year of release of the print version of the comic. The next step is basically to count the occurrences of each year in this column. Let's make a list with the years and then count the releases per year Step11: And now let's create the cart to see the situation better Step12: The numbers show that the growing was moderate, until the decade of 2000, when a boom happened, with a great increase in releases until 2012, when the release numbers started to oscillate. The fall shown in 2016 is because we are still in the middle of the year. Now we'll go on to make an evaluation of the most rated comics on the website. We can also probably say that these are the most read comics on the website. So, for this analysis, we will check the comics with most ratings, sorting the table and printing some columns. Let's see the 30 first ones. Step13: And the chart with the most rated comics Step14: Walking Dead is by far the one with most ratings. After that, some Marvel and DC comics and then some varied ones.</p> Now, let's make our detailed analysis on the giant publishers Step15: As we can see, DC Comics has a lower average price and price per page, and an average rating slightly higher. The average page count is a little higher on Marvel. Below, the bar charts that represent these comparations Step16: Next step is to see some numbers related to the quantity of comics that each have. How many comics each publisher has, how many of them are good (4 or 5 stars rating), how many are bad (1 or 2 stars) and the proportion of these to the total. For this analysis, we will basically filter the DataFrame and count the number of rows of each filtered view. Simple Step17: Again, here, DC Comics comes a little better. DC shows a bigger proportion of good comics and a smaller proportion of bad comics. DC scores one more. Below, the chart with the comparisons Step18: Just as curiosity, let's check the number of ratings in comics of each publisher, through another pivot table Step19: Interesting to note that even with Marvel having more comics, as we saw in the previous table, there quantity of ratings of DC's comics is way bigger, approximately 55% more. It seems that DC's fans are more propense to rate comics in Comixology than Marvel ones. Our next evaluation will be about characters and teams of heroes / villains. First, we need to create lists of characters and teams for each publisher. 
I created the lists by hand, doing some research. It didn't took very long. Step20: Next, we need to pass each name of character or team. First, let's define a DataFrame, and we'll filter so that the only rows that remain are the ones where the comic name includes the name of this character or team. Then, we'll extract some information from there. The quantity of comics will be the number of rows of the resulting DataFrame. Then, we will get the average price, rating and page count. All this information will be saved in a dictionary, and this dictionary will be appended to a character list, if it is a character, or a team list, if it is a team. In the end, we will have a list of dictionaries for characters and one for teams, and we will use them to create DataFrames Step21: Let's consider only teams and characters that have more than 20 comics where their names are present on the title of the comic. So, let's make a filter Step22: Now, let's check the biggest characters and teams in number of comics and average rating. For the characters, even considering the ones with more than 20 comics, there are still too many characters left. So, we'll limit the list to the top 20 characters. For the teams, there is no need, since there are already less than 20. Then, we'll print the tables Step23: Among the characters, we have Batman as the one with the biggest number of comics, followed by Spider-Man and Superman. After that, we have some other famous characters, like Captain America, Iron Man, Wolverine, Flash. Here, nothing surprising. Step24: Here, we have some surprises on the top. Even if the quantity of comics is not very big, few people would imagine that Mystique would be the character with the highest average rating, among all these extremely popular characters. On the next positions, more surprises, with Booster Gold in second, Jonah Hex in third, Blue Beetle in fifth. Of the most popular characters, we see Spider-Man, Deadpool and Wonder Woman, in the end of the top 20. Let's go to the teams Step25: Among the teams with most comics, nothing surprising either. X-Men in first, Avenger in second and Justice League in third. Then, the other teams, like Fantastic Four, Suicide Squad Step26: On the ratings, the top 3 is formed by the All-Star Squadron, from DC Comics, Fantastic Four and the Thunderbolts, from Marvel. X-Men, Avenger and Suicide Squad are in the end of the list. Below we plot the charts for these numbers for the characters Step27: And below, the charts for the teams
Python Code: import warnings warnings.filterwarnings('ignore') import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns comixology_df = pd.read_csv("comixology_comics_dataset_19.04.2016.csv", encoding = "ISO-8859-1") Explanation: Digital Comics Data Analysis (Python) - Marvel or DC? Introduction After having done the analysis of the website (post here and the web scraping of the data from the Comixology website (post here, we will analyze the data that we got using Python (Pandas). Let's find out what publisher have the best average ratings and prices, the average page count of the Vamos descobrir quais editoras tem os melhores preços relativos à quantidade de páginas de seus comics, as editoras com as melhores avaliações médias, além de uma análise mais profunda do duelo das gigantes: Marvel x DC Comics. Vamos começar. Initial Preparation First, as usual, let's import the packages we need. With the warning package we will ignore the eventual warning messages that Python / Pandas give to us, so the code in our notebook does not get very long. For the other packages, they are old friends: numpy, pandas, matplotlib and seaborn. Then, we will read the csv file with the read_csv function from Pandas. End of explanation # Create price per page column comixology_df['Price_per_page'] = pd.Series(comixology_df['Original_price'] / comixology_df['Page Count'], index=comixology_df.index) # Define price_per_page as NaN for comics with no information about page count comixology_df.Price_per_page[comixology_df['Price_per_page'] == np.inf] = np.nan Explanation: Now, let's create a new column, price per page. This column will help us compare the price of comics that have a different number of pages, and, therefore, should have a bigger price. But how much bigger? For some comics, the page count information is not available, and so, for these cases, Pandas will return inf as the value of the column, representing an infinite value. For these comics, we will set the price per page as NaN: End of explanation # Extract the year of release for print version print_dates = [] for index, row in comixology_df.iterrows(): if type(comixology_df.ix[index]['Print Release Date']) == float: row_year = np.nan else: row_year = int(comixology_df.ix[index]['Print Release Date'].split()[2]) if row_year > 2016: row_year = np.nan print_dates.append(row_year) comixology_df['Print_Release_Year'] = pd.Series(print_dates, index=comixology_df.index) Explanation: Now, let's use the iterrows() function of the DataFrame to extract the publishing year of the print version of the comic. This function creates a for loop that iterates over each row of the DataFrame. Let's use the split() function to turn the string that contains the print release date into a list of values, and the third one will be the year. 
In some cases, this will return a value bigger than 2016, and since this is impossible, we will define these cases as NaN: End of explanation # Calculate some average values of the site average_price = np.nanmean(comixology_df['Original_price']) average_page_count = np.nanmean(comixology_df['Page Count']) average_rating = np.nanmean(comixology_df['Rating']) average_rating_quantity = np.nanmean(comixology_df['Ratings_Quantity']) average_price_per_page = np.nanmean(comixology_df['Price_per_page']) print("Average Price: " + str(average_price)) print("Average Page Count: " + str(average_page_count)) print("Average Rating: " + str(average_rating)) print("Average Ratings Quantity: " + str(average_rating_quantity)) print("Average Price Per Page: " + str(average_price_per_page)) Explanation: To the analysis (and beyond!) The first analysis we'll do is the calculation of some average values of the website, like average price of comics, average page count, among others. We'll use the nanmean() function from numpy. This function calculates the mean of a series os values, not considering NaN cases. End of explanation # Define number of columns for table printing pd.set_option('display.max_colwidth', 40) # List comics with 5 stars rating that have at least 20 ratings comics_with_5_stars = comixology_df[comixology_df.Rating == 5] comics_with_5_stars = comics_with_5_stars[comics_with_5_stars.Ratings_Quantity > 20] # Print comics sorted by price per page print(comics_with_5_stars[['Name','Publisher','Price_per_page']]. sort_values(by='Price_per_page')) Explanation: Now, we will define the maximum number of columns for each field in the printing of the table to 40 columns. We'll do that because the name of some comics is long, and the printing of the table can get a little strange. With this configuration we can see more information in one row. After that, let's list comics with an average rating of 5 stars, that have more than 20 ratings (to consider only the more representative comics; comics with an average rating of 5 stars but with only one rating are not a very good metric), and let's sort it by price per page. In the top, we will have some free comics (the 6 first ones). Then, we will have great comics, in the eyes of the users, that have a very good price.</p> End of explanation # Filter the original DataFrame for comics with more than 5 ratings comics_more_than_5_ratings = comixology_df[comixology_df.Ratings_Quantity > 5] # Create pivot table with average rating by publisher publishers_avg_rating = pd.pivot_table(comics_more_than_5_ratings, values=['Rating'], index=['Publisher'], aggfunc=[np.mean, np.count_nonzero]) # Filter for any Publisher that has more than 20 comics rated main_pub_avg_rating = publishers_avg_rating[publishers_avg_rating. count_nonzero.Rating > 20] main_pub_avg_rating = main_pub_avg_rating.sort_values(by=('mean','Rating'), ascending=False) print(main_pub_avg_rating) Explanation: In the next analysis, we will use only comics with more than 5 ratings. For that, we will filter the DataFrame. Then, we'll create a Pandas pivot table, so that we can visualize the quantity of comics with ratings and the average rating of this publisher. Then, we will consider as representative publishers those that have at least 20 comics with ratings. To do that, we will filter the pivot table. And finally, we will sort this table by average rating, going from the highest to the lowest. 
This means that the publishers on the top of the table will be the ones that have the best average rating from its comics. End of explanation # Create chart with average ratings for the Publishers plt.figure(figsize=(10, 6)) y_axis = main_pub_avg_rating['mean']['Rating'] x_axis = range(len(y_axis)) plt.bar(x_axis, y_axis) plt.xticks(x_axis, tuple(main_pub_avg_rating.index),rotation=90) plt.show() Explanation: Note that the giants, Marvel and DC Comics, are not among the ones in the top. If we see the complete table, they are between the middle and the bottom of the table. To help in the visualization, let's create a matplotlib chart that represents the table above: End of explanation # Filter for Publishers that have more than 300 comics rated big_pub_avg_rating = publishers_avg_rating[publishers_avg_rating. count_nonzero.Rating > 300] big_pub_avg_rating = big_pub_avg_rating.sort_values(by=('mean','Rating'), ascending=False) print(big_pub_avg_rating) # Create chart with average ratings for Publishers with more than 300 comics # rated plt.figure(figsize=(10, 6)) y_axis = big_pub_avg_rating['mean']['Rating'] x_axis = np.arange(len(y_axis)) plt.bar(x_axis, y_axis) plt.xticks(x_axis+0.5, tuple(big_pub_avg_rating.index), rotation=90) plt.show() Explanation: To simplify and have a better table and chart, let's consider now only the publishers that have 300 comics with ratings. First, the table: End of explanation # Create pivot table with Rating by Age Rating rating_by_age = pd.pivot_table(comics_more_than_5_ratings, values=['Rating'], index=['Age Rating'], aggfunc=[np.mean, np.count_nonzero]) print(rating_by_age) Explanation: One thing that I believed that could make a difference in the ratings of a comic was the age classification. Were comics made to the adults rated better? Or worse? Let's check that making another pivot table: End of explanation # Bar Chart with rating by age rating plt.figure(figsize=(10, 6)) y_axis = rating_by_age['mean']['Rating'] x_axis = np.arange(len(y_axis)) plt.bar(x_axis, y_axis) plt.xticks(x_axis+0.25, tuple(rating_by_age.index), rotation=45) plt.show() Explanation: And below, the corresponding chart: End of explanation # Create pivot table with print releases per year print_releases_per_year = pd.pivot_table(comixology_df, values=['Name'], index=['Print_Release_Year'], aggfunc=[np.count_nonzero]) print_years = [] for index, row in print_releases_per_year.iterrows(): print_year = int(index) print_years.append(print_year) print_releases_per_year.index = print_years print(print_releases_per_year) Explanation: As we can see, the height of the bars is quite similar. It seems that the age classification does not make a significant effect on the ratings of a comic. If we see it with a purely mathematical view, comics with an age classification for 9+ years or for all ages get the best ratings, by a small margin. But it is not possible to view a strong relation, since it does not varies in the same way as the age classification increases or decreases. Our next step is to see how the release of comics evolved (considering print versions) over the years. Remember that we already created a column with the year of release of the print version of the comic. The next step is basically to count the occurrences of each year in this column. 
Let's make a list with the years and then count the releases per year: End of explanation # Create chart with print releases per year y_axis = print_releases_per_year['count_nonzero']['Name'] x_axis = print_releases_per_year['count_nonzero']['Name'].index plt.figure(figsize=(10, 6)) plt.plot(x_axis, y_axis) plt.show() Explanation: And now let's create the cart to see the situation better: End of explanation # Sort the DataFrame by ratings quantity and show Name, Publisher and quantity comics_by_ratings_quantity = comixology_df[['Name','Publisher', 'Ratings_Quantity']].sort_values( by='Ratings_Quantity', ascending=False) print(comics_by_ratings_quantity.head(30)) Explanation: The numbers show that the growing was moderate, until the decade of 2000, when a boom happened, with a great increase in releases until 2012, when the release numbers started to oscillate. The fall shown in 2016 is because we are still in the middle of the year. Now we'll go on to make an evaluation of the most rated comics on the website. We can also probably say that these are the most read comics on the website. So, for this analysis, we will check the comics with most ratings, sorting the table and printing some columns. Let's see the 30 first ones. End of explanation # Create chart with the previously sorted comics plt.figure(figsize=(10, 6)) y_axis = comics_by_ratings_quantity.head(30)['Ratings_Quantity'] x_axis = np.arange(len(y_axis)) plt.bar(x_axis, y_axis) plt.xticks(x_axis+0.5, tuple(comics_by_ratings_quantity.head(30)['Name']), rotation=90) plt.show() Explanation: And the chart with the most rated comics: End of explanation # Filter the DataFrame for comics from Marvel or DC Comics marvel_dc_comics = comixology_df[(comixology_df.Publisher == 'Marvel') | (comixology_df.Publisher == 'DC Comics')] # Create pivot table with Primeiro, alguns valores médios de cada uma marvel_dc_pivot_averages = pd.pivot_table(marvel_dc_comics, values=['Rating','Original_price','Page Count', 'Price_per_page'], index=['Publisher'], aggfunc=[np.mean]) print(marvel_dc_pivot_averages) Explanation: Walking Dead is by far the one with most ratings. After that, some Marvel and DC comics and then some varied ones.</p> Now, let's make our detailed analysis on the giant publishers: Marvel and DC Comics. Marvel vs DC Comics First, let's filter the DataFrame, so that only comics from these two remain. 
After that, we will calculate some average values of these two using a pivot table: End of explanation # Charts for average values for Marvel and DC plt.figure(1,figsize=(10, 6)) plt.subplot(221) # Mean original price y_axis = marvel_dc_pivot_averages['mean']['Original_price'] x_axis = np.arange(len(marvel_dc_pivot_averages['mean']['Original_price'])) plt.bar(x_axis, y_axis) plt.xticks(x_axis+0.4, tuple(marvel_dc_pivot_averages['mean']['Original_price'].index)) plt.title('Mean Original Price') plt.tight_layout() plt.subplot(222) # Mean page count y_axis = marvel_dc_pivot_averages['mean']['Page Count'] x_axis = np.arange(len(marvel_dc_pivot_averages['mean']['Page Count'])) plt.bar(x_axis, y_axis) plt.xticks(x_axis+0.4, tuple(marvel_dc_pivot_averages['mean']['Page Count'].index)) plt.title('Mean Page Count') plt.tight_layout() plt.subplot(223) # Mean Price Per Page y_axis = marvel_dc_pivot_averages['mean']['Price_per_page'] x_axis = np.arange(len(marvel_dc_pivot_averages['mean']['Price_per_page'])) plt.bar(x_axis, y_axis) plt.xticks(x_axis+0.4, tuple(marvel_dc_pivot_averages['mean']['Price_per_page'].index)) plt.title('Mean Price Per Page') plt.tight_layout() plt.subplot(224) # Mean Comic Rating y_axis = marvel_dc_pivot_averages['mean']['Rating'] x_axis = np.arange(len(marvel_dc_pivot_averages['mean']['Rating'])) plt.bar(x_axis, y_axis) plt.xticks(x_axis+0.4, tuple(marvel_dc_pivot_averages['mean']['Rating'].index)) plt.title('Mean Comic Rating') plt.tight_layout() plt.show() Explanation: As we can see, DC Comics has a lower average price and price per page, and an average rating slightly higher. The average page count is a little higher on Marvel. Below, the bar charts that represent these comparations: End of explanation # Calculate total number of comics for each Publisher, proportion of comics # with rating 4 or bigger and proportion of comics with rating 2 or smaller marvel_total = len(marvel_dc_comics[marvel_dc_comics['Publisher'] == 'Marvel']) marvel_4_or_5 = len(marvel_dc_comics[(marvel_dc_comics['Publisher'] == 'Marvel') & (marvel_dc_comics['Rating'] >= 4)]) marvel_proportion_4_or_5 = marvel_4_or_5 / marvel_total marvel_1_or_2 = len(marvel_dc_comics[(marvel_dc_comics['Publisher'] == 'Marvel') & (marvel_dc_comics['Rating'] <= 2)]) marvel_proportion_1_or_2 = marvel_1_or_2 / marvel_total dc_total = len(marvel_dc_comics[marvel_dc_comics['Publisher'] == 'DC Comics']) dc_4_or_5 = len(marvel_dc_comics[(marvel_dc_comics['Publisher'] == 'DC Comics') & (marvel_dc_comics['Rating'] >= 4)]) dc_proportion_4_or_5 = dc_4_or_5 / dc_total dc_1_or_2 = len(marvel_dc_comics[(marvel_dc_comics['Publisher'] == 'DC Comics') & (marvel_dc_comics['Rating'] <= 2)]) dc_proportion_1_or_2 = dc_1_or_2 / dc_total print("\n") print("Marvel's Total Comics: " + str(marvel_total)) print("Marvel's comics with rating 4 or bigger: " + str(marvel_4_or_5)) print("Proportion of Marvel's comics with rating 4 or bigger: " + str("{0:.2f}%".format(marvel_proportion_4_or_5 * 100))) print("Marvel's comics with rating 2 or smaller: " + str(marvel_1_or_2)) print("Proportion of Marvel's comics with rating 2 or smaller: " + str("{0:.2f}%".format(marvel_proportion_1_or_2 * 100))) print("\n") print("DC's Total Comics: " + str(dc_total)) print("DC's comics with rating 4 or bigger: " + str(dc_4_or_5)) print("Proportion of DC's comics with rating 4 or bigger: " + str("{0:.2f}%".format(dc_proportion_4_or_5 * 100))) print("DC's comics with rating 2 or smaller: " + str(dc_1_or_2)) print("Proportion of DC's comis with rating 2 or smaller: " + 
str("{0:.2f}%".format(dc_proportion_1_or_2 * 100))) print("\n") Explanation: Next step is to see some numbers related to the quantity of comics that each have. How many comics each publisher has, how many of them are good (4 or 5 stars rating), how many are bad (1 or 2 stars) and the proportion of these to the total. For this analysis, we will basically filter the DataFrame and count the number of rows of each filtered view. Simple: End of explanation # Create charts with total comics and previously calculated proportions for # Marvel and DC plt.figure(2,figsize=(10, 6)) plt.subplot(221) # Total comics for Marvel and DC y_axis = [dc_total, marvel_total] x_axis = np.arange(len(y_axis)) plt.bar(x_axis, y_axis) plt.xticks(x_axis+0.4, ('DC Comics','Marvel')) plt.title('Total Comics') plt.tight_layout() plt.subplot(222) # Proportion of comics with rating 4 or 5 y_axis = [dc_proportion_4_or_5 * 100, marvel_proportion_4_or_5 * 100] x_axis = np.arange(len(y_axis)) plt.bar(x_axis, y_axis) plt.xticks(x_axis+0.4, ('DC Comics','Marvel')) plt.title('Proportion of comics with rating 4 or 5') plt.tight_layout() plt.subplot(223) # Proportion of comics with rating 1 or 2 y_axis = [dc_proportion_1_or_2 * 100, marvel_proportion_1_or_2 * 100] x_axis = np.arange(len(y_axis)) plt.bar(x_axis, y_axis) plt.xticks(x_axis+0.4, ('DC Comics','Marvel')) plt.title('Proportion of comics with rating 1 or 2') plt.tight_layout() plt.show() Explanation: Again, here, DC Comics comes a little better. DC shows a bigger proportion of good comics and a smaller proportion of bad comics. DC scores one more. Below, the chart with the comparisons: End of explanation # Create Pivot Table with quantity of ratings of each Publisher marvel_dc_pivot_sums = pd.pivot_table(marvel_dc_comics, values=['Ratings_Quantity'], index=['Publisher'], aggfunc=[np.sum]) print(marvel_dc_pivot_sums) Explanation: Just as curiosity, let's check the number of ratings in comics of each publisher, through another pivot table: End of explanation # Define list of characters and teams of DC and Marvel main_dc_characters = ['Superman','Batman','Aquaman','Wonder Woman', 'Flash', 'Robin','Arrow', 'Batgirl', 'Bane', 'Harley Queen', 'Poison Ivy', 'Joker','Firestorm','Vixen', 'Martian Manhunter','Zod','Penguin','Lex Luthor', 'Green Lantern','Supergirl','Atom','Cyborg','Hawkgirl', 'Starfire','Jonah Hex','Booster Gold','Black Canary', 'Shazam','Catwoman','Nightwing','Zatanna','Hawkman', 'Power Girl','Rorschach','Doctor Manhattan', 'Blue Beetle','Batwoman','Darkseid','Vandal Savage', "Ra's Al Ghul",'Riddler','Reverse Flash','Black Adam', 'Deathstroke','Brainiac','Sinestro','Two-Face'] main_marvel_characters = ['Spider-Man','Captain Marvel','Hulk','Thor', 'Iron Man','Luke Cage','Black Widow','Daredevil', 'Captain America','Jessica Jones','Ghost Rider', 'Spider-Woman','Silver Surfer','Beast','Thing', 'Kitty Pride','Doctor Strange','Black Panther', 'Invisible Woman','Nick Fury','Storm','Professor X', 'Cyclops','Jean Grey','Wolverine','Scarlet Witch', 'Gambit','Rogue','X-23','Iceman','She-Hulk', 'Iron Fist','Hawkeye','Quicksilver','Vision', 'Ant-Man','Cable','Bishop','Colossus','Deadpool', 'Human Torch','Mr. 
Fantastic','Nightcrawler','Nova', 'Psylocke','Punisher','Rocket Raccoon','Groot', 'Star-Lord','War Machine','Gamora','Drax','Venom', 'Carnage','Octopus','Green Goblin','Abomination', 'Enchantress','Sentinel','Viper','Lady Deathstrike', 'Annihilus','Ultron','Galactus','Kang','Bullseye', 'Juggernaut','Sabretooth','Mystique','Kingpin', 'Apocalypse','Thanos','Dark Phoenix','Loki', 'Red Skull','Magneto','Doctor Doom','Ronan'] dc_teams = ['Justice League','Teen Titans','Justice Society','Lantern Corps', 'Legion of Super-Heroes','All-Star Squadron','Suicide Squad', 'Birds of Prey','Gen13', 'The League of Extraordinary Gentlemen', 'Watchmen'] marvel_teams = ['X-Men','Avengers','Fantastic Four','Asgardian Gods','Skrulls', 'S.H.I.E.L.D.','Inhumans','A.I.M.','X-Factor','X-Force', 'Defenders','New Mutants','Brotherhood of Evil Mutants', 'Thunderbolts', 'Alpha Flight','Guardians of the Galaxy', 'Nova Corps','Illuminati'] Explanation: Interesting to note that even with Marvel having more comics, as we saw in the previous table, there quantity of ratings of DC's comics is way bigger, approximately 55% more. It seems that DC's fans are more propense to rate comics in Comixology than Marvel ones. Our next evaluation will be about characters and teams of heroes / villains. First, we need to create lists of characters and teams for each publisher. I created the lists by hand, doing some research. It didn't took very long. End of explanation # Create empty list and dict to hold character info character_row = {} characters_dicts = [] for character in main_dc_characters: character_df = comixology_df[(comixology_df['Name'].str.contains(character)) & (comixology_df['Publisher'] == 'DC Comics')] character_row['Character_Name'] = character character_row['Quantity_of_comics'] = len(character_df) character_row['Average_Rating'] = np.nanmean(character_df['Rating']) character_row['Average_Price'] = np.nanmean(character_df['Original_price']) character_row['Average_Pages'] = np.nanmean(character_df['Page Count']) character_row['Publisher'] = "DC Comics" characters_dicts.append(character_row) character_row = {} for character in main_marvel_characters: character_df = comixology_df[(comixology_df['Name'].str.contains(character)) & (comixology_df['Publisher'] == 'Marvel')] character_row['Character_Name'] = character character_row['Quantity_of_comics'] = len(character_df) character_row['Average_Rating'] = np.nanmean(character_df['Rating']) character_row['Average_Price'] = np.nanmean(character_df['Original_price']) character_row['Average_Pages'] = np.nanmean(character_df['Page Count']) character_row['Publisher'] = "Marvel" characters_dicts.append(character_row) character_row = {} characters_df = pd.DataFrame(characters_dicts) # Create empty list and dict to hold team info team_row = {} teams_dicts = [] for team in dc_teams: team_df = comixology_df[(comixology_df['Name'].str.contains(team)) & (comixology_df['Publisher'] == 'DC Comics')] team_row['Team_Name'] = team team_row['Quantity_of_comics'] = len(team_df) team_row['Average_Rating'] = np.nanmean(team_df['Rating']) team_row['Average_Price'] = np.nanmean(team_df['Original_price']) team_row['Average_Pages'] = np.nanmean(team_df['Page Count']) team_row['Publisher'] = "DC Comics" teams_dicts.append(team_row) team_row = {} for team in marvel_teams: team_df = comixology_df[(comixology_df['Name'].str.contains(team)) & (comixology_df['Publisher'] == 'Marvel')] team_row['Team_Name'] = team team_row['Quantity_of_comics'] = len(team_df) team_row['Average_Rating'] = 
np.nanmean(team_df['Rating']) team_row['Average_Price'] = np.nanmean(team_df['Original_price']) team_row['Average_Pages'] = np.nanmean(team_df['Page Count']) team_row['Publisher'] = "Marvel" teams_dicts.append(team_row) team_row = {} teams_df = pd.DataFrame(teams_dicts) Explanation: Next, we need to pass each name of character or team. First, let's define a DataFrame, and we'll filter so that the only rows that remain are the ones where the comic name includes the name of this character or team. Then, we'll extract some information from there. The quantity of comics will be the number of rows of the resulting DataFrame. Then, we will get the average price, rating and page count. All this information will be saved in a dictionary, and this dictionary will be appended to a character list, if it is a character, or a team list, if it is a team. In the end, we will have a list of dictionaries for characters and one for teams, and we will use them to create DataFrames: End of explanation # Filter characters and teams DataFrame for rows where there are more than 20 # comics where the character / team name is present on the title of the comics characters_df = characters_df[characters_df['Quantity_of_comics'] > 20] teams_df = teams_df[teams_df['Quantity_of_comics'] > 20] Explanation: Let's consider only teams and characters that have more than 20 comics where their names are present on the title of the comic. So, let's make a filter: End of explanation # Limit number of characters to 20 top_characters_by_quantity = characters_df.sort_values(by='Quantity_of_comics', ascending=False)[['Character_Name', 'Average_Rating', 'Quantity_of_comics']].head(20) top_characters_by_rating = characters_df.sort_values(by='Average_Rating', ascending=False)[['Character_Name', 'Average_Rating', 'Quantity_of_comics']].head(20) top_teams_by_quantity = teams_df.sort_values(by='Quantity_of_comics', ascending=False)[['Team_Name', 'Average_Rating', 'Quantity_of_comics']] top_teams_by_rating = teams_df.sort_values(by='Average_Rating', ascending=False)[['Team_Name', 'Average_Rating', 'Quantity_of_comics']] print(top_characters_by_quantity) Explanation: Now, let's check the biggest characters and teams in number of comics and average rating. For the characters, even considering the ones with more than 20 comics, there are still too many characters left. So, we'll limit the list to the top 20 characters. For the teams, there is no need, since there are already less than 20. Then, we'll print the tables: End of explanation print(top_characters_by_rating) Explanation: Among the characters, we have Batman as the one with the biggest number of comics, followed by Spider-Man and Superman. After that, we have some other famous characters, like Captain America, Iron Man, Wolverine, Flash. Here, nothing surprising. End of explanation print(top_teams_by_quantity) Explanation: Here, we have some surprises on the top. Even if the quantity of comics is not very big, few people would imagine that Mystique would be the character with the highest average rating, among all these extremely popular characters. On the next positions, more surprises, with Booster Gold in second, Jonah Hex in third, Blue Beetle in fifth. Of the most popular characters, we see Spider-Man, Deadpool and Wonder Woman, in the end of the top 20. Let's go to the teams: End of explanation print(top_teams_by_rating) Explanation: Among the teams with most comics, nothing surprising either. X-Men in first, Avenger in second and Justice League in third. 
Then, the other teams, like Fantastic Four, Suicide Squad: End of explanation # Create charts related to the characters information plt.figure(3,figsize=(10, 6)) plt.subplot(121) # Characters by quantity of comics y_axis = top_characters_by_quantity['Quantity_of_comics'] x_axis = np.arange(len(y_axis)) plt.bar(x_axis, y_axis) plt.xticks(x_axis+0.4, tuple(top_characters_by_quantity['Character_Name']), rotation=90) plt.title('Characters by quantity of comics') plt.tight_layout() plt.subplot(122) # Characters by average rating y_axis = top_characters_by_rating['Average_Rating'] x_axis = np.arange(len(y_axis)) plt.bar(x_axis, y_axis) plt.xticks(x_axis+0.4, tuple(top_characters_by_rating['Character_Name']), rotation=90) plt.title('Characters by average ratings') plt.tight_layout() plt.show() Explanation: On the ratings, the top 3 is formed by the All-Star Squadron, from DC Comics, Fantastic Four and the Thunderbolts, from Marvel. X-Men, Avenger and Suicide Squad are in the end of the list. Below we plot the charts for these numbers for the characters: End of explanation # Creation of charts related to teams plt.figure(4,figsize=(10, 6)) plt.subplot(121) # Teams by quantity of comics y_axis = top_teams_by_quantity['Quantity_of_comics'] x_axis = np.arange(len(y_axis)) plt.bar(x_axis, y_axis) plt.xticks(x_axis+0.4, tuple(top_teams_by_quantity['Team_Name']), rotation=90) plt.title('Teams by quantity of comics') plt.tight_layout() plt.subplot(122) # Teams by average ratings y_axis = top_teams_by_rating['Average_Rating'] x_axis = np.arange(len(y_axis)) plt.bar(x_axis, y_axis) plt.xticks(x_axis+0.4, tuple(top_teams_by_rating['Team_Name']), rotation=90) plt.title('Teams by average ratings') plt.tight_layout() plt.show() Explanation: And below, the charts for the teams: End of explanation
1,075
Given the following text description, write Python code to implement the functionality described below step by step Description: A Simple Autoencoder We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data. In this notebook, we'll be build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset. Step1: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits. Step2: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input. Exercise Step3: Training Step4: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss. Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightfoward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed). Step5: Checking out the results Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
Python Code: %matplotlib inline %config InlineBackend.figure_format = 'retina' import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('MNIST_data', validation_size=0) Explanation: A Simple Autoencoder We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data. In this notebook, we'll be build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset. End of explanation img = mnist.train.images[2] plt.imshow(img.reshape((28, 28)), cmap='Greys_r') Explanation: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits. End of explanation # Size of the encoding layer (the hidden layer) encoding_dim = 32 # feel free to change this value image_size = mnist.train.images.shape[1] print(image_size) # Input and target placeholders inputs_ = tf.placeholder(tf.float32, (None, image_size), name='inputs') targets_ = tf.placeholder(tf.float32, (None, image_size), name='targets') print(inputs_) print(targets_) # Output of hidden layer, single fully connected layer here with ReLU activation encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu) print(encoded) # Output layer logits, fully connected layer with no activation logits = tf.layers.dense(encoded, image_size, activation=None) print(logits) # Sigmoid output from logits decoded = tf.nn.sigmoid(logits, name='output') print(decoded) # Sigmoid cross-entropy loss loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits) print(loss) # Mean of the loss cost = tf.reduce_mean(loss) print(cost) # Adam optimizer opt = tf.train.AdamOptimizer(0.001).minimize(cost) print(opt) Explanation: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input. Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. Feel free to use TensorFlow's higher level API, tf.layers. For instance, you would use tf.layers.dense(inputs, units, activation=tf.nn.relu) to create a fully connected layer with a ReLU activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function. 
End of explanation # Create the session sess = tf.Session() Explanation: Training End of explanation epochs = 20 batch_size = 200 sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) feed = {inputs_: batch[0], targets_: batch[0]} batch_cost, _ = sess.run([cost, opt], feed_dict=feed) print("Epoch: {}/{}...".format(e+1, epochs), "Training loss: {:.4f}".format(batch_cost)) Explanation: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss. Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightfoward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed). End of explanation fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4)) in_imgs = mnist.test.images[:10] reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs}) for images, row in zip([in_imgs, reconstructed], axes): for img, ax in zip(images, row): ax.imshow(img.reshape((28, 28)), cmap='Greys_r') ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) fig.tight_layout(pad=0.1) sess.close() Explanation: Checking out the results Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts. End of explanation
1,076
Given the following text description, write Python code to implement the functionality described below step by step Description: Fitting Models Exercise 1 Imports Step1: Fitting a quadratic curve For this problem we are going to work with the following model Step2: First, generate a dataset using this model using these parameters and the following characteristics Step3: Now fit the model to the dataset to recover estimates for the model's parameters
Python Code: %matplotlib inline import matplotlib.pyplot as plt import numpy as np import scipy.optimize as opt Explanation: Fitting Models Exercise 1 Imports End of explanation a_true = 0.5 b_true = 2.0 c_true = -4.0 dy = 2.0 x = np.linspace(-5,5,30) Explanation: Fitting a quadratic curve For this problem we are going to work with the following model: $$ y_{model}(x) = a x^2 + b x + c $$ The true values of the model parameters are as follows: End of explanation ydata = a_true*x**2 + b_true*x + c_true assert True # leave this cell for grading the raw data generation and plot Explanation: First, generate a dataset using this model using these parameters and the following characteristics: For your $x$ data use 30 uniformly spaced points between $[-5,5]$. Add a noise term to the $y$ value at each point that is drawn from a normal distribution with zero mean and standard deviation 2.0. Make sure you add a different random number to each point (see the size argument of np.random.normal). After you generate the data, make a plot of the raw data (use points). End of explanation assert True # leave this cell for grading the fit; should include a plot and printout of the parameters+errors plt.plot(x, ydata, 'k.') plt.xlabel('x') plt.ylabel('y') plt.xlim(-5,5) ; plt.errorbar(x, ydata, dy, fmt='.k', ecolor='lightgray') def exp_model(x, A, B, C): return A*np.exp(x*B) + C yfit = exp_model(x, a_true, b_true, c_true) plt.plot(x, yfit) plt.plot(x, ydata, 'k.') plt.xlabel('x') plt.ylabel('y') plt.ylim(-20,100); Explanation: Now fit the model to the dataset to recover estimates for the model's parameters: Print out the estimates and uncertainties of each parameter. Plot the raw data and best fit of the model. End of explanation
1,077
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Stroop Task Background Information In a Stroop task, participants are presented with a list of words, with each word displayed in a color of ink. The participant’s task is to say out loud the color of the ink in which the word is printed. The task has two conditions Step3: 5. Now, perform the statistical test and report your results. What is your confidence level and your critical statistic value? Do you reject the null hypothesis or fail to reject it? Come to a conclusion in terms of the experiment task. Did the results match up with your expectations? Descriptive statistics for the difference between two conditions Let $\mu_D$ be defined as the mean difference between the incongruent set and congruent set. Let $\sigma_D$ be defined as the standard deviation of the differences between the incongruent and congruent set. Step5: Calculate the critical t value Step7: Calculate the t value Step9: Confidence Interval Step10: Our results show that t is within the critical region, meaning
Python Code: import math import pandas as pd import scipy.stats as st from IPython.display import Latex from IPython.display import Math from IPython.display import display %matplotlib inline path = r'./stroopdata.csv' df_stroop = pd.read_csv(path) df_stroop mu_congruent = round(df_stroop['Congruent'].mean(),4) mu_incongruent = round(df_stroop['Incongruent'].mean(),4) text = r \begin{{align}} \mu_{{congruent}}={}\\ \mu_{{incongruent}}={} \end{{align}}.format(mu_congruent, mu_incongruent) Latex(text) df_stroop.plot(kind="bar") Explanation: Stroop Task Background Information In a Stroop task, participants are presented with a list of words, with each word displayed in a color of ink. The participant’s task is to say out loud the color of the ink in which the word is printed. The task has two conditions: a congruent words condition, and an incongruent words condition. In the congruent words condition, the words being displayed are color words whose names match the colors in which they are printed: for example <span style="color:red;">RED</span>, <span style="color:blue;">BLUE</span>. In the incongruent words condition, the words displayed are color words whose names do not match the colors in which they are printed: for example <span style="color:#6aa84f;">PURPLE</span>, <span style="color:#674ea7;">ORANGE</span>. In each case, we measure the time it takes to name the ink colors in equally-sized lists. Each participant will go through and record a time from each condition. 1. What is our independent variable? What is our dependent variable? Our independent variable is the test that participants take under two conditions: congruent and incongruent. Our dependent variable is the time it takes for a participant to name the ink colors in equally-sized lists. 2. What is an appropriate set of hypotheses for this task? What kind of statistical test do you expect to perform? Justify your choices. By looking at the data provided in the sample dataset, it seems like participants taken from a population are taking longer to identify the names of colors when printed in a different color (incongruent words condition). From this observation, we can come up with the following hypotheses: Let $\mu_{incongruent}$ be defined as the mean of the population that performs the task under the incongruent condition. Let $\mu_{congruent}$ be defined as the mean of the population that perform the task under the congruent condition. Null Hypothesis: There is no statistical difference in time measurement between the two population means. It will take the same time for an individuals from the population to perform each task. $$H_0 :\ \mu_{incongruent} = \mu_{congruent}$$ Alternative Hypothesis: Individuals from the population will take longer to complete the task under the incongruent words condition than with the congruent words condition. $$H_A: \mu_{incongruent} > \mu_{congruent}$$ A One-tailed dependent t-test in the positive direction is expected to be performed to see if any hypotheses need to be rejected. * A t-test is used because: * The sample size is below 30. * The population standard deviation is not known. * A dependent t-test is used because the two samples are dependent; the same partipants take the two tests under different conditions. * Only one direction is tested because we want to see if it takes longer for partipants to perform the task under the incongruent words condition. 
Based on the results of our t-test we can make an inference about how conflicting cues play a role in how fast individuals from the human population can process information. In this particular case, we will find out how a color name being displayed in a different ink color affects how long it takes to be recited. 3. Report some descriptive statistics regarding this dataset. Include at least one measure of central tendency and at least one measure of variability. End of explanation
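A minimal sketch of the descriptive statistics requested above (editorial addition; it assumes the df_stroop frame with 'Congruent' and 'Incongruent' columns loaded earlier in this notebook):

# One measure of central tendency (mean, median) and one of variability (std) per condition.
desc = df_stroop[['Congruent', 'Incongruent']].agg(['mean', 'median', 'std'])
print(desc)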
1,078
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: I have a dataframe with column names, and I want to find the one that contains a certain string, but does not exactly match it. I'm searching for 'spike' in column names like 'spike-2', 'hey spike', 'spiked-in' (the 'spike' part is always continuous).
Problem: import pandas as pd data = {'spike-2': [1,2,3], 'hey spke': [4,5,6], 'spiked-in': [7,8,9], 'no': [10,11,12]} df = pd.DataFrame(data) s = 'spike' def g(df, s): spike_cols = [col for col in df.columns if s in col and col != s] return df[spike_cols] result = g(df.copy(),s)
1,079
Given the following text description, write Python code to implement the functionality described below step by step Description: Working with data 2017. Class 8 Contact Javier Garcia-Bernardo [email protected] 1. Clustering 2. Data imputation 3. Dimensionality reduction Step1: 3. Dimensionality reduction Many times we want to combine variables (for linear regression to avoid multicollinearity, to create indexes, etc) Our data Step2: Correlation between variables Step3: Revenue, employees and assets are highly correlated. Let's imagine wwe want to explain the market capitalization in terms of the other variables. Step4: 3.1 Combining variables Multiplying/summing variables It's easy It's arbitrary Step5: 3.2 PCA Keep all the info The resulting variables do not actually mean much Step6: 3.3 Factor analysis The observations are assumed to be caused by a linear transformation of lower dimensional latent factors and added Gaussian noise. Without loss of generality the factors are distributed according to a Gaussian with zero mean and unit covariance. The noise is also zero mean and has an arbitrary diagonal covariance matrix. Step7: Difference between FA and PCA http Step8: Linear regression (to compare) Step9: SVR - Gives balanced weights (the most correlated independent variable (with the dependent) doesn't take all the weight). - Very good when you have hundreds of variables. You can iteratively drop the worst predictor. - It allow for more than linear "regression". The default kernel is "rbf", which fits curves. The problem is that interpreting it is hard. Step10: Lasso - Have a penalty - Discards the variables with low weights. Step11: Summary
Python Code: ##Some code to run at the beginning of the file, to be able to show images in the notebook ##Don't worry about this cell #Print the plots in this screen %matplotlib inline #Be able to plot images saved in the hard drive from IPython.display import Image #Make the notebook wider from IPython.core.display import display, HTML display(HTML("<style>.container { width:90% !important; }</style>")) import seaborn as sns import pylab as plt import pandas as pd import numpy as np import scipy.stats import statsmodels.formula.api as smf import sklearn from sklearn.model_selection import train_test_split Explanation: Working with data 2017. Class 8 Contact Javier Garcia-Bernardo [email protected] 1. Clustering 2. Data imputation 3. Dimensionality reduction End of explanation #Read data df_companies = pd.read_csv("data/big3_position.csv",sep="\t") df_companies["log_revenue"] = np.log10(df_companies["Revenue"]) df_companies["log_assets"] = np.log10(df_companies["Assets"]) df_companies["log_employees"] = np.log10(df_companies["Employees"]) df_companies["log_marketcap"] = np.log10(df_companies["MarketCap"]) #Keep only industrial companies df_companies = df_companies.loc[:,["log_revenue","log_assets","log_employees","log_marketcap","Company_name","TypeEnt"]] df_companies = df_companies.loc[df_companies["TypeEnt"]=="Industrial company"] #Dropnans df_companies = df_companies.replace([np.inf,-np.inf],np.nan) df_companies = df_companies.dropna() df_companies.head() Explanation: 3. Dimensionality reduction Many times we want to combine variables (for linear regression to avoid multicollinearity, to create indexes, etc) Our data End of explanation # Compute the correlation matrix corr = df_companies.corr() # Generate a mask for the upper triangle (hide the upper triangle) mask = np.zeros_like(corr, dtype=np.bool) mask[np.triu_indices_from(mask)] = True # Draw the heatmap with the mask and correct aspect ratio sns.heatmap(corr, mask=mask, square=True,linewidths=.5,cmap="YlOrRd",vmin=0,vmax=1) plt.show() Explanation: Correlation between variables End of explanation mod = smf.ols(formula='log_marketcap ~ log_revenue + log_employees + log_assets', data=df_companies) res = mod.fit() print(res.summary()) #The residuals are fine plt.figure(figsize=(4,3)) sns.regplot(res.predict(),df_companies["log_marketcap"] -res.predict()) #Get many models to see hwo coefficient changes from statsmodels.iolib.summary2 import summary_col mod1 = smf.ols(formula='log_marketcap ~ log_revenue + log_employees + log_assets', data=df_companies).fit() mod2 = smf.ols(formula='log_marketcap ~ log_revenue + log_assets', data=df_companies).fit() mod3 = smf.ols(formula='log_marketcap ~ log_employees + log_assets', data=df_companies).fit() mod4 = smf.ols(formula='log_marketcap ~ log_assets', data=df_companies).fit() mod5 = smf.ols(formula='log_marketcap ~ log_revenue + log_employees ', data=df_companies).fit() mod6 = smf.ols(formula='log_marketcap ~ log_revenue ', data=df_companies).fit() mod7 = smf.ols(formula='log_marketcap ~ log_employees ', data=df_companies).fit() output = summary_col([mod1,mod2,mod3,mod4,mod5,mod6,mod7],stars=True) print(mod1.rsquared_adj,mod2.rsquared_adj,mod3.rsquared_adj,mod4.rsquared_adj,mod5.rsquared_adj,mod6.rsquared_adj,mod7.rsquared_adj) output Explanation: Revenue, employees and assets are highly correlated. Let's imagine wwe want to explain the market capitalization in terms of the other variables. 
End of explanation X = df_companies.loc[:,["log_revenue","log_employees","log_assets"]] X.head(2) #Let's scale all the columns to have mean 0 and std 1 from sklearn.preprocessing import scale X_to_combine = scale(X) X_to_combine #In this case we sum them together X_combined = np.sum(X_to_combine,axis=1) X_combined #Add a new column with our combined variable and run regression df_companies["combined"] = X_combined print(smf.ols(formula='log_marketcap ~ combined ', data=df_companies).fit().summary()) Explanation: 3.1 Combining variables Multiplying/summing variables It's easy It's arbitrary End of explanation #Do the fitting from sklearn.decomposition import PCA pca = PCA(n_components=2) new_X = pca.fit_transform(X) print("Explained variance") print(pca.explained_variance_ratio_) print() print("Weight of components") print(["log_revenue","log_employees","log_assets"]) print(pca.components_) print() new_X #Create our new variables (2 components, so 2 variables) df_companies["pca_x1"] = new_X[:,0] df_companies["pca_x2"] = new_X[:,1] print(smf.ols(formula='log_marketcap ~ pca_x1 + pca_x2 ', data=df_companies).fit().summary()) print("Before") sns.lmplot("log_revenue","log_assets",data=df_companies,fit_reg=False) print("After") sns.lmplot("pca_x1","pca_x2",data=df_companies,fit_reg=False) Explanation: 3.2 PCA Keep all the info The resulting variables do not actually mean much End of explanation from sklearn.decomposition import FactorAnalysis fa = FactorAnalysis(n_components=2) new_X = fa.fit_transform(X) print("Weight of components") print(["log_revenue","log_employees","log_assets"]) print(fa.components_) print() new_X #New variables df_companies["fa_x1"] = new_X[:,0] df_companies["fa_x2"] = new_X[:,1] print(smf.ols(formula='log_marketcap ~ fa_x1 + fa_x2 ', data=df_companies).fit().summary()) print("After") sns.lmplot("fa_x1","fa_x2",data=df_companies,fit_reg=False) Explanation: 3.3 Factor analysis The observations are assumed to be caused by a linear transformation of lower dimensional latent factors and added Gaussian noise. Without loss of generality the factors are distributed according to a Gaussian with zero mean and unit covariance. The noise is also zero mean and has an arbitrary diagonal covariance matrix. End of explanation Image(url="http://www.holehouse.org/mlclass/07_Regularization_files/Image.png") from sklearn.model_selection import train_test_split y = df_companies["log_marketcap"] X = df_companies.loc[:,["log_revenue","log_employees","log_assets"]] X.head(2) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33) X_train.head() Explanation: Difference between FA and PCA http://stats.stackexchange.com/questions/1576/what-are-the-differences-between-factor-analysis-and-principal-component-analysi Principal component analysis involves extracting linear composites of observed variables. Factor analysis is based on a formal model predicting observed variables from theoretical latent factors. 3.4 Methods to avoid overfitting (Machine learning with regularization) SVR: http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html Lasso regression: http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html Both have a regularization parameter, that penalizes having many terms. How to choose the best value of this parameter? 
- With a train_test split (or cross-validation) - http://scikit-learn.org/stable/modules/cross_validation.html - http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html End of explanation df_train = X_train.copy() df_train["log_marketcap"] = y_train df_train.head() mod = smf.ols(formula='log_marketcap ~ log_revenue + log_employees + log_assets', data=df_train).fit() print("log_revenue log_employees log_assets ") print(mod.params.values[1:]) Explanation: Linear regression (to compare) End of explanation from sklearn.svm import SVR clf = SVR(C=0.1, epsilon=0.2,kernel="linear") clf.fit(X_train, y_train) print("log_revenue log_employees log_assets ") print(clf.coef_) Explanation: SVR - Gives balanced weights (the most correlated independent variable (with the dependent) doesn't take all the weight). - Very good when you have hundreds of variables. You can iteratively drop the worst predictor. - It allow for more than linear "regression". The default kernel is "rbf", which fits curves. The problem is that interpreting it is hard. End of explanation from sklearn import linear_model reg = linear_model.Lasso(alpha = 0.01) reg.fit(X_train,y_train) print("log_revenue log_employees log_assets ") print(reg.coef_) Explanation: Lasso - Have a penalty - Discards the variables with low weights. End of explanation print(["SVR","Lasso","Linear regression"]) err1,err2,err3 = sklearn.metrics.mean_squared_error(clf.predict(X_test),y_test),sklearn.metrics.mean_squared_error(reg.predict(X_test),y_test),sklearn.metrics.mean_squared_error(mod.predict(X_test),y_test) print(err1,err2,err3) print(["SVR","Lasso","Linear regression"]) err1,err2,err3 = sklearn.metrics.r2_score(clf.predict(X_test),y_test),sklearn.metrics.r2_score(reg.predict(X_test),y_test),sklearn.metrics.r2_score(mod.predict(X_test),y_test) print(err1,err2,err3) Explanation: Summary End of explanation
1,080
Given the following text description, write Python code to implement the functionality described below step by step Description: graded = 9/9 Homework assignment #3 These problem sets focus on using the Beautiful Soup library to scrape web pages. Problem Set #1 Step1: Now, in the cell below, use Beautiful Soup to write an expression that evaluates to the number of &lt;h3&gt; tags contained in widgets2016.html. Step2: Now, in the cell below, write an expression or series of statements that displays the telephone number beneath the "Widget Catalog" header. Step3: In the cell below, use Beautiful Soup to write some code that prints the names of all the widgets on the page. After your code has executed, widget_names should evaluate to a list that looks like this (though not necessarily in this order) Step4: Problem set #2 Step5: In the cell below, duplicate your code from the previous question. Modify the code to ensure that the values for price and quantity in each dictionary are floating-point numbers and integers, respectively. I.e., after executing the cell, your code should display something like this Step6: Great! I hope you're having fun. In the cell below, write an expression or series of statements that uses the widgets list created in the cell above to calculate the total number of widgets that the factory has in its warehouse. Expected output Step7: In the cell below, write some Python code that prints the names of widgets whose price is above $9.30. Expected output Step9: Problem set #3 Step10: If our task was to create a dictionary that maps the name of the cheese to the description that follows in the &lt;p&gt; tag directly afterward, we'd be out of luck. Fortunately, Beautiful Soup has a .find_next_sibling() method, which allows us to search for the next tag that is a sibling of the tag you're calling it on (i.e., the two tags share a parent), that also matches particular criteria. So, for example, to accomplish the task outlined above Step11: With that knowledge in mind, let's go back to our widgets. In the cell below, write code that uses Beautiful Soup, and in particular the .find_next_sibling() method, to print the part numbers of the widgets that are in the table just beneath the header "Hallowed Widgets." Expected output Step12: Okay, now, the final task. If you can accomplish this, you are truly an expert web scraper. I'll have little web scraper certificates made up and I'll give you one, if you manage to do this thing. And I know you can do it! In the cell below, I've created a variable category_counts and assigned to it an empty dictionary. Write code to populate this dictionary so that its keys are "categories" of widgets (e.g., the contents of the &lt;h3&gt; tags on the page
Python Code: from bs4 import BeautifulSoup from urllib.request import urlopen html_str = urlopen("http://static.decontextualize.com/widgets2016.html").read() document = BeautifulSoup(html_str, "html.parser") Explanation: graded = 9/9 Homework assignment #3 These problem sets focus on using the Beautiful Soup library to scrape web pages. Problem Set #1: Basic scraping I've made a web page for you to scrape. It's available here. The page concerns the catalog of a famous widget company. You'll be answering several questions about this web page. In the cell below, I've written some code so that you end up with a variable called html_str that contains the HTML source code of the page, and a variable document that stores a Beautiful Soup object. End of explanation h3_tags = document.find_all('h3') h3_tags_count = 0 for tag in h3_tags: h3_tags_count = h3_tags_count + 1 print(h3_tags_count) Explanation: Now, in the cell below, use Beautiful Soup to write an expression that evaluates to the number of &lt;h3&gt; tags contained in widgets2016.html. End of explanation #inspecting webpace with help of developer tools -- shows infomation is stored in an a tag that has the class 'tel' a_tags = document.find_all('a', {'class':'tel'}) for tag in a_tags: print(tag.string) #Does not return the same: [tag.string for tag in a_tags] Explanation: Now, in the cell below, write an expression or series of statements that displays the telephone number beneath the "Widget Catalog" header. End of explanation search_table = document.find_all('table',{'class': 'widgetlist'}) #print(search_table) tables_content = [table('td', {'class':'wname'}) for table in search_table] #print(tables_content) for table in tables_content: for single_table in table: print(single_table.string) Explanation: In the cell below, use Beautiful Soup to write some code that prints the names of all the widgets on the page. After your code has executed, widget_names should evaluate to a list that looks like this (though not necessarily in this order): Skinner Widget Widget For Furtiveness Widget For Strawman Jittery Widget Silver Widget Divided Widget Manicurist Widget Infinite Widget Yellow-Tipped Widget Unshakable Widget Self-Knowledge Widget Widget For Cinema End of explanation widgets = [] #STEP 1: Find all tr tags, because that's what tds are grouped by for tr_tags in document.find_all('tr', {'class': 'winfo'}): #STEP 2: For each tr_tag in tr_tags, make a dict of its td tr_dict ={} for td_tags in tr_tags.find_all('td'): td_tags_class = td_tags['class'] for tag in td_tags_class: tr_dict[tag] = td_tags.string #STEP3: add dicts to list widgets.append(tr_dict) widgets #widgets[5]['partno'] Explanation: Problem set #2: Widget dictionaries For this problem set, we'll continue to use the HTML page from the previous problem set. In the cell below, I've made an empty list and assigned it to a variable called widgets. Write code that populates this list with dictionaries, one dictionary per widget in the source file. The keys of each dictionary should be partno, wname, price, and quantity, and the value for each of the keys should be the value for the corresponding column for each row. After executing the cell, your list should look something like this: [{'partno': 'C1-9476', 'price': '$2.70', 'quantity': u'512', 'wname': 'Skinner Widget'}, {'partno': 'JDJ-32/V', 'price': '$9.36', 'quantity': '967', 'wname': u'Widget For Furtiveness'}, ...several items omitted... 
{'partno': '5B-941/F', 'price': '$13.26', 'quantity': '919', 'wname': 'Widget For Cinema'}] And this expression: widgets[5]['partno'] ... should evaluate to: LH-74/O End of explanation #had to rename variables as it kept printing the ones from the cell above... widgetsN = [] for trN_tags in document.find_all('tr', {'class': 'winfo'}): trN_dict ={} for tdN_tags in trN_tags.find_all('td'): tdN_tags_class = tdN_tags['class'] for tagN in tdN_tags_class: if tagN == 'price': sliced_tag_string = tdN_tags.string[1:] trN_dict[tagN] = float(sliced_tag_string) elif tagN == 'quantity': trN_dict[tagN] = int(tdN_tags.string) else: trN_dict[tagN] = tdN_tags.string widgetsN.append(trN_dict) widgetsN Explanation: In the cell below, duplicate your code from the previous question. Modify the code to ensure that the values for price and quantity in each dictionary are floating-point numbers and integers, respectively. I.e., after executing the cell, your code should display something like this: [{'partno': 'C1-9476', 'price': 2.7, 'quantity': 512, 'widgetname': 'Skinner Widget'}, {'partno': 'JDJ-32/V', 'price': 9.36, 'quantity': 967, 'widgetname': 'Widget For Furtiveness'}, ... some items omitted ... {'partno': '5B-941/F', 'price': 13.26, 'quantity': 919, 'widgetname': 'Widget For Cinema'}] (Hint: Use the float() and int() functions. You may need to use string slices to convert the price field to a floating-point number.) End of explanation widget_quantity_list = [element['quantity'] for element in widgetsN] sum(widget_quantity_list) Explanation: Great! I hope you're having fun. In the cell below, write an expression or series of statements that uses the widgets list created in the cell above to calculate the total number of widgets that the factory has in its warehouse. Expected output: 7928 End of explanation for widget in widgetsN: if widget['price'] > 9.30: print(widget['wname']) Explanation: In the cell below, write some Python code that prints the names of widgets whose price is above $9.30. Expected output: Widget For Furtiveness Jittery Widget Silver Widget Infinite Widget Widget For Cinema End of explanation example_html = <h2>Camembert</h2> <p>A soft cheese made in the Camembert region of France.</p> <h2>Cheddar</h2> <p>A yellow cheese made in the Cheddar region of... France, probably, idk whatevs.</p> Explanation: Problem set #3: Sibling rivalries In the following problem set, you will yet again be working with the data in widgets2016.html. In order to accomplish the tasks in this problem set, you'll need to learn about Beautiful Soup's .find_next_sibling() method. Here's some information about that method, cribbed from the notes: Often, the tags we're looking for don't have a distinguishing characteristic, like a class attribute, that allows us to find them using .find() and .find_all(), and the tags also aren't in a parent-child relationship. This can be tricky! For example, take the following HTML snippet, (which I've assigned to a string called example_html): End of explanation example_doc = BeautifulSoup(example_html, "html.parser") cheese_dict = {} for h2_tag in example_doc.find_all('h2'): cheese_name = h2_tag.string cheese_desc_tag = h2_tag.find_next_sibling('p') cheese_dict[cheese_name] = cheese_desc_tag.string cheese_dict Explanation: If our task was to create a dictionary that maps the name of the cheese to the description that follows in the &lt;p&gt; tag directly afterward, we'd be out of luck. 
Fortunately, Beautiful Soup has a .find_next_sibling() method, which allows us to search for the next tag that is a sibling of the tag you're calling it on (i.e., the two tags share a parent), that also matches particular criteria. So, for example, to accomplish the task outlined above: End of explanation for h3_tags in document.find_all('h3'): if h3_tags.string == 'Hallowed widgets': hallowed_table = h3_tags.find_next_sibling('table') for element in hallowed_table.find_all('td', {'class':'partno'}): print(element.string) Explanation: With that knowledge in mind, let's go back to our widgets. In the cell below, write code that uses Beautiful Soup, and in particular the .find_next_sibling() method, to print the part numbers of the widgets that are in the table just beneath the header "Hallowed Widgets." Expected output: MZ-556/B QV-730 T1-9731 5B-941/F End of explanation category_counts = {} for x_tags in document.find_all('h3'): x_table = x_tags.find_next_sibling('table') tr_info_tags = x_table.find_all('tr', {'class':'winfo'}) category_counts[x_tags.string] = len(tr_info_tags) category_counts Explanation: Okay, now, the final task. If you can accomplish this, you are truly an expert web scraper. I'll have little web scraper certificates made up and I'll give you one, if you manage to do this thing. And I know you can do it! In the cell below, I've created a variable category_counts and assigned to it an empty dictionary. Write code to populate this dictionary so that its keys are "categories" of widgets (e.g., the contents of the &lt;h3&gt; tags on the page: "Forensic Widgets", "Mood widgets", "Hallowed Widgets") and the value for each key is the number of widgets that occur in that category. I.e., after your code has been executed, the dictionary category_counts should look like this: {'Forensic Widgets': 3, 'Hallowed widgets': 4, 'Mood widgets': 2, 'Wondrous widgets': 3} End of explanation
1,081
Given the following text description, write Python code to implement the functionality described below step by step Description: Basic MR Class Step1: Spin Velocity When an atomic species with net spin is placed in a steady magnetic field, it precesses at a frequency that is characteristic of that species and that varies with the strength of the magnetic field. We can compute the precession frequency from a following simple formula. Note the units. (see chapter 3 and ca. p. 51i, 60ii for a discussion of precession). Step2: Question 1 What are the units of the Larmor frequency, $v$, for hydrogen as expressed above? Step3: The gyromagnetic constant of sodium is $11.27 \times 10^6 Hz/Tesla$. Compute the Larmor frequency of sodium in a 3T magnet. Ordinarily, the resonant frequencies are expressed in units of MegaHertz (millions of hertz) Step4: Spin Energy The energy in precessing spin is proportional to its resonant frequency, $v$. The constant of proportionality between energy and frequency is a famous constant called Planck's constant. (Extra credit Step5: Question 2 What are the units of E? At steady state, say when the subject enters the magnet and prior to any measurements, the dipoles align parallel or anti-parallel to the magnetic field. (The textbook illustrates this with a nice metaphor of gravity in Figure 3.7ii, 3,6i.) The energy difference between these two states is proportional to the mean magnetic field. This is called the Zeeman effect (see Figure 3.8i, 3.9ii in the book). The proportion of dipoles aligned parallel (low energy) and anti-parallel (high energy) to the main field is described by the Boltzmann distribution. The formula that determines the fraction of dipoles in the low and high energy states is in Hornak (Chapter 2). Step6: At low temperatures (near absolute zero), very few of the dipoles enter the high (anti-parallel) energy state. No, this is not important for us. But I find the numbers interesting and can see why people might want to work on low temperature physics for a while. Step8: Question 3 Where is human body temperature on the graph? Given human body temperature, what is the ratio of high/low energy? Would it matter if we kept the room cooler? T1 Tissue Contrast The tendency of dipoles to align (either parallel or anti-parallel) with the B0 magnetic field imposes an opportunity to probe tissue properties using magnetic resonance. MR signals are derived by perturbing the ordered spins (excitation) and then measuring the emissions (reception) as the dipoles return to their low energy state. The way the dipoles return to their low energy state provides information about the local tissue. Summing the effect of the many many dipoles within a voxel, we obtain a measure of the dipoles within the voxel called the net magnetization. This is represented by a single vector (see the many examples in Hornak). Most reasoning about the MR signal is based on understanding models of the net magnetization vector and how it recovers after being perturbed by the radio frequency pulses in the presence of changing magnetic fields. The MR measurements that we obtain describe the net magnetization in the direction perpendicular to the main axis (B0 field). This direction is illustrated in the following figures. First, here is a 3D plot that shows the net magnetization as a red circle in the steady-state. The black lines show the three axes. Step9: The size of the MR signal we measure depends on how far the magnetization deviates from the main z-axis. 
We can see this more easily by looking at the figure straight from the top. In initial situation, the point is at $(0, 0)$ in the $(x, y)$ plane. Step10: Notice that from this view, looking down the z-axis, we see only the x- and y-axes. The net magnetization is at $(0,0)$ so we will not see any signal. Suppose we excite the tissue and place the net magnetization along the x-axis. Step11: When we look from the top now, we see that there is a large magnetization component. The green point is removed from $(0,0)$ and falls along the x-axis. Step12: When we change the net magnetization from the steady-state position (red circle) to the excited position (green circle), it is like introducing a 90 deg rotation in the magnetization direction. This is usually called the flip angle. This is one of the parameters that you select when doing MR imaging. Step13: As the net magnetization returns to the steady-state position, the distance from the origin decreases, reducing the signal. This is illustrated by the collection of points plotted here that show the net magnetization rotating back towards the z-axis. Step14: When viewed from the top, you can see that the green points head back towards the origin. Step15: After receiving an excitation pulse, this component of the MR signal decays gradually over time. The spin-lattice decay has been measured and, in general, it follows an exponential decay rate. Specifically, here is the rate of recovery of the T1 magnetization for hydrogen molecules in gray matter. Step16: This is the exponential rate of recovery of the T1 magnetization, after it has been set to zero by flipping the net magnetization 90 degrees. Step17: Plotted is a graph we have the magnetization of gray matter as Step18: The decay rates for various brain tissues (summarized by the parameter T1 above) differs both with the material and with the level of the B0 field. The value T1 = 0.88 seconds above is typical of gray matter at 1.5T. The T1 value for white matter is slightly smaller. Comparing the two we see that white matter recovers slightly faster (has a smaller T1) Step19: Notice that the time to recover all the way back to Mo is fairly long, on the order of 3-4 sec. This is a limitation in the speed of acquisition for T1 imaging. The difference in the T1 component of the signal from gray matter and white matter changes over time. This difference is plotted in the next graph. Step20: Question 4 If you are seeking to make a measurement that optimizes the signal to noise ratio between these two materials, at what time would you measure the recovery of the T1 signal? Question 5 Look up the T1 value of cerebro-spinal fluid (CSF). Plot the T1 recovery of CSF. At what time you would measure to maximize the white/CSF contrast. We can visualize this difference as follows. Suppose that we have two beakers, adjacent to one another, containing materials with different T1 values. Suppose we make a pair of images in which the intensity of each image is set to the T1 value over time. What would the T1 images look like? The beakers start with the same, Mo, magnetization. Step21: They will have different T1 relaxation values Step22: Here is a movie showing the relative intensities of the images over a 4 sec period, measured every 100 ms. You can play the movie by moving the slider to change the time point displayed. Step23: As you can see, if we make a picture of the net magnetization around 0.6-1.0 sec during the decay, there will be a good contrast difference between the gray and white matter. 
Measured earlier or later, the picture will have less contrast. As a preview of further work, later in the course, we should add just a little noise to the measurements. After all, all measurements have some noise. Let's look again. Step24: T2 Contrast There is a second physical mechanism, in addition to the spin-lattice measurement, that influences the MR signal. This second mechanism is called spin-spin interaction (transverse relaxation). This signaling mechanism is particularly important for functional magnetic resonance imaging and the BOLD signal. In describing the T1 signal, we treated the MR signal as a single unified vector. In the example above, we explored what happens to the net magnetization of the MR signal when the vector is rotated 90 deg into the x-y plane. But we omitted any discussion the fact that the dipoles are assumed to be continuously precessing around the main axis together, in unison. Perhaps in a perfectly homogeneous environment, these rotating dipoles would precess at the Larmor frequency in perfect synchronization and we could treat the single large vector as we have. But in practice, the dipoles within a single voxel of a functional image each experience slightly different magnetic field environments. Consequently, they each have their own individual Larmor frequencies, proportional to the magnetic field that they experience. An important second mechanism of MR is a consequence of the fact that the individual dipoles each have their own local magnetic field and the synchrony soon dissipate. Suppose we have a large sample of dipoles that are spinning together in perfect phase. We can specify their orientation as an angle in this plane, theta. Let's assume they all share a common angle Step25: The position of the spins in the $(x, y)$ plane will be Step26: And they will all fall at the same position Step27: The total magnetization, summed across all the dipoles, is the vector length of the sum of these spins Step28: Now, suppose that spins are precessing at slightly different rates. So after a few moments in time they do not fall at exactly the same angle. We can express this by creating a new vector theta that has some variability in it. Step29: Here is the distribution of the angles Step30: Going through the same process we can make a plot of the spin positions Step31: Now, you can see that the net magnetization is somewhat smaller Step32: If the spins grow even further out of phase, say they are spread over a full $\pi$ radians (180 degrees), then we have a dramatic reduction in the net magnetization Step33: In a typical experiment, in which the spins are in an inhomogeneous environment, the spins spread out more and more with time, and the transverse magnetization declines. The loss of signal from this spin-dephasing mechanism follows an exponential time constant. Step34: Experimental measurements of spin-spin decay shows that it occurs at a much faster rate than spin-lattice. Comparison of T1 (spin-lattice) and T2 (spin-spin) decay constants at various B0 field strengths are
Python Code: %pylab inline import matplotlib as mpl mpl.rcParams["figure.figsize"] = (8, 6) mpl.rcParams["axes.grid"] = True from IPython.display import display from ipywidgets import interact,FloatSlider Explanation: Basic MR Class: Psych 204a Tutorial: Basic MR Duration: 90 minutes Authors: Originally written in Matlab by Brian Wandell in 2004 Checked and updated: 2007 Rory Sayres 2009.22.09, 2010.18.09 Jon Winawer 2012.09 Michael Waskom (translated to Python) 2013.09 Grace Tang, Bob Dougherty Copyright: Stanford University This tutorial explains basic principles of magnetic resonance signals. As the first tutorial in the series, it also gives you an opportunity to examine basic Python commands. (For a short tutorial on the Jupyter/IPython notebook interface, have a look at this video: https://www.youtube.com/watch?v=lmoNmY-cmSI.) In this tutorial, principles are illustrated for signals derived from a bulk substance, such as a beaker of water. Various terms used to describe the MR principles, including spins, parallel and anti-parallel, Boltzman distribution are introduced. Also, tissue properties such as T1 (longitudinal) and T2 (spin-spin) interactions, are explained and the way in which signal contrast depends on these parameters is explained. The next tutorial, MRImaging, takes up the topic of how to make images that measure these tissue properties in a non-uniform volume, such as a head. References to help with this tutorial: Huettel et al. Chapters 1-3 (mainly Chapter 3) John Hornak online MRI book (especially Chapter 3) McRobbie et al, MRI, From Picture to Proton, Chapter 8 1st v. 2nd edition of text: The course text is Huettel et al, 2nd edition. The first and second editions are quite similar, especially in the earlier chapters. References followed by "ii" are 2nd edition, "i" first edition. Hence p. 51i, 60ii means p 51 in the first edition, p 60 in the second. And so on. Getting Started OK, let's get started with the tutorial! Each cell that has a gray box with 'In [ ]:' in front of it contains code. To run the code, click on the cell (it will be highlighted with a green outline when you do) and then hit the Enter key while holding shift. Some later cells depend on values set in earlier cells, so be sure to execute them in order. In the first cell of code just below this paragraph, we set some general parameters for python. End of explanation B0 = 1.5 # Magnetic field strength (Tesla) g = 42.58e6 # Gyromagnetic constant for hyrdogen (Hz / Tesla) v = g * B0 # The resonant frequence of hydrogen, also called its Larmor frequency Explanation: Spin Velocity When an atomic species with net spin is placed in a steady magnetic field, it precesses at a frequency that is characteristic of that species and that varies with the strength of the magnetic field. We can compute the precession frequency from a following simple formula. Note the units. (see chapter 3 and ca. p. 51i, 60ii for a discussion of precession). End of explanation # Python string interpolation is accomplished through the % operator print "The resonant frequency of spins in hydrogen is %0.4f (MHz) at %.2f Tesla" % (v / (10 ** 6), B0) Explanation: Question 1 What are the units of the Larmor frequency, $v$, for hydrogen as expressed above? End of explanation # Compute your answer here Explanation: The gyromagnetic constant of sodium is $11.27 \times 10^6 Hz/Tesla$. Compute the Larmor frequency of sodium in a 3T magnet. 
Ordinarily, the resonant frequencies are expressed in units of MegaHertz (millions of hertz) End of explanation h = 6.626e-34 # Planck's constant (Joules-seconds) E = h * v print "E = %.4g" % E Explanation: Spin Energy The energy in precessing spin is proportional to its resonant frequency, $v$. The constant of proportionality between energy and frequency is a famous constant called Planck's constant. (Extra credit: Find out something interesting about Max Planck). Hence, the amount of energy in a hydrogen molecular in this magnetic field is End of explanation k = 1.3805e-23 # Boltzmann's constant, J/Kelvin T = 300 # Degrees Kelvin at room temperature dE = h * g * B0 # Transition energy ratio_high_to_low = exp(-dE / (k * T)) # Boltzmann's formula, Hornak, Chapter 3; Huettel, p. 76ii print "High-low ratio: %.6f" % ratio_high_to_low Explanation: Question 2 What are the units of E? At steady state, say when the subject enters the magnet and prior to any measurements, the dipoles align parallel or anti-parallel to the magnetic field. (The textbook illustrates this with a nice metaphor of gravity in Figure 3.7ii, 3,6i.) The energy difference between these two states is proportional to the mean magnetic field. This is called the Zeeman effect (see Figure 3.8i, 3.9ii in the book). The proportion of dipoles aligned parallel (low energy) and anti-parallel (high energy) to the main field is described by the Boltzmann distribution. The formula that determines the fraction of dipoles in the low and high energy states is in Hornak (Chapter 2). End of explanation T = logspace(-3, 2.5, 50) r = exp(-dE / (k*T)) plot(T, r) semilogx() xlabel('Temperature (K)') ylabel('Ratio of high/low energy state dipoles') ylim([0, 1.1]); Explanation: At low temperatures (near absolute zero), very few of the dipoles enter the high (anti-parallel) energy state. No, this is not important for us. But I find the numbers interesting and can see why people might want to work on low temperature physics for a while. End of explanation # This function just saves us some typing below # It's not relevant to the conceptual material # But it does show a nice trick on the last line def axisplot(ax): Convenience function to plot axis lines in a 3D plot ax.plot([-1, 1], [0, 0], [0, 0], "k:") ax.plot([0, 0], [-1, 1], [0, 0], "k:") ax.plot([0, 0], [0, 0], [-1, 1], "k:") for axis in ["x", "y", "z"]: getattr(ax, "set_%slabel" % axis)(axis) from mpl_toolkits.mplot3d import Axes3D f = figure() ax = f.add_subplot(111, projection='3d', aspect="equal") ax.plot([0], [0], [1], "ro") axisplot(ax) az0 = 332.5 el0 = 30 ax.view_init(el0, az0) Explanation: Question 3 Where is human body temperature on the graph? Given human body temperature, what is the ratio of high/low energy? Would it matter if we kept the room cooler? T1 Tissue Contrast The tendency of dipoles to align (either parallel or anti-parallel) with the B0 magnetic field imposes an opportunity to probe tissue properties using magnetic resonance. MR signals are derived by perturbing the ordered spins (excitation) and then measuring the emissions (reception) as the dipoles return to their low energy state. The way the dipoles return to their low energy state provides information about the local tissue. Summing the effect of the many many dipoles within a voxel, we obtain a measure of the dipoles within the voxel called the net magnetization. This is represented by a single vector (see the many examples in Hornak). 
Most reasoning about the MR signal is based on understanding models of the net magnetization vector and how it recovers after being perturbed by the radio frequency pulses in the presence of changing magnetic fields. The MR measurements that we obtain describe the net magnetization in the direction perpendicular to the main axis (B0 field). This direction is illustrated in the following figures. First, here is a 3D plot that shows the net magnetization as a red circle in the steady-state. The black lines show the three axes. End of explanation az1 = 0 el1 = 90 ax.view_init(el1, az1) display(f) Explanation: The size of the MR signal we measure depends on how far the magnetization deviates from the main z-axis. We can see this more easily by looking at the figure straight from the top. In initial situation, the point is at $(0, 0)$ in the $(x, y)$ plane. End of explanation f = figure() ax = f.add_subplot(111, projection='3d', aspect="equal") ax.plot([1], [0], [0], "go") axisplot(ax) ax.view_init(el0, az0) Explanation: Notice that from this view, looking down the z-axis, we see only the x- and y-axes. The net magnetization is at $(0,0)$ so we will not see any signal. Suppose we excite the tissue and place the net magnetization along the x-axis. End of explanation ax.view_init(el1, az1) display(f) Explanation: When we look from the top now, we see that there is a large magnetization component. The green point is removed from $(0,0)$ and falls along the x-axis. End of explanation ax = subplot(111, projection='3d', aspect="equal") ax.plot([0], [0], [1], "ro") ax.plot([1], [0], [0], "go") axisplot(ax) ax.view_init(el0, az0) Explanation: When we change the net magnetization from the steady-state position (red circle) to the excited position (green circle), it is like introducing a 90 deg rotation in the magnetization direction. This is usually called the flip angle. This is one of the parameters that you select when doing MR imaging. End of explanation theta = ((pi / 2) * (linspace(0, 10, 11) / 10)).reshape(11, 1) x = cos(theta) y = zeros_like(x) z = sin(theta) f = figure() ax = f.add_subplot(111, projection='3d', aspect="equal") clut = linspace(1, 0, 11) for i, (x_i, y_i, z_i) in enumerate(zip(x, y, z)): ax.plot(x_i, y_i, z_i, "o", color=cm.RdYlGn(clut[i])) axisplot(ax) ax.view_init(el0, az0) Explanation: As the net magnetization returns to the steady-state position, the distance from the origin decreases, reducing the signal. This is illustrated by the collection of points plotted here that show the net magnetization rotating back towards the z-axis. End of explanation ax.view_init(el1, az1) display(f) Explanation: When viewed from the top, you can see that the green points head back towards the origin. End of explanation T1_gray = 0.88 # Time constant units of S t_T1 = arange(0.02, 6, 0.02) # Time in seconds Mo = 1 # Set the net magnetization in the steady state to 1 and ignore. Explanation: After receiving an excitation pulse, this component of the MR signal decays gradually over time. The spin-lattice decay has been measured and, in general, it follows an exponential decay rate. Specifically, here is the rate of recovery of the T1 magnetization for hydrogen molecules in gray matter. End of explanation MzG_T1 = Mo * (1 - exp(-t_T1 / T1_gray)) Explanation: This is the exponential rate of recovery of the T1 magnetization, after it has been set to zero by flipping the net magnetization 90 degrees. 
End of explanation plot(t_T1, MzG_T1) xlabel('Time (s)') ylabel('Longitudinal magnetization (T1)'); Explanation: Plotted is a graph we have the magnetization of gray matter as End of explanation T1_white = 0.64; MzW_T1 = Mo * (1 - exp(-t_T1 / T1_white)) plot(t_T1, MzG_T1, 'black', label="Gray") plot(t_T1, MzW_T1, 'gray', linestyle="--", label="White") xlabel('Time (s)') ylabel('Longitudinal magnetization (T1)') legend(loc="best"); Explanation: The decay rates for various brain tissues (summarized by the parameter T1 above) differs both with the material and with the level of the B0 field. The value T1 = 0.88 seconds above is typical of gray matter at 1.5T. The T1 value for white matter is slightly smaller. Comparing the two we see that white matter recovers slightly faster (has a smaller T1): End of explanation plot(t_T1, abs(MzW_T1 - MzG_T1)) xlabel('Time (s)') ylabel('Magnetization difference'); Explanation: Notice that the time to recover all the way back to Mo is fairly long, on the order of 3-4 sec. This is a limitation in the speed of acquisition for T1 imaging. The difference in the T1 component of the signal from gray matter and white matter changes over time. This difference is plotted in the next graph. End of explanation beaker1 = Mo * ones((32, 32)) beaker2 = Mo * ones((32, 32)) Explanation: Question 4 If you are seeking to make a measurement that optimizes the signal to noise ratio between these two materials, at what time would you measure the recovery of the T1 signal? Question 5 Look up the T1 value of cerebro-spinal fluid (CSF). Plot the T1 recovery of CSF. At what time you would measure to maximize the white/CSF contrast. We can visualize this difference as follows. Suppose that we have two beakers, adjacent to one another, containing materials with different T1 values. Suppose we make a pair of images in which the intensity of each image is set to the T1 value over time. What would the T1 images look like? The beakers start with the same, Mo, magnetization. End of explanation T1 = (0.64, 0.88) # White, gray T1 Explanation: They will have different T1 relaxation values End of explanation f = figure() beakers = [beaker1, beaker2] def draw_beakers(time): for i, beaker in enumerate(beakers): subplot(1, 2, i + 1) img = beaker * (1 - exp(-time / T1[i])) imshow(img, vmin=0, vmax=1, cmap="gray") grid(False) xticks([]) yticks([]) ylim([-5, 32]) title("Beaker %d" % (i + 1)) text(8, -3, 'Time: %.2f sec' % time, fontdict=dict(size=12)) interact(draw_beakers, time=FloatSlider(min=0, max=4.0, value=0)); Explanation: Here is a movie showing the relative intensities of the images over a 4 sec period, measured every 100 ms. You can play the movie by moving the slider to change the time point displayed. End of explanation def draw_beakers(time, noise_level): for i, beaker in enumerate(beakers): subplot(1, 2, i + 1) img = beaker * (1 - exp(-time / T1[i])) + randn(*beaker.shape)*noise_level img = abs(img) imshow(img, vmin=0, vmax=1, cmap="gray") grid(False) xticks([]) yticks([]) ylim([-5, 32]) title("Beaker %d" % (i + 1)) text(8, -3, 'Time: %.2f sec' % time, fontdict=dict(size=12)) interact(draw_beakers, time=FloatSlider(min=0, max=4.0, value=0), noise_level=FloatSlider(min=0, max=1.0, value=0.1)); Explanation: As you can see, if we make a picture of the net magnetization around 0.6-1.0 sec during the decay, there will be a good contrast difference between the gray and white matter. Measured earlier or later, the picture will have less contrast. 
As a preview of further work, later in the course, we should add just a little noise to the measurements. After all, all measurements have some noise. Let's look again. End of explanation n_samples = 10000 theta = zeros(n_samples) Explanation: T2 Contrast There is a second physical mechanism, in addition to the spin-lattice measurement, that influences the MR signal. This second mechanism is called spin-spin interaction (transverse relaxation). This signaling mechanism is particularly important for functional magnetic resonance imaging and the BOLD signal. In describing the T1 signal, we treated the MR signal as a single unified vector. In the example above, we explored what happens to the net magnetization of the MR signal when the vector is rotated 90 deg into the x-y plane. But we omitted any discussion the fact that the dipoles are assumed to be continuously precessing around the main axis together, in unison. Perhaps in a perfectly homogeneous environment, these rotating dipoles would precess at the Larmor frequency in perfect synchronization and we could treat the single large vector as we have. But in practice, the dipoles within a single voxel of a functional image each experience slightly different magnetic field environments. Consequently, they each have their own individual Larmor frequencies, proportional to the magnetic field that they experience. An important second mechanism of MR is a consequence of the fact that the individual dipoles each have their own local magnetic field and the synchrony soon dissipate. Suppose we have a large sample of dipoles that are spinning together in perfect phase. We can specify their orientation as an angle in this plane, theta. Let's assume they all share a common angle End of explanation spins = [cos(theta), sin(theta)] Explanation: The position of the spins in the $(x, y)$ plane will be End of explanation subplot(111, aspect="equal") plot(spins[0], spins[1], "o") xlim(-2, 2) ylim(-2, 2) xlabel("x") ylabel("y"); Explanation: And they will all fall at the same position End of explanation avg_pos = sum(spins, axis=1) / n_samples net_mag = sqrt(sum(square(avg_pos))) print "Net magnetization: %.4f" % net_mag Explanation: The total magnetization, summed across all the dipoles, is the vector length of the sum of these spins End of explanation theta = rand(n_samples) * 0.5 # Uniform random number generator Explanation: Now, suppose that spins are precessing at slightly different rates. So after a few moments in time they do not fall at exactly the same angle. We can express this by creating a new vector theta that has some variability in it. 
End of explanation hist(theta) xlabel('Angle') ylabel('Number of spins') xlim(0, 2 * pi) xticks([0, pi / 2, pi, 3 * pi / 2, 2 * pi], ["0", r"$\frac{\pi}{2}$", "$\pi$", r"$\frac{3\pi}{2}$", "$2\pi$"], size=14); Explanation: Here is the distribution of the angles End of explanation spins = [cos(theta), sin(theta)] subplot(111, aspect="equal") plot(spins[0], spins[1], 'o') xlim(-2, 2) ylim(-2, 2) xlabel('x'), ylabel('y'); Explanation: Going through the same process we can make a plot of the spin positions End of explanation avg_pos = sum(spins, axis=1) / n_samples net_mag = sqrt(sum(square(avg_pos))) print "Net magnetization: %.4f" % net_mag Explanation: Now, you can see that the net magnetization is somewhat smaller End of explanation theta = rand(n_samples) * pi hist(theta) xlabel('Angle') ylabel('Number of spins') xlim(0, 2 * pi) xticks([0, pi / 2, pi, 3 * pi / 2, 2 * pi], ["0", r"$\frac{\pi}{2}$", "$\pi$", r"$\frac{3\pi}{2}$", "$2\pi$"], size=14); spins = [cos(theta), sin(theta)] subplot(111, aspect="equal") plot(spins[0], spins[1], 'o') xlim(-2, 2) ylim(-2, 2) xlabel('x') ylabel('y') avg_pos = sum(spins, axis=1) / n_samples net_mag = sqrt(sum(square(avg_pos))) print "Net magnetization: %.4f" % net_mag Explanation: If the spins grow even further out of phase, say they are spread over a full $\pi$ radians (180 degrees), then we have a dramatic reduction in the net magnetization End of explanation t_T2 = arange(0.01, 0.3, 0.01) # Time in secs T2_white = 0.08 # T2 for white matter T2_gray = 0.11 # T2 for gray matter MzG_T2 = Mo * exp(-t_T2 / T2_gray) MzW_T2 = Mo * exp(-t_T2 / T2_white) plot(t_T2, MzG_T2, 'k', label="Gray") plot(t_T2, MzW_T2, 'gray', linestyle="--", label="White") xlabel('Time (s)') ylabel('Transverse magnetization (T2)') legend(); Explanation: In a typical experiment, in which the spins are in an inhomogeneous environment, the spins spread out more and more with time, and the transverse magnetization declines. The loss of signal from this spin-dephasing mechanism follows an exponential time constant. End of explanation plot(t_T1, abs(MzG_T1 - MzW_T1), label="T1") plot(t_T2, abs(MzG_T2 - MzW_T2), label="T2") xlabel('Time (s)') ylabel('Transverse magnetization difference') legend(); Explanation: Experimental measurements of spin-spin decay shows that it occurs at a much faster rate than spin-lattice. Comparison of T1 (spin-lattice) and T2 (spin-spin) decay constants at various B0 field strengths are: <table> <tr> <td></td> <td colspan=3><b>T1</b></td> <td colspan=3><b>T2</b></td> </tr> <tr> <td><b>Field</b></td> <td><i>1.5T</i></td> <td><i>3.0T</i></td> <td><i>4.0T</i></td> <td><i>1.5T</i></td> <td><i>3.0T</i></td> <td><i>4.0T</i></td> </tr> <tr> <td><i>White</i></td> <td>0.64</td> <td>0.86</td> <td>1.04</td> <td>0.08</td> <td>0.08</td> <td>0.05</td> </tr> <tr> <td><i>Gray</i></td> <td>0.88</td> <td>1.20</td> <td>1.40</td> <td>0.08</td> <td>0.11</td> <td>0.05</td> </tr> </table> Source: Jezzard and Clare, Chapter 3 in the Oxford fMRI Book Also, notice that the peak difference occurs a very short time compared to the T1 difference. End of explanation
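A quick numeric check of that last point (a sketch reusing the arrays defined above, not part of the original notebook):
t_peak_T1 = t_T1[argmax(abs(MzG_T1 - MzW_T1))]
t_peak_T2 = t_T2[argmax(abs(MzG_T2 - MzW_T2))]
print "Peak T1 gray/white contrast at %.2f s; peak T2 contrast at %.2f s" % (t_peak_T1, t_peak_T2)
With the T1/T2 values used in this notebook, the T1 difference peaks at roughly 0.7-0.8 s while the T2 difference peaks at about a tenth of a second, consistent with the comment above.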
1,082
Given the following text description, write Python code to implement the functionality described below step by step Description: Send email client Importing all dependencies Step1: User Details Function Step2: Login function In this function we call the user details function and get the user name and password. Then we use those details for the SMTP login. SMTP is the Simple Mail Transfer Protocol. Step4: Send mail function. This function takes 5 arguments. 1. Login data 2. To email 3. From email 4. HTML format message 5. Normal text The HTML message is the preferred part.
Python Code: # ! /usr/bin/python import smtplib from email.mime.multipart import MIMEMultipart from email.mime.text import MIMEText from email.header import Header from email.utils import formataddr import getpass Explanation: Send email Clint Importing all dependency End of explanation def user(): # ORG_EMAIL = "@gmail.com" # FROM_EMAIL = "ypur mail" + ORG_EMAIL # FROM_PWD = "yourpss" FROM_EMAIL = raw_input("insert Email : ") FROM_PWD = getpass.getpass("input : ") return FROM_EMAIL,FROM_PWD Explanation: User Details Function End of explanation def login(): gmail_user, gmail_pwd = user() #calling the user function for get user details smtpserver = smtplib.SMTP("smtp.gmail.com",587) #Declaring gmail SMTP server address and port smtpserver.starttls() #Starting tls service, Transport Layer Security (TLS) are cryptographic protocols that provide communications security over a computer network. smtpserver.login(gmail_user, gmail_pwd) #Login to Gmail server using TLS print 'Login successful' return smtpserver Explanation: Login function In this function we call user details function and get the user name and password, Than we use those details for IMAP login. SMTP is Simple Mail Transfer Protocol End of explanation # text = "Hi!\n5633222222222222222http://www.python.org" # html = \ # <html> # <head></head> # <body> # <p>Hi!<br> # How are you?<br> # Here is the <a href="http://www.python.org">link</a> you wanted. # </p> # </body> # </html> # def Send_Mail(smtpserver,TO_EMAIL,text=None,html=None,subject='Subject missing',FROM_EMAIL='Shahariar'): # Create message container - the correct MIME type is multipart/alternative. msg = MIMEMultipart('alternative') # In turn, use text/plain and text/html parts within the multipart/alternative part. msg['Subject'] = subject #Subject of the message msg['From'] = formataddr((str(Header(FROM_EMAIL, 'utf-8')), FROM_EMAIL)) #Adding custom Sender Name msg['To'] = TO_EMAIL #Assining Reciver email part1 = MIMEText(text, 'plain') #adding text part of mail part2 = MIMEText(html, 'html') #Adding HTMLpart of mail # Attach parts into message container. # According to RFC 2046, the last part of a multipart message, in this case # the HTML message, is best and preferred. msg.attach(part1) #attach Plain text msg.attach(part2) #attach HTML text # sendmail function takes 3 arguments: sender's address, recipient's address # and message to send - here it is sent as one string. try: smtpserver.sendmail(FROM_EMAIL, TO_EMAIL, msg.as_string()) print " Message Send" smtpserver.quit() #stopping server except Exception: print Exception Explanation: Send mail function. This function takes 5 argument. 1. Login Data. 2. To Email 3. From Email 4. HTML format massage 5. Normal text The HTML message, is best and preferred. End of explanation
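A minimal usage sketch tying the three functions together (the addresses and message bodies below are placeholders, not from the original notebook):
server = login()  # prompts for the Gmail address and password, returns a logged-in SMTP session
Send_Mail(server,
          'receiver@example.com',  # hypothetical recipient
          text='Hi! This is the plain-text fallback.',
          html='<html><body><p>Hi! This is the <b>HTML</b> part.</p></body></html>',
          subject='Test message',
          FROM_EMAIL='sender@gmail.com')  # hypothetical sender shown in the From header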
1,083
Given the following text description, write Python code to implement the functionality described below step by step Description: Optimizing ORM Queries Introduction This notebook provides some background on the various extractor queries. These queries are on the Submission model which has foreign key relationships on the User and Form. For data extraction, we need data from all 3 models, but not necessarily all their properties at once. Step1: Default behavior ORM Queries If you query the Submission model without any other options, it will use default SQLAlchemy behavior. SQLAlchemy lazy-loads related models by default (unless otherwise defined in the model). As such, if you render the query to SQL, you'll see that no joins occur. The related models are loaded only when access (lazily) using separate SELECT queries. Step2: Joined Loading SQLAlchemy provides the ability to specify a "joined load" option. Passing a orm.joinedload() to Query.options() will emit a left join operation by default. So you need to set innerjoin=True if required. Data can then be eager-loaded. As an example, we will extend our query to only "join-load" the User model. Subsequent accesses to the user property in a Submission instance will not emit SELECT queries. But note that joined-loads will load all columns in the related model. This is fine for the User model because it has relatively few columns which are expected to be short strings (first and last name). In the example below, we don't do this for models.Form. This is conscious decision as the Form.schema column is a JSON field which, relative to other columns, can be quite large. SQLAlchemy will continue to use its default lazy loading behavior and load the form using separate SELECT queries when form property of a Submission instance. This may actually be fine for relatively few forms because their schemas will remain in the Session cache after loading and thus potentially avoiding repeated SELECT queries. Step3: Explicit Join and Eager Load It may be desirable to force data extraction to one single SELECT query. This does require a bit more code but is possible using explicit joins and eager loads. This provides full control and avoids relying on lazy-loading or Session cache behavior. Our ETL transformation only requires the name column from the Form model. You can eager-load related tables more precisely as follows
Python Code: import sys import sqlalchemy as sa import sqlalchemy.orm as sa_orm import testing.postgresql from app import models from app.util import sqldebug # open a test session postgresql = testing.postgresql.Postgresql(base_dir='.test_db') db_engine = sa.create_engine(postgresql.url()) models.init_database(db_engine) sessionmaker = sa_orm.sessionmaker(db_engine) session = sessionmaker() Explanation: Optimizing ORM Queries Introduction This notebook provides some background on the various extractor queries. These queries are on the Submission model which has foreign key relationships on the User and Form. For data extraction, we need data from all 3 models, but not necessarily all their properties at once. End of explanation simple_query = session.query(models.Submission) sqldebug.pp_query(simple_query) Explanation: Default behavior ORM Queries If you query the Submission model without any other options, it will use default SQLAlchemy behavior. SQLAlchemy lazy-loads related models by default (unless otherwise defined in the model). As such, if you render the query to SQL, you'll see that no joins occur. The related models are loaded only when access (lazily) using separate SELECT queries. End of explanation current_joined_query = session.query(models.Submission)\ .options( sa_orm.joinedload(models.Submission.user, innerjoin=True) ) sqldebug.pp_query(current_joined_query) Explanation: Joined Loading SQLAlchemy provides the ability to specify a "joined load" option. Passing a orm.joinedload() to Query.options() will emit a left join operation by default. So you need to set innerjoin=True if required. Data can then be eager-loaded. As an example, we will extend our query to only "join-load" the User model. Subsequent accesses to the user property in a Submission instance will not emit SELECT queries. But note that joined-loads will load all columns in the related model. This is fine for the User model because it has relatively few columns which are expected to be short strings (first and last name). In the example below, we don't do this for models.Form. This is conscious decision as the Form.schema column is a JSON field which, relative to other columns, can be quite large. SQLAlchemy will continue to use its default lazy loading behavior and load the form using separate SELECT queries when form property of a Submission instance. This may actually be fine for relatively few forms because their schemas will remain in the Session cache after loading and thus potentially avoiding repeated SELECT queries. End of explanation new_joined_query = session.query(models.Submission)\ .join(models.User)\ .join(models.Form)\ .options( sa_orm.contains_eager(models.Submission.user), sa_orm.contains_eager(models.Submission.form).load_only('name'), ) sqldebug.pp_query(new_joined_query) Explanation: Explicit Join and Eager Load It may be desirable to force data extraction to one single SELECT query. This does require a bit more code but is possible using explicit joins and eager loads. This provides full control and avoids relying on lazy-loading or Session cache behavior. Our ETL transformation only requires the name column from the Form model. 
You can eager-load related tables more precisely as follows: * Chain Query.join() calls for each model you wish to eager-load via an INNER JOIN * Pass to Query.options() an orm.contains_eager() for each related property you wish to eager-load * To restrict the load to a subset of columns, extend the eager option by chaining orm.load_only() with the column attribute names you want to keep. In the example below, we show explicit joins for both the user and form relations. However, we restrict eager loading of the form to only the name property. The resulting query gives us precise control. Note that the primary keys (id columns) are still loaded as part of the join, which is default behavior in SQLAlchemy. This is acceptable since our primary aim was to avoid loading the largest column (schema). End of explanation
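As a quick sanity check (a sketch, not part of the original notebook), you can switch on SQLAlchemy's statement logging and iterate the eager-loaded query; accessing the joined attributes should not emit any additional SELECT statements. On the empty test database the loop simply runs the single SELECT and returns no rows.
import logging
logging.basicConfig()
logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)  # log emitted SQL

for submission in new_joined_query.limit(5):
    # user and form.name were populated by the single joined SELECT,
    # so these attribute accesses trigger no further queries.
    print(submission.user, submission.form.name)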
1,084
Given the following text description, write Python code to implement the functionality described below step by step Description: Finding similar documents with Word2Vec and WMD Word Mover's Distance is a promising new tool in machine learning that allows us to submit a query and return the most relevant documents. For example, in a blog post OpenTable use WMD on restaurant reviews. Using this approach, they are able to mine different aspects of the reviews. In part 2 of this tutorial, we show how you can use Gensim's WmdSimilarity to do something similar to what OpenTable did. In part 1 shows how you can compute the WMD distance between two documents using wmdistance. Part 1 is optional if you want use WmdSimilarity, but is also useful in it's own merit. First, however, we go through the basics of what WMD is. Word Mover's Distance basics WMD is a method that allows us to assess the "distance" between two documents in a meaningful way, even when they have no words in common. It uses word2vec [4] vector embeddings of words. It been shown to outperform many of the state-of-the-art methods in k-nearest neighbors classification [3]. WMD is illustrated below for two very similar sentences (illustration taken from Vlad Niculae's blog). The sentences have no words in common, but by matching the relevant words, WMD is able to accurately measure the (dis)similarity between the two sentences. The method also uses the bag-of-words representation of the documents (simply put, the word's frequencies in the documents), noted as $d$ in the figure below. The intuition behind the method is that we find the minimum "traveling distance" between documents, in other words the most efficient way to "move" the distribution of document 1 to the distribution of document 2. <img src='https Step1: These sentences have very similar content, and as such the WMD should be low. Before we compute the WMD, we want to remove stopwords ("the", "to", etc.), as these do not contribute a lot to the information in the sentences. Step2: Now, as mentioned earlier, we will be using some downloaded pre-trained embeddings. We load these into a Gensim Word2Vec model class. Note that the embeddings we have chosen here require a lot of memory. Step3: So let's compute WMD using the wmdistance method. Step4: Let's try the same thing with two completely unrelated sentences. Notice that the distance is larger. Step5: Normalizing word2vec vectors When using the wmdistance method, it is beneficial to normalize the word2vec vectors first, so they all have equal length. To do this, simply call model.init_sims(replace=True) and Gensim will take care of that for you. Usually, one measures the distance between two word2vec vectors using the cosine distance (see cosine similarity), which measures the angle between vectors. WMD, on the other hand, uses the Euclidean distance. The Euclidean distance between two vectors might be large because their lengths differ, but the cosine distance is small because the angle between them is small; we can mitigate some of this by normalizing the vectors. Note that normalizing the vectors can take some time, especially if you have a large vocabulary and/or large vectors. Usage is illustrated in the example below. It just so happens that the vectors we have downloaded are already normalized, so it won't do any difference in this case. Step6: Part 2 Step7: Below is a plot with a histogram of document lengths and includes the average document length as well. 
Note that these are the pre-processed documents, meaning stopwords are removed, punctuation is removed, etc. Document lengths have a high impact on the running time of WMD, so when comparing running times with this experiment, the number of documents in query corpus (about 4000) and the length of the documents (about 62 words on average) should be taken into account. Step8: Now we want to initialize the similarity class with a corpus and a word2vec model (which provides the embeddings and the wmdistance method itself). Step9: The num_best parameter decides how many results the queries return. Now let's try making a query. The output is a list of indeces and similarities of documents in the corpus, sorted by similarity. Note that the output format is slightly different when num_best is None (i.e. not assigned). In this case, you get an array of similarities, corresponding to each of the documents in the corpus. The query below is taken directly from one of the reviews in the corpus. Let's see if there are other reviews that are similar to this one. Step10: The query and the most similar documents, together with the similarities, are printed below. We see that the retrieved documents are discussing the same thing as the query, although using different words. The query talks about getting a seat "outdoor", while the results talk about sitting "outside", and one of them says the restaurant has a "nice view". Step11: Let's try a different query, also taken directly from one of the reviews in the corpus. Step12: This time around, the results are more straight forward; the retrieved documents basically contain the same words as the query. WmdSimilarity normalizes the word embeddings by default (using init_sims(), as explained before), but you can overwrite this behaviour by calling WmdSimilarity with normalize_w2v_and_replace=False.
Python Code: from time import time start_nb = time() # Initialize logging. import logging logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s') sentence_obama = 'Obama speaks to the media in Illinois' sentence_president = 'The president greets the press in Chicago' sentence_obama = sentence_obama.lower().split() sentence_president = sentence_president.lower().split() Explanation: Finding similar documents with Word2Vec and WMD Word Mover's Distance is a promising new tool in machine learning that allows us to submit a query and return the most relevant documents. For example, in a blog post OpenTable use WMD on restaurant reviews. Using this approach, they are able to mine different aspects of the reviews. In part 2 of this tutorial, we show how you can use Gensim's WmdSimilarity to do something similar to what OpenTable did. In part 1 shows how you can compute the WMD distance between two documents using wmdistance. Part 1 is optional if you want use WmdSimilarity, but is also useful in it's own merit. First, however, we go through the basics of what WMD is. Word Mover's Distance basics WMD is a method that allows us to assess the "distance" between two documents in a meaningful way, even when they have no words in common. It uses word2vec [4] vector embeddings of words. It been shown to outperform many of the state-of-the-art methods in k-nearest neighbors classification [3]. WMD is illustrated below for two very similar sentences (illustration taken from Vlad Niculae's blog). The sentences have no words in common, but by matching the relevant words, WMD is able to accurately measure the (dis)similarity between the two sentences. The method also uses the bag-of-words representation of the documents (simply put, the word's frequencies in the documents), noted as $d$ in the figure below. The intuition behind the method is that we find the minimum "traveling distance" between documents, in other words the most efficient way to "move" the distribution of document 1 to the distribution of document 2. <img src='https://vene.ro/images/wmd-obama.png' height='600' width='600'> This method was introduced in the article "From Word Embeddings To Document Distances" by Matt Kusner et al. (link to PDF). It is inspired by the "Earth Mover's Distance", and employs a solver of the "transportation problem". In this tutorial, we will learn how to use Gensim's WMD functionality, which consists of the wmdistance method for distance computation, and the WmdSimilarity class for corpus based similarity queries. Note: If you use this software, please consider citing [1], [2] and [3]. Running this notebook You can download this iPython Notebook, and run it on your own computer, provided you have installed Gensim, PyEMD, NLTK, and downloaded the necessary data. The notebook was run on an Ubuntu machine with an Intel core i7-4770 CPU 3.40GHz (8 cores) and 32 GB memory. Running the entire notebook on this machine takes about 3 minutes. Part 1: Computing the Word Mover's Distance To use WMD, we need some word embeddings first of all. You could train a word2vec (see tutorial here) model on some corpus, but we will start by downloading some pre-trained word2vec embeddings. Download the GoogleNews-vectors-negative300.bin.gz embeddings here (warning: 1.5 GB, file is not needed for part 2). Training your own embeddings can be beneficial, but to simplify this tutorial, we will be using pre-trained embeddings at first. Let's take some sentences to compute the distance between. 
End of explanation # Import and download stopwords from NLTK. from nltk.corpus import stopwords from nltk import download download('stopwords') # Download stopwords list. # Remove stopwords. stop_words = stopwords.words('english') sentence_obama = [w for w in sentence_obama if w not in stop_words] sentence_president = [w for w in sentence_president if w not in stop_words] Explanation: These sentences have very similar content, and as such the WMD should be low. Before we compute the WMD, we want to remove stopwords ("the", "to", etc.), as these do not contribute a lot to the information in the sentences. End of explanation import gensim.downloader as api api.load('word2vec-google-news-300') start = time() import os # from gensim.models import KeyedVectors # if not os.path.exists('/data/w2v_googlenews/GoogleNews-vectors-negative300.bin.gz'): # raise ValueError("SKIP: You need to download the google news model") # # model = KeyedVectors.load_word2vec_format('/data/w2v_googlenews/GoogleNews-vectors-negative300.bin.gz', binary=True) model = api.load('word2vec-google-news-300') print('Cell took %.2f seconds to run.' % (time() - start)) Explanation: Now, as mentioned earlier, we will be using some downloaded pre-trained embeddings. We load these into a Gensim Word2Vec model class. Note that the embeddings we have chosen here require a lot of memory. End of explanation distance = model.wmdistance(sentence_obama, sentence_president) print('distance = %.4f' % distance) Explanation: So let's compute WMD using the wmdistance method. End of explanation sentence_orange = 'Oranges are my favorite fruit' sentence_orange = sentence_orange.lower().split() sentence_orange = [w for w in sentence_orange if w not in stop_words] distance = model.wmdistance(sentence_obama, sentence_orange) print('distance = %.4f' % distance) Explanation: Let's try the same thing with two completely unrelated sentences. Notice that the distance is larger. End of explanation # Normalizing word2vec vectors. start = time() model.init_sims(replace=True) # Normalizes the vectors in the word2vec class. distance = model.wmdistance(sentence_obama, sentence_president) # Compute WMD as normal. print('distance: %r', distance) print('Cell took %.2f seconds to run.' %(time() - start)) Explanation: Normalizing word2vec vectors When using the wmdistance method, it is beneficial to normalize the word2vec vectors first, so they all have equal length. To do this, simply call model.init_sims(replace=True) and Gensim will take care of that for you. Usually, one measures the distance between two word2vec vectors using the cosine distance (see cosine similarity), which measures the angle between vectors. WMD, on the other hand, uses the Euclidean distance. The Euclidean distance between two vectors might be large because their lengths differ, but the cosine distance is small because the angle between them is small; we can mitigate some of this by normalizing the vectors. Note that normalizing the vectors can take some time, especially if you have a large vocabulary and/or large vectors. Usage is illustrated in the example below. It just so happens that the vectors we have downloaded are already normalized, so it won't do any difference in this case. End of explanation # Pre-processing a document. from nltk import word_tokenize download('punkt') # Download data for tokenizer. def preprocess(doc): doc = doc.lower() # Lower the text. doc = word_tokenize(doc) # Split into words. doc = [w for w in doc if not w in stop_words] # Remove stopwords. 
doc = [w for w in doc if w.isalpha()] # Remove numbers and punctuation. return doc start = time() import json from smart_open import smart_open # Business IDs of the restaurants. ids = ['4bEjOyTaDG24SY5TxsaUNQ', '2e2e7WgqU1BnpxmQL5jbfw', 'zt1TpTuJ6y9n551sw9TaEg', 'Xhg93cMdemu5pAMkDoEdtQ', 'sIyHTizqAiGu12XMLX3N3g', 'YNQgak-ZLtYJQxlDwN-qIg'] w2v_corpus = [] # Documents to train word2vec on (all 6 restaurants). wmd_corpus = [] # Documents to run queries against (only one restaurant). documents = [] # wmd_corpus, with no pre-processing (so we can see the original documents). with smart_open('/data/yelp_academic_dataset_review.json', 'rb') as data_file: for line in data_file: json_line = json.loads(line) if json_line['business_id'] not in ids: # Not one of the 6 restaurants. continue # Pre-process document. text = json_line['text'] # Extract text from JSON object. text = preprocess(text) # Add to corpus for training Word2Vec. w2v_corpus.append(text) if json_line['business_id'] == ids[0]: # Add to corpus for similarity queries. wmd_corpus.append(text) documents.append(json_line['text']) print 'Cell took %.2f seconds to run.' %(time() - start) Explanation: Part 2: Similarity queries using WmdSimilarity You can use WMD to get the most similar documents to a query, using the WmdSimilarity class. Its interface is similar to what is described in the Similarity Queries Gensim tutorial. Important note: WMD is a measure of distance. The similarities in WmdSimilarity are simply the negative distance. Be careful not to confuse distances and similarities. Two similar documents will have a high similarity score and a small distance; two very different documents will have low similarity score, and a large distance. Yelp data Let's try similarity queries using some real world data. For that we'll be using Yelp reviews, available at http://www.yelp.com/dataset_challenge. Specifically, we will be using reviews of a single restaurant, namely the Mon Ami Gabi. To get the Yelp data, you need to register by name and email address. The data is 775 MB. This time around, we are going to train the Word2Vec embeddings on the data ourselves. One restaurant is not enough to train Word2Vec properly, so we use 6 restaurants for that, but only run queries against one of them. In addition to the Mon Ami Gabi, mentioned above, we will be using: Earl of Sandwich. Wicked Spoon. Serendipity 3. Bacchanal Buffet. The Buffet. The restaurants we chose were those with the highest number of reviews in the Yelp dataset. Incidentally, they all are on the Las Vegas Boulevard. The corpus we trained Word2Vec on has 18957 documents (reviews), and the corpus we used for WmdSimilarity has 4137 documents. Below a JSON file with Yelp reviews is read line by line, the text is extracted, tokenized, and stopwords and punctuation are removed. End of explanation from matplotlib import pyplot as plt %matplotlib inline # Document lengths. lens = [len(doc) for doc in wmd_corpus] # Plot. plt.rc('figure', figsize=(8,6)) plt.rc('font', size=14) plt.rc('lines', linewidth=2) plt.rc('axes', color_cycle=('#377eb8','#e41a1c','#4daf4a', '#984ea3','#ff7f00','#ffff33')) # Histogram. plt.hist(lens, bins=20) plt.hold(True) # Average length. avg_len = sum(lens) / float(len(lens)) plt.axvline(avg_len, color='#e41a1c') plt.hold(False) plt.title('Histogram of document lengths.') plt.xlabel('Length') plt.text(100, 800, 'mean = %.2f' % avg_len) plt.show() Explanation: Below is a plot with a histogram of document lengths and includes the average document length as well. 
Note that these are the pre-processed documents, meaning stopwords are removed, punctuation is removed, etc. Document lengths have a high impact on the running time of WMD, so when comparing running times with this experiment, the number of documents in query corpus (about 4000) and the length of the documents (about 62 words on average) should be taken into account. End of explanation # Train Word2Vec on all the restaurants. model = Word2Vec(w2v_corpus, workers=3, size=100) # Initialize WmdSimilarity. from gensim.similarities import WmdSimilarity num_best = 10 instance = WmdSimilarity(wmd_corpus, model, num_best=10) Explanation: Now we want to initialize the similarity class with a corpus and a word2vec model (which provides the embeddings and the wmdistance method itself). End of explanation start = time() sent = 'Very good, you should seat outdoor.' query = preprocess(sent) sims = instance[query] # A query is simply a "look-up" in the similarity class. print 'Cell took %.2f seconds to run.' %(time() - start) Explanation: The num_best parameter decides how many results the queries return. Now let's try making a query. The output is a list of indeces and similarities of documents in the corpus, sorted by similarity. Note that the output format is slightly different when num_best is None (i.e. not assigned). In this case, you get an array of similarities, corresponding to each of the documents in the corpus. The query below is taken directly from one of the reviews in the corpus. Let's see if there are other reviews that are similar to this one. End of explanation # Print the query and the retrieved documents, together with their similarities. print 'Query:' print sent for i in range(num_best): print print 'sim = %.4f' % sims[i][1] print documents[sims[i][0]] Explanation: The query and the most similar documents, together with the similarities, are printed below. We see that the retrieved documents are discussing the same thing as the query, although using different words. The query talks about getting a seat "outdoor", while the results talk about sitting "outside", and one of them says the restaurant has a "nice view". End of explanation start = time() sent = 'I felt that the prices were extremely reasonable for the Strip' query = preprocess(sent) sims = instance[query] # A query is simply a "look-up" in the similarity class. print 'Query:' print sent for i in range(num_best): print print 'sim = %.4f' % sims[i][1] print documents[sims[i][0]] print '\nCell took %.2f seconds to run.' %(time() - start) Explanation: Let's try a different query, also taken directly from one of the reviews in the corpus. End of explanation print 'Notebook took %.2f seconds to run.' %(time() - start_nb) Explanation: This time around, the results are more straight forward; the retrieved documents basically contain the same words as the query. WmdSimilarity normalizes the word embeddings by default (using init_sims(), as explained before), but you can overwrite this behaviour by calling WmdSimilarity with normalize_w2v_and_replace=False. End of explanation
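If you prefer the raw output format described above (one similarity per corpus document), a sketch along these lines should work; num_best is simply left unset (None) and the ranking is done by hand:
instance_all = WmdSimilarity(wmd_corpus, model)  # num_best not assigned
sims_all = instance_all[preprocess('Very good, you should seat outdoor.')]
top = sorted(enumerate(sims_all), key=lambda pair: -pair[1])[:num_best]
for doc_idx, sim in top:
    print 'sim = %.4f  doc index = %d' % (sim, doc_idx)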
1,085
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction to Keras with MNIST Import various modules that we need for this notebook. Step1: Load the MNIST dataset, flatten the images, convert the class labels, and scale the data. Step2: I. Basic example Build and compile a basic model. Step3: Fit the model over 25 epochs. Step4: Evaluate model on the test set Step5: Predict classes on the test set. Step6: II. Deeper model with dropout and cross entropy Let's now build a deeper model, with three hidden dense layers and dropout layers. I'll use rectified linear units as they tend to perform better on deep models. I also initilize the nodes using "glorot_normal", which uses Gaussian noise scaled by the sum of the inputs plus outputs from the node. Notice that we do not need to give an input shape to any layers other than the first. Step7: III. Small model Step8: The classification rate on the validation set is not nearly as predictive, but it is still not too bad overall. A model object contains a list of its layers. The weights are easy to pull out. Step9: The first set of weights will be given as weights the same size as the input space. Notice how Step10: The second layer of weights will be given as a single 16x16 matrix of weights. Step11: IV. Further tweaks
Python Code: %pylab inline import numpy as np import pandas as pd import matplotlib.pyplot as plt from keras.datasets import mnist from keras.models import Sequential from keras.layers.core import Dense, Dropout, Activation from keras.optimizers import SGD, RMSprop from keras.utils import np_utils from keras.regularizers import l2 Explanation: Introduction to Keras with MNIST Import various modules that we need for this notebook. End of explanation (X_train, y_train), (X_test, y_test) = mnist.load_data() X_train = X_train.reshape(60000, 28**2).astype('float32') / 255 X_test = X_test.reshape(10000, 28**2).astype('float32') / 255 Y_train = np_utils.to_categorical(y_train, 10) Y_test = np_utils.to_categorical(y_test, 10) Explanation: Load the MNIST dataset, flatten the images, convert the class labels, and scale the data. End of explanation model = Sequential() model.add(Dense(512, input_shape=(28 * 28,))) model.add(Activation("sigmoid")) model.add(Dense(10)) sgd = SGD(lr = 0.01, momentum = 0.9, nesterov = True) model.compile(loss='mse', optimizer=sgd) Explanation: I. Basic example Build and compile a basic model. End of explanation model.fit(X_train, Y_train, batch_size=32, nb_epoch=10, verbose=1, show_accuracy=True, validation_split=0.1) Explanation: Fit the model over 25 epochs. End of explanation print("Test classification rate %0.05f" % model.evaluate(X_test, Y_test, show_accuracy=True)[1]) Explanation: Evaluate model on the test set End of explanation y_hat = model.predict_classes(X_test) pd.crosstab(y_hat, y_test) Explanation: Predict classes on the test set. End of explanation model = Sequential() model.add(Dense(512, input_shape=(28 * 28,), init="glorot_normal")) model.add(Activation("relu")) model.add(Dropout(0.5)) model.add(Dense(512, init="glorot_normal")) model.add(Activation("relu")) model.add(Dropout(0.5)) model.add(Dense(512, init="glorot_normal")) model.add(Activation("relu")) model.add(Dropout(0.5)) model.add(Dense(512, init="glorot_normal")) model.add(Activation("relu")) model.add(Dropout(0.5)) model.add(Dense(10)) model.add(Activation('softmax')) sgd = SGD(lr = 0.01, momentum = 0.9, nesterov = True) model.compile(loss='categorical_crossentropy', optimizer=sgd) model.fit(X_train, Y_train, batch_size=32, nb_epoch=10, verbose=1, show_accuracy=True, validation_split=0.1) print("Test classification rate %0.05f" % model.evaluate(X_test, Y_test, show_accuracy=True)[1]) fy_hat = model.predict_classes(X_test) pd.crosstab(y_hat, y_test) test_wrong = [im for im in zip(X_test,y_hat,y_test) if im[1] != im[2]] plt.figure(figsize=(15, 15)) for ind, val in enumerate(test_wrong[:100]): plt.subplot(10, 10, ind + 1) im = 1 - val[0].reshape((28,28)) axis("off") plt.imshow(im, cmap='gray') Explanation: II. Deeper model with dropout and cross entropy Let's now build a deeper model, with three hidden dense layers and dropout layers. I'll use rectified linear units as they tend to perform better on deep models. I also initilize the nodes using "glorot_normal", which uses Gaussian noise scaled by the sum of the inputs plus outputs from the node. Notice that we do not need to give an input shape to any layers other than the first. 
End of explanation model = Sequential() model.add(Dense(16, input_shape=(28 * 28,), init="glorot_normal")) model.add(Activation("relu")) model.add(Dropout(0.5)) model.add(Dense(16, init="glorot_normal")) model.add(Activation("relu")) model.add(Dropout(0.5)) model.add(Dense(10)) model.add(Activation('softmax')) rms = RMSprop() model.compile(loss='categorical_crossentropy', optimizer=rms) model.fit(X_train, Y_train, batch_size=32, nb_epoch=10, verbose=1, show_accuracy=True, validation_split=0.1) Explanation: III. Small model: Visualizing weights Now, I want to make a model that has only a small number of hidden nodes in each layer. We may then have a chance of actually visualizing the weights. End of explanation print(model.layers) # list of the layers print(model.layers[0].get_weights()[0].shape) # the weights Explanation: The classification rate on the validation set is not nearly as predictive, but it is still not too bad overall. A model object contains a list of its layers. The weights are easy to pull out. End of explanation W1 = model.layers[0].get_weights()[0] for ind, val in enumerate(W1.T): plt.figure(figsize=(3, 3), frameon=False) im = val.reshape((28,28)) plt.axis("off") plt.imshow(im, cmap='seismic') Explanation: The first set of weights will be given as weights the same size as the input space. Notice how End of explanation W2 = model.layers[3].get_weights()[0] plt.figure(figsize=(3, 3)) im = W2.reshape((16,16)) plt.axis("off") plt.imshow(im, cmap='seismic') Explanation: The second layer of weights will be given as a single 16x16 matrix of weights. End of explanation model = Sequential() model.add(Dense(128, input_shape=(28 * 28,), init="glorot_normal")) model.add(Activation("relu")) model.add(Dropout(0.5)) model.add(Dense(512, init="glorot_normal",W_regularizer=l2(0.1))) model.add(Activation("relu")) model.add(Dropout(0.2)) model.add(Dense(512, init="glorot_normal",W_regularizer=l2(0.1))) model.add(Activation("relu")) model.add(Dropout(0.2)) model.add(Dense(10)) model.add(Activation('softmax')) rms = RMSprop() model.compile(loss='categorical_crossentropy', optimizer=rms) model.fit(X_train, Y_train, batch_size=32, nb_epoch=5, verbose=1, show_accuracy=True, validation_split=0.1) Explanation: IV. Further tweaks: weights and alternative optimizers Just to show off a few more tweaks, we'll run one final model. Here we use weights and an alternative to vanillia stochastic gradient descent. End of explanation
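One cheap addition (a sketch, not in the original notebook): fit() returns a History object, so re-running the final fit while keeping the return value lets you plot the training and validation loss curves afterwards.
history = model.fit(X_train, Y_train, batch_size=32, nb_epoch=5, verbose=1,
                    show_accuracy=True, validation_split=0.1)
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.xlabel('epoch')
plt.ylabel('categorical cross entropy')
plt.legend(loc='best')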
1,086
Given the following text description, write Python code to implement the functionality described below step by step Description: Training Visualization In this part we’ll see how to create a simple but wrong model with Keras, and, gradually, how it can be improved with step-by-step debugging and understanding with TensorBoard. Please, download log files first. Let's start from importing all necessary components/layers for the future CNN. Step1: Toy convolutional model for classification First, we create a skeleton for a model with one convolutional and one dense layer Step2: We can train the model on IRMAS data using the training procedure below. First, we have to define the optimizer. We're using Stochastic Gradient Descent with Momentum Step3: Now we can check the model structure, specify which metrics we would like to keep eye on and compile the model. Step4: From the previous part, we have two generators which can provide us training samples and validation samples. We will use them during the training. We also specify the number of steps per epoch, the total number of epoch and the log verbosity level Step5: As we can see, neither validation nor the training metrics have improved, so we need to explore that's wrong with the model. Keras Callbacks will help us in this. Keras Callbacks The Callback in Keras is a set of functions to be applied to a certain event during the training process. The typical triggers for events are Step6: Let's get acquainted with the TensorBoard Callback. The parameters are Step7: Now we can add the callbacks to the training process and observe the corresponding events and obtain the corresponding logs. Step8: You can download the event files for all runs from here. Now create the ./logs directory and launch TensorBoard bash tar -xvzf logs.tar.gz cd logs tensorboard --logdir ./example_1 and navigate to http Step9: If you will repeat the training process, you may notice that classification performance improved significantly. Have a look at a new log file in the ./example_2 directory and restart TensorBoard to explore new data. bash cd logs tensorboard --logdir ./example_2 --port=6002 TensorFlow name scopes You might have noticed the hell on the Graphs tab. That's because TensorBoard can't connect all the data nodes in the model and operations in the training process together, it's smart enough to group the nodes with similar structure but don't expect too much. In order to make the better graph visualisation, we need to define the name scopes for each logical layer and each operation we want to see as an individual element. We can do it just by adding with tf.name_scope(name_scope) clause Step11: Have a look at a new log file in the ./example_3 directory and restart TensorBoard to explore new data. bash cd logs tensorboard --logdir ./example_3 Embeddings and Hidden Layers Output Visualisation With TensorBoard we can also visualise the embeddings of the model. In order to do it, you can add Embedding layer to you model. To visualize the outputs of intermediate layers, we can write our custom callback and use it to store the outputs on validation data during the training process. We will follow the notation from TensorBoard callback, but add some functionality Step12: Now we can add the new callback, recompile and retrain the model.
Python Code: from keras.models import Model from keras.layers import Convolution2D, BatchNormalization, MaxPooling2D, Flatten, Dense from keras.layers import Input, Dropout from keras.layers.advanced_activations import ELU from keras.regularizers import l2 from keras.optimizers import SGD import tensorflow as tf from settings import * import numpy as np import os import dataset from dataset import MyDataset db=MyDataset(feature_dir=os.path.join('./IRMAS-Sample', 'features', 'Training'), batch_size=8, time_context=128, step=50, suffix_in='_mel_',suffix_out='_label_',floatX=np.float32,train_percent=0.8) val_data = db() Explanation: Training Visualization In this part we’ll see how to create a simple but wrong model with Keras, and, gradually, how it can be improved with step-by-step debugging and understanding with TensorBoard. Please, download log files first. Let's start from importing all necessary components/layers for the future CNN. End of explanation def build_model(n_classes): input_shape = (N_MEL_BANDS, SEGMENT_DUR, 1) channel_axis = 3 melgram_input = Input(shape=input_shape) m_size = 70 n_size = 3 n_filters = 64 maxpool_const = 4 x = Convolution2D(n_filters, (m_size, n_size), padding='same', kernel_initializer='zeros', kernel_regularizer=l2(1e-5))(melgram_input) x = BatchNormalization(axis=channel_axis)(x) x = ELU()(x) x = MaxPooling2D(pool_size=(N_MEL_BANDS, SEGMENT_DUR/maxpool_const))(x) x = Flatten()(x) x = Dropout(0.5)(x) x = Dense(n_classes, kernel_initializer='zeros', kernel_regularizer=l2(1e-5), activation='softmax', name='prediction')(x) model = Model(melgram_input, x) return model model = build_model(IRMAS_N_CLASSES) Explanation: Toy convolutional model for classification First, we create a skeleton for a model with one convolutional and one dense layer End of explanation init_lr = 0.001 optimizer = SGD(lr=init_lr, momentum=0.9, nesterov=True) Explanation: We can train the model on IRMAS data using the training procedure below. First, we have to define the optimizer. We're using Stochastic Gradient Descent with Momentum End of explanation model.summary() model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy']) Explanation: Now we can check the model structure, specify which metrics we would like to keep eye on and compile the model. End of explanation model.fit_generator(db, steps_per_epoch=4, epochs=4, verbose=2, validation_data=val_data, class_weight=None, workers=1) Explanation: From the previous part, we have two generators which can provide us training samples and validation samples. We will use them during the training. We also specify the number of steps per epoch, the total number of epoch and the log verbosity level End of explanation from keras.callbacks import Callback, ModelCheckpoint, EarlyStopping, TensorBoard early_stopping = EarlyStopping(monitor='val_loss', patience=EARLY_STOPPING_EPOCH) save_clb = ModelCheckpoint("{weights_basepath}/".format(weights_basepath=MODEL_WEIGHT_BASEPATH) + "epoch.{epoch:02d}-val_loss.{val_loss:.3f}", monitor='val_loss', save_best_only=True) Explanation: As we can see, neither validation nor the training metrics have improved, so we need to explore that's wrong with the model. Keras Callbacks will help us in this. Keras Callbacks The Callback in Keras is a set of functions to be applied to a certain event during the training process. 
The typical triggers for events are: * on_epoch_begin * on_epoch_end * on_batch_begin * on_batch_end * on_train_begin * on_train_end There are some useful callbacks: End of explanation tb = TensorBoard(log_dir='./example_1', write_graph=True, write_grads=True, write_images=True, histogram_freq=1) # if we want to compute activations and weight histogram, we need to specify the validation data for that. tb.validation_data = val_data Explanation: Let's get acquainted with the TensorBoard Callback. The parameters are: * log_dir - where to store the logs, metadata, and events of the model training process * write_graph - whether or not to write the graph of data and control dependencies * write_grads - whether or not to save the parameters of the model for visualisation * histogram_freq - how often to save the parameters of the model * write_images - whether or not to save the weight and visualise them as images End of explanation model.fit_generator(db, steps_per_epoch=1, # change to STEPS_PER_EPOCH epochs=1, # change to MAX_EPOCH_NUM verbose=2, validation_data=val_data, callbacks=[save_clb, early_stopping, tb], class_weight=None, workers=1) Explanation: Now we can add the callbacks to the training process and observe the corresponding events and obtain the corresponding logs. End of explanation def build_model(n_classes): input_shape = (N_MEL_BANDS, SEGMENT_DUR, 1) channel_axis = 3 melgram_input = Input(shape=input_shape) m_size = 70 n_size = 3 n_filters = 64 maxpool_const = 4 x = Convolution2D(n_filters, (m_size, n_size), padding='same', kernel_initializer='he_normal', kernel_regularizer=l2(1e-5))(melgram_input) x = BatchNormalization(axis=channel_axis)(x) x = ELU()(x) x = MaxPooling2D(pool_size=(N_MEL_BANDS, SEGMENT_DUR/maxpool_const))(x) x = Flatten()(x) x = Dropout(0.5)(x) x = Dense(n_classes, kernel_initializer='he_normal', kernel_regularizer=l2(1e-5), activation='softmax', name='prediction')(x) model = Model(melgram_input, x) return model model = build_model(IRMAS_N_CLASSES) Explanation: You can download the event files for all runs from here. Now create the ./logs directory and launch TensorBoard bash tar -xvzf logs.tar.gz cd logs tensorboard --logdir ./example_1 and navigate to http://0.0.0.0:6006 We can notice, that it's almost impossible to see anything on the Graphs tab but we can see vividly that the metrics on the Scalar tab are not improving and the gradients values on the Histograms tab are zero. Our problem is in the weights initialization kernel_initializer='zeros' so now we can fix it and define new model. 
End of explanation global_namescope = 'train' def build_model(n_classes): with tf.name_scope('input'): input_shape = (N_MEL_BANDS, SEGMENT_DUR, 1) channel_axis = 3 melgram_input = Input(shape=input_shape) m_size = [5, 5] n_size = [5, 5] n_filters = 64 maxpool_const = 8 with tf.name_scope('conv1'): x = Convolution2D(n_filters, (m_size[0], n_size[0]), padding='same', kernel_initializer='he_uniform')(melgram_input) x = BatchNormalization(axis=channel_axis)(x) x = ELU()(x) x = MaxPooling2D(pool_size=(maxpool_const, maxpool_const))(x) with tf.name_scope('conv2'): x = Convolution2D(n_filters*2, (m_size[1], n_size[1]), padding='same', kernel_initializer='he_uniform')(x) x = BatchNormalization(axis=channel_axis)(x) x = ELU()(x) x = MaxPooling2D(pool_size=(maxpool_const, maxpool_const))(x) x = Flatten()(x) with tf.name_scope('dense1'): x = Dropout(0.5)(x) x = Dense(n_filters, kernel_initializer='he_uniform', name='hidden')(x) x = ELU()(x) with tf.name_scope('dense2'): x = Dropout(0.5)(x) x = Dense(n_classes, kernel_initializer='he_uniform', activation='softmax', name='prediction')(x) model = Model(melgram_input, x) return model model = build_model(IRMAS_N_CLASSES) with tf.name_scope('optimizer'): optimizer = SGD(lr=init_lr, momentum=0.9, nesterov=True) with tf.name_scope('model'): model = build_model(IRMAS_N_CLASSES) # for the sake of memory, only graphs now with tf.name_scope('callbacks'): # The TensorBoard developers are strongly encourage us to use different directories for every run tb = TensorBoard(log_dir='./example_3', write_graph=True) # yes, we need to recompile the model every time with tf.name_scope('compile'): model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy']) # and preudo-train the model with tf.name_scope(global_namescope): model.fit_generator(db, steps_per_epoch=1, # just one step epochs=1, # one epoch to save the graphs verbose=2, validation_data=val_data, callbacks=[tb], workers=1) Explanation: If you will repeat the training process, you may notice that classification performance improved significantly. Have a look at a new log file in the ./example_2 directory and restart TensorBoard to explore new data. bash cd logs tensorboard --logdir ./example_2 --port=6002 TensorFlow name scopes You might have noticed the hell on the Graphs tab. That's because TensorBoard can't connect all the data nodes in the model and operations in the training process together, it's smart enough to group the nodes with similar structure but don't expect too much. In order to make the better graph visualisation, we need to define the name scopes for each logical layer and each operation we want to see as an individual element. We can do it just by adding with tf.name_scope(name_scope) clause: End of explanation from keras import backend as K if K.backend() == 'tensorflow': import tensorflow as tf from tensorflow.contrib.tensorboard.plugins import projector class TensorBoardHiddenOutputVis(Callback): Tensorboard Intermediate Outputs visualization callback. 
def __init__(self, log_dir='./logs_embed', batch_size=32, freq=0, layer_names=None, metadata=None, sprite=None, sprite_shape=None): super(TensorBoardHiddenOutputVis, self).__init__() self.log_dir = log_dir self.freq = freq self.layer_names = layer_names # Notice, that only one file is supported in the present callback self.metadata = metadata self.sprite = sprite self.sprite_shape = sprite_shape self.batch_size = batch_size def set_model(self, model): self.model = model self.sess = K.get_session() self.summary_writer = tf.summary.FileWriter(self.log_dir) self.outputs_ckpt_path = os.path.join(self.log_dir, 'keras_outputs.ckpt') if self.freq and self.validation_data: # define tensors to compute outputs on outputs_layers = [layer for layer in self.model.layers if layer.name in self.layer_names] self.output_tensors = [tf.get_default_graph().get_tensor_by_name(layer.get_output_at(0).name) for layer in outputs_layers] # create configuration for visualisation in the same manner as for embeddings config = projector.ProjectorConfig() for i in range(len(self.output_tensors)): embedding = config.embeddings.add() embedding.tensor_name = '{ns}/hidden_{i}'.format(ns=global_namescope, i=i) # Simpliest metadata handler, a single file for all embeddings if self.metadata: embedding.metadata_path = self.metadata # Sprite image handler if self.sprite and self.sprite_shape: embedding.sprite.image_path = self.sprite embedding.sprite.single_image_dim.extend(self.sprite_shape) # define TF variables to store the hidden outputs during the training # Notice, that only 1D outputs are supported self.hidden_vars = [tf.Variable(np.zeros((len(self.validation_data[0]), self.output_tensors[i].shape[1]), dtype='float32'), name='hidden_{}'.format(i)) for i in range(len(self.output_tensors))] # add TF variables into computational graph for hidden_var in self.hidden_vars: self.sess.run(hidden_var.initializer) # save the config and setup TF saver for hidden variables projector.visualize_embeddings(self.summary_writer, config) self.saver = tf.train.Saver(self.hidden_vars) def on_epoch_end(self, epoch, logs=None): if self.validation_data and self.freq: if epoch % self.freq == 0: val_data = self.validation_data tensors = (self.model.inputs + self.model.targets + self.model.sample_weights) all_outputs = [[]]*len(self.output_tensors) if self.model.uses_learning_phase: tensors += [K.learning_phase()] assert len(val_data) == len(tensors) val_size = val_data[0].shape[0] i = 0 # compute outputs batch by batch on validation data while i < val_size: step = min(self.batch_size, val_size - i) batch_val = [] batch_val.append(val_data[0][i:i + step]) batch_val.append(val_data[1][i:i + step]) batch_val.append(val_data[2][i:i + step]) if self.model.uses_learning_phase: batch_val.append(val_data[3]) feed_dict = dict(zip(tensors, batch_val)) tensor_outputs = self.sess.run(self.output_tensors, feed_dict=feed_dict) for output_idx, tensor_output in enumerate(tensor_outputs): all_outputs[output_idx].extend(tensor_output) i += self.batch_size # rewrite the current state of hidden outputs with new values for idx, embed in enumerate(self.hidden_vars): embed.assign(np.array(all_outputs[idx])).eval(session=self.sess) self.saver.save(self.sess, self.outputs_ckpt_path, epoch) self.summary_writer.flush() def on_train_end(self, _): self.summary_writer.close() Explanation: Have a look at a new log file in the ./example_3 directory and restart TensorBoard to explore new data. 
bash cd logs tensorboard --logdir ./example_3 Embeddings and Hidden Layers Output Visualisation With TensorBoard we can also visualise the embeddings of the model. In order to do it, you can add Embedding layer to you model. To visualize the outputs of intermediate layers, we can write our custom callback and use it to store the outputs on validation data during the training process. We will follow the notation from TensorBoard callback, but add some functionality: layer_names - a list of names of layers to keep eye on metadata - a path to a TSV file with associated meta information (labels, notes, etc.), format and details sprite - a path to a sprite image, format and details sprite_shape - a list with values [M, N], the dimensionality of a single image, format and details End of explanation layers_to_monitor = ['hidden'] # find the files precomputed in ./logs_embed directory metadata_file_name = 'metadata.tsv' sprite_file_name = 'sprite.png' sprite_shape = [N_MEL_BANDS, SEGMENT_DUR] with tf.name_scope('callbacks'): tbe = TensorBoardHiddenOutputVis(log_dir='./logs_embed', freq=1, layer_names=layers_to_monitor, metadata=metadata_file_name, sprite=sprite_file_name, sprite_shape=sprite_shape) tbe.validation_data = val_data with tf.name_scope('compile'): model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy']) with tf.name_scope(global_namescope): model.fit_generator(db, steps_per_epoch=1, # change to STEPS_PER_EPOCH epochs=1, # change to MAX_EPOCH_NUM verbose=2, callbacks=[tbe], validation_data=val_data, class_weight=None, workers=1) Explanation: Now we can add the new callback, recompile and retrain the model. End of explanation
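The callback above expects metadata.tsv (and optionally sprite.png) to already exist in ./logs_embed. As a sketch, assuming val_data[1] holds the one-hot targets exactly as the callback code indexes them, the metadata file can be generated like this; building the sprite image is analogous and omitted here.
if not os.path.exists('./logs_embed'):
    os.makedirs('./logs_embed')
with open('./logs_embed/metadata.tsv', 'w') as metadata_file:
    metadata_file.write('Index\tLabel\n')
    for idx, one_hot in enumerate(val_data[1]):
        metadata_file.write('{}\t{}\n'.format(idx, int(np.argmax(one_hot))))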
1,087
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Language Translation In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the Data Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. Step3: Explore the Data Play around with view_sentence_range to view different parts of the data. Step6: Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end. You can get the &lt;EOS&gt; word id by doing Step8: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. Step10: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. Step12: Check the Version of TensorFlow and Access to GPU This will check to make sure you have the correct version of TensorFlow and access to a GPU Step15: Build the Neural Network You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below Step18: Process Decoding Input Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch. Step21: Encoding Implement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn(). Step24: Decoding - Training Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs. Step27: Decoding - Inference Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Step30: Build the Decoding Layer Implement decoding_layer() to create a Decoder RNN layer. Create RNN cell for decoding using rnn_size and num_layers. Create the output fuction using lambda to transform it's input, logits, to class logits. Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits. Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits. Note Step33: Build the Neural Network Apply the functions you implemented above to Step34: Neural Network Training Hyperparameters Tune the following parameters Step36: Build the Graph Build the graph using the neural network you implemented. Step39: Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem. 
Step41: Save Parameters Save the batch_size and save_path parameters for inference. Step43: Checkpoint Step46: Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences: convert the sentence to lowercase, convert the words into ids using vocab_to_int, and convert words that are not in the vocabulary to the &lt;UNK&gt; word id. Step48: Translate This will translate translate_sentence from English to French.
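To make the expected behaviour of that preprocessing step concrete, here is a tiny self-contained illustration with a made-up miniature vocabulary (the real vocab_to_int dictionary is produced by the project's preprocessing and is much larger); the helper is deliberately not named sentence_to_seq to avoid suggesting it is the project's implementation.

# hypothetical miniature vocabulary, for illustration only
vocab_to_int = {'<UNK>': 0, 'he': 1, 'saw': 2, 'a': 3, 'truck': 4, '.': 5}

def to_ids(sentence, vocab_to_int):
    # lowercase the sentence, then map each word, falling back to <UNK>
    return [vocab_to_int.get(word, vocab_to_int['<UNK>'])
            for word in sentence.lower().split()]

print(to_ids('He saw a YELLOW truck .', vocab_to_int))
# -> [1, 2, 3, 0, 4, 5]   ('yellow' is unknown, so it maps to <UNK> = 0)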
Python Code: DON'T MODIFY ANYTHING IN THIS CELL import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) Explanation: Language Translation In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the Data Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. End of explanation view_sentence_range = (0, 10) DON'T MODIFY ANYTHING IN THIS CELL import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) Explanation: Explore the Data Play around with view_sentence_range to view different parts of the data. End of explanation def single_text_to_ids(text, vocab_to_int, add_EOS): id_text = [] for sentence in text.split('\n'): id_sentence = [] for word in sentence.split(): id_sentence.append(vocab_to_int[word]) if add_EOS: id_sentence.append(vocab_to_int['<EOS>']) #print(sentence) #print(id_sentence) id_text.append(id_sentence) #print(id_text) return id_text def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. :param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) # TODO: Implement Function #print(source_text) #print(target_text) #print(source_vocab_to_int) #print(target_vocab_to_int) source_id_text = single_text_to_ids(source_text, source_vocab_to_int, False) target_id_text = single_text_to_ids(target_text, target_vocab_to_int, True) return source_id_text, target_id_text DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_text_to_ids(text_to_ids) Explanation: Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end. You can get the &lt;EOS&gt; word id by doing: python target_vocab_to_int['&lt;EOS&gt;'] You can get other word ids using source_vocab_to_int and target_vocab_to_int. 
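The source_vocab_to_int and target_vocab_to_int dictionaries themselves are created inside the provided helper module, whose code is not shown in this notebook. A typical way to build such lookup tables, reserving ids for the special tokens first, looks roughly like the sketch below; the exact token set and ordering used by helper may differ.

def create_lookup_tables(text, special_tokens=('<PAD>', '<EOS>', '<UNK>', '<GO>')):
    # Sketch: build word<->id lookup tables for one corpus.
    vocab = set(text.lower().split())
    vocab_to_int = {token: i for i, token in enumerate(special_tokens)}
    for word in sorted(vocab):
        vocab_to_int.setdefault(word, len(vocab_to_int))
    int_to_vocab = {i: word for word, i in vocab_to_int.items()}
    return vocab_to_int, int_to_vocab

v2i, i2v = create_lookup_tables('new jersey is sometimes quiet during autumn .')
print(v2i['<EOS>'], i2v[v2i['autumn']])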
End of explanation DON'T MODIFY ANYTHING IN THIS CELL helper.preprocess_and_save_data(source_path, target_path, text_to_ids) Explanation: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. End of explanation DON'T MODIFY ANYTHING IN THIS CELL import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. End of explanation DON'T MODIFY ANYTHING IN THIS CELL from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__) print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) Explanation: Check the Version of TensorFlow and Access to GPU This will check to make sure you have the correct version of TensorFlow and access to a GPU End of explanation def model_inputs(): Create TF Placeholders for input, targets, and learning rate. :return: Tuple (input, targets, learning rate, keep probability) # TODO: Implement Function input = tf.placeholder(tf.int32, [None, None], name='input') targets = tf.placeholder(tf.int32, [None, None], name='targets') learning_rate = tf.placeholder(tf.float32, name='learning_rate') keep_prob = tf.placeholder(tf.float32, name='keep_prob') return input, targets, learning_rate, keep_prob DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_model_inputs(model_inputs) Explanation: Build the Neural Network You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below: - model_inputs - process_decoding_input - encoding_layer - decoding_layer_train - decoding_layer_infer - decoding_layer - seq2seq_model Input Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: Input text placeholder named "input" using the TF Placeholder name parameter with rank 2. Targets placeholder with rank 2. Learning rate placeholder with rank 0. Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0. 
Return the placeholders in the following the tuple (Input, Targets, Learing Rate, Keep Probability) End of explanation def process_decoding_input(target_data, target_vocab_to_int, batch_size): Preprocess target data for decoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data # TODO: Implement Function ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1) return dec_input DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_process_decoding_input(process_decoding_input) Explanation: Process Decoding Input Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch. End of explanation def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob): Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :return: RNN state # TODO: Implement Function lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size) lstm = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) enc_cell = tf.contrib.rnn.MultiRNNCell([lstm] * num_layers) _, enc_state = tf.nn.dynamic_rnn(enc_cell, rnn_inputs, dtype=tf.float32) return enc_state DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_encoding_layer(encoding_layer) Explanation: Encoding Implement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn(). End of explanation def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob): Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param sequence_length: Sequence Length :param decoding_scope: TenorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Train Logits # TODO: Implement Function #with tf.variable_scope("decoding") as decoding_scope: # Training Decoder train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state) dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob) train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder( dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope) # Apply output function train_logits = output_fn(train_pred) return train_logits DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer_train(decoding_layer_train) Explanation: Decoding - Training Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs. 
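The reason the training decoder is fed the embedded target sequence while the inference decoder in the next section only receives the embedding matrix is teacher forcing: during training every step is conditioned on the true previous word, while at inference time the decoder has to feed back its own prediction. A framework-free toy sketch (with a dummy step function standing in for the RNN cell plus output layer) makes the difference explicit:

def toy_step(state, token):
    # stand-in for an RNN cell + output layer: returns new state and predicted token
    new_state = state + token
    return new_state, (new_state * 7) % 10

def decode_train(targets, state=0):
    # teacher forcing: always condition on the true previous token
    preds = []
    for true_prev in targets:
        state, pred = toy_step(state, true_prev)
        preds.append(pred)
    return preds

def decode_infer(start_token, steps, state=0):
    # feed back the model's own predictions
    preds, prev = [], start_token
    for _ in range(steps):
        state, prev = toy_step(state, prev)
        preds.append(prev)
    return preds

print(decode_train([3, 1, 4]))   # conditioned on the ground-truth sequence
print(decode_infer(3, 3))        # conditioned on its own outputs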
End of explanation def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob): Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param maximum_length: The maximum allowed time steps to decode :param vocab_size: Size of vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Inference Logits # TODO: Implement Function # Inference Decoder infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference( output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size) inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope) return inference_logits DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer_infer(decoding_layer_infer) Explanation: Decoding - Inference Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder(). End of explanation def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob): Create decoding layer :param dec_embed_input: Decoder embedded input :param dec_embeddings: Decoder embeddings :param encoder_state: The encoded state :param vocab_size: Size of vocabulary :param sequence_length: Sequence Length :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param keep_prob: Dropout keep probability :return: Tuple of (Training Logits, Inference Logits) # TODO: Implement Function # Decoder RNNs lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size) lstm = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) dec_cell = tf.contrib.rnn.MultiRNNCell([lstm] * num_layers) with tf.variable_scope("decoding") as decoding_scope: # Output Layer output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope) #with tf.variable_scope("decoding") as decoding_scope: train_logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) with tf.variable_scope("decoding", reuse=True) as decoding_scope: start_of_sequence_id = target_vocab_to_int['<GO>'] end_of_sequence_id = target_vocab_to_int['<EOS>'] maximum_length = sequence_length - 1 inference_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) return train_logits, inference_logits DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer(decoding_layer) Explanation: Build the Decoding Layer Implement decoding_layer() to create a Decoder RNN layer. Create RNN cell for decoding using rnn_size and num_layers. Create the output fuction using lambda to transform it's input, logits, to class logits. Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits. 
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits. Note: You'll need to use tf.variable_scope to share variables between training and inference. End of explanation def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param sequence_length: Sequence Length :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Decoder embedding size :param dec_embedding_size: Encoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training Logits, Inference Logits) # TODO: Implement Function #Apply embedding to the input data for the encoder. enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size) #Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob). enc_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob) #Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function. dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size) #Apply embedding to the target data for the decoder. dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) #Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob). train_logits, inference_logits = decoding_layer(dec_embed_input, dec_embeddings, enc_state, target_vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob) return train_logits, inference_logits DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_seq2seq_model(seq2seq_model) Explanation: Build the Neural Network Apply the functions you implemented above to: Apply embedding to the input data for the encoder. Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob). Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function. Apply embedding to the target data for the decoder. Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob). End of explanation # Number of Epochs epochs = 20 # Batch Size batch_size = 512 # RNN Size rnn_size = 128 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 200 decoding_embedding_size = 200 # Learning Rate learning_rate = 0.001 # Dropout Keep Probability keep_probability = 0.5 Explanation: Neural Network Training Hyperparameters Tune the following parameters: Set epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set num_layers to the number of layers. 
Set encoding_embedding_size to the size of the embedding for the encoder. Set decoding_embedding_size to the size of the embedding for the decoder. Set learning_rate to the learning rate. Set keep_probability to the Dropout keep probability End of explanation DON'T MODIFY ANYTHING IN THIS CELL save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_source_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob = model_inputs() sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model( tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) tf.identity(inference_logits, 'logits') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( train_logits, targets, tf.ones([input_shape[0], sequence_length])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) Explanation: Build the Graph Build the graph using the neural network you implemented. End of explanation DON'T MODIFY ANYTHING IN THIS CELL import time def get_accuracy(target, logits): Calculate accuracy max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1]), (0,0)], 'constant') return np.mean(np.equal(target, np.argmax(logits, 2))) train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = helper.pad_sentence_batch(source_int_text[:batch_size]) valid_target = helper.pad_sentence_batch(target_int_text[:batch_size]) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch) in enumerate( helper.batch_data(train_source, train_target, batch_size)): start_time = time.time() _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, sequence_length: target_batch.shape[1], keep_prob: keep_probability}) batch_train_logits = sess.run( inference_logits, {input_data: source_batch, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_source, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits) end_time = time.time() if batch_i % 10 == 0: print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') Explanation: Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem. 
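Because the training cell above saves the model with tf.train.Saver, an interrupted run can in principle be resumed by restoring that checkpoint into the same graph before continuing the loop. A minimal sketch, assuming the graph-building cell has already been executed so that train_graph, train_op, cost and the placeholders exist:

with train_graph.as_default():
    saver = tf.train.Saver()

with tf.Session(graph=train_graph) as sess:
    saver.restore(sess, save_path)          # reload the weights saved earlier
    # ...then continue exactly as in the training loop above, e.g.:
    # for source_batch, target_batch in helper.batch_data(train_source, train_target, batch_size):
    #     sess.run([train_op, cost], {input_data: source_batch,
    #                                 targets: target_batch,
    #                                 lr: learning_rate,
    #                                 sequence_length: target_batch.shape[1],
    #                                 keep_prob: keep_probability})
    saver.save(sess, save_path)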
End of explanation DON'T MODIFY ANYTHING IN THIS CELL # Save parameters for checkpoint helper.save_params(save_path) Explanation: Save Parameters Save the batch_size and save_path parameters for inference. End of explanation DON'T MODIFY ANYTHING IN THIS CELL import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() Explanation: Checkpoint End of explanation def sentence_to_seq(sentence, vocab_to_int): Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids # TODO: Implement Function lower_sentence = sentence.lower() id_seq = [] for word in lower_sentence.split(): id_seq.append(vocab_to_int.get(word, vocab_to_int['<UNK>'])) return id_seq DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_sentence_to_seq(sentence_to_seq) Explanation: Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences. Convert the sentence to lowercase Convert words into ids using vocab_to_int Convert words not in the vocabulary, to the &lt;UNK&gt; word id. End of explanation translate_sentence = 'he saw a old yellow truck .' DON'T MODIFY ANYTHING IN THIS CELL translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('logits:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)])) print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)])) Explanation: Translate This will translate translate_sentence from English to French. End of explanation
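To round off this example, the checkpoint loading and the translation call above can be wrapped into a small reusable helper so that several sentences can be translated without re-importing the graph each time. This is only a convenience sketch built from names already defined in this notebook (load_path, sentence_to_seq and the vocabulary dictionaries):

def make_translator(load_path, source_vocab_to_int, target_int_to_vocab):
    loaded_graph = tf.Graph()
    sess = tf.Session(graph=loaded_graph)   # kept open on purpose for repeated use
    with loaded_graph.as_default():
        loader = tf.train.import_meta_graph(load_path + '.meta')
        loader.restore(sess, load_path)
    input_data = loaded_graph.get_tensor_by_name('input:0')
    logits = loaded_graph.get_tensor_by_name('logits:0')
    keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')

    def translate(sentence):
        ids = sentence_to_seq(sentence, source_vocab_to_int)
        out = sess.run(logits, {input_data: [ids], keep_prob: 1.0})[0]
        return ' '.join(target_int_to_vocab[i] for i in np.argmax(out, 1))

    return translate

# translate = make_translator(load_path, source_vocab_to_int, target_int_to_vocab)
# print(translate('he saw a old yellow truck .'))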
1,088
Given the following text description, write Python code to implement the functionality described below step by step Description: So, this is all about the innocent little &#10033; (star or asterisk) as a versatile syntax element in Python. Depending on the context, it fulfills quite a few different roles. Here is a piece of code that uses all of them (as far as I know). Step1: Simple things first Step2: Although it is not even trying to make sense, the star-spangled code example further up actually works. If you look at the wrapped function, &#10033; is used as a boring old mathematical operator here Step3: Enforcing the API of a function If your function needs 23 arguments, you have a big problem anyway but you can at least alleviate it a bit by making calls to that function more readable. Passing some or all arguments as keyword arguments usually helps. Problem is Step4: With Python 3.0 a new syntax was introduced to make enforcement of so called "keyword-only arguments" possible. This is used in the definition of the append function above. When using this, everything after the [, ]*, has to be passed as keyword argument or you get into trouble. Trying to decorate a function with append and not passing end as a keyword parameter results in a friendly TypeError exception Step5: Pack and unpack arguments This goes back to at least Python 2.0. In this case &#10033; and &#10033;&#10033; are syntax elements to be used as prefix, when defining or calling functions. The idea is usually that you want to pass through parameters to an underlying function without having to care about what or even how many they are. In this example we have a function that is just passing through arguments without needing to now anything about them Step6: A case where this is particularly useful is when creating decorators that are not opinionated about the kind of function they decorate (like append). They just need to pass through whatever the decorated function needs to be called with. There is even more to unpack In pre Python3 days so-called tuple unpacking was already supported. Here is the classic example of swapping assignments between two names Step7: PEP 3132 - extended iterable unpacking brought the star into the "classic" tuple unpacking (which was never restricted to tuples but that name somehow stuck) Step8: There is also more to pack Pretty much analogous to how &#10033; and &#10033;&#10033; are used in function calls they can be used in literals to create new iterables or mappings Step9: This is the more "natural" approach for sets though (union) Step10: As the underlying functionality only cares about whether something is iterable, you can mix and match. This creates a tuple from a list and a set Step11: Be aware though that merging maps like this is not recursive. Later keys overwrite earlier ones. Here foo will contain the second dict after merging Step12: Import all the things The last star shines a bit dimly as this is usually an antipattern and it looks like this
Python Code: # Yes. The code makes no sense. Thanks for pointing it out. from os import * def append(*, end=linesep): def _append(function): def star_reporter(*args, **kwargs): print(*args, **kwargs, end=end) return function(*args, **kwargs) return star_reporter return _append @append(end=" ❇❇❇" + linesep) def wrapped(stars, bars): first, *middle, last = stars for elem in [*middle, last, *bars]: first *= 2 ** elem print(f"answer: {first} (don't know the question though)") Explanation: So, this is all about the innocent little &#10033; (star or asterisk) as a versatile syntax element in Python. Depending on the context, it fulfills quite a few different roles. Here is a piece of code that uses all of them (as far as I know). End of explanation 2 * 2 2 ** 4 'spam' * 3 Explanation: Simple things first: &#10033; and &#10033;&#10033; operators One of the first things a new Python disciple might learn is how to use Python as a calculator - like in many other languages &#10033; is the multiplication operator and &#10033;&#10033; is used for exponentiation - e.g.: End of explanation wrapped([1, 2, 3, 4], (23, 42)) Explanation: Although it is not even trying to make sense, the star-spangled code example further up actually works. If you look at the wrapped function, &#10033; is used as a boring old mathematical operator here: first *= 2 ** elem (which is using an augmented assignment and is the same as first = first * 2 ** elem). If we run wrappped, we won't get a useful result but at least we see that the code executes: End of explanation def kw_only_you_wish(spam=None, eggs=None, lobster=None): return spam + eggs * lobster kw_only_you_wish(2, 3 ,4) Explanation: Enforcing the API of a function If your function needs 23 arguments, you have a big problem anyway but you can at least alleviate it a bit by making calls to that function more readable. Passing some or all arguments as keyword arguments usually helps. Problem is: the caller normally has the choice how to pass the arguments. You can even call a "keyword only" function like this: End of explanation @append("❈❈❈") def badly_wrapped(): pass Explanation: With Python 3.0 a new syntax was introduced to make enforcement of so called "keyword-only arguments" possible. This is used in the definition of the append function above. When using this, everything after the [, ]*, has to be passed as keyword argument or you get into trouble. Trying to decorate a function with append and not passing end as a keyword parameter results in a friendly TypeError exception: End of explanation def passing_things_through_function(*args, **kwargs): print(f"passing through {args=} and {kwargs=}") the_actual_function(*args, **kwargs) def the_actual_function(a, b, c=None, d=None): print(f"passed arguments: {a=}, {b=}, {c=}, {d=}") passing_things_through_function(*[1, 2], **dict(c=3, d=4)) Explanation: Pack and unpack arguments This goes back to at least Python 2.0. In this case &#10033; and &#10033;&#10033; are syntax elements to be used as prefix, when defining or calling functions. The idea is usually that you want to pass through parameters to an underlying function without having to care about what or even how many they are. 
In this example we have a function that is just passing through arguments without needing to now anything about them: End of explanation a = 1 b = 2 print(f"before: {a=}, {b=}") a, b = b, a print(f"after: {a=}, {b=}") Explanation: A case where this is particularly useful is when creating decorators that are not opinionated about the kind of function they decorate (like append). They just need to pass through whatever the decorated function needs to be called with. There is even more to unpack In pre Python3 days so-called tuple unpacking was already supported. Here is the classic example of swapping assignments between two names: End of explanation for iterable in [ "egg", [1, 2, 3], (1, 2, 3), {1, 2, 3}, {1: 'a', 2: 'b', 3: 'c'} ]: print(f"{iterable} ({type(iterable)}):") a, b, c = iterable print(f"a, b, c -> {a} {b} {c}") *a, b = iterable print(f"*a, b = iterable -> {a} {b}") a, *b = iterable print(f"a, *b = iterable -> {a} {b}\n") Explanation: PEP 3132 - extended iterable unpacking brought the star into the "classic" tuple unpacking (which was never restricted to tuples but that name somehow stuck): End of explanation a, b = [1, 2, 3], [4, 5, 6] [*a, *b] a, b = {1: 2, 2: 3, 3: 4}, {1: 4, 4: 5, 5: 6} {**a, **b} a, b = {1, 2 ,3}, {3, 4 ,5} {*a, *b} Explanation: There is also more to pack Pretty much analogous to how &#10033; and &#10033;&#10033; are used in function calls they can be used in literals to create new iterables or mappings: This syntax to merge iterables was implemented via PEP 448 (additional unpacking generalizations) in Python 3.5.[^1] [^1]: For the historically interested: discussion on the mailing list part I and part II. End of explanation a | b Explanation: This is the more "natural" approach for sets though (union): End of explanation (*[1, 2 ,3], *{3, 4 ,5}) Explanation: As the underlying functionality only cares about whether something is iterable, you can mix and match. This creates a tuple from a list and a set: End of explanation a = {"a": 1, "foo": { "a": 1}} b = {"a": 1, "foo": { "b": 2, "c": 3}} {**a, **b} Explanation: Be aware though that merging maps like this is not recursive. Later keys overwrite earlier ones. Here foo will contain the second dict after merging: End of explanation from os import * Explanation: Import all the things The last star shines a bit dimly as this is usually an antipattern and it looks like this: End of explanation
1,089
Given the following text description, write Python code to implement the functionality described below step by step Description: Initialisation Ecrire un jeu d'instructions permettant d'initialiser la vitesse de chacun des moteurs à 30°/s. de mettre les moteurs poppy.m1 à poppy.m6 dans les positions données par la liste suivante pos_init = [0, -90, 30, 0, 60, 0]. Remarque Step1: Quelques remarques L'itérateur m est ici une variable locale au sein de la boucle pour et il va parcourir dans l'ordre la liste poppy.motors On peut donc considérer qu'il est du type poppy.mi il possède alors tous les attributs de poppy.mi On a besoin d'un compteur pour parcourir la liste contenant les positions initiales. Ici, la variable i qui doit alors être incrémentée de 1 à chaque passage dans la boucle pour afin de faire coincider le moteur et la position à atteindre par ce moteur. Une autre syntaxe possible pour parcourrir deux listes simultanement Step2: Faire de ce jeu d'instructions une procédure On veut faire de ces instructions d'initialisation une procédure dont les arguments sont le robot nommé bot et la liste donnant les positions initiales des moteurs nommé pos_initiale. Le prototype de cette procédure est Step3: Quelques remarques La variable i est ici une variable locale bot et pos_initiale sont les deux arguments de la fonction. Ici, lors de l'appel de la procédure, poppy est visible à partir de son instanciation et du fait des caractéristiques du robot (synchronisation), il n'est pas utile de le passer en argument à moins d'avoir plusieurs robots instanciés. Par ailleurs, on pourrait créer une fonction avec des arguments optionnels, la recherche de la syntaxe est laissée à votre charge. Tester votre procédure Faire fonctionner votre procédure avec poppy et pos_init = [0, -90, 30, 0, 60, 0] puis avec [30, -60, 30, -30, 60, 20] Step4: Une nouvelle procédure f_init2 Définir une nouvelle procédure f_init dont le prototype est f_init2(bot, pos_initiale, vitesse) et qui permet cette fois d'initialiser la vitesse des moteurs à la valeur vitesse donnée en argument. Step5: QUESTION Step6: Vérifier votre réponse en la testant. Step7: Expliquer ce que doit faire la fonction f_pos_cible. La fonction f_pos_cible va enregistrer la position courante de tous les moteurs du robot dans une liste et la retourner. Step8: Défi On veut pouvoir créer un mouvement d'une position de départ à une position d'arrivée. Pour cela
Python Code: i = 0 pos_init = [0, -90, 30, 0, 60, 0] for m in poppy.motors: m.moving_speed = 60 m.compliant = False m.goal_position = pos_init[i] i = i + 1 Explanation: Initialisation Ecrire un jeu d'instructions permettant d'initialiser la vitesse de chacun des moteurs à 30°/s. de mettre les moteurs poppy.m1 à poppy.m6 dans les positions données par la liste suivante pos_init = [0, -90, 30, 0, 60, 0]. Remarque : Lorsque la vitesse du moteur poppy.m1 == 0 ou lorsque le moteur est dans l'état compliant == True, la commande poppy.m1.goal_position = 50 n'a pas d'effet. End of explanation for (motor, pos) in zip(poppy.motors, pos_init): motor.moving_speed = 60 motor.compliant = False motor.goal_position = pos Explanation: Quelques remarques L'itérateur m est ici une variable locale au sein de la boucle pour et il va parcourir dans l'ordre la liste poppy.motors On peut donc considérer qu'il est du type poppy.mi il possède alors tous les attributs de poppy.mi On a besoin d'un compteur pour parcourir la liste contenant les positions initiales. Ici, la variable i qui doit alors être incrémentée de 1 à chaque passage dans la boucle pour afin de faire coincider le moteur et la position à atteindre par ce moteur. Une autre syntaxe possible pour parcourrir deux listes simultanement : End of explanation def f_init(bot, pos_initiale): i = 0 for m in bot.motors: m.moving_speed = 30 m.compliant = False m.goal_position = pos_initiale[i] i = i + 1 print("la vitesse de mouvement a été mise à jour à ", bot.m1.moving_speed) Explanation: Faire de ce jeu d'instructions une procédure On veut faire de ces instructions d'initialisation une procédure dont les arguments sont le robot nommé bot et la liste donnant les positions initiales des moteurs nommé pos_initiale. Le prototype de cette procédure est : f_init(bot, pos_initiale). A la fin de l'exécution de la procédure, on affichera un message pour identifier ce qui a été fait. Remarque : En Python, on déclare une procédure à l'aide du mot réservé def suivi du prototype de la procédure. Cette ligne ce termine par :. Ensuite, c'est l'indentation qui délimite le contenu de cette procédure. Remarque : Il en est de même pour une fonction, celle-ci comportera le mot réservé return qui permettra à l'issue du traitement de retourner le contenu souhaité. End of explanation f_init(poppy, pos_init) print(poppy.m1.moving_speed) Explanation: Quelques remarques La variable i est ici une variable locale bot et pos_initiale sont les deux arguments de la fonction. Ici, lors de l'appel de la procédure, poppy est visible à partir de son instanciation et du fait des caractéristiques du robot (synchronisation), il n'est pas utile de le passer en argument à moins d'avoir plusieurs robots instanciés. Par ailleurs, on pourrait créer une fonction avec des arguments optionnels, la recherche de la syntaxe est laissée à votre charge. Tester votre procédure Faire fonctionner votre procédure avec poppy et pos_init = [0, -90, 30, 0, 60, 0] puis avec [30, -60, 30, -30, 60, 20] End of explanation def f_init2(bot, pos_initiale, vitesse): i = 0 for m in bot.motors: m.moving_speed = vitesse m.compliant = False m.goal_position = pos_initiale[i] i = i + 1 Explanation: Une nouvelle procédure f_init2 Définir une nouvelle procédure f_init dont le prototype est f_init2(bot, pos_initiale, vitesse) et qui permet cette fois d'initialiser la vitesse des moteurs à la valeur vitesse donnée en argument. 
End of explanation def f_bouger_a_la_main(bot): for m in bot.motors : m.compliant = True Explanation: QUESTION : Expliquer pourquoi deux procédures ne "peuvent" pas avoir le même nom. Si elles avaient le même nom, l'exécution du bloc d'instructions permettant de la définir la seconde fois écraserait alors la définition première de la fonction. Il n'en resterait alors plus qu'une seule, la dernière. Ce n'est pas le nombre d'arguments qui va définir une fonction mais son nom. Expliquer le rôle de la procédure définie ci-dessous. Par son appel, elle va permettre de mettre tous les moteurs du robot bot dans l'état compliant c'est à dire le rendre souple/mobile. End of explanation f_bouger_a_la_main(poppy) Explanation: Vérifier votre réponse en la testant. End of explanation def f_pos_cible(bot): f_pos_cible = [] for m in bot.motors: f_pos_cible.append(m.present_position) return f_pos_cible Explanation: Expliquer ce que doit faire la fonction f_pos_cible. La fonction f_pos_cible va enregistrer la position courante de tous les moteurs du robot dans une liste et la retourner. End of explanation pos_arrivee = f_pos_cible(poppy) pos_arrivee = [30.65, -34.46, 63.49, -30.94, -86.36, -31.82] pos_depart = f_pos_cible(poppy) pos_depart = [3.37, -98.39, 80.79, -1.32, 11.58, -15.98] def f_mouv(bot, pos_D, pos_A): f_init2(poppy, pos_D, 50) time.sleep(3.0) i = 0 for m in poppy.motors: m.goal_position = pos_A[i] m.led = 'red' time.sleep(2 * abs(pos_A[i] - pos_D[i]) / m.moving_speed) i = i + 1 m.led = 'green' f_mouv(poppy, pos_depart, pos_arrivee) f_mouv(poppy, pos_arrivee, pos_depart) Explanation: Défi On veut pouvoir créer un mouvement d'une position de départ à une position d'arrivée. Pour cela : on va initialiser les positions de départ et d'arrivée Faire bouger les moteurs un par un de la position de départ à la position d'arrivée. Pendant toute la durée du mouvement, la led du moteur doit être rouge une fois le mouvement fini, elle doit passer au vert. End of explanation
1,090
Given the following text description, write Python code to implement the functionality described below step by step Description: 2-3 Trees Step1: Ths notebook presents <a href="https Step2: The function make_string is a helper function used to shorten the implementation of __str__. - obj is the object that is to be rendered as a string - attributes is a list of those member variables that are used to produce the string Step3: The method $t.\texttt{toDot}()$ takes a 2-3-4 tree $t$ and returns a graph that depicts the tree $t$. Step4: The method $t.\texttt{collectIDs}(d)$ takes a tree $t$ and a dictionary $d$ and updates the dictionary so that the following holds Step5: The function $\texttt{toDotList}(\texttt{NodeList})$ takes a list of trees and displays them one by one. Step6: The class Tree is not used in the implementation of 2-3 trees. It is only used for displaying abstract subtrees in equations. It is displayed as a triangle containing the string that is stored in the member variable mName. Step7: The class Method is not used in the implementation of 2-3 trees. It is only used for displaying method calls in equations. It is displayed as a rectangle containing the string that is stored in the member variable mLabel. Step8: The class Nil represents an empty tree. It has no member variables of its own. Step9: The class Onerepresents a 1-node. These are nodes without a key that have only a single child. Step10: Graphically, the node $\texttt{One}(t)$ is represented as shown below Step11: The class Two represents a 2-node of the form $\texttt{Two}(l, k, r)$. It manages three member variables Step12: Graphically, the node $\texttt{Two}(l, k, r)$ is represented as shown below Step13: The class Three represents a 3-node of the form $\texttt{Three}(l, k_L, m, k_R, r)$. It manages 5 member variables Step14: Graphically, the node $\texttt{Three}(l, k_L, m, k_R, r)$ is represented as shown below Step15: The class Four represents a 4-node. It manages 7 member variables Step16: Graphically, the node $\texttt{Four}(l, k_L, m_L, k_M, m_R, k_R, r)$ is represented as shown below Step17: Methods of the Class Nil The empty tree does not contain any keys Step18: Insertings a key $k$ into an empty node returns a 2-node with two empty subtrees. Step19: Mathematically, this can be written as follows Step20: Methods of the Class One Step21: Methods of the Class Two The method extract returns the member variables stored in a 2-node. This is usefull to shorten the code since when we use this method, we don't have to prefix all variable names with self.. Step22: Given a 2-node $t$ and a key $k$, the method $t.\texttt{member}(k)$ checks whether the key $k$ occurs in $t$. It is specified as follows Step23: The method $t.\texttt{ins}(k)$ takes a 2-3 tree $t$ and and a key $k$ and inserts the key $k$ into $t$. It returns a 2-3-4 tree that has at most one 4-node, which has to be a child of the root node. The function $\texttt{ins}$ is recursive and uses the function $\texttt{restore}$ defined below. The most important invariant satisfied by the method call $t.\texttt{ins}(k)$ is the fact that the tree $t.\texttt{ins}(k)$ has the same height as the tree $t$. 
The different cases that need to be handled by ins are shown graphically below Step24: $\displaystyle\texttt{Two}(l,k,r).\texttt{ins}(k) = \texttt{Two}(l,k,r)$ Step25: $k_1 < k_2 \rightarrow \texttt{Two}(\texttt{Nil},k_1,\texttt{Nil}).\texttt{ins}(k_2) = \texttt{Three}(\texttt{Nil},k_1,\texttt{Nil},k_2,\texttt{Nil})$ Step26: $k_2 < k_1 \rightarrow \texttt{Two}(\texttt{Nil},k_1,\texttt{Nil}).\texttt{ins}(k_2) = \texttt{Three}(\texttt{Nil},k_2,\texttt{Nil},k_1,\texttt{Nil})$ Step27: $k_1 < k_2 \wedge l \not= \texttt{Nil} \wedge r \not= \texttt{Nil} \rightarrow \texttt{Two}(l,k_1,r).\texttt{ins}(k_2) = \texttt{Two}(l,k_1,r.\texttt{ins}(k_2)).\texttt{restore}()$ Step28: $k_2 < k_1 \wedge l \not= \texttt{Nil} \wedge r \not= \texttt{Nil} \rightarrow \texttt{Two}(l,k_1,r).\texttt{ins}(k_2) = \texttt{Two}(l.\texttt{ins}(k_2),k_1,r).\texttt{restore}()$ I have collected all of these equations below Step29: The function call $t.\texttt{restore}()$ takes a 2-3-4 tree $t$ that has at most one 4-node. This 4-node has to be a child of the root. It returns a 2-3-4 tree that has at most one 4-node. This 4-node has to be the root node. Graphically, it is specified as shown below. Step30: $\texttt{Two}\bigl(\texttt{Four}(l_1,k_l,m_l,k_m,m_r,k_r,r_1), k, r\bigr).\texttt{restore}() = \texttt{Three}\bigl(\texttt{Two}(l_1, k_l, m_l), k_m, \texttt{Two}(m_r, k_r, r_1), k, r\bigr) $ Step31: $\texttt{Two}\bigl(l, k, \texttt{Four}(l_1,k_l,m_l,k_m,m_r,k_r,r_1)\bigr).\texttt{restore}() = \texttt{Three}\bigl(l, k, \texttt{Two}(l_1, k_l, m_l), k_m, \texttt{Two}(m_r, k_r, r_1)\bigr) $ I have collected both equations below Step32: Methods of the Class Three The method extract returns the member variables stored in a 3-node. Step33: Given a 3-node $t$ and a key $k$, the method $t.\texttt{member}(k)$ checks whether the key $k$ occurs in $t$. It is specified as follows Step34: The method $t.\texttt{ins}(k)$ takes a 2-3 tree $t$ and and a key $k$ and inserts the key $k$ into $t$. It returns a 2-3-4 tree that has at most one 4-node, which has to be a child of the root node. The function $\texttt{ins}$ is recursive and uses the function $\texttt{restore}$ defined below. It is defined as follows Step35: The function call $t.\texttt{restore}()$ takes a 2-3-4 tree $t$ that has at most one 4-node. This 4-node has to be a child of the root. It returns a 2-3-4 tree that has at most one 4-node. This 4-node has to be the root node. The most important invariant satisfied by the method call $t.\texttt{ins}(k)$ is the fact that the tree $t.\texttt{ins}(k)$ has the same height as the tree $t$. The different cases that need to be handled by ins are shown graphically below Step36: $\texttt{Three}\bigl(\texttt{Four}(l_1,k_1,m_l,k_2,m_r,k_3,r_1), k_l, m, k_r, r\bigr).\texttt{restore}() = \texttt{Four}\bigl(\texttt{Two}(l_1, k_1, m_l), k_2, \texttt{Two}(m_r, k_3, r_1), k_l, m, k_r, r\bigr) $ Step37: $\texttt{Three}\bigl(l, k_l, \texttt{Four}(l_1,k_1,m_l,k_2,m_r,k_3,r_1), k_r, r\bigr).\texttt{restore}() = \texttt{Four}\bigl(l, k_l, \texttt{Two}(l_1, k_1, m_l), k_2, \texttt{Two}(m_r, k_3, r_1), k_r, r\bigr) $ Step38: $\texttt{Three}\bigl(l, k_l, m, k_r, \texttt{Four}(l_1,k_1,m_l,k_2,m_r,k_3,r_1)\bigr).\texttt{restore}() = \texttt{Four}\bigl(l, k_l, m, k_r, \texttt{Two}(l_1, k_1, m_l), k_2, \texttt{Two}(m_r, k_3, r_1)\bigr) $ Below I have collected all the equations specifying the implementation of restore for 3-nodes. 
- $\texttt{Three}\bigl(\texttt{Four}(l_1,k_1,m_l,k_2,m_r,k_3,r_1), k_l, m, k_r, r\bigr).\texttt{restore}() = \texttt{Four}\bigl(\texttt{Two}(l_1, k_1, m_l), k_2, \texttt{Two}(m_r, k_3, r_1), k_l, m, k_r, r\bigr) $ - $\texttt{Three}\bigl(l, k_l, \texttt{Four}(l_1,k_1,m_l,k_2,m_r,k_3,r_1), k_r, r\bigr).\texttt{restore}() = \texttt{Four}\bigl(l, k_l, \texttt{Two}(l_1, k_1, m_l), k_2, \texttt{Two}(m_r, k_3, r_1), k_r, r\bigr) $ - $\texttt{Three}\bigl(l, k_l, m, k_r, \texttt{Four}(l_1,k_1,m_l,k_2,m_r,k_3,r_1)\bigr).\texttt{restore}() = \texttt{Four}\bigl(l, k_l, m, k_r, \texttt{Two}(l_1, k_1, m_l), k_2, \texttt{Two}(m_r, k_3, r_1)\bigr) $ If neither of the child nodes of a 3-node is a 4-node, the node is returned unchanged. Step39: Methods of the Class Four The method extract returns the member variables stored in a 4-node. Step40: The method restore returns a 4-node unchanged. Step41: The function grow turns a 4-node into 3 2-nodes. Graphically, it is specified as follows Step42: $\texttt{Four}(l, k_l, m_l, k_m, m_r, k_r, r).\texttt{grow}() = \texttt{Two}\bigl(\texttt{Two}(l, k_l, m_l), k_m, \texttt{Two}(m_r, k_r, r)\bigr)$ Step43: Testing Step44: Let's generate 2-3 tree with random keys. Step45: Lets us try to create a tree by inserting sorted numbers because that resulted in linear complexity for ordered binary trees. Step46: Finally, we compute the set of prime numbers $\leq 100$. Mathematically, this set is given as follows
Python Code: import graphviz as gv Explanation: 2-3 Trees End of explanation class TwoThreeTree: sNodeCount = 0 def __init__(self): TwoThreeTree.sNodeCount += 1 self.mID = TwoThreeTree.sNodeCount def getID(self): return self.mID def isNil(self): return False def isOne(self): return False def isTwo(self): return False def isThree(self): return False def isFour(self): return False def isTree(self): return False def isMethod(self): return False def insert(self, k): return self._ins(k)._restore()._grow() def delete(self, k): return self._del(k)._repair()._shrink() def _grow(self): return self def _shrink(self): return self Explanation: Ths notebook presents <a href="https://en.wikipedia.org/wiki/2-3_tree">2-3 trees</a>. We define these trees inductively as follows: - $\texttt{Nil}$ is a 2-3 tree that represents the empty set. - $\texttt{Two}(l, k, r)$ is a 2-3 tree provided that - $l$ is a 2-3 tree, - $k$ is a key, - $r$ is a 2-3 tree, - all keys stored in $l$ are less than k and all keys stored in $r$ are bigger than $k$, that is we have $$ l < k < r. $$ - $l$ and $r$ have the same height. A node of the form $\texttt{Two}(l, k, r)$ is called a 2-node. Except for the fact that there is no value, a 2-node is interpreted in the same way as we have interpreted the term $\texttt{Node}(k, v, l, r)$. - $\texttt{Three}(l, k_l, m, k_r, r)$ is a 2-3 tree provided - $l$, $m$, and $r$ are 2-3 trees, - $k_l$ and $k_r$ are keys, - $l < k_l < m < k_r < r$, - $l$, $m$, and $r$ have the same height. A node of the form $\texttt{Three}(l, k_l, m, k_r, r)$ is called a 3-node. In order to keep 2-3 trees balanced when inserting new keys we use a fourth constructor of the form $\texttt{Four}(l,k_1,m_l, k_2, m_r, k_3, r)$. A term of the form $\texttt{Four}(l,k_1,m_l, k_2, m_r, k_3, r)$ is a 2-3-4 tree iff - $l$, $m_l$, $m_r$, and $r$ are 2-3 trees, - $k_l$, $k_m$, and $k_r$ are keys, - $l < k_l < m_l < k_m < m_r < k_r < r$, - $l$, $m_l$, $m_r$, and $r$ all have the same height. Nodes of this form are called 4-nodes and the key $k_m$ is called the middle key. Trees containing 2-nodes, 3-node, and 4-nodes are called 2-3-4 trees. In order to keep 2-3 trees balanced when deleting keys we use a fifth constructor of the form $\texttt{One}(t)$. A term of the form $\texttt{One}(t)$ is a 1-2-3 tree iff $t$ is a 2-3 tree. The class TwoThreeTree is a superclass for constructing the nodes of 2-3-4 trees. It has one static variable sNodeCount. This variable is used to equip all nodes with a unique identifier. This identifier is used to draw the trees using graphviz. Every node has a uniques id mID that is stored as a member variable. Furthermore, this class provides defaults for the functions isNil, isTwo, isThree, and isFour. These functions can be used to check the type of a node. End of explanation def _make_string(self, attributes): # map the function __str__ to all attributes and join them with a comma name = self.__class__.__name__ return f"{name}({', '.join(map(str, [getattr(self, at) for at in attributes]))})" TwoThreeTree._make_string = _make_string Explanation: The function make_string is a helper function used to shorten the implementation of __str__. 
- obj is the object that is to be rendered as a string - attributes is a list of those member variables that are used to produce the string End of explanation def toDot(self): dot = gv.Digraph(node_attr={'shape': 'record', 'style': 'rounded'}) nodeDict = {} self._collectIDs(nodeDict) for n, t in nodeDict.items(): if t.isNil(): dot.node(str(n), label='', shape='point') elif t.isOne(): dot.node(str(n), label='', shape='point') elif t.isTwo(): dot.node(str(n), label=str(t.mKey)) elif t.isThree(): dot.node(str(n), label=str(t.mKeyL) + '|' + str(t.mKeyR)) elif t.isFour(): dot.node(str(n), label=str(t.mKeyL) + '|' + str(t.mKeyM) + '|' + str(t.mKeyR)) elif t.isTree(): dot.node(str(n), label=str(t.mName), shape='triangle') else: assert False, f'Unknown node {t}' for n, t in nodeDict.items(): if t.isOne(): dot.edge(str(n), str(t.mChild.getID())) if t.isTwo(): dot.edge(str(n), str(t.mLeft .getID())) dot.edge(str(n), str(t.mRight.getID())) if t.isThree(): dot.edge(str(n), str(t.mLeft .getID())) dot.edge(str(n), str(t.mMiddle.getID())) dot.edge(str(n), str(t.mRight .getID())) if t.isFour(): dot.edge(str(n), str(t.mLeft .getID())) dot.edge(str(n), str(t.mMiddleL.getID())) dot.edge(str(n), str(t.mMiddleR.getID())) dot.edge(str(n), str(t.mRight .getID())) return dot TwoThreeTree.toDot = toDot Explanation: The method $t.\texttt{toDot}()$ takes a 2-3-4 tree $t$ and returns a graph that depicts the tree $t$. End of explanation def _collectIDs(self, nodeDict): nodeDict[self.getID()] = self if self.isOne(): self.mChild._collectIDs(nodeDict) elif self.isTwo(): self.mLeft ._collectIDs(nodeDict) self.mRight._collectIDs(nodeDict) elif self.isThree(): self.mLeft ._collectIDs(nodeDict) self.mMiddle._collectIDs(nodeDict) self.mRight ._collectIDs(nodeDict) elif self.isFour(): self.mLeft ._collectIDs(nodeDict) self.mMiddleL._collectIDs(nodeDict) self.mMiddleR._collectIDs(nodeDict) self.mRight ._collectIDs(nodeDict) TwoThreeTree._collectIDs = _collectIDs Explanation: The method $t.\texttt{collectIDs}(d)$ takes a tree $t$ and a dictionary $d$ and updates the dictionary so that the following holds: $$ d[\texttt{id}] = n \quad \mbox{for every node $n$ in $t$.} $$ Here, $\texttt{id}$ is the unique identifier of the node $n$, i.e. $d$ associates the identifiers with the corresponding nodes. 
End of explanation def toDotList(NodeList): dot = gv.Digraph(node_attr={'shape': 'record', 'style': 'rounded'}) nodeDict = {} for node in NodeList: node._collectIDs(nodeDict) for n, t in nodeDict.items(): if t.isNil(): dot.node(str(n), label='', shape='point') elif t.isOne(): dot.node(str(n), label='', shape='point') elif t.isTwo(): dot.node(str(n), label=str(t.mKey)) elif t.isThree(): dot.node(str(n), label=str(t.mKeyL) + '|' + str(t.mKeyR)) elif t.isFour(): dot.node(str(n), label=str(t.mKeyL) + '|' + str(t.mKeyM) + '|' + str(t.mKeyR)) elif t.isTree(): dot.node(str(n), label=str(t.mName), shape='triangle', style='solid') elif t.isMethod(): dot.node(str(n), label=str(t.mLabel), shape='rectangle', style='dotted') else: assert False, f'toDotList: Unknown node {str(t)}' for n, t in nodeDict.items(): if t.isOne(): dot.edge(str(n), str(t.mChild.getID())) if t.isTwo(): dot.edge(str(n), str(t.mLeft .getID())) dot.edge(str(n), str(t.mRight.getID())) if t.isThree(): dot.edge(str(n), str(t.mLeft .getID())) dot.edge(str(n), str(t.mMiddle.getID())) dot.edge(str(n), str(t.mRight .getID())) if t.isFour(): dot.edge(str(n), str(t.mLeft .getID())) dot.edge(str(n), str(t.mMiddleL.getID())) dot.edge(str(n), str(t.mMiddleR.getID())) dot.edge(str(n), str(t.mRight .getID())) return dot Explanation: The function $\texttt{toDotList}(\texttt{NodeList})$ takes a list of trees and displays them one by one. End of explanation class Tree(TwoThreeTree): def __init__(self, name): TwoThreeTree.__init__(self) self.mName = name def __str__(self): return self.mName def isTree(self): return True Explanation: The class Tree is not used in the implementation of 2-3 trees. It is only used for displaying abstract subtrees in equations. It is displayed as a triangle containing the string that is stored in the member variable mName. End of explanation class Method(TwoThreeTree): def __init__(self, label): TwoThreeTree.__init__(self) self.mLabel = label def __str__(self): return self.mLabel def isMethod(self): return True Explanation: The class Method is not used in the implementation of 2-3 trees. It is only used for displaying method calls in equations. It is displayed as a rectangle containing the string that is stored in the member variable mLabel. End of explanation class Nil(TwoThreeTree): def __init__(self): TwoThreeTree.__init__(self) def isNil(self): return True def __str__(self): return 'Nil()' Explanation: The class Nil represents an empty tree. It has no member variables of its own. End of explanation class One(TwoThreeTree): def __init__(self, child): TwoThreeTree.__init__(self) self.mChild = child def isOne(self): return True def __str__(self): return make_string(self, ['mChild']) Explanation: The class Onerepresents a 1-node. These are nodes without a key that have only a single child. End of explanation toDotList([One(Tree('t'))]) Explanation: Graphically, the node $\texttt{One}(t)$ is represented as shown below: End of explanation class Two(TwoThreeTree): def __init__(self, left, key, right): TwoThreeTree.__init__(self) self.mLeft = left self.mKey = key self.mRight = right def isTwo(self): return True def __str__(self): return self._make_string(['mLeft', 'mKey', 'mRight']) Explanation: The class Two represents a 2-node of the form $\texttt{Two}(l, k, r)$. It manages three member variables: - mLeft is the left subtree $l$, - mKey is the key that is stored at this node, - mRight is the right subtree $r$. 
End of explanation toDotList([Two(Tree('l'), 'k', Tree('r'))]) Explanation: Graphically, the node $\texttt{Two}(l, k, r)$ is represented as shown below: End of explanation class Three(TwoThreeTree): def __init__(self, left, keyL, middle, keyR, right): TwoThreeTree.__init__(self) self.mLeft = left self.mKeyL = keyL self.mMiddle = middle self.mKeyR = keyR self.mRight = right def __str__(self): return self._make_string(['mLeft', 'mKeyL', 'mMiddle', 'mKeyR', 'mRight']) def isThree(self): return True Explanation: The class Three represents a 3-node of the form $\texttt{Three}(l, k_L, m, k_R, r)$. It manages 5 member variables: - mLeft is the left subtree $l$, - mKeyL is the left key $k_L$, - mMiddle is the middle subtree $m$, - mKeyR is the right key $k_r$, - mRight is the right subtree. End of explanation toDotList([Three(Tree('l'), 'kL', Tree('m'), 'kR', Tree('r'))]) Explanation: Graphically, the node $\texttt{Three}(l, k_L, m, k_R, r)$ is represented as shown below: End of explanation class Four(TwoThreeTree): def __init__(self, l, kl, ml, km, mr, kr, r): TwoThreeTree.__init__(self) self.mLeft = l self.mKeyL = kl self.mMiddleL = ml self.mKeyM = km self.mMiddleR = mr self.mKeyR = kr self.mRight = r def __str__(self): return self._make_string(['mLeft', 'mKeyL', 'mMiddleL', 'mKeyM', 'mMiddleR', 'mKeyR', 'mRight']) def isFour(self): return True Explanation: The class Four represents a 4-node. It manages 7 member variables: - mLeft is the left subtree $l$, - mKeyL is the left key $k_L$, - mMiddleL is the middle left subtree $m_L$, - mKeyM is the middle key, - mMiddleR is the middle right subtree $m_R$, - mKeyR is the right key $k_r$, - mRight is the right subtree. End of explanation toDotList([Four(Tree('l'), 'kL', Tree('mL'), 'kM', Tree('mR'), 'kR', Tree('r'))]) Explanation: Graphically, the node $\texttt{Four}(l, k_L, m_L, k_M, m_R, k_R, r)$ is represented as shown below: End of explanation def member(self, k): return False Nil.member = member Explanation: Methods of the Class Nil The empty tree does not contain any keys: $$ \texttt{Nil}.\texttt{member}(k) = \texttt{False} $$ End of explanation toDotList([Nil(), Method('ins(k)'), Two(Nil(), 'k', Nil())]) Explanation: Insertings a key $k$ into an empty node returns a 2-node with two empty subtrees. End of explanation def _ins(self, k): return "your code here" Nil._ins = _ins Explanation: Mathematically, this can be written as follows: $$ \texttt{Nil}.\texttt{ins}(k) = \texttt{Two}(\texttt{Nil}, k, \texttt{Nil}) $$ The implementation is straightforward as shown below. End of explanation def _extract(self): return self.mChild One._extract = _extract Explanation: Methods of the Class One End of explanation def _extract(self): return self.mLeft, self.mKey, self.mRight Two._extract = _extract Explanation: Methods of the Class Two The method extract returns the member variables stored in a 2-node. This is usefull to shorten the code since when we use this method, we don't have to prefix all variable names with self.. End of explanation def member(self, key): l, k, r = self._extract() if k == key: return True elif key < k: return l.member(key) elif key > self.mKey: return r.member(key) Two.member = member Explanation: Given a 2-node $t$ and a key $k$, the method $t.\texttt{member}(k)$ checks whether the key $k$ occurs in $t$. 
It is specified as follows: - $\texttt{Two}(l,k,r).\texttt{member}(k) = \texttt{True}$, - $k_1 < k_2 \rightarrow \texttt{Two}(l,k_1,r).\texttt{member}(k_2) = r.\texttt{member}(k_2)$, - $k_1 > k_2 \rightarrow \texttt{Two}(l,k_1,r).\texttt{member}(k_2) = l.\texttt{member}(k_2)$. End of explanation toDotList([Two(Tree('l'), 'k', Tree('r')), Method('.ins(k)'), Two(Tree('l'), 'k', Tree('r')) ]) Explanation: The method $t.\texttt{ins}(k)$ takes a 2-3 tree $t$ and and a key $k$ and inserts the key $k$ into $t$. It returns a 2-3-4 tree that has at most one 4-node, which has to be a child of the root node. The function $\texttt{ins}$ is recursive and uses the function $\texttt{restore}$ defined below. The most important invariant satisfied by the method call $t.\texttt{ins}(k)$ is the fact that the tree $t.\texttt{ins}(k)$ has the same height as the tree $t$. The different cases that need to be handled by ins are shown graphically below: End of explanation toDotList([Method('k1 < k2:'), Two(Nil(), 'k1', Nil()), Method('.ins(k2)'), Three(Nil(), 'k1', Nil(), 'k2', Nil()) ]) Explanation: $\displaystyle\texttt{Two}(l,k,r).\texttt{ins}(k) = \texttt{Two}(l,k,r)$ End of explanation toDotList([Method('k2 < k1:'), Two(Nil(), 'k1', Nil()), Method('.ins(k2)'), Three(Nil(), 'k2', Nil(), 'k1', Nil()) ]) Explanation: $k_1 < k_2 \rightarrow \texttt{Two}(\texttt{Nil},k_1,\texttt{Nil}).\texttt{ins}(k_2) = \texttt{Three}(\texttt{Nil},k_1,\texttt{Nil},k_2,\texttt{Nil})$ End of explanation toDotList([Method('k1 < k2:'), Two(Tree('l'), 'k1', Tree('r')), Method('.ins(k2)'), Two(Tree('l'), 'k1', Tree('r.ins(k2)')) ]) Explanation: $k_2 < k_1 \rightarrow \texttt{Two}(\texttt{Nil},k_1,\texttt{Nil}).\texttt{ins}(k_2) = \texttt{Three}(\texttt{Nil},k_2,\texttt{Nil},k_1,\texttt{Nil})$ End of explanation toDotList([Method('k2 < k1:'), Two(Tree('l'), 'k1', Tree('r')), Method('.ins(k2)'), Two(Tree('l.ins(k2)'), 'k1', Tree('r')) ]) Explanation: $k_1 < k_2 \wedge l \not= \texttt{Nil} \wedge r \not= \texttt{Nil} \rightarrow \texttt{Two}(l,k_1,r).\texttt{ins}(k_2) = \texttt{Two}(l,k_1,r.\texttt{ins}(k_2)).\texttt{restore}()$ End of explanation def _ins(self, key): "your code here" assert False, f'Unbalanced node {self}' Two._ins = _ins Explanation: $k_2 < k_1 \wedge l \not= \texttt{Nil} \wedge r \not= \texttt{Nil} \rightarrow \texttt{Two}(l,k_1,r).\texttt{ins}(k_2) = \texttt{Two}(l.\texttt{ins}(k_2),k_1,r).\texttt{restore}()$ I have collected all of these equations below: - $\texttt{Two}(l,k,r).\texttt{ins}(k) = \texttt{Two}(l,k,r)$ - $k_1 < k_2 \rightarrow \texttt{Two}(\texttt{Nil},k_1,\texttt{Nil}).\texttt{ins}(k_2) = \texttt{Three}(\texttt{Nil},k_1,\texttt{Nil},k_2,\texttt{Nil})$ - $k_2 < k_1 \rightarrow \texttt{Two}(\texttt{Nil},k_1,\texttt{Nil}).\texttt{ins}(k_2) = \texttt{Three}(\texttt{Nil},k_2,\texttt{Nil},k_1,\texttt{Nil})$ - $k_1 < k_2 \wedge l \not= \texttt{Nil} \wedge r \not= \texttt{Nil} \rightarrow \texttt{Two}(l,k_1,r).\texttt{ins}(k_2) = \texttt{Two}(l,k_1,r.\texttt{ins}(k_2)).\texttt{restore}()$ - $k_2 < k_1 \wedge l \not= \texttt{Nil} \wedge r \not= \texttt{Nil} \rightarrow \texttt{Two}(l,k_1,r).\texttt{ins}(k_2) = \texttt{Two}(l.\texttt{ins}(k_2),k_1,r).\texttt{restore}()$ Using these equations, the implementation of ins is straightforward. 
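For reference, here is one way the _ins stub in the next cell could be filled in, derived directly from the five equations collected above. This is only an illustrative sketch, not the notebook's official solution: the name _twoIns is made up, and it relies on the Nil, Two and Three classes defined earlier together with the is*() predicates of the TwoThreeTree base class (already used the same way in toDotList).

def _twoIns(self, key):
    l, k, r = self._extract()
    if key == k:                                # equation 1: key already present
        return self
    if l.isNil() and r.isNil():                 # equations 2 and 3: leaf-level 2-node
        if key < k:
            return Three(Nil(), key, Nil(), k, Nil())
        else:
            return Three(Nil(), k, Nil(), key, Nil())
    if not l.isNil() and not r.isNil():         # equations 4 and 5: recurse, then repair
        if key < k:
            return Two(l._ins(key), k, r)._restore()
        else:
            return Two(l, k, r._ins(key))._restore()
    assert False, f'Unbalanced node {self}'

Assigning Two._ins = _twoIns would give the same behaviour as a completed stub.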
End of explanation toDotList([Two(Four(Tree('l1'),'kl',Tree('ml'),'km', Tree('mr'),'kr',Tree('r1')), 'k', Tree('r')), Method('.restore()'), Three(Two(Tree('l1'),'kl',Tree('ml')), 'km', Two(Tree('mr'),'kr',Tree('r1')), 'k', Tree('r'))]) Explanation: The function call $t.\texttt{restore}()$ takes a 2-3-4 tree $t$ that has at most one 4-node. This 4-node has to be a child of the root. It returns a 2-3-4 tree that has at most one 4-node. This 4-node has to be the root node. Graphically, it is specified as shown below. End of explanation toDotList([Two(Tree('l'), 'k', Four(Tree('l1'),'kl',Tree('ml'),'km', Tree('mr'),'kr',Tree('r1'))), Method('.restore()'), Three(Tree('l'), 'k', Two(Tree('l1'),'kl',Tree('ml')), 'km', Two(Tree('mr'),'kr',Tree('r1')))]) Explanation: $\texttt{Two}\bigl(\texttt{Four}(l_1,k_l,m_l,k_m,m_r,k_r,r_1), k, r\bigr).\texttt{restore}() = \texttt{Three}\bigl(\texttt{Two}(l_1, k_l, m_l), k_m, \texttt{Two}(m_r, k_r, r_1), k, r\bigr) $ End of explanation def _restore(self): "your code here" return self Two._restore = _restore Explanation: $\texttt{Two}\bigl(l, k, \texttt{Four}(l_1,k_l,m_l,k_m,m_r,k_r,r_1)\bigr).\texttt{restore}() = \texttt{Three}\bigl(l, k, \texttt{Two}(l_1, k_l, m_l), k_m, \texttt{Two}(m_r, k_r, r_1)\bigr) $ I have collected both equations below: - $\texttt{Two}\bigl(\texttt{Four}(l_1,k_l,m_l,k_m,m_r,k_r,r_1), k, r\bigr).\texttt{restore}() = \texttt{Three}\bigl(\texttt{Two}(l_1, k_l, m_l), k_m, \texttt{Two}(m_r, k_r, r_1), k, r\bigr) $, - $\texttt{Two}\bigl(l, k, \texttt{Four}(l_1,k_l,m_l,k_m,m_r,k_r,r_1)\bigr).\texttt{restore}() = \texttt{Three}\bigl(l, k, \texttt{Two}(l_1, k_l, m_l), k_m, \texttt{Two}(m_r, k_r, r_1)\bigr) $ If neither the left nor the right child node of a 2-node is a 4-node, the node is returned unchanged. End of explanation def _extract(self): return self.mLeft, self.mKeyL, self.mMiddle, self.mKeyR, self.mRight Three._extract = _extract Explanation: Methods of the Class Three The method extract returns the member variables stored in a 3-node. End of explanation def member(self, key): l, kL, m, kR, r = self._extract() if key == kL or key == kR: return True if key < kL: return l.member(key) if kL < key < kR: return m.member(key) if kR < key: return self.mRight.member(key) Three.member = member Explanation: Given a 3-node $t$ and a key $k$, the method $t.\texttt{member}(k)$ checks whether the key $k$ occurs in $t$. It is specified as follows: - $k = k_l \vee k = k_r \rightarrow \texttt{Three}(l,k_l,m,k_r,r).\texttt{member}(k) = \texttt{True}$, - $k < k_l \rightarrow \texttt{Three}(l,k_l,m,k_r,r).\texttt{member}(k) = l.\texttt{member}(k)$, - $k_l < k < k_r \rightarrow \texttt{Three}(l,k_l,m,k_r,r).\texttt{member}(k) = m.\texttt{member}(k)$, - $k_r < k \rightarrow \texttt{Three}(l,k_l,m,k_r,r).\texttt{member}(k) = r.\texttt{member}(k)$. End of explanation def _ins(self, key): "your code here" assert False, f'Unbalanced node {self}' Three._ins = _ins Explanation: The method $t.\texttt{ins}(k)$ takes a 2-3 tree $t$ and and a key $k$ and inserts the key $k$ into $t$. It returns a 2-3-4 tree that has at most one 4-node, which has to be a child of the root node. The function $\texttt{ins}$ is recursive and uses the function $\texttt{restore}$ defined below. 
It is defined as follows: - $k = k_l \vee k = k_r \rightarrow\texttt{Three}(l,k_l,m,k_r,r).\texttt{ins}(k) = \texttt{Three}(l,k_l,m,k_r,r)$ - $k < k_l \rightarrow \texttt{Three}(\texttt{Nil},k_l,\texttt{Nil},k_r,\texttt{Nil}).\texttt{ins}(k) = \texttt{Four}(\texttt{Nil},k,\texttt{Nil},k_l,\texttt{Nil},k_r,\texttt{Nil})$ - $k_l < k < k_r \rightarrow \texttt{Three}(\texttt{Nil},k_l,\texttt{Nil},k_r,\texttt{Nil}).\texttt{ins}(k) = \texttt{Four}(\texttt{Nil},k_l,\texttt{Nil},k,\texttt{Nil},k_r,\texttt{Nil})$ - $k_r < k \rightarrow \texttt{Three}(\texttt{Nil},k_l,\texttt{Nil},k_r,\texttt{Nil}).\texttt{ins}(k) = \texttt{Four}(\texttt{Nil},k_l,\texttt{Nil},k_r,\texttt{Nil},k,\texttt{Nil})$ $k < k_l \wedge l \not= \texttt{Nil} \wedge m \not= \texttt{Nil}\wedge r \not= \texttt{Nil} \rightarrow \texttt{Three}(l,k_l,m,k_r,r).\texttt{ins}(k) = \texttt{Three}\bigl(l.\texttt{ins}(k),k_l,m,k_r,r\bigr).\texttt{restore}()$ $k_l < k < k_r \wedge l \not= \texttt{Nil} \wedge m \not= \texttt{Nil}\wedge r \not= \texttt{Nil} \rightarrow \texttt{Three}(l,k_l,m,k_r,r).\texttt{ins}(k) = \texttt{Three}\bigl(l,k_l,m.\texttt{ins}(k),k_r,r\bigr).\texttt{restore}()$ $k_r < k \wedge l \not= \texttt{Nil} \wedge m \not= \texttt{Nil}\wedge r \not= \texttt{Nil} \rightarrow \texttt{Three}(l,k_l,m,k_r,r).\texttt{ins}(k) = \texttt{Three}\bigl(l,k_l,m,k_r,r.\texttt{ins}(k)\bigr).\texttt{restore}()$ End of explanation toDotList([Three(Four(Tree('l1'), 'k1', Tree('ml'), 'k2', Tree('mr'), 'k3', Tree('r1')), 'kl', Tree('m'), 'kr', Tree('r')), Method('.restore()'), Four(Two(Tree('l1'), 'k1', Tree('ml')), 'k2', Two(Tree('mr'), 'k3', Tree('r1')), 'kl', Tree('m'), 'kr', Tree('r')), ]) Explanation: The function call $t.\texttt{restore}()$ takes a 2-3-4 tree $t$ that has at most one 4-node. This 4-node has to be a child of the root. It returns a 2-3-4 tree that has at most one 4-node. This 4-node has to be the root node. The most important invariant satisfied by the method call $t.\texttt{ins}(k)$ is the fact that the tree $t.\texttt{ins}(k)$ has the same height as the tree $t$. 
The different cases that need to be handled by ins are shown graphically below: End of explanation toDotList([Three(Tree('l'), 'kL', Four(Tree('l1'), 'k1', Tree('ml'), 'k2', Tree('mr'), 'k3', Tree('r1')), 'kR', Tree('r')), Method('.restore()'), Four(Tree('l'), 'kL', Two(Tree('l1'), 'k1', Tree('ml')), 'k2', Two(Tree('mr'), 'k3', Tree('r1')), 'kR', Tree('r')) ]) Explanation: $\texttt{Three}\bigl(\texttt{Four}(l_1,k_1,m_l,k_2,m_r,k_3,r_1), k_l, m, k_r, r\bigr).\texttt{restore}() = \texttt{Four}\bigl(\texttt{Two}(l_1, k_1, m_l), k_2, \texttt{Two}(m_r, k_3, r_1), k_l, m, k_r, r\bigr) $ End of explanation toDotList([Three(Tree('l'), 'kl', Tree('m'), 'kr', Four(Tree('l1'), 'k1', Tree('ml'), 'k2', Tree('mr'), 'k3', Tree('r1'))), Method('.restore()'), Four(Tree('l'), 'kl', Tree('m'), 'kr', Two(Tree('l1'), 'k1', Tree('ml')), 'k2', Two(Tree('mr'), 'k3', Tree('r1'))) ]) Explanation: $\texttt{Three}\bigl(l, k_l, \texttt{Four}(l_1,k_1,m_l,k_2,m_r,k_3,r_1), k_r, r\bigr).\texttt{restore}() = \texttt{Four}\bigl(l, k_l, \texttt{Two}(l_1, k_1, m_l), k_2, \texttt{Two}(m_r, k_3, r_1), k_r, r\bigr) $ End of explanation def _restore(self): "your code here" return self Three._restore = _restore Explanation: $\texttt{Three}\bigl(l, k_l, m, k_r, \texttt{Four}(l_1,k_1,m_l,k_2,m_r,k_3,r_1)\bigr).\texttt{restore}() = \texttt{Four}\bigl(l, k_l, m, k_r, \texttt{Two}(l_1, k_1, m_l), k_2, \texttt{Two}(m_r, k_3, r_1)\bigr) $ Below I have collected all the equations specifying the implementation of restore for 3-nodes. - $\texttt{Three}\bigl(\texttt{Four}(l_1,k_1,m_l,k_2,m_r,k_3,r_1), k_l, m, k_r, r\bigr).\texttt{restore}() = \texttt{Four}\bigl(\texttt{Two}(l_1, k_1, m_l), k_2, \texttt{Two}(m_r, k_3, r_1), k_l, m, k_r, r\bigr) $ - $\texttt{Three}\bigl(l, k_l, \texttt{Four}(l_1,k_1,m_l,k_2,m_r,k_3,r_1), k_r, r\bigr).\texttt{restore}() = \texttt{Four}\bigl(l, k_l, \texttt{Two}(l_1, k_1, m_l), k_2, \texttt{Two}(m_r, k_3, r_1), k_r, r\bigr) $ - $\texttt{Three}\bigl(l, k_l, m, k_r, \texttt{Four}(l_1,k_1,m_l,k_2,m_r,k_3,r_1)\bigr).\texttt{restore}() = \texttt{Four}\bigl(l, k_l, m, k_r, \texttt{Two}(l_1, k_1, m_l), k_2, \texttt{Two}(m_r, k_3, r_1)\bigr) $ If neither of the child nodes of a 3-node is a 4-node, the node is returned unchanged. End of explanation def _extract(self): return self.mLeft, self.mKeyL, self.mMiddleL, self.mKeyM, self.mMiddleR, self.mKeyR, self.mRight Four._extract = _extract Explanation: Methods of the Class Four The method extract returns the member variables stored in a 4-node. End of explanation def _restore(self): return self Four._restore = _restore Explanation: The method restore returns a 4-node unchanged. End of explanation toDotList([Four(Tree('l'),'kl', Tree('ml'), 'km', Tree('mr'), 'kr', Tree('r')), Method('.grow()'), Two(Two(Tree('l'),'kl', Tree('ml')), 'km', Two(Tree('mr'), 'kr', Tree('r'))) ]) Explanation: The function grow turns a 4-node into 3 2-nodes. 
Graphically, it is specified as follows: End of explanation def _grow(self): "your code here" Four._grow = _grow Explanation: $\texttt{Four}(l, k_l, m_l, k_m, m_r, k_r, r).\texttt{grow}() = \texttt{Two}\bigl(\texttt{Two}(l, k_l, m_l), k_m, \texttt{Two}(m_r, k_r, r)\bigr)$ End of explanation m = Nil() m.isNil() m.toDot() m = m.insert("anton") m.toDot() m = m.insert("hugo" ) m.toDot() m = m.insert("gustav") m.toDot() m = m.insert("jens") m.toDot() m = m.insert("hubert") m.toDot() m = m.insert("andre") m.toDot() m = m.insert("philipp") m.toDot() m = m.insert("rene") m.toDot() m = m.insert("walter") m.toDot() Explanation: Testing End of explanation import random as rnd t = Nil() for k in range(30): k = rnd.randrange(100) t = t.insert(k) t.toDot() Explanation: Let's generate 2-3 tree with random keys. End of explanation M = Nil() for k in range(30): M = M.insert(k) M.toDot() Explanation: Lets us try to create a tree by inserting sorted numbers because that resulted in linear complexity for ordered binary trees. End of explanation Products = Nil() for i in range(2, 101): for j in range(2, 101): Products = Products.insert(i * j) Primes = Nil() for k in range(2, 101): if not Products.member(k): Primes = Primes.insert(k) Primes.toDot() Explanation: Finally, we compute the set of prime numbers $\leq 100$. Mathematically, this set is given as follows: $$ \bigl{2, \cdots, 100 \bigr} - \bigl{ i \cdot j \bigm| i, j \in {2, \cdots, 100 }\bigr}$$ First, we compute the set of products $\bigl{ i \cdot j \bigm| i, j \in {2, \cdots, 100 }\bigr}$. Then, we insert all naturals numbers less than 100 that are not products into the set of primes. End of explanation
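For completeness, possible completions of the remaining exercise stubs (Three._ins, Two._restore, Three._restore and Four._grow) are sketched below. They follow the equations collected in the text above; the function names are made up for illustration, they rely on the is*() predicates of the TwoThreeTree base class just as toDotList does, and the assignments that would replace the "your code here" stubs are left commented out so that the exercises stay intact.

def _threeIns(self, key):
    # Insert key into a 3-node, following the equations for Three.ins.
    l, kl, m, kr, r = self._extract()
    if key == kl or key == kr:                     # key already present
        return self
    if l.isNil() and m.isNil() and r.isNil():      # leaf-level 3-node grows into a 4-node
        if key < kl:
            return Four(Nil(), key, Nil(), kl, Nil(), kr, Nil())
        elif key < kr:
            return Four(Nil(), kl, Nil(), key, Nil(), kr, Nil())
        else:
            return Four(Nil(), kl, Nil(), kr, Nil(), key, Nil())
    if key < kl:                                   # recurse into the matching child, then repair
        return Three(l._ins(key), kl, m, kr, r)._restore()
    elif key < kr:
        return Three(l, kl, m._ins(key), kr, r)._restore()
    else:
        return Three(l, kl, m, kr, r._ins(key))._restore()

def _twoRestore(self):
    # Split a 4-node child of a 2-node, following the equations for Two.restore.
    l, k, r = self._extract()
    if l.isFour():
        l1, kl, ml, km, mr, kr, r1 = l._extract()
        return Three(Two(l1, kl, ml), km, Two(mr, kr, r1), k, r)
    if r.isFour():
        l1, kl, ml, km, mr, kr, r1 = r._extract()
        return Three(l, k, Two(l1, kl, ml), km, Two(mr, kr, r1))
    return self

def _threeRestore(self):
    # Split a 4-node child of a 3-node, following the equations for Three.restore.
    l, kl, m, kr, r = self._extract()
    if l.isFour():
        l1, k1, ml, k2, mr, k3, r1 = l._extract()
        return Four(Two(l1, k1, ml), k2, Two(mr, k3, r1), kl, m, kr, r)
    if m.isFour():
        l1, k1, ml, k2, mr, k3, r1 = m._extract()
        return Four(l, kl, Two(l1, k1, ml), k2, Two(mr, k3, r1), kr, r)
    if r.isFour():
        l1, k1, ml, k2, mr, k3, r1 = r._extract()
        return Four(l, kl, m, kr, Two(l1, k1, ml), k2, Two(mr, k3, r1))
    return self

def _fourGrow(self):
    # Turn a 4-node into three 2-nodes, following the equation for Four.grow.
    l, kl, ml, km, mr, kr, r = self._extract()
    return Two(Two(l, kl, ml), km, Two(mr, kr, r))

# Uncomment to replace the exercise stubs:
# Three._ins     = _threeIns
# Two._restore   = _twoRestore
# Three._restore = _threeRestore
# Four._grow     = _fourGrow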
1,091
Given the following text description, write Python code to implement the functionality described below step by step Description: Multi-state complexes <div class="admonition note"> **Topics** Step1: As usual, we first need to import required modules and define a Model object. The creation of subunit states and subunit is then straighforward with the SubUnitState and SubUnit classes. SubUnitState behaves as Species, we do not need to specify any parameters for their creation. SubUnit takes a list of SubUnitStates as a parameter. Finally, the complex is created with the Complex class that takes a list of SubUnit objects as argument as well as the statesAsSpecies keyword argument that specifies that all states of the complex should be automatically declared in STEPS as Species. This keyword parameter is required in the current version of STEPS since multi-state complexes are not natively supported yet. Note that the list of SubUnit objects that is given to the Complex constructor can totally contain duplicates since complexes can be composed of several identical subunits. In addition, the order in which the subunits are given is important when these subunits are not identical as it will later be used to identify specific subunits in a complex. In our graphical representations, we will assume that the first element of this list is the subunit in the top right corner of the complex and the remaning subunits are read in clockwise order from there. We can then list all the states that this complex can be in with Step2: We used the square bracket notation after the complex name to access its states Step3: We can then print all the corresponding states Step4: We get 10 states, as expected from the figure. Again, we used the square bracket notation CB[...] to access complex states ; in the next section, we describe how this notation works. Complex selectors A complex selector is an instance of the ComplexSelector class and is created when using the square bracket notation on a complex. Simply put, the square bracket notation allows to slice the complex state space in a way that is similar to array slicing in numpy (see the numpy documentation for more details). As we will see later, these complex selectors can then be used for declaring reactions that apply to a subset of complex states without having to enumerate all the states. The following figure shows how various square bracket notations select various part of the complex state space Step5: The first example A corresponds to the complex selector we used so far, it returns all the possible states of the complex. Like for numpy slicing, the easiest way to select all 'dimensions' from the complex is to use a colon Step6: Examples B and C slice the state space in one dimension. The complex selector in example B has colons for the first two dimensions, meaning all subunit states are selected, and the last one has the T1 SubUnitState object, indicating that only complex states in which the third subunit is in state T1 should be selected. Again, the two colons can be replaced by an ellipsis object .... Step7: Example D specifies two out of three dimensions, it selects all states in which the second S subunit is in state S1 and the T subunit is in state T1. Note that if all subunits / 'dimensions' are uniquely specified, the square bracket notation returns a ComplexState instead of a ComplexSelector. 
As expected example D return 3 states Step8: Example E combines two SubUnitState with the union operator | in order to select states for which the first subunit is either in state S0 or S2. Alternatively, since there are only 3 possibles states for this subunit, we can use the negation operator ~S1 to select all subunit states that are not S1 Step9: Not that both | and ~ operators return a SubUnitSelector object (see documentation) that represent a subset of the SubUnitStates associated to a given SubUnit. Examples F and G illustrate the possibility to combine complex selectors. Example F shows the intersection between two result selectors with the &amp; operator while example G shows the union with the | operator. In both cases the result object is a complex selector itself and can thus be further combined with other complex selectors. As expected, the union from example G yields 15 states Step10: Note that, while example F can also be written as a single complex selector, example G cannot. Example H illustrates the use of the &lt;&lt; operator to inject subunit states in a complex selector. CC[..] &lt;&lt; S0 should be read 'inject a subunit state S0 in any available position'. Since, in CC[...], there are 2 free positions that can be in state S0, it is equivalent to CC[S0, Step11: Complexes and rule based modeling Although STEPS complexes offer similar capabilities as rule-based modeling frameworks like bionetgen, they are not completely equivalent. STEPS complexes require the explicit declaration of all complexes before any simulation takes place. In contrast, bionetgen allows the creation of new complexes through the binding of smaller complexes. Thus, STEPS complexes are more suited to cases in which the complex has a set structure and its state space is known before simulation. Having introduced the main concepts relative to the Complex class, we can now use multi-state complexes in a full example. We first reset the jupyter kernel to start from scratch Step12: IP3 receptor model In this section, we will implement the IP3 receptor model described in De Young and Keizer, A single-pool inositol 1, 4, 5-trisphosphate-receptor-based model for agonist-stimulated oscillations in Ca2+ concentration, PNAS, 1992. This model relies on a markov chain description of IP3R subunits in which each of the 4 identical subunits have 3 binding sites, one for IP3 and two for Ca2+, one activating, the other inactivating. This results in $2^3 = 8$ possible states per subunit and the whole channel is deemed open if at least three of the subunits are in the state in which one IP3 and the activating Ca2+ are bound. We first import the required modules and declare the parameters as specified in the original article Step13: We then declare the model, the species and most importantly, the complex that we will use to simulate IP3 receptors. The following figure describes the IP3R complex Step14: The declaration of the complex itself follows what we saw in the first part of this chapter. We can count the number of distinct complex states Step15: Note that, since the default ordering is NoOrdering, this is much lower than the $8^4=4096$ states that could be expected if StrongOrdering was used instead. The next step is to declare all the reactions involving the IP3R channel. Most of them correspond to IP3 and Ca2+ binding / unbinding that changes the states of subunits. In addition, we also need to write a reaction that will account for the Ca2+ flux from the endoplasmic reticulum (ER) through open IP3R channels. 
In the following section, we will see how to declare all these reactions. Reactions involving complex states The simplest way to declare a reaction involving a complex consists in simply using a complex state as a reactant in a normal reaction. For example, if we wanted to only allow Ca2+ through the IP3R channel when the four subunits are in the open state, we would write Step16: The next step is to somehow declare reactions that are associated to IP3 and Ca2+ binding / unbinding to IP3R subunits, as described in the figure. Let us first consider all reactions linked to IP3 binding to IP3R subunits and let us specifically focus on IP3 binding to IP3R subunits in the R000 state, the rate of these reactions will depend on the number of subunits in this state. We can tackle this by writing a complex selector that controls the number of subunits in this state. For example, IP3R[R000, ~R000, ~R000, ~R000] corresponds to all states in which only one subunit is in the R000 state. We could thus write all IP3 binding reactions to R000 with Step17: The full subunit reaction network is declared in the with IP3R[...] Step18: As in other chapters, we then declare the simulation object as well as the data to be saved Step19: Both cytCa and caFlux result selectors use syntaxes that were already presented in the previous chapters. Note however that we use rs.SUM() on caFlux paths because rs.memb.caflux['fwd'].Extent saves the extents of all reactions that are implied by the 'caflux' complex reaction. Since we want to look at the overall complex reaction extent, we sum these values with rs.SUM(). The data saving relative to complexes themselves is new but relatively easy to understand. In our example, we want to track how receptors are distributed in terms of number of subunits in the R110 open state. We save 5 values Step20: Note that injecting IP3R complexes requires specifying their states completely. Plotting the results We then plot the results from the cytCa and caFlux result selectors first Step21: We then plot the data from the IP3RStates result selector. In addition to the raw data, we compute a sliding window average to ease visualization
Python Code: import steps.interface from steps.model import * mdl = Model() with mdl: A0, A1, A2 = SubUnitState.Create() ASU = SubUnit.Create([A0, A1, A2]) CA = Complex.Create([ASU, ASU, ASU, ASU], statesAsSpecies=True) Explanation: Multi-state complexes <div class="admonition note"> **Topics**: Complexes, complex reactions. </div> In this chapter, we will introduce a concise way of declaring reactions between molecules that can be in a high number of distinct functional states. We will use the Complex class and its subconstituents SubUnits and SubunitStates to specify the state space of these molecules. We will first intoduce Complexes in a general way and compare them to other forms of rule-based modeling frameworks. We will then present their use in an IP3 receptor example that builds on the one used in a previous chapter. Complex declaration Complexes are composed of an arbitrary number of subunits that can themselves be in an arbitrary number of states. In this guide, we will represent complexes as collections of geometric shapes, like in the following examples: <img src="images/complex_examples.png"/> Each complex consists of a list of subunits, represented by different geometrical shapes in the second column of the figure. These subunits can be in various states (represented by colors), as shown in the third column. Specific instances of complexes can thus be in various states, resulting from all the possible combinations of subunit states. The last column only shows a few examples of such states for each complex. In order to declare a complex, we first need to declare all its subunits along with their subunit states. We then need to provide a list of subunits that the complex is made of. Consider the following example, corresponding to the first row of the figure: End of explanation def printStates(cs): print(f'{len(cs)} states') for state in cs: print(state) printStates(CA[...]) Explanation: As usual, we first need to import required modules and define a Model object. The creation of subunit states and subunit is then straighforward with the SubUnitState and SubUnit classes. SubUnitState behaves as Species, we do not need to specify any parameters for their creation. SubUnit takes a list of SubUnitStates as a parameter. Finally, the complex is created with the Complex class that takes a list of SubUnit objects as argument as well as the statesAsSpecies keyword argument that specifies that all states of the complex should be automatically declared in STEPS as Species. This keyword parameter is required in the current version of STEPS since multi-state complexes are not natively supported yet. Note that the list of SubUnit objects that is given to the Complex constructor can totally contain duplicates since complexes can be composed of several identical subunits. In addition, the order in which the subunits are given is important when these subunits are not identical as it will later be used to identify specific subunits in a complex. In our graphical representations, we will assume that the first element of this list is the subunit in the top right corner of the complex and the remaning subunits are read in clockwise order from there. 
We can then list all the states that this complex can be in with: End of explanation with mdl: B0, B1, R0, R1 = SubUnitState.Create() BSU, RSU = SubUnit.Create([B0, B1], [R0, R1]) CB = Complex.Create([BSU, RSU, BSU, RSU], statesAsSpecies=True, order=RotationalSymmetryOrdering) Explanation: We used the square bracket notation after the complex name to access its states: CA[...]. This notation returns an object that describes a set of states of the complex, when using it only with the ellipsis ... object, this corresponds to all possible states of the complex. We will see how to use this notation later in the chapter. Note that instead of the $3^4 = 81$ states that should result from all possible combinations of 3 subunit states for 4 subunits, we only have 15 states. This is due to the fact that, by default, complex states do not take the order of subunits into account. The state CA_A0_A0_A0_A1 is equivalent to the state CA_A0_A0_A1_A0 since they are both composed of 3 subunits in state A0 and one subunit in state A1. Only one of the four equivalent states is conserved and declared in STEPS as a Species. Complex ordering This behavior is however not always desirable as neighboring relations between subunits can sometimes be considered important. The Complex constructor can thus take an additional keyword argument order. This argument makes it possible to specify groups of complex states that will be considered equivalent. STEPS comes with 3 built-in choices for this parameter: NoOrdering, the default ; StrongOrdering, that considers all possible ordered states ; and RotationalSymmetryOrdering, that we will explain below. It is also possible to implement a custom order function, more details are given in the documentation. The following figure shows how states are grouped in the 3 order functions for a complex with 4 identical subunits with 2 states: <img src="images/complex_states_1.png"/> Columns correspond to the number of subunits in state S1 (dark blue), starting with all subunits in state S0 (light blue). The last two columns are ommited since they are identical to the first two if states are inverted. Grey lines represent which states are grouped together under the different ordering functions. The first row contains all the possible ordered states and the last one contains the unordered states. Since subunits can only be in two states, there are only 5 states under the NoOrdering function: all subunits in S0, one subunit in S1, two in S1, etc. The RotationalSymmetryOrdering function is a bit trickier, it groups all states that are identical under rotation. When only one subunit is in S1, all states can be made equivalent with quarter turn rotations. This is not the case when two subunits are in S1, there are then two distinct states that cannot be made identical with quarter turn rotations: a state in which the two subunits in S1 are adjacent, and another in which they are opposite. Note that this rotational symmetry still takes into account handedness: <img src="images/complex_states_2.png"/> In the above figure, 4 identical subunits can be in 3 different states and we only consider the case in which two subunits are in S0 (light blue), one in S1 (dark blue) and one in S2 (teal). Note that under rotational symmetry, there are two complex states in which S1 and S2 are adjacent but these states are not identical: the left one has S1 then S2 while the other has S2 then S1 (in clockwise direction). 
When complexes contain different subunits, and depending in which order the subunits are declared in the complex, it becomes less likely for complex states to be rotationaly equivalent: <img src="images/complex_states_3.png"/> We can declare the complex described in this last figure in STEPS with the rotational symmetry ordering function: End of explanation printStates(CB[...]) Explanation: We can then print all the corresponding states: End of explanation with mdl: S0, S1, S2, T0, T1, T2 = SubUnitState.Create() SSU, TSU = SubUnit.Create([S0, S1, S2], [T0, T1, T2]) CC = Complex.Create([SSU, SSU, TSU], statesAsSpecies=True, order=StrongOrdering) Explanation: We get 10 states, as expected from the figure. Again, we used the square bracket notation CB[...] to access complex states ; in the next section, we describe how this notation works. Complex selectors A complex selector is an instance of the ComplexSelector class and is created when using the square bracket notation on a complex. Simply put, the square bracket notation allows to slice the complex state space in a way that is similar to array slicing in numpy (see the numpy documentation for more details). As we will see later, these complex selectors can then be used for declaring reactions that apply to a subset of complex states without having to enumerate all the states. The following figure shows how various square bracket notations select various part of the complex state space: <img src="images/complex_selectors.png"/> For simplicity of representation, the complex used in these examples has 3 subunits: two identical subunits S and one subunit T, both these subunits can be in 3 different states. The same principles of course apply for complexes with more than 3 subunits. While in these examples, the full ordered state space is represented, the complex states selected by a complex selectors will depend on the specific ordering function used during the creation of the Complex. The states are organized spatially as if they were part of a three dimensional matrix, to make the analogy with numpy slicing easier to see. Let us declare this complex in STEPS and evaluate these complex selectors: End of explanation printStates(CC[...]) Explanation: The first example A corresponds to the complex selector we used so far, it returns all the possible states of the complex. Like for numpy slicing, the easiest way to select all 'dimensions' from the complex is to use a colon : for each dimension, meaning we want to select everything in this 'dimension'. The complex has 3 subunits / 'dimensions' so we need 3 colons in the square bracket: CC[:, :, :]. The order of dimensions is the same as the one used when declaring the complex. The ellipsis object ... can be used, like in numpy, to avoid repeating colons when the number of subunits / dimensions is high. It is equivalent to typing comma separated colons for the remaning dimensions. Note however that only one ellipsis object can be used in a square bracket notation since using several could lead to ambiguities (in CC[..., S0, ...] it would not be clear which dimension should correspond to S0). If no ellipsis object is used, the number of comma separated values should always match the number of subunits in the complex. We can thus get all $3^3 = 27$ complex states with: End of explanation printStates(CC[:, :, T1]) Explanation: Examples B and C slice the state space in one dimension. 
The complex selector in example B has colons for the first two dimensions, meaning all subunit states are selected, and the last one has the T1 SubUnitState object, indicating that only complex states in which the third subunit is in state T1 should be selected. Again, the two colons can be replaced by an ellipsis object .... End of explanation printStates(CC[:, S1, T2]) Explanation: Example D specifies two out of three dimensions, it selects all states in which the second S subunit is in state S1 and the T subunit is in state T1. Note that if all subunits / 'dimensions' are uniquely specified, the square bracket notation returns a ComplexState instead of a ComplexSelector. As expected example D return 3 states: End of explanation printStates(CC[S0 | S2, :, T1]) printStates(CC[~S1, :, T1]) Explanation: Example E combines two SubUnitState with the union operator | in order to select states for which the first subunit is either in state S0 or S2. Alternatively, since there are only 3 possibles states for this subunit, we can use the negation operator ~S1 to select all subunit states that are not S1: End of explanation printStates(CC[:, :, T1] | CC[:, S1, :]) Explanation: Not that both | and ~ operators return a SubUnitSelector object (see documentation) that represent a subset of the SubUnitStates associated to a given SubUnit. Examples F and G illustrate the possibility to combine complex selectors. Example F shows the intersection between two result selectors with the &amp; operator while example G shows the union with the | operator. In both cases the result object is a complex selector itself and can thus be further combined with other complex selectors. As expected, the union from example G yields 15 states: End of explanation printStates(CC[...] << S0) Explanation: Note that, while example F can also be written as a single complex selector, example G cannot. Example H illustrates the use of the &lt;&lt; operator to inject subunit states in a complex selector. CC[..] &lt;&lt; S0 should be read 'inject a subunit state S0 in any available position'. Since, in CC[...], there are 2 free positions that can be in state S0, it is equivalent to CC[S0, :, :] | CC[:, S0, :]. It is not very useful in our example but becomes convenient for bigger complexes. Note that the right hand side of the &lt;&lt; operator can also be a SubUnitSelector: CC[...] &lt;&lt; (S0 | S1). Finally, several subunit states can be injected at once with e.g. CC[...] &lt;&lt; 2 * S0. Detailed explanations and examples are available in the documentation. In example H we have: End of explanation %reset -f Explanation: Complexes and rule based modeling Although STEPS complexes offer similar capabilities as rule-based modeling frameworks like bionetgen, they are not completely equivalent. STEPS complexes require the explicit declaration of all complexes before any simulation takes place. In contrast, bionetgen allows the creation of new complexes through the binding of smaller complexes. Thus, STEPS complexes are more suited to cases in which the complex has a set structure and its state space is known before simulation. Having introduced the main concepts relative to the Complex class, we can now use multi-state complexes in a full example. 
We first reset the jupyter kernel to start from scratch: End of explanation import steps.interface from steps.model import * from steps.geom import * from steps.sim import * from steps.saving import * from steps.rng import * nAvog = 6.02214076e23 nbIP3R = 5 nbPumps = 5 c0 = 2e-6 c1 = 0.185 cytVol = 1.6572e-19 ERVol = cytVol * c1 a1 = 400e6 a2 = 0.2e6 a3 = 400e6 a4 = 0.2e6 a5 = 20e6 b1 = 0.13e-6 * a1 b2 = 1.049e-6 * a2 b3 = 943.4e-9 * a3 b4 = 144.5e-9 * a4 b5 = 82.34e-9 * a5 v1 = 6 v2 = 0.11 v3 = 0.9e-6 k3 = 0.1e-6 rp = v3 * 1e3 * cytVol * nAvog / nbPumps / 2 rb = 10 * rp rf = (rb + rp) / (k3 ** 2) kip3 = 1e3 * nAvog * ERVol * v1 / nbIP3R Explanation: IP3 receptor model In this section, we will implement the IP3 receptor model described in De Young and Keizer, A single-pool inositol 1, 4, 5-trisphosphate-receptor-based model for agonist-stimulated oscillations in Ca2+ concentration, PNAS, 1992. This model relies on a markov chain description of IP3R subunits in which each of the 4 identical subunits have 3 binding sites, one for IP3 and two for Ca2+, one activating, the other inactivating. This results in $2^3 = 8$ possible states per subunit and the whole channel is deemed open if at least three of the subunits are in the state in which one IP3 and the activating Ca2+ are bound. We first import the required modules and declare the parameters as specified in the original article: End of explanation mdl = Model() r = ReactionManager() with mdl: Ca, IP3, ERPump, ERPump2Ca = Species.Create() R000, R100, R010, R001, R110, R101, R111, R011 = SubUnitState.Create() IP3RSU = SubUnit.Create([R000, R100, R010, R001, R110, R101, R111, R011]) IP3R = Complex.Create([IP3RSU, IP3RSU, IP3RSU, IP3RSU], statesAsSpecies=True) ssys = SurfaceSystem.Create() Explanation: We then declare the model, the species and most importantly, the complex that we will use to simulate IP3 receptors. The following figure describes the IP3R complex: <img src="images/complex_ip3_structure.png"/> As explained before, it is composed of 4 identical subunits which can be in 8 distinct states, we name the states according to what is bound to the subunit: for state $ijk$, $i$ is 1 if IP3 is bound, $j$ is 1 if the activating Ca2+ is bound, and $k$ is 1 if the inactivating Ca2+ is bound. State $110$ thus corresponds to the open state. Below the complex and its subunits, we represented the reaction network that governs the transitions between the subunit states. Each transition involves the binding or unbinding of either IP3 or Ca2+. We then proceed to declaring the IP3R complex: End of explanation len(IP3R[...]) Explanation: The declaration of the complex itself follows what we saw in the first part of this chapter. We can count the number of distinct complex states: End of explanation with mdl, ssys: # Ca2+ passing through open IP3R channel IP3R_1 = IP3R.get() IP3R_1[R110, R110, R110, :].s + Ca.i <r['caflux']> IP3R_1[R110, R110, R110, :].s + Ca.o r['caflux'].K = kip3, kip3 Explanation: Note that, since the default ordering is NoOrdering, this is much lower than the $8^4=4096$ states that could be expected if StrongOrdering was used instead. The next step is to declare all the reactions involving the IP3R channel. Most of them correspond to IP3 and Ca2+ binding / unbinding that changes the states of subunits. In addition, we also need to write a reaction that will account for the Ca2+ flux from the endoplasmic reticulum (ER) through open IP3R channels. In the following section, we will see how to declare all these reactions. 
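Before moving on, a quick sanity check on the state count mentioned above (this aside is not part of the original chapter): with the default NoOrdering and four identical subunits, the number of distinct IP3R states is the number of size-4 multisets drawn from the 8 subunit states. The same formula reproduces the 15 states reported earlier for the CA complex.

from math import comb
print(comb(8 + 4 - 1, 4), 8**4)   # expected: 330 unordered states, versus 4096 with StrongOrdering
print(comb(3 + 4 - 1, 4))         # expected: 15, the CA example from the first part of the chapter

The first value should match the result of len(IP3R[...]) above.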
Reactions involving complex states The simplest way to declare a reaction involving a complex consists in simply using a complex state as a reactant in a normal reaction. For example, if we wanted to only allow Ca2+ through the IP3R channel when the four subunits are in the open state, we would write: python with mdl, ssys: IP3R[R110, R110, R110, R110].s + Ca.i &lt;r[1]&gt; IP3R[R110, R110, R110, R110].s + Ca.o r[1].K = kip3, kip3 Both left hand side and right hand side of the reaction contain the IP3R complex in a fully specified state. In this case, no changes are made to the complex but there are a lot of cases in which changes to the complex are required. Let us imagine for example that some species X can react with the fully open IP3R channel and force the unbinding of IP3 and Ca from one of its subunits. We would have the following reaction: python with mdl, ssys: IP3R[R110, R110, R110, R110].s + X.o &gt;r[1]&gt; X.o + IP3R[R110, R110, R110, R000].s + Ca.o + IP3.o r[1].K = rate Note that the specific position of the subunit that is changed does not matter since we declared the complex using the detault NoOrdering setting. Complex states are thus used in reactions as if they were Species; this is convenient when only a single state of the complex can undergo a specific reaction but it quickly becomes unpractical when several complex states can undergo the same reaction. If, as is the case in the original De Young Keizer model, the IP3R channel opens when at least 3 subunits are in state R110, we would need to declare 8 reactions involving fully specified complex states: python with mdl, ssys: IP3R[R110, R110, R110, R000].s + Ca.i &lt;r[1]&gt; IP3R[R110, R110, R110, R000].s + Ca.o IP3R[R110, R110, R110, R001].s + Ca.i &lt;r[2]&gt; IP3R[R110, R110, R110, R001].s + Ca.o ... IP3R[R110, R110, R110, R110].s + Ca.i &lt;r[7]&gt; IP3R[R110, R110, R110, R110].s + Ca.o IP3R[R110, R110, R110, R111].s + Ca.i &lt;r[8]&gt; IP3R[R110, R110, R110, R111].s + Ca.o r[1].K = kip3, kip3 ... r[8].K = kip3, kip3 This case needs to be tackled using complex selectors instead. Reactions involving complex selectors In order to group all these reactions in a single one, we could use the complex selector IP3R[R110, R110, R110, :] that encompasses all of the above 8 states. We would intuitively try to declare the reaction like so: python with mdl, ssys: IP3R[R110, R110, R110, :].s + Ca.i &lt;r[1]&gt; IP3R[R110, R110, R110, :].s + Ca.o r[1].K = kip3, kip3 <div class="warning alert alert-block alert-danger"> <b>This raises the following exception</b>: <code>Complex selector IP3R[R110, R110, R110, :] is used in the right hand side of a reaction but is not matching anything in the left hand side and is not fully defined. The reaction is ambiguous.</code> </div> When trying to declare the reaction in this way, STEPS throws an exception. This is due to the fact that, in general, STEPS does not know whether the two result selectors refer to the same specific complex or to distinct ones. It is important here to make the distinction between the complex selectors during reaction declaration and the specific complexes that will exist during a simulation. Specific complexes in a simulation are always fully defined while complex selectors are only partially specified. In an actual simulation, specific complexes thus need to be matched to these partially specified objects. 
Although it might not seem very important in the reaction we tried to declare above, it becomes critical when expressing reactions between 2 complexes of the same type. Consider the following reaction using the CC complex declared in the first part of this chapter: python CC[:, :, T0] + CC[:, :, T1] &gt;r[1]&gt; CC[:, :, T1] + CC[:, :, T2] r[1].K = 1 This reaction would also result in the same exception being thrown. This reaction happens when two complexes of the same CC type meet and when one has its T subunit in state T0 and the other in state T1, ignoring the states of the S subunits. The intuitive way to read this reaction is that the T0 complex is changed to T1 and the T1 complex is changed to T2. It could however be read in a different way: maybe the T0 complex should be changed to T2 while the T1 should remain in T1. Imagine for example the specific reaction in which the left hand side is CC[S0, S0, T0] + CC[S1, S1, T1], should the right hand side be CC[S0, S0, T1] + CC[S1, S1, T2] or CC[S0, S0, T2] + CC[S1, S1, T1]? In order to make it explicit, STEPS thus requires the user to use identified complexes in reactions involving complex selectors. To get an identified complex in the same example, we would write: python CC_1 = CC.get() CC_2 = CC.get() CC_1[:, :, T0] + CC_2[:, :, T1] &gt;r[1]&gt; CC_1[:, :, T1] + CC_2[:, :, T2] r[1].K = 1 Calling the get() method on the complex returns an object that behaves like a Complex but keeps a specific identity so that, if it appears several times in a reaction, STEPS knows that it refers to the same specific complex. The reaction is now unambiguous and no exceptions are thrown. Coming back to our IP3R channel example, we can now declare the reaction associated to the Ca2+ flux through open IP3R channels with: End of explanation with mdl, ssys: # IP3R subunits reaction network with IP3R[...]: R000.s + IP3.o <r[1]> R100.s R000.s + Ca.o <r[2]> R010.s R000.s + Ca.o <r[3]> R001.s R100.s + Ca.o <r[4]> R110.s R100.s + Ca.o <r[5]> R101.s R010.s + IP3.o <r[6]> R110.s R010.s + Ca.o <r[7]> R011.s R001.s + IP3.o <r[8]> R101.s R001.s + Ca.o <r[9]> R011.s R110.s + Ca.o <r[10]> R111.s R101.s + Ca.o <r[11]> R111.s R011.s + IP3.o <r[12]> R111.s r[1].K = a1, b1 r[2].K = a5, b5 r[3].K = a4, b4 r[4].K = a5, b5 r[5].K = a2, b2 r[6].K = a1, b1 r[7].K = a4, b4 r[8].K = a3, b3 r[9].K = a5, b5 r[10].K = a2, b2 r[11].K = a5, b5 r[12].K = a3, b3 # Ca2+ leak Ca.i <r[1]> Ca.o r[1].K = v2, c1 * v2 2*Ca.o + ERPump.s <r[1]> ERPump2Ca.s >r[2]> 2*Ca.i + ERPump.s r[1].K = rf, rb r[2].K = rp Explanation: The next step is to somehow declare reactions that are associated to IP3 and Ca2+ binding / unbinding to IP3R subunits, as described in the figure. Let us first consider all reactions linked to IP3 binding to IP3R subunits and let us specifically focus on IP3 binding to IP3R subunits in the R000 state, the rate of these reactions will depend on the number of subunits in this state. We can tackle this by writing a complex selector that controls the number of subunits in this state. For example, IP3R[R000, ~R000, ~R000, ~R000] corresponds to all states in which only one subunit is in the R000 state. 
We could thus write all IP3 binding reactions to R000 with: python with mdl, ssys: IP3R_1 = IP3R.get() IP3R_1[R000, ~R000, ~R000, ~R000].s + IP3.o &gt;r[1]&gt; IP3R_1[R100, ~R000, ~R000, ~R000].s IP3R_1[R000, R000, ~R000, ~R000].s + IP3.o &gt;r[2]&gt; IP3R_1[R100, R000, ~R000, ~R000].s IP3R_1[R000, R000, R000, ~R000].s + IP3.o &gt;r[3]&gt; IP3R_1[R100, R000, R000, ~R000].s IP3R_1[R000, R000, R000, R000].s + IP3.o &gt;r[4]&gt; IP3R_1[R100, R000, R000, R000].s r[1].K = 1 * a1 r[2].K = 2 * a1 r[3].K = 3 * a1 r[4].K = 4 * a1 There are 4 reactions, corresponding to the cases in which the IP3R complex has 1, 2, 3 and 4 subunits in state R000. Since there are 4 ways to bind IP3 to an R000 subunit in a IP3R[R000, R000, R000, R000] complex state, the rate of the reaction should be 4 times the elementary rate $a_1$. Expressing the unbinding reactions is however not trivial using these reactions. Let us consider the first of these 4 reactions, making it bidirectional would be equivalent to adding the following reaction: python IP3R_1[R100, ~R000, ~R000, ~R000].s &gt;r[1]&gt; IP3R_1[R000, ~R000, ~R000, ~R000].s + IP3.o In contrast with the binding reactions, it is not clear which rate should be used for this reaction, we know that, in the left hand side, at least one subunit is in state R100 but the other subunits might also be in the same state, it is not prevented by the ~R000 subunit selector. In order to be sure that e.g. only one subunit is in state R100 we would instead need to write: python IP3R_1[R100, ~R100, ~R100, ~R100].s &gt;r[1]&gt; IP3R_1[R000, ~R100, ~R100, ~R100].s + IP3.o r[1].K = b1 The following tentative solution using a single bidirectional reaction will not work: python IP3R_1[R000, ~R000, ~R000, ~R000].s + IP3.o &lt;r[1]&gt; IP3R_1[R100, ~(R000 | R100), ~(R000 | R100), ~(R000 | R100)].s r[1].K = a1, b1 This reaction is invalid because the right hand side is more restrictive than the left hand side. The left hand side matches e.g. IP3R[R000, R100, R100, R100] but the right hand side cannot match it. As a side note, the only way for a right hand side complex selector to be more restrictive is to constrain the subunits to a single state. In this case, there is no ambiguity and the reaction is valid. We could try to fix this validity issue by using the same subunit selectors on the left hand side: python IP3R_1[R000, ~(R000 | R100), ~(R000 | R100), ~(R000 | R100)].s + IP3.o &lt;r[1]&gt; IP3R_1[R100, ~(R000 | R100), ~(R000 | R100), ~(R000 | R100)].s r[1].K = a1, b1 This is a valid reaction but it does not cover all cases of IP3 binding to an IP3R in which only one subunit is in state R000. For example, IP3R[R000, R100, R111, R111] would not be taken into account because its second subunit is R100, which does not match with the subunit selector ~(R000 | R100). From all these examples, it becomes clear that complex selectors are not well suited to declaring reactions that involve single subunits instead of full complexes. These reactions should instead be declared with their dedicated syntax. Reactions involving subunits In order to express reactions that involve subunits instead of full complexes, we can simply use subunit states as reactants. The IP3 binding reaction to R000 can thus be declared with: python with mdl, ssys: with IP3R[...]: R000.s + IP3.o &lt;r[1]&gt; R100.s r[1].K = a1, b1 The reaction itself corresponds exactly to the reaction being represented on the previous figure. 
The main difference with the full complex reactions we saw before is that the reaction declaration needs to be done inside a with block that uses a complex selector. This specifies the complex on which the reaction applies as well as the states that the complex needs to be in for the reaction to apply. In our case, the reaction applies to IP3R complexes in any state. We do not need to specify that at least one subunit should be in state R000 since it is already implicitely required by the presence of R000.s in the left hand side of the reaction. Note that, in addition to being much simpler than our previous attempts using complex selectors, this syntax makes it very easy to declare the unbinding reaction ; we just need to make the reaction bidirectional. The rates are the per-subunit rates, as in the figure. STEPS will automatically compute the coefficients such that a complex with 2 subunits in state R000 will undergo the change of one of its subunits with rate $2a_1$. FInally, the position of the complex is indicated by adding the position indicator .s to the subunit state itself. The following figure represents the full complex reactions that are equivalent to 2 examples of subunits reactions: <img src="images/complex_reactions.png"/> Note that in both cases, only a very low number of possible reactions are represented. In each case, the required coefficient is applied to the rate that was used in the subunit reaction. For example, the first complex reaction of the left column can happen in four different ways since all four subunits are in the R000 state; since all these ways result in the same equivalent state IP3R[R100, R000, R000, R000], the subunit reaction rate is multiplied by 4 to get the complex reaction rate. Note that if we used the StrongOrdering ordering function, IP3R[R100, R000, R000, R000] would be different from e.g. IP3R[R000, R100, R000, R000] so four distinct complex reactions with rate $a_1$ would be declared. Expressing cooperativity with complex selectors In our example, subunits bind IP3 and Ca2+ independently ; a simple way to express cooperativity would be to use several with blocks with different complex selectors. For example, if the binding rate of IP3 to a R000 subunit depended on the number of subunits in the R100 state we could write: python with mdl, ssys: # Binding with IP3R[~R100, ~R100, ~R100, ~R100]: R000.s + IP3.o &gt;r[1]&gt; R100.s r[1].K = a1_0 with IP3R[ R100, ~R100, ~R100, ~R100]: R000.s + IP3.o &gt;r[1]&gt; R100.s r[1].K = a1_1 with IP3R[ R100, R100, ~R100, ~R100]: R000.s + IP3.o &gt;r[1]&gt; R100.s r[1].K = a1_2 with IP3R[ R100, R100, R100, ~R100]: R000.s + IP3.o &gt;r[1]&gt; R100.s r[1].K = a1_3 # Unbinding with IP3R[...]: R100.s &gt;r[1]&gt; R000.s + IP3.o r[1].K = b1 With a1_0 the IP3 binding rate to R000 when no subunits are in the R100 state, a1_1 when one subunit is in this state, etc. Note that the unbinding reaction now needs to be declared separately because, for the with IP3R[~R100, ~R100, ~R100, ~R100]: block, the complex selector would be incompatible with the R100.s right hand side. Expressing cooperativity with complex-dependent reaction rates There is however a simpler way to express cooperativity by using complex-dependent reaction rate. 
The following example declares the same reactions as the previous one: ```python rates = [a1_0, a1_1, a1_2, a1_3] a1 = CompDepRate(lambda state: rates[state.Count(R100)], [IP3R]) with mdl, ssys: with IP3R[...]: R000.s + IP3.o <r[1]> R100.s r[1].K = a1, b1 ``` We first declare a list to hold all our a1_x rates ; we then declare the a1 rate as a CompDepRate object. Its constructor (see documentation) takes two parameters: the first one is a function that takes one or several complex states as parameter and returns a reaction rate ; the second is the list of complexes whose states influence the rate. In our case, the rate only depends on the state of the IP3R complex. Since it is possible to declare reactions between two complexes, corresponding rate can be declared with CompDepRate(lambda state1, state2: ..., [Comp1, Comp2]). Note that the lambda function now takes two parameters, corresponding to the states of the two complexes. They are given in the same order as in the [Comp1, Comp2] list. Note that the lambda function in the CompDepRate constructor makes uses of the Count method (see documentation) from the ComplexState class. This method takes a SubUnitState or a SubUnitSelector as a parameter and returns the number of subunits in the state that correspond to the one passed as parameter. The reaction can then be declared inside a with IP3R[...] block, meaning it applies to all complexes, no matter their state. The forward rate is then simply set to the CompDepRate object we declared. Declaring reactions involving subunits can be done in a lot of different ways. We covered the most common cases in the previous subsections and advanced use cases are treated in a separate section, as appendix to this chapter. Let us now come back to our main IP3R simulation example and declare the missing reactions: End of explanation geom = Geometry() with geom: cyt, ER = Compartment.Create() cyt.Vol = cytVol ER.Vol = ERVol memb = Patch.Create(ER, cyt, ssys) memb.Area = 0.4143e-12 Explanation: The full subunit reaction network is declared in the with IP3R[...]: block. The remaining lines declare the reactions associated to the Ca2+ leak from the endoplasmic reticulum (ER) as well as the Ca2+ pumping into the ER. Geometry and simulation The well-mixed geometry is declared easily with: End of explanation rng = RNG('mt19937', 512, 7233) sim = Simulation('Wmdirect', mdl, geom, rng) rs = ResultSelector(sim) cytCa = rs.cyt.Ca.Conc caFlux = rs.SUM(rs.memb.caflux['fwd'].Extent) << rs.SUM(rs.memb.caflux['bkw'].Extent) IP3RStates = rs.memb.IP3R[~R110, ~R110, ~R110, ~R110].Count IP3RStates <<= rs.memb.IP3R[ R110, ~R110, ~R110, ~R110].Count IP3RStates <<= rs.memb.IP3R[ R110, R110, ~R110, ~R110].Count IP3RStates <<= rs.memb.IP3R[ R110, R110, R110, ~R110].Count IP3RStates <<= rs.memb.IP3R[ R110, R110, R110, R110].Count sim.toSave(cytCa, caFlux, IP3RStates, dt=0.05) Explanation: As in other chapters, we then declare the simulation object as well as the data to be saved: End of explanation ENDT = 10.0 sim.newRun() # Initial conditions sim.cyt.Ca.Conc = 3.30657e-8 sim.cyt.IP3.Conc = 0.2e-6 sim.ER.Ca.Conc = c0/c1 sim.memb.ERPump.Count = nbPumps sim.memb.IP3R[R000, R000, R000, R000].Count = nbIP3R sim.run(ENDT) Explanation: Both cytCa and caFlux result selectors use syntaxes that were already presented in the previous chapters. Note however that we use rs.SUM() on caFlux paths because rs.memb.caflux['fwd'].Extent saves the extents of all reactions that are implied by the 'caflux' complex reaction. 
Since we want to look at the overall complex reaction extent, we sum these values with rs.SUM(). The data saving relative to complexes themselves is new but relatively easy to understand. In our example, we want to track how receptors are distributed in terms of number of subunits in the R110 open state. We save 5 values: the number of IP3R that have 0 subunits in the R110 state, the number of IP3R that have 1 subunit in this state, etc. Note that the rs.memb.IP3R.Count result selector would save the total number of IP3R on the ER membrane. In addition to counting numbers of complexes, it is also possible to count numbers of subunits. rs.memb.IP3R.R110.Count would save the total number of subunits of IP3R that are in state R110. Finally, if one wanted to save the separate counts of all states matching some complex selectors, one could use rs.memb.LIST(*IP3R[R110, R110, ...]).Count. This uses the LIST() function that we saw in previous chapters by feeding it all the states that we want to save. We can then proceed to setting up intial conditions and running the simulation: End of explanation from matplotlib import pyplot as plt import numpy as np plt.figure(figsize=(10, 7)) plt.plot(cytCa.time[0], cytCa.data[0]*1e6) plt.legend(cytCa.labels) plt.xlabel('Time [s]') plt.ylabel('Concentration [μM]') plt.show() plt.figure(figsize=(10, 7)) plt.plot(caFlux.time[0], caFlux.data[0]) plt.legend(caFlux.labels) plt.xlabel('Time [s]') plt.ylabel('Reaction extent') plt.show() Explanation: Note that injecting IP3R complexes requires specifying their states completely. Plotting the results We then plot the results from the cytCa and caFlux result selectors first: End of explanation n = 20 plt.figure(figsize=(10, 7)) for i in range(IP3RStates.data[0].shape[1]): sig = IP3RStates.data[0, :, i] avg = np.convolve(sig, np.ones(n) / n, 'valid') tme = IP3RStates.time[0, n//2:-n//2+1] plt.plot(tme, avg, color=f'C{i}', label=IP3RStates.labels[i]) plt.plot(IP3RStates.time[0], sig, '--', linewidth=1, color=f'C{i}', alpha=0.4) plt.legend(loc=1) plt.xlabel('Time [s]') plt.ylabel('Count') plt.show() Explanation: We then plot the data from the IP3RStates result selector. In addition to the raw data, we compute a sliding window average to ease visualization: End of explanation
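As a small post-processing sketch that is not part of the original chapter, the saved IP3RStates data can also be collapsed into a single open-channel fraction, using the fact that the channel conducts whenever at least three subunits are in the R110 state (the last two columns of the IP3RStates result selector above). It assumes, as the plotting code does, that the saved data behave like NumPy arrays.

import numpy as np

counts = np.array(IP3RStates.data[0])        # shape: (time points, 5); column i = channels with i subunits in R110
openChannels = counts[:, 3] + counts[:, 4]   # channels with at least 3 subunits in R110
openFraction = openChannels / counts.sum(axis=1)
print('Mean fraction of conducting IP3R channels:', openFraction.mean())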
1,092
Given the following text description, write Python code to implement the functionality described below step by step Description: <center> <img src="http Step1: <div id='intro' /> Introduction Back to TOC In this jupyter notebook we will study the behaviour of a ABSRF (A Bad and Slow Root Finder). Notice that we have added "(?)" in the title since we still need to evaluate it in kore detail, basically, we will make a comparison with the bisection method itself. We will study this root finder because it has been traditionally proposed every time this class has been taught and it is important to quantify its behaviour, this means, to understand how fast it converges to a root and how long it takes. The algorithm is as follows Step2: The following code is just the implementation of the Bisection method. We include it here just for comparison purposes. Step3: <div id='test_function' /> The test function Back to TOC To test the ASRF against the Bisection method, we need to build a function that we can move the location of the root. To acomplish this, we designed the following function Step4: <div id='FNE' /> The First Numerical Experiment Back to TOC The output looks similar to the output generated by the Bisection Method, however in this case we added the index for the internal loop that goes over each interval with the index $i$. Step5: <div id='SNE' /> The Second Numerical Experiment Back to TOC This second numerical experiments computed the number of function evaluation required to obtain the root of $f_2(x)$ as we change the number of intervals used. Notice that when $N=2$ you may think we should actually get back the bisection method, but we, unfortunately, don't. In particular, it seems that using $N=8$ we get the best performance, which is $56$ function evaluations. Step6: <div id='TNE' /> The Third Numerical Experiment Back to TOC This last numerical experiment is the largest case, we test the ASRF method against the Bisection method. We select the range from $2$ up to $20$ intervals and we selected $12$ roots equalspaced from $h=0.12$ up to $0.92$, this may seem like arbitrary numbers and they are, so we invite you to try with different number, but just remember that if you change the range, you must change the values for $a$ and $b$ accordingly. Step7: Fist large plot This first large plots show in color the behavior of ASRF as we change the number of intervals for each the $12$ values of $h$ described before. We show in a red dashed line the maximum number of function evaluation allowed and in a thick black line the mean value of the function evaluation for each number of intervals. This plot tells us that, on average, it seems that using $5$ intervals may be the most competitive value for the number of intervals. Will this be better than the output (number of function evaluation) obtain by the Bisection method? To answer this question we need to change the plot, but using the same data. If we try to plot the output of the bisection method in this figure, it may not be fair since the bisection method does not depend on the number of intervals, since it always uses 2 intervals. To solve this problem, we change the way we plot the results in the next plot. Step8: Second large plot In this case, we plot the behavior of the algorithm as a function of the root we are looking for. The behavior of the different number of intervals is considered with different colors over the set of roots. Now it is possible to add the Bisection method in cyan with large square markers. 
The behavior of the Bisection method is constant, as expected. A reasonable question that you may have by looking at this plot is the following: why does ASRF seem better than the Bisection method for the first and last root? (The answer is left as an exercise!)
Python Code: import numpy as np import matplotlib.pyplot as plt import sympy as sym sym.init_printing() import bitstring as bs import pandas as pd pd.set_option("display.colheader_justify","center") pd.options.display.float_format = '{:.10f}'.format # This function shows the bits used for the sign, exponent and mantissa for a 64-bit double presision number. # fps: Floating Point Standard # Double: Double precision IEEE 754 def to_fps_double(f): b = bs.pack('>d', f) b = b.bin #show sign + exponent + mantisa print(b[0]+' '+b[1:12]+ ' '+b[12:]) from colorama import Fore, Back, Style # https://pypi.org/project/colorama/ # Fore: BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN, WHITE, RESET. # Back: BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN, WHITE, RESET. # Style: DIM, NORMAL, BRIGHT, RESET_ALL textBold = lambda x: Style.BRIGHT+x+Style.RESET_ALL textBoldH = lambda x: Style.BRIGHT+Back.YELLOW+x+Style.RESET_ALL textBoldB = lambda x: Style.BRIGHT+Back.BLUE+Fore.BLACK+x+Style.RESET_ALL textBoldR = lambda x: Style.BRIGHT+Back.RED+Fore.BLACK+x+Style.RESET_ALL Explanation: <center> <img src="http://sct.inf.utfsm.cl/wp-content/uploads/2020/04/logo_di.png" style="width:60%"> <h1> INF-285 - Computación Científica </h1> <h2> An slow root finder (?) </h2> <h2> <a href="#acknowledgements"> [S]cientific [C]omputing [T]eam </a> </h2> <h2> Version: 1.00 </h2> </center> <div id='toc' /> Table of Contents Introduction The test function The first numerical experiment The second numerical experiment The third numerical experiment Conclusion Acknowledgements End of explanation ''' INPUT: a : (double) Left end of interval [a,b] where we will find the root. b : (double) Right end of interval [a,b] where we will find the root. f : (lambda) Lambda function that we will find an approximation of a root. N : (integer) Number of segments we will be using in each decomposition of the interval. TOL : (double) Tolerance for converge we will be using. flag_OUTPUT : (bool) Flag to show the evolution of the algorithm. M : (int) Max number of evaluations allowed. OUTPUT: r : (double) The approximation found for the root. m : (int) Number of functions evaluations computed. ''' def ASRF(a,b,f,N=10,TOL=1e-8, flag_OUTPUT=True, M=int(1e2)): fa = f(a) m = 1 # Counting the number of evaluations used. r = a computed_error = 2*TOL+1 # Output table to store the numerical evolution of the algorithm output_table = [] # If the root is at "a" if np.abs((b-a)/2.)<TOL: r = a return r, m # If there is no change of sign m = m+1 if fa*f(b)>=0: print('Error: No change of sign') return np.nan, m # N must be at least 3 if N<2: print('Error: N must be at least 2') return np.nan, m # Counter for the number of lines of output. k=0 # Iterate until the tolerance is reached while computed_error > TOL: X = np.linspace(a,b,N+1) for i, xi in np.ndenumerate(X[1:]): # There is key difference respecto to the bisection method at this point, # the difference is that in general we need to determine in which interval # it is the root, where in the bisection method we know that if it is not # in one interval, it must be on the other interval. Of course we need # to consider the case when the root is at $x=c$. m = m+1 fxi=f(xi) if fa*fxi<0: a = X[i] b = xi r = (a+b)/2. 
fa = f(a) m = m+1 # Saving the output data k=k+1 output_table.append([k, int(i[0]), a, xi, b, f(a), f(xi), f(b), b-a, r, m]) break elif fxi==0: r = xi output_table.append([k, int(i[0]), a, xi, b, f(a), f(xi), f(b), b-a, r, m]) if flag_OUTPUT: # Showing final output table columns = ['$k$', '$i$', '$a$', '$x_i$', '$b$','$f(a)$', '$f(x_i)$', '$f(b)$', '$b-a$', '$r$', '$m$'] df = pd.DataFrame(data=output_table, columns=columns) display(df) return r, m else: # Saving the output data k = k+1 output_table.append([k, int(i[0]), a, xi, b, f(a), f(xi), f(b), b-a, r, m]) if m>=M: break computed_error = np.abs((b-a)/2.) m=m+1 if m>=M: print('Max number of evaluation allowed has been reached') break if flag_OUTPUT: # Showing final output table columns = ['$k$', '$i$', '$a$', '$x_i$', '$b$','$f(a)$', '$f(x_i)$', '$f(b)$', '$b-a$', '$r$', '$m$'] df = pd.DataFrame(data=output_table, columns=columns) display(df) return r, m Explanation: <div id='intro' /> Introduction Back to TOC In this jupyter notebook we will study the behaviour of a ABSRF (A Bad and Slow Root Finder). Notice that we have added "(?)" in the title since we still need to evaluate it in kore detail, basically, we will make a comparison with the bisection method itself. We will study this root finder because it has been traditionally proposed every time this class has been taught and it is important to quantify its behaviour, this means, to understand how fast it converges to a root and how long it takes. The algorithm is as follows: - Consider we have a continuous function $f(x)$ that has a root in the interval $[a,b]$. - The ABSRF divides the interval in $N$ equalspaced segments and then it finds in which interval there is a change of sign, it then subdivides that interval and continues the procedure. The algorithm is simple and it will be interesting to analyze it. End of explanation # This implemenatation of the bisection method was obtained from the # Jupyter Notebook called '03_roots_of_1D_equations.ipynb', the main difference # is that in this implementation returns the number of function # evaluations used. def bisect(f, a, b, tol=1e-8, maxNumberIterations=100,flag_OUTPUT=True): # Evaluating the extreme points of the interval provided fa = f(a) m = 1 # This variable will be used to count the number of function evaluations fb = f(b) m = m+1 # Iteration counter. i = 0 # Just checking if the sign is not negative => not root necessarily if np.sign(fa*fb) >= 0: # This was updated! print('f(a)f(b)<0 not satisfied!') return None # Output table to store the numerical evolution of the algorithm output_table = [] # Main loop: it will iterate until it satisfies one of the two criterias: # The tolerance 'tol' is achived or the max number of iterations is reached. while ((b-a)/2 > tol) and i<=maxNumberIterations: # Obtaining the midpoint of the interval. Quick question: What could happen if a different point is used? c = (a+b)/2. # Evaluating the mid point fc = f(c) m = m+1 # Saving the output data output_table.append([i, a, c, b, fa, fc, fb, b-a]) # Did we find the root? 
if fc == 0: print('f(c)==0') break elif np.sign(fa*fc) < 0: # This first case consider that the new inetrval is defined by [a,c] b = c fb = fc else: # This second case consider that the new interval is defined by [c,b] a = c fa = fc # Increasing the iteration counter i += 1 if flag_OUTPUT: # Showing final output table columns = ['$i$', '$a_i$', '$c_i$', '$b_i$', '$f(a_i)$', '$f(c_i)$', '$f(b_i)$', '$b_i-a_i$'] df = pd.DataFrame(data=output_table, columns=columns) display(df) # Computing the best approximation obtaind for the root, which is the midpoint of the final interval. xc = (a+b)/2. return xc, m Explanation: The following code is just the implementation of the Bisection method. We include it here just for comparison purposes. End of explanation # Testing function f2=(x+2*h)*(x-h). # Notice we know it has a root in h and -2*h. # To make the evaluation 'challenging', # we will expand the polynomial. # The advantage of this test function is that we know in advance # that it has a root in the interval [0,1] if 0<h<1. f = lambda x,h: np.power(x,2)+h*x-2*np.power(h,2) f2 = lambda x: f(x,np.sqrt(2)/2) Explanation: <div id='test_function' /> The test function Back to TOC To test the ASRF against the Bisection method, we need to build a function that we can move the location of the root. To acomplish this, we designed the following function: $$ \begin{align} f(x,h) &= (x+2\,h)\,(x-h)\ &= x^2+h\,x-2\,h^2. \end{align} $$ For the numerical experiments, we can simply set value for $h$ between $0$ and $1$ for simplicity. This will allow us to to have a known interval. We clearly see that the roots of the polynomial on $x$ are $r_1=-2\,h$ and $r_2=h$, in particular we will be trying to recover numerically $r_2$. Notice that we have expanded the polynomial to make it more interesting from the floating point standard point of view. The following code implements $f(x,h)$ and an auxiliary function $f_2(x)$. End of explanation r, m = ASRF(a=0, b=1, f=f2, N=5) print(r, m) Explanation: <div id='FNE' /> The First Numerical Experiment Back to TOC The output looks similar to the output generated by the Bisection Method, however in this case we added the index for the internal loop that goes over each interval with the index $i$. End of explanation data_number_of_function_evaluations = [] # The range of intervals to be used Ns = np.arange(2,21) for N in Ns: r, m = ASRF(a=0, b=1, f=f2, N=N, flag_OUTPUT=False) data_number_of_function_evaluations.append(m) plt.figure(figsize=(10,10)) plt.plot(Ns,data_number_of_function_evaluations,'.', markersize=10) plt.title('ASRF') plt.xticks(Ns) plt.xlabel('# of intervals N') plt.ylabel('# of function evaluations') plt.grid(True) plt.ylim([0, 110]) plt.show() Explanation: <div id='SNE' /> The Second Numerical Experiment Back to TOC This second numerical experiments computed the number of function evaluation required to obtain the root of $f_2(x)$ as we change the number of intervals used. Notice that when $N=2$ you may think we should actually get back the bisection method, but we, unfortunately, don't. In particular, it seems that using $N=8$ we get the best performance, which is $56$ function evaluations. End of explanation # Range of number of intervals used, starting from 2. Ns = np.arange(2,21) # Range of roots tested. Hs = np.linspace(0.12,0.92,12) # Running ASRF numerical experiments. 
all_output_ASRF = [] for h in Hs: print('Running numerical experiment with h=',h) output_ASRF= [] for N in Ns: rN, m = ASRF(a=0, b=1, f=lambda x: f(x,h), N=N, flag_OUTPUT=False) output_ASRF.append(m) all_output_ASRF.append(output_ASRF) # Running Bisection method experiments. out_bisection = [] for h in Hs: r, m = bisect(f=lambda x: f(x,h), a=0, b=1,flag_OUTPUT=False) out_bisection.append(m) Explanation: <div id='TNE' /> The Third Numerical Experiment Back to TOC This last numerical experiment is the largest case, we test the ASRF method against the Bisection method. We select the range from $2$ up to $20$ intervals and we selected $12$ roots equalspaced from $h=0.12$ up to $0.92$, this may seem like arbitrary numbers and they are, so we invite you to try with different number, but just remember that if you change the range, you must change the values for $a$ and $b$ accordingly. End of explanation plt.figure(figsize=(10,10)) plt.plot(Ns,Ns*0+100,'r--',label="Max # of eval") for output_ASRF, h in zip(all_output_ASRF,Hs): plt.plot(Ns,output_ASRF,'.-',label=r"ASRF, $r=%.4f$"%(h)) plt.plot(Ns,np.mean(np.array(all_output_ASRF),0),'.-k',linewidth=4,label='ASRF, Mean # func. eval.', markersize=20) plt.xlabel('# of intervals N') plt.xticks(Ns) plt.ylabel('# of function evaluations') plt.legend(bbox_to_anchor=(1,1), loc="upper left") plt.grid(True) plt.ylim([0,110]) plt.show() Explanation: Fist large plot This first large plots show in color the behavior of ASRF as we change the number of intervals for each the $12$ values of $h$ described before. We show in a red dashed line the maximum number of function evaluation allowed and in a thick black line the mean value of the function evaluation for each number of intervals. This plot tells us that, on average, it seems that using $5$ intervals may be the most competitive value for the number of intervals. Will this be better than the output (number of function evaluation) obtain by the Bisection method? To answer this question we need to change the plot, but using the same data. If we try to plot the output of the bisection method in this figure, it may not be fair since the bisection method does not depend on the number of intervals, since it always uses 2 intervals. To solve this problem, we change the way we plot the results in the next plot. End of explanation plt.figure(figsize=(10,10)) #plt.plot(Ns,Ns*0+100,'r--',label="Max # of eval") output_ASRF = np.array(all_output_ASRF) for output_Hs, N in zip(output_ASRF.T,Ns): plt.plot(Hs,output_Hs,'.-',label=r"ASRF, $N=%d$"%(N)) plt.plot(Hs,np.mean(np.array(output_ASRF),1),'.-k',linewidth=4,label='ASRF, Mean # func. eval.', markersize=20) plt.plot(Hs, out_bisection,'cs-',label='Bisection method', markersize=10, linewidth=4) plt.xlabel('Root r=h') plt.xticks(Hs) plt.ylabel('# of function evaluations') plt.legend(bbox_to_anchor=(1,1), loc="upper left") plt.ylim([0,110]) plt.grid(True) plt.show() Explanation: Second large plot In this case, we plot the behavior of the algorithm as a function of the root we are looking for. The behavior of the different number of intervals is considered with different colors over the set of roots. Now it is possible to add the Bisection method in cyan with large square markers. The behavior of the Bisection method is constant, as expected. A reasonable question that you may have by looking at this plot is the following: - Why ASRF seems better than the Bisection method for the first and last root? (The answer is left as an exercise!) End of explanation
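As a final sanity check (an added sketch, not part of the original notebook): since $f_2$ was built with a known root at $\sqrt{2}/2$, we can confirm that both routines recover it to within the $10^{-8}$ tolerance while reporting their respective function-evaluation counts. This reuses only the `ASRF`, `bisect` and `f2` definitions given above.

```python
# Accuracy check (sketch): both methods should land within TOL=1e-8 of the known
# root sqrt(2)/2 of f2; we also print the number of function evaluations used.
r_asrf, m_asrf = ASRF(a=0, b=1, f=f2, N=5, flag_OUTPUT=False)
r_bis, m_bis = bisect(f2, 0, 1, flag_OUTPUT=False)
print('ASRF     : error = %.2e, evaluations = %d' % (abs(r_asrf - np.sqrt(2)/2), m_asrf))
print('Bisection: error = %.2e, evaluations = %d' % (abs(r_bis - np.sqrt(2)/2), m_bis))
```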
1,093
Given the following text description, write Python code to implement the functionality described below step by step Description: 가우시안 정규 분포 가우시안 정규 분포(Gaussian normal distribution), 혹은 그냥 간단히 정규 분포라고 부르는 분포는 자연 현상에서 나타나는 숫자를 확률 모형으로 모형화할 때 가장 많이 사용되는 확률 모형이다. 정규 분포는 평균 $\mu$와 분산 $\sigma^2$ 이라는 두 개의 모수만으로 정의되며 확률 밀도 함수(pdf Step1: pdf 메서드를 사용하면 확률 밀도 함수(pdf Step2: 시뮬레이션을 통해 샘플을 얻으려면 rvs 메서드를 사용한다. Step3: Q-Q 플롯 정규 분포는 여러가지 연속 확률 분포 중에서도 가장 유용한 특성을 지니며 널리 사용되는 확률 분포이다. 따라서 어떤 확률 변수의 분포가 정규 분포인지 아닌지 확인하는 것은 정규 분포 검정(normality test)은 가장 중요한 통계적 분석 중의 하나이다. 그러나 구체적인 정규 분포 검정을 사용하기에 앞서 시작적으로 간단하게 정규 분포를 확인하는 Q-Q 플롯을 사용할 수 있다. Q-Q(Quantile-Quantile) 플롯은 분석하고자 하는 샘플의 분포과 정규 분포의 분포 형태를 비교하는 시각적 도구이다. Q-Q 플롯은 동일 분위수에 해당하는 정상 분포의 값과 주어진 분포의 값을 한 쌍으로 만들어 스캐터 플롯(scatter plot)으로 그린 것이다. Q-Q 플롯을 그리는 구체적인 방법은 다음과 같다. 대상 샘플을 크기에 따라 정렬(sort)한다. 각 샘플의 분위수(quantile number)를 구한다. 각 샘플의 분위수와 일치하는 분위수를 가지는 정규 분포 값을 구한다. 대상 샘플과 정규 분포 값을 하나의 쌍으로 생각하여 2차원 공간에 하나의 점(point)으로 그린다. 모든 샘플에 대해 2부터 4까지의 과정을 반복하여 스캐터 플롯과 유사한 형태의 플롯을 완성한다. 비교를 위한 45도 직선을 그린다. SciPy 패키지의 stats 서브 패키지는 Q-Q 플롯을 계산하고 그리기 위한 probplot 명령을 제공한다. http Step4: 정규 분포를 따르지 않는 데이터 샘플을 Q-Q 플롯으로 그리면 다음과 같이 직선이 아닌 휘어진 형태로 나타난다. Step5: 중심 극한 정리 실세계에서 발생하는 현상 중 많은 것들이 정규 분포로 모형화 가능하다. 그 이유 중의 하나는 다음과 같은 중심 극한 정리(Central Limit Theorem)이다. 중심 극한 정리는 어떤 분포를 따르는 확류 변수든 간에 해당 확률 변수가 복수인 경우 그 합은 정규 분포와 비슥한 분포를 이루는 현상을 말한다. 좀 더 수학적인 용어로 쓰면 다음과 같다. $X_1, X_2, \ldots, X_n$가 기댓값이 $\mu$이고 분산이 $\sigma^2$으로 동일한 분포이며 서로 독립인 확률 변수들이라고 하자. 이 값들의 합 $$ S_n = X_1+\cdots+X_n $$ 도 마찬가지로 확률 변수이다. 이 확률 변수 $S_n$의 분포는 $n$이 증가할 수록 다음과 같은 정규 분포에 수렴한다. $$ \dfrac{S_n}{\sqrt{n}} \xrightarrow{d}\ N(\mu,\;\sigma^2) $$ 시뮬레이션을 사용하여 중심 극한 정리가 성립하는지 살펴보도록 하자.
Python Code: mu = 0 std = 1 rv = sp.stats.norm(mu, std) rv Explanation: 가우시안 정규 분포 가우시안 정규 분포(Gaussian normal distribution), 혹은 그냥 간단히 정규 분포라고 부르는 분포는 자연 현상에서 나타나는 숫자를 확률 모형으로 모형화할 때 가장 많이 사용되는 확률 모형이다. 정규 분포는 평균 $\mu$와 분산 $\sigma^2$ 이라는 두 개의 모수만으로 정의되며 확률 밀도 함수(pdf: probability density function)는 다음과 같은 수식을 가진다. $$ \mathcal{N}(x; \mu, \sigma^2) = \dfrac{1}{\sqrt{2\pi\sigma^2}} \exp \left(-\dfrac{(x-\mu)^2}{2\sigma^2}\right) $$ 정규 분포 중에서도 평균이 0 이고 분산이 1 인 ($\mu=0$, $\sigma^2=1$) 정규 분포를 표준 정규 분포(standard normal distribution)라고 한다. SciPy를 사용한 정규 분포의 시뮬레이션 Scipy의 stats 서브 패키지에 있는 norm 클래스는 정규 분포에 대한 클래스이다. loc 인수로 평균을 설정하고 scale 인수로 표준 편차를 설정한다. End of explanation xx = np.linspace(-5, 5, 100) plt.plot(xx, rv.pdf(xx)) plt.ylabel("p(x)") plt.title("pdf of normal distribution") plt.show() Explanation: pdf 메서드를 사용하면 확률 밀도 함수(pdf: probability density function)를 계산할 수 있다. End of explanation np.random.seed(0) x = rv.rvs(100) x sns.distplot(x, kde=False, fit=sp.stats.norm) plt.show() Explanation: 시뮬레이션을 통해 샘플을 얻으려면 rvs 메서드를 사용한다. End of explanation np.random.seed(0) x = np.random.randn(100) plt.figure(figsize=(7,7)) sp.stats.probplot(x, plot=plt) plt.axis("equal") plt.show() Explanation: Q-Q 플롯 정규 분포는 여러가지 연속 확률 분포 중에서도 가장 유용한 특성을 지니며 널리 사용되는 확률 분포이다. 따라서 어떤 확률 변수의 분포가 정규 분포인지 아닌지 확인하는 것은 정규 분포 검정(normality test)은 가장 중요한 통계적 분석 중의 하나이다. 그러나 구체적인 정규 분포 검정을 사용하기에 앞서 시작적으로 간단하게 정규 분포를 확인하는 Q-Q 플롯을 사용할 수 있다. Q-Q(Quantile-Quantile) 플롯은 분석하고자 하는 샘플의 분포과 정규 분포의 분포 형태를 비교하는 시각적 도구이다. Q-Q 플롯은 동일 분위수에 해당하는 정상 분포의 값과 주어진 분포의 값을 한 쌍으로 만들어 스캐터 플롯(scatter plot)으로 그린 것이다. Q-Q 플롯을 그리는 구체적인 방법은 다음과 같다. 대상 샘플을 크기에 따라 정렬(sort)한다. 각 샘플의 분위수(quantile number)를 구한다. 각 샘플의 분위수와 일치하는 분위수를 가지는 정규 분포 값을 구한다. 대상 샘플과 정규 분포 값을 하나의 쌍으로 생각하여 2차원 공간에 하나의 점(point)으로 그린다. 모든 샘플에 대해 2부터 4까지의 과정을 반복하여 스캐터 플롯과 유사한 형태의 플롯을 완성한다. 비교를 위한 45도 직선을 그린다. SciPy 패키지의 stats 서브 패키지는 Q-Q 플롯을 계산하고 그리기 위한 probplot 명령을 제공한다. http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.probplot.html probplot은 기본적으로 인수로 보낸 데이터 샘플에 대한 Q-Q 정보만을 반환하고 챠트는 그리지 않는다. 만약 차트를 그리고 싶다면 plot 인수에 matplotlib.pylab 모듈 객체 혹은 Axes 클래스 객체를 넘겨주어야 한다. 정규 분포를 따르는 데이터 샘플을 Q-Q 플롯으로 그리면 다음과 같이 직선의 형태로 나타난다. End of explanation np.random.seed(0) x = np.random.rand(100) plt.figure(figsize=(7,7)) sp.stats.probplot(x, plot=plt) plt.ylim(-0.5, 1.5) plt.show() Explanation: 정규 분포를 따르지 않는 데이터 샘플을 Q-Q 플롯으로 그리면 다음과 같이 직선이 아닌 휘어진 형태로 나타난다. End of explanation xx = np.linspace(-2, 2, 100) plt.figure(figsize=(6,9)) for i, N in enumerate([1, 2, 10]): X = np.random.rand(1000, N) - 0.5 S = X.sum(axis=1)/np.sqrt(N) plt.subplot(3, 2, 2*i+1) sns.distplot(S, bins=10, kde=False, norm_hist=True) plt.xlim(-2, 2) plt.yticks([]) plt.subplot(3, 2, 2*i+2) sp.stats.probplot(S, plot=plt) plt.tight_layout() plt.show() Explanation: 중심 극한 정리 실세계에서 발생하는 현상 중 많은 것들이 정규 분포로 모형화 가능하다. 그 이유 중의 하나는 다음과 같은 중심 극한 정리(Central Limit Theorem)이다. 중심 극한 정리는 어떤 분포를 따르는 확류 변수든 간에 해당 확률 변수가 복수인 경우 그 합은 정규 분포와 비슥한 분포를 이루는 현상을 말한다. 좀 더 수학적인 용어로 쓰면 다음과 같다. $X_1, X_2, \ldots, X_n$가 기댓값이 $\mu$이고 분산이 $\sigma^2$으로 동일한 분포이며 서로 독립인 확률 변수들이라고 하자. 이 값들의 합 $$ S_n = X_1+\cdots+X_n $$ 도 마찬가지로 확률 변수이다. 이 확률 변수 $S_n$의 분포는 $n$이 증가할 수록 다음과 같은 정규 분포에 수렴한다. $$ \dfrac{S_n}{\sqrt{n}} \xrightarrow{d}\ N(\mu,\;\sigma^2) $$ 시뮬레이션을 사용하여 중심 극한 정리가 성립하는지 살펴보도록 하자. End of explanation
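As a complement to the visual Q-Q check above, a quantitative normality test can be applied to the simulated sums. The following is an added sketch that assumes the same `np` and `sp` (NumPy/SciPy) aliases used in the cells above; the Shapiro-Wilk test from `scipy.stats` returns a p-value, where small values indicate a departure from normality. The raw uniform samples (N=1) should be strongly rejected, while the departure becomes harder to detect as N grows, in line with the central limit theorem.

```python
# Quantitative normality check (sketch): Shapiro-Wilk test on the scaled sums of
# N uniform variables; small p-values indicate departure from normality.
for N in [1, 2, 10]:
    X = np.random.rand(1000, N) - 0.5
    S = X.sum(axis=1) / np.sqrt(N)
    stat, pvalue = sp.stats.shapiro(S)
    print(N, stat, pvalue)
```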
1,094
Given the following text description, write Python code to implement the functionality described below step by step Description: Dynamic factors and coincident indices Factor models generally try to find a small number of unobserved "factors" that influence a subtantial portion of the variation in a larger number of observed variables, and they are related to dimension-reduction techniques such as principal components analysis. Dynamic factor models explicitly model the transition dynamics of the unobserved factors, and so are often applied to time-series data. Macroeconomic coincident indices are designed to capture the common component of the "business cycle"; such a component is assumed to simultaneously affect many macroeconomic variables. Although the estimation and use of coincident indices (for example the Index of Coincident Economic Indicators) pre-dates dynamic factor models, in several influential papers Stock and Watson (1989, 1991) used a dynamic factor model to provide a theoretical foundation for them. Below, we follow the treatment found in Kim and Nelson (1999), of the Stock and Watson (1991) model, to formulate a dynamic factor model, estimate its parameters via maximum likelihood, and create a coincident index. Macroeconomic data The coincident index is created by considering the comovements in four macroeconomic variables (versions of thse variables are available on FRED; the ID of the series used below is given in parentheses) Step1: Note Step2: Stock and Watson (1991) report that for their datasets, they could not reject the null hypothesis of a unit root in each series (so the series are integrated), but they did not find strong evidence that the series were co-integrated. As a result, they suggest estimating the model using the first differences (of the logs) of the variables, demeaned and standardized. Step3: Dynamic factors A general dynamic factor model is written as Step4: Estimates Once the model has been estimated, there are two components that we can use for analysis or inference Step5: Estimated factors While it can be useful to plot the unobserved factors, it is less useful here than one might think for two reasons Step6: Post-estimation Although here we will be able to interpret the results of the model by constructing the coincident index, there is a useful and generic approach for getting a sense for what is being captured by the estimated factor. By taking the estimated factors as given, regressing them (and a constant) each (one at a time) on each of the observed variables, and recording the coefficients of determination ($R^2$ values), we can get a sense of the variables for which each factor explains a substantial portion of the variance and the variables for which it does not. In models with more variables and more factors, this can sometimes lend interpretation to the factors (for example sometimes one factor will load primarily on real variables and another on nominal variables). In this model, with only four endogenous variables and one factor, it is easy to digest a simple table of the $R^2$ values, but in larger models it is not. For this reason, a bar plot is often employed; from the plot we can easily see that the factor explains most of the variation in industrial production index and a large portion of the variation in sales and employment, it is less helpful in explaining income. 
Step7: Coincident Index As described above, the goal of this model was to create an interpretable series which could be used to understand the current status of the macroeconomy. This is what the coincident index is designed to do. It is constructed below. For readers interested in an explanation of the construction, see Kim and Nelson (1999) or Stock and Watson (1991). In essence, what is done is to reconstruct the mean of the (differenced) factor. We will compare it to the coincident index published by the Federal Reserve Bank of Philadelphia (USPHCI on FRED). Step8: Below we plot the calculated coincident index along with the US recessions and the comparison coincident index USPHCI. Step9: Appendix 1 Step10: So what did we just do? __init__ The important step here was specifying the base dynamic factor model which we were operating with. In particular, as described above, we initialize with factor_order=4, even though we will only end up with an AR(2) model for the factor. We also performed some general setup-related tasks. start_params start_params are used as initial values in the optimizer. Since we are adding three new parameters, we need to pass those in. If we hadn't done this, the optimizer would use the default starting values, which would be three elements short. param_names param_names are used in a variety of places, but especially in the results class. Below we get a full result summary, which is only possible when all the parameters have associated names. transform_params and untransform_params The optimizer selects possible parameter values in an unconstrained way. That's not usually desired (since variances can't be negative, for example), and transform_params is used to transform the unconstrained values used by the optimizer to constrained values appropriate to the model. Variance terms are typically squared (to force them to be positive), and AR lag coefficients are often constrained to lead to a stationary model. untransform_params is used for the reverse operation (and is important because starting parameters are usually specified in terms of values appropriate to the model, and we need to convert them to parameters appropriate to the optimizer before we can begin the optimization routine). Even though we don't need to transform or untransform our new parameters (the loadings can in theory take on any values), we still need to modify this function for two reasons Step11: Although this model increases the likelihood, it is not preferred by the AIC and BIC measures which penalize the additional three parameters. Furthermore, the qualitative results are unchanged, as we can see from the updated $R^2$ chart and the new coincident index, both of which are practically identical to the previous results.
Python Code: %matplotlib inline import numpy as np import pandas as pd import statsmodels.api as sm import matplotlib.pyplot as plt np.set_printoptions(precision=4, suppress=True, linewidth=120) from pandas.io.data import DataReader # Get the datasets from FRED start = '1979-01-01' end = '2014-12-01' indprod = DataReader('IPMAN', 'fred', start=start, end=end) income = DataReader('W875RX1', 'fred', start=start, end=end) sales = DataReader('CMRMTSPL', 'fred', start=start, end=end) emp = DataReader('PAYEMS', 'fred', start=start, end=end) # dta = pd.concat((indprod, income, sales, emp), axis=1) # dta.columns = ['indprod', 'income', 'sales', 'emp'] Explanation: Dynamic factors and coincident indices Factor models generally try to find a small number of unobserved "factors" that influence a subtantial portion of the variation in a larger number of observed variables, and they are related to dimension-reduction techniques such as principal components analysis. Dynamic factor models explicitly model the transition dynamics of the unobserved factors, and so are often applied to time-series data. Macroeconomic coincident indices are designed to capture the common component of the "business cycle"; such a component is assumed to simultaneously affect many macroeconomic variables. Although the estimation and use of coincident indices (for example the Index of Coincident Economic Indicators) pre-dates dynamic factor models, in several influential papers Stock and Watson (1989, 1991) used a dynamic factor model to provide a theoretical foundation for them. Below, we follow the treatment found in Kim and Nelson (1999), of the Stock and Watson (1991) model, to formulate a dynamic factor model, estimate its parameters via maximum likelihood, and create a coincident index. Macroeconomic data The coincident index is created by considering the comovements in four macroeconomic variables (versions of thse variables are available on FRED; the ID of the series used below is given in parentheses): Industrial production (IPMAN) Real aggregate income (excluding transfer payments) (W875RX1) Manufacturing and trade sales (CMRMTSPL) Employees on non-farm payrolls (PAYEMS) In all cases, the data is at the monthly frequency and has been seasonally adjusted; the time-frame considered is 1972 - 2005. End of explanation # HMRMT = DataReader('HMRMT', 'fred', start='1967-01-01', end=end) # CMRMT = DataReader('CMRMT', 'fred', start='1997-01-01', end=end) # HMRMT_growth = HMRMT.diff() / HMRMT.shift() # sales = pd.Series(np.zeros(emp.shape[0]), index=emp.index) # # Fill in the recent entries (1997 onwards) # sales[CMRMT.index] = CMRMT # # Backfill the previous entries (pre 1997) # idx = sales.ix[:'1997-01-01'].index # for t in range(len(idx)-1, 0, -1): # month = idx[t] # prev_month = idx[t-1] # sales.ix[prev_month] = sales.ix[month] / (1 + HMRMT_growth.ix[prev_month].values) dta = pd.concat((indprod, income, sales, emp), axis=1) dta.columns = ['indprod', 'income', 'sales', 'emp'] dta.ix[:, 'indprod':'emp'].plot(subplots=True, layout=(2, 2), figsize=(15, 6)); Explanation: Note: in a recent update on FRED (8/12/15) the time series CMRMTSPL was truncated to begin in 1997; this is probably a mistake due to the fact that CMRMTSPL is a spliced series, so the earlier period is from the series HMRMT and the latter period is defined by CMRMT. This has since (02/11/16) been corrected, however the series could also be constructed by hand from HMRMT and CMRMT, as shown below (process taken from the notes in the Alfred xls file). 
End of explanation # Create log-differenced series dta['dln_indprod'] = (np.log(dta.indprod)).diff() * 100 dta['dln_income'] = (np.log(dta.income)).diff() * 100 dta['dln_sales'] = (np.log(dta.sales)).diff() * 100 dta['dln_emp'] = (np.log(dta.emp)).diff() * 100 # De-mean and standardize dta['std_indprod'] = (dta['dln_indprod'] - dta['dln_indprod'].mean()) / dta['dln_indprod'].std() dta['std_income'] = (dta['dln_income'] - dta['dln_income'].mean()) / dta['dln_income'].std() dta['std_sales'] = (dta['dln_sales'] - dta['dln_sales'].mean()) / dta['dln_sales'].std() dta['std_emp'] = (dta['dln_emp'] - dta['dln_emp'].mean()) / dta['dln_emp'].std() Explanation: Stock and Watson (1991) report that for their datasets, they could not reject the null hypothesis of a unit root in each series (so the series are integrated), but they did not find strong evidence that the series were co-integrated. As a result, they suggest estimating the model using the first differences (of the logs) of the variables, demeaned and standardized. End of explanation # Get the endogenous data endog = dta.ix['1979-02-01':, 'std_indprod':'std_emp'] # Create the model mod = sm.tsa.DynamicFactor(endog, k_factors=1, factor_order=2, error_order=2) initial_res = mod.fit(method='powell', disp=False) res = mod.fit(initial_res.params) Explanation: Dynamic factors A general dynamic factor model is written as: $$ \begin{align} y_t & = \Lambda f_t + B x_t + u_t \ f_t & = A_1 f_{t-1} + \dots + A_p f_{t-p} + \eta_t \qquad \eta_t \sim N(0, I)\ u_t & = C_1 u_{t-1} + \dots + C_1 f_{t-q} + \varepsilon_t \qquad \varepsilon_t \sim N(0, \Sigma) \end{align} $$ where $y_t$ are observed data, $f_t$ are the unobserved factors (evolving as a vector autoregression), $x_t$ are (optional) exogenous variables, and $u_t$ is the error, or "idiosyncratic", process ($u_t$ is also optionally allowed to be autocorrelated). The $\Lambda$ matrix is often referred to as the matrix of "factor loadings". The variance of the factor error term is set to the identity matrix to ensure identification of the unobserved factors. This model can be cast into state space form, and the unobserved factor estimated via the Kalman filter. The likelihood can be evaluated as a byproduct of the filtering recursions, and maximum likelihood estimation used to estimate the parameters. Model specification The specific dynamic factor model in this application has 1 unobserved factor which is assumed to follow an AR(2) proces. The innovations $\varepsilon_t$ are assumed to be independent (so that $\Sigma$ is a diagonal matrix) and the error term associated with each equation, $u_{i,t}$ is assumed to follow an independent AR(2) process. Thus the specification considered here is: $$ \begin{align} y_{i,t} & = \lambda_i f_t + u_{i,t} \ u_{i,t} & = c_{i,1} u_{1,t-1} + c_{i,2} u_{i,t-2} + \varepsilon_{i,t} \qquad & \varepsilon_{i,t} \sim N(0, \sigma_i^2) \ f_t & = a_1 f_{t-1} + a_2 f_{t-2} + \eta_t \qquad & \eta_t \sim N(0, I)\ \end{align} $$ where $i$ is one of: [indprod, income, sales, emp ]. This model can be formulated using the DynamicFactor model built-in to Statsmodels. In particular, we have the following specification: k_factors = 1 - (there is 1 unobserved factor) factor_order = 2 - (it follows an AR(2) process) error_var = False - (the errors evolve as independent AR processes rather than jointly as a VAR - note that this is the default option, so it is not specified below) error_order = 2 - (the errors are autocorrelated of order 2: i.e. 
AR(2) processes) error_cov_type = 'diagonal' - (the innovations are uncorrelated; this is again the default) Once the model is created, the parameters can be estimated via maximum likelihood; this is done using the fit() method. Note: recall that we have de-meaned and standardized the data; this will be important in interpreting the results that follow. Aside: in their empirical example, Kim and Nelson (1999) actually consider a slightly different model in which the employment variable is allowed to also depend on lagged values of the factor - this model does not fit into the built-in DynamicFactor class, but can be accomodated by using a subclass to implement the required new parameters and restrictions - see Appendix A, below. Parameter estimation Multivariate models can have a relatively large number of parameters, and it may be difficult to escape from local minima to find the maximized likelihood. In an attempt to mitigate this problem, I perform an initial maximization step (from the model-defined starting paramters) using the modified Powell method available in Scipy (see the minimize documentation for more information). The resulting parameters are then used as starting parameters in the standard LBFGS optimization method. End of explanation print(res.summary(separate_params=False)) Explanation: Estimates Once the model has been estimated, there are two components that we can use for analysis or inference: The estimated parameters The estimated factor Parameters The estimated parameters can be helpful in understanding the implications of the model, although in models with a larger number of observed variables and / or unobserved factors they can be difficult to interpret. One reason for this difficulty is due to identification issues between the factor loadings and the unobserved factors. One easy-to-see identification issue is the sign of the loadings and the factors: an equivalent model to the one displayed below would result from reversing the signs of all factor loadings and the unobserved factor. Here, one of the easy-to-interpret implications in this model is the persistence of the unobserved factor: we find that exhibits substantial persistence. End of explanation fig, ax = plt.subplots(figsize=(13,3)) # Plot the factor dates = endog.index._mpl_repr() ax.plot(dates, res.factors.filtered[0], label='Factor') ax.legend() # Retrieve and also plot the NBER recession indicators rec = DataReader('USREC', 'fred', start=start, end=end) ylim = ax.get_ylim() ax.fill_between(dates[:-3], ylim[0], ylim[1], rec.values[:,0], facecolor='k', alpha=0.1); Explanation: Estimated factors While it can be useful to plot the unobserved factors, it is less useful here than one might think for two reasons: The sign-related identification issue described above. Since the data was differenced, the estimated factor explains the variation in the differenced data, not the original data. It is for these reasons that the coincident index is created (see below). With these reservations, the unobserved factor is plotted below, along with the NBER indicators for US recessions. It appears that the factor is successful at picking up some degree of business cycle activity. End of explanation res.plot_coefficients_of_determination(figsize=(8,2)); Explanation: Post-estimation Although here we will be able to interpret the results of the model by constructing the coincident index, there is a useful and generic approach for getting a sense for what is being captured by the estimated factor. 
By taking the estimated factors as given, regressing them (and a constant) each (one at a time) on each of the observed variables, and recording the coefficients of determination ($R^2$ values), we can get a sense of the variables for which each factor explains a substantial portion of the variance and the variables for which it does not. In models with more variables and more factors, this can sometimes lend interpretation to the factors (for example sometimes one factor will load primarily on real variables and another on nominal variables). In this model, with only four endogenous variables and one factor, it is easy to digest a simple table of the $R^2$ values, but in larger models it is not. For this reason, a bar plot is often employed; from the plot we can easily see that the factor explains most of the variation in industrial production index and a large portion of the variation in sales and employment, it is less helpful in explaining income. End of explanation usphci = DataReader('USPHCI', 'fred', start='1979-01-01', end='2014-12-01')['USPHCI'] usphci.plot(figsize=(13,3)); dusphci = usphci.diff()[1:].values def compute_coincident_index(mod, res): # Estimate W(1) spec = res.specification design = mod.ssm['design'] transition = mod.ssm['transition'] ss_kalman_gain = res.filter_results.kalman_gain[:,:,-1] k_states = ss_kalman_gain.shape[0] W1 = np.linalg.inv(np.eye(k_states) - np.dot( np.eye(k_states) - np.dot(ss_kalman_gain, design), transition )).dot(ss_kalman_gain)[0] # Compute the factor mean vector factor_mean = np.dot(W1, dta.ix['1972-02-01':, 'dln_indprod':'dln_emp'].mean()) # Normalize the factors factor = res.factors.filtered[0] factor *= np.std(usphci.diff()[1:]) / np.std(factor) # Compute the coincident index coincident_index = np.zeros(mod.nobs+1) # The initial value is arbitrary; here it is set to # facilitate comparison coincident_index[0] = usphci.iloc[0] * factor_mean / dusphci.mean() for t in range(0, mod.nobs): coincident_index[t+1] = coincident_index[t] + factor[t] + factor_mean # Attach dates coincident_index = pd.Series(coincident_index, index=dta.index).iloc[1:] # Normalize to use the same base year as USPHCI coincident_index *= (usphci.ix['1992-07-01'] / coincident_index.ix['1992-07-01']) return coincident_index Explanation: Coincident Index As described above, the goal of this model was to create an interpretable series which could be used to understand the current status of the macroeconomy. This is what the coincident index is designed to do. It is constructed below. For readers interested in an explanation of the construction, see Kim and Nelson (1999) or Stock and Watson (1991). In essense, what is done is to reconstruct the mean of the (differenced) factor. We will compare it to the coincident index on published by the Federal Reserve Bank of Philadelphia (USPHCI on FRED). End of explanation fig, ax = plt.subplots(figsize=(13,3)) # Compute the index coincident_index = compute_coincident_index(mod, res) # Plot the factor dates = endog.index._mpl_repr() ax.plot(dates, coincident_index, label='Coincident index') ax.plot(usphci.index._mpl_repr(), usphci, label='USPHCI') ax.legend(loc='lower right') # Retrieve and also plot the NBER recession indicators ylim = ax.get_ylim() ax.fill_between(dates[:-3], ylim[0], ylim[1], rec.values[:,0], facecolor='k', alpha=0.1); Explanation: Below we plot the calculated coincident index along with the US recessions and the comparison coincident index USPHCI. 
End of explanation from statsmodels.tsa.statespace import tools class ExtendedDFM(sm.tsa.DynamicFactor): def __init__(self, endog, **kwargs): # Setup the model as if we had a factor order of 4 super(ExtendedDFM, self).__init__( endog, k_factors=1, factor_order=4, error_order=2, **kwargs) # Note: `self.parameters` is an ordered dict with the # keys corresponding to parameter types, and the values # the number of parameters of that type. # Add the new parameters self.parameters['new_loadings'] = 3 # Cache a slice for the location of the 4 factor AR # parameters (a_1, ..., a_4) in the full parameter vector offset = (self.parameters['factor_loadings'] + self.parameters['exog'] + self.parameters['error_cov']) self._params_factor_ar = np.s_[offset:offset+2] self._params_factor_zero = np.s_[offset+2:offset+4] @property def start_params(self): # Add three new loading parameters to the end of the parameter # vector, initialized to zeros (for simplicity; they could # be initialized any way you like) return np.r_[super(ExtendedDFM, self).start_params, 0, 0, 0] @property def param_names(self): # Add the corresponding names for the new loading parameters # (the name can be anything you like) return super(ExtendedDFM, self).param_names + [ 'loading.L%d.f1.%s' % (i, self.endog_names[3]) for i in range(1,4)] def transform_params(self, unconstrained): # Perform the typical DFM transformation (w/o the new parameters) constrained = super(ExtendedDFM, self).transform_params( unconstrained[:-3]) # Redo the factor AR constraint, since we only want an AR(2), # and the previous constraint was for an AR(4) ar_params = unconstrained[self._params_factor_ar] constrained[self._params_factor_ar] = ( tools.constrain_stationary_univariate(ar_params)) # Return all the parameters return np.r_[constrained, unconstrained[-3:]] def untransform_params(self, constrained): # Perform the typical DFM untransformation (w/o the new parameters) unconstrained = super(ExtendedDFM, self).untransform_params( constrained[:-3]) # Redo the factor AR unconstraint, since we only want an AR(2), # and the previous unconstraint was for an AR(4) ar_params = constrained[self._params_factor_ar] unconstrained[self._params_factor_ar] = ( tools.unconstrain_stationary_univariate(ar_params)) # Return all the parameters return np.r_[unconstrained, constrained[-3:]] def update(self, params, transformed=True): # Peform the transformation, if required if not transformed: params = self.transform_params(params) params[self._params_factor_zero] = 0 # Now perform the usual DFM update, but exclude our new parameters super(ExtendedDFM, self).update(params[:-3], transformed=True) # Finally, set our new parameters in the design matrix self.ssm['design', 3, 1:4] = params[-3:] Explanation: Appendix 1: Extending the dynamic factor model Recall that the previous specification was described by: $$ \begin{align} y_{i,t} & = \lambda_i f_t + u_{i,t} \ u_{i,t} & = c_{i,1} u_{1,t-1} + c_{i,2} u_{i,t-2} + \varepsilon_{i,t} \qquad & \varepsilon_{i,t} \sim N(0, \sigma_i^2) \ f_t & = a_1 f_{t-1} + a_2 f_{t-2} + \eta_t \qquad & \eta_t \sim N(0, I)\ \end{align} $$ Written in state space form, the previous specification of the model had the following observation equation: $$ \begin{bmatrix} y_{\text{indprod}, t} \ y_{\text{income}, t} \ y_{\text{sales}, t} \ y_{\text{emp}, t} \ \end{bmatrix} = \begin{bmatrix} \lambda_\text{indprod} & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \ \lambda_\text{income} & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \ \lambda_\text{sales} & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 
0 \ \lambda_\text{emp} & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \ \end{bmatrix} \begin{bmatrix} f_t \ f_{t-1} \ u_{\text{indprod}, t} \ u_{\text{income}, t} \ u_{\text{sales}, t} \ u_{\text{emp}, t} \ u_{\text{indprod}, t-1} \ u_{\text{income}, t-1} \ u_{\text{sales}, t-1} \ u_{\text{emp}, t-1} \ \end{bmatrix} $$ and transition equation: $$ \begin{bmatrix} f_t \ f_{t-1} \ u_{\text{indprod}, t} \ u_{\text{income}, t} \ u_{\text{sales}, t} \ u_{\text{emp}, t} \ u_{\text{indprod}, t-1} \ u_{\text{income}, t-1} \ u_{\text{sales}, t-1} \ u_{\text{emp}, t-1} \ \end{bmatrix} = \begin{bmatrix} a_1 & a_2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & c_{\text{indprod}, 1} & 0 & 0 & 0 & c_{\text{indprod}, 2} & 0 & 0 & 0 \ 0 & 0 & 0 & c_{\text{income}, 1} & 0 & 0 & 0 & c_{\text{income}, 2} & 0 & 0 \ 0 & 0 & 0 & 0 & c_{\text{sales}, 1} & 0 & 0 & 0 & c_{\text{sales}, 2} & 0 \ 0 & 0 & 0 & 0 & 0 & c_{\text{emp}, 1} & 0 & 0 & 0 & c_{\text{emp}, 2} \ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \ \end{bmatrix} \begin{bmatrix} f_{t-1} \ f_{t-2} \ u_{\text{indprod}, t-1} \ u_{\text{income}, t-1} \ u_{\text{sales}, t-1} \ u_{\text{emp}, t-1} \ u_{\text{indprod}, t-2} \ u_{\text{income}, t-2} \ u_{\text{sales}, t-2} \ u_{\text{emp}, t-2} \ \end{bmatrix} + R \begin{bmatrix} \eta_t \ \varepsilon_{t} \end{bmatrix} $$ the DynamicFactor model handles setting up the state space representation and, in the DynamicFactor.update method, it fills in the fitted parameter values into the appropriate locations. The extended specification is the same as in the previous example, except that we also want to allow employment to depend on lagged values of the factor. This creates a change to the $y_{\text{emp},t}$ equation. 
Now we have: $$ \begin{align} y_{i,t} & = \lambda_i f_t + u_{i,t} \qquad & i \in {\text{indprod}, \text{income}, \text{sales} }\ y_{i,t} & = \lambda_{i,0} f_t + \lambda_{i,1} f_{t-1} + \lambda_{i,2} f_{t-2} + \lambda_{i,2} f_{t-3} + u_{i,t} \qquad & i = \text{emp} \ u_{i,t} & = c_{i,1} u_{i,t-1} + c_{i,2} u_{i,t-2} + \varepsilon_{i,t} \qquad & \varepsilon_{i,t} \sim N(0, \sigma_i^2) \ f_t & = a_1 f_{t-1} + a_2 f_{t-2} + \eta_t \qquad & \eta_t \sim N(0, I)\ \end{align} $$ Now, the corresponding observation equation should look like the following: $$ \begin{bmatrix} y_{\text{indprod}, t} \ y_{\text{income}, t} \ y_{\text{sales}, t} \ y_{\text{emp}, t} \ \end{bmatrix} = \begin{bmatrix} \lambda_\text{indprod} & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \ \lambda_\text{income} & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \ \lambda_\text{sales} & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \ \lambda_\text{emp,1} & \lambda_\text{emp,2} & \lambda_\text{emp,3} & \lambda_\text{emp,4} & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \ \end{bmatrix} \begin{bmatrix} f_t \ f_{t-1} \ f_{t-2} \ f_{t-3} \ u_{\text{indprod}, t} \ u_{\text{income}, t} \ u_{\text{sales}, t} \ u_{\text{emp}, t} \ u_{\text{indprod}, t-1} \ u_{\text{income}, t-1} \ u_{\text{sales}, t-1} \ u_{\text{emp}, t-1} \ \end{bmatrix} $$ Notice that we have introduced two new state variables, $f_{t-2}$ and $f_{t-3}$, which means we need to update the transition equation: $$ \begin{bmatrix} f_t \ f_{t-1} \ f_{t-2} \ f_{t-3} \ u_{\text{indprod}, t} \ u_{\text{income}, t} \ u_{\text{sales}, t} \ u_{\text{emp}, t} \ u_{\text{indprod}, t-1} \ u_{\text{income}, t-1} \ u_{\text{sales}, t-1} \ u_{\text{emp}, t-1} \ \end{bmatrix} = \begin{bmatrix} a_1 & a_2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & c_{\text{indprod}, 1} & 0 & 0 & 0 & c_{\text{indprod}, 2} & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & c_{\text{income}, 1} & 0 & 0 & 0 & c_{\text{income}, 2} & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & 0 & c_{\text{sales}, 1} & 0 & 0 & 0 & c_{\text{sales}, 2} & 0 \ 0 & 0 & 0 & 0 & 0 & 0 & 0 & c_{\text{emp}, 1} & 0 & 0 & 0 & c_{\text{emp}, 2} \ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \ \end{bmatrix} \begin{bmatrix} f_{t-1} \ f_{t-2} \ f_{t-3} \ f_{t-4} \ u_{\text{indprod}, t-1} \ u_{\text{income}, t-1} \ u_{\text{sales}, t-1} \ u_{\text{emp}, t-1} \ u_{\text{indprod}, t-2} \ u_{\text{income}, t-2} \ u_{\text{sales}, t-2} \ u_{\text{emp}, t-2} \ \end{bmatrix} + R \begin{bmatrix} \eta_t \ \varepsilon_{t} \end{bmatrix} $$ This model cannot be handled out-of-the-box by the DynamicFactor class, but it can be handled by creating a subclass when alters the state space representation in the appropriate way. First, notice that if we had set factor_order = 4, we would almost have what we wanted. 
In that case, the last line of the observation equation would be: $$ \begin{bmatrix} \vdots \ y_{\text{emp}, t} \ \end{bmatrix} = \begin{bmatrix} \vdots & & & & & & & & & & & \vdots \ \lambda_\text{emp,1} & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \ \end{bmatrix} \begin{bmatrix} f_t \ f_{t-1} \ f_{t-2} \ f_{t-3} \ \vdots \end{bmatrix} $$ and the first line of the transition equation would be: $$ \begin{bmatrix} f_t \ \vdots \end{bmatrix} = \begin{bmatrix} a_1 & a_2 & a_3 & a_4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \ \vdots & & & & & & & & & & & \vdots \ \end{bmatrix} \begin{bmatrix} f_{t-1} \ f_{t-2} \ f_{t-3} \ f_{t-4} \ \vdots \end{bmatrix} + R \begin{bmatrix} \eta_t \ \varepsilon_{t} \end{bmatrix} $$ Relative to what we want, we have the following differences: In the above situation, the $\lambda_{\text{emp}, j}$ are forced to be zero for $j > 0$, and we want them to be estimated as parameters. We only want the factor to transition according to an AR(2), but under the above situation it is an AR(4). Our strategy will be to subclass DynamicFactor, and let it do most of the work (setting up the state space representation, etc.) where it assumes that factor_order = 4. The only things we will actually do in the subclass will be to fix those two issues. First, here is the full code of the subclass; it is discussed below. It is important to note at the outset that none of the methods defined below could have been omitted. In fact, the methods __init__, start_params, param_names, transform_params, untransform_params, and update form the core of all state space models in Statsmodels, not just the DynamicFactor class. End of explanation # Create the model extended_mod = ExtendedDFM(endog) initial_extended_res = extended_mod.fit(maxiter=1000, disp=False) extended_res = extended_mod.fit(initial_extended_res.params, method='nm', maxiter=1000) print(extended_res.summary(separate_params=False)) Explanation: So what did we just do? __init__ The important step here was specifying the base dynamic factor model which we were operating with. In particular, as described above, we initialize with factor_order=4, even though we will only end up with an AR(2) model for the factor. We also performed some general setup-related tasks. start_params start_params are used as initial values in the optimizer. Since we are adding three new parameters, we need to pass those in. If we hadn't done this, the optimizer would use the default starting values, which would be three elements short. param_names param_names are used in a variety of places, but especially in the results class. Below we get a full result summary, which is only possible when all the parameters have associated names. transform_params and untransform_params The optimizer selects possibly parameter values in an unconstrained way. That's not usually desired (since variances can't be negative, for example), and transform_params is used to transform the unconstrained values used by the optimizer to constrained values appropriate to the model. Variances terms are typically squared (to force them to be positive), and AR lag coefficients are often constrained to lead to a stationary model. untransform_params is used for the reverse operation (and is important because starting parameters are usually specified in terms of values appropriate to the model, and we need to convert them to parameters appropriate to the optimizer before we can begin the optimization routine). 
Even though we don't need to transform or untransform our new parameters (the loadings can in theory take on any values), we still need to modify this function for two reasons: The version in the DynamicFactor class is expecting 3 fewer parameters than we have now. At a minimum, we need to handle the three new parameters. The version in the DynamicFactor class constrains the factor lag coefficients to be stationary as though it was an AR(4) model. Since we actually have an AR(2) model, we need to re-do the constraint. We also set the last two autoregressive coefficients to be zero here. update The most important reason we need to specify a new update method is because we have three new parameters that we need to place into the state space formulation. In particular we let the parent DynamicFactor.update class handle placing all the parameters except the three new ones in to the state space representation, and then we put the last three in manually. End of explanation extended_res.plot_coefficients_of_determination(figsize=(8,2)); fig, ax = plt.subplots(figsize=(13,3)) # Compute the index extended_coincident_index = compute_coincident_index(extended_mod, extended_res) # Plot the factor dates = endog.index._mpl_repr() ax.plot(dates, coincident_index, '-', linewidth=1, label='Basic model') ax.plot(dates, extended_coincident_index, '--', linewidth=3, label='Extended model') ax.plot(usphci.index._mpl_repr(), usphci, label='USPHCI') ax.legend(loc='lower right') ax.set(title='Coincident indices, comparison') # Retrieve and also plot the NBER recession indicators ylim = ax.get_ylim() ax.fill_between(dates[:-3], ylim[0], ylim[1], rec.values[:,0], facecolor='k', alpha=0.1); Explanation: Although this model increases the likelihood, it is not preferred by the AIC and BIC mesaures which penalize the additional three parameters. Furthermore, the qualitative results are unchanged, as we can see from the updated $R^2$ chart and the new coincident index, both of which are practically identical to the previous results. End of explanation
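To make the AIC/BIC comparison above concrete, the following added sketch (not part of the original notebook) prints the log-likelihood and information criteria of the basic and extended models side by side, and runs a likelihood-ratio test for the three extra loading parameters. It assumes the `res` and `extended_res` results objects fitted above are available.

```python
# Model comparison (sketch): log-likelihood, AIC and BIC for both fits, plus a
# likelihood-ratio test with 3 degrees of freedom (the three new loadings).
from scipy import stats

print('Basic   : llf = %.2f, AIC = %.2f, BIC = %.2f' % (res.llf, res.aic, res.bic))
print('Extended: llf = %.2f, AIC = %.2f, BIC = %.2f' % (extended_res.llf, extended_res.aic, extended_res.bic))

lr_stat = 2 * (extended_res.llf - res.llf)
p_value = stats.chi2.sf(lr_stat, df=3)
print('LR statistic = %.3f, p-value = %.3f' % (lr_stat, p_value))
```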
1,095
Given the following text description, write Python code to implement the functionality described below step by step Description: 错误可视化 对于任何科学的度量,准确地计算错误几乎和准确报告数字本身一样重要,甚至更为重要。例如,假设我正在使用一些天体观测来估计哈勃常数,这是对宇宙膨胀率的局部测量。我知道,目前的文献表明该值约为71(km / s)/ Mpc,我用我的方法测得的值为74(km / s)/ Mpc。值是否一致?给定此信息,唯一正确的答案是:没有办法知道。 假设我用报告的不确定性来补充此信息:当前文献表明该值约为71± 2.5(km / s)/ Mpc,我的方法测得的值为74± 5(km / s)/ Mpc。现在值是一致的吗?这是一个可以定量回答的问题。 在数据和结果的可视化中,有效显示这些错误可以使绘图传达更完整的信息。 基本错误栏表示 可以通过单个Matplotlib函数调用来创建基本的错误栏: Step1: 这里的fmt是控制线和点外观的格式代码,并且具有与plt.plot中使用的简写相同的语法,在Simple Line Plots和Simple Scatter Plots中进行了概述。 除了这些基本选项之外,错误栏功能还有许多选项可以微调输出。使用这些附加选项,可以轻松自定义误差线图的美感。 Step2: 除了这些选项之外,还可以指定水平误差线(xerr),单面误差线和许多其他变体。有关可用选项的更多信息,请参考plt.errorbar的文档字符串。 连续误差图 在某些情况下,希望显示连续数量的误差条。尽管Matplotlib没有针对此类应用程序的内置便利例程,但是将诸如plt.plot和plt.fill_between之类的原语组合起来相对容易,以获得有用的结果。 在这里,我们将使用Scikit-Learn API执行简单的高斯过程回归scikit doc。这是一种通过连续测量不确定性将非常灵活的非参数函数拟合到数据的方法。在这一点上,我们将不深入研究高斯过程回归的细节,而是将重点放在如何可视化这种连续误差测量上: Step3: 现在,我们有了xfit,yfit和dyfit,它们可以对我们的数据进行连续拟合。我们可以像上面那样将它们传递给plt.errorbar函数,但是我们真的不想绘制带有1,000个误差线的1,000点。相反,我们可以将plt.fill_between函数与浅色配合使用以可视化此连续错误:
Python Code: %matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
import numpy as np

x = np.linspace(0, 10, 50)
dy = 0.8
y = np.sin(x) + dy * np.random.randn(50)

# yerr gives the uncertainty on y
plt.errorbar(x, y, yerr=dy, fmt='.k');
Explanation: Visualizing Errors
For any scientific measurement, accurate accounting of errors is nearly as important as accurate reporting of the number itself, if not more so. For example, imagine that I am using some astrophysical observations to estimate the Hubble constant, the local measurement of the expansion rate of the Universe. I know that the current literature suggests a value of around 71 (km/s)/Mpc, and I measure a value of 74 (km/s)/Mpc with my method. Are the values consistent? The only correct answer, given this information, is this: there is no way to know.
Suppose I augment this information with reported uncertainties: the current literature suggests a value of around 71 ± 2.5 (km/s)/Mpc, and my method has measured a value of 74 ± 5 (km/s)/Mpc. Now are the values consistent? That is a question that can be quantitatively answered.
In visualization of data and results, showing these errors effectively can make a plot convey much more complete information.
Basic Errorbars
A basic errorbar can be created with a single Matplotlib function call:
End of explanation
plt.errorbar(x, y, yerr=dy, fmt='o', color='black',
             ecolor='lightgray', elinewidth=3, capsize=0);
Explanation: Here the fmt is a format code controlling the appearance of lines and points, and has the same syntax as the shorthand used in plt.plot, outlined in Simple Line Plots and Simple Scatter Plots.
In addition to these basic options, the errorbar function has many options to fine-tune the outputs. Using these additional options you can easily customize the aesthetics of your errorbar plot.
End of explanation
# GaussianProcessRegressor performs the Gaussian process regression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# define the model and draw some data
model = lambda x: x * np.sin(x)
xdata = np.array([1, 3, 5, 6, 8])
ydata = model(xdata)

# Compute the Gaussian process fit with an RBF plus white-noise kernel
kernel = 1.0 * RBF(length_scale=10.0, length_scale_bounds=(1e-2, 1e3)) \
    + WhiteKernel(noise_level=1e-5, noise_level_bounds=(1e-10, 1e+1))
gp = GaussianProcessRegressor(kernel=kernel, alpha=0.0)
gp.fit(xdata[:, np.newaxis], ydata)

xfit = np.linspace(0, 10, 1000)
yfit, y_cov = gp.predict(xfit[:, np.newaxis], return_cov=True)
dyfit = 2 * np.sqrt(np.diag(y_cov))  # 2*sigma ~ 95% confidence region
Explanation: In addition to these options, you can also specify horizontal errorbars (xerr), one-sided errorbars, and many other variants. For more information on the options available, refer to the docstring of plt.errorbar.
Continuous Errors
In some situations it is desirable to show errorbars on continuous quantities. Though Matplotlib does not have a built-in convenience routine for this type of application, it is relatively easy to combine primitives like plt.plot and plt.fill_between to get a useful result.
Here we'll perform a simple Gaussian process regression, using the Scikit-Learn API (see the scikit-learn docs). This is a method of fitting a very flexible non-parametric function to data with a continuous measure of the uncertainty. We won't delve into the details of Gaussian process regression at this point, but will focus instead on how you might visualize such a continuous error measurement:
End of explanation
# Visualize the result
plt.plot(xdata, ydata, 'or')
plt.plot(xfit, model(xfit), '-', color='gray')
plt.fill_between(xfit, yfit - dyfit, yfit + dyfit,
                 alpha=0.5, color='gray')
plt.xlim(0, 10);
Explanation: We now have xfit, yfit, and dyfit, which sample the continuous fit to our data. We could pass these to the plt.errorbar function as above, but we don't really want to plot 1,000 points with 1,000 errorbars. Instead, we can use the plt.fill_between function with a light color to visualize this continuous error:
End of explanation
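The text above mentions horizontal and one-sided errorbars without demonstrating them, so here is a small supplementary sketch (not part of the original example). It only uses standard plt.errorbar arguments (xerr, a 2xN yerr, uplims); the data and error values are made up for illustration.

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 20)
y = np.sin(x)
yerr_lower = 0.2 + 0.1 * np.random.rand(20)   # asymmetric errors: lower bound...
yerr_upper = 0.4 + 0.1 * np.random.rand(20)   # ...and upper bound

# horizontal errorbars via xerr, asymmetric vertical errors via a (2, N) sequence
plt.errorbar(x, y, xerr=0.3, yerr=[yerr_lower, yerr_upper], fmt='.k', ecolor='gray')

# one-sided errorbars: uplims marks points whose value is only an upper limit
plt.errorbar(x, y - 1.5, yerr=0.3, uplims=True, fmt='.b')
plt.show()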
1,096
Given the following text description, write Python code to implement the functionality described below step by step
Description:
 Bar plot demo
This example shows you how to make a bar plot using the psyplot.project.ProjectPlotter.barplot method.
Step1: By default, all bars have the same width. You can, however, change that by setting the widths keyword to data
Step2: Or you can make a stacked plot
Python Code: import psyplot.project as psy
%matplotlib inline
%config InlineBackend.close_figures = False

axes = iter(psy.multiple_subplots(2, 2, n=3))
for var in ['t2m', 'u', 'v']:
    psy.plot.barplot(
        'demo.nc',  # netCDF file storing the data
        name=var,  # one plot for each variable
        y=[0, 1],  # two bars in total
        z=0, x=0,  # choose latitude and longitude as dimensions
        ylabel="{desc}",  # use the longname and units on the y-axis
        ax=next(axes),
        color='coolwarm', xticklabels='%B %Y',
        legendlabels='latitude %(y)1.2f $^\circ$N',
        legend='upper left',
        title='equally spaced'
    )
bars = psy.gcp(True)

bars.show()
Explanation: Bar plot demo
This example shows you how to make a bar plot using the psyplot.project.ProjectPlotter.barplot method.
End of explanation
bars(name='u').update(widths='data', xticks='month', title='data spaced')

bars.show()
Explanation: By default, all bars have the same width. You can, however, change that by setting the widths keyword to data
End of explanation
bars(name='v').update(plot='stacked', title='stacked')

bars.show()

psy.close('all')
Explanation: Or you can make a stacked plot
End of explanation
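As a small follow-up sketch (not part of the original demo): if run before the psy.close('all') call above, you can restyle all three bar plots at once by calling update on the whole project rather than on a name-based subselection. The keywords reuse ones shown above; 'viridis' is just an assumed example colormap name accepted by the color keyword.

import psyplot.project as psy

bars = psy.gcp(True)  # the current main project holding the three bar plots
bars.update(color='viridis', legend='lower right')  # applies to every plotter in the project
bars.show()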
1,097
Given the following text description, write Python code to implement the functionality described below step by step
Description:
 PDF Analysis Tutorial
Introduction
This tutorial demonstrates how to acquire a multidimensional pair distribution function (PDF) from both a flat field electron diffraction pattern and a scanning electron diffraction data set.
The data is from an open-source paper by Shanmugam et al. [1] that is used as a reference standard. It is an amorphous 18 nm SiO2 film. The scanning electron diffraction data set is a scan of a polycrystalline gold reference standard with 128x128 real space pixels and 256x256 diffraction space pixels.
The implementation also initially followed Shanmugam et al.
[1] Shanmugam, J., Borisenko, K. B., Chou, Y. J., & Kirkland, A. I. (2017). eRDF Analyser: An interactive GUI for electron reduced density function analysis. SoftwareX, 6, 185-192.
Step1: <a id='loa'></a> 1. Loading and Inspection
Load the diffraction data line profile
Step2: For now, the code requires navigation dimensions in the reduced intensity signal, so two size-1 dimensions are created.
Step3: Set the diffraction pattern calibration. Note that pyXem uses a calibration to $s = \frac{1}{d} = 2\frac{\sin{\theta}}{\lambda}$.
Step4: Plot the radial profile
Step5: <a id='ri'></a> 2. Acquiring a Reduced Intensity
Acquire a reduced intensity (also called a structure factor) from the radial profile. The structure factor is what will subsequently be transformed into a PDF through a Fourier transform.
The structure factor $\phi(s)$ is acquired by fitting a background scattering factor to the data, and then transforming the data by
Step6: We then fit an electron scattering factor to the profile. To do this, we need to define a list of elements and their respective atomic fractions.
Step7: Then we will fit a background scattering factor. The scattering factor parametrisation used here is that specified by Lobato and Van Dyck [2]. The plot_fit parameter ensures we check the fitted profile.
[2] Lobato, I., & Van Dyck, D. (2014). An accurate parameterization for scattering factors, electron densities and electrostatic potentials for neutral atoms that obey all physical constraints. Acta Crystallographica Section A: Foundations and Advances, 70(6), 636-649.
Step8: That's clearly a terrible fit! This is because we're trying to fit the beam stop. To avoid this, we specify to fit to the 'tail end' of the data by specifying a minimum and maximum scattering angle range. This is generally recommended, as electron scattering factors tend to not include inelastic scattering, which means the factors are rarely perfect fits.
Step9: That's clearly much much better. Always inspect your fit. Finally, we calculate the reduced intensity itself.
Step10: If it seems like the reduced intensity is not oscillating around 0 at high s, you should try fitting with a larger s_min. This generally solves the issue.
<a id='dri'></a> 4. Damping the Reduced Intensity
The reduced intensity acquired above does not go to zero at high s as it should, because the maximum acquired scattering vector is not very high. This would result in significant oscillation in the PDF due to a discontinuity in the Fourier-transformed data. To combat this, the reduced intensity is damped. In the X-ray community, common damping functions are the Lorch function and an exponential damping function. Both are supported here. It is worth noting that damping does reduce the resolution in r in the PDF.
Step11: Additionally, it is recommended to damp the low s regime.
We use an error function to do that
Step12: If the function ends up overdamped, you can simply reacquire the reduced intensity using
Step13: <a id='pdf'></a> 5. Acquiring a PDF
Finally, a PDF is acquired from the damped reduced intensity. This is done by a Fourier sine transform. To ignore parts of the scattering data that are too noisy, you can set a minimum and maximum scattering angle for the transform.
First, we initialise a PDFGenerator1D object.
Step14: Specify a minimum and maximum scattering angle. The maximum must be equivalent to the Lorch function s_max if the Lorch function is used to damp. Otherwise the Lorch function damping can cause artifacts in the PDF.
Step15: Finally, we get the PDF. r_max specifies the maximum real space distance we want to interpret.
Step16: The PDF can then be saved.
Python Code: %matplotlib inline
import hyperspy.api as hs
import pyxem as pxm
import numpy as np
Explanation: PDF Analysis Tutorial
Introduction
This tutorial demonstrates how to acquire a multidimensional pair distribution function (PDF) from both a flat field electron diffraction pattern and a scanning electron diffraction data set. The data is from an open-source paper by Shanmugam et al. [1] that is used as a reference standard. It is an amorphous 18 nm SiO2 film. The scanning electron diffraction data set is a scan of a polycrystalline gold reference standard with 128x128 real space pixels and 256x256 diffraction space pixels. The implementation also initially followed Shanmugam et al.
[1] Shanmugam, J., Borisenko, K. B., Chou, Y. J., & Kirkland, A. I. (2017). eRDF Analyser: An interactive GUI for electron reduced density function analysis. SoftwareX, 6, 185-192.
This functionality has been checked to run in pyxem-0.13.0 (March 2021). Bugs are always possible, do not trust the code blindly, and if you experience any issues please report them here: https://github.com/pyxem/pyxem-demos/issues
Contents
<a href='#loa'> Loading & Inspection</a>
<a href='#rad'> Acquiring a radial profile</a>
<a href='#ri'> Acquiring a Reduced Intensity</a>
<a href='#dri'> Damping the Reduced Intensity</a>
<a href='#pdf'> Acquiring a PDF</a>
Import pyXem and other required libraries
End of explanation
rp = hs.load('./data/08/amorphousSiO2.hspy')
rp.set_signal_type('electron_diffraction')
Explanation: <a id='loa'></a> 1. Loading and Inspection
Load the diffraction data line profile
End of explanation
rp = pxm.signals.ElectronDiffraction1D([[rp.data]])
Explanation: For now, the code requires navigation dimensions in the reduced intensity signal, so two size-1 dimensions are created.
End of explanation
calibration = 0.00167
rp.set_diffraction_calibration(calibration=calibration)
Explanation: Set the diffraction pattern calibration. Note that pyXem uses a calibration to $s = \frac{1}{d} = 2\frac{\sin{\theta}}{\lambda}$.
End of explanation
rp.plot()
Explanation: Plot the radial profile
End of explanation
rigen = pxm.generators.ReducedIntensityGenerator1D(rp)
Explanation: <a id='ri'></a> 2. Acquiring a Reduced Intensity
Acquire a reduced intensity (also called a structure factor) from the radial profile. The structure factor is what will subsequently be transformed into a PDF through a Fourier transform.
The structure factor $\phi(s)$ is acquired by fitting a background scattering factor to the data, and then transforming the data by:
$$\phi(s) = \frac{I(s) - N\Delta c_{i}f_{i}^{2}}{N\Delta c_{i}^{2}f_{i}^{2}}$$
where s is the scattering vector, $c_{i}$ and $f_{i}$ the atomic fraction and scattering factor respectively of each element in the sample, and N is a fitted parameter to the intensity.
To acquire the reduced intensity, we first initialise a ReducedIntensityGenerator1D object.
End of explanation
elements = ['Si','O']
fracs = [0.333,0.667]
Explanation: We then fit an electron scattering factor to the profile. To do this, we need to define a list of elements and their respective atomic fractions.
End of explanation
rigen.fit_atomic_scattering(elements,fracs,scattering_factor='lobato',plot_fit=True,iterpath='serpentine')
Explanation: Then we will fit a background scattering factor. The scattering factor parametrisation used here is that specified by Lobato and Van Dyck [2]. The plot_fit parameter ensures we check the fitted profile.
[2] Lobato, I., & Van Dyck, D. (2014).
An accurate parameterization for scattering factors, electron densities and electrostatic potentials for neutral atoms that obey all physical constraints. Acta Crystallographica Section A: Foundations and Advances, 70(6), 636-649.
End of explanation
rigen.set_s_cutoff(s_min=1.5,s_max=4)
rigen.fit_atomic_scattering(elements,fracs,scattering_factor='lobato',plot_fit=True,iterpath='serpentine')
Explanation: That's clearly a terrible fit! This is because we're trying to fit the beam stop. To avoid this, we specify to fit to the 'tail end' of the data by specifying a minimum and maximum scattering angle range. This is generally recommended, as electron scattering factors tend to not include inelastic scattering, which means the factors are rarely perfect fits.
End of explanation
ri = rigen.get_reduced_intensity()
ri.plot()
Explanation: That's clearly much much better. Always inspect your fit. Finally, we calculate the reduced intensity itself.
End of explanation
ri.damp_exponential(b=0.1)
ri.plot()
ri.damp_lorch(s_max=4)
ri.plot()
Explanation: If it seems like the reduced intensity is not oscillating around 0 at high s, you should try fitting with a larger s_min. This generally solves the issue.
<a id='dri'></a> 4. Damping the Reduced Intensity
The reduced intensity acquired above does not go to zero at high s as it should, because the maximum acquired scattering vector is not very high. This would result in significant oscillation in the PDF due to a discontinuity in the Fourier-transformed data. To combat this, the reduced intensity is damped. In the X-ray community, common damping functions are the Lorch function and an exponential damping function. Both are supported here. It is worth noting that damping does reduce the resolution in r in the PDF.
End of explanation
ri.damp_low_q_region_erfc(offset=4)
ri.plot()
Explanation: Additionally, it is recommended to damp the low s regime. We use an error function to do that.
End of explanation
ri = rigen.get_reduced_intensity()
Explanation: If the function ends up overdamped, you can simply reacquire the reduced intensity using:
End of explanation
pdfgen = pxm.generators.PDFGenerator1D(ri)
Explanation: <a id='pdf'></a> 5. Acquiring a PDF
Finally, a PDF is acquired from the damped reduced intensity. This is done by a Fourier sine transform. To ignore parts of the scattering data that are too noisy, you can set a minimum and maximum scattering angle for the transform.
First, we initialise a PDFGenerator1D object.
End of explanation
s_min = 0.
s_max = 4.
Explanation: Specify a minimum and maximum scattering angle. The maximum must be equivalent to the Lorch function s_max if the Lorch function is used to damp. Otherwise the Lorch function damping can cause artifacts in the PDF.
End of explanation
pdf = pdfgen.get_pdf(s_min=s_min, s_max=s_max, r_max=10)
pdf.plot()
Explanation: Finally, we get the PDF. r_max specifies the maximum real space distance we want to interpret.
End of explanation
pdf.save('Demo-PDF.hspy')
Explanation: The PDF can then be saved.
End of explanation
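As a supplementary sketch (not part of the original tutorial), the two damping choices discussed above can be compared side by side by reacquiring a fresh reduced intensity for each option before damping it, using only the methods already shown (get_reduced_intensity, damp_exponential, damp_lorch, plot). The b and s_max values below are illustrative only.

# Illustrative comparison of the two damping functions on independent copies
ri_exp = rigen.get_reduced_intensity()
ri_exp.damp_exponential(b=0.1)

ri_lorch = rigen.get_reduced_intensity()
ri_lorch.damp_lorch(s_max=4)

ri_exp.plot()
ri_lorch.plot()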
1,098
Given the following text description, write Python code to implement the functionality described below step by step
Description:
 ClickDetector use example
This algorithm detects the locations of impulsive noises (clicks and pops) on the input audio frame. It relies on LPC coefficients to inverse-filter the audio in order to attenuate the stationary part and enhance the prediction error (or excitation noise) [1]. After this, a matched filter is used to further enhance the impulsive peaks. The detection threshold is obtained from a robust estimate of the excitation noise power [2] plus a parametric gain value.
References:
[1] Vaseghi, S. V., & Rayner, P. J. W. (1990). Detection and suppression of impulsive noise in speech communication systems. IEE Proceedings I (Communications, Speech and Vision), 137(1), 38-46.
[2] Vaseghi, S. V. (2008). Advanced digital signal processing and noise reduction. John Wiley & Sons. Page 355.
Step1: Generating a click example
Let's start by degrading some audio files with clicks of different amplitudes
Step2: Let's listen to the clip to get an idea of how audible the clicks are
Step3: The algorithm
This algorithm outputs the start and end timestamps of the clicks. The following plots show how the algorithm performs in the previous examples
Python Code: import essentia.standard as es
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import Audio
from essentia import array as esarr
plt.rcParams["figure.figsize"] = (12, 9)

def compute(x, frame_size=1024, hop_size=512, **kwargs):
    clickDetector = es.ClickDetector(frameSize=frame_size, hopSize=hop_size,
                                     **kwargs)
    ends = []
    starts = []
    for frame in es.FrameGenerator(x, frameSize=frame_size,
                                   hopSize=hop_size, startFromZero=True):
        frame_starts, frame_ends = clickDetector(frame)

        for s in frame_starts:
            starts.append(s)
        for e in frame_ends:
            ends.append(e)

    return starts, ends
Explanation: ClickDetector use example
This algorithm detects the locations of impulsive noises (clicks and pops) on the input audio frame. It relies on LPC coefficients to inverse-filter the audio in order to attenuate the stationary part and enhance the prediction error (or excitation noise) [1]. After this, a matched filter is used to further enhance the impulsive peaks. The detection threshold is obtained from a robust estimate of the excitation noise power [2] plus a parametric gain value.
References:
[1] Vaseghi, S. V., & Rayner, P. J. W. (1990). Detection and suppression of impulsive noise in speech communication systems. IEE Proceedings I (Communications, Speech and Vision), 137(1), 38-46.
[2] Vaseghi, S. V. (2008). Advanced digital signal processing and noise reduction. John Wiley & Sons. Page 355.
End of explanation
fs = 44100.

audio_dir = '../../audio/'
audio = es.MonoLoader(filename='{}/{}'.format(audio_dir,
                      'recorded/vignesh.wav'),
                      sampleRate=fs)()

originalLen = len(audio)
jumpLocation1 = int(originalLen / 4.)
jumpLocation2 = int(originalLen / 2.)
jumpLocation3 = int(originalLen * 3 / 4.)

audio[jumpLocation1] += .5
audio[jumpLocation2] += .15
audio[jumpLocation3] += .05

groundTruth = esarr([jumpLocation1, jumpLocation2, jumpLocation3]) / fs

for point in groundTruth:
    l1 = plt.axvline(point, color='g', alpha=.5)

times = np.linspace(0, len(audio) / fs, len(audio))
plt.plot(times, audio)
l1.set_label('Click locations')
plt.legend()
plt.title('Signal with artificial clicks of different amplitudes')
Explanation: Generating a click example
Let's start by degrading some audio files with clicks of different amplitudes
End of explanation
Audio(audio, rate=fs)
Explanation: Let's listen to the clip to get an idea of how audible the clicks are
End of explanation
starts, ends = compute(audio)

fig, ax = plt.subplots(len(groundTruth))
plt.subplots_adjust(hspace=.4)
for idx, point in enumerate(groundTruth):
    l1 = ax[idx].axvline(starts[idx], color='r', alpha=.5)
    ax[idx].axvline(ends[idx], color='r', alpha=.5)
    l2 = ax[idx].axvline(point, color='g', alpha=.5)
    ax[idx].plot(times, audio)
    ax[idx].set_xlim([point-.001, point+.001])
    ax[idx].set_title('Click located at {:.2f}s'.format(point))

    fig.legend((l1, l2), ('Detected click', 'Ground truth'), 'upper right')
Explanation: The algorithm
This algorithm outputs the start and end timestamps of the clicks. The following plots show how the algorithm performs in the previous examples
End of explanation
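As a small follow-up sketch (not in the original example), each artificial click can be checked against the detected start times using a simple tolerance. The 5 ms tolerance is an arbitrary choice for illustration; groundTruth and starts are the variables defined above.

import numpy as np

tolerance = 0.005  # seconds; arbitrary matching window
for expected in groundTruth:
    hits = [s for s in starts if abs(s - expected) <= tolerance]
    status = 'detected' if hits else 'missed'
    print('Click at {:.3f}s: {}'.format(float(expected), status))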
1,099
Given the following text description, write Python code to implement the functionality described below step by step Description: automaton.compose The (accessible part of the) composition of two transducers ($A_1$ and $A_2$). Preconditions Step1: The result of the composition has a useless state. Note that only the accessible part has been computed. Step2: Translations The composition of a "translator" from French to English with one from English to Spanish is analogous to the computation of the French to Spanish "translator". Step3: Relying on "string-letters" This example follows the same path, but using letters that are strings.
Python Code: import vcsn
ctx1 = vcsn.context("lat<lal<char(ab)>, lal<char(jk)>>, b")
ctx2 = vcsn.context("lat<lal<char(jk)>, lal<char(xy)>>, b")
a1 = ctx1.expression("(a|k)(a|j) + (b|k)*").automaton()
a1
a2 = ctx2.expression("(k|y)(k|x)*").automaton()
a2
Explanation: automaton.compose
The (accessible part of the) composition of two transducers ($A_1$ and $A_2$).
Preconditions:
- $A_1$ and $A_2$ are transducers
- $A_1$ has at least 2 tapes
- The second tape of $A_1$ must have the same labelset as the first tape of $A_2$
Postconditions:
- $\forall u \in alphabet(A_1)^*, \; A_2.eval(A_1.eval(u)) = A_1.compose(A_2).eval(u)$
See also:
- automaton.insplit
Examples
End of explanation
a1.compose(a2)
Explanation: The result of the composition has a useless state. Note that only the accessible part has been computed.
End of explanation
%%file fr2en
chien|dog
chat|cat
ctx = vcsn.context("lat<lan<char>, lan<char>>, b")
fr_to_en = ctx.trie('fr2en')
fr_to_en
en_to_es = ctx.expression("dog|perro + cat|gato").automaton()
en_to_es
fr_to_es = fr_to_en.compose(en_to_es)
fr_to_es
Explanation: Translations
The composition of a "translator" from French to English with one from English to Spanish is analogous to the computation of the French to Spanish "translator".
End of explanation
import vcsn
ctx = vcsn.context("lat<lan<string>, lan<string>>, b")
ctx
%%file fr2en
'chien'|'dog'
'chat'|'cat'
'oiseau'|'bird'
'souris'|'mouse'
'souris'|'mice'
fr2en = ctx.trie('fr2en')
fr2en
%%file en2es
'dog'|'perro'
'cat'|'gato'
'mouse'|'ratón'
'mice'|'ratones'
en2es = ctx.trie('en2es')
en2es
fr2en.compose(en2es)
Explanation: Relying on "string-letters"
This example follows the same path, but using letters that are strings.
End of explanation
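A small supplementary sketch (not part of the original page): the first composition above keeps a useless state because only the accessible part is computed. If you want to discard states that are not both accessible and coaccessible, vcsn automata provide a trim operation, which can be chained onto the composition.

# Remove the useless (non-coaccessible) state from the first composition
a1.compose(a2).trim()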