markdown (stringlengths 0-37k) | code (stringlengths 1-33.3k) | path (stringlengths 8-215) | repo_name (stringlengths 6-77) | license (stringclasses, 15 values)
---|---|---|---|---|
DNA processing
1.1 Solution: random sequence v1 | from random import choice
# Function definition
def cadena_al_azar(n):
    adn = ""
    for i in range(n):
        adn += choice("acgt")
    return adn
# Usage examples
print(cadena_al_azar(1))
print(cadena_al_azar(1))
print(cadena_al_azar(1))
print(cadena_al_azar(1))
print(cadena_al_azar(10))
print(cadena_al_azar(10))
print(cadena_al_azar(10))
print(cadena_al_azar(10)) | ipynb/23-ProcesamientoDeTexto/Texto.ipynb | usantamaria/iwi131 | cc0-1.0 |
DNA processing
1.1 Solution: random sequence v2 | from random import choice
# Function definition
def cadena_al_azar(n):
    bases = []
    for i in range(n):
        bases.append(choice("acgt"))
    adn = "".join(bases)
    return adn
# Usage examples
print(cadena_al_azar(1))
print(cadena_al_azar(1))
print(cadena_al_azar(1))
print(cadena_al_azar(1))
print(cadena_al_azar(10))
print(cadena_al_azar(10))
print(cadena_al_azar(10))
print(cadena_al_azar(10)) | ipynb/23-ProcesamientoDeTexto/Texto.ipynb | usantamaria/iwi131 | cc0-1.0 |
Text processing
DNA processing: complementary sequence
Write the function complementaria(s) that returns the complementary sequence of s: the complement of "a" is "t" (and vice versa), and the complement of "c" is "g" (and vice versa).
Python
cadena = 'cagcccatgaggcagggtg'
print(complementaria(cadena))
'gtcgggtactccgtcccac'
Text processing
DNA processing: complementary sequence
Tasks? | # Student solution
def cadena_(n):
    adn = ""
    for i in range(n):
        adn += choice("acgt")
    return adn | ipynb/23-ProcesamientoDeTexto/Texto.ipynb | usantamaria/iwi131 | cc0-1.0 |
Text processing
Solution: complementary sequence v1 | def complementaria(adn):
    rna = ""
    for base in adn:
        if base=="a":
            rna += "t"
        elif base=="t":
            rna += "a"
        elif base=="c":
            rna += "g"
        else:
            rna += "c"
    return rna
adn = cadena_al_azar(20)
print(adn)
print(complementaria(adn)) | ipynb/23-ProcesamientoDeTexto/Texto.ipynb | usantamaria/iwi131 | cc0-1.0 |
Text processing
Solution: complementary sequence v2 | def complementaria(adn):
    pares = {"a":"t", "t":"a", "c":"g", "g":"c"}
    rna = ""
    for base in adn:
        rna += pares[base]
    return rna
adn = cadena_al_azar(20)
print(adn)
print(complementaria(adn)) | ipynb/23-ProcesamientoDeTexto/Texto.ipynb | usantamaria/iwi131 | cc0-1.0 |
Text processing
Solution: complementary sequence v3 | def complementaria(adn):
    rna = adn.replace("a","T").replace("t","A").replace("c","G").replace("g","C")
    return rna.lower()
adn = cadena_al_azar(20)
print(adn)
print(complementaria(adn)) | ipynb/23-ProcesamientoDeTexto/Texto.ipynb | usantamaria/iwi131 | cc0-1.0 |
I. Creating a NetworkModel | # Create the model from a PSSTCase, optionally passing a sel_bus
m = NetworkModel(case, sel_bus='Bus1') | docs/notebooks/interactive_visuals/Demo.ipynb | kdheepak/psst | mit |
In the __init__, the NetworkModel... | display(m.case) # saves the case
display(m.network) # creates a PSSTNetwork
display(m.G) # stores the networkX graph (an attribute of the PSSTNetwork)
display(m.model) # builds/solves the model
# Creates df of x,y positions for each node (bus, load, gen), based off self.network.positions
m.all_pos.head(n=10)
# Creates a df of start and end x,y positions for each edge, based off self.G.edges()
m.all_edges.head(n=10) | docs/notebooks/interactive_visuals/Demo.ipynb | kdheepak/psst | mit |
The sel_bus and view_buses attributes | # `sel_bus` is a single bus, upon which the visualization is initially centered.
# It can be changed programmatically, or via the dropdown menu.
m.sel_bus
# At first, it is the only bus in view_buses.
# More buses get added to view_buses as they are clicked.
m.view_buses | docs/notebooks/interactive_visuals/Demo.ipynb | kdheepak/psst | mit |
II. Creating a NetworkView from the model | # Create the view from the model
# (It can, alternatively, be created from a case.)
v = NetworkView(model=m)
v | docs/notebooks/interactive_visuals/Demo.ipynb | kdheepak/psst | mit |
III. Generating the x,y data for the view
Whenever the view_buses list gets changed, it triggers the callback _callback_view_change
This function first calls subset_positions and subset_edges
Then, the subsetted DataFrames get segregated into separate ones for bus, gen, and load
Finally, the x,y coordinates are extracted into a format the NetworkView can use. | # The subsetting that occurs is all based on `view_buses`
m.view_buses | docs/notebooks/interactive_visuals/Demo.ipynb | kdheepak/psst | mit |
The subset_positions() call | # Subset positions creates self.pos
m.pos | docs/notebooks/interactive_visuals/Demo.ipynb | kdheepak/psst | mit |
The function looks like this:
python
def subset_positions(self):
    """Subset self.all_pos to include only nodes adjacent to those in view_buses list."""
    nodes = [list(self.G.adj[item].keys()) for item in self.view_buses]  # get list of nodes adj to selected buses
    nodes = set(itertools.chain.from_iterable(nodes))  # chain lists together, eliminate duplicates w/ set
    nodes.update(self.view_buses)  # Add the view_buses themselves to the set
    return self.all_pos.loc[nodes]  # Subset df of all positions to include only desired nodes.
The subset_edges() call | # Subset edges creates self.edges
m.edges | docs/notebooks/interactive_visuals/Demo.ipynb | kdheepak/psst | mit |
The function looks like this:
python
def subset_edges(self):
    """Subset all_edges, with G.edges() info, based on view_buses list."""
    edge_list = self.G.edges(nbunch=self.view_buses)  # get edges of view_buses as list of tuples
    edges_fwd = self.all_edges.loc[edge_list]  # query all_pos with edge_list
    edge_list_rev = [tuple(reversed(tup)) for tup in edge_list]  # reverse order of each tuple
    edges_rev = self.all_edges.loc[edge_list_rev]  # query all_pos again, with reversed edge_list
    edges = edges_fwd.append(edges_rev).dropna(subset=['start_x'])  # combine results, dropping false hits
    return edges
If you want a closer look... | m.view_buses = ['Bus2','Bus3']
edge_list = m.G.edges(nbunch=m.view_buses) # get edges of view_buses as list of tuples
edge_list
edges_fwd = m.all_edges.loc[edge_list] # query all_pos with edge_list
edges_fwd
edge_list_rev = [tuple(reversed(tup)) for tup in edge_list] # reverse order of each tuple
edge_list_rev
edges_rev = m.all_edges.loc[edge_list_rev] # query all_pos again, with reversed edge_list
edges_rev
edges = edges_fwd.append(edges_rev).dropna(subset=['start_x']) # combine results, dropping false hits
edges | docs/notebooks/interactive_visuals/Demo.ipynb | kdheepak/psst | mit |
Segregating DataFrames and extracting data
The DataFrames are segregated into bus, gen, and load, using the names in case.bus, case.gen, and case.load
x,y data is extracted, ready to be plotted by NetworkView
Extracting bus data looks like this:
python
bus_pos = self.pos[self.pos.index.isin(self.case.bus_name)]
self.bus_x_vals = bus_pos['x']
self.bus_y_vals = bus_pos['y']
self.bus_names = list(bus_pos.index)
(Similar for the other nodes) | print("x_vals: ", m.bus_x_vals)
print("y_vals: ", m.bus_y_vals)
print("names: ", m.bus_names) | docs/notebooks/interactive_visuals/Demo.ipynb | kdheepak/psst | mit |
Extracting branch data looks like this:
```python
edges = self.edges.reset_index()
_df = edges.loc[edges.start.isin(self.case.bus_name) & edges.end.isin(self.case.bus_name)]
self.bus_x_edges = [tuple(edge) for edge in _df[['start_x', 'end_x']].values]
self.bus_y_edges = [tuple(edge) for edge in _df[['start_y', 'end_y']].values]
```
(Similar for the other edges) | print("bus_x_edges:")
print(m.bus_x_edges)
print("\nbus_y_edges:")
print(m.bus_y_edges) | docs/notebooks/interactive_visuals/Demo.ipynb | kdheepak/psst | mit |
Data | tempsC = np.array([26, 27, 29, 31, 33, 35, 37])
voltages = np.array([2,3,6,7,9,11,12.5,14,16,18,20,22,23.5,26,27.5,29,31,32.5,34,36])
voltages = np.array([1.826,3.5652,5.3995,7.2368,9.0761,10.8711,12.7109,14.5508,16.3461,18.1414,19.9816,21.822,23.6174,25.4577,27.253,29.0935,30.889,32.7924,34.5699,35.8716])
measured_psi1 = np.array([[11.4056,20.4615,25.4056,27.9021,29.028,29.6154,30.2517,30.8392,31.1329,31.5245,31.8671,32.014,32.3077,32.5034,32.7972,32.9929,33.1399,33.3357,33.4336,33.6783]])
#This block just converts units
fields = np.array([entry/t for entry in voltages])  # note: t is not defined in this cell; presumably the sample thickness from earlier in the notebook, so field = V/t
KC = 273.15
tempsK = np.array([entry+KC for entry in tempsC]) #Celsius to Kelvin
# measured_psi1 = np.array([[11,20.5,25.5,27.5,29,30,30.5,31,31.25,31.5,31.75,32,32.25,32.5,32.75,33,33.25,33.5,33.75,34]])
# measured_psi2 = np.array([[7.6, 11.5, 22.3, 24.7, 27.8, 29.4, 30.1, 30.7, 31.2, 31.6, 31.9, 32.2, 32.4, 32.6, 32.7, 32.8, 32.9, 32.9, 33.0, 33.1]])
# measured_psi3 = np.array([[4.7, 7.3, 15.5, 18.1, 22.7, 25.9, 27.5, 28.6, 29.6, 30.3, 30.8, 31.2, 31.5, 31.8, 32.0, 32.1, 32.3, 32.4, 32.5, 32.6]])
# measured_psi4 = np.array([[3.5, 5.4, 11.5, 13.8, 18.1, 21.9, 24.1, 25.9, 27.5, 28.7, 29.5, 30.1,30.5, 31.0, 31.3, 31.5, 31.7, 31.9, 32.0, 32.2]])
# measured_psi5 = np.array([[2.5, 3.7, 8.0, 9.6, 12.9, 16.3, 18.7, 20.9, 23.4, 25.3, 26.8, 27.9, 28.5, 29.4, 29.8, 30.2, 30.6, 30.8, 31.1, 31.3]])
# measured_psi6 = np.array([[1.9, 2.9, 6.1, 7.3, 9.8, 12.6, 14.7, 16.8, 19.4, 21.7, 23.6, 25.2, 26.1, 27.4, 28.0, 28.6, 29.2, 29.5, 29.9, 30.3]])
# measured_psi7 = np.array([[1.5, 2.3, 4.7, 5.6, 7.5, 9.6, 11.2, 12.9, 15.2, 17.5, 19.6, 21.4, 22.7, 24.4, 25.37, 26.1, 27.02, 27.5, 28.0, 28.6]])
# AllPsi = np.concatenate((measured_psi1,measured_psi2,measured_psi3,measured_psi4,measured_psi5,measured_psi6,measured_psi7),axis=0) | ElectroOptics/MinimizeAttempt.ipynb | JAmarel/LiquidCrystals | mit |
Calculate the Boltzmann Factor and the Partition Function
$$ Boltz() \text{ returns: } e^{\frac{-U}{k_bT}}\,\sin\theta $$ | def Boltz(theta,phi,T,p0k,alpha,E):
    """Compute the integrand for the Boltzmann factor.
    Returns
    -------
    A function of theta,phi,T,p0k,alpha,E to be used within dblquad
    """
    return np.exp((1/T)*p0k*E*np.sin(theta)*np.cos(phi)*(1+alpha*E*np.cos(phi)))*np.sin(theta) | ElectroOptics/MinimizeAttempt.ipynb | JAmarel/LiquidCrystals | mit |
Calculate the Tilt Angle $\psi$
$$ numerator() \text{ returns: } \sin 2\theta \, \cos\phi \; e^{\frac{-U}{k_bT}}\,\sin\theta $$ | def numerator(theta,phi,T,p0k,alpha,E):
    boltz = Boltz(theta,phi,T,p0k,alpha,E)
    return np.sin(2*theta)*np.cos(phi)*boltz | ElectroOptics/MinimizeAttempt.ipynb | JAmarel/LiquidCrystals | mit |
$$ denominator() \text{ returns: } \left(\cos^2\theta - \sin^2\theta \, \cos^2\phi\right) e^{\frac{-U}{k_bT}}\,\sin\theta $$ | def denominator(theta,phi,T,p0k,alpha,E):
    boltz = Boltz(theta,phi,T,p0k,alpha,E)
    return ((np.cos(theta)**2) - ((np.sin(theta)**2) * (np.cos(phi)**2)))*boltz | ElectroOptics/MinimizeAttempt.ipynb | JAmarel/LiquidCrystals | mit |
$$ \tan(2\psi) = \frac{\int_{\theta_{min}}^{\theta_{max}} \int_0^{2\pi} \sin 2\theta \, \cos\phi \; e^{\frac{-U}{k_bT}}\,\sin\theta \, d\theta \, d\phi}{\int_{\theta_{min}}^{\theta_{max}} \int_0^{2\pi} \left(\cos^2\theta - \sin^2\theta \, \cos^2\phi\right) e^{\frac{-U}{k_bT}}\,\sin\theta \, d\theta \, d\phi} $$ | def compute_psi(T,p0k,alpha,E,thetamin,thetamax):
    """Computes the tilt angle (psi) by use of our tan(2psi) equation
    Returns
    -------
    Float:
        The statistical tilt angle with conditions T,p0k,alpha,E
    """
    avg_numerator, avg_numerator_error = dblquad(numerator, 0, 2*np.pi, lambda theta: thetamin, lambda theta: thetamax, args=(T,p0k,alpha,E))
    avg_denominator, avg_denominator_error = dblquad(denominator, 0, 2*np.pi, lambda theta: thetamin, lambda theta: thetamax, args=(T,p0k,alpha,E))
    psi = (1/2)*np.arctan(avg_numerator / (avg_denominator)) * (180 /(np.pi)) #Converting to degrees from radians and divide by two
    return psi | ElectroOptics/MinimizeAttempt.ipynb | JAmarel/LiquidCrystals | mit |
Least Square Fitting $\alpha$ and $\rho_0$ | def compute_error(xo,fields,T,thetamin,thetamax,measured_psi):
    """Computes the squared error for a pair of parameters by comparing it to all measured tilt angles
    at one temperature.
    This will be used with the minimization function; xo is a point that the minimization checks.
    Parameters/Conditions
    ----------
    xo:
        An array of the form [alpha*1e10, p0*1e30] (the parameters are scaled so the optimizer works with numbers of order one).
    Returns
    -------
    Float: Error
    """
    alpha = xo[0]/(1e10)
    p0 = xo[1]/(1e30)
    p0k = p0/1.3806488e-23
    computed_psi = np.array([compute_psi(T,p0k,alpha,E,thetamin,thetamax) for E in fields])
    Err = computed_psi - measured_psi
    ErrSqr = np.array([i**2 for i in Err])
    return np.sum(ErrSqr)*1e8 #Scaling the Squared Error up here seems to help with minimization precision. | ElectroOptics/MinimizeAttempt.ipynb | JAmarel/LiquidCrystals | mit |
It might be better to use the minimization function individually for each temperature range. The minimization function returns a minimization object, which gives extra information about the results. The two important entries are fun and x.
fun is the scalar value of the function that is being minimized. In our case fun is the squared error.
x is the solution array of the form [alpha*1e10, p0*1e30].
The reason it might be better to just minimize the squared-error function directly, instead of using the minimize_func written below, is that the minimize function is very picky about the initial guess. Also, the minimization tends to stop when the value of the objective is on the order of 10^-3.
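If that early stopping is the problem, one workaround is to pass a tighter tolerance and a larger iteration budget to minimize. The sketch below reuses compute_error, fields, tempsK and measured_psi1 from this notebook; thetamin and thetamax are assumed to be defined earlier, and the tolerance values are purely illustrative.

```python
from scipy.optimize import minimize  # same optimizer used elsewhere in this notebook

# Sketch: tighten the SLSQP stopping criteria so the optimizer keeps iterating
# on the scaled squared error instead of declaring convergence early.
guess = (2575, 2168)
bnds = ((1000, 2600), (200, 2400))
results = minimize(compute_error, guess,
                   args=(fields, tempsK[0], thetamin, thetamax, measured_psi1),
                   method='SLSQP', bounds=bnds,
                   tol=1e-10,                 # overall convergence tolerance (illustrative value)
                   options={'maxiter': 500})  # allow more iterations than the default
print(results.fun)   # final value of the scaled squared error
print(results.x)     # [alpha*1e10, p0*1e30] at the minimum
```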
Final Result for $\alpha$ and $\rho_0$
Right now everything below this might not work as well as manually guessing and checking. The idea for this section was to automate that process and just return our entire solution arrays at the end of the notebook. | def minimize_func(guess,fields,T,thetamin,thetamax,measured_psi,bnds):
    """A utility function that will help construct the alpha and p0 arrays later.
    Uses the imported minimize function and compute_error to best fit our parameters
    at a temperature.
    Parameters/Conditions
    ----------
    guess:
        The initial guess for minimize().
    Returns
    -------
    Array: [alpha,p0]
    """
    results = minimize(compute_error,guess,args=(fields,T,thetamin,thetamax,measured_psi),method = 'SLSQP',bounds = bnds)
    xres = np.array(dict(results.items())['x'])
    """Minimize returns a special minimization object that is similar to a dictionary, but not quite.
    xres grabs just the x result of the minimization object, which is the [alpha,p0] array that
    we care about."""
    alpha_results = xres[0]
    p0_results = xres[1]
    return np.array([alpha_results,p0_results])
guess = (2575,2168)
bnds = ((1000,2600),(200,2400))
results = minimize(compute_error,guess,args=(fields,tempsK[0],thetamin,thetamax,measured_psi1),method = 'TNC',bounds = bnds)
results
res = np.array(dict(results.items())['x'])
alpha = res[0]
p0 = res[1]
alpha = alpha*1e-4
p0 = p0/3.33564
print("alpha micro: " + str(alpha))
print('p0 debye: ' + str(p0))
#Minimization claims that it did not succeed. But the results were pretty good. I think it believes that it did not succeed because I have the squared error scaled up very high.
def solution(initial_guess,fields,tempsK,thetamin,thetamax,AllPsi,initial_bnds):
    """Constructs alpha and p0 arrays where each entry is the value of alpha, p0 at the corresponding temperature in
    tempsK. The guess and bounds are updated on each iteration of the loop to the previous values of alpha and p0.
    Alpha and p0 decrease, so this helps to cut down on the search range.
    Parameters/Conditions
    ----------
    initial_guess:
        The initial guess for minimize().
    initial_bnds:
        The initial bounds for minimize().
    Returns
    -------
    Array, Array: alpha array in micrometers, p0 array in debye
    """
    alpha = np.array([])
    p0 = np.array([])
    guess = initial_guess
    bnds = initial_bnds
    for i in range(len(tempsK)):
        res = minimize_func(guess,fields,tempsK[i],thetamin,thetamax,AllPsi[i],bnds)
        alpha = np.append(alpha,res[0])
        p0 = np.append(p0,res[1])
        guess = (res[0]-10,res[1]-10)
        bnds = ((initial_bnds[0][0],res[0]),(initial_bnds[1][0],res[1]))
    alpha = alpha*1e-4
    p0 = p0/(3.33564)
    return alpha,p0
initial_guess = (2575,2168)
initial_bnds = ((1000,2600),(200,2300))
alpha_micro,p0Debye = solution(initial_guess,fields,tempsK,thetamin,thetamax,AllPsi,initial_bnds) | ElectroOptics/MinimizeAttempt.ipynb | JAmarel/LiquidCrystals | mit |
Build an artificial dataset: starting from the string 'abcdefghijklmnopqrstuvwxyz', iteratively generate strings by swapping two characters at random positions. In this way the instances become progressively more dissimilar. | import random
def make_data(size):
    text = ''.join([chr(97+i) for i in range(26)])
    seqs = []
    def swap_two_characters(seq):
        '''define a function that swaps two characters at random positions in a string '''
        line = list(seq)
        id_i = random.randint(0,len(line)-1)
        id_j = random.randint(0,len(line)-1)
        line[id_i], line[id_j] = line[id_j], line[id_i]
        return ''.join(line)
    for i in range(size):
        text = swap_two_characters( text )
        seqs.append( text )
        print(text)
    return seqs
seqs = make_data(25) | examples/Sequence_example.ipynb | bgruening/EDeN | gpl-3.0 |
define a function that builds a graph from a string, i.e. the path graph with the characters as node labels | import networkx as nx
def sequence_to_graph(seq):
    '''convert a sequence into an EDeN-'compatible' graph,
    i.e. a graph with the attribute 'label' for every node and edge'''
    G = nx.Graph()
    for id,character in enumerate(seq):
        G.add_node(id, label = character )
        if id > 0:
            G.add_edge(id-1, id, label = '-')
    return G | examples/Sequence_example.ipynb | bgruening/EDeN | gpl-3.0 |
make a generator that yields graphs: generators are 'good' as they allow functional composition | def pre_process(iterable):
    for seq in iterable:
        yield sequence_to_graph(seq) | examples/Sequence_example.ipynb | bgruening/EDeN | gpl-3.0 |
initialize the vectorizer object with the desired 'resolution' | %%time
from eden.graph import Vectorizer
vectorizer = Vectorizer( complexity = 4 ) | examples/Sequence_example.ipynb | bgruening/EDeN | gpl-3.0 |
obtain an iterator over the sequences processed into graphs | %%time
graphs = pre_process( seqs ) | examples/Sequence_example.ipynb | bgruening/EDeN | gpl-3.0 |
compute the vector encoding of each instance in a sparse data matrix | %%time
X = vectorizer.transform( graphs )
print('Instances: %d ; Features: %d with an avg of %d features per instance' % (X.shape[0], X.shape[1], X.getnnz()/X.shape[0])) | examples/Sequence_example.ipynb | bgruening/EDeN | gpl-3.0 |
compute the pairwise similarity as the dot product between the vector representations of each sequence | from sklearn import metrics
K=metrics.pairwise.pairwise_kernels(X, metric='linear')
print(K) | examples/Sequence_example.ipynb | bgruening/EDeN | gpl-3.0 |
visualize it as a picture is worth thousand words... | import pylab as plt
plt.figure( figsize=(8,8) )
img = plt.imshow( K, interpolation='none', cmap=plt.get_cmap( 'YlOrRd' ) )
plt.show() | examples/Sequence_example.ipynb | bgruening/EDeN | gpl-3.0 |
2. Print all the numbers from 0 to 4: | for x in range(5):
    print(x) | Kursteilnehmer/Sven Millischer/06 /01 Rückblick For-Loop-Übungen.ipynb | barjacks/pythonrecherche | mit |
3. Print the numbers 3, 4 and 5: | for x in range(3, 6):
    print(x) | Kursteilnehmer/Sven Millischer/06 /01 Rückblick For-Loop-Übungen.ipynb | barjacks/pythonrecherche | mit |
4. Build a for loop that prints all the even numbers lower than 237. | numbers = [
951, 402, 984, 651, 360, 69, 408, 319, 601, 485, 980, 507, 725, 547, 544,
615, 83, 165, 141, 501, 263, 617, 865, 575, 219, 390, 984, 592, 236, 105, 942, 941,
386, 462, 47, 418, 907, 344, 236, 375, 823, 566, 597, 978, 328, 615, 953, 345,
399, 162, 758, 219, 918, 237, 412, 566, 826, 248, 866, 950, 626, 949, 687, 217,
815, 67, 104, 58, 512, 24, 892, 894, 767, 553, 81, 379, 843, 831, 445, 742, 717,
958, 609, 842, 451, 688, 753, 854, 685, 93, 857, 440, 380, 126, 721, 328, 753, 470,
743, 527
]
for x in numbers:
    if x in range(237):
        if (x % 2 == 0):
            print(x)
#Solution: | Kursteilnehmer/Sven Millischer/06 /01 Rückblick For-Loop-Übungen.ipynb | barjacks/pythonrecherche | mit |
5. Add up all the numbers in the list | sum(numbers)
#Solution: | Kursteilnehmer/Sven Millischer/06 /01 Rückblick For-Loop-Übungen.ipynb | barjacks/pythonrecherche | mit |
6. Add up only the numbers that are even | numbers = [
951, 402, 984, 651, 360, 69, 408, 319, 601, 485, 980, 507, 725, 547, 544,
615, 83, 165, 141, 501, 263, 617, 865, 575, 219, 390, 984, 592, 236, 105, 942, 941,
386, 462, 47, 418, 907, 344, 236, 375, 823, 566, 597, 978, 328, 615, 953, 345,
399, 162, 758, 219, 918, 237, 412, 566, 826, 248, 866, 950, 626, 949, 687, 217,
815, 67, 104, 58, 512, 24, 892, 894, 767, 553, 81, 379, 843, 831, 445, 742, 717,
958, 609, 842, 451, 688, 753, 854, 685, 93, 857, 440, 380, 126, 721, 328, 753, 470,
743, 527
]
new_list=[]
for elem in numbers:
    if elem % 2 == 0:
        new_list.append(elem)
sum(new_list) | Kursteilnehmer/Sven Millischer/06 /01 Rückblick For-Loop-Übungen.ipynb | barjacks/pythonrecherche | mit |
7. Use a for loop to print Hello World 5 times in a row | for x in range(5):
    print("Hello World")
#Solution | Kursteilnehmer/Sven Millischer/06 /01 Rückblick For-Loop-Übungen.ipynb | barjacks/pythonrecherche | mit |
8. Develop a program that finds all the numbers between 2000 and 3200 that are divisible by 7 but not by 5. The result should be printed on a single line. Tip: have a look at Python's comparison operators. | l=[]
for i in range(2000, 3200):
    if (i%7==0) and (i%5!=0):
        l.append(str(i))
print(','.join(l)) | Kursteilnehmer/Sven Millischer/06 /01 Rückblick For-Loop-Übungen.ipynb | barjacks/pythonrecherche | mit |
9. Write a for loop that converts the numbers in the following list from int to str. | lst = range(45,99)
new_list=[]
for elem in lst:
    new_list.append(str(elem))
print(new_list) | Kursteilnehmer/Sven Millischer/06 /01 Rückblick For-Loop-Übungen.ipynb | barjacks/pythonrecherche | mit |
10. Now write a program that replaces every digit 4 with the letter A and every digit 5 with the letter B. | newnewlist = []
for elem in new_list:
    if '4' in elem:
        elem = elem.replace('4', 'A')
    if '5' in elem:
        elem = elem.replace('5', 'B')
    newnewlist.append(elem)
newnewlist | Kursteilnehmer/Sven Millischer/06 /01 Rückblick For-Loop-Übungen.ipynb | barjacks/pythonrecherche | mit |
The pyradi toolkit is a Python toolkit to perform optical and infrared computational radiometry (flux flow) calculations.
Radiometry is the measurement and calculation of electromagnetic flux transfer for systems operating in the spectral region ranging from ultraviolet to microwaves. Indeed, these principles can be applied to electromagnetic radiation of any wavelength. This book only considers ray-based radiometry for incoherent radiation fields.
The briefly summarised information in this notebook is taken from my book, see the book for more details. | display(Image(filename='images/PM236.jpg')) | 03-Introduction-to-Radiometry.ipynb | NelisW/ComputationalRadiometry | mpl-2.0 |
Electromagnetic radiation can be modeled as a number of different phenomena: rays, electromagnetic waves, wavefronts, or particles. All of these models are mathematically related. The appropriate model to use depends on the task at hand. Either the electromagnetic wave model(developed by Maxwell) or the particle model (developed by Einstein) are used when most appropriate. The part of the electromagnetic spectrum normally considered in optical radiometry is as follows: | display(Image(filename='images/radiometry03.png')) | 03-Introduction-to-Radiometry.ipynb | NelisW/ComputationalRadiometry | mpl-2.0 |
The photon is a massless elementary particle and acts as the energy carrier for the electromagnetic wave.
Photon particles have discrete energy quanta proportional to the frequency of the electromagnetic energy, $Q = h\nu = hc/\lambda$, where $h$ is Planck's constant.
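As a quick numerical sketch of that relation (the 4 µm wavelength below is an arbitrary mid-infrared example, and the constants are rounded SI values):

```python
# Energy of a single photon, Q = h*c/wavelength
h = 6.626e-34        # Planck's constant [J.s]
c = 2.998e8          # speed of light [m/s]
wavelength = 4e-6    # arbitrary mid-infrared wavelength [m]
Q = h * c / wavelength
print('Photon energy at 4 um: {:.3e} J'.format(Q))   # about 5e-20 J
```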
Definitions
The following figure (expanded from Pinson) and table defines the key radiometry units. The difference operator '$d$' is used to denote 'a small quantity of ...'. This 'small quantity' of one variable is almost always related to a 'small quantity' of another variable in some physical dependency. For example, irradiance is defined as $E=d\Phi/dA$, which means that a small amount of flux $d\Phi$ impinges on a small area $dA$, resulting in an irradiance of $E$. 'Small' is defined as the extent or domain over which the quantity, or any of its dependent quantities, does not vary significantly. Because any finite-sized quantity varies over a finite-sized domain, the $d$ operation is only valid over an infinitely small domain $dA=\lim_{\Delta A \to 0}\Delta A$. The difference operator, written in the form of a differential such as $E=d\Phi/dA$, is not primarily meant to mean differentiation in the mathematical sense. Rather, it is used to indicate something that can be integrated (or summed).
In practice, it is impossible to consider infinitely many, infinitely small domains. Following the reductionist approach, any real system can, however, be assembled as the sum of a set of these small domains, by integration over the physical domain as in $A=\int dA$. Hence, the 'small-quantity' approach proves very useful to describe and understand the problem, whereas the real-world solution can be obtained as the sum of a set of such small quantities. In almost all of the cases in this notebook, it is implied that such 'small-quantity' domains will be integrated (or summed) over the (larger) problem domain.
Photon rates are measured in quanta per second.
The 'second' is an SI unit, whereas quanta is a unitless count: the number of photons. Photon rate therefore has units of [1/s] or [s$^{-1}$]. This form tends to lose track of the fact that the number of quanta per second is described. The notebook may occasionally contain units of the form [q/s] to emphasize the photon count. In this case, the 'q' is not a formal unit, it is merely a reminder of 'counts.' In dimensional analysis the 'q' is handled the same as any other unit.
Radiometric quantities can be defined in terms of three different but related units: radiant power (watts), photon rates (quanta per second), or photometric luminosity (lumen). Photometry is radiometry applied to human visual perception.
The conversion from radiometric to photometric quantities is
covered in more detail in my book. It is important to realize
that the underlying concepts are the same, irrespective of the nature of
the quantity. All of the derivations and examples presented in this book are equally valid for radiant, photon, or photometric quantities.
Flux is the amount of optical power, a photon rate, or photometric luminous flux, flowing between two surfaces. There is always a source area and a receiving area, with the flux flowing between them. All quantities of flux are denoted by the symbol $\Phi$. The units are [W], [q/s], or [lm], depending on the nature of the quantity.
Irradiance (areance) is the areal density of flux on the receiving surface area. The flux flows inward onto the surface with no regard to incoming angular density. All quantities of irradiance are denoted by the symbol $E$. The units are [W/m$^2$], [q/(s$\cdot$m$^2$)], or [lm/m$^2$], depending on the nature of the quantity.
Exitance (areance)
is the areal density of flux on the source surface
area. The flux flows outward from the surface with no regard to angular density. The exitance leaving a surface
can be due to reflected light, transmitted light, emitted light, or any combination thereof. All quantities of exitance are denoted by the
symbol $M$. The units are [W/m$^2$], [q/(s$\cdot$m$^2$)], or [lm/m$^2$], depending on the
nature of the quantity.
Intensity (pointance) is the density of flux over solid angle. The flux flows outward from the source with no regard for surface area. Intensity is denoted by the symbol $I$. The human perception of a point source (e.g., a star at long range) 'brightness' is an intensity measurement. The units are [W/sr], [q/(s$\cdot$sr)], or [lm/sr], depending on the nature of the quantity.
Radiance (sterance) is the density of flux per unit source surface area and unit solid angle.
Radiance is a property of the electromagnetic field irrespective of spatial location (in a lossless medium). For a radiating surface, the radiance may comprise transmitted light, reflected light, emitted light, or any combination thereof. The radiance in a field created by a Lambertian source is conserved: the radiance is constant anywhere in space, also on the receiving surface. All radiance quantities are denoted by the symbol $L$. The human perception of 'brightness' of a large surface can be likened to a radiance experience (beware of the nonlinear response in the eye, however). The units are
[W/(m$^2$ $\cdot$sr)], [q/(s$\cdot$m$^2$ $\cdot$sr)], or [lm/(m$^2$ $\cdot$sr)], depending on the nature of the
quantity. | display(Image(filename='images/radiometry01.png'))
display(Image(filename='images/radiometry02.png')) | 03-Introduction-to-Radiometry.ipynb | NelisW/ComputationalRadiometry | mpl-2.0 |
Spectral quantities
See notebook 4 in this series, Introduction to computational radiometry with pyradi, for a detailed description of spectral quantities.
Three spectral domains are commonly used: wavelength $\lambda$ in [m], frequency $\nu$ in [Hz], and wavenumber $\tilde{\nu}$ in [cm$^{-1}$] (the number of waves that will fit into a 1-cm length).
Spectral quantities indicate an amount of the quantity within a small spectral width $d\lambda$ around the value of $\lambda$: it is a spectral density. Spectral density quantity symbols are subscripted with a $\lambda$ or $\nu$, i.e., $L_\lambda$ or $L_\nu$. The dimensional units of a spectral density quantity are indicated as [$\mu$m$^{-1}$] or [(cm$^{-1})^{-1}$], i.e., [W/(m$^2$ $\cdot$sr$\cdot$ $\mu$m)].
The relationship between the wavelength and wavenumber spectral domains is $\tilde{\nu}=10^4/\lambda$, where $\lambda$ is in units of $\mu$m. The conversion of a spectral density quantity such as [W/(m$^2$ $\cdot$sr$\cdot$cm$^{-1}$)] requires the derivative
$d{\tilde{\nu}}=-10^4d\lambda /\lambda^2=-\tilde{\nu}^2d\lambda/10^4$.
The derivative relationship converts between the spectral widths, and hence the spectral densities, in the two respective domains.
The conversion from a wavelength spectral density quantity to a wavenumber spectral density quantity is
$d{}L_{\tilde{\nu}}=d{}L_\lambda \lambda^2/10^4=d{} L_\lambda 10^4/\tilde{\nu}^2$.
Spectral quantities denote the amount in a small spectral width $d\lambda$ around a wavelength $\lambda$. It follows that the total quantity over a spectral range can be determined by integration (summation) over the spectral range of interest:
$$
L=\int_{\lambda_1}^{\lambda_2}L_\lambda d\lambda.
$$
The above integral satisfies the requirements of dimensional analysis (see my book) because the units of $L_\lambda$ are [W/(m$^2$ $\cdot$sr$\cdot$ $\mu$m)], whereas $d\lambda$ has the units of [$\mu$m], and $L$ has units of [W/(m$^2$ $\cdot$sr)].
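A short numerical sketch of these two operations follows; the spectral radiance values are invented purely for illustration. It converts a wavelength spectral density to a wavenumber spectral density using the derivative above, and then integrates the spectral density over the band with a finite sum.

```python
import numpy as np

# Invented spectral radiance in [W/(m2.sr.um)] on a wavelength grid in [um]
wl = np.linspace(3.0, 5.0, 201)                 # wavelength [um]
L_wl = 1.0 + 0.2 * (wl - 4.0)**2                # arbitrary spectral shape

# Convert to a wavenumber spectral density in [W/(m2.sr.cm-1)]
wn = 1.0e4 / wl                                 # wavenumber [cm-1]
L_wn = L_wl * wl**2 / 1.0e4                     # L_nu = L_lambda * lambda^2 / 1e4

# Band radiance: integrate the spectral density over the band (finite sum)
L_band_wl = np.trapz(L_wl, wl)                  # [W/(m2.sr)]
L_band_wn = np.trapz(L_wn[::-1], wn[::-1])      # reverse so the wavenumber grid is ascending
print(L_band_wl, L_band_wn)                     # the two band integrals agree (to grid accuracy)
```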
Solid Angle
The geometric solid angle $\omega$ of any arbitrary surface $P$ from the reference point is given by
$$
\omega=\int\!\!\!\!\int^{P} \frac{d^2 P \cos\theta_1}{R^2},
$$
where $d^2 P \cos\theta_1$ is the projected surface area of the surface $P$ in the direction of the reference point, and $R$ is the distance from $d^2 P$ to the reference point. The integral is independent of the viewing direction $(\theta_0, \alpha_0)$ from the reference point. Hence, a given area at a given distance will always have the same geometric solid angle irrespective of the direction of the area.
The geometric solid angle of a cone is $\omega=4\pi\sin^2\left(\frac{\Theta}{2}\right)$, where $\Theta$ is the cone half-apex angle.
The projected solid angle $\Omega$ of any arbitrary surface $P$ from the reference area $dA_0$ is given by
$$
\Omega=\int\!\!\!\!\int^{P} \frac{d^2 P \cos\theta_0 \cos\theta_1}{R^2},
$$
where $d^2 P \cos\theta_1$ is the projected surface area of the surface $P$ in the direction of the reference area, and $R$ is the distance from $d^2 P$ to the reference area. The integral depends on the viewing direction $(\theta_0, \alpha_0)$ from the reference area, by the projected area ($dA_0\cos\theta_0$) of $dA_0$ in the direction of $d^2 P$.
Hence, a given area at a given distance will always have a different projected solid angle in different directions.
The projected solid angle of a cone is $\Omega=\pi\sin^2\left(\Theta\right)$, where $\Theta$ is the cone half-apex angle. | display(Image(filename='images/radiometry04.png')) | 03-Introduction-to-Radiometry.ipynb | NelisW/ComputationalRadiometry | mpl-2.0 |
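Both cone formulas can be checked numerically with a sketch such as the following, which integrates over the spherical cap in spherical coordinates (the 30 degree half-apex angle is arbitrary):

```python
import numpy as np
from scipy.integrate import dblquad

theta_half = np.radians(30.0)   # arbitrary cone half-apex angle

# Closed-form expressions from the text
omega_geom = 4 * np.pi * np.sin(theta_half / 2)**2   # geometric solid angle [sr]
omega_proj = np.pi * np.sin(theta_half)**2           # projected solid angle [sr]

# Numerical checks: integrate sin(theta) and cos(theta)*sin(theta) over the cap
geom_num, _ = dblquad(lambda t, p: np.sin(t),
                      0, 2*np.pi, lambda p: 0, lambda p: theta_half)
proj_num, _ = dblquad(lambda t, p: np.cos(t) * np.sin(t),
                      0, 2*np.pi, lambda p: 0, lambda p: theta_half)

print(omega_geom, geom_num)   # should agree
print(omega_proj, proj_num)   # should agree
```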
Lambertian radiators
A Lambertian source is, by definition, one whose radiance is completely independent of viewing angle. Many (but not all) rough and natural surfaces produce radiation whose radiance is approximately independent of the angle of observation. These surfaces generally have a rough texture at microscopic scales. Planck-law blackbody radiators are also Lambertian sources (see my book). Any Lambertian radiator is completely described by its scalar radiance magnitude only, with no angular dependence in radiance.
The relationship between the exitance and radiance for such a Lambertian surface can be easily derived. If the flux radiated from a Lambertian surface $\Phi$ [W] is known, it is a simple matter to calculate the exitance $M=\Phi/A$ [W/m$^2$], where $A$ is the radiating surface area. The exitance of a Lambertian radiator is related to radiance by the projected solid angle of $\pi$ sr, not the geometric solid angle of $2\pi$ sr as one might expect. The details are given in my book.
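A small sketch of this result (the radiance value is arbitrary): integrating $L\cos\theta$ over the hemisphere of directions gives the exitance, and the integral indeed evaluates to $\pi L$ rather than $2\pi L$.

```python
import numpy as np

L = 25.0                                   # arbitrary Lambertian radiance [W/(m2.sr)]
theta = np.linspace(0, np.pi / 2, 2001)    # polar angle over the hemisphere

# M = 2*pi * integral of L*cos(theta)*sin(theta) dtheta = pi * L
M = 2 * np.pi * np.trapz(L * np.cos(theta) * np.sin(theta), theta)
print(M, np.pi * L)                        # exitance equals pi*L, not 2*pi*L
```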
Conservation of radiance
Radiance is conserved for flux from a Lambertian surface propagating through a lossless optical
medium. Consider the construction below: two elemental areas $dA_0$ and $dA_1$ are separated by a distance $R_{01}$, with the angles between the normal vector of each surface and the line of sight given by $\theta_0$ and $\theta_1$. A total flux of $d^2\Phi$ is flowing through both the surfaces. It can be shown (see my book) that for a Lambertian radiator the radiance in an arbitrary $dA_n$ is the same as the radiance in $dA_1$.
As light propagates through mediums with different refractive indices $n$ such as air, water, glass, etc., the entity called basic radiance, defined by $L/n^2$, is invariant. It can be shown that for light propagating from a medium with refractive index $n_1$ to a medium with refractive index $n_2$, the basic radiance is conserved:
$$
\frac{L_1}{n_1^2}=\frac{L_2}{n_2^2}.
$$ | display(Image(filename='images/radiometry05.png')) | 03-Introduction-to-Radiometry.ipynb | NelisW/ComputationalRadiometry | mpl-2.0 |
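A minimal sketch of the basic-radiance rule (the numbers are illustrative, and Fresnel reflection losses at the interface are ignored): radiance passing from air into water scales with the square of the refractive-index ratio.

```python
n_air, n_water = 1.000, 1.333   # illustrative refractive indices
L_air = 10.0                    # arbitrary radiance in air [W/(m2.sr)]

# Basic radiance L/n^2 is invariant, so L_water = L_air * (n_water/n_air)^2
L_water = L_air * (n_water / n_air)**2
print(L_water)                  # about 17.8 W/(m2.sr)
```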
Flux transfer through lossless and lossy mediums
A lossless medium is defined as a medium with no losses between the source and the receiver, such as a complete vacuum. This implies that no absorption, scattering, or any other attenuating mechanism is present in the medium. For a lossless medium the flux that flow between both $dA_0$ and $dA_1$ is given by
$$
d^2 \Phi= \frac{L_{01}\,d A_0\,\cos\theta_0\, d A_1\,\cos\theta_1}{R_{01}^2}.
$$
If the medium has loss, the loss effect is accounted for by including a 'transmittance' factor $\tau_{01}=\Phi_1/\Phi_0=L_{10}/L_{01}$, i.e., the fraction of the flux from $A_0$ that arrives at $A_1$, then
$$
d^2 \Phi= \frac{L_{01}\,d A_0\,\cos\theta_0\, d A_1\,\cos\theta_1 \tau_{01}}{R_{01}^2}.
$$
Sources and receivers of arbitrary shape
The above equation calculates the flux flowing between two infinitely small areas. The flux flowing between two arbitrary shapes can be calculated by integrating the equation over the source surface and the receiving surface. In the general case, the radiance $L$ cannot be assumed constant over $A_0$, introducing the spatial radiance distribution $L(dA_{0})$ as a factor into the spatial integral.
Likewise, the medium transmittance between any two areas $dA_{0}$ and $dA_{1}$ varies with the spatial locations of $dA_{0}$ and $dA_{1}$ --- hence $\tau_{01}(dA_{0},dA_{1})$ should also be included in the spatial integral.
The integral can be performed over any arbitrary shape, as shown in the following figure, supporting the solution with complex geometries. Clearly matters such as obscuration and occlusion should be considered when performing this integral:
$$
\Phi=\int_{A_0}\int_{A_1}
\frac{L(dA_{0})\,dA_0\,\cos\theta_0\, dA_1\,\cos\theta_1\,\tau_{01}(dA_{0},dA_{1})}{R_{01}^2}.
$$ | display(Image(filename='images/radiometry06.png')) | 03-Introduction-to-Radiometry.ipynb | NelisW/ComputationalRadiometry | mpl-2.0 |
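As a sketch of how this integral is evaluated in practice, the code below discretizes two small, parallel, coaxial square surfaces into grids of elemental areas and accumulates the flux-transfer equation over every pair of elements. The radiance, sizes and separation are arbitrary, and the medium is taken as lossless ($\tau=1$).

```python
import numpy as np

L = 100.0   # source radiance [W/(m2.sr)], arbitrary
a = 0.01    # side length of each square surface [m]
R = 1.0     # separation along the common axis [m]
N = 20      # elemental areas per side

dA = (a / N)**2
centres = (np.arange(N) + 0.5) * a / N - a / 2
x0, y0 = np.meshgrid(centres, centres)     # source surface A0
x1, y1 = np.meshgrid(centres, centres)     # receiver surface A1

flux = 0.0
for i in range(N):
    for j in range(N):
        dx = x1 - x0[i, j]
        dy = y1 - y0[i, j]
        r2 = dx**2 + dy**2 + R**2          # squared distance between element pairs
        cos0 = cos1 = R / np.sqrt(r2)      # both surface normals lie along the axis
        flux += np.sum(L * dA * cos0 * dA * cos1 / r2)

print(flux)                        # summed flux [W]
print(L * a**2 * a**2 / R**2)      # far-field approximation L*A0*A1/R^2, for comparison
```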
Multi-spectral flux transfer
The optical power leaving a source undergoes a succession of scaling or 'spectral filtering' processes as the flux propagates through the system, as shown below. This filtering varies with wavelength.
Examples of such filters are source emissivity, atmospheric transmittance, optical filter transmittance, and detector responsivity. The multi-spectral filter approach described here is conceptually simple but fundamental to the calculation of radiometric flux. | display(Image(filename='images/radiometry07.png')) | 03-Introduction-to-Radiometry.ipynb | NelisW/ComputationalRadiometry | mpl-2.0 |
Extend the above flux-transfer equation for multi-spectral calculations by noting that over a spectral width $d\lambda$ the radiance is given by $L = L_\lambda d\lambda$:
$$
d^3 \Phi_\lambda=
\frac{L_{01\lambda}\,dA_0\;\cos\theta_0\,dA_1\;\cos\theta_1
\;\tau_{01}\,d\lambda}{R_{01}^2},
$$
where $d^3\Phi_\lambda$ is the total flux in [W] or [q/s] flowing in a spectral width $d\lambda$ at wavelength $\lambda$, from a radiator with radiance $L_{0\lambda}$ with units [W/(m$^2$ $\cdot$sr$\cdot$ $\mu$m)] and projected surface area $dA_0\cos\theta_0$, through a receiver with projected surface area $dA_1\cos\theta_1$ at a distance $R_{01}$, with a transmittance of $\tau_{01}$ between the two surfaces. The transmittance $\tau_{01}$ now includes all of the spectral variables in the path between the source and the receiver.
To determine the total flux flowing from elemental area $dA_0$ through $dA_1$ over a wide spectral width, divide the wide spectral band into a large number $N$ of narrow widths $\Delta\lambda$ at wavelengths $\lambda_n$ and add the flux for all of these narrow bandwidths together as follows:
$$
d^2 \Phi=
\sum_{n=0}^{N}
\left(
\frac{L_{01\lambda_n}
\,dA_{0}\,\cos\theta_0\,
\,dA_{1}\,\cos\theta_1\,
\tau_{01\lambda_n}
\Delta\lambda}{R_{01}^2}
\right).
$$
By the Riemann--Stieltjes theorem in reverse, if now $\Delta\lambda\rightarrow 0$ and $N\rightarrow\infty$, the summation becomes the integral
$$
d^2 \Phi=
\int_{\lambda_1}^{\lambda_2}
\frac{L_{01\lambda}
\,dA_{0}\,\cos\theta_0\,
\,dA_{1}\,\cos\theta_1 \,\tau_{01\lambda}d\lambda}{R_{01}^2}\ .
$$
This equation describes the total flux at all wavelengths in the spectral range $\lambda_1$ to $\lambda_2$ passing
through the system. This equation is developed further in my book.
Conclusion
The flux transfer between any two arbitrary surfaces, over any spectral band can be calculated by
$$
\Phi=
\int_{A_0}
\int_{A_1}
\int_{\lambda_1}^{\lambda_2}
\frac{L_{01\lambda}
\,dA_{0}\,\cos\theta_0\,
\,dA_{1}\,\cos\theta_1 \,\tau_{01\lambda}d\lambda}{R_{01}^2}\ .
$$
In practice these integrals are performed by finite sums of small elemental areas and spectral widths.
Any arbitrary problem can be solved using this approach. For a simple example see the flame sensor and the other pages of this notebook series.
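A compact sketch of such a band calculation (all spectral shapes and geometric values below are invented for illustration): the spectral radiance and the path transmittance are tabulated on a common wavelength grid, multiplied, integrated as a finite sum, and scaled by the geometric factor for two small areas normal to the line of sight.

```python
import numpy as np

# Invented spectral data on a common wavelength grid [um]
wl = np.linspace(3.5, 5.0, 151)
L_wl = 2.0 * np.exp(-((wl - 4.3) / 0.5)**2)                            # spectral radiance [W/(m2.sr.um)]
tau_wl = np.clip(1.0 - 0.8 * np.exp(-((wl - 4.25) / 0.05)**2), 0, 1)   # toy absorption notch in the path

# Arbitrary geometry: small source and receiver areas, both normal to the line of sight
dA0, dA1, R = 1e-4, 1e-4, 10.0     # [m2], [m2], [m]

# Integrate the filtered spectral radiance over the band, then apply the geometric factor
L_band = np.trapz(L_wl * tau_wl, wl)    # [W/(m2.sr)]
flux = L_band * dA0 * dA1 / R**2        # [W]
print(flux)
```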
Python and module versions, and dates | try:
    import pyradi.ryutils as ryutils
    print(ryutils.VersionInformation('matplotlib,numpy,pyradi,scipy,pandas'))
except:
    print("pyradi.ryutils not found") | 03-Introduction-to-Radiometry.ipynb | NelisW/ComputationalRadiometry | mpl-2.0 |
1. Source additional data from public sources
This section will provide short examples to demonstrate the use of public data sources in your notebooks.
1.1 World Bank
This example demonstrates how to source data from an external source to enrich your existing analyses. You will need to combine the data sources and add additional features to the example of student locations plotted on the world map in Module 1's Notebook 3.
The specific indicator chosen has little relevance other than to demonstrate the process that you will typically follow in completing your projects. Population counts, from an untrusted source, will be added to your map, and you will use scaling factors combined with the number of students, and population size of the country to demonstrate adding external data with minimal effort.
This example makes use of the pandas-datareader module, which supports remote data access. This library has support for extracting data from various internet sources into a Pandas DataFrame. Currently, the supported sources are:
Google Finance
Enigma
Quandl
St.Louis FED (FRED)
Kenneth French’s data library
World Bank
OECD
Eurostat
Thrift Savings Plan
Nasdaq Trader symbol definitions.
This example focuses on enriching your student dataset from Module 1, using the World Bank's Development Indicators. In the following sections, you will use the data you saved in a previous exercise, add corresponding indicators for each country in the data, and find the mean location for all observed coordinates per country.
Prepare the student data
In the next code cell, you will load the data from disk, apply the groupby method to group the data by country and, for each group, find the total student count and the average of their GPS coordinates. The final dataset containing the country, student count, and averaged GPS coordinates is saved as a separate DataFrame variable. | # Load the grouped_geocoded dataset from Module 1.
df1 = pd.read_csv('data/grouped_geocoded.csv',index_col=[0])
# Prepare the student location dataset for use in this example.
# We use the geometrical center by obtaining the mean location for all observed coordinates per country.
df2 = df1.groupby('country').agg({'student_count': [np.sum], 'lat': [np.mean],
'long': [np.mean]}).reset_index()
# Reset the index.
df3 = df2.reset_index(level=1, drop=True)
# Review the data
df3.head() | module_2/M2_NB1_SourcesOfData.ipynb | getsmarter/bda | mit |
The column label index has multiple levels. Although this is useful metadata, it would be better to drop multilevel labeling and, instead, rename the columns to capture this information. | df3.columns = df3.columns.droplevel(1)
df3.rename(columns={'lat': "lat_mean",
'long': "long_mean"}, inplace=True)
df3.head() | module_2/M2_NB1_SourcesOfData.ipynb | getsmarter/bda | mit |
Get and prepare the external dataset from the World Bank
Remember you can use "wb.download?" (without the quotation marks) in a separate code cell to get help on the pandas-datareader method for remote data access of the World Bank Indicators. Refer to the pandas-datareader remote data access documentation for more detailed help. | # After running this cell you can close the help by clicking on close (`X`) button in the upper right corner
wb.download?
# The selected indicator is the world population, "SP.POP.TOTL", for the years from 2008 to 2016
wb_indicator = 'SP.POP.TOTL'
start_year = 2008
end_year = 2016
df4 = wb.download(indicator = wb_indicator,
country = ['all'],
start = start_year,
end = end_year)
# Review the data
df4.head() | module_2/M2_NB1_SourcesOfData.ipynb | getsmarter/bda | mit |
The data set contains entries for multiple years. The focus of this example is the entry corresponding to the latest year of data available for each country. | df5 = df4.reset_index()
idx = df5.groupby(['country'])['year'].transform(max) == df5['year'] | module_2/M2_NB1_SourcesOfData.ipynb | getsmarter/bda | mit |
You can now extract only the values that correspond to the most recent year available for each country. | # Create a new dataframe where entries corresponds to maximum year indexes in previous list.
df6 = df5.loc[idx,:]
# Review the data
df6.head() | module_2/M2_NB1_SourcesOfData.ipynb | getsmarter/bda | mit |
Now merge your dataset with the World Bank data. | # Combine the student and population datasets.
df7 = pd.merge(df3, df6, on='country', how='left')
# Rename the columns of our merged dataset and assign to a new variable.
df8 = df7.rename(index=str, columns={('SP.POP.TOTL'): "PopulationTotal_Latest_WB"})
# Drop NAN values.
df8 = df8[~df8.PopulationTotal_Latest_WB.isnull()]
# Reset index.
df8.reset_index(inplace = True)
df8.head() | module_2/M2_NB1_SourcesOfData.ipynb | getsmarter/bda | mit |
Let's plot the data.
Note:
The visualization below does not have any meaning. The scaling factors selected are used to demonstrate the difference in population sizes, and number of students on this course, per country. | # Plot the combined dataset
# Set map center and zoom level
mapc = [0, 30]
zoom = 2
# Create map object.
map_osm = folium.Map(location=mapc,
tiles='Stamen Toner',
zoom_start=zoom)
# Plot each of the locations that we geocoded.
for j in range(len(df8)):
    # Plot a blue circle marker for country population.
    folium.CircleMarker([df8.lat_mean[j], df8.long_mean[j]],
                        radius=df8.PopulationTotal_Latest_WB[j]/20000000,
                        popup='Population',
                        color='#3186cc',
                        fill_color='#3186cc',
                        ).add_to(map_osm)
    # Plot a red circle marker for students per country.
    folium.CircleMarker([df8.lat_mean[j], df8.long_mean[j]],
                        radius=df8.student_count[j]/50,
                        popup='Students',
                        color='red',
                        fill_color='red',
                        ).add_to(map_osm)
# Show the map.
map_osm | module_2/M2_NB1_SourcesOfData.ipynb | getsmarter/bda | mit |
<br>
<div class="alert alert-info">
<b>Exercise 1 Start.</b>
</div>
Instructions
Review the available indicators in the World Bank dataset, and select an indicator of your choice (other than the population indicator).
Using a copy of the code (from above) in the cells below, replace the population indicator with your selected indicator. Instead of returning the most recent value for your selected indicator, compute the mean and standard deviation for the years from 2006 to 2016. You will need to use the Pandas groupby().agg() chained methods, together with the following functions from NumPy:
np.mean
np.std.
You can review the data preparation section for the student data above for an example.
Add comments (lines starting with a "#") giving a brief description of your view on the observed results. Make sure to include, in one or two sentences in each case, the following:
1. A clear description why you selected the indicator.
- What your expectation was before including the data.
- What you think the results may indicate.
Important:
- Only the external data needs to be prepared. You do not need to prepare the student dataset again. Just use the student data that you prepared above and join this to the new dataset you sourced.
- Only plot the mean values for your selected indicator (not the standard deviation values). | # Your solution here
# Note: Break your logic using separate cells to break code into units that can be executed
# should you need to review individual steps.
| module_2/M2_NB1_SourcesOfData.ipynb | getsmarter/bda | mit |
<br>
<div class="alert alert-info">
<b>Exercise 1 End.</b>
</div>
Exercise complete:
This is a good time to "Save and Checkpoint".
1.2 Using Wikipedia as a data source
To demonstrate how quickly data can be sourced from public, "untrusted" data sources, you have been supplied with a number of sample scripts below. While these sources contain distinctly rich datasets, which you can acquire with minimal effort, they can be amended by anyone, and may not be 100% accurate. In some cases, you will have to manually transform the datasets, while in others, you might be able to use pre-built libraries.
Execute the code cells below before completing Exercise 2. | # Display MIT page summary from Wikipedia
print(wikipedia.summary("MIT"))
# Display a single sentence summary.
wikipedia.summary("MIT", sentences=1)
# Create variable page that contains the wikipedia information.
page = wikipedia.page("List of countries and dependencies by population")
# Display the page title.
page.title
# Display the page URL. This can be utilised to create links back to descriptions.
page.url | module_2/M2_NB1_SourcesOfData.ipynb | getsmarter/bda | mit |
Define path to data: (It's a good idea to put it in a subdirectory of your notebooks folder, and then exclude that directory from git control by adding it to .gitignore.) | #path = "data/dogscats/"
path = "data/dogscats/sample/" | cnn/tw_vgg16.ipynb | sysid/nbs | mit |
A few basic libraries that we'll need for the initial exercises: | from __future__ import division,print_function
import os, json
from glob import glob
import numpy as np
np.set_printoptions(precision=4, linewidth=100)
from matplotlib import pyplot as plt | cnn/tw_vgg16.ipynb | sysid/nbs | mit |
We have created a file most imaginatively called 'utils.py' to store any little convenience functions we'll want to use. We will discuss these as we use them. | import utils
import importlib
importlib.reload(utils)
from utils import plots | cnn/tw_vgg16.ipynb | sysid/nbs | mit |
Use a pretrained VGG model with our Vgg16 class
Our first step is simply to use a model that has been fully created for us, which can recognise a wide variety (1,000 categories) of images. We will use 'VGG', which won the 2014 Imagenet competition, and is a very simple model to create and understand. The VGG Imagenet team created both a larger, slower, slightly more accurate model (VGG 19) and a smaller, faster model (VGG 16). We will be using VGG 16 since the much slower performance of VGG19 is generally not worth the very minor improvement in accuracy.
We have created a python class, Vgg16, which makes using the VGG 16 model very straightforward.
The punchline: state of the art custom model in 7 lines of code
Here's everything you need to do to get >97% accuracy on the Dogs vs Cats dataset - we won't analyze how it works behind the scenes yet, since at this stage we're just going to focus on the minimum necessary to actually do useful work. | # As large as you can, but no larger than 64 is recommended.
# If you have an older or cheaper GPU, you'll run out of memory, so will have to decrease this.
# batch_size=64
batch_size=2
# Import our class, and instantiate
import vgg16
from vgg16 import Vgg16
vgg.classes
# %%capture x # ping bug: disconnect -> reconnect kernel workaround
vgg = Vgg16()
# Grab a few images at a time for training and validation.
# NB: They must be in subdirectories named based on their category
batches = vgg.get_batches(path+ 'train', batch_size=batch_size)
batches.nb_class
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)
vgg.finetune(batches)
vgg.fit(batches, val_batches, nb_epoch=1, verbose=1)
#x.show() | cnn/tw_vgg16.ipynb | sysid/nbs | mit |
The code above will work for any image recognition task, with any number of categories! All you have to do is to put your images into one folder per category, and run the code above.
Let's take a look at how this works, step by step...
Use Vgg16 for basic image recognition
Let's start off by using the Vgg16 class to recognise the main imagenet category for each image.
We won't be able to enter the Cats vs Dogs competition with an Imagenet model alone, since 'cat' and 'dog' are not categories in Imagenet - instead each individual breed is a separate category. However, we can use it to see how well it can recognise the images, which is a good first step.
First, create a Vgg16 object: | vgg = Vgg16() | cnn/tw_vgg16.ipynb | sysid/nbs | mit |
Vgg16 is built on top of Keras (which we will be learning much more about shortly!), a flexible, easy to use deep learning library that sits on top of Theano or Tensorflow. Keras reads groups of images and labels in batches, using a fixed directory structure, where images from each category for training must be placed in a separate folder.
Let's grab batches of data from our training folder: | batches = vgg.get_batches(path+'train', batch_size=4) | cnn/tw_vgg16.ipynb | sysid/nbs | mit |
(BTW, when Keras refers to 'classes', it doesn't mean python classes - but rather it refers to the categories of the labels, such as 'pug', or 'tabby'.)
Batches is just a regular python iterator. Each iteration returns both the images themselves, as well as the labels. | imgs,labels = next(batches)
imgs[0].shape
labels | cnn/tw_vgg16.ipynb | sysid/nbs | mit |
As you can see, the labels for each image are an array, containing a 1 in the first position if it's a cat, and in the second position if it's a dog. This approach to encoding categorical variables, where an array containing just a single 1 in the position corresponding to the category, is very common in deep learning. It is called one hot encoding.
The arrays contain two elements, because we have two categories (cat, and dog). If we had three categories (e.g. cats, dogs, and kangaroos), then the arrays would each contain two 0's, and one 1. | plots(imgs, titles=labels) | cnn/tw_vgg16.ipynb | sysid/nbs | mit |
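To make the encoding concrete, here is a NumPy-only sketch (Keras builds these arrays for you through its directory iterators, so this is purely illustrative):

```python
import numpy as np

class_labels = np.array([0, 1, 1, 0])         # e.g. 0 = cat, 1 = dog, for four images
num_classes = 2

one_hot = np.eye(num_classes)[class_labels]   # each row has a single 1 in its class position
print(one_hot)
# [[1. 0.]
#  [0. 1.]
#  [0. 1.]
#  [1. 0.]]
```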
We can now pass the images to Vgg16's predict() function to get back probabilities, category indexes, and category names for each image's VGG prediction. | vgg.predict(imgs, True) | cnn/tw_vgg16.ipynb | sysid/nbs | mit |
The category indexes are based on the ordering of categories used in the VGG model - e.g here are the first four: | vgg.classes[:4] | cnn/tw_vgg16.ipynb | sysid/nbs | mit |
(Note that, other than creating the Vgg16 object, none of these steps are necessary to build a model; they are just showing how to use the class to view imagenet predictions.)
Use our Vgg16 class to finetune a Dogs vs Cats model
To change our model so that it outputs "cat" vs "dog", instead of one of 1,000 very specific categories, we need to use a process called "finetuning". Finetuning looks from the outside to be identical to normal machine learning training - we provide a training set with data and labels to learn from, and a validation set to test against. The model learns a set of parameters based on the data provided.
However, the difference is that we start with a model that is already trained to solve a similar problem. The idea is that many of the parameters should be very similar, or the same, between the existing model, and the model we wish to create. Therefore, we only select a subset of parameters to train, and leave the rest untouched. This happens automatically when we call fit() after calling finetune().
We create our batches just like before, and making the validation set available as well. A 'batch' (or mini-batch as it is commonly known) is simply a subset of the training data - we use a subset at a time when training or predicting, in order to speed up training, and to avoid running out of memory. | batch_size=64
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size) | cnn/tw_vgg16.ipynb | sysid/nbs | mit |
Calling finetune() modifies the model such that it will be trained based on the data in the batches provided - in this case, to predict either 'dog' or 'cat'. | vgg.finetune(batches) | cnn/tw_vgg16.ipynb | sysid/nbs | mit |
Finally, we fit() the parameters of the model using the training data, reporting the accuracy on the validation set after every epoch. (An epoch is one full pass through the training data.) | vgg.fit(batches, val_batches, nb_epoch=1) | cnn/tw_vgg16.ipynb | sysid/nbs | mit |
Imports
Run both of these cells: | #@title All `dm_control` imports required for this tutorial
# The basic mujoco wrapper.
from dm_control import mujoco
# Access to enums and MuJoCo library functions.
from dm_control.mujoco.wrapper.mjbindings import enums
from dm_control.mujoco.wrapper.mjbindings import mjlib
# PyMJCF
from dm_control import mjcf
# Composer high level imports
from dm_control import composer
from dm_control.composer.observation import observable
from dm_control.composer import variation
# Imports for Composer tutorial example
from dm_control.composer.variation import distributions
from dm_control.composer.variation import noises
from dm_control.locomotion.arenas import floors
# Control Suite
from dm_control import suite
# Run through corridor example
from dm_control.locomotion.walkers import cmu_humanoid
from dm_control.locomotion.arenas import corridors as corridor_arenas
from dm_control.locomotion.tasks import corridors as corridor_tasks
# Soccer
from dm_control.locomotion import soccer
# Manipulation
from dm_control import manipulation
#@title Other imports and helper functions
# General
import copy
import os
import itertools
from IPython.display import clear_output
import numpy as np
# Graphics-related
import matplotlib
import matplotlib.animation as animation
import matplotlib.pyplot as plt
from IPython.display import HTML
import PIL.Image
# Internal loading of video libraries.
# Use svg backend for figure rendering
%config InlineBackend.figure_format = 'svg'
# Font sizes
SMALL_SIZE = 8
MEDIUM_SIZE = 10
BIGGER_SIZE = 12
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
# Inline video helper function
if os.environ.get('COLAB_NOTEBOOK_TEST', False):
# We skip video generation during tests, as it is quite expensive.
display_video = lambda *args, **kwargs: None
else:
def display_video(frames, framerate=30):
height, width, _ = frames[0].shape
dpi = 70
orig_backend = matplotlib.get_backend()
matplotlib.use('Agg') # Switch to headless 'Agg' to inhibit figure rendering.
fig, ax = plt.subplots(1, 1, figsize=(width / dpi, height / dpi), dpi=dpi)
matplotlib.use(orig_backend) # Switch back to the original backend.
ax.set_axis_off()
ax.set_aspect('equal')
ax.set_position([0, 0, 1, 1])
im = ax.imshow(frames[0])
def update(frame):
im.set_data(frame)
return [im]
interval = 1000/framerate
anim = animation.FuncAnimation(fig=fig, func=update, frames=frames,
interval=interval, blit=True, repeat=False)
return HTML(anim.to_html5_video())
# Seed numpy's global RNG so that cell outputs are deterministic. We also try to
# use RandomState instances that are local to a single cell wherever possible.
np.random.seed(42) | tutorial.ipynb | deepmind/dm_control | apache-2.0 |
Model definition, compilation and rendering
We begin by describing some basic concepts of the MuJoCo physics simulation library, but recommend the official documentation for details.
Let's define a simple model with two geoms and a light. | #@title A static model {vertical-output: true}
static_model = """
<mujoco>
<worldbody>
<light name="top" pos="0 0 1"/>
<geom name="red_box" type="box" size=".2 .2 .2" rgba="1 0 0 1"/>
<geom name="green_sphere" pos=".2 .2 .2" size=".1" rgba="0 1 0 1"/>
</worldbody>
</mujoco>
"""
physics = mujoco.Physics.from_xml_string(static_model)
pixels = physics.render()
PIL.Image.fromarray(pixels) | tutorial.ipynb | deepmind/dm_control | apache-2.0 |
static_model is written in MuJoCo's XML-based MJCF modeling language. The from_xml_string() method invokes the model compiler, which instantiates the library's internal data structures. These can be accessed via the physics object, see below.
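As a quick illustration (a sketch relying only on standard mjModel/mjData fields), both the compiled model and the current state are reachable from the physics object:

print(physics.model.ngeom)  # number of geoms in the compiled model (2 here)
print(physics.data.time)    # simulation time, part of the state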
Adding DOFs and simulating, advanced rendering
This is a perfectly legitimate model, but if we simulate it, nothing will happen except for time advancing. This is because this model has no degrees of freedom (DOFs). We add DOFs by adding joints to bodies, specifying how they can move with respect to their parents. Let us add a hinge joint and re-render, visualizing the joint axis. | #@title A child body with a joint { vertical-output: true }
swinging_body = """
<mujoco>
<worldbody>
<light name="top" pos="0 0 1"/>
<body name="box_and_sphere" euler="0 0 -30">
<joint name="swing" type="hinge" axis="1 -1 0" pos="-.2 -.2 -.2"/>
<geom name="red_box" type="box" size=".2 .2 .2" rgba="1 0 0 1"/>
<geom name="green_sphere" pos=".2 .2 .2" size=".1" rgba="0 1 0 1"/>
</body>
</worldbody>
</mujoco>
"""
physics = mujoco.Physics.from_xml_string(swinging_body)
# Visualize the joint axis.
scene_option = mujoco.wrapper.core.MjvOption()
scene_option.flags[enums.mjtVisFlag.mjVIS_JOINT] = True
pixels = physics.render(scene_option=scene_option)
PIL.Image.fromarray(pixels) | tutorial.ipynb | deepmind/dm_control | apache-2.0 |
The things that move (and which have inertia) are called bodies. The body's child joint specifies how that body can move with respect to its parent, in this case box_and_sphere w.r.t the worldbody.
Note that the body's frame is rotated with an euler directive, and its children, the geoms and the joint, rotate with it. This is to emphasize the local-to-parent-frame nature of position and orientation directives in MJCF.
Let's make a video, to get a sense of the dynamics and to see the body swinging under gravity. | #@title Making a video {vertical-output: true}
duration = 2 # (seconds)
framerate = 30 # (Hz)
# Visualize the joint axis
scene_option = mujoco.wrapper.core.MjvOption()
scene_option.flags[enums.mjtVisFlag.mjVIS_JOINT] = True
# Simulate and display video.
frames = []
physics.reset() # Reset state and time
while physics.data.time < duration:
physics.step()
if len(frames) < physics.data.time * framerate:
pixels = physics.render(scene_option=scene_option)
frames.append(pixels)
display_video(frames, framerate) | tutorial.ipynb | deepmind/dm_control | apache-2.0 |
Note how we collect the video frames. Because a physics simulation timestep is generally much shorter than the interval between video frames (the default timestep is 2 ms, i.e. 500 steps per second), we don't render after each step.
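A rough way to see this bookkeeping, assuming the default 2 ms timestep (a quick sketch using the variables defined above):

steps_per_frame = 1.0 / (framerate * physics.model.opt.timestep)
print(steps_per_frame)  # roughly 16.7 physics steps for every 30 Hz video frame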
Rendering options
Like joint visualisation, additional rendering options are exposed as parameters to the render method. | #@title Enable transparency and frame visualization {vertical-output: true}
scene_option = mujoco.wrapper.core.MjvOption()
scene_option.frame = enums.mjtFrame.mjFRAME_GEOM
scene_option.flags[enums.mjtVisFlag.mjVIS_TRANSPARENT] = True
pixels = physics.render(scene_option=scene_option)
PIL.Image.fromarray(pixels)
#@title Depth rendering {vertical-output: true}
# depth is a float array, in meters.
depth = physics.render(depth=True)
# Shift nearest values to the origin.
depth -= depth.min()
# Scale by 2 mean distances of near rays.
depth /= 2*depth[depth <= 1].mean()
# Scale to [0, 255]
pixels = 255*np.clip(depth, 0, 1)
PIL.Image.fromarray(pixels.astype(np.uint8))
#@title Segmentation rendering {vertical-output: true}
seg = physics.render(segmentation=True)
# Display the contents of the first channel, which contains object
# IDs. The second channel, seg[:, :, 1], contains object types.
geom_ids = seg[:, :, 0]
# Infinity is mapped to -1
geom_ids = geom_ids.astype(np.float64) + 1
# Scale to [0, 1]
geom_ids = geom_ids / geom_ids.max()
pixels = 255*geom_ids
PIL.Image.fromarray(pixels.astype(np.uint8))
#@title Projecting from world to camera coordinates {vertical-output: true}
# Get the world coordinates of the box corners
box_pos = physics.named.data.geom_xpos['red_box']
box_mat = physics.named.data.geom_xmat['red_box'].reshape(3, 3)
box_size = physics.named.model.geom_size['red_box']
offsets = np.array([-1, 1]) * box_size[:, None]
xyz_local = np.stack(list(itertools.product(*offsets))).T  # list() keeps newer NumPy happy
xyz_global = box_pos[:, None] + box_mat @ xyz_local
# Camera matrices multiply homogenous [x, y, z, 1] vectors.
corners_homogeneous = np.ones((4, xyz_global.shape[1]), dtype=float)
corners_homogeneous[:3, :] = xyz_global
# Get the camera matrix.
camera = mujoco.Camera(physics)
camera_matrix = camera.matrix
# Project world coordinates into pixel space. See:
# https://en.wikipedia.org/wiki/3D_projection#Mathematical_formula
xs, ys, s = camera_matrix @ corners_homogeneous
# x and y are in the pixel coordinate system.
x = xs / s
y = ys / s
# Render the camera view and overlay the projected corner coordinates.
pixels = camera.render()
fig, ax = plt.subplots(1, 1)
ax.imshow(pixels)
ax.plot(x, y, '+', c='w')
ax.set_axis_off() | tutorial.ipynb | deepmind/dm_control | apache-2.0 |
MuJoCo basics and named indexing
mjModel
MuJoCo's mjModel, encapsulated in physics.model, contains the model description, including the default initial state and other fixed quantities which are not a function of the state, e.g. the positions of geoms in the frame of their parent body. The (x, y, z) offsets of the box and sphere geoms, relative their parent body box_and_sphere are given by model.geom_pos: | physics.model.geom_pos | tutorial.ipynb | deepmind/dm_control | apache-2.0 |
The model.opt structure contains global quantities like | print('timestep', physics.model.opt.timestep)
print('gravity', physics.model.opt.gravity) | tutorial.ipynb | deepmind/dm_control | apache-2.0 |
mjData
mjData, encapsulated in physics.data, contains the state and quantities that depend on it. The state is made up of time, generalized positions and generalised velocities. These are respectively data.time, data.qpos and data.qvel.
Let's print the state of the swinging body where we left it: | print(physics.data.time, physics.data.qpos, physics.data.qvel) | tutorial.ipynb | deepmind/dm_control | apache-2.0 |
physics.data also contains functions of the state, for example the cartesian positions of objects in the world frame. The (x, y, z) positions of our two geoms are in data.geom_xpos: | print(physics.data.geom_xpos) | tutorial.ipynb | deepmind/dm_control | apache-2.0 |
Named indexing
The semantics of the above arrays are made clearer using the named wrapper, which assigns names to rows and type names to columns. | print(physics.named.data.geom_xpos) | tutorial.ipynb | deepmind/dm_control | apache-2.0 |
Note how model.geom_pos and data.geom_xpos have similar semantics but very different meanings. | print(physics.named.model.geom_pos) | tutorial.ipynb | deepmind/dm_control | apache-2.0 |
Name strings can be used to index into the relevant quantities, making code much more readable and robust. | physics.named.data.geom_xpos['green_sphere', 'z'] | tutorial.ipynb | deepmind/dm_control | apache-2.0 |
Joint names can be used to index into quantities in configuration space (beginning with the letter q): | physics.named.data.qpos['swing'] | tutorial.ipynb | deepmind/dm_control | apache-2.0 |
We can mix NumPy slicing operations with named indexing. As an example, we can set the color of the box using its name ("red_box") as an index into the rows of the geom_rgba array. | #@title Changing colors using named indexing{vertical-output: true}
random_rgb = np.random.rand(3)
physics.named.model.geom_rgba['red_box', :3] = random_rgb
pixels = physics.render()
PIL.Image.fromarray(pixels) | tutorial.ipynb | deepmind/dm_control | apache-2.0 |
Note that while physics.model quantities will not be changed by the engine, we can change them ourselves between steps. This, however, is generally not recommended; the preferred approach is to modify the model at the XML level using the PyMJCF library, described below.
Setting the state with reset_context()
In order for data quantities that are functions of the state to be in sync with the state, MuJoCo's mj_step1() needs to be called. This is facilitated by the reset_context() context, please see in-depth discussion in Section 2.1 of the tech report. | physics.named.data.qpos['swing'] = np.pi
print('Without reset_context, spatial positions are not updated:',
physics.named.data.geom_xpos['green_sphere', ['z']])
with physics.reset_context():
physics.named.data.qpos['swing'] = np.pi
print('After reset_context, positions are up-to-date:',
physics.named.data.geom_xpos['green_sphere', ['z']]) | tutorial.ipynb | deepmind/dm_control | apache-2.0 |
Free bodies: the self-inverting "tippe-top"
A free body is a body with a free joint, with 6 movement DOFs: 3 translations and 3 rotations. We could give our box_and_sphere body a free joint and watch it fall, but let's look at something more interesting. A "tippe top" is a spinning toy which flips itself on its head (Wikipedia). We model it as follows: | #@title The "tippe-top" model{vertical-output: true}
tippe_top = """
<mujoco model="tippe top">
<option integrator="RK4"/>
<asset>
<texture name="grid" type="2d" builtin="checker" rgb1=".1 .2 .3"
rgb2=".2 .3 .4" width="300" height="300"/>
<material name="grid" texture="grid" texrepeat="8 8" reflectance=".2"/>
</asset>
<worldbody>
<geom size=".2 .2 .01" type="plane" material="grid"/>
<light pos="0 0 .6"/>
<camera name="closeup" pos="0 -.1 .07" xyaxes="1 0 0 0 1 2"/>
<body name="top" pos="0 0 .02">
<freejoint/>
<geom name="ball" type="sphere" size=".02" />
<geom name="stem" type="cylinder" pos="0 0 .02" size="0.004 .008"/>
<geom name="ballast" type="box" size=".023 .023 0.005" pos="0 0 -.015"
contype="0" conaffinity="0" group="3"/>
</body>
</worldbody>
<keyframe>
<key name="spinning" qpos="0 0 0.02 1 0 0 0" qvel="0 0 0 0 1 200" />
</keyframe>
</mujoco>
"""
physics = mujoco.Physics.from_xml_string(tippe_top)
PIL.Image.fromarray(physics.render(camera_id='closeup')) | tutorial.ipynb | deepmind/dm_control | apache-2.0 |
Note several new features of this model definition:
0. The free joint is added with the <freejoint/> clause, which is similar to <joint type="free"/>, but prohibits unphysical attributes like friction or stiffness.
1. We use the <option/> clause to set the integrator to the more accurate Runge Kutta 4th order.
2. We define the floor's grid material inside the <asset/> clause and reference it in the floor geom.
3. We use an invisible and non-colliding box geom called ballast to move the top's center-of-mass lower. Having a low center of mass is (counter-intuitively) required for the flipping behaviour to occur.
4. We save our initial spinning state as a keyframe. It has a high rotational velocity around the z-axis, but is not perfectly oriented with the world.
5. We define a <camera> in our model, and then render from it using the camera_id argument to render().
Let us examine the state: | print('positions', physics.data.qpos)
print('velocities', physics.data.qvel) | tutorial.ipynb | deepmind/dm_control | apache-2.0 |
The velocities are easy to interpret, 6 zeros, one for each DOF. What about the length-7 positions? We can see the initial 2cm height of the body; the subsequent four numbers are the 3D orientation, defined by a unit quaternion. These normalized four-vectors, which preserve the topology of the orientation group, are the reason that data.qpos can be bigger than data.qvel: 3D orientations are represented with 4 numbers while angular velocities are 3 numbers. | #@title Video of the tippe-top {vertical-output: true}
duration = 7 # (seconds)
framerate = 60 # (Hz)
# Simulate and display video.
frames = []
physics.reset(0) # Reset to keyframe 0 (load a saved state).
while physics.data.time < duration:
physics.step()
if len(frames) < (physics.data.time) * framerate:
pixels = physics.render(camera_id='closeup')
frames.append(pixels)
display_video(frames, framerate) | tutorial.ipynb | deepmind/dm_control | apache-2.0 |
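Relating this back to the length-7 qpos: the free joint keeps its orientation in qpos[3:7], which should stay a unit quaternion, and the size mismatch between positions and velocities is visible in the model's nq and nv counts (a quick sketch):

print(np.linalg.norm(physics.data.qpos[3:7]))  # ~1.0, a unit quaternion
print(physics.model.nq, physics.model.nv)      # 7 position vs 6 velocity components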
Measuring values from physics.data
The physics.data structure contains all of the dynamic variables and intermediate results produced by the simulation. These are expected to change on each timestep.
Below we simulate for 2000 timesteps and plot the state and height of the sphere as a function of time. | #@title Measuring values {vertical-output: true}
timevals = []
angular_velocity = []
stem_height = []
# Simulate and save data
physics.reset(0)
while physics.data.time < duration:
physics.step()
timevals.append(physics.data.time)
angular_velocity.append(physics.data.qvel[3:6].copy())
stem_height.append(physics.named.data.geom_xpos['stem', 'z'])
dpi = 100
width = 480
height = 640
figsize = (width / dpi, height / dpi)
_, ax = plt.subplots(2, 1, figsize=figsize, dpi=dpi, sharex=True)
ax[0].plot(timevals, angular_velocity)
ax[0].set_title('angular velocity')
ax[0].set_ylabel('radians / second')
ax[1].plot(timevals, stem_height)
ax[1].set_xlabel('time (seconds)')
ax[1].set_ylabel('meters')
_ = ax[1].set_title('stem height') | tutorial.ipynb | deepmind/dm_control | apache-2.0 |
PyMJCF tutorial
This library provides a Python object model for MuJoCo's XML-based
MJCF physics modeling language. The
goal of the library is to allow users to easily interact with and modify MJCF
models in Python, similarly to what the JavaScript DOM does for HTML.
A key feature of this library is the ability to easily compose multiple separate
MJCF models into a larger one. Disambiguation of duplicated names from different
models, or multiple instances of the same model, is handled automatically.
One typical use case is when we want robots with a variable number of joints. This is a fundamental change to the kinematics, requiring a new XML descriptor and new binary model to be compiled.
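Before the larger example below, a minimal PyMJCF round-trip may help fix ideas (a sketch; the model name 'hello' is arbitrary): build an empty model, add a geom, and serialize it back to MJCF XML.

hello = mjcf.RootElement(model='hello')
hello.worldbody.add('geom', name='ball', type='sphere', size=[0.1])
print(hello.to_xml_string())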
The following snippets realise this scenario and provide a quick example of this library's use case. | class Leg(object):
"""A 2-DoF leg with position actuators."""
def __init__(self, length, rgba):
self.model = mjcf.RootElement()
# Defaults:
self.model.default.joint.damping = 2
self.model.default.joint.type = 'hinge'
self.model.default.geom.type = 'capsule'
self.model.default.geom.rgba = rgba # Continued below...
# Thigh:
self.thigh = self.model.worldbody.add('body')
self.hip = self.thigh.add('joint', axis=[0, 0, 1])
self.thigh.add('geom', fromto=[0, 0, 0, length, 0, 0], size=[length/4])
    # Shin:
self.shin = self.thigh.add('body', pos=[length, 0, 0])
self.knee = self.shin.add('joint', axis=[0, 1, 0])
self.shin.add('geom', fromto=[0, 0, 0, 0, 0, -length], size=[length/5])
# Position actuators:
self.model.actuator.add('position', joint=self.hip, kp=10)
self.model.actuator.add('position', joint=self.knee, kp=10) | tutorial.ipynb | deepmind/dm_control | apache-2.0 |
The Leg class describes an abstract articulated leg, with two joints and corresponding proportional-derivative actuators.
Note that:
MJCF attributes correspond directly to arguments of the add() method.
When referencing elements, e.g. when specifying the joint to which an actuator is attached, the MJCF element itself is used, rather than the name string. | BODY_RADIUS = 0.1
BODY_SIZE = (BODY_RADIUS, BODY_RADIUS, BODY_RADIUS / 2)
random_state = np.random.RandomState(42)
def make_creature(num_legs):
"""Constructs a creature with `num_legs` legs."""
rgba = random_state.uniform([0, 0, 0, 1], [1, 1, 1, 1])
model = mjcf.RootElement()
model.compiler.angle = 'radian' # Use radians.
# Make the torso geom.
model.worldbody.add(
'geom', name='torso', type='ellipsoid', size=BODY_SIZE, rgba=rgba)
# Attach legs to equidistant sites on the circumference.
for i in range(num_legs):
theta = 2 * i * np.pi / num_legs
hip_pos = BODY_RADIUS * np.array([np.cos(theta), np.sin(theta), 0])
hip_site = model.worldbody.add('site', pos=hip_pos, euler=[0, 0, theta])
leg = Leg(length=BODY_RADIUS, rgba=rgba)
hip_site.attach(leg.model)
return model | tutorial.ipynb | deepmind/dm_control | apache-2.0 |
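As a quick sanity check (a sketch), PyMJCF models can be queried and serialized; find_all traverses attached sub-models by default, so a three-legged creature should report six leg joints:

example = make_creature(num_legs=3)
print(len(example.find_all('joint')))  # 2 joints per leg x 3 legs = 6
print(example.to_xml_string()[:300])   # start of the generated MJCF XML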
The make_creature function uses PyMJCF's attach() method to procedurally attach legs to the torso. Note that at this stage both the torso and hip attachment sites are children of the worldbody, since their parent body has yet to be instantiated. We'll now make an arena with a chequered floor and two lights, and place our creatures in a grid. | #@title Six Creatures on a floor.{vertical-output: true}
arena = mjcf.RootElement()
chequered = arena.asset.add('texture', type='2d', builtin='checker', width=300,
height=300, rgb1=[.2, .3, .4], rgb2=[.3, .4, .5])
grid = arena.asset.add('material', name='grid', texture=chequered,
texrepeat=[5, 5], reflectance=.2)
arena.worldbody.add('geom', type='plane', size=[2, 2, .1], material=grid)
for x in [-2, 2]:
arena.worldbody.add('light', pos=[x, -1, 3], dir=[-x, 1, -2])
# Instantiate 6 creatures with 3 to 8 legs.
creatures = [make_creature(num_legs=num_legs) for num_legs in range(3, 9)]
# Place them on a grid in the arena.
height = .15
grid = 5 * BODY_RADIUS
xpos, ypos, zpos = np.meshgrid([-grid, 0, grid], [0, grid], [height])
for i, model in enumerate(creatures):
# Place spawn sites on a grid.
spawn_pos = (xpos.flat[i], ypos.flat[i], zpos.flat[i])
spawn_site = arena.worldbody.add('site', pos=spawn_pos, group=3)
# Attach to the arena at the spawn sites, with a free joint.
spawn_site.attach(model).add('freejoint')
# Instantiate the physics and render.
physics = mjcf.Physics.from_mjcf_model(arena)
PIL.Image.fromarray(physics.render()) | tutorial.ipynb | deepmind/dm_control | apache-2.0 |
Multi-legged creatures, ready to roam! Let's inject some controls and watch them move. We'll generate a sinusoidal open-loop control signal of fixed frequency and random phase, recording both video frames and the horizontal positions of the torso geoms, in order to plot the movement trajectories. | #@title Video of the movement{vertical-output: true}
#@test {"timeout": 600}
duration = 10 # (Seconds)
framerate = 30 # (Hz)
video = []
pos_x = []
pos_y = []
torsos = [] # List of torso geom elements.
actuators = [] # List of actuator elements.
for creature in creatures:
torsos.append(creature.find('geom', 'torso'))
actuators.extend(creature.find_all('actuator'))
# Control signal frequency, phase, amplitude.
freq = 5
phase = 2 * np.pi * random_state.rand(len(actuators))
amp = 0.9
# Simulate, saving video frames and torso locations.
physics.reset()
while physics.data.time < duration:
# Inject controls and step the physics.
physics.bind(actuators).ctrl = amp * np.sin(freq * physics.data.time + phase)
physics.step()
# Save torso horizontal positions using bind().
pos_x.append(physics.bind(torsos).xpos[:, 0].copy())
pos_y.append(physics.bind(torsos).xpos[:, 1].copy())
# Save video frames.
if len(video) < physics.data.time * framerate:
pixels = physics.render()
video.append(pixels.copy())
display_video(video, framerate)
#@title Movement trajectories{vertical-output: true}
creature_colors = physics.bind(torsos).rgba[:, :3]
fig, ax = plt.subplots(figsize=(4, 4))
ax.set_prop_cycle(color=creature_colors)
_ = ax.plot(pos_x, pos_y, linewidth=4) | tutorial.ipynb | deepmind/dm_control | apache-2.0 |
The plot above shows the corresponding movement trajectories of creature positions. Note how physics.bind(torsos) was used to access both xpos and rgba values. Once the Physics has been instantiated by from_mjcf_model(), the bind() method exposes both the associated mjData and mjModel fields of an mjcf element, providing unified access to all quantities in the simulation.
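bind() also accepts a single element, returning one object through which both mjModel and mjData fields can be read or written (a sketch using the torsos list defined above):

torso_binding = physics.bind(torsos[0])
print(torso_binding.xpos)  # mjData: the geom's current world-frame position
print(torso_binding.rgba)  # mjModel: the geom's colour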
Composer tutorial
In this tutorial we will create a task requiring our "creature" above to press a colour-changing button on the floor with a prescribed force. We begin by implementing our creature as a composer.Entity: | #@title The `Creature` class
class Creature(composer.Entity):
"""A multi-legged creature derived from `composer.Entity`."""
def _build(self, num_legs):
self._model = make_creature(num_legs)
def _build_observables(self):
return CreatureObservables(self)
@property
def mjcf_model(self):
return self._model
@property
def actuators(self):
return tuple(self._model.find_all('actuator'))
# Add simple observable features for joint angles and velocities.
class CreatureObservables(composer.Observables):
@composer.observable
def joint_positions(self):
all_joints = self._entity.mjcf_model.find_all('joint')
return observable.MJCFFeature('qpos', all_joints)
@composer.observable
def joint_velocities(self):
all_joints = self._entity.mjcf_model.find_all('joint')
return observable.MJCFFeature('qvel', all_joints) | tutorial.ipynb | deepmind/dm_control | apache-2.0 |
The Creature Entity includes generic Observables for joint angles and velocities. Because find_all() is called on the Creature's MJCF model, it will only return the creature's leg joints, and not the "free" joint with which it will be attached to the world.
Note that Composer Entities should override the _build and _build_observables methods rather than __init__. The implementation of __init__ in the base class calls _build and _build_observables, in that order, to ensure that the entity's MJCF model is created before its observables. This was a design choice which allows the user to refer to an observable as an attribute (entity.observables.foo) while still making it clear which attributes are observables.
The stateful Button class derives from composer.Entity and implements the initialize_episode and after_substep callbacks. | #@title The `Button` class
NUM_SUBSTEPS = 25 # The number of physics substeps per control timestep.
class Button(composer.Entity):
"""A button Entity which changes colour when pressed with certain force."""
def _build(self, target_force_range=(5, 10)):
self._min_force, self._max_force = target_force_range
self._mjcf_model = mjcf.RootElement()
self._geom = self._mjcf_model.worldbody.add(
'geom', type='cylinder', size=[0.25, 0.02], rgba=[1, 0, 0, 1])
self._site = self._mjcf_model.worldbody.add(
'site', type='cylinder', size=self._geom.size*1.01, rgba=[1, 0, 0, 0])
self._sensor = self._mjcf_model.sensor.add('touch', site=self._site)
self._num_activated_steps = 0
def _build_observables(self):
return ButtonObservables(self)
@property
def mjcf_model(self):
return self._mjcf_model
# Update the activation (and colour) if the desired force is applied.
def _update_activation(self, physics):
current_force = physics.bind(self.touch_sensor).sensordata[0]
self._is_activated = (current_force >= self._min_force and
current_force <= self._max_force)
physics.bind(self._geom).rgba = (
[0, 1, 0, 1] if self._is_activated else [1, 0, 0, 1])
self._num_activated_steps += int(self._is_activated)
def initialize_episode(self, physics, random_state):
self._reward = 0.0
self._num_activated_steps = 0
self._update_activation(physics)
def after_substep(self, physics, random_state):
self._update_activation(physics)
@property
def touch_sensor(self):
return self._sensor
@property
def num_activated_steps(self):
return self._num_activated_steps
class ButtonObservables(composer.Observables):
"""A touch sensor which averages contact force over physics substeps."""
@composer.observable
def touch_force(self):
return observable.MJCFFeature('sensordata', self._entity.touch_sensor,
buffer_size=NUM_SUBSTEPS, aggregator='mean') | tutorial.ipynb | deepmind/dm_control | apache-2.0 |
Note how the Button counts the number of sub-steps during which it is pressed with the desired force. It also exposes an Observable of the force being applied to the button, whose value is an average of the readings over the physics time-steps.
We import some variation modules and an arena factory: | #@title Random initialiser using `composer.variation`
class UniformCircle(variation.Variation):
"""A uniformly sampled horizontal point on a circle of radius `distance`."""
def __init__(self, distance):
self._distance = distance
self._heading = distributions.Uniform(0, 2*np.pi)
def __call__(self, initial_value=None, current_value=None, random_state=None):
distance, heading = variation.evaluate(
(self._distance, self._heading), random_state=random_state)
return (distance*np.cos(heading), distance*np.sin(heading), 0)
#@title The `PressWithSpecificForce` task
class PressWithSpecificForce(composer.Task):
def __init__(self, creature):
self._creature = creature
self._arena = floors.Floor()
self._arena.add_free_entity(self._creature)
self._arena.mjcf_model.worldbody.add('light', pos=(0, 0, 4))
self._button = Button()
self._arena.attach(self._button)
# Configure initial poses
self._creature_initial_pose = (0, 0, 0.15)
button_distance = distributions.Uniform(0.5, .75)
self._button_initial_pose = UniformCircle(button_distance)
# Configure variators
self._mjcf_variator = variation.MJCFVariator()
self._physics_variator = variation.PhysicsVariator()
# Configure and enable observables
pos_corrptor = noises.Additive(distributions.Normal(scale=0.01))
self._creature.observables.joint_positions.corruptor = pos_corrptor
self._creature.observables.joint_positions.enabled = True
vel_corruptor = noises.Multiplicative(distributions.LogNormal(sigma=0.01))
self._creature.observables.joint_velocities.corruptor = vel_corruptor
self._creature.observables.joint_velocities.enabled = True
self._button.observables.touch_force.enabled = True
def to_button(physics):
button_pos, _ = self._button.get_pose(physics)
return self._creature.global_vector_to_local_frame(physics, button_pos)
self._task_observables = {}
self._task_observables['button_position'] = observable.Generic(to_button)
for obs in self._task_observables.values():
obs.enabled = True
self.control_timestep = NUM_SUBSTEPS * self.physics_timestep
@property
def root_entity(self):
return self._arena
@property
def task_observables(self):
return self._task_observables
def initialize_episode_mjcf(self, random_state):
self._mjcf_variator.apply_variations(random_state)
def initialize_episode(self, physics, random_state):
self._physics_variator.apply_variations(physics, random_state)
creature_pose, button_pose = variation.evaluate(
(self._creature_initial_pose, self._button_initial_pose),
random_state=random_state)
self._creature.set_pose(physics, position=creature_pose)
self._button.set_pose(physics, position=button_pose)
def get_reward(self, physics):
return self._button.num_activated_steps / NUM_SUBSTEPS
#@title Instantiating an environment{vertical-output: true}
creature = Creature(num_legs=4)
task = PressWithSpecificForce(creature)
env = composer.Environment(task, random_state=np.random.RandomState(42))
env.reset()
PIL.Image.fromarray(env.physics.render()) | tutorial.ipynb | deepmind/dm_control | apache-2.0 |
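The environment follows the dm_env interface, so it can be stepped like any other; a minimal sketch with a uniformly random action:

action_spec = env.action_spec()
random_action = np.random.uniform(
    action_spec.minimum, action_spec.maximum, size=action_spec.shape)
timestep = env.step(random_action)
print(timestep.reward, sorted(timestep.observation.keys()))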
The Control Suite
The Control Suite is a set of stable, well-tested tasks designed to serve as a benchmark for continuous control learning agents. Tasks are written using the basic MuJoCo wrapper interface. Standardised action, observation and reward structures make suite-wide benchmarking simple and learning curves easy to interpret. Control Suite domains are not meant to be modified, in order to facilitate benchmarking. For full details regarding benchmarking, please refer to our original publication.
A video of solved benchmark tasks is available here.
The suite comes with convenient module-level tuples for iterating over tasks: | #@title Iterating over tasks{vertical-output: true}
max_len = max(len(d) for d, _ in suite.BENCHMARKING)
for domain, task in suite.BENCHMARKING:
print(f'{domain:<{max_len}} {task}')
#@title Loading and simulating a `suite` task{vertical-output: true}
# Load the environment
random_state = np.random.RandomState(42)
env = suite.load('hopper', 'stand', task_kwargs={'random': random_state})
# Simulate episode with random actions
duration = 4 # Seconds
frames = []
ticks = []
rewards = []
observations = []
spec = env.action_spec()
time_step = env.reset()
while env.physics.data.time < duration:
action = random_state.uniform(spec.minimum, spec.maximum, spec.shape)
time_step = env.step(action)
camera0 = env.physics.render(camera_id=0, height=200, width=200)
camera1 = env.physics.render(camera_id=1, height=200, width=200)
frames.append(np.hstack((camera0, camera1)))
rewards.append(time_step.reward)
observations.append(copy.deepcopy(time_step.observation))
ticks.append(env.physics.data.time)
html_video = display_video(frames, framerate=1./env.control_timestep())
# Show video and plot reward and observations
num_sensors = len(time_step.observation)
_, ax = plt.subplots(1 + num_sensors, 1, sharex=True, figsize=(4, 8))
ax[0].plot(ticks, rewards)
ax[0].set_ylabel('reward')
ax[-1].set_xlabel('time')
for i, key in enumerate(time_step.observation):
data = np.asarray([observations[j][key] for j in range(len(observations))])
ax[i+1].plot(ticks, data, label=key)
ax[i+1].set_ylabel(key)
html_video
#@title Visualizing an initial state of one task per domain in the Control Suite
domains_tasks = {domain: task for domain, task in suite.ALL_TASKS}
random_state = np.random.RandomState(42)
num_domains = len(domains_tasks)
n_col = num_domains // int(np.sqrt(num_domains))
n_row = num_domains // n_col + int(0 < num_domains % n_col)
_, ax = plt.subplots(n_row, n_col, figsize=(12, 12))
for a in ax.flat:
a.axis('off')
a.grid(False)
print(f'Iterating over all {num_domains} domains in the Suite:')
for j, [domain, task] in enumerate(domains_tasks.items()):
print(domain, task)
env = suite.load(domain, task, task_kwargs={'random': random_state})
timestep = env.reset()
pixels = env.physics.render(height=200, width=200, camera_id=0)
ax.flat[j].imshow(pixels)
ax.flat[j].set_title(domain + ': ' + task)
clear_output() | tutorial.ipynb | deepmind/dm_control | apache-2.0 |
Locomotion
Humanoid running along corridor with obstacles
As an illustrative example of using the Locomotion infrastructure to build an RL environment, consider placing a humanoid in a corridor with walls, and a task specifying that the humanoid will be rewarded for running along this corridor, navigating around the wall obstacles using vision. We instantiate the environment as a composition of the Walker, Arena, and Task as follows. First, we build a position-controlled CMU humanoid walker. | #@title A position controlled `cmu_humanoid`
walker = cmu_humanoid.CMUHumanoidPositionControlledV2020(
observable_options={'egocentric_camera': dict(enabled=True)}) | tutorial.ipynb | deepmind/dm_control | apache-2.0 |
Next, we construct a corridor-shaped arena that is obstructed by walls. | #@title A corridor arena with wall obstacles
arena = corridor_arenas.WallsCorridor(
wall_gap=3.,
wall_width=distributions.Uniform(2., 3.),
wall_height=distributions.Uniform(2.5, 3.5),
corridor_width=4.,
corridor_length=30.,
) | tutorial.ipynb | deepmind/dm_control | apache-2.0 |
The task constructor places the walker in the arena. | #@title A task to navigate the arena
task = corridor_tasks.RunThroughCorridor(
walker=walker,
arena=arena,
walker_spawn_position=(0.5, 0, 0),
target_velocity=3.0,
physics_timestep=0.005,
control_timestep=0.03,
) | tutorial.ipynb | deepmind/dm_control | apache-2.0 |
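The corridor task can then be wrapped in a composer.Environment exactly as the button-pressing task was earlier; a minimal sketch:

env = composer.Environment(task, random_state=np.random.RandomState(42))
env.reset()
PIL.Image.fromarray(env.physics.render())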