path | concatenated_notebook
---|---|
examples/Tutorial 4-Launch_demo/DP proof.ipynb | ###Markdown
1.4 Demonstration of Differential Privacy using PyDP

The PyDP package provides a Python API for Google's Differential Privacy library. This example uses the alpha 1.0 version of the package, which has the following limitations: it supports only the Laplace noise generation technique, and only integer and floating-point values.

To demonstrate DP, we protect the user from a Membership Inference Attack (MIA). The idea behind DP is that we should **not** be able to identify an individual user. To prove that, we take a database and create two copies of it such that they differ in only one record, i.e. exactly one record is absent from the copy.

The Proof

We have generated a synthetic dataset of 5000 records containing private information such as the user's name and email. The objective of this notebook is to demonstrate how DP can protect the user from MIA. The dataset contains the amount spent by each user (sales_amount). If we take the sum of sales_amount and compare it with the sum over the database that has exactly one record fewer, the difference tells us how much that user spent and hence identifies the user; with the help of DP, we can avoid that.
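For reference, the guarantee being demonstrated can be stated formally: a randomized mechanism $M$ is $\varepsilon$-differentially private if, for any two databases $D$ and $D'$ that differ in a single record and any set of outputs $S$, $$\Pr[M(D) \in S] \leq e^{\varepsilon} \cdot \Pr[M(D') \in S]$$ In other words, the noisy sums computed below from the original and the redacted database should be statistically hard to tell apart, which is exactly what blocks the membership inference described above.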
###Code
!pip install python-dp # installing PyDP
import pydp as dp # by convention our package is to be imported as dp (for Differential Privacy!)
from pydp.algorithms.laplacian import BoundedSum, BoundedMean, Count, Max
import pandas as pd
import statistics # for calculating mean without applying differential privacy
###Output
_____no_output_____
###Markdown
Fetching the Data and loading it!
###Code
# get the sales data from our public github repo
url1 = 'https://raw.githubusercontent.com/OpenMined/PyDP/dev/examples/Tutorial%204-Launch_demo/data/01.csv'
df1 = pd.read_csv(url1,sep=",", engine = "python")
df1.head()
###Output
_____no_output_____
###Markdown
###Code
url2 = 'https://raw.githubusercontent.com/OpenMined/PyDP/dev/examples/Tutorial%204-Launch_demo/data/02.csv'
df2 = pd.read_csv(url2,sep=",", engine = "python")
df2.head()
url3 = 'https://raw.githubusercontent.com/OpenMined/PyDP/dev/examples/Tutorial%204-Launch_demo/data/03.csv'
df3 = pd.read_csv(url3,sep=",", engine = "python")
df3.head()
url4 = 'https://raw.githubusercontent.com/OpenMined/PyDP/dev/examples/Tutorial%204-Launch_demo/data/04.csv'
df4 = pd.read_csv(url4,sep=",", engine = "python")
df4.head()
url5 = 'https://raw.githubusercontent.com/OpenMined/PyDP/dev/examples/Tutorial%204-Launch_demo/data/05.csv'
df5 = pd.read_csv(url5,sep=",", engine = "python")
df5.head()
###Output
_____no_output_____
###Markdown
Combining the whole data into one single dataframe.
###Code
combined_df_temp = [df1, df2, df3, df4, df5]
original_dataset = pd.concat(combined_df_temp)
###Output
_____no_output_____
###Markdown
The size of the combined dataset:
###Code
original_dataset.shape
###Output
_____no_output_____
###Markdown
Now we create our new dataset for testing DP in which we remove exactly one record from the original DB.
###Code
redact_dataset = original_dataset.copy()
redact_dataset = redact_dataset[1:] # this dataset does not have
# Osbourne Gillions [email protected] 31.94 Florida
original_dataset.head()
redact_dataset.head()
###Output
_____no_output_____
###Markdown
If we compute the sum of sales_amount in `original_dataset` and `redact_dataset`, the difference between the two sums should be exactly equal to the money spent (`sales_amount`) by Osbourne Gillions.
###Code
sum_original_dataset = round(sum(original_dataset['sales_amount'].to_list()), 2)
sum_redact_dataset = round(sum(redact_dataset['sales_amount'].to_list()), 2)
sales_amount_Osbourne = round((sum_original_dataset - sum_redact_dataset), 2)
assert sales_amount_Osbourne == original_dataset.iloc[0, 4]
###Output
_____no_output_____
###Markdown
It's quite evident that with traditional methods, even if we remove private information such as the name and email, we can still infer the identity of the user. Using Differential Privacy, we can solve this!
###Code
# we can set the lower bound to 5 because we know a person spends a minimum of $5,
# and the upper bound to 250 as that's the maximum amount a user can spend.
# Keeping the datatype as float since the data contains floating-point numbers;
# if your data has integers only, consider using int as it saves a lot of memory.
dp_sum_original_dataset = BoundedSum(epsilon= 1, lower_bound = 5, upper_bound = 250, dtype ='float')
dp_sum_original_dataset.reset()
dp_sum_original_dataset.add_entries(original_dataset['sales_amount'].to_list()) # adding the data to the DP algorithm
dp_sum_og = round(dp_sum_original_dataset.result(), 2)
print(dp_sum_og)
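# Illustrative sketch only (this is not PyDP's API, just a back-of-the-envelope check):
# with bounds [5, 250], adding or removing one record changes a bounded sum by at most 250,
# so a Laplace mechanism calibrated to that sensitivity uses noise of scale
# b = sensitivity / epsilon = 250 / 1 = 250. Drawing such noise manually with numpy shows
# the rough magnitude of perturbation to expect around the true sum.
import numpy as np
sensitivity, epsilon = 250.0, 1.0
manual_dp_sum = sum_original_dataset + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
print(round(manual_dp_sum, 2))  # comparable in spirit to dp_sum_og above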
###Output
636661.79
###Markdown
Taking Sum of data on the Redacted Dataset
###Code
dp_redact_dataset = BoundedSum(epsilon= 1, lower_bound = 5, upper_bound = 250, dtype ='float')
dp_redact_dataset.add_entries(redact_dataset['sales_amount'].to_list())
dp_redact_dataset.memory_used()
dp_sum_redact = round(dp_redact_dataset.result(),2)
print(dp_sum_redact)
round(dp_sum_og - dp_sum_redact, 2)
print("Difference in sum using DP: {}".format(round(dp_sum_og - dp_sum_redact, 2)))
print("Actual Value: {}".format(sales_amount_Osbourne))
assert round(dp_sum_og - dp_sum_redact, 2) != sales_amount_Osbourne
print("Sum of sales_value in the orignal Dataset: {}".format(sum_original_dataset))
print("Sum of sales_value in the orignal Dataset using DP: {}".format(dp_sum_og))
assert dp_sum_og != sum_original_dataset
print("Sum of sales_value in the redacted Dataset: {}".format(sum_redact_dataset))
print("Sum of sales_value in the redacted Dataset using DP: {}".format(dp_sum_redact))
assert dp_sum_redact != sum_redact_dataset
###Output
Sum of sales_value in the redacted Dataset: 636562.65
Sum of sales_value in the redacted Dataset using DP: 636534.92
###Markdown
Querying on Partial Data

Consider a case where you are receiving a stream of data and want to report a partial result whenever new data arrives. The more data you receive, the better the picture you get, but you still have to give results as each new batch of data arrives. To support this, PyDP provides the option of spending only part of your privacy_budget.
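As a short worked example of how the budget composes (basic sequential composition): the object below is created with $\varepsilon = 1$; if the first partial query spends 30% of it, the remaining budget is $1 - 0.3 = 0.7$, and the final query consumes that remainder, so the total privacy cost of both answers is $0.3 + 0.7 = 1.0 = \varepsilon$.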
###Code
partial_dp_obj = BoundedSum(epsilon= 1, lower_bound = 5, upper_bound = 250, dtype ='float')
###Output
_____no_output_____
###Markdown
Combining the first 3000 records from the stream and then the remaining 2000 records.
###Code
new_df_1 = pd.concat([df1, df2, df3])
new_df_2 = pd.concat([df4, df5])
print(new_df_1.shape,new_df_2.shape)
partial_dp_obj.add_entries(new_df_1['sales_amount'].to_list()) # adding the first 3000 records
partial_dp_obj.privacy_budget_left()
partial_sum_dp = round(partial_dp_obj.result(privacy_budget=0.3), 2) # using only 30% of available privacy budget
print(partial_sum_dp)
actual_partial_sum = round(sum(new_df_1['sales_amount'].to_list()), 2)
print(actual_partial_sum)
print("Difference in sum for first 3000 records which used only 30% privacy budget= {}".format(round(abs(actual_partial_sum - partial_sum_dp), 2)))
partial_dp_obj.privacy_budget_left()
partial_dp_obj.add_entries(new_df_2['sales_amount'].to_list()) # adding the remaining 2000 records to the list
partial_total_sum = round(partial_dp_obj.result(), 2)
print(partial_total_sum)
partial_dp_obj.privacy_budget_left() # we have used up all the budget available to us
def sum_og_dataset(budget):
'''
Sample Function to calculate BoundedSum on the whole dataset with budget as specified
'''
dp_sum_original_dataset.reset()
dp_sum_original_dataset.add_entries(original_dataset['sales_amount'].to_list())
return round(dp_sum_original_dataset.result(budget), 2)
print("Actual Sum: {}".format(sum_original_dataset))
print("Sum from the previous run with privacy budget 1.0: {}".format(dp_sum_og))
print("Sum when using privacy_budget as 0.7 on the whole dataset together: {}".format(sum_og_dataset(budget=0.7)))
print("Sum from this run with privacy budget 0.7 on split dataset: {}".format(partial_total_sum))
###Output
Actual Sum: 636594.59
Sum from the previous run with privacy budget 1.0: 636661.79
Sum when using privacy_budget as 0.7 on the whole dataset together: 636926.06
Sum from this run with privacy budget 0.7 on split dataset: 636379.19
|
pandemic_flu_spread/main_findings.ipynb | ###Markdown
Pandemic Flu Spread

1 - Approach to solving the problem

In this problem, the event that a kid gets infected by another kid is modeled as a Bernoulli trial. Since we have i.i.d. Bernoulli trials with 20 susceptible kids and a probability of infection of 0.02, the number of kids that could get infected by a single kid follows a Binomial distribution. Taking the example of Tommy on the first day of the simulation, the distribution of the number of kids that Tommy infects on day 1 is given by:

$$ Pr(X = k) = {{n}\choose{k}} \cdot p^k q^{n-k} $$

for k = 0, 1, 2, …, n, where n = 20 and p = 0.02. This results in the following Binomial distribution of infected kids on day one:
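Plugging in the numbers gives a quick sanity check for the plot and the values computed below: $$\Pr(X=0) = 0.98^{20} \approx 0.668, \qquad \Pr(X=1) = 20 \cdot 0.02 \cdot 0.98^{19} \approx 0.272,$$ so roughly 94% of the probability mass falls on zero or one new infection.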
###Code
# imports needed for this cell (assuming they are not already provided by an earlier setup cell)
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import binom

n, p = 20, 0.02
fig, ax = plt.subplots(1, 1, figsize=(7, 5))
x = np.arange(binom.ppf(0.01, n, p),
binom.ppf(1, n, p))
ax.plot(x, binom.pmf(x, n, p), 'bo', ms=8, label='binom pmf')
ax.vlines(x, 0, binom.pmf(x, n, p), colors='b', lw=5, alpha=0.5)
ax.set_title("PMF of Infected Kids on Day 1", fontsize=17)
ax.set_xlabel("Number of Infected Kids", fontsize=14)
ax.tick_params(axis='both', which='major', labelsize=12)
plt.savefig(plots_folder / "pmf_infected_day_one.svg")  # plots_folder is assumed to be defined in the notebook's setup (a pathlib.Path output directory)
plt.show()
###Output
_____no_output_____
###Markdown
This implies that Tommy is overwhelmingly likely to infect at most one kid on the first day, and there is a higher probability that he does not infect any:
###Code
print(binom.pmf(0, n, p))
###Output
0.6676079717550946
###Markdown
Thus the expected number of kids infected by Tommy on day 1 is:
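$$E[X] = np = 20 \times 0.02 = 0.4$$ which matches the value returned by `binom.expect` below up to floating-point error.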
###Code
print(binom.expect(args=(n, p)))
###Output
0.39999999999999974
###Markdown
2 - Sample simulation run
###Code
run_params = dict(days=20,
susceptible_students=20,
infected_students=1,
probability_infection=0.02,
days_to_recover=3)
single_sim_results = PandemicSim.run_sim_with(**run_params, debug=True)
_ = single_sim_results.plot()
plt.title("Pandemic Simulation")
plt.ylabel("Number of Students")
plt.show()
###Output
Day 4: st_21 has recovered!!
Day 3: Pandemic is over!!!
Day 4: Pandemic is over!!!
Day 5: Pandemic is over!!!
Day 6: Pandemic is over!!!
Day 7: Pandemic is over!!!
Day 8: Pandemic is over!!!
Day 9: Pandemic is over!!!
Day 10: Pandemic is over!!!
Day 11: Pandemic is over!!!
Day 12: Pandemic is over!!!
Day 13: Pandemic is over!!!
Day 14: Pandemic is over!!!
Day 15: Pandemic is over!!!
Day 16: Pandemic is over!!!
Day 17: Pandemic is over!!!
Day 18: Pandemic is over!!!
Day 19: Pandemic is over!!!
Day 20: Pandemic is over!!!
###Markdown
3 - Get the expected values over several simulation trials
###Code
results = PandemicSim.run_sims_with(trials=1000,
days=20,
susceptible_students=20,
infected_students=1,
probability_infection=0.02,
days_to_recover=3)
plot_sim(results, 0.02, 3, "sim_proj")
display(results)
###Output
_____no_output_____ |
Graph/1115/210. Course Schedule II.ipynb | ###Markdown
There are a total of n courses you have to take, labeled from 0 to n-1. Some courses require prerequisites. For example, to take course 0 you have to first finish course 1, which we express as a pair: [0,1]. Given the total number of courses and their prerequisites, return an ordering of the courses you can follow to finish them all. There may be multiple correct orderings; you only need to return one of them. If it is impossible to finish all courses, return an empty array.

Example 1: Input: 2, [[1,0]] Output: [0,1] Explanation: There are 2 courses in total. To take course 1 you must first finish course 0, so the correct course order is [0,1].

Example 2: Input: 4, [[1,0],[2,0],[3,1],[3,2]] Output: [0,1,2,3] or [0,2,1,3] Explanation: There are 4 courses in total. To take course 3 you must first finish courses 1 and 2, and both course 1 and course 2 must come after course 0. So one correct order is [0,1,2,3]; another correct order is [0,2,1,3].

Note: The input prerequisites are a graph given as an edge list, not as an adjacency matrix. See the notes on graph representations for details. You may assume there are no duplicate edges in the input prerequisites.

Hints: This problem is equivalent to detecting whether a cycle exists in a directed graph. If a cycle exists, no topological ordering exists and it is therefore impossible to take all courses. Topological sort via DFS - an excellent Coursera video tutorial (21 minutes) introduces the basic concepts of topological sorting. Topological sorting can also be done via BFS.

Constraints: 1 <= numCourses <= 2000 0 <= prerequisites.length <= numCourses * (numCourses - 1) prerequisites[i].length == 2 0 <= ai, bi < numCourses ai != bi All the pairs [ai, bi] are distinct.
###Code
from collections import defaultdict
class Solution:
def findOrder(self, numCourses: int, prerequisites):
graph = defaultdict(list)
indeg = [0] * numCourses
for a, b in prerequisites:
graph[b].append(a)
indeg[a] += 1
q = [i for i, x in enumerate(indeg) if x == 0]
res = []
while q:
n = q.pop()
res.append(n)
for j in graph[n]:
indeg[j] -= 1
if indeg[j] == 0:
q.append(j)
return res if len(res) == numCourses else []
solution = Solution()
solution.findOrder(4, [[1,0],[2,0],[3,1],[3,2]])
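# The hint above also mentions a DFS-based topological sort. Below is a minimal sketch of
# that alternative (the function name and structure are illustrative, not part of the
# original solution): do a post-order DFS, marking nodes that are on the current stack to
# detect cycles, and reverse the finish order at the end.
def find_order_dfs(numCourses, prerequisites):
    graph = defaultdict(list)
    for a, b in prerequisites:
        graph[b].append(a)          # edge: prerequisite b -> course a
    state = [0] * numCourses        # 0 = unvisited, 1 = on the DFS stack, 2 = finished
    order = []
    def dfs(node):
        if state[node] == 1:        # back edge -> cycle -> no valid ordering
            return False
        if state[node] == 2:
            return True
        state[node] = 1
        for nxt in graph[node]:
            if not dfs(nxt):
                return False
        state[node] = 2
        order.append(node)          # post-order: a course is appended after its dependents
        return True
    for i in range(numCourses):
        if not dfs(i):
            return []
    return order[::-1]              # reversed post-order is a valid topological order

find_order_dfs(4, [[1,0],[2,0],[3,1],[3,2]])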
###Output
_____no_output_____ |
R_TBC_Prog_category.ipynb | ###Markdown
###Code
#@title Save the Genres variables
#save variables
import pickle
with open('Prog_genr','wb') as f:
pickle.dump(Prog_genr, f)
!ls
#@title load the Genres variables
# Open variables
with open('Prog_genr', 'rb') as f:
Prog_genr = pickle.load(f)
Prog_genr
#@title Create Genres dataframe and count it
#https://stackoverflow.com/questions/15943769/how-do-i-get-the-row-count-of-a-pandas-dataframe
# create dataframe for Genre
df = pd.DataFrame(Prog_genr)
df.columns =['prog_name','genre','desc']
print(len(df.index))
#@title Check the collected dataframe
df
#@title Randomly select rows and check it
#randomly select rows
#https://www.geeksforgeeks.org/how-to-randomly-select-rows-from-pandas-dataframe/
df.sample(n =3)
#@title Remove empty rows
# #https://stackoverflow.com/questions/32093829/remove-duplicates-from-dataframe-based-on-two-columns-a-b-keeping-row-with-max
# # Drop rows with any duplicate cell and reset indext removing the original index
# df = df.drop_duplicates(['genre','prog_name']).reset_index(drop=True)
# #or
# df = df.drop_duplicates()
# #https://stackoverflow.com/questions/58311140/remove-white-space-from-entire-dataframe
# #https://hackersandslackers.com/pandas-dataframe-drop/#:~:text=Drop%20Empty%20Rows%20or%20Columns,method%20is%20specifically%20for%20this.&text=Technically%20you%20could%20run%20df,rows%20where%20are%20completely%20empty.
# # Drop rows with any empty or white space cell
for column in df.columns:
df[column] = df[column].apply(lambda x:x.strip())
#https://www.kite.com/python/answers/how-to-drop-empty-rows-from-a-pandas-dataframe-in-python
#https://xspdf.com/resolution/56076054.html
#how the df looks like
nan_value = float("NaN")
df.replace("", nan_value, inplace=True)
df.dropna(subset = ["genre"], inplace=True)
df.dropna(subset = ["prog_name"], inplace=True)
#@title Create df only with important words
#https://stackoverflow.com/questions/3368969/find-string-between-two-substrings
#sort only important words
#@title Create Genres dataframe and count it
#https://stackoverflow.com/questions/15943769/how-do-i-get-the-row-count-of-a-pandas-dataframe
# create dataframe for Genre
# df = pd.DataFrame(Prog_genr)
# df.columns =['prog_name','genre','desc']
print(len(df.index))
df
###Output
_____no_output_____ |
VK_parce_notoken.ipynb | ###Markdown
After parsing, the next step is building a database from the resulting file. To do this, the data first had to be decoded from its original encoding and then rewritten into an Excel file.
###Code
import pandas as pd
import numpy as np
import re
VKdf = pd.read_excel('database_excel.xlsx', sheet_name='Лист1')
VKdf
VKdf['only_date'] = VKdf['date'].dt.strftime('%Y-%m')
VKdf['only_time'] = VKdf['date'].dt.strftime('%H')
VKdf['weekday'] = VKdf['date'].dt.dayofweek
VKdf
months = pd.date_range('2020-08-19', '2021-08-19', freq='1M', normalize=True)
months_to_analyse = [d.strftime('%Y-%m') for d in months]
vk2021 = VKdf.loc[VKdf['only_date'].isin(months_to_analyse)]
vk2021.dropna(inplace=True)
vk2021
vk2021.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 661 entries, 65 to 728
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 date 661 non-null datetime64[ns]
1 text 661 non-null object
2 comments 661 non-null float64
3 likes 661 non-null float64
4 reposts 661 non-null float64
5 views 661 non-null float64
6 attach_count 661 non-null float64
7 attach_types 661 non-null object
8 only_date 661 non-null object
9 only_time 661 non-null object
10 weekday 661 non-null int64
dtypes: datetime64[ns](1), float64(5), int64(1), object(4)
memory usage: 62.0+ KB
###Markdown
Now let's write a function that returns the post topics as a list and apply it row by row, creating a new column like this: cool_dict = lessons_more_month.apply(lambda row1 : tryings(row1['start_date'], row1['finish_date'], row1['title']), axis=1), and then explode that column. We can also count the number of words in each post and store it in a separate column.
###Code
def find_topic(text):
regular = r'#\S+'
compiled = re.compile(regular)
try:
lst_topics = compiled.findall(text)
except BaseException:
lst_topics = ['No topic']
return lst_topics
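# Quick sanity check of find_topic on a made-up post text (illustrative example, not taken from the data):
print(find_topic('sample post text #Кейс_Skillbox #Skillbox_дизайн'))  # -> ['#Кейс_Skillbox', '#Skillbox_дизайн']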
vk2021['topics'] = vk2021.apply(lambda row : find_topic(row['text']), axis=1)
vk2021
def word_count(textik):
lst = textik.split()
return len(lst)
vk2021['text_length'] = vk2021.apply(lambda row : word_count(row['text']), axis=1)
vk2021['count'] = 1
vk2021_exploded = vk2021.explode('topics')
vk2021_exploded
###Output
_____no_output_____
###Markdown
Now there are two dataframes: * vk2021 is used to analyze the effect of the publication date and other patterns apart from the topic; * vk2021_exploded is used to analyze the topics. We could also look at the word count, and perhaps scrape something else, for example the number of images in a post or whether a video is attached. UPDATE: values from the JSON were added to the parsing code in order to scrape the number of attachments, but the updated code is still on another computer and is not accessible at the moment.
###Code
vk2021.to_csv("vk2021.csv", sep=';')
vk2021_exploded.to_csv("vk2021_exploded.csv", sep=';')
###Output
_____no_output_____
###Markdown
I saved the new tables to files so that I don't have to rerun the code from the top and can open them right away here.
###Code
import pandas as pd
import numpy as np
import re
import seaborn as sns
import matplotlib.pyplot as plt
vkdf = pd.read_excel('vk2021.xlsx')
vkexp_df = pd.read_excel('vk2021_exploded.xlsx')
vkdf.drop(labels='Column1', axis=1, inplace=True)
vkexp_df.drop(labels='Column1', axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
Now I create a function that groups all text lengths into 50-word intervals. It's simply more convenient.
###Code
def text_group_by_count(textt):
if textt < 50:
return 50
elif textt >= 50 and textt < 100:
return 100
elif textt >= 100 and textt < 150:
return 150
elif textt >= 150 and textt < 200:
return 200
elif textt >= 200 and textt < 250:
return 250
else:
return 300
vkdf['text_length'] = vkdf.apply(lambda row : text_group_by_count(row['text_length']), axis=1)
vkdf.head()
vkdf.describe()
# look at how post popularity depends on the publication time
time_grouped = vkdf.groupby('only_time').agg('sum').reset_index()
time_grouped
###Output
_____no_output_____
###Markdown
It looks like 18:00 and 19:00, as well as 13:00, are the peak hours for publishing posts: in these hours, on average, twice as many posts are published as in other hours. These are the commute rush hour and lunch break, when people hang out on social networks. I think these hours should be removed from further analysis and analyzed separately. I will also drop 9:00, 10:00 and 21:00, where publication activity is minimal. Now let's look at the same grouping by day of the week.
###Code
day_grouped = vkdf.groupby('weekday').agg('median').reset_index()
day_grouped
###Output
_____no_output_____
###Markdown
Days of the week also need to be analyzed separately, because on weekends the number of posts drops by half. Very interestingly, despite that, the number of likes, reposts and comments (on average per post) does not drop and even increases. We can conclude that on weekends visitors are more inclined to like and repost, and they also comment very actively. So the conclusion is: on weekends user activity (engagement) is much higher than on weekdays. * Maybe there is some other pattern? Perhaps particular topics, or something else? We analyze the two dataframes (weekdays and weekends) to find out. * We should build facet grids or different plots to see what patterns exist between several variables.
###Code
workday_df_not_rush = vkdf.loc[(vkdf['weekday'].isin(list(range(0, 5)))) & (vkdf['only_time'].isin(['11','12','14','15','16','17','20']))]
workday_df_not_rush['only_time'].unique()
###Output
_____no_output_____
###Markdown
WHAT'S NEXT? * Now I will build a dataset with median activity values by hour. * Then another dataset with the number of posts by hour. * And compare all these values on one plot. The conclusion is: outside rush hours the number of posts is highest from 11:00 to 19:00 and activity is roughly the same across those hours, except for the number of likes in the morning; there are slightly more reposts after 17:00. Perhaps, if more likes are needed, it is also worth publishing more at 11:00, and for reposts at 17:00. Calculations for non-rush hours on weekdays
###Code
median_activity = workday_df_not_rush.groupby('only_time').agg('median').reset_index()
length_count_time = workday_df_not_rush.groupby('only_time')['count'].sum().reset_index()
fig, ax = plt.subplots(figsize = (8, 4))
for activity in ['comments', 'likes', 'reposts']:
plt.plot(median_activity['only_time'], median_activity[activity], label = activity)
plt.plot(length_count_time['only_time'], length_count_time['count'], label = 'count')
plt.xticks(list(median_activity['only_time'].unique()))
plt.title('Средние значения активностей посетилей паблика по времени публикации в будни не в часы пик')
plt.xlabel('Время публикации, час')
plt.ylabel('Количество единиц активности')
plt.xticks(rotation=45)
plt.legend()
###Output
_____no_output_____
###Markdown
Now let's see whether the activity depends on the text length. It is clear that most of our texts are up to roughly 120 words long. Interestingly, even for the small number of texts longer than 150 words, activity does not seem to drop, i.e. per text we still get roughly the same amount of user activity. Conclusion: activity seems to be lowest for texts between 120 and 220 words long.
###Code
length_count = workday_df_not_rush.groupby('text_length')['count'].sum().reset_index()
length_grouped = workday_df_not_rush.groupby('text_length').agg('median').reset_index()
fig, ax = plt.subplots(figsize = (8, 4))
for activity in ['comments', 'likes', 'reposts']:
plt.plot(length_grouped['text_length'], length_grouped[activity], label = activity)
plt.plot(length_count['text_length'], length_count['count'], label = 'count')
plt.title('Средние значения активностей посетилей паблика по длине текста')
plt.xlabel('Длина текста')
plt.ylabel('Количество активностей')
plt.xticks(rotation=45)
plt.legend()
###Output
_____no_output_____
###Markdown
Now let's look at the attachments.
###Code
attachments = workday_df_not_rush.groupby('attach_types').agg({'likes' : 'median', 'reposts' : 'median', 'attach_count' : 'median', 'count' : 'sum'}).reset_index()
attachments.sort_values('likes')
list(workday_df_not_rush.loc[workday_df_not_rush['attach_types'] == 'photodoc', 'text'])
list(workday_df_not_rush.loc[workday_df_not_rush['attach_types'] == 'photovideo', 'text'])
###Output
_____no_output_____
###Markdown
выводы Пожалуй, стоит обратить внимание на посты с фото, ссылками, а также где есть голосования по карточкам (видео), либо где есть смешанный формат вложений - такие хорошо лайкают, особенно нужно обратить внимание на три поста: 'Подводим итоги фестиваля Big Picture!Совместно с креативным агентством POSSIBLE и продакшеном HYPE Production мы в очередной раз провели Big Picture — премию, которая отмечает вклад продакшенов в развитие визуальных коммуникаций.\xa0В этом году во всех номинациях объявлено 60 победителей — полный список лауреатов можно посмотреть на сайте фестиваля, а самых титулованных из них показываем в карточках. Листайте и пишите в комментариях, какая работа впечатлила вас больше всего!Все победители тут: https://bigpicturefestival.ru/winners/\xa0skillbox_мультимедиа', 'Что станет с вашим городом в ближайшем будущем? Так звучит тема нашего челленджа для студентов курсов «Рекламная графика», «CG-дженералист» и «Художник компьютерной графики».Мы предложили участникам сфотографировать любую локацию в своем городе и показать, как она будет выглядеть через несколько лет. И вот наши победители!⭐ 1 место — [id1853210|Антон Вереин]⭐ 2 место — [id18661064|Настя Петрова] ⭐ 3 место — Тимур СадвакасовСмотрите в карусели работы победителей и видео того, как они создавались 🤩Вдохновляет? Тогда добро пожаловать в Skillbox 👇🏻Курс «Рекламная графика»: https://vk.cc/bXmagSКурс «Профессия CG-дженералист»: https://vk.cc/bXmal6Курс «Профессия Художник компьютерной графики»: https://vk.cc/atjgEpSkillbox_дизайн Студенты_Skillbox', 'Забрендировать пустыню — это что-то новое.Так выглядит креативная рекламная кампания Burberry, бренда люксовых товаров. Монограмму компании нанесли прямо на песчаные дюны в Дубае. Но дюны оказались не единственными полотнами — логотип Burberry появился на воздушных шарах в Монголии и парусных лодках в Китае.Перфоманс посвящен выходу капсульной коллекции TB Summer Monogram, а придумал все эти текстурные надписи художник Натаниэль А. Алапид (@alapide_creator). Кейс_Skillbox Skillbox_маркетинг''Корейское телевидение славится своим самобытным креативом. И вот еще одно подтверждение👇🏻 Недавно в стране запустили Discovery. Хотя международная телесеть обновляла айдентику всего год назад, корейцы решили внести в ее визуальное оформление свою лепту.Дизайн-студия из Сеула SUPER VERY MORE SVM (@superverymore) разработала красочные 3D-объекты, обыгрывающие брендовые элементы. Причем идей наштормили с хорошим запасом — для ребрендинга всех каналов сети.С таким дизайном можно и телик посмотреть! А пока включайте видео и пишите в комментариях свои мысли о проекте студии. Кейс_Skillbox Skillbox_маркетинг' 'ОСТОРОЖНО: этот пост может изменить ваше представление о доме мечты.Архитекторы из Москвы Давит и Мэри Джилавян разработали концепцию частного дома Carmine House.Основным цветом авторы выбрали оттенок красного кармина — он отлично выделяется на фоне леса. Дом разделен на две части углом в 45 градусов, чтобы сохранить растущие месте постройки деревья.Со стороны гостиной установлены вращающиеся окна, через которые можно выйти на террасу и пирс. На чердаке установлены окна, трансформирующиеся в мини-балконы.Как видите, многие решения нацелены на то, чтобы хозяева дома ощущали больше гармонии с природой.Как вам проект?Skillbox_вдохновляет Skillbox_дизайн' А еще хорошо репостят посты со ссылками и карточками. Подсчеты для часов пик в будняхТеперь можно все те же самые подсчеты сделать для часов пик.
###Code
workday_rush = vkdf.loc[(vkdf['weekday'].isin(list(range(0, 5)))) & (vkdf['only_time'].isin(['13','18','19']))]
median_activity = workday_rush.groupby('only_time').agg('median').reset_index()
length_count_time = workday_rush.groupby('only_time')['count'].sum().reset_index()
fig, ax = plt.subplots(figsize = (8, 4))
for activity in ['comments', 'likes', 'reposts']:
plt.plot(median_activity['only_time'], median_activity[activity], label = activity)
plt.plot(length_count_time['only_time'], length_count_time['count'], label = 'count')
plt.xticks(list(median_activity['only_time'].unique()))
plt.title('Средние значения активностей посетилей паблика по времени публикации в часы-пик')
plt.xlabel('Время публикации, час')
plt.ylabel('Количество единиц активности')
plt.xticks(rotation=45)
plt.legend()
length_count = workday_rush.groupby('text_length')['count'].sum().reset_index()
length_grouped = workday_rush.groupby('text_length').agg('median').reset_index()
fig, ax = plt.subplots(figsize = (8, 4))
for activity in ['comments', 'likes', 'reposts']:
plt.plot(length_grouped['text_length'], length_grouped[activity], label = activity)
plt.plot(length_count['text_length'], length_count['count'], label = 'count')
plt.title('Средние значения активностей посетилей паблика по длине текста в часы-пик')
plt.xlabel('Длина текста')
plt.ylabel('Количество активностей')
plt.xticks(rotation=45)
plt.legend()
###Output
_____no_output_____
###Markdown
Conclusions: the highest user activity is at lunchtime, because the number of publications grows after 18:00 while the amount of activity does not change, which means there is less activity per post. As for the length of texts published during rush hours, texts of 200+ words get reposted better, while texts of 200+ words and under 100 words get liked better. Meanwhile, most texts are under 150 words. Conclusion: if likes are needed, the sweet spot is up to 150 words; if reposts, the texts can be made longer.
###Code
attachments = workday_rush.groupby('attach_types').agg({'likes' : 'median', 'reposts' : 'median', 'attach_count' : 'median', 'count' : 'sum'}).reset_index()
attachments.sort_values('likes')
###Output
_____no_output_____
###Markdown
During rush hours videos get liked well, probably because there is time to watch them over lunch. So during lunch and rush hours it is worth posting with video, as well as with cards (photo sets) and links. Links are still reposted well. Now let's look at the weekends.
###Code
weekend_df = vkdf.loc[(vkdf['weekday'].isin(list(range(5, 7))))]
weekend_df
median_activity = weekend_df.groupby('only_time').agg('median').reset_index()
length_count_time = weekend_df.groupby('only_time')['count'].sum().reset_index()
fig, ax = plt.subplots(figsize = (8, 4))
for activity in ['comments', 'likes', 'reposts']:
plt.plot(median_activity['only_time'], median_activity[activity], label = activity)
plt.plot(length_count_time['only_time'], length_count_time['count'], label = 'count')
plt.xticks(list(median_activity['only_time'].unique()))
plt.title('Средние значения активностей посетилей паблика по времени публикации в выходные')
plt.xlabel('Время публикации, час')
plt.ylabel('Количество единиц активности')
plt.xticks(rotation=45)
plt.legend()
length_count = weekend_df.groupby('text_length')['count'].sum().reset_index()
length_grouped = weekend_df.groupby('text_length').agg('median').reset_index()
fig, ax = plt.subplots(figsize = (8, 4))
for activity in ['comments', 'likes', 'reposts']:
plt.plot(length_grouped['text_length'], length_grouped[activity], label = activity)
plt.plot(length_count['text_length'], length_count['count'], label = 'count')
plt.title('Средние значения активностей посетилей паблика по длине текста в выходные')
plt.xlabel('Длина текста')
plt.ylabel('Количество активностей')
plt.xticks(rotation=45)
plt.legend()
###Output
_____no_output_____
###Markdown
Conclusions: * Interestingly, the number of likes and reposts drops during the period of highest publication activity on weekends; most of them come at 10:00, 15:00 and after 17:00. * Texts under 150 words collect the most likes and reposts, while texts of 150-250 words get slightly more comments. It is worth trying to change the publication time on weekends (currently 10:00-15:00, but it should be before 10:00 and after 15:00) and to publish texts under 150 words.
###Code
attachments1 = weekend_df.groupby('attach_types').agg({'likes' : 'median', 'reposts' : 'median', 'attach_count' : 'median', 'count' : 'sum'}).reset_index()
attachments1.sort_values('likes')
###Output
_____no_output_____
###Markdown
Conclusions: Interestingly, with an equal count (1), posts with a photo plus a document collect the most likes. Posts with photos are liked well in general, as are posts with both video and photo. Video and photo are also reposted well. It seems that on weekends, unlike weekdays, users prefer video and photo, perhaps because few links are posted overall. Now let's look at the publication topics.
###Code
vkexp_df
###Output
_____no_output_____
###Markdown
I will try to select records with topics that account for at least 1% of the total number of records, in order to look at patterns without working with topics that have very few publications.
###Code
topics_sorted = vkexp_df['topics'].value_counts()[vkexp_df['topics'].value_counts(normalize=True) > 0.01]
topics_sorted
###Output
_____no_output_____
###Markdown
I can see that the same topic often appears with different letter case, so all characters in the string values need to be converted to a single case. To do this, we modify the full dataset, then select from it the list of topics that have at least 1% of the total number of publications, and keep from the big dataset only the records on those topics.
###Code
vkexp_df['topics'] = vkexp_df['topics'].str.lower()
topics_sorted1 = vkexp_df['topics'].value_counts()[vkexp_df['topics'].value_counts(normalize=True) > 0.01]
topics_sorted1
#vk_pop_topics = vkexp_df.loc[vkexp_df['topics'].isin()]
top_topics = list(topics_sorted1.index)
vk_pop_topics = vkexp_df.loc[vkexp_df['topics'].isin(top_topics)]
vk_pop_topics
###Output
_____no_output_____
###Markdown
Now let's look at user activity across the different topics.
###Code
vk_pop_topics.groupby('topics').agg({'likes' : 'median', 'reposts' : 'median', 'attach_count' : 'median', 'count' : 'sum'}).reset_index().sort_values(by='likes', ascending=False)
###Output
_____no_output_____
###Markdown
Conclusions: It seems that the topics 'кейс' (case study), 'в закладки' (bookmark this) and 'вдохновляет' (inspiring) get liked well, and 'в закладки' also gets reposted well. Those are the ones to focus on overall. Now, out of curiosity, we can also look at the same breakdown separately for weekends and weekdays; maybe there is a difference.
###Code
vk_pop_topics_weekday = vk_pop_topics.loc[(vk_pop_topics['weekday'].isin(list(range(0, 5))))]
vk_pop_topics_weekday.groupby('topics').agg({'likes' : 'median', 'reposts' : 'median', 'attach_count' : 'median', 'count' : 'sum'}).reset_index().sort_values(by='likes', ascending=False)
vk_pop_topics_weekend = vk_pop_topics.loc[(vk_pop_topics['weekday'].isin(list(range(5, 7))))]
vk_pop_topics_weekend.groupby('topics').agg({'likes' : 'median', 'reposts' : 'median', 'attach_count' : 'median', 'count' : 'sum'}).reset_index().sort_values(by='likes', ascending=False)
###Output
_____no_output_____
###Markdown
Интерпретация Пиковые активности по публикациям вижу в 13-14 и 18-19 часов в будни. В выходные публикационная активность самая высокая с 11 до 15, но интересно, что в этот период люди меньше лайкают, репостят или комментируют посты. В целом, в выходные пользователи проявляют больше активностей на один пост, чем в будни. Т.е. следует больше публиковать в выходные до 11 и после 15. В будни, если нужно больше лайков, стоит побольше публиковать в 11. А для репостов - в 17. РЕКОМЕНДАЦИИ НЕ ДЛЯ ЧАСОВ ПИК В БУДНИТакже в будни активностей пользователей меньше всего при длине текста от 120 до 220 слов. Пожалуй, стоит обратить внимание на посты с фото, ссылками, а также где есть голосования по карточкам (видео), либо где есть смешанный формат вложений - такие хорошо лайкают, особенно нужно обратить внимание на посты и, может быть, делать похожие: 'Подводим итоги фестиваля Big Picture!Совместно с креативным агентством POSSIBLE и продакшеном HYPE Production мы в очередной раз провели Big Picture — премию, которая отмечает вклад продакшенов в развитие визуальных коммуникаций.\xa0В этом году во всех номинациях объявлено 60 победителей — полный список лауреатов можно посмотреть на сайте фестиваля, а самых титулованных из них показываем в карточках. Листайте и пишите в комментариях, какая работа впечатлила вас больше всего!Все победители тут: https://bigpicturefestival.ru/winners/\xa0skillbox_мультимедиа', 'Что станет с вашим городом в ближайшем будущем? Так звучит тема нашего челленджа для студентов курсов «Рекламная графика», «CG-дженералист» и «Художник компьютерной графики».Мы предложили участникам сфотографировать любую локацию в своем городе и показать, как она будет выглядеть через несколько лет. И вот наши победители!⭐ 1 место — [id1853210|Антон Вереин]⭐ 2 место — [id18661064|Настя Петрова] ⭐ 3 место — Тимур СадвакасовСмотрите в карусели работы победителей и видео того, как они создавались 🤩Вдохновляет? Тогда добро пожаловать в Skillbox 👇🏻Курс «Рекламная графика»: https://vk.cc/bXmagSКурс «Профессия CG-дженералист»: https://vk.cc/bXmal6Курс «Профессия Художник компьютерной графики»: https://vk.cc/atjgEpSkillbox_дизайн Студенты_Skillbox', 'Забрендировать пустыню — это что-то новое.Так выглядит креативная рекламная кампания Burberry, бренда люксовых товаров. Монограмму компании нанесли прямо на песчаные дюны в Дубае. Но дюны оказались не единственными полотнами — логотип Burberry появился на воздушных шарах в Монголии и парусных лодках в Китае.Перфоманс посвящен выходу капсульной коллекции TB Summer Monogram, а придумал все эти текстурные надписи художник Натаниэль А. Алапид (@alapide_creator). Кейс_Skillbox Skillbox_маркетинг''Корейское телевидение славится своим самобытным креативом. И вот еще одно подтверждение👇🏻 Недавно в стране запустили Discovery. Хотя международная телесеть обновляла айдентику всего год назад, корейцы решили внести в ее визуальное оформление свою лепту.Дизайн-студия из Сеула SUPER VERY MORE SVM (@superverymore) разработала красочные 3D-объекты, обыгрывающие брендовые элементы. Причем идей наштормили с хорошим запасом — для ребрендинга всех каналов сети.С таким дизайном можно и телик посмотреть! А пока включайте видео и пишите в комментариях свои мысли о проекте студии. Кейс_Skillbox Skillbox_маркетинг''ОСТОРОЖНО: этот пост может изменить ваше представление о доме мечты.Архитекторы из Москвы Давит и Мэри Джилавян разработали концепцию частного дома Carmine House.Основным цветом авторы выбрали оттенок красного кармина — он отлично выделяется на фоне леса. 
Дом разделен на две части углом в 45 градусов, чтобы сохранить растущие месте постройки деревья.Со стороны гостиной установлены вращающиеся окна, через которые можно выйти на террасу и пирс. На чердаке установлены окна, трансформирующиеся в мини-балконы.Как видите, многие решения нацелены на то, чтобы хозяева дома ощущали больше гармонии с природой.Как вам проект?Skillbox_вдохновляет Skillbox_дизайн'А еще хорошо репостят посты со ссылками и карточками. РЕКОМЕНДАЦИИ ДЛЯ ЧАСОВ ПИК В БУДНЯХСамая большая активность пользователей - в обед. Потому что количество публикаций вырастает после 18, но при этом количество активностей не меняется, значит, на один пост становится меньше активностей.Что касается длины текстов, которые публикуются в часы пик, лучше репостят тексты от 200 слов, лайкают лучше тексты от 200 слов и до 100 слов. При этом большинство текстов - до 150 слов. Вывод: если нужны лайки - золотая середины - до 150 слов. Если репосты - то можно делать тексты подлиннее.В часы пик хорошо лайкают видео, наверное, потому что есть время их посмотреть в обед. Т.е. стоит в обед и часы пик постить с видео. И с карточками (фотопул) и со ссылкамиПо-прежнему хорошо репостят ссылки.РЕКОМЕНДАЦИИ В ВЫХОДНЫЕ Стоит попробовать изменить время публикации в выходные (сейчас это 10-15 часов, а надо до 10 и после 15)И публиковать лучше тексты до 150 слов.Интересно, что при равном количестве (1) больше всего лайков набирают посты с фото и документом каким-то. Хорошо лайкают посты с фото в целом, а также где и видео и фото. Репостят тоже хорошо видео и фото.Похоже, на выходных, в отличие от будней, пользователям больше нравятся видео и фото, может быть, потому что в целом мало постят ссылок.ПО ТЕМАМКажется, что топики: кейс, в закладки, вдохновляет - хорошо лайкают, а топик "в закладки" еще и хорошо репостят. На них и нужно обратить внимание.
###Code
###Output
_____no_output_____ |
5- Text Classification II/cls5-Assignment1-sol.ipynb | ###Markdown
You are given 3 CSV files for the following 3 questions respectively: • HouseData.csv • Marketing.csv • Results.csv

1. Write a function which accepts a CSV file location, splits the data in the file randomly into 60% and 40%, and saves the two parts in two different files named "First.csv" and "Second.csv". Run this function for HouseData.csv.
###Code
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
df = pd.read_csv('Dataset/698_m5_datasets_v1.0/HouseData.csv')
x1, x2 = train_test_split(df, test_size=0.4)  # 60/40 split as required by the question
df.shape
x1.shape
x2.shape
x1.to_csv('First.csv')
x2.to_csv('Second.csv')
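# A sketch of the reusable function the question actually asks for, wrapping the steps above
# (the 60/40 proportions follow the problem statement; file names as required):
def split_csv(file_location, first_frac=0.6):
    data = pd.read_csv(file_location)
    first, second = train_test_split(data, train_size=first_frac)
    first.to_csv('First.csv', index=False)
    second.to_csv('Second.csv', index=False)
    return first.shape, second.shape

split_csv('Dataset/698_m5_datasets_v1.0/HouseData.csv')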
###Output
_____no_output_____
###Markdown
2. Write a function which accepts a CSV and performs label encoding on the "Class" column of that file, creates a new column called "Label", and saves the result in the same file. Run this function for Marketing.csv.
###Code
from sklearn.preprocessing import LabelEncoder
labelencoder = LabelEncoder()
df2 = pd.read_csv('Dataset/698_m5_datasets_v1.0/Marketing.csv')
df2.info()
df2.shape
df2.sample(10)
df2['class'].unique()
df2['label'] = labelencoder.fit_transform(df2['class'].astype('str'))
df2['label'].unique()
df2.info()
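# A sketch of the function form the question asks for (note the actual file uses a
# lower-case 'class' column; the result is written back to the same file):
def encode_class_column(file_location):
    data = pd.read_csv(file_location)
    data['label'] = LabelEncoder().fit_transform(data['class'].astype('str'))
    data.to_csv(file_location, index=False)
    return data

# encode_class_column('Dataset/698_m5_datasets_v1.0/Marketing.csv')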
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 7414 entries, 0 to 7413
Data columns (total 24 columns):
Unnamed: 0 7414 non-null int64
custAge 5610 non-null float64
class 7414 non-null object
marital 7414 non-null object
schooling 5259 non-null object
default 7414 non-null object
housing 7414 non-null object
loan 7414 non-null object
contact 7414 non-null object
month 7414 non-null object
day_of_week 6703 non-null object
campaign 7414 non-null int64
pdays 7414 non-null int64
previous 7414 non-null int64
poutcome 7414 non-null object
emp.var.rate 7414 non-null float64
cons.price.idx 7414 non-null float64
cons.conf.idx 7414 non-null float64
euribor3m 7414 non-null float64
nr.employed 7414 non-null float64
pmonths 7414 non-null float64
pastEmail 7414 non-null int64
responded 7414 non-null object
label 7414 non-null int32
dtypes: float64(7), int32(1), int64(5), object(11)
memory usage: 1.3+ MB
###Markdown
3. Write a function which accepts a CSV which has two columns: "ActualValues" and "PredictedValues". You need to build a confusion matrix and give values for the following: • Accuracy • Misclassification Rate • True Positive Rate • False Positive Rate • Specificity • Precision • Null Error Rate. Also plot the ROC curve. Provide your understanding of each of them in a sentence or two. Run this function for Results.csv.
###Code
df3 = pd.read_csv('Dataset/698_m5_datasets_v1.0/Results.csv')
df3.info()
df3.head()
df3.shape
from sklearn.metrics import confusion_matrix
confusion_matrix(df3.ActualValues, df3.PredictedValues)
from sklearn.metrics import accuracy_score
#accuracy
accuracy = accuracy_score(df3.ActualValues, df3.PredictedValues)
accuracy
#misclassification rate
1-accuracy
#True Positive Rate
3097/(3097+477+3170+670)
#False Positive rate
670/(3097+477+3170+670)
# Specificity
# (true negatives / all actual negatives) =TN / TN + FP
670/(670+477)
# Precision (TP/(TP+FP))
precision = 3097/(3097+3170)
precision
# Null Error Rate
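# A more robust way to get all of the requested quantities directly from the confusion
# matrix (avoids the hard-coded cell counts above), plus the ROC curve the question asks
# for. This is a sketch assuming the labels are coded as 0/1; with hard 0/1 predictions
# the ROC curve has a single operating point.
from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt

tn, fp, fn, tp = confusion_matrix(df3.ActualValues, df3.PredictedValues).ravel()
total = tn + fp + fn + tp
print("Accuracy:", (tp + tn) / total)
print("Misclassification rate:", (fp + fn) / total)
print("True positive rate (recall):", tp / (tp + fn))
print("False positive rate:", fp / (fp + tn))
print("Specificity:", tn / (tn + fp))
print("Precision:", tp / (tp + fp))
print("Null error rate:", min(df3.ActualValues.mean(), 1 - df3.ActualValues.mean()))

fpr, tpr, _ = roc_curve(df3.ActualValues, df3.PredictedValues)
plt.plot(fpr, tpr, label='ROC curve (AUC = %.3f)' % auc(fpr, tpr))
plt.plot([0, 1], [0, 1], linestyle='--', color='grey')
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.legend()
plt.show()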
###Output
_____no_output_____ |
ssd300_training_gps_regression.ipynb | ###Markdown
SSD300 Training Tutorial

This tutorial explains how to train an SSD300 on the Pascal VOC datasets. The preset parameters reproduce the training of the original SSD300 "07+12" model. Training SSD512 works similarly, so there's no extra tutorial for that. The same goes for training on other datasets. You can find a summary of a full training here to get an impression of what it should look like: [SSD300 "07+12" training summary](https://github.com/pierluigiferrari/ssd_keras/blob/master/training_summaries/ssd300_pascal_07%2B12_training_summary.md)
###Code
from keras.optimizers import Adam, SGD
from keras.callbacks import Callback, ModelCheckpoint, LearningRateScheduler, TerminateOnNaN, CSVLogger, EarlyStopping, TensorBoard
from keras import backend as K
from keras.models import load_model
from math import ceil
import numpy as np
from matplotlib import pyplot as plt
from keras.models import Model
from matplotlib import pyplot as plt
from keras.preprocessing import image
from imageio import imread
from models.keras_ssd300 import ssd_300
from keras_loss_function.keras_ssd_loss_mod import SSDLoss
from keras_loss_function.keras_ssd_loss_proj import SSDLoss_proj
from keras_layers.keras_layer_AnchorBoxes import AnchorBoxes
from keras_layers.keras_layer_DecodeDetections import DecodeDetections
from keras_layers.keras_layer_DecodeDetectionsFast import DecodeDetectionsFast
from keras_layers.keras_layer_L2Normalization import L2Normalization
from ssd_encoder_decoder.ssd_input_encoder_mod import SSDInputEncoder
from ssd_encoder_decoder.ssd_output_decoder import decode_detections, decode_detections_fast
from data_generator.object_detection_2d_data_generator import DataGenerator
from data_generator.object_detection_2d_geometric_ops import Resize_Modified
from data_generator.object_detection_2d_photometric_ops import ConvertTo3Channels_Modified
from data_generator.data_augmentation_chain_original_ssd import SSDDataAugmentation_modified
from data_generator.object_detection_2d_geometric_ops import Resize
from data_generator.object_detection_2d_photometric_ops import ConvertTo3Channels
from data_generator.data_augmentation_chain_original_ssd import SSDDataAugmentation
from data_generator.object_detection_2d_misc_utils import apply_inverse_transforms
from bounding_box_utils.bounding_box_utils import iou, convert_coordinates
from ssd_encoder_decoder.matching_utils import match_bipartite_greedy, match_multi
import random
np.set_printoptions(precision=20)
import tensorflow as tf
np.random.seed(1337)
%matplotlib inline
###Output
Using TensorFlow backend.
###Markdown
0. Preliminary note

All places in the code where you need to make any changes are marked `TODO` and explained accordingly. All code cells that don't contain `TODO` markers just need to be executed.

1. Set the model configuration parameters
###Code
img_height = 300 # Height of the model input images
img_width = 600 # Width of the model input images
img_channels = 3 # Number of color channels of the model input images
mean_color = [123, 117, 104] # The per-channel mean of the images in the dataset. Do not change this value if you're using any of the pre-trained weights.
swap_channels = [2, 1, 0] # The color channel order in the original SSD is BGR, so we'll have the model reverse the color channel order of the input images.
n_classes = 1 # Number of positive classes, e.g. 20 for Pascal VOC, 80 for MS COCO
scales_pascal = [0.1, 0.2, 0.37, 0.54, 0.71, 0.88, 1.05] # The anchor box scaling factors used in the original SSD300 for the Pascal VOC datasets
scales_coco = [0.07, 0.15, 0.33, 0.51, 0.69, 0.87, 1.05] # The anchor box scaling factors used in the original SSD300 for the MS COCO datasets
scales = scales_pascal
aspect_ratios = [[1.0, 2.0, 0.5],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5],
[1.0, 2.0, 0.5]] # The anchor box aspect ratios used in the original SSD300; the order matters
two_boxes_for_ar1 = True # Whether to generate two anchor boxes for aspect ratio 1
steps = [8, 16, 32, 64, 100, 300] # The space between two adjacent anchor box center points for each predictor layer.
offsets = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5] # The offsets of the first anchor box center points from the top and left borders of the image as a fraction of the step size for each predictor layer.
clip_boxes = False # Whether or not to clip the anchor boxes to lie entirely within the image boundaries
variances = [0.1, 0.1, 0.2, 0.2] # The variances by which the encoded target coordinates are divided as in the original implementation
normalize_coords = True
###Output
_____no_output_____
###Markdown
2. Build or load the model

You will want to execute either of the two code cells in the subsequent two sub-sections, not both.
###Code
# 1: Build the Keras model.
K.clear_session() # Clear previous models from memory.
model = ssd_300(image_size=(img_height, img_width, img_channels),
n_classes=n_classes,
mode='training',
l2_regularization=0.0005,
scales=scales,
aspect_ratios_per_layer=aspect_ratios,
two_boxes_for_ar1=two_boxes_for_ar1,
steps=steps,
offsets=offsets,
clip_boxes=clip_boxes,
variances=variances,
normalize_coords=normalize_coords,
subtract_mean=mean_color,
swap_channels=swap_channels)
# 2: Load some weights into the model.
# TODO: Set the path to the weights you want to load.
weights_path = 'weights/VGG_ILSVRC_16_layers_fc_reduced.h5'
model.load_weights(weights_path, by_name=True)
# 3: Instantiate an optimizer and the SSD loss function and compile the model.
# If you want to follow the original Caffe implementation, use the preset SGD
# optimizer, otherwise I'd recommend the commented-out Adam optimizer.
model.summary()
# Trim the ground truth down to the prediction's box count when it has more boxes than the prediction.
def gt_rem(pred, gt):
val = tf.subtract(tf.shape(pred)[1], tf.shape(gt)[1],name="gt_rem_subtract")
gt = tf.slice(gt, [0, 0, 0], [1, tf.shape(pred)[1], 18],name="rem_slice")
return gt
# Pad the ground truth by repeating its leading boxes when it has fewer boxes than the prediction.
def gt_add(pred, gt):
#add to gt
val = tf.subtract(tf.shape(pred)[1], tf.shape(gt)[1],name="gt_add_subtract")
ext = tf.slice(gt, [0, 0, 0], [1, val, 18], name="add_slice")
gt = K.concatenate([ext,gt], axis=1)
return gt
def equalalready(gt, pred): return pred
# Make the ground-truth tensor match the prediction tensor's box count (trim or pad as needed).
def make_equal(pred, gt):
equal_tensor = tf.cond(tf.shape(pred)[1] < tf.shape(gt)[1], lambda: gt_rem(pred, gt), lambda: gt_add(pred, gt), name="make_equal_cond")
return equal_tensor
# For each image in the batch: drop padded ground-truth rows (marked with 99), greedily match
# boxes by IoU between the first set of predictions and its ground truth, gather the matched
# entries from the second set of predictions, and trim/pad the second ground truth so the
# shapes of the returned tensors agree.
def matcher(y_true_1,y_true_2,y_pred_1,y_pred_2, bsz):
pred = 0
gt = 0
for i in range(bsz):
filterer = tf.where(tf.not_equal(y_true_1[i,:,-4],99))
y_true_new = tf.gather_nd(y_true_1[i,:,:],filterer)
y_true_new = tf.expand_dims(y_true_new, 0)
iou_out = tf.py_func(iou, [y_pred_1[i,:,-16:-12],tf.convert_to_tensor(y_true_new[i,:,-16:-12])], tf.float64, name="iou_out")
bipartite_matches = tf.py_func(match_bipartite_greedy, [iou_out], tf.int64, name="bipartite_matches")
out = tf.gather(y_pred_2[i,:,:], [bipartite_matches], axis=0, name="out")
filterer_2 = tf.where(tf.not_equal(y_true_2[i,:,-4],99))
y_true_2_new = tf.gather_nd(y_true_2[i,:,:],filterer_2)
y_true_2_new = tf.expand_dims(y_true_2_new, 0)
box_comparer = tf.reduce_all(tf.equal(tf.shape(out)[1], tf.shape(y_true_2_new)[1]), name="box_comparer")
y_true_2_equal = tf.cond(box_comparer, lambda: equalalready(out, y_true_2_new), lambda: make_equal(out, y_true_2_new), name="y_true_cond")
if i != 0:
pred = K.concatenate([pred,out], axis=-1)
gt = K.concatenate([gt,y_true_2_equal], axis=0)
else:
pred = out
gt = y_true_2_equal
return pred, gt
# ssd_loss3 = SSDLoss_proj(neg_pos_ratio=3, alpha=1.0)
# ssd_loss4 = SSDLoss_proj(neg_pos_ratio=3, alpha=1.0)
def Accuracy(y_true, y_pred):
'''Calculates the mean accuracy rate across all predictions for
multiclass classification problems.
'''
print("y_pred: ",y_pred)
print("y_true: ",y_true)
y_true = y_true[:,:,:18]
y_pred = y_pred[:,:,:18]
return K.mean(K.equal(K.argmax(y_true[:,:,:-4], axis=-1),
K.argmax(y_pred[:,:,:-4], axis=-1)))
def Accuracy_Proj(y_pred, y_true):
#add to gt
y_true_1 = y_true[:,:,:18]
y_pred_1 = y_pred[:,:,:18]
y_true_2 = y_true[:,:,18:]
y_pred_2 = y_pred[:,:,18:]
acc = tf.constant(0)
y_pred, y_true = matcher(y_true_1,y_pred_1,y_true_2,y_pred_2,1)
return K.mean(K.equal(K.argmax(y_true[:,:,:-4], axis=-1),
K.argmax(y_pred[:,:,:-4], axis=-1)))
adam = Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
ssd_loss1 = SSDLoss(neg_pos_ratio=3, alpha=1.0)
ssd_loss2 = SSDLoss(neg_pos_ratio=3, alpha=1.0)
ssd_loss3 = SSDLoss_proj(neg_pos_ratio=3, alpha=1.0)
ssd_loss4 = SSDLoss_proj(neg_pos_ratio=3, alpha=1.0)
losses = {
"predictions_1": ssd_loss1.compute_loss,
"predictions_2": ssd_loss2.compute_loss,
"predictions_1_proj": ssd_loss3.compute_loss,
"predictions_2_proj": ssd_loss4.compute_loss
}
lossWeights = {"predictions_1": 1.0,"predictions_2": 1.0,"predictions_1_proj": 1.0,"predictions_2_proj": 1.0}
# MetricstDict = {"predictions_1": Accuracy,"predictions_2": Accuracy, "predictions_1_proj": Accuracy_Proj,"predictions_2_proj": Accuracy_Proj}
# lossWeights = {"predictions_1": 1.0,"predictions_2": 1.0}
MetricstDict = {"predictions_1": Accuracy,"predictions_2": Accuracy}
model.compile(optimizer=adam, loss=losses, loss_weights=lossWeights, metrics=MetricstDict)
# model.compile(optimizer=adam, loss=losses, loss_weights=lossWeights)
model.summary()
###Output
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_1 (InputLayer) (None, 300, 600, 3) 0
____________________________________________________________________________________________________
input_2 (InputLayer) (None, 300, 600, 3) 0
____________________________________________________________________________________________________
identity_layer__1 (Lambda) (None, 300, 600, 3) 0 input_1[0][0]
____________________________________________________________________________________________________
identity_layer__2 (Lambda) (None, 300, 600, 3) 0 input_2[0][0]
____________________________________________________________________________________________________
input_mean_normalization__1 (Lam (None, 300, 600, 3) 0 identity_layer__1[0][0]
____________________________________________________________________________________________________
input_mean_normalization__2 (Lam (None, 300, 600, 3) 0 identity_layer__2[0][0]
____________________________________________________________________________________________________
input_channel_swap__1 (Lambda) (None, 300, 600, 3) 0 input_mean_normalization__1[0][0]
____________________________________________________________________________________________________
input_channel_swap__2 (Lambda) (None, 300, 600, 3) 0 input_mean_normalization__2[0][0]
____________________________________________________________________________________________________
conv1_1__1 (Conv2D) (None, 300, 600, 64) 1792 input_channel_swap__1[0][0]
____________________________________________________________________________________________________
conv1_1__2 (Conv2D) (None, 300, 600, 64) 1792 input_channel_swap__2[0][0]
____________________________________________________________________________________________________
conv1_2__1 (Conv2D) (None, 300, 600, 64) 36928 conv1_1__1[0][0]
____________________________________________________________________________________________________
conv1_2__2 (Conv2D) (None, 300, 600, 64) 36928 conv1_1__2[0][0]
____________________________________________________________________________________________________
pool1__1 (MaxPooling2D) (None, 150, 300, 64) 0 conv1_2__1[0][0]
____________________________________________________________________________________________________
pool1__2 (MaxPooling2D) (None, 150, 300, 64) 0 conv1_2__2[0][0]
____________________________________________________________________________________________________
conv2_1__1 (Conv2D) (None, 150, 300, 128) 73856 pool1__1[0][0]
____________________________________________________________________________________________________
conv2_1__2 (Conv2D) (None, 150, 300, 128) 73856 pool1__2[0][0]
____________________________________________________________________________________________________
conv2_2__1 (Conv2D) (None, 150, 300, 128) 147584 conv2_1__1[0][0]
____________________________________________________________________________________________________
conv2_2__2 (Conv2D) (None, 150, 300, 128) 147584 conv2_1__2[0][0]
____________________________________________________________________________________________________
pool2__1 (MaxPooling2D) (None, 75, 150, 128) 0 conv2_2__1[0][0]
____________________________________________________________________________________________________
pool2__2 (MaxPooling2D) (None, 75, 150, 128) 0 conv2_2__2[0][0]
____________________________________________________________________________________________________
conv3_1__1 (Conv2D) (None, 75, 150, 256) 295168 pool2__1[0][0]
____________________________________________________________________________________________________
conv3_1__2 (Conv2D) (None, 75, 150, 256) 295168 pool2__2[0][0]
____________________________________________________________________________________________________
conv3_2__1 (Conv2D) (None, 75, 150, 256) 590080 conv3_1__1[0][0]
____________________________________________________________________________________________________
conv3_2__2 (Conv2D) (None, 75, 150, 256) 590080 conv3_1__2[0][0]
____________________________________________________________________________________________________
conv3_3__1 (Conv2D) (None, 75, 150, 256) 590080 conv3_2__1[0][0]
____________________________________________________________________________________________________
conv3_3__2 (Conv2D) (None, 75, 150, 256) 590080 conv3_2__2[0][0]
____________________________________________________________________________________________________
pool3__1 (MaxPooling2D) (None, 38, 75, 256) 0 conv3_3__1[0][0]
____________________________________________________________________________________________________
pool3__2 (MaxPooling2D) (None, 38, 75, 256) 0 conv3_3__2[0][0]
____________________________________________________________________________________________________
conv4_1__1 (Conv2D) (None, 38, 75, 512) 1180160 pool3__1[0][0]
____________________________________________________________________________________________________
conv4_1__2 (Conv2D) (None, 38, 75, 512) 1180160 pool3__2[0][0]
____________________________________________________________________________________________________
conv4_2__1 (Conv2D) (None, 38, 75, 512) 2359808 conv4_1__1[0][0]
____________________________________________________________________________________________________
conv4_2__2 (Conv2D) (None, 38, 75, 512) 2359808 conv4_1__2[0][0]
____________________________________________________________________________________________________
conv4_3__1 (Conv2D) (None, 38, 75, 512) 2359808 conv4_2__1[0][0]
____________________________________________________________________________________________________
conv4_3__2 (Conv2D) (None, 38, 75, 512) 2359808 conv4_2__2[0][0]
____________________________________________________________________________________________________
pool4__1 (MaxPooling2D) (None, 19, 38, 512) 0 conv4_3__1[0][0]
____________________________________________________________________________________________________
pool4__2 (MaxPooling2D) (None, 19, 38, 512) 0 conv4_3__2[0][0]
____________________________________________________________________________________________________
conv5_1__1 (Conv2D) (None, 19, 38, 512) 2359808 pool4__1[0][0]
____________________________________________________________________________________________________
conv5_1__2 (Conv2D) (None, 19, 38, 512) 2359808 pool4__2[0][0]
____________________________________________________________________________________________________
conv5_2__1 (Conv2D) (None, 19, 38, 512) 2359808 conv5_1__1[0][0]
____________________________________________________________________________________________________
conv5_2__2 (Conv2D) (None, 19, 38, 512) 2359808 conv5_1__2[0][0]
____________________________________________________________________________________________________
conv5_3__1 (Conv2D) (None, 19, 38, 512) 2359808 conv5_2__1[0][0]
____________________________________________________________________________________________________
conv5_3__2 (Conv2D) (None, 19, 38, 512) 2359808 conv5_2__2[0][0]
____________________________________________________________________________________________________
pool5__1 (MaxPooling2D) (None, 19, 38, 512) 0 conv5_3__1[0][0]
____________________________________________________________________________________________________
pool5__2 (MaxPooling2D) (None, 19, 38, 512) 0 conv5_3__2[0][0]
____________________________________________________________________________________________________
fc6__1 (Conv2D) (None, 19, 38, 1024) 4719616 pool5__1[0][0]
____________________________________________________________________________________________________
fc6__2 (Conv2D) (None, 19, 38, 1024) 4719616 pool5__2[0][0]
____________________________________________________________________________________________________
fc7__1 (Conv2D) (None, 19, 38, 1024) 1049600 fc6__1[0][0]
____________________________________________________________________________________________________
fc7__2 (Conv2D) (None, 19, 38, 1024) 1049600 fc6__2[0][0]
____________________________________________________________________________________________________
conv6_1__1 (Conv2D) (None, 19, 38, 256) 262400 fc7__1[0][0]
____________________________________________________________________________________________________
conv6_1__2 (Conv2D) (None, 19, 38, 256) 262400 fc7__2[0][0]
____________________________________________________________________________________________________
conv6adding__1 (ZeroPadding2D) (None, 21, 40, 256) 0 conv6_1__1[0][0]
____________________________________________________________________________________________________
conv6adding__2 (ZeroPadding2D) (None, 21, 40, 256) 0 conv6_1__2[0][0]
____________________________________________________________________________________________________
conv6_2__1 (Conv2D) (None, 10, 19, 512) 1180160 conv6adding__1[0][0]
____________________________________________________________________________________________________
conv6_2__2 (Conv2D) (None, 10, 19, 512) 1180160 conv6adding__2[0][0]
____________________________________________________________________________________________________
conv7_1__1 (Conv2D) (None, 10, 19, 128) 65664 conv6_2__1[0][0]
____________________________________________________________________________________________________
conv7_1__2 (Conv2D) (None, 10, 19, 128) 65664 conv6_2__2[0][0]
____________________________________________________________________________________________________
conv7adding__1 (ZeroPadding2D) (None, 12, 21, 128) 0 conv7_1__1[0][0]
____________________________________________________________________________________________________
conv7adding__2 (ZeroPadding2D) (None, 12, 21, 128) 0 conv7_1__2[0][0]
____________________________________________________________________________________________________
conv7_2__1 (Conv2D) (None, 5, 10, 256) 295168 conv7adding__1[0][0]
____________________________________________________________________________________________________
conv7_2__2 (Conv2D) (None, 5, 10, 256) 295168 conv7adding__2[0][0]
____________________________________________________________________________________________________
conv8_1__1 (Conv2D) (None, 5, 10, 128) 32896 conv7_2__1[0][0]
____________________________________________________________________________________________________
conv8_1__2 (Conv2D) (None, 5, 10, 128) 32896 conv7_2__2[0][0]
____________________________________________________________________________________________________
conv8_2__1 (Conv2D) (None, 3, 8, 256) 295168 conv8_1__1[0][0]
____________________________________________________________________________________________________
conv8_2__2 (Conv2D) (None, 3, 8, 256) 295168 conv8_1__2[0][0]
____________________________________________________________________________________________________
conv9_1__1 (Conv2D) (None, 3, 8, 128) 32896 conv8_2__1[0][0]
____________________________________________________________________________________________________
conv9_1__2 (Conv2D) (None, 3, 8, 128) 32896 conv8_2__2[0][0]
____________________________________________________________________________________________________
conv4_3_norm__1 (L2Normalization (None, 38, 75, 512) 512 conv4_3__1[0][0]
____________________________________________________________________________________________________
conv9_2__1 (Conv2D) (None, 1, 6, 256) 295168 conv9_1__1[0][0]
____________________________________________________________________________________________________
conv4_3_norm__2 (L2Normalization (None, 38, 75, 512) 512 conv4_3__2[0][0]
____________________________________________________________________________________________________
conv9_2__2 (Conv2D) (None, 1, 6, 256) 295168 conv9_1__2[0][0]
____________________________________________________________________________________________________
conv4_3_norm_mbox_loc__1 (Conv2D (None, 38, 75, 16) 73744 conv4_3_norm__1[0][0]
____________________________________________________________________________________________________
fc7_mbox_loc__1 (Conv2D) (None, 19, 38, 24) 221208 fc7__1[0][0]
____________________________________________________________________________________________________
conv6_2_mbox_loc__1 (Conv2D) (None, 10, 19, 24) 110616 conv6_2__1[0][0]
____________________________________________________________________________________________________
conv7_2_mbox_loc__1 (Conv2D) (None, 5, 10, 24) 55320 conv7_2__1[0][0]
____________________________________________________________________________________________________
conv8_2_mbox_loc__1 (Conv2D) (None, 3, 8, 16) 36880 conv8_2__1[0][0]
____________________________________________________________________________________________________
conv9_2_mbox_loc__1 (Conv2D) (None, 1, 6, 16) 36880 conv9_2__1[0][0]
____________________________________________________________________________________________________
conv4_3_norm_mbox_loc__2 (Conv2D (None, 38, 75, 16) 73744 conv4_3_norm__2[0][0]
____________________________________________________________________________________________________
fc7_mbox_loc__2 (Conv2D) (None, 19, 38, 24) 221208 fc7__2[0][0]
____________________________________________________________________________________________________
conv6_2_mbox_loc__2 (Conv2D) (None, 10, 19, 24) 110616 conv6_2__2[0][0]
____________________________________________________________________________________________________
conv7_2_mbox_loc__2 (Conv2D) (None, 5, 10, 24) 55320 conv7_2__2[0][0]
____________________________________________________________________________________________________
conv8_2_mbox_loc__2 (Conv2D) (None, 3, 8, 16) 36880 conv8_2__2[0][0]
____________________________________________________________________________________________________
conv9_2_mbox_loc__2 (Conv2D) (None, 1, 6, 16) 36880 conv9_2__2[0][0]
____________________________________________________________________________________________________
conv4_3_norm_mbox_conf__1 (Conv2 (None, 38, 75, 8) 36872 conv4_3_norm__1[0][0]
____________________________________________________________________________________________________
fc7_mbox_conf__1 (Conv2D) (None, 19, 38, 12) 110604 fc7__1[0][0]
____________________________________________________________________________________________________
conv6_2_mbox_conf__1 (Conv2D) (None, 10, 19, 12) 55308 conv6_2__1[0][0]
____________________________________________________________________________________________________
conv7_2_mbox_conf__1 (Conv2D) (None, 5, 10, 12) 27660 conv7_2__1[0][0]
____________________________________________________________________________________________________
conv8_2_mbox_conf__1 (Conv2D) (None, 3, 8, 8) 18440 conv8_2__1[0][0]
____________________________________________________________________________________________________
conv9_2_mbox_conf__1 (Conv2D) (None, 1, 6, 8) 18440 conv9_2__1[0][0]
____________________________________________________________________________________________________
conv4_3_norm_mbox_priorbox__1 (A (None, 38, 75, 4, 8) 0 conv4_3_norm_mbox_loc__1[0][0]
____________________________________________________________________________________________________
fc7_mbox_priorbox__1 (AnchorBoxe (None, 19, 38, 6, 8) 0 fc7_mbox_loc__1[0][0]
____________________________________________________________________________________________________
conv6_2_mbox_priorbox__1 (Anchor (None, 10, 19, 6, 8) 0 conv6_2_mbox_loc__1[0][0]
____________________________________________________________________________________________________
conv7_2_mbox_priorbox__1 (Anchor (None, 5, 10, 6, 8) 0 conv7_2_mbox_loc__1[0][0]
____________________________________________________________________________________________________
conv8_2_mbox_priorbox__1 (Anchor (None, 3, 8, 4, 8) 0 conv8_2_mbox_loc__1[0][0]
____________________________________________________________________________________________________
conv9_2_mbox_priorbox__1 (Anchor (None, 1, 6, 4, 8) 0 conv9_2_mbox_loc__1[0][0]
____________________________________________________________________________________________________
conv4_3_norm_mbox_conf__2 (Conv2 (None, 38, 75, 8) 36872 conv4_3_norm__2[0][0]
____________________________________________________________________________________________________
fc7_mbox_conf__2 (Conv2D) (None, 19, 38, 12) 110604 fc7__2[0][0]
____________________________________________________________________________________________________
conv6_2_mbox_conf__2 (Conv2D) (None, 10, 19, 12) 55308 conv6_2__2[0][0]
____________________________________________________________________________________________________
conv7_2_mbox_conf__2 (Conv2D) (None, 5, 10, 12) 27660 conv7_2__2[0][0]
____________________________________________________________________________________________________
conv8_2_mbox_conf__2 (Conv2D) (None, 3, 8, 8) 18440 conv8_2__2[0][0]
____________________________________________________________________________________________________
conv9_2_mbox_conf__2 (Conv2D) (None, 1, 6, 8) 18440 conv9_2__2[0][0]
____________________________________________________________________________________________________
conv4_3_norm_mbox_priorbox__2 (A (None, 38, 75, 4, 8) 0 conv4_3_norm_mbox_loc__2[0][0]
____________________________________________________________________________________________________
fc7_mbox_priorbox__2 (AnchorBoxe (None, 19, 38, 6, 8) 0 fc7_mbox_loc__2[0][0]
____________________________________________________________________________________________________
conv6_2_mbox_priorbox__2 (Anchor (None, 10, 19, 6, 8) 0 conv6_2_mbox_loc__2[0][0]
____________________________________________________________________________________________________
conv7_2_mbox_priorbox__2 (Anchor (None, 5, 10, 6, 8) 0 conv7_2_mbox_loc__2[0][0]
____________________________________________________________________________________________________
conv8_2_mbox_priorbox__2 (Anchor (None, 3, 8, 4, 8) 0 conv8_2_mbox_loc__2[0][0]
____________________________________________________________________________________________________
conv9_2_mbox_priorbox__2 (Anchor (None, 1, 6, 4, 8) 0 conv9_2_mbox_loc__2[0][0]
____________________________________________________________________________________________________
conv4_3_norm_mbox_conf_reshape__ (None, 11400, 2) 0 conv4_3_norm_mbox_conf__1[0][0]
____________________________________________________________________________________________________
fc7_mbox_conf_reshape__1 (Reshap (None, 4332, 2) 0 fc7_mbox_conf__1[0][0]
____________________________________________________________________________________________________
conv6_2_mbox_conf_reshape__1 (Re (None, 1140, 2) 0 conv6_2_mbox_conf__1[0][0]
____________________________________________________________________________________________________
conv7_2_mbox_conf_reshape__1 (Re (None, 300, 2) 0 conv7_2_mbox_conf__1[0][0]
____________________________________________________________________________________________________
conv8_2_mbox_conf_reshape__1 (Re (None, 96, 2) 0 conv8_2_mbox_conf__1[0][0]
____________________________________________________________________________________________________
conv9_2_mbox_conf_reshape__1 (Re (None, 24, 2) 0 conv9_2_mbox_conf__1[0][0]
____________________________________________________________________________________________________
conv4_3_norm_mbox_loc_reshape__1 (None, 11400, 4) 0 conv4_3_norm_mbox_loc__1[0][0]
____________________________________________________________________________________________________
fc7_mbox_loc_reshape__1 (Reshape (None, 4332, 4) 0 fc7_mbox_loc__1[0][0]
____________________________________________________________________________________________________
conv6_2_mbox_loc_reshape__1 (Res (None, 1140, 4) 0 conv6_2_mbox_loc__1[0][0]
____________________________________________________________________________________________________
conv7_2_mbox_loc_reshape__1 (Res (None, 300, 4) 0 conv7_2_mbox_loc__1[0][0]
____________________________________________________________________________________________________
conv8_2_mbox_loc_reshape__1 (Res (None, 96, 4) 0 conv8_2_mbox_loc__1[0][0]
____________________________________________________________________________________________________
conv9_2_mbox_loc_reshape__1 (Res (None, 24, 4) 0 conv9_2_mbox_loc__1[0][0]
____________________________________________________________________________________________________
conv4_3_norm_mbox_priorbox_resha (None, 11400, 8) 0 conv4_3_norm_mbox_priorbox__1[0][
____________________________________________________________________________________________________
fc7_mbox_priorbox_reshape__1 (Re (None, 4332, 8) 0 fc7_mbox_priorbox__1[0][0]
____________________________________________________________________________________________________
conv6_2_mbox_priorbox_reshape__1 (None, 1140, 8) 0 conv6_2_mbox_priorbox__1[0][0]
____________________________________________________________________________________________________
conv7_2_mbox_priorbox_reshape__1 (None, 300, 8) 0 conv7_2_mbox_priorbox__1[0][0]
____________________________________________________________________________________________________
conv8_2_mbox_priorbox_reshape__1 (None, 96, 8) 0 conv8_2_mbox_priorbox__1[0][0]
____________________________________________________________________________________________________
conv9_2_mbox_priorbox_reshape__1 (None, 24, 8) 0 conv9_2_mbox_priorbox__1[0][0]
____________________________________________________________________________________________________
conv4_3_norm_mbox_conf_reshape__ (None, 11400, 2) 0 conv4_3_norm_mbox_conf__2[0][0]
____________________________________________________________________________________________________
fc7_mbox_conf_reshape__2 (Reshap (None, 4332, 2) 0 fc7_mbox_conf__2[0][0]
____________________________________________________________________________________________________
conv6_2_mbox_conf_reshape__2 (Re (None, 1140, 2) 0 conv6_2_mbox_conf__2[0][0]
____________________________________________________________________________________________________
conv7_2_mbox_conf_reshape__2 (Re (None, 300, 2) 0 conv7_2_mbox_conf__2[0][0]
____________________________________________________________________________________________________
conv8_2_mbox_conf_reshape__2 (Re (None, 96, 2) 0 conv8_2_mbox_conf__2[0][0]
____________________________________________________________________________________________________
conv9_2_mbox_conf_reshape__2 (Re (None, 24, 2) 0 conv9_2_mbox_conf__2[0][0]
____________________________________________________________________________________________________
conv4_3_norm_mbox_loc_reshape__2 (None, 11400, 4) 0 conv4_3_norm_mbox_loc__2[0][0]
____________________________________________________________________________________________________
fc7_mbox_loc_reshape__2 (Reshape (None, 4332, 4) 0 fc7_mbox_loc__2[0][0]
____________________________________________________________________________________________________
conv6_2_mbox_loc_reshape__2 (Res (None, 1140, 4) 0 conv6_2_mbox_loc__2[0][0]
____________________________________________________________________________________________________
conv7_2_mbox_loc_reshape__2 (Res (None, 300, 4) 0 conv7_2_mbox_loc__2[0][0]
____________________________________________________________________________________________________
conv8_2_mbox_loc_reshape__2 (Res (None, 96, 4) 0 conv8_2_mbox_loc__2[0][0]
____________________________________________________________________________________________________
conv9_2_mbox_loc_reshape__2 (Res (None, 24, 4) 0 conv9_2_mbox_loc__2[0][0]
____________________________________________________________________________________________________
conv4_3_norm_mbox_priorbox_resha (None, 11400, 8) 0 conv4_3_norm_mbox_priorbox__2[0][
____________________________________________________________________________________________________
fc7_mbox_priorbox_reshape__2 (Re (None, 4332, 8) 0 fc7_mbox_priorbox__2[0][0]
____________________________________________________________________________________________________
conv6_2_mbox_priorbox_reshape__2 (None, 1140, 8) 0 conv6_2_mbox_priorbox__2[0][0]
____________________________________________________________________________________________________
conv7_2_mbox_priorbox_reshape__2 (None, 300, 8) 0 conv7_2_mbox_priorbox__2[0][0]
____________________________________________________________________________________________________
conv8_2_mbox_priorbox_reshape__2 (None, 96, 8) 0 conv8_2_mbox_priorbox__2[0][0]
____________________________________________________________________________________________________
conv9_2_mbox_priorbox_reshape__2 (None, 24, 8) 0 conv9_2_mbox_priorbox__2[0][0]
____________________________________________________________________________________________________
mbox_conf__1 (Concatenate) (None, 17292, 2) 0 conv4_3_norm_mbox_conf_reshape__1
fc7_mbox_conf_reshape__1[0][0]
conv6_2_mbox_conf_reshape__1[0][0
conv7_2_mbox_conf_reshape__1[0][0
conv8_2_mbox_conf_reshape__1[0][0
conv9_2_mbox_conf_reshape__1[0][0
____________________________________________________________________________________________________
mbox_loc__1 (Concatenate) (None, 17292, 4) 0 conv4_3_norm_mbox_loc_reshape__1[
fc7_mbox_loc_reshape__1[0][0]
conv6_2_mbox_loc_reshape__1[0][0]
conv7_2_mbox_loc_reshape__1[0][0]
conv8_2_mbox_loc_reshape__1[0][0]
conv9_2_mbox_loc_reshape__1[0][0]
____________________________________________________________________________________________________
mbox_priorbox__1 (Concatenate) (None, 17292, 8) 0 conv4_3_norm_mbox_priorbox_reshap
fc7_mbox_priorbox_reshape__1[0][0
conv6_2_mbox_priorbox_reshape__1[
conv7_2_mbox_priorbox_reshape__1[
conv8_2_mbox_priorbox_reshape__1[
conv9_2_mbox_priorbox_reshape__1[
____________________________________________________________________________________________________
mbox_conf__2 (Concatenate) (None, 17292, 2) 0 conv4_3_norm_mbox_conf_reshape__2
fc7_mbox_conf_reshape__2[0][0]
conv6_2_mbox_conf_reshape__2[0][0
conv7_2_mbox_conf_reshape__2[0][0
conv8_2_mbox_conf_reshape__2[0][0
conv9_2_mbox_conf_reshape__2[0][0
____________________________________________________________________________________________________
mbox_loc__2 (Concatenate) (None, 17292, 4) 0 conv4_3_norm_mbox_loc_reshape__2[
fc7_mbox_loc_reshape__2[0][0]
conv6_2_mbox_loc_reshape__2[0][0]
conv7_2_mbox_loc_reshape__2[0][0]
conv8_2_mbox_loc_reshape__2[0][0]
conv9_2_mbox_loc_reshape__2[0][0]
____________________________________________________________________________________________________
mbox_priorbox__2 (Concatenate) (None, 17292, 8) 0 conv4_3_norm_mbox_priorbox_reshap
fc7_mbox_priorbox_reshape__2[0][0
conv6_2_mbox_priorbox_reshape__2[
conv7_2_mbox_priorbox_reshape__2[
conv8_2_mbox_priorbox_reshape__2[
conv9_2_mbox_priorbox_reshape__2[
____________________________________________________________________________________________________
input_3 (InputLayer) (None, 17292, 3) 0
____________________________________________________________________________________________________
input_4 (InputLayer) (None, 17292, 3) 0
____________________________________________________________________________________________________
mbox_conf_softmax__1 (Activation (None, 17292, 2) 0 mbox_conf__1[0][0]
____________________________________________________________________________________________________
lambda_1 (Lambda) (None, 17292, 4) 0 mbox_loc__1[0][0]
____________________________________________________________________________________________________
mbox_conf_softmax__2 (Activation (None, 17292, 2) 0 mbox_conf__2[0][0]
____________________________________________________________________________________________________
lambda_2 (Lambda) (None, 17292, 4) 0 mbox_loc__2[0][0]
____________________________________________________________________________________________________
predictions_tot_1 (Concatenate) (None, 17292, 20) 0 mbox_conf__1[0][0]
mbox_loc__1[0][0]
mbox_priorbox__1[0][0]
input_3[0][0]
input_4[0][0]
____________________________________________________________________________________________________
predictions_tot_2 (Concatenate) (None, 17292, 20) 0 mbox_conf__2[0][0]
mbox_loc__2[0][0]
mbox_priorbox__2[0][0]
input_4[0][0]
input_3[0][0]
____________________________________________________________________________________________________
predictions_1 (Concatenate) (None, 17292, 18) 0 mbox_conf_softmax__1[0][0]
mbox_loc__1[0][0]
mbox_priorbox__1[0][0]
lambda_1[0][0]
____________________________________________________________________________________________________
predictions_2 (Concatenate) (None, 17292, 18) 0 mbox_conf_softmax__2[0][0]
mbox_loc__2[0][0]
mbox_priorbox__2[0][0]
lambda_2[0][0]
____________________________________________________________________________________________________
predictions__1_mbox_proj (Lambda (None, 17292, 4) 0 predictions_tot_1[0][0]
____________________________________________________________________________________________________
predictions__2_mbox_proj (Lambda (None, 17292, 4) 0 predictions_tot_2[0][0]
____________________________________________________________________________________________________
predictions_1_proj (Concatenate) (None, 17292, 36) 0 predictions_1[0][0]
mbox_conf_softmax__1[0][0]
predictions__1_mbox_proj[0][0]
mbox_priorbox__1[0][0]
lambda_1[0][0]
____________________________________________________________________________________________________
predictions_2_proj (Concatenate) (None, 17292, 36) 0 predictions_2[0][0]
mbox_conf_softmax__2[0][0]
predictions__2_mbox_proj[0][0]
mbox_priorbox__2[0][0]
lambda_2[0][0]
====================================================================================================
Total params: 47,491,816
Trainable params: 47,491,816
Non-trainable params: 0
____________________________________________________________________________________________________
###Markdown
2.2 Load a previously created model
###Code
# train_dataset = DataGenerator(load_images_into_memory=False, hdf5_dataset_path='dataset_pascal_voc_07+12_trainval.h5')
# val_dataset = DataGenerator(load_images_into_memory=False, hdf5_dataset_path='dataset_pascal_voc_07_test.h5')
train_dataset = DataGenerator(load_images_into_memory=False, hdf5_dataset_path=None)
val_dataset = DataGenerator(load_images_into_memory=False, hdf5_dataset_path=None)
val_dataset_1 = DataGenerator(load_images_into_memory=False, hdf5_dataset_path=None)
VOC_2007_images_dir = '../datasets/Images/'
# The directories that contain the annotations.
VOC_2007_annotations_dir = '../datasets/VOC/Pasadena/Annotations_Multi/'
VOC_2007_trainval_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/reid/train.txt'
VOC_2007_val_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/reid/val.txt'
VOC_2007_test_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/reid/test.txt'
# The paths to the image sets.
# VOC_2007_trainval_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/siamese/trainval_sia.txt'
# VOC_2007_val_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/siamese/val_sia.txt'
# VOC_2007_test_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/siamese/test_sia.txt'
# VOC_2007_trainval_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/siamese/trainval_sia_same.txt'
# VOC_2007_val_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/siamese/val_sia_same.txt'
# VOC_2007_test_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/siamese/test_sia_same.txt'
# VOC_2007_trainval_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/siamese/trainval_sia_sub.txt'
# VOC_2007_val_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/siamese/val_sia_sub.txt'
# VOC_2007_test_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/siamese/test_sia_sub.txt'
# VOC_2007_trainval_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/siamese/trainval_one.txt'
# VOC_2007_val_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/siamese/val_one.txt'
# VOC_2007_test_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/siamese/test_one.txt'
# The XML parser needs to know what object class names to look for and in which order to map them to integers.
classes = ['background',
'tree']
train_dataset.parse_xml(images_dirs=[VOC_2007_images_dir],
image_set_filenames=[VOC_2007_trainval_image_set_filename],
annotations_dirs=[VOC_2007_annotations_dir],
classes=classes,
include_classes='all',
exclude_truncated=False,
exclude_difficult=False,
ret=False)
val_dataset.parse_xml(images_dirs=[VOC_2007_images_dir],
image_set_filenames=[VOC_2007_val_image_set_filename],
annotations_dirs=[VOC_2007_annotations_dir],
classes=classes,
include_classes='all',
exclude_truncated=False,
exclude_difficult=True,
ret=False)
batch_size = 4
ssd_data_augmentation = SSDDataAugmentation_modified(img_height=img_height,
img_width=img_width,
background=mean_color)
# For the validation generator:
convert_to_3_channels = ConvertTo3Channels_Modified()
resize = Resize_Modified(height=img_height, width=img_width)
# 5: Instantiate an encoder that can encode ground truth labels into the format needed by the SSD loss function.
# The encoder constructor needs the spatial dimensions of the model's predictor layers to create the anchor boxes.
predictor_sizes = [model.get_layer('conv4_3_norm_mbox_conf__1').output_shape[1:3],
model.get_layer('fc7_mbox_conf__1').output_shape[1:3],
model.get_layer('conv6_2_mbox_conf__1').output_shape[1:3],
model.get_layer('conv7_2_mbox_conf__1').output_shape[1:3],
model.get_layer('conv8_2_mbox_conf__1').output_shape[1:3],
model.get_layer('conv9_2_mbox_conf__1').output_shape[1:3]]
ssd_input_encoder = SSDInputEncoder(img_height=img_height,
img_width=img_width,
n_classes=n_classes,
predictor_sizes=predictor_sizes,
scales=scales,
aspect_ratios_per_layer=aspect_ratios,
two_boxes_for_ar1=two_boxes_for_ar1,
steps=steps,
offsets=offsets,
clip_boxes=clip_boxes,
variances=variances,
matching_type='multi',
pos_iou_threshold=0.5,
neg_iou_limit=0.5,
normalize_coords=normalize_coords)
# 6: Create the generator handles that will be passed to Keras' `fit_generator()` function.
train_generator = train_dataset.generate(batch_size=batch_size,
shuffle=False,
transformations=[ssd_data_augmentation],
label_encoder=ssd_input_encoder,
returns={'processed_images',
'encoded_labels'},
keep_images_without_gt=False)
val_generator = val_dataset.generate(batch_size=batch_size,
shuffle=False,
transformations=[convert_to_3_channels,
resize],
label_encoder=ssd_input_encoder,
returns={'processed_images',
'encoded_labels'},
keep_images_without_gt=False)
# Get the number of samples in the training and validations datasets.
train_dataset_size = train_dataset.get_dataset_size()
val_dataset_size = val_dataset.get_dataset_size()
print("Number of images in the training dataset:\t{:>6}".format(train_dataset_size))
print("Number of images in the validation dataset:\t{:>6}".format(val_dataset_size))
###Output
Number of images in the training dataset: 17325
Number of images in the validation dataset: 4331
###Markdown
4. Set the remaining training parameters

We've already chosen an optimizer and set the batch size above, now let's set the remaining training parameters. I'll set one epoch to consist of 1,000 training steps. The next code cell defines a learning rate schedule that replicates the learning rate schedule of the original Caffe implementation for the training of the SSD300 Pascal VOC "07+12" model. That model was trained for 120,000 steps with a learning rate of 0.001 for the first 80,000 steps, 0.0001 for the next 20,000 steps, and 0.00001 for the last 20,000 steps. If you're training on a different dataset, define the learning rate schedule however you see fit.

I'll set only a few essential Keras callbacks below, feel free to add more callbacks if you want TensorBoard summaries or whatever. We obviously need the learning rate scheduler and we want to save the best models during the training. It also makes sense to continuously stream our training history to a CSV log file after every epoch, because if we didn't do that, in case the training terminates with an exception at some point or if the kernel of this Jupyter notebook dies for some reason or anything like that happens, we would lose the entire history for the trained epochs. Finally, we'll also add a callback that makes sure that the training terminates if the loss becomes `NaN`. Depending on the optimizer you use, it can happen that the loss becomes `NaN` during the first iterations of the training. In later iterations it's less of a risk. For example, I've never seen a `NaN` loss when I trained SSD using an Adam optimizer, but I've seen a `NaN` loss a couple of times during the very first couple of hundred training steps of training a new model when I used an SGD optimizer.
###Code
# Define a learning rate schedule.
def lr_schedule(epoch):
if epoch < 80:
return 0.001
elif epoch < 100:
return 0.0001
else:
return 0.00001
class prediction_history(Callback):
def __init__(self):
print("Predictor")
def on_epoch_end(self, epoch, logs={}):
ssd_loss1 = SSDLoss(neg_pos_ratio=3, alpha=1.0)
predder = np.load('outputs/predder.npy')
bX = predder[0][0]
bZ = predder[0][1]
gX = predder[0][2]
gZ = predder[0][3]
intermediate_layer_model = Model(inputs=model.input,
outputs=model.get_layer("predictions_1").output)
intermediate_layer_model_1 = Model(inputs=model.input,
outputs=model.get_layer("predictions_1_proj").output)
intermediate_layer_model_2 = Model(inputs=model.input,
outputs=model.get_layer("predictions_2").output)
intermediate_layer_model_3 = Model(inputs=model.input,
outputs=model.get_layer("predictions_2_proj").output)
intermediate_output = intermediate_layer_model.predict([bX,bZ,gX,gZ])
intermediate_output_1 = intermediate_layer_model_1.predict([bX,bZ,gX,gZ])
intermediate_output_2 = intermediate_layer_model_2.predict([bX,bZ,gX,gZ])
intermediate_output_3 = intermediate_layer_model_3.predict([bX,bZ,gX,gZ])
np.save('outputs/predictions_1_'+str(epoch)+'.npy',intermediate_output)
np.save('outputs/predictions_1_proj_'+str(epoch)+'.npy',intermediate_output_1)
np.save('outputs/predictions_2_'+str(epoch)+'.npy',intermediate_output_2)
np.save('outputs/predictions_2_proj_'+str(epoch)+'.npy',intermediate_output_3)
# Define model callbacks.
# TODO: Set the filepath under which you want to save the model.
model_checkpoint = ModelCheckpoint(filepath='checkpoints/double_ssd300_pascal_07+12_epoch-{epoch:02d}_loss-{loss:.4f}_val_loss-{val_loss:.4f}.h5',
monitor='val_loss',
verbose=1,
save_best_only=True,
save_weights_only=False,
mode='auto',
period=1)
#model_checkpoint.best =
tbCallBack = TensorBoard(log_dir='./Graph', histogram_freq=0, write_graph=True, write_images=True)
csv_logger = CSVLogger(filename='ssd300_pascal_07+12_training_log.csv',
separator=',',
append=True)
learning_rate_scheduler = LearningRateScheduler(schedule=lr_schedule)
early_stopping = EarlyStopping(monitor='val_loss',
min_delta=0,
patience=1,
verbose=0, mode='auto')
terminate_on_nan = TerminateOnNaN()
# printer_callback = prediction_history()
# custom_los = custom_loss()
callbacks = [
model_checkpoint,
# csv_logger,
# custom_los,
learning_rate_scheduler,
early_stopping,
terminate_on_nan,
# printer_callback,
tbCallBack
]
###Output
_____no_output_____
###Markdown
5. Train
###Code
# If you're resuming a previous training, set `initial_epoch` and `final_epoch` accordingly.
initial_epoch = 0
final_epoch = 500
steps_per_epoch = 1000
# history = model.fit_generator(generator=train_generator,
# steps_per_epoch=ceil(train_dataset_size/batch_size),
# epochs=final_epoch,
# callbacks=callbacks,
# verbose=1,
# validation_data=val_generator,
# validation_steps=ceil(val_dataset_size/batch_size),
# initial_epoch=initial_epoch)
history = model.fit_generator(generator=train_generator,
steps_per_epoch=ceil(train_dataset_size/batch_size),
epochs=final_epoch,
callbacks=callbacks,
verbose=1,
validation_data=val_generator,
validation_steps=ceil(val_dataset_size/batch_size),
initial_epoch=initial_epoch)
###Output
Epoch 1/500
4331/4332 [============================>.] - ETA: 2s - loss: 38.6047 - predictions_1_loss: 5.5310 - predictions_2_loss: 5.2037 - predictions_1_proj_loss: 10.9100 - predictions_2_proj_loss: 12.1026 - predictions_1_Accuracy: 0.8475 - predictions_2_Accuracy: 0.8296Epoch 00000: val_loss improved from inf to 33.66418, saving model to checkpoints/double_ssd300_pascal_07+12_epoch-00_loss-38.6046_val_loss-33.6642.h5
4332/4332 [==============================] - 11544s - loss: 38.6043 - predictions_1_loss: 5.5314 - predictions_2_loss: 5.2038 - predictions_1_proj_loss: 10.9098 - predictions_2_proj_loss: 12.1025 - predictions_1_Accuracy: 0.8475 - predictions_2_Accuracy: 0.8296 - val_loss: 33.6642 - val_predictions_1_loss: 3.9459 - val_predictions_2_loss: 3.7309 - val_predictions_1_proj_loss: 11.6284 - val_predictions_2_proj_loss: 11.7344 - val_predictions_1_Accuracy: 0.8330 - val_predictions_2_Accuracy: 0.8307
Epoch 2/500
4331/4332 [============================>.] - ETA: 2s - loss: 30.5188 - predictions_1_loss: 4.1936 - predictions_2_loss: 3.7751 - predictions_1_proj_loss: 9.8950 - predictions_2_proj_loss: 10.8198 - predictions_1_Accuracy: 0.8585 - predictions_2_Accuracy: 0.8458Epoch 00001: val_loss improved from 33.66418 to 31.59535, saving model to checkpoints/double_ssd300_pascal_07+12_epoch-01_loss-30.5192_val_loss-31.5953.h5
4332/4332 [==============================] - 11951s - loss: 30.5203 - predictions_1_loss: 4.1942 - predictions_2_loss: 3.7756 - predictions_1_proj_loss: 9.8955 - predictions_2_proj_loss: 10.8199 - predictions_1_Accuracy: 0.8585 - predictions_2_Accuracy: 0.8458 - val_loss: 31.5953 - val_predictions_1_loss: 3.7104 - val_predictions_2_loss: 3.8055 - val_predictions_1_proj_loss: 11.3436 - val_predictions_2_proj_loss: 11.4764 - val_predictions_1_Accuracy: 0.8476 - val_predictions_2_Accuracy: 0.8465
Epoch 3/500
4331/4332 [============================>.] - ETA: 2s - loss: 28.9374 - predictions_1_loss: 4.0124 - predictions_2_loss: 3.6122 - predictions_1_proj_loss: 9.6828 - predictions_2_proj_loss: 10.6359 - predictions_1_Accuracy: 0.8624 - predictions_2_Accuracy: 0.8493Epoch 00002: val_loss improved from 31.59535 to 30.29661, saving model to checkpoints/double_ssd300_pascal_07+12_epoch-02_loss-28.9378_val_loss-30.2966.h5
4332/4332 [==============================] - 11802s - loss: 28.9390 - predictions_1_loss: 4.0136 - predictions_2_loss: 3.6130 - predictions_1_proj_loss: 9.6826 - predictions_2_proj_loss: 10.6359 - predictions_1_Accuracy: 0.8624 - predictions_2_Accuracy: 0.8494 - val_loss: 30.2966 - val_predictions_1_loss: 3.6051 - val_predictions_2_loss: 3.6923 - val_predictions_1_proj_loss: 10.9940 - val_predictions_2_proj_loss: 11.1517 - val_predictions_1_Accuracy: 0.8528 - val_predictions_2_Accuracy: 0.8445
Epoch 4/500
4331/4332 [============================>.] - ETA: 2s - loss: 28.3599 - predictions_1_loss: 3.8980 - predictions_2_loss: 3.5451 - predictions_1_proj_loss: 9.5897 - predictions_2_proj_loss: 10.4974 - predictions_1_Accuracy: 0.8641 - predictions_2_Accuracy: 0.8517Epoch 00003: val_loss improved from 30.29661 to 29.49253, saving model to checkpoints/double_ssd300_pascal_07+12_epoch-03_loss-28.3602_val_loss-29.4925.h5
4332/4332 [==============================] - 12021s - loss: 28.3611 - predictions_1_loss: 3.8985 - predictions_2_loss: 3.5456 - predictions_1_proj_loss: 9.5895 - predictions_2_proj_loss: 10.4977 - predictions_1_Accuracy: 0.8641 - predictions_2_Accuracy: 0.8517 - val_loss: 29.4925 - val_predictions_1_loss: 3.5007 - val_predictions_2_loss: 3.4560 - val_predictions_1_proj_loss: 10.7237 - val_predictions_2_proj_loss: 11.0044 - val_predictions_1_Accuracy: 0.8506 - val_predictions_2_Accuracy: 0.8472
Epoch 5/500
4331/4332 [============================>.] - ETA: 2s - loss: 28.1144 - predictions_1_loss: 3.8691 - predictions_2_loss: 3.5027 - predictions_1_proj_loss: 9.5164 - predictions_2_proj_loss: 10.4311 - predictions_1_Accuracy: 0.8662 - predictions_2_Accuracy: 0.8528Epoch 00004: val_loss improved from 29.49253 to 29.39773, saving model to checkpoints/double_ssd300_pascal_07+12_epoch-04_loss-28.1147_val_loss-29.3977.h5
4332/4332 [==============================] - 12026s - loss: 28.1157 - predictions_1_loss: 3.8697 - predictions_2_loss: 3.5031 - predictions_1_proj_loss: 9.5161 - predictions_2_proj_loss: 10.4315 - predictions_1_Accuracy: 0.8662 - predictions_2_Accuracy: 0.8528 - val_loss: 29.3977 - val_predictions_1_loss: 3.4791 - val_predictions_2_loss: 3.4225 - val_predictions_1_proj_loss: 10.5793 - val_predictions_2_proj_loss: 11.1328 - val_predictions_1_Accuracy: 0.8563 - val_predictions_2_Accuracy: 0.8498
Epoch 6/500
4331/4332 [============================>.] - ETA: 2s - loss: 28.0538 - predictions_1_loss: 3.8453 - predictions_2_loss: 3.4586 - predictions_1_proj_loss: 9.6052 - predictions_2_proj_loss: 10.3679 - predictions_1_Accuracy: 0.8651 - predictions_2_Accuracy: 0.8536Epoch 00005: val_loss improved from 29.39773 to 29.09748, saving model to checkpoints/double_ssd300_pascal_07+12_epoch-05_loss-28.0541_val_loss-29.0975.h5
4332/4332 [==============================] - 13560s - loss: 28.0549 - predictions_1_loss: 3.8460 - predictions_2_loss: 3.4593 - predictions_1_proj_loss: 9.6050 - predictions_2_proj_loss: 10.3678 - predictions_1_Accuracy: 0.8651 - predictions_2_Accuracy: 0.8536 - val_loss: 29.0975 - val_predictions_1_loss: 3.3973 - val_predictions_2_loss: 3.2923 - val_predictions_1_proj_loss: 10.6599 - val_predictions_2_proj_loss: 10.9775 - val_predictions_1_Accuracy: 0.8443 - val_predictions_2_Accuracy: 0.8484
Epoch 7/500
4331/4332 [============================>.] - ETA: 3s - loss: 27.7588 - predictions_1_loss: 3.8445 - predictions_2_loss: 3.4384 - predictions_1_proj_loss: 9.4027 - predictions_2_proj_loss: 10.3027 - predictions_1_Accuracy: 0.8682 - predictions_2_Accuracy: 0.8543Epoch 00006: val_loss improved from 29.09748 to 29.01092, saving model to checkpoints/double_ssd300_pascal_07+12_epoch-06_loss-27.7592_val_loss-29.0109.h5
4332/4332 [==============================] - 15167s - loss: 27.7605 - predictions_1_loss: 3.8452 - predictions_2_loss: 3.4390 - predictions_1_proj_loss: 9.4030 - predictions_2_proj_loss: 10.3027 - predictions_1_Accuracy: 0.8682 - predictions_2_Accuracy: 0.8543 - val_loss: 29.0109 - val_predictions_1_loss: 3.5444 - val_predictions_2_loss: 3.3791 - val_predictions_1_proj_loss: 10.4661 - val_predictions_2_proj_loss: 10.8532 - val_predictions_1_Accuracy: 0.8457 - val_predictions_2_Accuracy: 0.8463
Epoch 8/500
4331/4332 [============================>.] - ETA: 3s - loss: 27.7130 - predictions_1_loss: 3.8011 - predictions_2_loss: 3.4250 - predictions_1_proj_loss: 9.4252 - predictions_2_proj_loss: 10.2938 - predictions_1_Accuracy: 0.8676 - predictions_2_Accuracy: 0.8551Epoch 00007: val_loss improved from 29.01092 to 28.86669, saving model to checkpoints/double_ssd300_pascal_07+12_epoch-07_loss-27.7135_val_loss-28.8667.h5
4332/4332 [==============================] - 15063s - loss: 27.7148 - predictions_1_loss: 3.8023 - predictions_2_loss: 3.4260 - predictions_1_proj_loss: 9.4250 - predictions_2_proj_loss: 10.2936 - predictions_1_Accuracy: 0.8676 - predictions_2_Accuracy: 0.8551 - val_loss: 28.8667 - val_predictions_1_loss: 3.3264 - val_predictions_2_loss: 3.3227 - val_predictions_1_proj_loss: 10.7622 - val_predictions_2_proj_loss: 10.6897 - val_predictions_1_Accuracy: 0.8546 - val_predictions_2_Accuracy: 0.8562
Epoch 9/500
4331/4332 [============================>.] - ETA: 2s - loss: 27.9846 - predictions_1_loss: 3.8294 - predictions_2_loss: 3.4170 - predictions_1_proj_loss: 9.7078 - predictions_2_proj_loss: 10.2657 - predictions_1_Accuracy: 0.8651 - predictions_2_Accuracy: 0.8551Epoch 00008: val_loss improved from 28.86669 to 28.40392, saving model to checkpoints/double_ssd300_pascal_07+12_epoch-08_loss-27.9849_val_loss-28.4039.h5
4332/4332 [==============================] - 12704s - loss: 27.9860 - predictions_1_loss: 3.8301 - predictions_2_loss: 3.4177 - predictions_1_proj_loss: 9.7075 - predictions_2_proj_loss: 10.2661 - predictions_1_Accuracy: 0.8651 - predictions_2_Accuracy: 0.8551 - val_loss: 28.4039 - val_predictions_1_loss: 3.3730 - val_predictions_2_loss: 3.2313 - val_predictions_1_proj_loss: 10.3865 - val_predictions_2_proj_loss: 10.6495 - val_predictions_1_Accuracy: 0.8547 - val_predictions_2_Accuracy: 0.8478
Epoch 10/500
4331/4332 [============================>.] - ETA: 2s - loss: 27.7618 - predictions_1_loss: 3.7929 - predictions_2_loss: 3.3896 - predictions_1_proj_loss: 9.5921 - predictions_2_proj_loss: 10.2264 - predictions_1_Accuracy: 0.8665 - predictions_2_Accuracy: 0.8556Epoch 00009: val_loss did not improve
4332/4332 [==============================] - 12351s - loss: 27.7632 - predictions_1_loss: 3.7935 - predictions_2_loss: 3.3902 - predictions_1_proj_loss: 9.5919 - predictions_2_proj_loss: 10.2267 - predictions_1_Accuracy: 0.8665 - predictions_2_Accuracy: 0.8556 - val_loss: 28.6248 - val_predictions_1_loss: 3.4202 - val_predictions_2_loss: 3.2748 - val_predictions_1_proj_loss: 10.5455 - val_predictions_2_proj_loss: 10.6248 - val_predictions_1_Accuracy: 0.8561 - val_predictions_2_Accuracy: 0.8522
Epoch 11/500
4331/4332 [============================>.] - ETA: 2s - loss: 27.4590 - predictions_1_loss: 3.7700 - predictions_2_loss: 3.3821 - predictions_1_proj_loss: 9.3163 - predictions_2_proj_loss: 10.2307 - predictions_1_Accuracy: 0.8704 - predictions_2_Accuracy: 0.8559Epoch 00010: val_loss did not improve
4332/4332 [==============================] - 12634s - loss: 27.4600 - predictions_1_loss: 3.7704 - predictions_2_loss: 3.3829 - predictions_1_proj_loss: 9.3161 - predictions_2_proj_loss: 10.2307 - predictions_1_Accuracy: 0.8704 - predictions_2_Accuracy: 0.8559 - val_loss: 28.6604 - val_predictions_1_loss: 3.3439 - val_predictions_2_loss: 3.2837 - val_predictions_1_proj_loss: 10.5326 - val_predictions_2_proj_loss: 10.7421 - val_predictions_1_Accuracy: 0.8616 - val_predictions_2_Accuracy: 0.8528
|
COVID19_Spread_Prediction.ipynb | ###Markdown
**CoronaVirus** **Prediction**
###Code
# Get data from Github
import numpy as np
from math import sqrt
from sklearn.metrics import mean_squared_error
import pandas as pd
#url = 'https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_19-covid-Confirmed.csv'
url = 'https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_19-covid-Confirmed.csv'
confirmed = pd.read_csv(url, error_bad_lines=False)
url = 'https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_19-covid-Deaths.csv'
death = pd.read_csv(url, error_bad_lines=False)
url = 'https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_19-covid-Recovered.csv'
recover = pd.read_csv(url, error_bad_lines=False)
# fix region names
confirmed['Country/Region']= confirmed['Country/Region'].str.replace("Mainland China", "China")
confirmed['Country/Region']= confirmed['Country/Region'].str.replace("US", "Unites States")
death['Country/Region']= death['Country/Region'].str.replace("Mainland China", "China")
death['Country/Region']= death['Country/Region'].str.replace("US", "Unites States")
recover['Country/Region']= recover['Country/Region'].str.replace("Mainland China", "China")
recover['Country/Region']= recover['Country/Region'].str.replace("US", "Unites States")
###Output
_____no_output_____
###Markdown
Get Population
###Code
from google.colab import files
uploaded = files.upload()
import io
population = pd.read_csv(io.BytesIO(uploaded['population.csv']), sep=',', encoding='latin1')
confirmed=pd.merge(confirmed, population,how='left' ,on=['Province/State','Country/Region'])
death=pd.merge(death, population,how='left' ,on=['Province/State','Country/Region'])
recover=pd.merge(recover, population,how='left' ,on=['Province/State','Country/Region'])
# merge region
confirmed['region']=confirmed['Country/Region'].map(str)+'_'+confirmed['Province/State'].map(str)
death['region']=death['Country/Region'].map(str)+'_'+death['Province/State'].map(str)
recover['region']=recover['Country/Region'].map(str)+'_'+recover['Province/State'].map(str)
confirmed.iloc[:5,:]
###Output
_____no_output_____
###Markdown
Create Time Series + Plots
###Code
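# create_ts reshapes one of the wide frames above (one row per region, one column per date)
# into a date-indexed time series with one column per region, dropping the metadata columns.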
def create_ts(df):
ts=df
ts=ts.drop(['Province/State', 'Country/Region','Lat', 'Long',' Population '], axis=1)
ts.set_index('region')
ts=ts.T
ts.columns=ts.loc['region']
ts=ts.drop('region')
ts=ts.fillna(0)
ts=ts.reindex(sorted(ts.columns), axis=1)
return (ts)
ts=create_ts(confirmed)
ts_d=create_ts(death)
ts_rec=create_ts(recover)
import matplotlib.pyplot as plt
p=ts.reindex(ts.max().sort_values(ascending=False).index, axis=1)
p.iloc[:,:1].plot(marker='*',figsize=(10,4)).set_title('Daily Total Confirmed - Hubei',fontdict={'fontsize': 22})
p.iloc[:,2:10].plot(marker='*',figsize=(10,4)).set_title('Daily Total Confirmed - Major areas',fontdict={'fontsize': 22})
p_d=ts_d.reindex(ts.mean().sort_values(ascending=False).index, axis=1)
p_d.iloc[:,:1].plot(marker='*',figsize=(10,4)).set_title('Daily Total Death - Hubei',fontdict={'fontsize': 22})
p_d.iloc[:,2:10].plot(marker='*',figsize=(10,4)).set_title('Daily Total Death - Major areas',fontdict={'fontsize': 22})
p_r=ts_rec.reindex(ts.mean().sort_values(ascending=False).index, axis=1)
p_r.iloc[:,:1].plot(marker='*',figsize=(10,4)).set_title('Daily Total Recoverd - Hubei',fontdict={'fontsize': 22})
p_r.iloc[:,2:10].plot(marker='*',figsize=(10,4)).set_title('Daily Total Recoverd - Major areas',fontdict={'fontsize': 22})
###Output
_____no_output_____
###Markdown
**Kalman Filter With R**
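For reference, the `%%R` cell further down unrolls the standard linear Kalman filter recursion on each region's cumulative series, using a two-dimensional state (level and trend). With the matrices defined in that cell ($\Phi$ = `phi`, $\Gamma$ = `gama`, $H$ = `H`, and noise variances $Q$ and $R$), one pass of the loop computes:

$$x_{i+1|i} = \Phi\, x_{i|i}, \qquad P_{i+1|i} = \Phi P_{i|i} \Phi^\top + \Gamma Q \Gamma^\top$$

$$K_{i+1} = P_{i+1|i} H^\top \big(H P_{i+1|i} H^\top + R\big)^{-1}$$

$$x_{i+1|i+1} = x_{i+1|i} + K_{i+1}\big(y_{i+1} - H x_{i+1|i}\big), \qquad P_{i+1|i+1} = (I - K_{i+1} H)\, P_{i+1|i}$$

with $\Phi = \begin{pmatrix}1 & t\\ 0 & 1\end{pmatrix}$, $\Gamma = \begin{pmatrix}t^2/2\\ t\end{pmatrix}$ and $H = (1\;\;0)$; the one-day-ahead forecast reported for each date is the first component of $x_{i+1|i}$.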
###Code
from google.colab import files
uploaded = files.upload()
import io
# Create data for R script
ts_r=ts.reset_index()
ts_r=ts_r.rename(columns = {'index':'date'})
ts_r['date']=pd.to_datetime(ts_r['date'] ,errors ='coerce')
ts_r.to_csv('ts_r.csv')
import rpy2
%load_ext rpy2.ipython
%%R
install.packages('pracma')
install.packages('reshape')
%%R
require(pracma)
require(Metrics)
require(readr)
all<- read_csv("ts_r.csv")
all$X1<-NULL
date<-all[,1]
date[nrow(date) + 1,1] <-all[nrow(all),1]+1
pred_all<-NULL
for (n in 2:ncol(all)-1) {
Y<-ts(data = all[n+1], start = 1, end =nrow(all)+1)
sig_w<-0.01
w<-sig_w*randn(1,100) # acceleration which denotes the fluctuation (Q/R) rnorm(100, mean = 0, sd = 1)
sig_v<-0.01
v<-sig_v*randn(1,100)
t<-0.45
phi<-matrix(c(1,0,t,1),2,2)
gama<-matrix(c(0.5*t^2,t),2,1)
H<-matrix(c(1,0),1,2)
#Kalman
x0_0<-p0_0<-matrix(c(0,0),2,1)
p0_0<-matrix(c(1,0,0,1),2,2)
Q<-0.01
R<-0.01
X<-NULL
X2<-NULL
pred<-NULL
for (i in 0:nrow(all)) {
namp <-paste("p", i+1,"_",i, sep = "")
assign(namp, phi%*%(get(paste("p", i,"_",i, sep = "")))%*%t(phi)+gama%*%Q%*%t(gama))
namk <- paste("k", i+1, sep = "")
assign(namk,get(paste("p", i+1,"_",i, sep = ""))%*%t(H)%*%(1/(H%*%get(paste("p", i+1,"_",i, sep = ""))%*%t(H)+R)))
namx <- paste("x", i+1,"_",i, sep = "")
assign(namx,phi%*%get(paste("x", i,"_",i, sep = "")))
namE <- paste("E", i+1, sep = "")
assign(namE,Y[i+1]-H%*%get(paste("x", i+1,"_",i, sep = "")))
namx2 <- paste("x", i+1,"_",i+1, sep = "")
assign(namx2,get(paste("x", i+1,"_",i, sep = ""))+get(paste("k", i+1, sep = ""))%*%get(paste("E", i+1, sep = "")))
namp2 <- paste("p", i+1,"_",i+1, sep = "")
assign(namp2,(p0_0-get(paste("k", i+1, sep = ""))%*%H)%*%get(paste("p", i+1,"_",i, sep = "")))
X<-rbind(X,get(paste("x", i+1,"_",i,sep = ""))[1])
X2<-rbind(X2,get(paste("x", i+1,"_",i,sep = ""))[2])
if(i>2){
remove(list=(paste("p", i-1,"_",i-2, sep = "")))
remove(list=(paste("k", i-1, sep = "")))
remove(list=(paste("E", i-1, sep = "")))
remove(list=(paste("p", i-2,"_",i-2, sep = "")))
remove(list=(paste("x", i-1,"_",i-2, sep = "")))
remove(list=(paste("x", i-2,"_",i-2, sep = "")))}
}
pred<-NULL
pred<-cbind(Y,X,round(X2,4))
pred<-as.data.frame(pred)
pred$region<-colnames(all[,n+1])
pred$date<-date$date
pred$actual<-rbind(0,(cbind(pred[2:nrow(pred),1])/pred[1:nrow(pred)-1,1]-1)*100)
pred$predict<-rbind(0,(cbind(pred[2:nrow(pred),2])/pred[1:nrow(pred)-1,2]-1)*100)
pred$pred_rate<-(pred$X/pred$Y-1)*100
pred$X2_change<-rbind(0,(cbind(pred[2:nrow(pred),3]-pred[1:nrow(pred)-1,3])))
pred_all<-rbind(pred_all,pred)
}
pred_all<-cbind(pred_all[,4:5],pred_all[,1:3])
names(pred_all)[5]<-"X2"
pred_all=pred_all[with( pred_all, order(region, date)), ]
pred_all<-pred_all[,3:5]
p=%R pred_all
############ Merge R output due to package problem
t=ts_d
t=t.stack().reset_index(name='confirmed')
t.columns=['date', 'region','confirmed']
t['date']=pd.to_datetime(t['date'] ,errors ='coerce')
t=t.sort_values(['region', 'date'])
temp=t.iloc[:,:3]
temp=temp.reset_index(drop=True)
for i in range(1,len(t)+1):
if(temp.iloc[i,1] is not temp.iloc[i-1,1]):
temp.loc[len(temp)+1] = [temp.iloc[i-1,0]+ pd.DateOffset(1),temp.iloc[i-1,1], 0]
temp=temp.sort_values(['region', 'date'])
temp=temp.reset_index(drop=True)
temp['Y']=p['Y']
temp['X']=p['X']
temp['X2']=p['X2']
from google.colab import files
uploaded = files.upload()
from google.colab import files
uploaded = files.upload()
w=pd.read_csv('w.csv', sep=',', encoding='latin1')
w['date']=pd.to_datetime(w['date'],format='%d/%m/%Y')
#w['date']=pd.to_datetime(w['date'],errors ='coerce')
w_forecast=pd.read_csv('w_forecast.csv', sep=',', encoding='latin1')
w_forecast['date']=pd.to_datetime(w_forecast['date'],format='%d/%m/%Y')
###Output
_____no_output_____
###Markdown
**Pre-Processing Data**
Build & Train Data Structure
###Code
t=ts
t=t.stack().reset_index(name='confirmed')
t.columns=['date', 'region','confirmed']
t['date']=pd.to_datetime(t['date'] ,errors ='coerce')
t=t.sort_values(['region', 'date'])
# Add 1 Future day for prediction
t=t.reset_index(drop=True)
for i in range(1,len(t)+1):
if(t.iloc[i,1] is not t.iloc[i-1,1]):
t.loc[len(t)+1] = [t.iloc[i-1,0]+ pd.DateOffset(1),t.iloc[i-1,1], 0]
t=t.sort_values(['region', 'date'])
t=t.reset_index(drop=True)
t['1_day_change']=t['3_day_change']=t['7_day_change']=t['1_day_change_rate']=t['3_day_change_rate']=t['7_day_change_rate']=t['last_day']=0
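# Column positions used by iloc in the loop below (given the original date, region, confirmed order
# and the left-to-right creation order of the chained assignment above):
# 2 = confirmed, 3 = 1_day_change, 4 = 3_day_change, 5 = 7_day_change,
# 6 = 1_day_change_rate, 7 = 3_day_change_rate, 8 = 7_day_change_rate, 9 = last_day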
for i in range(1,len(t)):
if(t.iloc[i,1] is t.iloc[i-2,1]):
t.iloc[i,3]=t.iloc[i-1,2]-t.iloc[i-2,2]
t.iloc[i,6]=(t.iloc[i-1,2]/t.iloc[i-2,2]-1)*100
t.iloc[i,9]=t.iloc[i-1,2]
if(t.iloc[i,1] is t.iloc[i-4,1]):
t.iloc[i,4]=t.iloc[i-1,2]-t.iloc[i-4,2]
t.iloc[i,7]=(t.iloc[i-1,2]/t.iloc[i-4,2]-1)*100
if(t.iloc[i,1] is t.iloc[i-8,1]):
t.iloc[i,5]=t.iloc[i-1,2]-t.iloc[i-8,2]
t.iloc[i,8]=(t.iloc[i-1,2]/t.iloc[i-8,2]-1)*100
t=t.fillna(0)
t=t.merge(temp[['date','region', 'X']],how='left',on=['date','region'])
t=t.rename(columns = {'X':'kalman_prediction'})
t=t.replace([np.inf, -np.inf], 0)
t['kalman_prediction']=round(t['kalman_prediction'])
train=t.merge(confirmed[['region',' Population ']],how='left',on='region')
train=train.rename(columns = {' Population ':'population'})
train['population']=train['population'].str.replace(r" ", '')
train['population']=train['population'].str.replace(r",", '')
train['population']=train['population'].fillna(1)
train['population']=train['population'].astype('int32')
train['infected_rate'] =train['last_day']/train['population']*10000
train=train.merge(w, how='left',on=['date','region'])
train=train.sort_values(['region', 'date'])
### fill missing weather
for i in range(0,len(train)):
if(np.isnan(train.iloc[i,13])):
if(train.iloc[i,1] is train.iloc[i-1,1]):
train.iloc[i,13]=train.iloc[i-1,13]
train.iloc[i,14]=train.iloc[i-1,14]
###Output
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:5: RuntimeWarning: invalid value encountered in double_scalars
"""
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:9: RuntimeWarning: invalid value encountered in double_scalars
if __name__ == '__main__':
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:12: RuntimeWarning: invalid value encountered in double_scalars
if sys.path[0] == '':
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:5: RuntimeWarning: divide by zero encountered in double_scalars
"""
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:9: RuntimeWarning: divide by zero encountered in double_scalars
if __name__ == '__main__':
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:12: RuntimeWarning: divide by zero encountered in double_scalars
if sys.path[0] == '':
###Markdown
**Kalman 1 day Prediction with Evaluation**
###Code
# Select region
region='China_Hubei'
evaluation=pd.DataFrame(columns=['region','mse','rmse','mae'])
place=0
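# In the loop below, column 10 of t is the kalman_prediction merged earlier and column 2 is the
# observed confirmed count; mse/rmse/mae are symmetric, so the ex/pred naming does not affect the values.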
for i in range(1,len(t)):
if(t.iloc[i,1] is not t.iloc[i-1,1]):
ex=np.array(t.iloc[i-len(ts):i,10])
pred=np.array(t.iloc[i-len(ts):i,2])
evaluation=evaluation.append({'region': t.iloc[i-1,1], 'mse': np.power((ex - pred),2).mean(),'rmse':sqrt(mean_squared_error(ex,pred)),'mae': (abs(ex - pred)).mean()}, ignore_index=True)
p=t[t['region']==region][['date','region','confirmed','kalman_prediction']]
p=p.rename(columns = {'confirmed':'recoverd'})
p.iloc[len(p)-1,2]=None
p=p.set_index(['date'])
p.iloc[:,1:].plot(marker='o',figsize=(16,8)).set_title('Kalman Prediction - Select Region to Change - {}'.format(p.iloc[0,0]))
print(evaluation[evaluation['region']==p.iloc[0,0]])
###Output
_____no_output_____ |
examples/Forest_portability_example.ipynb | ###Markdown
Code Portability and Intro to Forest

The quantum hardware landscape is incredibly competitive and rapidly changing. Many full-stack quantum software platforms lock users into them in order to use the associated devices and simulators. This notebook demonstrates how `pytket` can free up your existing high-level code to be used on devices from other providers. We will take a state-preparation and evolution circuit generated using `qiskit`, and enable it to be run on several Rigetti backends.

To use a real hardware device, this notebook should be run from a Rigetti QMI instance. Look [here](https://www.rigetti.com/qcs/docs/intro-to-qcs) for information on how to set this up. Otherwise, make sure you have QuilC and QVM running in server mode. You will need to have `pytket`, `pytket_pyquil`, and `pytket_qiskit` installed, which are all available from PyPI.

We will start by using `qiskit` to build a random initial state over some qubits. (We remove the initial "reset" gates from the circuit since these are not recognized by the Forest backends, which assume an all-zero initial state.)
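Before building the circuit, a quick way to confirm the required packages are importable (a minimal sketch — the PyPI names in the comment are an assumption, not something this notebook pins down; the dotted import paths are the ones used later in the notebook):

```python
# Assumed install commands (PyPI names): pip install pytket pytket-pyquil pytket-qiskit
import importlib

for module in ("pytket", "pytket.extensions.pyquil", "pytket.extensions.qiskit"):
    importlib.import_module(module)  # raises ImportError if anything is missing
print("pytket and its pyquil/qiskit extensions are available")
```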
###Code
from qiskit import QuantumCircuit
from qiskit.quantum_info.states.random import random_statevector
n_qubits = 3
state = random_statevector((1 << n_qubits, 1)).data
state_prep_circ = QuantumCircuit(n_qubits)
state_prep_circ.initialize(state)
state_prep_circ = state_prep_circ.decompose()
state_prep_circ.data = [
datum for datum in state_prep_circ.data if datum[0].name != "reset"
]
print(state_prep_circ)
###Output
_____no_output_____
###Markdown
We can now evolve this state under an operator for a given duration.
###Code
from qiskit.opflow import PauliTrotterEvolution
from qiskit.opflow.primitive_ops import PauliSumOp
from qiskit.quantum_info import Pauli
duration = 1.2
op = PauliSumOp.from_list([("XXI", 0.3), ("YYI", 0.5 + 1j * 0.2), ("ZZZ", -0.4)])
evolved_op = (duration * op).exp_i()
evolution_circ = PauliTrotterEvolution(reps=1).convert(evolved_op).to_circuit()
print(evolution_circ)
state_prep_circ += evolution_circ
###Output
_____no_output_____
###Markdown
Now that we have a circuit, `pytket` can take this and start operating on it directly. For example, we can apply some basic compilation passes to simplify it.
###Code
from pytket.extensions.qiskit import qiskit_to_tk
tk_circ = qiskit_to_tk(state_prep_circ)
from pytket.passes import (
SequencePass,
CliffordSimp,
DecomposeBoxes,
KAKDecomposition,
SynthesiseTket,
)
DecomposeBoxes().apply(tk_circ)
optimise = SequencePass([KAKDecomposition(), CliffordSimp(False), SynthesiseTket()])
optimise.apply(tk_circ)
###Output
_____no_output_____
###Markdown
Display the optimised circuit:
###Code
from pytket.circuit.display import render_circuit_jupyter
render_circuit_jupyter(tk_circ)
###Output
_____no_output_____
###Markdown
The Backends in `pytket` abstract away the differences between different devices and simulators as much as possible, allowing painless switching between them. The `pytket_pyquil` package provides two Backends: `ForestBackend` encapsulates both running on physical devices via Rigetti QCS and simulating those devices on the QVM, and `ForestStateBackend` acts as a wrapper to the pyQuil Wavefunction Simulator. Both of these still have a few restrictions on the circuits that can be run. Each only supports a subset of the gate types available in `pytket`, and a real device or associated simulation will have restricted qubit connectivity. The Backend objects will contain a default compilation pass that will satisfy these constraints as much as possible, with minimal or no optimisation. The `ForestStateBackend` will allow us to view the full statevector (wavefunction) expected from a perfect execution of the circuit.
###Code
from pytket.extensions.pyquil import ForestStateBackend
state_backend = ForestStateBackend()
state_backend.compile_circuit(tk_circ)
handle = state_backend.process_circuit(tk_circ)
state = state_backend.get_result(handle).get_state()
print(state)
###Output
_____no_output_____
###Markdown
For users who are familiar with the Forest SDK, the association of qubits to indices of bitstrings (and consequently the ordering of statevectors) used by default in `pytket` Backends differs from that described in the [Forest docs](http://docs.rigetti.com/en/stable/wavefunction_simulator.html#multi-qubit-basis-enumeration). You can recover the ordering used by the Forest systems with `BackendResult.get_state(tk_circ, basis=pytket.BasisOrder.dlo)` (see our docs on the `BasisOrder` enum for more details). Connecting to real devices works very similarly. Instead of obtaining the full statevector, we are only able to measure the quantum state and sample from the resulting distribution. Beyond that, the process is pretty much the same. The following shows how to run the circuit on the "9q-square" lattice. The `as_qvm` switch on the `get_qc` method will switch between connecting to the real Aspen device and the QVM, allowing you to test your code with a simulator before you reserve your slot with the device.
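As a minimal sketch (assuming the `state_backend` and `handle` from above, and that `BasisOrder` is exposed from `pytket.circuit` as in recent pytket releases), the two orderings can be compared directly:

```python
from pytket.circuit import BasisOrder

result = state_backend.get_result(handle)
state_ilo = result.get_state()                      # pytket's default (ILO) ordering
state_dlo = result.get_state(basis=BasisOrder.dlo)  # Forest-style (DLO) ordering
print(state_ilo)
print(state_dlo)
```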
###Code
tk_circ.measure_all()
from pyquil import get_qc
from pytket.extensions.pyquil import ForestBackend
aspen_qc = get_qc("9q-square", as_qvm=True)
aspen_backend = ForestBackend(aspen_qc)
aspen_backend.compile_circuit(tk_circ)
counts = aspen_backend.run_circuit(tk_circ, 2000).get_counts()
print(counts)
###Output
_____no_output_____ |
04Machine Learning 2/01Advanced Regression/03Model Evaluation/Regression+Assignment-+Predicting+CO2+Emissions.ipynb | ###Markdown
Regression Assignment: Predicting CO2 Emissions Using Other Variables (for a given year)Let's now try to build a regression model to predict CO2 emissions using other variables (for any given year). This will help us in understanding which variables affect CO2 emissions. This understanding can then be used by, for example, organisations/authorities such as the UN, governments etc. to create regulatory policies etc.
###Code
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# total co2 emissions of various countries (for a given year)
# note that this is *total* yearly co2 emissions (in thousands of tonnes), not per person
year = "2014"
total_co2 = pd.read_csv("co2_prediction/yearly_co2_emissions_1000_tonnes.csv")
total_co2 = total_co2[["geo", year]]
total_co2.columns = ["geo", "co2"]
print(total_co2.shape)
total_co2.head()
# median value
median_co2 = total_co2["co2"].median()
median_co2
# plot co2 emissions by country
total_co2 = total_co2[total_co2["co2"] > median_co2]
plt.figure(figsize=(16, 8))
sns.barplot(x="geo", y="co2", data=total_co2)
plt.title("Total CO2 Emission in {}".format(year))
plt.xticks(rotation=90)
###Output
_____no_output_____
###Markdown
The top three countries in terms of total CO2 emissions are China, the US, and India. If this makes you think that these three countries are the main culprits in global warming, think again. What we've plotted above is the total CO2 emissions, which will be high for countries with higher populations. A better variable to consider is the CO2 emission *per capita*.
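To make that concrete, here is a tiny toy illustration (made-up numbers, not from the Gapminder files): per-capita emissions are just total emissions divided by population, so a country can dominate the total while emitting relatively little per person.

```python
# Toy illustration with made-up numbers: per-capita = total / population.
toy = pd.DataFrame({
    "geo": ["big_country", "small_country"],
    "total_co2_tonnes": [10_000_000, 1_500_000],
    "population": [1_000_000_000, 10_000_000],
})
toy["co2_per_capita"] = toy["total_co2_tonnes"] / toy["population"]
print(toy)
# big_country emits far more in total, but small_country emits 15x more per person.
```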
###Code
# co2 emissions per capita
# this is co2 emissions per person (tonnes per person) for a given year
co2 = pd.read_csv("co2_prediction/co2_emissions_tonnes_per_person.csv")
co2 = co2[["geo", year]]
co2.columns = ["geo", "co2"]
co2.head()
# median value
median_per_capita_co2 = co2["co2"].median()
median_per_capita_co2
# plot per capita co2 emissions
co2 = co2[co2["co2"] > median_per_capita_co2]
plt.figure(figsize=(16, 8))
sns.barplot(x="geo", y="co2", data=co2)
plt.xticks(rotation=90)
# income
income = pd.read_csv("co2_prediction/income_per_person_gdppercapita_ppp_inflation_adjusted.csv")
income = income[["geo", year]]
income.columns = ["geo", "income"]
income.head()
# merge with co2 df
df = pd.merge(co2, income, on='geo', how='inner')
df.head()
# plot per capita income and co2
plt.scatter(df['income'], df['co2'])
plt.xlabel('income')
plt.ylabel('co2')
# forest
forest = pd.read_csv("co2_prediction/forest_coverage_percent.csv")
forest = forest[["geo", year]]
forest.columns = ["geo", "forest"]
forest.head()
# merge forest with the df above
df = pd.merge(df, forest, how="inner",
left_on="geo", right_on="geo")
df.head()
# plot forest and co2
plt.scatter(df['forest'], df['co2'])
plt.xlabel('forest')
plt.ylabel('co2')
###Output
_____no_output_____
###Markdown
Function to Read and Merge Files
###Code
# fn to read and merge dfs
import os
root_folder = "co2_prediction/"
def read_and_merge_files(root_folder, year="2010"):
dfs = []
for subdir, dirs, files in os.walk(root_folder):
for file in files:
filename = file.split(".")[0]
df = pd.read_csv(root_folder+str(file))
# include only datasets having more than 50 countries' data
if year in df.columns and df.shape[0] > 50:
df = df[["geo", year]]
df.columns = ["geo", filename]
dfs.append(df)
# merge all files using the key 'geo'
master_df = dfs[0]
for i in range(len(dfs)-1):
master_df = pd.merge(master_df, dfs[i+1], how="inner",
left_on="geo", right_on="geo")
return master_df
master_df = read_and_merge_files(root_folder, year="2007")
master_df.head()
# saving the merged dataset in a CSV file
master_df.to_csv("co2_prediction_dataset", sep=',')
# plots
df = master_df.iloc[:, 1:]
print(df.shape)
df.head()
# plot CO2 against all other variables
plt.figure()
for col in range(1, df.shape[1]):
    plt.scatter(df.iloc[:, col], df["co2_emissions_tonnes_per_person"])
    plt.xlabel(df.columns[col])
    plt.ylabel("co2 per capita")
    plt.show()
###Output
_____no_output_____
###Markdown
Regression
###Code
master_df = read_and_merge_files(root_folder, year="2007")
df = master_df.iloc[:, 1:]
print(df.shape)
df.head()
df.isna().sum()
# drop rows with NaNs
df = df.dropna(axis=0)
df.isna().sum()
df.head()
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split, KFold, GridSearchCV
from sklearn.linear_model import LinearRegression, Lasso
from sklearn import metrics

y = df.loc[:, 'co2_emissions_tonnes_per_person']
X = df.loc[:, df.columns != 'co2_emissions_tonnes_per_person']
# scale (keep X as a DataFrame so the column names survive for the coefficient tables below)
scaler = StandardScaler()
X = pd.DataFrame(scaler.fit_transform(X), columns=X.columns, index=X.index)
# split
X_train, X_test, y_train, y_test = train_test_split(X, y,
                                                    test_size = 0.2,
                                                    random_state = 1)
# linear regression
lm = LinearRegression()
lm.fit(X_train, y_train)
# predict
y_train_pred = lm.predict(X_train)
metrics.r2_score(y_true=y_train, y_pred=y_train_pred)
y_test_pred = lm.predict(X_test)
metrics.r2_score(y_true=y_test, y_pred=y_test_pred)
# model coefficients (intercept first, then one coefficient per feature)
model_parameters = list(lm.coef_)
model_parameters.insert(0, lm.intercept_)
model_parameters = [round(x, 3) for x in model_parameters]
cols = X.columns
cols = cols.insert(0, "constant")
list(zip(cols, model_parameters))
###Output
_____no_output_____
###Markdown
Lasso
###Code
# lasso regression
lm = Lasso(alpha=0.001)
lm.fit(X_train, y_train)
# predict
y_train_pred = lm.predict(X_train)
print(metrics.r2_score(y_true=y_train, y_pred=y_train_pred))
y_test_pred = lm.predict(X_test)
print(metrics.r2_score(y_true=y_test, y_pred=y_test_pred))
# lasso model parameters
model_parameters = list(lm.coef_)
model_parameters.insert(0, lm.intercept_)
model_parameters = [round(x, 3) for x in model_parameters]
cols = X.columns
cols = cols.insert(0, "constant")
list(zip(cols, model_parameters))
# grid search CV
# set up cross validation scheme
folds = KFold(n_splits = 5, shuffle = True, random_state = 4)
# specify range of hyperparameters
params = {'alpha': [0.001, 0.01, 1.0, 5.0, 10.0]}
# grid search
# lasso model
model = Lasso()
model_cv = GridSearchCV(estimator = model, param_grid = params,
scoring= 'r2',
cv = folds,
return_train_score=True, verbose = 1)
model_cv.fit(X_train, y_train)
cv_results = pd.DataFrame(model_cv.cv_results_)
cv_results.head()
# plot
cv_results['param_alpha'] = cv_results['param_alpha'].astype('float32')
plt.plot(cv_results['param_alpha'], cv_results['mean_train_score'])
plt.plot(cv_results['param_alpha'], cv_results['mean_test_score'])
plt.xlabel('alpha')
plt.ylabel('r2 score')
plt.xscale('log')
plt.show()
# model with optimal alpha
# lasso regression
lm = Lasso(alpha=0.001)
lm.fit(X_train, y_train)
# predict
y_train_pred = lm.predict(X_train)
print(metrics.r2_score(y_true=y_train, y_pred=y_train_pred))
y_test_pred = lm.predict(X_test)
print(metrics.r2_score(y_true=y_test, y_pred=y_test_pred))
# lasso model parameters
model_parameters = list(lm.coef_)
model_parameters.insert(0, lm.intercept_)
model_parameters = [round(x, 3) for x in model_parameters]
cols = X.columns
cols = cols.insert(0, "constant")
list(zip(cols, model_parameters))
###Output
_____no_output_____ |
notebooks/10-WCS/Visualization_tutorial.ipynb | ###Markdown
Visualizing astronomical images and coordinates Scaling and StretchingThe astropy.visualization module provides a framework for transforming values in images (and more generally any arrays), typically for the purpose of visualization. Two main types of transformations are provided:Normalization to the [0:1] range using lower and upper limits where $x$ represents the values in the original image: $y = {{x - v_{min}} \over {v_{max} - v_{min}}}$ Stretching of values in the [0:1] range to the [0:1] range using a linear or non-linear function: $z=f(y)$ displaying a bitmap -- imshow (matplotlib)
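As a small sketch of those two steps on a plain NumPy array (the stretch shown follows the formula astropy documents for `LogStretch`, $z = \log(a y + 1)/\log(a + 1)$ with $a = 1000$ by default; treat the exact constant as an assumption):

```python
import numpy as np

x = np.array([10., 100., 1000., 10000.])
vmin, vmax = x.min(), x.max()

# Step 1: normalize to the [0, 1] range
y = (x - vmin) / (vmax - vmin)

# Step 2: stretch the normalized values with a log function
a = 1000.0
z = np.log(a * y + 1.0) / np.log(a + 1.0)
print(y)
print(z)
```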
###Code
import astropy.io.fits as fits
import matplotlib.pylab as plt
from astropy.visualization import (MinMaxInterval, LogStretch,
ImageNormalize)
###Output
_____no_output_____
###Markdown
Here we use only the numpy array of the data.
###Code
image = fits.getdata('data/w5.fits')
###Output
_____no_output_____
###Markdown
Astropy visualization functions for scaling and stretching
###Code
# Scale to image minimum and maximum, stretch with log function
norm = ImageNormalize(image, interval=MinMaxInterval(),
stretch=LogStretch())
###Output
_____no_output_____
###Markdown
Note that astronomical images in FITS use origin="lower" which is not the matplotlib default
###Code
%matplotlib inline
fig = plt.figure(figsize=(8,8))
plt.imshow(image, norm=norm, origin="lower", cmap='Greys_r');
###Output
_____no_output_____
###Markdown
Contours with matplotlib
###Code
%matplotlib inline
from astropy.visualization import (ManualInterval, LogStretch,
ImageNormalize)
norm = ImageNormalize(image, interval=ManualInterval(vmin=370.0, vmax=1000.0),
stretch=LogStretch())
%matplotlib inline
fig = plt.figure(figsize=(8,8))
plt.contour(image, [550, 750, 950], origin='lower',
colors=['grey', 'green', 'blue']);
###Output
_____no_output_____
###Markdown
See this [contour demo](http://matplotlib.org/examples/pylab_examples/contour_demo.html) from the matplotlib documentation for more to do with contours Adding coordinates (wcsaxes) Prior to this point, we have visualized numpy arrays
###Code
from astropy.wcs import WCS
hdu = fits.open('data/w5.fits')[0]
wcs = WCS(hdu.header)
###Output
_____no_output_____
###Markdown
In this case, the World Coordinate System is a simple tangent projection
###Code
wcs
norm = ImageNormalize(hdu.data, interval=MinMaxInterval(),
stretch=LogStretch())
###Output
_____no_output_____
###Markdown
Make sure to pick a good [sequential colormap](http://matplotlib.org/users/colormaps.html) and avoid 'jet'!
###Code
%matplotlib inline
fig = plt.figure(figsize=(8,8))
ax = plt.subplot(projection=wcs)
plt.imshow(hdu.data, norm=norm, cmap='viridis', origin="lower")
plt.grid(color='white', ls='solid')
plt.xlabel('Right Ascension')
plt.ylabel('Declination')
###Output
_____no_output_____
###Markdown
Overlaying markers
###Code
from astropy.table import Table
w5tbl = Table.read('data/w5_wise.tbl', format='ascii.ipac')
w5tbl = w5tbl[w5tbl['w4snr'] > 30.0]
fig = plt.figure(figsize=(8,8))
hdu = fits.open('data/w5.fits')[0]
wcs = WCS(hdu.header)
ax = plt.subplot(projection=wcs)
plt.imshow(hdu.data, norm=norm, origin="lower", cmap='Greys_r')
ax.scatter(w5tbl['ra'], w5tbl['dec'], transform=ax.get_transform('world'))
plt.xlabel('Right Ascension')
plt.ylabel('Declination')
###Output
_____no_output_____ |
process_mimic.ipynb | ###Markdown
###Code
import sys
import _pickle as pickle
import numpy as np
from datetime import datetime
from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)
def convert_to_icd9(dxStr):
if dxStr.startswith('E'):
if len(dxStr) > 4: return dxStr[:4] + '.' + dxStr[4:]
else: return dxStr
else:
if len(dxStr) > 3: return dxStr[:3] + '.' + dxStr[3:]
else: return dxStr
def convert_to_3digit_icd9(dxStr):
if dxStr.startswith('E'):
if len(dxStr) > 4: return dxStr[:4]
else: return dxStr
else:
if len(dxStr) > 3: return dxStr[:3]
else: return dxStr
# input arguments
binary_count = 'binary'
root_dir = "/content/gdrive/My Drive/"
if binary_count == 'count':
base_dir = root_dir + 'GOSH/Synthetic Data/medgan/count/'
else:
base_dir = root_dir + 'GOSH/Synthetic Data/medgan/binary/'
raw_data_dir = root_dir + 'GOSH/Synthetic Data/medgan/mimic/'
processed_data_dir = base_dir + 'processed_mimic/'
model_dir = base_dir + 'models/'
gen_data_dir = base_dir + 'generated_data/'
admissionFile = raw_data_dir + 'ADMISSIONS.csv'
diagnosisFile = raw_data_dir + 'DIAGNOSES_ICD.csv'
outFile = processed_data_dir + 'processed_mimic'
if binary_count != 'binary' and binary_count != 'count':
print('You must choose either binary or count.')
print('Building pid-admission mapping, admission-date mapping')
pidAdmMap = {}
admDateMap = {}
infd = open(admissionFile, 'r')
infd.readline()
for line in infd:
tokens = line.strip().split(',')
pid = int(tokens[1])
admId = int(tokens[2])
admTime = datetime.strptime(tokens[3], '%Y-%m-%d %H:%M:%S')
admDateMap[admId] = admTime
if pid in pidAdmMap: pidAdmMap[pid].append(admId)
else: pidAdmMap[pid] = [admId]
infd.close()
print('Building admission-dxList mapping')
admDxMap = {}
infd = open(diagnosisFile, 'r')
infd.readline()
for line in infd:
tokens = line.strip().split(',')
admId = int(tokens[2])
# Uncomment this line and comment the line below, if you want to use the entire ICD9 digits.
dxStr = 'D_' + convert_to_icd9(tokens[4][1:-1])
#dxStr = 'D_' + convert_to_3digit_icd9(tokens[4][1:-1])
if admId in admDxMap: admDxMap[admId].append(dxStr)
else: admDxMap[admId] = [dxStr]
infd.close()
print('Building pid-sortedVisits mapping')
pidSeqMap = {}
for pid, admIdList in pidAdmMap.items():
#if len(admIdList) < 2: continue
sortedList = sorted([(admDateMap[admId], admDxMap[admId]) for admId in admIdList])
pidSeqMap[pid] = sortedList
print('Building pids, dates, strSeqs')
pids = []
dates = []
seqs = []
for pid, visits in pidSeqMap.items():
pids.append(pid)
seq = []
date = []
for visit in visits:
date.append(visit[0])
seq.append(visit[1])
dates.append(date)
seqs.append(seq)
print('Converting strSeqs to intSeqs, and making types')
types = {}
newSeqs = []
for patient in seqs:
newPatient = []
for visit in patient:
newVisit = []
for code in visit:
if code in types:
newVisit.append(types[code])
else:
types[code] = len(types)
newVisit.append(types[code])
newPatient.append(newVisit)
newSeqs.append(newPatient)
print('Constructing the matrix')
numPatients = len(newSeqs)
numCodes = len(types)
matrix = np.zeros((numPatients, numCodes)).astype('float32')
for i, patient in enumerate(newSeqs):
for visit in patient:
for code in visit:
if binary_count == 'binary':
matrix[i][code] = 1.
else:
matrix[i][code] += 1.
pickle.dump(pids, open(outFile+'.pids', 'wb'), -1)
pickle.dump(matrix, open(outFile+'.matrix', 'wb'), -1)
pickle.dump(types, open(outFile+'.types', 'wb'), -1)
###Output
_____no_output_____ |
data_modelling/01_indroduction_to_data_modelling/support_material/l1-demo-1-creating-a-table-with-postgres.ipynb | ###Markdown
Lesson 1 Demo 1: Creating a Table with PostgreSQL Walk through the basics of PostgreSQL: creating a table, inserting rows of data, and running a simple SQL query to validate the information. Typically, we would use a python wrapper called *psycopg2* to run the PostgreSQL queries. This library should be preinstalled, but to install it locally in the future, run the following command in the notebook: !pip3 install --user psycopg2 More documentation can be found here: http://initd.org/psycopg/ Import the library Note: An error might pop up after this command has executed. Read it carefully before proceeding.
###Code
import psycopg2
###Output
_____no_output_____
###Markdown
Create a connection to the database: 1. Connect to the local instance of PostgreSQL (*127.0.0.1*). 2. Use the database/schema from the instance. 3. The connection reaches out to the database (*studentdb*) and uses the correct privileges to connect to the database (*user and password = student*). Note 1: This block of code will be standard in all notebooks. Note 2: Adding the try-except will make sure errors are caught and understood.
###Code
try:
conn = psycopg2.connect("host=127.0.0.1 dbname=studentdb user=student password=student")
except psycopg2.Error as e:
print("Error: Could not make connection to the Postgres database")
print(e)
###Output
_____no_output_____
###Markdown
Use the connection to get a cursor that can be used to execute queries.
###Code
try:
cur = conn.cursor()
except psycopg2.Error as e:
print("Error: Could not get curser to the Database")
print(e)
###Output
_____no_output_____
###Markdown
Use automatic commit so that each action is committed without having to call conn.commit() after each command. The ability to rollback and commit transactions is a feature of Relational Databases.
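For contrast, here is a minimal sketch of the manual alternative (using the same connection settings as above): without autocommit, changes only become permanent on an explicit commit, and a failed transaction can be rolled back.

```python
# Sketch only: explicit transaction handling instead of autocommit.
conn2 = psycopg2.connect("host=127.0.0.1 dbname=studentdb user=student password=student")
cur2 = conn2.cursor()
try:
    cur2.execute("SELECT 1")
    conn2.commit()      # make the transaction's changes permanent
except psycopg2.Error as e:
    print(e)
    conn2.rollback()    # undo everything since the last commit
finally:
    cur2.close()
    conn2.close()
```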
###Code
conn.set_session(autocommit=True)
###Output
_____no_output_____
###Markdown
Test the Connection and Error Handling CodeThe try-except block should handle the error: We are trying to do a select * on a table but the table has not been created yet.
###Code
try:
cur.execute("select * from udacity.music_library")
except psycopg2.Error as e:
print(e)
###Output
_____no_output_____
###Markdown
Create a database to work in
###Code
try:
cur.execute("create database udacity")
except psycopg2.Error as e:
print(e)
###Output
_____no_output_____
###Markdown
Close our connection to the default database, reconnect to the Udacity database, and get a new cursor.
###Code
try:
conn.close()
except psycopg2.Error as e:
print(e)
try:
conn = psycopg2.connect("host=127.0.0.1 dbname=studentdb user=student password=student")
except psycopg2.Error as e:
print("Error: Could not make connection to the Postgres database")
print(e)
try:
cur = conn.cursor()
except psycopg2.Error as e:
print("Error: Could not get curser to the Database")
print(e)
conn.set_session(autocommit=True)
###Output
_____no_output_____
###Markdown
We will create a Music Library of albums. Each album has a lot of information we could add to the music library table. We will start with album name, artist name, year. `Table Name: music_librarycolumn 1: Album Namecolumn 2: Artist Namecolumn 3: Year ` Translate this information into a Create Table Statement. Review this document on PostgreSQL datatypes: https://www.postgresql.org/docs/9.5/datatype.html
###Code
try:
cur.execute("CREATE TABLE IF NOT EXISTS music_library (album_name varchar, artist_name varchar, year int);")
except psycopg2.Error as e:
print("Error: Issue creating table")
print (e)
###Output
_____no_output_____
###Markdown
No error was found, but lets check to ensure our table was created. `select count(*)` which should return 0 as no rows have been inserted in the table.
###Code
try:
cur.execute("select count(*) from music_library")
except psycopg2.Error as e:
print("Error: Issue creating table")
print (e)
print(cur.fetchall())
###Output
_____no_output_____
###Markdown
Insert two rows
###Code
try:
cur.execute("INSERT INTO music_library (album_name, artist_name, year) \
VALUES (%s, %s, %s)", \
("Let It Be", "The Beatles", 1970))
except psycopg2.Error as e:
print("Error: Inserting Rows")
print (e)
try:
cur.execute("INSERT INTO music_library (album_name, artist_name, year) \
VALUES (%s, %s, %s)", \
("Rubber Soul", "The Beatles", 1965))
except psycopg2.Error as e:
print("Error: Inserting Rows")
print (e)
###Output
_____no_output_____
###Markdown
Validate your data was inserted into the table. The while loop is used for printing the results. If executing queries in the Postgres shell, this would not be required. Note: If you run the insert statement code more than once, you will see duplicates of your data. PostgreSQL allows for duplicates.
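If duplicates are unwanted, one option (a sketch only, not part of this demo's schema) is to declare a uniqueness constraint so Postgres rejects repeated rows:

```python
# Sketch: a UNIQUE constraint makes Postgres raise an error on duplicate rows instead of storing them.
try:
    cur.execute("CREATE TABLE IF NOT EXISTS music_library_unique \
                (album_name varchar, artist_name varchar, year int, \
                UNIQUE (album_name, artist_name, year));")
except psycopg2.Error as e:
    print(e)
```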
###Code
try:
cur.execute("SELECT * FROM music_library;")
except psycopg2.Error as e:
print("Error: select *")
print (e)
row = cur.fetchone()
while row:
print(row)
row = cur.fetchone()
###Output
_____no_output_____
###Markdown
Drop the table to avoid duplicates and clean up
###Code
try:
cur.execute("DROP table music_library")
except psycopg2.Error as e:
print("Error: Dropping table")
print (e)
###Output
_____no_output_____
###Markdown
Close the cursor and connection.
###Code
cur.close()
conn.close()
###Output
_____no_output_____ |
packages/syft/examples/hyperledger-aries/recipes/blank_template.ipynb | ###Markdown
ACA-Py & ACC-Py Basic Template Copy this template into the root folder of your notebook workspace to get started Imports
###Code
from aries_cloudcontroller import AriesAgentController
import os
from termcolor import colored
###Output
_____no_output_____
###Markdown
Initialise the Agent Controller
###Code
api_key = os.getenv("ACAPY_ADMIN_API_KEY")
admin_url = os.getenv("ADMIN_URL")
print(
f"Initialising a controller with admin api at {admin_url} and an api key of {api_key}"
)
agent_controller = AriesAgentController(admin_url, api_key)
###Output
_____no_output_____
###Markdown
Start a Webhook Server
###Code
webhook_port = os.getenv("WEBHOOK_PORT")
webhook_host = "0.0.0.0"
agent_controller.init_webhook_server(webhook_host, webhook_port)
await agent_controller.listen_webhooks()
print(f"Listening for webhooks from agent at http://{webhook_host}:{webhook_port}")
###Output
_____no_output_____
###Markdown
Register Agent Event ListenersYou can see some examples within the webhook_listeners recipe. Copy any relevant cells across and customise as needed. Note you do not need to register listeners but it is recommended.
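A minimal sketch of one listener, following the topic-and-handler dict pattern used in the other recipes (the topic name, payload shape, and handler signature here are assumptions to adapt from the webhook_listeners recipe):

```python
# Hypothetical example listener - adapt the topic and payload handling to your agent's webhooks.
def connections_handler(payload):
    print("Connection webhook received:", payload)

connection_listener = {"topic": "connections", "handler": connections_handler}
# It would be appended to the `listeners` list defined in the next cell, e.g.:
# listeners.append(connection_listener)
```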
###Code
listeners = []
## YOUR LISTENERS HERE
agent_controller.register_listeners(listeners)
###Output
_____no_output_____
###Markdown
Write Your Business LogicDevelop your business logic however you like. Be sure to check the other recipes for easy examples to get you started. These include: Credential Issuance (see issuer_template)* Issuer Setup* Request Credential* Issue Credential* Receive Credential Credential Presentation (see verifier_template)* Request Presentation* Offer Presentation* Send Presentation* Verify Presentation
###Code
## Write your logic
###Output
_____no_output_____
###Markdown
Terminate ControllerWhenever you have finished with this notebook, be sure to terminate the controller. This is especially important if your business logic runs across multiple notebooks.
###Code
await agent_controller.terminate()
###Output
_____no_output_____ |
Find_Lane_Line-V5.ipynb | ###Markdown
Advanced Lane Finding ProjectThe goals / steps of this project are the following:* Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.* Apply a distortion correction to raw images.* Use color transforms, gradients, etc., to create a thresholded binary image.* Apply a perspective transform to rectify binary image ("birds-eye view").* Detect lane pixels and fit to find the lane boundary.* Determine the curvature of the lane and vehicle position with respect to center.* Warp the detected lane boundaries back onto the original image.* Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.--- First, I'll compute the camera calibration using chessboard images
###Code
import os
import numpy as np
import cv2
import glob
from queue import Queue
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
%matplotlib auto
# Define a class to receive the characteristics of each line detection
class Line():
def __init__(self):
# was the line detected in the last iteration?
self.detected = False
# x values of the last n fits of the line
self.recent_xfitted = []
#average x values of the fitted line over the last n iterations
self.bestx = None
#polynomial coefficients averaged over the last n iterations
self.best_fit = None
#polynomial coefficients for the most recent fit
self.current_fit = [np.array([False])]
self.current_fitm = [np.array([False])] # by meters
#radius of curvature of the line in some units
self.radius_of_curvature = None
#distance in meters of vehicle center from the line
self.line_base_pos = None
#difference in fit coefficients between last and new fits
self.diffs = np.array([0,0,0], dtype='float')
#x values for detected line pixels
self.allx = []
#y values for detected line pixels
self.ally = []
class Lane():
def __init__(self):
self.image = Queue(maxsize=5) # keep 5 frame image of Lane
self.ym_per_pix = None # y dimension, transfer pix scale to real world meter scale
self.xm_per_pix = None # x dimension, transfer pix scale to real world meter scale
self.mtx = None # camera calibration mtx
self.dist = None # camera calibration dst
self.count = 0
left_line = Line()
right_line = Line()
lane = Lane()
# Make a list of calibration images
imgfiles = glob.glob('camera_cal/calibration*.jpg')
def camera_calibration(imgfiles,display = False):
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
# x axis 9 pionts, y axis 6 points, scan from x axis, one piont by one piont
objp = np.zeros((6*9,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d points in real world space
imgpoints = [] # 2d points in image plane.
# Step through the list and search for chessboard corners
for fname in imgfiles:
img = cv2.imread(fname)
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# Find the chessboard corners
ret, corners = cv2.findChessboardCorners(gray, (9,6),None) # corners are 9 x 6 = 54 coordinates
# If found, add object points, image points
if ret == True:
objpoints.append(objp)
imgpoints.append(corners)
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints,
gray.shape[::-1], None, None)
return mtx, dist
mtx, dist = camera_calibration(imgfiles)
lane.mtx = mtx
lane.dist = dist
###Output
Using matplotlib backend: Qt4Agg
###Markdown
Apply a distortion correction to raw images.
###Code
%matplotlib auto
# undistort image with camera calibration mtx, dist
def undistort(img,mtx, dist):
undist = cv2.undistort(img, mtx, dist, None, mtx)
return undist
img = mpimg.imread('camera_cal/calibration1.jpg')
undist = undistort(img, mtx, dist)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(24, 9))
f.tight_layout()
ax1.imshow(img)
ax1.set_title('Original Image')
ax2.imshow(undist)
ax2.set_title('Undistorted Image')
plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.)
###Output
Using matplotlib backend: Qt4Agg
###Markdown
Use color transforms, gradients, etc., to create a thresholded binary image.
###Code
# define color and x-gradient filter
def image_filter(img, l_thresh=(200, 255), b_thresh=(160,255),sx_thresh=(70, 210)):
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# convert to LUV color space
luv = cv2.cvtColor(img,cv2.COLOR_RGB2Luv)
l_channel = luv[:,:,0]
u_channel = luv[:,:,1]
v_channel = luv[:,:,2]
luv_binary = np.zeros_like(l_channel)
# luv_threshold = (230,255)
u_threshold = (50,255)
v_threshold = (50,255)
l_th = (l_channel >= l_thresh[0]) & (l_channel <=l_thresh[1])
u_th = (u_channel >= u_threshold[0]) & (u_channel <=u_threshold[1])
v_th = (v_channel >= v_threshold[0]) & (v_channel <=v_threshold[1])
luv_binary[l_th & u_th & v_th] =1
# convert to Lab color space
lab = cv2.cvtColor(img,cv2.COLOR_RGB2Lab)
# lab_channel = lab[:,:,0]
# a_channel = lab[:,:,1]
b_channel = lab[:,:,2]
lab_binary = np.zeros_like(b_channel)
# lab_threshold = (0,255)
# a_threshold = (0,255)
# b_threshold = (170,255)
# lab_th = (lab_channel >= lab_threshold[0]) & (lab_channel <=lab_threshold[1])
# a_th = (a_channel >= a_threshold[0]) & (a_channel <=a_threshold[1])
b_th = (b_channel >= b_thresh[0]) & (b_channel <=b_thresh[1])
lab_binary[b_th] = 1
# Sobel x
sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0) # Take the derivative in x
abs_sobelx = np.absolute(sobelx) # Absolute x derivative to accentuate lines away from horizontal
scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx))
# Threshold x gradient
sxbinary = np.zeros_like(scaled_sobel)
sxbinary[(scaled_sobel >= sx_thresh[0]) & (scaled_sobel <= sx_thresh[1])] = 1
# combined channel
combined = np.zeros_like(sxbinary)
combined[ (luv_binary == 1) | (lab_binary ==1) | (sxbinary == 1) ] = 1
return combined
undist = mpimg.imread('output_images/straight_lines1.jpg')
# undist = mpimg.imread('output_images/output4.jpg')
combined = image_filter(undist)
# cv2.imwrite('output_images/binary_combined.jpg',filtered)
# Plot the result
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(24, 9))
f.tight_layout()
ax1.imshow(undist)
ax1.set_title('Undistorted Image', fontsize=40)
ax2.imshow(combined,cmap='gray')
# plt.savefig('output_images/binary_combined.jpg')
ax2.set_title('Binary threshold Image', fontsize=40)
plt.subplots_adjust(left=0.02, right=1, top=0.9, bottom=0.)
###Output
_____no_output_____
###Markdown
Apply ROI to binary image
###Code
def region_mask(img):
mask = np.zeros_like(img)
# inner_mask = np.zeros_like(img)
h,w = mask.shape[0],mask.shape[1]
vertices = np.array([[(0.45*w-50,h*0.65),(0.55*w+50,h*0.65),(w*0.85+150, h),
(w*0.15-150,h)]],dtype=np.int32)
# '''
inner_vertices = np.array([[(0.45*w+20,h*0.7),(0.55*w-20,h*0.7),(w*0.8-200, h),
(w*0.2+200,h)]],dtype=np.int32)
# '''
cv2.fillPoly(mask,vertices,1)
cv2.fillPoly(mask,inner_vertices,0)
masked = cv2.bitwise_and(img,mask)
return masked
masked = region_mask(combined)
f = plt.figure(figsize=(24,9))
plt.imshow(masked,cmap='gray')
# to search binary masked image to get nonzero points to match criteria
# used to select src, dst points automatically when the lane change sharply
def search_pixels(img): # request binary input image
histogram = np.sum(img[np.int(0.35 * img.shape[0]):,:], axis=0)
midpoint = np.int(histogram.shape[0]//2)
window_height = 10
# only scan y axis from h = img.shape[0] to 0.65 * h
nwindows = np.int(0.35 * img.shape[0]//window_height)
xwindow_width = 20
# Set minimum number of pixels found to recenter window
minpix = 10
# Set height of windows - based on nwindows above and image shape
window_height = np.int(img.shape[0]//nwindows)
# Identify the x and y positions of all nonzero pixels in the image
nonzero = img.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
left_start = np.min(nonzerox)
right_end = np.max(nonzerox)
# Create empty lists to receive left and right lane pixel indices
left_lane_inds = []
right_lane_inds = []
for window in range(nwindows):
# Identify window boundaries in x and y (and right and left)
win_y_low = img.shape[0] - (window+1)*window_height
win_y_high = img.shape[0] - window*window_height
### TO-DO: Find the four below boundaries of the window ###
for xwindow in range((right_end-left_start)//xwindow_width):
xwindow_low = left_start + xwindow * xwindow_width
xwindow_high = left_start + (xwindow+1) * xwindow_width
left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox >= xwindow_low) & (nonzerox < midpoint)).nonzero()[0]
right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox > midpoint) & (nonzerox <= xwindow_high)).nonzero()[0]
# Append these indices to the lists
if len(left_inds)>minpix:
left_lane_inds.append(left_inds)
if len(right_inds)>minpix:
right_lane_inds.append(right_inds)
left_lane_inds = np.concatenate(left_lane_inds)
right_lane_inds = np.concatenate(right_lane_inds)
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
return leftx,lefty,rightx,righty
# used to pipeline2 for rapidly changed lane line in challenge video
def get_src_dst(img): # request binary input image
leftx,lefty,rightx,righty = search_pixels(img)
# print('lefty: \n', lefty)
# print('righty: \n',righty)
# select left_bottom point
y_top = np.max([np.min(lefty),np.min(righty)])
# y_top = np.min([np.min(lefty),np.min(righty)])
# print('y_top: ',y_top)
y_bottom = np.min([np.max(lefty),np.max(righty)])
# y_bottom = np.max([np.max(lefty),np.max(righty)])
# print('y_bottom: ',y_bottom)
r_m, r_b = np.polyfit(righty, rightx, 1)
l_m, l_b = np.polyfit(lefty, leftx, 1)
left_upper_x = l_m * y_top + l_b
left_bottom_x = l_m * y_bottom + l_b
right_bottom_x = r_m * y_bottom + r_b
right_upper_x = r_m * y_top + r_b
src = np.int_([[left_upper_x,y_top],[left_bottom_x,y_bottom],
[right_bottom_x,y_bottom],[right_upper_x,y_top]])
dst = np.int_([[left_bottom_x,0],[left_bottom_x,masked.shape[0]],
[right_bottom_x,masked.shape[0]],[right_bottom_x,0]])
print(src)
print(dst)
return src, dst
###Output
_____no_output_____
###Markdown
Apply a perspective transform to rectify binary image ("birds-eye view").
###Code
test = mpimg.imread('output_images/straight_lines1.jpg')
# test = mpimg.imread('output_images/output4.jpg')
imshape = masked.shape
h = imshape[0]
w = imshape[1]
'''
# select 4 source points
src = [[w*0.45-5,h*0.65],[w*0.55+5,h*0.65],[w*0.85+10, h],[w*0.15+10, h]]
# lane.src = src
# select 4 destination points
dst = [[w*0.15,0],[w*0.85,0],[w*0.85, h],[w*0.15, h]]
# lane.dst = dst
'''
# perspective transform
def perspective_transform(img,src,dst):
src = np.float32(src)
dst = np.float32(dst)
# use src, dst points to compute M
M = cv2.getPerspectiveTransform(src, dst)
# Warp an image using the perspective transform, M
warped = cv2.warpPerspective(img, M, (w,h), flags=cv2.INTER_LINEAR)
return warped
def draw_lines(img,points):
pts = np.array(points, np.int32)
pts = pts.reshape((-1,1,2))
# print(pts)
cv2.polylines(img,[pts],True,(255,0,0),5)
return img
src, dst = get_src_dst(masked)
test_warped = perspective_transform(test,src,dst)
# draw source and destination points to tune points parameter,when setup completed, comment the draw function.
# draw source points on undistorted image
draw_lines(test,src)
# draw destination points on warped image
draw_lines(test_warped,dst)
# Plot the result
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(24, 9))
f.tight_layout()
ax1.imshow(test)
ax1.set_title('Undistorted Image', fontsize=40)
ax2.imshow(test_warped)
ax2.set_title('Warped Image', fontsize=40)
plt.subplots_adjust(left=0.02, right=1.0, top=0.9, bottom=0.)
###Output
[[ 562 475]
[ 249 691]
[1056 691]
[ 723 475]]
[[ 249 0]
[ 249 720]
[1056 720]
[1056 0]]
###Markdown
Detect lane pixels and fit to find the lane boundary.
###Code
binary_warped = perspective_transform(masked,src,dst)
def find_lane_pixels(binary_warped):
# Take a histogram of the bottom half of the image
histogram = np.sum(binary_warped[binary_warped.shape[0]//2:,:], axis=0)
# Create an output image to draw on and visualize the result
out_img = np.dstack((binary_warped, binary_warped, binary_warped))
# Find the peak of the left and right halves of the histogram
# These will be the starting point for the left and right lines
midpoint = np.int(histogram.shape[0]//2)
leftx_base = np.argmax(histogram[:midpoint])
print('leftx_base: ',leftx_base)
rightx_base = np.argmax(histogram[midpoint:]) + midpoint
print('rightx_base: ',rightx_base)
# HYPERPARAMETERS
# Choose the number of sliding windows
nwindows = 9
# Set the width of the windows +/- margin
margin = 100
# Set minimum number of pixels found to recenter window
minpix = 50
# Set height of windows - based on nwindows above and image shape
window_height = np.int(binary_warped.shape[0]//nwindows)
# Identify the x and y positions of all nonzero pixels in the image
nonzero = binary_warped.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
# Current positions to be updated later for each window in nwindows
leftx_current = leftx_base
rightx_current = rightx_base
lane.centroid = (leftx_base+rightx_base)/2
lane.ym_per_pix = 30/binary_warped.shape[0] # y dimension, transfer pix scale to real world meter scale
lane.xm_per_pix = 3.7/(rightx_base-leftx_base) # x dimension, transfer pix scale to real world meter scale
left_line.line_base_pos = lane.xm_per_pix * (midpoint - leftx_base)
right_line.line_base_pos = lane.xm_per_pix * (midpoint - rightx_base)
# Create empty lists to receive left and right lane pixel indices
left_lane_inds = []
right_lane_inds = []
# Step through the windows one by one
for window in range(nwindows):
# Identify window boundaries in x and y (and right and left)
win_y_low = binary_warped.shape[0] - (window+1)*window_height
win_y_high = binary_warped.shape[0] - window*window_height
### TO-DO: Find the four below boundaries of the window ###
win_xleft_low = leftx_current - margin # Update this
win_xleft_high = leftx_current + margin # Update this
win_xright_low = rightx_current - margin # Update this
win_xright_high = rightx_current + margin # Update this
# Draw the windows on the visualization image
cv2.rectangle(out_img,(win_xleft_low,win_y_low),
(win_xleft_high,win_y_high),(0,255,0), 2)
cv2.rectangle(out_img,(win_xright_low,win_y_low),
(win_xright_high,win_y_high),(0,255,0), 2)
### Identify the nonzero pixels in x and y within the window ###
good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox >= win_xleft_low) & (nonzerox < win_xleft_high)).nonzero()[0]
good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox >= win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0]
# Append these indices to the lists
left_lane_inds.append(good_left_inds)
right_lane_inds.append(good_right_inds)
### TO-DO: If you found > minpix pixels, recenter next window ###
### (`right` or `leftx_current`) on their mean position ###
if len(good_left_inds) > minpix:
leftx_current = np.int(np.mean(nonzerox[good_left_inds]))
if len(good_right_inds) > minpix:
rightx_current = np.int(np.mean(nonzerox[good_right_inds]))
# Concatenate the arrays of indices (previously was a list of lists of pixels)
try:
left_lane_inds = np.concatenate(left_lane_inds)
right_lane_inds = np.concatenate(right_lane_inds)
except ValueError:
# Avoids an error if the above is not implemented fully
print('ValueError: can not concatenate left and right lane indices')
pass
# Extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
return leftx, lefty, rightx, righty, out_img
def fit_polynomial(binary_warped):
# Find our lane pixels first
leftx, lefty, rightx, righty, out_img = find_lane_pixels(binary_warped)
print('length of leftx: ',len(leftx))
print('length of lefty: ', len(lefty))
print('length of rithtx: ', len(rightx))
print('length of righty: ', len(righty))
# if len of leftx, lefty, rightx, righty == 0
# predict left_fitx, right_fitx based on last 1 image
# transfer x,y position from pix scale to real world
leftxm = lane.xm_per_pix * leftx
leftym = lane.ym_per_pix * lefty
rightxm = lane.xm_per_pix * rightx
rightym = lane.ym_per_pix * righty
### TO-DO: Fit a second order polynomial to each using `np.polyfit` ###
left_fit = np.polyfit(lefty,leftx, 2)
left_fitm = np.polyfit(leftym,leftxm, 2)
right_fit = np.polyfit(righty,rightx,2)
right_fitm = np.polyfit(rightym,rightxm, 2)
# Generate x and y values for plotting
ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0] )
try:
left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
except TypeError:
# Avoids an error if `left` and `right_fit` are still none or incorrect
print('The function failed to fit a line!')
left_fitx = 1*ploty**2 + 1*ploty
right_fitx = 1*ploty**2 + 1*ploty
## Visualization ##
# Colors in the left and right lane regions
out_img[lefty, leftx] = [255, 0, 0]
out_img[righty, rightx] = [0, 0, 255]
pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
pts = np.hstack((pts_left, pts_right))
cv2.polylines(out_img,np.int_([pts]),True,(255,255,0),2)
# left_line.detected = True
# right_line.detected = True
left_line.current_fit = left_fit
left_line.current_fitm = left_fitm
right_line.current_fit = right_fit
right_line.current_fitm = right_fitm
left_line.recent_xfitted.append(left_fitx)
right_line.recent_xfitted.append(right_fitx)
return out_img,ploty,left_fitx,right_fitx
out_img,ploty,left_fitx,right_fitx = fit_polynomial(binary_warped)
# Plot the result
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(24, 9))
f.tight_layout()
ax1.imshow(binary_warped,cmap = 'gray')
ax1.set_title('binary_warped Image', fontsize=40)
ax2.imshow(out_img)
ax2.set_title('sliding window Image', fontsize=40)
plt.subplots_adjust(left=0.02, right=1.0, top=0.9, bottom=0.)
###Output
leftx_base: 237
rightx_base: 1052
length of leftx: 27633
length of lefty: 27633
length of rithtx: 7988
length of righty: 7988
###Markdown
Determine the curvature of the lane and vehicle position with respect to center.
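The code below evaluates the standard radius-of-curvature formula for the second-order fits $x(y) = Ay^2 + By + C$ at the bottom of the image, $y = y_{\text{eval}}$:

$$R = \frac{\bigl(1 + (2Ay_{\text{eval}} + B)^2\bigr)^{3/2}}{\lvert 2A \rvert}$$

Because the fits used here were done in metre units (via `xm_per_pix` and `ym_per_pix`), $R$ comes out in metres; the vehicle offset is taken as the mean of the two lane lines' base positions relative to the image centre.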
###Code
def measure_curvature_pixels(ploty,left_fit, right_fit):
'''
Calculates the curvature of polynomial functions in pixels.
'''
# Define y-value where we want radius of curvature
# We'll choose the maximum y-value, corresponding to the bottom of the image
y_eval = np.max(ploty)
##### TO-DO: Implement the calculation of R_curve (radius of curvature) #####
left_curverad = ((1+(2*left_fit[0]*y_eval + left_fit[1])**2)**(3/2))/np.abs(2*left_fit[0]) ## Implement the calculation of the left line here
right_curverad = ((1+(2*right_fit[0]*y_eval + right_fit[1])**2)**(3/2))/np.abs(2*right_fit[0]) ## Implement the calculation of the right line here
return left_curverad, right_curverad
def measure_curvature_meters(ploty,left_fit, right_fit):
'''
Calculates the curvature of polynomial functions in pixels.
'''
# Define y-value where we want radius of curvature
# We'll choose the maximum y-value, corresponding to the bottom of the image
y_eval = np.max(ploty)
##### TO-DO: Implement the calculation of R_curve (radius of curvature) #####
left_curverad = ((1+(2*left_fit[0]*y_eval + left_fit[1])**2)**(3/2))/np.abs(2*left_fit[0]) ## Implement the calculation of the left line here
right_curverad = ((1+(2*right_fit[0]*y_eval + right_fit[1])**2)**(3/2))/np.abs(2*right_fit[0]) ## Implement the calculation of the right line here
left_line.radius_of_curvature = left_curverad
right_line.radius_of_curvature = right_curverad
average_curverad = (left_curverad + right_curverad)/2
return average_curverad
left_fitm = left_line.current_fitm
right_fitm = right_line.current_fitm
ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0] )
plotym = lane.ym_per_pix * ploty
average_curverad = measure_curvature_meters(plotym,left_fitm, right_fitm)
offset = (left_line.line_base_pos + right_line.line_base_pos)/2
print('average_curverad: ',average_curverad)
print('vehicle offset: ', offset)
# left_curverad, right_curverad = measure_curvature_pixels(ploty,left_fit, right_fit)
# print(left_curverad, right_curverad)
###Output
average_curverad: 6930.885968678408
vehicle offset: -0.020429447852760685
###Markdown
Warp the detected lane boundaries back onto the original image.
###Code
# Create an image to draw the lines on
def draw_back(undist,binary_warped,src,dst,ploty,left_fitx,right_fitx):
warp_zero = np.zeros_like(binary_warped).astype(np.uint8)
color_warp = np.dstack((warp_zero, warp_zero, warp_zero))
# Recast the x and y points into usable format for cv2.fillPoly()
pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
pts = np.hstack((pts_left, pts_right))
# Draw the lane onto the warped blank image
cv2.fillPoly(color_warp, np.int_([pts]), (0,255, 0))
src = np.float32(src)
dst = np.float32(dst)
# compute inverse M transformation martix
Minv = cv2.getPerspectiveTransform(dst, src)
# Warp the blank back to original image space using inverse perspective matrix (Minv)
    newwarp = cv2.warpPerspective(color_warp, Minv, (undist.shape[1], undist.shape[0]))
# Combine the result with the original image
result = cv2.addWeighted(undist, 1, newwarp, 0.3, 0)
return result
left_fitx = left_line.recent_xfitted[-1]
right_fitx = right_line.recent_xfitted[-1]
result = draw_back(undist,binary_warped,src,dst,ploty,left_fitx,right_fitx)
# plot the result
f = plt.figure(figsize=(24,9))
plt.imshow(result)
###Output
_____no_output_____
###Markdown
search around previously polynomial fit
###Code
def fit_poly(img_shape, leftx, lefty, rightx, righty):
### TO-DO: Fit a second order polynomial to each with np.polyfit() ###
if (len(leftx)>0) & (len(lefty)>0) & (len(rightx)>0) & (len(righty)>0):
# print('refit with new found x, y')
# transfer x,y position from pix scale to real world
leftxm = lane.xm_per_pix * leftx
leftym = lane.ym_per_pix * lefty
rightxm = lane.xm_per_pix * rightx
rightym = lane.ym_per_pix * righty
left_fit = np.polyfit(lefty,leftx, 2)
right_fit = np.polyfit(righty,rightx, 2)
left_fitm = np.polyfit(leftym,leftxm, 2)
right_fitm = np.polyfit(rightym,rightxm, 2)
# update left_fitx, right_fitx,,left_fit,right_fit
left_line.current_fit = left_fit
left_line.current_fitm = left_fitm
right_line.current_fit = right_fit
right_line.current_fitm = right_fitm
left_line.line_base_pos = lane.xm_per_pix * (img_shape[1]/2 - leftx[-1])
right_line.line_base_pos = lane.xm_per_pix * (img_shape[1]/2 - rightx[-1])
else:
# print('x, y not found')
left_fit = left_line.current_fit
right_fit = right_line.current_fit
left_line.detected = False
right_line.detected = False
# Generate x and y values for plotting
ploty = np.linspace(0, img_shape[0]-1, img_shape[0])
### TO-DO: Calc both polynomials using ploty, left_fit and right_fit ###
left_fitx = left_fit[0]*(ploty**2) + left_fit[1]*ploty + left_fit[2]
right_fitx = right_fit[0]*(ploty**2) + right_fit[1]*ploty + right_fit[2]
left_line.recent_xfitted.append(left_fitx)
right_line.recent_xfitted.append(right_fitx)
return left_fitx, right_fitx, ploty,left_fit,right_fit
def search_around_poly(binary_warped):
# HYPERPARAMETER
# Choose the width of the margin around the previous polynomial to search
margin = 150
# Grab activated pixels
nonzero = binary_warped.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
### TO-DO: Set the area of search based on activated x-values ###
### within the +/- margin of our polynomial function ###
### Hint: consider the window areas for the similarly named variables ###
### in the previous quiz, but change the windows to our new search area ###
left_fit = left_line.current_fit
right_fit = right_line.current_fit
left_lane_inds = ((nonzerox > (left_fit[0]*(nonzeroy**2) + left_fit[1]*nonzeroy +
left_fit[2] - margin)) & (nonzerox < (left_fit[0]*(nonzeroy**2) +
left_fit[1]*nonzeroy + left_fit[2] + margin)))
right_lane_inds = ((nonzerox > (right_fit[0]*(nonzeroy**2) + right_fit[1]*nonzeroy +
right_fit[2] - margin)) & (nonzerox < (right_fit[0]*(nonzeroy**2) +
right_fit[1]*nonzeroy + right_fit[2] + margin)))
print('left lane points: ',len(left_lane_inds))
print('right lane points: ',len(right_lane_inds))
# Again, extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
print('leftx points: ',len(leftx))
lefty = nonzeroy[left_lane_inds]
print('lefty points: ',len(lefty))
rightx = nonzerox[right_lane_inds]
print('rightx points: ',len(rightx))
righty = nonzeroy[right_lane_inds]
print('righty points: ',len(righty))
# Fit new polynomials
left_fitx,right_fitx,ploty,left_fit,right_fit = fit_poly(binary_warped.shape, leftx, lefty, rightx, righty)
## Visualization ##
# Create an image to draw on and an image to show the selection window
out_img = np.dstack((binary_warped, binary_warped, binary_warped))*255
window_img = np.zeros_like(out_img)
# Color in left and right line pixels
out_img[nonzeroy[left_lane_inds], nonzerox[left_lane_inds]] = [255, 0, 0]
out_img[nonzeroy[right_lane_inds], nonzerox[right_lane_inds]] = [0, 0, 255]
# Generate a polygon to illustrate the search window area
# And recast the x and y points into usable format for cv2.fillPoly()
left_line_window1 = np.array([np.transpose(np.vstack([left_fitx-margin, ploty]))])
left_line_window2 = np.array([np.flipud(np.transpose(np.vstack([left_fitx+margin,
ploty])))])
left_line_pts = np.hstack((left_line_window1, left_line_window2))
right_line_window1 = np.array([np.transpose(np.vstack([right_fitx-margin, ploty]))])
right_line_window2 = np.array([np.flipud(np.transpose(np.vstack([right_fitx+margin,
ploty])))])
right_line_pts = np.hstack((right_line_window1, right_line_window2))
# Draw the lane onto the warped blank image
cv2.fillPoly(window_img, np.int_([left_line_pts]), (0,255, 0))
cv2.fillPoly(window_img, np.int_([right_line_pts]), (0,255, 0))
result = cv2.addWeighted(out_img, 1, window_img, 0.3, 0)
# Plot the polynomial lines onto the image
pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
pts = np.hstack((pts_left, pts_right))
cv2.polylines(result,np.int_([pts]),True,(255,255,0),2)
## End visualization steps ##
return result,ploty,left_fitx,right_fitx
# Run image through the pipeline
# Note that in your project, you'll also want to feed in the previous fits
out_img,ploty,left_fitx,right_fitx = search_around_poly(binary_warped)
# View your output
f = plt.figure(figsize=(24,9))
plt.imshow(out_img)
def pipeline(img):
undist = undistort(img,mtx, dist)
combined = image_filter(undist)
masked = region_mask(combined)
binary_warped = perspective_transform(masked,src,dst)
if (left_line.detected == False) | (right_line.detected == False):
out_img,ploty,left_fitx,right_fitx = fit_polynomial(binary_warped)
left_line.detected = True
right_line.detected = True
else:
out_img,ploty,left_fitx,right_fitx = search_around_poly(binary_warped)
result = draw_back(undist,binary_warped,src,dst,ploty,left_fitx,right_fitx)
plotym = lane.ym_per_pix * ploty
average_curverad = measure_curvature_meters(plotym,left_line.current_fitm, right_line.current_fitm)
offset = (left_line.line_base_pos + right_line.line_base_pos)/2
shift = 'right '
if offset <0:
shift = 'left '
offset = -offset
cv2.putText(result,'Radius of Curvature: '+str(round(average_curverad)).rjust(6)+'(m)',(50,50),
cv2.FONT_HERSHEY_SIMPLEX,1,(255,255,255),thickness=2)
cv2.putText(result,'Vehicle is ' + str(round(offset,2))+'m '+shift.rjust(5)+'of center',
(50,100),cv2.FONT_HERSHEY_SIMPLEX,1,(255,255,255),thickness=2)
return result
def pipeline2(img):
undist = undistort(img,mtx, dist)
combined = image_filter(undist)
masked = region_mask(combined)
src,dst = get_src_dst(masked)
# print(src)
# print(dst)
binary_warped = perspective_transform(masked,src,dst)
if (left_line.detected == False) | (right_line.detected == False):
out_img,ploty,left_fitx,right_fitx = fit_polynomial(binary_warped)
left_line.detected = True
right_line.detected = True
else:
out_img,ploty,left_fitx,right_fitx = search_around_poly(binary_warped)
result = draw_back(undist,binary_warped,src,dst,ploty,left_fitx,right_fitx)
plotym = lane.ym_per_pix * ploty
average_curverad = measure_curvature_meters(plotym,left_line.current_fitm, right_line.current_fitm)
offset = (left_line.line_base_pos + right_line.line_base_pos)/2
shift = 'right '
if offset <0:
shift = 'left '
offset = -offset
cv2.putText(result,'Radius of Curvature: '+str(round(average_curverad)).rjust(6)+'(m)',(50,50),
cv2.FONT_HERSHEY_SIMPLEX,1,(255,255,255),thickness=2)
cv2.putText(result,'Vehicle is ' + str(round(offset,2))+'m '+shift.rjust(5)+'of center',
(50,100),cv2.FONT_HERSHEY_SIMPLEX,1,(255,255,255),thickness=2)
return result
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from moviepy.editor import *
from IPython.display import HTML
project_video_output = 'output_videos/project_video.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("project_video.mp4")
output_clip = clip1.fl_image(pipeline2) #NOTE: this function expects color images!!
%time output_clip.write_videofile(project_video_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(project_video_output))
###Output
_____no_output_____
###Markdown
try challenge_video
###Code
from moviepy.editor import VideoFileClip
from moviepy.editor import *
from IPython.display import HTML
challenge_video_output = 'output_videos/challenge_video.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("challenge_video.mp4")
output_clip = clip1.fl_image(pipeline2) #NOTE: this function expects color images!!
%time output_clip.write_videofile(challenge_video_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_video_output))
###Output
_____no_output_____
###Markdown
try harder_challenger_video
###Code
harder_challenge_video_output = 'output_videos/harder_challenge_video.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("harder_challenge_video.mp4")
output_clip = clip1.fl_image(pipeline) #NOTE: this function expects color images!!
%time output_clip.write_videofile(harder_challenge_video_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(harder_challenge_video_output))
# to output specific frames from the challenge or harder_challenge video
def output_image(img):
    # save every 10th frame of the first 300 frames, then pass each frame through unchanged
    if lane.count < 300:
        if lane.count % 10 == 0:
            imgcopy = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
            cv2.imwrite('output_images/output%d.jpg'%(lane.count//10),imgcopy)
        lane.count += 1
    return img
###Output
_____no_output_____ |
February/Week5/31.ipynb | ###Markdown
Spiral Traversal of Grid[The Original Question](https://mp.weixin.qq.com/s/Xk8GOWrlgcCzYfo3JN54DA) QuestionYou are given a 2D array of integers. Print out the clockwise spiral traversal of the matrix. Example```python3grid = [ [1, 2, 3, 4, 5], [6, 7, 8, 9, 10], [11, 12, 13, 14, 15], [16, 17, 18, 19, 20]]```The clockwise spiral traversal of this array is:```python3[1, 2, 3, 4, 5, 10, 15, 20, 19, 18, 17, 16, 11, 6, 7, 8, 9, 14, 13, 12]```
###Code
def matrix_spiral_print(M):
    path = list()
    # Confirm borders of the matrix.
    up = left = 0
    right = len(M[0])
    bottom = len(M)
    # Peel off one ring of the matrix per iteration.
    while up < bottom and left < right:
        # The upper border.
        for k in range(left, right):
            path.append(M[up][k])
        up += 1
        # The right border.
        for k in range(up, bottom):
            path.append(M[k][right - 1])
        right -= 1
        # The bottom border (skip if no row is left in the remaining block).
        if up < bottom:
            for k in range(right - 1, left - 1, -1):
                path.append(M[bottom - 1][k])
            bottom -= 1
        # The left border (skip if no column is left in the remaining block).
        if left < right:
            for k in range(bottom - 1, up - 1, -1):
                path.append(M[k][left])
            left += 1
    # Output result.
    print(path)
grid = [
[1, 2, 3, 4, 5],
    [6, 7, 8, 9, 10],
[11, 12, 13, 14, 15],
[16, 17, 18, 19, 20]
]
matrix_spiral_print(grid)
###Output
[1, 2, 3, 4, 5, 10, 15, 20, 19, 18, 17, 16, 11, 6, 7, 8, 9, 14, 13, 12]
|
figures/fig1_map.ipynb | ###Markdown
NotesThe data generated by this notebook can be used to make the maps in figure 1 and is saved in two locations:* In the `main` folder under the path pointed to by `FIGURE_PATH`* In the `d3map` folder in the folder this notebook is in - this makes generating the map from that folder easier.
###Code
import sys
sys.executable
%matplotlib inline
import sys
import os
DATA_PATH = os.getenv('DATA_PATH')
CODE_PATH = os.getenv('CODE_PATH')
FIGURE_PATH = os.getenv('FIGURE_PATH')
sys.path.insert(0, os.path.join(CODE_PATH))
import pandas as pd
import numpy as np
import json
import re
import time
from src.load import EGRID, BA_DATA
from d3map_utils import resetCoords, addDataNodes, data_path, data_path2
import matplotlib.pyplot as plt
import logging.config
logging.config.fileConfig(os.path.join(CODE_PATH, "src/logging.conf"))
logger = logging.getLogger(__name__)
data_path
# Make sure directory exists
os.makedirs(data_path2, exist_ok=True)
co2 = BA_DATA(fileNm=os.path.join(DATA_PATH, "analysis/SEED_CO2_Y.csv"), variable="CO2")
so2 = BA_DATA(fileNm=os.path.join(DATA_PATH, "analysis/SEED_SO2_Y.csv"), variable="SO2")
nox = BA_DATA(fileNm=os.path.join(DATA_PATH, "analysis/SEED_NOX_Y.csv"), variable="NOX")
WECC_BAs = [
"AVA", "AZPS","BANC", "BPAT","CHPD",
"CISO", "DEAA", "DOPD", "EPE", "GCPD", "GRMA", "GWA",
"HGMA", "IID", "IPCO", "LDWP", "NEVP", "NWMT",
"PACE", "PACW", "PGE", "PNM", "PSCO",
"PSEI", "SCL", "SRP", "TEPC","TIDC",
"TPWR", "WACM", "WALC", "WAUW", "WWA"]
# Build the map graph data (json) for each pollutant and variable
resetCoords()
for poll in ["CO2", "SO2", "NOX"]:
for variable in ["E", poll]:
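        # for each pollutant, build one graph whose node/link quantities carry electricity ("E")
        # and one whose quantities carry the pollutant itself; both are colored by the
        # consumption-based intensity of that pollutant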
variable_scalings = {
"E": 1e-6,
"CO2": 1e-6,
"SO2": 1e-3,
"NOX": 1e-3,
}
polli_scalings = {
"CO2": 1000,
"SO2": 1e6,
"NOX": 1e6,
}
legCircleTitles = {
"E": "Electricity consumption (TWh)",
"CO2": "Carbon consumption (Mtons)",
"SO2": "SO2 consumption (ktons)",
"NOX": "NOX consumption (ktons)"
}
legLineTitles = {
"E": "Electricity trade (TWh)",
"CO2": "Carbon trade (Mtons)",
"SO2": "SO2 trade (ktons)",
"NOX": "NOx trade (ktons)",
}
legColorTitles = {
"CO2": "Consumption-based carbon intensity (kg/MWh)",
"SO2": "Consumption-based SO2 intensity (ton/MWh)",
"NOX": "Consumption-based NOX intensity (ton/MWh)",
}
titles = {
"E": "ELECTRICITY",
"CO2": "CARBON",
"SO2": "SULFUR DIOXIDE",
"NOX": "NITROGEN OXIDES"
}
units = {
"E": "TWh",
"CO2": "Mtons",
"SO2": "ktons",
"NOX": "ktons"
}
# Unpack options
legCircleTitle = legCircleTitles[variable]
legLineTitle = legLineTitles[variable]
variable_scaling = variable_scalings[variable]
unit = units[variable]
title = titles[variable]
legColorTitle = legColorTitles[poll]
polli_scaling = polli_scalings[poll]
fileNm_out = "graph_%s_%si.json" % (variable, poll)
poll_data = BA_DATA(fileNm=os.path.join(
DATA_PATH, "analysis/SEED_%s_Y.csv" % poll),
variable="%s" % poll)
elec = BA_DATA(fileNm=os.path.join(
DATA_PATH, "analysis/SEED_E_Y.csv"),
variable="E")
if variable == "E":
variable_data = elec
else:
variable_data = poll_data
# Prepare data
D = (variable_data.df[variable_data.get_cols(field="D")].values[0]
* variable_scaling)
X = (poll_data.df[poll_data.get_cols(field="D")].values[0]
/ elec.df[elec.get_cols(field="D")].values[0])
polli_data = dict(zip(elec.regions, polli_scaling * X))
elec_data = dict(zip(elec.regions, D))
baseGraphPath = os.path.join(data_path, "graph2.json")
with open(baseGraphPath, 'r') as fr:
graph = json.load(fr)
data = elec_data
for ba in elec.regions:
el = data.pop(ba)
if el == 0.:
el = None
data[ba] = el
graph = addDataNodes(graph, "E_D", data)
graph = addDataNodes(graph, "E_D", data, "labels")
data = polli_data
for ba in elec.regions:
el = data.pop(ba)
if np.isnan(el):
el = None
data[ba] = el
graph = addDataNodes(graph, "CO2_Di", data)
data = {}
for ba in elec.regions:
if ba in WECC_BAs:
data[ba] = "wecc"
elif ba == "ERCO":
data[ba] = "erco"
else:
data[ba] = "eic"
graph = addDataNodes(graph, "interconnect", data)
node_list = [el["shortNm"] for el in graph['nodes']]
shortNm2ind = {node["shortNm"]:i for i, node in enumerate(graph['nodes'])}
# Add data for the links
links = []
regions = variable_data.regions
for i in range(len(regions)):
for j in range(i, len(regions)):
from_ba = regions[i]
to_ba = regions[j]
if ((elec.KEY["ID"] % (from_ba, to_ba) in elec.df.columns)
& (elec.KEY["ID"] % (to_ba, from_ba) in elec.df.columns)):
elec_transfer = elec.df.loc[
:, elec.KEY["ID"] % (from_ba, to_ba)].values[0]
if elec_transfer < 0: # Have this be positive
from_ba = regions[j]
to_ba = regions[i]
elec_transfer = elec.df.loc[
:, elec.KEY["ID"] % (from_ba, to_ba)].values[0]
if variable != "E":
elec_transfer *= (polli_data[from_ba] / polli_scaling)
links += [{
"source": shortNm2ind[from_ba],
"target": shortNm2ind[to_ba],
"TI": elec_transfer*variable_scaling,
"TI_i": polli_data[from_ba]}]
graph["links"] = links
graph["meta"] = {
"colorModeAuto": False,
"fieldRadius": 'E_D',
"fieldLineWidth": 'TI',
"fieldCircle": 'CO2_Di',
"fieldLineColor": 'TI_i',
"legColorTitle": legColorTitle,
"legCircleTitle": legCircleTitle,
"legLineTitle": legLineTitle,
"unit": unit,
"title": title
}
graphPath_out = os.path.join(data_path, fileNm_out)
with open(graphPath_out, 'w') as fw:
json.dump(graph, fw)
# Save a local copy
graphPath_out = os.path.join(data_path2, fileNm_out)
with open(graphPath_out, 'w') as fw:
json.dump(graph, fw)
# Aside: Understanding CA imports of SOx and NOx
so2.df[so2.get_cols(r="CISO", field="NG")+ so2.get_cols(r="CISO", field="D")]
so2.df.loc[:, [so2.KEY["ID"] % (from_ba, "CISO") for from_ba in so2.get_trade_partners("CISO")]]
so2.df[so2.get_cols(field="D")].transpose().sort_values(by="2016-01-01 00:00:00", ascending=False)
nox.df.loc[:, [nox.KEY["ID"] % (from_ba, "CISO") for from_ba in so2.get_trade_partners("CISO")]]
nox.df[nox.get_cols(r="CISO", field="NG")+ nox.get_cols(r="CISO", field="D")]
###Output
_____no_output_____ |
Week 4/Day 1/Tugas Hari 1 Pekan 4.ipynb | ###Markdown
Question 1. Determining whether data is categorical or numeric. Explain categorical (qualitative) data and numerical (quantitative) data. Answer here: - Categorical data reflects characteristics such as language, a person's gender, or a person's hair color, or integer values that carry no mathematical meaning, such as 1 for male and 0 for female. Categorical data is further divided into two types: Nominal and Ordinal. - Nominal data has no order and takes the form of labels without any ranking. - Ordinal data, on the other hand, is categorical data arranged in an order and organized using specific ranks. - Numerical data is data described with numbers and is divided into 2 types: Interval and Ratio. - Interval data is a scale that has ordinal properties and also carries distance. Example: Clothing A 100k, Clothing C 200k, interval 100k. - Ratio data is a scale with nominal, ordinal and interval properties plus a ratio between the measured objects. Example: A 100k, C 200k, the ratio of A to C is 1/2, so C = 2x A. Download [austin_weather.csv](https://drive.google.com/uc?export=download&id=19Yc404D3U3OPPoUP8J1pXETlTmA4hOOX)
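As a minimal pandas illustration of this distinction (the toy columns below are made up for the example):
```python
import pandas as pd

toy = pd.DataFrame({
    "gender": ["male", "female", "female"],   # categorical / nominal
    "shirt_size": ["S", "M", "L"],            # categorical / ordinal
    "height_cm": [170.0, 165.5, 158.0],       # numerical (ratio scale)
})
print(toy.dtypes)                           # object vs. float64
print(toy.select_dtypes(include="number"))  # keeps only the numerical columns
```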
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df = pd.read_csv('austin_weather.csv')
df.head()
df.info()
df = df.sample(1000)
###Output
_____no_output_____
###Markdown
Question 2. Scatter plot visualization with missing values. In this task we will examine DewPointAvg (F) by looking at HumidityAvg (%), TempAvg (F), and WindAvg (MPH). Note that our data is not ready for analysis; among other things, the data type of DewPointAvg (F), HumidityAvg (%), and WindAvg (MPH) is object even though the contents are numeric. So: - Convert those columns to the float data type. Then: - You will not be able to convert them straight away because those columns contain '-' and 'na' values that should be treated as missing values and cannot be cast to float, so turn them into NaN by adding the argument na_values=['na', '-'] when calling pd.read_csv('file name.csv', na_values=...). - Now convert the data type to float using the .astype() method; read the documentation https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.astype.html After this, part of the data is ready for analysis, so: a. Build a scatter plot visualization using sample(1000) that produces a figure like the one below. Notes: - the colormap is 'coolwarm' - color each data point by the value of the TempAvgF column - size each data point by the value of the WindAvgMPH column, multiplied by 20 so the sizes are easier to see. b. Then compare the visualization above with a visualization of the data (sample=1000) after handling missing values by: - filling NaN values with the preceding value, using the .fillna() method with the method argument set to 'ffill'; read the documentation https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html **Expected output** **Without handling missing values** **With handling missing values**
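A minimal sketch of the steps described above; the exact column names are taken from the instructions and may need to be checked against df.columns on your copy of the file:
```python
import pandas as pd

# re-read the csv, treating 'na' and '-' as missing values
df = pd.read_csv('austin_weather.csv', na_values=['na', '-'])

# cast the object columns that actually hold numbers to float
cols = ['DewPointAvgF', 'HumidityAvgPercent', 'WindAvgMPH']
df[cols] = df[cols].astype(float)

# scatter plot on a sample of 1000 rows, colored by TempAvgF and sized by WindAvgMPH
sample = df.sample(1000)
sample.plot.scatter(x='HumidityAvgPercent', y='DewPointAvgF',
                    c='TempAvgF', cmap='coolwarm',
                    s=sample['WindAvgMPH'] * 20)

# same plot after forward-filling the missing values
filled = df.fillna(method='ffill').sample(1000)
filled.plot.scatter(x='HumidityAvgPercent', y='DewPointAvgF',
                    c='TempAvgF', cmap='coolwarm',
                    s=filled['WindAvgMPH'] * 20)
```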
###Code
# code here
###Output
_____no_output_____
###Markdown
Analysis: Download [price.csv](https://drive.google.com/uc?export=download&id=1LfuQmLb8AZxAvJzgWJ3u4h49EoTGqO-R)
###Code
df2 = pd.read_csv('price.csv')
df2.head()
###Output
_____no_output_____
###Markdown
--- Question 3. Data visualization with outlier handling. In this task we will handle outliers. Outliers are found in the 'House_Price' column using a boxplot from the seaborn library, as in the following figure. Using the interquartile range, we can find the outlier values and remove them. * Determine the upper and lower bounds from the interquartile range * Remove the outliers using those upper and lower bounds * Visualize the data after the outliers have been removed, as expected below: **EXPECTED OUTPUT:** Lower bound & upper bound: 871625.0 10976625.0 Boxplot visualization:
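A minimal IQR sketch of the approach described above, using the df2 and plt already loaded in this notebook (the 1.5 x IQR rule is the usual convention; check that the printed bounds match the expected 871625.0 and 10976625.0):
```python
q1 = df2['House_Price'].quantile(0.25)
q3 = df2['House_Price'].quantile(0.75)
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
print(lower, upper)

# keep only the rows inside the bounds, then redraw the boxplot
clean = df2[(df2['House_Price'] >= lower) & (df2['House_Price'] <= upper)]
plt.boxplot(clean['House_Price'], vert=False)
```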
###Code
#Code here
plt.boxplot(df2['House_Price'])
df2['House_Price'].min()
df2['House_Price'].max()
df2.drop([678], axis=0, inplace=True)
#Code here
plt.boxplot(df2['House_Price'],vert=False)
###Output
_____no_output_____ |
doc/nersc_desi_spectral_fitting.ipynb | ###Markdown
DESI spectral fitting on NERSCIn this jupyter notebook, I'll demonstrate how we can use the `provabgs` pipeline to fit DESI spectra! This notebook uses the DESI master jupyter notebook kernel (see [Jupyter on NERSC](https://desi.lbl.gov/trac/wiki/Computing/JupyterAtNERSC) for instructions).
###Code
# lets install the python package `provabgs`, a python package for generating the PRObabilistic Value-Added BGS (PROVABGS)
!pip install git+https://github.com/changhoonhahn/provabgs.git --upgrade --user
!pip install corner
import numpy as np
from provabgs import infer as Infer
from provabgs import models as Models
from provabgs import flux_calib as FluxCalib
# make pretty plots
import corner
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcParams['font.family'] = 'serif'
mpl.rcParams['axes.linewidth'] = 1.5
mpl.rcParams['axes.xmargin'] = 1
mpl.rcParams['xtick.labelsize'] = 'x-large'
mpl.rcParams['xtick.major.size'] = 5
mpl.rcParams['xtick.major.width'] = 1.5
mpl.rcParams['ytick.labelsize'] = 'x-large'
mpl.rcParams['ytick.major.size'] = 5
mpl.rcParams['ytick.major.width'] = 1.5
mpl.rcParams['legend.frameon'] = False
###Output
_____no_output_____
###Markdown
read in DESI Denali spectra. Let's read in an arbitrary BGS galaxy spectrum from the Denali reduction
###Code
# read in DESI Denali spectra from TILE 80612
from desispec.io import read_spectra
spectra = read_spectra('/global/cfs/cdirs/desi/spectro/redux/denali/tiles/cumulative/80612/20201223/coadd-0-80612-thru20201223.fits')
# pick arbitrary BGS galaxy
igal = 10
# read in redshift from redrock output
from astropy.table import Table
zbest = Table.read('/global/cfs/cdirs/desi/spectro/redux/denali/tiles/cumulative/80612/20201223/zbest-0-80612-thru20201223.fits', hdu=1)
zred = zbest['Z'][igal]
print('z=%f' % zred)
###Output
z=0.291233
###Markdown
Here's what the galaxy spectrum looks like
###Code
fig = plt.figure(figsize=(10,5))
sub = fig.add_subplot(111)
sub.plot(spectra.wave['b'], spectra.flux['b'][igal])
sub.plot(spectra.wave['r'], spectra.flux['r'][igal])
sub.plot(spectra.wave['z'], spectra.flux['z'][igal])
sub.set_xlabel('wavelength', fontsize=25)
sub.set_xlim(spectra.wave['b'].min(), spectra.wave['z'].max())
sub.set_ylabel('flux [$10^{-17} erg/s/cm^2/A$]', fontsize=25)
sub.set_ylim(-0.5, 5)
###Output
_____no_output_____
###Markdown
Bayesian spectral SED modeling The `provabgs` python package provides all the tools you need to conduct Bayesian SED modeling of DESI spectra using a full MCMC. You need to: 1. specify the prior, 2. specify the SED model, 3. specify the flux calibration model. In the following example, I'll use a default (recommended) prior, the NMF SED model (without starburst, with emulator), and no flux calibration (since we're fitting only the spectra).
###Code
# 1. specify the prior
priors = Infer.default_NMF_prior(burst=False)
# 2. specify the SED model, without starburst, with emulator
m_sed = Models.NMF(burst=False, emulator=True)
# 3. specify the flux calibration (for jointly fitting photometry and spectra, you will need a more flexible flux calibration function)
fluxcalib = FluxCalib.no_flux_factor
###Output
_____no_output_____
###Markdown
plug the prior, SED model, and flux calibration model into an `infer.desiMCMC` object
###Code
desi_mcmc = Infer.desiMCMC(
model=m_sed,
flux_calib=fluxcalib,
prior=priors
)
###Output
_____no_output_____
###Markdown
then run the MCMC using the `zeus` ensemble slice sampling method (see https://zeus-mcmc.readthedocs.io/); `emcee` is also supported
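The cell below uses `zeus`; to try `emcee` instead, the same call can presumably be pointed at it by changing the `sampler` argument. This is only a sketch with the same inputs, not a verified configuration; check the provabgs documentation for any emcee-specific options.
```python
mcmc_emcee = desi_mcmc.run(
    wave_obs=[spectra.wave['b'], spectra.wave['r'], spectra.wave['z']],
    flux_obs=[spectra.flux['b'][igal], spectra.flux['r'][igal], spectra.flux['z'][igal]],
    flux_ivar_obs=[spectra.ivar['b'][igal], spectra.ivar['r'][igal], spectra.ivar['z'][igal]],
    zred=zred,
    sampler='emcee',
    nwalkers=30,
    burnin=100,
    niter=2000,
)
```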
###Code
mcmc = desi_mcmc.run(
wave_obs=[spectra.wave['b'], spectra.wave['r'], spectra.wave['z']],
flux_obs=[spectra.flux['b'][igal], spectra.flux['r'][igal], spectra.flux['z'][igal]],
flux_ivar_obs=[spectra.ivar['b'][igal], spectra.ivar['r'][igal], spectra.ivar['z'][igal]],
zred=zred,
sampler='zeus',
nwalkers=30,
burnin=100,
opt_maxiter=10000,
niter=2000,
debug=True # if True, you'll see a bunch of diagnostic print
)
fig = plt.figure(figsize=(10,5))
sub = fig.add_subplot(111)
sub.plot(spectra.wave['b'], spectra.flux['b'][igal])
sub.plot(spectra.wave['r'], spectra.flux['r'][igal])
sub.plot(spectra.wave['z'], spectra.flux['z'][igal])
sub.plot(mcmc['wavelength_obs'], mcmc['flux_spec_model'], c='k', ls='--', label='best-fit model')
sub.legend(loc='upper right', fontsize=20)
sub.set_xlim(spectra.wave['b'].min(), spectra.wave['z'].max())
sub.set_xlabel('wavelength', fontsize=25)
sub.set_xlim(spectra.wave['b'].min(), spectra.wave['z'].max())
sub.set_ylabel('flux [$10^{-17} erg/s/cm^2/A$]', fontsize=25)
sub.set_ylim(-0.5, 5)
mcmc_chain = desi_mcmc._flatten_chain(mcmc['mcmc_chain'][500:,:,:])
print('median log(stellar mass) = %.2f' % np.median(mcmc_chain[:,0]))
sfr_mcmc = m_sed.avgSFR(mcmc_chain, zred=zred, dt=1.)
print('median average SFR over 1 Gyr = %.2f' % np.median(sfr_mcmc))
_ = corner.corner(np.array([mcmc_chain[:,0], np.log10(sfr_mcmc)]).T,
range=[(10., 11.), (0., 1.)],
labels=[r'$\log M_*$', r'$\log {\rm SFR}$'], label_kwargs={'fontsize': 25})
t_lb, sfh_mcmc = m_sed.SFH(mcmc_chain, zred=zred)
sfh_q = np.quantile(sfh_mcmc, [0.16, 0.5, 0.84], axis=0)
fig = plt.figure(figsize=(8,6))
sub = fig.add_subplot(111)
sub.fill_between(t_lb, sfh_q[0], sfh_q[2], alpha=0.5)
sub.plot(t_lb, sfh_q[1], c='C0', ls='-')
sub.set_xlabel(r'$t_{\rm lookback}$', fontsize=25)
sub.set_xlim(0., m_sed.cosmo.age(zred).value)
sub.set_ylabel('star-formation history', fontsize=25)
sub.set_yscale('log')
###Output
_____no_output_____ |
experiments_human_activity_recgonition_data.ipynb | ###Markdown
Experiments on Human Activity Recognition DatasetThe purpose of this notebook is to explore machine learning methods for human activity recognition in federated settings, using this dataset: https://www.kaggle.com/uciml/human-activity-recognition-with-smartphones?select=train.csv Data were collected from smartphone (Samsung Galaxy S II) accelerometer and gyroscope readings: 21 people wearing a smartphone on the waist each perform one of six activities: walking, walking-upstairs, walking-downstairs, sitting, standing and lying-down. The raw sensor signals (accelerometer and gyroscope) were pre-processed by applying noise filters and then sampled in fixed-width sliding windows of 2.56 sec with 50% overlap (128 readings/window). The sensor acceleration signal, which has gravitational and body motion components, was separated using a Butterworth low-pass filter into body acceleration and gravity. The gravitational force is assumed to have only low frequency components, therefore a filter with a 0.3 Hz cutoff frequency was used. From each window, a vector of features was obtained by calculating variables from the time and frequency domain. Ultimately, a 561-length feature vector of time and frequency domain variables is generated for each instance (one datapoint).We model each individual as a separate task and predict sitting versus the other activities (a binary classification problem). We formulate the problem as federated multi-task learning on a graph. Each individual has 281-409 instances, i.e. 281-409 datapoints, representing a local dataset associated with one node/vertex of the graph.
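The 561-dimensional feature vectors in train.csv are already precomputed, so nothing below is needed to run this notebook; the short sketch only illustrates the fixed-width windowing with 50% overlap described above (128 readings per window at 50 Hz are the quoted figures; the signal array itself is made up).
```python
import numpy as np

signal = np.random.randn(1000)   # placeholder for one raw accelerometer axis
window, step = 128, 64           # 2.56 s at 50 Hz, with 50% overlap
windows = np.array([signal[i:i + window]
                    for i in range(0, len(signal) - window + 1, step)])
print(windows.shape)             # (num_windows, 128); features are then computed per window
```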
###Code
data = pd.read_csv('smart_sensor/train.csv')
data.Activity.unique()
data.replace(['SITTING'],1,inplace=True)
# for the sake of simplification, we convert the task to a binary classification problem
data.replace(['WALKING_UPSTAIRS','STANDING', 'WALKING', 'LAYING','WALKING_DOWNSTAIRS'],0,inplace=True)
subject_array = data.subject.unique() # identifiers of the 21 individuals
num_tasks = len(subject_array)
print('num of tasks:',num_tasks)
###Output
num of tasks: 21
###Markdown
define models, optimizers, penalties
###Code
import torch
import abc
import torch.nn.functional as F
from abc import ABC
# The linear model which implemented by pytorch
class TorchLogModel(torch.nn.Module):
def __init__(self, n):
super(TorchLogModel, self).__init__()
self.linear = torch.nn.Linear(n, 1, bias=False)
def forward(self, x):
y_pred = torch.sigmoid(self.linear(x))
return y_pred
# The abstract optimizer model which should have model, optimizer, and criterion
class Optimizer(ABC):
def __init__(self, model, optimizer, criterion):
self.model = model
self.optimizer = optimizer
self.criterion = criterion
@abc.abstractmethod
def optimize(self, x_data, y_data, old_weight, regularizer_term):
torch_old_weight = torch.from_numpy(np.array(old_weight, dtype=np.float32))
self.model.linear.weight.data = torch_old_weight
for iterinner in range(5):
self.optimizer.zero_grad()
y_pred = self.model(x_data)
loss1 = self.criterion(y_pred, y_data)
loss2 = 1 / (2 * regularizer_term) * torch.mean((self.model.linear.weight - torch_old_weight) ** 2)
loss = loss1 + loss2
loss.backward()
self.optimizer.step()
return self.model.linear.weight.data.numpy()
# The Linear optimizer model which implemented by pytorch
class TorchLogOptimizer(Optimizer):
def __init__(self, model):
criterion = torch.nn.BCELoss(reduction='mean')
optimizer = torch.optim.Adam(model.parameters())
super(TorchLogOptimizer, self).__init__(model, optimizer, criterion)
def optimize(self, x_data, y_data, old_weight, regularizer_term):
torch_old_weight = torch.from_numpy(np.array(old_weight, dtype=np.float32))
self.model.linear.weight.data = torch_old_weight
for iterinner in range(5):
self.optimizer.zero_grad()
y_pred = self.model(x_data)
loss1 = self.criterion(y_pred, y_data)
loss2 = 1 / (2 * regularizer_term) * torch.mean((self.model.linear.weight - torch_old_weight) ** 2)
loss = loss1 + loss2
loss.backward()
self.optimizer.step()
return self.model.linear.weight.data.numpy()
# The abstract penalty function which has a function update
class Penalty(ABC):
def __init__(self, lambda_lasso, weight_vec, Sigma, n):
self.lambda_lasso = lambda_lasso
self.weight_vec = weight_vec
self.Sigma = Sigma
@abc.abstractmethod
def update(self, new_u):
pass
# The norm2 penalty function
class Norm2Pelanty(Penalty):
def __init__(self, lambda_lasso, weight_vec, Sigma, n):
super(Norm2Pelanty, self).__init__(lambda_lasso, weight_vec, Sigma, n)
self.limit = np.array(lambda_lasso * weight_vec)
def update(self, new_u):
normalized_u = np.where(np.linalg.norm(new_u, axis=1) >= self.limit)
new_u[normalized_u] = (new_u[normalized_u].T * self.limit[normalized_u] / np.linalg.norm(new_u[normalized_u], axis=1)).T
return new_u
# The MOCHA penalty function
class MOCHAPelanty(Penalty):
def __init__(self, lambda_lasso, weight_vec, Sigma, n):
super(MOCHAPelanty, self).__init__(lambda_lasso, weight_vec, Sigma, n)
self.normalize_factor = 1 + np.dot(2 * self.Sigma, 1/(self.lambda_lasso * self.weight_vec))
def update(self, new_u):
for i in range(new_u.shape[1]):
new_u[:, i] /= self.normalize_factor
return new_u
# The norm1 penalty function
class Norm1Pelanty(Penalty):
def __init__(self, lambda_lasso, weight_vec, Sigma, n):
super(Norm1Pelanty, self).__init__(lambda_lasso, weight_vec, Sigma, n)
self.limit = np.array([np.zeros(n) for i in range(len(weight_vec))])
for i in range(n):
self.limit[:, i] = lambda_lasso * weight_vec
def update(self, new_u):
normalized_u = np.where(abs(new_u) >= self.limit)
new_u[normalized_u] = self.limit[normalized_u] * new_u[normalized_u] / abs(new_u[normalized_u])
return new_u
from torch.autograd import Variable
from graspy.simulations import sbm
def get_B_weight_vec(num_nodes,num_edges):
'''
:param num_nodes: number of nodes
:param num_edges: number of edges
:return B: incidence matrix of the graph
:return weight_vec: a list containing the edges's weights of the graph
'''
N = num_nodes
E = num_edges
'''
N: total number of nodes
E: total number of edges
'''
# create B(adjacency matrix) and edges's weights vector(weight_vec) based on the graph G
B = np.zeros((E, N))
'''
B: incidence matrix of the graph with the shape of E*N
'''
weight_vec = np.zeros(E)
'''
weight_vec: a list containing the edges's weights of the graph with the shape of E
'''
cnt = 0
for i in range(N):
for j in range(N):
if i >= j:
continue
B[cnt, i] = 1
B[cnt, j] = -1
weight_vec[cnt] = 0.01
cnt += 1
return B, weight_vec
def total_loss(datapoints,new_w,new_B,new_weight_vec):
'''
Total loss of the graph structure learning algorithm
'''
loss=0
N = new_w.shape[0]
for i in range(N):
y = datapoints[i]['label']
model = datapoints[i]['model']
model.linear.weight.data = torch.from_numpy(np.array(new_w[i], dtype=np.float32))
# print(model.linear.weight)
# print(new_w[i])
y_pred = model(datapoints[i]['features'])
criterion = torch.nn.MSELoss(reduction='mean')
loss += criterion(y,y_pred)
loss = loss+np.dot(new_weight_vec,np.linalg.norm(new_B.dot(new_w),ord=1,axis=1))
return loss
###Output
_____no_output_____
###Markdown
Generate local datasets from the csv file. To simulate the fact that the amount of training data is usually limited, we randomly select 100 datapoints as training data for each local dataset; the next 50 datapoints are kept for validation and the remaining ones are used for testing.
###Code
from sklearn.preprocessing import MinMaxScaler
def generate_datapoints(data):
'''
Input
data: dataframe storing all datasets
Output
datapoints: a dictionary containing the attributes for each node in the graph,
which are features, label, model, and also the optimizer for each node
'''
datapoints = {}
subject_array = data.subject.unique()
scaler = MinMaxScaler()
for i,subject in enumerate(subject_array):
temp = data[data.subject==subject]
        temp = temp.sample(frac=1,replace=True).reset_index(drop=True) # bootstrap resample with replacement (note: not a pure shuffle)
features = np.array(temp.drop(['subject','Activity'],axis=1))
features = scaler.fit_transform(features)
# print(len(features))
labels=np.array(temp.Activity)
n = features.shape[1]
model = TorchLogModel(n)
'''
model : the logistic model for the node i that is implemented by pytorch
'''
optimizer = TorchLogOptimizer(model)
'''
optimizer : the optimizer model for the node i that is implemented by pytorch with BCE loss function
'''
datapoints[i] = {
'features': Variable(torch.from_numpy(features[:100,:])).to(torch.float32),
'model': model,
'label': Variable(torch.from_numpy(labels[:100])).to(torch.float32),
'optimizer': optimizer,
'features_val': Variable(torch.from_numpy(features[100:150,:])).to(torch.float32),
'label_val': Variable(torch.from_numpy(labels[100:150])).to(torch.float32),
'features_test': Variable(torch.from_numpy(features[150:,:])).to(torch.float32),
'label_test': Variable(torch.from_numpy(labels[150:])).to(torch.float32)
}
return datapoints
datapoints = generate_datapoints(data)
datapoints[2]
###Output
_____no_output_____
###Markdown
Algorithms
###Code
def learn_graph_structure(K1,K2, graph,learning_rate, lambda_lasso=1, penalty_func_name='norm1', get_loss=False):
'''
The algorithm to learn datasets relationships.
Inputs
K1: out iteration numbers
K2: inner iteration numbers
graph: graph with node attributes setted up
Outputs:
new_weight_vec: updated dual variable
Loss: iteration loss
'''
num_nodes = len(graph.nodes)
num_edges = len(graph.edges)
B, weight_vec = get_B_weight_vec(num_nodes,num_edges)
Sigma = np.diag(np.full(weight_vec.shape, 0.9 / 2))
T_matrix = np.diag(np.array((1.0 / (np.sum(abs(B), 0)))).ravel())
'''
T_matrix: the block diagonal matrix T
'''
E, N = B.shape
'''
shape of the graph
'''
m, n = graph.nodes[1]['features'].shape
'''
shape of the feature vectors of each node in the graph
'''
new_w = np.array([np.zeros(n) for i in range(N)])
new_u = np.array([np.zeros(n) for i in range(E)])
new_weight_vec = weight_vec
Loss = {}
iteration_scores = []
for j in range(K1):
new_B = np.dot(np.diag(new_weight_vec),B)
T_matrix = np.diag(np.array((1 / (np.sum(abs(B), 0)))).ravel())
T = np.array((1 / (np.sum(abs(B), 0)))).ravel()
for iterk in range(K2):
# if iterk % 100 == 0:
# print ('iter:', iterk)
prev_w = np.copy(new_w)
# line 2 algorithm 1
hat_w = new_w - np.dot(T_matrix, np.dot(new_B.T, new_u))
for i in range(N):
optimizer = graph.nodes[i]['optimizer']
new_w[i] = optimizer.optimize(graph.nodes[i]['features'],graph.nodes[i]['label'], hat_w[i],T[i])
tilde_w = 2 * new_w - prev_w
new_u = new_u + np.dot(Sigma, np.dot(new_B, tilde_w))
penalty_func = Norm1Pelanty(lambda_lasso, new_weight_vec, Sigma, n)
new_u = penalty_func.update(new_u)
new_weight_vec = new_weight_vec +learning_rate*np.linalg.norm(np.dot(B, new_w),ord=1,axis=1)
if get_loss==True:
Loss[j] = total_loss(graph.nodes,new_w,new_B,new_weight_vec)
return new_weight_vec,Loss
from sklearn.metrics import mean_squared_error
def algorithm_1(K,graph,lambda_lasso, penalty_func_name='norm1', calculate_score=False):
'''
:param K: the number of iterations
:param D: the block incidence matrix
:param graph: a graph with node attributes setted up
:param lambda_lasso: the parameter lambda
:param penalty_func_name: the name of the penalty function used in the algorithm
:return iteration_scores: the mean squared error of the predicted weight vectors in each iteration
:return new_w: the predicted weigh vectors for each node
'''
num_nodes = len(graph.nodes)
num_edges = len(graph.edges)
D = np.zeros((num_edges,num_nodes))
for i,e in enumerate(graph.edges):
D[i,e[0]]=1
D[i,e[1]]=-1
# D, _ = get_B_weight_vec(num_nodes,num_edges)
weight_vec = np.array(list(nx.get_edge_attributes(graph,'weight').values()))
Sigma = np.diag(np.full(weight_vec.shape, 0.9 / 2))
'''
Sigma: the block diagonal matrix Sigma
'''
T_matrix = np.diag(np.array(1.0 / (np.sum(abs(D), 0))).ravel())
'''
T_matrix: the block diagonal matrix T
'''
T = np.array(1.0 / (np.sum(abs(D), 0))).ravel()
# T = np.ones(num_nodes)*0.5
# T_matrix = np.eye(num_nodes)*0.5
E, N = D.shape
m, n = graph.nodes[0]['features'].shape
# define the penalty function
if penalty_func_name == 'norm1':
penalty_func = Norm1Pelanty(lambda_lasso, weight_vec, Sigma, n)
elif penalty_func_name == 'norm2':
penalty_func = Norm2Pelanty(lambda_lasso, weight_vec, Sigma, n)
elif penalty_func_name == 'mocha':
penalty_func = MOCHAPelanty(lambda_lasso, weight_vec, Sigma, n)
else:
raise Exception('Invalid penalty name')
# starting algorithm 1
new_w = np.array([np.zeros(n) for i in range(N)])
'''
new_w: the primal variable of the algorithm 1
'''
new_u = np.array([np.zeros(n) for i in range(E)])
'''
new_u: the dual variable of the algorithm 1
'''
iteration_scores = []
for iterk in range(K):
prev_w = np.copy(new_w)
hat_w = new_w - np.dot(T_matrix, np.dot(D.T, new_u))
for i in range(N):
optimizer = graph.nodes[i]['optimizer']
# datapoints[i]['optimizer']
new_w[i] = optimizer.optimize(graph.nodes[i]['features'],
graph.nodes[i]['label'],
hat_w[i],T[i])
tilde_w = 2 * new_w - prev_w
new_u = new_u + np.dot(Sigma, np.dot(D, tilde_w))
new_u = penalty_func.update(new_u)
# calculate the MSE of the predicted weight vectors
if calculate_score:
Y_pred = []
for i in range(N):
Y_pred.append(np.dot(graph.nodes[i]['features'], new_w[i]))
iteration_scores.append(mean_squared_error(true_labels.reshape(N, m), Y_pred))
# print (np.max(abs(new_w - prev_w)))
return iteration_scores, new_w
###Output
_____no_output_____
###Markdown
run experiments: learn dataset relationships via dual ascent
###Code
g = nx.complete_graph(21) # we assume a fully connected graph initially
nx.set_node_attributes(g,datapoints) # set up the graph with a dictionary datapoints generated from raw datsets
# Learn edge weights from datasets associated with nodes
new_weight_vec,_ = learn_graph_structure(5,300,g,0.01,1)
# Inverse of the learned dual variables as weights associated with graph edges
learned_weight_vec = np.exp(-new_weight_vec/1)
plt.plot(learned_weight_vec,'bo')
plt.plot(new_weight_vec,'bo')
###Output
_____no_output_____
###Markdown
Should I threshold the learned edge weights? After thresholding, some nodes will become isolated nodes, can we allow this?
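A small diagnostic sketch for this question (it assumes the complete-graph edge ordering used above for `learned_weight_vec`): count how many nodes would be left isolated at a given threshold before deciding.
```python
import networkx as nx

thr = 0.4
g_thr = nx.Graph()
g_thr.add_nodes_from(range(21))
g_thr.add_edges_from(e for e, w in zip(nx.complete_graph(21).edges, learned_weight_vec) if w >= thr)
print('isolated nodes at threshold', thr, ':', list(nx.isolates(g_thr)))
```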
###Code
thresh_weight_vec = np.exp(-new_weight_vec)
thresh_weight_vec[thresh_weight_vec<0.4]=0
def generate_similarity_matrix(weight_vector):
'''
Generate similarity matrix from the learned weight vector
'''
similarity_matrix = np.zeros((21,21))
cnt=0
for i in range(21):
similarity_matrix[i,i+1:]=weight_vector[cnt:cnt+20-i]
cnt+=20-i
return similarity_matrix+similarity_matrix.T
similarity_matrix = generate_similarity_matrix(thresh_weight_vec)
import seaborn as sns
thresh_weight_vec.shape
plt.plot(learned_weight_vec,'bo')
np.sum(similarity_matrix,axis=0)
sns.heatmap(similarity_matrix,annot=None)
###Output
_____no_output_____
###Markdown
visualize dataset relations via t-SNE
###Code
features = data.drop(['subject','Activity'],axis=1)
subject_temp = data.subject
tsne = TSNE(2)
feat_tsne = tsne.fit_transform (features)
feat_tsne = pd.DataFrame(feat_tsne)
feat_tsne["subject"] = subject_temp
print(feat_tsne.shape)
figure = plt.figure(figsize=(12,12))
for i,sub in enumerate(subject_array[[6,7,5]]):
print(i)
plt.scatter(feat_tsne[feat_tsne["subject"]==sub].iloc[:,0],
feat_tsne[feat_tsne["subject"]==sub].iloc[:,1],
label=str(i))
plt.legend()
###Output
_____no_output_____
###Markdown
obtain relations via KNN
###Code
accuracy = []
for i in subject_array[0:]:
for j in subject_array[0:]:
if i>=j:
continue
knn = KNeighborsClassifier(5)
X = data[data.subject.isin([i,j])]
# print(i,j)
knn.fit(X.iloc[:,:-2],X.iloc[:,-2])
score = knn.score(X.iloc[:,:-2],X.iloc[:,-2])
accuracy.append(score)
accus = np.array(accuracy)
learned_weight_vec = accus
###Output
_____no_output_____
###Markdown
learn a tailored predictor for each local dataset
###Code
def update_graph_weight(g,weight_vector,sparse=True,threshold=None):
if sparse:
ind = np.argwhere(weight_vector<threshold)
weigh = {}
for i,e in enumerate(g.edges(data=False)):
if i in ind:
g.remove_edge(*e)
continue
else:
ran = np.random.rand(1)
if ran>0.5:
g.remove_edge(*e)
continue
weigh[e] = {'weight':weight_vector[i]}
nx.set_edge_attributes(g,weigh)
return g
# Run the main algorithm to obtain hypothesis function weights for local datasets,
# in this case, a tailored logistic regression model for each local dataset.
def sigmoid(z):
return 1/(1 + np.exp(-z))
plt.plot(learned_weight_vec,'go')
g = update_graph_weight(g,learned_weight_vec,sparse=True,threshold=0.42)
# Calculate the averaged accuracy on test sets
for lambd in [0.01]:
print(lambd)
_, new_w = algorithm_1(3000,g,lambd)
accus = []
for i in range(21):
y_pred=sigmoid(np.dot(g.nodes[i]['features_test'],new_w[i]))
y_pred = [1 if i>=0.5 else 0 for i in y_pred]
accu = 1-np.sum(abs(y_pred-g.nodes[i]['label_test'].numpy()))/len(y_pred)
accus.append(accu)
print('accuracy on test sets:', np.mean(accus))
# Calculate the averaged accuracy on test sets
for lambd in [0.001,0.01,0.1,1,5,10]:
print(lambd)
_, new_w = algorithm_1(3000,g,lambd)
accus = []
for i in range(21):
y_pred=sigmoid(np.dot(g.nodes[i]['features_val'],new_w[i]))
y_pred = [1 if i>=0.5 else 0 for i in y_pred]
accu = 1-np.sum(abs(y_pred-g.nodes[i]['label_val'].numpy()))/len(y_pred)
accus.append(accu)
print('accuracy on val sets:', np.mean(accus))
###Output
0.001
accuracy on val sets: 0.9800000000000001
0.01
accuracy on val sets: 0.9933333333333333
0.1
accuracy on val sets: 0.9723809523809527
1
accuracy on val sets: 0.9733333333333334
5
accuracy on val sets: 0.9733333333333334
10
accuracy on val sets: 0.9733333333333334
###Markdown
locally fitted logistic regression for each node separatelyLocal logistic regression models were fitted to each local dataset separately to provide a baseline. Grid searches were executed to select the best hyper-parameters for each model using 5-fold cross-validation. The averaged accuracy is around 97%.
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
## Gridsearch for the best hyper-parameters for each local Logistic regression model
## for each model using 5-fold cross-validation
## Calculate accuracy on test sets
accus_test = []
accus_val = []
for i in range(21):
grid={"C":np.logspace(-3,3,7), "penalty":["l1","l2"],"solver":['liblinear']}
logreg=LogisticRegression()
logreg_cv=GridSearchCV(logreg,grid,cv=5)
X = torch.cat((datapoints[i]['features'],datapoints[i]['features_val']))
y = torch.cat((datapoints[i]['label'],datapoints[i]['label_val']))
best_clf =logreg_cv.fit(X,y)
accu_test = best_clf.score(g.nodes[i]['features_test'],g.nodes[i]['label_test'].squeeze())
accus_test.append(accu_test)
accu_val = best_clf.best_score_
accus_val.append(accu_val)
print('mean accuracy on test sets:',np.mean(accus_test))
print('mean accuracy on val sets:',np.mean(accus_val))
## Gridsearch for the best hyper-parameters for each local Logistic regression model
## for each model using 5-fold cross-validation
## Calculate accuracy on test sets
accus_test = []
accus_val = []
for i in range(21):
grid={"C":np.logspace(-3,3,7), "penalty":["l1","l2"],"solver":['liblinear']}
logreg=LogisticRegression()
logreg_cv=GridSearchCV(logreg,grid,cv=3)
X = torch.cat((datapoints[i]['features'],datapoints[i]['features_val']))
y = torch.cat((datapoints[i]['label'],datapoints[i]['label_val']))
best_clf =logreg_cv.fit(X,y)
accu_test = best_clf.score(g.nodes[i]['features_test'],g.nodes[i]['label_test'].squeeze())
accus_test.append(accu_test)
accu_val = best_clf.best_score_
accus_val.append(accu_val)
print('mean accuracy on test sets:',np.mean(accus_test))
print('mean accuracy on val sets:',np.mean(accus_val))
data.head()
###Output
_____no_output_____ |
lecture_13/02_fine_tuning_for_classification.ipynb | ###Markdown
Sentiment analysis with fine-tuning: we use fine-tuning to train a model that can classify the sentiment (positive or negative) of text. Installing the libraries: install the Transformers and nlp libraries.
###Code
!pip install transformers
!pip install nlp
###Output
_____no_output_____
###Markdown
Loading the model and tokenizer: load a pretrained model and the tokenizer associated with it.
###Code
from transformers import BertForSequenceClassification, BertTokenizerFast
sc_model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
sc_model.cuda()
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
###Output
_____no_output_____
###Markdown
Loading the dataset: we use the nlp library to load the IMDb dataset. The IMDb dataset is a sentiment-analysis dataset of 25,000 movie review comments, each labeled as positive or negative. https://www.imdb.com/interfaces/ The loaded IMDb data is processed with the tokenizer to put it into the required format.
###Code
from nlp import load_dataset
def tokenize(batch):
return tokenizer(batch["text"], padding=True, truncation=True)
train_data, test_data = load_dataset("imdb", split=["train", "test"])
print(train_data["label"][0], train_data["text"][0]) # a positive review
print(train_data["label"][20000], train_data["text"][20000]) # a negative review
train_data = train_data.map(tokenize, batched=True, batch_size=len(train_data))
train_data.set_format("torch", columns=["input_ids", "attention_mask", "label"])
test_data = test_data.map(tokenize, batched=True, batch_size=len(train_data))
test_data.set_format("torch", columns=["input_ids", "attention_mask", "label"])
###Output
_____no_output_____
###Markdown
Evaluation function: we use `sklearn.metrics` to define a function for evaluating the model.
###Code
from sklearn.metrics import accuracy_score
def compute_metrics(result):
labels = result.label_ids
preds = result.predictions.argmax(-1)
acc = accuracy_score(labels, preds)
return {
"accuracy": acc,
}
###Output
_____no_output_____
###Markdown
Configuring the Trainer: we use the Trainer and TrainingArguments classes to configure the Trainer that runs the training. https://huggingface.co/transformers/main_classes/trainer.html https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments
###Code
from transformers import Trainer, TrainingArguments
training_args = TrainingArguments(
output_dir = "./results",
num_train_epochs = 1,
per_device_train_batch_size = 8,
per_device_eval_batch_size = 32,
per_gpu_train_batch_size = 8,
    warmup_steps = 500, # learning rate warms up from 0 over this many steps
    weight_decay = 0.01, # weight decay rate
    # evaluate_during_training = True, # may not be needed depending on the transformers version
logging_dir = "./logs",
)
trainer = Trainer(
model = sc_model,
args = training_args,
compute_metrics = compute_metrics,
train_dataset = train_data,
eval_dataset = test_data
)
###Output
_____no_output_____
###Markdown
Training the model: train the model based on the settings above.
###Code
trainer.train()
###Output
_____no_output_____
###Markdown
Evaluating the model: evaluate the model with the Trainer's `evaluate()` method.
###Code
trainer.evaluate()
###Output
_____no_output_____
###Markdown
Displaying results with TensorBoard: use TensorBoard to display the training history stored in the logs folder.
###Code
%load_ext tensorboard
%tensorboard --logdir logs
###Output
_____no_output_____ |
Intriguing properties of neural networks.ipynb | ###Markdown
1. Requirements
###Code
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.utils
import torchvision.datasets as dsets
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
2. Set Args
###Code
weight_decay = 0
num_epochs = 10
use_cuda = True
batch_size = 100
###Output
_____no_output_____
###Markdown
3. Prepare Data
###Code
mnist_train = dsets.MNIST(root='data/',
train=True,
transform=transforms.ToTensor(),
download=True)
train_loader = torch.utils.data.DataLoader(dataset=mnist_train,
batch_size=batch_size,
shuffle=True)
mnist_test = dsets.MNIST(root='data/',
train=False,
transform=transforms.ToTensor(),
download=True)
test_loader = torch.utils.data.DataLoader(dataset=mnist_test,
batch_size=10000,
shuffle=False)
###Output
_____no_output_____
###Markdown
4. Define Model
###Code
device = torch.device("cuda" if use_cuda else "cpu")
class FC(nn.Module):
def __init__(self):
super(FC, self).__init__()
self.layer_1 = nn.Sequential(
nn.Linear(28*28, 100),
nn.ReLU()
)
self.layer_2 = nn.Sequential(
nn.Linear(100, 100),
nn.ReLU()
)
self.layer_3 = nn.Sequential(
nn.Linear(100, 10)
)
def forward(self, x):
x = x.view(-1, 28*28)
out_1 = self.layer_1(x)
out_2 = self.layer_2(out_1)
out_3 = self.layer_3(out_2)
return out_3, out_2, out_1
model = FC().to(device)
###Output
_____no_output_____
###Markdown
5. Define Loss and Optimizer
###Code
loss = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001, weight_decay=weight_decay)
###Output
_____no_output_____
###Markdown
6. Train Model
###Code
for epoch in range(num_epochs):
total_batch = len(mnist_train) // batch_size
for i, (batch_images, batch_labels) in enumerate(train_loader):
X = batch_images.to(device)
Y = batch_labels.to(device)
pre, _, _ = model(X)
cost = loss(pre, Y)
optimizer.zero_grad()
cost.backward()
optimizer.step()
if (i+1) % 100 == 0:
print('Epoch [%d/%d], lter [%d/%d], Loss: %.4f'
%(epoch+1, num_epochs, i+1, total_batch, cost.item()))
print("Learning Finished!")
###Output
Epoch [1/10], lter [100/600], Loss: 0.5275
Epoch [1/10], lter [200/600], Loss: 0.2618
Epoch [1/10], lter [300/600], Loss: 0.1694
Epoch [1/10], lter [400/600], Loss: 0.2137
Epoch [1/10], lter [500/600], Loss: 0.2742
Epoch [1/10], lter [600/600], Loss: 0.2312
Epoch [2/10], lter [100/600], Loss: 0.2740
Epoch [2/10], lter [200/600], Loss: 0.1189
Epoch [2/10], lter [300/600], Loss: 0.1204
Epoch [2/10], lter [400/600], Loss: 0.1688
Epoch [2/10], lter [500/600], Loss: 0.1166
Epoch [2/10], lter [600/600], Loss: 0.1465
Epoch [3/10], lter [100/600], Loss: 0.1128
Epoch [3/10], lter [200/600], Loss: 0.1074
Epoch [3/10], lter [300/600], Loss: 0.1271
Epoch [3/10], lter [400/600], Loss: 0.1159
Epoch [3/10], lter [500/600], Loss: 0.1479
Epoch [3/10], lter [600/600], Loss: 0.1370
Epoch [4/10], lter [100/600], Loss: 0.0626
Epoch [4/10], lter [200/600], Loss: 0.0596
Epoch [4/10], lter [300/600], Loss: 0.0882
Epoch [4/10], lter [400/600], Loss: 0.0664
Epoch [4/10], lter [500/600], Loss: 0.0475
Epoch [4/10], lter [600/600], Loss: 0.0301
Epoch [5/10], lter [100/600], Loss: 0.0289
Epoch [5/10], lter [200/600], Loss: 0.0791
Epoch [5/10], lter [300/600], Loss: 0.0393
Epoch [5/10], lter [400/600], Loss: 0.0443
Epoch [5/10], lter [500/600], Loss: 0.0456
Epoch [5/10], lter [600/600], Loss: 0.0370
Epoch [6/10], lter [100/600], Loss: 0.0144
Epoch [6/10], lter [200/600], Loss: 0.0218
Epoch [6/10], lter [300/600], Loss: 0.0804
Epoch [6/10], lter [400/600], Loss: 0.0324
Epoch [6/10], lter [500/600], Loss: 0.0569
Epoch [6/10], lter [600/600], Loss: 0.1680
Epoch [7/10], lter [100/600], Loss: 0.0142
Epoch [7/10], lter [200/600], Loss: 0.0238
Epoch [7/10], lter [300/600], Loss: 0.0183
Epoch [7/10], lter [400/600], Loss: 0.1391
Epoch [7/10], lter [500/600], Loss: 0.0201
Epoch [7/10], lter [600/600], Loss: 0.0201
Epoch [8/10], lter [100/600], Loss: 0.0097
Epoch [8/10], lter [200/600], Loss: 0.0093
Epoch [8/10], lter [300/600], Loss: 0.0219
Epoch [8/10], lter [400/600], Loss: 0.0317
Epoch [8/10], lter [500/600], Loss: 0.1303
Epoch [8/10], lter [600/600], Loss: 0.0124
Epoch [9/10], lter [100/600], Loss: 0.0234
Epoch [9/10], lter [200/600], Loss: 0.0358
Epoch [9/10], lter [300/600], Loss: 0.0133
Epoch [9/10], lter [400/600], Loss: 0.0345
Epoch [9/10], lter [500/600], Loss: 0.0470
Epoch [9/10], lter [600/600], Loss: 0.0512
Epoch [10/10], lter [100/600], Loss: 0.0100
Epoch [10/10], lter [200/600], Loss: 0.0154
Epoch [10/10], lter [300/600], Loss: 0.0087
Epoch [10/10], lter [400/600], Loss: 0.0055
Epoch [10/10], lter [500/600], Loss: 0.0107
Epoch [10/10], lter [600/600], Loss: 0.0705
Learning Finished!
###Markdown
7. Test Model
###Code
model.eval()
correct = 0
total = 0
for images, labels in test_loader:
images = images.to(device)
labels = labels.to(device)
outputs, _, _ = model(images)
_, predicted = torch.max(outputs.data, 1)
total += len(labels)
correct += (predicted == labels).sum()
print('Accuracy of test images: %f %%' % (100 * float(correct) / total))
###Output
Accuracy of test images: 97.620000 %
###Markdown
8. Units of $\phi(x)$
###Code
def imshow(img, title):
npimg = img.numpy()
fig = plt.figure(figsize = (10, 20))
plt.imshow(np.transpose(npimg,(1,2,0)))
plt.title(title)
plt.show()
###Output
_____no_output_____
###Markdown
1) Unit Vector
###Code
model.eval()
for i in range(5) :
unit_vector = torch.eye(100)[i,:]
for images, labels in test_loader:
_, phi_x, _ = model(images.to(device))
values = torch.mv(phi_x.cpu(), unit_vector)
top_idx = np.argsort(values.data.numpy())[-8:]
top_img = images[top_idx]
imshow(torchvision.utils.make_grid(top_img, normalize=True), "Direction to :" + str(i+1) + "th vector")
###Output
_____no_output_____
###Markdown
2) Random Vector
###Code
model.eval()
for i in range(5) :
random_vector = torch.rand(100)
for images, labels in test_loader:
_, phi_x, _ = model(images.to(device))
values = torch.mv(phi_x.cpu(), random_vector)
top_idx = np.argsort(values.data.numpy())[-8:]
top_img = images[top_idx]
imshow(torchvision.utils.make_grid(top_img, normalize=True), "Direction to :" + str(i+1) + "th vector")
###Output
_____no_output_____
###Markdown
9. Small perbertation using Backprop
###Code
# In the paper, a box-constrained L-BFGS is used to solve the constrained optimization problem.
# In this code, plain gradient-based optimization (Adam with backpropagation) is used instead of L-BFGS.
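# The objective minimized below is r.abs().sum() (an L1 penalty that keeps the perturbation
# small) plus the cross-entropy toward the chosen target class, while the valid pixel range
# x + r in [0, 1] is enforced with torch.clamp rather than by a constrained solver.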
def imshow(img, title):
npimg = img.numpy()
fig = plt.figure(figsize = (5, 20))
plt.imshow(np.transpose(npimg,(1,2,0)))
plt.title(title)
plt.show()
sample_img = mnist_test[0][0]
outputs, _, _ = model(sample_img.to(device))
_, predicted = torch.max(outputs.data, 1)
imshow(torchvision.utils.make_grid(sample_img, normalize=True), "Predicted label : " + str(predicted.item()))
# Attack: find a small perturbation of the digit '7' that makes the model predict another number (num)
for num in range(10) :
r = torch.rand(1, 28, 28).cuda()
r.requires_grad_()
optimizer_adv = optim.Adam([r], lr=0.001)
for i in range(3000):
X = torch.clamp(sample_img.cuda() + r, 0, 1)
Y = torch.tensor([num]).cuda()
outputs, _, _ = model(X)
_, predicted = torch.max(outputs.data, 1)
loss_adv = r.abs().sum() + loss(outputs, Y)
optimizer_adv.zero_grad()
loss_adv.backward(retain_graph=True)
optimizer_adv.step()
if predicted.item() != 7 :
        print("Attack succeeded!")
imshow(torchvision.utils.make_grid(X.data.cpu(), normalize=True), "Predicted label : " + str(predicted.item()))
break
###Output
Attack succeeded!
|
The Ultimate Notebook.ipynb | ###Markdown
In this notebook you will find many of the functions, methods, and commands needed to go through an EDA (Exploratory Data Analysis). Please feel free to give your point of view or ask me to add anything I missed.@author: MM Data exploration All the imports used below assume that you have already installed the corresponding libraries.**What is Pandas?**Pandas is a data science library that allows us to load data and play with it. * Pandas uses dataframes (df), which we can think of as tables. * We can perform functions on the rows or columns. * Pandas also has some visualization tools
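A tiny made-up example of the "dataframe as a table" idea (not needed for the rest of the notebook):
```python
import pandas as pd

toy = pd.DataFrame({"city": ["Paris", "Lyon"], "price": [1200, 800]})
toy["price"].mean()   # a function applied to a column
toy.loc[0]            # accessing a row by its index label
```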
###Code
import pandas as pd
# read in data from working directory (folder in top right)
# pandas can read other file formats as well (read_excel, read_json, ...)
df = pd.read_csv("/Users/YourFolder/YourData.csv")
# shape of your df (DataFrame)
df.shape
# returns x number of rows when head(num)
df.head() # or df.tail()
# returns an object with all of the column headers
df.columns
# basic info
df.info()
# statistics on numeric columns
df.describe()
# shows type of data (float, int, string, bool, etc.)
df.dtypes
# view all rows for one column
df.column_name
# or
df["column_name"]
# view all columns for select group of rows
df[0:10]
# filter for multiple columns (all below do the same thing )
df[["columnA", "columnB", "columnC"]]
df.loc[:,["columnA", "columnB", "columnC"]]
df.iloc[:,0:3]
# filter by rows and columns
df.loc[0:100,["columnA", "columnB", "columnC"]]
df.iloc[0:100,0:3]
# filter by column list
df[df.columns]
# filter rows using a condition on a column
df[(df["column"] < 5)]
# for numerical variables
# shows which values are null
df.isnull()
# shows which columns have null values
df.isnull().any()
# shows for each column the percentage of null values
df.isnull().sum() / df.shape[0]
# for categorical variables
# check unique values in the column columnA
df.columnA.unique()
# shows the counts
df.columnA.value_counts()
# or
len(df["columnA"].unique())
# shows the share of each value among the non-null entries
df.columnA.value_counts()/ df.columnA.notnull().sum()
# another way to have a quick data exploration in one line
# generates profile reports from a pandas DataFrame
from pandas_profiling import ProfileReport
profile = ProfileReport(df, title='Pandas Profiling Report', explorative=True)
###Output
_____no_output_____
###Markdown
Data Cleaning
###Code
# check for nulls / % of nulls
df.isnull().any()
df.isnull().sum()/ df.shape[0]
# imputing nulls with fillna()
df[["columnA", "columnB"]].fillna(value=0) # a constant value, or the column mean/median
# remove duplicates
df.drop_duplicates(inplace= True)
# drop
df.drop("columnA", axis = 1) # inplace = True
# remove columns with certain threshold of nulls
# threshold is the number of columns or rows without nulls
thresh = len(df)*.6
df.dropna(thresh = thresh, axis = 1)
df.dropna(thresh = 21, axis = 0)
# add column
df["new_column_price_per_sqfeet"] = df["price"] / df["sqfeet"]
# pass everything lower or uppercase
df.apply(lambda x: x.lower()) # upper()
# use regex .extract, strip(), replace(), split()
df.column = df.column.apply(lambda x: str(x).replace("something","").strip())
df.column.value_counts()
# find numeric column
numeric = df._get_numeric_data()
# change data type
df.column.dtype
df.column = pd.to_numeric(df.column, errors = 'coerce')
# rename columns
df.rename(index=str,columns={"url":"new_url"})
# apply function
def timex2(x):
return 2*x
df["pricex2"] = df["price"].apply(timex2)
# lambda function
df["pricex2"] = df["price"].apply(lambda x: x*2)
# check which apartments are bigger than 200 sq feet for less than 500€/month
df["bigandcheap"] = df[["price","sqfeet"]].apply(lambda x: 'yes' if x[0] < 500 and x[1] > 200 else 'no', axis = 1)
# dummy variables (create dummy variables for categorical features)
df_dummies = pd.get_dummies(df[["catcolumnA", "catcolumnB", "catcolumnC"]])
# merge df
df1 = df[["columnA", "columnB"]]
df2 = df[["columnA", "columnC"]]
df_merged = pd.merge(df1, df2, on="columnA")
# group by
df.groupby(["columnA", "columnB"]).mean()
###Output
_____no_output_____
###Markdown
Visualization tools
###Code
# histogram
df.column.hist() # equivalent to df.column.plot(kind='hist')
# bar chart
df.column.value_counts().plot(kind='bar')
# boxplot
df.boxplot("column_name")
###Output
_____no_output_____ |
hotspots/analyses.ipynb | ###Markdown
Mutational burden analysis for denv2 intra-host genetic variants data
###Code
import os
path = ""
file = "data.csv"
csv = os.path.join(path, file)
import pandas as pd
csv_data = pd.read_csv(csv).fillna(0)
csv_data.head()
# samples by clinical classification
DF = list(map(str, [ # dengue fever
160,
161,
162,
163,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
177,
178,
179,
180,
181,
182,
183,
184,
141,
142,
145,
146,
151,
154,
155,
158,
159,
]))
WS = list(map(str, [ # warning signs
185,
186,
187,
188,
189,
190,
191,
192,
193,
207,
205,
206,
138,
144,
147,
148,
153,
156,
157,
]))
SD = list(map(str, [ # severe dengue
208,
209,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
137,
139,
140,
143,
149,
]))
annots = {
"5UTR": (1, 96),
"C": (97, 438),
"prM/M": (439, 936),
"E": (937, 2421),
"NS1": (2422, 3477),
"NS2A": (3478, 4131),
"NS2B": (4132, 4521),
"NS3": (4522, 6375),
"NS4A": (6376, 6825),
"NS4B": (6826, 7569),
"NS5": (7570, 10269),
"3UTR": (10273, 10723),
}
# separate by class
df_data = csv_data[csv_data.columns[:5]].assign(df = csv_data[DF].sum(axis=1))
ws_data = csv_data[csv_data.columns[:5]].assign(ws = csv_data[WS].sum(axis=1))
sd_data = csv_data[csv_data.columns[:5]].assign(sd = csv_data[SD].sum(axis=1))
# remove lines with no mutations
df_data = df_data[df_data.df != 0.0]
ws_data = ws_data[ws_data.ws != 0.0]
sd_data = sd_data[sd_data.sd != 0.0]
import numpy as np
def get_mutations(region_len, df):
"""Get unique mutations count in each region."""
x = [] # mutation count in each region
n = [] # number of bases in each region
k = annots["3UTR"][1] - annots["5UTR"][0]
labels_str = []
labels_int = []
for region in range(0, k, region_len):
below = df[df["pos"] < region + region_len]
above = below[below["pos"] >= region]
x.append(above.shape[0])
if region + region_len > k:
n.append(k - region)
# labels_str.append("({0:05d}, {1:05d})".format(region, region + (k - region)))
labels_str.append("({}, {})".format(region + 1, region + (k - region) + 1))
labels_int.append((region + 1, region + (k - region) + 1))
else:
n.append(region_len)
# labels_str.append("({0:05d}, {1:05d})".format(region, region + region_len))
labels_str.append("({}, {})".format(region + 1, region + region_len + 1))
labels_int.append((region + 1, region + region_len + 1))
return x, n, labels_str, labels_int
# define window size
REGION_LEN = 3 * 6
muts_df, windows_df, labels_str_df, labels_int_df = get_mutations(REGION_LEN, df_data)
muts_ws, windows_ws, labels_str_ws, labels_int_ws = get_mutations(REGION_LEN, ws_data)
muts_sd, windows_sd, labels_str_sd, labels_int_sd = get_mutations(REGION_LEN, sd_data)
# get density (divide by region len)
df_dens = np.asarray(muts_df) / np.asarray(windows_df)
ws_dens = np.asarray(muts_ws) / np.asarray(windows_ws)
sd_dens = np.asarray(muts_sd) / np.asarray(windows_sd)
###Output
_____no_output_____
###Markdown
Plot heatmaps
###Code
import matplotlib.pyplot as plt
%matplotlib inline
%matplotlib widget
def plot_heatmap(df_dens, ws_dens, sd_dens, annots, labels_str, labels_int, gene):
inds = ((np.array(labels_int)[:, 0] >= annots[gene][0]).astype(int) * (np.array(labels_int)[:, 1] <= annots[gene][1] + REGION_LEN).astype(int)).astype(bool)
heat = np.array([df_dens[inds], ws_dens[inds], sd_dens[inds]])
fig, ax = plt.subplots()
fig.set_size_inches(25.5, 5.5)
tot = np.array([df_dens, ws_dens, sd_dens])
im = ax.imshow(heat, interpolation='nearest', aspect='auto', cmap='YlOrBr', vmin=np.min(tot), vmax=np.max(tot))
ax.set_yticks(range(3))
ax.set_yticklabels(['DF', 'WS', 'SD'])
ax.set_xticks(range(len(np.array(labels_str)[inds])))
ax.set_xticklabels(np.array(labels_str)[inds])
ax.set_title(f"{gene}")
plt.xticks(rotation=90)
plt.tight_layout()
plt.colorbar(im)
# plot heatmaps
for gene in annots.keys():
plot_heatmap(df_dens, ws_dens, sd_dens, annots, labels_str_df, labels_int_df, gene=gene)
###Output
_____no_output_____
###Markdown
Hotspot candidates with the binomial model
###Code
def get_bg_mutation_rates_per_gene(annots, df):
"""Calculate background mutation rates per gene."""
rates = {
"5UTR": 0,
"C": 0,
"prM/M": 0,
"E": 0,
"NS1": 0,
"NS2A": 0,
"NS2B": 0,
"NS3": 0,
"NS4A": 0,
"NS4B": 0,
"NS5": 0,
"3UTR": 0,
}
for key in rates.keys():
abv = df[df['pos'] >= annots[key][0]]
blw = abv[abv['pos'] <= annots[key][1]]
rates[key] = blw.shape[0] / (annots[key][1] - annots[key][0] + 1)
return rates
def get_bg_mutation_rate(x, n):
"""Global estimation of the mutation rate.
x : List[int]
mutation count in each region
n : List[int]
number of bases in each region
"""
x = np.asarray(x)
n = np.asarray(n)
assert x.shape[0] == n.shape[0], "# of regions must match"
return x.sum()/n.sum()
# get background mutation rates per gene
bg_df = get_bg_mutation_rates_per_gene(annots, df_data)
bg_ws = get_bg_mutation_rates_per_gene(annots, ws_data)
bg_sd = get_bg_mutation_rates_per_gene(annots, sd_data)
# global bg rates
p_df = get_bg_mutation_rate(muts_df, windows_df)
p_ws = get_bg_mutation_rate(muts_ws, windows_ws)
p_sd = get_bg_mutation_rate(muts_sd, windows_sd)
from scipy import stats
import re
def binom_test(x, n, p, bg_genes, labels_int, annots):
"""Perform a binomial test for each region."""
assert len(x) == len(n) == len(labels_int)
p_vals = []
for r in range(len(x)):
flag = False
for key in annots.keys():
if labels_int[r][0] >= annots[key][0] and labels_int[r][1] <= annots[key][1]:
p_val = stats.binom_test(x=x[r], n=n[r], p=bg_genes[key], alternative="greater")
p_vals.append(p_val)
flag = True
if flag is False:
# its a region in between two genes
# use 'default' mutation rate
p_val = stats.binom_test(x=x[r], n=n[r], p=p, alternative="greater")
p_vals.append(p_val)
return p_vals
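# Worked example with illustrative numbers (not taken from the data): the probability of
# seeing 10 or more mutations in an 18-base window when the background rate is 0.15.
# Note: scipy.stats.binom_test is deprecated in newer SciPy releases; on SciPy >= 1.7 the
# equivalent call is stats.binomtest(10, 18, 0.15, alternative="greater").pvalue.
example_p = stats.binom_test(x=10, n=18, p=0.15, alternative="greater")
print("example window p-value:", example_p)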
import statsmodels.stats.multitest as stm
from prettytable import PrettyTable
def print_table_of_significant_regions(muts, windows, labels, class_name, annots, p, bg_genes):
p_vals = stm.multipletests(np.asarray(binom_test(muts, windows, p, bg_genes, labels, annots)), method='fdr_bh')[1]
# to np array
muts = np.asarray(muts)
windows = np.asarray(windows)
labels = np.asarray(labels)
p_inds = np.arange(p_vals.shape[0])[p_vals < 0.05] # select regions
flags = {
"5UTR": False,
"C": False,
"prM/M": False,
"E": False,
"NS1": False,
"NS2A": False,
"NS2B": False,
"NS3": False,
"NS4A": False,
"NS4B": False,
"NS5": False,
"3UTR": False,
}
tab = PrettyTable()
tab.title = f"{class_name} class"
tab.field_names = ["region", "length", "mutations", "adj. p-value", "index"]
tab.vrules = 0
tab.align = "l"
for i in p_inds:
flag = False
for r in annots.keys():
if labels[i][0] > annots[r][0] and labels[i][1] < annots[r][1]:
flag = True
if not flags[r]:
tab.add_row(["----", "----", "<" + r + ">", "----", "----"])
flags[r] = True
tab.add_row([f"{labels[i][0]}-{labels[i][0]}", windows[i], muts[i], round(p_vals[i], 5), i])
print(tab)
print()
# print tables
print_table_of_significant_regions(muts_df, windows_df, labels_int_df, "DF", annots, p_df, bg_df)
print_table_of_significant_regions(muts_ws, windows_ws, labels_int_ws, "WS", annots, p_ws, bg_ws)
print_table_of_significant_regions(muts_sd, windows_sd, labels_int_sd, "SD", annots, p_sd, bg_sd)
###Output
+---------------------------------------------------------+
| DF class |
+---------------------------------------------------------+
| region length mutations adj. p-value index |
+---------------------------------------------------------+
| ---- ---- <NS1> ---- ---- |
| 2485-2485 18 9 0.04716 138 |
| ---- ---- <NS3> ---- ---- |
| 5833-5833 18 10 0.04716 324 |
| ---- ---- <3UTR> ---- ---- |
| 10603-10603 18 8 0.04716 589 |
+---------------------------------------------------------+
+----------------------------------------------------+
| WS class |
+----------------------------------------------------+
| region length mutations adj. p-value index |
+----------------------------------------------------+
+----------------------------------------------------+
+---------------------------------------------------------+
| SD class |
+---------------------------------------------------------+
| region length mutations adj. p-value index |
+---------------------------------------------------------+
| ---- ---- <prM/M> ---- ---- |
| 523-523 18 13 0.0001 29 |
| ---- ---- <3UTR> ---- ---- |
| 10387-10387 18 10 0.0001 577 |
+---------------------------------------------------------+
|
02_K_Means.ipynb | ###Markdown
###Code
# Previous-> SUPERVISED algos. Supervised algo means that you know what the label is
# LABEL was Diagnosis!
# when we don't know the label, then we can only CLASSIFY the points based on various factors,
# such as - are they kept next to each other?
# UNSUPERVISED ALGOS-> no idea what the label is, just figure out if things could be kept
# together
import pandas as pd
df = pd.DataFrame({
'x':[12,20,28,18,29,33,24,45,52,45,51,52,55,53,55,61,65,66,72,22],
'y':[39, 35, 30, 52, 55, 53, 46, 55, 59, 63,70, 66,63,58,23,14,8,19,7,24]
})
# THIS IS THE ONLY DATA THAT WE HAVE-> there is no output or label to guide us
# Hence, no xtrain,ytrain,xtest, ytest either!!!
###Output
_____no_output_____
###Markdown
1) ASSUME k number of CENTROIDS. These centroids are the no. of clusters you want to divide your data into2) RANDOMLY select k points from the given dataset or you can even select your OWN random points 3) CALCULATE the DISTANCE of EVERY point from these centroids 4) EACH point will be assigned to the closest (NEAREST) CENTROID 5) Is this the best answer? NO? [DISTANCE spread -> STD -> MIN STD for the best clustering]6) MOVE THE CENTROIDS to the MEAN of the points assigned to them7) GO TO STEP 3 again. Keep repeating till the assignments stop changing or the minimum STD is achieved. (A scikit-learn comparison is sketched at the end of the code below.)
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# tell IPython to render matplotlib figures inline, i.e. directly inside the notebook output
# without this magic, some environments open each figure in a separate window instead
# in Colab the figures are rendered by the notebook front end (HTML/JS),
# on a local machine matplotlib may use a desktop GUI backend instead
# to ensure we get the same randomness everytime we run random functions, we will fix the seed
# for randomness -> random_state
np.random.seed(42) # EVEN THO THERE IS RANDOMNESS -> SAME randomness is applied for all of us
k = 3 # ASSUMPTION
centroids = {i+1:[np.random.randint(0,80),np.random.randint(0,80) ] for i in range(k)} # (x,y)
centroids
fig = plt.figure(figsize=(5,5))
plt.scatter(df['x'], df['y'], color='k') # k means black
# let's plot our centroids overlay on this scatter plot
color_dic = {1:'r', 2:'b', 3:'g'}
for i in centroids.keys():
plt.scatter(*centroids[i], color=color_dic[i])
plt.xlim(0,80) # min and max of scale on x -axis
plt.ylim(0,80) # same as above on y axis
plt.show()
def Fit(df, centroids):
for i in centroids.keys():
# Euclidean (square-root) distance formula
# np.sqrt((x2-x1)**2 + (y2-y1)**2) -> (x2,y2) is the centroid, (x1,y1) are all my data points
df['distance_from_{}'.format(i)] = (np.sqrt((df['x']-centroids[i][0])**2 + (df['y']-centroids[i][1])**2))
# create new cols for comparison of which distance is least
# CALCULATE DISTANCE from each centroid, and CREATE a COLUMN out of it in DF
centroid_new_cols = ['distance_from_{}'.format(i) for i in centroids.keys()]
# SELECT the column with SMALLEST DISTANCE -> IDXMIN --> Index of MINIMUM VALUE [ distance_from_1:10, distance_from_2:20, distance_from_3:15] -> distance_from_1
df['closest'] = df.loc[:,centroid_new_cols].idxmin(axis=1) # axis=1 -> for each row, pick the column label with the smallest distance
# e.g. if distance_from_3 held the smallest value, 'distance_from_3' is entered as the CLOSEST value
# df['closest'] -> distance_from_3, distance_from_1 and so on
df['closest'] = df['closest'].map(lambda x: int(x.lstrip('distance_from_'))) # strip the 'distance_from_' prefix and keep only the centroid number
# converting distance_from_3 to 3
df['color'] = df['closest'].map(lambda x: color_dic[x])
return df
# please feel free to take a break -> 4:50 resuming
df_modified = Fit(df, centroids)
df_modified.head(10)
fig = plt.figure(figsize=(5,5))
plt.scatter(df_modified['x'], df_modified['y'], color=df_modified['color']) # k means black
color_dic = {1:'r', 2:'b', 3:'g'}
for i in centroids.keys():
plt.scatter(*centroids[i], color=color_dic[i]) # overlaying centroids
plt.xlim(0,80) # min and max of scale on x -axis
plt.ylim(0,80) # same as above on y axis
plt.show()
# EITHER syntax or SCIENCE
# update centroids
# prev centroids to move to new centroids
# DEEP copy of existing centroids
# https://www.geeksforgeeks.org/copy-python-deep-copy-shallow-copy/
import copy
old_centroids = copy.deepcopy(centroids)
def update_centroids(df, centroids):
for i in centroids.keys():
centroids[i][0] = np.mean(df[df['closest']==i]['x']) # UPDATING to MEAN of the cluster (new centroid); if you took the MODE instead of the mean, the same algo is called k-modes
centroids[i][1] = np.mean(df[df['closest']==i]['y'])
return centroids
new_centroids = update_centroids(df_modified, centroids)
new_centroids
df_new = Fit(df_modified, new_centroids)
fig = plt.figure(figsize=(5,5))
plt.scatter(df_new['x'], df_new['y'], color=df_new['color']) # k means black
color_dic2 = {1:'k', 2:'k', 3:'k'}
for i in new_centroids.keys():
plt.scatter(*new_centroids[i], color=color_dic2[i]) # overlaying centroids
plt.xlim(0,80) # min and max of scale on x -axis
plt.ylim(0,80) # same as above on y axis
plt.show()
# KEEP UPDATING
while True:
closest_centroids = df_new['closest'].copy(deep=True) # DEEP COPY from DataFrame
centroids = update_centroids(df_new, centroids)
df_new = Fit(df_new, centroids)
if closest_centroids.equals(df_new['closest']): # assignments did not change, so the clustering has converged
break
fig = plt.figure(figsize=(5,5))
plt.scatter(df_new['x'], df_new['y'], color=df_new['color']) # k means black
for i in new_centroids.keys():
plt.scatter(*new_centroids[i], color=color_dic2[i]) # overlaying centroids
plt.xlim(0,80) # min and max of scale on x -axis
plt.ylim(0,80) # same as above on y axis
plt.show()
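# For comparison only (not part of the lecture code): the same toy data clustered with
# scikit-learn's KMeans, assuming scikit-learn is installed. Its inertia_ attribute is the
# within-cluster sum of squared distances that steps 5-7 above try to minimise.
from sklearn.cluster import KMeans
km = KMeans(n_clusters=k, n_init=10, random_state=42)
km_labels = km.fit_predict(df[['x', 'y']])
print("sklearn centroids:\n", km.cluster_centers_)
print("sklearn inertia (within-cluster sum of squares):", km.inertia_)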
###Output
_____no_output_____ |
jupyter/Cloud Pak for Data v3.0.x/Model a Golomb ruler using DO.ipynb | ###Markdown
Golomb RulerThis tutorial includes everything you need to set up decision optimization engines, build constraint programming models.Table of contents:- [Describe the business problem](Describe-the-business-problem)* [How decision optimization (prescriptive analytics) can help](How--decision-optimization-can-help)* [Use decision optimization](Use-decision-optimization) * [Step 1: Model the Data](Step-1:-Model-the-data) * [Step 2: Set up the prescriptive model](Step-2:-Set-up-the-prescriptive-model) * [Define the decision variables](Define-the-decision-variables) * [Express the business constraints](Express-the-business-constraints) * [Express the objective](Express-the-objective) * [Solve with Decision Optimization solve service](Solve-with-Decision-Optimization-solve-service) * [Step 3: Investigate the solution and run an example analysis](Step-3:-Investigate-the-solution-and-then-run-an-example-analysis)* [Summary](Summary)**** Describe the business problem* A detailed description (from which this paragraph comes from) is available on Wikipedia at https://en.wikipedia.org/wiki/Golomb_ruler.* In mathematics, a Golomb ruler is a set of marks at integer positions along an imaginary ruler such that no two pairs of marks are the same distance apart. The number of marks on the ruler is its order, and the largest distance between two of its marks is its length. Following is an example of Golomb ruler of order 4 and length 6.This problem is not only an intellectual problem. It has a lot of practical applications: within Information Theory related to error correcting codes, the selection of radio frequencies to reduce the effects of intermodulation interference, the design of conference rooms, to maximize the number of possible configurations with a minimum of partitions: ***** How decision optimization can help* Prescriptive analytics technology recommends actions based on desired outcomes, taking into account specific scenarios, resources, and knowledge of past and current events. This insight can help your organization make better decisions and have greater control of business outcomes. * Prescriptive analytics is the next step on the path to insight-based actions. It creates value through synergy with predictive analytics, which analyzes data to predict future outcomes. * Prescriptive analytics takes that insight to the next level by suggesting the optimal way to handle that future situation. Organizations that can act fast in dynamic conditions and make superior decisions in uncertain environments gain a strong competitive advantage. + For example: + Automate complex decisions and trade-offs to better manage limited resources. + Take advantage of a future opportunity or mitigate a future risk. + Proactively update recommendations based on changing events. + Meet operational goals, increase customer loyalty, prevent threats and fraud, and optimize business processes. Modeling the problemConstraint Programming is a programming paradigm that allows to express a problem using:* the unknowns of the problem (the variables),* the constraints/laws/rules of the problem, mathematical expressions linking variables together (the constraints),* what is to be optimized (the objective function).All this information, plus some configuration parameters, is aggregated into a single object called model. The remainder of this notebook describes in details how to build and solve this problem with IBM CP Optimizer, using its DOcplex Python modeling API. Use decision optimization Step 1: Model the data
###Code
# Import Constraint Programming modelization functions
from docplex.cp.model import CpoModel
###Output
_____no_output_____
###Markdown
Define model input dataThe first thing to define is the model input data.In the case of the Golomb Ruler problem, there is only one input which is the order of the ruler, that is the number of marks:
###Code
# Define required number of marks on the ruler
ORDER = 7
###Output
_____no_output_____
###Markdown
Step 2: Set up the prescriptive model Create the model containerThe model is represented by a Python object that is filled with the different model elements (variables, constraints, objective function, etc). The first thing to do is then to create such an object:
###Code
# Create model object
mdl = CpoModel(name="GolombRuler")
###Output
_____no_output_____
###Markdown
Define the decision variables* Now, you need to define the variables of the problem. As the expected problem result is the list of mark positions, the simplest choice is to create one integer variable to represent the position of each mark on the ruler.* Each variable has a set of possible values called its domain. To reduce the search space, it is important to reduce this domain as far as possible.* In our case, we can naively estimate that the maximum distance between two adjacent marks is the order of the ruler minus one. Then the maximal position of a mark is (ORDER - 1)². Each variable domain is then limited to an interval [0..(ORDER - 1)²].* A list of integer variables can be defined using method integer_var_list(). In our case, one variable for each mark can be created as follows:
###Code
# Create array of variables corresponding to ruler marks
marks = mdl.integer_var_list(ORDER, 0, (ORDER - 1) ** 2, "M")
###Output
_____no_output_____
###Markdown
Express the business constraints* To express that all possible distances between two marks must be different, create an array that contains all these distances:
###Code
# Create an array with all distances between all marks
dist = [marks[i] - marks[j] for i in range(1, ORDER) for j in range(0, i)]
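# Sanity check (added): with ORDER marks there are ORDER * (ORDER - 1) / 2 pairwise
# distances, so for ORDER = 7 the list should hold 21 expressions.
print("number of pairwise distance expressions:", len(dist))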
###Output
_____no_output_____
###Markdown
The operator '-' is used to express the difference between variables. This might appear strange as the variables are not instantiated at that time, but the Python operator has been overloaded to construct a CP expression instead of attempting to compute the arithmetic difference. All other standard Python operators can be used to make operations between CP objects (<, >, <=, >=, ==, !=, +, -, /, *, &, |, //, **, ...). See documentation for details.To force all these distances to be different, use the special all_diff() constraint as follows:
###Code
# Force all distances to be different
mdl.add(mdl.all_diff(dist))
###Output
_____no_output_____
###Markdown
The call mdl.add(...) is necessary to express that the constraint must be added to the model. Remove symmetriesThe constraint you have expressed above is theoretically sufficient, and the model can be solved as it is.However, it does not differentiate between all possible permutations of the different mark positions that are solutions to the problem, for example, 0-1-4-6, 4-6-1-0, 6-0-1-4, etc. As there are ORDER! (factorial of ORDER) such permutations, the search space would be drastically reduced by removing them.You can do that by forcing an order between marks, for example the order of their index:
###Code
# Avoid symmetric solutions by ordering marks
for i in range(1, ORDER):
mdl.add(marks[i] > marks[i - 1])
###Output
_____no_output_____
###Markdown
You also know that first mark is at the beginning of the ruler:
###Code
# Force first mark position to zero
mdl.add(marks[0] == 0)
###Output
_____no_output_____
###Markdown
Avoid mirror solutionsEach optimal solution has a mirror, with all mark distances in the reverse order, for example, 0-1-4-6 and 0-2-5-6. The following constraint can be added to avoid this:
###Code
# Avoid mirror solution
mdl.add((marks[1] - marks[0]) < (marks[ORDER - 1] - marks[ORDER - 2]))
###Output
_____no_output_____
###Markdown
Express the objective* Finally, to get the shortest Golomb Ruler, this can be expressed by minimizing the position of the last mark.As you have ordered the marks, you can do this using:
###Code
# Minimize ruler size
mdl.add(mdl.minimize(marks[ORDER - 1]))
###Output
_____no_output_____
###Markdown
If the marks were not ordered, you could have instead used: mdl.add(mdl.minimize(mdl.max(marks))) Solve with Decision Optimization solve serviceBy default, the modeling layer looks for a local runtime, but other solving environments, such as *docloud*, are also available.Refer to the documentation for a good understanding of the various solving/generation modes.If you're using a Community Edition of CPLEX runtimes, depending on the size of the problem, the solve stage might fail and will need a paying subscription or product installation. The model can be solved by calling:
###Code
# Solve the model
print("Solving model....")
msol = mdl.solve(TimeLimit=10)
###Output
_____no_output_____
###Markdown
Step 3: Investigate the solution and then run an example analysisThe shortest way to output the solution that has been found by the solver is to call the method print_solution() as follows:
###Code
# Print solution
print("Solution: ")
msol.write()
###Output
_____no_output_____
###Markdown
This output is totally generic and simply prints the value of all model variables, the objective value, and some other solution information.A more specific output can be generated by writing more code. The following example illustrates how to access specific elements of the solution.
###Code
# Print solution
from sys import stdout
if msol:
# Print found solution
stdout.write("Solution: " + msol.get_solve_status() + "\n")
stdout.write("Position of ruler marks: ")
for v in marks:
stdout.write(" " + str(msol[v]))
stdout.write("\n")
stdout.write("Solve time: " + str(round(msol.get_solve_time(), 2)) + "s\n")
else:
# No solution found
stdout.write("No solution found. Search status: " + msol.get_solve_status() + "\n")
###Output
_____no_output_____
###Markdown
Another possibility is for example to simulate a real ruler using characters, as follows:
###Code
# Print solution as a ruler
if msol:
stdout.write("Ruler: +")
for i in range(1, ORDER):
stdout.write('-' * (msol[marks[i]] - msol[marks[i - 1]] - 1) + '+')
stdout.write("\n")
###Output
_____no_output_____ |
Kim_Lowry_DS_Unit_1_Sprint_Challenge_3_Pandas23.ipynb | ###Markdown
Data Science Unit 1 Sprint Challenge 4 Exploring Data, Testing HypothesesIn this sprint challenge you will look at a dataset of people being approved or rejected for credit.https://archive.ics.uci.edu/ml/datasets/Credit+ApprovalData Set Information: This file concerns credit card applications. All attribute names and values have been changed to meaningless symbols to protect confidentiality of the data. This dataset is interesting because there is a good mix of attributes -- continuous, nominal with small numbers of values, and nominal with larger numbers of values. There are also a few missing values.Attribute Information:- A1: b, a.- A2: continuous.- A3: continuous.- A4: u, y, l, t.- A5: g, p, gg.- A6: c, d, cc, i, j, k, m, r, q, w, x, e, aa, ff.- A7: v, h, bb, j, n, z, dd, ff, o.- A8: continuous.- A9: t, f.- A10: t, f.- A11: continuous.- A12: t, f.- A13: g, p, s.- A14: continuous.- A15: continuous.- A16: +,- (class attribute)Yes, most of that doesn't mean anything. A16 (the class attribute) is the most interesting, as it separates the 307 approved cases from the 383 rejected cases. The remaining variables have been obfuscated for privacy - a challenge you may have to deal with in your data science career.Sprint challenges are evaluated based on satisfactory completion of each part. It is suggested you work through it in order, getting each aspect reasonably working, before trying to deeply explore, iterate, or refine any given step. Once you get to the end, if you want to go back and improve things, go for it! Part 1 - Load and validate the data- Load the data as a `pandas` data frame.- Validate that it has the appropriate number of observations (you can check the raw file, and also read the dataset description from UCI).- UCI says there should be missing data - check, and if necessary change the data so pandas recognizes it as na- Make sure that the loaded features are of the types described above (continuous values should be treated as float), and correct as necessaryThis is review, but skills that you'll use at the start of any data exploration. Further, you may have to do some investigation to figure out which file to load from - that is part of the puzzle.
###Code
!pip install pandas==0.23.4
import pandas as pd
import numpy as np
credit_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/credit-screening/crx.data'
credit = pd.read_csv(credit_url, header=None) # crx.data has no header row, so this keeps all 690 observations as data
credit.rename(columns={credit.columns[0]: 'A1',
credit.columns[1]: 'A2',
credit.columns[2]: 'A3',
credit.columns[3]: 'A4',
credit.columns[4]: 'A5',
credit.columns[5]: 'A6',
credit.columns[6]: 'A7',
credit.columns[7]: 'A8',
credit.columns[8]: 'A9',
credit.columns[9]: 'A10',
credit.columns[10]: 'A11',
credit.columns[11]: 'A12',
credit.columns[12]: 'A13',
credit.columns[13]: 'A14',
credit.columns[14]: 'A15',
credit.columns[15]: 'A16'},
inplace=True)
credit.head()
credit.shape
credit.dtypes
convert_dict = {'A2': float, 'A11': float, 'A14': float, 'A15': float}
credit = credit.replace('?', np.nan)
credit = credit.astype(convert_dict)
credit.dtypes
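# Extra validation (added): the UCI description says 690 observations with some missing
# values, and the '?' placeholders replaced above should now appear as NaN.
print("rows, columns:", credit.shape)
print("missing values per column:")
print(credit.isna().sum())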
###Output
_____no_output_____
###Markdown
Part 2 - Exploring data, Testing hypothesesThe only thing we really know about this data is that A16 is the class label. Besides that, we have 6 continuous (float) features and 9 categorical features.Explore the data: you can use whatever approach (tables, utility functions, visualizations) to get an impression of the distributions and relationships of the variables. In general, your goal is to understand how the features are different when grouped by the two class labels (`+` and `-`).For the 6 continuous features, how are they different when split between the two class labels? Choose two features to run t-tests (again split by class label) - specifically, select one feature that is *extremely* different between the classes, and another feature that is notably less different (though perhaps still "statistically significantly" different). You may have to explore more than two features to do this.For the categorical features, explore by creating "cross tabs" (aka [contingency tables](https://en.wikipedia.org/wiki/Contingency_table)) between them and the class label, and apply the Chi-squared test to them. [pandas.crosstab](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.crosstab.html) can create contingency tables, and [scipy.stats.chi2_contingency](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.chi2_contingency.html) can calculate the Chi-squared statistic for them.There are 9 categorical features - as with the t-test, try to find one where the Chi-squared test returns an extreme result (rejecting the null that the data are independent), and one where it is less extreme.**NOTE** - "less extreme" just means smaller test statistic/larger p-value. Even the least extreme differences may be strongly statistically significant.Your *main* goal is the hypothesis tests, so don't spend too much time on the exploration/visualization piece. That is just a means to an end - use simple visualizations, such as boxplots or a scatter matrix (both built in to pandas), to get a feel for the overall distribution of the variables.This is challenging, so manage your time and aim for a baseline of at least running two t-tests and two Chi-squared tests before polishing. And don't forget to answer the questions in part 3, even if your results in this part aren't what you want them to be.
###Code
from scipy import stats
import matplotlib.pyplot as plt
import seaborn as sns
credit_pos = credit[credit['A16'] == '+']
credit_pos.shape
credit_pos.describe()
credit_neg = credit[credit['A16'] == '-']
credit_neg.shape
credit_neg.describe()
A15_tresult = stats.ttest_ind(credit_pos['A15'], credit_neg['A15'], nan_policy='omit')
A15_tresult
A2_tresult = stats.ttest_ind(credit_pos['A2'], credit_neg['A2'], nan_policy='omit')
A2_tresult
A8_tresult = stats.ttest_ind(credit_pos['A8'], credit_neg['A8'], nan_policy='omit')
A8_tresult
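# Context for the t statistics above (added): the raw group means for the tested features,
# split by approval class (+ / -).
for col in ['A15', 'A2', 'A8']:
    print(col, "means (+ / -):", credit_pos[col].mean(), credit_neg[col].mean())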
from scipy.stats import chi2_contingency
credit.describe(exclude=np.number)
a7_contingency = pd.crosstab(credit['A16'], credit['A7'])
a7_contingency
c, p, dof, expected = chi2_contingency(a7_contingency)
print(c, p, dof)
print(expected)
a13_contingency = pd.crosstab(credit['A16'], credit['A13'])
a13_contingency
c, p, dof, expected = chi2_contingency(a13_contingency)
print(c, p, dof)
print(expected)
###Output
9.131631022234679 0.010401393295183721 2
[[277.13207547 3.55297533 25.3149492 ]
[346.86792453 4.44702467 31.6850508 ]]
|
Data Science Resources/Jose portila - ML/05-Seaborn/05-Seaborn-Grids.ipynb | ###Markdown
______Copyright by Pierian Data Inc.For more information, visit us at www.pieriandata.com Grids Imports
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
The Data
###Code
df = pd.read_csv('StudentsPerformance.csv')
df.head()
###Output
_____no_output_____
###Markdown
catplot()
###Code
# Kind Options are: “point”, “bar”, “strip”, “swarm”, “box”, “violin”, or “boxen”
sns.catplot(x='gender',y='math score',data=df,kind='box')
sns.catplot(x='gender',y='math score',data=df,kind='box',row='lunch')
sns.catplot(x='gender',y='math score',data=df,kind='box',row='lunch',col='test preparation course')
###Output
_____no_output_____
###Markdown
PairgridGrid that pairplot is built on top of, allows for heavy customization of the pairplot seen earlier.
###Code
g = sns.PairGrid(df)
g = g.map_upper(sns.scatterplot)
g = g.map_diag(sns.kdeplot, lw=2)
g = g.map_lower(sns.kdeplot, colors="red")
g = sns.PairGrid(df, hue="gender", palette="viridis",hue_kws={"marker": ["o", "+"]})
g = g.map_upper(sns.scatterplot, linewidths=1, edgecolor="w", s=40)
g = g.map_diag(sns.distplot)
g = g.map_lower(sns.kdeplot)
g = g.add_legend();
# Safely ignore the warning, its telling you it didn't use the marker for kde plot
###Output
c:\users\marcial\anaconda3\envs\ml_master\lib\site-packages\seaborn\distributions.py:434: UserWarning: The following kwargs were not used by contour: 'marker'
cset = contour_func(xx, yy, z, n_levels, **kwargs)
###Markdown
FacetGrid
###Code
sns.FacetGrid(data=df,col='gender',row='lunch')
g = sns.FacetGrid(data=df,col='gender',row='lunch')
g = g.map(plt.scatter, "math score", "reading score", edgecolor="w")
g.add_legend()
# https://stackoverflow.com/questions/43669229/increase-space-between-rows-on-facetgrid-plot
g = sns.FacetGrid(data=df,col='gender',row='lunch')
g = g.map(plt.scatter, "math score", "reading score", edgecolor="w")
g.add_legend()
plt.subplots_adjust(hspace=0.4, wspace=1)
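# One more variant (added as an illustration): the same grid split only by gender, mapping a
# histogram of math scores instead of a scatter plot.
g = sns.FacetGrid(data=df, col='gender')
g = g.map(plt.hist, "math score", bins=15, edgecolor="w")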
###Output
_____no_output_____ |
Income_Tax_Paying_Population.ipynb | ###Markdown
Getting data about tax payers from Income Tax website of India
###Code
import urllib.request
from bs4 import BeautifulSoup
import matplotlib.pyplot as plt
import numpy
url = "https://www.incometaxindia.gov.in/Charts%20%20Tables/Why%20should%20I%20pay%20tax.htm#:~:text=Taxes%20are%20used%20by%20the,welfare%20schemes%20including%20employment%20programmes.&text=Thus%20on%20considering%20these%20various,act%20like%20a%20responsible%20citizen."
html = urllib.request.urlopen(url)
htmlParse = BeautifulSoup(html, 'html.parser')
i = 0
inTxData = []
for para in htmlParse.find_all("p"):
inTxData.append(para.get_text())
print(para.get_text())
i = i+1
###Output
'Why should I pay tax?'
The citizens of India are required to pay Income tax as well as other taxes as per law.
1. Some people raise the question 'Why should I pay tax? They argue: I have to pay for my food, for my house, for my travel, for my medical treatment, for owning a vehicle not only cost of vehicle but also vehicle tax and what not. Even on many roads, one has to pay toll tax! They also say that if we compare with countries like USA and UK, the people get social security as also medical facilities virtually without any cost. But India does not offer such facilities.
2. What does the Government do for citizens :
It is true that India does not offer social security and free medical facilities as being provided in some developed countries. But we need to ponder over the issue with a larger canvass. We need to appreciate that the Government has to discharge a number of responsibilities, which include Health care through Government hospitals (usually they offer service without any cost), Education (In Municipal and Government schools the fee is negligible). The Government also provides cooking gas at concessional rate or gives subsidy. Of course the major expenditure of Government has to be incurred on National Defence, Infrastructure Developments etc. Taxes are used by the government for carrying out various welfare schemes including employment programmes. There are Lakhs of employees in various departments and the administrative cost has to be borne by the Government. Though the judicial process involves delay, yet the Salaries, perks of Judges, Magistrates and judicial staff has also to be paid by the Government. Thus on considering these various duties of the Government, we need to appreciate that we must pay tax as per law. We have to act like a responsible citizen.
3. Why tax considered as burden and not a price we pay for civilisation :
A tax payer in general feels that taxes are a burden and it is human tendency to avoid payment of tax or at least minimising the tax liability. In earlier years the tax rates were also exorbitant. Prior to Eighties, the rate of Income-tax was as much as 97.75 per cent inclusive of surcharge. But now the scenario is fast changing. Though the tax rates have been lowered, but still our country lacks desired tax culture like developed nations. It has been said by Justice Homes of the US Supreme Court that "Taxes are the price for civilisation". It is time tax is no longer considered as burden but a price for civilization.
4. Let us join hands to develop tax culture:
a) It is an admitted fact that in India, we are lacking tax culture. Despite considerable efforts for widening the tax base, still the number of taxpayers in our country, is about
82.7 million people which is
6.25 per cent of the over 132 crore population, which is too small for our country. In contrast, in the U.S., about 45 per cent of the population pays taxes. There are many reasons for this. Part of it has to do with the fact that many Indians do not earn enough annual income to even qualify to pay income tax, but a larger factor has to do with lack of tax culture, as also India's huge rural and underground economies.
b) A taxpayer feels that the tax system in our country is ludicrously complicated, confounding, contradictory, wracked by inefficiency, incompetence and to some extent corruption. However, in contrast with the highest rate of 97.75 per cent income tax (including surcharge) in seventies, now Indians earning upto Rs. 2.5 Lakhs annually (which cover the overwhelming majority of the country) are exempt for paying any income tax. Those earning between Rs. 2.5 Lakhs and 5 Lakhs are subject to
5 per cent tax; those earning between 5 Lakhs and 10 lakhs rupees, 20 percent tax; and those above 10 lakhs, a 30 percent rate.
Further you are not required to any Income-tax if your total income doesn't
exceed Rs. 5,00,000. This is done by providing tax rebate of upto Rs. 12,500 in
case of small taxpayers earning income upto Rs. 5,00,000.
The Finance Act, 2020 has also introduced new optional tax regime for Individual and HUF wherein no tax is payable on income upto Rs. 2.5 Lakhs annually. Those between Rs. 2.5 Lakhs and 5 Lakhs are subject to 5 per cent tax; those earning between 5 Lakhs and 7.5 lakhs rupees, 10 percent tax; those earning between 7.5 Lakhs and 10 lakhs rupees, 15 percent tax; those earning between 10 Lakhs and 12.50 lakhs rupees, 20 percent tax; those earning between 12.5 Lakhs and 15 lakhs rupees, 25 percent tax; and those above 15 lakhs, a 30 percent rate. Here also tax rebate of upto Rs. 12,500 is provided in case of small taxpayers earning income upto Rs. 5,00,000.
c) For sure, potential for tax collection is much higher than what we achieve at present but that is possible if we take adequate and sustained efforts for developing tax culture and also take sincere steps for minimizing harassment of tax-payers as well as develop sense of accountability to ensure hassle –free service, just and fair dealing with the taxpayers. The government
provisional figure for tax collection was Rs
9.45 lakh crore for the financial year 2020-21
d) The department has already started to focus on non-filers and stop-filers in order to enhance the tax base. We should aim at achieving a tax regulation regimen in India which can match the best in the world. According to the Credit Suisse Global Wealth Report, India now has some
1815 ultra high net worth individuals with wealth of at least $50 million, 761 who have more than $100 million of assets
and about 2,45,000 millionaires. On the other side of the wealth spectrum, it reflects India's immense income inequality, 95 percent of the Indians have assets below $10,000.
5. The reasons for apathy of the government and the taxpayers towards the tax payment and development of tax culture are:-
a) Most people feel that tax is a burden and should be avoided.
b) Taxpayers feel that they are being treated harshly and the punitive provisions in the tax laws are applied ruthlessly against them. Hence, it is better to be away from the tax department and the number of non-filers of tax returns is increasing.
c) A proper tax culture can develop only when taxpayers and tax collectors discharge their obligations equally well.
d) Many taxpayers become defiant, demotivated and disillusioned because of wrong notions held by tax collectors about their powers, the desire to pass on their part of work to taxpayers, indifference towards them and an attitude that assessees are out to manipulate figures and evade taxes. Such notions strike at the roots of a healthy tax culture.
6. Changing behavior of taxpayers :
The present realities of taxpayers' behavior are increasing tendency for payment of lawful tax. Particularly the young businessmen's trend is to pay proper tax, which is a welcome sign. Analytical study of grievance about work culture and sense of fair play on part of the IT authorities will further help.
7. Tax payers' education :
If one learns the basic rules, he feels comfortable in making compliance, and fear about income tax department may remain no longer. In fact many a times it is a fear of unknown. It is necessary to frequently organize meaningful and well designed taxpayers' education and assistance programmes coupled with making available to the taxpayers departmental publication in Hindi, English as well as local language of that State/ region.
8. Role of professionals :
What is really needed on the part of the tax professionals also is to advice their clients on the present tax scenario which is much liberal than earlier decades. They may play a vital role in educating the taxpayers as to why they should pay right amount of tax.
9. Students are the future of the nation:
We should even inculcate among students as to how important it is to pay right amount of tax for the development of nation. They are the future of the nation.
10. Those having taxable income :
Those having taxable income should certainly declare the income, pay income tax and furnish the Income tax return within prescribed time. The default or delay in fulfilling one's obligation may result in levy of interest and penalty. Even in some cases of tax evasion and other serious lapses the authorities may launch prosecution against erring people. The department may carry out Survey. It has also an Investigation wing, which is empowered to carry out search and seizure operations. Therefore one needs to be very careful.
11. Conclusion :
Let us take a pledge to help in developing tax culture and help to create a positive public opinion. We need to shun the apathy of some taxpayers who are averse to payment of the taxes.
[As amended by Finance Act, 2021]
###Markdown
The following line has the data that we need
###Code
print(inTxData[10])
###Output
a) It is an admitted fact that in India, we are lacking tax culture. Despite considerable efforts for widening the tax base, still the number of taxpayers in our country, is about
82.7 million people which is
6.25 per cent of the over 132 crore population, which is too small for our country. In contrast, in the U.S., about 45 per cent of the population pays taxes. There are many reasons for this. Part of it has to do with the fact that many Indians do not earn enough annual income to even qualify to pay income tax, but a larger factor has to do with lack of tax culture, as also India's huge rural and underground economies.
###Markdown
Finding the total tax paying population
###Code
total_population = 1320000000.0
total_taxpayers = 82700000.0
taxpayers_ratio = total_taxpayers/total_population * 100
nontaxpayers_ratio = 100 - taxpayers_ratio
print(taxpayers_ratio,nontaxpayers_ratio)
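# Optional (added): pull the same figures out of the scraped paragraph instead of hard-coding
# them. This sketch assumes the page keeps the "82.7 million people" / "132 crore population"
# wording printed above.
import re
tax_match = re.search(r'([\d.]+)\s+million people', inTxData[10])
pop_match = re.search(r'([\d.]+)\s+crore population', inTxData[10])
if tax_match and pop_match:
    print("parsed taxpayers (millions):", tax_match.group(1),
          "| parsed population (crore):", pop_match.group(1))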
data = numpy.array([taxpayers_ratio,nontaxpayers_ratio])
labels = ["Tax Payers","Non Tax Payers"]
fig1, ax1 = plt.subplots()
ax1.pie(data, labels=labels, autopct='%.3f%%')
plt.title("Tax Payers", loc='left', fontweight = 'bold')
plt.show()
###Output
_____no_output_____ |
fusion.ipynb | ###Markdown
colab specific task* mount google drive* change working directory to git repo* Check TF version* TPU check
###Code
from google.colab import drive
drive.mount('/content/gdrive')
cd /content/gdrive/My\ Drive/CopyMove/pytfBusterNet/
!pip3 install tensorflow==1.13.1
###Output
Requirement already satisfied: tensorflow==1.13.1 in /usr/local/lib/python3.6/dist-packages (1.13.1)
Requirement already satisfied: keras-preprocessing>=1.0.5 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.13.1) (1.1.0)
Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.13.1) (0.33.6)
Requirement already satisfied: protobuf>=3.6.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.13.1) (3.7.1)
Requirement already satisfied: keras-applications>=1.0.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.13.1) (1.0.8)
Requirement already satisfied: astor>=0.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.13.1) (0.8.0)
Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.13.1) (1.12.0)
Requirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.13.1) (1.15.0)
Requirement already satisfied: gast>=0.2.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.13.1) (0.2.2)
Requirement already satisfied: tensorflow-estimator<1.14.0rc0,>=1.13.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.13.1) (1.13.0)
Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.13.1) (1.1.0)
Requirement already satisfied: absl-py>=0.1.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.13.1) (0.8.0)
Requirement already satisfied: tensorboard<1.14.0,>=1.13.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.13.1) (1.13.1)
Requirement already satisfied: numpy>=1.13.3 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.13.1) (1.16.5)
Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf>=3.6.1->tensorflow==1.13.1) (41.2.0)
Requirement already satisfied: h5py in /usr/local/lib/python3.6/dist-packages (from keras-applications>=1.0.6->tensorflow==1.13.1) (2.8.0)
Requirement already satisfied: mock>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-estimator<1.14.0rc0,>=1.13.0->tensorflow==1.13.1) (3.0.5)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.14.0,>=1.13.0->tensorflow==1.13.1) (3.1.1)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.14.0,>=1.13.0->tensorflow==1.13.1) (0.15.6)
###Markdown
TPU Check
###Code
import os
import pprint
import tensorflow as tf
if 'COLAB_TPU_ADDR' not in os.environ:
print('ERROR: Not connected to a TPU runtime; please see the first cell in this notebook for instructions!')
else:
tpu_address = 'grpc://' + os.environ['COLAB_TPU_ADDR']
print ('TPU address is', tpu_address)
with tf.Session(tpu_address) as session:
devices = session.list_devices()
print('TPU devices:')
pprint.pprint(devices)
tf.__version__
###Output
TPU address is grpc://10.101.157.130:8470
TPU devices:
[_DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:CPU:0, CPU, -1, 4761961009959105818),
_DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 17179869184, 5638059082635466872),
_DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:0, TPU, 17179869184, 18016856909551166751),
_DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:1, TPU, 17179869184, 6947801159749927165),
_DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:2, TPU, 17179869184, 17060972187671535414),
_DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:3, TPU, 17179869184, 16873963880719943249),
_DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:4, TPU, 17179869184, 1523699779303488709),
_DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:5, TPU, 17179869184, 14906596199184025070),
_DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:6, TPU, 17179869184, 18303067172786003934),
_DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:7, TPU, 17179869184, 11387980976858962598),
_DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU_SYSTEM:0, TPU_SYSTEM, 8589934592, 16995379351701383568)]
###Markdown
Fusion Model Training IMPORT
###Code
# imports for fusion_net and loading
from BusterNet.models import fusion_net
import h5py
import numpy as np
# read data
def readh5(d_path):
data=h5py.File(d_path, 'r')
data = np.array(data['data'])
return data
# load data
d_path=os.path.join(os.getcwd(),'DataSet')
Xmp=os.path.join(d_path,'Xm.h5')
Xsp=os.path.join(d_path,'Xs.h5')
Yp=os.path.join(d_path,'Y.h5')
Xm=readh5(Xmp)
Xs=readh5(Xsp)
Y=readh5(Yp)
print(Xm.shape)
print(Xs.shape)
print(Y.shape)
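# Basic sanity checks on the loaded arrays (added; the expected ranges are assumptions to
# verify, not guaranteed by the data files): branch masks should lie in [0, 1] and the
# 3-channel target Y should be (roughly) one-hot per pixel.
print("Xm range:", Xm.min(), Xm.max())
print("Xs range:", Xs.min(), Xs.max())
print("Y per-pixel channel sums (first few unique values):", np.unique(Y.sum(axis=-1))[:5])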
###Output
Using TensorFlow backend.
###Markdown
Compile Model
###Code
from tensorflow.keras.optimizers import Adam
model=fusion_net()
model.summary()
model.compile(optimizer=Adam(lr=0.01), loss=tf.keras.losses.categorical_crossentropy)
###Output
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py:435: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 256, 256, 1) 0
__________________________________________________________________________________________________
input_2 (InputLayer) (None, 256, 256, 1) 0
__________________________________________________________________________________________________
concatenate (Concatenate) (None, 256, 256, 2) 0 input_1[0][0]
input_2[0][0]
__________________________________________________________________________________________________
conv2d (Conv2D) (None, 256, 256, 3) 9 concatenate[0][0]
__________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 256, 256, 3) 57 concatenate[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 256, 256, 3) 153 concatenate[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 256, 256, 9) 0 conv2d[0][0]
conv2d_1[0][0]
conv2d_2[0][0]
__________________________________________________________________________________________________
batch_normalization_v1 (BatchNo (None, 256, 256, 9) 36 concatenate_1[0][0]
__________________________________________________________________________________________________
activation (Activation) (None, 256, 256, 9) 0 batch_normalization_v1[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, 256, 256, 3) 246 activation[0][0]
__________________________________________________________________________________________________
lambda (Lambda) (None, 256, 256, 3) 0 conv2d_3[0][0]
==================================================================================================
Total params: 501
Trainable params: 483
Non-trainable params: 18
__________________________________________________________________________________________________
###Markdown
Convert Model
###Code
# This address identifies the TPU we'll use when configuring TensorFlow.
TPU_WORKER = 'grpc://' + os.environ['COLAB_TPU_ADDR']
tf.logging.set_verbosity(tf.logging.INFO)
def convert_model_TPU(model):
return tf.contrib.tpu.keras_to_tpu_model(model,strategy=tf.contrib.tpu.TPUDistributionStrategy(tf.contrib.cluster_resolver.TPUClusterResolver(TPU_WORKER)))
model=convert_model_TPU(model)
###Output
WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
* https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
* https://github.com/tensorflow/addons
If you depend on functionality not listed there, please file an issue.
INFO:tensorflow:Querying Tensorflow master (grpc://10.101.157.130:8470) for TPU system metadata.
INFO:tensorflow:Found TPU system:
INFO:tensorflow:*** Num TPU Cores: 8
INFO:tensorflow:*** Num TPU Workers: 1
INFO:tensorflow:*** Num TPU Cores Per Worker: 8
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:CPU:0, CPU, -1, 4761961009959105818)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 17179869184, 5638059082635466872)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:0, TPU, 17179869184, 18016856909551166751)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:1, TPU, 17179869184, 6947801159749927165)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:2, TPU, 17179869184, 17060972187671535414)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:3, TPU, 17179869184, 16873963880719943249)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:4, TPU, 17179869184, 1523699779303488709)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:5, TPU, 17179869184, 14906596199184025070)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:6, TPU, 17179869184, 18303067172786003934)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:7, TPU, 17179869184, 11387980976858962598)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU_SYSTEM:0, TPU_SYSTEM, 8589934592, 16995379351701383568)
WARNING:tensorflow:tpu_model (from tensorflow.contrib.tpu.python.tpu.keras_support) is experimental and may change or be removed at any time, and without warning.
INFO:tensorflow:Cloning Adam {'lr': 0.009999999776482582, 'beta_1': 0.8999999761581421, 'beta_2': 0.9990000128746033, 'decay': 0.0, 'epsilon': 1e-07, 'amsgrad': False}
INFO:tensorflow:Cloning Adam {'lr': 0.009999999776482582, 'beta_1': 0.8999999761581421, 'beta_2': 0.9990000128746033, 'decay': 0.0, 'epsilon': 1e-07, 'amsgrad': False}
###Markdown
Training Parameters
###Code
epochs=250
batch_size=30
###Output
_____no_output_____
###Markdown
Train
###Code
history=model.fit([Xs,Xm],Y,epochs=epochs,batch_size=batch_size, verbose=1)
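# Plot the loss curve recorded by fit() (matplotlib is not imported elsewhere in this
# notebook, hence the import here).
import matplotlib.pyplot as plt
plt.plot(history.history['loss'])
plt.xlabel('epoch')
plt.ylabel('categorical crossentropy loss')
plt.title('fusion_net training loss')
plt.show()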
###Output
Epoch 1/250
INFO:tensorflow:New input shapes; (re-)compiling: mode=train (# of cores 8), [TensorSpec(shape=(3,), dtype=tf.int32, name='core_id0'), TensorSpec(shape=(3, 256, 256, 1), dtype=tf.float32, name='input_1_10'), TensorSpec(shape=(3, 256, 256, 1), dtype=tf.float32, name='input_2_10'), TensorSpec(shape=(3, 256, 256, 3), dtype=tf.float32, name='lambda_target_30')]
INFO:tensorflow:Overriding default placeholder.
INFO:tensorflow:Cloning Adam {'lr': 0.009999999776482582, 'beta_1': 0.8999999761581421, 'beta_2': 0.9990000128746033, 'decay': 0.0, 'epsilon': 1e-07, 'amsgrad': False}
INFO:tensorflow:Remapping placeholder for input_1
INFO:tensorflow:Remapping placeholder for input_2
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/tpu/python/tpu/keras_support.py:302: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
INFO:tensorflow:KerasCrossShard: <tensorflow.python.keras.optimizers.Adam object at 0x7f04d8583cc0> []
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_grad.py:102: div (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Deprecated in favor of operator or tf.math.divide.
INFO:tensorflow:Started compiling
INFO:tensorflow:Finished compiling. Time elapsed: 7.158541202545166 secs
INFO:tensorflow:Setting weights on TPU model.
INFO:tensorflow:CPU -> TPU lr: 0.009999999776482582 {0.01}
INFO:tensorflow:CPU -> TPU beta_1: 0.8999999761581421 {0.9}
INFO:tensorflow:CPU -> TPU beta_2: 0.9990000128746033 {0.999}
INFO:tensorflow:CPU -> TPU decay: 0.0 {0.0}
WARNING:tensorflow:Cannot update non-variable config: epsilon
WARNING:tensorflow:Cannot update non-variable config: amsgrad
630/672 [===========================>..] - ETA: 1s - loss: 0.3346INFO:tensorflow:New input shapes; (re-)compiling: mode=train (# of cores 8), [TensorSpec(shape=(1,), dtype=tf.int32, name='core_id0'), TensorSpec(shape=(1, 256, 256, 1), dtype=tf.float32, name='input_1_10'), TensorSpec(shape=(1, 256, 256, 1), dtype=tf.float32, name='input_2_10'), TensorSpec(shape=(1, 256, 256, 3), dtype=tf.float32, name='lambda_target_30')]
INFO:tensorflow:Overriding default placeholder.
INFO:tensorflow:Remapping placeholder for input_1
INFO:tensorflow:Remapping placeholder for input_2
INFO:tensorflow:KerasCrossShard: <tensorflow.python.keras.optimizers.Adam object at 0x7f04d8583cc0> [<tf.Variable 'tpu_139658788042008/Adam/iterations:0' shape=() dtype=int64>, <tensorflow.contrib.tpu.python.tpu.keras_tpu_variables.ReplicatedVariable object at 0x7f04d7a8f550>, <tensorflow.contrib.tpu.python.tpu.keras_tpu_variables.ReplicatedVariable object at 0x7f04d7a8fe10>, <tensorflow.contrib.tpu.python.tpu.keras_tpu_variables.ReplicatedVariable object at 0x7f04d7a50198>, <tensorflow.contrib.tpu.python.tpu.keras_tpu_variables.ReplicatedVariable object at 0x7f04d7a371d0>, <tensorflow.contrib.tpu.python.tpu.keras_tpu_variables.ReplicatedVariable object at 0x7f04d79bc7b8>, <tensorflow.contrib.tpu.python.tpu.keras_tpu_variables.ReplicatedVariable object at 0x7f04d79f5ac8>, <tensorflow.contrib.tpu.python.tpu.keras_tpu_variables.ReplicatedVariable object at 0x7f04d79873c8>, <tensorflow.contrib.tpu.python.tpu.keras_tpu_variables.ReplicatedVariable object at 0x7f04d78d1a58>, <tensorflow.contrib.tpu.python.tpu.keras_tpu_variables.ReplicatedVariable object at 0x7f04d7861f28>, <tensorflow.contrib.tpu.python.tpu.keras_tpu_variables.ReplicatedVariable object at 0x7f04d7828be0>, <tensorflow.contrib.tpu.python.tpu.keras_tpu_variables.ReplicatedVariable object at 0x7f04d77f4978>, <tensorflow.contrib.tpu.python.tpu.keras_tpu_variables.ReplicatedVariable object at 0x7f04d77bca90>, <tensorflow.contrib.tpu.python.tpu.keras_tpu_variables.ReplicatedVariable object at 0x7f04d7787d30>, <tensorflow.contrib.tpu.python.tpu.keras_tpu_variables.ReplicatedVariable object at 0x7f04d76f12b0>, <tensorflow.contrib.tpu.python.tpu.keras_tpu_variables.ReplicatedVariable object at 0x7f04d774ef98>, <tensorflow.contrib.tpu.python.tpu.keras_tpu_variables.ReplicatedVariable object at 0x7f04d765ee10>, <tensorflow.contrib.tpu.python.tpu.keras_tpu_variables.ReplicatedVariable object at 0x7f04d764ff60>, <tensorflow.contrib.tpu.python.tpu.keras_tpu_variables.ReplicatedVariable object at 0x7f04d75f3f60>, <tensorflow.contrib.tpu.python.tpu.keras_tpu_variables.ReplicatedVariable object at 0x7f04d755f860>, <tensorflow.contrib.tpu.python.tpu.keras_tpu_variables.ReplicatedVariable object at 0x7f04d7528e10>, <tensorflow.contrib.tpu.python.tpu.keras_tpu_variables.ReplicatedVariable object at 0x7f04d74f1470>, <tensorflow.contrib.tpu.python.tpu.keras_tpu_variables.ReplicatedVariable object at 0x7f04d745d5f8>, <tensorflow.contrib.tpu.python.tpu.keras_tpu_variables.ReplicatedVariable object at 0x7f04d7429fd0>, <tensorflow.contrib.tpu.python.tpu.keras_tpu_variables.ReplicatedVariable object at 0x7f04d73efe10>, <tensorflow.contrib.tpu.python.tpu.keras_tpu_variables.ReplicatedVariable object at 0x7f04d7380c88>, <tensorflow.contrib.tpu.python.tpu.keras_tpu_variables.ReplicatedVariable object at 0x7f04d7324f28>, <tensorflow.contrib.tpu.python.tpu.keras_tpu_variables.ReplicatedVariable object at 0x7f04d72eefd0>, <tensorflow.contrib.tpu.python.tpu.keras_tpu_variables.ReplicatedVariable object at 0x7f04d72b6e10>, <tensorflow.contrib.tpu.python.tpu.keras_tpu_variables.ReplicatedVariable object at 0x7f04d7224be0>, <tensorflow.contrib.tpu.python.tpu.keras_tpu_variables.ReplicatedVariable object at 0x7f04d71ec748>]
INFO:tensorflow:Started compiling
INFO:tensorflow:Finished compiling. Time elapsed: 4.767093896865845 secs
672/672 [==============================] - 31s 46ms/sample - loss: 0.3153
Epoch 2/250
672/672 [==============================] - 6s 9ms/sample - loss: 0.0170
Epoch 3/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0157
Epoch 4/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0150
Epoch 5/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0144
Epoch 6/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0144
Epoch 7/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0127
Epoch 8/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0152
Epoch 9/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0134
Epoch 10/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0147
Epoch 11/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0143
Epoch 12/250
672/672 [==============================] - 6s 9ms/sample - loss: 0.0152
Epoch 13/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0143
Epoch 14/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0131
Epoch 15/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0137
Epoch 16/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0138
Epoch 17/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0142
Epoch 18/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0136
Epoch 19/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0138
Epoch 20/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0135
Epoch 21/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0145
Epoch 22/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0143
Epoch 23/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0144
Epoch 24/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0138
Epoch 25/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0131
Epoch 26/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0155
Epoch 27/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0135
Epoch 28/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0135
Epoch 29/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0143
Epoch 30/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0138
Epoch 31/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0121
Epoch 32/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0130
Epoch 33/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0141
Epoch 34/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0135
Epoch 35/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0124
Epoch 36/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0122
Epoch 37/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0136
Epoch 38/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0159
Epoch 39/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0137
Epoch 40/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0137
Epoch 41/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0132
Epoch 42/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0126
Epoch 43/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0123
Epoch 44/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0112
Epoch 45/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0119
Epoch 46/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0121
Epoch 47/250
672/672 [==============================] - 6s 9ms/sample - loss: 0.0111
Epoch 48/250
672/672 [==============================] - 6s 9ms/sample - loss: 0.0114
Epoch 49/250
672/672 [==============================] - 6s 9ms/sample - loss: 0.0120
Epoch 50/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0111
Epoch 51/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0121
Epoch 52/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0119
Epoch 53/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0117
Epoch 54/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0111
Epoch 55/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0119
Epoch 56/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0118
Epoch 57/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0119
Epoch 58/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0126
Epoch 59/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0106
Epoch 60/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0118
Epoch 61/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0124
Epoch 62/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0123
Epoch 63/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0124
Epoch 64/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0118
Epoch 65/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0113
Epoch 66/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0120
Epoch 67/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0122
Epoch 68/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0119
Epoch 69/250
672/672 [==============================] - 6s 9ms/sample - loss: 0.0119
Epoch 70/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0122
Epoch 71/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0118
Epoch 72/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0111
Epoch 73/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0106
Epoch 74/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0120
Epoch 75/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0121
Epoch 76/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0119
Epoch 77/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0123
Epoch 78/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0111
Epoch 79/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0121
Epoch 80/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0108
Epoch 81/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0113
Epoch 82/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0122
Epoch 83/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0116
Epoch 84/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0115
Epoch 85/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0121
Epoch 86/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0098
Epoch 87/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0118
Epoch 88/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0115
Epoch 89/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0116
Epoch 90/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0115
Epoch 91/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0101
Epoch 92/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0113
Epoch 93/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0115
Epoch 94/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0119
Epoch 95/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0120
Epoch 96/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0108
Epoch 97/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0119
Epoch 98/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0110
Epoch 99/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0126
Epoch 100/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0123
Epoch 101/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0115
Epoch 102/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0115
Epoch 103/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0111
Epoch 104/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0099
Epoch 105/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0115
Epoch 106/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0119
Epoch 107/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0106
Epoch 108/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0117
Epoch 109/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0112
Epoch 110/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0113
Epoch 111/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0114
Epoch 112/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0112
Epoch 113/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0114
Epoch 114/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0104
Epoch 115/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0121
Epoch 116/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0105
Epoch 117/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0115
Epoch 118/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0115
Epoch 119/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0110
Epoch 120/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0114
Epoch 121/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0115
Epoch 122/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0110
Epoch 123/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0116
Epoch 124/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0100
Epoch 125/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0113
Epoch 126/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0116
Epoch 127/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0100
Epoch 128/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0114
Epoch 129/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0112
Epoch 130/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0113
Epoch 131/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0115
Epoch 132/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0115
Epoch 133/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0113
Epoch 134/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0101
Epoch 135/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0117
Epoch 136/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0116
Epoch 137/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0121
Epoch 138/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0123
Epoch 139/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0122
Epoch 140/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0116
Epoch 141/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0114
Epoch 142/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0118
Epoch 143/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0104
Epoch 144/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0113
Epoch 145/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0102
Epoch 146/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0112
Epoch 147/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0099
Epoch 148/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0110
Epoch 149/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0102
Epoch 150/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0111
Epoch 151/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0115
Epoch 152/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0116
Epoch 153/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0100
Epoch 154/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0098
Epoch 155/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0111
Epoch 156/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0120
Epoch 157/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0115
Epoch 158/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0118
Epoch 159/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0113
Epoch 160/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0101
Epoch 161/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0111
Epoch 162/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0104
Epoch 163/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0114
Epoch 164/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0117
Epoch 165/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0116
Epoch 166/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0122
Epoch 167/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0123
Epoch 168/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0114
Epoch 169/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0111
Epoch 170/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0108
Epoch 171/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0114
Epoch 172/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0111
Epoch 173/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0115
Epoch 174/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0114
Epoch 175/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0113
Epoch 176/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0103
Epoch 177/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0111
Epoch 178/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0104
Epoch 179/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0114
Epoch 180/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0104
Epoch 181/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0116
Epoch 182/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0114
Epoch 183/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0117
Epoch 184/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0122
Epoch 185/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0116
Epoch 186/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0114
Epoch 187/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0112
Epoch 188/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0114
Epoch 189/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0113
Epoch 190/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0109
Epoch 191/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0109
Epoch 192/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0110
Epoch 193/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0106
Epoch 194/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0113
Epoch 195/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0109
Epoch 196/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0112
Epoch 197/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0101
Epoch 198/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0107
Epoch 199/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0108
Epoch 200/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0108
Epoch 201/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0110
Epoch 202/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0110
Epoch 203/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0114
Epoch 204/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0112
Epoch 205/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0108
Epoch 206/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0106
Epoch 207/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0110
Epoch 208/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0107
Epoch 209/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0110
Epoch 210/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0110
Epoch 211/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0109
Epoch 212/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0110
Epoch 213/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0114
Epoch 214/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0104
Epoch 215/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0109
Epoch 216/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0108
Epoch 217/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0118
Epoch 218/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0114
Epoch 219/250
672/672 [==============================] - 6s 9ms/sample - loss: 0.0111
Epoch 220/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0107
Epoch 221/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0107
Epoch 222/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0105
Epoch 223/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0109
Epoch 224/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0104
Epoch 225/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0099
Epoch 226/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0111
Epoch 227/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0108
Epoch 228/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0104
Epoch 229/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0104
Epoch 230/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0105
Epoch 231/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0107
Epoch 232/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0107
Epoch 233/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0093
Epoch 234/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0092
Epoch 235/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0108
Epoch 236/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0094
Epoch 237/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0095
Epoch 238/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0108
Epoch 239/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0097
Epoch 240/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0106
Epoch 241/250
672/672 [==============================] - 6s 8ms/sample - loss: 0.0105
Epoch 242/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0102
Epoch 243/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0108
Epoch 244/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0112
Epoch 245/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0117
Epoch 246/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0096
Epoch 247/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0110
Epoch 248/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0100
Epoch 249/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0106
Epoch 250/250
672/672 [==============================] - 5s 8ms/sample - loss: 0.0105
###Markdown
Save Model Weights
###Code
model.save_weights('fusion_net.h5')
###Output
INFO:tensorflow:Copying TPU weights to the CPU
INFO:tensorflow:TPU -> CPU lr: 0.009999999776482582
INFO:tensorflow:TPU -> CPU beta_1: 0.8999999761581421
INFO:tensorflow:TPU -> CPU beta_2: 0.9990000128746033
INFO:tensorflow:TPU -> CPU decay: 0.0
INFO:tensorflow:TPU -> CPU epsilon: 1e-07
WARNING:tensorflow:Cannot update non-variable config: epsilon
INFO:tensorflow:TPU -> CPU amsgrad: False
WARNING:tensorflow:Cannot update non-variable config: amsgrad
###Markdown
Plot Training History
###Code
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(history.history['loss'])
plt.title('model training')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train'], loc='upper left')
###Output
_____no_output_____ |
residual_demand.ipynb | ###Markdown
Residual DemandThis notebook calculates the residual demand function necessary to study the effect of a backstop technology. In a static model, suppose that there are two sources of energy supply: fossil fuels ($y$) and solar power ($z$). By assumption, the two types of energy are perfect substitutes, so they receive the same price. The inverse demand function is $p = a - w$, where $w$ is total energy supply ($w = y + z$). The solar power marginal cost (i.e., its supply function) is $b + cy$. Parameters
###Code
import numpy as np
import matplotlib.pyplot as plt

a = 20
b = 15
c = 0
###Output
_____no_output_____
###Markdown
Figure 7.1To replicate Figure 7.1 set $a=20$, $b=15$ and $c=0$. You can change these parameters to see different residual demand functions. As an exercise, find the residual demand function when the solar power marginal cost (supply function) is $2z$; a sketch is included after the plot below.
###Code
y = np.linspace(0, a, 100)
p = a - y
MCs = b + (c*y)
R = np.minimum(p, MCs)
plt.plot(y, p, 'b', label='aggregate demand')
plt.plot(y, MCs, 'g', label='solar supply')
plt.plot(y, R, 'r', linestyle='dashed', label='residual demand')
plt.xlabel('y')
plt.ylabel('$')
plt.legend()
plt.show()
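# --- Sketch for the exercise above: solar marginal cost 2z (illustrative only, not part of the original notebook) ---
# With an upward-sloping solar supply p = 2z, solar supplies z = p/2 at any price p.
# The residual demand faced by fossil fuels is then the horizontal difference between
# aggregate demand and solar supply: y = (a - p) - p/2, i.e. p = (2/3) * (a - y).
# (The min() construction above is exact only for a flat backstop supply like b + 0*y;
# with a sloped supply we subtract the solar quantity supplied at each price instead.)
y_ex = np.linspace(0, a, 100)
p_demand = a - y_ex                    # aggregate (inverse) demand
p_solar = 2 * y_ex                     # solar supply curve p = 2z, drawn on the same quantity axis
p_residual = (2.0 / 3.0) * (a - y_ex)  # residual (inverse) demand for fossil fuels
plt.plot(y_ex, p_demand, 'b', label='aggregate demand')
plt.plot(y_ex, p_solar, 'g', label='solar supply (2z)')
plt.plot(y_ex, p_residual, 'r', linestyle='dashed', label='residual demand (sketch)')
plt.xlabel('y')
plt.ylabel('$')
plt.ylim(0, a)
plt.legend()
plt.show()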
###Output
_____no_output_____ |
ru/source/examples/examples_sRoc_Stochastic.ipynb | ###Markdown
Technical Analysis Template: sRoC + Stochastic Oscillator Buy and sell using oversold and overbought levels of the Stochastic oscillator according to the trend. The trend indicator is the smoothed Rate of Change (sRoC), computed with a WMA as the smoothing function.
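For intuition, here is a rough pandas-only sketch of the sRoC trend indicator described above (an illustrative rewrite under stated assumptions; the template itself uses the qnt TA-Lib wrappers `WMA` and `ROCP` with the same 120/60 periods):
###Code
# Hypothetical stand-alone illustration of the sRoC calculation (not used by the template below).
import numpy as np
import pandas as pd

def wma(series: pd.Series, period: int) -> pd.Series:
    # Linearly weighted moving average: the most recent bar gets the largest weight.
    weights = np.arange(1, period + 1)
    return series.rolling(period).apply(lambda x: np.dot(x, weights) / weights.sum(), raw=True)

def sroc(close: pd.Series, wma_period: int = 120, roc_period: int = 60) -> pd.Series:
    # Smooth the close first, then take the fractional rate of change of the smoothed series.
    return wma(close, wma_period).pct_change(roc_period)
###Output
_____no_output_____
###Markdown
The template below computes the same quantities with `qnxrtalib` and combines them with the Stochastic oscillator's slow %K and %D oversold/overbought thresholds.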
###Code
import qnt.graph as qngraph
import qnt.data as qndata
import qnt.stats as qnstats
import qnt.xr_talib as qnxrtalib
import xarray as xr
import pandas as pd
from qnt.stepper import test_strategy
#import qnt.forward_looking as qnfl
import xarray.ufuncs as xrf
import datetime as dt
###Output
_____no_output_____
###Markdown
Data
###Code
data = qndata.load_data(tail=dt.timedelta(days=5*365), dims=("time", "field", "asset"), forward_order=True)
###Output
fetched chunk 1/6 1s
fetched chunk 2/6 2s
fetched chunk 3/6 4s
fetched chunk 4/6 5s
fetched chunk 5/6 6s
fetched chunk 6/6 7s
Data loaded 7s
###Markdown
Calc output
###Code
SROC_POSITIVE_TREND_LEVEL=0.05
SROC_CLOSE_LEVEL=-0.05
STOCH_OVERBOUGHT_LEVEL=92
STOCH_OVERSOLD_LEVEL=31
wma = qnxrtalib.WMA(data.sel(field='close'), 120)
sroc = qnxrtalib.ROCP(wma, 60)
stoch = qnxrtalib.STOCH(data, 8, 3, 3)
k = stoch.sel(field='slowk')
d = stoch.sel(field='slowd')
data_ext = xr.concat([wma, sroc, k, d], pd.Index(['wma', 'sroc', 'k', 'd'], name='field'))
data_ext = xr.concat([data, data_ext], 'field')
weights = data.isel(time=0, field=0)
weights[:] = 0
def step(data):
latest = data.isel(time=-1)
is_liquid = latest.sel(field="is_liquid")
sroc = latest.sel(field='sroc')
k = latest.sel(field='k')
d = latest.sel(field='d')
need_open = xrf.logical_and(
sroc > SROC_POSITIVE_TREND_LEVEL,
xrf.logical_and(k < STOCH_OVERSOLD_LEVEL, d < STOCH_OVERSOLD_LEVEL)
)
need_close = xrf.logical_or(
sroc < SROC_CLOSE_LEVEL,
xrf.logical_and(k > STOCH_OVERBOUGHT_LEVEL, d > STOCH_OVERBOUGHT_LEVEL)
)
global weights
weights.loc[need_open] = 1
weights.loc[need_close] = 0
weights.loc[is_liquid == 0] = 0 # prevention of illiquid assets trading
return (weights / weights.sum('asset')).fillna(0)
output = test_strategy(data_ext, step=step)
###Output
Testing started...
Testing progress: 449/1257 5s
Testing progress: 894/1257 10s
Testing complete 14.093945503234863s
###Markdown
Stats and plots
###Code
stat = qnstats.calc_stat(data, output, max_periods=252 * 3)
display(stat.to_pandas().tail())
qngraph.make_plot_filled(
stat.coords['time'].to_pandas(),
stat.loc[:, 'equity'].values,
color="blue",
name="PnL (Equity)",
type="log"
)
qngraph.make_plot_filled(
stat.coords['time'].to_pandas(),
stat.loc[:, 'underwater'].values,
color="red",
name="Underwater Chart",
range_max= 0
)
SR_OFFSET = 252 * 3 + 120 + 60 + 8 * 3 * 3
qngraph.make_plot_filled(
stat.coords['time'].to_pandas()[SR_OFFSET:],
stat.loc[:, 'sharpe_ratio'].values[SR_OFFSET:],
color="purple",
name="Rolling SR"
)
qngraph.make_plot_filled(
stat.coords['time'].to_pandas(),
stat.loc[:, 'bias'].values,
color="gray",
name="Bias"
)
###Output
_____no_output_____
###Markdown
Checks
###Code
qnstats.print_correlation(output, data)
###Output
WARNING! This strategy correlates with other strategies.
The number of systems with a larger Sharpe ratio and correlation larger than 0.8: 1
The max correlation value (with systems with a larger Sharpe ratio): 0.9055071260527578
Current sharpe ratio(3y): 0.592895288234106
###Markdown
Save output
###Code
qndata.write_output(output)
###Output
write output: /root/fractions.nc.gz
|
Machine Learning/Recommeder Systems/Basic Recommender System - K.ipynb | ###Markdown
Recommender Systems with PythonIn this exercise we will make a basic recommendation systems using Python and pandas. The goal is to recommend movies that are most similar to a particular movie. This is not a robust recommendation system. This system will just tell us what movies are the most similar to your movie choice.In the next exercise, we are going to build a more complex Recommender System. We will cover different techniques like`"Collaborative Filtering and Content-Based Filtering"` ___ Your Job ___***Things to do***- Perform the data pre-processing.- Perform EDA- Create a Movie Matrix that has the user ids on one access and the movie title on another axis. Each cell will then consist of the rating the user gave to that movie. (Hint: Pivot_table)Note: there will be a lot of NaN values, because most people have not seen most of the movies.- (Step 4) Now, select any of the two, three or any number of movies and grab the user ratings for those two movies.- (Step 5) Now you have to calculate corelation of these movies separately with the Movie matrix that you created above. - (Step 6) Make a dataframe containing the `Movies_names , corelation` and sort the dataframe by corelation. - Do Step 6 for each movie you selected in step 4.- Join the `number of ratings` column, with the `corelation` table.***What will be new***- You will learn how to make a very basic Recommender system which will recommend movies similar to your movie choices. ***What will be tricky***- Step 5 might be a bit tricky step, what you can do is, you can use [corrwith()](https://www.geeksforgeeks.org/python-pandas-dataframe-corrwith/:~:text=Pandas%20dataframe.,will%20be%20a%20NaN%20value.) method to get correlations between two pandas series.
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
movies = pd.read_csv('Movie_data.csv')
movies.head()
movies.describe()
movies.isnull().sum()
movies['title'].nunique()
movies['user_id'].nunique()
###Output
_____no_output_____
###Markdown
EDA
###Code
# count of ratings
sns.countplot(x = 'rating', data=movies)
# number of ratings contributed per user
ax = sns.countplot(x = 'user_id', data=movies, order = movies['user_id'].value_counts().index)
ax.set(xticklabels=[])
# number of ratings received per movie
ax = sns.countplot(x = 'title', data=movies, order = movies['title'].value_counts().index)
ax.set(xticklabels=[])
movies['timestamp'] = pd.to_datetime(movies['timestamp'] ,unit='s')
movies['timestamp'] = pd.to_datetime(movies['timestamp'].dt.strftime('%Y-%m-%d'))
movies['year']= movies['title'].str.split().str[-1].str.strip('()')
movies
# number of ratings by movie release year
data = movies['year'].value_counts().sort_index()
ax = sns.lineplot(x = data.index, y= data.values)
###Output
_____no_output_____
###Markdown
Create a Movie Matrix that has the user ids on one axis and the movie title on another axis
###Code
movies = pd.read_csv('Movie_data.csv')
df = pd.pivot_table(movies, values = 'rating', columns = 'title', index = 'user_id' )
df.head()
# Build lookup tables with each movie's average rating and total number of ratings
Avg_rating = pd.DataFrame(movies.groupby('title')['rating'].mean()).rename(columns={'rating': 'Avg Rating'})
Tot_rating = pd.DataFrame(movies.groupby('title')['rating'].count()).rename(columns={'rating': 'Tot Rating'})
my_movie = df.iloc[:,3]
recommendation = df.corrwith(my_movie, axis = 0)
recommendation = pd.DataFrame(recommendation, columns=['score'])
recommendation= recommendation.join(Avg_rating['Avg Rating'])
recommendation= recommendation.join(Tot_rating['Tot Rating'])
recommendation.sort_values(by = 'score',ascending = False, inplace=True)
# Minimum rating: 2
recommendation = recommendation[recommendation['Avg Rating']>2]
# Minimum number of Ratings: 50
recommendation = recommendation[recommendation['Tot Rating']>50]
recommendation.head(15)
my_movie = df.iloc[:,225]
recommendation = df.corrwith(my_movie, axis = 0)
recommendation = pd.DataFrame(recommendation, columns=['score'])
recommendation= recommendation.join(Avg_rating['Avg Rating'])
recommendation= recommendation.join(Tot_rating['Tot Rating'])
recommendation.sort_values(by = 'score',ascending = False, inplace=True)
# Minimum rating: 2
recommendation = recommendation[recommendation['Avg Rating']>2]
# Minimum number of Ratings: 50
recommendation = recommendation[recommendation['Tot Rating']>50]
recommendation.head(15)
my_movie = df.iloc[:,654]
recommendation = df.corrwith(my_movie, axis = 0)
recommendation = pd.DataFrame(recommendation, columns=['score'])
recommendation= recommendation.join(Avg_rating['Avg Rating'])
recommendation= recommendation.join(Tot_rating['Tot Rating'])
recommendation.sort_values(by = 'score',ascending = False, inplace=True)
# Minimum rating: 2
recommendation = recommendation[recommendation['Avg Rating']>2]
# Minimum number of Ratings: 50
recommendation = recommendation[recommendation['Tot Rating']>50]
recommendation.head(15)
###Output
C:\Users\delchain_default\anaconda3\lib\site-packages\numpy\lib\function_base.py:2526: RuntimeWarning: Degrees of freedom <= 0 for slice
c = cov(x, y, rowvar)
C:\Users\delchain_default\anaconda3\lib\site-packages\numpy\lib\function_base.py:2455: RuntimeWarning: divide by zero encountered in true_divide
c *= np.true_divide(1, fact)
|
DNN-prediction.ipynb | ###Markdown
Load data
###Code
import numpy as np
import pandas as pd
import scipy.spatial
import seaborn as sb
import matplotlib.pyplot as plt
import sklearn.preprocessing
import tensorflow as tf
import tsne              # t-SNE implementation exposing tsne.tsne(), as called further below
import multimodal_data   # project-local helpers for loading the L1000 / Cell Painting profiles
l1k = multimodal_data.load_l1000("replicate_level_all_alleles.csv")
cp = multimodal_data.load_cell_painting(
"/data1/luad/others/morphology.csv",
"resnet18-validation-well_profiles.csv",
aggregate_replicates=False
)
l1k, cp = multimodal_data.align_profiles(l1k, cp, sample=0)
GE = np.asarray(l1k)[:,1:]
MP = np.asarray(cp)[:,1:]
###Output
_____no_output_____
###Markdown
Separate training and validation
###Code
common_alleles = set(cp["Allele"].unique()).intersection( l1k["Allele"].unique() )
genes = list(common_alleles)
genes = [x for x in genes if x not in ["EGFP", "BFP", "HCRED"]]
np.random.shuffle(genes)
train = genes[0:9*int(len(genes)/10)]
test = genes[9*int(len(genes)/10):]
GE_train = l1k[l1k["Allele"].isin(train)]
MP_train = cp[cp["Allele"].isin(train)]
GE_test = l1k[l1k["Allele"].isin(test)]
MP_test = cp[cp["Allele"].isin(test)]
###Output
_____no_output_____
###Markdown
Normalize inputs and outputs
###Code
def z_score(A, model, features):
alleles = list(A["Allele"])
A = pd.DataFrame(data=model.transform(A[features]), columns=features)
A["Allele"] = alleles
return A[["Allele"] + features]
ge_features = [str(i) for i in range(GE.shape[1])]
sc_l1k = sklearn.preprocessing.StandardScaler()
sc_l1k.fit(GE_train[ge_features])
GE_train = z_score(GE_train, sc_l1k, ge_features)
GE_test = z_score(GE_test, sc_l1k, ge_features)
mp_features = [str(i) for i in range(MP.shape[1])]
sc_cp = sklearn.preprocessing.StandardScaler()
sc_cp.fit(MP_train[mp_features])
MP_train = z_score(MP_train, sc_cp, mp_features)
MP_test = z_score(MP_test, sc_cp, mp_features)
###Output
_____no_output_____
###Markdown
Create Neural Net
###Code
def mp2ge_net(in_size, out_size):
inLayer = tf.keras.layers.Input([in_size])
net = tf.keras.layers.Dense(in_size, activation="relu")(inLayer)
net = tf.keras.layers.BatchNormalization()(net)
net = tf.keras.layers.Dense(in_size//2, activation="relu")(net)
net = tf.keras.layers.BatchNormalization()(net)
net = tf.keras.layers.Dropout(0.5)(net)
net = tf.keras.layers.Dense(out_size//4, activation="relu")(net)
net = tf.keras.layers.BatchNormalization()(net)
net = tf.keras.layers.Dropout(0.5)(net)
net = tf.keras.layers.Dense(out_size, activation=None)(net)
return tf.keras.Model(inLayer, net)
model = mp2ge_net(MP.shape[1], GE.shape[1])
model.summary()
###Output
_____no_output_____
###Markdown
Prepare data generator
###Code
class MultimodalDataGenerator(tf.keras.utils.Sequence):
'Generates data for Keras'
def __init__(self, modA, modB, batch_size=32):
'Initialization'
self.batch_size = batch_size
self.modA = modA
self.modB = modB
self.classes = set( modA["Allele"].unique()).intersection( modB["Allele"].unique() )
self.classes = list(self.classes)
self.create_samples()
def create_samples(self):
dataA = []
dataB = []
classes = []
# Generate all combinations of A and B with the same label
for cl in self.classes:
for idx, rowA in self.modA[self.modA["Allele"] == cl].iterrows():
for jdx, rowB in self.modB[self.modB["Allele"] == cl].iterrows():
dataA.append(np.reshape(np.asarray(rowA)[1:], (1,self.modA.shape[1]-1)))
dataB.append(np.reshape(np.asarray(rowB)[1:], (1,self.modB.shape[1]-1)))
classes.append(cl)
self.X = np.concatenate(dataA)
self.Y = np.concatenate(dataB)
self.Z = classes
print("Total pairs:", len(dataA), self.X.shape, self.Y.shape)
def __len__(self):
'Denotes the number of batches per epoch'
return int(np.floor(len(self.modA) / self.batch_size))
def __getitem__(self, index):
'Generate one batch of data'
# Initialization
index = np.arange(0,self.X.shape[0])
np.random.shuffle(index)
X = self.X[index[0:self.batch_size], :]
Y = self.Y[index[0:self.batch_size], :]
return X, Y
###Output
_____no_output_____
###Markdown
Train model
###Code
# build a TF session that only sees GPU 3 (visible_device_list) and allows GPU memory growth
configuration = tf.ConfigProto()
configuration.gpu_options.allow_growth = True
configuration.gpu_options.visible_device_list = "3"
session = tf.Session(config = configuration)
tf.keras.backend.set_session(session)
model.compile(optimizer='adam', loss='mean_absolute_error')
dgen_train = MultimodalDataGenerator(MP_train, GE_train)
dgen_test = MultimodalDataGenerator(MP_test, GE_test)
model.fit_generator(dgen_train, epochs=100, validation_data=dgen_test)
###Output
_____no_output_____
###Markdown
Make predictions
###Code
predicted_ge = model.predict(np.asarray(MP_test)[:,1:])
predicted_ge = pd.DataFrame(data=predicted_ge, columns=ge_features)
predicted_ge["Allele"] = MP_test["Allele"]
predicted_ge = predicted_ge[["Allele"] + ge_features]
predicted_ge["Real"] = False
GE_test["Real"] = True
compare_ge = pd.concat([GE_test, predicted_ge]).reset_index(drop=True)
# Compute tSNE
X = np.asarray(compare_ge)[:,1:-1]
X = np.asarray(X, dtype=np.float)
Y = tsne.tsne(X)
compare_ge["X"] = Y[:,0]
compare_ge["Y"] = Y[:,1]
sb.lmplot(data=compare_ge, x="X", y="Y", hue="Real", fit_reg=False)
M1 = Y[0:GE_test.shape[0],0:2]
M2 = Y[GE_test.shape[0]:,0:2]
D = scipy.spatial.distance_matrix(M1, M2)
NN = np.argsort(D, axis=1) # Nearest morphology point to each gene expression point
plt.figure(figsize=(10,10))
plt.scatter(M1[:,0], M1[:,1], c="lime", s=50, edgecolor='gray', linewidths=1)
plt.scatter(M2[:,0], M2[:,1], c="purple", s=50, edgecolor='gray', linewidths=1)
plt.figure(figsize=(10,10))
plt.scatter(M1[:,0], M1[:,1], c="lime", s=50, edgecolor='gray', linewidths=1)
plt.scatter(M2[:,0], M2[:,1], c="purple", s=50, edgecolor='gray', linewidths=1)
for i in range(M2.shape[0]):
for j in range(M1.shape[0]):
if predicted_ge.iloc[i].Allele == GE_test.iloc[NN[j,i]].Allele:
plt.plot([M1[NN[j,i],0],M2[i,0]],[M1[NN[j,i],1],M2[i,1]], 'k-', color="red")
break
NN.shape, M1.shape, M2.shape
plt.figure(figsize=(12,12))
p1 = sb.regplot(data=compare_ge[compare_ge["Real"]], x="X", y="Y", fit_reg=False, color="#FF983E", scatter_kws={'s':50})
for point in range(compare_ge.shape[0]):
if compare_ge.Real[point]:
p1.text(compare_ge.X[point], compare_ge.Y[point], compare_ge.Allele[point], horizontalalignment='left', size='small', color='black')
p2 = sb.regplot(data=compare_ge[~compare_ge["Real"]], x="X", y="Y", fit_reg=False, color="#4B91C2", scatter_kws={'s':50})
# for point in range(compare_ge.shape[0]):
# if not compare_ge.Real[point]:
# p2.text(compare_ge.X[point], compare_ge.Y[point], compare_ge.Allele[point], horizontalalignment='left', size='small', color='black')
compare_ge.Real[point]
###Output
_____no_output_____ |
experiments/tl_1v2/cores-oracle.run1.limited/trials/26/trial.ipynb | ###Markdown
Transfer Learning Template
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os, json, sys, time, random
import numpy as np
import torch
from torch.optim import Adam
from easydict import EasyDict
import matplotlib.pyplot as plt
from steves_models.steves_ptn import Steves_Prototypical_Network
from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper
from steves_utils.iterable_aggregator import Iterable_Aggregator
from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader
from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)
from steves_utils.PTN.utils import independent_accuracy_assesment
from torch.utils.data import DataLoader
from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory
from steves_utils.ptn_do_report import (
get_loss_curve,
get_results_table,
get_parameters_table,
get_domain_accuracies,
)
from steves_utils.transforms import get_chained_transform
###Output
_____no_output_____
###Markdown
Allowed ParametersThese are allowed parameters, not defaults. Each of these values needs to be present in the injected parameters (the notebook will raise an exception if any are missing). Papermill uses the cell tag "parameters" to inject the real parameters below this cell; enable tags to see what I mean.
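For reference, here is a minimal sketch of how a template like this is typically driven with papermill (illustrative only: the file names and parameter values are made up, and in practice every name listed in `required_parameters` below must be supplied):
###Code
# Hypothetical papermill driver snippet -- not executed as part of this notebook.
import papermill as pm

pm.execute_notebook(
    "trial.ipynb",         # this template (input path is an assumption)
    "trial.output.ipynb",  # executed copy that will contain the injected parameters and results
    parameters={"lr": 0.0001, "seed": 1337, "n_epoch": 50},  # merged into the cell tagged "parameters"
)
###Output
_____no_output_____
###Markdown
The next cell declares the full set of parameter names this template requires.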
###Code
required_parameters = {
"experiment_name",
"lr",
"device",
"seed",
"dataset_seed",
"n_shot",
"n_query",
"n_way",
"train_k_factor",
"val_k_factor",
"test_k_factor",
"n_epoch",
"patience",
"criteria_for_best",
"x_net",
"datasets",
"torch_default_dtype",
"NUM_LOGS_PER_EPOCH",
"BEST_MODEL_PATH",
"x_shape",
}
from steves_utils.CORES.utils import (
ALL_NODES,
ALL_NODES_MINIMUM_1000_EXAMPLES,
ALL_DAYS
)
from steves_utils.ORACLE.utils_v2 import (
ALL_DISTANCES_FEET_NARROWED,
ALL_RUNS,
ALL_SERIAL_NUMBERS,
)
standalone_parameters = {}
standalone_parameters["experiment_name"] = "STANDALONE PTN"
standalone_parameters["lr"] = 0.001
standalone_parameters["device"] = "cuda"
standalone_parameters["seed"] = 1337
standalone_parameters["dataset_seed"] = 1337
standalone_parameters["n_way"] = 8
standalone_parameters["n_shot"] = 3
standalone_parameters["n_query"] = 2
standalone_parameters["train_k_factor"] = 1
standalone_parameters["val_k_factor"] = 2
standalone_parameters["test_k_factor"] = 2
standalone_parameters["n_epoch"] = 50
standalone_parameters["patience"] = 10
standalone_parameters["criteria_for_best"] = "source_loss"
standalone_parameters["datasets"] = [
{
"labels": ALL_SERIAL_NUMBERS,
"domains": ALL_DISTANCES_FEET_NARROWED,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl"),
"source_or_target_dataset": "source",
"x_transforms": ["unit_mag", "minus_two"],
"episode_transforms": [],
"domain_prefix": "ORACLE_"
},
{
"labels": ALL_NODES,
"domains": ALL_DAYS,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
"source_or_target_dataset": "target",
"x_transforms": ["unit_power", "times_zero"],
"episode_transforms": [],
"domain_prefix": "CORES_"
}
]
standalone_parameters["torch_default_dtype"] = "torch.float32"
standalone_parameters["x_net"] = [
{"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}},
{"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":256}},
{"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features":256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
]
# Parameters relevant to results
# These parameters will basically never need to change
standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10
standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth"
# Parameters
parameters = {
"experiment_name": "tl_1v2:cores-oracle.run1.limited",
"device": "cuda",
"lr": 0.0001,
"n_shot": 3,
"n_query": 2,
"train_k_factor": 3,
"val_k_factor": 2,
"test_k_factor": 2,
"torch_default_dtype": "torch.float32",
"n_epoch": 50,
"patience": 3,
"criteria_for_best": "target_accuracy",
"x_net": [
{"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 256]}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 1,
"out_channels": 256,
"kernel_size": [1, 7],
"bias": False,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 256}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 256,
"out_channels": 80,
"kernel_size": [2, 7],
"bias": True,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features": 256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
],
"NUM_LOGS_PER_EPOCH": 10,
"BEST_MODEL_PATH": "./best_model.pth",
"n_way": 16,
"datasets": [
{
"labels": [
"1-10.",
"1-11.",
"1-15.",
"1-16.",
"1-17.",
"1-18.",
"1-19.",
"10-4.",
"10-7.",
"11-1.",
"11-14.",
"11-17.",
"11-20.",
"11-7.",
"13-20.",
"13-8.",
"14-10.",
"14-11.",
"14-14.",
"14-7.",
"15-1.",
"15-20.",
"16-1.",
"16-16.",
"17-10.",
"17-11.",
"17-2.",
"19-1.",
"19-16.",
"19-19.",
"19-20.",
"19-3.",
"2-10.",
"2-11.",
"2-17.",
"2-18.",
"2-20.",
"2-3.",
"2-4.",
"2-5.",
"2-6.",
"2-7.",
"2-8.",
"3-13.",
"3-18.",
"3-3.",
"4-1.",
"4-10.",
"4-11.",
"4-19.",
"5-5.",
"6-15.",
"7-10.",
"7-14.",
"8-18.",
"8-20.",
"8-3.",
"8-8.",
],
"domains": [1, 2, 3, 4, 5],
"num_examples_per_domain_per_label": -1,
"pickle_path": "/root/csc500-main/datasets/cores.stratified_ds.2022A.pkl",
"source_or_target_dataset": "source",
"x_transforms": [],
"episode_transforms": [],
"domain_prefix": "CORES_",
},
{
"labels": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"domains": [32, 38, 8, 44, 14, 50, 20, 26],
"num_examples_per_domain_per_label": 2000,
"pickle_path": "/root/csc500-main/datasets/oracle.Run1_10kExamples_stratified_ds.2022A.pkl",
"source_or_target_dataset": "target",
"x_transforms": [],
"episode_transforms": [],
"domain_prefix": "ORACLE.run1_",
},
],
"dataset_seed": 500,
"seed": 500,
}
# Set this to True if you want to run this template directly
STANDALONE = False
if STANDALONE:
print("parameters not injected, running with standalone_parameters")
parameters = standalone_parameters
if not 'parameters' in locals() and not 'parameters' in globals():
raise Exception("Parameter injection failed")
#Use an easy dict for all the parameters
p = EasyDict(parameters)
if "x_shape" not in p:
p.x_shape = [2,256] # Default to this if we dont supply x_shape
supplied_keys = set(p.keys())
if supplied_keys != required_parameters:
print("Parameters are incorrect")
if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters))
if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys))
raise RuntimeError("Parameters are incorrect")
###################################
# Set the RNGs and make it all deterministic
###################################
np.random.seed(p.seed)
random.seed(p.seed)
torch.manual_seed(p.seed)
torch.use_deterministic_algorithms(True)
###########################################
# The stratified datasets honor this
###########################################
torch.set_default_dtype(eval(p.torch_default_dtype))
###################################
# Build the network(s)
# Note: It's critical to do this AFTER setting the RNG
###################################
x_net = build_sequential(p.x_net)
start_time_secs = time.time()
p.domains_source = []
p.domains_target = []
train_original_source = []
val_original_source = []
test_original_source = []
train_original_target = []
val_original_target = []
test_original_target = []
# global_x_transform_func = lambda x: normalize(x.to(torch.get_default_dtype()), "unit_power") # unit_power, unit_mag
# global_x_transform_func = lambda x: normalize(x, "unit_power") # unit_power, unit_mag
def add_dataset(
labels,
domains,
pickle_path,
x_transforms,
episode_transforms,
domain_prefix,
num_examples_per_domain_per_label,
source_or_target_dataset:str,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
):
if x_transforms == []: x_transform = None
else: x_transform = get_chained_transform(x_transforms)
if episode_transforms == []: episode_transform = None
else: raise Exception("episode_transforms not implemented")
episode_transform = lambda tup, _prefix=domain_prefix: (_prefix + str(tup[0]), tup[1])
eaf = Episodic_Accessor_Factory(
labels=labels,
domains=domains,
num_examples_per_domain_per_label=num_examples_per_domain_per_label,
iterator_seed=iterator_seed,
dataset_seed=dataset_seed,
n_shot=n_shot,
n_way=n_way,
n_query=n_query,
train_val_test_k_factors=train_val_test_k_factors,
pickle_path=pickle_path,
x_transform_func=x_transform,
)
train, val, test = eaf.get_train(), eaf.get_val(), eaf.get_test()
train = Lazy_Iterable_Wrapper(train, episode_transform)
val = Lazy_Iterable_Wrapper(val, episode_transform)
test = Lazy_Iterable_Wrapper(test, episode_transform)
if source_or_target_dataset=="source":
train_original_source.append(train)
val_original_source.append(val)
test_original_source.append(test)
p.domains_source.extend(
[domain_prefix + str(u) for u in domains]
)
elif source_or_target_dataset=="target":
train_original_target.append(train)
val_original_target.append(val)
test_original_target.append(test)
p.domains_target.extend(
[domain_prefix + str(u) for u in domains]
)
else:
raise Exception(f"invalid source_or_target_dataset: {source_or_target_dataset}")
for ds in p.datasets:
add_dataset(**ds)
# from steves_utils.CORES.utils import (
# ALL_NODES,
# ALL_NODES_MINIMUM_1000_EXAMPLES,
# ALL_DAYS
# )
# add_dataset(
# labels=ALL_NODES,
# domains = ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"cores_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle1_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62,56}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle2_{u}"
# )
# add_dataset(
# labels=list(range(19)),
# domains = [0,1,2],
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "metehan.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"met_{u}"
# )
# # from steves_utils.wisig.utils import (
# # ALL_NODES_MINIMUM_100_EXAMPLES,
# # ALL_NODES_MINIMUM_500_EXAMPLES,
# # ALL_NODES_MINIMUM_1000_EXAMPLES,
# # ALL_DAYS
# # )
# import steves_utils.wisig.utils as wisig
# add_dataset(
# labels=wisig.ALL_NODES_MINIMUM_100_EXAMPLES,
# domains = wisig.ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "wisig.node3-19.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"wisig_{u}"
# )
###################################
# Build the dataset
###################################
train_original_source = Iterable_Aggregator(train_original_source, p.seed)
val_original_source = Iterable_Aggregator(val_original_source, p.seed)
test_original_source = Iterable_Aggregator(test_original_source, p.seed)
train_original_target = Iterable_Aggregator(train_original_target, p.seed)
val_original_target = Iterable_Aggregator(val_original_target, p.seed)
test_original_target = Iterable_Aggregator(test_original_target, p.seed)
# For CNN We only use X and Y. And we only train on the source.
# Properly form the data using a transform lambda and Lazy_Iterable_Wrapper. Finally wrap them in a dataloader
transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only
train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)
val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)
test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)
train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)
val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)
test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)
datasets = EasyDict({
"source": {
"original": {"train":train_original_source, "val":val_original_source, "test":test_original_source},
"processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source}
},
"target": {
"original": {"train":train_original_target, "val":val_original_target, "test":test_original_target},
"processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target}
},
})
from steves_utils.transforms import get_average_magnitude, get_average_power
print(set([u for u,_ in val_original_source]))
print(set([u for u,_ in val_original_target]))
s_x, s_y, q_x, q_y, _ = next(iter(train_processed_source))
print(s_x)
# for ds in [
# train_processed_source,
# val_processed_source,
# test_processed_source,
# train_processed_target,
# val_processed_target,
# test_processed_target
# ]:
# for s_x, s_y, q_x, q_y, _ in ds:
# for X in (s_x, q_x):
# for x in X:
# assert np.isclose(get_average_magnitude(x.numpy()), 1.0)
# assert np.isclose(get_average_power(x.numpy()), 1.0)
###################################
# Build the model
###################################
# easfsl only wants a tuple for the shape
model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=tuple(p.x_shape))
optimizer = Adam(params=model.parameters(), lr=p.lr)
###################################
# train
###################################
jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)
jig.train(
train_iterable=datasets.source.processed.train,
source_val_iterable=datasets.source.processed.val,
target_val_iterable=datasets.target.processed.val,
num_epochs=p.n_epoch,
num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,
patience=p.patience,
optimizer=optimizer,
criteria_for_best=p.criteria_for_best,
)
total_experiment_time_secs = time.time() - start_time_secs
###################################
# Evaluate the model
###################################
source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)
target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)
source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)
target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)
history = jig.get_history()
total_epochs_trained = len(history["epoch_indices"])
val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))
confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)
per_domain_accuracy = per_domain_accuracy_from_confusion(confusion)
# Add a key to per_domain_accuracy for if it was a source domain
for domain, accuracy in per_domain_accuracy.items():
per_domain_accuracy[domain] = {
"accuracy": accuracy,
"source?": domain in p.domains_source
}
# Do an independent accuracy assessment JUST TO BE SURE!
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)
# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)
# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)
# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)
# assert(_source_test_label_accuracy == source_test_label_accuracy)
# assert(_target_test_label_accuracy == target_test_label_accuracy)
# assert(_source_val_label_accuracy == source_val_label_accuracy)
# assert(_target_val_label_accuracy == target_val_label_accuracy)
experiment = {
"experiment_name": p.experiment_name,
"parameters": dict(p),
"results": {
"source_test_label_accuracy": source_test_label_accuracy,
"source_test_label_loss": source_test_label_loss,
"target_test_label_accuracy": target_test_label_accuracy,
"target_test_label_loss": target_test_label_loss,
"source_val_label_accuracy": source_val_label_accuracy,
"source_val_label_loss": source_val_label_loss,
"target_val_label_accuracy": target_val_label_accuracy,
"target_val_label_loss": target_val_label_loss,
"total_epochs_trained": total_epochs_trained,
"total_experiment_time_secs": total_experiment_time_secs,
"confusion": confusion,
"per_domain_accuracy": per_domain_accuracy,
},
"history": history,
"dataset_metrics": get_dataset_metrics(datasets, "ptn"),
}
ax = get_loss_curve(experiment)
plt.show()
get_results_table(experiment)
get_domain_accuracies(experiment)
print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"])
print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"])
json.dumps(experiment)
###Output
_____no_output_____ |
iffts.ipynb | ###Markdown
Examine the psd files and compute iffts for them.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from astropy.io import fits
plotpar = {'axes.labelsize': 18,
'font.size': 10,
'legend.fontsize': 18,
'xtick.labelsize': 18,
'ytick.labelsize': 18,
'text.usetex': True}
plt.rcParams.update(plotpar)
###Output
_____no_output_____
###Markdown
Load the first fits file.
###Code
filename = "data/weighted_power_spectra/kplr001430163_kasoc-wpsd_llc_v1.fits"
with fits.open(filename) as hdul:
hdul.info()
hdr = hdul[0].header
data = hdul[1].data
hdr;
freq, power = zip(*data)
freq, power = np.array(freq), np.array(power)
plt.plot(freq, power)
s = np.fft.ifft(power)
t = np.arange(len(power))
plt.plot(t, s.real, 'b-', t, s.imag, 'r--')
plt.legend(('real', 'imaginary'))
###Output
_____no_output_____
###Markdown
Now load all the light curves and save the real part of the ifft.
###Code
import glob
df = pd.read_csv("training_labels.csv")
filenames_v1 = ["data/weighted_power_spectra/kplr{0}_kasoc-wpsd_llc_v1.fits".format(str(kepid).zfill(9))
for kepid in df.kepid.values]
filenames_v2 = ["data/weighted_power_spectra/kplr{0}_kasoc-wpsd_llc_v2.fits".format(str(kepid).zfill(9))
for kepid in df.kepid.values]
filenames_v3 = ["data/weighted_power_spectra/kplr{0}_kasoc-wpsd_llc_v3.fits".format(str(kepid).zfill(9))
for kepid in df.kepid.values]
def extract_ifft_real(filename_v1, filename_v2, filename_v3):
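    # Try the newest KASOC release first (v3), then fall back to v2 and v1; if no
    # version of the file exists, return zeros with flag=0 so the star can be masked later.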
try:
with fits.open(filename_v3) as hdul:
data = hdul[1].data
flag = 1
except FileNotFoundError:
try:
with fits.open(filename_v2) as hdul:
data = hdul[1].data
flag = 1
except FileNotFoundError:
try:
with fits.open(filename_v1) as hdul:
data = hdul[1].data
flag = 1
except FileNotFoundError:
print("File not found", filename_v1, "possibly only short cadence version available. Skipping")
freq, power = np.arange(100), np.zeros(100)
data = zip(freq, power)
flag = 0
freq, power = zip(*data)
freq, power = np.array(freq), np.array(power)
s = np.fft.ifft(power)
return s.real, s.imag, flag
import os
reals, flags = [], []
N = 200
for i, filename_v1 in enumerate(filenames_v1):
print(i, "of", len(filenames_v1))
ifft_file_name = "data/ifft_real/{}_ifft.csv".format(df.kepid.values[i])
    if os.path.exists(ifft_file_name):
        ifft_df = pd.read_csv(ifft_file_name)
        real, imag = ifft_df["real"].values, ifft_df["imag"].values
        flag = 1  # a cached ifft exists, so this star has usable data
else:
real, imag, flag = extract_ifft_real(filename_v1, filenames_v2[i], filenames_v3[i])
ifft = pd.DataFrame(dict({"real": real, "imag": imag}))
ifft.to_csv(ifft_file_name)
reals.append(np.abs(real))
flags.append(flag)
pixel1 = np.array([reals[i][0] for i in range(N)])
m = np.array(flags[:N]) > 0  # mask out stars whose power spectrum file was missing
len(filenames_v1)
plt.plot(df.teff.values[:N][m], np.log10(pixel1[m]), ".")
plt.xlim(4500, 7000)
plt.xlabel("$T_{\mathrm{eff}}~[K]$")
plt.ylabel("$\log_{10}(\mathrm{Pixel~\#1})$")
plt.subplots_adjust(bottom=.15)
plt.savefig("pixel1_teff")
plt.plot(df.logg.values[:N][m], np.log10(pixel1[m]), ".")
plt.xlabel("$\log(g)$")
plt.ylabel("$\log_{10}(\mathrm{Pixel~\#1})$")
plt.subplots_adjust(bottom=.15)
plt.savefig("pixel1_logg")
plt.plot(df.age.values[:N][m], np.log10(pixel1[m]), ".")
plt.xlabel("$\mathrm{Age~[Gyr]}$")
plt.ylabel("$\log_{10}(\mathrm{Pixel~\#1})$")
plt.subplots_adjust(bottom=.15)
plt.savefig("pixel1_age")
###Output
/Users/ruthangus/anaconda/lib/python3.5/site-packages/ipykernel/__main__.py:1: RuntimeWarning: divide by zero encountered in log
if __name__ == '__main__':
|
opt_scipy/Optimizing_Scientific_Python.ipynb | ###Markdown
Optimizing Scientific Python + other neat tools to make your life easier!

Nelson Liu, August 22, 2016

[download tutorial materials here](https://github.com/nelson-liu/talks_and_tutorials/tree/master/opt_scipy)

> "Premature optimization is the root of all evil"
>
> ~ Donald Knuth

Optimized code is more complicated, which leads to it being harder to debug if problems arise! Optimizing too early leads to greater development costs further down the road.

Outline
- Motivating Example / Early Optimization Steps
  - using list comprehensions and NumPy
- Why Python?
- Deeper Optimization
  - "Low" hanging fruit: JIT Compilers - Numba and PyPy
  - More complex: C extensions - Cython
- Multiprocessing and Multithreading
  - Cython and the GIL

Motivating Example: Cosine Distance / Analogies
- Given two vectors, find the angle between them
- Commonly used in semantic similarity tasks; words represented by vectors with a smaller angle of separation are generally "closer" in semantic meaning
- Given vectors $A$ and $B$, the cosine similarity is calculated with the dot product and magnitude:

$$similarity = cos(\theta) = \frac{A \cdot B}{\left|\left|A\right|\right| \left|\left|B\right|\right|}$$

- The analogy prediction task is defined as follows: given a word pair ($w_a$, $w_b$) (e.g. man, woman) and another word $w_c$ (e.g. king), predict the best word $w_d$ (e.g. queen) such that the pair ($w_c$, $w_d$) has a similar relation to ($w_a$, $w_b$). Namely, to get the solution for an analogy $w_d$:

$$X = vector(w_b) - vector(w_a) + vector(w_c)$$

$$w_d = argmax_{w \in V,\ w \notin \{w_a, w_b, w_c\}} \; cos(vector(w), X)$$

For didactic purposes, we'll implement an analogy solver in plain Python first, then go about ways to speed it up. Let's start by getting some GloVe vectors.
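Before that, a quick sanity check of the formulas above on made-up 3-dimensional vectors (the numbers are arbitrary and purely illustrative; the real GloVe vectors used below are 300-dimensional):

```python
import numpy as np

A = np.array([1.0, 2.0, 3.0])  # toy stand-in for vector(w_a)
B = np.array([2.0, 1.0, 3.0])  # toy stand-in for vector(w_b)
C = np.array([0.5, 2.5, 1.0])  # toy stand-in for vector(w_c)

X = B - A + C                  # analogy target vector
candidate = np.array([1.5, 1.5, 1.0])  # toy stand-in for some vocabulary word w
cos_sim = np.dot(candidate, X) / (np.linalg.norm(candidate) * np.linalg.norm(X))
print(cos_sim)                 # lies in [-1, 1]; the best w_d maximizes this over the vocabulary
```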
###Code
try:
import cPickle as pickle
except ImportError:
import pickle
import numpy as np
import urllib2
# if you have pickled GloVe vectors already, feel free to replace ``None`` with
# a path to a dictionary of word vectors.
# The pickle file I've provided only includes words in the English language as
# judged by an online dictionary.
local_path = "./data/glove.840B.300d.pkl"
if local_path == None:
# download pickled word vectors
pickled_vectors = urllib2.urlopen("http://www.nelsonliu.me/files/glove.840B.300d.pkl")
glove_vecs = pickle.load(pickled_vectors)
else:
glove_vecs = pickle.load(open(local_path,"rb"))
vocabulary = glove_vecs.keys()
# the dictionary is {word:list}, let's make it {word:ndarray}
# feel free to comment this out if you don't need it
for word in vocabulary:
glove_vecs[word] = np.array(glove_vecs[word])
###Output
_____no_output_____
###Markdown
Let's write a preliminary naïve Python implementation
###Code
import math
def cosine_sim_py_naive(A, B):
# calculate the dot product
dot_prod = 0
mag_A = 0
mag_B = 0
for i in xrange(len(A)):
dot_prod += A[i]*B[i]
mag_A += A[i]*A[i]
mag_B += B[i]*B[i]
mag_A = math.sqrt(mag_A)
mag_B = math.sqrt(mag_B)
return dot_prod / (mag_A * mag_B)
###Output
_____no_output_____
###Markdown
We'll use this method to calculate analogies given `w_a`, `w_b`, `w_c`, and a cosine similarity function.
###Code
def calculate_analogies(w_a, w_b, w_c, cosine_sim_func):
# get the vectors corresponding to the words
A = glove_vecs[w_a]
B = glove_vecs[w_b]
C = glove_vecs[w_c]
X = np.add(np.subtract(B, A), C)
max_cosine_similarity = -1
w_d = None
for w_d_candidate in vocabulary:
if (w_d_candidate == w_a or
w_d_candidate == w_b or
w_d_candidate == w_c):
continue
D_candidate = glove_vecs[w_d_candidate]
cos_similarity = cosine_sim_func(X, D_candidate)
if cos_similarity > max_cosine_similarity:
max_cosine_similarity = cos_similarity
w_d = w_d_candidate
return w_d
###Output
_____no_output_____
###Markdown
Let's time this baseline, wow-I-can't-believe-someone-wrote-this implementation
###Code
# this code snippet might take a while to run...
%timeit calculate_analogies("man", "woman", "king", cosine_sim_py_naive)
###Output
1 loop, best of 3: 50.9 s per loop
###Markdown
Wow, that's atrocious! Let's think about some basic ways to optimize it.

For some background: this very task was actually one part of an undergrad NLP homework assignment; we had to solve 10k+ analogies using at least 2 types of pre-trained embeddings. With a runtime like the above for just one analogy, it's no wonder some students spent several days running their code!

List comprehensions are your friend!
- Not only do they make your code more concise, they're faster!
###Code
def cosine_sim_py_comprehension(A, B):
# calculate the dot product
results = [sum(x) for x in zip(*[(i*j, i*i, j*j) for i,j in zip(A, B)])]
dot_prod = results[0]
mag_A = math.sqrt(results[1])
mag_B = math.sqrt(results[2])
return dot_prod / (mag_A * mag_B)
%timeit calculate_analogies("man", "woman", "king", cosine_sim_py_comprehension)
###Output
1 loop, best of 3: 31.3 s per loop
###Markdown
That's a pretty sizeable speedup! Now, it'd only take us 81 hours! Jests aside, list comprehensions do offer significant speedup over verbose loops. However, keep in mind that complex list comprehensions can be hard to interpret when you dust off your code 3 months down the line.
- A big reason why this code is so slow with loops is Python's dynamic type checking.
- Let's take a look at some other strategies we can use.

Use NumPy Functions

Why are NumPy arrays / their functions fast?
- Densely packed, and of homogeneous type
  - On the other hand, Python lists are arrays of pointers to objects
  - This gives NumPy the advantage of [locality of reference](https://en.wikipedia.org/wiki/Locality_of_reference)
- Most operations are implemented in C
  - Avoids costly dynamic type checking, which really made our previous implementation slow.
- Gateway to more optimizations with things like Numba, etc.

Let's use NumPy functions to try to speed this up.
###Code
def cosine_sim_py_numpy(A, B):
# calculate the dot product
dot_prod = np.dot(A,B)
# calculate the product of magnitudes
mag_A = np.linalg.norm(A)
mag_B = np.linalg.norm(B)
return dot_prod / (mag_A * mag_B)
%timeit calculate_analogies("man", "woman", "king", cosine_sim_py_numpy)
###Output
1 loop, best of 3: 1.56 s per loop
###Markdown
NumPy functions gave us a great speedup, as `np.dot` and `np.linalg.norm` use [broadcasting](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) to loop over the data structure at the C level. Let's try some libraries that directly implement cosine similarity or similar routines, and compare their performance.

Scipy Cosine Distance-Based Implementation
###Code
from scipy import spatial
def cosine_sim_py_scipy(A, B):
return 1 - spatial.distance.cosine(A, B)
%timeit calculate_analogies("man", "woman", "king", cosine_sim_py_scipy)
###Output
1 loop, best of 3: 5.42 s per loop
###Markdown
scikit-learn Cosine Similarity Implementation
###Code
from sklearn.metrics.pairwise import cosine_similarity
def cosine_sim_py_sklearn(A, B):
return cosine_similarity(A.reshape(1,-1), B.reshape(1,-1))[0][0]
# this is actually surprisingly bad, i've taken it upon myself to see why this is happening
%timeit calculate_analogies("man", "woman", "king", cosine_sim_py_sklearn)
###Output
1 loop, best of 3: 22.1 s per loop
###Markdown
Checkpoint

It seems like our custom method using numpy is the fastest; this makes sense, since the implementations in scikit-learn and scipy have to cater to more than just `ndarray`s and thus spend some time doing validation / other checks.

At this point, we've reached the point that most developers / researchers would get to. It's likely that at this point, we'd just run our analogy solver and go sleep / do other things for a few hours or days. However, we can do much better than our current performance by tapping into some other external tools.

Intermezzo: Why bother using Python in the first place?

If all you care about is performance, you should not be using Python; bare-metal languages like C / C++ are probably more suited to that singular need. However, rarely do we only care about performance. Development speed, maintainability, usability, and scalability are all important considerations.
- Python (or mostly-Python) code is easier to read, maintain, and contribute to!
  - This is especially important for replicability
- Python-based tools are easy to run anywhere
  - (Generally) No complicated install or build process required, just setup a {virtual|conda}env, pip install the things you need, and off you go!

JIT Compilers -- minimal effort, potentially lots of reward
- Good first place to start if you don't want to work a lot (so, everyone)
- Requires minimal change to your code.
- Two main options in the Python world:
  - PyPy: fast, compliant, alternate implementation of the Python language (2.7, 3.5)
  - Numba: NumPy-aware dynamic Python compiler using LLVM

PyPy, in short

> "If you want your code to run faster, you should probably just use PyPy."
>
> ~ Guido van Rossum (creator of Python)

- Essentially Python, but with a JIT compiler.
- Passes the CPython (the standard implementation of Python) test suite

Unfortunately, it's not fully compatible with NumPy yet, which makes it of limited use to us. It's quite interesting though, and may be game-changing when the SciPy stack is supported.

Numba

> Numba is a mechanism for producing machine code from Python syntax and typed data structures such as those that exist in NumPy.
>
> ~ [Numba Repo](https://github.com/numba/numba)

- Requires minimal modification to code - just add a `jit` decorator to the methods you want to compile

Installation / Setup
- Numba uses LLVM, a compilation framework
  - which means you need LLVM to run numba, which prevents you from simply doing `pip install numba` in most cases.
  - I hear it works quite well with `conda` though, if you use it.

Installation on OS X with `brew`

```
brew install homebrew/versions/llvm38 --with-rtti
git clone https://github.com/numba/llvmlite
cd llvmlite
pip install enum34
LLVM_CONFIG=/usr/local/Cellar/llvm38/3.8.0/bin/llvm-config-3.8 python setup.py install
LLVM_CONFIG=/usr/local/Cellar/llvm38/3.8.0/bin/llvm-config-3.8 pip install numba
cd ..
rm -rf llvmlite
```

Installation on Linux
- The instructions below assume you have `clang` and thus `llvm` on your machine
  - If you don't, see if you can ask a system admin to install it.
  - Alternatively, you can build `clang` (`llvm` is a dep of clang) in your local directory
    - This sounds like a horrible experience, though. If you do end up doing such a thing, please let me know!
    - I did it, and it wasn't too bad; it just took a while. However, I had issues installing numba in a virtualenv.
    - But I did manage to get it working with `conda install numba` out of the box, so there's that.
```
git clone https://github.com/numba/llvmlite
cd llvmlite
pip install enum34
LLVM_CONFIG=<path to llvm-config> python setup.py install
LLVM_CONFIG=<path to llvm-config> pip install numba
cd ..
rm -rf llvmlite
```

More info [here](http://stackoverflow.com/questions/28782512/getting-python-numba-working-on-ubuntu-14-10-or-fedora-21-with-python-2-7). I haven't tried a Linux install yet, so please let me know if you get one to work!

Let's try out Numba on our code above.
###Code
from numba.decorators import jit
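# calling jit() on an existing function is equivalent to defining it with an @jit
# decorator; here we wrap the naive pure-Python implementation without re-defining it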
cosine_sim_py_numba = jit(cosine_sim_py_naive)
%timeit calculate_analogies("man", "woman", "king", cosine_sim_py_numba)
###Output
1 loop, best of 3: 280 ms per loop
###Markdown
A simple Numba wrapper sped up our original code by ~175x! This is even ~5.5x faster than the function with NumPy. With Numba, one could expect to see even greater increases in speed if you have a bunch of nested for loops or the like, since they'd all get turned into machine code.

It's worth noting that all the code we've written so far is still pure Python. Our Numba function is probably very close to the extent you can push this function without mathematical / algorithmic tricks.

Some Numba Tips
- It's important to think about what you're optimizing. For example, notice that we chose to optimize our naive Python implementation. Why not optimize our fast numpy implementation, to make it even faster?
###Code
cosine_sim_py_numpy_numba = jit(cosine_sim_py_numpy)
%timeit calculate_analogies("man", "woman", "king", cosine_sim_py_numpy_numba)
###Output
1 loop, best of 3: 2.12 s per loop
###Markdown
As you can see, running the NumPy implementation with Numba gave us no performance boosts; in fact, our code got a bit slower!
- If you think about it though, this makes perfect sense
  - The numpy operations internally already use C functions, so the JIT does not help it at all
  - In extreme cases (e.g. this one), the small performance cost added by using Numba is even greater than the optimizations, because the original code is already compiled.
- Just calling `jit` directly on the callable generally leads to pretty good results, but there are cases where slowdowns may occur because the C code is forced to fall back on a Python object or the like.
- The fewer Python objects you use, the more `jit` can do for you!
  - numpy arrays are an exception to this rule, because we'll see later that it's quite simple to use them in C.

C/C++ Extensions to Python
- It's possible to write C code that can be imported to Python as a module, which is quite useful from an efficiency standpoint.
- Python calls these "extensions" or "extension modules"
- This is what they (roughly) look like:

Python Code

```
import some_c_module
result = some_c_module.some_method()
```

C Code

```
static PyObject * some_c_module(PyObject *self)
{
    // some method
    return Py_BuildValue("i", result);
}
```

- This is generally pretty painful to do, and I wouldn't advise writing C code for use in Python in this way. However, making your own C extensions can be quite useful if you have pre-written code in C and want to have a Python wrapper. There's a pretty good tutorial on doing that [here](http://dan.iel.fm/posts/python-c-extensions/).
- In most cases, you won't have prewritten C code to use.
  - But you still want to optimize your code!
  - But you don't want to dive down the rabbit hole of C extensions / the Python-C API!

Cython is here for you!

> Cython is an optimising static compiler for both the Python programming language and the extended Cython programming language (based on Pyrex). It makes writing C extensions for Python as easy as Python itself.
>
> [Cython documentation](http://cython.org/)

- It's easy to see that Cython is quite different than Numba.
  - For one, Cython requires a separate compilation step before running your Python code.

Let's try porting our code to Cython
- I'll use the opportunity to demonstrate everything I wish I knew about Cython when I was first starting out.

Cython Implementation
###Code
%%cython
from __future__ import division
# "cimport" is used to import special compile-time information about the numpy module
cimport numpy as np
cimport cython
from libc.math cimport sqrt
@cython.boundscheck(False) # turn off bounds-checking for entire function
@cython.wraparound(False) # turn off wrap around for entire function
def cosine_sim_cython(double[:] A, double[:] B):
cdef size_t i
cdef double dot_product = 0.0
cdef double mag_A = 0.0
cdef double mag_B = 0.0
# let's rewrite the dot product without numpy
# and calculate the magnitude in the same loop
for i in range(A.shape[0]):
dot_product += A[i] * B[i]
mag_A += A[i] * A[i]
mag_B += B[i] * B[i]
mag_A = sqrt(mag_A)
    mag_B = sqrt(mag_B)
return dot_product / (mag_A * mag_B)
%timeit calculate_analogies("man", "woman", "king", cosine_sim_cython)
###Output
1 loop, best of 3: 448 ms per loop
###Markdown
Note that the performance of our preliminary Cython implementation was slower than Numba, and we did a lot more work too!
- However, Cython is powerful because you can choose how low-level you want your code to be. The code above, while it is still Cython, makes use of Python objects which slows down the performance.

What if we completely turned off Python?
- One way of doing this is by removing the GIL. This also gives us the benefit of having easily parallelizable code.

What's the Global Interpreter Lock (GIL)?
- The GIL is a mutex that prevents multiple native threads from executing Python bytecodes at once
  - In English, it's a "lock" that prevents a Python program from executing multiple threads at the same time.
- This is necessary because Python's memory is not thread safe!
- This prevents efficient multi-threading in Python.
  - People use multiprocessing to get around this, but spawning processes is more expensive than spawning threads; processes have different memory pools and have to pass objects between each other.
- In Cython, you can remove the GIL to easily run your code at the C level with multithreading.

Removing the GIL
- In order for code to run with `nogil`, it must satisfy several constraints:
  - Only uses statically typed variables of C primitives (e.g. int, long, double, size_t)
  - Arrays must be represented using pointers (goodbye, numpy arrays!)
  - No Python objects or Python methods can be used at all
  - All functions must have `nogil` at the end of their definition.

But wait, how are we going to use our data without numpy arrays?
- Fortunately, you can easily extract a pointer to the underlying numpy array data and its dtype: `cdef dtype* X_pointer = X_numpyarray.data`
- Since we're using doubles here, we have to set the type as such: `cdef double* X_pointer = X_numpyarray.data`
- Lastly, the original numpy array must be cast as such, explicitly: `cdef double* X_pointer = <double*> (<np.ndarray> X_numpyarray).data`

An alternate solution: `MemoryViews`
- Typed `MemoryViews` allow you to efficiently access data buffers (e.g. those underlying numpy arrays) without Python overhead
- `MemoryView` array of doubles: `cdef double [:]`
- `MemoryView` 2d array of ints: `cdef int [:, :]`
- The Cython userguide has [a great page](http://cython.readthedocs.io/en/latest/src/userguide/memoryviews.html) explaining and showing examples of using `MemoryViews`

Let's write a Cython `nogil`-compatible version of our analogy solver
- We have to be a bit creative with our data because strings are Python objects, and thus not allowed in `nogil`!
###Code
def calculate_analogies_cython(w_a, w_b, w_c):
# get the vectors corresponding to the words
A = glove_vecs[w_a]
B = glove_vecs[w_b]
C = glove_vecs[w_c]
nd_vectors = np.array(glove_vecs.values())
return glove_vecs.keys()[calculate_analogies_cython_helper(w_a, w_b, w_c, A, B, C, nd_vectors)]
###Output
_____no_output_____
###Markdown
And now for the `nogil` Cython methods
###Code
%%cython
from __future__ import division
cimport cython
import numpy as np
cimport numpy as np
from libc.math cimport sqrt
from cython.parallel cimport prange, parallel
@cython.boundscheck(False)
def calculate_analogies_cython_helper(str w_a, str w_b, str w_c,
double [:] A_memview, double [:] B_memview,
double [:] C_memview, double [:,:] vectors_memview):
# build the X array for comparison
cdef double[:] X_memview = np.add(np.subtract(B_memview, A_memview), C_memview)
# hardcoded variable for dimensions, figure it out dynamically if i have time to
# come back and change it
cdef size_t dimensions = 300
# keep track of the max cosine similarity and the index of its associated w_d
cdef double max_cosine_similarity = -1.0
cdef size_t w_d_idx = -1
# temp variable for the word vector we're currently comparing
cdef double[:] d_candidate
cdef double[:] similarities
cdef double d_cos_similarity
# keep track of the number of vectors
cdef size_t num_vectors = vectors_memview.shape[0]
# temp variable for iteration, since we can't dynamically generate them
# in the loop declaration
cdef size_t i = 0
with nogil:
for i in range(num_vectors):
if(memview_equals(vectors_memview[i], A_memview, dimensions)
or memview_equals(vectors_memview[i], B_memview, dimensions)
or memview_equals(vectors_memview[i], C_memview, dimensions)):
continue
d_cos_similarity = cosine_sim_cython_nogil(vectors_memview[i], X_memview, dimensions)
if d_cos_similarity > max_cosine_similarity:
max_cosine_similarity = d_cos_similarity
w_d_idx = i
return w_d_idx
@cython.boundscheck(False)
cdef bint memview_equals(double[:] X, double[:] Y, size_t size) nogil:
cdef size_t i
for i in range(size):
if X[i] != Y[i]:
return 0
return 1
@cython.boundscheck(False)
cdef double cosine_sim_cython_nogil(double[:] A, double[:] B, size_t size) nogil:
cdef size_t i
cdef double dot_product = 0.0
cdef double mag_A = 0.0
cdef double mag_B = 0.0
for i in prange(size, schedule='guided', num_threads=4):
dot_product += A[i] * B[i]
mag_A += A[i] * A[i]
mag_B += B[i] * B[i]
mag_A = sqrt(mag_A)
    mag_B = sqrt(mag_B)
return dot_product / (mag_A * mag_B)
###Output
warning: /Users/nfliu/.ipython/cython/_cython_magic_ec1c2bd3bdb74fe2ac786ce13a912192.pyx:65:10: Unsigned index type not allowed before OpenMP 3.0
###Markdown
Phew, that was a lot of work! Let's time it
###Code
%timeit calculate_analogies_cython("man", "woman", "king")
###Output
1 loop, best of 3: 398 ms per loop
|
.ipynb_checkpoints/Deep-seq2seq-LSTM-checkpoint.ipynb | ###Markdown
Process Joint Labels
###Code
# Imports needed by this notebook (assumed; the checkpoint does not include a setup cell)
import os
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from scipy.io import loadmat
from tensorflow.contrib import rnn, legacy_seq2seq
directory = 'labels/'
frames = []
for filename in os.listdir(directory):
annotations = loadmat(directory + filename)
if annotations['action'][0] == 'squat':
# Create Nx13x2 joint labels for each video
frames.append(np.stack([annotations['x'], annotations['y']], axis=2))
# Keep only videos with more than 70 image frames
top_frames = []
for i in range(231):
if frames[i].shape[0] > 70:
top_frames.append(frames[i])
frames_train = top_frames[:150]
frames_test = top_frames[150:]
len(top_frames)
###Output
_____no_output_____
###Markdown
LSTM Params
###Code
L = 13 # num of joints
k = 50 # training num
T = 10 # prediction num
H = 1024 # hidden layer size
def RNN(p, weights, biases):
# p should be shape (batch_size, T, 2 * L)
# unstack gets us a list of T (batch_size, 2 * L) tensors
stacked_lstm = rnn.MultiRNNCell([rnn.BasicLSTMCell(H, forget_bias=1.0) for _ in range(2)])
batch_size = tf.shape(p)[0]
p = tf.unstack(p, k, axis=1)
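    # decoder inputs: the last observed frame p[-1] repeated T times, so the network
    # must roll the pose forward on its own rather than being fed ground-truth frames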
outputs, states = legacy_seq2seq.basic_rnn_seq2seq(p, [p[-1]]*T, stacked_lstm)
# outputs is a list of T (batch_size, H) arrays
# concat outputs is (batch_size * T, H)
concat_outputs = tf.concat(outputs, axis=0)
# predictions is (batch_size * T, 2 * L)
predictions = tf.matmul(concat_outputs, weights) + biases
# reshape into (T, batch_size, 2 * L) then transpose into (batch_size, T, 2 * L)
return tf.transpose(tf.reshape(predictions, (T, batch_size, L * 2)), perm=[1, 0, 2])
tf.reset_default_graph()
# Parameters
learning_rate = 0.001
epochs = 2000
batch_size = 10
n_videos = len(frames_train)
display_step = 50
p_input = tf.placeholder(tf.float32, shape=[None, k, L*2])
p_output = tf.placeholder(tf.float32, shape=[p_input.get_shape()[0], T, L*2])
W = tf.get_variable('W', shape=[H, L*2], dtype=tf.float32, initializer=tf.contrib.layers.xavier_initializer())
b = tf.get_variable('b', shape=[L*2], dtype=tf.float32, initializer=tf.zeros_initializer())
p_output_predicted = RNN(p_input, W, b)
# Define loss and optimizer
loss = tf.reduce_mean(tf.squared_difference(p_output_predicted, p_output))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)
###Output
_____no_output_____
###Markdown
LSTM Training/Validation
###Code
sess = tf.Session()
sess.run(tf.global_variables_initializer())
# saver = tf.train.Saver()
# saver.restore(sess, 'lstm-reg-20000')
mean_losses = []
for epoch in range(epochs):
total_iter = n_videos // batch_size
total_iter = 1
total_loss = 0
for i in range(total_iter):
inputs = []
expected_outputs = []
for frame in frames_train:
start_time = np.random.randint(frame.shape[0] - (k + T) + 1)
inputs.append(frame[start_time : start_time + k].reshape(k, 2 * L))
expected_outputs.append(frame[start_time + k : start_time + k + T].reshape(T, 2 * L))
_, loss_value = sess.run((optimizer, loss), feed_dict={ p_input : np.asarray(inputs), p_output : np.asarray(expected_outputs) })
total_loss += loss_value
mean_loss = total_loss / total_iter
mean_losses.append(mean_loss)
if (epoch + 1) % display_step == 0:
print('epoch %s: loss=%.4f' % (epoch + 1, mean_loss))
inputs = []
expected_outputs = []
for frame in frames_train:
start_time = np.random.randint(frame.shape[0] - (k + T) + 1)
inputs.append(frame[start_time : start_time + k].reshape(k, 2 * L))
expected_outputs.append(frame[start_time + k : start_time + k + T].reshape(T, 2 * L))
output = sess.run((p_output_predicted), feed_dict={ p_input : np.asarray(inputs)})
for i in range(T):
print(np.mean(np.linalg.norm(
output.reshape((1, T, 13, 2))[:,i,:,:] - np.array(expected_outputs).reshape((1, T, 13, 2))[:,i,:,:],
axis=2)))
for i in range(T):
if i % 1 == 0:
image = i
print('T = ', i)
plt.subplot(1,2,1)
plt.imshow(np.zeros((1,1)), cmap = 'gray')
plt.scatter((output[0][image].reshape(13,2)).T[0], (output[0][image].reshape(13,2)).T[1])
plt.subplot(1,2,2)
plt.imshow(np.zeros((1,1)), cmap = 'gray')
plt.scatter((expected_outputs[0][image].reshape(13,2)).T[0], (expected_outputs[0][image].reshape(13,2)).T[1])
plt.show()
for i in range(T):
if i % 1 == 0:
image = i
print('T = ', i)
print((output[0][image].reshape(13,2)).T[0], (output[0][image].reshape(13,2)).T[1])
saver = tf.train.Saver()
saver.save(sess, 'lstm-reg', global_step=20000)
###Output
_____no_output_____ |
.ipynb_checkpoints/curve_fit_binodal_2.0-checkpoint.ipynb | ###Markdown
__Before using the code below, notation:__
1. For the mixture components, a sliced-like list is used (either the liquid volume or the mass ratio can be given), e.g. Ethanol: Water: Toluene = [::]

Retrieve the wavelength for the binodal line

Import packages and set initial parameters for the graphs
###Code
from dataGadgets import *
plt.rcParams['figure.figsize'] = [10,6]
plt.rcParams['figure.dpi'] = 100
# folder that stores the *.csv files for the different mixtures
# Other parameters
plt.rcParams['figure.figsize'] = [12, 8]
plt.rcParams['figure.dpi'] = 100
plt.rcParams['font.size'] = 18
plt.rcParams['axes.labelsize'] = 18
plt.rcParams['xtick.labelsize'] = 14
plt.rcParams['ytick.labelsize'] = 14
plt.rcParams['lines.linewidth'] = 2
###Output
_____no_output_____
###Markdown
Define the folder path for the spectra
###Code
specFold = 'curve_fit_binodal/'
###Output
_____no_output_____
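###Markdown
The cells below rely on helpers imported from `dataGadgets` (`yld_rawdata`, `yld_xy_sliced`, `para_curve`, `yld_curve_fit_Y`). For orientation, here is a minimal sketch of what `para_curve` and `yld_curve_fit_Y` are assumed to look like, based only on how they are called and on the comments in the cells below; the real implementations live in `dataGadgets.py` and may differ:

```python
import numpy as np
from scipy.optimize import curve_fit

def para_curve(x, a, b, c):
    # inverted parabola used to fit the top of the emission peak: y = a*(x + b)**2 + c
    return a * (x + b) ** 2 + c

def yld_curve_fit_Y(model, x, y, init_guess):
    # fit the model to the sliced spectrum and return [fitted y values, wavelength of the peak]
    popt, _ = curve_fit(model, np.asarray(x), np.asarray(y), p0=init_guess)
    fitted = model(np.asarray(x), *popt)
    peak_wavelength = -popt[1]  # the vertex of a*(x + b)**2 + c sits at x = -b
    return [fitted, peak_wavelength]
```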
###Markdown
Set the initial guess for the parameters, plot the original data and the fitted line

For the mixture with the component -- Water: Toluene: Ethanol = [0:1:0]
###Code
# csv file format is 'vol W_ vol T_ vol E_sample no_rep no'
raw_sampleT_01 = yld_rawdata(specFold+'sampleT_pure_01.csv',6)
raw_sampleT_02 = yld_rawdata(specFold+'sampleT_pure_02.csv',6)
raw_sampleT_03 = yld_rawdata(specFold+'sampleT_pure_03.csv',6)
raw_sampleT_04 = yld_rawdata(specFold+'sampleT_pure_04.csv',6)
raw_sampleT_05 = yld_rawdata(specFold+'sampleT_pure_05.csv',6)
wv_sampleT_01,inten_sampleT_01 = yld_xy_sliced(raw_sampleT_01)
wv_sampleT_02,inten_sampleT_02 = yld_xy_sliced(raw_sampleT_02)
wv_sampleT_03,inten_sampleT_03 = yld_xy_sliced(raw_sampleT_03)
wv_sampleT_04,inten_sampleT_04 = yld_xy_sliced(raw_sampleT_04)
wv_sampleT_05,inten_sampleT_05 = yld_xy_sliced(raw_sampleT_05)
# Sliced list for the curve fitting
# yld_xy_sliced takes the start & end indexes and slices the wavelengths & intensities
sampleT_01_bg = 410
sampleT_01_end = 433
wv_fit_sampleT_01,inten_fit_sampleT_01 = yld_xy_sliced(raw_sampleT_01,sampleT_01_bg,sampleT_01_end)
wv_fit_sampleT_02,inten_fit_sampleT_02 = yld_xy_sliced(raw_sampleT_02,sampleT_01_bg,sampleT_01_end)
wv_fit_sampleT_03,inten_fit_sampleT_03 = yld_xy_sliced(raw_sampleT_03,sampleT_01_bg,sampleT_01_end)
wv_fit_sampleT_04,inten_fit_sampleT_04 = yld_xy_sliced(raw_sampleT_04,sampleT_01_bg,sampleT_01_end)
wv_fit_sampleT_05,inten_fit_sampleT_05 = yld_xy_sliced(raw_sampleT_05,sampleT_01_bg,sampleT_01_end)
inten_fit_sampleT = [inten_fit_sampleT_01,inten_fit_sampleT_02,inten_fit_sampleT_03,inten_fit_sampleT_04,inten_fit_sampleT_05]
# initial guess for fitted parameters, may need to be changed
# Initial guess for the curve para_curve = a*((x + b)**2) + c in the form [a,b,c]
# a changes curvature, b transforms in x-direction, c transforms in y-direction
initGuessT = [-0.5,-500,1800]
f1_sampleT = []
max_wav_sampleT = []
for i in range(len(inten_fit_sampleT)):
C_fit = yld_curve_fit_Y(para_curve,wv_fit_sampleT_01,inten_fit_sampleT[i],initGuessT)
f1_individual = C_fit[0]
max_wav_individual = C_fit[1]
f1_sampleT.append(f1_individual)
max_wav_sampleT.append(max_wav_individual)
ave_max_wav_sampleT = sum(max_wav_sampleT)/len(max_wav_sampleT)
std_max_wav_sampleT = np.std(max_wav_sampleT,ddof=1)
fig1 = plt.figure(facecolor='white')
ax1 = plt.axes(frameon=True)
ax1.spines['right'].set_visible(False)
ax1.spines['top'].set_visible(False)
plt.plot(wv_sampleT_05,inten_sampleT_05,marker = '',linestyle = '-',label = 'Spectra of pure Toluene')
plt.plot(wv_fit_sampleT_01,f1_sampleT[0],label = 'Fitted curve')
plt.plot(wv_fit_sampleT_02,f1_sampleT[1],label = 'Fitted curve')
plt.plot(wv_fit_sampleT_03,f1_sampleT[2],label = 'Fitted curve')
plt.plot(wv_fit_sampleT_04,f1_sampleT[3],label = 'Fitted curve')
plt.plot(wv_fit_sampleT_05,f1_sampleT[4],label = 'Fitted curve')
plt.legend(loc = 'best')
plt.title('Prodan Emission Spectra of binodals')
plt.xlabel('Wavelength, $\lambda$ [nm]')
plt.ylabel('Intensity')
plt.show()
print('Ave. maximum wavelength for the sample is:',ave_max_wav_sampleT,'nm')
print('Standard deviation for the max wavelength is:',std_max_wav_sampleT)
###Output
_____no_output_____
###Markdown
For the mixture with the component -- Water: Toluene: Ethanol = [1:0:0]
###Code
# csv file format is 'vol W_ vol T_ vol E_sample no_rep no'
raw_sampleW_01 = yld_rawdata(specFold+'sampleW_pure_01.csv',6)
raw_sampleW_02 = yld_rawdata(specFold+'sampleW_pure_02.csv',6)
raw_sampleW_03 = yld_rawdata(specFold+'sampleW_pure_03.csv',6)
raw_sampleW_04 = yld_rawdata(specFold+'sampleW_pure_04.csv',6)
raw_sampleW_05 = yld_rawdata(specFold+'sampleW_pure_05.csv',6)
wv_sampleW_01,inten_sampleW_01 = yld_xy_sliced(raw_sampleW_01)
wv_sampleW_02,inten_sampleW_02 = yld_xy_sliced(raw_sampleW_02)
wv_sampleW_03,inten_sampleW_03 = yld_xy_sliced(raw_sampleW_03)
wv_sampleW_04,inten_sampleW_04 = yld_xy_sliced(raw_sampleW_04)
wv_sampleW_05,inten_sampleW_05 = yld_xy_sliced(raw_sampleW_05)
# Sliced list for the curve fitting
# yld_xy_sliced takes the start & end indexes and slices the wavelengths & intensities
sampleW_01_bg = 510 # should be named as sampleW_bg
sampleW_01_end = 541 # same as above
wv_fit_sampleW_01,inten_fit_sampleW_01 = yld_xy_sliced(raw_sampleW_01,sampleW_01_bg,sampleW_01_end)
wv_fit_sampleW_02,inten_fit_sampleW_02 = yld_xy_sliced(raw_sampleW_02,sampleW_01_bg,sampleW_01_end)
wv_fit_sampleW_03,inten_fit_sampleW_03 = yld_xy_sliced(raw_sampleW_03,sampleW_01_bg,sampleW_01_end)
wv_fit_sampleW_04,inten_fit_sampleW_04 = yld_xy_sliced(raw_sampleW_04,sampleW_01_bg,sampleW_01_end)
wv_fit_sampleW_05,inten_fit_sampleW_05 = yld_xy_sliced(raw_sampleW_05,sampleW_01_bg,sampleW_01_end)
inten_fit_sampleW = [inten_fit_sampleW_01,inten_fit_sampleW_02,inten_fit_sampleW_03,inten_fit_sampleW_04,inten_fit_sampleW_05]
# initial guess for fitted parameters, may need to be changed
# Initial guess for the curve para_curve = a*((x + b)**2) + c in the form [a,b,c]
# a changes curvature, b transforms in x-direction, c transforms in y-direction
initGuessW = [-0.5,-500,1800]
f1_sampleW = []
max_wav_sampleW = []
for i in range(len(inten_fit_sampleW)):
C_fit = yld_curve_fit_Y(para_curve,wv_fit_sampleW_01,inten_fit_sampleW[i],initGuessW)
f1_individual = C_fit[0]
max_wav_individual = C_fit[1]
f1_sampleW.append(f1_individual)
max_wav_sampleW.append(max_wav_individual)
ave_max_wav_sampleW = sum(max_wav_sampleW)/len(max_wav_sampleW)
std_max_wav_sampleW = np.std(max_wav_sampleW,ddof=1)
fig1 = plt.figure(facecolor='white')
ax1 = plt.axes(frameon=True)
ax1.spines['right'].set_visible(False)
ax1.spines['top'].set_visible(False)
plt.plot(wv_sampleW_04,inten_sampleW_04,marker = '',linestyle = '-',label = 'Spectra of Pure Water')
plt.plot(wv_fit_sampleW_01,f1_sampleW[0],label = 'Fitted curve')
plt.plot(wv_fit_sampleW_02,f1_sampleW[1],label = 'Fitted curve')
plt.plot(wv_fit_sampleW_03,f1_sampleW[2],label = 'Fitted curve')
plt.plot(wv_fit_sampleW_04,f1_sampleW[3],label = 'Fitted curve')
plt.plot(wv_fit_sampleW_05,f1_sampleW[4],label = 'Fitted curve')
plt.legend(loc = 'best')
plt.title('Prodan Emission Spectra of binodals')
plt.xlabel('Wavelength, $\lambda$ [nm]')
plt.ylabel('Intensity')
plt.show()
print('Ave. maximum wavelength for the sample is:',ave_max_wav_sampleW,'nm')
print('Standard deviation for the max wavelength is:',std_max_wav_sampleW)
###Output
_____no_output_____
###Markdown
For the mixture with the component -- Water: Toluene: Ethanol = [0:0:1]
###Code
# csv file format is 'vol W_ vol T_ vol E_sample no_rep no'
raw_sampleE_01 = yld_rawdata(specFold+'sampleE_pure_01.csv',6)
raw_sampleE_02 = yld_rawdata(specFold+'sampleE_pure_02.csv',6)
raw_sampleE_03 = yld_rawdata(specFold+'sampleE_pure_03.csv',6)
raw_sampleE_04 = yld_rawdata(specFold+'sampleE_pure_04.csv',6)
raw_sampleE_05 = yld_rawdata(specFold+'sampleE_pure_05.csv',6)
wv_sampleE_01,inten_sampleE_01 = yld_xy_sliced(raw_sampleE_01)
wv_sampleE_02,inten_sampleE_02 = yld_xy_sliced(raw_sampleE_02)
wv_sampleE_03,inten_sampleE_03 = yld_xy_sliced(raw_sampleE_03)
wv_sampleE_04,inten_sampleE_04 = yld_xy_sliced(raw_sampleE_04)
wv_sampleE_05,inten_sampleE_05 = yld_xy_sliced(raw_sampleE_05)
# Sliced list for the curve fitting
# yld_xy_sliced takes the start & end indexes and slices the wavelengths & intensities
sampleE_01_bg = 485 # should be named as sampleE_bg
sampleE_01_end = 513 # same as above
wv_fit_sampleE_01,inten_fit_sampleE_01 = yld_xy_sliced(raw_sampleE_01,sampleE_01_bg,sampleE_01_end)
wv_fit_sampleE_02,inten_fit_sampleE_02 = yld_xy_sliced(raw_sampleE_02,sampleE_01_bg,sampleE_01_end)
wv_fit_sampleE_03,inten_fit_sampleE_03 = yld_xy_sliced(raw_sampleE_03,sampleE_01_bg,sampleE_01_end)
wv_fit_sampleE_04,inten_fit_sampleE_04 = yld_xy_sliced(raw_sampleE_04,sampleE_01_bg,sampleE_01_end)
wv_fit_sampleE_05,inten_fit_sampleE_05 = yld_xy_sliced(raw_sampleE_05,sampleE_01_bg,sampleE_01_end)
inten_fit_sampleE = [inten_fit_sampleE_01,inten_fit_sampleE_02,inten_fit_sampleE_03,inten_fit_sampleE_04,inten_fit_sampleE_05]
# initial guess for fitted parameters, may need to be changed
# Initial guess for the curve para_curve = a*((x + b)**2) + c in the form [a,b,c]
# a changes curvature, b transforms in x-direction, c transforms in y-direction
initGuessE = [-0.5,-500,1800]
f1_sampleE = []
max_wav_sampleE = []
for i in range(len(inten_fit_sampleE)):
C_fit = yld_curve_fit_Y(para_curve,wv_fit_sampleE_01,inten_fit_sampleE[i],initGuessE)
f1_individual = C_fit[0]
max_wav_individual = C_fit[1]
f1_sampleE.append(f1_individual)
max_wav_sampleE.append(max_wav_individual)
ave_max_wav_sampleE = sum(max_wav_sampleE)/len(max_wav_sampleE)
std_max_wav_sampleE = np.std(max_wav_sampleE,ddof=1)
fig1 = plt.figure(facecolor='white')
ax1 = plt.axes(frameon=True)
ax1.spines['right'].set_visible(False)
ax1.spines['top'].set_visible(False)
plt.plot(wv_sampleE_01,inten_sampleE_01,marker = '',linestyle = '-',label = 'Spectra of Pure Ethanol')
plt.plot(wv_fit_sampleE_01,f1_sampleE[0],label = 'Fitted curve')
plt.plot(wv_fit_sampleE_02,f1_sampleE[1],label = 'Fitted curve')
plt.plot(wv_fit_sampleE_03,f1_sampleE[2],label = 'Fitted curve')
plt.plot(wv_fit_sampleE_04,f1_sampleE[3],label = 'Fitted curve')
plt.plot(wv_fit_sampleE_05,f1_sampleE[4],label = 'Fitted curve')
plt.legend(loc = 'best')
plt.title('Prodan Emission Spectra of binodals')
plt.xlabel('Wavelength, $\lambda$ [nm]')
plt.ylabel('Intensity')
plt.show()
print('Ave. maximum wavelength for the sample is:',ave_max_wav_sampleE,'nm')
print('Standard deviation for the max wavelength is:',std_max_wav_sampleE)
###Output
_____no_output_____
###Markdown
For the mixture with the component -- Water: Toluene: Ethanol = [4.0:25.4:22.1]
###Code
# csv file format is 'vol W_ vol T_ vol E_sample no_rep no'
raw_sample01_01 = yld_rawdata(specFold+'4W_25.4T_22.1E_sample01_01.csv',6)
raw_sample01_02 = yld_rawdata(specFold+'4W_25.4T_22.1E_sample01_02.csv',6)
raw_sample01_03 = yld_rawdata(specFold+'4W_25.4T_22.1E_sample01_03.csv',6)
raw_sample01_04 = yld_rawdata(specFold+'4W_25.4T_22.1E_sample01_04.csv',6)
raw_sample01_05 = yld_rawdata(specFold+'4W_25.4T_22.1E_sample01_05.csv',6)
wv_sample01_01,inten_sample01_01 = yld_xy_sliced(raw_sample01_01)
wv_sample01_02,inten_sample01_02 = yld_xy_sliced(raw_sample01_02)
wv_sample01_03,inten_sample01_03 = yld_xy_sliced(raw_sample01_03)
wv_sample01_04,inten_sample01_04 = yld_xy_sliced(raw_sample01_04)
wv_sample01_05,inten_sample01_05 = yld_xy_sliced(raw_sample01_05)
# Sliced list for the curve fitting
# yld_xy_sliced takes the start & end indexes and slices the wavelengths & intensities
sample01_bg = 485
sample01_end = 515
wv_fit_sample01_01,inten_fit_sample01_01 = yld_xy_sliced(raw_sample01_01,sample01_bg,sample01_end)
wv_fit_sample01_02,inten_fit_sample01_02 = yld_xy_sliced(raw_sample01_02,sample01_bg,sample01_end)
wv_fit_sample01_03,inten_fit_sample01_03 = yld_xy_sliced(raw_sample01_03,sample01_bg,sample01_end)
wv_fit_sample01_04,inten_fit_sample01_04 = yld_xy_sliced(raw_sample01_04,sample01_bg,sample01_end)
wv_fit_sample01_05,inten_fit_sample01_05 = yld_xy_sliced(raw_sample01_05,sample01_bg,sample01_end)
inten_fit_sample01 = [inten_fit_sample01_01,inten_fit_sample01_02,inten_fit_sample01_03,inten_fit_sample01_04,inten_fit_sample01_05]
# initial guess for fitted parameters, may need to be changed
# Initial guess for the curve para_curve = a*((x + b)**2) + c in the form [a,b,c]
# a changes curvature, b transforms in x-direction, c transforms in y-direction
initGuess01 = [-0.5,-500,1800]
f1_sample01 = []
max_wav_sample01 = []
for i in range(len(inten_fit_sample01)):
C_fit = yld_curve_fit_Y(para_curve,wv_fit_sample01_01,inten_fit_sample01[i],initGuess01)
f1_individual = C_fit[0]
max_wav_individual = C_fit[1]
f1_sample01.append(f1_individual)
max_wav_sample01.append(max_wav_individual)
ave_max_wav_sample01 = sum(max_wav_sample01)/len(max_wav_sample01)
std_max_wav_sample01 = np.std(max_wav_sample01,ddof=1)
fig1 = plt.figure(facecolor='white')
ax1 = plt.axes(frameon=True)
ax1.spines['right'].set_visible(False)
ax1.spines['top'].set_visible(False)
plt.plot(wv_sample01_01,inten_sample01_01,marker = '',linestyle = '-',label = 'Spectra of mixture [4.0:25.4:22.1]')
plt.plot(wv_fit_sample01_01,f1_sample01[0],label = 'Fitted curve')
plt.plot(wv_fit_sample01_02,f1_sample01[1],label = 'Fitted curve')
plt.plot(wv_fit_sample01_03,f1_sample01[2],label = 'Fitted curve')
plt.plot(wv_fit_sample01_04,f1_sample01[3],label = 'Fitted curve')
plt.plot(wv_fit_sample01_05,f1_sample01[4],label = 'Fitted curve')
plt.legend(loc = 'best')
plt.title('Prodan Emission Spectra of binodals')
plt.xlabel('Wavelength, $\lambda$ [nm]')
plt.ylabel('Intensity')
plt.show()
print('Ave. maximum wavelength for the sample is:',ave_max_wav_sample01,'nm')
print('Standard deviation for the max wavelength is:',std_max_wav_sample01)
###Output
_____no_output_____
###Markdown
For the mixture with the component -- Water: Toluene: Ethanol = [10.0:11.5:32.8]
###Code
# csv file format is 'vol W_ vol T_ vol E_sample no_rep no'
raw_sample02_01 = yld_rawdata(specFold+'10W_11.5T_32.8E_sample02_01.csv',6)
raw_sample02_02 = yld_rawdata(specFold+'10W_11.5T_32.8E_sample02_02.csv',6)
raw_sample02_03 = yld_rawdata(specFold+'10W_11.5T_32.8E_sample02_03.csv',6)
raw_sample02_04 = yld_rawdata(specFold+'10W_11.5T_32.8E_sample02_04.csv',6)
raw_sample02_05 = yld_rawdata(specFold+'10W_11.5T_32.8E_sample02_05.csv',6)
wv_sample02_01,inten_sample02_01 = yld_xy_sliced(raw_sample02_01)
wv_sample02_02,inten_sample02_02 = yld_xy_sliced(raw_sample02_02)
wv_sample02_03,inten_sample02_03 = yld_xy_sliced(raw_sample02_03)
wv_sample02_04,inten_sample02_04 = yld_xy_sliced(raw_sample02_04)
wv_sample02_05,inten_sample02_05 = yld_xy_sliced(raw_sample02_05)
# Sliced list for the curve fitting
# yld_xy_sliced takes the start & end indexes and slices the wavelengths & intensities
sample02_bg = 488
sample02_end = 517
wv_fit_sample02_01,inten_fit_sample02_01 = yld_xy_sliced(raw_sample02_01,sample02_bg,sample02_end)
wv_fit_sample02_02,inten_fit_sample02_02 = yld_xy_sliced(raw_sample02_02,sample02_bg,sample02_end)
wv_fit_sample02_03,inten_fit_sample02_03 = yld_xy_sliced(raw_sample02_03,sample02_bg,sample02_end)
wv_fit_sample02_04,inten_fit_sample02_04 = yld_xy_sliced(raw_sample02_04,sample02_bg,sample02_end)
wv_fit_sample02_05,inten_fit_sample02_05 = yld_xy_sliced(raw_sample02_05,sample02_bg,sample02_end)
inten_fit_sample02 = [inten_fit_sample02_01,inten_fit_sample02_02,inten_fit_sample02_03,inten_fit_sample02_04,inten_fit_sample02_05]
# initial guess for fitted parameters, may need to be changed
# Initial guess for the curve para_curve = a*((x + b)**2) + c in the form [a,b,c]
# a changes curvature, b transforms in x-direction, c transforms in y-direction
initGuess02 = [-0.5,-500,1800]
f1_sample02 = []
max_wav_sample02 = []
for i in range(len(inten_fit_sample02)):
C_fit = yld_curve_fit_Y(para_curve,wv_fit_sample02_01,inten_fit_sample02[i],initGuess02)
f1_individual = C_fit[0]
max_wav_individual = C_fit[1]
f1_sample02.append(f1_individual)
max_wav_sample02.append(max_wav_individual)
ave_max_wav_sample02 = sum(max_wav_sample02)/len(max_wav_sample02)
std_max_wav_sample02 = np.std(max_wav_sample02,ddof=1)
fig1 = plt.figure(facecolor='white')
ax1 = plt.axes(frameon=True)
ax1.spines['right'].set_visible(False)
ax1.spines['top'].set_visible(False)
plt.plot(wv_sample02_01,inten_sample02_01,marker = '',linestyle = '-',label = 'Spectra of mixture [10.0:11.5:32.8]')
plt.plot(wv_fit_sample02_01,f1_sample02[0],label = 'Fitted curve')
plt.plot(wv_fit_sample02_02,f1_sample02[1],label = 'Fitted curve')
plt.plot(wv_fit_sample02_03,f1_sample02[2],label = 'Fitted curve')
plt.plot(wv_fit_sample02_04,f1_sample02[3],label = 'Fitted curve')
plt.plot(wv_fit_sample02_05,f1_sample02[4],label = 'Fitted curve')
plt.legend(loc = 'best')
plt.title('Prodan Emission Spectra of binodals')
plt.xlabel('Wavelength, $\lambda$ [nm]')
plt.ylabel('Intensity')
plt.show()
print('Ave. maximum wavelength for the sample is:',ave_max_wav_sample02,'nm')
print('Standard deviation for the max wavelength is:',std_max_wav_sample02)
###Output
_____no_output_____
###Markdown
For the mixture with the component -- Water: Toluene: Ethanol = [10.0:4.9:24.6]
###Code
# csv file format is 'vol W_ vol T_ vol E_sample no_rep no'
raw_sample03_01 = yld_rawdata(specFold+'10W_4.9T_24.6E_sample03_01.csv',6)
raw_sample03_02 = yld_rawdata(specFold+'10W_4.9T_24.6E_sample03_02.csv',6)
raw_sample03_03 = yld_rawdata(specFold+'10W_4.9T_24.6E_sample03_03.csv',6)
raw_sample03_04 = yld_rawdata(specFold+'10W_4.9T_24.6E_sample03_04.csv',6)
raw_sample03_05 = yld_rawdata(specFold+'10W_4.9T_24.6E_sample03_05.csv',6)
wv_sample03_01,inten_sample03_01 = yld_xy_sliced(raw_sample03_01)
wv_sample03_02,inten_sample03_02 = yld_xy_sliced(raw_sample03_02)
wv_sample03_03,inten_sample03_03 = yld_xy_sliced(raw_sample03_03)
wv_sample03_04,inten_sample03_04 = yld_xy_sliced(raw_sample03_04)
wv_sample03_05,inten_sample03_05 = yld_xy_sliced(raw_sample03_05)
# Sliced list for the curve fitting
# yld_xy_sliced takes the start & end indexes and slices the wavelengths & intensities
sample03_bg = 490
sample03_end = 520
wv_fit_sample03_01,inten_fit_sample03_01 = yld_xy_sliced(raw_sample03_01,sample03_bg,sample03_end)
wv_fit_sample03_02,inten_fit_sample03_02 = yld_xy_sliced(raw_sample03_02,sample03_bg,sample03_end)
wv_fit_sample03_03,inten_fit_sample03_03 = yld_xy_sliced(raw_sample03_03,sample03_bg,sample03_end)
wv_fit_sample03_04,inten_fit_sample03_04 = yld_xy_sliced(raw_sample03_04,sample03_bg,sample03_end)
wv_fit_sample03_05,inten_fit_sample03_05 = yld_xy_sliced(raw_sample03_05,sample03_bg,sample03_end)
inten_fit_sample03 = [inten_fit_sample03_01,inten_fit_sample03_02,inten_fit_sample03_03,inten_fit_sample03_04,inten_fit_sample03_05]
# initial guess for fitted parameters, may need to be changed
# Initial guess for the curve para_curve = a*((x + b)**2) + c in the form [a,b,c]
# a changes curvature, b transforms in x-direction, c transforms in y-direction
initGuess03 = [-0.5,-500,1800]
f1_sample03 = []
max_wav_sample03 = []
for i in range(len(inten_fit_sample03)):
C_fit = yld_curve_fit_Y(para_curve,wv_fit_sample03_01,inten_fit_sample03[i],initGuess03)
f1_individual = C_fit[0]
max_wav_individual = C_fit[1]
f1_sample03.append(f1_individual)
max_wav_sample03.append(max_wav_individual)
ave_max_wav_sample03 = sum(max_wav_sample03)/len(max_wav_sample03)
std_max_wav_sample03 = np.std(max_wav_sample03,ddof=1)
fig1 = plt.figure(facecolor='white')
ax1 = plt.axes(frameon=True)
ax1.spines['right'].set_visible(False)
ax1.spines['top'].set_visible(False)
plt.plot(wv_sample03_01,inten_sample03_01,marker = '',linestyle = '-',label = 'Spectra of mixture [10.0:4.9:24.6]')
plt.plot(wv_fit_sample03_01,f1_sample03[0],label = 'Fitted curve')
plt.plot(wv_fit_sample03_02,f1_sample03[1],label = 'Fitted curve')
plt.plot(wv_fit_sample03_03,f1_sample03[2],label = 'Fitted curve')
plt.plot(wv_fit_sample03_04,f1_sample03[3],label = 'Fitted curve')
plt.plot(wv_fit_sample03_05,f1_sample03[4],label = 'Fitted curve')
plt.legend(loc = 'best')
plt.title('Prodan Emission Spectra of binodals')
plt.xlabel('Wavelength, $\lambda$ [nm]')
plt.ylabel('Intensity')
plt.show()
print('Ave. maximum wavelength for the sample is:',ave_max_wav_sample03,'nm')
print('Standard deviation for the max wavelength is:',std_max_wav_sample03)
###Output
_____no_output_____
###Markdown
For the mixture with the component -- Water: Toluene: Ethanol = [16.5:1.0:16.7]
###Code
# csv file format is 'vol W_ vol T_ vol E_sample no_rep no'
raw_sample04_01 = yld_rawdata(specFold+'16.5W_1T_16.7E_sample04_01.csv',6)
raw_sample04_02 = yld_rawdata(specFold+'16.5W_1T_16.7E_sample04_02.csv',6)
raw_sample04_03 = yld_rawdata(specFold+'16.5W_1T_16.7E_sample04_03.csv',6)
raw_sample04_04 = yld_rawdata(specFold+'16.5W_1T_16.7E_sample04_04.csv',6)
raw_sample04_05 = yld_rawdata(specFold+'16.5W_1T_16.7E_sample04_05.csv',6)
wv_sample04_01,inten_sample04_01 = yld_xy_sliced(raw_sample04_01)
wv_sample04_02,inten_sample04_02 = yld_xy_sliced(raw_sample04_02)
wv_sample04_03,inten_sample04_03 = yld_xy_sliced(raw_sample04_03)
wv_sample04_04,inten_sample04_04 = yld_xy_sliced(raw_sample04_04)
wv_sample04_05,inten_sample04_05 = yld_xy_sliced(raw_sample04_05)
# Sliced list for the curve fitting
# yld_xy_sliced takes the start & end indexes and slices the wavelengths & intensities
sample04_bg = 493
sample04_end = 528
wv_fit_sample04_01,inten_fit_sample04_01 = yld_xy_sliced(raw_sample04_01,sample04_bg,sample04_end)
wv_fit_sample04_02,inten_fit_sample04_02 = yld_xy_sliced(raw_sample04_02,sample04_bg,sample04_end)
wv_fit_sample04_03,inten_fit_sample04_03 = yld_xy_sliced(raw_sample04_03,sample04_bg,sample04_end)
wv_fit_sample04_04,inten_fit_sample04_04 = yld_xy_sliced(raw_sample04_04,sample04_bg,sample04_end)
wv_fit_sample04_05,inten_fit_sample04_05 = yld_xy_sliced(raw_sample04_05,sample04_bg,sample04_end)
inten_fit_sample04 = [inten_fit_sample04_01,inten_fit_sample04_02,inten_fit_sample04_03,inten_fit_sample04_04,inten_fit_sample04_05]
# initial guess for fitted parameters, may need to be changed
# Initial guess for the curve para_curve = a*((x + b)**2) + c in the form [a,b,c]
# a changes curvature, b transforms in x-direction, c transforms in y-direction
initGuess04 = [-0.5,-500,180]
f1_sample04 = []
max_wav_sample04 = []
for i in range(len(inten_fit_sample04)):
C_fit = yld_curve_fit_Y(para_curve,wv_fit_sample04_01,inten_fit_sample04[i],initGuess04)
f1_individual = C_fit[0]
max_wav_individual = C_fit[1]
f1_sample04.append(f1_individual)
max_wav_sample04.append(max_wav_individual)
ave_max_wav_sample04 = sum(max_wav_sample04)/len(max_wav_sample04)
std_max_wav_sample04 = np.std(max_wav_sample04,ddof=1)
fig1 = plt.figure(facecolor='white')
ax1 = plt.axes(frameon=True)
ax1.spines['right'].set_visible(False)
ax1.spines['top'].set_visible(False)
plt.plot(wv_sample04_01,inten_sample04_01,marker = '',linestyle = '-',label = 'Spectra of mixture [16.5:1.0:16.7]')
plt.plot(wv_fit_sample04_01,f1_sample04[0],label = 'Fitted curve')
plt.plot(wv_fit_sample04_02,f1_sample04[1],label = 'Fitted curve')
plt.plot(wv_fit_sample04_03,f1_sample04[2],label = 'Fitted curve')
plt.plot(wv_fit_sample04_04,f1_sample04[3],label = 'Fitted curve')
plt.plot(wv_fit_sample04_05,f1_sample04[4],label = 'Fitted curve')
plt.legend(loc = 'best')
plt.title('Prodan Emission Spectra of binodals')
plt.xlabel('Wavelength, $\lambda$ [nm]')
plt.ylabel('Intensity')
plt.show()
print('Ave. maximum wavelength for the sample is:',ave_max_wav_sample04,'nm')
print('Standard deviation for the max wavelength is:',std_max_wav_sample04)
###Output
_____no_output_____
###Markdown
Plot all the data on the same graph.
###Code
ave_wav_samples = [ave_max_wav_sample01,ave_max_wav_sample02,ave_max_wav_sample03,ave_max_wav_sample04]
std_wav_samples = [std_max_wav_sample01,std_max_wav_sample02,std_max_wav_sample03,std_max_wav_sample04]
print(ave_wav_samples)
print(std_wav_samples)
###Output
[499.7885468660831, 503.20555282788075, 505.15714714765227, 510.79481262199204]
[0.011503510474612568, 0.019569831650286838, 0.023182083601149678, 0.018882302600965444]
|
Numpy_Org_Quickstart.ipynb | ###Markdown
Array Creation

There are several ways to create arrays. For example, you can create an array from a regular Python list or tuple using the array function. The type of the resulting array is deduced from the type of the elements in the sequences.
###Code
b = np.array([12,23,33,44,55])
print(b)
b.dtype
c=np.array([[1.2,2.2,3.3,4.04],
[3.5,33.4,44.5,99.09]])
print(c)
print(c.dtype)
print(c.dtype.itemsize)
print(c.ndim)
print(c.size)
print(c.shape)
print(c.data)
d = np.array([[1,2,3,4],[6,7,8,9]], dtype=np.float64)
d
###Output
_____no_output_____
###Markdown
The function zeros creates an array full of zeros, the function ones creates an array full of ones, and the function empty creates an array whose initial content is random and depends on the state of the memory. By default, the dtype of the created array is float64.
###Code
s= np.zeros((3,3))
print(s)
z=np.ones((3,4,4),dtype=np.float64)
print(z)
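# np.empty allocates without initializing, so the values printed below are whatever
# happened to be in that block of memory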
x=np.empty((2,3,3))
print(x)
###Output
[[[4.45057637e-308 9.34604358e-307 9.34605037e-307]
[1.11260483e-306 1.78019354e-306 4.45066125e-308]
[9.34605037e-307 1.60220393e-306 1.24611674e-306]]
[[4.45057637e-308 9.34597567e-307 2.22523004e-306]
[2.13620264e-306 4.45063578e-308 1.37961302e-306]
[8.45596650e-307 5.73116149e-322 2.56765117e-312]]]
###Markdown
To create sequences of numbers, NumPy provides a function analogous to range that returns arrays instead of lists.
###Code
ar= np.arange(0,50,5).reshape(5,2)
print(ar)
arr=np.arange(0,3,.2)
print(arr)
###Output
[0. 0.2 0.4 0.6 0.8 1. 1.2 1.4 1.6 1.8 2. 2.2 2.4 2.6 2.8]
###Markdown
When arange is used with floating point arguments, it is generally not possible to predict the number of elements obtained, due to the finite floating point precision. For this reason, it is usually better to use the function linspace that receives as an argument the number of elements that we want, instead of the step:
###Code
lin = np.linspace(2,5,9)
print(lin)
from numpy import pi
a = np.arange(0,3*pi, .5)
print(a)
b= np.linspace(3,pi*4, 5)
print(b)
x=55
r =np.sin(x)
print(r)
###Output
-0.9997551733586199
|
models/deprecated/3-2 (4). VGG16 Triplet KNN Model.ipynb | ###Markdown
Data Information
###Code
train_df = pd.read_csv('./data/triplet/train.csv')
val_df = pd.read_csv('./data/triplet/validation.csv')
test_df = pd.read_csv('./data/triplet/test.csv')
print('Train:\t\t', train_df.shape)
print('Validation:\t', val_df.shape)
print('Test:\t\t', test_df.shape)
print('\nTrain Landmarks:\t', len(train_df['landmark_id'].unique()))
print('Validation Landmarks:\t', len(val_df['landmark_id'].unique()))
print('Test Landmarks:\t\t', len(test_df['landmark_id'].unique()))
train_df.head()
###Output
_____no_output_____
###Markdown
Load Features and Labels
###Code
# Already normalized
train_feature = np.load('./data/triplet/train_triplet_vgg16(4)_features.npy')
val_feature = np.load('./data/triplet/validation_triplet_vgg16(4)_features.npy')
test_feature = np.load('./data/triplet/test_triplet_vgg16(4)_features.npy')
train_df = pd.read_csv('./data/triplet/train.csv')
val_df = pd.read_csv('./data/triplet/validation.csv')
test_df = pd.read_csv('./data/triplet/test.csv')
print('Train:\t\t', train_feature.shape, train_df.shape)
print('Validation:\t', val_feature.shape, val_df.shape)
print('Test:\t\t', test_feature.shape, test_df.shape)
# Helper function
def accuracy(true_label, prediction, top=1):
""" function to calculate the prediction accuracy """
prediction = prediction[:, :top]
count = 0
for i in range(len(true_label)):
if true_label[i] in prediction[i]:
count += 1
return count / len(true_label)
###Output
_____no_output_____
###Markdown
Implement KNN Model
###Code
# Merge train and validation features
train_val_feature = np.concatenate((train_feature, val_feature), axis=0)
train_val_df = pd.concat((train_df, val_df), axis=0)
train_val_df = train_val_df.reset_index(drop=True)
# Implement KNN model
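# minkowski with p=2 is the ordinary Euclidean distance; for L2-normalized features
# (the loaded features are already normalized), ranking neighbors by Euclidean
# distance is equivalent to ranking them by cosine distance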
knn = NearestNeighbors(n_neighbors=50, algorithm='auto', leaf_size=30,
metric='minkowski', p=2, n_jobs=-1)
knn.fit(train_val_feature)
# Search the first 50 neighbors
distance, neighbor_index = knn.kneighbors(test_feature, return_distance=True)
# Save the results
np.save('./result/knn_triplet_vgg16(4)_distance.npy', distance)
np.save('./result/knn_triplet_vgg16(4)_neighbor.npy', neighbor_index)
###Output
_____no_output_____
###Markdown
Search Neighbors
###Code
knn_distance = np.load('./result/knn_triplet_vgg16(4)_distance.npy')
knn_neighbor = np.load('./result/knn_triplet_vgg16(4)_neighbor.npy')
# Get the first 50 neighbors
predictions = []
for neighbors in knn_neighbor:
predictions.append(train_val_df.loc[neighbors]['landmark_id'].values)
predictions = np.array(predictions)
np.save('./result/knn_triplet_vgg16(4)_test_prediction.npy', predictions)
###Output
_____no_output_____
###Markdown
Compute Accuracy
###Code
print('Top 1 accuracy:\t', accuracy(test_df['landmark_id'].values, predictions, top=1))
print('Top 5 accuracy:\t', accuracy(test_df['landmark_id'].values, predictions, top=5))
print('Top 10 accuracy:\t', accuracy(test_df['landmark_id'].values, predictions, top=10))
print('Top 20 accuracy:\t', accuracy(test_df['landmark_id'].values, predictions, top=20))
knn_acc = []
for i in range(1, 51):
tmp_acc = accuracy(test_df['landmark_id'].values, predictions, top=i)
knn_acc.append(tmp_acc)
np.save('./result/knn_triplet_vgg16(4)_accuracy.npy', knn_acc)
###Output
_____no_output_____ |
doc/nb/calib-color-color.ipynb | ###Markdown
Model the g-r, r-z color-color sequence for stars
###Code
# Build a sample of stars with good photometry from a single sweep.
rbright = 18
rfaint = 19.5
sweep = fits.getdata('sweep-340p000-350p005.fits', 1)
keep = np.where((sweep['TYPE'].strip() == 'PSF')*
(np.sum((sweep['DECAM_FLUX'][:, [1,2,4]] > 0)*1, axis=1)==3)*
(np.sum((sweep['DECAM_ANYMASK'][:, [1,2,4]] > 0)*1, axis=1)==0)*
(np.sum((sweep['DECAM_FRACFLUX'][:, [1,2,4]] < 0.05)*1, axis=1)==3)*
(sweep['DECAM_FLUX'][:,2]<(10**(0.4*(22.5-rbright))))*
(sweep['DECAM_FLUX'][:,2]>(10**(0.4*(22.5-rfaint)))))[0]
stars = sweep[keep]
print('Found {} stars with good photometry.'.format(len(stars)))
gg = 22.5-2.5*np.log10(stars['DECAM_FLUX'][:, 1])
rr = 22.5-2.5*np.log10(stars['DECAM_FLUX'][:, 2])
zz = 22.5-2.5*np.log10(stars['DECAM_FLUX'][:, 4])
gr = gg - rr
rz = rr - zz
Xall = np.array([rz, gr]).T
# Determine how many Gaussian components we need by looking at the Bayesian
# Information Criterion.
ncomp = np.arange(2, 10)
bic = getbic(Xall, ncomp)
fig, ax = plt.subplots(1, 1, figsize=(8,5))
ax.plot(ncomp, bic, marker='s', ls='-')
ax.set_xlim((0, 10))
ax.set_xlabel('Number of Gaussian Components')
ax.set_ylabel('Bayesian Information Criterion')
plt.legend(labels=['Star g-r, r-z colors'])
plt.tight_layout()
plt.show()
# Model the distribution using a mixture of Gaussians and write out.
ncomp = 6 # from figure above
mog = GMM(n_components=ncomp, covariance_type="full").fit(Xall)
print('Writing {}'.format(star_mogfile))
GaussianMixtureModel.save(mog, star_mogfile)
# Reread and sample from the MoGs.
mog = GaussianMixtureModel.load(star_mogfile)
samp = mog.sample(1500)
# Build a color-color plot. Show the data on the left-hand panel and random draws from
# the MoGs on the right-hand panel.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8.5, 5), sharey=True)
sns.kdeplot(rz, gr, clip=(rzrange, grrange), ax=ax1, gridsize=40,
shade=True, cut=0, shade_lowest=False, label='DECaLS Stars')
ax1.set_xlim(rzrange)
ax1.set_ylim(grrange)
ax1.set_xlabel('r - z')
ax1.set_ylabel('g - r')
#ax1.legend(loc='lower right', prop={'size': 14}, labelspacing=0.25, markerscale=2)
ax2.plot(samp[:,0], samp[:,1], 'o', label='Random Draws', c=col[0], markersize=3)
ax2.set_xlim(rzrange)
ax2.set_ylim(grrange)
ax2.yaxis.tick_right()
ax2.yaxis.set_label_position("right")
ax2.set_xlabel('r - z')
ax2.set_ylabel('g - r')
ax2.legend(loc='lower right', prop={'size': 14}, labelspacing=0.25, markerscale=2)
fig.subplots_adjust(wspace=0.05, hspace=0.1)
plt.show()
###Output
_____no_output_____
###Markdown
Model the g-r, r-z color-color sequence for ELGs
###Code
# Grab the sample DEEP2 ELGs whose SEDs have been modeled.
elgs = fits.getdata('elg_templates_v2.0.fits', 1)
morph = np.where(elgs['radius_halflight'] > 0)[0]
print('Grabbed {} ELGs, of which {} have HST morphologies.'.format(len(elgs), len(morph)))
gg = elgs['DECAM_G']
rr = elgs['DECAM_R']
zz = elgs['DECAM_Z']
gr = gg - rr
rz = rr - zz
Xall = np.array([rz, gr]).T
r50 = elgs['RADIUS_HALFLIGHT'][morph]
sersicn = elgs['SERSICN'][morph]
# Determine how many Gaussian components we need by looking at the Bayesian
# Information Criterion.
ncomp = np.arange(2, 10)
bic = getbic(Xall, ncomp)
fig, ax = plt.subplots(1, 1, figsize=(8,5))
ax.plot(ncomp, bic, marker='s', ls='-')
ax.set_xlim((0, 10))
ax.set_xlabel('Number of Gaussian Components')
ax.set_ylabel('Bayesian Information Criterion')
plt.legend(labels=['ELG g-r, r-z colors'])
plt.tight_layout()
plt.show()
# Model the distribution using a mixture of Gaussians and write out.
ncomp = 6 # from figure above
mog = GMM(n_components=ncomp, covariance_type="full").fit(Xall)
print('Writing {}'.format(elg_mogfile))
GaussianMixtureModel.save(mog, elg_mogfile)
# Reread and sample from the MoGs.
mog = GaussianMixtureModel.load(elg_mogfile)
samp = mog.sample(1500)
# Build a color-color plot. Show the data on the left-hand panel and random draws from
# the MoGs on the right-hand panel.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8.5, 5), sharey=True)
ax1.scatter(rz, gr, s=3, color=col[1], label='DEEP2 ELGs')
#sns.kdeplot(rz, gr, clip=(rzrange, grrange), ax=ax1, gridsize=40,
# shade=True, cut=0, shade_lowest=False, label='DECaLS ELGs')
ax1.set_xlim(rzrange)
ax1.set_ylim(grrange)
ax1.set_xlabel('r - z')
ax1.set_ylabel('g - r')
ax1.legend(loc='upper left', prop={'size': 14}, labelspacing=0.25, markerscale=2)
ax2.plot(samp[:,0], samp[:,1], 'o', label='Random Draws', c=col[0], markersize=3)
ax2.set_xlim(rzrange)
ax2.set_ylim(grrange)
ax2.yaxis.tick_right()
ax2.yaxis.set_label_position("right")
ax2.set_xlabel('r - z')
ax2.set_ylabel('g - r')
ax2.legend(loc='upper left', prop={'size': 14}, labelspacing=0.25, markerscale=2)
fig.subplots_adjust(wspace=0.05, hspace=0.1)
plt.show()
# Now look at the subset of ELGs with HST morphologies. The "banding" in the colors probably comes from not doing the
# K-corrections correctly, but we don't really care about that here.
fig, ax1 = plt.subplots(1, 1, figsize=(5, 5))
ax1.scatter(rz[morph], gr[morph], s=3, color=col[1], label='DEEP2/HST ELGs')
#sns.kdeplot(rz, gr, clip=(rzrange, grrange), ax=ax1, gridsize=40,
# shade=True, cut=0, shade_lowest=False, label='DECaLS ELGs')
ax1.set_xlim(rzrange)
ax1.set_ylim(grrange)
ax1.set_xlabel('r - z')
ax1.set_ylabel('g - r')
ax1.legend(loc='upper left', prop={'size': 14}, labelspacing=0.25, markerscale=2)
jp = sns.jointplot(r50, sersicn, kind='scatter', stat_func=None,
xlim=(0, 1.5), ylim=(0, 4))
jp.set_axis_labels('Half-light radius (arcsec)', 'Sersic n')
###Output
_____no_output_____ |
05_RepoFor LandCOver Agg Census/01_Creating Training Sample from Vector and Grid.ipynb | ###Markdown
Load Vector training dataBefore running the cell, please make sure that the data is in GPKG format to ease reading.
###Code
train_vector_path='kerangka-spasial/vector-training/'+v_name
train_vector=gpd.read_file(train_vector_path)
train_vector['id'].unique()
###Output
_____no_output_____
###Markdown
Load Raster training data
###Code
data_=iterate_raster(folder, list_imagery,train_vector)
data_['TRAIN_CLASS'].unique()
###Output
_____no_output_____
###Markdown
Encoding Landform Following the PODES Topography Level
###Code
dict_data_landform={0:'others',11:'others',12:'others',14:'others',15:'others',21: 'u-slope',22:'u-slope',
31:'l-slope',32:'l-slope',41:'valley',42:'valley',24:'flat',34:'flat',13:'others',
33:'l-slope',23:'u-slope'}
data_['PODES_landform']=data_['alos_landform'].apply(lambda y: dict_data_landform[y])
data_.columns
###Output
_____no_output_____
###Markdown
Exporting data to Google Drive
###Code
file_name='kerangka-spasial/ml_learning/train_data/'+train_data
data_ready=data_[['x', 'y', 'PODES_landform', 'alos_slope', 'alos_dsm', 'wet_mean',
'green_mean', 'bright_mean', 'ARVI_mean', 'SAVI_mean', 'NDBI_mean',
'mNDWI_mean', 'NDWI_mean', 'mNDVI_mean', 'NDVI_mean', 'wet_p50',
'green_p50', 'bright_p50', 'ARVI_p50', 'SAVI_p50', 'NDBI_p50',
'mNDWI_p50', 'NDWI_p50', 'mNDVI_p50', 'NDVI_p50', 'S2_B12mean',
'S2_B11mean', 'S2_B8mean', 'S2_B4mean', 'S2_B3mean', 'S2_B2mean',
'S2_B12med', 'S2_B11med', 'S2_B8med', 'S2_B4med', 'S2_B3med',
'S2_B2med', 'TRAIN_CLASS']]
data_ready.to_csv(file_name, index=False,sep=';')
data_ready.head()
###Output
_____no_output_____ |
notebooks/e_extra/Camera_Calibration.ipynb | ###Markdown
(Cam_Calib)= Camera calibrationCameras introduce distortions into our data; these arise for various reasons such as the SSD, focal length, lens distortions, etc. Perhaps the most obvious is when using a fish-eye lens, which allows a wider area to be captured but introduces large distortions near the image edges. Camera calibration is the process of determining the internal camera geometric and optical characteristics and/or the orientation of the camera frame relative to a certain world coordinate system. Physical camera parameters are commonly divided into extrinsic and intrinsic parameters. Extrinsic parametersExtrinsic parameters are needed to transform object coordinates to a camera-centered coordinate frame. To simplify this problem we use the camera pinhole model as shown in figure 1:```{figure} ./Camera_Calibration/pinhole.png:width: 400px```This is based on the principle of collinearity, where each point in the object space (X,Y,Z) is projected by a straight line through the projection center into the image plane (x,y,z). So the problem can be simplified to a simple transformation:```{figure} ./Camera_Calibration/transformation.png:width: 400px```Where matrix m carries out the translation and rotation which translates the coordinates of a 3D point to a coordinate system on the image plane. Intrinsic ParametersThese are parameters specific to the camera and map the coordinates from the image-plane/SSD (x,y,z) to the digital image (pixel coordinate axis with origin commonly in the upper left corner of the image array). Parameters often included are the focal length $(f_x, f_y)$, optical centers $(c_x, c_y)$, etc. The values are often stored in what we call the camera matrix:```{figure} ./Camera_Calibration/matrix.png:width: 400px```These are specific to the camera, so can be stored for future use. Extended modelThe pinhole model is only an approximation of the real camera projection! It is useful to establish simple mathematical formulations, however it is not valid when high accuracy is required! Often the pinhole model is extended with some corrections; 2 common distortion corrections are: 1. radial -> straight lines will appear curved due to image points being displaced radially. More pronounced near the image edges. This distortion is corrected as follows in our example below:$$x_{corrected} = x (1+k_1 r^2 +k_2 r^4 + k_3 r^6)$$$$y_{corrected} = y (1+k_1 r^2 +k_2 r^4 + k_3 r^6)$$2. tangential -> occurs because the lens is not aligned perfectly parallel to the image plane. Visualised as some areas in the image looking nearer than expected. This distortion is corrected as follows in our example below:$$x_{corrected} = x + (2p_1xy+p_2(r^2+2x^2))$$$$y_{corrected} = y + (p_1(r^2+2y^2)+2p_2xy)$$So we are looking for the following distortion coefficients:$$ (k_1, k_2, p_1, p_2, k_3)$$See the image transformations notebook for further understanding of how these equations function. Example camera calibrationWe can link the image coordinates and real-world coordinates for a few points and hence solve for the translation matrix, with an empirical inverse model, to map between real-world and image coordinates. Typically a checkerboard is used to take images for easy point detection; from this the calibration matrices are calculated. These matrices can then be stored and used to undistort other images. An example of how this process works is demonstrated below; images can be obtained from [here](https://github.com/DavidWangWood/Camera-Calibration-Python).
###Code
# import required libraries
import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
import pickle
%matplotlib inline
#%matplotlib qt
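# Added sketch (not part of the original notebook): applying the radial and tangential
# distortion model from the text above to a normalized image point (x, y), combining both
# terms the way the standard OpenCV model does. The coefficients k1, k2, k3, p1, p2 below
# are arbitrary illustrative values, not calibrated ones.
def distort_point(x, y, k1=0.1, k2=0.01, k3=0.001, p1=0.001, p2=0.001):
    r2 = x**2 + y**2                                   # squared radius from the optical centre
    radial = 1 + k1*r2 + k2*r2**2 + k3*r2**3           # radial factor: 1 + k1 r^2 + k2 r^4 + k3 r^6
    x_d = x*radial + 2*p1*x*y + p2*(r2 + 2*x**2)       # radial + tangential in x
    y_d = y*radial + p1*(r2 + 2*y**2) + 2*p2*x*y       # radial + tangential in y
    return x_d, y_d

# A point far from the optical centre is displaced more than one near it.
print(distort_point(0.1, 0.1), distort_point(0.9, 0.9))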
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((6*8,3), np.float32)
objp[:,:2] = np.mgrid[0:8, 0:6].T.reshape(-1,2)
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d points in real world space
imgpoints = [] # 2d points in image plane.
# Make a list of calibration images
images = glob.glob('Camera_Calibration/GO*.jpg')
# Step through the list and search for chessboard corners
for idx, fname in enumerate(images):
img = cv2.imread(fname)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Find the chessboard corners, set grid pattern that is looked for (8,6)
ret, corners = cv2.findChessboardCorners(gray, (8,6), None)
# If corners found, add object points, image points
if ret == True:
objpoints.append(objp)
imgpoints.append(corners)
# Draw and display the corners
cv2.drawChessboardCorners(img, (8,6), corners, ret)
#write_name = 'corners_found'+str(idx)+'.jpg'
#cv2.imwrite(write_name, img) #save image
cv2.imshow('img', img)
cv2.waitKey(500)
cv2.destroyAllWindows()
# Test undistortion on an image
img = cv2.imread('Camera_Calibration/test_image.jpg')
img_size = (img.shape[1], img.shape[0])
# Do camera calibration given object points and image points
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, img_size,None,None)
#returns camera matrix, distortion coefficients, rotation and translation vectors
#undistort image
dst = cv2.undistort(img, mtx, dist, None, mtx)
cv2.imwrite('Camera_Calibration/test_undist.jpg',dst)
# Save the camera calibration result for later use (we won't worry about rvecs / tvecs)
dist_pickle = {}
dist_pickle["mtx"] = mtx
dist_pickle["dist"] = dist
pickle.dump( dist_pickle, open( "Camera_Calibration/wide_dist_pickle.p", "wb" ) )
#dst = cv2.cvtColor(dst, cv2.COLOR_BGR2RGB)
# Visualize correction
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10))
ax1.imshow(img)
ax1.set_title('Original Image', fontsize=30)
ax2.imshow(dst)
ax2.set_title('Undistorted Image', fontsize=30)
plt.show()
###Output
_____no_output_____
###Markdown
Now you can store the camera matrix and distortion coefficients using write functions in NumPy (np.savez, np.savetxt, etc.) for future use. ErrorsRe-projection error gives a good estimate of just how exact the found parameters are. This should be as close to zero as possible.
###Code
tot_error=0
for i in range(len(objpoints)):
# transform object point to image point
imgpoints2, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist)
#calculate absolute norm between result of transformation and the corner finding algorithm
error = cv2.norm(imgpoints[i],imgpoints2, cv2.NORM_L2)/len(imgpoints2)
tot_error += error
#display the average error for all calibration images
print ("total error: ", tot_error/len(objpoints))
###Output
total error: 0.07734644251618723
|
Monte-Carlo-Attacks/Monte-Carlo-CIFAR_GAN/Evaluation/Evaluation CIFAR.ipynb | ###Markdown
White Box
###Code
len(white_box)
print(str(np.round(100*white_box.white_box_50.mean(),2))+'$\pm$'+str(np.round(100*white_box.white_box_50.std()/np.sqrt(white_box.white_box_50.count()),2)))
print(str(np.round(100*white_box.set_accuracy_wb.mean(),2))+'$\pm$'+str(np.round(100*white_box.set_accuracy_wb.std()/np.sqrt(white_box.set_accuracy_wb.count()),2)))
###Output
97.6$\pm$0.59
100.0$\pm$0.0
###Markdown
AIS
###Code
print(ais.mem_inf_adv_ais.mean())
print(ais.set_accuracy_ais.mean())
###Output
nan
nan
###Markdown
PCA0.1 mc_attack_log_50
###Code
pca.pca_n.unique()
pca_opt_dim = pca[pca.pca_n == 120]
opt_perc = -1
print(pca_opt_dim[pca_opt_dim.percentile == opt_perc].set_accuracy_mc_log.mean())
print(pca_opt_dim[pca_opt_dim.percentile == opt_perc].set_accuracy_mc_ones.mean())
100*pca_opt_dim.groupby(['percentile']).mean()[['set_accuracy_mc_ones','set_accuracy_mc_log','successful_sum_attack_1','successful_sum_attack_2']]
np.round(pca_opt_dim.groupby(['percentile']).std()[['set_accuracy_mc_ones','set_accuracy_mc_log','successful_sum_attack_1','successful_sum_attack_2']]*100
/np.sqrt(pca_opt_dim.groupby(['percentile']).count()[['set_accuracy_mc_ones','set_accuracy_mc_log','successful_sum_attack_1','successful_sum_attack_2']]),2)
np.round(100*pca_opt_dim.groupby(['percentile']).mean()[['mc_attack_log_50','mc_attack_eps_50']],2)
np.round(pca_opt_dim.groupby(['percentile']).std()[['mc_attack_log_50','mc_attack_eps_50','successful_sum_attack_1','successful_sum_attack_2']]*100
/np.sqrt(pca_opt_dim.groupby(['percentile']).count()[['mc_attack_log_50','mc_attack_eps_50','successful_sum_attack_1','successful_sum_attack_2']]),2)
###Output
_____no_output_____
###Markdown
Color Histogram0.1 mc_attack_log_50
###Code
color_hist
opt_perc = -1
print(color_hist[color_hist.percentile == opt_perc].set_accuracy_mc_log.mean())
print(color_hist[color_hist.percentile == opt_perc].set_accuracy_mc_ones.mean())
print(pca_opt_dim[pca_opt_dim.percentile == opt_perc].successful_sum_attack_1.mean())
print(pca_opt_dim[pca_opt_dim.percentile == opt_perc].successful_sum_attack_2.mean())
np.round(100*color_hist.groupby(['percentile']).mean()[['set_accuracy_mc_ones','set_accuracy_mc_log','successful_sum_attack_1','successful_sum_attack_2']],2)
np.round(color_hist.groupby(['percentile']).std()[['set_accuracy_mc_ones','set_accuracy_mc_log','successful_sum_attack_1','successful_sum_attack_2']]*100
/np.sqrt(color_hist.groupby(['percentile']).count()[['set_accuracy_mc_ones','set_accuracy_mc_log','successful_sum_attack_1','successful_sum_attack_2']]),2)
np.round(100*color_hist.groupby(['percentile']).mean()[['mc_attack_log_50','mc_attack_eps_50']],2)
np.round(100*color_hist.groupby(['percentile']).std()[['mc_attack_log_50','set_accuracy_mc_log','successful_sum_attack_1','successful_sum_attack_2']]/np.sqrt(color_hist.groupby(['percentile']).count()[['mc_attack_log_50','set_accuracy_mc_log','successful_sum_attack_1','successful_sum_attack_2']]),2)
###Output
_____no_output_____ |
day3/aoc_day3.ipynb | ###Markdown
Advent of Code Day 3 No Matter How You Slice It
###Code
import numpy as np
from collections import namedtuple
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Part 1 The Elves managed to locate the chimney-squeeze prototype fabric for Santa's suit (thanks to someone who helpfully wrote its box IDs on the wall of the warehouse in the middle of the night). Unfortunately, anomalies are still affecting them - nobody can even agree on how to cut the fabric.The whole piece of fabric they're working on is a very large square - at least 1000 inches on each side.Each Elf has made a claim about which area of fabric would be ideal for Santa's suit. All claims have an ID and consist of a single rectangle with edges parallel to the edges of the fabric. Each claim's rectangle is defined as follows: The number of inches between the left edge of the fabric and the left edge of the rectangle. The number of inches between the top edge of the fabric and the top edge of the rectangle. The width of the rectangle in inches. The height of the rectangle in inches.A claim like #123 @ 3,2: 5x4 means that claim ID 123 specifies a rectangle 3 inches from the left edge, 2 inches from the top edge, 5 inches wide, and 4 inches tall. Visually, it claims the square inches of fabric represented by # (and ignores the square inches of fabric represented by .) in the diagram below: ........... ........... ...#####... ...#####... ...#####... ...#####... ........... ........... ...........The problem is that many of the claims overlap, causing two or more claims to cover part of the same areas. For example, consider the following claims: #1 @ 1,3: 4x4 #2 @ 3,1: 4x4 #3 @ 5,5: 2x2Visually, these claim the following areas: ........ ...2222. ...2222. .11XX22. .11XX22. .111133. .111133. ........The four square inches marked with X are claimed by both 1 and 2. (Claim 3, while adjacent to the others, does not overlap either of them.)If the Elves all proceed with their own plans, none of them will have enough fabric. How many square inches of fabric are within two or more claims?
###Code
Patch = namedtuple('Patch', ['id', 'corner', 'size'])
patches = []
with open('day3_input1.txt') as file:
for line in file:
line_split = line.split()
patches.append(Patch(int(line_split[0].replace('#', '')), [int(i) for i in line_split[2].replace(':', '').split(',')], [int(i) for i in line_split[3].split('x')]))
patches[0].corner[0]
fabric = np.zeros((1000, 1000))
for patch in patches:
sl = np.s_[patch.corner[1]:patch.corner[1]+patch.size[1], patch.corner[0]:patch.corner[0]+patch.size[0]]
fabric[sl] += 1
plt.imshow(fabric)
np.sum(fabric > 1)
###Output
_____no_output_____
###Markdown
Part 2 Amidst the chaos, you notice that exactly one claim doesn't overlap by even a single square inch of fabric with any other claim. If you can somehow draw attention to it, maybe the Elves will be able to make Santa's suit after all!For example, in the claims above, only claim 3 is intact after all claims are made.What is the ID of the only claim that doesn't overlap?
###Code
for patch in patches:
sl = np.s_[patch.corner[1]:patch.corner[1]+patch.size[1], patch.corner[0]:patch.corner[0]+patch.size[0]]
if np.all(fabric[sl] == 1):
intact = patch.id
break
print(intact)
###Output
382
|
study/M_L/210923_ML_Used_Car.ipynb | ###Markdown
Topic: They say automatically collected data is hard to analyze? Let's analyze automatically collected used-car data!---------- Practice guide 1. Download the data and load it into Colab. 2. All required libraries are already written into the code. 3. Run the code in order from top to bottom. About the data - This topic uses the Used Cars Dataset. - There is a single file, and its columns are as follows. - vehicles.csv id : ID of the used-car listing url : page of the used-car listing region : regional office managing the listing region_url : homepage of the regional office price : listed price of the car year : year the listing was posted manufacturer : company that produced the car model : car model name condition : condition of the car cylinders : number of cylinders fuel : fuel type odometer : mileage of the car title_status : title status of the car (owner registration status) transmission : transmission type vin : vehicle identification number drive : drive type size : size of the car type : general type of the car (sedan, ...) paint_color : color of the car image_url : image of the car description : detailed description county : unused column created by mistake state : US state where the listing was uploaded lat : latitude where the listing was uploaded long : longitude where the listing was uploaded - Data source: https://www.kaggle.com/austinreese/craigslist-carstrucks-data Final goals - Understand how to clean dirty scraped data - Learn various kinds of data normalization - Understand how to gain insights through data visualization - Learn how to train models with Scikit-learn - Learn how to train models with XGBoost and LightGBM - Learn how to evaluate and visualize trained models- Author: instructor 신제용--- Step 0. About data scraping Automatic data acquisition using scraping Characteristics of outliers in scraped data Step 1. Preparing the dataset
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
Problem 1. Setting up the Kaggle API in the Colab notebook
###Code
import os
# Set the Kaggle API username and key via os.environ
os.environ['KAGGLE_USERNAME'] = 'your_kaggle_username'  # placeholder - fill in your own
os.environ['KAGGLE_KEY'] = 'your_kaggle_key'  # placeholder - fill in your own
###Output
_____no_output_____
###Markdown
Problem 2. Downloading and extracting the data
###Code
# Download the dataset with the Kaggle API using shell commands (!kaggle ~)
# Extract the archive with shell commands
!rm *.* # so that re-running the cell causes no problems
!kaggle datasets download -d austinreese/craigslist-carstrucks-data
!unzip '*.zip'
###Output
rm: cannot remove '*.*': No such file or directory
Downloading craigslist-carstrucks-data.zip to /content
99% 260M/262M [00:02<00:00, 101MB/s]
100% 262M/262M [00:02<00:00, 112MB/s]
Archive: craigslist-carstrucks-data.zip
inflating: vehicles.csv
###Markdown
Problem 3. Reading the CSV file with the Pandas library
###Code
df = pd.read_csv('vehicles.csv')
###Output
_____no_output_____
###Markdown
Step 2. EDA and basic statistical analysis Problem 4. Removing unnecessary data from the DataFrame
###Code
df.head()
# Analyze the structure of each DataFrame using its built-in methods (head(), info(), describe())
# Remove unnecessary columns from the DataFrame
df.info()
df.describe()
# Check for values that are far too large or small: huge numbers, prices of 0, odd model years, etc.
df.isna().sum()
df.columns
df.drop(['id', 'url', 'region_url', 'VIN',
'image_url', 'description', 'state', 'lat',
'long', 'posting_date'], axis=1, inplace=True)
df
df['age'] = 2021 - df['year']
df.drop('year', axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
Problem 5. Statistical analysis of the categorical data
###Code
df.drop(['county'],axis=1,inplace=True)
# Analyze the value ranges and basic statistics of the categorical data
df.columns
len(df['manufacturer'].value_counts())
df['manufacturer'].value_counts()
fig = plt.figure(figsize=(6, 6))
sns.countplot(y='manufacturer', data=df.fillna('n/a'), order=df.fillna('n/a')['manufacturer'].value_counts().index)
# There are quite a few unlabeled categories; merging the rare categories into one also looks worthwhile
# There are too many model categories, so let's group them before looking
for model, num in zip(df['model'].value_counts().index, df['model'].value_counts()):
print(model, num)
# Case differences, double spaces, and people writing model names differently make this far too varied (a downside of scraped data)
# There are also far too many values (memory problems)
# fig = plt.figure(figsize=(6, 6))
# sns.countplot(y='model', data=df.fillna('n/a'), order=df.fillna('n/a')['model'].value_counts().index)
sns.countplot(y='condition', data=df.fillna('n/a'), order=df.fillna('n/a')['condition'].value_counts().index)
sns.countplot(y='cylinders', data=df.fillna('n/a'), order=df.fillna('n/a')['cylinders'].value_counts().index)
sns.countplot(y='fuel', data=df.fillna('n/a'), order=df.fillna('n/a')['fuel'].value_counts().index)
sns.countplot(y='transmission', data=df.fillna('n/a'), order=df.fillna('n/a')['transmission'].value_counts().index)
sns.countplot(y='drive', data=df.fillna('n/a'), order=df.fillna('n/a')['drive'].value_counts().index)
sns.countplot(y='size', data=df.fillna('n/a'), order=df.fillna('n/a')['size'].value_counts().index)
sns.countplot(y='type', data=df.fillna('n/a'), order=df.fillna('n/a')['type'].value_counts().index)
sns.countplot(y='paint_color', data=df.fillna('n/a'), order=df.fillna('n/a')['paint_color'].value_counts().index)
# Scraped data has to be picked apart column by column like this; it needs careful inspection
# If a column has many n/a values, dropping makes the work easier
###Output
_____no_output_____
###Markdown
Problem 6. Statistical analysis of the numerical data
###Code
# Analyze the value ranges and basic statistics of the numerical data
fig = plt.figure(figsize=(6, 2))
sns.rugplot(x='price', data=df, height=1)
fig = plt.figure(figsize=(6, 2))
sns.rugplot(x='odometer', data=df, height=1)
## Hard to make sense of the data as it is now
sns.histplot(x='age', data=df, bins=18, kde=True)
# Many cars are around 10 years old
###Output
_____no_output_____
###Markdown
Step 3. Performing data cleaning Problem 7. Visualizing and analyzing the categorical data
###Code
df.columns
# Visualize and analyze the categorical data with boxplot-style plots
sns.boxplot(x='manufacturer', y='price', data=df.fillna('n/a'))
sns.boxplot(x='fuel', y='price', data=df.fillna('n/a'))
# We need to clean up the data before we can do anything useful
###Output
_____no_output_____
###Markdown
Problem 8. Cleaning the categorical data
###Code
df.columns
df.drop('title_status', axis=1, inplace=True)
# Handle the categorical data by choosing appropriately among the methods below
# 1. Drop rows that contain missing data
# 2. Replace missing data with an 'others' category
# 3. Merge categories with too few members into an 'others' category
# 4. Train a classifier and fill in missing data with its predictions
df['manufacturer'].fillna('others').value_counts()
# The rare categories need to be mapped to 'others', so let's make that easier to do
col = 'manufacturer'
counts = df[col].fillna('others').value_counts()
plt.plot(range(len(counts)), counts)
# Categories beyond the 10th look like they have rather little data
# Let's keep only the top 10 manufacturers
n_categorical = 10
counts.index[n_categorical:]
df[col] = df[col].apply(lambda s: s if str(s) not in counts.index[n_categorical:] else 'others')
df[col].value_counts()
# Do the same for the other columns
col = 'region'
counts = df[col].fillna('others').value_counts()
plt.grid()
plt.plot(range(len(counts)), counts)
# Keep only the top 5 regions
n_categorical = 5
df[col] = df[col].apply(lambda s: s if str(s) not in counts.index[n_categorical:] else 'others')
df[col].value_counts()
# Do the same for the other columns (2)
col = 'model'
counts = df[col].fillna('others').value_counts()
plt.grid()
plt.plot(range(len(counts)), counts)
# Let's look at model in more detail (top 20 only) - there are about 30,000 categories
plt.grid()
plt.plot(range(len(counts[:20])), counts[:20])
# Keep only the top 10 models
n_categorical = 10
others = counts.index[n_categorical:]
df[col] = df[col].apply(lambda s: s if str(s) not in others else 'others') # beyond the top 10 there are ~30,000 other categories, which made this too slow -> precompute the others variable first
df[col].value_counts()
# Do the same for the other columns (3)
col = 'condition'
counts = df[col].fillna('others').value_counts()
plt.grid()
plt.plot(range(len(counts)), counts)
# Keep only the top 3 condition values
n_categorical = 3
others = counts.index[n_categorical:]
df[col] = df[col].apply(lambda s: s if str(s) not in others else 'others')
df[col].value_counts()
# Do the same for the other columns (3)
col = 'cylinders'
counts = df[col].fillna('others').value_counts()
plt.grid()
plt.plot(range(len(counts)), counts)
# Keep only the top 4 cylinder values
n_categorical = 4
others = counts.index[n_categorical:]
df[col] = df[col].apply(lambda s: s if str(s) not in others else 'others')
df[col].value_counts()
# Do the same for the other columns (3)
col = 'fuel'
counts = df[col].fillna('others').value_counts()
plt.grid()
plt.plot(range(len(counts)), counts)
df[col].fillna('others').value_counts()
counts.fillna('others').index
# Keep only the top few fuel categories
n_categorical = 3
other = counts.index[n_categorical:]
df[col] = df[col].apply(lambda s: s if str(s) not in other else 'other')
df.loc[df[col] == 'other', col] = 'others'
df[col].value_counts()
# Do the same for the other columns (4)
col = 'transmission'
counts = df[col].fillna('others').value_counts()
plt.grid()
plt.plot(range(len(counts)), counts)
n_categorical = 3
others = counts.index[n_categorical:]
df[col] = df[col].apply(lambda s: s if str(s) not in others else 'others')
df[col].value_counts()
# Do the same for the other columns (5)
col = 'drive'
counts = df[col].fillna('others').value_counts()
plt.grid()
plt.plot(range(len(counts)), counts)
n_categorical = 3
others = counts.index[n_categorical:]
df[col] = df[col].apply(lambda s: s if str(s) not in others else 'others')
df[col].value_counts()
# Do the same for the other columns (5)
col = 'size'
counts = df[col].fillna('others').value_counts()
plt.grid()
plt.plot(range(len(counts)), counts)
n_categorical = 2
others = counts.index[n_categorical:]
df[col] = df[col].apply(lambda s: s if str(s) not in others else 'others')
df[col].value_counts()
# Do the same for the other columns (6)
col = 'type'
counts = df[col].fillna('others').value_counts()
plt.grid()
plt.plot(range(len(counts)), counts)
n_categorical = 8
others = counts.index[n_categorical:]
df[col] = df[col].apply(lambda s: s if str(s) not in others else 'others')
df[col].value_counts()
df.loc[df[col] == 'other', col] = 'others'
# Do the same for the other columns (6)
col = 'paint_color'
counts = df[col].fillna('others').value_counts()
plt.grid()
plt.plot(range(len(counts)), counts)
n_categorical = 7
others = counts.index[n_categorical:]
df[col] = df[col].apply(lambda s: s if str(s) not in others else 'others')
df[col].value_counts()
###Output
_____no_output_____
###Markdown
Problem 9. Visualizing and analyzing the numerical data
###Code
# Visualize and analyze the numerical data with Seaborn
# Hint) When the value range is very wide, histplot() and similar plots do not work well, so use rugplot
fig = plt.figure(figsize=(6, 2))
sns.rugplot(x='price', data=df, height=1)
fig = plt.figure(figsize=(6, 2))
sns.rugplot(x='odometer', data=df, height=1)
sns.histplot(x='age', data=df, bins=18, kde=True)
###Output
_____no_output_____
###Markdown
Problem 10. Cleaning the numerical data
###Code
# Remove outliers with the quantile() method and confirm by visualizing
p1 = df['price'].quantile(0.99)
p2 = df['price'].quantile(0.1)
print(p1,p2)
df = df[(p1 > df['price']) & (df['price'] > p2)]
o1 = df['odometer'].quantile(0.99)
o2 = df['odometer'].quantile(0.1)
print(o1, o2)
df = df[(o1 > df['odometer']) & (df['odometer'] > o2)]
df.describe()
df.columns
# Visualize and analyze the categorical data with boxplot-style plots
fig = plt.figure(figsize=(10, 5))
sns.boxplot(x='manufacturer', y='price', data=df)
fig = plt.figure(figsize=(14, 5))
sns.boxplot(x='model', y='price', data=df)
###Output
_____no_output_____
###Markdown
Problem 11. Visualizing column correlations with a heatmap
###Code
sns.heatmap(df.corr(), annot=True, cmap='YlOrRd')
###Output
_____no_output_____
###Markdown
Step 4. Preprocessing the data for model training Problem 12. Standardizing the numerical data with StandardScaler
###Code
from sklearn.preprocessing import StandardScaler
# Standardize the numerical data with StandardScaler
X_num = df[['odometer', 'age']]
scaler = StandardScaler()
scaler.fit(X_num)
X_scaled = scaler.transform(X_num)
X_scaled = pd.DataFrame(X_scaled, index=X_num.index, columns=X_num.columns)
# Convert the categorical data to one-hot vectors with get_dummies
X_cat = df.drop(['price', 'odometer', 'age'], axis=1)
X_cat = pd.get_dummies(X_cat)
# Combine the input and output data
X = pd.concat([X_scaled, X_cat], axis=1)
y = df['price']
X.shape
X.isna().sum()
# age has many missing values => fill them with the mean
X['age'].mean() # close to 0 because of standardization -> fill with 0
X.fillna(0.0, inplace=True)
###Output
_____no_output_____
###Markdown
Problem 13. Splitting into training data and test data
###Code
from sklearn.model_selection import train_test_split
# Split into training and test data with the train_test_split() function
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
###Output
_____no_output_____
###Markdown
Step 5. Training regression models Problem 14. Training an XGBoost regression model
###Code
from xgboost import XGBRegressor
# Create and train an XGBRegressor model
model_reg = XGBRegressor()
model_reg.fit(X_train, y_train)
###Output
[03:16:56] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
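###Markdown
The goals above also mention LightGBM. The cell below is a minimal sketch (not part of the original notebook) of training LightGBM's regressor on the same split for comparison; the default parameters and the use of .values (to sidestep restrictions on one-hot column names) are assumptions made only for illustration.
###Code
from lightgbm import LGBMRegressor
from sklearn.metrics import mean_absolute_error

model_lgbm = LGBMRegressor()  # default parameters, illustration only
model_lgbm.fit(X_train.values, y_train)
pred_lgbm = model_lgbm.predict(X_test.values)
print(mean_absolute_error(y_test, pred_lgbm))
###Output
_____no_output_____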
###Markdown
Problem 15. Evaluating the trained model
###Code
from sklearn.metrics import mean_absolute_error, mean_squared_error
from math import sqrt
# Run predictions and print the mean_absolute_error and RMSE results
pred = model_reg.predict(X_test)
print(mean_absolute_error(y_test, pred))
print(sqrt(mean_squared_error(y_test, pred)))
###Output
4459.597895957964
6621.79855754232
###Markdown
Step 6. Deeper analysis of the training results Problem 16. Visualizing actual vs. predicted values with a scatter plot
###Code
# Analyze y_test vs. pred visually with a scatter plot
# Hint) If a scatter plot is hard to read visually, use histplot or similar
plt.scatter(x=y_test, y=pred, alpha=0.005)
plt.plot([0,60000], [0,60000], 'r-')
# The model sometimes seems to predict a high price even when the actual value is low
sns.histplot(x=y_test, y=pred)
plt.plot([0, 60000], [0, 60000], 'r-')
###Output
_____no_output_____
###Markdown
Problem 17. Checking the histogram of the error values
###Code
# Check the error-rate histogram using a histogram of err
err = (pred - y_test) / y_test * 100
sns.histplot(err)
plt.xlabel('error (%)')
plt.xlim(-100, 100)
plt.grid()
# The model tends to predict slightly low values
err = (pred - y_test)
sns.histplot(err)
plt.xlabel('error ($)')
plt.grid()
###Output
_____no_output_____ |
Data Science and Machine Learning/Thorough Python Data Science Topics/Combinatory Categorial Grammar Parsing with NLTK.ipynb | ###Markdown
Combinatory Categorial Grammar Parsing with NLTK Code Examples The initial code examples here were taken from http://www.nltk.org/howto/ccg.html and adapted for our course needs.
###Code
from nltk.ccg import chart, lexicon
###Output
_____no_output_____
###Markdown
We can specify a lexicon as follows:
###Code
lex = lexicon.fromstring('''
:- S, NP, N, VP
Det :: NP/N
Pro :: NP
Modal :: S\\NP/VP
TV :: VP/NP
DTV :: TV/NP
the => Det
that => Det
that => NP
I => Pro
you => Pro
we => Pro
chef => N
cake => N
children => N
dough => N
will => Modal
should => Modal
might => Modal
must => Modal
and => var\\.,var/.,var
to => VP[to]/VP
without => (VP\\VP)/VP[ing]
be => TV
cook => TV
eat => TV
cooking => VP[ing]/NP
give => DTV
is => (S\\NP)/NP
prefer => (S\\NP)/NP
which => (N\\N)/(S/NP)
persuade => (VP/VP[to])/NP
''')
###Output
_____no_output_____
###Markdown
We instantiate a parser instance using this lexicon specification:
###Code
parser = chart.CCGChartParser(lex, chart.DefaultRuleSet)
###Output
_____no_output_____
###Markdown
The following function wraps the parser calls. It takes a parser object as the first argument and the sentence as a string as the second.
###Code
def parse(myparser, sentence):
for p in myparser.parse(sentence.split()): # doctest: +SKIP
chart.printCCGDerivation(p)
break
parse(parser, "you prefer that cake")
parse(parser, "that is the cake which you prefer")
nosub_parser = chart.CCGChartParser(lex, chart.ApplicationRuleSet + chart.CompositionRuleSet + chart.TypeRaiseRuleSet)
parse(nosub_parser, "that is the dough which you will eat without cooking")
parse(parser, "that is the dough which you will eat without cooking")
parse(parser, "that is the cake which we will persuade the chef to cook")
parse(parser, "that is the cake which we will persuade the chef to give the children")
###Output
that is the cake which we will persuade the chef to give the children
NP ((S\NP)/NP) (NP/N) N ((N\N)/(S/NP)) NP ((S\NP)/VP) ((VP/VP['to'])/NP) (NP/N) N (VP['to']/VP) ((VP/NP)/NP) (NP/N) N
---->T
(S/(S\NP))
-------------->
NP
---------------------------------->
(VP/VP['to'])
------------------>
NP
-------------------------------->
(VP/NP)
----------------------------------------------->B
(VP['to']/NP)
--------------------------------------------------------------------------------->B
(VP/NP)
---------------------------------------------------------------------------------------------->B
((S\NP)/NP)
-------------------------------------------------------------------------------------------------->B
(S/NP)
------------------------------------------------------------------------------------------------------------------>
(N\N)
------------------------------------------------------------------------------------------------------------------------<
N
-------------------------------------------------------------------------------------------------------------------------------->
NP
--------------------------------------------------------------------------------------------------------------------------------------------->
(S\NP)
---------------------------------------------------------------------------------------------------------------------------------------------------<
S
|
Applied Text Mining/Week 2/Assignment/Assignment+2.ipynb | ###Markdown
---_You are currently looking at **version 1.0** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-text-mining/resources/d9pwm) course resource._--- Assignment 2 - Introduction to NLTKIn part 1 of this assignment you will use nltk to explore the Herman Melville novel Moby Dick. Then in part 2 you will create a spelling recommender function that uses nltk to find words similar to the misspelling. Part 1 - Analyzing Moby Dick
###Code
import nltk
nltk.download('punkt')
nltk.download('wordnet')
import pandas as pd
import numpy as np
# If you would like to work with the raw text you can use 'moby_raw'
with open('moby.txt', 'r') as f:
moby_raw = f.read()
# If you would like to work with the novel in nltk.Text format you can use 'text1'
moby_tokens = nltk.word_tokenize(moby_raw)
text1 = nltk.Text(moby_tokens)
###Output
_____no_output_____
###Markdown
Example 1How many tokens (words and punctuation symbols) are in text1?*This function should return an integer.*
###Code
def example_one():
return len(nltk.word_tokenize(moby_raw)) # or alternatively len(text1)
example_one()
###Output
_____no_output_____
###Markdown
Example 2How many unique tokens (unique words and punctuation) does text1 have?*This function should return an integer.*
###Code
def example_two():
return len(set(nltk.word_tokenize(moby_raw))) # or alternatively len(set(text1))
example_two()
###Output
_____no_output_____
###Markdown
Example 3After lemmatizing the verbs, how many unique tokens does text1 have?*This function should return an integer.*
###Code
from nltk.stem import WordNetLemmatizer
def example_three():
lemmatizer = WordNetLemmatizer()
lemmatized = [lemmatizer.lemmatize(w,'v') for w in text1]
return len(set(lemmatized))
example_three()
###Output
_____no_output_____
###Markdown
Question 1What is the lexical diversity of the given text input? (i.e. ratio of unique tokens to the total number of tokens)*This function should return a float.*
###Code
def answer_one():
tokens = nltk.word_tokenize(moby_raw)
unique_tokens = len(set(tokens))
total_tokens = len(tokens)
ratio = unique_tokens / total_tokens
return ratio
answer_one()
###Output
_____no_output_____
###Markdown
Question 2What percentage of tokens is 'whale'or 'Whale'?*This function should return a float.*
###Code
def answer_two():
tokens = nltk.word_tokenize(moby_raw)
num_whales = len([word for word in tokens if word == "whale" or word == "Whale"])
percentage = (num_whales / len(tokens)) * 100
return float(percentage)
answer_two()
###Output
_____no_output_____
###Markdown
Question 3What are the 20 most frequently occurring (unique) tokens in the text? What is their frequency?*This function should return a list of 20 tuples where each tuple is of the form `(token, frequency)`. The list should be sorted in descending order of frequency.*
###Code
# My approach without nltk
def answer_three():
# Get the frequency distribution
freq_dist = nltk.FreqDist(moby_tokens)
# Create a dict of sorted tokens
freq_dict = {
k: v for k, v in sorted(freq_dist.items(), key=lambda x: x[1], reverse=True)
}
# Store the top 20 to a list
top20 = []
for n, v in enumerate(freq_dict.items()):
if n < 20:
top20.append(v)
else:
pass
return top20
answer_three()
# Approach usin nltk
def answer_three():
# Get the frequency distribution
freq_dist = nltk.FreqDist(moby_tokens)
# Store the top 20
top20 = freq_dist.most_common()[:20]
return top20
answer_three()
###Output
_____no_output_____
###Markdown
Question 4What tokens have a length of greater than 5 and frequency of more than 150?*This function should return an alphabetically sorted list of the tokens that match the above constraints. To sort your list, use `sorted()`*
###Code
def answer_four():
# Get the frequency distribution
freq_dist = nltk.FreqDist(moby_tokens)
# Gather tokens with a length > 5 & freq > 150
final_tokens = {
k.lower(): v for k, v in freq_dist.items() if len(k) > 5 and v > 150
}
return sorted(final_tokens)
answer_four()
###Output
_____no_output_____
###Markdown
Question 5Find the longest word in text1 and that word's length.*This function should return a tuple `(longest_word, length)`.*
###Code
def answer_five():
# Store the max token length
max_token_len = max([len(token) for token in text1.tokens])
# Return a tuple consisting of the token and the token length
longest_token = [
(token, len(token)) for token in text1.tokens if len(token) == max_token_len
][0]
return longest_token
answer_five()
###Output
_____no_output_____
###Markdown
Question 6What unique words have a frequency of more than 2000? What is their frequency?"Hint: you may want to use `isalpha()` to check if the token is a word and not punctuation."*This function should return a list of tuples of the form `(frequency, word)` sorted in descending order of frequency.*
###Code
def answer_six():
# Get the frequency distribution
freq_dist = nltk.FreqDist(moby_tokens)
# Store terms in a dictionary
high_freq_dict = {
k: v
for k, v in sorted(freq_dist.items(), key=lambda x: x[1], reverse=True)
if k.isalpha() and v > 2000
}
# List of tuples
list_of_tuples = [(k, v) for k, v in high_freq_dict.items()]
return list_of_tuples
answer_six()
###Output
_____no_output_____
###Markdown
Question 7What is the average number of tokens per sentence?*This function should return a float.*
###Code
def answer_seven():
avg_tokens_per_sent = np.mean(
[len(nltk.word_tokenize(x)) for x in nltk.sent_tokenize(moby_raw)]
)
return avg_tokens_per_sent
answer_seven()
###Output
_____no_output_____
###Markdown
Question 8What are the 5 most frequent parts of speech in this text? What is their frequency?*This function should return a list of tuples of the form `(part_of_speech, frequency)` sorted in descending order of frequency.*
###Code
def answer_eight():
# Extract the pos tag
pos_tag = nltk.pos_tag(text1)
# Create counter
counter = {}
for token, pos in pos_tag:
if pos not in counter.keys():
counter[pos] = 1
else:
counter[pos] += 1
# Sort the pos by frequency
pos_dict = {
k: v for k, v in sorted(counter.items(), key=lambda x: x[1], reverse=True)
}
# Store and return the top 5
top_five = [(k, v) for k, v in pos_dict.items()][:5]
return top_five
answer_eight()
###Output
_____no_output_____
###Markdown
Part 2 - Spelling RecommenderFor this part of the assignment you will create three different spelling recommenders, that each take a list of misspelled words and recommends a correctly spelled word for every word in the list.For every misspelled word, the recommender should find find the word in `correct_spellings` that has the shortest distance*, and starts with the same letter as the misspelled word, and return that word as a recommendation.*Each of the three different recommenders will use a different distance measure (outlined below).Each of the recommenders should provide recommendations for the three default words provided: `['cormulent', 'incendenece', 'validrate']`.
###Code
from nltk.corpus import words
correct_spellings = words.words()
###Output
_____no_output_____
###Markdown
Question 9For this recommender, your function should provide recommendations for the three default words provided above using the following distance metric:**[Jaccard distance](https://en.wikipedia.org/wiki/Jaccard_index) on the trigrams of the two words.***This function should return a list of length three:`['cormulent_reccomendation', 'incendenece_reccomendation', 'validrate_reccomendation']`.*
###Code
def answer_nine(entries=["cormulent", "incendenece", "validrate"]):
n_ngram = 3
outputs = []
for entry in entries:
first_letter = entry[0]
candidate_words = [
word
for word in correct_spellings
if word.startswith(first_letter) and len(word) > 2 #len>2 avoids warning with nltk.ngrams
]
jaccard_dist = [
(
word,
nltk.jaccard_distance(
set(nltk.ngrams(entry, n=n_ngram)),
set(nltk.ngrams(word, n=n_ngram)),
),
)
for word in candidate_words
]
# Sort, select the tuple with the min distance
outputs.append(sorted(jaccard_dist, key=lambda x: x[1])[0][0])
return outputs
answer_nine()
###Output
_____no_output_____
###Markdown
Question 10For this recommender, your function should provide recommendations for the three default words provided above using the following distance metric:**[Jaccard distance](https://en.wikipedia.org/wiki/Jaccard_index) on the 4-grams of the two words.***This function should return a list of length three:`['cormulent_reccomendation', 'incendenece_reccomendation', 'validrate_reccomendation']`.*
###Code
def answer_ten(entries=['cormulent', 'incendenece', 'validrate']):
n_ngram = 4
outputs = []
for entry in entries:
first_letter = entry[0]
candidate_words = [
word
for word in correct_spellings
if word.startswith(first_letter) and len(word) > 2 #len>2 avoids warning with nltk.ngrams
]
jaccard_dist = [
(
word,
nltk.jaccard_distance(
set(nltk.ngrams(entry, n=n_ngram)),
set(nltk.ngrams(word, n=n_ngram)),
),
)
for word in candidate_words
]
# Sort, select the tuple with the min distance
outputs.append(sorted(jaccard_dist, key=lambda x: x[1])[0][0])
return outputs
answer_ten()
###Output
_____no_output_____
###Markdown
Question 11For this recommender, your function should provide recommendations for the three default words provided above using the following distance metric:**[Edit distance on the two words with transpositions.](https://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance)***This function should return a list of length three:`['cormulent_reccomendation', 'incendenece_reccomendation', 'validrate_reccomendation']`.*
###Code
def answer_eleven(entries=["cormulent", "incendenece", "validrate"]):
outputs = []
for entry in entries:
first_letter = entry[0]
candidate_words = [
word
for word in correct_spellings
if word.startswith(first_letter)
and len(word) > 2 # len>2 avoids warning with nltk.ngrams
]
levenshtein_dist = [
(word, nltk.edit_distance(entry, word, transpositions=True))
for word in candidate_words
]
# Sort, select the tuple with the min distance
outputs.append(sorted(levenshtein_dist, key=lambda x: x[1])[0][0])
return outputs
answer_eleven()
###Output
_____no_output_____ |
DataAnalysis/Notebooks/.ipynb_checkpoints/Hashtag exploration-checkpoint.ipynb | ###Markdown
Most common hashtags
###Code
query = """
SELECT h.hashtag, count(x.tweetID) AS tweetCount
FROM tweetsXtags x
INNER JOIN hashtags h ON(x.tagID = h.tagId)
GROUP BY (h.tagID)
ORDER BY tweetCount DESC
LIMIT 500
"""
###Output
_____no_output_____ |
arrays/sparse_array.ipynb | ###Markdown
ProblemYou have a large array with most of the elements as zero.Use a more space-efficient data structure, SparseArray, that implements the same interface:- `init(arr, size)`: initialize with the original large array and size.- `set(i, val)`: updates index at i with val.- `get(i)`: gets the value at index i.
###Code
class SparseArray:
def __init__(self, arr, size):
self._dict = {}
self.size = size
for i, value in enumerate(arr):
if value != 0:
self._dict[i] = value
def set(self, i, val):
if i < 0 or i >= self.size:
raise IndexError("Out of bounds")
if val != 0:
self._dict[i] = val
return
        elif i in self._dict:
del self._dict[i]
def get(self, i):
if i < 0 or i >= self.size:
raise IndexError("Out of bounds")
return self._dict.get(i, 0)
arr = [0 for i in range(100)]
arr = [i for i in range(15)] + arr
sparse_array = SparseArray(arr, len(arr))
sparse_array.get(14)
sparse_array.set(14, 12310)
sparse_array._dict
###Output
_____no_output_____ |
small_run/Flow_Cytometry_Mondrian_Processes-AML-small-run.ipynb | ###Markdown
Mondrian Processes Various Functions for Mondrian Processes Sampling...
###Code
### SAMPLE MONDRIAN PROCESS ###
def draw_Mondrian(theta_space, budget=5):
return draw_Mondrian_at_t(theta_space, 0, budget)
def draw_Mondrian_at_t(theta_space, t, budget):
dists = theta_space[:,1] - theta_space[:,0]
lin_dim = np.sum(dists)
T = np.random.exponential(scale=1./lin_dim)
if t+T > budget:
return (theta_space, None, None)
d = np.argmax(np.random.multinomial(n=1, pvals=dists/lin_dim))
x = np.random.uniform(low=theta_space[d,0], high=theta_space[d,1])
theta_left = np.copy(theta_space)
theta_left[d][1] = x
M_left = draw_Mondrian_at_t(theta_left, t+T, budget)
theta_right = np.copy(theta_space)
theta_right[d][0] = x
M_right = draw_Mondrian_at_t(theta_right, t+T, budget)
return (theta_space, M_left, M_right)
def comp_log_p_sample(theta_space, data):
if theta_space[1] == None and theta_space[2] == None:
if data.shape[0] == 0:
return 0
else:
mu = np.mean(data, axis = 0)
residual = data - mu
cov = np.dot(residual.T , residual) / data.shape[0] + np.identity(data.shape[1])*0.001
return np.log(multivariate_normal.pdf(data, mean=mu, cov=cov)).sum()
# find the dimension and location of first cut
root_rec = theta_space[0]
left_rec = theta_space[1][0]
for _ in range(root_rec.shape[0]):
if root_rec[_,1] != left_rec[_,1]:
break
dim, pos = _, left_rec[_,1]
idx_left = data[:,dim] < pos
idx_right = data[:,dim] >= pos
log_len_left = np.log(pos - root_rec[dim,0])
log_len_right = np.log(root_rec[dim,1] - pos)
return comp_log_p_sample(theta_space[1], data[idx_left]) + comp_log_p_sample(theta_space[2], data[idx_right])
###Output
_____no_output_____
###Markdown
Visualization...
###Code
### VISUALIZE 2D MONDRIAN PROCESS ###
def print_partitions(p, trans_level=1., color='k'):
if not p[1] and not p[2]:
plt.plot([p[0][0,0], p[0][0,0]], [p[0][1,0], p[0][1,1]], color+'-', linewidth=3, alpha=trans_level)
plt.plot([p[0][0,1], p[0][0,1]], [p[0][1,0], p[0][1,1]], color+'-', linewidth=3, alpha=trans_level)
plt.plot([p[0][0,0], p[0][0,1]], [p[0][1,0], p[0][1,0]], color+'-', linewidth=3, alpha=trans_level)
plt.plot([p[0][0,0], p[0][0,1]], [p[0][1,1], p[0][1,1]], color+'-', linewidth=3, alpha=trans_level)
else:
print_partitions(p[1], trans_level, color)
print_partitions(p[2], trans_level, color)
### VISUALIZE 2D POSTERIOR WITH DATA###
def print_posterior(data, samples, trans_level=.05, color='k'):
plt.figure()
plt.scatter(data[:,0], data[:,1], c='k', edgecolors='k', s=5, alpha=.5)
#print all samples
for sample in samples:
print_partitions(sample, trans_level, color)
def print_tree_at_leaf(mp_tree, table):
if mp_tree[1] == None and mp_tree[2] == None:
        print(table.shape)
return 1
# find the dimension and location of first cut
root_rec = mp_tree[0]
left_rec = mp_tree[1][0]
for _ in range(root_rec.shape[0]):
if root_rec[_,1] != left_rec[_,1]:
break
d, pos = _, left_rec[_,1]
cut_type = ' '.join([str(int(x)) for x in sorted(set(table[table.columns[d]]))])
if cut_type in {"-1 0 1", '-1 1'}:
idx_table_left = table[table.columns[d]] != 1
table_left = table.loc[idx_table_left]
idx_table_right = table[table.columns[d]] != -1
table_right = table.loc[idx_table_right]
if cut_type == '-1 0':
idx_table_left = table[table.columns[d]] == -1
table_left = table.loc[idx_table_left]
idx_table_right = table[table.columns[d]] == 0
table_right = table.loc[idx_table_right]
if cut_type == '0 1':
idx_table_left = table[table.columns[d]] == 0
table_left = table.loc[idx_table_left]
idx_table_right = table[table.columns[d]] == 1
table_right = table.loc[idx_table_right]
return print_tree_at_leaf(mp_tree[1], table_left) + print_tree_at_leaf(mp_tree[2], table_right)
###Output
_____no_output_____
###Markdown
Mondrian Process Generative ModelWe apply Mondrian Processes (MPs) to flow cytometry data, using the prior information in the table above to guide the axis-aligned cuts. Instead of drawing it uniformly, we draw the cut proportion from $w \sim \text{Beta}(a_{0}, b_{0})$. Now let's re-implement the MP sampling function, accounting for the prior information...
###Code
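# Added illustration (not from the original notebook): the informed sampler below replaces the
# plain Mondrian process's uniform cut with a Beta-scaled cut. For a dimension with bounds
# [lo, hi], the cut lands at lo + w*(hi - lo) with w ~ Beta(a0, b0), so (a0, b0) = (5, 2)
# pushes cuts toward the high end and (2, 5) toward the low end. The bounds and parameters
# here are arbitrary values chosen only for illustration.
lo, hi = 0.0, 10.0
uniform_cut = np.random.uniform(low=lo, high=hi)
upper_biased_cut = lo + (hi - lo) * np.random.beta(5., 2.)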
### SAMPLE MONDRIAN PROCESS WITH PRIOR INFORMATION ###
def draw_informed_Mondrian(theta_space, table, budget=5):
# INFORMATIVE PRIORS
upper_cut = (5., 2.)
lower_cut = (2., 5.)
middle_cut = (5., 5.)
neutral_cut = (2., 2.)
priors_dict = { '-1':lower_cut, '0':neutral_cut, '1':upper_cut,
'-1 0':lower_cut, '-1 1':middle_cut, '0 1':upper_cut,
'-1 0 1': middle_cut, '': neutral_cut
}
cut_history = [1] * theta_space.shape[0]
return draw_informed_Mondrian_at_t(theta_space, table, priors_dict, cut_history)
def draw_informed_Mondrian_at_t(theta_space, table, priors_dict, cut_history):
if sum(cut_history) == 0 or table.shape[0] == 1:
return (theta_space, None, None)
types_str = [' '.join([str(int(x)) for x in sorted(set(table[table.columns[d]]))])
for d in range(table.shape[1])]
if set([types_str[d] for d in range(table.shape[1]) if cut_history[d] == 1]).issubset({'0','1','-1'}):
return (theta_space, None, None)
low, medium, high, very_high = 0, 1, 100, 1000
priority_dict = {'-1': low , '0': low, '1': low,
'-1 0': medium, '0 1': medium,
'-1 0 1': high, '-1 1':very_high
}
types = np.array([priority_dict[_] for _ in types_str])
dists = (theta_space[:,1] - theta_space[:,0])* types
lin_dim = np.sum(dists)
# draw dimension to cut
dim_probs = ((dists/lin_dim) * np.array(cut_history))
dim_probs /= np.sum(dim_probs)
d = np.argmax(np.random.multinomial(n=1, pvals=dim_probs))
cut_history[d] = 0
prior_type_str = ' '.join([str(int(x)) for x in sorted(set(table[table.columns[d]]))])
prior_params = priors_dict[prior_type_str]
# make scaled cut
x = (theta_space[d,1] - theta_space[d,0]) * np.random.beta(prior_params[0], prior_params[1]) + theta_space[d,0]
cut_type = types_str[d]
if cut_type in {"-1 0 1", '-1 1'}:
idx_table_left = table[table.columns[d]] != 1
table_left = table.loc[idx_table_left]
idx_table_right = table[table.columns[d]] != -1
table_right = table.loc[idx_table_right]
if cut_type == '-1 0':
idx_table_left = table[table.columns[d]] == -1
table_left = table.loc[idx_table_left]
idx_table_right = table[table.columns[d]] == 0
table_right = table.loc[idx_table_right]
if cut_type == '0 1':
idx_table_left = table[table.columns[d]] == 0
table_left = table.loc[idx_table_left]
idx_table_right = table[table.columns[d]] == 1
table_right = table.loc[idx_table_right]
# make lower partition
theta_left = np.copy(theta_space)
theta_left[d][1] = x
M_left = draw_informed_Mondrian_at_t(theta_left, table_left, priors_dict, list(cut_history))
# make upper partition
theta_right = np.copy(theta_space)
theta_right[d][0] = x
M_right = draw_informed_Mondrian_at_t(theta_right, table_right, priors_dict,list(cut_history))
return (theta_space, M_left, M_right)
def Mondrian_Gaussian_perturbation(theta_space, old_sample, stepsize):
"""
Input:
theta_space: a rectangle
old_sample: partioned theta_space of a mondrian process
stepsize: gaussian std
"""
if old_sample[1] == None and old_sample[2] == None:
return (theta_space, None, None)
# find the dimension and location of first cut in the old_sample
for _ in range(old_sample[0].shape[0]):
if old_sample[0][_,1] > old_sample[1][0][_,1]:
break
dim, pos = _, old_sample[1][0][_,1]
# propose position of new cut
good_propose = False
while good_propose == False:
new_pos = pos + np.random.normal(0,(old_sample[0][dim,1] - old_sample[0][dim,0])*stepsize,1)[0]
if new_pos < theta_space[dim,1] and new_pos > theta_space[dim,0]:
good_propose = True
theta_left = np.copy(theta_space)
theta_left[dim,1] = new_pos
theta_right = np.copy(theta_space)
theta_right[dim,0] = new_pos
new_M_left= Mondrian_Gaussian_perturbation(theta_left, old_sample[1], stepsize)
new_M_right = Mondrian_Gaussian_perturbation(theta_right, old_sample[2], stepsize)
return (theta_space, new_M_left, new_M_right)
def comp_log_p_prior(theta_space, table, cut_history):
"""
This function returns prior probability of a Mondrian process theta_space
"""
if theta_space[1] == None and theta_space[2] == None:
return 0
log_prior = 0
# INFORMATIVE PRIORS
upper_cut = (5., 2.)
lower_cut = (2., 5.)
middle_cut = (5., 5.)
neutral_cut = (2., 2.)
priors_dict = { '-1':lower_cut, '0':neutral_cut, '1':upper_cut,
'-1 0':lower_cut, '-1 1':middle_cut, '0 1':upper_cut,
'-1 0 1': middle_cut, '': neutral_cut
}
# find the dimension and location of first cut
root_rec = theta_space[0]
left_rec = theta_space[1][0]
for _ in range(root_rec.shape[0]):
if root_rec[_,1] != left_rec[_,1]:
break
dim = _
beta_pos = (left_rec[_,1] - left_rec[dim,0])/(root_rec[dim,1] - root_rec[dim, 0])
prior_params = priors_dict[' '.join([str(int(x)) \
for x in sorted(set(table[table.columns[dim]]))])]
# compute the log likelihood of the first cut
types_str = [' '.join([str(int(x)) for x in sorted(set(table[table.columns[d]]))])
for d in range(table.shape[1])]
low_priority, medium_priority, high_priority, very_high_priority = 0, 1, 100, 1000
priority_dict = {'-1': low_priority , '0': low_priority, '1': low_priority,
'-1 0': medium_priority, '0 1': medium_priority,
'-1 0 1': high_priority, '-1 1':very_high_priority
}
types = np.array([priority_dict[_] for _ in types_str])
dists = (root_rec[:,1] - root_rec[:,0])* types
lin_dim = np.sum(dists)
# probability of dim
dim_probs = ((dists/lin_dim) * np.array(cut_history))
dim_probs /= np.sum(dim_probs)
log_prior += np.log(dim_probs[dim])
# probability of pos
log_prior += np.log(beta.pdf(beta_pos, prior_params[0], prior_params[1]))
# split the table
cut_history[dim] = 0
cut_type = types_str[dim]
if cut_type in {"-1 0 1", '-1 1'}:
idx_table_left = table[table.columns[dim]] != 1
table_left = table.loc[idx_table_left]
idx_table_right = table[table.columns[dim]] != -1
table_right = table.loc[idx_table_right]
if cut_type == '-1 0':
idx_table_left = table[table.columns[dim]] == -1
table_left = table.loc[idx_table_left]
idx_table_right = table[table.columns[dim]] == 0
table_right = table.loc[idx_table_right]
if cut_type == '0 1':
idx_table_left = table[table.columns[dim]] == 0
table_left = table.loc[idx_table_left]
idx_table_right = table[table.columns[dim]] == 1
table_right = table.loc[idx_table_right]
return log_prior + comp_log_p_prior(theta_space[1], table_left, list(cut_history)) \
+ comp_log_p_prior(theta_space[2], table_right, list(cut_history))
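# In equation form, the per-cut prior evaluated above is
#   p(d) = priority_d * len_d * cut_history_d / sum_j priority_j * len_j * cut_history_j
#   u    = (x - theta[d, 0]) / (theta[d, 1] - theta[d, 0]),    u ~ Beta(a_d, b_d)
#   log p(cut) = log p(d) + log Beta(u; a_d, b_d)
# with (a_d, b_d) taken from `priors_dict` according to the marker's -1/0/1 annotation,
# and the recursion adds one such term per internal node of the tree.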
###Output
_____no_output_____
###Markdown
Classification
###Code
def classify_cells(data, mp_tree, table, cell_type_name2idx):
Y = np.array([1]*data.shape[0])
if data.shape[0] == 0:
return Y
    if mp_tree[1] is None and mp_tree[2] is None:
if table.shape[0] > 1:
# print "more than one clusters, number of data points:", data.shape[0]
labels = [cell_type_name2idx[table.index[_]] for _ in range(table.shape[0])]
return np.array(np.random.choice(labels, data.shape[0],replace = True))
else:
return Y * cell_type_name2idx[table.index[0]]
# find the dimension and location of first cut
root_rec = mp_tree[0]
left_rec = mp_tree[1][0]
for _ in range(root_rec.shape[0]):
if root_rec[_,1] != left_rec[_,1]:
break
dim, pos = _, left_rec[_,1]
# find labels that match dim info from table
idx_table_left = table[table.columns[dim]] != 1
table_left = table.loc[idx_table_left]
idx_table_right = table[table.columns[dim]] != -1
table_right = table.loc[idx_table_right]
    # find data indices that fall above / below the cut position in dimension dim
idx_left = data[:,dim] < pos
idx_right = data[:,dim] >= pos
Y[idx_left] = classify_cells(data[idx_left],mp_tree[1],table_left, cell_type_name2idx)
Y[idx_right] = classify_cells(data[idx_right],mp_tree[2],table_right, cell_type_name2idx)
return Y
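# Minimal usage sketch (assumes `data`, `table`, `cell_type_name2idx`, reference labels `Y`,
# and an accepted Mondrian sample `mp_tree` as constructed in the cells below):
#   Y_hat = classify_cells(data, mp_tree, table, cell_type_name2idx)
#   accuracy = np.mean(Y_hat == Y)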
###Output
_____no_output_____
###Markdown
Running time of Functions
###Code
%%time
# load AML data and table
##### X: np.array, flow cytometry data, arcsin transformed
##### T: table of expert knowledge
np.random.seed(1234)
PATH = '/home/disij/projects/acdc/data/'
### LOAD DATA ###
path = PATH + 'AML_benchmark/'
df = pd.read_csv( path + 'AML_benchmark.csv.gz', sep=',', header = 0, compression = 'gzip', engine='python')
table = pd.read_csv(path + 'AML_table.csv', sep=',', header=0, index_col=0)
### PROCESS: discard ungated events ###
df = df[df.cell_type != 'NotGated']
df = df.drop(['Time', 'Cell_length','file_number', 'event_number', 'DNA1(Ir191)Di',
'DNA2(Ir193)Di', 'Viability(Pt195)Di', 'subject'], axis = 1)
channels = [item[:item.find('(')] for item in df.columns[:-1]]
df.columns = channels + ['cell_type']
df = df.loc[df['cell_type'] != 'NotDebrisSinglets']
table = table.fillna(0)
X = df[channels].values
table_headers = list(table)
df2 = pd.DataFrame([[0]*table.shape[1]], columns=table.columns)
table.append(df2)  # note: DataFrame.append returns a new frame, so `table` itself is left unchanged here
### transform data
data = np.arcsinh((X-1.)/5.)
theta_space = np.array([[data[:,d].min(), data[:,d].max()] for d in range(data.shape[1])])
def comp_depth_tree(sample):
    if sample is None:
return 0
else:
return 1+ max(comp_depth_tree(sample[1]),comp_depth_tree(sample[2]))
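# Hypothetical check (commented out so the cell's timing is unaffected): depth of one draw,
# reusing the `theta_space` and `table` defined above in this cell:
#   sample = draw_informed_Mondrian(theta_space, table)
#   print comp_depth_tree(sample)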
###Output
_____no_output_____
###Markdown
Flow Cytometry Data. Load the AML dataset from the [ACDC paper](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5447237/pdf/btx054.pdf)...
###Code
# load AML data and table
##### X: np.array, flow cytometry data, arcsin transformed
##### T: table of expert knowledge
np.random.seed(1234)
PATH = '/home/disij/projects/acdc/data/'
### LOAD DATA ###
path = PATH + 'AML_benchmark/'
df = pd.read_csv( path + 'AML_benchmark.csv.gz', sep=',', header = 0, compression = 'gzip', engine='python')
table = pd.read_csv(path + 'AML_table.csv', sep=',', header=0, index_col=0)
print table.shape
### PROCESS: discard ungated events ###
df = df[df.cell_type != 'NotGated']
df = df.drop(['Time', 'Cell_length','file_number', 'event_number', 'DNA1(Ir191)Di',
'DNA2(Ir193)Di', 'Viability(Pt195)Di', 'subject'], axis = 1)
channels = [item[:item.find('(')] for item in df.columns[:-1]]
df.columns = channels + ['cell_type']
df = df.loc[df['cell_type'] != 'NotDebrisSinglets']
table = table.fillna(0)
X = df[channels].values
table_headers = list(table)
# df2 = pd.DataFrame([[0]*table.shape[1]], columns=table.columns, index =['unknown'])
# table = table.append(df2)
### transform data
data = np.arcsinh((X-1.)/5.)
theta_space = np.array([[data[:,d].min(), data[:,d].max()] for d in range(data.shape[1])])
cell_type_name2idx = {x:i for i,x in enumerate(table.index)}
Y = np.array([cell_type_name2idx[_] for _ in df.cell_type])
print data.shape
###Output
(104184, 32)
###Markdown
Experiment 2: subset AML data, 100 chains
###Code
from sklearn.utils import shuffle
N = 10000
new_df = shuffle(df)[:N]
X_subset = new_df[channels].values
Y_subset = np.array([cell_type_name2idx[_] for _ in new_df.cell_type])
# X_subset = df[channels].values
# Y_subset = np.array([cell_type_name2idx[_] for _ in df.cell_type])
data_subset = np.arcsinh((X_subset-1.)/5.)
N, d = data_subset.shape
print N,d
# rename table header 'HLA-DR' to 'HLADR' to prevent error from '-'
temp_headers = list(table)
temp_headers[29] = "HLADR"
table.columns = temp_headers
print table.columns
emp_bounds = np.array([(data_subset[:,i].min(), data_subset[:,i].max()) for i in range(d)])
%%time
n_mcmc_chain = 100
n_mcmc_sample = 3000
accepts = [[] for _ in range(n_mcmc_chain)]
rejects = [[] for _ in range(n_mcmc_chain)]
logl_accepted_trace = [[] for _ in range(n_mcmc_chain)]
logl_complete_trace = [[] for _ in range(n_mcmc_chain)]
Y_predict_accepted_trace = [[] for _ in range(n_mcmc_chain)]
accuracy_accepted_trace = [[] for _ in range(n_mcmc_chain)]
threshold = -113888.425449/data.shape[0]*N - 10000000000
for chain in range(n_mcmc_chain):
mcmc_gaussin_std = 0.1 # tune step size s.t. acceptance rate ~50%
print "Drawing Chain %d ..." % chain
sample = draw_informed_Mondrian(emp_bounds, table)
log_p_sample = comp_log_p_sample(sample, data_subset)
accepts[chain].append(sample)
logl_accepted_trace[chain].append(log_p_sample)
logl_complete_trace[chain].append(log_p_sample)
Y_predict = classify_cells(data_subset, sample, table, cell_type_name2idx)
accuracy = sum(Y_subset == Y_predict)*1.0/ data_subset.shape[0]
accuracy_accepted_trace[chain].append(accuracy)
Y_predict_accepted_trace[chain].append(Y_predict)
for idx in range(n_mcmc_sample):
if idx % (n_mcmc_sample / 3) == 0:
mcmc_gaussin_std = mcmc_gaussin_std / 5
new_sample = Mondrian_Gaussian_perturbation(emp_bounds,sample, mcmc_gaussin_std)
new_log_p_sample = comp_log_p_sample(new_sample, data_subset)
logl_complete_trace[chain].append(new_log_p_sample)
if new_log_p_sample < log_p_sample and \
np.log(np.random.uniform(low=0, high=1.)) > (new_log_p_sample - log_p_sample):
rejects[chain].append(new_sample)
else:
if new_log_p_sample < log_p_sample:
print "accepted some bad samples"
sample = new_sample
log_p_sample = new_log_p_sample
accepts[chain].append(sample)
logl_accepted_trace[chain].append(log_p_sample)
Y_predict = classify_cells(data_subset, sample, table, cell_type_name2idx)
accuracy = sum(Y_subset == Y_predict)*1.0/ data_subset.shape[0]
accuracy_accepted_trace[chain].append(accuracy)
Y_predict_accepted_trace[chain].append(Y_predict)
        if (idx+1) % 500 == 0:
            print "Iteration %d, cummulative accepted sample size is %d" %(idx+1, len(accepts[chain]))
        if idx + 1 == 500:
            # drop chains whose initial draw lands in a very low-likelihood region
            if logl_accepted_trace[chain][-1] < threshold:
                print "bad start point, drop the chain and restart"
                accepts[chain] = []
                rejects[chain] = []
                logl_accepted_trace[chain] = []
                logl_complete_trace[chain] = []
                Y_predict_accepted_trace[chain] = []
                accuracy_accepted_trace[chain] = []
                chain -= 1  # note: under a for-loop this does not re-run the chain; the bad start is simply dropped
                break
# prediction and visualization
Y_predict = classify_cells(data_subset, accepts[chain][-1], table, cell_type_name2idx)
accuracy = sum(Y_subset == Y_predict)*1.0/ data_subset.shape[0]
print "Chain % d accuracy on subset data: %.3f" % (chain, accuracy)
print "Chain % d loglik on subset data: %.3f" % (chain, logl_accepted_trace[chain][-1])
print "Total number of accepted samples: %d" %(sum([len(accepts[chain]) for chain in range(n_mcmc_chain)]))
# 100 chains, 3000 MCMC iterations per chain (see settings above)
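# Acceptance rule used above (the random-walk proposal is treated as symmetric, so no Hastings correction is applied):
#   accept theta' with probability min(1, exp(L(theta') - L(theta))),  L = comp_log_p_sample(. , data_subset)
#   i.e. always accept when the log-likelihood increases, otherwise accept with the exponentiated difference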
###Output
Drawing Chain 0 ...
Iteration 500, cummulative accepted sample size is 70
Iteration 1000, cummulative accepted sample size is 70
Iteration 1500, cummulative accepted sample size is 82
Iteration 2000, cummulative accepted sample size is 84
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 92
Iteration 3000, cummulative accepted sample size is 93
Chain 0 accuracy on subset data: 0.878
Chain 0 loglik on subset data: -110082.291
Drawing Chain 1 ...
Iteration 500, cummulative accepted sample size is 67
Iteration 1000, cummulative accepted sample size is 68
Iteration 1500, cummulative accepted sample size is 79
Iteration 2000, cummulative accepted sample size is 81
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 105
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 123
Chain 1 accuracy on subset data: 0.935
Chain 1 loglik on subset data: -107469.269
Drawing Chain 2 ...
Iteration 500, cummulative accepted sample size is 42
Iteration 1000, cummulative accepted sample size is 45
accepted some bad samples
Iteration 1500, cummulative accepted sample size is 48
Iteration 2000, cummulative accepted sample size is 50
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 64
accepted some bad samples
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 71
Chain 2 accuracy on subset data: 0.898
Chain 2 loglik on subset data: -109127.245
Drawing Chain 3 ...
Iteration 500, cummulative accepted sample size is 114
Iteration 1000, cummulative accepted sample size is 115
Iteration 1500, cummulative accepted sample size is 124
Iteration 2000, cummulative accepted sample size is 126
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 135
Iteration 3000, cummulative accepted sample size is 138
Chain 3 accuracy on subset data: 0.789
Chain 3 loglik on subset data: -108642.689
Drawing Chain 4 ...
accepted some bad samples
Iteration 500, cummulative accepted sample size is 71
Iteration 1000, cummulative accepted sample size is 72
Iteration 1500, cummulative accepted sample size is 83
accepted some bad samples
Iteration 2000, cummulative accepted sample size is 85
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 99
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 102
Chain 4 accuracy on subset data: 0.902
Chain 4 loglik on subset data: -105895.829
Drawing Chain 5 ...
Iteration 500, cummulative accepted sample size is 47
Iteration 1000, cummulative accepted sample size is 54
Iteration 1500, cummulative accepted sample size is 71
Iteration 2000, cummulative accepted sample size is 75
Iteration 2500, cummulative accepted sample size is 82
Iteration 3000, cummulative accepted sample size is 85
Chain 5 accuracy on subset data: 0.834
Chain 5 loglik on subset data: -98192.796
Drawing Chain 6 ...
Iteration 500, cummulative accepted sample size is 66
Iteration 1000, cummulative accepted sample size is 68
Iteration 1500, cummulative accepted sample size is 74
Iteration 2000, cummulative accepted sample size is 74
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 80
Iteration 3000, cummulative accepted sample size is 84
Chain 6 accuracy on subset data: 0.861
Chain 6 loglik on subset data: -105317.390
Drawing Chain 7 ...
accepted some bad samples
Iteration 500, cummulative accepted sample size is 54
Iteration 1000, cummulative accepted sample size is 56
accepted some bad samples
Iteration 1500, cummulative accepted sample size is 68
Iteration 2000, cummulative accepted sample size is 70
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 86
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 120
Chain 7 accuracy on subset data: 0.910
Chain 7 loglik on subset data: -110523.609
Drawing Chain 8 ...
Iteration 500, cummulative accepted sample size is 60
Iteration 1000, cummulative accepted sample size is 67
Iteration 1500, cummulative accepted sample size is 74
Iteration 2000, cummulative accepted sample size is 77
Iteration 2500, cummulative accepted sample size is 91
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 95
Chain 8 accuracy on subset data: 0.896
Chain 8 loglik on subset data: -111304.704
Drawing Chain 9 ...
accepted some bad samples
Iteration 500, cummulative accepted sample size is 37
Iteration 1000, cummulative accepted sample size is 40
accepted some bad samples
Iteration 1500, cummulative accepted sample size is 47
Iteration 2000, cummulative accepted sample size is 48
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 56
Iteration 3000, cummulative accepted sample size is 56
Chain 9 accuracy on subset data: 0.871
Chain 9 loglik on subset data: -109045.013
Drawing Chain 10 ...
Iteration 500, cummulative accepted sample size is 45
Iteration 1000, cummulative accepted sample size is 46
accepted some bad samples
Iteration 1500, cummulative accepted sample size is 56
Iteration 2000, cummulative accepted sample size is 60
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 68
Iteration 3000, cummulative accepted sample size is 71
Chain 10 accuracy on subset data: 0.919
Chain 10 loglik on subset data: -107902.140
Drawing Chain 11 ...
Iteration 500, cummulative accepted sample size is 53
Iteration 1000, cummulative accepted sample size is 53
Iteration 1500, cummulative accepted sample size is 56
Iteration 2000, cummulative accepted sample size is 57
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 63
Iteration 3000, cummulative accepted sample size is 63
Chain 11 accuracy on subset data: 0.829
Chain 11 loglik on subset data: -112612.071
Drawing Chain 12 ...
Iteration 500, cummulative accepted sample size is 59
Iteration 1000, cummulative accepted sample size is 60
Iteration 1500, cummulative accepted sample size is 66
Iteration 2000, cummulative accepted sample size is 66
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 74
Iteration 3000, cummulative accepted sample size is 75
Chain 12 accuracy on subset data: 0.901
Chain 12 loglik on subset data: -109087.791
Drawing Chain 13 ...
Iteration 500, cummulative accepted sample size is 44
Iteration 1000, cummulative accepted sample size is 48
Iteration 1500, cummulative accepted sample size is 56
Iteration 2000, cummulative accepted sample size is 57
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 60
accepted some bad samples
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 65
Chain 13 accuracy on subset data: 0.858
Chain 13 loglik on subset data: -110335.293
Drawing Chain 14 ...
Iteration 500, cummulative accepted sample size is 54
Iteration 1000, cummulative accepted sample size is 57
Iteration 1500, cummulative accepted sample size is 65
Iteration 2000, cummulative accepted sample size is 67
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 79
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 89
Chain 14 accuracy on subset data: 0.853
Chain 14 loglik on subset data: -112607.258
Drawing Chain 15 ...
Iteration 500, cummulative accepted sample size is 63
Iteration 1000, cummulative accepted sample size is 69
Iteration 1500, cummulative accepted sample size is 82
Iteration 2000, cummulative accepted sample size is 84
Iteration 2500, cummulative accepted sample size is 90
Iteration 3000, cummulative accepted sample size is 90
Chain 15 accuracy on subset data: 0.832
Chain 15 loglik on subset data: -100690.447
Drawing Chain 16 ...
Iteration 500, cummulative accepted sample size is 75
Iteration 1000, cummulative accepted sample size is 88
Iteration 1500, cummulative accepted sample size is 95
Iteration 2000, cummulative accepted sample size is 95
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 108
Iteration 3000, cummulative accepted sample size is 108
Chain 16 accuracy on subset data: 0.837
Chain 16 loglik on subset data: -111393.697
Drawing Chain 17 ...
accepted some bad samples
Iteration 500, cummulative accepted sample size is 62
Iteration 1000, cummulative accepted sample size is 86
Iteration 1500, cummulative accepted sample size is 100
Iteration 2000, cummulative accepted sample size is 105
Iteration 2500, cummulative accepted sample size is 111
Iteration 3000, cummulative accepted sample size is 114
Chain 17 accuracy on subset data: 0.801
Chain 17 loglik on subset data: -106686.209
Drawing Chain 18 ...
Iteration 500, cummulative accepted sample size is 45
Iteration 1000, cummulative accepted sample size is 48
accepted some bad samples
Iteration 1500, cummulative accepted sample size is 57
Iteration 2000, cummulative accepted sample size is 64
Iteration 2500, cummulative accepted sample size is 66
Iteration 3000, cummulative accepted sample size is 81
Chain 18 accuracy on subset data: 0.906
Chain 18 loglik on subset data: -109379.972
Drawing Chain 19 ...
Iteration 500, cummulative accepted sample size is 45
Iteration 1000, cummulative accepted sample size is 49
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 1500, cummulative accepted sample size is 55
Iteration 2000, cummulative accepted sample size is 56
Iteration 2500, cummulative accepted sample size is 60
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 65
Chain 19 accuracy on subset data: 0.832
Chain 19 loglik on subset data: -112470.923
Drawing Chain 20 ...
Iteration 500, cummulative accepted sample size is 52
Iteration 1000, cummulative accepted sample size is 55
Iteration 1500, cummulative accepted sample size is 60
Iteration 2000, cummulative accepted sample size is 61
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 72
Iteration 3000, cummulative accepted sample size is 72
Chain 20 accuracy on subset data: 0.828
Chain 20 loglik on subset data: -109546.577
Drawing Chain 21 ...
Iteration 500, cummulative accepted sample size is 23
Iteration 1000, cummulative accepted sample size is 30
Iteration 1500, cummulative accepted sample size is 51
Iteration 2000, cummulative accepted sample size is 53
Iteration 2500, cummulative accepted sample size is 58
accepted some bad samples
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 65
Chain 21 accuracy on subset data: 0.827
Chain 21 loglik on subset data: -112160.927
Drawing Chain 22 ...
accepted some bad samples
Iteration 500, cummulative accepted sample size is 67
Iteration 1000, cummulative accepted sample size is 72
Iteration 1500, cummulative accepted sample size is 78
Iteration 2000, cummulative accepted sample size is 79
Iteration 2500, cummulative accepted sample size is 84
Iteration 3000, cummulative accepted sample size is 85
Chain 22 accuracy on subset data: 0.901
Chain 22 loglik on subset data: -108593.484
Drawing Chain 23 ...
Iteration 500, cummulative accepted sample size is 49
Iteration 1000, cummulative accepted sample size is 52
Iteration 1500, cummulative accepted sample size is 63
accepted some bad samples
Iteration 2000, cummulative accepted sample size is 69
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 88
Iteration 3000, cummulative accepted sample size is 91
Chain 23 accuracy on subset data: 0.760
Chain 23 loglik on subset data: -108792.474
Drawing Chain 24 ...
Iteration 500, cummulative accepted sample size is 55
Iteration 1000, cummulative accepted sample size is 58
accepted some bad samples
accepted some bad samples
Iteration 1500, cummulative accepted sample size is 72
Iteration 2000, cummulative accepted sample size is 78
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 94
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 100
Chain 24 accuracy on subset data: 0.878
Chain 24 loglik on subset data: -107336.365
Drawing Chain 25 ...
Iteration 500, cummulative accepted sample size is 72
Iteration 1000, cummulative accepted sample size is 79
Iteration 1500, cummulative accepted sample size is 89
accepted some bad samples
Iteration 2000, cummulative accepted sample size is 93
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 101
Iteration 3000, cummulative accepted sample size is 102
Chain 25 accuracy on subset data: 0.905
Chain 25 loglik on subset data: -108946.109
Drawing Chain 26 ...
Iteration 500, cummulative accepted sample size is 67
Iteration 1000, cummulative accepted sample size is 69
accepted some bad samples
Iteration 1500, cummulative accepted sample size is 78
Iteration 2000, cummulative accepted sample size is 79
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 100
Iteration 3000, cummulative accepted sample size is 104
Chain 26 accuracy on subset data: 0.742
Chain 26 loglik on subset data: -105092.418
Drawing Chain 27 ...
Iteration 500, cummulative accepted sample size is 58
Iteration 1000, cummulative accepted sample size is 66
Iteration 1500, cummulative accepted sample size is 69
Iteration 2000, cummulative accepted sample size is 70
Iteration 2500, cummulative accepted sample size is 76
Iteration 3000, cummulative accepted sample size is 80
Chain 27 accuracy on subset data: 0.879
Chain 27 loglik on subset data: -104471.991
Drawing Chain 28 ...
Iteration 500, cummulative accepted sample size is 34
Iteration 1000, cummulative accepted sample size is 34
Iteration 1500, cummulative accepted sample size is 37
Iteration 2000, cummulative accepted sample size is 38
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 45
Iteration 3000, cummulative accepted sample size is 46
Chain 28 accuracy on subset data: 0.872
Chain 28 loglik on subset data: -112056.874
Drawing Chain 29 ...
Iteration 500, cummulative accepted sample size is 52
Iteration 1000, cummulative accepted sample size is 54
Iteration 1500, cummulative accepted sample size is 60
accepted some bad samples
Iteration 2000, cummulative accepted sample size is 63
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 72
Iteration 3000, cummulative accepted sample size is 77
Chain 29 accuracy on subset data: 0.883
Chain 29 loglik on subset data: -110999.528
Drawing Chain 30 ...
Iteration 500, cummulative accepted sample size is 27
Iteration 1000, cummulative accepted sample size is 34
Iteration 1500, cummulative accepted sample size is 37
Iteration 2000, cummulative accepted sample size is 39
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 53
Iteration 3000, cummulative accepted sample size is 57
Chain 30 accuracy on subset data: 0.939
Chain 30 loglik on subset data: -108401.187
Drawing Chain 31 ...
accepted some bad samples
Iteration 500, cummulative accepted sample size is 69
Iteration 1000, cummulative accepted sample size is 71
Iteration 1500, cummulative accepted sample size is 75
Iteration 2000, cummulative accepted sample size is 76
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 85
Iteration 3000, cummulative accepted sample size is 86
Chain 31 accuracy on subset data: 0.850
Chain 31 loglik on subset data: -113399.442
Drawing Chain 32 ...
Iteration 500, cummulative accepted sample size is 65
Iteration 1000, cummulative accepted sample size is 83
Iteration 1500, cummulative accepted sample size is 95
Iteration 2000, cummulative accepted sample size is 99
Iteration 2500, cummulative accepted sample size is 105
Iteration 3000, cummulative accepted sample size is 106
Chain 32 accuracy on subset data: 0.841
Chain 32 loglik on subset data: -101833.827
Drawing Chain 33 ...
Iteration 500, cummulative accepted sample size is 57
Iteration 1000, cummulative accepted sample size is 57
Iteration 1500, cummulative accepted sample size is 62
Iteration 2000, cummulative accepted sample size is 68
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 85
Iteration 3000, cummulative accepted sample size is 87
Chain 33 accuracy on subset data: 0.820
Chain 33 loglik on subset data: -112459.372
Drawing Chain 34 ...
Iteration 500, cummulative accepted sample size is 60
Iteration 1000, cummulative accepted sample size is 60
Iteration 1500, cummulative accepted sample size is 69
Iteration 2000, cummulative accepted sample size is 81
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 103
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 119
Chain 34 accuracy on subset data: 0.833
Chain 34 loglik on subset data: -111283.692
Drawing Chain 35 ...
Iteration 500, cummulative accepted sample size is 40
Iteration 1000, cummulative accepted sample size is 41
Iteration 1500, cummulative accepted sample size is 47
Iteration 2000, cummulative accepted sample size is 51
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 67
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 71
Chain 35 accuracy on subset data: 0.819
Chain 35 loglik on subset data: -111408.999
Drawing Chain 36 ...
Iteration 500, cummulative accepted sample size is 62
Iteration 1000, cummulative accepted sample size is 66
Iteration 1500, cummulative accepted sample size is 81
Iteration 2000, cummulative accepted sample size is 88
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 114
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 119
Chain 36 accuracy on subset data: 0.856
Chain 36 loglik on subset data: -111885.164
Drawing Chain 37 ...
Iteration 500, cummulative accepted sample size is 77
Iteration 1000, cummulative accepted sample size is 83
Iteration 1500, cummulative accepted sample size is 96
Iteration 2000, cummulative accepted sample size is 96
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 102
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 114
Chain 37 accuracy on subset data: 0.867
Chain 37 loglik on subset data: -109250.209
Drawing Chain 38 ...
Iteration 500, cummulative accepted sample size is 58
Iteration 1000, cummulative accepted sample size is 59
accepted some bad samples
accepted some bad samples
Iteration 1500, cummulative accepted sample size is 71
accepted some bad samples
Iteration 2000, cummulative accepted sample size is 74
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 80
Iteration 3000, cummulative accepted sample size is 80
Chain 38 accuracy on subset data: 0.817
Chain 38 loglik on subset data: -110758.495
Drawing Chain 39 ...
Iteration 500, cummulative accepted sample size is 46
Iteration 1000, cummulative accepted sample size is 49
accepted some bad samples
Iteration 1500, cummulative accepted sample size is 57
Iteration 2000, cummulative accepted sample size is 59
Iteration 2500, cummulative accepted sample size is 63
Iteration 3000, cummulative accepted sample size is 65
Chain 39 accuracy on subset data: 0.821
Chain 39 loglik on subset data: -109220.461
Drawing Chain 40 ...
Iteration 500, cummulative accepted sample size is 23
Iteration 1000, cummulative accepted sample size is 25
accepted some bad samples
Iteration 1500, cummulative accepted sample size is 36
Iteration 2000, cummulative accepted sample size is 36
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 46
Iteration 3000, cummulative accepted sample size is 47
Chain 40 accuracy on subset data: 0.876
Chain 40 loglik on subset data: -112164.920
Drawing Chain 41 ...
Iteration 500, cummulative accepted sample size is 35
Iteration 1000, cummulative accepted sample size is 35
accepted some bad samples
Iteration 1500, cummulative accepted sample size is 50
Iteration 2000, cummulative accepted sample size is 53
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 66
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 70
Chain 41 accuracy on subset data: 0.805
Chain 41 loglik on subset data: -110614.475
Drawing Chain 42 ...
Iteration 500, cummulative accepted sample size is 57
Iteration 1000, cummulative accepted sample size is 60
Iteration 1500, cummulative accepted sample size is 65
Iteration 2000, cummulative accepted sample size is 68
Iteration 2500, cummulative accepted sample size is 75
Iteration 3000, cummulative accepted sample size is 77
Chain 42 accuracy on subset data: 0.849
Chain 42 loglik on subset data: -112689.325
Drawing Chain 43 ...
Iteration 500, cummulative accepted sample size is 49
Iteration 1000, cummulative accepted sample size is 56
Iteration 1500, cummulative accepted sample size is 64
Iteration 2000, cummulative accepted sample size is 65
Iteration 2500, cummulative accepted sample size is 71
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 75
Chain 43 accuracy on subset data: 0.887
Chain 43 loglik on subset data: -111815.168
Drawing Chain 44 ...
Iteration 500, cummulative accepted sample size is 65
Iteration 1000, cummulative accepted sample size is 68
Iteration 1500, cummulative accepted sample size is 74
Iteration 2000, cummulative accepted sample size is 76
Iteration 2500, cummulative accepted sample size is 81
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 83
Chain 44 accuracy on subset data: 0.871
Chain 44 loglik on subset data: -111684.589
Drawing Chain 45 ...
Iteration 500, cummulative accepted sample size is 62
Iteration 1000, cummulative accepted sample size is 72
Iteration 1500, cummulative accepted sample size is 81
Iteration 2000, cummulative accepted sample size is 83
Iteration 2500, cummulative accepted sample size is 97
Iteration 3000, cummulative accepted sample size is 97
Chain 45 accuracy on subset data: 0.888
Chain 45 loglik on subset data: -109341.580
Drawing Chain 46 ...
Iteration 500, cummulative accepted sample size is 49
Iteration 1000, cummulative accepted sample size is 56
Iteration 1500, cummulative accepted sample size is 59
accepted some bad samples
Iteration 2000, cummulative accepted sample size is 60
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 75
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 81
Chain 46 accuracy on subset data: 0.789
Chain 46 loglik on subset data: -109646.414
Drawing Chain 47 ...
Iteration 500, cummulative accepted sample size is 51
Iteration 1000, cummulative accepted sample size is 53
Iteration 1500, cummulative accepted sample size is 57
Iteration 2000, cummulative accepted sample size is 58
Iteration 2500, cummulative accepted sample size is 62
Iteration 3000, cummulative accepted sample size is 63
Chain 47 accuracy on subset data: 0.915
Chain 47 loglik on subset data: -106468.417
Drawing Chain 48 ...
Iteration 500, cummulative accepted sample size is 35
Iteration 1000, cummulative accepted sample size is 39
Iteration 1500, cummulative accepted sample size is 53
Iteration 2000, cummulative accepted sample size is 58
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 67
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 84
Chain 48 accuracy on subset data: 0.838
Chain 48 loglik on subset data: -108442.224
Drawing Chain 49 ...
Iteration 500, cummulative accepted sample size is 49
Iteration 1000, cummulative accepted sample size is 54
Iteration 1500, cummulative accepted sample size is 67
accepted some bad samples
accepted some bad samples
Iteration 2000, cummulative accepted sample size is 78
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 88
Iteration 3000, cummulative accepted sample size is 88
Chain 49 accuracy on subset data: 0.858
Chain 49 loglik on subset data: -109032.555
Drawing Chain 50 ...
Iteration 500, cummulative accepted sample size is 22
Iteration 1000, cummulative accepted sample size is 24
Iteration 1500, cummulative accepted sample size is 30
Iteration 2000, cummulative accepted sample size is 34
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 72
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 90
Chain 50 accuracy on subset data: 0.914
Chain 50 loglik on subset data: -106462.736
Drawing Chain 51 ...
Iteration 500, cummulative accepted sample size is 53
Iteration 1000, cummulative accepted sample size is 56
Iteration 1500, cummulative accepted sample size is 64
Iteration 2000, cummulative accepted sample size is 65
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 83
Iteration 3000, cummulative accepted sample size is 86
Chain 51 accuracy on subset data: 0.839
Chain 51 loglik on subset data: -112425.098
Drawing Chain 52 ...
Iteration 500, cummulative accepted sample size is 63
Iteration 1000, cummulative accepted sample size is 67
Iteration 1500, cummulative accepted sample size is 71
Iteration 2000, cummulative accepted sample size is 72
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 80
Iteration 3000, cummulative accepted sample size is 83
Chain 52 accuracy on subset data: 0.863
Chain 52 loglik on subset data: -111881.059
Drawing Chain 53 ...
accepted some bad samples
Iteration 500, cummulative accepted sample size is 51
accepted some bad samples
Iteration 1000, cummulative accepted sample size is 58
accepted some bad samples
Iteration 1500, cummulative accepted sample size is 65
Iteration 2000, cummulative accepted sample size is 68
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 78
Iteration 3000, cummulative accepted sample size is 78
Chain 53 accuracy on subset data: 0.745
Chain 53 loglik on subset data: -109517.566
Drawing Chain 54 ...
Iteration 500, cummulative accepted sample size is 79
Iteration 1000, cummulative accepted sample size is 80
Iteration 1500, cummulative accepted sample size is 86
Iteration 2000, cummulative accepted sample size is 89
Iteration 2500, cummulative accepted sample size is 100
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 102
Chain 54 accuracy on subset data: 0.812
Chain 54 loglik on subset data: -112058.797
Drawing Chain 55 ...
Iteration 500, cummulative accepted sample size is 57
Iteration 1000, cummulative accepted sample size is 57
Iteration 1500, cummulative accepted sample size is 64
Iteration 2000, cummulative accepted sample size is 64
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 79
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 81
Chain 55 accuracy on subset data: 0.871
Chain 55 loglik on subset data: -108511.152
Drawing Chain 56 ...
accepted some bad samples
accepted some bad samples
Iteration 500, cummulative accepted sample size is 62
Iteration 1000, cummulative accepted sample size is 65
Iteration 1500, cummulative accepted sample size is 72
Iteration 2000, cummulative accepted sample size is 75
Iteration 2500, cummulative accepted sample size is 78
accepted some bad samples
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 87
Chain 56 accuracy on subset data: 0.879
Chain 56 loglik on subset data: -110801.942
Drawing Chain 57 ...
accepted some bad samples
Iteration 500, cummulative accepted sample size is 67
Iteration 1000, cummulative accepted sample size is 73
Iteration 1500, cummulative accepted sample size is 85
Iteration 2000, cummulative accepted sample size is 89
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 97
accepted some bad samples
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 104
Chain 57 accuracy on subset data: 0.815
Chain 57 loglik on subset data: -112309.684
Drawing Chain 58 ...
Iteration 500, cummulative accepted sample size is 32
accepted some bad samples
Iteration 1000, cummulative accepted sample size is 34
Iteration 1500, cummulative accepted sample size is 45
Iteration 2000, cummulative accepted sample size is 48
Iteration 2500, cummulative accepted sample size is 54
Iteration 3000, cummulative accepted sample size is 57
Chain 58 accuracy on subset data: 0.673
Chain 58 loglik on subset data: -113754.191
Drawing Chain 59 ...
Iteration 500, cummulative accepted sample size is 48
Iteration 1000, cummulative accepted sample size is 55
Iteration 1500, cummulative accepted sample size is 61
Iteration 2000, cummulative accepted sample size is 61
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 82
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 90
Chain 59 accuracy on subset data: 0.903
Chain 59 loglik on subset data: -110872.145
Drawing Chain 60 ...
Iteration 500, cummulative accepted sample size is 45
Iteration 1000, cummulative accepted sample size is 53
Iteration 1500, cummulative accepted sample size is 60
Iteration 2000, cummulative accepted sample size is 66
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 80
accepted some bad samples
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 84
Chain 60 accuracy on subset data: 0.891
Chain 60 loglik on subset data: -107862.546
Drawing Chain 61 ...
Iteration 500, cummulative accepted sample size is 39
Iteration 1000, cummulative accepted sample size is 46
accepted some bad samples
Iteration 1500, cummulative accepted sample size is 58
Iteration 2000, cummulative accepted sample size is 58
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 64
accepted some bad samples
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 70
Chain 61 accuracy on subset data: 0.929
Chain 61 loglik on subset data: -109030.659
Drawing Chain 62 ...
Iteration 500, cummulative accepted sample size is 30
Iteration 1000, cummulative accepted sample size is 35
Iteration 1500, cummulative accepted sample size is 42
Iteration 2000, cummulative accepted sample size is 42
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 49
Iteration 3000, cummulative accepted sample size is 50
Chain 62 accuracy on subset data: 0.693
Chain 62 loglik on subset data: -116769.586
Drawing Chain 63 ...
Iteration 500, cummulative accepted sample size is 56
Iteration 1000, cummulative accepted sample size is 67
accepted some bad samples
accepted some bad samples
Iteration 1500, cummulative accepted sample size is 81
Iteration 2000, cummulative accepted sample size is 86
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 92
Iteration 3000, cummulative accepted sample size is 93
Chain 63 accuracy on subset data: 0.926
Chain 63 loglik on subset data: -106440.819
Drawing Chain 64 ...
Iteration 500, cummulative accepted sample size is 62
Iteration 1000, cummulative accepted sample size is 72
accepted some bad samples
accepted some bad samples
Iteration 1500, cummulative accepted sample size is 88
Iteration 2000, cummulative accepted sample size is 90
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 97
Iteration 3000, cummulative accepted sample size is 99
Chain 64 accuracy on subset data: 0.897
Chain 64 loglik on subset data: -110416.749
Drawing Chain 65 ...
Iteration 500, cummulative accepted sample size is 62
accepted some bad samples
Iteration 1000, cummulative accepted sample size is 69
Iteration 1500, cummulative accepted sample size is 77
Iteration 2000, cummulative accepted sample size is 85
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 89
Iteration 3000, cummulative accepted sample size is 89
Chain 65 accuracy on subset data: 0.933
Chain 65 loglik on subset data: -108992.448
Drawing Chain 66 ...
accepted some bad samples
Iteration 500, cummulative accepted sample size is 59
Iteration 1000, cummulative accepted sample size is 61
Iteration 1500, cummulative accepted sample size is 68
Iteration 2000, cummulative accepted sample size is 68
Iteration 2500, cummulative accepted sample size is 71
Iteration 3000, cummulative accepted sample size is 75
Chain 66 accuracy on subset data: 0.909
Chain 66 loglik on subset data: -110168.725
Drawing Chain 67 ...
Iteration 500, cummulative accepted sample size is 57
Iteration 1000, cummulative accepted sample size is 58
Iteration 1500, cummulative accepted sample size is 64
Iteration 2000, cummulative accepted sample size is 64
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 72
Iteration 3000, cummulative accepted sample size is 73
Chain 67 accuracy on subset data: 0.861
Chain 67 loglik on subset data: -110354.964
Drawing Chain 68 ...
Iteration 500, cummulative accepted sample size is 79
Iteration 1000, cummulative accepted sample size is 83
Iteration 1500, cummulative accepted sample size is 89
Iteration 2000, cummulative accepted sample size is 94
Iteration 2500, cummulative accepted sample size is 99
Iteration 3000, cummulative accepted sample size is 100
Chain 68 accuracy on subset data: 0.826
Chain 68 loglik on subset data: -113061.859
Drawing Chain 69 ...
Iteration 500, cummulative accepted sample size is 60
Iteration 1000, cummulative accepted sample size is 61
accepted some bad samples
Iteration 1500, cummulative accepted sample size is 69
accepted some bad samples
Iteration 2000, cummulative accepted sample size is 72
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 90
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 106
Chain 69 accuracy on subset data: 0.664
Chain 69 loglik on subset data: -111541.779
Drawing Chain 70 ...
accepted some bad samples
Iteration 500, cummulative accepted sample size is 38
Iteration 1000, cummulative accepted sample size is 46
accepted some bad samples
Iteration 1500, cummulative accepted sample size is 57
Iteration 2000, cummulative accepted sample size is 74
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 89
Iteration 3000, cummulative accepted sample size is 90
Chain 70 accuracy on subset data: 0.924
Chain 70 loglik on subset data: -108560.494
Drawing Chain 71 ...
Iteration 500, cummulative accepted sample size is 75
Iteration 1000, cummulative accepted sample size is 82
Iteration 1500, cummulative accepted sample size is 95
Iteration 2000, cummulative accepted sample size is 95
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 104
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 108
Chain 71 accuracy on subset data: 0.795
Chain 71 loglik on subset data: -107165.408
Drawing Chain 72 ...
Iteration 500, cummulative accepted sample size is 38
Iteration 1000, cummulative accepted sample size is 40
Iteration 1500, cummulative accepted sample size is 46
Iteration 2000, cummulative accepted sample size is 47
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 59
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 62
Chain 72 accuracy on subset data: 0.874
Chain 72 loglik on subset data: -109669.706
Drawing Chain 73 ...
Iteration 500, cummulative accepted sample size is 45
Iteration 1000, cummulative accepted sample size is 46
Iteration 1500, cummulative accepted sample size is 56
Iteration 2000, cummulative accepted sample size is 57
Iteration 2500, cummulative accepted sample size is 64
Iteration 3000, cummulative accepted sample size is 64
Chain 73 accuracy on subset data: 0.847
Chain 73 loglik on subset data: -109800.097
Drawing Chain 74 ...
Iteration 500, cummulative accepted sample size is 54
Iteration 1000, cummulative accepted sample size is 55
Iteration 1500, cummulative accepted sample size is 62
Iteration 2000, cummulative accepted sample size is 67
Iteration 2500, cummulative accepted sample size is 71
Iteration 3000, cummulative accepted sample size is 72
Chain 74 accuracy on subset data: 0.908
Chain 74 loglik on subset data: -110641.310
Drawing Chain 75 ...
Iteration 500, cummulative accepted sample size is 60
Iteration 1000, cummulative accepted sample size is 69
accepted some bad samples
Iteration 1500, cummulative accepted sample size is 97
Iteration 2000, cummulative accepted sample size is 105
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 114
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 120
Chain 75 accuracy on subset data: 0.850
Chain 75 loglik on subset data: -105395.730
Drawing Chain 76 ...
Iteration 500, cummulative accepted sample size is 72
Iteration 1000, cummulative accepted sample size is 74
accepted some bad samples
Iteration 1500, cummulative accepted sample size is 92
Iteration 2000, cummulative accepted sample size is 94
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 101
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 106
Chain 76 accuracy on subset data: 0.851
Chain 76 loglik on subset data: -108321.443
Drawing Chain 77 ...
Iteration 500, cummulative accepted sample size is 51
Iteration 1000, cummulative accepted sample size is 51
accepted some bad samples
accepted some bad samples
Iteration 1500, cummulative accepted sample size is 68
Iteration 2000, cummulative accepted sample size is 68
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 85
Iteration 3000, cummulative accepted sample size is 85
Chain 77 accuracy on subset data: 0.898
Chain 77 loglik on subset data: -107821.814
Drawing Chain 78 ...
Iteration 500, cummulative accepted sample size is 55
Iteration 1000, cummulative accepted sample size is 63
Iteration 1500, cummulative accepted sample size is 66
Iteration 2000, cummulative accepted sample size is 68
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 108
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 117
Chain 78 accuracy on subset data: 0.853
Chain 78 loglik on subset data: -109421.574
Drawing Chain 79 ...
Iteration 500, cummulative accepted sample size is 52
Iteration 1000, cummulative accepted sample size is 53
Iteration 1500, cummulative accepted sample size is 60
Iteration 2000, cummulative accepted sample size is 61
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 71
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 79
Chain 79 accuracy on subset data: 0.819
Chain 79 loglik on subset data: -110951.447
Drawing Chain 80 ...
accepted some bad samples
Iteration 500, cummulative accepted sample size is 47
Iteration 1000, cummulative accepted sample size is 51
Iteration 1500, cummulative accepted sample size is 57
Iteration 2000, cummulative accepted sample size is 61
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 68
Iteration 3000, cummulative accepted sample size is 69
Chain 80 accuracy on subset data: 0.912
Chain 80 loglik on subset data: -109865.857
Drawing Chain 81 ...
Iteration 500, cummulative accepted sample size is 44
Iteration 1000, cummulative accepted sample size is 55
accepted some bad samples
Iteration 1500, cummulative accepted sample size is 68
Iteration 2000, cummulative accepted sample size is 70
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 94
accepted some bad samples
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 106
Chain 81 accuracy on subset data: 0.878
Chain 81 loglik on subset data: -112076.901
Drawing Chain 82 ...
Iteration 500, cummulative accepted sample size is 48
Iteration 1000, cummulative accepted sample size is 49
Iteration 1500, cummulative accepted sample size is 54
Iteration 2000, cummulative accepted sample size is 54
Iteration 2500, cummulative accepted sample size is 56
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 57
Chain 82 accuracy on subset data: 0.836
Chain 82 loglik on subset data: -112324.965
Drawing Chain 83 ...
accepted some bad samples
Iteration 500, cummulative accepted sample size is 68
Iteration 1000, cummulative accepted sample size is 75
Iteration 1500, cummulative accepted sample size is 82
Iteration 2000, cummulative accepted sample size is 86
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 98
Iteration 3000, cummulative accepted sample size is 99
Chain 83 accuracy on subset data: 0.789
Chain 83 loglik on subset data: -107483.687
Drawing Chain 84 ...
Iteration 500, cummulative accepted sample size is 63
Iteration 1000, cummulative accepted sample size is 69
Iteration 1500, cummulative accepted sample size is 80
Iteration 2000, cummulative accepted sample size is 82
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 89
Iteration 3000, cummulative accepted sample size is 91
Chain 84 accuracy on subset data: 0.919
Chain 84 loglik on subset data: -109340.100
Drawing Chain 85 ...
Iteration 500, cummulative accepted sample size is 48
accepted some bad samples
Iteration 1000, cummulative accepted sample size is 71
accepted some bad samples
Iteration 1500, cummulative accepted sample size is 80
Iteration 2000, cummulative accepted sample size is 80
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 101
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 102
Chain 85 accuracy on subset data: 0.681
Chain 85 loglik on subset data: -113504.238
Drawing Chain 86 ...
Iteration 500, cummulative accepted sample size is 54
Iteration 1000, cummulative accepted sample size is 57
accepted some bad samples
Iteration 1500, cummulative accepted sample size is 68
Iteration 2000, cummulative accepted sample size is 68
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 82
accepted some bad samples
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 89
Chain 86 accuracy on subset data: 0.737
Chain 86 loglik on subset data: -100357.640
Drawing Chain 87 ...
Iteration 500, cummulative accepted sample size is 75
Iteration 1000, cummulative accepted sample size is 81
Iteration 1500, cummulative accepted sample size is 84
Iteration 2000, cummulative accepted sample size is 84
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 90
accepted some bad samples
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 94
Chain 87 accuracy on subset data: 0.902
Chain 87 loglik on subset data: -106181.539
Drawing Chain 88 ...
Iteration 500, cummulative accepted sample size is 48
Iteration 1000, cummulative accepted sample size is 53
Iteration 1500, cummulative accepted sample size is 55
Iteration 2000, cummulative accepted sample size is 56
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 62
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 65
Chain 88 accuracy on subset data: 0.922
Chain 88 loglik on subset data: -109283.265
Drawing Chain 89 ...
accepted some bad samples
Iteration 500, cummulative accepted sample size is 54
Iteration 1000, cummulative accepted sample size is 59
Iteration 1500, cummulative accepted sample size is 69
accepted some bad samples
Iteration 2000, cummulative accepted sample size is 70
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 93
accepted some bad samples
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 99
Chain 89 accuracy on subset data: 0.929
Chain 89 loglik on subset data: -105698.502
Drawing Chain 90 ...
Iteration 500, cummulative accepted sample size is 38
Iteration 1000, cummulative accepted sample size is 41
Iteration 1500, cummulative accepted sample size is 45
Iteration 2000, cummulative accepted sample size is 49
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 67
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 68
Chain 90 accuracy on subset data: 0.856
Chain 90 loglik on subset data: -112788.219
Drawing Chain 91 ...
Iteration 500, cummulative accepted sample size is 64
Iteration 1000, cummulative accepted sample size is 66
Iteration 1500, cummulative accepted sample size is 71
Iteration 2000, cummulative accepted sample size is 72
Iteration 2500, cummulative accepted sample size is 82
Iteration 3000, cummulative accepted sample size is 88
Chain 91 accuracy on subset data: 0.862
Chain 91 loglik on subset data: -112623.619
Drawing Chain 92 ...
Iteration 500, cummulative accepted sample size is 67
Iteration 1000, cummulative accepted sample size is 76
Iteration 1500, cummulative accepted sample size is 79
Iteration 2000, cummulative accepted sample size is 82
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 89
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 96
Chain 92 accuracy on subset data: 0.876
Chain 92 loglik on subset data: -109221.883
Drawing Chain 93 ...
Iteration 500, cummulative accepted sample size is 46
Iteration 1000, cummulative accepted sample size is 48
Iteration 1500, cummulative accepted sample size is 59
Iteration 2000, cummulative accepted sample size is 65
Iteration 2500, cummulative accepted sample size is 71
Iteration 3000, cummulative accepted sample size is 71
Chain 93 accuracy on subset data: 0.849
Chain 93 loglik on subset data: -113210.974
Drawing Chain 94 ...
accepted some bad samples
Iteration 500, cummulative accepted sample size is 73
Iteration 1000, cummulative accepted sample size is 80
Iteration 1500, cummulative accepted sample size is 90
Iteration 2000, cummulative accepted sample size is 93
Iteration 2500, cummulative accepted sample size is 98
Iteration 3000, cummulative accepted sample size is 98
Chain 94 accuracy on subset data: 0.834
Chain 94 loglik on subset data: -110381.351
Drawing Chain 95 ...
Iteration 500, cummulative accepted sample size is 73
Iteration 1000, cummulative accepted sample size is 82
Iteration 1500, cummulative accepted sample size is 103
Iteration 2000, cummulative accepted sample size is 104
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 111
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 117
Chain 95 accuracy on subset data: 0.792
Chain 95 loglik on subset data: -102693.982
Drawing Chain 96 ...
Iteration 500, cummulative accepted sample size is 57
Iteration 1000, cummulative accepted sample size is 59
Iteration 1500, cummulative accepted sample size is 66
Iteration 2000, cummulative accepted sample size is 73
Iteration 2500, cummulative accepted sample size is 79
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 94
Chain 96 accuracy on subset data: 0.834
Chain 96 loglik on subset data: -111275.555
Drawing Chain 97 ...
Iteration 500, cummulative accepted sample size is 73
Iteration 1000, cummulative accepted sample size is 78
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 1500, cummulative accepted sample size is 98
Iteration 2000, cummulative accepted sample size is 101
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 115
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 130
Chain 97 accuracy on subset data: 0.839
Chain 97 loglik on subset data: -111171.745
Drawing Chain 98 ...
Iteration 500, cummulative accepted sample size is 45
Iteration 1000, cummulative accepted sample size is 54
Iteration 1500, cummulative accepted sample size is 59
Iteration 2000, cummulative accepted sample size is 60
Iteration 2500, cummulative accepted sample size is 64
accepted some bad samples
Iteration 3000, cummulative accepted sample size is 68
Chain 98 accuracy on subset data: 0.888
Chain 98 loglik on subset data: -111756.694
Drawing Chain 99 ...
Iteration 500, cummulative accepted sample size is 73
Iteration 1000, cummulative accepted sample size is 85
Iteration 1500, cummulative accepted sample size is 92
Iteration 2000, cummulative accepted sample size is 92
accepted some bad samples
Iteration 2500, cummulative accepted sample size is 99
Iteration 3000, cummulative accepted sample size is 100
Chain 99 accuracy on subset data: 0.917
Chain 99 loglik on subset data: -109018.743
Total number of accepted samples: 8680
CPU times: user 4h 55min 14s, sys: 9h 39min 3s, total: 14h 34min 17s
Wall time: 1h 49min 26s
###Markdown
Ensemble tail of each chain
###Code
# vote, and compute accuracy
# keep the tail (last 20 accepted samples) of each chain
burnt_samples = []
burnt_predictions = []
for i in range(len(accepts)):
accepted_chain = accepts[i]
likelihoods = logl_accepted_trace[i]
predictions = Y_predict_accepted_trace[i]
burnt_samples += [accepted_chain[_] for _ in range(len(accepted_chain)-20,len(accepted_chain)) ]
burnt_predictions += [predictions[_] for _ in range(len(accepted_chain)-20,len(accepted_chain))]
# vote
votes = np.zeros([data_subset.shape[0], table.shape[0]])
for Y_predict in burnt_predictions:
for _ in range(len(Y_predict)):
votes[_,Y_predict[_]] += 1
Y_predict_majority = np.argmax(votes, axis=1)
print len(burnt_predictions)
accuracy = sum(Y_subset == Y_predict_majority)*1.0/ data_subset.shape[0]
print "Chain % d accuracy on subset data: %.3f" % (chain+1,accuracy)
bins = table.shape[0]
plt.hist(Y_subset, bins, alpha=0.5, label='Y:cell type')
plt.hist(Y_predict_majority, bins, alpha=0.5, label='Z:prediction')
plt.legend(loc='upper right')
plt.show()
###Output
2000
Chain 100 accuracy on subset data: 0.911
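###Markdown
The majority-vote step above is repeated in several cells below. A small helper function (a sketch that is not part of the original analysis; it assumes each prediction is a list of integer cell-type labels of equal length) makes the voting reusable:
###Code
import numpy as np

def majority_vote(prediction_sets, n_points, n_classes):
    # prediction_sets: iterable of per-sample label lists, each of length n_points
    # returns, for every data point, the label that received the most votes
    votes = np.zeros([n_points, n_classes])
    for y_pred in prediction_sets:
        for i in range(len(y_pred)):
            votes[i, y_pred[i]] += 1
    return np.argmax(votes, axis=1)

# usage sketch, equivalent to the voting block above:
# Y_predict_majority = majority_vote(burnt_predictions, data_subset.shape[0], table.shape[0])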
###Markdown
Ensemble K most probable samples
###Code
# vote, and compute accuracy
# keep the K samples with the highest likelihood across chains
K = 20
max_logl_per_chain = [max(logl_accepted_trace[i]) for i in range(n_mcmc_chain)]
topK_chain_id = sorted(range(n_mcmc_chain), key=lambda i: max_logl_per_chain[i])[-K:]
topK_samples = []
topK_predictions = []
for i in range(K):
chain_id = topK_chain_id[i]
sample_id = logl_accepted_trace[chain_id].index(max(logl_accepted_trace[chain_id]))
topK_samples.append(accepts[chain_id][sample_id])
topK_predictions.append(Y_predict_accepted_trace[chain_id][sample_id])
# vote
votes = np.zeros([data_subset.shape[0], table.shape[0]])
for Y_predict in topK_predictions:
for _ in range(len(Y_predict)):
votes[_,Y_predict[_]] += 1
Y_predict_majority = np.argmax(votes, axis=1)
accuracy = sum(Y_subset == Y_predict_majority)*1.0/ data_subset.shape[0]
print "Accuracy of ensembling top %d samples: %.3f" % (K,accuracy)
bins = table.shape[0]
plt.hist(Y_subset, bins, alpha=0.5, label='Y:cell type')
plt.hist(Y_predict_majority, bins, alpha=0.5, label='Z:prediction')
plt.legend(loc='upper right')
plt.show()
# """
# predict on full dataset
# """
# Y_predict = classify_cells(data, most_probable_sample, table, cell_type_name2idx)
# accuracy = sum(Y == Y_predict)*1.0/ data.shape[0]
# print "Chain % d accuracy on subset data: %.3f" % (argmax_chain_id+1,accuracy)
# bins = table.shape[0]
# plt.hist(Y, bins, alpha=0.5, label='Y:cell type')
# plt.hist(Y_predict, bins, alpha=0.5, label='Z:prediction')
# plt.legend(loc='upper right')
# plt.show()
"""
predict on subset of dataset
"""
#Y_predict = Y_predict_accepted_trace[argmax_chain_id][argmax_sample_id]
# accuracy = sum(Y_subset == Y_predict)*1.0/ data_subset.shape[0]
# print "Chain % d accuracy on subset data: %.3f" % (argmax_chain_id+1,accuracy)
# bins = table.shape[0]
# plt.hist(Y_subset, bins, alpha=0.5, label='Y:cell type')
# plt.hist(Y_predict, bins, alpha=0.5, label='Z:prediction')
# plt.legend(loc='upper right')
# plt.show()
# # plot 5 chains
# threshold = -113888.425449/data.shape[0]*N
# fig, axs = plt.subplots(5, 3, figsize=(10,10) )
# for chain in range(5):
# axs[chain, 0].plot(logl_complete_trace[chain])
# axs[chain, 1].plot(logl_accepted_trace[chain])
# axs[chain, 2].plot(accuracy_accepted_trace[chain])
# axs[chain, 0].set_title('Trace of likelihood Chain %d, all samples' % chain, fontsize=8)
# axs[chain, 1].set_title('Trace of likelihood Chain %d, accepted samples' % chain, fontsize=8)
# axs[chain, 2].set_title('Trace of accuracy Chain %d, accepted samples' % chain, fontsize=8)
# fig.tight_layout()
plt.plot([max(logl_accepted_trace[i]) for i in range(n_mcmc_chain)])
print([max(logl_accepted_trace[i]) for i in range(n_mcmc_chain)])
print([max(logl_accepted_trace[i]) for i in range(n_mcmc_chain)][29])
plt.show()
plt.plot([max(accuracy_accepted_trace[i]) for i in range(n_mcmc_chain)])
print([max(accuracy_accepted_trace[i]) for i in range(n_mcmc_chain)])
print([max(accuracy_accepted_trace[i]) for i in range(n_mcmc_chain)][29])
# ensemble 100 samples from prior
# vote
votes = np.zeros([data_subset.shape[0], table.shape[0]])
for i in range(n_mcmc_chain):
Y_predict = Y_predict_accepted_trace[i][0]
for _ in range(len(Y_predict)):
votes[_,Y_predict[_]] += 1
Y_predict_majority = np.argmax(votes, axis=1)
print votes.sum()
accuracy = sum(Y_subset == Y_predict_majority)*1.0/ data_subset.shape[0]
print "Chain % d accuracy on subset data: %.3f" % (chain+1,accuracy)
bins = table.shape[0]
plt.hist(Y_subset, bins, alpha=0.5, label='Y:cell type')
plt.hist(Y_predict_majority, bins, alpha=0.5, label='Z:prediction')
plt.legend(loc='upper right')
plt.show()
def query_table(table,list_of_condition):
if len(list_of_condition) == 0:
return table
dim, label = list_of_condition[0]
if label == 1:
table = table[table[dim] >= 0]
else:
table = table[table[dim] <= 0]
return query_table(table, list_of_condition[1:])
conditions = [['CD123',-1],['CD3', -1],['CD7',-1],['CD11c',-1],['CD19',+1],['CD34',-1]]
print query_table(table, conditions)
###Output
CD45RA CD133 CD19 CD22 CD11b CD4 CD8 CD34 Flt3 CD20 \
Mature B cells 0.0 0.0 1 0.0 0.0 0.0 -1.0 -1 0.0 0.0
Plasma B cells 0.0 0.0 1 0.0 0.0 0.0 -1.0 -1 0.0 0.0
Pre B cells 0.0 0.0 1 0.0 0.0 0.0 -1.0 -1 0.0 0.0
... CD44 CD38 CD13 CD3 CD61 CD117 CD49d HLADR CD64 \
Mature B cells ... 0.0 0.0 0.0 -1 0.0 0.0 0.0 0.0 0.0
Plasma B cells ... 0.0 1.0 0.0 -1 0.0 0.0 0.0 -1.0 0.0
Pre B cells ... 0.0 1.0 0.0 -1 0.0 0.0 0.0 1.0 -1.0
CD41
Mature B cells 0.0
Plasma B cells 0.0
Pre B cells 0.0
[3 rows x 32 columns]
|
notebooks/ProbModelingDNNs/NLPCA-Keras-Layers.ipynb | ###Markdown
Data For illustration purposes, the MNIST dataset containing handwritten digits will be used. In particular, we can obtain the data from the ``inferpy`` package (and hence from Keras):
###Code
# number of observations (dataset size)
N = 1000
# digits considered
DIG = [0,1,2]
# load the data
(x_train, y_train), _ = mnist.load_data(num_instances=N, digits=DIG)
# plot the digits
mnist.plot_digits(x_train, grid=[5,5])
###Output
_____no_output_____
###Markdown
Model definitionThe implementation of the generative model for an NLPCA model (Algorithm 3) is defined below. The input parameters are: `k` is the latent dimension, `d` is the data dimension, and `N` is the number of samples or data instances.
###Code
# Model constants
k, d0, d1 = 2, 100, np.shape(x_train)[-1]
# initial values
loc_init = 0.001
scale_init = 1
@inf.probmodel
def nlpca(k, d0, d1, decoder):
with inf.datamodel():
# Define local latent variables
z = inf.Normal(loc=tf.ones([k]), scale=1, name="z")
output = decoder(d0, d1, z)
# Define the observed variables
x = inf.Normal(loc=output, scale=1., name="x")
def decoder(d0, d1, z):
return tf.keras.Sequential([
tf.keras.layers.Dense(d0, activation=tf.nn.relu),
tf.keras.layers.Dense(d1),
])(z)
print(nlpca)
print(decoder)
###Output
<function nlpca at 0x132b4cd90>
<function decoder at 0x132b4cd08>
###Markdown
This is a latent variable model (LVM) containing DNNs where the latent representation $\boldsymbol{z}$ is known as the scores, and the affine transformation is performed using a DNN parametrized by $\boldsymbol{\beta}$ and $\boldsymbol{\alpha}$. InferenceVariational inference is a deterministic technique that finds a tractable approximation to an intractable (posterior) distribution. We will use $q$ to denote the approximation, and use $p$ to signify the true distribution (like $p(\boldsymbol{\beta, \alpha},\boldsymbol{z}|\boldsymbol{x})$ in the example above).More specifically, ${\cal Q}$ will denote a set of possible approximations $q$.In practice, we define a generative model for sampling from $q(\boldsymbol{\beta, \alpha},\boldsymbol{z} | \boldsymbol{\lambda}, \boldsymbol{\phi})$, where $\boldsymbol{\lambda}, \boldsymbol{\phi}$ are the variational parameters to optimise.
###Code
@inf.probmodel
def Q(k, d0, d1):
with inf.datamodel():
qz_loc = inf.Parameter(tf.ones([k]), name="qz_loc")
qz_scale = tf.math.softplus(inf.Parameter(tf.ones([k]), name="qz_scale"))
qz = inf.Normal(loc=qz_loc, scale=qz_scale, name="z")
print(Q)
###Output
<function Q at 0x1397f6bf8>
###Markdown
Variational methods adjusts the parameters by maximizing the ELBO (Evidence LOwer Bound) denoted $\cal{L}$ and expressed as $\cal{L}(\boldsymbol{\lambda},\boldsymbol{\phi}) = \mathbb{E}_q [\ln p(\boldsymbol{x}, \boldsymbol{z}, \boldsymbol{\beta, \alpha})] - \mathbb{E}_q [\ln q(\boldsymbol{\beta, \alpha},\boldsymbol{z}|\boldsymbol{\lambda},\boldsymbol{\phi})]$In InferPy, this is transparent to the user: it is only required to create the instances of the P and Q models, the optimizer and inference method objects.
###Code
# create an instance of the P model and the Q model
m = nlpca(k, d0, d1, decoder)
q = Q(k,d0,d1)
# load the data
(x_train, y_train), _ = mnist.load_data(num_instances=N, digits=DIG)
optimizer = tf.train.AdamOptimizer(learning_rate)
VI = inf.inference.VI(q, optimizer=optimizer, epochs=2000)
###Output
WARNING: Logging before flag parsing goes to stderr.
W0108 10:45:19.412915 4515657152 deprecation_wrapper.py:119] From ../../inferpy/models/prob_model.py:63: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.
W0108 10:45:19.420696 4515657152 deprecation_wrapper.py:119] From ../../inferpy/util/tf_graph.py:63: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.
W0108 10:45:19.427248 4515657152 deprecation_wrapper.py:119] From ../../inferpy/util/interceptor.py:142: The name tf.variables_initializer is deprecated. Please use tf.compat.v1.variables_initializer instead.
W0108 10:45:19.454304 4515657152 deprecation.py:506] From /Users/rcabanas/venv/InferPy/lib/python3.6/site-packages/tensorflow/python/ops/init_ops.py:1251: calling VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
###Markdown
Finally, the ELBO function is maximized.
###Code
m.fit({"x": x_train}, VI)
###Output
W0108 10:45:19.933058 4515657152 deprecation.py:323] From ../../inferpy/util/interceptor.py:34: Variable.load (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Prefer Variable.assign which has equivalent behavior in 2.X.
W0108 10:45:20.312705 4515657152 deprecation.py:323] From /Users/rcabanas/venv/InferPy/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py:1354: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
W0108 10:45:20.597137 4515657152 deprecation_wrapper.py:119] From ../../inferpy/inference/variational/vi.py:187: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.
###Markdown
After the inference, we can plot the hidden representation:
###Code
post = {"z": m.posterior("z", data={"x": x_train}).sample()}
markers = ["x", "+", "o"]
colors = [plt.get_cmap("gist_rainbow")(0.05),
plt.get_cmap("gnuplot2")(0.08),
plt.get_cmap("gist_rainbow")(0.33)]
transp = [0.9, 0.9, 0.5]
fig = plt.figure()
for c in range(0, len(DIG)):
col = colors[c]
plt.scatter(post["z"][y_train == DIG[c], 0], post["z"][y_train == DIG[c], 1], color=col,
label=DIG[c], marker=markers[c], alpha=transp[c], s=60)
plt.legend(loc='upper right',framealpha=1)
###Output
_____no_output_____
###Markdown
TestFor testing our model, we will generate samples of $\boldsymbol{x}$ given the inferred posterior distributions.
###Code
x_gen = m.posterior_predictive('x', data=post).sample()
# plot the digits
mnist.plot_digits(x_gen, grid=[5,5])
###Output
_____no_output_____ |
examples/brightfield - m025-A1.ipynb | ###Markdown
Tracking particles in brightfield**TL;DR** We developed a new method to more accurately track colloidal particles in bright field images that do not resemble Gaussian features.The standard `locate` function in `trackpy` finds features by determining local maxima of intensity, thereby assuming the particles that need to be tracked have a Gaussian intensity distribution with a maximum in the center. While this is usually true for fluorescent particles, particles in bright field mode show a different intensity profile (see example images in this notebook).Because the intensity profile is usually different from a Gaussian in bright field mode, the `locate` function fails to accurately find the center of the particle. We have developed a new method that uses the edge of the particle to more accurately refine the position of the center of the particle as detected by the standard `locate` function.**Citing this work**The algorithm we'll use is described in the following paper:"Colloid supported lipid bilayers for self-assembly", M. Rinaldin, R.W. Verweij, I. Chakrabory, D.J. Kraft, Soft Matter (2019) [https://doi.org/10.1039/c8sm01661e](https://doi.org/10.1039/c8sm01661e). It is implemented in `trackpy` as `locate_brightfield_ring`. If you're using this code in your work, please cite both the paper and the appropriate `trackpy` version. Preliminary imports
###Code
from __future__ import division, unicode_literals, print_function # for compatibility with Python 2 and 3
import matplotlib as mpl
import matplotlib.pyplot as plt
# change the following to %matplotlib notebook for interactive plotting
%matplotlib inline
# Optionally, tweak styles.
mpl.rc('figure', figsize=(10, 5))
mpl.rc('image', cmap='gray')
###Output
_____no_output_____
###Markdown
We also might want to use scientific Python libraries. Finally, we'll import ``trackpy`` itself and its sister project, `pims`.
###Code
import numpy as np
import pandas as pd
from pandas import DataFrame, Series # for convenience
import pims
from pims import ND2_Reader
import trackpy as tp
###Output
_____no_output_____
###Markdown
Step 1: Read the Data Opening images or videoWe open the files with `pims`:
###Code
frames = ND2_Reader('Z:\\Hannah - vimentin networks\\m025\\210624_Vimentin_m025_p06_A_001.nd2')
micron_per_pixel = 0.13
feature_diameter = 1.0 # um
radius = int(np.round(feature_diameter/2.0/micron_per_pixel))
if radius % 2 == 0:
radius += 1
print('Using a radius of {:d} px'.format(radius))
frames
###Output
Using a radius of 5 px
###Markdown
These are colloidal particles diffusing in quasi-2D on a substrate. The images are cropped to focus on just five particles for 50 frames (approximately 7 seconds). Let's have a look at the first frame:
###Code
frames[0]
###Output
_____no_output_____
###Markdown
Step 2: Locate Features Using the locate function First, we'll try the standard `locate` function to find the particle positions:
###Code
# we use a slightly larger radius
#f_locate = tp.locate(frames[100], 9, minmass=100000, invert=True)
f_locate = tp.locate(frames[100], 21, minmass=400000, invert=True)
tp.annotate(f_locate, frames[100], plot_style={'markersize': radius});
#f = tp.batch(frames[:5000], 9, minmass=100000, invert=True, processes=1)
f = tp.batch(frames[:5000], 21, minmass=400000, invert=True, processes=1)
t = tp.link(f, 5, memory=3)
t.to_pickle("210624_Vimentin_m025_p06_A1_TRAJECTORY_NEW.pkl")
plt.figure()
tp.plot_traj(t);
d = tp.compute_drift(t)
d.plot()
plt.show()
tm = tp.subtract_drift(t.copy(), d)
im = tp.imsd(tm, 0.13, 99.92)
fig, ax = plt.subplots()
ax.plot(im.index, im, 'k-', alpha=0.3) # black lines, semitransparent
ax.set(ylabel=r'$\langle \Delta r^2 \rangle$ [$\mu$m$^2$]',
xlabel='lag time $t$')
ax.set_xscale('log')
ax.set_yscale('log')
em_dc = tp.emsd(tm, 0.13, 99.92, max_lagtime=1000)
em = tp.emsd(t, 0.13, 99.92, max_lagtime=1000)
fig, ax = plt.subplots()
ax.loglog(em_dc.index, em_dc, '--ko', alpha=0.9) # black lines, semitransparent
ax.loglog(em.index, em, '--ro', alpha=0.9) # red lines, semitransparent
ax.set(ylabel=r'$\langle \Delta r^2 \rangle$ [$\mu$m$^2$]',
xlabel='lag time $t$')
#ax.set_xscale('log')
#ax.set_yscale('log')
plt.title("m03\\210624_Vimentin_m03_p06_A_001")
#ax.plot(em.index, (em.index**0.32) * 0.032, '-b')
#plt.axhline(y=0.024)
em.to_pickle("./210624_Vimentin_m025_p06_A1_not_drift_corrected_trackpy.pkl")
em_dc.to_pickle("./210624_Vimentin_m025_p06_A1_drift_corrected_trackpy.pkl")
tp.emsd?
###Output
_____no_output_____
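###Markdown
The ensemble MSD computed above can be summarised with a power-law fit. This is a sketch that is not part of the original analysis; it assumes `em_dc` from the cell above and that trackpy's `fit_powerlaw` utility is available. A fitted exponent n close to 1 would indicate ordinary diffusion:
###Code
# fit <dr^2> = A * t^n to the drift-corrected ensemble MSD (linear fit in log-log space)
tp.utils.fit_powerlaw(em_dc)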
###Markdown
All particle trajectories seem to be tracked. Let's zoom in a bit more to verify this:
###Code
plt.figure()
background = np.mean(frames[0:10], axis=0)
tp.plot_traj(tm, superimpose=background, ax=plt.gca(), plot_style={'linewidth': 2})
#plt.ylim(0, 150)
#plt.xlim(200, 450);
###Output
_____no_output_____ |
Assingments/module05/students/module2/labs/.ipynb_checkpoints/intro_descriptive_stats_python-checkpoint.ipynb | ###Markdown
Soft Introduction to Descriptive Statistics with `NumPy` **Note: The Game of Thrones Data Set is explained in the R Notebook Lab.**Now that we have the basics of data manipulation with `pandas` under our belts, we are going to begin diving into descriptive statistics. Don't worry; we aren't leaving data manipulation behind. It will play a big role throughout the rest of course and our data science careers.Today we will be introduced to another popular data science library known as `NumPy`\*. `NumPy` provides us with methods to work with mathematical functions to run on large arrays and matrices. Today we will be using it for some of our descriptive statistics...\* *[`intro_numpy.ipynb`](intro_numpy.ipynb) gives an overview of the `NumPy` package. This notebook is meant to give you an overview of `NumPy` within the scope of descriptive statistics.* Read in Data Set
###Code
import pandas as pd
import numpy as np
with open('../../datasets/game-of-thrones/GoT_age_at_death.csv') as file:
df = pd.read_csv(file)
df.columns = ['character', 'age', 'dead', 'gender', 'affiliation'] # change file header names
# change column types
df['dead'] = df['dead'].astype('category')
df['gender'] = df['gender'].astype('category')
df['affiliation'] = df['affiliation'].astype('category')
df.head()
###Output
_____no_output_____
###Markdown
The Mean A brief note about `pandas` and `NumPy`It is important to know that `pandas` is built on top of `NumPy` and therefore much of the functionality that we will be introducing today as `NumPy` methods is also available with `pandas`. Let's see for ourselves...
###Code
pandas_mean = df.age.mean()
numpy_mean = np.mean(df.age)
print("I am the mean constructed with pandas: {}".format(pandas_mean))
print("And I am the mean constructed with NumPy: {}".format(numpy_mean))
###Output
I am the mean constructed with pandas: 35.59891598915989
And I am the mean constructed with NumPy: 35.59891598915989
###Markdown
See... the two produce the same result, although the methods look a bit different. Either way is fine for finding the mean from a numeric column of a data frame. But the `pandas` way will only work on a `pandas` object. Imagine if we wanted to find the mean of a numeric list we created without `pandas`.
###Code
x = [1,2,3,4,5] # create a list of numbers
###Output
_____no_output_____
###Markdown
The `pandas` way won't work...
###Code
x.mean()
###Output
_____no_output_____
###Markdown
But the `NumPy` way will...
###Code
np.mean(x)
###Output
_____no_output_____
###Markdown
For the majority of the following descriptive statistics we will be using `NumPy` so we can familiarize ourselves with the `NumPy` package, which will be used heavily in other modules and courses. The Standard DeviationFor a more detailed description of all of the statistics that we will be using in this notebook, refer to the `R` lab notebook on the topic as it discusses a bit more of the logic behind each one. Below is how to find the standard deviation with `NumPy`.
###Code
np.std(df.age)
###Output
_____no_output_____
###Markdown
Hmm...this looks a little odd. If you have a good memory and can remember back to our `R` lab notebook, you might recall that the standard deviation of this dataset's `Age` variable was 19.0something, and although 18.992 is rather close to 19.0..., the result is different based on design.When looking at statistics we have populations and then we have samples. Simply put, a population is everyone while a sample is a subset of the population. Imagine we wanted to measure the heights of Oregonians. The population would be a measurement for every single person from Oregon. This would be impossible to do, so instead we would take a sample of individuals from Oregon. Now, standard deviation is found by taking the square root of a statistic called the "variance" and the way variance is computed for the population is just a tad bit different than how it is computed for the sample (we will get more into the math behind this in the Statistical and Mathematical Foundations course). It is this small difference that produces the different results. `NumPy` defaults to the population standard deviation. Here is how we specify the sample standard deviation...
###Code
np.std(df.age,ddof = 1)
###Output
_____no_output_____
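###Markdown
To see what `ddof=1` changes under the hood, the sample standard deviation can also be computed by hand and compared with the `NumPy` call (a quick sketch, not part of the original lab):
###Code
ages = df.age.values
n = len(ages)
# divide the squared deviations by n - 1 instead of n
manual_sample_sd = np.sqrt(((ages - ages.mean()) ** 2).sum() / (n - 1))
manual_sample_sd  # should match np.std(df.age, ddof=1)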
###Markdown
There we have it! Instead of getting too deep into the math behind this `ddof` parameter and the argument that we passed it, just know that this is subtracting 1 from the sample size (amount of data points), which is what it takes to produce the sample standard deviation. We will get into this parameter a bit more during the Mathematical and Statistical Foundations course. The MedianThe median is simple to find with `NumPy`. Remember, the median is not sensitive to outliers and, therefore, is sometimes more preferable than the mean when trying to find the average.Here's how we do it...
###Code
np.median(df.age)
###Output
_____no_output_____
###Markdown
QuartilesIn `R`, you may recall, the `summary` function will print the quartiles as well as the mean of a numeric variable. In `pandas` the `describe()` method is akin to that, except that it also prints out the number of values in the variable as well.Let's try it out below.
###Code
df.age.describe()
###Output
_____no_output_____
###Markdown
That's nice and all, but what if we wanted to see the max value of the points at a different percentage of the dataset? `NumPy` provides us with a very convenient function to do so.
###Code
np.percentile(df.age, 65)
###Output
_____no_output_____
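###Markdown
`np.percentile` also accepts a list of percentages, so the quartiles reported by `describe()` can be reproduced in a single call (a small sketch):
###Code
np.percentile(df.age, [25, 50, 75])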
###Markdown
So 65% of the values in the age variable are 41 and below. This method is simple. We inserted two arguments: the first is a numeric object, in this case the `age` variable of our *Game of Thrones* dataset, and the second is a number between 0 and 100 signifying the percentage. Maximum and Minimum ValuesOnce again, we see that there are multiple ways to do the same thing. We can also find the maximum and minimum value of a variable by calling the `amax` and `amin` method.
###Code
np.amax(df.age)
np.amin(df.age)
###Output
_____no_output_____
###Markdown
Bivariate AnalysisWe're going to jump over to our Stature Hand and Foot dataset to find the covariance and correlation. Remember, these are statistics that look at the relationship between two variables. For these two statistics, we will be using `pandas` again. Read in the Data
###Code
with open('../../datasets/stature-hand-foot/stature-hand-foot.csv') as file2:
df2 = pd.read_csv(file2)
df2['gender'] = df2['gender'].astype('category')
df2.columns = ['gender', 'height', 'hand_length', 'foot_length']
df2.head()
###Output
_____no_output_____
###Markdown
CovarianceWhen finding the covariance of two variables, you are going be performing a method on a pandas series type object. In other words, we are going to be calling a method on a numeric variable and passing another variable as an argument. Take a look at the example below.
###Code
df2.hand_length.cov(df2.foot_length)
###Output
_____no_output_____
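###Markdown
Covariance depends on the units of the two variables; the correlation in the next section is simply the covariance rescaled by both standard deviations. A quick sketch of that relationship (not part of the original lab):
###Code
cov = df2.hand_length.cov(df2.foot_length)
# pandas uses ddof=1 for both cov() and std(), so this ratio equals the Pearson correlation
cov / (df2.hand_length.std() * df2.foot_length.std())  # should match the corr() result below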
###Markdown
As you can see, we run the method on the `hand_length` variable and pass the `foot_length` variable as the argument. CorrelationAnd now to assess the strength of the relationship between these two variables. As you can imagine, the syntax is the same as it was for the covariance.
###Code
df2.hand_length.corr(df2.foot_length)
###Output
_____no_output_____ |
python/basis/jupyter/Generator.ipynb | ###Markdown
Generator A generator is a lazy iterable object: the next value is only computed and produced when it is requested, which saves memory. Creating a generator
###Code
array = (i*2 for i in range(10))
array
###Output
_____no_output_____
###Markdown
Generator operations A generator cannot be accessed by subscript (indexing)
###Code
array[3]
###Output
_____no_output_____
###Markdown
A generator can use the next() method to obtain its next value
###Code
array.__next__()
###Output
_____no_output_____
###Markdown
Generators are iterable
###Code
for i in array:
print(i)
###Output
2
4
6
8
10
12
14
16
18
###Markdown
Once the generator has been fully iterated, calling the next() method again raises a StopIteration exception
###Code
array.__next__()
###Output
_____no_output_____
###Markdown
Generator functions
###Code
def fib(max):
n,a,b = 0,0,1
while n < max:
        yield b # pause and return b; execution resumes when next() or send() is received
a,b = b,a+b
n = n+1
    return("done") # becomes the StopIteration value, used to check whether the generator finished normally
f = fib(5)
print(f.send(None))
print(f.send(12)) # can send() replace the value of the yield expression? Apparently not here, since nothing is assigned from yield
while True:
try:
print(next(f))
except StopIteration as e:
print(e.value)
break
###Output
1
1
2
3
5
done
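###Markdown
The question in the comment above (whether send() can replace the value of the yield expression) only applies when the yielded value is assigned, as in x = yield. A small sketch, not part of the original notebook, illustrating this:
###Code
def echo():
    while True:
        received = yield "ready"  # send() delivers its argument as the value of this yield expression
        print("got:", received)

g = echo()
print(next(g))  # prime the generator; prints "ready"
g.send(42)      # resumes the generator; prints "got: 42" and yields "ready" again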
|
Chapter8_MetricsAndEvaluation/CrossValidation/CrossValidation.ipynb | ###Markdown
Data preparation
###Code
import numpy as np
np.random.seed(42)
import matplotlib.pyplot as plt
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_breast_cancer
dataset = load_breast_cancer()
x = dataset.data
y = dataset.target
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2)
print(f"x_train shape: {x_train.shape} x_test.shape: {x_test.shape}")
###Output
x_train shape: (455, 30) x_test.shape: (114, 30)
###Markdown
Cross Validation
###Code
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import cross_val_predict
kf = KFold(n_splits=10, shuffle=True)
clf = KNeighborsClassifier(n_neighbors=2)
scores = cross_val_score(clf, x_train, y_train, cv=kf, n_jobs=-1)
predictions = cross_val_predict(clf, x_train, y_train, cv=kf, n_jobs=-1)
mean_score = np.mean(scores)
std_score = np.std(scores)
print(f"Accuracies: {scores}")
print(f"Mean Score: {mean_score}")
print(f"Std Score: {std_score}")
kf = KFold(n_splits=10, shuffle=True)
clf = KNeighborsClassifier(n_neighbors=3)
scores = cross_val_score(clf, x_train, y_train, cv=kf, n_jobs=-1)
predictions = cross_val_predict(clf, x_train, y_train, cv=kf, n_jobs=-1)
mean_score = np.mean(scores)
std_score = np.std(scores)
print(f"Accuracies: {scores}")
print(f"Mean Score: {mean_score}")
print(f"Std Score: {std_score}")
kf = KFold(n_splits=10, shuffle=True)
clf = KNeighborsClassifier(n_neighbors=4)
scores = cross_val_score(clf, x_train, y_train, cv=kf, n_jobs=-1)
predictions = cross_val_predict(clf, x_train, y_train, cv=kf, n_jobs=-1)
mean_score = np.mean(scores)
std_score = np.std(scores)
print(f"Accuracies: {scores}")
print(f"Mean Score: {mean_score}")
print(f"Std Score: {std_score}")
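# The three blocks above differ only in n_neighbors; a loop over candidate values
# (a sketch, not part of the original notebook) avoids the repetition:
for k in [2, 3, 4]:
    clf_k = KNeighborsClassifier(n_neighbors=k)
    scores_k = cross_val_score(clf_k, x_train, y_train, cv=KFold(n_splits=10, shuffle=True), n_jobs=-1)
    print(f"n_neighbors={k}: mean={np.mean(scores_k):.4f} std={np.std(scores_k):.4f}")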
plt.plot(range(len(scores)), scores, color="blue")
plt.xlim(0, 10)
plt.ylim(0.85, 1)
plt.axhline(mean_score, linestyle="-", color="red")
plt.legend(["Validation Scores", "Mean Score"])
plt.show()
###Output
_____no_output_____ |
RIVM Besmettingen per gemeente.ipynb | ###Markdown
RIVM number of infections per municipality
###Code
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("whitegrid")
import cufflinks as cf
from IPython.display import display,HTML
cf.set_config_file(sharing='public',theme='ggplot',offline=True)
df = pd.read_json("https://data.rivm.nl/covid-19/COVID-19_aantallen_gemeente_per_dag.json")
df
df.head()
df.info()
df.describe()
df.groupby(["Municipality_name"]).sum().nlargest(10, "Total_reported")
df.groupby(["Municipality_name"]).sum().nlargest(10, "Deceased")
df.groupby(["Municipality_name"]).sum().nsmallest(10, "Total_reported")
df.groupby(["Municipality_name"]).sum().nsmallest(10, "Deceased")
df.groupby(["Security_region_name"]).sum().nlargest(10, "Total_reported")
df["Total_reported"].sum()
df.groupby(["Date_of_publication"]).sum()
df.groupby(["Date_of_publication"]).sum().iplot()
dftr = df.groupby(["Date_of_publication"])["Total_reported"].sum()
dftr.iplot()
dfha = df.groupby(["Date_of_publication"])["Hospital_admission"].sum()
dfha.iplot()
dfd = df.groupby(["Date_of_publication"])["Deceased"].sum()
dfd.iplot()
def lesslabels():
for ind, label in enumerate(fig.get_xticklabels()):
if ind % 15 == 0: # every 15th label is kept
label.set_visible(True)
else:
label.set_visible(False)
def cm2inch(value):
return value/2.54
plt.figure(figsize=(cm2inch(60), cm2inch(20)))
fig = sns.lineplot(data=dftr)
# plt.axhline(y=1.0, color='r', linestyle='-')
lesslabels()
plt.xticks(rotation=45)
plt.show()
plt.figure(figsize=(cm2inch(60), cm2inch(20)))
fig = sns.relplot(data=dftr, kind = "line")
# plt.axhline(y=1.0, color='r', linestyle='-')
# lesslabels()
plt.xticks(rotation=45)
plt.show()
###Output
_____no_output_____ |
keras/171225-encoder-decoder-with-attention.ipynb | ###Markdown
Encoder-Decoder Model with Attention
###Code
from random import randint
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense, TimeDistributed, RepeatVector
def generate_sequence(length, n_unique):
return [randint(0, n_unique - 1) for _ in range(length)]
def one_hot_encode(sequence, n_unique):
encoding = list()
for value in sequence:
vector = [0 for _ in range(n_unique)]
vector[value] = 1
encoding.append(vector)
return np.array(encoding)
def one_hot_decode(encoded_seq):
return [np.argmax(vector) for vector in encoded_seq]
sequence = generate_sequence(5, 50)
print(sequence)
encoded = one_hot_encode(sequence, 50)
print(encoded.shape)
print(encoded)
decoded = one_hot_decode(encoded)
print(decoded)
def get_pair(n_in, n_out, n_unique):
    # e.g. when n_in=5 and n_out=2:
    # the input and output sequences have the same length; the output is zero-padded
# [4, 8, 1, 2, 4] => [4, 8, 0, 0, 0]
sequence_in = generate_sequence(n_in, n_unique)
sequence_out = sequence_in[:n_out] + [0 for _ in range(n_in - n_out)]
X = one_hot_encode(sequence_in, n_unique)
y = one_hot_encode(sequence_out, n_unique)
# (samples, time steps, features)
X = X.reshape((1, X.shape[0], X.shape[1]))
y = y.reshape((1, y.shape[0], y.shape[1]))
return X, y
X, y = get_pair(5, 2, 50)
print(X.shape, y.shape)
print('X=%s, y=%s' % (one_hot_decode(X[0]), one_hot_decode(y[0])))
###Output
(1, 5, 50) (1, 5, 50)
X=[26, 42, 49, 24, 21], y=[26, 42, 0, 0, 0]
###Markdown
Encoder-Decoder without Attention
###Code
n_features = 50
n_timesteps_in = 5
n_timesteps_out = 2
model = Sequential()
# return_sequences=False, so only the output at the final input step is returned
model.add(LSTM(150, input_shape=(n_timesteps_in, n_features)))
model.add(RepeatVector(n_timesteps_in)) # repeat the vector so the output sequence has the same length as the input
model.add(LSTM(150, return_sequences=True))
model.add(TimeDistributed(Dense(n_features, activation='softmax')))
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.summary()
for epoch in range(5000):
X, y = get_pair(n_timesteps_in, n_timesteps_out, n_features)
model.fit(X, y, epochs=1, verbose=2)
# evaluate LSTM
total, correct = 100, 0
for _ in range(total):
X, y = get_pair(n_timesteps_in, n_timesteps_out, n_features)
yhat = model.predict(X, verbose=0)
if np.array_equal(one_hot_decode(y[0]), one_hot_decode(yhat[0])):
correct += 1
print('Accuracy: %.2f%%' % (float(correct) / float(total) * 100.0))
for _ in range(10):
X, y = get_pair(n_timesteps_in, n_timesteps_out, n_features)
yhat = model.predict(X, verbose=0)
print('Expected:', one_hot_decode(y[0]), 'Predicted:', one_hot_decode(yhat[0]))
###Output
Expected: [41, 46, 0, 0, 0] Predicted: [41, 41, 0, 0, 0]
Expected: [9, 48, 0, 0, 0] Predicted: [9, 26, 0, 0, 0]
Expected: [6, 1, 0, 0, 0] Predicted: [6, 0, 0, 0, 0]
Expected: [25, 32, 0, 0, 0] Predicted: [25, 32, 0, 0, 0]
Expected: [44, 2, 0, 0, 0] Predicted: [44, 2, 0, 0, 0]
Expected: [46, 31, 0, 0, 0] Predicted: [46, 1, 0, 0, 0]
Expected: [17, 7, 0, 0, 0] Predicted: [17, 14, 0, 0, 0]
Expected: [39, 42, 0, 0, 0] Predicted: [39, 39, 0, 0, 0]
Expected: [6, 24, 0, 0, 0] Predicted: [6, 0, 0, 0, 0]
Expected: [5, 40, 0, 0, 0] Predicted: [5, 5, 0, 0, 0]
###Markdown
Attention
###Code
from attention_decoder import AttentionDecoder
model = Sequential()
# when using attention, the outputs of all time steps are needed for the weighting
model.add(LSTM(150, input_shape=(n_timesteps_in, n_features), return_sequences=True))
model.add(AttentionDecoder(150, n_features))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc'])
model.summary()
for epoch in range(5000):
X, y = get_pair(n_timesteps_in, n_timesteps_out, n_features)
model.fit(X, y, epochs=1, verbose=2)
# evaluate LSTM
total, correct = 100, 0
for _ in range(total):
X, y = get_pair(n_timesteps_in, n_timesteps_out, n_features)
yhat = model.predict(X, verbose=0)
if np.array_equal(one_hot_decode(y[0]), one_hot_decode(yhat[0])):
correct += 1
print('Accuracy: %.2f%%' % (float(correct) / float(total) * 100.0))
for _ in range(10):
X, y = get_pair(n_timesteps_in, n_timesteps_out, n_features)
yhat = model.predict(X, verbose=0)
print('Expected:', one_hot_decode(y[0]), 'Predicted:', one_hot_decode(yhat[0]))
###Output
Expected: [24, 33, 0, 0, 0] Predicted: [24, 33, 0, 0, 0]
Expected: [28, 44, 0, 0, 0] Predicted: [28, 44, 0, 0, 0]
Expected: [40, 23, 0, 0, 0] Predicted: [40, 23, 0, 0, 0]
Expected: [32, 47, 0, 0, 0] Predicted: [32, 47, 0, 0, 0]
Expected: [43, 27, 0, 0, 0] Predicted: [43, 27, 0, 0, 0]
Expected: [15, 34, 0, 0, 0] Predicted: [15, 34, 0, 0, 0]
Expected: [10, 16, 0, 0, 0] Predicted: [10, 16, 0, 0, 0]
Expected: [23, 28, 0, 0, 0] Predicted: [23, 28, 0, 0, 0]
Expected: [46, 47, 0, 0, 0] Predicted: [46, 47, 0, 0, 0]
Expected: [45, 35, 0, 0, 0] Predicted: [45, 35, 0, 0, 0]
###Markdown
Comparison of Models
###Code
def baseline_model(n_timesteps_in, n_features):
model = Sequential()
model.add(LSTM(150, input_shape=(n_timesteps_in, n_features)))
model.add(RepeatVector(n_timesteps_in))
model.add(LSTM(150, return_sequences=True))
model.add(TimeDistributed(Dense(n_features, activation='softmax')))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc'])
return model
def attention_model(n_timesteps_in, n_features):
model = Sequential()
    model.add(LSTM(150, input_shape=(n_timesteps_in, n_features), return_sequences=True))
model.add(AttentionDecoder(150, n_features))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc'])
return model
# train and evaluate a model, return accuracy
def train_evaluate_model(model, n_timesteps_in, n_timesteps_out, n_features):
# train LSTM
for epoch in range(5000):
# generate new random sequence
X, y = get_pair(n_timesteps_in, n_timesteps_out, n_features)
# fit model for one epoch on this sequence
model.fit(X, y, epochs=1, verbose=0)
# evaluate LSTM
total, correct = 100, 0
for _ in range(total):
X, y = get_pair(n_timesteps_in, n_timesteps_out, n_features)
yhat = model.predict(X, verbose=0)
        if np.array_equal(one_hot_decode(y[0]), one_hot_decode(yhat[0])):
correct += 1
return float(correct) / float(total)*100.0
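# The comparison itself is not executed in this cell; a possible driver is sketched
# below (commented out; n_repeats is an arbitrary choice, not from the original code):
# n_repeats = 10
# baseline_scores = [train_evaluate_model(baseline_model(n_timesteps_in, n_features),
#                                         n_timesteps_in, n_timesteps_out, n_features)
#                    for _ in range(n_repeats)]
# attention_scores = [train_evaluate_model(attention_model(n_timesteps_in, n_features),
#                                          n_timesteps_in, n_timesteps_out, n_features)
#                     for _ in range(n_repeats)]
# print('baseline mean accuracy: %.2f%%' % (sum(baseline_scores) / n_repeats))
# print('attention mean accuracy: %.2f%%' % (sum(attention_scores) / n_repeats))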
###Output
_____no_output_____ |
stochepi/notebooks/FigSMCFitTmaxAnalysisMultiRegion.ipynb | ###Markdown
Combined figure with SMC fit and CIs for $t \leq t_{\max}$ with multiple regions in one figure
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import re
import scipy.stats as sts
import xml.etree.ElementTree as ET
import warnings
import pickle
import copy
import csv
import datetime
import json
import string
from scipy.interpolate import UnivariateSpline
from scipy.optimize import minimize_scalar, root_scalar
from matplotlib.patches import Patch
from matplotlib.lines import Line2D
import matplotlib.ticker as ticker
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
import sys, importlib
sys.path.append("..")
from evpytools import evplot
from evpytools import pftools
from evpytools import auxiliary as aux
from evpytools import definitions as defn
for mod in [evplot, pftools, aux, defn]:
importlib.reload(mod)
plt.rcParams.update({'font.size': 18})
###Output
_____no_output_____
###Markdown
Import data files
###Code
fdata_files = [
"../data/in/sars2-seq-death-week-Netherlands-B.1.351.tsv",
"../data/in/sars2-seq-death-week-Japan-R.1.tsv"
]
#fdata_files = [
# "../data/in/sars2-seq-death-week-United_Kingdom-B.1.1.7.tsv",
# "../data/in/sars2-seq-death-week-Netherlands-B.1.1.7.tsv"
#]
#fdata_files = [
# "../data/in/sars2-seq-death-week-United_Kingdom-D614G.tsv",
# "../data/in/sars2-seq-death-week-Netherlands-D614G.tsv"
#]
fdatadicts = [
pftools.import_filter_data(fdata_file)
for fdata_file in fdata_files
]
###Output
_____no_output_____
###Markdown
Import filter results
###Code
pfout_files = [
"../data/out/ipf_result-sars_model_Netherlands_B.1.351.xml",
"../data/out/ipf_result-sars_model_Japan_R.1.xml"
]
#pfout_files = [
# "../data/out/ipf_result-sars_model_United_Kingdom_B.1.1.7.xml",
# "../data/out/ipf_result-sars_model_Netherlands_B.1.1.7.xml"
#]
#pfout_files = [
# "../data/out/ipf_result-sars_model_UK-614-wk.xml",
# "../data/out/ipf_result-sars_model_NL-614-wk.xml"
#]
idx = -1 ## select one of the PF iterations
pf_datas = [
pftools.extract_pfilter_data(pfout_file)
for pfout_file in pfout_files
]
###Output
_____no_output_____
###Markdown
Import profile likelihood results
###Code
prof_lik_files = [
"../data/out/profile-lik-tmax-results_Netherlands_B.1.351.json",
"../data/out/profile-lik-tmax-results_Japan-R.1.json"
]
#prof_lik_files = [
# "../data/out/profile-lik-tmax-results_United_Kingdom-B.1.1.7.json",
# "../data/out/profile-lik-tmax-results_Netherlands-B.1.1.7.json"
#]
#prof_lik_files = []
proflik_result_dicts = []
for prof_lik_file in prof_lik_files:
with open(prof_lik_file, 'r') as f:
proflik_result_dict = json.load(f)
proflik_result_dicts.append(proflik_result_dict)
###Output
_____no_output_____
###Markdown
Functions for creating the figure
###Code
def plot_data(axs, dds):
# deaths
ax = axs[0]
ws = [row["t"] for row in dds if row["deaths_cc"] == defn.uncensored_code]
Ds = [row["deaths"] for row in dds if row["deaths_cc"] == defn.uncensored_code]
ax.scatter(ws, Ds, color='k', edgecolor='k', zorder=4, label='data', s=20)
# mutant freq
ax = axs[1]
ts = [row["t"] for row in dds if row["Ntot"] > 0]
Fms = [row["Nmut"] / row["Ntot"] for row in dds if row["Ntot"] > 0]
## CIs for mutant frequency
lFms = [sts.beta.ppf(0.025, row["Nmut"]+0.5, row["Ntot"] - row["Nmut"]+0.5)
for row in dds if row["Ntot"] > 0]
uFms = [sts.beta.ppf(0.975, row["Nmut"]+0.5, row["Ntot"] - row["Nmut"]+0.5)
for row in dds if row["Ntot"] > 0]
for t, l, u in zip(ts, lFms, uFms):
ax.plot([t,t], [l,u], color='k', alpha=1)
ax.scatter(ts, Fms, color='k', edgecolor='k', zorder=4, label='data',
s=40, marker='_')
def plot_trajectories(ax, pf_data, varname, date0, color="tab:blue",
pretty_varname=None):
ID = pf_data["pfIDs"][0] ## select single ID
## latent paths
trajcolor = color
alpha_traj = 0.7
if pretty_varname is None:
pretty_varname = varname
## model predictions of the data
for j, path in enumerate(pf_data["paths"][ID]):
lab = None if j > 0 else pretty_varname
## extract timeseries
xs = path.findall("state")
ts = [float(x.attrib["t"]) for x in xs]
Xs = [float(x.find(f"var_vec[@name='{varname}']/var").attrib["val"])
for x in xs]
## plot
ax.plot(ts, Xs, color=trajcolor, alpha=alpha_traj,
linewidth=0.5, zorder=1, label=lab)
def plot_predictions(axs, pf_data, dds):
dt = 1
varcolor = ['purple', 'tab:blue']
obsvarnames = ['D', 'Fm']
ID = pf_data["pfIDs"][0] ## select single ID
ts = [float(x.attrib["t"]) for x in pf_data["pred_medians"][ID]]
for i, X in enumerate(obsvarnames):
ws = [row["t"] for row in dds if row["deaths_cc"] == defn.uncensored_code]
mask = [False if t in ws else True for t in ts]
ax = axs[i]
rans = pf_data["ranges"][ID]
Xs_ran = [[float(x.find(f"var_vec[@name='{X}']/var").attrib["val"])
for x in ran] for ran in rans]
Xs_pred = [float(x.find(f"var_vec[@name='{X}']/var").attrib["val"])
for x in pf_data["pred_medians"][ID]]
Xs_filt = [float(x.find(f"var_vec[@name='{X}']/var").attrib["val"])
for x in pf_data["filter_medians"][ID]]
evplot.pfilter_boxplot(ax, ts, Xs_ran, Xs_pred, Xs_filt, mask=mask,
color=varcolor[i], dt=dt)
def plot_CIs(ax, LLss, tmaxs, sigmas, max_diff=11):
DL = sts.chi2.ppf(0.95,1)/2
for i, LLs in enumerate(LLss):
## compute means
meanLLs = np.mean(LLs, axis=1)
## remove very small LLs
sigs, lls = aux.unzip([(s, l) for s, l in zip(sigmas, meanLLs) if l >= np.max(LLs)-max_diff])
bounds = (sigs[0], sigs[-1])
cs = UnivariateSpline(sigs, lls, s=10, ext='raise')
xs = np.linspace(*bounds, 250)
## find max of spline and CI
res = minimize_scalar(lambda x: -cs(x), bounds=bounds, method='bounded')
max_LL = -res.fun
sigma_opt = res.x
sign = 0 < bounds[0] or cs(0) < max_LL-DL
ax.plot(cs(xs)-max_LL+tmaxs[i]+DL, xs, label='spline', color='k', linewidth=2)
print(f"s_opt = {sigma_opt:0.2f}")
print(f"max LL = {max_LL:0.2f}")
try:
lres = root_scalar(lambda x: cs(x)-max_LL + DL, bracket=[sigs[0], sigma_opt])
lCI = lres.root
except:
print("unable to compute lower bound CI!")
lCI = np.nan
try:
rres = root_scalar(lambda x: cs(x)-max_LL + DL, bracket=[sigma_opt, sigs[-1]])
rCI = rres.root
except:
print("unable to compute upper bound CI!")
rCI = np.nan
print(f"95% CI = [{lCI:0.2f}, {rCI:0.2f}]")
if not np.isnan(lCI) and lCI > 0.0:
ax.text(tmaxs[i], 1.005, "*", fontsize=18, ha='center',
transform=evplot.hybrid_trans(ax))
## plot dots
ax.scatter(np.array(lls)-max_LL+tmaxs[i]+DL, sigs, color='k', s=5)
ax.axvline(x=tmaxs[i], color='k', alpha=0.4)
data_markers = ['o', '|']
#legend_locs = [1, 1, 4] ## D614G
legend_locs = [1, 1, 2] ## others
data_colors = ['w', 'k']
trajcolor = ["pink", "deepskyblue"]
varcolor = ['purple', 'tab:blue']
varnames = ["D", "Fm"]
#regions = ["United Kingdom D614G", "Netherlands D614G"]
#regions = ["United Kingdom B.1.1.7", "Netherlands B.1.1.7"]
regions = ["Netherlands B.1.351", "Japan R.1"]
## insets only used for D614G
xlim_insets = [(65,75), (58,68)]
ylim_insets = [(0,10000), (0,1000)]
## plot profile-likelihood results?
#plot_prof_lik = False
plot_prof_lik = True
## plot an inset with a close-up of the population sizes?
plot_inset = False
#plot_inset = True
## scale the y-axis limits to [0,1]?
#unit_freq_limits = True
unit_freq_limits = False
date0 = datetime.datetime.strptime("01/01/2020", "%m/%d/%Y")
numrows = 4 if plot_prof_lik else 3
fig, axs = plt.subplots(numrows, len(regions), figsize=(7*len(regions),10), sharex='col')
if len(regions) == 1:
axs = np.array([axs]).T
for r, region in enumerate(regions):
plot_data(axs[1:,r], fdatadicts[r])
for i, varname in enumerate(varnames):
plot_trajectories(axs[i+1,r], pf_datas[r],
varname, date0, color=trajcolor[i])
plot_predictions(axs[1:,r], pf_datas[r], fdatadicts[r])
plot_trajectories(axs[0,r], pf_datas[r], "Iw", date0, color='tab:orange',
pretty_varname="$I_{\\rm wt}$")
plot_trajectories(axs[0,r], pf_datas[r], "Im", date0, color='tab:blue',
pretty_varname="$I_{\\rm mt}$")
axs[0,r].legend()
axs[0,r].yaxis.set_major_formatter(ticker.FuncFormatter(evplot.y_fmt))
axs[0,r].tick_params(axis="y", labelsize=12)
## dates in x-axis
days = [dd["t"] for dd in fdatadicts[r]]
dates = [date0 + datetime.timedelta(days=d) for d in days]
xticks = days[::2] ## every 2 weeks
xticklabels = [d.strftime("%b %d") for d in dates[::2]]
axs[-1,r].set_xlabel("date")
axs[-1,r].set_xticks(xticks)
axs[-1,r].set_xticklabels(xticklabels, fontsize='x-small', rotation=45, ha='right')
## add legends
leg = axs[0,r].legend(ncol=1, loc=legend_locs[0], fontsize='x-small')
for lh in leg.legendHandles:
lh.set_alpha(1)
lh.set_linewidth(1)
for i, ax in enumerate(axs[1:3,r]):
## Legend
legend_elements = [
Line2D([0], [0], marker=data_markers[i], color=data_colors[i], label='data',
markerfacecolor='k', markeredgecolor='k', markersize=7),
Line2D([0], [0], color=varcolor[i], label='model'),
]
ax.legend(handles=legend_elements, ncol=1, fontsize='x-small', loc=legend_locs[i+1])
## profile likelihoods
if plot_prof_lik:
proflik_result_dict = proflik_result_dicts[r]
LLss = proflik_result_dict["LLss"]
tmaxs = proflik_result_dict["tmaxs"]
sigmas = proflik_result_dict["sigmas"]
## replace tmax with the largest observation time $\leq$ tmax
tmaxs = [np.max([t for t in days if t <= tm])
for tm in tmaxs]
plot_CIs(axs[-1,r], LLss, tmaxs, sigmas)
axs[-1,r].axhline(y=0, color='red', linewidth=0.5)
## inset
if plot_inset:
axins = inset_axes(axs[0,r], width="20%", height="35%", loc=1,
bbox_to_anchor=(0,0,0.8,1), bbox_transform=axs[0,r].transAxes)
plot_trajectories(axins, pf_datas[r], "Iw", date0, color='tab:orange')
plot_trajectories(axins, pf_datas[r], "Im", date0, color='tab:blue')
axins.set_xlim(*xlim_insets[r])
axins.set_ylim(*ylim_insets[r])
axins.tick_params(axis='both', which='major', labelsize='xx-small')
## dates as xticklabels
xmin, xmax = xlim_insets[r]
xticks = range(xmin+1, xmax, 4)
xtickdates = [date0 + datetime.timedelta(days=x) for x in xticks]
xticklabels = [d.strftime("%b %d") for d in xtickdates]
axins.set_xticks(xticks)
axins.set_xticklabels(xticklabels, rotation=45, ha='right')
## title
axs[0,r].set_title(region)
# y-labels
ylabs = [
"population\nsize",
"death\nincidence",
"mutant\nfrequency",
"selection ($s$)"
]
for ax, ylab in zip(axs[:,0], ylabs):
ax.set_ylabel(ylab, fontsize='small')
if unit_freq_limits:
for ax in axs[2,:]:
ax.set_ylim(-0.05, 1.05)
fig.align_ylabels(axs)
## add labels
subplot_labels = string.ascii_uppercase
for i, ax in enumerate(axs.flatten()):
ax.text(-0.15, 1.02, subplot_labels[i], fontsize=22, transform=ax.transAxes)
fig.savefig("../data/out/SMCFitTmax.pdf", bbox_inches='tight')
###Output
_____no_output_____ |
examples/statline_plot.ipynb | ###Markdown
StatLineTable plot example Here we give an example of how you can easily retrieve data from OpenData (StatLine) with a Python script and make a plot in the CBS house style. We use the *StatLineTable* class from the *cbs_utils.readers* module. This class is an addition to the *cbsodata* module (it uses *cbsodata* for reading the data). With *StatLineTable* you can put the data into a structured pandas dataframe, which makes processing and plotting a lot easier.We split this example into four sections:1. [*Reading*](inlezen): Reading the OpenData table2. [*Analysis*](analyse): Analysing the structure of the OpenData table3. [*Dumping*](dumpen): Writing the plots of *all* questions to file4. [*Plotting*](plot_een_vraag): A nice plot of a single question from the StatLine table.We begin by loading the required modules and initialising the logger
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import logging
import sys
from cbsodata.utils import StatLineTable
logging.basicConfig(format='%(levelname)s : %(message)s', level=logging.INFO, stream=sys.stdout)
logger = logging.getLogger()
###Output
_____no_output_____
###Markdown
Reading an OpenData table Loading a table from OpenData is simple. We first have to find the ID of a table by going to StatLine and opening your table. For example, the ICT use by companies in 2018 by company size is in this table https://opendata.cbs.nl//CBS/nl/dataset/84410NED/table?ts=1568706226304 This means that the table ID is '84410NED'. We are now going to read it
###Code
%%time
table_id = "84410NED"
logger.info(f"Start met het lezen van tabel {table_id}")
statline = StatLineTable(table_id=table_id)
###Output
INFO : Start met het lezen van tabel 84410NED
INFO : Reading json cache/84410NED/DataProperties.json
INFO : Reading json cache/84410NED/TypedDataSet.json
INFO : Reading json cache/84410NED/TableInfos.json
INFO : Reading from pickle database cache/84410NED_question.pkl
INFO : Reading from pickle database cache/84410NED_section.pkl
INFO : Reading from pickle database cache/84410NED_dimensions.pkl
INFO : Writing table information to images/84410NED/TableInfos.yml
INFO : Writing question structure to images/84410NED/QuestionTable.txt
INFO : Writing question structure to images/84410NED/SectionTable.txt
CPU times: user 159 ms, sys: 12 ms, total: 171 ms
Wall time: 251 ms
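###Markdown
As a quick check of the caching behaviour described below, constructing the same table a second time should be served from the pickle cache and return almost instantly (a sketch, assuming the cache directory written above is still in place):
###Code
%%time
statline_cached = StatLineTable(table_id=table_id)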
###Markdown
You can see that quite a lot happens 'under the hood':1. Loading the data is handled by the *cbsodata* module. This module stores the data in several json files. 2. These json files are then merged into a multi-index pandas DataFrame. 3. We store the data frames in a pickle file (a standard binary Python file). The next time you make the same plot, the pickle files will be read instead. This is much faster than downloading the data from the internet again with *cbsodata*, and it makes developing a script a lot easier.4. We store the structure of the table in a few txt files: QuestionTable contains all the questions, SectionTable all the modules. You can use these files to quickly look up which number you need to plot a question The first time we ran the script it took about 7 seconds, the second time a fraction of a second. This time saving is especially convenient if you rerun the script often to tune a figure The structure of the OpenData table in StatLineTable As a next step we look at the structure of the questionnaire. As mentioned, you can also see this by looking at the contents of QuestionTable.txt in the image directory, but we can also query it directly:
###Code
table = statline.show_module_table()
###Output
+------+------------+----------------------------------------+
| ID | ParentID | Title |
|------+------------+----------------------------------------|
| 1 | | Personeel en ICT |
| 4 | 1 | ICT-specialisten |
| 9 | 1 | ICT-beveiliging/bescherming data |
| 13 | | Toegang en gebruik internet |
| 14 | 13 | Bedrijven met website |
| 16 | 14 | Website bevat |
| 23 | | Cloud-diensten |
| 25 | 23 | Type cloud-diensten |
| 33 | 25 | Type server |
| 36 | | Big-data-analyse |
| 38 | 36 | Bronnen big data voor analyse |
| 43 | 36 | Wie analyseerde big data |
| 46 | | ICT-veiligheid |
| 47 | 46 | Gebruikte ICT-veiligheidsmaatregelen |
| 60 | 46 | Optreden van ICT veiligheidsincidenten |
| 65 | 46 | Oorzaken van ICT veiligheidsincidenten |
| 72 | 46 | Kosten ICT-veiligheidsincidenten |
| 79 | 46 | Uitvoeren updates (security patching) |
| 83 | | Facturen |
| 84 | 83 | Facturen verzonden aan |
| 88 | 83 | Wijze van verzending van facturen |
| 92 | 83 | Wijze van ontvangst van facturen |
| 97 | | E-commerce in 2017 |
| 98 | 97 | Gebruik voor verkoop |
| 100 | 98 | Verdeling omzet: eigen site of extern |
| 110 | 98 | Via een website/app |
| 119 | 110 | Naar afnemer |
| 124 | 98 | Via EDI |
| 133 | 97 | Gebruik voor inkoop |
| 142 | 133 | Via een website/app |
| 151 | 133 | Via EDI |
+------+------------+----------------------------------------+
###Markdown
Here we see a list of modules, submodules, and questions. A question can in turn consist of several options (tick boxes). A module is the highest level in the structure of a table: it is a block of questions that all belong to one topic. Modules in this list can be recognised because they have no ParentID: they are themselves the highest in the hierarchy. The modules in this table are therefore: * 1: Personeel en ICT* 13: Toegang en gebruik internet * 23: Cloud-diensten* 36: Big-data-analyse* 46: ICT-veiligheid* 83: Facturen* 97: E-commerce in 2017A module has as its next level *either* a question *or* a submodule. We can see this by printing the structure of the whole questionnaire. We do this as follows:
###Code
question_table =statline.show_question_table(max_width=23)
###Output
INFO : Structure of all questions
+-------------------------------+-------------------------+-------------------------+-------------------------+
| | Key | Title | Unit |
|-------------------------------+-------------------------+-------------------------+-------------------------|
| (1, 2, nan, nan, nan) | ICTPersAangenomenWillen | ICT-pers. aangenomen/wi | % van bedrijven |
| (1, 3, nan, nan, nan) | ICTVacaturesLastigTeVer | ICT-vacatures lastig te | % van bedrijven |
| (1, 4, 5.0, nan, nan) | ICTSpecialistenInLoondi | ICT-specialisten in loo | % van werkzame personen |
| (1, 4, 6.0, nan, nan) | ICTSpecOntwEnOnderhoudS | ICT-spec. ontw. en onde | % van werkzame personen |
| (1, 7, nan, nan, nan) | ICTCursusAanICTSpeciali | ICT-cursus aan ICT-spec | % van bedrijven |
| (1, 8, nan, nan, nan) | ICTCursusAanOverigPerso | ICT-cursus aan overig p | % van bedrijven |
| (1, 9, 10.0, nan, nan) | VoornUitgDoorEigenPerso | Voorn. uitg. door eigen | % van bedrijven |
| (1, 9, 11.0, nan, nan) | VoornUitgDoorExterneLev | Voorn. uitg. door exter | % van bedrijven |
| (1, 9, 12.0, nan, nan) | NietVanToepassing_9 | Niet van toepassing | % van bedrijven |
| (13, 14, 15.0, nan, nan) | BedrijvenMetWebsite_10 | Bedrijven met website | % van bedrijven |
| (13, 14, 16.0, 17.0, nan) | OnlineBestellenBoekenOf | Online bestellen, boeke | % van bedrijven |
| (13, 14, 16.0, 18.0, nan) | BeschrijvingenVanProduc | Beschrijvingen van prod | % van bedrijven |
| (13, 14, 16.0, 19.0, nan) | BestellingOnlineVolgen_ | Bestelling online volge | % van bedrijven |
| (13, 14, 16.0, 20.0, nan) | ProductaanpassingDoorKl | Productaanpassing door | % van bedrijven |
| (13, 14, 16.0, 21.0, nan) | KlantspecifiekeInfoVoor | Klantspecifieke info vo | % van bedrijven |
| (13, 14, 16.0, 22.0, nan) | VerwijzingNaarSocialeMe | Verwijzing naar sociale | % van bedrijven |
| (23, 24, nan, nan, nan) | CloudDienstenGebruikt_1 | Cloud-diensten gebruikt | % van bedrijven |
| (23, 25, 26.0, nan, nan) | EMailAlsCloudDienst_18 | E-mail (als cloud-diens | % van bedrijven |
| (23, 25, 27.0, nan, nan) | OfficeSoftware_19 | Office software | % van bedrijven |
| (23, 25, 28.0, nan, nan) | DatabaseHosting_20 | Database hosting | % van bedrijven |
| (23, 25, 29.0, nan, nan) | BestandenOpslaanAlsClou | Bestanden opslaan (als | % van bedrijven |
| (23, 25, 30.0, nan, nan) | SoftwareVoorBoekhouding | Software voor boekhoudi | % van bedrijven |
| (23, 25, 31.0, nan, nan) | SoftwareKlantinformatie | Software klantinformati | % van bedrijven |
| (23, 25, 32.0, nan, nan) | RekenkrachtVoorSoftware | Rekenkracht voor softwa | % van bedrijven |
| (23, 25, 33.0, 34.0, nan) | GedeeldeServers_25 | Gedeelde servers | % van bedrijven |
| (23, 25, 33.0, 35.0, nan) | ServersUitsluitendVoorH | Servers uitsluitend voo | % van bedrijven |
| (36, 37, nan, nan, nan) | Uitgevoerd_27 | Uitgevoerd | % van bedrijven |
| (36, 38, 39.0, nan, nan) | DataVanBedrijfZelf_28 | Data van bedrijf zelf | % van bedrijven |
| (36, 38, 40.0, nan, nan) | DataOverGeografischeLoc | Data over geografische | % van bedrijven |
| (36, 38, 41.0, nan, nan) | DataVanSocialeMedia_30 | Data van sociale media | % van bedrijven |
| (36, 38, 42.0, nan, nan) | AndereBronnen_31 | Andere bronnen | % van bedrijven |
| (36, 43, 44.0, nan, nan) | MedewerkersEigenBedrijf | Medewerkers eigen bedri | % van bedrijven |
| (36, 43, 45.0, nan, nan) | AnderBedrijf_33 | Ander bedrijf | % van bedrijven |
| (46, 47, 48.0, nan, nan) | Antivirussoftware_34 | Antivirussoftware | % van bedrijven |
| (46, 47, 49.0, nan, nan) | BeleidVoorSterkeWachtwo | Beleid voor sterke wach | % van bedrijven |
| (46, 47, 50.0, nan, nan) | AuthenticatieViaSoftOfH | Authenticatie via soft | % van bedrijven |
| (46, 47, 51.0, nan, nan) | EncryptieVoorHetOpslaan | Encryptie voor het opsl | % van bedrijven |
| (46, 47, 52.0, nan, nan) | EncryptieVoorHetVerstur | Encryptie voor het vers | % van bedrijven |
| (46, 47, 53.0, nan, nan) | GegevensOpAndereFysieke | Gegevens op andere fysi | % van bedrijven |
| (46, 47, 54.0, nan, nan) | NetworkAccessControl_40 | Network access control | % van bedrijven |
| (46, 47, 55.0, nan, nan) | VPNBijInternetgebruikBu | VPN bij internetgebruik | % van bedrijven |
| (46, 47, 56.0, nan, nan) | LogbestandenVoorAnalyse | Logbestanden voor analy | % van bedrijven |
| (46, 47, 57.0, nan, nan) | MethodesVoorBeoordelenI | Methodes voor beoordele | % van bedrijven |
| (46, 47, 58.0, nan, nan) | Risicoanalyses_44 | Risicoanalyses | % van bedrijven |
| (46, 47, 59.0, nan, nan) | AndereMaatregelen_45 | Andere maatregelen | % van bedrijven |
| (46, 60, 61.0, nan, nan) | IncidentOpgetreden_46 | Incident opgetreden | % van bedrijven |
| (46, 60, 62.0, nan, nan) | KostenGehadAanICTIncide | Kosten gehad aan ICT-in | % van bedrijven |
| (46, 60, 63.0, nan, nan) | IncidentDoorAanvalVanBu | Incident door aanval va | % van bedrijven |
| (46, 60, 64.0, nan, nan) | KostenIncidentAanvalBui | Kosten incident (aanval | % van bedrijven |
| (46, 65, 66.0, nan, nan) | UitvalICTDienstDoorVeil | Uitval ICT-dienst door | % van bedrijven |
| (46, 65, 67.0, nan, nan) | UitvalICTDienstDoorAanv | Uitval ICT-dienst door | % van bedrijven |
| (46, 65, 68.0, nan, nan) | VernietigingDataDoorVei | Vernietiging data door | % van bedrijven |
| (46, 65, 69.0, nan, nan) | VernietigingDataAanvalV | Vernietiging data; aan | % van bedrijven |
| (46, 65, 70.0, nan, nan) | OnthullingGegevensDoorI | Onthulling gegevens doo | % van bedrijven |
| (46, 65, 71.0, nan, nan) | OnthullingGegevensDoorI | Onthulling gegevens doo | % van bedrijven |
| (46, 72, 73.0, nan, nan) | UitvalICTDienstDoorVeil | Uitval ICT-dienst door | % van bedrijven |
| (46, 72, 74.0, nan, nan) | UitvalICTDienstDoorAanv | Uitval ICT-dienst door | % van bedrijven |
| (46, 72, 75.0, nan, nan) | VernietigingDataDoorVei | Vernietiging data door | % van bedrijven |
| (46, 72, 76.0, nan, nan) | VernietigingDataAanvalV | Vernietiging data; aan | % van bedrijven |
| (46, 72, 77.0, nan, nan) | OnthullingGegevensDoorI | Onthulling gegevens doo | % van bedrijven |
| (46, 72, 78.0, nan, nan) | OnthullingGegevensDoorI | Onthulling gegevens doo | % van bedrijven |
| (46, 79, 80.0, nan, nan) | MeestalVolledigAutomati | Meestal volledig automa | % van bedrijven |
| (46, 79, 81.0, nan, nan) | MeestalDeelsHandmatig_6 | Meestal (deels) handmat | % van bedrijven |
| (46, 79, 82.0, nan, nan) | NietVanToepassing_64 | Niet van toepassing | % van bedrijven |
| (83, 84, 85.0, nan, nan) | AndereBedrijven_65 | Andere bedrijven | % van bedrijven |
| (83, 84, 86.0, nan, nan) | Overheden_66 | Overheden | % van bedrijven |
| (83, 84, 87.0, nan, nan) | Consumenten_67 | Consumenten | % van bedrijven |
| (83, 88, 89.0, nan, nan) | EFactuur_68 | E-factuur | % van verzonden facture |
| (83, 88, 90.0, nan, nan) | ElektronischMaarGeenEFa | Elektronisch, maar geen | % van verzonden facture |
| (83, 88, 91.0, nan, nan) | InPapierenVorm_70 | In papieren vorm | % van verzonden facture |
| (83, 92, 93.0, nan, nan) | EFactuur_71 | E-factuur | % van ontvangen facture |
| (83, 92, 94.0, nan, nan) | GeenEFactuur_72 | Geen e-factuur | % van ontvangen facture |
| (83, 95, nan, nan, nan) | EFacturenVerzonden_73 | E-facturen verzonden | % van bedrijven |
| (83, 96, nan, nan, nan) | EFacturenOntvangen_74 | E-facturen ontvangen | % van bedrijven |
| (97, 98, 99.0, nan, nan) | VerkoopViaECommerce_75 | Verkoop via e-commerce | % van bedrijven |
| (97, 98, 100.0, 101.0, nan) | ViaWebsiteOfAppsVanEige | Via website of apps van | % van omzet |
| (97, 98, 100.0, 102.0, nan) | ViaWebsiteOfAppsVanAnde | Via website of apps van | % van omzet |
| (97, 98, 103.0, nan, nan) | Verkoopwaarde1VanDeTota | Verkoopwaarde < 1% van | % van bedrijven |
| (97, 98, 104.0, nan, nan) | Verkoopwaarde1VanDeTota | Verkoopwaarde >= 1% van | % van bedrijven |
| (97, 98, 105.0, nan, nan) | Verkoopwaarde2VanDeTota | Verkoopwaarde >= 2% van | % van bedrijven |
| (97, 98, 106.0, nan, nan) | Verkoopwaarde5VanDeTota | Verkoopwaarde >= 5% van | % van bedrijven |
| (97, 98, 107.0, nan, nan) | Verkoopwaarde10VanDeTot | Verkoopwaarde >= 10% va | % van bedrijven |
| (97, 98, 108.0, nan, nan) | Verkoopwaarde25VanDeTot | Verkoopwaarde >= 25% va | % van bedrijven |
| (97, 98, 109.0, nan, nan) | Verkoopwaarde50VanDeTot | Verkoopwaarde >= 50% va | % van bedrijven |
| (97, 98, 110.0, 111.0, nan) | VerkoopViaEenWebsiteApp | Verkoop via een website | % van bedrijven |
| (97, 98, 110.0, 112.0, nan) | Verkoopwaarde1VanDeTota | Verkoopwaarde < 1% van | % van bedrijven |
| (97, 98, 110.0, 113.0, nan) | Verkoopwaarde1VanDeTota | Verkoopwaarde >= 1% van | % van bedrijven |
| (97, 98, 110.0, 114.0, nan) | Verkoopwaarde2VanDeTota | Verkoopwaarde >= 2% van | % van bedrijven |
| (97, 98, 110.0, 115.0, nan) | Verkoopwaarde5VanDeTota | Verkoopwaarde >= 5% van | % van bedrijven |
| (97, 98, 110.0, 116.0, nan) | Verkoopwaarde10VanDeTot | Verkoopwaarde >= 10% va | % van bedrijven |
| (97, 98, 110.0, 117.0, nan) | Verkoopwaarde25VanDeTot | Verkoopwaarde >= 25% va | % van bedrijven |
| (97, 98, 110.0, 118.0, nan) | Verkoopwaarde50VanDeTot | Verkoopwaarde >= 50% va | % van bedrijven |
| (97, 98, 110.0, 119.0, 120.0) | NederlandseConsumenten_ | Nederlandse consumenten | % van omzet via website |
| (97, 98, 110.0, 119.0, 121.0) | BuitenlandseConsumenten | Buitenlandse consumente | % van omzet via website |
| (97, 98, 110.0, 119.0, 122.0) | Bedrijven_95 | Bedrijven | % van omzet via website |
| (97, 98, 110.0, 119.0, 123.0) | Overheden_96 | Overheden | % van omzet via website |
| (97, 98, 124.0, 125.0, nan) | VerkoopViaEDI_97 | Verkoop via EDI | % van bedrijven |
| (97, 98, 124.0, 126.0, nan) | Verkoopwaarde1VanDeTota | Verkoopwaarde < 1% van | % van bedrijven |
| (97, 98, 124.0, 127.0, nan) | Verkoopwaarde1VanDeTota | Verkoopwaarde >= 1% van | % van bedrijven |
| (97, 98, 124.0, 128.0, nan) | Verkoopwaarde2VanDeTota | Verkoopwaarde >= 2% van | % van bedrijven |
| (97, 98, 124.0, 129.0, nan) | Verkoopwaarde5VanDeTota | Verkoopwaarde >= 5% van | % van bedrijven |
| (97, 98, 124.0, 130.0, nan) | Verkoopwaarde10VanDeTot | Verkoopwaarde >= 10% va | % van bedrijven |
| (97, 98, 124.0, 131.0, nan) | Verkoopwaarde25VanDeTot | Verkoopwaarde >= 25% va | % van bedrijven |
| (97, 98, 124.0, 132.0, nan) | Verkoopwaarde50VanDeTot | Verkoopwaarde >= 50% va | % van bedrijven |
| (97, 133, 134.0, nan, nan) | InkoopViaECommerce_105 | Inkoop via e-commerce | % van bedrijven |
| (97, 133, 135.0, nan, nan) | Inkoopwaarde1VanDeTotal | Inkoopwaarde < 1% van d | % van bedrijven |
| (97, 133, 136.0, nan, nan) | Inkoopwaarde1VanDeTotal | Inkoopwaarde >= 1% van | % van bedrijven |
| (97, 133, 137.0, nan, nan) | Inkoopwaarde2VanDeTotal | Inkoopwaarde >= 2% van | % van bedrijven |
| (97, 133, 138.0, nan, nan) | Inkoopwaarde5VanDeTotal | Inkoopwaarde >= 5% van | % van bedrijven |
| (97, 133, 139.0, nan, nan) | Inkoopwaarde10VanDeTota | Inkoopwaarde >= 10% van | % van bedrijven |
| (97, 133, 140.0, nan, nan) | Inkoopwaarde25VanDeTota | Inkoopwaarde >= 25% van | % van bedrijven |
| (97, 133, 141.0, nan, nan) | Inkoopwaarde50VanDeTota | Inkoopwaarde >= 50% van | % van bedrijven |
| (97, 133, 142.0, 143.0, nan) | InkoopViaEenWebsiteApp_ | Inkoop via een website/ | % van bedrijven |
| (97, 133, 142.0, 144.0, nan) | Inkoopwaarde1VanDeTotal | Inkoopwaarde < 1% van d | % van bedrijven |
| (97, 133, 142.0, 145.0, nan) | Inkoopwaarde1VanDeTotal | Inkoopwaarde >= 1% van | % van bedrijven |
| (97, 133, 142.0, 146.0, nan) | Inkoopwaarde2VanDeTotal | Inkoopwaarde >= 2% van | % van bedrijven |
| (97, 133, 142.0, 147.0, nan) | Inkoopwaarde5VanDeTotal | Inkoopwaarde >= 5% van | % van bedrijven |
| (97, 133, 142.0, 148.0, nan) | Inkoopwaarde10VanDeTota | Inkoopwaarde >= 10% van | % van bedrijven |
| (97, 133, 142.0, 149.0, nan) | Inkoopwaarde25VanDeTota | Inkoopwaarde >= 25% van | % van bedrijven |
| (97, 133, 142.0, 150.0, nan) | Inkoopwaarde50VanDeTota | Inkoopwaarde >= 50% van | % van bedrijven |
| (97, 133, 151.0, 152.0, nan) | InkoopViaEDI_121 | Inkoop via EDI | % van bedrijven |
| (97, 133, 151.0, 153.0, nan) | Inkoopwaarde1VanDeTotal | Inkoopwaarde < 1% van d | % van bedrijven |
| (97, 133, 151.0, 154.0, nan) | Inkoopwaarde1VanDeTotal | Inkoopwaarde >= 1% van | % van bedrijven |
| (97, 133, 151.0, 155.0, nan) | Inkoopwaarde2VanDeTotal | Inkoopwaarde >= 2% van | % van bedrijven |
| (97, 133, 151.0, 156.0, nan) | Inkoopwaarde5VanDeTotal | Inkoopwaarde >= 5% van | % van bedrijven |
| (97, 133, 151.0, 157.0, nan) | Inkoopwaarde10VanDeTota | Inkoopwaarde >= 10% van | % van bedrijven |
| (97, 133, 151.0, 158.0, nan) | Inkoopwaarde25VanDeTota | Inkoopwaarde >= 25% van | % van bedrijven |
| (97, 133, 151.0, 159.0, nan) | Inkoopwaarde50VanDeTota | Inkoopwaarde >= 50% van | % van bedrijven |
+-------------------------------+-------------------------+-------------------------+-------------------------+
###Markdown
The first column gives the ID of the questions and their position in the hierarchy of the table. In the first table we saw that module 46 (ICT-veiligheid) starts with a question, namely 47 (Gebruikte ICT-veiligheidsmaatregelen), and that this question 47 in turn has twelve items, namely options 48 through 59, which all have 47 as ParentID. This is an example of a question at level 1 (if we regard the module level as level 0), with its options at level 2. From the first table we see that E-commerce (ID 97) starts with a submodule, 98 (Gebruik voor verkoop). We can tell because in the second table 98 is never assigned to a variable; it is 99 (Verkoop via e-commerce) that is the first standalone question with parent 98. That is why this question, which sits inside a submodule, ends up at level 2. The next question, 100, is again a question with options. Question 100 can be found in the first table (Verdeling omzet: eigen site of extern), but because this question has options of its own, those options sit at level 3: 101 and 102. This structure is described in the json files, but StatLineTable turns it into a multi-index pandas data frame, so that you can easily process the questions and modules that belong together (in contrast to the pandas dataframe returned by *cbsodata*, which is just a flat list of variables, making it hard to group the questions that belong together). Besides adding structure, StatLineTable also makes sure that the full description of a variable and its units can be found in the pandas DataFrame (in the 2nd and 3rd columns, respectively). The most important thing to take away here is how to look up which ID belongs to which module, so that you can use these numbers later to select a subset of modules. Dumping plots of all questions in the table The above may sound complex, but the goal of StatLineTable is precisely to make it easier to plot the questions. Making plots can therefore be done internally by StatLineTable. These plots use default settings, so they are not suitable for publication, but mainly useful for a quick look at all the data in the table. Making plots of all your questions goes as follows:
###Code
statline = StatLineTable(table_id=table_id, make_the_plots=True, modules_to_plot=[1, 46])
###Output
INFO : Reading json cache/84410NED/DataProperties.json
INFO : Reading json cache/84410NED/TypedDataSet.json
INFO : Reading json cache/84410NED/TableInfos.json
INFO : Reading from pickle database cache/84410NED_question.pkl
INFO : Reading from pickle database cache/84410NED_section.pkl
INFO : Reading from pickle database cache/84410NED_dimensions.pkl
INFO : Writing table information to images/84410NED/TableInfos.yml
INFO : Writing question structure to images/84410NED/QuestionTable.txt
INFO : Writing question structure to images/84410NED/SectionTable.txt
INFO : Processing module 1:
###Markdown
When this is done we have plotted figures for 11 questions. This is not everything, because we passed a list of the modules we want to plot: 1 (Personeel en ICT) and 46 (ICT-veiligheid). The numbers you pass in this list can be found in the first table above, produced by running the *show_module_table* method (where the modules are the items that have *no* *ParentID*). If we had not passed the *modules_to_plot* argument, we would have dumped all questions. We do not do that here, simply to limit the output. If you want to save the plots as png, you have to pass the option *save_plot=True*. You can then find the plots in the directory *images/84410NED*, which is created automatically if it does not yet exist. Writing to Excel, LaTeX tabular, or SQLite Another interesting feature is that you can also write the data belonging to a single plot to an excel, latex, or sqlite file. Just pass the flag *to_xls=True*, *to_sql=True*, or *to_tex=True*. The data files are written to the image directory with the same name as the plots. Plotting a CBS graph for a single question Dumping the questions as plots is useful to get an overview of your data, but the plots are not really pretty. This is because we have not tuned the layout and, moreover, because for every question the values for all size classes are plotted. We will now show how to plot the data of a single question for a selection of size classes. Preparing the plot data First we want to see which size classes are available to us. We do this as follows:
###Code
seltext = statline.show_selection()
###Output
INFO : Processing module 1:
INFO : Processing module 46:
INFO : You can make a selection from the following values
Index(['2 of meer werkzame personen', '2 tot 250 werkzame personen',
'2 werkzame personen', '3 tot 5 werkzame personen',
'5 tot 10 werkzame personen', '10 tot 20 werkzame personen',
'20 tot 50 werkzame personen', '50 tot 100 werkzame personen',
'100 tot 250 werkzame personen', '250 tot 500 werkzame personen',
'500 of meer werkzame personen'],
dtype='object', name='Bedrijfsgrootte')
###Markdown
The values of these size classes are added to the *selection_options* attribute. Suppose that for the plot we only want to show the size classes with 2, 20 to 50, and 500 or more employed persons; then we need the 3rd (index=2), 7th (index=6) and last (index=-1) item, respectively. Let us pick out these values:
###Code
selection = [statline.selection_options[2],
statline.selection_options[6],
statline.selection_options[-1]]
logger.info(f"We hebben de volgende grootteklasses geselecteerd:\n{selection}")
###Output
INFO : We hebben de volgende grootteklasses geselecteerd:
['2 werkzame personen', '20 tot 50 werkzame personen', '500 of meer werkzame personen']
###Markdown
When initializing the StatLineTable class we could have passed this selection as an argument with *selection=selection*, together with the *apply_selection=True* flag. But in Python you can also assign these values afterwards.
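For reference, the constructor form would look roughly like this (a sketch based on the argument names just mentioned; it is not executed here):
###Code
# Equivalent to setting the attributes afterwards (sketch, not executed)
statline = StatLineTable(table_id=table_id,
                         selection=selection,
                         apply_selection=True)
###Output
_____no_output_____
###Markdown
Assigning the attributes after construction looks like this: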
###Code
statline.selection = selection
statline.apply_selection = True
###Output
_____no_output_____
###Markdown
Because we have assigned the selection to the *selection* attribute, we will only get the values of these size classes when we reorganize the data in a moment. To retrieve a dataframe for a single question we can use the *get_question_df* method. For example, suppose we want to plot the question with ID 47. From the first table, shown with *show_module_table*, we can see that this ID corresponds to the question *Gebruikte ICT-veiligheidsmaatregelen*. Furthermore, the second table, made with *show_question_table*, showed that this question has 12 answer options. We now extract the question from the table as follows:
###Code
question_df = statline.get_question_df(47)
question_df.head(12) # this shows the first twelve rows of the data frame
###Output
_____no_output_____
###Markdown
The first twelve rows of the dataframe show that we still have all size classes in the DataFrame (after all, we start with the '2 of meer werkzame personen' items). In total we should therefore have 12 * 11 = 132 rows in the dataframe. We can check this with info:
###Code
question_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
MultiIndex: 132 entries, (48, nan, nan) to (59, nan, nan)
Data columns (total 13 columns):
Section 132 non-null object
ID 132 non-null object
ParentID 132 non-null object
Key 132 non-null object
Title 132 non-null object
Description 121 non-null object
Datatype 132 non-null object
Unit 132 non-null object
Decimals 132 non-null object
Default 132 non-null object
Bedrijfsgrootte 132 non-null object
Bedrijfsgrootte_Key 132 non-null object
Values 132 non-null int64
dtypes: int64(1), object(12)
memory usage: 14.9+ KB
###Markdown
Indeed, we have 132 entries. Before we prepare the dataframe for plotting, we first store the units of the data. These should be the same for all entries, because they all belong to the same question. So we do:
###Code
units = question_df[statline.units_key].values[0]
logger.info(f"De eenheden zijn: {units}")
###Output
INFO : De eenheden zijn: % van bedrijven
###Markdown
Indeed, the question on the ICT security measures used has the unit '% van bedrijven'. As a last step before plotting, we reorganize the data frame:
###Code
question_df = statline.prepare_data_frame(question_df)
question_df.head(12)
###Output
_____no_output_____
###Markdown
By running *prepare_data_frame* the selection we previously assigned to the *selection* attribute is applied, and moreover the size classes are now shown as columns. This data frame is suitable for plotting. Making the plot We now have the data we want to plot in the *question_df* dataframe. This is a standard *pandas* data frame, so by calling the *plot* method we can already make a figure. However, we also want to tune the plot. It is therefore more convenient to create the figure and axes outside *pandas* and pass them to the *plot* method, so that we can easily adjust their properties afterwards. We therefore start by initializing the figure and axis:
###Code
fig, axis = plt.subplots(dpi=72) # dpi is needed to get the pdf export right
fig.subplots_adjust(left=0.5, bottom=0.25, top=0.98)
###Output
_____no_output_____
###Markdown
This is not very exciting yet. The only important thing is that we have adjusted the margins. We are going to make a horizontal bar plot, so we need room on the left-hand side for the labels of the bars. That is why we set *left=0.5* (0 means no margin from the left, 1 means the margin runs from the left all the way to the right axis, so 0.5 is a margin that takes up half the plot width). We also enlarge the margin at the bottom a bit, because we need to place the legend there later, while at the top we actually reduce the margin. On the right we keep the default. By using the *subplots* function from the *matplotlib.pyplot* module we immediately have access to the figure and the axes. So let us now use the pandas plot method to make a horizontal bar chart in the axes we have just created:
###Code
question_df.plot(kind="barh", ax=axis)
fig
###Output
_____no_output_____
###Markdown
We see that we indeed have a horizontal bar plot with enough room for the labels on the left. The color scheme is also already what we need. However, we still need to do some tuning. We first remove the y-axis label (which defaults to 'Title') and set the x-axis label to the units we just stored in *units*:
###Code
axis.set_ylabel("")
axis.set_xlabel(units)
axis.xaxis.set_label_coords(0.98, -0.1)
###Output
_____no_output_____
###Markdown
By default the x-axis label is placed in the middle. With the *set_label_coords* method we can move it a bit further to the right (the coordinates are in fractions of the axes: bottom left is 0,0, top right is 1,1). Next we adjust the borders (spines) of the axes. By default all four are drawn, but we only want to keep the left one. For the grid lines we only keep the lines of the x-grid:
###Code
for side in ["top", "bottom", "right"]:
axis.spines[side].set_visible(False)
axis.spines['left'].set_position('zero')
axis.tick_params(which="both", bottom=False, left=False)
axis.xaxis.grid(True)
axis.yaxis.grid(False)
###Output
_____no_output_____
###Markdown
We also see that the x-range extends a bit beyond 100. For percentages we simply want it to end at 100. So we set the x-range as well, but let it run to 101; this prevents the last grid line at 100 from being dropped:
###Code
axis.set_xlim([0, 101])
###Output
_____no_output_____
###Markdown
Another peculiarity is that the items of the bar plot are plotted in exactly the reverse order of what we had in the *question_df* dataframe. There we had 'Antivirussoftware' on the first row and 'Andere maatregelen' last, but it is plotted exactly the other way around. We flip this with *invert_yaxis*:
###Code
axis.invert_yaxis()
###Output
_____no_output_____
###Markdown
Finally, we adjust the legend a bit. It has been placed on top of the bar plot, but we want it at the bottom. So we do:
###Code
axis.legend(bbox_to_anchor=(0.01, 0.00), ncol=2, bbox_transform=fig.transFigure, loc="lower left", frameon=False)
###Output
_____no_output_____
###Markdown
Here is an explanation of the arguments of the *legend* method: * The coordinates given with *bbox_to_anchor* are this time relative to the whole figure (i.e. the canvas the plot is drawn on), because we passed *bbox_transform=fig.transFigure*. So 0,0 is the very bottom-left corner of your bitmap. * If we had not passed *bbox_transform*, the default transform would be used (axis.transAxes), so that 0,0 corresponds to the bottom-left corner of the *axes*. But it is more convenient to set the coordinates relative to the figure. * The *loc='lower left'* option refers to the point of the legend bounding box that is placed at the *bbox_to_anchor* coordinate. So here the bottom-left corner of the whole legend box is your reference point. But if you want to align your box to the right, it can be convenient to choose the top-right corner for *loc* instead. * *ncol=2* refers to the number of columns in which the legend is organized. The default is ncol=1, so that you get a vertical list of legend items. But we want the items next to each other. We have 3 items, so with *ncol=2* we get two rows: the first row has two entries, the second row one. * The *frameon=False* option speaks for itself: we do not want a box around the legend, so we remove it. Let us see what the figure looks like now:
###Code
fig
###Output
_____no_output_____
###Markdown
With this the plot is done. If you write the plot to file as a PDF, you can import it into latex at the best quality (namely as a vector format):
###Code
fig.savefig("maatregelen.pdf", bbox_inches = 'tight')
###Output
_____no_output_____ |
AutoDiff/AutoDiff_Demo.ipynb | ###Markdown
Newton-Raphson Demo We find one of the roots of the following function using our AutoDiff package: $$f(x) = 5^{\left(1 + \sin\left(\log\left(5 + x^2\right)\right)\right)} - 10$$
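The iteration implemented below is the standard Newton-Raphson update $$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)},$$ where the derivative $f'(x_n)$ is obtained from the AutoDiff package via `get_der` rather than derived by hand, and the loop stops once successive function values differ by less than the tolerance.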
###Code
x = Variable('x')
c1 = Constant('c1', 1)
c2 = Constant('c2', 2)
c3 = Constant('c3', 5)
c4 = Constant('c4', 10)
f = c3**(c1 + mo.sin(mo.log(c3 + x**c2))) - c4
tolerance = 0.001
guess = 20
max_iter = 10000
val_dict = {'x' : guess}
evals = []
fx = f.get_val(val_dict)
for i in range(max_iter):
evals += [fx]
dx = f.get_der(val_dict)['x']
val_dict['x'] = val_dict['x'] - fx/dx
new_fx = f.get_val(val_dict)
if abs(new_fx - fx) < tolerance: fx = new_fx; break
fx = new_fx
x_vals = np.linspace(-100,100,100)
y_vals = 5**(1+ np.sin(np.log(5 + x_vals**2))) - 10
fig = plt.gcf()
fig.set_size_inches(9,6)
_ = plt.plot(x_vals, y_vals)
_ = plt.axhline(0, color='k', ls='--', lw=1.25)
_ = plt.ylabel('f(x)', fontsize=12)
_ = plt.xlabel('x', fontsize=12)
_ = plt.scatter([val_dict['x']], [fx], color='r', marker='o', s=80)
_ = plt.annotate('Root found by AutoDiff', (val_dict['x']+5, fx+1))
###Output
_____no_output_____
###Markdown
Elementary Math Operators (add, subtract, mult, divide)
###Code
val_dict = {'x' : 10, 'y' : 20, 'z' : 1}
x = Variable('x')
y = Variable('y')
f0 = x + y
print('Value: ', f0.get_val(val_dict))
print('Gradient: ', f0.get_der(val_dict))
f1 = x - y
print('Value: ', f1.get_val(val_dict))
print('Gradient: ', f1.get_der(val_dict))
f2 = x*y
print('Value: ', f2.get_val(val_dict))
print('Gradient: ', f2.get_der(val_dict))
f3 = x/y
print('Value: ', f3.get_val(val_dict))
print('Gradient: ', f3.get_der(val_dict))
f4 = f1 + f2 + f3
print('Value: ', f4.get_val(val_dict))
print('Gradient: ', f4.get_der(val_dict))
f5 = f4*f1
print('Value: ', f5.get_val(val_dict))
print('Gradient: ', f5.get_der(val_dict))
z = Variable('z')
z.get_der(val_dict, ['x', 'y', 'z'])
c = Constant('c', 5)
print('Value: ', c.get_val(val_dict))
print('Gradient: ', c.get_der(val_dict, ['x', 'y']))
f6 = x + y + c
print('Value: ', f6.get_val(val_dict))
print('Gradient: ', f6.get_der(val_dict))
f7 = c*x*y
print('Value: ', f7.get_val(val_dict))
print('Gradient: ', f7.get_der(val_dict))
###Output
Value: 1000
Gradient: {'y': 50, 'x': 100}
###Markdown
Other Math Operators (sin, cos, tan, log)
###Code
import math
c1 = Constant('c1', math.pi/2)
val_dict1 = {'x' : math.pi/2, 'y' : math.pi/4, 'z' : 2}
f8 = mo.sin(c1)
print('Value: ', f8.get_val(val_dict1))
print('Gradient: ', f8.get_der(val_dict1))
f9 = mo.sin(x + y)
print('Value: ', f9.get_val(val_dict1))
print('Gradient: ', f9.get_der(val_dict1))
f10 = x + mo.sin(x*z + y)
print('Value: ', f10.get_val(val_dict1))
print('Gradient: ', f10.get_der(val_dict1))
f11 = x*y + mo.log(x*y*z)
print('Value: ', f11.get_val(val_dict1))
print('Gradient: ', f11.get_der(val_dict1))
###Output
Value: 2.1368659607150793
Gradient: {'y': 2.8440358715300595, 'z': 0.5, 'x': 1.4220179357650298}
###Markdown
Exponentiation
###Code
val_dict2 = {'x': 3, 'y': math.pi/3, 'z' : 10}
c2 = Constant('c2', 4)
f12 = x**c2
print('Value: ', f12.get_val(val_dict2))
print('Gradient: ', f12.get_der(val_dict2))
f13 = x**mo.sin(y)
print('Value: ', f13.get_val(val_dict2))
print('Gradient: ', f13.get_der(val_dict2))
f14 = x**(mo.sin(y) + mo.log(x*z))
print('Value: ', f14.get_val(val_dict2))
print('Gradient: ', f14.get_der(val_dict2))
f15 = mo.cos(mo.sin(mo.log(x**c2 + c2) + y**c2 + z**c2))
print('Value: ', f14.get_val(val_dict2))
print('Gradient: ', f14.get_der(val_dict2))
c3 = Constant('c3', 0)
f16 = x**c3
print('Value: ', f16.get_val(val_dict2))
print('Gradient: ', f16.get_der(val_dict2))
val_dict4 = {'x' : 2, 'y' : 0}
f17 = x**y
print('Value: ', f17.get_val(val_dict4))
print('Gradient: ', f17.get_der(val_dict4))
val_dict4 = {'x' : 2, 'y' : -0.5}
f17 = x**y
print('Value: ', f17.get_val(val_dict4))
print('Gradient: ', f17.get_der(val_dict4))
f18 = -x
print('Value: ', f18.get_val(val_dict4))
print('Gradient: ', f18.get_der(val_dict4))
f19 = -mo.log(x)
print('Value: ', f19.get_val(val_dict4))
print('Gradient: ', f19.get_der(val_dict4))
###Output
Value: -0.6931471805599453
Gradient: {'x': -0.5}
###Markdown
Certain Exceptions Handled by AutoDiff
###Code
f20 = Constant('zero', 0)**x
val_dict5 = {'x' : 1, 'y' : 0}
print('Value: ', f20.get_val(val_dict5))
print('Gradient: ', f20.get_der(val_dict5))
f21 = mo.log(y)
print('Gradient: ', f21.get_der(val_dict5))
f22 = x/y
print('Gradient: ', f22.get_der(val_dict5))
f23 = 5 + x
print('Gradient: ', f22.get_der(val_dict5))
###Output
_____no_output_____ |
calculating-bandgap/lstsq_znse_znsse.ipynb | ###Markdown
Calculating the bandgap of a material from transmittance data Importing necessary libraries and modules
###Code
import time
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import savgol_filter
from sklearn.metrics import mean_squared_error
from matplotlib import style
plt.style.use('seaborn-poster')
###Output
_____no_output_____
###Markdown
The following function loads the transmittance data using `loadtxt` from `numpy`. The transmittance percentages were converted to fractional values $$T = \frac{T\%}{100}$$ The absorption coefficient was calculated using the expression $$\alpha = - \frac{\ln(T)}{t}$$ where $t$ is the thickness of the sample. The photon energy was calculated using $$E = h \nu \hspace{1cm} \Rightarrow \hspace{1cm} E = \frac{h c}{\lambda}$$ The $(\alpha h \nu)^2$ values were calculated and smoothed using a fourth-order Savitzky-Golay filter. The data was rescaled by dividing by the maximum value. The function returns $h \nu$ and the rescaled $(\alpha h \nu)^2$ values.
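As a quick sanity check on the scale of $\alpha$: for a transmittance of $T = 0.5$ and the sample thickness $t = 2\times10^{-7}\,\mathrm{m}$ used below, $\alpha = -\ln(0.5)/(2\times10^{-7}) \approx 3.5\times10^{6}\ \mathrm{m^{-1}}$.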
###Code
#function to get data to required format
def dataformat(datafile):
wavelength_data, T_data = np.loadtxt(datafile, dtype='object', delimiter=',', unpack=True)
wavelength_data = wavelength_data.astype('float64')
T_data = T_data.astype('float64')
#T = T%/100
T = T_data/100
wavelength = wavelength_data*1e-9
h = 6.626e-34 #planck's constant
c = 3e8 #velocity of light
eV = 1.602e-19 #1 electron-volt
E = h*c/(wavelength*eV)
t = 2e-7 #thickness of sample in meter
alpha = - np.log(T)/t
#setting power for direct or indirect semiconductor
n=2
#evaluating the values for Tauc Plot
TP = (alpha*E)**n
#smoothening the data using Savitzky-Golay Filter
sg = savgol_filter(TP, 9, 4)
#calculating the maximum value of Tauc plot for rescaling
sgmax = max(sg)
#rescaling the Tauc plot
sgre = sg/sgmax
return E, sgre
###Output
_____no_output_____
###Markdown
The following function applies the **segmentation** algorithm to evaluate the bandgap of the material. The function returns the bandgap, the slope of the selected line segment, and the root mean square error corresponding to the given segment length.
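Concretely, for a given window length $L$ the routine fits a straight line $y = mx + b$ to every run of $L$ consecutive points, keeps only the segments whose root mean square error stays below a threshold (0.75 here), selects the segment with the largest slope, and reports its intercept with the energy axis as the bandgap: $$E_g = -\frac{b}{m}$$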
###Code
#function to implement segmentation algorithm
def segmentation(L, E, sgre):
#initiating arrays to store values
rmse = []
slope = []
intercept = []
for i in range(len(E)):
#calculating slope and intercept of line for every L points
if i + L <= len(E):
A = np.vstack([E[i:i+L], np.ones(len(E[i:i+L]))]).T
m, b = np.linalg.lstsq(A ,sgre[i:i+L], rcond=None)[0]
slope.append(m)
intercept.append(b)
sgpred = []
for j in range(0,L):
if(i+j<len(E)):
sgpred.append(m*E[i+j]+b)
if i + L <= len(E):
sgsub = sgre[i:i+L]
mse = mean_squared_error(sgsub, sgpred)
rmse.append(np.sqrt(mse))
#initiating array to save slopes of selected segments
selseg = []
#selecting only those segments for which rmse<0.75
for i in range(len(slope)):
if(rmse[i]<0.75):
selseg.append(slope[i])
else:
selseg.append(0)
#finding the maximum slope within the selected segments
max_slope = max(selseg)
#find the index for which slope is maximum
max_slope_index = selseg.index(max_slope)
#calculating the bandgap of material
#bg = (max_slope*E[max_slope_index]-sgre[max_slope_index])/max_slope
bg = -intercept[max_slope_index]/slope[max_slope_index]
return bg, max_slope, rmse[max_slope_index]
###Output
_____no_output_____
###Markdown
A function that takes the file containing the data, evaluates the segmentation for several segment lengths, and returns the bandgap of the material for the segment length with the least root mean square error.
###Code
def print_output(datafile):
bg = []
max_slope = []
rmse = []
for L in range(6, 12):
E, sgre = dataformat(datafile)
bg_L, max_slope_L, rmse_L = segmentation(L, E, sgre)
bg.append(bg_L)
max_slope.append(max_slope_L)
rmse.append(rmse_L)
#selecting the bandgap corresponding to least root mean square error
bandgap = bg[rmse.index(min(rmse))]
bandgap_error = min(rmse)/max_slope[rmse.index(min(rmse))]
print('The band gap of material is: ', round(bandgap, 3), '+-', round(bandgap_error, 3))
x = np.linspace(bandgap, E[np.argmax(sgre)], 100)
y = max_slope[rmse.index(min(rmse))]*(x-bandgap)
name = datafile.rsplit('/', 1)[-1].rsplit('.')[0]
print('Tauc Plot for ', name, 'for L = ', rmse.index(min(rmse))+6)
plt.plot(E, sgre)
plt.plot(x,y)
plt.xlabel(r'$h \nu$')
plt.ylabel(r'$(\alpha h \nu)^2$')
plt.grid()
plt.annotate(r'$E_g = {}\ eV$'.format(round(bandgap, 3)),
xy = (bandgap+0.02, 0), fontsize = 12)
plt.savefig('{}'.format(name), bbox_inches='tight')
plt.show()
time.sleep(1)
data = ['https://raw.githubusercontent.com/python4phys1cs/physics-problems/main/calculating-bandgap/data/znse.csv',
'https://raw.githubusercontent.com/python4phys1cs/physics-problems/main/calculating-bandgap/data/znsse.csv',
'https://raw.githubusercontent.com/python4phys1cs/physics-problems/main/calculating-bandgap/data/znsse2.csv']
for i in range(len(data)):
print_output(data[i])
###Output
The band gap of material is: 2.738 +- 0.005
Tauc Plot for znse for L = 6
|
00-PythonLearning/01-Tutorials/python_examples/basics_map.ipynb | ###Markdown
**`__map__`**
###Code
def square(number):
return number * number
[x for x in map(square, [1, 2, 3, 4, 5, 6])]
###Output
_____no_output_____
###Markdown
**List comprehension**
###Code
[square(x) for x in [1, 2, 3, 4, 5, 6]]
pow(2, 3)
###Output
_____no_output_____
###Markdown
**Another Example for \_\_map\_\_**
###Code
list(map(pow,[2, 3, 4], [10, 11, 12]))
###Output
_____no_output_____ |
3.4_numpy_random.ipynb | ###Markdown
Seeding
###Code
np.random.seed(123)
np.random.rand(5)
np.random.rand(5, 3)
a = 6; b= 12
(b-a) * np.random.rand(5) + a
###Output
_____no_output_____
###Markdown
Normal distribution
###Code
np.random.randn(5)
np.random.randn(5, 4)
mu = 5; sigma = 2
mu + sigma*np.random.randn(5)
X1 = np.random.randn(10000)
X2 = mu + sigma*np.random.randn(10000)
plt.figure(figsize=(10,6))
plt.hist(X1, bins=20, alpha=0.4)
plt.hist(X2, bins=20, alpha=0.4)
plt.show()
###Output
_____no_output_____
###Markdown
Random integers
###Code
np.random.randint(2, size=10)
np.random.randint(5, 20, size=100)
np.random.randint(5, 20, size=(4,7))
###Output
_____no_output_____ |
Project2- NoShowAppointments-Solution.ipynb | ###Markdown
Project: Investigate a Dataset - [No-Show Medical Appointments] Table of Contents Introduction Dataset Description Initial Exploration Cleaning Augmenting the Dataset Monovariate Analysis Exploring the full dataset Exploring the patient dataset Questions for Analysis Exploratory Data Analysis Question 1 Question 2 Question 3 Question 4 Question 5 Question 6 Question 7 Conclusions and Limitations Introduction Dataset Description This is a dataset containing data in a single file for over 110,000 medical appointments in Brazil, where the objective is to investigate the most important factors in predicting whether a patient will show up for their scheduled appointment. The original source of the dataset can be found [here](https://www.kaggle.com/joniarroba/noshowappointments). Column names: 1) PatientId - A unique identifier for the patient 2) AppointmentID - A unique identifier for the appointment made, as hypothetically a certain patient can book more than one appointment. 3) Gender - Gender of the patient (Male 'M' or Female 'F') 4) ScheduledDay - The date and time the patient called in to set an appointment 5) AppointmentDay - The date and time the appointment is due 6) Age - Patient's age 7) Neighborhood - The location of the hospital 8) Scholarship - Binary indicator: Whether or not the patient is enrolled in the Brazilian social welfare program Bolsa Família 9) Hipertension (misspelling of Hypertension) - Binary indicator: Whether or not the patient suffers from Hypertension 10) Diabetes - Binary indicator: Whether or not the patient is diabetic 11) Alcoholism - Binary indicator: Whether or not the patient suffers from alcoholism 12) Handcap (misspelling of Handicap) - Binary indicator: Whether or not the patient is handicapped 13) SMS_received - The number of SMS reminders the patient received before the appointment 14) No-show - Binary indicator: (Yes) if the patient did NOT show up, and (No) if the patient SHOWED UP. Columns numbered 1-13 are features or independent variables, while column 14 is the target or dependent variable.
###Code
# import necessary packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from datetime import datetime, date
%matplotlib inline
sns.set_style('darkgrid')
plt.rcParams["figure.figsize"] = (10, 5) #sets the size for all figures following this statement
# import file
df=pd.read_csv('noshowappointments-kagglev2-may-2016.csv')
df.head()
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 110527 entries, 0 to 110526
Data columns (total 14 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 PatientId 110527 non-null float64
1 AppointmentID 110527 non-null int64
2 Gender 110527 non-null object
3 ScheduledDay 110527 non-null object
4 AppointmentDay 110527 non-null object
5 Age 110527 non-null int64
6 Neighbourhood 110527 non-null object
7 Scholarship 110527 non-null int64
8 Hipertension 110527 non-null int64
9 Diabetes 110527 non-null int64
10 Alcoholism 110527 non-null int64
11 Handcap 110527 non-null int64
12 SMS_received 110527 non-null int64
13 No-show 110527 non-null object
dtypes: float64(1), int64(8), object(5)
memory usage: 11.8+ MB
###Markdown
Count Null Entries
###Code
df.isnull().sum().max()
###Output
_____no_output_____
###Markdown
Count duplicate entries based on both patient and appointment ID
###Code
df.duplicated(subset=['PatientId','AppointmentID']).sum()
df['Handcap'].value_counts()
###Output
_____no_output_____
###Markdown
**Since handcap should be boolean, it is likely that the column labels of 'Handcap' and 'SMS_received' were flipped**
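A quick way to double-check which of the two columns is actually binary before swapping the labels (a one-liner sketch; output not captured here):
###Code
# The genuinely binary column should report 2 unique values
df[['Handcap', 'SMS_received']].nunique()
###Output
_____no_output_____
###Markdown
Next, inspect the age column for invalid values: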
###Code
df['Age'].value_counts()
###Output
_____no_output_____
###Markdown
We should drop the row where age is -1 as it is invalid Cleaning* Correct misspelled column labels, shorten column labels where possible, and turn into snake case (i.e. lower case, separating words with underscore)* Flip the labels of 'SMS_received' and 'Handicapped' columns* Turn ScheduledDay into date/time format* Turn AppointmentDay into date/time format* Delete the row where 'Age' is invalid (-1)* To avoid confusion in the meaning of 'Yes' and 'No' in the No-show column, rename the No-show column to 'show', and change every 'Yes' to 0='No' and every 'No' to 1='Yes'* PatientId : float --> int
###Code
df.head()
###Output
_____no_output_____
###Markdown
Correct misspelled column labels
###Code
new_cols={'PatientId':'patient',
'AppointmentID':'appt',
'Gender':'sex',
'ScheduledDay':'scd_datetime',
'AppointmentDay': 'appt_datetime',
'Age':'age',
'Neighbourhood':'area',
'Scholarship':'bolsa',
'Hipertension':'hypert',
'Diabetes':'diabetic',
'Alcoholism': 'alcoholic',
'Handcap':'handicap',
'SMS_received':'sms',
'No-show':'no_show'
}
df.rename(columns=new_cols,inplace=True)
df.head(1)
###Output
_____no_output_____
###Markdown
Flip the labels of 'sms' and 'handicap' columns
###Code
df.rename(columns={'sms':'handicap',
'handicap':'sms'}
,inplace=True)
df.head(1)
# Check that sms has values 0-4 and handicap has values 0-1
df.sms.value_counts(),df.handicap.value_counts()
###Output
_____no_output_____
###Markdown
Turn ScheduledDay into date/time format
###Code
df['scd_datetime']=df['scd_datetime'].apply(lambda x:x.replace('T',' ').split('Z')[0])
df['scd_datetime']=pd.to_datetime(df['scd_datetime'],infer_datetime_format=True)
# Check
print(type(df['scd_datetime'][0]))
df.head()
###Output
<class 'pandas._libs.tslibs.timestamps.Timestamp'>
###Markdown
Turn AppointmentDay into date/time format
###Code
df['appt_datetime']=df['appt_datetime'].apply(lambda x:x.replace('T',' ').split('Z')[0])
df['appt_datetime']=pd.to_datetime(df['appt_datetime'],infer_datetime_format=True)
# Check
print(type(df['appt_datetime'][0]))
df.head()
###Output
<class 'pandas._libs.tslibs.timestamps.Timestamp'>
###Markdown
Delete the row where 'Age' is invalid (-1)
###Code
idx=df[df.age==-1].index
idx
df.drop(labels=idx,axis=0,inplace=True,)
#Check
df[df.age==-1]
###Output
_____no_output_____
###Markdown
Adjust the No-show column To avoid confusion about the meaning of 'Yes' and 'No' in the No-show column, rename the column to 'show' and recode its values: the original 'Yes' (the patient did NOT show up) becomes 0 and the original 'No' becomes 1, so that show=1 means the patient attended.
###Code
# First, count the number of Yes and No entries before switching
df.no_show.value_counts()
df.rename(columns={'no_show':'show'}
,inplace=True)
df['show'].replace({'Yes':0,'No':1},inplace=True)
df.head(1)
df.show.value_counts()
###Output
_____no_output_____
###Markdown
PatientId : float --> int
###Code
# casting to int64, because simply stating 'int' defaults to int32 which reverses the sign of the patient ID
df.patient=df.patient.astype(np.int64)
type(df.patient[0])
df.head()
# Check all datatypes are appropriate
df.info()
# Save the cleaned dataset to HDF5 file instead of CSV in order to preserve the date-time datatypes
df.to_hdf('full_db_cleaned.h5',key='df',index=False)
###Output
_____no_output_____
###Markdown
Augmenting the Dataset The current dataframe *df* contains duplicated patient IDs as some patients have made more than one appointment in the period being considered:
###Code
df.patient.duplicated().sum()
###Output
_____no_output_____
###Markdown
It may be useful to create another dataframe where each row is unique to a patient and add columns indicating:1- The number of appointments made by each patient 2- The number of times they showed up3- Each patient's attendance rate (this helps us normalize by the number of appointments per patient)
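For orientation, the three patient-level columns can also be computed in one go with a named-aggregation `groupby` (a compact sketch of the same idea, assuming the cleaned *df* from above):
###Code
# Compact alternative (sketch): one groupby with named aggregations
per_patient = (df.groupby('patient')
                 .agg(num_appts=('appt', 'count'),
                      num_shows=('show', 'sum')))
per_patient['att_rate'] = per_patient['num_shows'] / per_patient['num_appts']
###Output
_____no_output_____
###Markdown
Below, the patient-level table is built step by step instead, so that the other per-patient attributes (sex, age, illnesses, etc.) are kept as well: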
###Code
patient_data_cols=np.r_[0,2,5,7:11,12,13]
df_patient_1=df.iloc[:,patient_data_cols]
df_patient_1
df_patient_2=pd.DataFrame(df_patient_1.pivot_table(columns=['patient'], aggfunc='size'))
df_patient_2.rename(columns={0:'num_appts'},inplace=True)
df_patient_2
df_patient_3=pd.DataFrame(df.groupby('patient')['show'].sum())
df_patient_3.rename(columns={'show':'num_shows'},inplace=True)
df_patient_3
df_patient_23=pd.merge(df_patient_2,df_patient_3,left_on='patient',right_on='patient',how='inner')
df_patient_23
df_patient=pd.merge(df_patient_23,df_patient_1,left_on='patient',right_on='patient',how='inner')
df_patient.drop_duplicates(subset='patient', inplace=True)
df_patient
df_patient['num_diseases']=df_patient.hypert+df_patient.diabetic+df_patient.alcoholic+df_patient.handicap
df_patient
df_patient['att_rate']=df_patient['num_shows']/df_patient['num_appts']
df_patient['att_rate'].describe()
df_patient.duplicated().sum()
# Save the cleaned patient dataset to HDF5 file
df_patient.to_hdf('patient_db_cleaned.h5',key='df_patient',index=False)
###Output
_____no_output_____
###Markdown
Monovariate Analysis Exploring the full dataset
###Code
df.describe()
df_patient.describe()
df.iloc[:,2:].head()
# Plotting the histograms for all the variables in the full dataset excluding the patient and appointment IDs
df.iloc[:,2:].hist(figsize=(20,10));
df.sex.value_counts().plot(kind='bar');
###Output
_____no_output_____
###Markdown
The schedule dates distribution is skewed to the left, showing that most appointment requests were made between April and June 2016. This may be indicative of seasonal illness such as the common flu, and such knowledge could be used to better allocate medical resources during those months of the year. It would also be interesting to superimpose the schedule date and appointment date distributions on top of one another. We also see that most appointments were made by people free from at least one of hypertension, diabetes, alcoholism, or a disability. It makes us wonder whether a patient who has a chronic illness or disability is more keen not to miss the doctor's appointment than a person who does not. Also, most appointments were made by people not enrolled in the social welfare program, and most appointment-goers are female. The SMS histogram shows that predominantly, no SMS reminders were sent prior to each appointment's due date. It would be interesting to explore whether the number of SMS reminders sent has any bearing on attendance rates.
###Code
fig,ax=plt.subplots(figsize=(8,6))
ax.hist(df.scd_datetime,density=True,label='Schedule Dates',alpha=0.5)
ax.hist(df.appt_datetime,density=True,label='Appointment Dates',alpha=0.5)
ax.legend()
ax.set_title('Schedule and Appointment Dates Distributions')
ax.set_xlabel('Date')
ax.set_ylabel('Density');
###Output
_____no_output_____
###Markdown
This is interesting: almost all of the appointment dates fall in the period May-June 2016, and hardly any appointments were given in the six-month window preceding May 2016. This is not necessarily unreasonable, as hospitals and clinics have probably been seeing patients in the period 11-2015 to 05-2016 for appointments scheduled prior to the data collection. This means that some patients might have had to wait for 2-6 months for the nearest appointment. It will be insightful to see how long on average a new patient would have to wait for the nearest available appointment.
###Code
# Plotting the histograms for all the variables in the patient dataset excluding the patient IDs
df_patient.iloc[:,1:].hist(figsize=(20,10));
###Output
_____no_output_____
###Markdown
The number of shows is about the same as the number of appointments for most patients, resulting in the most predominant attendance rate being 90-100%. We are more interested in the characteristics of patients with low attendance rates. We will later section this dataframe by attendance rate and concentrate on patients with attendance rates less than 50%.In each of the histograms for hypertension, diabetes, alcoholism, and handicap, the number of people who are free from the illness is much higher than those who aren't. However, the histogram of the number of diseases shows that the number of people who suffer from at least one of these is almost equal to those who suffer from none. This is an insight that could not be gleaned by looking at each of the illness variables independently. It inspires the question of whether the attendance rate is correlated with the number of chronic diseases. Question(s) for AnalysisBased on the initial exploration above:**Q1: Is a patient's attendance rate correlated positively with their number of chronic illnesses?****Q2: Would sending more SMS reminders help improve the attendance rate?****Q3: Do appointments made and scheduled on the same day (likely indicating an urgent medical situation) lead to greater chances of showing up compared to appointments made one or more days earlier?****Q4: How long would a new patient have to wait on average for the next available appointment?****Q5: What are the characteristics of patients with attendance rates less than 50%?**Other questions we could ask about the data:**Q6: Are certain age groups more likely than others to miss their appointments?****Q7: Is the hospital location of any importance in predicting the possibility of a no-show?** Exploratory Data Analysis Q1: Relationship between number of chronic illnesses and attendance rate
###Code
df_patient.groupby('num_diseases')['att_rate'].mean()
y=df_patient.groupby('num_diseases')['att_rate'].mean()
x=y.index
plt.scatter(x,y)
m, b = np.polyfit(x, y, 1)
plt.plot(x, m*x + b,color='orange');
plt.xlabel('Number of chronic illnesses')
plt.ylabel('Mean Attendance Rate');
plt.title('Mean Attendance Rate by Number of Chronic Illnesses');
###Output
_____no_output_____
###Markdown
This is a surprising and counter-intuitive result. It seems that patients with more chronic illnesses are more likely to miss their appointments. Q2: Relationship between number of SMS reminders and attendance rate
###Code
df.groupby('sms')['show'].mean().plot(kind='line');
plt.title('Average Attendance Rate by No. of SMS Reminders',fontsize=14)
plt.xlabel('Number of SMS Reminders',fontsize=12)
plt.ylabel('Average Attendance Rate',fontsize=12);
df.corr()['sms']['show']
###Output
_____no_output_____
###Markdown
From the line plot above, it seems that increasing the number of SMS reminders will not necessarily improve attendance. Sending one SMS reminder might have bumped up the average attendance rate by a mere 2%, and sending more than one SMS seems to have had a negative effect, which may be coincidental. Q3: Relationship between attendance rate and number of days an appointment is booked in advance
###Code
# Create a new column for the difference between the scheduling date and the appointment date
df['days_diff']=(df['appt_datetime'].dt.date-df['scd_datetime'].dt.date).dt.days
# check for any negative date difference
df.days_diff.describe()
# Inspect problematic rows where the appointment date precedes the schedule date
idx=df[df.days_diff < 0].index
# delete problematic rows
df=df.drop(labels=idx,axis=0)
# Check that the minimum number of days difference is 0
df.days_diff.min()
y=df.groupby('days_diff').mean()['show']
x=y.index
plt.scatter(x,y);
plt.xlabel('Days Booked In Advance',fontsize=12)
plt.ylabel('Mean Attendance Rate',fontsize=12)
plt.title('Mean Attendance Rate vs. Days Booked In Advance',fontsize=14);
###Output
_____no_output_____
###Markdown
The complexity of the figure above requires sectioning the plot into two more plots for appointments made fewer than 40 days in advance, and those made at least 40 days in advance:
###Code
mask_l40=df.days_diff<40
mask_ge40=df.days_diff>=40
y= df[mask_l40].groupby('days_diff').mean()['show']
x=y.index
plt.scatter(x,y);
m,b = np.polyfit(x, y, 1)
plt.plot(x, m*x+b,color='orange')
plt.xlabel('Days Booked In Advance',fontsize=12)
plt.ylabel('Mean Attendance Rate',fontsize=12)
plt.title('Mean Attendance Rate vs. Days Booked <40 days In Advance',fontsize=14);
###Output
_____no_output_____
###Markdown
For appointments made fewer than 40 days in advance, appointments due on the same day they are booked show the highest attendance rate. Beyond that, the earlier an appointment is booked in advance, the more likely patients are to miss it.
###Code
y= df[mask_ge40].groupby('days_diff').mean()['show']
x=y.index
plt.scatter(x,y);
m,b = np.polyfit(x, y, 1)
plt.plot(x, m*x+b,color='red')
plt.xlabel('Days Booked In Advance',fontsize=12)
plt.ylabel('Mean Attendance Rate',fontsize=12)
plt.title('Mean Attendance Rate vs. Days Booked >=40 days In Advance',fontsize=14);
###Output
_____no_output_____
###Markdown
The line plot above for appointments made at least 40 days in advance does not show a clear pattern. We may need to inspect the correlation coefficient:
###Code
df[mask_l40].corr()['show']['days_diff']
df[mask_ge40].corr()['show']['days_diff']
###Output
_____no_output_____
###Markdown
The correlation coefficients reveal that the attendance rate is negatively (although weakly) correlated with the number of days an appointment is booked in advance, provided the appointment is booked fewer than 40 days in advance. However, if an appointment is booked 40 or more days in advance, it is difficult to predict whether or not the patient will show up based on this information alone.
###Code
df.days_diff.describe()
df.boxplot(column=['days_diff'])
###Output
_____no_output_____
###Markdown
The boxplot above for the number of days waited until the next available appointment shows plenty of outliers. The median is therefore a better representative of central tendency than the mean in this case. One could say that a typical new patient would wait about 4 days (the median) for the next available appointment.
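As a direct check of that typical value (a one-liner on the cleaned dataframe; output not captured here):
###Code
# Median number of days between scheduling and the appointment
df['days_diff'].median()
###Output
_____no_output_____
###Markdown
Q5: Characteristics of the patients with attendance rates < 50%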
###Code
lowatt=df_patient[df_patient.att_rate<0.5]
lowatt
lowatt.iloc[:,3:12].hist(figsize=(10,10));
lowatt.sex.value_counts().plot(kind='pie');
###Output
_____no_output_____
###Markdown
The histogram matrix (and the bar chart of gender) for the characteristics of patients with lower attendance rates shows that those patients likely: * Have one or no chronic illnesses * If they have a chronic illness, it is probably a disability (handicap) * Are female. Are the predictor variables for these patients correlated? To investigate, we use seaborn's pair plot and inspect the off-diagonal scatter plots:
###Code
sns.pairplot(lowatt.iloc[:,3:12]);
###Output
_____no_output_____
###Markdown
The only interesting relationship in this pairplot is that between age and the number of diseases, suspected to be a positive correlation.
###Code
lowatt.corr()['age']['num_diseases']
###Output
_____no_output_____
###Markdown
This positive correlation between illness and age is expected of any human population. But is it sufficient to conclude that seniors are the likeliest to miss their appointments due to not only old age, but also the higher number of illnesses that accompany old age?
###Code
y=lowatt.groupby('age')['att_rate'].mean()
x=y.index
plt.scatter(x,y)
m, b = np.polyfit(x, y, 1)
print(m)
plt.plot(x, m*x + b,color='orange');
plt.xlabel('Age')
plt.ylabel('Mean Attendance Rate');
plt.title('Mean Attendance Rate by Age for Low-Attendance Patients');
###Output
2.1634296333138136e-05
###Markdown
The above plot is skewed by the outlier on the top right. Let's remove the outlier from the dataset and replot the same figure:
###Code
# Drop the low-attendance patients older than 100 (age outliers)
idx = lowatt.query('age>100').index
lowatt = lowatt.drop(index=idx)
# Replot the mean attendance rate by age with a linear trend line
y = lowatt.groupby('age')['att_rate'].mean()
x = y.index
plt.scatter(x, y)
m, b = np.polyfit(x, y, 1)
plt.plot(x, m*x + b, color='orange');
plt.xlabel('Age')
plt.ylabel('Mean Attendance Rate');
plt.title('Mean Attendance Rate by Age for Low-Attendance Patients');
# Correlation between age and attendance rate after removing the outlier
lowatt.corr()['age']['att_rate']
###Output
_____no_output_____
###Markdown
After removing the outlier, the regression line now slopes down, indicating a negative correlation between age and attendance rate, even among patients with low attendance. Upon inspecting the correlation coefficient, however, we see that this relationship is too weak to conclude that seniors are the likeliest to miss their appointments. It would be interesting to ask the same question of the larger dataset.
Q6: Are certain age groups more likely than others to miss their appointments?
First, let's define the bounds of the age groups. I used [this article](https://www.researchgate.net/publication/228404297_Classification_of_Age_Groups_Based_on_Facial_Features) as a reference.
###Code
# Remove age outliers
idx = df.query('age>100').index
df = df.drop(index=idx)
# Define the age groups
bin_edges = [0, 2, 39, 59, 115]
bin_names = ['Baby', 'Young Adult', 'Middle-Aged Adult', 'Old Adult']
# Create the age_group column
df['age_group'] = pd.cut(df['age'], bin_edges, labels=bin_names)
# Plot the mean attendance rate of each age group
df.groupby('age_group').mean()['show'].plot(kind='bar');
plt.xticks(rotation=0)
plt.xlabel('Age Group', fontsize=14)
plt.ylabel('Mean Attendance Rate', fontsize=14)
plt.title('Mean Attendance Rate by Age Group', fontsize=14);
# Age groups with the highest and lowest mean attendance rates
df.groupby('age_group').mean()['show'].idxmax()
df.groupby('age_group').mean()['show'].idxmin()
###Output
_____no_output_____
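###Markdown
Before interpreting these group means, it is worth checking how many appointments fall into each bin. A minimal sketch (assuming the `age_group` column created above):
```
df['age_group'].value_counts()
```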
###Markdown
In stark contrast to our previous analysis of the low-attendance dataset, it seems that old adults (i.e. seniors) are in general the likeliest to show up for their appointments, followed by babies (who must be accompanied by their parents); middle-aged adults come in third place, while **young adults are the likeliest to miss their appointments!!**
Q7: Is the hospital location of any importance in predicting the possibility of a no-show?
###Code
df.groupby('area')['show'].mean().plot(kind='bar',figsize=(20,5));
plt.xlabel('Neighborhood')
plt.ylabel('Mean Attendance Rate')
plt.title('Mean Attendance Rate by Neighborhood');
###Output
_____no_output_____
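###Markdown
A minimal sketch (assuming the `df` above) summarising the spread of the neighborhood-level rates behind the statement below:
```
df.groupby('area')['show'].mean().describe()
```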
###Markdown
The figure above demonstrates that all but four neighborhoods have a similar attendance rate of around 80%. Three exceptional neighborhoods have above-average attendance rates:
###Code
df.groupby('area')['show'].mean().sort_values(ascending=False)[:3]
###Output
_____no_output_____
###Markdown
Another exceptional neighborhood has a 0% mean attendance rate: ILHAS OCEÂNICAS DE TRINDADE
###Code
df.groupby('area')['show'].mean().sort_values()
###Output
_____no_output_____
###Markdown
Inspecting the appointments made in that neighborhood, we find only two appointments, both of which were no-shows. Two records in a dataset of 110,500+ appointments mean that not enough data was collected from that neighborhood to judge fairly whether it performs better or worse than the others.
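Before looking at those two records below, here is a minimal sketch of how one might guard against such sparse neighborhoods when comparing attendance rates (assuming the `df` above; the threshold of 50 appointments is an arbitrary illustration):
```
counts = df.groupby('area')['show'].count()
rates = df.groupby('area')['show'].mean()
rates[counts >= 50].sort_values()
```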
###Code
df[df.area=='ILHAS OCEÂNICAS DE TRINDADE']
###Output
_____no_output_____ |
roboflow_detr_finetune.ipynb | ###Markdown
DETR + ROBOFLOW
The following notebook implements DETR finetuning on the Roboflow COCO-format dataset. The main implementation is taken from https://github.com/aivclab/detr, with changes made to the coco loader to fit our dataset. See the Dataset section below for details.
Colab Setup
###Code
# Ensure colab doesn't disconnect
%%javascript
function ClickConnect(){
console.log("Working");
document.querySelector("colab-toolbar-button#connect").click()
}setInterval(ClickConnect,60000)
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
print('Select the Runtime > "Change runtime type" menu to enable a GPU accelerator, ')
print('and then re-execute this cell.')
else:
print(gpu_info)
from psutil import virtual_memory
ram_gb = virtual_memory().total / 1e9
print('Your runtime has {:.1f} gigabytes of available RAM\n'.format(ram_gb))
if ram_gb < 20:
print('To enable a high-RAM runtime, select the Runtime > "Change runtime type"')
print('menu, and then select High-RAM in the Runtime shape dropdown. Then, ')
print('re-execute this cell.')
else:
print('You are using a high-RAM runtime!')
###Output
Your runtime has 13.6 gigabytes of available RAM
To enable a high-RAM runtime, select the Runtime > "Change runtime type"
menu, and then select High-RAM in the Runtime shape dropdown. Then,
re-execute this cell.
###Markdown
Fork DETR and setup
###Code
!git clone 'https://github.com/aivclab/detr'
%cd /content/detr
!pip install -r requirements.txt
import torch, torchvision
print(torch.__version__, torch.cuda.is_available())
###Output
1.8.1+cu101 True
###Markdown
Load a model
First we have to decide whether our model should be pretrained. This greatly depends on the size of the dataset: smaller datasets rely more on finetuning.
###Code
pretrained = True
if pretrained:
# Get pretrained weights
checkpoint = torch.hub.load_state_dict_from_url(
url='https://dl.fbaipublicfiles.com/detr/detr-r50-e632da11.pth',
map_location='cpu',
check_hash=True)
# Remove class weights
del checkpoint["model"]["class_embed.weight"]
del checkpoint["model"]["class_embed.bias"]
    # Save the checkpoint without the class head
torch.save(checkpoint,
'detr-r50_no-class-head.pth')
###Output
_____no_output_____
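###Markdown
As a quick sanity check (a minimal sketch, assuming the file saved in the previous cell), we can confirm that the class-head weights are really gone from the stripped checkpoint before training:
```
import torch

ckpt = torch.load('detr-r50_no-class-head.pth', map_location='cpu')
# No key should start with "class_embed" any more, so finetuning starts with a freshly initialised head
print(any(k.startswith('class_embed') for k in ckpt['model']))  # expected: False
```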
###Markdown
Dataset
Our dataset should be loadable as a COCO format. This allows us to use the pycocotools to load the data dict for the main python script.
Need to change /content/detr/datasets/coco.py build() line 147 to:
```
def build(image_set, args):
    root = Path(args.coco_path)
    assert root.exists(), f'provided COCO path {root} does not exist'
    mode = 'instances'
    PATHS = {
        "train": (root / "train", root / "train/_annotations.coco.json"),
        "val": (root / "valid", root / "valid/_annotations.coco.json"),
    }

    img_folder, ann_file = PATHS[image_set]
    dataset = CocoDetection(img_folder, ann_file, transforms=make_coco_transforms(image_set), return_masks=args.masks)
    return dataset
```
Download Roboflow dataset
###Code
!mkdir /content/roboflow-dataset
%cd /content/roboflow-dataset
!rm -rf train valid test README.roboflow.txt data.yaml
!curl -L "https://app.roboflow.com/ds/CcZcXC9tAY?key=19cj8EBfm3" > roboflow.zip; unzip roboflow.zip; rm roboflow.zip
dataset_file = "coco" # alternatively, implement your own coco-type dataset loader in datasets and add this "key" to datasets/__init__.py
dataDir='/content/roboflow-dataset' # should lead to a directory with a train2017 and val2017 folder as well as an annotations folder
num_classes = 3 # this int should be the actual number of classes + 1 (for no class)
outDir = 'outputs'
resume = "detr-r50_no-class-head.pth" if pretrained else ""
###Output
_____no_output_____
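###Markdown
Before training, a minimal sketch (assuming the paths unzipped above) to verify that the annotations load with pycocotools and that `num_classes` matches the dataset:
```
from pycocotools.coco import COCO

coco = COCO('/content/roboflow-dataset/train/_annotations.coco.json')
cats = coco.loadCats(coco.getCatIds())
# num_classes passed to main.py should be this count + 1 (for the no-object class)
print(len(cats), [c['name'] for c in cats])
```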
###Markdown
Training
We use the main.py script to run our training.
Note - changes made to class loss in /content/detr/detr.py line 126:
```
losses['class_error'] = 100 - accuracy(src_logits[idx][..., :-1], target_classes_o)[0]
```
###Code
%cd /content/detr
!python main.py \
--dataset_file $dataset_file \
--coco_path $dataDir \
--output_dir $outDir \
--resume $resume \
--num_classes $num_classes \
--lr 1e-5 \
--lr_backbone 1e-6 \
--epochs 50 \
--batch_size 16
###Output
/content/detr
Not using distributed mode
git:
sha: 8830cacdc981924169546a0e59d94b6c94fd775d, status: has uncommited changes, branch: master
Namespace(aux_loss=True, backbone='resnet50', batch_size=16, bbox_loss_coef=5, clip_max_norm=0.1, coco_panoptic_path=None, coco_path='/content/roboflow-dataset', dataset_file='coco', dec_layers=6, device='cuda', dice_loss_coef=1, dilation=False, dim_feedforward=2048, dist_url='env://', distributed=False, dropout=0.1, enc_layers=6, eos_coef=0.1, epochs=50, eval=False, frozen_weights=None, giou_loss_coef=2, hidden_dim=256, lr=1e-05, lr_backbone=1e-06, lr_drop=200, mask_loss_coef=1, masks=False, nheads=8, num_classes=3, num_queries=100, num_workers=2, output_dir='outputs', position_embedding='sine', pre_norm=False, remove_difficult=False, resume='detr-r50_no-class-head.pth', seed=42, set_cost_bbox=5, set_cost_class=1, set_cost_giou=2, start_epoch=0, weight_decay=0.0001, world_size=1)
Building a DETR model with 3 classes
number of params: 41279752
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
Dataset CocoDetection
Number of datapoints: 258
Root location: /content/roboflow-dataset/train
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
Dataset CocoDetection
Number of datapoints: 64
Root location: /content/roboflow-dataset/valid
Start training
Epoch: [0] [ 0/16] eta: 0:00:40 lr: 0.000010 class_error: 46.67 loss: 36.5252 (36.5252) loss_ce: 1.9253 (1.9253) loss_bbox: 2.2952 (2.2952) loss_giou: 1.8454 (1.8454) loss_ce_0: 1.9124 (1.9124) loss_bbox_0: 2.4054 (2.4054) loss_giou_0: 1.9550 (1.9550) loss_ce_1: 1.8691 (1.8691) loss_bbox_1: 2.4608 (2.4608) loss_giou_1: 1.8435 (1.8435) loss_ce_2: 1.8044 (1.8044) loss_bbox_2: 2.2864 (2.2864) loss_giou_2: 1.8625 (1.8625) loss_ce_3: 1.8988 (1.8988) loss_bbox_3: 2.2595 (2.2595) loss_giou_3: 1.8650 (1.8650) loss_ce_4: 1.9054 (1.9054) loss_bbox_4: 2.2882 (2.2882) loss_giou_4: 1.8428 (1.8428) loss_ce_unscaled: 1.9253 (1.9253) class_error_unscaled: 46.6667 (46.6667) loss_bbox_unscaled: 0.4590 (0.4590) loss_giou_unscaled: 0.9227 (0.9227) cardinality_error_unscaled: 97.0000 (97.0000) loss_ce_0_unscaled: 1.9124 (1.9124) loss_bbox_0_unscaled: 0.4811 (0.4811) loss_giou_0_unscaled: 0.9775 (0.9775) cardinality_error_0_unscaled: 97.6875 (97.6875) loss_ce_1_unscaled: 1.8691 (1.8691) loss_bbox_1_unscaled: 0.4922 (0.4922) loss_giou_1_unscaled: 0.9218 (0.9218) cardinality_error_1_unscaled: 97.2500 (97.2500) loss_ce_2_unscaled: 1.8044 (1.8044) loss_bbox_2_unscaled: 0.4573 (0.4573) loss_giou_2_unscaled: 0.9312 (0.9312) cardinality_error_2_unscaled: 95.4375 (95.4375) loss_ce_3_unscaled: 1.8988 (1.8988) loss_bbox_3_unscaled: 0.4519 (0.4519) loss_giou_3_unscaled: 0.9325 (0.9325) cardinality_error_3_unscaled: 96.9375 (96.9375) loss_ce_4_unscaled: 1.9054 (1.9054) loss_bbox_4_unscaled: 0.4576 (0.4576) loss_giou_4_unscaled: 0.9214 (0.9214) cardinality_error_4_unscaled: 97.0625 (97.0625) time: 2.5071 data: 1.1450 max mem: 11580
Epoch: [0] [10/16] eta: 0:00:08 lr: 0.000010 class_error: 53.33 loss: 34.1447 (34.5387) loss_ce: 1.5645 (1.6201) loss_bbox: 2.1405 (2.1516) loss_giou: 1.9087 (1.9407) loss_ce_0: 1.6430 (1.6695) loss_bbox_0: 2.3091 (2.3018) loss_giou_0: 1.9550 (1.9430) loss_ce_1: 1.5980 (1.6165) loss_bbox_1: 2.1991 (2.2308) loss_giou_1: 1.9874 (1.9503) loss_ce_2: 1.5230 (1.5587) loss_bbox_2: 2.1292 (2.2079) loss_giou_2: 1.8921 (1.9320) loss_ce_3: 1.5662 (1.6168) loss_bbox_3: 2.0982 (2.1707) loss_giou_3: 1.8919 (1.9265) loss_ce_4: 1.5683 (1.6154) loss_bbox_4: 2.1021 (2.1490) loss_giou_4: 1.8967 (1.9374) loss_ce_unscaled: 1.5645 (1.6201) class_error_unscaled: 43.7500 (46.3578) loss_bbox_unscaled: 0.4281 (0.4303) loss_giou_unscaled: 0.9544 (0.9704) cardinality_error_unscaled: 90.1250 (89.5170) loss_ce_0_unscaled: 1.6430 (1.6695) loss_bbox_0_unscaled: 0.4618 (0.4604) loss_giou_0_unscaled: 0.9775 (0.9715) cardinality_error_0_unscaled: 93.0000 (93.1136) loss_ce_1_unscaled: 1.5980 (1.6165) loss_bbox_1_unscaled: 0.4398 (0.4462) loss_giou_1_unscaled: 0.9937 (0.9752) cardinality_error_1_unscaled: 90.6875 (90.7102) loss_ce_2_unscaled: 1.5230 (1.5587) loss_bbox_2_unscaled: 0.4258 (0.4416) loss_giou_2_unscaled: 0.9460 (0.9660) cardinality_error_2_unscaled: 86.3125 (86.5795) loss_ce_3_unscaled: 1.5662 (1.6168) loss_bbox_3_unscaled: 0.4196 (0.4341) loss_giou_3_unscaled: 0.9460 (0.9633) cardinality_error_3_unscaled: 90.3125 (90.0966) loss_ce_4_unscaled: 1.5683 (1.6154) loss_bbox_4_unscaled: 0.4204 (0.4298) loss_giou_4_unscaled: 0.9483 (0.9687) cardinality_error_4_unscaled: 89.6250 (89.7898) time: 1.3527 data: 0.1670 max mem: 12069
Epoch: [0] [15/16] eta: 0:00:01 lr: 0.000010 class_error: 50.00 loss: 33.3868 (33.5867) loss_ce: 1.5025 (1.5622) loss_bbox: 1.9948 (2.0558) loss_giou: 1.8752 (1.9257) loss_ce_0: 1.5911 (1.6423) loss_bbox_0: 2.1590 (2.1938) loss_giou_0: 1.9467 (1.9132) loss_ce_1: 1.5328 (1.5818) loss_bbox_1: 2.0160 (2.1446) loss_giou_1: 1.9206 (1.9155) loss_ce_2: 1.4821 (1.5205) loss_bbox_2: 2.0426 (2.1010) loss_giou_2: 1.8911 (1.9252) loss_ce_3: 1.5350 (1.5700) loss_bbox_3: 2.0433 (2.0657) loss_giou_3: 1.8919 (1.9258) loss_ce_4: 1.5165 (1.5638) loss_bbox_4: 1.9853 (2.0556) loss_giou_4: 1.8866 (1.9242) loss_ce_unscaled: 1.5025 (1.5622) class_error_unscaled: 50.0000 (49.4342) loss_bbox_unscaled: 0.3990 (0.4112) loss_giou_unscaled: 0.9376 (0.9628) cardinality_error_unscaled: 84.8750 (86.3867) loss_ce_0_unscaled: 1.5911 (1.6423) loss_bbox_0_unscaled: 0.4318 (0.4388) loss_giou_0_unscaled: 0.9734 (0.9566) cardinality_error_0_unscaled: 91.8125 (92.3047) loss_ce_1_unscaled: 1.5328 (1.5818) loss_bbox_1_unscaled: 0.4032 (0.4289) loss_giou_1_unscaled: 0.9603 (0.9578) cardinality_error_1_unscaled: 88.1875 (89.2734) loss_ce_2_unscaled: 1.4821 (1.5205) loss_bbox_2_unscaled: 0.4085 (0.4202) loss_giou_2_unscaled: 0.9456 (0.9626) cardinality_error_2_unscaled: 82.6875 (84.2383) loss_ce_3_unscaled: 1.5350 (1.5700) loss_bbox_3_unscaled: 0.4087 (0.4131) loss_giou_3_unscaled: 0.9460 (0.9629) cardinality_error_3_unscaled: 86.3125 (87.6484) loss_ce_4_unscaled: 1.5165 (1.5638) loss_bbox_4_unscaled: 0.3971 (0.4111) loss_giou_4_unscaled: 0.9433 (0.9621) cardinality_error_4_unscaled: 85.2500 (86.9336) time: 1.3032 data: 0.1327 max mem: 12069
Epoch: [0] Total time: 0:00:20 (1.3071 s / it)
Averaged stats: lr: 0.000010 class_error: 50.00 loss: 33.3868 (33.5867) loss_ce: 1.5025 (1.5622) loss_bbox: 1.9948 (2.0558) loss_giou: 1.8752 (1.9257) loss_ce_0: 1.5911 (1.6423) loss_bbox_0: 2.1590 (2.1938) loss_giou_0: 1.9467 (1.9132) loss_ce_1: 1.5328 (1.5818) loss_bbox_1: 2.0160 (2.1446) loss_giou_1: 1.9206 (1.9155) loss_ce_2: 1.4821 (1.5205) loss_bbox_2: 2.0426 (2.1010) loss_giou_2: 1.8911 (1.9252) loss_ce_3: 1.5350 (1.5700) loss_bbox_3: 2.0433 (2.0657) loss_giou_3: 1.8919 (1.9258) loss_ce_4: 1.5165 (1.5638) loss_bbox_4: 1.9853 (2.0556) loss_giou_4: 1.8866 (1.9242) loss_ce_unscaled: 1.5025 (1.5622) class_error_unscaled: 50.0000 (49.4342) loss_bbox_unscaled: 0.3990 (0.4112) loss_giou_unscaled: 0.9376 (0.9628) cardinality_error_unscaled: 84.8750 (86.3867) loss_ce_0_unscaled: 1.5911 (1.6423) loss_bbox_0_unscaled: 0.4318 (0.4388) loss_giou_0_unscaled: 0.9734 (0.9566) cardinality_error_0_unscaled: 91.8125 (92.3047) loss_ce_1_unscaled: 1.5328 (1.5818) loss_bbox_1_unscaled: 0.4032 (0.4289) loss_giou_1_unscaled: 0.9603 (0.9578) cardinality_error_1_unscaled: 88.1875 (89.2734) loss_ce_2_unscaled: 1.4821 (1.5205) loss_bbox_2_unscaled: 0.4085 (0.4202) loss_giou_2_unscaled: 0.9456 (0.9626) cardinality_error_2_unscaled: 82.6875 (84.2383) loss_ce_3_unscaled: 1.5350 (1.5700) loss_bbox_3_unscaled: 0.4087 (0.4131) loss_giou_3_unscaled: 0.9460 (0.9629) cardinality_error_3_unscaled: 86.3125 (87.6484) loss_ce_4_unscaled: 1.5165 (1.5638) loss_bbox_4_unscaled: 0.3971 (0.4111) loss_giou_4_unscaled: 0.9433 (0.9621) cardinality_error_4_unscaled: 85.2500 (86.9336)
Test: [0/4] eta: 0:00:07 class_error: 43.75 loss: 25.5645 (25.5645) loss_ce: 1.3839 (1.3839) loss_bbox: 1.1454 (1.1454) loss_giou: 1.6425 (1.6425) loss_ce_0: 1.5123 (1.5123) loss_bbox_0: 1.2378 (1.2378) loss_giou_0: 1.7447 (1.7447) loss_ce_1: 1.4317 (1.4317) loss_bbox_1: 1.2225 (1.2225) loss_giou_1: 1.5989 (1.5989) loss_ce_2: 1.3826 (1.3826) loss_bbox_2: 1.2516 (1.2516) loss_giou_2: 1.5998 (1.5998) loss_ce_3: 1.4154 (1.4154) loss_bbox_3: 1.1580 (1.1580) loss_giou_3: 1.6460 (1.6460) loss_ce_4: 1.4098 (1.4098) loss_bbox_4: 1.1459 (1.1459) loss_giou_4: 1.6356 (1.6356) loss_ce_unscaled: 1.3839 (1.3839) class_error_unscaled: 43.7500 (43.7500) loss_bbox_unscaled: 0.2291 (0.2291) loss_giou_unscaled: 0.8213 (0.8213) cardinality_error_unscaled: 74.9375 (74.9375) loss_ce_0_unscaled: 1.5123 (1.5123) loss_bbox_0_unscaled: 0.2476 (0.2476) loss_giou_0_unscaled: 0.8724 (0.8724) cardinality_error_0_unscaled: 85.6875 (85.6875) loss_ce_1_unscaled: 1.4317 (1.4317) loss_bbox_1_unscaled: 0.2445 (0.2445) loss_giou_1_unscaled: 0.7995 (0.7995) cardinality_error_1_unscaled: 79.5000 (79.5000) loss_ce_2_unscaled: 1.3826 (1.3826) loss_bbox_2_unscaled: 0.2503 (0.2503) loss_giou_2_unscaled: 0.7999 (0.7999) cardinality_error_2_unscaled: 75.1250 (75.1250) loss_ce_3_unscaled: 1.4154 (1.4154) loss_bbox_3_unscaled: 0.2316 (0.2316) loss_giou_3_unscaled: 0.8230 (0.8230) cardinality_error_3_unscaled: 80.3750 (80.3750) loss_ce_4_unscaled: 1.4098 (1.4098) loss_bbox_4_unscaled: 0.2292 (0.2292) loss_giou_4_unscaled: 0.8178 (0.8178) cardinality_error_4_unscaled: 77.1250 (77.1250) time: 1.9536 data: 1.1724 max mem: 12069
Test: [3/4] eta: 0:00:00 class_error: 37.50 loss: 28.7757 (29.0725) loss_ce: 1.3651 (1.3810) loss_bbox: 1.4792 (1.5587) loss_giou: 1.8093 (1.8076) loss_ce_0: 1.4932 (1.5049) loss_bbox_0: 1.8279 (1.7334) loss_giou_0: 1.8268 (1.8243) loss_ce_1: 1.3974 (1.4234) loss_bbox_1: 1.7525 (1.6987) loss_giou_1: 1.7903 (1.7656) loss_ce_2: 1.3744 (1.3798) loss_bbox_2: 1.5562 (1.6288) loss_giou_2: 1.8466 (1.8149) loss_ce_3: 1.4097 (1.4103) loss_bbox_3: 1.5146 (1.5682) loss_giou_3: 1.8148 (1.8069) loss_ce_4: 1.3966 (1.4072) loss_bbox_4: 1.4948 (1.5508) loss_giou_4: 1.8172 (1.8078) loss_ce_unscaled: 1.3651 (1.3810) class_error_unscaled: 43.7500 (51.5625) loss_bbox_unscaled: 0.2958 (0.3117) loss_giou_unscaled: 0.9046 (0.9038) cardinality_error_unscaled: 67.1875 (69.3125) loss_ce_0_unscaled: 1.4932 (1.5049) loss_bbox_0_unscaled: 0.3656 (0.3467) loss_giou_0_unscaled: 0.9134 (0.9121) cardinality_error_0_unscaled: 81.8750 (82.2031) loss_ce_1_unscaled: 1.3974 (1.4234) loss_bbox_1_unscaled: 0.3505 (0.3397) loss_giou_1_unscaled: 0.8951 (0.8828) cardinality_error_1_unscaled: 71.9375 (74.6875) loss_ce_2_unscaled: 1.3744 (1.3798) loss_bbox_2_unscaled: 0.3112 (0.3258) loss_giou_2_unscaled: 0.9233 (0.9075) cardinality_error_2_unscaled: 66.4375 (70.3125) loss_ce_3_unscaled: 1.4097 (1.4103) loss_bbox_3_unscaled: 0.3029 (0.3136) loss_giou_3_unscaled: 0.9074 (0.9035) cardinality_error_3_unscaled: 70.6250 (73.7188) loss_ce_4_unscaled: 1.3966 (1.4072) loss_bbox_4_unscaled: 0.2990 (0.3102) loss_giou_4_unscaled: 0.9086 (0.9039) cardinality_error_4_unscaled: 69.0000 (71.3750) time: 0.9915 data: 0.3503 max mem: 12069
Test: Total time: 0:00:04 (1.0080 s / it)
Averaged stats: class_error: 37.50 loss: 28.7757 (29.0725) loss_ce: 1.3651 (1.3810) loss_bbox: 1.4792 (1.5587) loss_giou: 1.8093 (1.8076) loss_ce_0: 1.4932 (1.5049) loss_bbox_0: 1.8279 (1.7334) loss_giou_0: 1.8268 (1.8243) loss_ce_1: 1.3974 (1.4234) loss_bbox_1: 1.7525 (1.6987) loss_giou_1: 1.7903 (1.7656) loss_ce_2: 1.3744 (1.3798) loss_bbox_2: 1.5562 (1.6288) loss_giou_2: 1.8466 (1.8149) loss_ce_3: 1.4097 (1.4103) loss_bbox_3: 1.5146 (1.5682) loss_giou_3: 1.8148 (1.8069) loss_ce_4: 1.3966 (1.4072) loss_bbox_4: 1.4948 (1.5508) loss_giou_4: 1.8172 (1.8078) loss_ce_unscaled: 1.3651 (1.3810) class_error_unscaled: 43.7500 (51.5625) loss_bbox_unscaled: 0.2958 (0.3117) loss_giou_unscaled: 0.9046 (0.9038) cardinality_error_unscaled: 67.1875 (69.3125) loss_ce_0_unscaled: 1.4932 (1.5049) loss_bbox_0_unscaled: 0.3656 (0.3467) loss_giou_0_unscaled: 0.9134 (0.9121) cardinality_error_0_unscaled: 81.8750 (82.2031) loss_ce_1_unscaled: 1.3974 (1.4234) loss_bbox_1_unscaled: 0.3505 (0.3397) loss_giou_1_unscaled: 0.8951 (0.8828) cardinality_error_1_unscaled: 71.9375 (74.6875) loss_ce_2_unscaled: 1.3744 (1.3798) loss_bbox_2_unscaled: 0.3112 (0.3258) loss_giou_2_unscaled: 0.9233 (0.9075) cardinality_error_2_unscaled: 66.4375 (70.3125) loss_ce_3_unscaled: 1.4097 (1.4103) loss_bbox_3_unscaled: 0.3029 (0.3136) loss_giou_3_unscaled: 0.9074 (0.9035) cardinality_error_3_unscaled: 70.6250 (73.7188) loss_ce_4_unscaled: 1.3966 (1.4072) loss_bbox_4_unscaled: 0.2990 (0.3102) loss_giou_4_unscaled: 0.9086 (0.9039) cardinality_error_4_unscaled: 69.0000 (71.3750)
Accumulating evaluation results...
DONE (t=0.06s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.023
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.037
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.029
Epoch: [1] [ 0/16] eta: 0:00:38 lr: 0.000010 class_error: 14.29 loss: 30.6267 (30.6267) loss_ce: 1.3556 (1.3556) loss_bbox: 1.8101 (1.8101) loss_giou: 1.9002 (1.9002) loss_ce_0: 1.4654 (1.4654) loss_bbox_0: 1.8623 (1.8623) loss_giou_0: 1.9097 (1.9097) loss_ce_1: 1.4117 (1.4117) loss_bbox_1: 1.8260 (1.8260) loss_giou_1: 1.8404 (1.8404) loss_ce_2: 1.3656 (1.3656) loss_bbox_2: 1.8248 (1.8248) loss_giou_2: 1.8852 (1.8852) loss_ce_3: 1.3876 (1.3876) loss_bbox_3: 1.8491 (1.8491) loss_giou_3: 1.8648 (1.8648) loss_ce_4: 1.3694 (1.3694) loss_bbox_4: 1.7837 (1.7837) loss_giou_4: 1.9151 (1.9151) loss_ce_unscaled: 1.3556 (1.3556) class_error_unscaled: 14.2857 (14.2857) loss_bbox_unscaled: 0.3620 (0.3620) loss_giou_unscaled: 0.9501 (0.9501) cardinality_error_unscaled: 75.6250 (75.6250) loss_ce_0_unscaled: 1.4654 (1.4654) loss_bbox_0_unscaled: 0.3725 (0.3725) loss_giou_0_unscaled: 0.9548 (0.9548) cardinality_error_0_unscaled: 86.5000 (86.5000) loss_ce_1_unscaled: 1.4117 (1.4117) loss_bbox_1_unscaled: 0.3652 (0.3652) loss_giou_1_unscaled: 0.9202 (0.9202) cardinality_error_1_unscaled: 81.6875 (81.6875) loss_ce_2_unscaled: 1.3656 (1.3656) loss_bbox_2_unscaled: 0.3650 (0.3650) loss_giou_2_unscaled: 0.9426 (0.9426) cardinality_error_2_unscaled: 76.5000 (76.5000) loss_ce_3_unscaled: 1.3876 (1.3876) loss_bbox_3_unscaled: 0.3698 (0.3698) loss_giou_3_unscaled: 0.9324 (0.9324) cardinality_error_3_unscaled: 78.9375 (78.9375) loss_ce_4_unscaled: 1.3694 (1.3694) loss_bbox_4_unscaled: 0.3567 (0.3567) loss_giou_4_unscaled: 0.9575 (0.9575) cardinality_error_4_unscaled: 76.6875 (76.6875) time: 2.4239 data: 1.2136 max mem: 12069
Epoch: [1] [10/16] eta: 0:00:08 lr: 0.000010 class_error: 26.67 loss: 28.4990 (27.8393) loss_ce: 1.2110 (1.2461) loss_bbox: 1.5045 (1.4878) loss_giou: 1.8259 (1.8101) loss_ce_0: 1.4127 (1.4376) loss_bbox_0: 1.6641 (1.6179) loss_giou_0: 1.7647 (1.7759) loss_ce_1: 1.3358 (1.3525) loss_bbox_1: 1.6075 (1.5642) loss_giou_1: 1.8164 (1.7867) loss_ce_2: 1.2577 (1.2810) loss_bbox_2: 1.5672 (1.5551) loss_giou_2: 1.8237 (1.7913) loss_ce_3: 1.2626 (1.2917) loss_bbox_3: 1.5190 (1.5054) loss_giou_3: 1.8134 (1.7877) loss_ce_4: 1.2418 (1.2725) loss_bbox_4: 1.4969 (1.4662) loss_giou_4: 1.8116 (1.8099) loss_ce_unscaled: 1.2110 (1.2461) class_error_unscaled: 50.0000 (43.1548) loss_bbox_unscaled: 0.3009 (0.2976) loss_giou_unscaled: 0.9130 (0.9050) cardinality_error_unscaled: 61.0000 (62.4545) loss_ce_0_unscaled: 1.4127 (1.4376) loss_bbox_0_unscaled: 0.3328 (0.3236) loss_giou_0_unscaled: 0.8824 (0.8880) cardinality_error_0_unscaled: 80.3125 (81.5455) loss_ce_1_unscaled: 1.3358 (1.3525) loss_bbox_1_unscaled: 0.3215 (0.3128) loss_giou_1_unscaled: 0.9082 (0.8933) cardinality_error_1_unscaled: 72.7500 (74.1534) loss_ce_2_unscaled: 1.2577 (1.2810) loss_bbox_2_unscaled: 0.3134 (0.3110) loss_giou_2_unscaled: 0.9118 (0.8956) cardinality_error_2_unscaled: 65.8750 (65.9375) loss_ce_3_unscaled: 1.2626 (1.2917) loss_bbox_3_unscaled: 0.3038 (0.3011) loss_giou_3_unscaled: 0.9067 (0.8938) cardinality_error_3_unscaled: 67.5000 (68.4716) loss_ce_4_unscaled: 1.2418 (1.2725) loss_bbox_4_unscaled: 0.2994 (0.2932) loss_giou_4_unscaled: 0.9058 (0.9049) cardinality_error_4_unscaled: 63.6250 (65.1023) time: 1.3559 data: 0.1823 max mem: 12069
Epoch: [1] [15/16] eta: 0:00:01 lr: 0.000010 class_error: 62.50 loss: 27.0730 (26.5761) loss_ce: 1.1500 (1.1965) loss_bbox: 1.3663 (1.3855) loss_giou: 1.7428 (1.7376) loss_ce_0: 1.3713 (1.4012) loss_bbox_0: 1.5393 (1.5170) loss_giou_0: 1.7180 (1.7234) loss_ce_1: 1.2608 (1.3097) loss_bbox_1: 1.5415 (1.4605) loss_giou_1: 1.7381 (1.7309) loss_ce_2: 1.1868 (1.2385) loss_bbox_2: 1.5244 (1.4554) loss_giou_2: 1.7878 (1.7183) loss_ce_3: 1.1993 (1.2452) loss_bbox_3: 1.3946 (1.4003) loss_giou_3: 1.7455 (1.7216) loss_ce_4: 1.1815 (1.2267) loss_bbox_4: 1.3694 (1.3652) loss_giou_4: 1.7337 (1.7425) loss_ce_unscaled: 1.1500 (1.1965) class_error_unscaled: 50.0000 (48.6793) loss_bbox_unscaled: 0.2733 (0.2771) loss_giou_unscaled: 0.8714 (0.8688) cardinality_error_unscaled: 52.7500 (56.4688) loss_ce_0_unscaled: 1.3713 (1.4012) loss_bbox_0_unscaled: 0.3079 (0.3034) loss_giou_0_unscaled: 0.8590 (0.8617) cardinality_error_0_unscaled: 76.7500 (78.3320) loss_ce_1_unscaled: 1.2608 (1.3097) loss_bbox_1_unscaled: 0.3083 (0.2921) loss_giou_1_unscaled: 0.8691 (0.8654) cardinality_error_1_unscaled: 67.1875 (69.4492) loss_ce_2_unscaled: 1.1868 (1.2385) loss_bbox_2_unscaled: 0.3049 (0.2911) loss_giou_2_unscaled: 0.8939 (0.8591) cardinality_error_2_unscaled: 57.1250 (61.0586) loss_ce_3_unscaled: 1.1993 (1.2452) loss_bbox_3_unscaled: 0.2789 (0.2801) loss_giou_3_unscaled: 0.8727 (0.8608) cardinality_error_3_unscaled: 60.1250 (63.1055) loss_ce_4_unscaled: 1.1815 (1.2267) loss_bbox_4_unscaled: 0.2739 (0.2730) loss_giou_4_unscaled: 0.8668 (0.8713) cardinality_error_4_unscaled: 56.6875 (59.5703) time: 1.3095 data: 0.1444 max mem: 12069
Epoch: [1] Total time: 0:00:21 (1.3138 s / it)
Averaged stats: lr: 0.000010 class_error: 62.50 loss: 27.0730 (26.5761) loss_ce: 1.1500 (1.1965) loss_bbox: 1.3663 (1.3855) loss_giou: 1.7428 (1.7376) loss_ce_0: 1.3713 (1.4012) loss_bbox_0: 1.5393 (1.5170) loss_giou_0: 1.7180 (1.7234) loss_ce_1: 1.2608 (1.3097) loss_bbox_1: 1.5415 (1.4605) loss_giou_1: 1.7381 (1.7309) loss_ce_2: 1.1868 (1.2385) loss_bbox_2: 1.5244 (1.4554) loss_giou_2: 1.7878 (1.7183) loss_ce_3: 1.1993 (1.2452) loss_bbox_3: 1.3946 (1.4003) loss_giou_3: 1.7455 (1.7216) loss_ce_4: 1.1815 (1.2267) loss_bbox_4: 1.3694 (1.3652) loss_giou_4: 1.7337 (1.7425) loss_ce_unscaled: 1.1500 (1.1965) class_error_unscaled: 50.0000 (48.6793) loss_bbox_unscaled: 0.2733 (0.2771) loss_giou_unscaled: 0.8714 (0.8688) cardinality_error_unscaled: 52.7500 (56.4688) loss_ce_0_unscaled: 1.3713 (1.4012) loss_bbox_0_unscaled: 0.3079 (0.3034) loss_giou_0_unscaled: 0.8590 (0.8617) cardinality_error_0_unscaled: 76.7500 (78.3320) loss_ce_1_unscaled: 1.2608 (1.3097) loss_bbox_1_unscaled: 0.3083 (0.2921) loss_giou_1_unscaled: 0.8691 (0.8654) cardinality_error_1_unscaled: 67.1875 (69.4492) loss_ce_2_unscaled: 1.1868 (1.2385) loss_bbox_2_unscaled: 0.3049 (0.2911) loss_giou_2_unscaled: 0.8939 (0.8591) cardinality_error_2_unscaled: 57.1250 (61.0586) loss_ce_3_unscaled: 1.1993 (1.2452) loss_bbox_3_unscaled: 0.2789 (0.2801) loss_giou_3_unscaled: 0.8727 (0.8608) cardinality_error_3_unscaled: 60.1250 (63.1055) loss_ce_4_unscaled: 1.1815 (1.2267) loss_bbox_4_unscaled: 0.2739 (0.2730) loss_giou_4_unscaled: 0.8668 (0.8713) cardinality_error_4_unscaled: 56.6875 (59.5703)
Test: [0/4] eta: 0:00:07 class_error: 43.75 loss: 20.4049 (20.4049) loss_ce: 1.1008 (1.1008) loss_bbox: 0.8092 (0.8092) loss_giou: 1.3155 (1.3155) loss_ce_0: 1.3006 (1.3006) loss_bbox_0: 0.9955 (0.9955) loss_giou_0: 1.4015 (1.4015) loss_ce_1: 1.2028 (1.2028) loss_bbox_1: 0.9187 (0.9187) loss_giou_1: 1.3513 (1.3513) loss_ce_2: 1.1561 (1.1561) loss_bbox_2: 0.8889 (0.8889) loss_giou_2: 1.3756 (1.3756) loss_ce_3: 1.1565 (1.1565) loss_bbox_3: 0.8364 (0.8364) loss_giou_3: 1.3172 (1.3172) loss_ce_4: 1.1380 (1.1380) loss_bbox_4: 0.8095 (0.8095) loss_giou_4: 1.3310 (1.3310) loss_ce_unscaled: 1.1008 (1.1008) class_error_unscaled: 43.7500 (43.7500) loss_bbox_unscaled: 0.1618 (0.1618) loss_giou_unscaled: 0.6578 (0.6578) cardinality_error_unscaled: 47.0625 (47.0625) loss_ce_0_unscaled: 1.3006 (1.3006) loss_bbox_0_unscaled: 0.1991 (0.1991) loss_giou_0_unscaled: 0.7007 (0.7007) cardinality_error_0_unscaled: 71.8125 (71.8125) loss_ce_1_unscaled: 1.2028 (1.2028) loss_bbox_1_unscaled: 0.1837 (0.1837) loss_giou_1_unscaled: 0.6756 (0.6756) cardinality_error_1_unscaled: 60.5625 (60.5625) loss_ce_2_unscaled: 1.1561 (1.1561) loss_bbox_2_unscaled: 0.1778 (0.1778) loss_giou_2_unscaled: 0.6878 (0.6878) cardinality_error_2_unscaled: 55.1250 (55.1250) loss_ce_3_unscaled: 1.1565 (1.1565) loss_bbox_3_unscaled: 0.1673 (0.1673) loss_giou_3_unscaled: 0.6586 (0.6586) cardinality_error_3_unscaled: 55.6250 (55.6250) loss_ce_4_unscaled: 1.1380 (1.1380) loss_bbox_4_unscaled: 0.1619 (0.1619) loss_giou_4_unscaled: 0.6655 (0.6655) cardinality_error_4_unscaled: 52.4375 (52.4375) time: 1.9963 data: 1.1587 max mem: 12069
Test: [3/4] eta: 0:00:01 class_error: 37.50 loss: 20.6760 (22.7299) loss_ce: 1.0878 (1.1040) loss_bbox: 0.8909 (1.0358) loss_giou: 1.3659 (1.5033) loss_ce_0: 1.2994 (1.3067) loss_bbox_0: 0.9955 (1.1465) loss_giou_0: 1.4015 (1.5446) loss_ce_1: 1.1863 (1.2072) loss_bbox_1: 0.9498 (1.1221) loss_giou_1: 1.4078 (1.5435) loss_ce_2: 1.1561 (1.1621) loss_bbox_2: 0.9319 (1.0961) loss_giou_2: 1.4273 (1.5346) loss_ce_3: 1.1565 (1.1613) loss_bbox_3: 0.9194 (1.0539) loss_giou_3: 1.3845 (1.5147) loss_ce_4: 1.1314 (1.1415) loss_bbox_4: 0.9043 (1.0446) loss_giou_4: 1.3757 (1.5077) loss_ce_unscaled: 1.0878 (1.1040) class_error_unscaled: 43.7500 (51.5625) loss_bbox_unscaled: 0.1782 (0.2072) loss_giou_unscaled: 0.6830 (0.7517) cardinality_error_unscaled: 41.2500 (43.3594) loss_ce_0_unscaled: 1.2994 (1.3067) loss_bbox_0_unscaled: 0.1991 (0.2293) loss_giou_0_unscaled: 0.7007 (0.7723) cardinality_error_0_unscaled: 66.8125 (67.9219) loss_ce_1_unscaled: 1.1863 (1.2072) loss_bbox_1_unscaled: 0.1900 (0.2244) loss_giou_1_unscaled: 0.7039 (0.7717) cardinality_error_1_unscaled: 55.6875 (56.5781) loss_ce_2_unscaled: 1.1561 (1.1621) loss_bbox_2_unscaled: 0.1864 (0.2192) loss_giou_2_unscaled: 0.7136 (0.7673) cardinality_error_2_unscaled: 47.5625 (50.6250) loss_ce_3_unscaled: 1.1565 (1.1613) loss_bbox_3_unscaled: 0.1839 (0.2108) loss_giou_3_unscaled: 0.6923 (0.7573) cardinality_error_3_unscaled: 49.1250 (50.7344) loss_ce_4_unscaled: 1.1314 (1.1415) loss_bbox_4_unscaled: 0.1809 (0.2089) loss_giou_4_unscaled: 0.6878 (0.7538) cardinality_error_4_unscaled: 44.6250 (47.7812) time: 1.0320 data: 0.3829 max mem: 12069
Test: Total time: 0:00:04 (1.0504 s / it)
Averaged stats: class_error: 37.50 loss: 20.6760 (22.7299) loss_ce: 1.0878 (1.1040) loss_bbox: 0.8909 (1.0358) loss_giou: 1.3659 (1.5033) loss_ce_0: 1.2994 (1.3067) loss_bbox_0: 0.9955 (1.1465) loss_giou_0: 1.4015 (1.5446) loss_ce_1: 1.1863 (1.2072) loss_bbox_1: 0.9498 (1.1221) loss_giou_1: 1.4078 (1.5435) loss_ce_2: 1.1561 (1.1621) loss_bbox_2: 0.9319 (1.0961) loss_giou_2: 1.4273 (1.5346) loss_ce_3: 1.1565 (1.1613) loss_bbox_3: 0.9194 (1.0539) loss_giou_3: 1.3845 (1.5147) loss_ce_4: 1.1314 (1.1415) loss_bbox_4: 0.9043 (1.0446) loss_giou_4: 1.3757 (1.5077) loss_ce_unscaled: 1.0878 (1.1040) class_error_unscaled: 43.7500 (51.5625) loss_bbox_unscaled: 0.1782 (0.2072) loss_giou_unscaled: 0.6830 (0.7517) cardinality_error_unscaled: 41.2500 (43.3594) loss_ce_0_unscaled: 1.2994 (1.3067) loss_bbox_0_unscaled: 0.1991 (0.2293) loss_giou_0_unscaled: 0.7007 (0.7723) cardinality_error_0_unscaled: 66.8125 (67.9219) loss_ce_1_unscaled: 1.1863 (1.2072) loss_bbox_1_unscaled: 0.1900 (0.2244) loss_giou_1_unscaled: 0.7039 (0.7717) cardinality_error_1_unscaled: 55.6875 (56.5781) loss_ce_2_unscaled: 1.1561 (1.1621) loss_bbox_2_unscaled: 0.1864 (0.2192) loss_giou_2_unscaled: 0.7136 (0.7673) cardinality_error_2_unscaled: 47.5625 (50.6250) loss_ce_3_unscaled: 1.1565 (1.1613) loss_bbox_3_unscaled: 0.1839 (0.2108) loss_giou_3_unscaled: 0.6923 (0.7573) cardinality_error_3_unscaled: 49.1250 (50.7344) loss_ce_4_unscaled: 1.1314 (1.1415) loss_bbox_4_unscaled: 0.1809 (0.2089) loss_giou_4_unscaled: 0.6878 (0.7538) cardinality_error_4_unscaled: 44.6250 (47.7812)
Accumulating evaluation results...
DONE (t=0.06s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.055
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.075
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.019
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.066
Epoch: [2] [ 0/16] eta: 0:00:40 lr: 0.000010 class_error: 46.67 loss: 23.8726 (23.8726) loss_ce: 1.0272 (1.0272) loss_bbox: 1.2105 (1.2105) loss_giou: 1.5749 (1.5749) loss_ce_0: 1.2895 (1.2895) loss_bbox_0: 1.2989 (1.2989) loss_giou_0: 1.6609 (1.6609) loss_ce_1: 1.1823 (1.1823) loss_bbox_1: 1.2440 (1.2440) loss_giou_1: 1.6782 (1.6782) loss_ce_2: 1.1099 (1.1099) loss_bbox_2: 1.2367 (1.2367) loss_giou_2: 1.6355 (1.6355) loss_ce_3: 1.0986 (1.0986) loss_bbox_3: 1.1917 (1.1917) loss_giou_3: 1.5823 (1.5823) loss_ce_4: 1.0677 (1.0677) loss_bbox_4: 1.2152 (1.2152) loss_giou_4: 1.5687 (1.5687) loss_ce_unscaled: 1.0272 (1.0272) class_error_unscaled: 46.6667 (46.6667) loss_bbox_unscaled: 0.2421 (0.2421) loss_giou_unscaled: 0.7874 (0.7874) cardinality_error_unscaled: 40.3750 (40.3750) loss_ce_0_unscaled: 1.2895 (1.2895) loss_bbox_0_unscaled: 0.2598 (0.2598) loss_giou_0_unscaled: 0.8304 (0.8304) cardinality_error_0_unscaled: 71.8125 (71.8125) loss_ce_1_unscaled: 1.1823 (1.1823) loss_bbox_1_unscaled: 0.2488 (0.2488) loss_giou_1_unscaled: 0.8391 (0.8391) cardinality_error_1_unscaled: 61.1875 (61.1875) loss_ce_2_unscaled: 1.1099 (1.1099) loss_bbox_2_unscaled: 0.2473 (0.2473) loss_giou_2_unscaled: 0.8177 (0.8177) cardinality_error_2_unscaled: 50.1250 (50.1250) loss_ce_3_unscaled: 1.0986 (1.0986) loss_bbox_3_unscaled: 0.2383 (0.2383) loss_giou_3_unscaled: 0.7911 (0.7911) cardinality_error_3_unscaled: 50.8750 (50.8750) loss_ce_4_unscaled: 1.0677 (1.0677) loss_bbox_4_unscaled: 0.2430 (0.2430) loss_giou_4_unscaled: 0.7844 (0.7844) cardinality_error_4_unscaled: 44.6875 (44.6875) time: 2.5165 data: 1.2293 max mem: 12069
Epoch: [2] [10/16] eta: 0:00:08 lr: 0.000010 class_error: 78.57 loss: 23.2214 (22.9994) loss_ce: 0.9646 (0.9952) loss_bbox: 1.1412 (1.1172) loss_giou: 1.6317 (1.5968) loss_ce_0: 1.2384 (1.2566) loss_bbox_0: 1.1892 (1.2023) loss_giou_0: 1.5928 (1.5712) loss_ce_1: 1.1299 (1.1442) loss_bbox_1: 1.1713 (1.1746) loss_giou_1: 1.6234 (1.5871) loss_ce_2: 1.0525 (1.0754) loss_bbox_2: 1.1420 (1.1824) loss_giou_2: 1.6145 (1.5753) loss_ce_3: 1.0387 (1.0628) loss_bbox_3: 1.1071 (1.1344) loss_giou_3: 1.5821 (1.5773) loss_ce_4: 1.0059 (1.0329) loss_bbox_4: 1.1286 (1.1143) loss_giou_4: 1.6322 (1.5993) loss_ce_unscaled: 0.9646 (0.9952) class_error_unscaled: 46.6667 (49.6429) loss_bbox_unscaled: 0.2282 (0.2234) loss_giou_unscaled: 0.8159 (0.7984) cardinality_error_unscaled: 32.3750 (34.7898) loss_ce_0_unscaled: 1.2384 (1.2566) loss_bbox_0_unscaled: 0.2378 (0.2405) loss_giou_0_unscaled: 0.7964 (0.7856) cardinality_error_0_unscaled: 65.5000 (65.9318) loss_ce_1_unscaled: 1.1299 (1.1442) loss_bbox_1_unscaled: 0.2343 (0.2349) loss_giou_1_unscaled: 0.8117 (0.7936) cardinality_error_1_unscaled: 52.9375 (53.1648) loss_ce_2_unscaled: 1.0525 (1.0754) loss_bbox_2_unscaled: 0.2284 (0.2365) loss_giou_2_unscaled: 0.8072 (0.7877) cardinality_error_2_unscaled: 42.8125 (43.9261) loss_ce_3_unscaled: 1.0387 (1.0628) loss_bbox_3_unscaled: 0.2214 (0.2269) loss_giou_3_unscaled: 0.7911 (0.7886) cardinality_error_3_unscaled: 42.3750 (43.6818) loss_ce_4_unscaled: 1.0059 (1.0329) loss_bbox_4_unscaled: 0.2257 (0.2229) loss_giou_4_unscaled: 0.8161 (0.7997) cardinality_error_4_unscaled: 36.6250 (39.0625) time: 1.3453 data: 0.1695 max mem: 12069
Epoch: [2] [15/16] eta: 0:00:01 lr: 0.000010 class_error: 57.14 loss: 22.4510 (22.4567) loss_ce: 0.9534 (0.9602) loss_bbox: 1.0866 (1.0859) loss_giou: 1.5749 (1.5670) loss_ce_0: 1.2153 (1.2247) loss_bbox_0: 1.1600 (1.1649) loss_giou_0: 1.5607 (1.5418) loss_ce_1: 1.0987 (1.1135) loss_bbox_1: 1.1605 (1.1493) loss_giou_1: 1.6096 (1.5620) loss_ce_2: 1.0369 (1.0457) loss_bbox_2: 1.1420 (1.1469) loss_giou_2: 1.5470 (1.5490) loss_ce_3: 1.0189 (1.0303) loss_bbox_3: 1.1213 (1.1139) loss_giou_3: 1.5575 (1.5453) loss_ce_4: 0.9832 (0.9997) loss_bbox_4: 1.1127 (1.0889) loss_giou_4: 1.5717 (1.5675) loss_ce_unscaled: 0.9534 (0.9602) class_error_unscaled: 46.6667 (49.0030) loss_bbox_unscaled: 0.2173 (0.2172) loss_giou_unscaled: 0.7874 (0.7835) cardinality_error_unscaled: 31.4375 (31.8633) loss_ce_0_unscaled: 1.2153 (1.2247) loss_bbox_0_unscaled: 0.2320 (0.2330) loss_giou_0_unscaled: 0.7803 (0.7709) cardinality_error_0_unscaled: 62.1875 (63.2812) loss_ce_1_unscaled: 1.0987 (1.1135) loss_bbox_1_unscaled: 0.2321 (0.2299) loss_giou_1_unscaled: 0.8048 (0.7810) cardinality_error_1_unscaled: 48.7500 (50.1602) loss_ce_2_unscaled: 1.0369 (1.0457) loss_bbox_2_unscaled: 0.2284 (0.2294) loss_giou_2_unscaled: 0.7735 (0.7745) cardinality_error_2_unscaled: 41.8125 (41.3398) loss_ce_3_unscaled: 1.0189 (1.0303) loss_bbox_3_unscaled: 0.2243 (0.2228) loss_giou_3_unscaled: 0.7788 (0.7726) cardinality_error_3_unscaled: 39.6875 (40.7031) loss_ce_4_unscaled: 0.9832 (0.9997) loss_bbox_4_unscaled: 0.2225 (0.2178) loss_giou_4_unscaled: 0.7858 (0.7838) cardinality_error_4_unscaled: 36.0000 (36.1992) time: 1.2969 data: 0.1342 max mem: 12069
Epoch: [2] Total time: 0:00:20 (1.3012 s / it)
Averaged stats: lr: 0.000010 class_error: 57.14 loss: 22.4510 (22.4567) loss_ce: 0.9534 (0.9602) loss_bbox: 1.0866 (1.0859) loss_giou: 1.5749 (1.5670) loss_ce_0: 1.2153 (1.2247) loss_bbox_0: 1.1600 (1.1649) loss_giou_0: 1.5607 (1.5418) loss_ce_1: 1.0987 (1.1135) loss_bbox_1: 1.1605 (1.1493) loss_giou_1: 1.6096 (1.5620) loss_ce_2: 1.0369 (1.0457) loss_bbox_2: 1.1420 (1.1469) loss_giou_2: 1.5470 (1.5490) loss_ce_3: 1.0189 (1.0303) loss_bbox_3: 1.1213 (1.1139) loss_giou_3: 1.5575 (1.5453) loss_ce_4: 0.9832 (0.9997) loss_bbox_4: 1.1127 (1.0889) loss_giou_4: 1.5717 (1.5675) loss_ce_unscaled: 0.9534 (0.9602) class_error_unscaled: 46.6667 (49.0030) loss_bbox_unscaled: 0.2173 (0.2172) loss_giou_unscaled: 0.7874 (0.7835) cardinality_error_unscaled: 31.4375 (31.8633) loss_ce_0_unscaled: 1.2153 (1.2247) loss_bbox_0_unscaled: 0.2320 (0.2330) loss_giou_0_unscaled: 0.7803 (0.7709) cardinality_error_0_unscaled: 62.1875 (63.2812) loss_ce_1_unscaled: 1.0987 (1.1135) loss_bbox_1_unscaled: 0.2321 (0.2299) loss_giou_1_unscaled: 0.8048 (0.7810) cardinality_error_1_unscaled: 48.7500 (50.1602) loss_ce_2_unscaled: 1.0369 (1.0457) loss_bbox_2_unscaled: 0.2284 (0.2294) loss_giou_2_unscaled: 0.7735 (0.7745) cardinality_error_2_unscaled: 41.8125 (41.3398) loss_ce_3_unscaled: 1.0189 (1.0303) loss_bbox_3_unscaled: 0.2243 (0.2228) loss_giou_3_unscaled: 0.7788 (0.7726) cardinality_error_3_unscaled: 39.6875 (40.7031) loss_ce_4_unscaled: 0.9832 (0.9997) loss_bbox_4_unscaled: 0.2225 (0.2178) loss_giou_4_unscaled: 0.7858 (0.7838) cardinality_error_4_unscaled: 36.0000 (36.1992)
Test: [0/4] eta: 0:00:08 class_error: 43.75 loss: 17.6108 (17.6108) loss_ce: 0.8781 (0.8781) loss_bbox: 0.6361 (0.6361) loss_giou: 1.2426 (1.2426) loss_ce_0: 1.1069 (1.1069) loss_bbox_0: 0.7879 (0.7879) loss_giou_0: 1.3262 (1.3262) loss_ce_1: 1.0033 (1.0033) loss_bbox_1: 0.7700 (0.7700) loss_giou_1: 1.3175 (1.3175) loss_ce_2: 0.9687 (0.9687) loss_bbox_2: 0.7157 (0.7157) loss_giou_2: 1.2341 (1.2341) loss_ce_3: 0.9486 (0.9486) loss_bbox_3: 0.6830 (0.6830) loss_giou_3: 1.1944 (1.1944) loss_ce_4: 0.9172 (0.9172) loss_bbox_4: 0.6535 (0.6535) loss_giou_4: 1.2270 (1.2270) loss_ce_unscaled: 0.8781 (0.8781) class_error_unscaled: 43.7500 (43.7500) loss_bbox_unscaled: 0.1272 (0.1272) loss_giou_unscaled: 0.6213 (0.6213) cardinality_error_unscaled: 26.8125 (26.8125) loss_ce_0_unscaled: 1.1069 (1.1069) loss_bbox_0_unscaled: 0.1576 (0.1576) loss_giou_0_unscaled: 0.6631 (0.6631) cardinality_error_0_unscaled: 53.8125 (53.8125) loss_ce_1_unscaled: 1.0033 (1.0033) loss_bbox_1_unscaled: 0.1540 (0.1540) loss_giou_1_unscaled: 0.6587 (0.6587) cardinality_error_1_unscaled: 40.8750 (40.8750) loss_ce_2_unscaled: 0.9687 (0.9687) loss_bbox_2_unscaled: 0.1431 (0.1431) loss_giou_2_unscaled: 0.6170 (0.6170) cardinality_error_2_unscaled: 35.3750 (35.3750) loss_ce_3_unscaled: 0.9486 (0.9486) loss_bbox_3_unscaled: 0.1366 (0.1366) loss_giou_3_unscaled: 0.5972 (0.5972) cardinality_error_3_unscaled: 35.5625 (35.5625) loss_ce_4_unscaled: 0.9172 (0.9172) loss_bbox_4_unscaled: 0.1307 (0.1307) loss_giou_4_unscaled: 0.6135 (0.6135) cardinality_error_4_unscaled: 32.0625 (32.0625) time: 2.0643 data: 1.3506 max mem: 12069
Test: [3/4] eta: 0:00:01 class_error: 37.50 loss: 17.6108 (19.0945) loss_ce: 0.8781 (0.8910) loss_bbox: 0.6407 (0.7958) loss_giou: 1.2426 (1.3164) loss_ce_0: 1.1069 (1.1314) loss_bbox_0: 0.8089 (0.9001) loss_giou_0: 1.3262 (1.4252) loss_ce_1: 1.0033 (1.0280) loss_bbox_1: 0.7700 (0.8815) loss_giou_1: 1.3381 (1.4132) loss_ce_2: 0.9687 (0.9883) loss_bbox_2: 0.7157 (0.8299) loss_giou_2: 1.3208 (1.3595) loss_ce_3: 0.9486 (0.9661) loss_bbox_3: 0.7015 (0.8086) loss_giou_3: 1.2182 (1.3135) loss_ce_4: 0.9172 (0.9338) loss_bbox_4: 0.6535 (0.8018) loss_giou_4: 1.2279 (1.3104) loss_ce_unscaled: 0.8781 (0.8910) class_error_unscaled: 43.7500 (51.5625) loss_bbox_unscaled: 0.1281 (0.1592) loss_giou_unscaled: 0.6213 (0.6582) cardinality_error_unscaled: 26.8125 (26.5469) loss_ce_0_unscaled: 1.1069 (1.1314) loss_bbox_0_unscaled: 0.1618 (0.1800) loss_giou_0_unscaled: 0.6631 (0.7126) cardinality_error_0_unscaled: 51.8750 (52.5000) loss_ce_1_unscaled: 1.0033 (1.0280) loss_bbox_1_unscaled: 0.1540 (0.1763) loss_giou_1_unscaled: 0.6691 (0.7066) cardinality_error_1_unscaled: 39.5000 (39.8438) loss_ce_2_unscaled: 0.9687 (0.9883) loss_bbox_2_unscaled: 0.1431 (0.1660) loss_giou_2_unscaled: 0.6604 (0.6798) cardinality_error_2_unscaled: 35.3750 (34.8438) loss_ce_3_unscaled: 0.9486 (0.9661) loss_bbox_3_unscaled: 0.1403 (0.1617) loss_giou_3_unscaled: 0.6091 (0.6568) cardinality_error_3_unscaled: 35.1875 (33.8594) loss_ce_4_unscaled: 0.9172 (0.9338) loss_bbox_4_unscaled: 0.1307 (0.1604) loss_giou_4_unscaled: 0.6140 (0.6552) cardinality_error_4_unscaled: 31.6875 (30.7188) time: 1.0315 data: 0.4014 max mem: 12069
Test: Total time: 0:00:04 (1.0484 s / it)
Averaged stats: class_error: 37.50 loss: 17.6108 (19.0945) loss_ce: 0.8781 (0.8910) loss_bbox: 0.6407 (0.7958) loss_giou: 1.2426 (1.3164) loss_ce_0: 1.1069 (1.1314) loss_bbox_0: 0.8089 (0.9001) loss_giou_0: 1.3262 (1.4252) loss_ce_1: 1.0033 (1.0280) loss_bbox_1: 0.7700 (0.8815) loss_giou_1: 1.3381 (1.4132) loss_ce_2: 0.9687 (0.9883) loss_bbox_2: 0.7157 (0.8299) loss_giou_2: 1.3208 (1.3595) loss_ce_3: 0.9486 (0.9661) loss_bbox_3: 0.7015 (0.8086) loss_giou_3: 1.2182 (1.3135) loss_ce_4: 0.9172 (0.9338) loss_bbox_4: 0.6535 (0.8018) loss_giou_4: 1.2279 (1.3104) loss_ce_unscaled: 0.8781 (0.8910) class_error_unscaled: 43.7500 (51.5625) loss_bbox_unscaled: 0.1281 (0.1592) loss_giou_unscaled: 0.6213 (0.6582) cardinality_error_unscaled: 26.8125 (26.5469) loss_ce_0_unscaled: 1.1069 (1.1314) loss_bbox_0_unscaled: 0.1618 (0.1800) loss_giou_0_unscaled: 0.6631 (0.7126) cardinality_error_0_unscaled: 51.8750 (52.5000) loss_ce_1_unscaled: 1.0033 (1.0280) loss_bbox_1_unscaled: 0.1540 (0.1763) loss_giou_1_unscaled: 0.6691 (0.7066) cardinality_error_1_unscaled: 39.5000 (39.8438) loss_ce_2_unscaled: 0.9687 (0.9883) loss_bbox_2_unscaled: 0.1431 (0.1660) loss_giou_2_unscaled: 0.6604 (0.6798) cardinality_error_2_unscaled: 35.3750 (34.8438) loss_ce_3_unscaled: 0.9486 (0.9661) loss_bbox_3_unscaled: 0.1403 (0.1617) loss_giou_3_unscaled: 0.6091 (0.6568) cardinality_error_3_unscaled: 35.1875 (33.8594) loss_ce_4_unscaled: 0.9172 (0.9338) loss_bbox_4_unscaled: 0.1307 (0.1604) loss_giou_4_unscaled: 0.6140 (0.6552) cardinality_error_4_unscaled: 31.6875 (30.7188)
Accumulating evaluation results...
DONE (t=0.07s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.077
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.125
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.031
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.087
Epoch: [3] [ 0/16] eta: 0:00:37 lr: 0.000010 class_error: 31.25 loss: 19.4424 (19.4424) loss_ce: 0.8508 (0.8508) loss_bbox: 0.8568 (0.8568) loss_giou: 1.4465 (1.4465) loss_ce_0: 1.1214 (1.1214) loss_bbox_0: 0.8855 (0.8855) loss_giou_0: 1.4773 (1.4773) loss_ce_1: 1.0242 (1.0242) loss_bbox_1: 0.8641 (0.8641) loss_giou_1: 1.3409 (1.3409) loss_ce_2: 0.9619 (0.9619) loss_bbox_2: 0.8407 (0.8407) loss_giou_2: 1.3634 (1.3634) loss_ce_3: 0.9391 (0.9391) loss_bbox_3: 0.8594 (0.8594) loss_giou_3: 1.4298 (1.4298) loss_ce_4: 0.8954 (0.8954) loss_bbox_4: 0.8932 (0.8932) loss_giou_4: 1.3919 (1.3919) loss_ce_unscaled: 0.8508 (0.8508) class_error_unscaled: 31.2500 (31.2500) loss_bbox_unscaled: 0.1714 (0.1714) loss_giou_unscaled: 0.7233 (0.7233) cardinality_error_unscaled: 23.6875 (23.6875) loss_ce_0_unscaled: 1.1214 (1.1214) loss_bbox_0_unscaled: 0.1771 (0.1771) loss_giou_0_unscaled: 0.7387 (0.7387) cardinality_error_0_unscaled: 54.3125 (54.3125) loss_ce_1_unscaled: 1.0242 (1.0242) loss_bbox_1_unscaled: 0.1728 (0.1728) loss_giou_1_unscaled: 0.6705 (0.6705) cardinality_error_1_unscaled: 41.5625 (41.5625) loss_ce_2_unscaled: 0.9619 (0.9619) loss_bbox_2_unscaled: 0.1681 (0.1681) loss_giou_2_unscaled: 0.6817 (0.6817) cardinality_error_2_unscaled: 34.8125 (34.8125) loss_ce_3_unscaled: 0.9391 (0.9391) loss_bbox_3_unscaled: 0.1719 (0.1719) loss_giou_3_unscaled: 0.7149 (0.7149) cardinality_error_3_unscaled: 33.3750 (33.3750) loss_ce_4_unscaled: 0.8954 (0.8954) loss_bbox_4_unscaled: 0.1786 (0.1786) loss_giou_4_unscaled: 0.6960 (0.6960) cardinality_error_4_unscaled: 28.1250 (28.1250) time: 2.3690 data: 1.1388 max mem: 12069
Epoch: [3] [10/16] eta: 0:00:08 lr: 0.000010 class_error: 62.50 loss: 20.7727 (20.2514) loss_ce: 0.7927 (0.7914) loss_bbox: 0.9694 (0.9458) loss_giou: 1.4882 (1.4888) loss_ce_0: 1.0656 (1.0708) loss_bbox_0: 1.1621 (1.0337) loss_giou_0: 1.4773 (1.5112) loss_ce_1: 0.9568 (0.9607) loss_bbox_1: 1.0462 (1.0064) loss_giou_1: 1.5343 (1.4897) loss_ce_2: 0.9014 (0.9002) loss_bbox_2: 1.0912 (1.0050) loss_giou_2: 1.5141 (1.4717) loss_ce_3: 0.8706 (0.8731) loss_bbox_3: 1.0002 (0.9593) loss_giou_3: 1.5087 (1.4828) loss_ce_4: 0.8320 (0.8368) loss_bbox_4: 0.9965 (0.9466) loss_giou_4: 1.5106 (1.4773) loss_ce_unscaled: 0.7927 (0.7914) class_error_unscaled: 53.3333 (49.2162) loss_bbox_unscaled: 0.1939 (0.1892) loss_giou_unscaled: 0.7441 (0.7444) cardinality_error_unscaled: 15.8125 (17.1193) loss_ce_0_unscaled: 1.0656 (1.0708) loss_bbox_0_unscaled: 0.2324 (0.2067) loss_giou_0_unscaled: 0.7387 (0.7556) cardinality_error_0_unscaled: 45.3750 (47.1080) loss_ce_1_unscaled: 0.9568 (0.9607) loss_bbox_1_unscaled: 0.2092 (0.2013) loss_giou_1_unscaled: 0.7671 (0.7448) cardinality_error_1_unscaled: 32.4375 (34.4773) loss_ce_2_unscaled: 0.9014 (0.9002) loss_bbox_2_unscaled: 0.2182 (0.2010) loss_giou_2_unscaled: 0.7570 (0.7358) cardinality_error_2_unscaled: 25.8125 (27.9716) loss_ce_3_unscaled: 0.8706 (0.8731) loss_bbox_3_unscaled: 0.2000 (0.1919) loss_giou_3_unscaled: 0.7543 (0.7414) cardinality_error_3_unscaled: 24.4375 (25.6477) loss_ce_4_unscaled: 0.8320 (0.8368) loss_bbox_4_unscaled: 0.1993 (0.1893) loss_giou_4_unscaled: 0.7553 (0.7387) cardinality_error_4_unscaled: 19.7500 (21.5114) time: 1.3508 data: 0.1652 max mem: 12069
Epoch: [3] [15/16] eta: 0:00:01 lr: 0.000010 class_error: 66.67 loss: 19.4424 (19.7079) loss_ce: 0.7449 (0.7605) loss_bbox: 0.8669 (0.9191) loss_giou: 1.4823 (1.4556) loss_ce_0: 1.0172 (1.0400) loss_bbox_0: 0.9221 (1.0051) loss_giou_0: 1.4524 (1.4763) loss_ce_1: 0.9047 (0.9317) loss_bbox_1: 0.9504 (0.9809) loss_giou_1: 1.4315 (1.4653) loss_ce_2: 0.8580 (0.8722) loss_bbox_2: 0.9051 (0.9650) loss_giou_2: 1.4288 (1.4461) loss_ce_3: 0.8256 (0.8433) loss_bbox_3: 0.9260 (0.9299) loss_giou_3: 1.4298 (1.4481) loss_ce_4: 0.7939 (0.8064) loss_bbox_4: 0.8932 (0.9179) loss_giou_4: 1.4581 (1.4444) loss_ce_unscaled: 0.7449 (0.7605) class_error_unscaled: 53.3333 (50.0801) loss_bbox_unscaled: 0.1734 (0.1838) loss_giou_unscaled: 0.7411 (0.7278) cardinality_error_unscaled: 13.0625 (14.6758) loss_ce_0_unscaled: 1.0172 (1.0400) loss_bbox_0_unscaled: 0.1844 (0.2010) loss_giou_0_unscaled: 0.7262 (0.7382) cardinality_error_0_unscaled: 42.0000 (44.3047) loss_ce_1_unscaled: 0.9047 (0.9317) loss_bbox_1_unscaled: 0.1901 (0.1962) loss_giou_1_unscaled: 0.7158 (0.7326) cardinality_error_1_unscaled: 28.8125 (31.4766) loss_ce_2_unscaled: 0.8580 (0.8722) loss_bbox_2_unscaled: 0.1810 (0.1930) loss_giou_2_unscaled: 0.7144 (0.7231) cardinality_error_2_unscaled: 23.4375 (25.1484) loss_ce_3_unscaled: 0.8256 (0.8433) loss_bbox_3_unscaled: 0.1852 (0.1860) loss_giou_3_unscaled: 0.7149 (0.7241) cardinality_error_3_unscaled: 21.1250 (22.5117) loss_ce_4_unscaled: 0.7939 (0.8064) loss_bbox_4_unscaled: 0.1786 (0.1836) loss_giou_4_unscaled: 0.7290 (0.7222) cardinality_error_4_unscaled: 17.0625 (18.6875) time: 1.3059 data: 0.1315 max mem: 12069
Epoch: [3] Total time: 0:00:20 (1.3101 s / it)
Averaged stats: lr: 0.000010 class_error: 66.67 loss: 19.4424 (19.7079) loss_ce: 0.7449 (0.7605) loss_bbox: 0.8669 (0.9191) loss_giou: 1.4823 (1.4556) loss_ce_0: 1.0172 (1.0400) loss_bbox_0: 0.9221 (1.0051) loss_giou_0: 1.4524 (1.4763) loss_ce_1: 0.9047 (0.9317) loss_bbox_1: 0.9504 (0.9809) loss_giou_1: 1.4315 (1.4653) loss_ce_2: 0.8580 (0.8722) loss_bbox_2: 0.9051 (0.9650) loss_giou_2: 1.4288 (1.4461) loss_ce_3: 0.8256 (0.8433) loss_bbox_3: 0.9260 (0.9299) loss_giou_3: 1.4298 (1.4481) loss_ce_4: 0.7939 (0.8064) loss_bbox_4: 0.8932 (0.9179) loss_giou_4: 1.4581 (1.4444) loss_ce_unscaled: 0.7449 (0.7605) class_error_unscaled: 53.3333 (50.0801) loss_bbox_unscaled: 0.1734 (0.1838) loss_giou_unscaled: 0.7411 (0.7278) cardinality_error_unscaled: 13.0625 (14.6758) loss_ce_0_unscaled: 1.0172 (1.0400) loss_bbox_0_unscaled: 0.1844 (0.2010) loss_giou_0_unscaled: 0.7262 (0.7382) cardinality_error_0_unscaled: 42.0000 (44.3047) loss_ce_1_unscaled: 0.9047 (0.9317) loss_bbox_1_unscaled: 0.1901 (0.1962) loss_giou_1_unscaled: 0.7158 (0.7326) cardinality_error_1_unscaled: 28.8125 (31.4766) loss_ce_2_unscaled: 0.8580 (0.8722) loss_bbox_2_unscaled: 0.1810 (0.1930) loss_giou_2_unscaled: 0.7144 (0.7231) cardinality_error_2_unscaled: 23.4375 (25.1484) loss_ce_3_unscaled: 0.8256 (0.8433) loss_bbox_3_unscaled: 0.1852 (0.1860) loss_giou_3_unscaled: 0.7149 (0.7241) cardinality_error_3_unscaled: 21.1250 (22.5117) loss_ce_4_unscaled: 0.7939 (0.8064) loss_bbox_4_unscaled: 0.1786 (0.1836) loss_giou_4_unscaled: 0.7290 (0.7222) cardinality_error_4_unscaled: 17.0625 (18.6875)
Test: [0/4] eta: 0:00:08 class_error: 43.75 loss: 16.3252 (16.3252) loss_ce: 0.7128 (0.7128) loss_bbox: 0.6388 (0.6388) loss_giou: 1.2480 (1.2480) loss_ce_0: 0.9518 (0.9518) loss_bbox_0: 0.7802 (0.7802) loss_giou_0: 1.2257 (1.2257) loss_ce_1: 0.8576 (0.8576) loss_bbox_1: 0.7037 (0.7037) loss_giou_1: 1.1768 (1.1768) loss_ce_2: 0.8264 (0.8264) loss_bbox_2: 0.6233 (0.6233) loss_giou_2: 1.2203 (1.2203) loss_ce_3: 0.7951 (0.7951) loss_bbox_3: 0.6368 (0.6368) loss_giou_3: 1.2497 (1.2497) loss_ce_4: 0.7561 (0.7561) loss_bbox_4: 0.6391 (0.6391) loss_giou_4: 1.2831 (1.2831) loss_ce_unscaled: 0.7128 (0.7128) class_error_unscaled: 43.7500 (43.7500) loss_bbox_unscaled: 0.1278 (0.1278) loss_giou_unscaled: 0.6240 (0.6240) cardinality_error_unscaled: 11.5000 (11.5000) loss_ce_0_unscaled: 0.9518 (0.9518) loss_bbox_0_unscaled: 0.1560 (0.1560) loss_giou_0_unscaled: 0.6129 (0.6129) cardinality_error_0_unscaled: 37.2500 (37.2500) loss_ce_1_unscaled: 0.8576 (0.8576) loss_bbox_1_unscaled: 0.1407 (0.1407) loss_giou_1_unscaled: 0.5884 (0.5884) cardinality_error_1_unscaled: 26.4375 (26.4375) loss_ce_2_unscaled: 0.8264 (0.8264) loss_bbox_2_unscaled: 0.1247 (0.1247) loss_giou_2_unscaled: 0.6102 (0.6102) cardinality_error_2_unscaled: 22.1250 (22.1250) loss_ce_3_unscaled: 0.7951 (0.7951) loss_bbox_3_unscaled: 0.1274 (0.1274) loss_giou_3_unscaled: 0.6248 (0.6248) cardinality_error_3_unscaled: 18.2500 (18.2500) loss_ce_4_unscaled: 0.7561 (0.7561) loss_bbox_4_unscaled: 0.1278 (0.1278) loss_giou_4_unscaled: 0.6415 (0.6415) cardinality_error_4_unscaled: 15.6875 (15.6875) time: 2.0226 data: 1.1949 max mem: 12069
Test: [3/4] eta: 0:00:01 class_error: 43.75 loss: 16.3252 (16.7594) loss_ce: 0.7128 (0.7148) loss_bbox: 0.6513 (0.7148) loss_giou: 1.2480 (1.2231) loss_ce_0: 0.9535 (0.9820) loss_bbox_0: 0.7802 (0.7626) loss_giou_0: 1.2257 (1.2617) loss_ce_1: 0.8576 (0.8834) loss_bbox_1: 0.7037 (0.7330) loss_giou_1: 1.1768 (1.2461) loss_ce_2: 0.8264 (0.8382) loss_bbox_2: 0.6233 (0.7079) loss_giou_2: 1.2203 (1.2374) loss_ce_3: 0.7951 (0.8010) loss_bbox_3: 0.6368 (0.7093) loss_giou_3: 1.2497 (1.2334) loss_ce_4: 0.7561 (0.7610) loss_bbox_4: 0.6391 (0.6894) loss_giou_4: 1.2830 (1.2606) loss_ce_unscaled: 0.7128 (0.7148) class_error_unscaled: 43.7500 (53.1250) loss_bbox_unscaled: 0.1303 (0.1430) loss_giou_unscaled: 0.6240 (0.6115) cardinality_error_unscaled: 11.5000 (12.2812) loss_ce_0_unscaled: 0.9535 (0.9820) loss_bbox_0_unscaled: 0.1560 (0.1525) loss_giou_0_unscaled: 0.6129 (0.6308) cardinality_error_0_unscaled: 37.2500 (37.8594) loss_ce_1_unscaled: 0.8576 (0.8834) loss_bbox_1_unscaled: 0.1407 (0.1466) loss_giou_1_unscaled: 0.5884 (0.6231) cardinality_error_1_unscaled: 26.4375 (27.4844) loss_ce_2_unscaled: 0.8264 (0.8382) loss_bbox_2_unscaled: 0.1247 (0.1416) loss_giou_2_unscaled: 0.6102 (0.6187) cardinality_error_2_unscaled: 22.1250 (22.9688) loss_ce_3_unscaled: 0.7951 (0.8010) loss_bbox_3_unscaled: 0.1274 (0.1419) loss_giou_3_unscaled: 0.6248 (0.6167) cardinality_error_3_unscaled: 18.2500 (19.6094) loss_ce_4_unscaled: 0.7561 (0.7610) loss_bbox_4_unscaled: 0.1278 (0.1379) loss_giou_4_unscaled: 0.6415 (0.6303) cardinality_error_4_unscaled: 15.6875 (16.2031) time: 1.0116 data: 0.3597 max mem: 12069
Test: Total time: 0:00:04 (1.0285 s / it)
Averaged stats: class_error: 43.75 loss: 16.3252 (16.7594) loss_ce: 0.7128 (0.7148) loss_bbox: 0.6513 (0.7148) loss_giou: 1.2480 (1.2231) loss_ce_0: 0.9535 (0.9820) loss_bbox_0: 0.7802 (0.7626) loss_giou_0: 1.2257 (1.2617) loss_ce_1: 0.8576 (0.8834) loss_bbox_1: 0.7037 (0.7330) loss_giou_1: 1.1768 (1.2461) loss_ce_2: 0.8264 (0.8382) loss_bbox_2: 0.6233 (0.7079) loss_giou_2: 1.2203 (1.2374) loss_ce_3: 0.7951 (0.8010) loss_bbox_3: 0.6368 (0.7093) loss_giou_3: 1.2497 (1.2334) loss_ce_4: 0.7561 (0.7610) loss_bbox_4: 0.6391 (0.6894) loss_giou_4: 1.2830 (1.2606) loss_ce_unscaled: 0.7128 (0.7148) class_error_unscaled: 43.7500 (53.1250) loss_bbox_unscaled: 0.1303 (0.1430) loss_giou_unscaled: 0.6240 (0.6115) cardinality_error_unscaled: 11.5000 (12.2812) loss_ce_0_unscaled: 0.9535 (0.9820) loss_bbox_0_unscaled: 0.1560 (0.1525) loss_giou_0_unscaled: 0.6129 (0.6308) cardinality_error_0_unscaled: 37.2500 (37.8594) loss_ce_1_unscaled: 0.8576 (0.8834) loss_bbox_1_unscaled: 0.1407 (0.1466) loss_giou_1_unscaled: 0.5884 (0.6231) cardinality_error_1_unscaled: 26.4375 (27.4844) loss_ce_2_unscaled: 0.8264 (0.8382) loss_bbox_2_unscaled: 0.1247 (0.1416) loss_giou_2_unscaled: 0.6102 (0.6187) cardinality_error_2_unscaled: 22.1250 (22.9688) loss_ce_3_unscaled: 0.7951 (0.8010) loss_bbox_3_unscaled: 0.1274 (0.1419) loss_giou_3_unscaled: 0.6248 (0.6167) cardinality_error_3_unscaled: 18.2500 (19.6094) loss_ce_4_unscaled: 0.7561 (0.7610) loss_bbox_4_unscaled: 0.1278 (0.1379) loss_giou_4_unscaled: 0.6415 (0.6303) cardinality_error_4_unscaled: 15.6875 (16.2031)
Accumulating evaluation results...
DONE (t=0.07s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.002
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.079
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.100
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.075
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.076
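A note on the loss bookkeeping in the epoch and test lines above and below: the suffixes `_0` through `_4` look like per-decoder-layer auxiliary copies of the same three terms (cross-entropy, L1 box, GIoU), and the `*_unscaled` entries appear to be the raw values before the loss weights are applied; in this log the box L1 term is weighted by 5 and the GIoU term by 2 while the CE term is unweighted, which matches DETR-style training defaults. The small check below is a hypothetical illustration of that reading, using numbers copied from the `Test: [0/4]` line above; it is not part of the run that produced this log.

    # Hypothetical sanity check on the scaled vs. *_unscaled loss entries,
    # using values copied from the Test: [0/4] line above.
    weights = {"loss_ce": 1.0, "loss_bbox": 5.0, "loss_giou": 2.0}  # assumed per-term weights
    logged = {
        "loss_ce":   (0.7128, 0.7128),  # (scaled, unscaled)
        "loss_bbox": (0.6388, 0.1278),
        "loss_giou": (1.2480, 0.6240),
    }
    for name, (scaled, unscaled) in logged.items():
        assert abs(scaled - weights[name] * unscaled) < 1e-2, name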
Epoch: [4] [ 0/16] eta: 0:00:38 lr: 0.000010 class_error: 46.15 loss: 20.4025 (20.4025) loss_ce: 0.6184 (0.6184) loss_bbox: 1.0111 (1.0111) loss_giou: 1.6279 (1.6279) loss_ce_0: 0.9085 (0.9085) loss_bbox_0: 1.0778 (1.0778) loss_giou_0: 1.4891 (1.4891) loss_ce_1: 0.8029 (0.8029) loss_bbox_1: 1.0942 (1.0942) loss_giou_1: 1.5918 (1.5918) loss_ce_2: 0.7415 (0.7415) loss_bbox_2: 1.1082 (1.1082) loss_giou_2: 1.5995 (1.5995) loss_ce_3: 0.7007 (0.7007) loss_bbox_3: 1.0810 (1.0810) loss_giou_3: 1.6304 (1.6304) loss_ce_4: 0.6667 (0.6667) loss_bbox_4: 1.0143 (1.0143) loss_giou_4: 1.6386 (1.6386) loss_ce_unscaled: 0.6184 (0.6184) class_error_unscaled: 46.1538 (46.1538) loss_bbox_unscaled: 0.2022 (0.2022) loss_giou_unscaled: 0.8139 (0.8139) cardinality_error_unscaled: 5.5000 (5.5000) loss_ce_0_unscaled: 0.9085 (0.9085) loss_bbox_0_unscaled: 0.2156 (0.2156) loss_giou_0_unscaled: 0.7445 (0.7445) cardinality_error_0_unscaled: 32.3750 (32.3750) loss_ce_1_unscaled: 0.8029 (0.8029) loss_bbox_1_unscaled: 0.2188 (0.2188) loss_giou_1_unscaled: 0.7959 (0.7959) cardinality_error_1_unscaled: 19.5625 (19.5625) loss_ce_2_unscaled: 0.7415 (0.7415) loss_bbox_2_unscaled: 0.2216 (0.2216) loss_giou_2_unscaled: 0.7997 (0.7997) cardinality_error_2_unscaled: 14.1250 (14.1250) loss_ce_3_unscaled: 0.7007 (0.7007) loss_bbox_3_unscaled: 0.2162 (0.2162) loss_giou_3_unscaled: 0.8152 (0.8152) cardinality_error_3_unscaled: 10.4375 (10.4375) loss_ce_4_unscaled: 0.6667 (0.6667) loss_bbox_4_unscaled: 0.2029 (0.2029) loss_giou_4_unscaled: 0.8193 (0.8193) cardinality_error_4_unscaled: 7.8750 (7.8750) time: 2.3824 data: 1.1442 max mem: 12069
Epoch: [4] [10/16] eta: 0:00:08 lr: 0.000010 class_error: 50.00 loss: 17.8795 (18.3218) loss_ce: 0.6498 (0.6526) loss_bbox: 0.8014 (0.8720) loss_giou: 1.3373 (1.3451) loss_ce_0: 0.9314 (0.9378) loss_bbox_0: 0.9554 (0.9529) loss_giou_0: 1.4238 (1.4161) loss_ce_1: 0.8380 (0.8366) loss_bbox_1: 0.9097 (0.9400) loss_giou_1: 1.4497 (1.4199) loss_ce_2: 0.7783 (0.7769) loss_bbox_2: 0.8629 (0.9083) loss_giou_2: 1.3820 (1.3634) loss_ce_3: 0.7437 (0.7400) loss_bbox_3: 0.8400 (0.8763) loss_giou_3: 1.3471 (1.3652) loss_ce_4: 0.6994 (0.6989) loss_bbox_4: 0.8358 (0.8683) loss_giou_4: 1.3184 (1.3514) loss_ce_unscaled: 0.6498 (0.6526) class_error_unscaled: 46.6667 (51.3170) loss_bbox_unscaled: 0.1603 (0.1744) loss_giou_unscaled: 0.6686 (0.6725) cardinality_error_unscaled: 6.0000 (6.3693) loss_ce_0_unscaled: 0.9314 (0.9378) loss_bbox_0_unscaled: 0.1911 (0.1906) loss_giou_0_unscaled: 0.7119 (0.7081) cardinality_error_0_unscaled: 34.8750 (33.9602) loss_ce_1_unscaled: 0.8380 (0.8366) loss_bbox_1_unscaled: 0.1819 (0.1880) loss_giou_1_unscaled: 0.7248 (0.7099) cardinality_error_1_unscaled: 23.3750 (22.0284) loss_ce_2_unscaled: 0.7783 (0.7769) loss_bbox_2_unscaled: 0.1726 (0.1817) loss_giou_2_unscaled: 0.6910 (0.6817) cardinality_error_2_unscaled: 16.8125 (16.1420) loss_ce_3_unscaled: 0.7437 (0.7400) loss_bbox_3_unscaled: 0.1680 (0.1753) loss_giou_3_unscaled: 0.6736 (0.6826) cardinality_error_3_unscaled: 13.0000 (12.6761) loss_ce_4_unscaled: 0.6994 (0.6989) loss_bbox_4_unscaled: 0.1672 (0.1737) loss_giou_4_unscaled: 0.6592 (0.6757) cardinality_error_4_unscaled: 9.8750 (9.4432) time: 1.3380 data: 0.1693 max mem: 12069
Epoch: [4] [15/16] eta: 0:00:01 lr: 0.000010 class_error: 56.25 loss: 17.7127 (18.0785) loss_ce: 0.6184 (0.6313) loss_bbox: 0.8014 (0.8521) loss_giou: 1.3373 (1.3502) loss_ce_0: 0.9085 (0.9142) loss_bbox_0: 0.9323 (0.9399) loss_giou_0: 1.4125 (1.4218) loss_ce_1: 0.8029 (0.8139) loss_bbox_1: 0.8708 (0.9230) loss_giou_1: 1.3852 (1.4017) loss_ce_2: 0.7415 (0.7541) loss_bbox_2: 0.8353 (0.8792) loss_giou_2: 1.3820 (1.3646) loss_ce_3: 0.7007 (0.7173) loss_bbox_3: 0.8264 (0.8581) loss_giou_3: 1.3471 (1.3704) loss_ce_4: 0.6667 (0.6775) loss_bbox_4: 0.8358 (0.8506) loss_giou_4: 1.3184 (1.3587) loss_ce_unscaled: 0.6184 (0.6313) class_error_unscaled: 46.6667 (49.6294) loss_bbox_unscaled: 0.1603 (0.1704) loss_giou_unscaled: 0.6686 (0.6751) cardinality_error_unscaled: 5.5000 (5.5898) loss_ce_0_unscaled: 0.9085 (0.9142) loss_bbox_0_unscaled: 0.1865 (0.1880) loss_giou_0_unscaled: 0.7063 (0.7109) cardinality_error_0_unscaled: 31.1875 (31.6094) loss_ce_1_unscaled: 0.8029 (0.8139) loss_bbox_1_unscaled: 0.1742 (0.1846) loss_giou_1_unscaled: 0.6926 (0.7009) cardinality_error_1_unscaled: 19.5625 (20.1914) loss_ce_2_unscaled: 0.7415 (0.7541) loss_bbox_2_unscaled: 0.1671 (0.1758) loss_giou_2_unscaled: 0.6910 (0.6823) cardinality_error_2_unscaled: 14.1250 (14.6172) loss_ce_3_unscaled: 0.7007 (0.7173) loss_bbox_3_unscaled: 0.1653 (0.1716) loss_giou_3_unscaled: 0.6736 (0.6852) cardinality_error_3_unscaled: 10.5000 (11.2930) loss_ce_4_unscaled: 0.6667 (0.6775) loss_bbox_4_unscaled: 0.1672 (0.1701) loss_giou_4_unscaled: 0.6592 (0.6793) cardinality_error_4_unscaled: 7.8750 (8.3047) time: 1.2942 data: 0.1358 max mem: 12069
Epoch: [4] Total time: 0:00:20 (1.2992 s / it)
Averaged stats: lr: 0.000010 class_error: 56.25 loss: 17.7127 (18.0785) loss_ce: 0.6184 (0.6313) loss_bbox: 0.8014 (0.8521) loss_giou: 1.3373 (1.3502) loss_ce_0: 0.9085 (0.9142) loss_bbox_0: 0.9323 (0.9399) loss_giou_0: 1.4125 (1.4218) loss_ce_1: 0.8029 (0.8139) loss_bbox_1: 0.8708 (0.9230) loss_giou_1: 1.3852 (1.4017) loss_ce_2: 0.7415 (0.7541) loss_bbox_2: 0.8353 (0.8792) loss_giou_2: 1.3820 (1.3646) loss_ce_3: 0.7007 (0.7173) loss_bbox_3: 0.8264 (0.8581) loss_giou_3: 1.3471 (1.3704) loss_ce_4: 0.6667 (0.6775) loss_bbox_4: 0.8358 (0.8506) loss_giou_4: 1.3184 (1.3587) loss_ce_unscaled: 0.6184 (0.6313) class_error_unscaled: 46.6667 (49.6294) loss_bbox_unscaled: 0.1603 (0.1704) loss_giou_unscaled: 0.6686 (0.6751) cardinality_error_unscaled: 5.5000 (5.5898) loss_ce_0_unscaled: 0.9085 (0.9142) loss_bbox_0_unscaled: 0.1865 (0.1880) loss_giou_0_unscaled: 0.7063 (0.7109) cardinality_error_0_unscaled: 31.1875 (31.6094) loss_ce_1_unscaled: 0.8029 (0.8139) loss_bbox_1_unscaled: 0.1742 (0.1846) loss_giou_1_unscaled: 0.6926 (0.7009) cardinality_error_1_unscaled: 19.5625 (20.1914) loss_ce_2_unscaled: 0.7415 (0.7541) loss_bbox_2_unscaled: 0.1671 (0.1758) loss_giou_2_unscaled: 0.6910 (0.6823) cardinality_error_2_unscaled: 14.1250 (14.6172) loss_ce_3_unscaled: 0.7007 (0.7173) loss_bbox_3_unscaled: 0.1653 (0.1716) loss_giou_3_unscaled: 0.6736 (0.6852) cardinality_error_3_unscaled: 10.5000 (11.2930) loss_ce_4_unscaled: 0.6667 (0.6775) loss_bbox_4_unscaled: 0.1672 (0.1701) loss_giou_4_unscaled: 0.6592 (0.6793) cardinality_error_4_unscaled: 7.8750 (8.3047)
Test: [0/4] eta: 0:00:08 class_error: 43.75 loss: 14.2637 (14.2637) loss_ce: 0.5902 (0.5902) loss_bbox: 0.4930 (0.4930) loss_giou: 1.1058 (1.1058) loss_ce_0: 0.8328 (0.8328) loss_bbox_0: 0.6750 (0.6750) loss_giou_0: 1.1793 (1.1793) loss_ce_1: 0.7473 (0.7473) loss_bbox_1: 0.6116 (0.6116) loss_giou_1: 1.1273 (1.1273) loss_ce_2: 0.7067 (0.7067) loss_bbox_2: 0.5298 (0.5298) loss_giou_2: 1.1038 (1.1038) loss_ce_3: 0.6730 (0.6730) loss_bbox_3: 0.4915 (0.4915) loss_giou_3: 1.1335 (1.1335) loss_ce_4: 0.6350 (0.6350) loss_bbox_4: 0.5093 (0.5093) loss_giou_4: 1.1187 (1.1187) loss_ce_unscaled: 0.5902 (0.5902) class_error_unscaled: 43.7500 (43.7500) loss_bbox_unscaled: 0.0986 (0.0986) loss_giou_unscaled: 0.5529 (0.5529) cardinality_error_unscaled: 3.8750 (3.8750) loss_ce_0_unscaled: 0.8328 (0.8328) loss_bbox_0_unscaled: 0.1350 (0.1350) loss_giou_0_unscaled: 0.5897 (0.5897) cardinality_error_0_unscaled: 23.0000 (23.0000) loss_ce_1_unscaled: 0.7473 (0.7473) loss_bbox_1_unscaled: 0.1223 (0.1223) loss_giou_1_unscaled: 0.5637 (0.5637) cardinality_error_1_unscaled: 15.2500 (15.2500) loss_ce_2_unscaled: 0.7067 (0.7067) loss_bbox_2_unscaled: 0.1060 (0.1060) loss_giou_2_unscaled: 0.5519 (0.5519) cardinality_error_2_unscaled: 11.3125 (11.3125) loss_ce_3_unscaled: 0.6730 (0.6730) loss_bbox_3_unscaled: 0.0983 (0.0983) loss_giou_3_unscaled: 0.5668 (0.5668) cardinality_error_3_unscaled: 8.5625 (8.5625) loss_ce_4_unscaled: 0.6350 (0.6350) loss_bbox_4_unscaled: 0.1019 (0.1019) loss_giou_4_unscaled: 0.5593 (0.5593) cardinality_error_4_unscaled: 6.2500 (6.2500) time: 2.1770 data: 1.1165 max mem: 12069
Test: [3/4] eta: 0:00:01 class_error: 43.75 loss: 14.2637 (15.0694) loss_ce: 0.5902 (0.5908) loss_bbox: 0.4930 (0.5616) loss_giou: 1.1058 (1.1835) loss_ce_0: 0.8389 (0.8588) loss_bbox_0: 0.6750 (0.6937) loss_giou_0: 1.1793 (1.2276) loss_ce_1: 0.7473 (0.7665) loss_bbox_1: 0.6116 (0.6742) loss_giou_1: 1.1273 (1.1855) loss_ce_2: 0.7067 (0.7163) loss_bbox_2: 0.5298 (0.6118) loss_giou_2: 1.1038 (1.1774) loss_ce_3: 0.6730 (0.6786) loss_bbox_3: 0.4915 (0.5665) loss_giou_3: 1.1335 (1.1874) loss_ce_4: 0.6350 (0.6366) loss_bbox_4: 0.5093 (0.5708) loss_giou_4: 1.1187 (1.1820) loss_ce_unscaled: 0.5902 (0.5908) class_error_unscaled: 43.7500 (53.1250) loss_bbox_unscaled: 0.0986 (0.1123) loss_giou_unscaled: 0.5529 (0.5917) cardinality_error_unscaled: 3.8750 (3.8906) loss_ce_0_unscaled: 0.8389 (0.8588) loss_bbox_0_unscaled: 0.1350 (0.1387) loss_giou_0_unscaled: 0.5897 (0.6138) cardinality_error_0_unscaled: 23.0000 (26.0156) loss_ce_1_unscaled: 0.7473 (0.7665) loss_bbox_1_unscaled: 0.1223 (0.1348) loss_giou_1_unscaled: 0.5637 (0.5927) cardinality_error_1_unscaled: 15.2500 (16.1562) loss_ce_2_unscaled: 0.7067 (0.7163) loss_bbox_2_unscaled: 0.1060 (0.1224) loss_giou_2_unscaled: 0.5519 (0.5887) cardinality_error_2_unscaled: 11.3125 (11.9375) loss_ce_3_unscaled: 0.6730 (0.6786) loss_bbox_3_unscaled: 0.0983 (0.1133) loss_giou_3_unscaled: 0.5668 (0.5937) cardinality_error_3_unscaled: 8.5625 (9.0781) loss_ce_4_unscaled: 0.6350 (0.6366) loss_bbox_4_unscaled: 0.1019 (0.1142) loss_giou_4_unscaled: 0.5593 (0.5910) cardinality_error_4_unscaled: 6.2500 (6.3906) time: 1.0416 data: 0.3358 max mem: 12069
Test: Total time: 0:00:04 (1.0577 s / it)
Averaged stats: class_error: 43.75 loss: 14.2637 (15.0694) loss_ce: 0.5902 (0.5908) loss_bbox: 0.4930 (0.5616) loss_giou: 1.1058 (1.1835) loss_ce_0: 0.8389 (0.8588) loss_bbox_0: 0.6750 (0.6937) loss_giou_0: 1.1793 (1.2276) loss_ce_1: 0.7473 (0.7665) loss_bbox_1: 0.6116 (0.6742) loss_giou_1: 1.1273 (1.1855) loss_ce_2: 0.7067 (0.7163) loss_bbox_2: 0.5298 (0.6118) loss_giou_2: 1.1038 (1.1774) loss_ce_3: 0.6730 (0.6786) loss_bbox_3: 0.4915 (0.5665) loss_giou_3: 1.1335 (1.1874) loss_ce_4: 0.6350 (0.6366) loss_bbox_4: 0.5093 (0.5708) loss_giou_4: 1.1187 (1.1820) loss_ce_unscaled: 0.5902 (0.5908) class_error_unscaled: 43.7500 (53.1250) loss_bbox_unscaled: 0.0986 (0.1123) loss_giou_unscaled: 0.5529 (0.5917) cardinality_error_unscaled: 3.8750 (3.8906) loss_ce_0_unscaled: 0.8389 (0.8588) loss_bbox_0_unscaled: 0.1350 (0.1387) loss_giou_0_unscaled: 0.5897 (0.6138) cardinality_error_0_unscaled: 23.0000 (26.0156) loss_ce_1_unscaled: 0.7473 (0.7665) loss_bbox_1_unscaled: 0.1223 (0.1348) loss_giou_1_unscaled: 0.5637 (0.5927) cardinality_error_1_unscaled: 15.2500 (16.1562) loss_ce_2_unscaled: 0.7067 (0.7163) loss_bbox_2_unscaled: 0.1060 (0.1224) loss_giou_2_unscaled: 0.5519 (0.5887) cardinality_error_2_unscaled: 11.3125 (11.9375) loss_ce_3_unscaled: 0.6730 (0.6786) loss_bbox_3_unscaled: 0.0983 (0.1133) loss_giou_3_unscaled: 0.5668 (0.5937) cardinality_error_3_unscaled: 8.5625 (9.0781) loss_ce_4_unscaled: 0.6350 (0.6366) loss_bbox_4_unscaled: 0.1019 (0.1142) loss_giou_4_unscaled: 0.5593 (0.5910) cardinality_error_4_unscaled: 6.2500 (6.3906)
Accumulating evaluation results...
DONE (t=0.06s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.002
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.002
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.003
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.100
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.175
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.056
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.103
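Each evaluation pass above ends with the standard COCO summary (AP and AR at several IoU thresholds, object-size buckets, and detection limits). If only the captured log text is available, a summary block like the one above can be pulled apart with a small parser such as the sketch below; the names `SUMMARY_RE` and `parse_coco_summary` are hypothetical and the snippet is illustrative rather than part of the training code shown here.

    import re

    # Hypothetical parser for COCO-style summary lines such as:
    #  Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.000
    SUMMARY_RE = re.compile(
        r"Average (?:Precision|Recall)\s+\((AP|AR)\) @\[ IoU=([\d.:]+)\s*\| "
        r"area=\s*(\w+) \| maxDets=\s*(\d+) \] = ([\d.]+)"
    )

    def parse_coco_summary(log_text):
        """Return (metric, iou_range, area, max_dets, value) tuples from a log string."""
        return [
            (metric, iou, area, int(max_dets), float(value))
            for metric, iou, area, max_dets, value in SUMMARY_RE.findall(log_text)
        ]

    # e.g. parse_coco_summary(captured_log) -> [("AP", "0.50:0.95", "all", 100, 0.0), ...]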
Epoch: [5] [ 0/16] eta: 0:00:39 lr: 0.000010 class_error: 42.86 loss: 16.1236 (16.1236) loss_ce: 0.6156 (0.6156) loss_bbox: 0.6138 (0.6138) loss_giou: 1.2470 (1.2470) loss_ce_0: 0.9161 (0.9161) loss_bbox_0: 0.7235 (0.7235) loss_giou_0: 1.3013 (1.3013) loss_ce_1: 0.8116 (0.8116) loss_bbox_1: 0.7640 (0.7640) loss_giou_1: 1.2567 (1.2567) loss_ce_2: 0.7493 (0.7493) loss_bbox_2: 0.6872 (0.6872) loss_giou_2: 1.2476 (1.2476) loss_ce_3: 0.7091 (0.7091) loss_bbox_3: 0.6499 (0.6499) loss_giou_3: 1.2512 (1.2512) loss_ce_4: 0.6635 (0.6635) loss_bbox_4: 0.6352 (0.6352) loss_giou_4: 1.2811 (1.2811) loss_ce_unscaled: 0.6156 (0.6156) class_error_unscaled: 42.8571 (42.8571) loss_bbox_unscaled: 0.1228 (0.1228) loss_giou_unscaled: 0.6235 (0.6235) cardinality_error_unscaled: 5.7500 (5.7500) loss_ce_0_unscaled: 0.9161 (0.9161) loss_bbox_0_unscaled: 0.1447 (0.1447) loss_giou_0_unscaled: 0.6506 (0.6506) cardinality_error_0_unscaled: 33.2500 (33.2500) loss_ce_1_unscaled: 0.8116 (0.8116) loss_bbox_1_unscaled: 0.1528 (0.1528) loss_giou_1_unscaled: 0.6283 (0.6283) cardinality_error_1_unscaled: 21.7500 (21.7500) loss_ce_2_unscaled: 0.7493 (0.7493) loss_bbox_2_unscaled: 0.1374 (0.1374) loss_giou_2_unscaled: 0.6238 (0.6238) cardinality_error_2_unscaled: 16.5625 (16.5625) loss_ce_3_unscaled: 0.7091 (0.7091) loss_bbox_3_unscaled: 0.1300 (0.1300) loss_giou_3_unscaled: 0.6256 (0.6256) cardinality_error_3_unscaled: 12.3750 (12.3750) loss_ce_4_unscaled: 0.6635 (0.6635) loss_bbox_4_unscaled: 0.1270 (0.1270) loss_giou_4_unscaled: 0.6406 (0.6406) cardinality_error_4_unscaled: 9.0625 (9.0625) time: 2.4810 data: 1.2385 max mem: 12069
Epoch: [5] [10/16] eta: 0:00:08 lr: 0.000010 class_error: 68.75 loss: 15.4566 (15.9304) loss_ce: 0.5363 (0.5418) loss_bbox: 0.6447 (0.7075) loss_giou: 1.2470 (1.2397) loss_ce_0: 0.8041 (0.8182) loss_bbox_0: 0.7249 (0.7829) loss_giou_0: 1.2989 (1.2800) loss_ce_1: 0.7093 (0.7240) loss_bbox_1: 0.6655 (0.7613) loss_giou_1: 1.2816 (1.2844) loss_ce_2: 0.6483 (0.6659) loss_bbox_2: 0.6653 (0.7336) loss_giou_2: 1.2589 (1.2551) loss_ce_3: 0.6177 (0.6277) loss_bbox_3: 0.6499 (0.7153) loss_giou_3: 1.2684 (1.2483) loss_ce_4: 0.5770 (0.5872) loss_bbox_4: 0.6439 (0.7151) loss_giou_4: 1.2573 (1.2424) loss_ce_unscaled: 0.5363 (0.5418) class_error_unscaled: 50.0000 (51.1805) loss_bbox_unscaled: 0.1289 (0.1415) loss_giou_unscaled: 0.6235 (0.6198) cardinality_error_unscaled: 1.8750 (2.6023) loss_ce_0_unscaled: 0.8041 (0.8182) loss_bbox_0_unscaled: 0.1450 (0.1566) loss_giou_0_unscaled: 0.6494 (0.6400) cardinality_error_0_unscaled: 20.5625 (21.6023) loss_ce_1_unscaled: 0.7093 (0.7240) loss_bbox_1_unscaled: 0.1331 (0.1523) loss_giou_1_unscaled: 0.6408 (0.6422) cardinality_error_1_unscaled: 11.7500 (12.3352) loss_ce_2_unscaled: 0.6483 (0.6659) loss_bbox_2_unscaled: 0.1331 (0.1467) loss_giou_2_unscaled: 0.6295 (0.6276) cardinality_error_2_unscaled: 7.3125 (8.1136) loss_ce_3_unscaled: 0.6177 (0.6277) loss_bbox_3_unscaled: 0.1300 (0.1431) loss_giou_3_unscaled: 0.6342 (0.6241) cardinality_error_3_unscaled: 4.7500 (5.3239) loss_ce_4_unscaled: 0.5770 (0.5872) loss_bbox_4_unscaled: 0.1288 (0.1430) loss_giou_4_unscaled: 0.6287 (0.6212) cardinality_error_4_unscaled: 2.9375 (3.8807) time: 1.3512 data: 0.1670 max mem: 12069
Epoch: [5] [15/16] eta: 0:00:01 lr: 0.000010 class_error: 66.67 loss: 15.2112 (15.5371) loss_ce: 0.5208 (0.5295) loss_bbox: 0.6166 (0.6828) loss_giou: 1.2201 (1.2117) loss_ce_0: 0.7882 (0.7977) loss_bbox_0: 0.7235 (0.7643) loss_giou_0: 1.2266 (1.2557) loss_ce_1: 0.7033 (0.7087) loss_bbox_1: 0.6655 (0.7381) loss_giou_1: 1.2479 (1.2545) loss_ce_2: 0.6428 (0.6515) loss_bbox_2: 0.6653 (0.7198) loss_giou_2: 1.2043 (1.2241) loss_ce_3: 0.6085 (0.6140) loss_bbox_3: 0.6499 (0.6971) loss_giou_3: 1.2016 (1.2115) loss_ce_4: 0.5631 (0.5734) loss_bbox_4: 0.6439 (0.6917) loss_giou_4: 1.2097 (1.2110) loss_ce_unscaled: 0.5208 (0.5295) class_error_unscaled: 50.0000 (51.6189) loss_bbox_unscaled: 0.1233 (0.1366) loss_giou_unscaled: 0.6100 (0.6059) cardinality_error_unscaled: 1.5625 (2.1289) loss_ce_0_unscaled: 0.7882 (0.7977) loss_bbox_0_unscaled: 0.1447 (0.1529) loss_giou_0_unscaled: 0.6133 (0.6278) cardinality_error_0_unscaled: 20.1250 (19.7070) loss_ce_1_unscaled: 0.7033 (0.7087) loss_bbox_1_unscaled: 0.1331 (0.1476) loss_giou_1_unscaled: 0.6239 (0.6273) cardinality_error_1_unscaled: 10.1250 (11.0625) loss_ce_2_unscaled: 0.6428 (0.6515) loss_bbox_2_unscaled: 0.1331 (0.1440) loss_giou_2_unscaled: 0.6022 (0.6121) cardinality_error_2_unscaled: 5.9375 (7.1328) loss_ce_3_unscaled: 0.6085 (0.6140) loss_bbox_3_unscaled: 0.1300 (0.1394) loss_giou_3_unscaled: 0.6008 (0.6058) cardinality_error_3_unscaled: 3.6875 (4.5820) loss_ce_4_unscaled: 0.5631 (0.5734) loss_bbox_4_unscaled: 0.1288 (0.1383) loss_giou_4_unscaled: 0.6048 (0.6055) cardinality_error_4_unscaled: 2.2500 (3.2109) time: 1.3053 data: 0.1327 max mem: 12069
Epoch: [5] Total time: 0:00:20 (1.3095 s / it)
Averaged stats: lr: 0.000010 class_error: 66.67 loss: 15.2112 (15.5371) loss_ce: 0.5208 (0.5295) loss_bbox: 0.6166 (0.6828) loss_giou: 1.2201 (1.2117) loss_ce_0: 0.7882 (0.7977) loss_bbox_0: 0.7235 (0.7643) loss_giou_0: 1.2266 (1.2557) loss_ce_1: 0.7033 (0.7087) loss_bbox_1: 0.6655 (0.7381) loss_giou_1: 1.2479 (1.2545) loss_ce_2: 0.6428 (0.6515) loss_bbox_2: 0.6653 (0.7198) loss_giou_2: 1.2043 (1.2241) loss_ce_3: 0.6085 (0.6140) loss_bbox_3: 0.6499 (0.6971) loss_giou_3: 1.2016 (1.2115) loss_ce_4: 0.5631 (0.5734) loss_bbox_4: 0.6439 (0.6917) loss_giou_4: 1.2097 (1.2110) loss_ce_unscaled: 0.5208 (0.5295) class_error_unscaled: 50.0000 (51.6189) loss_bbox_unscaled: 0.1233 (0.1366) loss_giou_unscaled: 0.6100 (0.6059) cardinality_error_unscaled: 1.5625 (2.1289) loss_ce_0_unscaled: 0.7882 (0.7977) loss_bbox_0_unscaled: 0.1447 (0.1529) loss_giou_0_unscaled: 0.6133 (0.6278) cardinality_error_0_unscaled: 20.1250 (19.7070) loss_ce_1_unscaled: 0.7033 (0.7087) loss_bbox_1_unscaled: 0.1331 (0.1476) loss_giou_1_unscaled: 0.6239 (0.6273) cardinality_error_1_unscaled: 10.1250 (11.0625) loss_ce_2_unscaled: 0.6428 (0.6515) loss_bbox_2_unscaled: 0.1331 (0.1440) loss_giou_2_unscaled: 0.6022 (0.6121) cardinality_error_2_unscaled: 5.9375 (7.1328) loss_ce_3_unscaled: 0.6085 (0.6140) loss_bbox_3_unscaled: 0.1300 (0.1394) loss_giou_3_unscaled: 0.6008 (0.6058) cardinality_error_3_unscaled: 3.6875 (4.5820) loss_ce_4_unscaled: 0.5631 (0.5734) loss_bbox_4_unscaled: 0.1288 (0.1383) loss_giou_4_unscaled: 0.6048 (0.6055) cardinality_error_4_unscaled: 2.2500 (3.2109)
Test: [0/4] eta: 0:00:08 class_error: 50.00 loss: 12.8958 (12.8958) loss_ce: 0.5152 (0.5152) loss_bbox: 0.4676 (0.4676) loss_giou: 1.0414 (1.0414) loss_ce_0: 0.7273 (0.7273) loss_bbox_0: 0.5483 (0.5483) loss_giou_0: 1.1407 (1.1407) loss_ce_1: 0.6519 (0.6519) loss_bbox_1: 0.5211 (0.5211) loss_giou_1: 1.0266 (1.0266) loss_ce_2: 0.6190 (0.6190) loss_bbox_2: 0.4963 (0.4963) loss_giou_2: 0.9915 (0.9915) loss_ce_3: 0.5874 (0.5874) loss_bbox_3: 0.4990 (0.4990) loss_giou_3: 1.0185 (1.0185) loss_ce_4: 0.5545 (0.5545) loss_bbox_4: 0.4659 (0.4659) loss_giou_4: 1.0238 (1.0238) loss_ce_unscaled: 0.5152 (0.5152) class_error_unscaled: 50.0000 (50.0000) loss_bbox_unscaled: 0.0935 (0.0935) loss_giou_unscaled: 0.5207 (0.5207) cardinality_error_unscaled: 1.8125 (1.8125) loss_ce_0_unscaled: 0.7273 (0.7273) loss_bbox_0_unscaled: 0.1097 (0.1097) loss_giou_0_unscaled: 0.5704 (0.5704) cardinality_error_0_unscaled: 13.1250 (13.1250) loss_ce_1_unscaled: 0.6519 (0.6519) loss_bbox_1_unscaled: 0.1042 (0.1042) loss_giou_1_unscaled: 0.5133 (0.5133) cardinality_error_1_unscaled: 6.6875 (6.6875) loss_ce_2_unscaled: 0.6190 (0.6190) loss_bbox_2_unscaled: 0.0993 (0.0993) loss_giou_2_unscaled: 0.4958 (0.4958) cardinality_error_2_unscaled: 5.0000 (5.0000) loss_ce_3_unscaled: 0.5874 (0.5874) loss_bbox_3_unscaled: 0.0998 (0.0998) loss_giou_3_unscaled: 0.5093 (0.5093) cardinality_error_3_unscaled: 3.0000 (3.0000) loss_ce_4_unscaled: 0.5545 (0.5545) loss_bbox_4_unscaled: 0.0932 (0.0932) loss_giou_4_unscaled: 0.5119 (0.5119) cardinality_error_4_unscaled: 2.3750 (2.3750) time: 2.0574 data: 1.1524 max mem: 12069
Test: [3/4] eta: 0:00:01 class_error: 50.00 loss: 13.0569 (13.5330) loss_ce: 0.5070 (0.5095) loss_bbox: 0.4755 (0.5276) loss_giou: 1.0414 (1.0287) loss_ce_0: 0.7447 (0.7517) loss_bbox_0: 0.6210 (0.6224) loss_giou_0: 1.1407 (1.1724) loss_ce_1: 0.6583 (0.6689) loss_bbox_1: 0.5694 (0.5975) loss_giou_1: 1.0666 (1.1116) loss_ce_2: 0.6187 (0.6211) loss_bbox_2: 0.4963 (0.5607) loss_giou_2: 0.9954 (1.0639) loss_ce_3: 0.5864 (0.5873) loss_bbox_3: 0.4990 (0.5396) loss_giou_3: 1.0214 (1.0580) loss_ce_4: 0.5454 (0.5498) loss_bbox_4: 0.4940 (0.5399) loss_giou_4: 1.0238 (1.0225) loss_ce_unscaled: 0.5070 (0.5095) class_error_unscaled: 50.0000 (57.8125) loss_bbox_unscaled: 0.0951 (0.1055) loss_giou_unscaled: 0.5207 (0.5144) cardinality_error_unscaled: 1.1250 (1.3125) loss_ce_0_unscaled: 0.7447 (0.7517) loss_bbox_0_unscaled: 0.1242 (0.1245) loss_giou_0_unscaled: 0.5704 (0.5862) cardinality_error_0_unscaled: 13.6875 (15.5469) loss_ce_1_unscaled: 0.6583 (0.6689) loss_bbox_1_unscaled: 0.1139 (0.1195) loss_giou_1_unscaled: 0.5333 (0.5558) cardinality_error_1_unscaled: 6.7500 (7.7969) loss_ce_2_unscaled: 0.6187 (0.6211) loss_bbox_2_unscaled: 0.0993 (0.1121) loss_giou_2_unscaled: 0.4977 (0.5319) cardinality_error_2_unscaled: 4.7500 (4.5781) loss_ce_3_unscaled: 0.5864 (0.5873) loss_bbox_3_unscaled: 0.0998 (0.1079) loss_giou_3_unscaled: 0.5107 (0.5290) cardinality_error_3_unscaled: 2.5625 (2.6875) loss_ce_4_unscaled: 0.5454 (0.5498) loss_bbox_4_unscaled: 0.0988 (0.1080) loss_giou_4_unscaled: 0.5119 (0.5112) cardinality_error_4_unscaled: 1.5625 (1.7344) time: 1.0158 data: 0.3463 max mem: 12069
Test: Total time: 0:00:04 (1.0327 s / it)
Averaged stats: class_error: 50.00 loss: 13.0569 (13.5330) loss_ce: 0.5070 (0.5095) loss_bbox: 0.4755 (0.5276) loss_giou: 1.0414 (1.0287) loss_ce_0: 0.7447 (0.7517) loss_bbox_0: 0.6210 (0.6224) loss_giou_0: 1.1407 (1.1724) loss_ce_1: 0.6583 (0.6689) loss_bbox_1: 0.5694 (0.5975) loss_giou_1: 1.0666 (1.1116) loss_ce_2: 0.6187 (0.6211) loss_bbox_2: 0.4963 (0.5607) loss_giou_2: 0.9954 (1.0639) loss_ce_3: 0.5864 (0.5873) loss_bbox_3: 0.4990 (0.5396) loss_giou_3: 1.0214 (1.0580) loss_ce_4: 0.5454 (0.5498) loss_bbox_4: 0.4940 (0.5399) loss_giou_4: 1.0238 (1.0225) loss_ce_unscaled: 0.5070 (0.5095) class_error_unscaled: 50.0000 (57.8125) loss_bbox_unscaled: 0.0951 (0.1055) loss_giou_unscaled: 0.5207 (0.5144) cardinality_error_unscaled: 1.1250 (1.3125) loss_ce_0_unscaled: 0.7447 (0.7517) loss_bbox_0_unscaled: 0.1242 (0.1245) loss_giou_0_unscaled: 0.5704 (0.5862) cardinality_error_0_unscaled: 13.6875 (15.5469) loss_ce_1_unscaled: 0.6583 (0.6689) loss_bbox_1_unscaled: 0.1139 (0.1195) loss_giou_1_unscaled: 0.5333 (0.5558) cardinality_error_1_unscaled: 6.7500 (7.7969) loss_ce_2_unscaled: 0.6187 (0.6211) loss_bbox_2_unscaled: 0.0993 (0.1121) loss_giou_2_unscaled: 0.4977 (0.5319) cardinality_error_2_unscaled: 4.7500 (4.5781) loss_ce_3_unscaled: 0.5864 (0.5873) loss_bbox_3_unscaled: 0.0998 (0.1079) loss_giou_3_unscaled: 0.5107 (0.5290) cardinality_error_3_unscaled: 2.5625 (2.6875) loss_ce_4_unscaled: 0.5454 (0.5498) loss_bbox_4_unscaled: 0.0988 (0.1080) loss_giou_4_unscaled: 0.5119 (0.5112) cardinality_error_4_unscaled: 1.5625 (1.7344)
Accumulating evaluation results...
DONE (t=0.06s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.002
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.001
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.016
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.145
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.237
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.069
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.158
Epoch: [6] [ 0/16] eta: 0:00:41 lr: 0.000010 class_error: 57.14 loss: 14.7424 (14.7424) loss_ce: 0.4778 (0.4778) loss_bbox: 0.6182 (0.6182) loss_giou: 1.1867 (1.1867) loss_ce_0: 0.7277 (0.7277) loss_bbox_0: 0.7558 (0.7558) loss_giou_0: 1.1980 (1.1980) loss_ce_1: 0.6506 (0.6506) loss_bbox_1: 0.7266 (0.7266) loss_giou_1: 1.2626 (1.2626) loss_ce_2: 0.5904 (0.5904) loss_bbox_2: 0.6395 (0.6395) loss_giou_2: 1.2051 (1.2051) loss_ce_3: 0.5557 (0.5557) loss_bbox_3: 0.6104 (0.6104) loss_giou_3: 1.1982 (1.1982) loss_ce_4: 0.5148 (0.5148) loss_bbox_4: 0.6528 (0.6528) loss_giou_4: 1.1714 (1.1714) loss_ce_unscaled: 0.4778 (0.4778) class_error_unscaled: 57.1429 (57.1429) loss_bbox_unscaled: 0.1236 (0.1236) loss_giou_unscaled: 0.5933 (0.5933) cardinality_error_unscaled: 0.9375 (0.9375) loss_ce_0_unscaled: 0.7277 (0.7277) loss_bbox_0_unscaled: 0.1512 (0.1512) loss_giou_0_unscaled: 0.5990 (0.5990) cardinality_error_0_unscaled: 11.2500 (11.2500) loss_ce_1_unscaled: 0.6506 (0.6506) loss_bbox_1_unscaled: 0.1453 (0.1453) loss_giou_1_unscaled: 0.6313 (0.6313) cardinality_error_1_unscaled: 4.3750 (4.3750) loss_ce_2_unscaled: 0.5904 (0.5904) loss_bbox_2_unscaled: 0.1279 (0.1279) loss_giou_2_unscaled: 0.6026 (0.6026) cardinality_error_2_unscaled: 2.2500 (2.2500) loss_ce_3_unscaled: 0.5557 (0.5557) loss_bbox_3_unscaled: 0.1221 (0.1221) loss_giou_3_unscaled: 0.5991 (0.5991) cardinality_error_3_unscaled: 1.2500 (1.2500) loss_ce_4_unscaled: 0.5148 (0.5148) loss_bbox_4_unscaled: 0.1306 (0.1306) loss_giou_4_unscaled: 0.5857 (0.5857) cardinality_error_4_unscaled: 0.9375 (0.9375) time: 2.5941 data: 1.2914 max mem: 12069
Epoch: [6] [10/16] eta: 0:00:08 lr: 0.000010 class_error: 81.25 loss: 14.7424 (14.7748) loss_ce: 0.4710 (0.4716) loss_bbox: 0.6534 (0.6619) loss_giou: 1.2220 (1.1836) loss_ce_0: 0.7148 (0.7206) loss_bbox_0: 0.7166 (0.7112) loss_giou_0: 1.1980 (1.2158) loss_ce_1: 0.6395 (0.6416) loss_bbox_1: 0.7045 (0.6927) loss_giou_1: 1.2322 (1.2319) loss_ce_2: 0.5781 (0.5838) loss_bbox_2: 0.6844 (0.6773) loss_giou_2: 1.2051 (1.2080) loss_ce_3: 0.5485 (0.5468) loss_bbox_3: 0.6665 (0.6633) loss_giou_3: 1.2077 (1.2058) loss_ce_4: 0.5087 (0.5082) loss_bbox_4: 0.6528 (0.6604) loss_giou_4: 1.2191 (1.1903) loss_ce_unscaled: 0.4710 (0.4716) class_error_unscaled: 57.1429 (57.9895) loss_bbox_unscaled: 0.1307 (0.1324) loss_giou_unscaled: 0.6110 (0.5918) cardinality_error_unscaled: 1.0000 (1.0568) loss_ce_0_unscaled: 0.7148 (0.7206) loss_bbox_0_unscaled: 0.1433 (0.1422) loss_giou_0_unscaled: 0.5990 (0.6079) cardinality_error_0_unscaled: 13.4375 (13.0739) loss_ce_1_unscaled: 0.6395 (0.6416) loss_bbox_1_unscaled: 0.1409 (0.1385) loss_giou_1_unscaled: 0.6161 (0.6160) cardinality_error_1_unscaled: 6.2500 (6.0057) loss_ce_2_unscaled: 0.5781 (0.5838) loss_bbox_2_unscaled: 0.1369 (0.1355) loss_giou_2_unscaled: 0.6026 (0.6040) cardinality_error_2_unscaled: 3.0000 (3.2102) loss_ce_3_unscaled: 0.5485 (0.5468) loss_bbox_3_unscaled: 0.1333 (0.1327) loss_giou_3_unscaled: 0.6039 (0.6029) cardinality_error_3_unscaled: 1.6875 (1.7898) loss_ce_4_unscaled: 0.5087 (0.5082) loss_bbox_4_unscaled: 0.1306 (0.1321) loss_giou_4_unscaled: 0.6096 (0.5952) cardinality_error_4_unscaled: 1.4375 (1.3523) time: 1.3564 data: 0.1705 max mem: 12069
Epoch: [6] [15/16] eta: 0:00:01 lr: 0.000010 class_error: 62.50 loss: 14.4121 (14.5865) loss_ce: 0.4653 (0.4644) loss_bbox: 0.6182 (0.6454) loss_giou: 1.1867 (1.1719) loss_ce_0: 0.7040 (0.7066) loss_bbox_0: 0.7103 (0.7039) loss_giou_0: 1.1954 (1.2193) loss_ce_1: 0.6211 (0.6288) loss_bbox_1: 0.6657 (0.6817) loss_giou_1: 1.2322 (1.2229) loss_ce_2: 0.5662 (0.5729) loss_bbox_2: 0.6598 (0.6677) loss_giou_2: 1.2000 (1.1981) loss_ce_3: 0.5327 (0.5371) loss_bbox_3: 0.6383 (0.6484) loss_giou_3: 1.1982 (1.1951) loss_ce_4: 0.4948 (0.4996) loss_bbox_4: 0.6337 (0.6440) loss_giou_4: 1.1942 (1.1786) loss_ce_unscaled: 0.4653 (0.4644) class_error_unscaled: 56.2500 (54.9459) loss_bbox_unscaled: 0.1236 (0.1291) loss_giou_unscaled: 0.5933 (0.5859) cardinality_error_unscaled: 1.0000 (1.0586) loss_ce_0_unscaled: 0.7040 (0.7066) loss_bbox_0_unscaled: 0.1421 (0.1408) loss_giou_0_unscaled: 0.5977 (0.6096) cardinality_error_0_unscaled: 12.3750 (11.9023) loss_ce_1_unscaled: 0.6211 (0.6288) loss_bbox_1_unscaled: 0.1331 (0.1363) loss_giou_1_unscaled: 0.6161 (0.6114) cardinality_error_1_unscaled: 5.1250 (5.2383) loss_ce_2_unscaled: 0.5662 (0.5729) loss_bbox_2_unscaled: 0.1320 (0.1335) loss_giou_2_unscaled: 0.6000 (0.5990) cardinality_error_2_unscaled: 2.2500 (2.6719) loss_ce_3_unscaled: 0.5327 (0.5371) loss_bbox_3_unscaled: 0.1277 (0.1297) loss_giou_3_unscaled: 0.5991 (0.5976) cardinality_error_3_unscaled: 1.3750 (1.5703) loss_ce_4_unscaled: 0.4948 (0.4996) loss_bbox_4_unscaled: 0.1267 (0.1288) loss_giou_4_unscaled: 0.5971 (0.5893) cardinality_error_4_unscaled: 1.0625 (1.2695) time: 1.3038 data: 0.1358 max mem: 12069
Epoch: [6] Total time: 0:00:20 (1.3081 s / it)
Averaged stats: lr: 0.000010 class_error: 62.50 loss: 14.4121 (14.5865) loss_ce: 0.4653 (0.4644) loss_bbox: 0.6182 (0.6454) loss_giou: 1.1867 (1.1719) loss_ce_0: 0.7040 (0.7066) loss_bbox_0: 0.7103 (0.7039) loss_giou_0: 1.1954 (1.2193) loss_ce_1: 0.6211 (0.6288) loss_bbox_1: 0.6657 (0.6817) loss_giou_1: 1.2322 (1.2229) loss_ce_2: 0.5662 (0.5729) loss_bbox_2: 0.6598 (0.6677) loss_giou_2: 1.2000 (1.1981) loss_ce_3: 0.5327 (0.5371) loss_bbox_3: 0.6383 (0.6484) loss_giou_3: 1.1982 (1.1951) loss_ce_4: 0.4948 (0.4996) loss_bbox_4: 0.6337 (0.6440) loss_giou_4: 1.1942 (1.1786) loss_ce_unscaled: 0.4653 (0.4644) class_error_unscaled: 56.2500 (54.9459) loss_bbox_unscaled: 0.1236 (0.1291) loss_giou_unscaled: 0.5933 (0.5859) cardinality_error_unscaled: 1.0000 (1.0586) loss_ce_0_unscaled: 0.7040 (0.7066) loss_bbox_0_unscaled: 0.1421 (0.1408) loss_giou_0_unscaled: 0.5977 (0.6096) cardinality_error_0_unscaled: 12.3750 (11.9023) loss_ce_1_unscaled: 0.6211 (0.6288) loss_bbox_1_unscaled: 0.1331 (0.1363) loss_giou_1_unscaled: 0.6161 (0.6114) cardinality_error_1_unscaled: 5.1250 (5.2383) loss_ce_2_unscaled: 0.5662 (0.5729) loss_bbox_2_unscaled: 0.1320 (0.1335) loss_giou_2_unscaled: 0.6000 (0.5990) cardinality_error_2_unscaled: 2.2500 (2.6719) loss_ce_3_unscaled: 0.5327 (0.5371) loss_bbox_3_unscaled: 0.1277 (0.1297) loss_giou_3_unscaled: 0.5991 (0.5976) cardinality_error_3_unscaled: 1.3750 (1.5703) loss_ce_4_unscaled: 0.4948 (0.4996) loss_bbox_4_unscaled: 0.1267 (0.1288) loss_giou_4_unscaled: 0.5971 (0.5893) cardinality_error_4_unscaled: 1.0625 (1.2695)
Test: [0/4] eta: 0:00:09 class_error: 50.00 loss: 11.7081 (11.7081) loss_ce: 0.4596 (0.4596) loss_bbox: 0.4284 (0.4284) loss_giou: 0.9412 (0.9412) loss_ce_0: 0.6483 (0.6483) loss_bbox_0: 0.4939 (0.4939) loss_giou_0: 1.0188 (1.0188) loss_ce_1: 0.5894 (0.5894) loss_bbox_1: 0.4420 (0.4420) loss_giou_1: 0.9224 (0.9224) loss_ce_2: 0.5467 (0.5467) loss_bbox_2: 0.4618 (0.4618) loss_giou_2: 0.9791 (0.9791) loss_ce_3: 0.5173 (0.5173) loss_bbox_3: 0.4351 (0.4351) loss_giou_3: 0.9475 (0.9475) loss_ce_4: 0.4903 (0.4903) loss_bbox_4: 0.4345 (0.4345) loss_giou_4: 0.9517 (0.9517) loss_ce_unscaled: 0.4596 (0.4596) class_error_unscaled: 50.0000 (50.0000) loss_bbox_unscaled: 0.0857 (0.0857) loss_giou_unscaled: 0.4706 (0.4706) cardinality_error_unscaled: 1.3750 (1.3750) loss_ce_0_unscaled: 0.6483 (0.6483) loss_bbox_0_unscaled: 0.0988 (0.0988) loss_giou_0_unscaled: 0.5094 (0.5094) cardinality_error_0_unscaled: 7.1250 (7.1250) loss_ce_1_unscaled: 0.5894 (0.5894) loss_bbox_1_unscaled: 0.0884 (0.0884) loss_giou_1_unscaled: 0.4612 (0.4612) cardinality_error_1_unscaled: 3.0625 (3.0625) loss_ce_2_unscaled: 0.5467 (0.5467) loss_bbox_2_unscaled: 0.0924 (0.0924) loss_giou_2_unscaled: 0.4895 (0.4895) cardinality_error_2_unscaled: 2.1875 (2.1875) loss_ce_3_unscaled: 0.5173 (0.5173) loss_bbox_3_unscaled: 0.0870 (0.0870) loss_giou_3_unscaled: 0.4738 (0.4738) cardinality_error_3_unscaled: 1.6875 (1.6875) loss_ce_4_unscaled: 0.4903 (0.4903) loss_bbox_4_unscaled: 0.0869 (0.0869) loss_giou_4_unscaled: 0.4759 (0.4759) cardinality_error_4_unscaled: 1.7500 (1.7500) time: 2.2642 data: 1.2769 max mem: 12069
Test: [3/4] eta: 0:00:01 class_error: 50.00 loss: 12.0681 (12.1003) loss_ce: 0.4568 (0.4557) loss_bbox: 0.4598 (0.4620) loss_giou: 0.8906 (0.9354) loss_ce_0: 0.6706 (0.6718) loss_bbox_0: 0.5231 (0.5334) loss_giou_0: 1.0160 (1.0424) loss_ce_1: 0.5956 (0.6009) loss_bbox_1: 0.5318 (0.5110) loss_giou_1: 1.0382 (1.0224) loss_ce_2: 0.5451 (0.5490) loss_bbox_2: 0.4618 (0.4886) loss_giou_2: 0.9791 (0.9857) loss_ce_3: 0.5150 (0.5178) loss_bbox_3: 0.4765 (0.4761) loss_giou_3: 0.9475 (0.9629) loss_ce_4: 0.4862 (0.4874) loss_bbox_4: 0.4623 (0.4607) loss_giou_4: 0.8982 (0.9369) loss_ce_unscaled: 0.4568 (0.4557) class_error_unscaled: 50.0000 (59.3750) loss_bbox_unscaled: 0.0920 (0.0924) loss_giou_unscaled: 0.4453 (0.4677) cardinality_error_unscaled: 1.0000 (1.0781) loss_ce_0_unscaled: 0.6706 (0.6718) loss_bbox_0_unscaled: 0.1046 (0.1067) loss_giou_0_unscaled: 0.5080 (0.5212) cardinality_error_0_unscaled: 7.6250 (8.0312) loss_ce_1_unscaled: 0.5956 (0.6009) loss_bbox_1_unscaled: 0.1064 (0.1022) loss_giou_1_unscaled: 0.5191 (0.5112) cardinality_error_1_unscaled: 2.6250 (2.8438) loss_ce_2_unscaled: 0.5451 (0.5490) loss_bbox_2_unscaled: 0.0924 (0.0977) loss_giou_2_unscaled: 0.4895 (0.4929) cardinality_error_2_unscaled: 1.5625 (1.5938) loss_ce_3_unscaled: 0.5150 (0.5178) loss_bbox_3_unscaled: 0.0953 (0.0952) loss_giou_3_unscaled: 0.4738 (0.4815) cardinality_error_3_unscaled: 1.1875 (1.2344) loss_ce_4_unscaled: 0.4862 (0.4874) loss_bbox_4_unscaled: 0.0925 (0.0921) loss_giou_4_unscaled: 0.4491 (0.4685) cardinality_error_4_unscaled: 1.0000 (1.1875) time: 1.0553 data: 0.3628 max mem: 12069
Test: Total time: 0:00:04 (1.0726 s / it)
Averaged stats: class_error: 50.00 loss: 12.0681 (12.1003) loss_ce: 0.4568 (0.4557) loss_bbox: 0.4598 (0.4620) loss_giou: 0.8906 (0.9354) loss_ce_0: 0.6706 (0.6718) loss_bbox_0: 0.5231 (0.5334) loss_giou_0: 1.0160 (1.0424) loss_ce_1: 0.5956 (0.6009) loss_bbox_1: 0.5318 (0.5110) loss_giou_1: 1.0382 (1.0224) loss_ce_2: 0.5451 (0.5490) loss_bbox_2: 0.4618 (0.4886) loss_giou_2: 0.9791 (0.9857) loss_ce_3: 0.5150 (0.5178) loss_bbox_3: 0.4765 (0.4761) loss_giou_3: 0.9475 (0.9629) loss_ce_4: 0.4862 (0.4874) loss_bbox_4: 0.4623 (0.4607) loss_giou_4: 0.8982 (0.9369) loss_ce_unscaled: 0.4568 (0.4557) class_error_unscaled: 50.0000 (59.3750) loss_bbox_unscaled: 0.0920 (0.0924) loss_giou_unscaled: 0.4453 (0.4677) cardinality_error_unscaled: 1.0000 (1.0781) loss_ce_0_unscaled: 0.6706 (0.6718) loss_bbox_0_unscaled: 0.1046 (0.1067) loss_giou_0_unscaled: 0.5080 (0.5212) cardinality_error_0_unscaled: 7.6250 (8.0312) loss_ce_1_unscaled: 0.5956 (0.6009) loss_bbox_1_unscaled: 0.1064 (0.1022) loss_giou_1_unscaled: 0.5191 (0.5112) cardinality_error_1_unscaled: 2.6250 (2.8438) loss_ce_2_unscaled: 0.5451 (0.5490) loss_bbox_2_unscaled: 0.0924 (0.0977) loss_giou_2_unscaled: 0.4895 (0.4929) cardinality_error_2_unscaled: 1.5625 (1.5938) loss_ce_3_unscaled: 0.5150 (0.5178) loss_bbox_3_unscaled: 0.0953 (0.0952) loss_giou_3_unscaled: 0.4738 (0.4815) cardinality_error_3_unscaled: 1.1875 (1.2344) loss_ce_4_unscaled: 0.4862 (0.4874) loss_bbox_4_unscaled: 0.0925 (0.0921) loss_giou_4_unscaled: 0.4491 (0.4685) cardinality_error_4_unscaled: 1.0000 (1.1875)
Accumulating evaluation results...
DONE (t=0.07s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.002
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.003
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.001
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.008
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.160
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.263
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.069
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.176
Epoch: [7] [ 0/16] eta: 0:00:42 lr: 0.000010 class_error: 66.67 loss: 13.2446 (13.2446) loss_ce: 0.4371 (0.4371) loss_bbox: 0.5915 (0.5915) loss_giou: 1.0272 (1.0272) loss_ce_0: 0.6717 (0.6717) loss_bbox_0: 0.6299 (0.6299) loss_giou_0: 1.1275 (1.1275) loss_ce_1: 0.6031 (0.6031) loss_bbox_1: 0.5690 (0.5690) loss_giou_1: 1.0448 (1.0448) loss_ce_2: 0.5446 (0.5446) loss_bbox_2: 0.6000 (0.6000) loss_giou_2: 1.0802 (1.0802) loss_ce_3: 0.5089 (0.5089) loss_bbox_3: 0.5936 (0.5936) loss_giou_3: 1.0755 (1.0755) loss_ce_4: 0.4708 (0.4708) loss_bbox_4: 0.6050 (0.6050) loss_giou_4: 1.0641 (1.0641) loss_ce_unscaled: 0.4371 (0.4371) class_error_unscaled: 66.6667 (66.6667) loss_bbox_unscaled: 0.1183 (0.1183) loss_giou_unscaled: 0.5136 (0.5136) cardinality_error_unscaled: 0.9375 (0.9375) loss_ce_0_unscaled: 0.6717 (0.6717) loss_bbox_0_unscaled: 0.1260 (0.1260) loss_giou_0_unscaled: 0.5637 (0.5637) cardinality_error_0_unscaled: 7.5625 (7.5625) loss_ce_1_unscaled: 0.6031 (0.6031) loss_bbox_1_unscaled: 0.1138 (0.1138) loss_giou_1_unscaled: 0.5224 (0.5224) cardinality_error_1_unscaled: 3.0625 (3.0625) loss_ce_2_unscaled: 0.5446 (0.5446) loss_bbox_2_unscaled: 0.1200 (0.1200) loss_giou_2_unscaled: 0.5401 (0.5401) cardinality_error_2_unscaled: 1.8125 (1.8125) loss_ce_3_unscaled: 0.5089 (0.5089) loss_bbox_3_unscaled: 0.1187 (0.1187) loss_giou_3_unscaled: 0.5378 (0.5378) cardinality_error_3_unscaled: 1.1250 (1.1250) loss_ce_4_unscaled: 0.4708 (0.4708) loss_bbox_4_unscaled: 0.1210 (0.1210) loss_giou_4_unscaled: 0.5320 (0.5320) cardinality_error_4_unscaled: 0.8750 (0.8750) time: 2.6816 data: 1.3858 max mem: 12069
Epoch: [7] [10/16] eta: 0:00:08 lr: 0.000010 class_error: 50.00 loss: 13.2446 (13.3945) loss_ce: 0.4298 (0.4287) loss_bbox: 0.5747 (0.5788) loss_giou: 1.0691 (1.0830) loss_ce_0: 0.6590 (0.6549) loss_bbox_0: 0.6299 (0.6471) loss_giou_0: 1.1771 (1.1735) loss_ce_1: 0.5787 (0.5808) loss_bbox_1: 0.5730 (0.6049) loss_giou_1: 1.1331 (1.1410) loss_ce_2: 0.5238 (0.5256) loss_bbox_2: 0.5878 (0.5813) loss_giou_2: 1.1079 (1.1151) loss_ce_3: 0.4935 (0.4917) loss_bbox_3: 0.5744 (0.5730) loss_giou_3: 1.0839 (1.0989) loss_ce_4: 0.4598 (0.4590) loss_bbox_4: 0.5823 (0.5720) loss_giou_4: 1.0914 (1.0851) loss_ce_unscaled: 0.4298 (0.4287) class_error_unscaled: 53.3333 (54.4156) loss_bbox_unscaled: 0.1149 (0.1158) loss_giou_unscaled: 0.5345 (0.5415) cardinality_error_unscaled: 0.9375 (0.9432) loss_ce_0_unscaled: 0.6590 (0.6549) loss_bbox_0_unscaled: 0.1260 (0.1294) loss_giou_0_unscaled: 0.5886 (0.5868) cardinality_error_0_unscaled: 6.8125 (7.0909) loss_ce_1_unscaled: 0.5787 (0.5808) loss_bbox_1_unscaled: 0.1146 (0.1210) loss_giou_1_unscaled: 0.5666 (0.5705) cardinality_error_1_unscaled: 2.4375 (2.6648) loss_ce_2_unscaled: 0.5238 (0.5256) loss_bbox_2_unscaled: 0.1176 (0.1163) loss_giou_2_unscaled: 0.5540 (0.5576) cardinality_error_2_unscaled: 1.5625 (1.6534) loss_ce_3_unscaled: 0.4935 (0.4917) loss_bbox_3_unscaled: 0.1149 (0.1146) loss_giou_3_unscaled: 0.5419 (0.5495) cardinality_error_3_unscaled: 1.1250 (1.1648) loss_ce_4_unscaled: 0.4598 (0.4590) loss_bbox_4_unscaled: 0.1165 (0.1144) loss_giou_4_unscaled: 0.5457 (0.5426) cardinality_error_4_unscaled: 1.0000 (1.0227) time: 1.3425 data: 0.1795 max mem: 12069
Epoch: [7] [15/16] eta: 0:00:01 lr: 0.000010 class_error: 53.33 loss: 13.2028 (13.3209) loss_ce: 0.4209 (0.4255) loss_bbox: 0.5747 (0.5948) loss_giou: 1.0640 (1.0687) loss_ce_0: 0.6464 (0.6426) loss_bbox_0: 0.6299 (0.6500) loss_giou_0: 1.1504 (1.1502) loss_ce_1: 0.5733 (0.5709) loss_bbox_1: 0.5954 (0.6155) loss_giou_1: 1.1189 (1.1178) loss_ce_2: 0.5186 (0.5183) loss_bbox_2: 0.5878 (0.5978) loss_giou_2: 1.0802 (1.1009) loss_ce_3: 0.4828 (0.4854) loss_bbox_3: 0.5744 (0.5880) loss_giou_3: 1.0755 (1.0826) loss_ce_4: 0.4492 (0.4542) loss_bbox_4: 0.5823 (0.5891) loss_giou_4: 1.0641 (1.0685) loss_ce_unscaled: 0.4209 (0.4255) class_error_unscaled: 53.3333 (55.5357) loss_bbox_unscaled: 0.1149 (0.1190) loss_giou_unscaled: 0.5320 (0.5344) cardinality_error_unscaled: 0.9375 (0.9414) loss_ce_0_unscaled: 0.6464 (0.6426) loss_bbox_0_unscaled: 0.1260 (0.1300) loss_giou_0_unscaled: 0.5752 (0.5751) cardinality_error_0_unscaled: 6.1250 (6.2656) loss_ce_1_unscaled: 0.5733 (0.5709) loss_bbox_1_unscaled: 0.1191 (0.1231) loss_giou_1_unscaled: 0.5595 (0.5589) cardinality_error_1_unscaled: 2.2500 (2.2852) loss_ce_2_unscaled: 0.5186 (0.5183) loss_bbox_2_unscaled: 0.1176 (0.1196) loss_giou_2_unscaled: 0.5401 (0.5505) cardinality_error_2_unscaled: 1.4375 (1.4180) loss_ce_3_unscaled: 0.4828 (0.4854) loss_bbox_3_unscaled: 0.1149 (0.1176) loss_giou_3_unscaled: 0.5378 (0.5413) cardinality_error_3_unscaled: 1.0625 (1.0898) loss_ce_4_unscaled: 0.4492 (0.4542) loss_bbox_4_unscaled: 0.1165 (0.1178) loss_giou_4_unscaled: 0.5320 (0.5342) cardinality_error_4_unscaled: 0.9375 (0.9883) time: 1.3002 data: 0.1413 max mem: 12069
Epoch: [7] Total time: 0:00:20 (1.3049 s / it)
Averaged stats: lr: 0.000010 class_error: 53.33 loss: 13.2028 (13.3209) loss_ce: 0.4209 (0.4255) loss_bbox: 0.5747 (0.5948) loss_giou: 1.0640 (1.0687) loss_ce_0: 0.6464 (0.6426) loss_bbox_0: 0.6299 (0.6500) loss_giou_0: 1.1504 (1.1502) loss_ce_1: 0.5733 (0.5709) loss_bbox_1: 0.5954 (0.6155) loss_giou_1: 1.1189 (1.1178) loss_ce_2: 0.5186 (0.5183) loss_bbox_2: 0.5878 (0.5978) loss_giou_2: 1.0802 (1.1009) loss_ce_3: 0.4828 (0.4854) loss_bbox_3: 0.5744 (0.5880) loss_giou_3: 1.0755 (1.0826) loss_ce_4: 0.4492 (0.4542) loss_bbox_4: 0.5823 (0.5891) loss_giou_4: 1.0641 (1.0685) loss_ce_unscaled: 0.4209 (0.4255) class_error_unscaled: 53.3333 (55.5357) loss_bbox_unscaled: 0.1149 (0.1190) loss_giou_unscaled: 0.5320 (0.5344) cardinality_error_unscaled: 0.9375 (0.9414) loss_ce_0_unscaled: 0.6464 (0.6426) loss_bbox_0_unscaled: 0.1260 (0.1300) loss_giou_0_unscaled: 0.5752 (0.5751) cardinality_error_0_unscaled: 6.1250 (6.2656) loss_ce_1_unscaled: 0.5733 (0.5709) loss_bbox_1_unscaled: 0.1191 (0.1231) loss_giou_1_unscaled: 0.5595 (0.5589) cardinality_error_1_unscaled: 2.2500 (2.2852) loss_ce_2_unscaled: 0.5186 (0.5183) loss_bbox_2_unscaled: 0.1176 (0.1196) loss_giou_2_unscaled: 0.5401 (0.5505) cardinality_error_2_unscaled: 1.4375 (1.4180) loss_ce_3_unscaled: 0.4828 (0.4854) loss_bbox_3_unscaled: 0.1149 (0.1176) loss_giou_3_unscaled: 0.5378 (0.5413) cardinality_error_3_unscaled: 1.0625 (1.0898) loss_ce_4_unscaled: 0.4492 (0.4542) loss_bbox_4_unscaled: 0.1165 (0.1178) loss_giou_4_unscaled: 0.5320 (0.5342) cardinality_error_4_unscaled: 0.9375 (0.9883)
Test: [0/4] eta: 0:00:07 class_error: 43.75 loss: 11.1375 (11.1375) loss_ce: 0.4330 (0.4330) loss_bbox: 0.3940 (0.3940) loss_giou: 0.9366 (0.9366) loss_ce_0: 0.5985 (0.5985) loss_bbox_0: 0.4563 (0.4563) loss_giou_0: 0.9201 (0.9201) loss_ce_1: 0.5475 (0.5475) loss_bbox_1: 0.4360 (0.4360) loss_giou_1: 0.9234 (0.9234) loss_ce_2: 0.5049 (0.5049) loss_bbox_2: 0.4333 (0.4333) loss_giou_2: 0.9731 (0.9731) loss_ce_3: 0.4800 (0.4800) loss_bbox_3: 0.4110 (0.4110) loss_giou_3: 0.9033 (0.9033) loss_ce_4: 0.4533 (0.4533) loss_bbox_4: 0.4095 (0.4095) loss_giou_4: 0.9236 (0.9236) loss_ce_unscaled: 0.4330 (0.4330) class_error_unscaled: 43.7500 (43.7500) loss_bbox_unscaled: 0.0788 (0.0788) loss_giou_unscaled: 0.4683 (0.4683) cardinality_error_unscaled: 1.4375 (1.4375) loss_ce_0_unscaled: 0.5985 (0.5985) loss_bbox_0_unscaled: 0.0913 (0.0913) loss_giou_0_unscaled: 0.4600 (0.4600) cardinality_error_0_unscaled: 4.0625 (4.0625) loss_ce_1_unscaled: 0.5475 (0.5475) loss_bbox_1_unscaled: 0.0872 (0.0872) loss_giou_1_unscaled: 0.4617 (0.4617) cardinality_error_1_unscaled: 2.1875 (2.1875) loss_ce_2_unscaled: 0.5049 (0.5049) loss_bbox_2_unscaled: 0.0867 (0.0867) loss_giou_2_unscaled: 0.4865 (0.4865) cardinality_error_2_unscaled: 1.5625 (1.5625) loss_ce_3_unscaled: 0.4800 (0.4800) loss_bbox_3_unscaled: 0.0822 (0.0822) loss_giou_3_unscaled: 0.4517 (0.4517) cardinality_error_3_unscaled: 1.5000 (1.5000) loss_ce_4_unscaled: 0.4533 (0.4533) loss_bbox_4_unscaled: 0.0819 (0.0819) loss_giou_4_unscaled: 0.4618 (0.4618) cardinality_error_4_unscaled: 1.3750 (1.3750) time: 1.8790 data: 1.1048 max mem: 12069
Test: [3/4] eta: 0:00:00 class_error: 62.50 loss: 11.2523 (11.4494) loss_ce: 0.4330 (0.4325) loss_bbox: 0.3940 (0.4307) loss_giou: 0.8860 (0.8942) loss_ce_0: 0.6234 (0.6229) loss_bbox_0: 0.4982 (0.4963) loss_giou_0: 0.9633 (0.9784) loss_ce_1: 0.5553 (0.5589) loss_bbox_1: 0.4861 (0.5019) loss_giou_1: 0.9660 (0.9823) loss_ce_2: 0.5071 (0.5098) loss_bbox_2: 0.4916 (0.4807) loss_giou_2: 0.9401 (0.9502) loss_ce_3: 0.4800 (0.4828) loss_bbox_3: 0.4205 (0.4468) loss_giou_3: 0.9033 (0.8989) loss_ce_4: 0.4533 (0.4554) loss_bbox_4: 0.4095 (0.4261) loss_giou_4: 0.9209 (0.9006) loss_ce_unscaled: 0.4330 (0.4325) class_error_unscaled: 62.5000 (60.9375) loss_bbox_unscaled: 0.0788 (0.0861) loss_giou_unscaled: 0.4430 (0.4471) cardinality_error_unscaled: 1.0000 (1.1094) loss_ce_0_unscaled: 0.6234 (0.6229) loss_bbox_0_unscaled: 0.0996 (0.0993) loss_giou_0_unscaled: 0.4817 (0.4892) cardinality_error_0_unscaled: 4.3750 (4.6562) loss_ce_1_unscaled: 0.5553 (0.5589) loss_bbox_1_unscaled: 0.0972 (0.1004) loss_giou_1_unscaled: 0.4830 (0.4912) cardinality_error_1_unscaled: 1.5000 (1.7500) loss_ce_2_unscaled: 0.5071 (0.5098) loss_bbox_2_unscaled: 0.0983 (0.0961) loss_giou_2_unscaled: 0.4700 (0.4751) cardinality_error_2_unscaled: 1.0625 (1.1250) loss_ce_3_unscaled: 0.4800 (0.4828) loss_bbox_3_unscaled: 0.0841 (0.0894) loss_giou_3_unscaled: 0.4517 (0.4494) cardinality_error_3_unscaled: 1.0000 (1.1094) loss_ce_4_unscaled: 0.4533 (0.4554) loss_bbox_4_unscaled: 0.0819 (0.0852) loss_giou_4_unscaled: 0.4604 (0.4503) cardinality_error_4_unscaled: 1.0000 (1.0781) time: 0.9780 data: 0.3306 max mem: 12069
Test: Total time: 0:00:03 (0.9938 s / it)
Averaged stats: class_error: 62.50 loss: 11.2523 (11.4494) loss_ce: 0.4330 (0.4325) loss_bbox: 0.3940 (0.4307) loss_giou: 0.8860 (0.8942) loss_ce_0: 0.6234 (0.6229) loss_bbox_0: 0.4982 (0.4963) loss_giou_0: 0.9633 (0.9784) loss_ce_1: 0.5553 (0.5589) loss_bbox_1: 0.4861 (0.5019) loss_giou_1: 0.9660 (0.9823) loss_ce_2: 0.5071 (0.5098) loss_bbox_2: 0.4916 (0.4807) loss_giou_2: 0.9401 (0.9502) loss_ce_3: 0.4800 (0.4828) loss_bbox_3: 0.4205 (0.4468) loss_giou_3: 0.9033 (0.8989) loss_ce_4: 0.4533 (0.4554) loss_bbox_4: 0.4095 (0.4261) loss_giou_4: 0.9209 (0.9006) loss_ce_unscaled: 0.4330 (0.4325) class_error_unscaled: 62.5000 (60.9375) loss_bbox_unscaled: 0.0788 (0.0861) loss_giou_unscaled: 0.4430 (0.4471) cardinality_error_unscaled: 1.0000 (1.1094) loss_ce_0_unscaled: 0.6234 (0.6229) loss_bbox_0_unscaled: 0.0996 (0.0993) loss_giou_0_unscaled: 0.4817 (0.4892) cardinality_error_0_unscaled: 4.3750 (4.6562) loss_ce_1_unscaled: 0.5553 (0.5589) loss_bbox_1_unscaled: 0.0972 (0.1004) loss_giou_1_unscaled: 0.4830 (0.4912) cardinality_error_1_unscaled: 1.5000 (1.7500) loss_ce_2_unscaled: 0.5071 (0.5098) loss_bbox_2_unscaled: 0.0983 (0.0961) loss_giou_2_unscaled: 0.4700 (0.4751) cardinality_error_2_unscaled: 1.0625 (1.1250) loss_ce_3_unscaled: 0.4800 (0.4828) loss_bbox_3_unscaled: 0.0841 (0.0894) loss_giou_3_unscaled: 0.4517 (0.4494) cardinality_error_3_unscaled: 1.0000 (1.1094) loss_ce_4_unscaled: 0.4533 (0.4554) loss_bbox_4_unscaled: 0.0819 (0.0852) loss_giou_4_unscaled: 0.4604 (0.4503) cardinality_error_4_unscaled: 1.0000 (1.0781)
Accumulating evaluation results...
DONE (t=0.07s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.002
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.003
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.001
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.010
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.166
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.225
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.081
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.189
Epoch: [8] [ 0/16] eta: 0:00:39 lr: 0.000010 class_error: 60.00 loss: 14.2771 (14.2771) loss_ce: 0.4233 (0.4233) loss_bbox: 0.6476 (0.6476) loss_giou: 1.1861 (1.1861) loss_ce_0: 0.6437 (0.6437) loss_bbox_0: 0.6738 (0.6738) loss_giou_0: 1.2055 (1.2055) loss_ce_1: 0.5719 (0.5719) loss_bbox_1: 0.7079 (0.7079) loss_giou_1: 1.2035 (1.2035) loss_ce_2: 0.5129 (0.5129) loss_bbox_2: 0.6436 (0.6436) loss_giou_2: 1.2142 (1.2142) loss_ce_3: 0.4872 (0.4872) loss_bbox_3: 0.6560 (0.6560) loss_giou_3: 1.2051 (1.2051) loss_ce_4: 0.4508 (0.4508) loss_bbox_4: 0.6524 (0.6524) loss_giou_4: 1.1916 (1.1916) loss_ce_unscaled: 0.4233 (0.4233) class_error_unscaled: 60.0000 (60.0000) loss_bbox_unscaled: 0.1295 (0.1295) loss_giou_unscaled: 0.5931 (0.5931) cardinality_error_unscaled: 0.9375 (0.9375) loss_ce_0_unscaled: 0.6437 (0.6437) loss_bbox_0_unscaled: 0.1348 (0.1348) loss_giou_0_unscaled: 0.6028 (0.6028) cardinality_error_0_unscaled: 5.7500 (5.7500) loss_ce_1_unscaled: 0.5719 (0.5719) loss_bbox_1_unscaled: 0.1416 (0.1416) loss_giou_1_unscaled: 0.6017 (0.6017) cardinality_error_1_unscaled: 1.3125 (1.3125) loss_ce_2_unscaled: 0.5129 (0.5129) loss_bbox_2_unscaled: 0.1287 (0.1287) loss_giou_2_unscaled: 0.6071 (0.6071) cardinality_error_2_unscaled: 0.9375 (0.9375) loss_ce_3_unscaled: 0.4872 (0.4872) loss_bbox_3_unscaled: 0.1312 (0.1312) loss_giou_3_unscaled: 0.6025 (0.6025) cardinality_error_3_unscaled: 0.8750 (0.8750) loss_ce_4_unscaled: 0.4508 (0.4508) loss_bbox_4_unscaled: 0.1305 (0.1305) loss_giou_4_unscaled: 0.5958 (0.5958) cardinality_error_4_unscaled: 0.9375 (0.9375) time: 2.4655 data: 1.2768 max mem: 12069
Epoch: [8] [10/16] eta: 0:00:08 lr: 0.000010 class_error: 62.50 loss: 12.9903 (13.0954) loss_ce: 0.4233 (0.4210) loss_bbox: 0.6079 (0.5953) loss_giou: 1.0492 (1.0532) loss_ce_0: 0.6071 (0.6123) loss_bbox_0: 0.6194 (0.6293) loss_giou_0: 1.1121 (1.0983) loss_ce_1: 0.5463 (0.5486) loss_bbox_1: 0.6196 (0.6213) loss_giou_1: 1.0856 (1.1013) loss_ce_2: 0.4993 (0.4995) loss_bbox_2: 0.5989 (0.6018) loss_giou_2: 1.0575 (1.0799) loss_ce_3: 0.4688 (0.4704) loss_bbox_3: 0.6148 (0.6052) loss_giou_3: 1.0550 (1.0533) loss_ce_4: 0.4402 (0.4425) loss_bbox_4: 0.6111 (0.5997) loss_giou_4: 1.0926 (1.0625) loss_ce_unscaled: 0.4233 (0.4210) class_error_unscaled: 64.2857 (65.0108) loss_bbox_unscaled: 0.1216 (0.1191) loss_giou_unscaled: 0.5246 (0.5266) cardinality_error_unscaled: 1.0000 (0.9716) loss_ce_0_unscaled: 0.6071 (0.6123) loss_bbox_0_unscaled: 0.1239 (0.1259) loss_giou_0_unscaled: 0.5560 (0.5492) cardinality_error_0_unscaled: 3.4375 (4.0625) loss_ce_1_unscaled: 0.5463 (0.5486) loss_bbox_1_unscaled: 0.1239 (0.1243) loss_giou_1_unscaled: 0.5428 (0.5507) cardinality_error_1_unscaled: 1.3125 (1.4034) loss_ce_2_unscaled: 0.4993 (0.4995) loss_bbox_2_unscaled: 0.1198 (0.1204) loss_giou_2_unscaled: 0.5288 (0.5399) cardinality_error_2_unscaled: 1.0000 (1.1477) loss_ce_3_unscaled: 0.4688 (0.4704) loss_bbox_3_unscaled: 0.1230 (0.1210) loss_giou_3_unscaled: 0.5275 (0.5267) cardinality_error_3_unscaled: 1.0000 (1.0170) loss_ce_4_unscaled: 0.4402 (0.4425) loss_bbox_4_unscaled: 0.1222 (0.1199) loss_giou_4_unscaled: 0.5463 (0.5312) cardinality_error_4_unscaled: 0.9375 (0.9773) time: 1.3443 data: 0.1747 max mem: 12069
Epoch: [8] [15/16] eta: 0:00:01 lr: 0.000010 class_error: 56.25 loss: 12.8589 (12.8881) loss_ce: 0.4139 (0.4143) loss_bbox: 0.5765 (0.5783) loss_giou: 1.0452 (1.0404) loss_ce_0: 0.5983 (0.6034) loss_bbox_0: 0.5989 (0.6182) loss_giou_0: 1.1121 (1.0990) loss_ce_1: 0.5388 (0.5402) loss_bbox_1: 0.6094 (0.6006) loss_giou_1: 1.0817 (1.0825) loss_ce_2: 0.4935 (0.4926) loss_bbox_2: 0.5942 (0.5844) loss_giou_2: 1.0575 (1.0645) loss_ce_3: 0.4644 (0.4629) loss_bbox_3: 0.5838 (0.5902) loss_giou_3: 1.0550 (1.0497) loss_ce_4: 0.4351 (0.4359) loss_bbox_4: 0.5874 (0.5808) loss_giou_4: 1.0598 (1.0502) loss_ce_unscaled: 0.4139 (0.4143) class_error_unscaled: 62.5000 (62.3412) loss_bbox_unscaled: 0.1153 (0.1157) loss_giou_unscaled: 0.5226 (0.5202) cardinality_error_unscaled: 0.9375 (0.9609) loss_ce_0_unscaled: 0.5983 (0.6034) loss_bbox_0_unscaled: 0.1198 (0.1236) loss_giou_0_unscaled: 0.5560 (0.5495) cardinality_error_0_unscaled: 3.2500 (3.6875) loss_ce_1_unscaled: 0.5388 (0.5402) loss_bbox_1_unscaled: 0.1219 (0.1201) loss_giou_1_unscaled: 0.5408 (0.5412) cardinality_error_1_unscaled: 1.2500 (1.3477) loss_ce_2_unscaled: 0.4935 (0.4926) loss_bbox_2_unscaled: 0.1188 (0.1169) loss_giou_2_unscaled: 0.5288 (0.5323) cardinality_error_2_unscaled: 0.9375 (1.0938) loss_ce_3_unscaled: 0.4644 (0.4629) loss_bbox_3_unscaled: 0.1168 (0.1180) loss_giou_3_unscaled: 0.5275 (0.5248) cardinality_error_3_unscaled: 0.9375 (0.9844) loss_ce_4_unscaled: 0.4351 (0.4359) loss_bbox_4_unscaled: 0.1175 (0.1162) loss_giou_4_unscaled: 0.5299 (0.5251) cardinality_error_4_unscaled: 0.9375 (0.9570) time: 1.2829 data: 0.1380 max mem: 12069
Epoch: [8] Total time: 0:00:20 (1.2871 s / it)
Averaged stats: lr: 0.000010 class_error: 56.25 loss: 12.8589 (12.8881) loss_ce: 0.4139 (0.4143) loss_bbox: 0.5765 (0.5783) loss_giou: 1.0452 (1.0404) loss_ce_0: 0.5983 (0.6034) loss_bbox_0: 0.5989 (0.6182) loss_giou_0: 1.1121 (1.0990) loss_ce_1: 0.5388 (0.5402) loss_bbox_1: 0.6094 (0.6006) loss_giou_1: 1.0817 (1.0825) loss_ce_2: 0.4935 (0.4926) loss_bbox_2: 0.5942 (0.5844) loss_giou_2: 1.0575 (1.0645) loss_ce_3: 0.4644 (0.4629) loss_bbox_3: 0.5838 (0.5902) loss_giou_3: 1.0550 (1.0497) loss_ce_4: 0.4351 (0.4359) loss_bbox_4: 0.5874 (0.5808) loss_giou_4: 1.0598 (1.0502) loss_ce_unscaled: 0.4139 (0.4143) class_error_unscaled: 62.5000 (62.3412) loss_bbox_unscaled: 0.1153 (0.1157) loss_giou_unscaled: 0.5226 (0.5202) cardinality_error_unscaled: 0.9375 (0.9609) loss_ce_0_unscaled: 0.5983 (0.6034) loss_bbox_0_unscaled: 0.1198 (0.1236) loss_giou_0_unscaled: 0.5560 (0.5495) cardinality_error_0_unscaled: 3.2500 (3.6875) loss_ce_1_unscaled: 0.5388 (0.5402) loss_bbox_1_unscaled: 0.1219 (0.1201) loss_giou_1_unscaled: 0.5408 (0.5412) cardinality_error_1_unscaled: 1.2500 (1.3477) loss_ce_2_unscaled: 0.4935 (0.4926) loss_bbox_2_unscaled: 0.1188 (0.1169) loss_giou_2_unscaled: 0.5288 (0.5323) cardinality_error_2_unscaled: 0.9375 (1.0938) loss_ce_3_unscaled: 0.4644 (0.4629) loss_bbox_3_unscaled: 0.1168 (0.1180) loss_giou_3_unscaled: 0.5275 (0.5248) cardinality_error_3_unscaled: 0.9375 (0.9844) loss_ce_4_unscaled: 0.4351 (0.4359) loss_bbox_4_unscaled: 0.1175 (0.1162) loss_giou_4_unscaled: 0.5299 (0.5251) cardinality_error_4_unscaled: 0.9375 (0.9570)
Test: [0/4] eta: 0:00:08 class_error: 50.00 loss: 10.7686 (10.7686) loss_ce: 0.4221 (0.4221) loss_bbox: 0.3832 (0.3832) loss_giou: 0.9180 (0.9180) loss_ce_0: 0.5606 (0.5606) loss_bbox_0: 0.4494 (0.4494) loss_giou_0: 0.8821 (0.8821) loss_ce_1: 0.5171 (0.5171) loss_bbox_1: 0.4139 (0.4139) loss_giou_1: 0.8972 (0.8972) loss_ce_2: 0.4794 (0.4794) loss_bbox_2: 0.4134 (0.4134) loss_giou_2: 0.9244 (0.9244) loss_ce_3: 0.4595 (0.4595) loss_bbox_3: 0.3989 (0.3989) loss_giou_3: 0.8949 (0.8949) loss_ce_4: 0.4405 (0.4405) loss_bbox_4: 0.3906 (0.3906) loss_giou_4: 0.9235 (0.9235) loss_ce_unscaled: 0.4221 (0.4221) class_error_unscaled: 50.0000 (50.0000) loss_bbox_unscaled: 0.0766 (0.0766) loss_giou_unscaled: 0.4590 (0.4590) cardinality_error_unscaled: 1.1250 (1.1250) loss_ce_0_unscaled: 0.5606 (0.5606) loss_bbox_0_unscaled: 0.0899 (0.0899) loss_giou_0_unscaled: 0.4410 (0.4410) cardinality_error_0_unscaled: 2.6250 (2.6250) loss_ce_1_unscaled: 0.5171 (0.5171) loss_bbox_1_unscaled: 0.0828 (0.0828) loss_giou_1_unscaled: 0.4486 (0.4486) cardinality_error_1_unscaled: 1.6875 (1.6875) loss_ce_2_unscaled: 0.4794 (0.4794) loss_bbox_2_unscaled: 0.0827 (0.0827) loss_giou_2_unscaled: 0.4622 (0.4622) cardinality_error_2_unscaled: 1.5000 (1.5000) loss_ce_3_unscaled: 0.4595 (0.4595) loss_bbox_3_unscaled: 0.0798 (0.0798) loss_giou_3_unscaled: 0.4474 (0.4474) cardinality_error_3_unscaled: 1.3750 (1.3750) loss_ce_4_unscaled: 0.4405 (0.4405) loss_bbox_4_unscaled: 0.0781 (0.0781) loss_giou_4_unscaled: 0.4618 (0.4618) cardinality_error_4_unscaled: 1.4375 (1.4375) time: 2.0772 data: 1.1800 max mem: 12069
Test: [3/4] eta: 0:00:01 class_error: 62.50 loss: 10.6431 (10.8737) loss_ce: 0.4151 (0.4163) loss_bbox: 0.3850 (0.4156) loss_giou: 0.8311 (0.8454) loss_ce_0: 0.5804 (0.5804) loss_bbox_0: 0.4754 (0.4824) loss_giou_0: 0.8868 (0.9481) loss_ce_1: 0.5212 (0.5250) loss_bbox_1: 0.4706 (0.4586) loss_giou_1: 0.8972 (0.9172) loss_ce_2: 0.4794 (0.4822) loss_bbox_2: 0.4399 (0.4492) loss_giou_2: 0.9017 (0.9100) loss_ce_3: 0.4579 (0.4581) loss_bbox_3: 0.3989 (0.4320) loss_giou_3: 0.8198 (0.8535) loss_ce_4: 0.4337 (0.4351) loss_bbox_4: 0.3906 (0.4120) loss_giou_4: 0.8143 (0.8526) loss_ce_unscaled: 0.4151 (0.4163) class_error_unscaled: 56.2500 (59.3750) loss_bbox_unscaled: 0.0770 (0.0831) loss_giou_unscaled: 0.4156 (0.4227) cardinality_error_unscaled: 1.0000 (1.0312) loss_ce_0_unscaled: 0.5804 (0.5804) loss_bbox_0_unscaled: 0.0951 (0.0965) loss_giou_0_unscaled: 0.4434 (0.4741) cardinality_error_0_unscaled: 2.3750 (2.6250) loss_ce_1_unscaled: 0.5212 (0.5250) loss_bbox_1_unscaled: 0.0941 (0.0917) loss_giou_1_unscaled: 0.4486 (0.4586) cardinality_error_1_unscaled: 1.1250 (1.2969) loss_ce_2_unscaled: 0.4794 (0.4822) loss_bbox_2_unscaled: 0.0880 (0.0898) loss_giou_2_unscaled: 0.4509 (0.4550) cardinality_error_2_unscaled: 1.0000 (1.1250) loss_ce_3_unscaled: 0.4579 (0.4581) loss_bbox_3_unscaled: 0.0798 (0.0864) loss_giou_3_unscaled: 0.4099 (0.4267) cardinality_error_3_unscaled: 1.0000 (1.0938) loss_ce_4_unscaled: 0.4337 (0.4351) loss_bbox_4_unscaled: 0.0781 (0.0824) loss_giou_4_unscaled: 0.4072 (0.4263) cardinality_error_4_unscaled: 1.0000 (1.1094) time: 1.0378 data: 0.3602 max mem: 12069
Test: Total time: 0:00:04 (1.0575 s / it)
Averaged stats: class_error: 62.50 loss: 10.6431 (10.8737) loss_ce: 0.4151 (0.4163) loss_bbox: 0.3850 (0.4156) loss_giou: 0.8311 (0.8454) loss_ce_0: 0.5804 (0.5804) loss_bbox_0: 0.4754 (0.4824) loss_giou_0: 0.8868 (0.9481) loss_ce_1: 0.5212 (0.5250) loss_bbox_1: 0.4706 (0.4586) loss_giou_1: 0.8972 (0.9172) loss_ce_2: 0.4794 (0.4822) loss_bbox_2: 0.4399 (0.4492) loss_giou_2: 0.9017 (0.9100) loss_ce_3: 0.4579 (0.4581) loss_bbox_3: 0.3989 (0.4320) loss_giou_3: 0.8198 (0.8535) loss_ce_4: 0.4337 (0.4351) loss_bbox_4: 0.3906 (0.4120) loss_giou_4: 0.8143 (0.8526) loss_ce_unscaled: 0.4151 (0.4163) class_error_unscaled: 56.2500 (59.3750) loss_bbox_unscaled: 0.0770 (0.0831) loss_giou_unscaled: 0.4156 (0.4227) cardinality_error_unscaled: 1.0000 (1.0312) loss_ce_0_unscaled: 0.5804 (0.5804) loss_bbox_0_unscaled: 0.0951 (0.0965) loss_giou_0_unscaled: 0.4434 (0.4741) cardinality_error_0_unscaled: 2.3750 (2.6250) loss_ce_1_unscaled: 0.5212 (0.5250) loss_bbox_1_unscaled: 0.0941 (0.0917) loss_giou_1_unscaled: 0.4486 (0.4586) cardinality_error_1_unscaled: 1.1250 (1.2969) loss_ce_2_unscaled: 0.4794 (0.4822) loss_bbox_2_unscaled: 0.0880 (0.0898) loss_giou_2_unscaled: 0.4509 (0.4550) cardinality_error_2_unscaled: 1.0000 (1.1250) loss_ce_3_unscaled: 0.4579 (0.4581) loss_bbox_3_unscaled: 0.0798 (0.0864) loss_giou_3_unscaled: 0.4099 (0.4267) cardinality_error_3_unscaled: 1.0000 (1.0938) loss_ce_4_unscaled: 0.4337 (0.4351) loss_bbox_4_unscaled: 0.0781 (0.0824) loss_giou_4_unscaled: 0.4072 (0.4263) cardinality_error_4_unscaled: 1.0000 (1.1094)
Accumulating evaluation results...
DONE (t=0.07s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.003
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.002
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.001
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.017
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.154
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.163
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.056
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.196
Epoch: [9] [ 0/16] eta: 0:00:38 lr: 0.000010 class_error: 20.00 loss: 12.4249 (12.4249) loss_ce: 0.4002 (0.4002) loss_bbox: 0.5012 (0.5012) loss_giou: 1.1058 (1.1058) loss_ce_0: 0.5779 (0.5779) loss_bbox_0: 0.5233 (0.5233) loss_giou_0: 1.2182 (1.2182) loss_ce_1: 0.5146 (0.5146) loss_bbox_1: 0.4558 (0.4558) loss_giou_1: 1.1117 (1.1117) loss_ce_2: 0.4689 (0.4689) loss_bbox_2: 0.4515 (0.4515) loss_giou_2: 1.0339 (1.0339) loss_ce_3: 0.4353 (0.4353) loss_bbox_3: 0.4980 (0.4980) loss_giou_3: 1.1051 (1.1051) loss_ce_4: 0.4140 (0.4140) loss_bbox_4: 0.5043 (0.5043) loss_giou_4: 1.1051 (1.1051) loss_ce_unscaled: 0.4002 (0.4002) class_error_unscaled: 20.0000 (20.0000) loss_bbox_unscaled: 0.1002 (0.1002) loss_giou_unscaled: 0.5529 (0.5529) cardinality_error_unscaled: 0.9375 (0.9375) loss_ce_0_unscaled: 0.5779 (0.5779) loss_bbox_0_unscaled: 0.1047 (0.1047) loss_giou_0_unscaled: 0.6091 (0.6091) cardinality_error_0_unscaled: 4.3125 (4.3125) loss_ce_1_unscaled: 0.5146 (0.5146) loss_bbox_1_unscaled: 0.0912 (0.0912) loss_giou_1_unscaled: 0.5559 (0.5559) cardinality_error_1_unscaled: 2.1875 (2.1875) loss_ce_2_unscaled: 0.4689 (0.4689) loss_bbox_2_unscaled: 0.0903 (0.0903) loss_giou_2_unscaled: 0.5170 (0.5170) cardinality_error_2_unscaled: 1.0625 (1.0625) loss_ce_3_unscaled: 0.4353 (0.4353) loss_bbox_3_unscaled: 0.0996 (0.0996) loss_giou_3_unscaled: 0.5525 (0.5525) cardinality_error_3_unscaled: 0.8125 (0.8125) loss_ce_4_unscaled: 0.4140 (0.4140) loss_bbox_4_unscaled: 0.1009 (0.1009) loss_giou_4_unscaled: 0.5525 (0.5525) cardinality_error_4_unscaled: 0.8125 (0.8125) time: 2.4173 data: 1.1315 max mem: 12069
Epoch: [9] [10/16] eta: 0:00:07 lr: 0.000010 class_error: 50.00 loss: 12.5515 (12.5510) loss_ce: 0.4002 (0.3973) loss_bbox: 0.5523 (0.5565) loss_giou: 1.0276 (1.0257) loss_ce_0: 0.5635 (0.5709) loss_bbox_0: 0.5833 (0.6006) loss_giou_0: 1.1071 (1.1258) loss_ce_1: 0.5106 (0.5129) loss_bbox_1: 0.5794 (0.5669) loss_giou_1: 1.1075 (1.0720) loss_ce_2: 0.4657 (0.4665) loss_bbox_2: 0.5768 (0.5737) loss_giou_2: 1.0571 (1.0449) loss_ce_3: 0.4384 (0.4387) loss_bbox_3: 0.5825 (0.5676) loss_giou_3: 1.0341 (1.0295) loss_ce_4: 0.4140 (0.4153) loss_bbox_4: 0.5417 (0.5581) loss_giou_4: 1.0044 (1.0281) loss_ce_unscaled: 0.4002 (0.3973) class_error_unscaled: 61.5385 (55.1940) loss_bbox_unscaled: 0.1105 (0.1113) loss_giou_unscaled: 0.5138 (0.5129) cardinality_error_unscaled: 1.0000 (0.9489) loss_ce_0_unscaled: 0.5635 (0.5709) loss_bbox_0_unscaled: 0.1167 (0.1201) loss_giou_0_unscaled: 0.5535 (0.5629) cardinality_error_0_unscaled: 2.1875 (2.7159) loss_ce_1_unscaled: 0.5106 (0.5129) loss_bbox_1_unscaled: 0.1159 (0.1134) loss_giou_1_unscaled: 0.5538 (0.5360) cardinality_error_1_unscaled: 1.0000 (1.2330) loss_ce_2_unscaled: 0.4657 (0.4665) loss_bbox_2_unscaled: 0.1154 (0.1147) loss_giou_2_unscaled: 0.5286 (0.5224) cardinality_error_2_unscaled: 1.0000 (0.9943) loss_ce_3_unscaled: 0.4384 (0.4387) loss_bbox_3_unscaled: 0.1165 (0.1135) loss_giou_3_unscaled: 0.5171 (0.5147) cardinality_error_3_unscaled: 0.9375 (0.9261) loss_ce_4_unscaled: 0.4140 (0.4153) loss_bbox_4_unscaled: 0.1083 (0.1116) loss_giou_4_unscaled: 0.5022 (0.5140) cardinality_error_4_unscaled: 1.0000 (0.9375) time: 1.3314 data: 0.1697 max mem: 12069
Epoch: [9] [15/16] eta: 0:00:01 lr: 0.000010 class_error: 60.00 loss: 12.2299 (12.5212) loss_ce: 0.3910 (0.3949) loss_bbox: 0.5487 (0.5525) loss_giou: 1.0026 (1.0271) loss_ce_0: 0.5572 (0.5630) loss_bbox_0: 0.5833 (0.5982) loss_giou_0: 1.0846 (1.1260) loss_ce_1: 0.4985 (0.5075) loss_bbox_1: 0.5644 (0.5693) loss_giou_1: 1.0314 (1.0654) loss_ce_2: 0.4594 (0.4629) loss_bbox_2: 0.5768 (0.5762) loss_giou_2: 1.0339 (1.0509) loss_ce_3: 0.4332 (0.4357) loss_bbox_3: 0.5658 (0.5616) loss_giou_3: 1.0023 (1.0314) loss_ce_4: 0.4095 (0.4128) loss_bbox_4: 0.5412 (0.5540) loss_giou_4: 0.9935 (1.0315) loss_ce_unscaled: 0.3910 (0.3949) class_error_unscaled: 56.2500 (54.7948) loss_bbox_unscaled: 0.1097 (0.1105) loss_giou_unscaled: 0.5013 (0.5135) cardinality_error_unscaled: 0.9375 (0.9492) loss_ce_0_unscaled: 0.5572 (0.5630) loss_bbox_0_unscaled: 0.1167 (0.1196) loss_giou_0_unscaled: 0.5423 (0.5630) cardinality_error_0_unscaled: 2.1875 (2.5781) loss_ce_1_unscaled: 0.4985 (0.5075) loss_bbox_1_unscaled: 0.1129 (0.1139) loss_giou_1_unscaled: 0.5157 (0.5327) cardinality_error_1_unscaled: 1.0625 (1.2617) loss_ce_2_unscaled: 0.4594 (0.4629) loss_bbox_2_unscaled: 0.1154 (0.1152) loss_giou_2_unscaled: 0.5170 (0.5255) cardinality_error_2_unscaled: 1.0000 (1.0078) loss_ce_3_unscaled: 0.4332 (0.4357) loss_bbox_3_unscaled: 0.1132 (0.1123) loss_giou_3_unscaled: 0.5011 (0.5157) cardinality_error_3_unscaled: 0.9375 (0.9531) loss_ce_4_unscaled: 0.4095 (0.4128) loss_bbox_4_unscaled: 0.1082 (0.1108) loss_giou_4_unscaled: 0.4967 (0.5157) cardinality_error_4_unscaled: 0.9375 (0.9336) time: 1.2956 data: 0.1354 max mem: 12069
Epoch: [9] Total time: 0:00:20 (1.3002 s / it)
Averaged stats: lr: 0.000010 class_error: 60.00 loss: 12.2299 (12.5212) loss_ce: 0.3910 (0.3949) loss_bbox: 0.5487 (0.5525) loss_giou: 1.0026 (1.0271) loss_ce_0: 0.5572 (0.5630) loss_bbox_0: 0.5833 (0.5982) loss_giou_0: 1.0846 (1.1260) loss_ce_1: 0.4985 (0.5075) loss_bbox_1: 0.5644 (0.5693) loss_giou_1: 1.0314 (1.0654) loss_ce_2: 0.4594 (0.4629) loss_bbox_2: 0.5768 (0.5762) loss_giou_2: 1.0339 (1.0509) loss_ce_3: 0.4332 (0.4357) loss_bbox_3: 0.5658 (0.5616) loss_giou_3: 1.0023 (1.0314) loss_ce_4: 0.4095 (0.4128) loss_bbox_4: 0.5412 (0.5540) loss_giou_4: 0.9935 (1.0315) loss_ce_unscaled: 0.3910 (0.3949) class_error_unscaled: 56.2500 (54.7948) loss_bbox_unscaled: 0.1097 (0.1105) loss_giou_unscaled: 0.5013 (0.5135) cardinality_error_unscaled: 0.9375 (0.9492) loss_ce_0_unscaled: 0.5572 (0.5630) loss_bbox_0_unscaled: 0.1167 (0.1196) loss_giou_0_unscaled: 0.5423 (0.5630) cardinality_error_0_unscaled: 2.1875 (2.5781) loss_ce_1_unscaled: 0.4985 (0.5075) loss_bbox_1_unscaled: 0.1129 (0.1139) loss_giou_1_unscaled: 0.5157 (0.5327) cardinality_error_1_unscaled: 1.0625 (1.2617) loss_ce_2_unscaled: 0.4594 (0.4629) loss_bbox_2_unscaled: 0.1154 (0.1152) loss_giou_2_unscaled: 0.5170 (0.5255) cardinality_error_2_unscaled: 1.0000 (1.0078) loss_ce_3_unscaled: 0.4332 (0.4357) loss_bbox_3_unscaled: 0.1132 (0.1123) loss_giou_3_unscaled: 0.5011 (0.5157) cardinality_error_3_unscaled: 0.9375 (0.9531) loss_ce_4_unscaled: 0.4095 (0.4128) loss_bbox_4_unscaled: 0.1082 (0.1108) loss_giou_4_unscaled: 0.4967 (0.5157) cardinality_error_4_unscaled: 0.9375 (0.9336)
Test: [0/4] eta: 0:00:07 class_error: 50.00 loss: 10.6393 (10.6393) loss_ce: 0.4087 (0.4087) loss_bbox: 0.3879 (0.3879) loss_giou: 0.9028 (0.9028) loss_ce_0: 0.5297 (0.5297) loss_bbox_0: 0.4255 (0.4255) loss_giou_0: 0.8828 (0.8828) loss_ce_1: 0.4915 (0.4915) loss_bbox_1: 0.4370 (0.4370) loss_giou_1: 0.8793 (0.8793) loss_ce_2: 0.4586 (0.4586) loss_bbox_2: 0.4196 (0.4196) loss_giou_2: 0.9219 (0.9219) loss_ce_3: 0.4405 (0.4405) loss_bbox_3: 0.3955 (0.3955) loss_giou_3: 0.9208 (0.9208) loss_ce_4: 0.4277 (0.4277) loss_bbox_4: 0.3995 (0.3995) loss_giou_4: 0.9101 (0.9101) loss_ce_unscaled: 0.4087 (0.4087) class_error_unscaled: 50.0000 (50.0000) loss_bbox_unscaled: 0.0776 (0.0776) loss_giou_unscaled: 0.4514 (0.4514) cardinality_error_unscaled: 0.9375 (0.9375) loss_ce_0_unscaled: 0.5297 (0.5297) loss_bbox_0_unscaled: 0.0851 (0.0851) loss_giou_0_unscaled: 0.4414 (0.4414) cardinality_error_0_unscaled: 2.0625 (2.0625) loss_ce_1_unscaled: 0.4915 (0.4915) loss_bbox_1_unscaled: 0.0874 (0.0874) loss_giou_1_unscaled: 0.4397 (0.4397) cardinality_error_1_unscaled: 1.5625 (1.5625) loss_ce_2_unscaled: 0.4586 (0.4586) loss_bbox_2_unscaled: 0.0839 (0.0839) loss_giou_2_unscaled: 0.4609 (0.4609) cardinality_error_2_unscaled: 1.5000 (1.5000) loss_ce_3_unscaled: 0.4405 (0.4405) loss_bbox_3_unscaled: 0.0791 (0.0791) loss_giou_3_unscaled: 0.4604 (0.4604) cardinality_error_3_unscaled: 1.3750 (1.3750) loss_ce_4_unscaled: 0.4277 (0.4277) loss_bbox_4_unscaled: 0.0799 (0.0799) loss_giou_4_unscaled: 0.4551 (0.4551) cardinality_error_4_unscaled: 1.2500 (1.2500) time: 1.9223 data: 1.1780 max mem: 12069
Test: [3/4] eta: 0:00:00 class_error: 62.50 loss: 10.6393 (10.5973) loss_ce: 0.4048 (0.4056) loss_bbox: 0.3784 (0.3823) loss_giou: 0.7993 (0.8311) loss_ce_0: 0.5420 (0.5477) loss_bbox_0: 0.4307 (0.4672) loss_giou_0: 0.9119 (0.9482) loss_ce_1: 0.4918 (0.5003) loss_bbox_1: 0.4281 (0.4370) loss_giou_1: 0.9174 (0.9268) loss_ce_2: 0.4586 (0.4623) loss_bbox_2: 0.4204 (0.4421) loss_giou_2: 0.8711 (0.8986) loss_ce_3: 0.4405 (0.4416) loss_bbox_3: 0.3955 (0.4101) loss_giou_3: 0.8391 (0.8567) loss_ce_4: 0.4209 (0.4227) loss_bbox_4: 0.3873 (0.3902) loss_giou_4: 0.8128 (0.8268) loss_ce_unscaled: 0.4048 (0.4056) class_error_unscaled: 56.2500 (57.8125) loss_bbox_unscaled: 0.0757 (0.0765) loss_giou_unscaled: 0.3996 (0.4155) cardinality_error_unscaled: 1.0000 (0.9844) loss_ce_0_unscaled: 0.5420 (0.5477) loss_bbox_0_unscaled: 0.0861 (0.0934) loss_giou_0_unscaled: 0.4559 (0.4741) cardinality_error_0_unscaled: 1.8125 (1.9062) loss_ce_1_unscaled: 0.4918 (0.5003) loss_bbox_1_unscaled: 0.0856 (0.0874) loss_giou_1_unscaled: 0.4587 (0.4634) cardinality_error_1_unscaled: 1.0000 (1.1406) loss_ce_2_unscaled: 0.4586 (0.4623) loss_bbox_2_unscaled: 0.0841 (0.0884) loss_giou_2_unscaled: 0.4356 (0.4493) cardinality_error_2_unscaled: 1.0000 (1.1250) loss_ce_3_unscaled: 0.4405 (0.4416) loss_bbox_3_unscaled: 0.0791 (0.0820) loss_giou_3_unscaled: 0.4195 (0.4283) cardinality_error_3_unscaled: 1.0000 (1.0938) loss_ce_4_unscaled: 0.4209 (0.4227) loss_bbox_4_unscaled: 0.0775 (0.0780) loss_giou_4_unscaled: 0.4064 (0.4134) cardinality_error_4_unscaled: 1.0000 (1.0625) time: 0.9863 data: 0.3512 max mem: 12069
Test: Total time: 0:00:04 (1.0076 s / it)
Averaged stats: class_error: 62.50 loss: 10.6393 (10.5973) loss_ce: 0.4048 (0.4056) loss_bbox: 0.3784 (0.3823) loss_giou: 0.7993 (0.8311) loss_ce_0: 0.5420 (0.5477) loss_bbox_0: 0.4307 (0.4672) loss_giou_0: 0.9119 (0.9482) loss_ce_1: 0.4918 (0.5003) loss_bbox_1: 0.4281 (0.4370) loss_giou_1: 0.9174 (0.9268) loss_ce_2: 0.4586 (0.4623) loss_bbox_2: 0.4204 (0.4421) loss_giou_2: 0.8711 (0.8986) loss_ce_3: 0.4405 (0.4416) loss_bbox_3: 0.3955 (0.4101) loss_giou_3: 0.8391 (0.8567) loss_ce_4: 0.4209 (0.4227) loss_bbox_4: 0.3873 (0.3902) loss_giou_4: 0.8128 (0.8268) loss_ce_unscaled: 0.4048 (0.4056) class_error_unscaled: 56.2500 (57.8125) loss_bbox_unscaled: 0.0757 (0.0765) loss_giou_unscaled: 0.3996 (0.4155) cardinality_error_unscaled: 1.0000 (0.9844) loss_ce_0_unscaled: 0.5420 (0.5477) loss_bbox_0_unscaled: 0.0861 (0.0934) loss_giou_0_unscaled: 0.4559 (0.4741) cardinality_error_0_unscaled: 1.8125 (1.9062) loss_ce_1_unscaled: 0.4918 (0.5003) loss_bbox_1_unscaled: 0.0856 (0.0874) loss_giou_1_unscaled: 0.4587 (0.4634) cardinality_error_1_unscaled: 1.0000 (1.1406) loss_ce_2_unscaled: 0.4586 (0.4623) loss_bbox_2_unscaled: 0.0841 (0.0884) loss_giou_2_unscaled: 0.4356 (0.4493) cardinality_error_2_unscaled: 1.0000 (1.1250) loss_ce_3_unscaled: 0.4405 (0.4416) loss_bbox_3_unscaled: 0.0791 (0.0820) loss_giou_3_unscaled: 0.4195 (0.4283) cardinality_error_3_unscaled: 1.0000 (1.0938) loss_ce_4_unscaled: 0.4209 (0.4227) loss_bbox_4_unscaled: 0.0775 (0.0780) loss_giou_4_unscaled: 0.4064 (0.4134) cardinality_error_4_unscaled: 1.0000 (1.0625)
Accumulating evaluation results...
DONE (t=0.07s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.003
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.002
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.001
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.006
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.169
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.150
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.119
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.196
Epoch: [10] [ 0/16] eta: 0:00:40 lr: 0.000010 class_error: 53.33 loss: 10.5505 (10.5505) loss_ce: 0.3940 (0.3940) loss_bbox: 0.4009 (0.4009) loss_giou: 0.9103 (0.9103) loss_ce_0: 0.5423 (0.5423) loss_bbox_0: 0.4968 (0.4968) loss_giou_0: 0.9487 (0.9487) loss_ce_1: 0.4836 (0.4836) loss_bbox_1: 0.4220 (0.4220) loss_giou_1: 0.9095 (0.9095) loss_ce_2: 0.4463 (0.4463) loss_bbox_2: 0.3687 (0.3687) loss_giou_2: 0.8454 (0.8454) loss_ce_3: 0.4257 (0.4257) loss_bbox_3: 0.4027 (0.4027) loss_giou_3: 0.8541 (0.8541) loss_ce_4: 0.4074 (0.4074) loss_bbox_4: 0.4157 (0.4157) loss_giou_4: 0.8764 (0.8764) loss_ce_unscaled: 0.3940 (0.3940) class_error_unscaled: 53.3333 (53.3333) loss_bbox_unscaled: 0.0802 (0.0802) loss_giou_unscaled: 0.4551 (0.4551) cardinality_error_unscaled: 0.9375 (0.9375) loss_ce_0_unscaled: 0.5423 (0.5423) loss_bbox_0_unscaled: 0.0994 (0.0994) loss_giou_0_unscaled: 0.4744 (0.4744) cardinality_error_0_unscaled: 1.7500 (1.7500) loss_ce_1_unscaled: 0.4836 (0.4836) loss_bbox_1_unscaled: 0.0844 (0.0844) loss_giou_1_unscaled: 0.4548 (0.4548) cardinality_error_1_unscaled: 1.0000 (1.0000) loss_ce_2_unscaled: 0.4463 (0.4463) loss_bbox_2_unscaled: 0.0737 (0.0737) loss_giou_2_unscaled: 0.4227 (0.4227) cardinality_error_2_unscaled: 0.9375 (0.9375) loss_ce_3_unscaled: 0.4257 (0.4257) loss_bbox_3_unscaled: 0.0805 (0.0805) loss_giou_3_unscaled: 0.4270 (0.4270) cardinality_error_3_unscaled: 0.9375 (0.9375) loss_ce_4_unscaled: 0.4074 (0.4074) loss_bbox_4_unscaled: 0.0831 (0.0831) loss_giou_4_unscaled: 0.4382 (0.4382) cardinality_error_4_unscaled: 0.9375 (0.9375) time: 2.5465 data: 1.2708 max mem: 12069
Epoch: [10] [10/16] eta: 0:00:08 lr: 0.000010 class_error: 80.00 loss: 11.8000 (11.8109) loss_ce: 0.3940 (0.3896) loss_bbox: 0.4812 (0.4934) loss_giou: 0.9542 (0.9619) loss_ce_0: 0.5378 (0.5378) loss_bbox_0: 0.5569 (0.5804) loss_giou_0: 1.0821 (1.0709) loss_ce_1: 0.4895 (0.4893) loss_bbox_1: 0.5402 (0.5368) loss_giou_1: 0.9988 (1.0086) loss_ce_2: 0.4463 (0.4456) loss_bbox_2: 0.4944 (0.5213) loss_giou_2: 0.9922 (0.9842) loss_ce_3: 0.4245 (0.4219) loss_bbox_3: 0.5072 (0.5153) loss_giou_3: 0.9876 (0.9853) loss_ce_4: 0.4057 (0.4034) loss_bbox_4: 0.4888 (0.4994) loss_giou_4: 0.9803 (0.9658) loss_ce_unscaled: 0.3940 (0.3896) class_error_unscaled: 53.3333 (55.7522) loss_bbox_unscaled: 0.0962 (0.0987) loss_giou_unscaled: 0.4771 (0.4809) cardinality_error_unscaled: 0.9375 (0.9375) loss_ce_0_unscaled: 0.5378 (0.5378) loss_bbox_0_unscaled: 0.1114 (0.1161) loss_giou_0_unscaled: 0.5411 (0.5354) cardinality_error_0_unscaled: 1.8125 (1.9489) loss_ce_1_unscaled: 0.4895 (0.4893) loss_bbox_1_unscaled: 0.1080 (0.1074) loss_giou_1_unscaled: 0.4994 (0.5043) cardinality_error_1_unscaled: 1.0625 (1.1534) loss_ce_2_unscaled: 0.4463 (0.4456) loss_bbox_2_unscaled: 0.0989 (0.1043) loss_giou_2_unscaled: 0.4961 (0.4921) cardinality_error_2_unscaled: 0.9375 (0.9943) loss_ce_3_unscaled: 0.4245 (0.4219) loss_bbox_3_unscaled: 0.1014 (0.1031) loss_giou_3_unscaled: 0.4938 (0.4926) cardinality_error_3_unscaled: 0.9375 (0.9489) loss_ce_4_unscaled: 0.4057 (0.4034) loss_bbox_4_unscaled: 0.0978 (0.0999) loss_giou_4_unscaled: 0.4901 (0.4829) cardinality_error_4_unscaled: 0.9375 (0.9375) time: 1.3736 data: 0.1816 max mem: 12069
Epoch: [10] [15/16] eta: 0:00:01 lr: 0.000010 class_error: 68.75 loss: 12.0092 (11.9304) loss_ce: 0.3940 (0.3916) loss_bbox: 0.5301 (0.5030) loss_giou: 0.9638 (0.9702) loss_ce_0: 0.5378 (0.5386) loss_bbox_0: 0.5649 (0.5893) loss_giou_0: 1.0519 (1.0722) loss_ce_1: 0.4907 (0.4907) loss_bbox_1: 0.5569 (0.5502) loss_giou_1: 1.0142 (1.0227) loss_ce_2: 0.4481 (0.4480) loss_bbox_2: 0.5076 (0.5278) loss_giou_2: 1.0006 (1.0010) loss_ce_3: 0.4245 (0.4246) loss_bbox_3: 0.5191 (0.5172) loss_giou_3: 0.9876 (0.9957) loss_ce_4: 0.4074 (0.4064) loss_bbox_4: 0.5298 (0.5067) loss_giou_4: 0.9803 (0.9744) loss_ce_unscaled: 0.3940 (0.3916) class_error_unscaled: 53.3333 (57.1838) loss_bbox_unscaled: 0.1060 (0.1006) loss_giou_unscaled: 0.4819 (0.4851) cardinality_error_unscaled: 0.9375 (0.9492) loss_ce_0_unscaled: 0.5378 (0.5386) loss_bbox_0_unscaled: 0.1130 (0.1179) loss_giou_0_unscaled: 0.5259 (0.5361) cardinality_error_0_unscaled: 1.6875 (1.8047) loss_ce_1_unscaled: 0.4907 (0.4907) loss_bbox_1_unscaled: 0.1114 (0.1100) loss_giou_1_unscaled: 0.5071 (0.5114) cardinality_error_1_unscaled: 1.0000 (1.0859) loss_ce_2_unscaled: 0.4481 (0.4480) loss_bbox_2_unscaled: 0.1015 (0.1056) loss_giou_2_unscaled: 0.5003 (0.5005) cardinality_error_2_unscaled: 0.9375 (0.9766) loss_ce_3_unscaled: 0.4245 (0.4246) loss_bbox_3_unscaled: 0.1038 (0.1034) loss_giou_3_unscaled: 0.4938 (0.4979) cardinality_error_3_unscaled: 0.9375 (0.9492) loss_ce_4_unscaled: 0.4074 (0.4064) loss_bbox_4_unscaled: 0.1060 (0.1013) loss_giou_4_unscaled: 0.4901 (0.4872) cardinality_error_4_unscaled: 0.9375 (0.9492) time: 1.3199 data: 0.1433 max mem: 12069
Epoch: [10] Total time: 0:00:21 (1.3242 s / it)
Averaged stats: lr: 0.000010 class_error: 68.75 loss: 12.0092 (11.9304) loss_ce: 0.3940 (0.3916) loss_bbox: 0.5301 (0.5030) loss_giou: 0.9638 (0.9702) loss_ce_0: 0.5378 (0.5386) loss_bbox_0: 0.5649 (0.5893) loss_giou_0: 1.0519 (1.0722) loss_ce_1: 0.4907 (0.4907) loss_bbox_1: 0.5569 (0.5502) loss_giou_1: 1.0142 (1.0227) loss_ce_2: 0.4481 (0.4480) loss_bbox_2: 0.5076 (0.5278) loss_giou_2: 1.0006 (1.0010) loss_ce_3: 0.4245 (0.4246) loss_bbox_3: 0.5191 (0.5172) loss_giou_3: 0.9876 (0.9957) loss_ce_4: 0.4074 (0.4064) loss_bbox_4: 0.5298 (0.5067) loss_giou_4: 0.9803 (0.9744) loss_ce_unscaled: 0.3940 (0.3916) class_error_unscaled: 53.3333 (57.1838) loss_bbox_unscaled: 0.1060 (0.1006) loss_giou_unscaled: 0.4819 (0.4851) cardinality_error_unscaled: 0.9375 (0.9492) loss_ce_0_unscaled: 0.5378 (0.5386) loss_bbox_0_unscaled: 0.1130 (0.1179) loss_giou_0_unscaled: 0.5259 (0.5361) cardinality_error_0_unscaled: 1.6875 (1.8047) loss_ce_1_unscaled: 0.4907 (0.4907) loss_bbox_1_unscaled: 0.1114 (0.1100) loss_giou_1_unscaled: 0.5071 (0.5114) cardinality_error_1_unscaled: 1.0000 (1.0859) loss_ce_2_unscaled: 0.4481 (0.4480) loss_bbox_2_unscaled: 0.1015 (0.1056) loss_giou_2_unscaled: 0.5003 (0.5005) cardinality_error_2_unscaled: 0.9375 (0.9766) loss_ce_3_unscaled: 0.4245 (0.4246) loss_bbox_3_unscaled: 0.1038 (0.1034) loss_giou_3_unscaled: 0.4938 (0.4979) cardinality_error_3_unscaled: 0.9375 (0.9492) loss_ce_4_unscaled: 0.4074 (0.4064) loss_bbox_4_unscaled: 0.1060 (0.1013) loss_giou_4_unscaled: 0.4901 (0.4872) cardinality_error_4_unscaled: 0.9375 (0.9492)
Test: [0/4] eta: 0:00:08 class_error: 50.00 loss: 10.3573 (10.3573) loss_ce: 0.4093 (0.4093) loss_bbox: 0.3825 (0.3825) loss_giou: 0.8646 (0.8646) loss_ce_0: 0.5048 (0.5048) loss_bbox_0: 0.4413 (0.4413) loss_giou_0: 0.8879 (0.8879) loss_ce_1: 0.4737 (0.4737) loss_bbox_1: 0.4200 (0.4200) loss_giou_1: 0.8651 (0.8651) loss_ce_2: 0.4445 (0.4445) loss_bbox_2: 0.4195 (0.4195) loss_giou_2: 0.9045 (0.9045) loss_ce_3: 0.4301 (0.4301) loss_bbox_3: 0.3871 (0.3871) loss_giou_3: 0.8537 (0.8537) loss_ce_4: 0.4191 (0.4191) loss_bbox_4: 0.3851 (0.3851) loss_giou_4: 0.8645 (0.8645) loss_ce_unscaled: 0.4093 (0.4093) class_error_unscaled: 50.0000 (50.0000) loss_bbox_unscaled: 0.0765 (0.0765) loss_giou_unscaled: 0.4323 (0.4323) cardinality_error_unscaled: 0.9375 (0.9375) loss_ce_0_unscaled: 0.5048 (0.5048) loss_bbox_0_unscaled: 0.0883 (0.0883) loss_giou_0_unscaled: 0.4440 (0.4440) cardinality_error_0_unscaled: 1.6875 (1.6875) loss_ce_1_unscaled: 0.4737 (0.4737) loss_bbox_1_unscaled: 0.0840 (0.0840) loss_giou_1_unscaled: 0.4326 (0.4326) cardinality_error_1_unscaled: 1.6250 (1.6250) loss_ce_2_unscaled: 0.4445 (0.4445) loss_bbox_2_unscaled: 0.0839 (0.0839) loss_giou_2_unscaled: 0.4522 (0.4522) cardinality_error_2_unscaled: 1.4375 (1.4375) loss_ce_3_unscaled: 0.4301 (0.4301) loss_bbox_3_unscaled: 0.0774 (0.0774) loss_giou_3_unscaled: 0.4269 (0.4269) cardinality_error_3_unscaled: 1.2500 (1.2500) loss_ce_4_unscaled: 0.4191 (0.4191) loss_bbox_4_unscaled: 0.0770 (0.0770) loss_giou_4_unscaled: 0.4323 (0.4323) cardinality_error_4_unscaled: 1.0625 (1.0625) time: 2.2018 data: 1.1775 max mem: 12069
Test: [3/4] eta: 0:00:01 class_error: 62.50 loss: 10.3573 (10.3650) loss_ce: 0.4074 (0.4052) loss_bbox: 0.3825 (0.4013) loss_giou: 0.8038 (0.8109) loss_ce_0: 0.5156 (0.5204) loss_bbox_0: 0.4413 (0.4492) loss_giou_0: 0.9234 (0.9218) loss_ce_1: 0.4737 (0.4809) loss_bbox_1: 0.4446 (0.4457) loss_giou_1: 0.8961 (0.8957) loss_ce_2: 0.4440 (0.4467) loss_bbox_2: 0.4200 (0.4333) loss_giou_2: 0.8522 (0.8735) loss_ce_3: 0.4301 (0.4313) loss_bbox_3: 0.3871 (0.4114) loss_giou_3: 0.8052 (0.8042) loss_ce_4: 0.4164 (0.4154) loss_bbox_4: 0.3851 (0.4031) loss_giou_4: 0.8114 (0.8151) loss_ce_unscaled: 0.4074 (0.4052) class_error_unscaled: 62.5000 (59.3750) loss_bbox_unscaled: 0.0765 (0.0803) loss_giou_unscaled: 0.4019 (0.4054) cardinality_error_unscaled: 1.0000 (0.9844) loss_ce_0_unscaled: 0.5156 (0.5204) loss_bbox_0_unscaled: 0.0883 (0.0898) loss_giou_0_unscaled: 0.4617 (0.4609) cardinality_error_0_unscaled: 1.6250 (1.5000) loss_ce_1_unscaled: 0.4737 (0.4809) loss_bbox_1_unscaled: 0.0889 (0.0891) loss_giou_1_unscaled: 0.4480 (0.4478) cardinality_error_1_unscaled: 1.0000 (1.1719) loss_ce_2_unscaled: 0.4440 (0.4467) loss_bbox_2_unscaled: 0.0840 (0.0867) loss_giou_2_unscaled: 0.4261 (0.4367) cardinality_error_2_unscaled: 1.0000 (1.1094) loss_ce_3_unscaled: 0.4301 (0.4313) loss_bbox_3_unscaled: 0.0774 (0.0823) loss_giou_3_unscaled: 0.4026 (0.4021) cardinality_error_3_unscaled: 1.0000 (1.0625) loss_ce_4_unscaled: 0.4164 (0.4154) loss_bbox_4_unscaled: 0.0770 (0.0806) loss_giou_4_unscaled: 0.4057 (0.4076) cardinality_error_4_unscaled: 1.0000 (1.0156) time: 1.0588 data: 0.3584 max mem: 12069
Test: Total time: 0:00:04 (1.0748 s / it)
Averaged stats: class_error: 62.50 loss: 10.3573 (10.3650) loss_ce: 0.4074 (0.4052) loss_bbox: 0.3825 (0.4013) loss_giou: 0.8038 (0.8109) loss_ce_0: 0.5156 (0.5204) loss_bbox_0: 0.4413 (0.4492) loss_giou_0: 0.9234 (0.9218) loss_ce_1: 0.4737 (0.4809) loss_bbox_1: 0.4446 (0.4457) loss_giou_1: 0.8961 (0.8957) loss_ce_2: 0.4440 (0.4467) loss_bbox_2: 0.4200 (0.4333) loss_giou_2: 0.8522 (0.8735) loss_ce_3: 0.4301 (0.4313) loss_bbox_3: 0.3871 (0.4114) loss_giou_3: 0.8052 (0.8042) loss_ce_4: 0.4164 (0.4154) loss_bbox_4: 0.3851 (0.4031) loss_giou_4: 0.8114 (0.8151) loss_ce_unscaled: 0.4074 (0.4052) class_error_unscaled: 62.5000 (59.3750) loss_bbox_unscaled: 0.0765 (0.0803) loss_giou_unscaled: 0.4019 (0.4054) cardinality_error_unscaled: 1.0000 (0.9844) loss_ce_0_unscaled: 0.5156 (0.5204) loss_bbox_0_unscaled: 0.0883 (0.0898) loss_giou_0_unscaled: 0.4617 (0.4609) cardinality_error_0_unscaled: 1.6250 (1.5000) loss_ce_1_unscaled: 0.4737 (0.4809) loss_bbox_1_unscaled: 0.0889 (0.0891) loss_giou_1_unscaled: 0.4480 (0.4478) cardinality_error_1_unscaled: 1.0000 (1.1719) loss_ce_2_unscaled: 0.4440 (0.4467) loss_bbox_2_unscaled: 0.0840 (0.0867) loss_giou_2_unscaled: 0.4261 (0.4367) cardinality_error_2_unscaled: 1.0000 (1.1094) loss_ce_3_unscaled: 0.4301 (0.4313) loss_bbox_3_unscaled: 0.0774 (0.0823) loss_giou_3_unscaled: 0.4026 (0.4021) cardinality_error_3_unscaled: 1.0000 (1.0625) loss_ce_4_unscaled: 0.4164 (0.4154) loss_bbox_4_unscaled: 0.0770 (0.0806) loss_giou_4_unscaled: 0.4057 (0.4076) cardinality_error_4_unscaled: 1.0000 (1.0156)
Accumulating evaluation results...
DONE (t=0.06s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.003
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.005
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.001
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.005
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.006
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.174
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.200
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.119
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.193
Epoch: [11] [ 0/16] eta: 0:00:40 lr: 0.000010 class_error: 64.29 loss: 10.9203 (10.9203) loss_ce: 0.3787 (0.3787) loss_bbox: 0.4637 (0.4637) loss_giou: 0.8991 (0.8991) loss_ce_0: 0.5135 (0.5135) loss_bbox_0: 0.4990 (0.4990) loss_giou_0: 0.9985 (0.9985) loss_ce_1: 0.4507 (0.4507) loss_bbox_1: 0.4798 (0.4798) loss_giou_1: 0.9688 (0.9688) loss_ce_2: 0.4260 (0.4260) loss_bbox_2: 0.4210 (0.4210) loss_giou_2: 0.9348 (0.9348) loss_ce_3: 0.4055 (0.4055) loss_bbox_3: 0.4452 (0.4452) loss_giou_3: 0.8864 (0.8864) loss_ce_4: 0.3927 (0.3927) loss_bbox_4: 0.4673 (0.4673) loss_giou_4: 0.8895 (0.8895) loss_ce_unscaled: 0.3787 (0.3787) class_error_unscaled: 64.2857 (64.2857) loss_bbox_unscaled: 0.0927 (0.0927) loss_giou_unscaled: 0.4495 (0.4495) cardinality_error_unscaled: 0.8750 (0.8750) loss_ce_0_unscaled: 0.5135 (0.5135) loss_bbox_0_unscaled: 0.0998 (0.0998) loss_giou_0_unscaled: 0.4992 (0.4992) cardinality_error_0_unscaled: 1.9375 (1.9375) loss_ce_1_unscaled: 0.4507 (0.4507) loss_bbox_1_unscaled: 0.0960 (0.0960) loss_giou_1_unscaled: 0.4844 (0.4844) cardinality_error_1_unscaled: 1.0625 (1.0625) loss_ce_2_unscaled: 0.4260 (0.4260) loss_bbox_2_unscaled: 0.0842 (0.0842) loss_giou_2_unscaled: 0.4674 (0.4674) cardinality_error_2_unscaled: 1.0000 (1.0000) loss_ce_3_unscaled: 0.4055 (0.4055) loss_bbox_3_unscaled: 0.0890 (0.0890) loss_giou_3_unscaled: 0.4432 (0.4432) cardinality_error_3_unscaled: 0.8125 (0.8125) loss_ce_4_unscaled: 0.3927 (0.3927) loss_bbox_4_unscaled: 0.0935 (0.0935) loss_giou_4_unscaled: 0.4447 (0.4447) cardinality_error_4_unscaled: 0.8125 (0.8125) time: 2.5165 data: 1.2465 max mem: 12069
Epoch: [11] [10/16] eta: 0:00:07 lr: 0.000010 class_error: 61.54 loss: 11.7163 (11.6680) loss_ce: 0.3999 (0.3936) loss_bbox: 0.4761 (0.5108) loss_giou: 0.9749 (0.9533) loss_ce_0: 0.5135 (0.5147) loss_bbox_0: 0.5070 (0.5454) loss_giou_0: 1.0353 (1.0422) loss_ce_1: 0.4717 (0.4735) loss_bbox_1: 0.4950 (0.5417) loss_giou_1: 0.9933 (0.9888) loss_ce_2: 0.4402 (0.4383) loss_bbox_2: 0.5156 (0.5270) loss_giou_2: 0.9585 (0.9703) loss_ce_3: 0.4166 (0.4169) loss_bbox_3: 0.4962 (0.5159) loss_giou_3: 0.9775 (0.9589) loss_ce_4: 0.4059 (0.4034) loss_bbox_4: 0.4778 (0.5127) loss_giou_4: 0.9765 (0.9605) loss_ce_unscaled: 0.3999 (0.3936) class_error_unscaled: 62.5000 (62.9537) loss_bbox_unscaled: 0.0952 (0.1022) loss_giou_unscaled: 0.4875 (0.4767) cardinality_error_unscaled: 1.0000 (0.9659) loss_ce_0_unscaled: 0.5135 (0.5147) loss_bbox_0_unscaled: 0.1014 (0.1091) loss_giou_0_unscaled: 0.5176 (0.5211) cardinality_error_0_unscaled: 1.1250 (1.3466) loss_ce_1_unscaled: 0.4717 (0.4735) loss_bbox_1_unscaled: 0.0990 (0.1083) loss_giou_1_unscaled: 0.4966 (0.4944) cardinality_error_1_unscaled: 1.0000 (0.9716) loss_ce_2_unscaled: 0.4402 (0.4383) loss_bbox_2_unscaled: 0.1031 (0.1054) loss_giou_2_unscaled: 0.4792 (0.4851) cardinality_error_2_unscaled: 1.0000 (0.9659) loss_ce_3_unscaled: 0.4166 (0.4169) loss_bbox_3_unscaled: 0.0992 (0.1032) loss_giou_3_unscaled: 0.4888 (0.4795) cardinality_error_3_unscaled: 1.0000 (0.9602) loss_ce_4_unscaled: 0.4059 (0.4034) loss_bbox_4_unscaled: 0.0956 (0.1025) loss_giou_4_unscaled: 0.4883 (0.4803) cardinality_error_4_unscaled: 1.0000 (0.9602) time: 1.3300 data: 0.1674 max mem: 12069
Epoch: [11] [15/16] eta: 0:00:01 lr: 0.000010 class_error: 60.00 loss: 11.7163 (11.5967) loss_ce: 0.3947 (0.3923) loss_bbox: 0.4761 (0.5034) loss_giou: 0.9749 (0.9550) loss_ce_0: 0.5130 (0.5079) loss_bbox_0: 0.5070 (0.5393) loss_giou_0: 1.0353 (1.0545) loss_ce_1: 0.4656 (0.4673) loss_bbox_1: 0.4950 (0.5282) loss_giou_1: 0.9920 (0.9873) loss_ce_2: 0.4295 (0.4343) loss_bbox_2: 0.5156 (0.5136) loss_giou_2: 0.9585 (0.9671) loss_ce_3: 0.4142 (0.4136) loss_bbox_3: 0.4962 (0.5112) loss_giou_3: 0.9775 (0.9591) loss_ce_4: 0.4017 (0.4013) loss_bbox_4: 0.4778 (0.5047) loss_giou_4: 0.9562 (0.9567) loss_ce_unscaled: 0.3947 (0.3923) class_error_unscaled: 60.0000 (59.5567) loss_bbox_unscaled: 0.0952 (0.1007) loss_giou_unscaled: 0.4875 (0.4775) cardinality_error_unscaled: 1.0000 (0.9648) loss_ce_0_unscaled: 0.5130 (0.5079) loss_bbox_0_unscaled: 0.1014 (0.1079) loss_giou_0_unscaled: 0.5176 (0.5272) cardinality_error_0_unscaled: 1.1875 (1.3164) loss_ce_1_unscaled: 0.4656 (0.4673) loss_bbox_1_unscaled: 0.0990 (0.1056) loss_giou_1_unscaled: 0.4960 (0.4936) cardinality_error_1_unscaled: 0.9375 (0.9648) loss_ce_2_unscaled: 0.4295 (0.4343) loss_bbox_2_unscaled: 0.1031 (0.1027) loss_giou_2_unscaled: 0.4792 (0.4835) cardinality_error_2_unscaled: 1.0000 (0.9609) loss_ce_3_unscaled: 0.4142 (0.4136) loss_bbox_3_unscaled: 0.0992 (0.1022) loss_giou_3_unscaled: 0.4888 (0.4796) cardinality_error_3_unscaled: 1.0000 (0.9609) loss_ce_4_unscaled: 0.4017 (0.4013) loss_bbox_4_unscaled: 0.0956 (0.1009) loss_giou_4_unscaled: 0.4781 (0.4783) cardinality_error_4_unscaled: 1.0000 (0.9570) time: 1.2883 data: 0.1339 max mem: 12069
Epoch: [11] Total time: 0:00:20 (1.2925 s / it)
Averaged stats: lr: 0.000010 class_error: 60.00 loss: 11.7163 (11.5967) loss_ce: 0.3947 (0.3923) loss_bbox: 0.4761 (0.5034) loss_giou: 0.9749 (0.9550) loss_ce_0: 0.5130 (0.5079) loss_bbox_0: 0.5070 (0.5393) loss_giou_0: 1.0353 (1.0545) loss_ce_1: 0.4656 (0.4673) loss_bbox_1: 0.4950 (0.5282) loss_giou_1: 0.9920 (0.9873) loss_ce_2: 0.4295 (0.4343) loss_bbox_2: 0.5156 (0.5136) loss_giou_2: 0.9585 (0.9671) loss_ce_3: 0.4142 (0.4136) loss_bbox_3: 0.4962 (0.5112) loss_giou_3: 0.9775 (0.9591) loss_ce_4: 0.4017 (0.4013) loss_bbox_4: 0.4778 (0.5047) loss_giou_4: 0.9562 (0.9567) loss_ce_unscaled: 0.3947 (0.3923) class_error_unscaled: 60.0000 (59.5567) loss_bbox_unscaled: 0.0952 (0.1007) loss_giou_unscaled: 0.4875 (0.4775) cardinality_error_unscaled: 1.0000 (0.9648) loss_ce_0_unscaled: 0.5130 (0.5079) loss_bbox_0_unscaled: 0.1014 (0.1079) loss_giou_0_unscaled: 0.5176 (0.5272) cardinality_error_0_unscaled: 1.1875 (1.3164) loss_ce_1_unscaled: 0.4656 (0.4673) loss_bbox_1_unscaled: 0.0990 (0.1056) loss_giou_1_unscaled: 0.4960 (0.4936) cardinality_error_1_unscaled: 0.9375 (0.9648) loss_ce_2_unscaled: 0.4295 (0.4343) loss_bbox_2_unscaled: 0.1031 (0.1027) loss_giou_2_unscaled: 0.4792 (0.4835) cardinality_error_2_unscaled: 1.0000 (0.9609) loss_ce_3_unscaled: 0.4142 (0.4136) loss_bbox_3_unscaled: 0.0992 (0.1022) loss_giou_3_unscaled: 0.4888 (0.4796) cardinality_error_3_unscaled: 1.0000 (0.9609) loss_ce_4_unscaled: 0.4017 (0.4013) loss_bbox_4_unscaled: 0.0956 (0.1009) loss_giou_4_unscaled: 0.4781 (0.4783) cardinality_error_4_unscaled: 1.0000 (0.9570)
Test: [0/4] eta: 0:00:07 class_error: 68.75 loss: 10.3301 (10.3301) loss_ce: 0.4018 (0.4018) loss_bbox: 0.3804 (0.3804) loss_giou: 0.8726 (0.8726) loss_ce_0: 0.4782 (0.4782) loss_bbox_0: 0.4040 (0.4040) loss_giou_0: 0.9065 (0.9065) loss_ce_1: 0.4502 (0.4502) loss_bbox_1: 0.4241 (0.4241) loss_giou_1: 0.9022 (0.9022) loss_ce_2: 0.4291 (0.4291) loss_bbox_2: 0.4170 (0.4170) loss_giou_2: 0.9133 (0.9133) loss_ce_3: 0.4167 (0.4167) loss_bbox_3: 0.3871 (0.3871) loss_giou_3: 0.8848 (0.8848) loss_ce_4: 0.4124 (0.4124) loss_bbox_4: 0.3830 (0.3830) loss_giou_4: 0.8667 (0.8667) loss_ce_unscaled: 0.4018 (0.4018) class_error_unscaled: 68.7500 (68.7500) loss_bbox_unscaled: 0.0761 (0.0761) loss_giou_unscaled: 0.4363 (0.4363) cardinality_error_unscaled: 0.9375 (0.9375) loss_ce_0_unscaled: 0.4782 (0.4782) loss_bbox_0_unscaled: 0.0808 (0.0808) loss_giou_0_unscaled: 0.4533 (0.4533) cardinality_error_0_unscaled: 1.6250 (1.6250) loss_ce_1_unscaled: 0.4502 (0.4502) loss_bbox_1_unscaled: 0.0848 (0.0848) loss_giou_1_unscaled: 0.4511 (0.4511) cardinality_error_1_unscaled: 1.5000 (1.5000) loss_ce_2_unscaled: 0.4291 (0.4291) loss_bbox_2_unscaled: 0.0834 (0.0834) loss_giou_2_unscaled: 0.4566 (0.4566) cardinality_error_2_unscaled: 1.1875 (1.1875) loss_ce_3_unscaled: 0.4167 (0.4167) loss_bbox_3_unscaled: 0.0774 (0.0774) loss_giou_3_unscaled: 0.4424 (0.4424) cardinality_error_3_unscaled: 1.0000 (1.0000) loss_ce_4_unscaled: 0.4124 (0.4124) loss_bbox_4_unscaled: 0.0766 (0.0766) loss_giou_4_unscaled: 0.4333 (0.4333) cardinality_error_4_unscaled: 0.9375 (0.9375) time: 1.8957 data: 1.1265 max mem: 12069
Test: [3/4] eta: 0:00:00 class_error: 62.50 loss: 10.0862 (10.1132) loss_ce: 0.4013 (0.3999) loss_bbox: 0.3804 (0.3798) loss_giou: 0.7685 (0.7986) loss_ce_0: 0.4943 (0.4948) loss_bbox_0: 0.4218 (0.4468) loss_giou_0: 0.9065 (0.9058) loss_ce_1: 0.4567 (0.4609) loss_bbox_1: 0.4388 (0.4443) loss_giou_1: 0.8833 (0.8730) loss_ce_2: 0.4291 (0.4322) loss_bbox_2: 0.4000 (0.4154) loss_giou_2: 0.8337 (0.8559) loss_ce_3: 0.4167 (0.4163) loss_bbox_3: 0.3758 (0.3896) loss_giou_3: 0.8017 (0.8082) loss_ce_4: 0.4107 (0.4078) loss_bbox_4: 0.3830 (0.3872) loss_giou_4: 0.7834 (0.7968) loss_ce_unscaled: 0.4013 (0.3999) class_error_unscaled: 62.5000 (62.5000) loss_bbox_unscaled: 0.0761 (0.0760) loss_giou_unscaled: 0.3843 (0.3993) cardinality_error_unscaled: 1.0000 (0.9844) loss_ce_0_unscaled: 0.4943 (0.4948) loss_bbox_0_unscaled: 0.0844 (0.0894) loss_giou_0_unscaled: 0.4533 (0.4529) cardinality_error_0_unscaled: 1.3125 (1.3438) loss_ce_1_unscaled: 0.4567 (0.4609) loss_bbox_1_unscaled: 0.0878 (0.0889) loss_giou_1_unscaled: 0.4416 (0.4365) cardinality_error_1_unscaled: 1.0000 (1.1250) loss_ce_2_unscaled: 0.4291 (0.4322) loss_bbox_2_unscaled: 0.0800 (0.0831) loss_giou_2_unscaled: 0.4168 (0.4279) cardinality_error_2_unscaled: 1.0000 (1.0469) loss_ce_3_unscaled: 0.4167 (0.4163) loss_bbox_3_unscaled: 0.0752 (0.0779) loss_giou_3_unscaled: 0.4009 (0.4041) cardinality_error_3_unscaled: 1.0000 (1.0000) loss_ce_4_unscaled: 0.4107 (0.4078) loss_bbox_4_unscaled: 0.0766 (0.0774) loss_giou_4_unscaled: 0.3917 (0.3984) cardinality_error_4_unscaled: 1.0000 (0.9844) time: 0.9915 data: 0.3429 max mem: 12069
Test: Total time: 0:00:04 (1.0087 s / it)
Averaged stats: class_error: 62.50 loss: 10.0862 (10.1132) loss_ce: 0.4013 (0.3999) loss_bbox: 0.3804 (0.3798) loss_giou: 0.7685 (0.7986) loss_ce_0: 0.4943 (0.4948) loss_bbox_0: 0.4218 (0.4468) loss_giou_0: 0.9065 (0.9058) loss_ce_1: 0.4567 (0.4609) loss_bbox_1: 0.4388 (0.4443) loss_giou_1: 0.8833 (0.8730) loss_ce_2: 0.4291 (0.4322) loss_bbox_2: 0.4000 (0.4154) loss_giou_2: 0.8337 (0.8559) loss_ce_3: 0.4167 (0.4163) loss_bbox_3: 0.3758 (0.3896) loss_giou_3: 0.8017 (0.8082) loss_ce_4: 0.4107 (0.4078) loss_bbox_4: 0.3830 (0.3872) loss_giou_4: 0.7834 (0.7968) loss_ce_unscaled: 0.4013 (0.3999) class_error_unscaled: 62.5000 (62.5000) loss_bbox_unscaled: 0.0761 (0.0760) loss_giou_unscaled: 0.3843 (0.3993) cardinality_error_unscaled: 1.0000 (0.9844) loss_ce_0_unscaled: 0.4943 (0.4948) loss_bbox_0_unscaled: 0.0844 (0.0894) loss_giou_0_unscaled: 0.4533 (0.4529) cardinality_error_0_unscaled: 1.3125 (1.3438) loss_ce_1_unscaled: 0.4567 (0.4609) loss_bbox_1_unscaled: 0.0878 (0.0889) loss_giou_1_unscaled: 0.4416 (0.4365) cardinality_error_1_unscaled: 1.0000 (1.1250) loss_ce_2_unscaled: 0.4291 (0.4322) loss_bbox_2_unscaled: 0.0800 (0.0831) loss_giou_2_unscaled: 0.4168 (0.4279) cardinality_error_2_unscaled: 1.0000 (1.0469) loss_ce_3_unscaled: 0.4167 (0.4163) loss_bbox_3_unscaled: 0.0752 (0.0779) loss_giou_3_unscaled: 0.4009 (0.4041) cardinality_error_3_unscaled: 1.0000 (1.0000) loss_ce_4_unscaled: 0.4107 (0.4078) loss_bbox_4_unscaled: 0.0766 (0.0774) loss_giou_4_unscaled: 0.3917 (0.3984) cardinality_error_4_unscaled: 1.0000 (0.9844)
Accumulating evaluation results...
DONE (t=0.07s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.003
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.006
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.001
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.005
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.012
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.162
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.138
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.156
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.169
Epoch: [12] [ 0/16] eta: 0:00:41 lr: 0.000010 class_error: 60.00 loss: 12.4626 (12.4626) loss_ce: 0.3764 (0.3764) loss_bbox: 0.5578 (0.5578) loss_giou: 1.0134 (1.0134) loss_ce_0: 0.5062 (0.5062) loss_bbox_0: 0.6452 (0.6452) loss_giou_0: 1.1606 (1.1606) loss_ce_1: 0.4693 (0.4693) loss_bbox_1: 0.5903 (0.5903) loss_giou_1: 1.0749 (1.0749) loss_ce_2: 0.4299 (0.4299) loss_bbox_2: 0.5740 (0.5740) loss_giou_2: 1.0789 (1.0789) loss_ce_3: 0.4005 (0.4005) loss_bbox_3: 0.5903 (0.5903) loss_giou_3: 1.0471 (1.0471) loss_ce_4: 0.3898 (0.3898) loss_bbox_4: 0.5538 (0.5538) loss_giou_4: 1.0043 (1.0043) loss_ce_unscaled: 0.3764 (0.3764) class_error_unscaled: 60.0000 (60.0000) loss_bbox_unscaled: 0.1116 (0.1116) loss_giou_unscaled: 0.5067 (0.5067) cardinality_error_unscaled: 0.9375 (0.9375) loss_ce_0_unscaled: 0.5062 (0.5062) loss_bbox_0_unscaled: 0.1290 (0.1290) loss_giou_0_unscaled: 0.5803 (0.5803) cardinality_error_0_unscaled: 1.7500 (1.7500) loss_ce_1_unscaled: 0.4693 (0.4693) loss_bbox_1_unscaled: 0.1181 (0.1181) loss_giou_1_unscaled: 0.5375 (0.5375) cardinality_error_1_unscaled: 1.0625 (1.0625) loss_ce_2_unscaled: 0.4299 (0.4299) loss_bbox_2_unscaled: 0.1148 (0.1148) loss_giou_2_unscaled: 0.5394 (0.5394) cardinality_error_2_unscaled: 1.1250 (1.1250) loss_ce_3_unscaled: 0.4005 (0.4005) loss_bbox_3_unscaled: 0.1181 (0.1181) loss_giou_3_unscaled: 0.5235 (0.5235) cardinality_error_3_unscaled: 0.9375 (0.9375) loss_ce_4_unscaled: 0.3898 (0.3898) loss_bbox_4_unscaled: 0.1108 (0.1108) loss_giou_4_unscaled: 0.5021 (0.5021) cardinality_error_4_unscaled: 0.9375 (0.9375) time: 2.6080 data: 1.2831 max mem: 12069
Epoch: [12] [10/16] eta: 0:00:08 lr: 0.000010 class_error: 68.75 loss: 11.6159 (11.4770) loss_ce: 0.3880 (0.3881) loss_bbox: 0.4911 (0.4948) loss_giou: 0.9552 (0.9524) loss_ce_0: 0.5012 (0.4931) loss_bbox_0: 0.5278 (0.5472) loss_giou_0: 1.0361 (1.0415) loss_ce_1: 0.4625 (0.4595) loss_bbox_1: 0.5323 (0.5257) loss_giou_1: 0.9991 (0.9846) loss_ce_2: 0.4213 (0.4258) loss_bbox_2: 0.5403 (0.5056) loss_giou_2: 0.9973 (0.9675) loss_ce_3: 0.4032 (0.4068) loss_bbox_3: 0.4931 (0.5008) loss_giou_3: 0.9583 (0.9465) loss_ce_4: 0.3918 (0.3954) loss_bbox_4: 0.5028 (0.4906) loss_giou_4: 0.9335 (0.9510) loss_ce_unscaled: 0.3880 (0.3881) class_error_unscaled: 60.0000 (61.1364) loss_bbox_unscaled: 0.0982 (0.0990) loss_giou_unscaled: 0.4776 (0.4762) cardinality_error_unscaled: 0.9375 (0.9659) loss_ce_0_unscaled: 0.5012 (0.4931) loss_bbox_0_unscaled: 0.1056 (0.1094) loss_giou_0_unscaled: 0.5181 (0.5208) cardinality_error_0_unscaled: 0.9375 (1.0909) loss_ce_1_unscaled: 0.4625 (0.4595) loss_bbox_1_unscaled: 0.1065 (0.1051) loss_giou_1_unscaled: 0.4996 (0.4923) cardinality_error_1_unscaled: 0.9375 (0.9773) loss_ce_2_unscaled: 0.4213 (0.4258) loss_bbox_2_unscaled: 0.1081 (0.1011) loss_giou_2_unscaled: 0.4986 (0.4838) cardinality_error_2_unscaled: 1.0000 (0.9773) loss_ce_3_unscaled: 0.4032 (0.4068) loss_bbox_3_unscaled: 0.0986 (0.1002) loss_giou_3_unscaled: 0.4791 (0.4733) cardinality_error_3_unscaled: 0.9375 (0.9602) loss_ce_4_unscaled: 0.3918 (0.3954) loss_bbox_4_unscaled: 0.1006 (0.0981) loss_giou_4_unscaled: 0.4668 (0.4755) cardinality_error_4_unscaled: 0.9375 (0.9659) time: 1.3582 data: 0.1826 max mem: 12069
Epoch: [12] [15/16] eta: 0:00:01 lr: 0.000010 class_error: 69.23 loss: 11.0777 (11.3318) loss_ce: 0.3874 (0.3867) loss_bbox: 0.4596 (0.4887) loss_giou: 0.8846 (0.9393) loss_ce_0: 0.4883 (0.4871) loss_bbox_0: 0.5278 (0.5353) loss_giou_0: 0.9649 (1.0143) loss_ce_1: 0.4500 (0.4527) loss_bbox_1: 0.5311 (0.5201) loss_giou_1: 0.9157 (0.9689) loss_ce_2: 0.4166 (0.4209) loss_bbox_2: 0.4865 (0.5026) loss_giou_2: 0.9172 (0.9578) loss_ce_3: 0.4005 (0.4025) loss_bbox_3: 0.4734 (0.4964) loss_giou_3: 0.9277 (0.9364) loss_ce_4: 0.3918 (0.3929) loss_bbox_4: 0.4695 (0.4900) loss_giou_4: 0.8985 (0.9392) loss_ce_unscaled: 0.3874 (0.3867) class_error_unscaled: 60.0000 (57.6603) loss_bbox_unscaled: 0.0919 (0.0977) loss_giou_unscaled: 0.4423 (0.4697) cardinality_error_unscaled: 0.9375 (0.9570) loss_ce_0_unscaled: 0.4883 (0.4871) loss_bbox_0_unscaled: 0.1056 (0.1071) loss_giou_0_unscaled: 0.4824 (0.5071) cardinality_error_0_unscaled: 0.9375 (1.0703) loss_ce_1_unscaled: 0.4500 (0.4527) loss_bbox_1_unscaled: 0.1062 (0.1040) loss_giou_1_unscaled: 0.4578 (0.4844) cardinality_error_1_unscaled: 0.9375 (0.9531) loss_ce_2_unscaled: 0.4166 (0.4209) loss_bbox_2_unscaled: 0.0973 (0.1005) loss_giou_2_unscaled: 0.4586 (0.4789) cardinality_error_2_unscaled: 0.9375 (0.9609) loss_ce_3_unscaled: 0.4005 (0.4025) loss_bbox_3_unscaled: 0.0947 (0.0993) loss_giou_3_unscaled: 0.4639 (0.4682) cardinality_error_3_unscaled: 0.9375 (0.9531) loss_ce_4_unscaled: 0.3918 (0.3929) loss_bbox_4_unscaled: 0.0939 (0.0980) loss_giou_4_unscaled: 0.4492 (0.4696) cardinality_error_4_unscaled: 0.9375 (0.9570) time: 1.3090 data: 0.1442 max mem: 12069
Epoch: [12] Total time: 0:00:21 (1.3133 s / it)
Averaged stats: lr: 0.000010 class_error: 69.23 loss: 11.0777 (11.3318) loss_ce: 0.3874 (0.3867) loss_bbox: 0.4596 (0.4887) loss_giou: 0.8846 (0.9393) loss_ce_0: 0.4883 (0.4871) loss_bbox_0: 0.5278 (0.5353) loss_giou_0: 0.9649 (1.0143) loss_ce_1: 0.4500 (0.4527) loss_bbox_1: 0.5311 (0.5201) loss_giou_1: 0.9157 (0.9689) loss_ce_2: 0.4166 (0.4209) loss_bbox_2: 0.4865 (0.5026) loss_giou_2: 0.9172 (0.9578) loss_ce_3: 0.4005 (0.4025) loss_bbox_3: 0.4734 (0.4964) loss_giou_3: 0.9277 (0.9364) loss_ce_4: 0.3918 (0.3929) loss_bbox_4: 0.4695 (0.4900) loss_giou_4: 0.8985 (0.9392) loss_ce_unscaled: 0.3874 (0.3867) class_error_unscaled: 60.0000 (57.6603) loss_bbox_unscaled: 0.0919 (0.0977) loss_giou_unscaled: 0.4423 (0.4697) cardinality_error_unscaled: 0.9375 (0.9570) loss_ce_0_unscaled: 0.4883 (0.4871) loss_bbox_0_unscaled: 0.1056 (0.1071) loss_giou_0_unscaled: 0.4824 (0.5071) cardinality_error_0_unscaled: 0.9375 (1.0703) loss_ce_1_unscaled: 0.4500 (0.4527) loss_bbox_1_unscaled: 0.1062 (0.1040) loss_giou_1_unscaled: 0.4578 (0.4844) cardinality_error_1_unscaled: 0.9375 (0.9531) loss_ce_2_unscaled: 0.4166 (0.4209) loss_bbox_2_unscaled: 0.0973 (0.1005) loss_giou_2_unscaled: 0.4586 (0.4789) cardinality_error_2_unscaled: 0.9375 (0.9609) loss_ce_3_unscaled: 0.4005 (0.4025) loss_bbox_3_unscaled: 0.0947 (0.0993) loss_giou_3_unscaled: 0.4639 (0.4682) cardinality_error_3_unscaled: 0.9375 (0.9531) loss_ce_4_unscaled: 0.3918 (0.3929) loss_bbox_4_unscaled: 0.0939 (0.0980) loss_giou_4_unscaled: 0.4492 (0.4696) cardinality_error_4_unscaled: 0.9375 (0.9570)
Test: [0/4] eta: 0:00:08 class_error: 62.50 loss: 10.2224 (10.2224) loss_ce: 0.3982 (0.3982) loss_bbox: 0.3815 (0.3815) loss_giou: 0.9032 (0.9032) loss_ce_0: 0.4639 (0.4639) loss_bbox_0: 0.4117 (0.4117) loss_giou_0: 0.9002 (0.9002) loss_ce_1: 0.4413 (0.4413) loss_bbox_1: 0.3881 (0.3881) loss_giou_1: 0.8955 (0.8955) loss_ce_2: 0.4199 (0.4199) loss_bbox_2: 0.4086 (0.4086) loss_giou_2: 0.8867 (0.8867) loss_ce_3: 0.4063 (0.4063) loss_bbox_3: 0.3660 (0.3660) loss_giou_3: 0.8797 (0.8797) loss_ce_4: 0.3995 (0.3995) loss_bbox_4: 0.3626 (0.3626) loss_giou_4: 0.9093 (0.9093) loss_ce_unscaled: 0.3982 (0.3982) class_error_unscaled: 62.5000 (62.5000) loss_bbox_unscaled: 0.0763 (0.0763) loss_giou_unscaled: 0.4516 (0.4516) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4639 (0.4639) loss_bbox_0_unscaled: 0.0823 (0.0823) loss_giou_0_unscaled: 0.4501 (0.4501) cardinality_error_0_unscaled: 1.5000 (1.5000) loss_ce_1_unscaled: 0.4413 (0.4413) loss_bbox_1_unscaled: 0.0776 (0.0776) loss_giou_1_unscaled: 0.4478 (0.4478) cardinality_error_1_unscaled: 1.3125 (1.3125) loss_ce_2_unscaled: 0.4199 (0.4199) loss_bbox_2_unscaled: 0.0817 (0.0817) loss_giou_2_unscaled: 0.4433 (0.4433) cardinality_error_2_unscaled: 1.1250 (1.1250) loss_ce_3_unscaled: 0.4063 (0.4063) loss_bbox_3_unscaled: 0.0732 (0.0732) loss_giou_3_unscaled: 0.4398 (0.4398) cardinality_error_3_unscaled: 0.9375 (0.9375) loss_ce_4_unscaled: 0.3995 (0.3995) loss_bbox_4_unscaled: 0.0725 (0.0725) loss_giou_4_unscaled: 0.4547 (0.4547) cardinality_error_4_unscaled: 0.9375 (0.9375) time: 2.0293 data: 1.1501 max mem: 12069
Test: [3/4] eta: 0:00:01 class_error: 68.75 loss: 10.1036 (10.1452) loss_ce: 0.3982 (0.3973) loss_bbox: 0.3815 (0.4005) loss_giou: 0.7589 (0.8158) loss_ce_0: 0.4797 (0.4803) loss_bbox_0: 0.4197 (0.4437) loss_giou_0: 0.8909 (0.8943) loss_ce_1: 0.4442 (0.4496) loss_bbox_1: 0.4223 (0.4281) loss_giou_1: 0.8917 (0.8888) loss_ce_2: 0.4199 (0.4244) loss_bbox_2: 0.4086 (0.4334) loss_giou_2: 0.8273 (0.8403) loss_ce_3: 0.4063 (0.4098) loss_bbox_3: 0.3720 (0.4083) loss_giou_3: 0.7677 (0.8115) loss_ce_4: 0.3995 (0.4022) loss_bbox_4: 0.3627 (0.4060) loss_giou_4: 0.7588 (0.8107) loss_ce_unscaled: 0.3982 (0.3973) class_error_unscaled: 56.2500 (59.3750) loss_bbox_unscaled: 0.0763 (0.0801) loss_giou_unscaled: 0.3794 (0.4079) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4797 (0.4803) loss_bbox_0_unscaled: 0.0839 (0.0887) loss_giou_0_unscaled: 0.4454 (0.4472) cardinality_error_0_unscaled: 1.1875 (1.2656) loss_ce_1_unscaled: 0.4442 (0.4496) loss_bbox_1_unscaled: 0.0845 (0.0856) loss_giou_1_unscaled: 0.4459 (0.4444) cardinality_error_1_unscaled: 1.0000 (1.0625) loss_ce_2_unscaled: 0.4199 (0.4244) loss_bbox_2_unscaled: 0.0817 (0.0867) loss_giou_2_unscaled: 0.4137 (0.4202) cardinality_error_2_unscaled: 1.0000 (1.0312) loss_ce_3_unscaled: 0.4063 (0.4098) loss_bbox_3_unscaled: 0.0744 (0.0817) loss_giou_3_unscaled: 0.3838 (0.4058) cardinality_error_3_unscaled: 1.0000 (0.9844) loss_ce_4_unscaled: 0.3995 (0.4022) loss_bbox_4_unscaled: 0.0725 (0.0812) loss_giou_4_unscaled: 0.3794 (0.4053) cardinality_error_4_unscaled: 1.0000 (0.9844) time: 1.0210 data: 0.3525 max mem: 12069
Test: Total time: 0:00:04 (1.0380 s / it)
Averaged stats: class_error: 68.75 loss: 10.1036 (10.1452) loss_ce: 0.3982 (0.3973) loss_bbox: 0.3815 (0.4005) loss_giou: 0.7589 (0.8158) loss_ce_0: 0.4797 (0.4803) loss_bbox_0: 0.4197 (0.4437) loss_giou_0: 0.8909 (0.8943) loss_ce_1: 0.4442 (0.4496) loss_bbox_1: 0.4223 (0.4281) loss_giou_1: 0.8917 (0.8888) loss_ce_2: 0.4199 (0.4244) loss_bbox_2: 0.4086 (0.4334) loss_giou_2: 0.8273 (0.8403) loss_ce_3: 0.4063 (0.4098) loss_bbox_3: 0.3720 (0.4083) loss_giou_3: 0.7677 (0.8115) loss_ce_4: 0.3995 (0.4022) loss_bbox_4: 0.3627 (0.4060) loss_giou_4: 0.7588 (0.8107) loss_ce_unscaled: 0.3982 (0.3973) class_error_unscaled: 56.2500 (59.3750) loss_bbox_unscaled: 0.0763 (0.0801) loss_giou_unscaled: 0.3794 (0.4079) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4797 (0.4803) loss_bbox_0_unscaled: 0.0839 (0.0887) loss_giou_0_unscaled: 0.4454 (0.4472) cardinality_error_0_unscaled: 1.1875 (1.2656) loss_ce_1_unscaled: 0.4442 (0.4496) loss_bbox_1_unscaled: 0.0845 (0.0856) loss_giou_1_unscaled: 0.4459 (0.4444) cardinality_error_1_unscaled: 1.0000 (1.0625) loss_ce_2_unscaled: 0.4199 (0.4244) loss_bbox_2_unscaled: 0.0817 (0.0867) loss_giou_2_unscaled: 0.4137 (0.4202) cardinality_error_2_unscaled: 1.0000 (1.0312) loss_ce_3_unscaled: 0.4063 (0.4098) loss_bbox_3_unscaled: 0.0744 (0.0817) loss_giou_3_unscaled: 0.3838 (0.4058) cardinality_error_3_unscaled: 1.0000 (0.9844) loss_ce_4_unscaled: 0.3995 (0.4022) loss_bbox_4_unscaled: 0.0725 (0.0812) loss_giou_4_unscaled: 0.3794 (0.4053) cardinality_error_4_unscaled: 1.0000 (0.9844)
Accumulating evaluation results...
DONE (t=0.07s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.003
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.005
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.001
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.017
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.165
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.177
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.186
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.160
Epoch: [13] [ 0/16] eta: 0:00:38 lr: 0.000010 class_error: 37.50 loss: 12.0524 (12.0524) loss_ce: 0.3877 (0.3877) loss_bbox: 0.5032 (0.5032) loss_giou: 1.0111 (1.0111) loss_ce_0: 0.4846 (0.4846) loss_bbox_0: 0.6215 (0.6215) loss_giou_0: 1.0959 (1.0959) loss_ce_1: 0.4580 (0.4580) loss_bbox_1: 0.5872 (0.5872) loss_giou_1: 1.0788 (1.0788) loss_ce_2: 0.4117 (0.4117) loss_bbox_2: 0.5030 (0.5030) loss_giou_2: 1.0654 (1.0654) loss_ce_3: 0.4031 (0.4031) loss_bbox_3: 0.5495 (0.5495) loss_giou_3: 0.9939 (0.9939) loss_ce_4: 0.3921 (0.3921) loss_bbox_4: 0.5013 (0.5013) loss_giou_4: 1.0041 (1.0041) loss_ce_unscaled: 0.3877 (0.3877) class_error_unscaled: 37.5000 (37.5000) loss_bbox_unscaled: 0.1006 (0.1006) loss_giou_unscaled: 0.5056 (0.5056) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4846 (0.4846) loss_bbox_0_unscaled: 0.1243 (0.1243) loss_giou_0_unscaled: 0.5479 (0.5479) cardinality_error_0_unscaled: 0.9375 (0.9375) loss_ce_1_unscaled: 0.4580 (0.4580) loss_bbox_1_unscaled: 0.1174 (0.1174) loss_giou_1_unscaled: 0.5394 (0.5394) cardinality_error_1_unscaled: 1.0000 (1.0000) loss_ce_2_unscaled: 0.4117 (0.4117) loss_bbox_2_unscaled: 0.1006 (0.1006) loss_giou_2_unscaled: 0.5327 (0.5327) cardinality_error_2_unscaled: 1.0000 (1.0000) loss_ce_3_unscaled: 0.4031 (0.4031) loss_bbox_3_unscaled: 0.1099 (0.1099) loss_giou_3_unscaled: 0.4970 (0.4970) cardinality_error_3_unscaled: 1.0000 (1.0000) loss_ce_4_unscaled: 0.3921 (0.3921) loss_bbox_4_unscaled: 0.1003 (0.1003) loss_giou_4_unscaled: 0.5021 (0.5021) cardinality_error_4_unscaled: 1.0000 (1.0000) time: 2.3847 data: 1.1106 max mem: 12069
Epoch: [13] [10/16] eta: 0:00:08 lr: 0.000010 class_error: 50.00 loss: 10.7399 (11.0760) loss_ce: 0.3877 (0.3877) loss_bbox: 0.4851 (0.4693) loss_giou: 0.9610 (0.9176) loss_ce_0: 0.4751 (0.4772) loss_bbox_0: 0.5383 (0.5360) loss_giou_0: 0.9981 (0.9950) loss_ce_1: 0.4428 (0.4448) loss_bbox_1: 0.5277 (0.5148) loss_giou_1: 0.9565 (0.9620) loss_ce_2: 0.4117 (0.4170) loss_bbox_2: 0.4927 (0.4843) loss_giou_2: 0.9385 (0.9350) loss_ce_3: 0.4030 (0.4006) loss_bbox_3: 0.4745 (0.4663) loss_giou_3: 0.9068 (0.9047) loss_ce_4: 0.3921 (0.3925) loss_bbox_4: 0.4926 (0.4641) loss_giou_4: 0.9400 (0.9068) loss_ce_unscaled: 0.3877 (0.3877) class_error_unscaled: 56.2500 (58.3442) loss_bbox_unscaled: 0.0970 (0.0939) loss_giou_unscaled: 0.4805 (0.4588) cardinality_error_unscaled: 1.0000 (0.9659) loss_ce_0_unscaled: 0.4751 (0.4772) loss_bbox_0_unscaled: 0.1077 (0.1072) loss_giou_0_unscaled: 0.4991 (0.4975) cardinality_error_0_unscaled: 1.0000 (1.0170) loss_ce_1_unscaled: 0.4428 (0.4448) loss_bbox_1_unscaled: 0.1055 (0.1030) loss_giou_1_unscaled: 0.4783 (0.4810) cardinality_error_1_unscaled: 1.0000 (0.9602) loss_ce_2_unscaled: 0.4117 (0.4170) loss_bbox_2_unscaled: 0.0985 (0.0969) loss_giou_2_unscaled: 0.4693 (0.4675) cardinality_error_2_unscaled: 1.0000 (0.9659) loss_ce_3_unscaled: 0.4030 (0.4006) loss_bbox_3_unscaled: 0.0949 (0.0933) loss_giou_3_unscaled: 0.4534 (0.4524) cardinality_error_3_unscaled: 1.0000 (0.9659) loss_ce_4_unscaled: 0.3921 (0.3925) loss_bbox_4_unscaled: 0.0985 (0.0928) loss_giou_4_unscaled: 0.4700 (0.4534) cardinality_error_4_unscaled: 1.0000 (0.9659) time: 1.3480 data: 0.1624 max mem: 12069
Epoch: [13] [15/16] eta: 0:00:01 lr: 0.000010 class_error: 68.75 loss: 10.7399 (11.0805) loss_ce: 0.3823 (0.3850) loss_bbox: 0.4451 (0.4726) loss_giou: 0.9610 (0.9185) loss_ce_0: 0.4736 (0.4731) loss_bbox_0: 0.5383 (0.5357) loss_giou_0: 0.9894 (0.9853) loss_ce_1: 0.4409 (0.4430) loss_bbox_1: 0.5277 (0.5142) loss_giou_1: 0.9543 (0.9598) loss_ce_2: 0.4100 (0.4145) loss_bbox_2: 0.4696 (0.4894) loss_giou_2: 0.9791 (0.9465) loss_ce_3: 0.3925 (0.3989) loss_bbox_3: 0.4405 (0.4705) loss_giou_3: 0.9068 (0.9048) loss_ce_4: 0.3846 (0.3902) loss_bbox_4: 0.4506 (0.4679) loss_giou_4: 0.9400 (0.9107) loss_ce_unscaled: 0.3823 (0.3850) class_error_unscaled: 56.2500 (57.9762) loss_bbox_unscaled: 0.0890 (0.0945) loss_giou_unscaled: 0.4805 (0.4592) cardinality_error_unscaled: 0.9375 (0.9648) loss_ce_0_unscaled: 0.4736 (0.4731) loss_bbox_0_unscaled: 0.1077 (0.1071) loss_giou_0_unscaled: 0.4947 (0.4926) cardinality_error_0_unscaled: 1.0000 (1.0039) loss_ce_1_unscaled: 0.4409 (0.4430) loss_bbox_1_unscaled: 0.1055 (0.1028) loss_giou_1_unscaled: 0.4772 (0.4799) cardinality_error_1_unscaled: 1.0000 (0.9609) loss_ce_2_unscaled: 0.4100 (0.4145) loss_bbox_2_unscaled: 0.0939 (0.0979) loss_giou_2_unscaled: 0.4896 (0.4733) cardinality_error_2_unscaled: 0.9375 (0.9609) loss_ce_3_unscaled: 0.3925 (0.3989) loss_bbox_3_unscaled: 0.0881 (0.0941) loss_giou_3_unscaled: 0.4534 (0.4524) cardinality_error_3_unscaled: 0.9375 (0.9648) loss_ce_4_unscaled: 0.3846 (0.3902) loss_bbox_4_unscaled: 0.0901 (0.0936) loss_giou_4_unscaled: 0.4700 (0.4554) cardinality_error_4_unscaled: 0.9375 (0.9648) time: 1.3064 data: 0.1305 max mem: 12069
Epoch: [13] Total time: 0:00:20 (1.3106 s / it)
Averaged stats: lr: 0.000010 class_error: 68.75 loss: 10.7399 (11.0805) loss_ce: 0.3823 (0.3850) loss_bbox: 0.4451 (0.4726) loss_giou: 0.9610 (0.9185) loss_ce_0: 0.4736 (0.4731) loss_bbox_0: 0.5383 (0.5357) loss_giou_0: 0.9894 (0.9853) loss_ce_1: 0.4409 (0.4430) loss_bbox_1: 0.5277 (0.5142) loss_giou_1: 0.9543 (0.9598) loss_ce_2: 0.4100 (0.4145) loss_bbox_2: 0.4696 (0.4894) loss_giou_2: 0.9791 (0.9465) loss_ce_3: 0.3925 (0.3989) loss_bbox_3: 0.4405 (0.4705) loss_giou_3: 0.9068 (0.9048) loss_ce_4: 0.3846 (0.3902) loss_bbox_4: 0.4506 (0.4679) loss_giou_4: 0.9400 (0.9107) loss_ce_unscaled: 0.3823 (0.3850) class_error_unscaled: 56.2500 (57.9762) loss_bbox_unscaled: 0.0890 (0.0945) loss_giou_unscaled: 0.4805 (0.4592) cardinality_error_unscaled: 0.9375 (0.9648) loss_ce_0_unscaled: 0.4736 (0.4731) loss_bbox_0_unscaled: 0.1077 (0.1071) loss_giou_0_unscaled: 0.4947 (0.4926) cardinality_error_0_unscaled: 1.0000 (1.0039) loss_ce_1_unscaled: 0.4409 (0.4430) loss_bbox_1_unscaled: 0.1055 (0.1028) loss_giou_1_unscaled: 0.4772 (0.4799) cardinality_error_1_unscaled: 1.0000 (0.9609) loss_ce_2_unscaled: 0.4100 (0.4145) loss_bbox_2_unscaled: 0.0939 (0.0979) loss_giou_2_unscaled: 0.4896 (0.4733) cardinality_error_2_unscaled: 0.9375 (0.9609) loss_ce_3_unscaled: 0.3925 (0.3989) loss_bbox_3_unscaled: 0.0881 (0.0941) loss_giou_3_unscaled: 0.4534 (0.4524) cardinality_error_3_unscaled: 0.9375 (0.9648) loss_ce_4_unscaled: 0.3846 (0.3902) loss_bbox_4_unscaled: 0.0901 (0.0936) loss_giou_4_unscaled: 0.4700 (0.4554) cardinality_error_4_unscaled: 0.9375 (0.9648)
Test: [0/4] eta: 0:00:07 class_error: 62.50 loss: 10.0109 (10.0109) loss_ce: 0.3943 (0.3943) loss_bbox: 0.3847 (0.3847) loss_giou: 0.8949 (0.8949) loss_ce_0: 0.4494 (0.4494) loss_bbox_0: 0.3879 (0.3879) loss_giou_0: 0.8616 (0.8616) loss_ce_1: 0.4273 (0.4273) loss_bbox_1: 0.3721 (0.3721) loss_giou_1: 0.8896 (0.8896) loss_ce_2: 0.4124 (0.4124) loss_bbox_2: 0.3835 (0.3835) loss_giou_2: 0.8795 (0.8795) loss_ce_3: 0.3998 (0.3998) loss_bbox_3: 0.3585 (0.3585) loss_giou_3: 0.8619 (0.8619) loss_ce_4: 0.3962 (0.3962) loss_bbox_4: 0.3652 (0.3652) loss_giou_4: 0.8921 (0.8921) loss_ce_unscaled: 0.3943 (0.3943) class_error_unscaled: 62.5000 (62.5000) loss_bbox_unscaled: 0.0769 (0.0769) loss_giou_unscaled: 0.4475 (0.4475) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4494 (0.4494) loss_bbox_0_unscaled: 0.0776 (0.0776) loss_giou_0_unscaled: 0.4308 (0.4308) cardinality_error_0_unscaled: 1.4375 (1.4375) loss_ce_1_unscaled: 0.4273 (0.4273) loss_bbox_1_unscaled: 0.0744 (0.0744) loss_giou_1_unscaled: 0.4448 (0.4448) cardinality_error_1_unscaled: 1.0625 (1.0625) loss_ce_2_unscaled: 0.4124 (0.4124) loss_bbox_2_unscaled: 0.0767 (0.0767) loss_giou_2_unscaled: 0.4398 (0.4398) cardinality_error_2_unscaled: 1.0625 (1.0625) loss_ce_3_unscaled: 0.3998 (0.3998) loss_bbox_3_unscaled: 0.0717 (0.0717) loss_giou_3_unscaled: 0.4309 (0.4309) cardinality_error_3_unscaled: 0.9375 (0.9375) loss_ce_4_unscaled: 0.3962 (0.3962) loss_bbox_4_unscaled: 0.0730 (0.0730) loss_giou_4_unscaled: 0.4461 (0.4461) cardinality_error_4_unscaled: 0.9375 (0.9375) time: 1.8609 data: 1.1080 max mem: 12069
Test: [3/4] eta: 0:00:00 class_error: 62.50 loss: 9.8634 (9.9560) loss_ce: 0.3943 (0.3966) loss_bbox: 0.3847 (0.3999) loss_giou: 0.7719 (0.8127) loss_ce_0: 0.4697 (0.4655) loss_bbox_0: 0.4147 (0.4304) loss_giou_0: 0.8616 (0.8696) loss_ce_1: 0.4351 (0.4394) loss_bbox_1: 0.4105 (0.4139) loss_giou_1: 0.8543 (0.8688) loss_ce_2: 0.4124 (0.4183) loss_bbox_2: 0.4115 (0.4312) loss_giou_2: 0.7735 (0.8079) loss_ce_3: 0.3998 (0.4041) loss_bbox_3: 0.3762 (0.4019) loss_giou_3: 0.7791 (0.7930) loss_ce_4: 0.3962 (0.4001) loss_bbox_4: 0.3674 (0.4004) loss_giou_4: 0.7754 (0.8022) loss_ce_unscaled: 0.3943 (0.3966) class_error_unscaled: 56.2500 (59.3750) loss_bbox_unscaled: 0.0769 (0.0800) loss_giou_unscaled: 0.3860 (0.4063) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4697 (0.4655) loss_bbox_0_unscaled: 0.0829 (0.0861) loss_giou_0_unscaled: 0.4308 (0.4348) cardinality_error_0_unscaled: 1.0625 (1.1406) loss_ce_1_unscaled: 0.4351 (0.4394) loss_bbox_1_unscaled: 0.0821 (0.0828) loss_giou_1_unscaled: 0.4272 (0.4344) cardinality_error_1_unscaled: 1.0000 (1.0156) loss_ce_2_unscaled: 0.4124 (0.4183) loss_bbox_2_unscaled: 0.0823 (0.0862) loss_giou_2_unscaled: 0.3868 (0.4040) cardinality_error_2_unscaled: 1.0000 (1.0156) loss_ce_3_unscaled: 0.3998 (0.4041) loss_bbox_3_unscaled: 0.0752 (0.0804) loss_giou_3_unscaled: 0.3896 (0.3965) cardinality_error_3_unscaled: 1.0000 (0.9844) loss_ce_4_unscaled: 0.3962 (0.4001) loss_bbox_4_unscaled: 0.0735 (0.0801) loss_giou_4_unscaled: 0.3877 (0.4011) cardinality_error_4_unscaled: 1.0000 (0.9844) time: 0.9839 data: 0.3367 max mem: 12069
Test: Total time: 0:00:03 (0.9998 s / it)
Averaged stats: class_error: 62.50 loss: 9.8634 (9.9560) loss_ce: 0.3943 (0.3966) loss_bbox: 0.3847 (0.3999) loss_giou: 0.7719 (0.8127) loss_ce_0: 0.4697 (0.4655) loss_bbox_0: 0.4147 (0.4304) loss_giou_0: 0.8616 (0.8696) loss_ce_1: 0.4351 (0.4394) loss_bbox_1: 0.4105 (0.4139) loss_giou_1: 0.8543 (0.8688) loss_ce_2: 0.4124 (0.4183) loss_bbox_2: 0.4115 (0.4312) loss_giou_2: 0.7735 (0.8079) loss_ce_3: 0.3998 (0.4041) loss_bbox_3: 0.3762 (0.4019) loss_giou_3: 0.7791 (0.7930) loss_ce_4: 0.3962 (0.4001) loss_bbox_4: 0.3674 (0.4004) loss_giou_4: 0.7754 (0.8022) loss_ce_unscaled: 0.3943 (0.3966) class_error_unscaled: 56.2500 (59.3750) loss_bbox_unscaled: 0.0769 (0.0800) loss_giou_unscaled: 0.3860 (0.4063) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4697 (0.4655) loss_bbox_0_unscaled: 0.0829 (0.0861) loss_giou_0_unscaled: 0.4308 (0.4348) cardinality_error_0_unscaled: 1.0625 (1.1406) loss_ce_1_unscaled: 0.4351 (0.4394) loss_bbox_1_unscaled: 0.0821 (0.0828) loss_giou_1_unscaled: 0.4272 (0.4344) cardinality_error_1_unscaled: 1.0000 (1.0156) loss_ce_2_unscaled: 0.4124 (0.4183) loss_bbox_2_unscaled: 0.0823 (0.0862) loss_giou_2_unscaled: 0.3868 (0.4040) cardinality_error_2_unscaled: 1.0000 (1.0156) loss_ce_3_unscaled: 0.3998 (0.4041) loss_bbox_3_unscaled: 0.0752 (0.0804) loss_giou_3_unscaled: 0.3896 (0.3965) cardinality_error_3_unscaled: 1.0000 (0.9844) loss_ce_4_unscaled: 0.3962 (0.4001) loss_bbox_4_unscaled: 0.0735 (0.0801) loss_giou_4_unscaled: 0.3877 (0.4011) cardinality_error_4_unscaled: 1.0000 (0.9844)
Accumulating evaluation results...
DONE (t=0.06s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.004
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.006
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.001
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.025
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.150
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.240
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.109
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.154
Epoch: [14] [ 0/16] eta: 0:00:39 lr: 0.000010 class_error: 85.71 loss: 10.8443 (10.8443) loss_ce: 0.3663 (0.3663) loss_bbox: 0.4392 (0.4392) loss_giou: 0.9383 (0.9383) loss_ce_0: 0.4606 (0.4606) loss_bbox_0: 0.5177 (0.5177) loss_giou_0: 0.8977 (0.8977) loss_ce_1: 0.4192 (0.4192) loss_bbox_1: 0.5166 (0.5166) loss_giou_1: 0.9159 (0.9159) loss_ce_2: 0.4017 (0.4017) loss_bbox_2: 0.5249 (0.5249) loss_giou_2: 0.9282 (0.9282) loss_ce_3: 0.3832 (0.3832) loss_bbox_3: 0.4563 (0.4563) loss_giou_3: 0.9450 (0.9450) loss_ce_4: 0.3718 (0.3718) loss_bbox_4: 0.4276 (0.4276) loss_giou_4: 0.9341 (0.9341) loss_ce_unscaled: 0.3663 (0.3663) class_error_unscaled: 85.7143 (85.7143) loss_bbox_unscaled: 0.0878 (0.0878) loss_giou_unscaled: 0.4691 (0.4691) cardinality_error_unscaled: 0.8750 (0.8750) loss_ce_0_unscaled: 0.4606 (0.4606) loss_bbox_0_unscaled: 0.1035 (0.1035) loss_giou_0_unscaled: 0.4489 (0.4489) cardinality_error_0_unscaled: 1.0625 (1.0625) loss_ce_1_unscaled: 0.4192 (0.4192) loss_bbox_1_unscaled: 0.1033 (0.1033) loss_giou_1_unscaled: 0.4579 (0.4579) cardinality_error_1_unscaled: 0.8750 (0.8750) loss_ce_2_unscaled: 0.4017 (0.4017) loss_bbox_2_unscaled: 0.1050 (0.1050) loss_giou_2_unscaled: 0.4641 (0.4641) cardinality_error_2_unscaled: 0.8750 (0.8750) loss_ce_3_unscaled: 0.3832 (0.3832) loss_bbox_3_unscaled: 0.0913 (0.0913) loss_giou_3_unscaled: 0.4725 (0.4725) cardinality_error_3_unscaled: 0.8750 (0.8750) loss_ce_4_unscaled: 0.3718 (0.3718) loss_bbox_4_unscaled: 0.0855 (0.0855) loss_giou_4_unscaled: 0.4671 (0.4671) cardinality_error_4_unscaled: 0.8750 (0.8750) time: 2.4900 data: 1.2403 max mem: 12069
Epoch: [14] [10/16] eta: 0:00:07 lr: 0.000010 class_error: 81.25 loss: 10.8443 (10.7126) loss_ce: 0.3867 (0.3819) loss_bbox: 0.4392 (0.4397) loss_giou: 0.8643 (0.8675) loss_ce_0: 0.4670 (0.4633) loss_bbox_0: 0.4925 (0.5075) loss_giou_0: 0.9314 (0.9581) loss_ce_1: 0.4352 (0.4324) loss_bbox_1: 0.5166 (0.4899) loss_giou_1: 0.9540 (0.9409) loss_ce_2: 0.4114 (0.4076) loss_bbox_2: 0.5126 (0.4854) loss_giou_2: 0.8929 (0.8919) loss_ce_3: 0.3972 (0.3931) loss_bbox_3: 0.4563 (0.4613) loss_giou_3: 0.8665 (0.8852) loss_ce_4: 0.3918 (0.3861) loss_bbox_4: 0.4352 (0.4463) loss_giou_4: 0.8728 (0.8746) loss_ce_unscaled: 0.3867 (0.3819) class_error_unscaled: 66.6667 (63.2035) loss_bbox_unscaled: 0.0878 (0.0879) loss_giou_unscaled: 0.4321 (0.4337) cardinality_error_unscaled: 0.9375 (0.9545) loss_ce_0_unscaled: 0.4670 (0.4633) loss_bbox_0_unscaled: 0.0985 (0.1015) loss_giou_0_unscaled: 0.4657 (0.4790) cardinality_error_0_unscaled: 1.0000 (0.9830) loss_ce_1_unscaled: 0.4352 (0.4324) loss_bbox_1_unscaled: 0.1033 (0.0980) loss_giou_1_unscaled: 0.4770 (0.4705) cardinality_error_1_unscaled: 0.9375 (0.9432) loss_ce_2_unscaled: 0.4114 (0.4076) loss_bbox_2_unscaled: 0.1025 (0.0971) loss_giou_2_unscaled: 0.4464 (0.4460) cardinality_error_2_unscaled: 0.9375 (0.9375) loss_ce_3_unscaled: 0.3972 (0.3931) loss_bbox_3_unscaled: 0.0913 (0.0923) loss_giou_3_unscaled: 0.4333 (0.4426) cardinality_error_3_unscaled: 0.9375 (0.9432) loss_ce_4_unscaled: 0.3918 (0.3861) loss_bbox_4_unscaled: 0.0870 (0.0893) loss_giou_4_unscaled: 0.4364 (0.4373) cardinality_error_4_unscaled: 0.9375 (0.9545) time: 1.3219 data: 0.1691 max mem: 12069
Epoch: [14] [15/16] eta: 0:00:01 lr: 0.000010 class_error: 43.75 loss: 10.6506 (10.6764) loss_ce: 0.3867 (0.3842) loss_bbox: 0.4252 (0.4374) loss_giou: 0.8643 (0.8640) loss_ce_0: 0.4628 (0.4632) loss_bbox_0: 0.5177 (0.5225) loss_giou_0: 0.9489 (0.9708) loss_ce_1: 0.4337 (0.4321) loss_bbox_1: 0.4652 (0.4818) loss_giou_1: 0.9107 (0.9214) loss_ce_2: 0.4106 (0.4084) loss_bbox_2: 0.4361 (0.4756) loss_giou_2: 0.8929 (0.8883) loss_ce_3: 0.3972 (0.3941) loss_bbox_3: 0.4399 (0.4535) loss_giou_3: 0.8665 (0.8798) loss_ce_4: 0.3918 (0.3880) loss_bbox_4: 0.4302 (0.4417) loss_giou_4: 0.8568 (0.8695) loss_ce_unscaled: 0.3867 (0.3842) class_error_unscaled: 60.0000 (61.1607) loss_bbox_unscaled: 0.0850 (0.0875) loss_giou_unscaled: 0.4321 (0.4320) cardinality_error_unscaled: 0.9375 (0.9609) loss_ce_0_unscaled: 0.4628 (0.4632) loss_bbox_0_unscaled: 0.1035 (0.1045) loss_giou_0_unscaled: 0.4744 (0.4854) cardinality_error_0_unscaled: 0.9375 (0.9688) loss_ce_1_unscaled: 0.4337 (0.4321) loss_bbox_1_unscaled: 0.0930 (0.0964) loss_giou_1_unscaled: 0.4553 (0.4607) cardinality_error_1_unscaled: 0.9375 (0.9570) loss_ce_2_unscaled: 0.4106 (0.4084) loss_bbox_2_unscaled: 0.0872 (0.0951) loss_giou_2_unscaled: 0.4464 (0.4442) cardinality_error_2_unscaled: 0.9375 (0.9492) loss_ce_3_unscaled: 0.3972 (0.3941) loss_bbox_3_unscaled: 0.0880 (0.0907) loss_giou_3_unscaled: 0.4333 (0.4399) cardinality_error_3_unscaled: 0.9375 (0.9531) loss_ce_4_unscaled: 0.3918 (0.3880) loss_bbox_4_unscaled: 0.0860 (0.0883) loss_giou_4_unscaled: 0.4284 (0.4347) cardinality_error_4_unscaled: 0.9375 (0.9609) time: 1.2910 data: 0.1362 max mem: 12069
Epoch: [14] Total time: 0:00:20 (1.2952 s / it)
Averaged stats: lr: 0.000010 class_error: 43.75 loss: 10.6506 (10.6764) loss_ce: 0.3867 (0.3842) loss_bbox: 0.4252 (0.4374) loss_giou: 0.8643 (0.8640) loss_ce_0: 0.4628 (0.4632) loss_bbox_0: 0.5177 (0.5225) loss_giou_0: 0.9489 (0.9708) loss_ce_1: 0.4337 (0.4321) loss_bbox_1: 0.4652 (0.4818) loss_giou_1: 0.9107 (0.9214) loss_ce_2: 0.4106 (0.4084) loss_bbox_2: 0.4361 (0.4756) loss_giou_2: 0.8929 (0.8883) loss_ce_3: 0.3972 (0.3941) loss_bbox_3: 0.4399 (0.4535) loss_giou_3: 0.8665 (0.8798) loss_ce_4: 0.3918 (0.3880) loss_bbox_4: 0.4302 (0.4417) loss_giou_4: 0.8568 (0.8695) loss_ce_unscaled: 0.3867 (0.3842) class_error_unscaled: 60.0000 (61.1607) loss_bbox_unscaled: 0.0850 (0.0875) loss_giou_unscaled: 0.4321 (0.4320) cardinality_error_unscaled: 0.9375 (0.9609) loss_ce_0_unscaled: 0.4628 (0.4632) loss_bbox_0_unscaled: 0.1035 (0.1045) loss_giou_0_unscaled: 0.4744 (0.4854) cardinality_error_0_unscaled: 0.9375 (0.9688) loss_ce_1_unscaled: 0.4337 (0.4321) loss_bbox_1_unscaled: 0.0930 (0.0964) loss_giou_1_unscaled: 0.4553 (0.4607) cardinality_error_1_unscaled: 0.9375 (0.9570) loss_ce_2_unscaled: 0.4106 (0.4084) loss_bbox_2_unscaled: 0.0872 (0.0951) loss_giou_2_unscaled: 0.4464 (0.4442) cardinality_error_2_unscaled: 0.9375 (0.9492) loss_ce_3_unscaled: 0.3972 (0.3941) loss_bbox_3_unscaled: 0.0880 (0.0907) loss_giou_3_unscaled: 0.4333 (0.4399) cardinality_error_3_unscaled: 0.9375 (0.9531) loss_ce_4_unscaled: 0.3918 (0.3880) loss_bbox_4_unscaled: 0.0860 (0.0883) loss_giou_4_unscaled: 0.4284 (0.4347) cardinality_error_4_unscaled: 0.9375 (0.9609)
Test: [0/4] eta: 0:00:08 class_error: 68.75 loss: 9.5127 (9.5127) loss_ce: 0.3941 (0.3941) loss_bbox: 0.3425 (0.3425) loss_giou: 0.8336 (0.8336) loss_ce_0: 0.4436 (0.4436) loss_bbox_0: 0.3651 (0.3651) loss_giou_0: 0.8330 (0.8330) loss_ce_1: 0.4217 (0.4217) loss_bbox_1: 0.3612 (0.3612) loss_giou_1: 0.8445 (0.8445) loss_ce_2: 0.4034 (0.4034) loss_bbox_2: 0.3696 (0.3696) loss_giou_2: 0.8235 (0.8235) loss_ce_3: 0.3994 (0.3994) loss_bbox_3: 0.3406 (0.3406) loss_giou_3: 0.7883 (0.7883) loss_ce_4: 0.3958 (0.3958) loss_bbox_4: 0.3354 (0.3354) loss_giou_4: 0.8173 (0.8173) loss_ce_unscaled: 0.3941 (0.3941) class_error_unscaled: 68.7500 (68.7500) loss_bbox_unscaled: 0.0685 (0.0685) loss_giou_unscaled: 0.4168 (0.4168) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4436 (0.4436) loss_bbox_0_unscaled: 0.0730 (0.0730) loss_giou_0_unscaled: 0.4165 (0.4165) cardinality_error_0_unscaled: 1.2500 (1.2500) loss_ce_1_unscaled: 0.4217 (0.4217) loss_bbox_1_unscaled: 0.0722 (0.0722) loss_giou_1_unscaled: 0.4223 (0.4223) cardinality_error_1_unscaled: 1.1250 (1.1250) loss_ce_2_unscaled: 0.4034 (0.4034) loss_bbox_2_unscaled: 0.0739 (0.0739) loss_giou_2_unscaled: 0.4117 (0.4117) cardinality_error_2_unscaled: 0.9375 (0.9375) loss_ce_3_unscaled: 0.3994 (0.3994) loss_bbox_3_unscaled: 0.0681 (0.0681) loss_giou_3_unscaled: 0.3941 (0.3941) cardinality_error_3_unscaled: 1.0000 (1.0000) loss_ce_4_unscaled: 0.3958 (0.3958) loss_bbox_4_unscaled: 0.0671 (0.0671) loss_giou_4_unscaled: 0.4087 (0.4087) cardinality_error_4_unscaled: 1.0000 (1.0000) time: 2.1399 data: 1.1173 max mem: 12069
Test: [3/4] eta: 0:00:01 class_error: 56.25 loss: 9.8218 (9.7543) loss_ce: 0.3914 (0.3927) loss_bbox: 0.3803 (0.3814) loss_giou: 0.7775 (0.8020) loss_ce_0: 0.4633 (0.4596) loss_bbox_0: 0.4312 (0.4184) loss_giou_0: 0.8330 (0.8557) loss_ce_1: 0.4327 (0.4344) loss_bbox_1: 0.4139 (0.4143) loss_giou_1: 0.8385 (0.8420) loss_ce_2: 0.4066 (0.4128) loss_bbox_2: 0.4038 (0.4257) loss_giou_2: 0.7653 (0.7790) loss_ce_3: 0.3994 (0.4012) loss_bbox_3: 0.3877 (0.3923) loss_giou_3: 0.7786 (0.7731) loss_ce_4: 0.3958 (0.3970) loss_bbox_4: 0.3810 (0.3834) loss_giou_4: 0.7599 (0.7892) loss_ce_unscaled: 0.3914 (0.3927) class_error_unscaled: 56.2500 (59.3750) loss_bbox_unscaled: 0.0761 (0.0763) loss_giou_unscaled: 0.3888 (0.4010) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4633 (0.4596) loss_bbox_0_unscaled: 0.0862 (0.0837) loss_giou_0_unscaled: 0.4165 (0.4278) cardinality_error_0_unscaled: 1.0000 (1.1094) loss_ce_1_unscaled: 0.4327 (0.4344) loss_bbox_1_unscaled: 0.0828 (0.0829) loss_giou_1_unscaled: 0.4192 (0.4210) cardinality_error_1_unscaled: 1.0000 (1.0312) loss_ce_2_unscaled: 0.4066 (0.4128) loss_bbox_2_unscaled: 0.0808 (0.0851) loss_giou_2_unscaled: 0.3827 (0.3895) cardinality_error_2_unscaled: 1.0000 (0.9844) loss_ce_3_unscaled: 0.3994 (0.4012) loss_bbox_3_unscaled: 0.0775 (0.0785) loss_giou_3_unscaled: 0.3893 (0.3865) cardinality_error_3_unscaled: 1.0000 (1.0000) loss_ce_4_unscaled: 0.3958 (0.3970) loss_bbox_4_unscaled: 0.0762 (0.0767) loss_giou_4_unscaled: 0.3800 (0.3946) cardinality_error_4_unscaled: 1.0000 (1.0000) time: 1.0558 data: 0.3507 max mem: 12069
Test: Total time: 0:00:04 (1.0725 s / it)
Averaged stats: class_error: 56.25 loss: 9.8218 (9.7543) loss_ce: 0.3914 (0.3927) loss_bbox: 0.3803 (0.3814) loss_giou: 0.7775 (0.8020) loss_ce_0: 0.4633 (0.4596) loss_bbox_0: 0.4312 (0.4184) loss_giou_0: 0.8330 (0.8557) loss_ce_1: 0.4327 (0.4344) loss_bbox_1: 0.4139 (0.4143) loss_giou_1: 0.8385 (0.8420) loss_ce_2: 0.4066 (0.4128) loss_bbox_2: 0.4038 (0.4257) loss_giou_2: 0.7653 (0.7790) loss_ce_3: 0.3994 (0.4012) loss_bbox_3: 0.3877 (0.3923) loss_giou_3: 0.7786 (0.7731) loss_ce_4: 0.3958 (0.3970) loss_bbox_4: 0.3810 (0.3834) loss_giou_4: 0.7599 (0.7892) loss_ce_unscaled: 0.3914 (0.3927) class_error_unscaled: 56.2500 (59.3750) loss_bbox_unscaled: 0.0761 (0.0763) loss_giou_unscaled: 0.3888 (0.4010) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4633 (0.4596) loss_bbox_0_unscaled: 0.0862 (0.0837) loss_giou_0_unscaled: 0.4165 (0.4278) cardinality_error_0_unscaled: 1.0000 (1.1094) loss_ce_1_unscaled: 0.4327 (0.4344) loss_bbox_1_unscaled: 0.0828 (0.0829) loss_giou_1_unscaled: 0.4192 (0.4210) cardinality_error_1_unscaled: 1.0000 (1.0312) loss_ce_2_unscaled: 0.4066 (0.4128) loss_bbox_2_unscaled: 0.0808 (0.0851) loss_giou_2_unscaled: 0.3827 (0.3895) cardinality_error_2_unscaled: 1.0000 (0.9844) loss_ce_3_unscaled: 0.3994 (0.4012) loss_bbox_3_unscaled: 0.0775 (0.0785) loss_giou_3_unscaled: 0.3893 (0.3865) cardinality_error_3_unscaled: 1.0000 (1.0000) loss_ce_4_unscaled: 0.3958 (0.3970) loss_bbox_4_unscaled: 0.0762 (0.0767) loss_giou_4_unscaled: 0.3800 (0.3946) cardinality_error_4_unscaled: 1.0000 (1.0000)
Accumulating evaluation results...
DONE (t=0.06s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.005
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.006
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.001
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.033
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.145
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.212
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.144
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.136
Epoch: [15] [ 0/16] eta: 0:00:40 lr: 0.000010 class_error: 33.33 loss: 12.7913 (12.7913) loss_ce: 0.3749 (0.3749) loss_bbox: 0.6497 (0.6497) loss_giou: 0.9990 (0.9990) loss_ce_0: 0.4456 (0.4456) loss_bbox_0: 0.6597 (0.6597) loss_giou_0: 1.1616 (1.1616) loss_ce_1: 0.4135 (0.4135) loss_bbox_1: 0.6569 (0.6569) loss_giou_1: 1.1177 (1.1177) loss_ce_2: 0.3957 (0.3957) loss_bbox_2: 0.7211 (0.7211) loss_giou_2: 1.0596 (1.0596) loss_ce_3: 0.3830 (0.3830) loss_bbox_3: 0.6635 (0.6635) loss_giou_3: 1.0606 (1.0606) loss_ce_4: 0.3808 (0.3808) loss_bbox_4: 0.6548 (0.6548) loss_giou_4: 0.9937 (0.9937) loss_ce_unscaled: 0.3749 (0.3749) class_error_unscaled: 33.3333 (33.3333) loss_bbox_unscaled: 0.1299 (0.1299) loss_giou_unscaled: 0.4995 (0.4995) cardinality_error_unscaled: 0.9375 (0.9375) loss_ce_0_unscaled: 0.4456 (0.4456) loss_bbox_0_unscaled: 0.1319 (0.1319) loss_giou_0_unscaled: 0.5808 (0.5808) cardinality_error_0_unscaled: 0.9375 (0.9375) loss_ce_1_unscaled: 0.4135 (0.4135) loss_bbox_1_unscaled: 0.1314 (0.1314) loss_giou_1_unscaled: 0.5589 (0.5589) cardinality_error_1_unscaled: 0.9375 (0.9375) loss_ce_2_unscaled: 0.3957 (0.3957) loss_bbox_2_unscaled: 0.1442 (0.1442) loss_giou_2_unscaled: 0.5298 (0.5298) cardinality_error_2_unscaled: 0.9375 (0.9375) loss_ce_3_unscaled: 0.3830 (0.3830) loss_bbox_3_unscaled: 0.1327 (0.1327) loss_giou_3_unscaled: 0.5303 (0.5303) cardinality_error_3_unscaled: 0.9375 (0.9375) loss_ce_4_unscaled: 0.3808 (0.3808) loss_bbox_4_unscaled: 0.1310 (0.1310) loss_giou_4_unscaled: 0.4969 (0.4969) cardinality_error_4_unscaled: 0.9375 (0.9375) time: 2.5078 data: 1.2556 max mem: 12069
Epoch: [15] [10/16] eta: 0:00:07 lr: 0.000010 class_error: 56.25 loss: 10.7534 (10.5772) loss_ce: 0.3884 (0.3867) loss_bbox: 0.4626 (0.4539) loss_giou: 0.8215 (0.8326) loss_ce_0: 0.4623 (0.4603) loss_bbox_0: 0.5198 (0.5161) loss_giou_0: 0.9239 (0.9370) loss_ce_1: 0.4388 (0.4343) loss_bbox_1: 0.4877 (0.4881) loss_giou_1: 0.8867 (0.9010) loss_ce_2: 0.4135 (0.4115) loss_bbox_2: 0.4912 (0.4823) loss_giou_2: 0.9033 (0.8840) loss_ce_3: 0.3977 (0.3965) loss_bbox_3: 0.4661 (0.4606) loss_giou_3: 0.8075 (0.8510) loss_ce_4: 0.3902 (0.3902) loss_bbox_4: 0.4569 (0.4531) loss_giou_4: 0.8143 (0.8380) loss_ce_unscaled: 0.3884 (0.3867) class_error_unscaled: 46.6667 (52.0076) loss_bbox_unscaled: 0.0925 (0.0908) loss_giou_unscaled: 0.4107 (0.4163) cardinality_error_unscaled: 0.9375 (0.9602) loss_ce_0_unscaled: 0.4623 (0.4603) loss_bbox_0_unscaled: 0.1040 (0.1032) loss_giou_0_unscaled: 0.4620 (0.4685) cardinality_error_0_unscaled: 0.9375 (1.0284) loss_ce_1_unscaled: 0.4388 (0.4343) loss_bbox_1_unscaled: 0.0975 (0.0976) loss_giou_1_unscaled: 0.4433 (0.4505) cardinality_error_1_unscaled: 0.9375 (0.9886) loss_ce_2_unscaled: 0.4135 (0.4115) loss_bbox_2_unscaled: 0.0982 (0.0965) loss_giou_2_unscaled: 0.4517 (0.4420) cardinality_error_2_unscaled: 0.9375 (0.9545) loss_ce_3_unscaled: 0.3977 (0.3965) loss_bbox_3_unscaled: 0.0932 (0.0921) loss_giou_3_unscaled: 0.4038 (0.4255) cardinality_error_3_unscaled: 0.9375 (0.9602) loss_ce_4_unscaled: 0.3902 (0.3902) loss_bbox_4_unscaled: 0.0914 (0.0906) loss_giou_4_unscaled: 0.4071 (0.4190) cardinality_error_4_unscaled: 0.9375 (0.9602) time: 1.3220 data: 0.1694 max mem: 12069
Epoch: [15] [15/16] eta: 0:00:01 lr: 0.000010 class_error: 53.33 loss: 10.4878 (10.5442) loss_ce: 0.3879 (0.3873) loss_bbox: 0.4357 (0.4528) loss_giou: 0.8356 (0.8470) loss_ce_0: 0.4551 (0.4573) loss_bbox_0: 0.5075 (0.5083) loss_giou_0: 0.9330 (0.9425) loss_ce_1: 0.4329 (0.4299) loss_bbox_1: 0.4857 (0.4749) loss_giou_1: 0.8792 (0.8904) loss_ce_2: 0.4087 (0.4092) loss_bbox_2: 0.4857 (0.4660) loss_giou_2: 0.8754 (0.8792) loss_ce_3: 0.3961 (0.3948) loss_bbox_3: 0.4587 (0.4520) loss_giou_3: 0.8532 (0.8603) loss_ce_4: 0.3902 (0.3908) loss_bbox_4: 0.4491 (0.4490) loss_giou_4: 0.8536 (0.8524) loss_ce_unscaled: 0.3879 (0.3873) class_error_unscaled: 50.0000 (53.0469) loss_bbox_unscaled: 0.0871 (0.0906) loss_giou_unscaled: 0.4178 (0.4235) cardinality_error_unscaled: 0.9375 (0.9570) loss_ce_0_unscaled: 0.4551 (0.4573) loss_bbox_0_unscaled: 0.1015 (0.1017) loss_giou_0_unscaled: 0.4665 (0.4713) cardinality_error_0_unscaled: 0.9375 (1.0039) loss_ce_1_unscaled: 0.4329 (0.4299) loss_bbox_1_unscaled: 0.0971 (0.0950) loss_giou_1_unscaled: 0.4396 (0.4452) cardinality_error_1_unscaled: 0.9375 (0.9766) loss_ce_2_unscaled: 0.4087 (0.4092) loss_bbox_2_unscaled: 0.0971 (0.0932) loss_giou_2_unscaled: 0.4377 (0.4396) cardinality_error_2_unscaled: 0.9375 (0.9531) loss_ce_3_unscaled: 0.3961 (0.3948) loss_bbox_3_unscaled: 0.0917 (0.0904) loss_giou_3_unscaled: 0.4266 (0.4301) cardinality_error_3_unscaled: 0.9375 (0.9570) loss_ce_4_unscaled: 0.3902 (0.3908) loss_bbox_4_unscaled: 0.0898 (0.0898) loss_giou_4_unscaled: 0.4268 (0.4262) cardinality_error_4_unscaled: 0.9375 (0.9570) time: 1.2883 data: 0.1353 max mem: 12069
Epoch: [15] Total time: 0:00:20 (1.2926 s / it)
Averaged stats: lr: 0.000010 class_error: 53.33 loss: 10.4878 (10.5442) loss_ce: 0.3879 (0.3873) loss_bbox: 0.4357 (0.4528) loss_giou: 0.8356 (0.8470) loss_ce_0: 0.4551 (0.4573) loss_bbox_0: 0.5075 (0.5083) loss_giou_0: 0.9330 (0.9425) loss_ce_1: 0.4329 (0.4299) loss_bbox_1: 0.4857 (0.4749) loss_giou_1: 0.8792 (0.8904) loss_ce_2: 0.4087 (0.4092) loss_bbox_2: 0.4857 (0.4660) loss_giou_2: 0.8754 (0.8792) loss_ce_3: 0.3961 (0.3948) loss_bbox_3: 0.4587 (0.4520) loss_giou_3: 0.8532 (0.8603) loss_ce_4: 0.3902 (0.3908) loss_bbox_4: 0.4491 (0.4490) loss_giou_4: 0.8536 (0.8524) loss_ce_unscaled: 0.3879 (0.3873) class_error_unscaled: 50.0000 (53.0469) loss_bbox_unscaled: 0.0871 (0.0906) loss_giou_unscaled: 0.4178 (0.4235) cardinality_error_unscaled: 0.9375 (0.9570) loss_ce_0_unscaled: 0.4551 (0.4573) loss_bbox_0_unscaled: 0.1015 (0.1017) loss_giou_0_unscaled: 0.4665 (0.4713) cardinality_error_0_unscaled: 0.9375 (1.0039) loss_ce_1_unscaled: 0.4329 (0.4299) loss_bbox_1_unscaled: 0.0971 (0.0950) loss_giou_1_unscaled: 0.4396 (0.4452) cardinality_error_1_unscaled: 0.9375 (0.9766) loss_ce_2_unscaled: 0.4087 (0.4092) loss_bbox_2_unscaled: 0.0971 (0.0932) loss_giou_2_unscaled: 0.4377 (0.4396) cardinality_error_2_unscaled: 0.9375 (0.9531) loss_ce_3_unscaled: 0.3961 (0.3948) loss_bbox_3_unscaled: 0.0917 (0.0904) loss_giou_3_unscaled: 0.4266 (0.4301) cardinality_error_3_unscaled: 0.9375 (0.9570) loss_ce_4_unscaled: 0.3902 (0.3908) loss_bbox_4_unscaled: 0.0898 (0.0898) loss_giou_4_unscaled: 0.4268 (0.4262) cardinality_error_4_unscaled: 0.9375 (0.9570)
Test: [0/4] eta: 0:00:08 class_error: 68.75 loss: 9.2431 (9.2431) loss_ce: 0.3896 (0.3896) loss_bbox: 0.3594 (0.3594) loss_giou: 0.7663 (0.7663) loss_ce_0: 0.4426 (0.4426) loss_bbox_0: 0.3421 (0.3421) loss_giou_0: 0.8390 (0.8390) loss_ce_1: 0.4155 (0.4155) loss_bbox_1: 0.3481 (0.3481) loss_giou_1: 0.8237 (0.8237) loss_ce_2: 0.4052 (0.4052) loss_bbox_2: 0.3488 (0.3488) loss_giou_2: 0.7936 (0.7936) loss_ce_3: 0.3908 (0.3908) loss_bbox_3: 0.3297 (0.3297) loss_giou_3: 0.7457 (0.7457) loss_ce_4: 0.3916 (0.3916) loss_bbox_4: 0.3516 (0.3516) loss_giou_4: 0.7597 (0.7597) loss_ce_unscaled: 0.3896 (0.3896) class_error_unscaled: 68.7500 (68.7500) loss_bbox_unscaled: 0.0719 (0.0719) loss_giou_unscaled: 0.3831 (0.3831) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4426 (0.4426) loss_bbox_0_unscaled: 0.0684 (0.0684) loss_giou_0_unscaled: 0.4195 (0.4195) cardinality_error_0_unscaled: 1.1250 (1.1250) loss_ce_1_unscaled: 0.4155 (0.4155) loss_bbox_1_unscaled: 0.0696 (0.0696) loss_giou_1_unscaled: 0.4119 (0.4119) cardinality_error_1_unscaled: 1.0625 (1.0625) loss_ce_2_unscaled: 0.4052 (0.4052) loss_bbox_2_unscaled: 0.0698 (0.0698) loss_giou_2_unscaled: 0.3968 (0.3968) cardinality_error_2_unscaled: 1.0000 (1.0000) loss_ce_3_unscaled: 0.3908 (0.3908) loss_bbox_3_unscaled: 0.0659 (0.0659) loss_giou_3_unscaled: 0.3728 (0.3728) cardinality_error_3_unscaled: 1.0000 (1.0000) loss_ce_4_unscaled: 0.3916 (0.3916) loss_bbox_4_unscaled: 0.0703 (0.0703) loss_giou_4_unscaled: 0.3799 (0.3799) cardinality_error_4_unscaled: 1.0000 (1.0000) time: 2.0138 data: 1.1721 max mem: 12069
Test: [3/4] eta: 0:00:01 class_error: 56.25 loss: 9.8006 (9.7402) loss_ce: 0.3896 (0.3925) loss_bbox: 0.4086 (0.4002) loss_giou: 0.7663 (0.7857) loss_ce_0: 0.4599 (0.4584) loss_bbox_0: 0.4213 (0.4111) loss_giou_0: 0.8390 (0.8655) loss_ce_1: 0.4292 (0.4303) loss_bbox_1: 0.4384 (0.4185) loss_giou_1: 0.8404 (0.8630) loss_ce_2: 0.4052 (0.4094) loss_bbox_2: 0.3949 (0.3954) loss_giou_2: 0.7752 (0.7841) loss_ce_3: 0.3908 (0.3976) loss_bbox_3: 0.3963 (0.3895) loss_giou_3: 0.7457 (0.7678) loss_ce_4: 0.3916 (0.3965) loss_bbox_4: 0.3940 (0.3955) loss_giou_4: 0.7597 (0.7792) loss_ce_unscaled: 0.3896 (0.3925) class_error_unscaled: 43.7500 (53.1250) loss_bbox_unscaled: 0.0817 (0.0800) loss_giou_unscaled: 0.3831 (0.3929) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4599 (0.4584) loss_bbox_0_unscaled: 0.0843 (0.0822) loss_giou_0_unscaled: 0.4195 (0.4328) cardinality_error_0_unscaled: 1.0000 (1.0469) loss_ce_1_unscaled: 0.4292 (0.4303) loss_bbox_1_unscaled: 0.0877 (0.0837) loss_giou_1_unscaled: 0.4202 (0.4315) cardinality_error_1_unscaled: 1.0000 (1.0156) loss_ce_2_unscaled: 0.4052 (0.4094) loss_bbox_2_unscaled: 0.0790 (0.0791) loss_giou_2_unscaled: 0.3876 (0.3920) cardinality_error_2_unscaled: 1.0000 (1.0000) loss_ce_3_unscaled: 0.3908 (0.3976) loss_bbox_3_unscaled: 0.0793 (0.0779) loss_giou_3_unscaled: 0.3728 (0.3839) cardinality_error_3_unscaled: 1.0000 (1.0000) loss_ce_4_unscaled: 0.3916 (0.3965) loss_bbox_4_unscaled: 0.0788 (0.0791) loss_giou_4_unscaled: 0.3799 (0.3896) cardinality_error_4_unscaled: 1.0000 (1.0000) time: 1.0146 data: 0.3516 max mem: 12069
Test: Total time: 0:00:04 (1.0321 s / it)
Averaged stats: class_error: 56.25 loss: 9.8006 (9.7402) loss_ce: 0.3896 (0.3925) loss_bbox: 0.4086 (0.4002) loss_giou: 0.7663 (0.7857) loss_ce_0: 0.4599 (0.4584) loss_bbox_0: 0.4213 (0.4111) loss_giou_0: 0.8390 (0.8655) loss_ce_1: 0.4292 (0.4303) loss_bbox_1: 0.4384 (0.4185) loss_giou_1: 0.8404 (0.8630) loss_ce_2: 0.4052 (0.4094) loss_bbox_2: 0.3949 (0.3954) loss_giou_2: 0.7752 (0.7841) loss_ce_3: 0.3908 (0.3976) loss_bbox_3: 0.3963 (0.3895) loss_giou_3: 0.7457 (0.7678) loss_ce_4: 0.3916 (0.3965) loss_bbox_4: 0.3940 (0.3955) loss_giou_4: 0.7597 (0.7792) loss_ce_unscaled: 0.3896 (0.3925) class_error_unscaled: 43.7500 (53.1250) loss_bbox_unscaled: 0.0817 (0.0800) loss_giou_unscaled: 0.3831 (0.3929) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4599 (0.4584) loss_bbox_0_unscaled: 0.0843 (0.0822) loss_giou_0_unscaled: 0.4195 (0.4328) cardinality_error_0_unscaled: 1.0000 (1.0469) loss_ce_1_unscaled: 0.4292 (0.4303) loss_bbox_1_unscaled: 0.0877 (0.0837) loss_giou_1_unscaled: 0.4202 (0.4315) cardinality_error_1_unscaled: 1.0000 (1.0156) loss_ce_2_unscaled: 0.4052 (0.4094) loss_bbox_2_unscaled: 0.0790 (0.0791) loss_giou_2_unscaled: 0.3876 (0.3920) cardinality_error_2_unscaled: 1.0000 (1.0000) loss_ce_3_unscaled: 0.3908 (0.3976) loss_bbox_3_unscaled: 0.0793 (0.0779) loss_giou_3_unscaled: 0.3728 (0.3839) cardinality_error_3_unscaled: 1.0000 (1.0000) loss_ce_4_unscaled: 0.3916 (0.3965) loss_bbox_4_unscaled: 0.0788 (0.0791) loss_giou_4_unscaled: 0.3799 (0.3896) cardinality_error_4_unscaled: 1.0000 (1.0000)
Accumulating evaluation results...
DONE (t=0.06s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.006
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.008
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.001
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.002
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.054
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.156
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.255
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.140
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.144
Epoch: [16] [ 0/16] eta: 0:00:41 lr: 0.000010 class_error: 62.50 loss: 10.5666 (10.5666) loss_ce: 0.4014 (0.4014) loss_bbox: 0.4404 (0.4404) loss_giou: 0.8968 (0.8968) loss_ce_0: 0.4600 (0.4600) loss_bbox_0: 0.5209 (0.5209) loss_giou_0: 0.9142 (0.9142) loss_ce_1: 0.4189 (0.4189) loss_bbox_1: 0.4275 (0.4275) loss_giou_1: 0.9256 (0.9256) loss_ce_2: 0.4130 (0.4130) loss_bbox_2: 0.4265 (0.4265) loss_giou_2: 0.9088 (0.9088) loss_ce_3: 0.3939 (0.3939) loss_bbox_3: 0.4329 (0.4329) loss_giou_3: 0.8616 (0.8616) loss_ce_4: 0.3990 (0.3990) loss_bbox_4: 0.4501 (0.4501) loss_giou_4: 0.8750 (0.8750) loss_ce_unscaled: 0.4014 (0.4014) class_error_unscaled: 62.5000 (62.5000) loss_bbox_unscaled: 0.0881 (0.0881) loss_giou_unscaled: 0.4484 (0.4484) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4600 (0.4600) loss_bbox_0_unscaled: 0.1042 (0.1042) loss_giou_0_unscaled: 0.4571 (0.4571) cardinality_error_0_unscaled: 1.0000 (1.0000) loss_ce_1_unscaled: 0.4189 (0.4189) loss_bbox_1_unscaled: 0.0855 (0.0855) loss_giou_1_unscaled: 0.4628 (0.4628) cardinality_error_1_unscaled: 1.0000 (1.0000) loss_ce_2_unscaled: 0.4130 (0.4130) loss_bbox_2_unscaled: 0.0853 (0.0853) loss_giou_2_unscaled: 0.4544 (0.4544) cardinality_error_2_unscaled: 1.0000 (1.0000) loss_ce_3_unscaled: 0.3939 (0.3939) loss_bbox_3_unscaled: 0.0866 (0.0866) loss_giou_3_unscaled: 0.4308 (0.4308) cardinality_error_3_unscaled: 1.0000 (1.0000) loss_ce_4_unscaled: 0.3990 (0.3990) loss_bbox_4_unscaled: 0.0900 (0.0900) loss_giou_4_unscaled: 0.4375 (0.4375) cardinality_error_4_unscaled: 1.0000 (1.0000) time: 2.5649 data: 1.2703 max mem: 12069
Epoch: [16] [10/16] eta: 0:00:08 lr: 0.000010 class_error: 50.00 loss: 10.5666 (10.6171) loss_ce: 0.3903 (0.3872) loss_bbox: 0.4487 (0.4526) loss_giou: 0.8785 (0.8742) loss_ce_0: 0.4578 (0.4575) loss_bbox_0: 0.4732 (0.4867) loss_giou_0: 0.9274 (0.9266) loss_ce_1: 0.4274 (0.4270) loss_bbox_1: 0.4574 (0.4672) loss_giou_1: 0.9256 (0.9166) loss_ce_2: 0.4090 (0.4056) loss_bbox_2: 0.4872 (0.4741) loss_giou_2: 0.8796 (0.8930) loss_ce_3: 0.3938 (0.3936) loss_bbox_3: 0.4754 (0.4619) loss_giou_3: 0.8616 (0.8805) loss_ce_4: 0.3914 (0.3900) loss_bbox_4: 0.4501 (0.4499) loss_giou_4: 0.8750 (0.8728) loss_ce_unscaled: 0.3903 (0.3872) class_error_unscaled: 62.5000 (57.8355) loss_bbox_unscaled: 0.0897 (0.0905) loss_giou_unscaled: 0.4392 (0.4371) cardinality_error_unscaled: 1.0000 (0.9716) loss_ce_0_unscaled: 0.4578 (0.4575) loss_bbox_0_unscaled: 0.0946 (0.0973) loss_giou_0_unscaled: 0.4637 (0.4633) cardinality_error_0_unscaled: 1.0000 (1.0057) loss_ce_1_unscaled: 0.4274 (0.4270) loss_bbox_1_unscaled: 0.0915 (0.0934) loss_giou_1_unscaled: 0.4628 (0.4583) cardinality_error_1_unscaled: 0.9375 (0.9659) loss_ce_2_unscaled: 0.4090 (0.4056) loss_bbox_2_unscaled: 0.0974 (0.0948) loss_giou_2_unscaled: 0.4398 (0.4465) cardinality_error_2_unscaled: 1.0000 (0.9659) loss_ce_3_unscaled: 0.3938 (0.3936) loss_bbox_3_unscaled: 0.0951 (0.0924) loss_giou_3_unscaled: 0.4308 (0.4402) cardinality_error_3_unscaled: 1.0000 (0.9716) loss_ce_4_unscaled: 0.3914 (0.3900) loss_bbox_4_unscaled: 0.0900 (0.0900) loss_giou_4_unscaled: 0.4375 (0.4364) cardinality_error_4_unscaled: 1.0000 (0.9716) time: 1.3466 data: 0.1693 max mem: 12069
Epoch: [16] [15/16] eta: 0:00:01 lr: 0.000010 class_error: 37.50 loss: 10.6062 (10.6031) loss_ce: 0.3903 (0.3860) loss_bbox: 0.4479 (0.4542) loss_giou: 0.8785 (0.8733) loss_ce_0: 0.4578 (0.4568) loss_bbox_0: 0.4732 (0.4874) loss_giou_0: 0.9280 (0.9306) loss_ce_1: 0.4274 (0.4257) loss_bbox_1: 0.4538 (0.4669) loss_giou_1: 0.9067 (0.9097) loss_ce_2: 0.4064 (0.4041) loss_bbox_2: 0.4787 (0.4646) loss_giou_2: 0.8796 (0.8900) loss_ce_3: 0.3938 (0.3920) loss_bbox_3: 0.4628 (0.4564) loss_giou_3: 0.8616 (0.8886) loss_ce_4: 0.3914 (0.3890) loss_bbox_4: 0.4500 (0.4496) loss_giou_4: 0.8750 (0.8783) loss_ce_unscaled: 0.3903 (0.3860) class_error_unscaled: 53.3333 (54.0848) loss_bbox_unscaled: 0.0896 (0.0908) loss_giou_unscaled: 0.4392 (0.4367) cardinality_error_unscaled: 1.0000 (0.9727) loss_ce_0_unscaled: 0.4578 (0.4568) loss_bbox_0_unscaled: 0.0946 (0.0975) loss_giou_0_unscaled: 0.4640 (0.4653) cardinality_error_0_unscaled: 1.0000 (0.9961) loss_ce_1_unscaled: 0.4274 (0.4257) loss_bbox_1_unscaled: 0.0908 (0.0934) loss_giou_1_unscaled: 0.4534 (0.4549) cardinality_error_1_unscaled: 0.9375 (0.9609) loss_ce_2_unscaled: 0.4064 (0.4041) loss_bbox_2_unscaled: 0.0957 (0.0929) loss_giou_2_unscaled: 0.4398 (0.4450) cardinality_error_2_unscaled: 1.0000 (0.9648) loss_ce_3_unscaled: 0.3938 (0.3920) loss_bbox_3_unscaled: 0.0926 (0.0913) loss_giou_3_unscaled: 0.4308 (0.4443) cardinality_error_3_unscaled: 1.0000 (0.9727) loss_ce_4_unscaled: 0.3914 (0.3890) loss_bbox_4_unscaled: 0.0900 (0.0899) loss_giou_4_unscaled: 0.4375 (0.4392) cardinality_error_4_unscaled: 1.0000 (0.9727) time: 1.3031 data: 0.1346 max mem: 12069
Epoch: [16] Total time: 0:00:20 (1.3073 s / it)
Averaged stats: lr: 0.000010 class_error: 37.50 loss: 10.6062 (10.6031) loss_ce: 0.3903 (0.3860) loss_bbox: 0.4479 (0.4542) loss_giou: 0.8785 (0.8733) loss_ce_0: 0.4578 (0.4568) loss_bbox_0: 0.4732 (0.4874) loss_giou_0: 0.9280 (0.9306) loss_ce_1: 0.4274 (0.4257) loss_bbox_1: 0.4538 (0.4669) loss_giou_1: 0.9067 (0.9097) loss_ce_2: 0.4064 (0.4041) loss_bbox_2: 0.4787 (0.4646) loss_giou_2: 0.8796 (0.8900) loss_ce_3: 0.3938 (0.3920) loss_bbox_3: 0.4628 (0.4564) loss_giou_3: 0.8616 (0.8886) loss_ce_4: 0.3914 (0.3890) loss_bbox_4: 0.4500 (0.4496) loss_giou_4: 0.8750 (0.8783) loss_ce_unscaled: 0.3903 (0.3860) class_error_unscaled: 53.3333 (54.0848) loss_bbox_unscaled: 0.0896 (0.0908) loss_giou_unscaled: 0.4392 (0.4367) cardinality_error_unscaled: 1.0000 (0.9727) loss_ce_0_unscaled: 0.4578 (0.4568) loss_bbox_0_unscaled: 0.0946 (0.0975) loss_giou_0_unscaled: 0.4640 (0.4653) cardinality_error_0_unscaled: 1.0000 (0.9961) loss_ce_1_unscaled: 0.4274 (0.4257) loss_bbox_1_unscaled: 0.0908 (0.0934) loss_giou_1_unscaled: 0.4534 (0.4549) cardinality_error_1_unscaled: 0.9375 (0.9609) loss_ce_2_unscaled: 0.4064 (0.4041) loss_bbox_2_unscaled: 0.0957 (0.0929) loss_giou_2_unscaled: 0.4398 (0.4450) cardinality_error_2_unscaled: 1.0000 (0.9648) loss_ce_3_unscaled: 0.3938 (0.3920) loss_bbox_3_unscaled: 0.0926 (0.0913) loss_giou_3_unscaled: 0.4308 (0.4443) cardinality_error_3_unscaled: 1.0000 (0.9727) loss_ce_4_unscaled: 0.3914 (0.3890) loss_bbox_4_unscaled: 0.0900 (0.0899) loss_giou_4_unscaled: 0.4375 (0.4392) cardinality_error_4_unscaled: 1.0000 (0.9727)
Test: [0/4] eta: 0:00:08 class_error: 68.75 loss: 9.0951 (9.0951) loss_ce: 0.3781 (0.3781) loss_bbox: 0.3448 (0.3448) loss_giou: 0.7964 (0.7964) loss_ce_0: 0.4361 (0.4361) loss_bbox_0: 0.3471 (0.3471) loss_giou_0: 0.7955 (0.7955) loss_ce_1: 0.4130 (0.4130) loss_bbox_1: 0.3500 (0.3500) loss_giou_1: 0.7929 (0.7929) loss_ce_2: 0.3947 (0.3947) loss_bbox_2: 0.3477 (0.3477) loss_giou_2: 0.7768 (0.7768) loss_ce_3: 0.3862 (0.3862) loss_bbox_3: 0.3300 (0.3300) loss_giou_3: 0.7247 (0.7247) loss_ce_4: 0.3840 (0.3840) loss_bbox_4: 0.3414 (0.3414) loss_giou_4: 0.7556 (0.7556) loss_ce_unscaled: 0.3781 (0.3781) class_error_unscaled: 68.7500 (68.7500) loss_bbox_unscaled: 0.0690 (0.0690) loss_giou_unscaled: 0.3982 (0.3982) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4361 (0.4361) loss_bbox_0_unscaled: 0.0694 (0.0694) loss_giou_0_unscaled: 0.3978 (0.3978) cardinality_error_0_unscaled: 1.1875 (1.1875) loss_ce_1_unscaled: 0.4130 (0.4130) loss_bbox_1_unscaled: 0.0700 (0.0700) loss_giou_1_unscaled: 0.3965 (0.3965) cardinality_error_1_unscaled: 1.0000 (1.0000) loss_ce_2_unscaled: 0.3947 (0.3947) loss_bbox_2_unscaled: 0.0695 (0.0695) loss_giou_2_unscaled: 0.3884 (0.3884) cardinality_error_2_unscaled: 1.0000 (1.0000) loss_ce_3_unscaled: 0.3862 (0.3862) loss_bbox_3_unscaled: 0.0660 (0.0660) loss_giou_3_unscaled: 0.3624 (0.3624) cardinality_error_3_unscaled: 1.0000 (1.0000) loss_ce_4_unscaled: 0.3840 (0.3840) loss_bbox_4_unscaled: 0.0683 (0.0683) loss_giou_4_unscaled: 0.3778 (0.3778) cardinality_error_4_unscaled: 1.0000 (1.0000) time: 2.0900 data: 1.1305 max mem: 12069
Test: [3/4] eta: 0:00:01 class_error: 68.75 loss: 9.3605 (9.4504) loss_ce: 0.3819 (0.3891) loss_bbox: 0.3757 (0.3749) loss_giou: 0.7703 (0.7668) loss_ce_0: 0.4558 (0.4513) loss_bbox_0: 0.3841 (0.3994) loss_giou_0: 0.7955 (0.8284) loss_ce_1: 0.4274 (0.4274) loss_bbox_1: 0.4252 (0.4167) loss_giou_1: 0.8136 (0.8210) loss_ce_2: 0.3996 (0.4051) loss_bbox_2: 0.3898 (0.3900) loss_giou_2: 0.7471 (0.7629) loss_ce_3: 0.3862 (0.3946) loss_bbox_3: 0.3510 (0.3698) loss_giou_3: 0.7247 (0.7303) loss_ce_4: 0.3840 (0.3907) loss_bbox_4: 0.3750 (0.3730) loss_giou_4: 0.7556 (0.7589) loss_ce_unscaled: 0.3819 (0.3891) class_error_unscaled: 56.2500 (57.8125) loss_bbox_unscaled: 0.0751 (0.0750) loss_giou_unscaled: 0.3852 (0.3834) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4558 (0.4513) loss_bbox_0_unscaled: 0.0768 (0.0799) loss_giou_0_unscaled: 0.3978 (0.4142) cardinality_error_0_unscaled: 1.0000 (1.0625) loss_ce_1_unscaled: 0.4274 (0.4274) loss_bbox_1_unscaled: 0.0850 (0.0833) loss_giou_1_unscaled: 0.4068 (0.4105) cardinality_error_1_unscaled: 1.0000 (1.0000) loss_ce_2_unscaled: 0.3996 (0.4051) loss_bbox_2_unscaled: 0.0780 (0.0780) loss_giou_2_unscaled: 0.3736 (0.3814) cardinality_error_2_unscaled: 1.0000 (1.0000) loss_ce_3_unscaled: 0.3862 (0.3946) loss_bbox_3_unscaled: 0.0702 (0.0740) loss_giou_3_unscaled: 0.3624 (0.3652) cardinality_error_3_unscaled: 1.0000 (1.0000) loss_ce_4_unscaled: 0.3840 (0.3907) loss_bbox_4_unscaled: 0.0750 (0.0746) loss_giou_4_unscaled: 0.3778 (0.3795) cardinality_error_4_unscaled: 1.0000 (1.0000) time: 1.0546 data: 0.3670 max mem: 12069
Test: Total time: 0:00:04 (1.0709 s / it)
Averaged stats: class_error: 68.75 loss: 9.3605 (9.4504) loss_ce: 0.3819 (0.3891) loss_bbox: 0.3757 (0.3749) loss_giou: 0.7703 (0.7668) loss_ce_0: 0.4558 (0.4513) loss_bbox_0: 0.3841 (0.3994) loss_giou_0: 0.7955 (0.8284) loss_ce_1: 0.4274 (0.4274) loss_bbox_1: 0.4252 (0.4167) loss_giou_1: 0.8136 (0.8210) loss_ce_2: 0.3996 (0.4051) loss_bbox_2: 0.3898 (0.3900) loss_giou_2: 0.7471 (0.7629) loss_ce_3: 0.3862 (0.3946) loss_bbox_3: 0.3510 (0.3698) loss_giou_3: 0.7247 (0.7303) loss_ce_4: 0.3840 (0.3907) loss_bbox_4: 0.3750 (0.3730) loss_giou_4: 0.7556 (0.7589) loss_ce_unscaled: 0.3819 (0.3891) class_error_unscaled: 56.2500 (57.8125) loss_bbox_unscaled: 0.0751 (0.0750) loss_giou_unscaled: 0.3852 (0.3834) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4558 (0.4513) loss_bbox_0_unscaled: 0.0768 (0.0799) loss_giou_0_unscaled: 0.3978 (0.4142) cardinality_error_0_unscaled: 1.0000 (1.0625) loss_ce_1_unscaled: 0.4274 (0.4274) loss_bbox_1_unscaled: 0.0850 (0.0833) loss_giou_1_unscaled: 0.4068 (0.4105) cardinality_error_1_unscaled: 1.0000 (1.0000) loss_ce_2_unscaled: 0.3996 (0.4051) loss_bbox_2_unscaled: 0.0780 (0.0780) loss_giou_2_unscaled: 0.3736 (0.3814) cardinality_error_2_unscaled: 1.0000 (1.0000) loss_ce_3_unscaled: 0.3862 (0.3946) loss_bbox_3_unscaled: 0.0702 (0.0740) loss_giou_3_unscaled: 0.3624 (0.3652) cardinality_error_3_unscaled: 1.0000 (1.0000) loss_ce_4_unscaled: 0.3840 (0.3907) loss_bbox_4_unscaled: 0.0750 (0.0746) loss_giou_4_unscaled: 0.3778 (0.3795) cardinality_error_4_unscaled: 1.0000 (1.0000)
Accumulating evaluation results...
DONE (t=0.06s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.002
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.008
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.008
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.002
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.002
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.009
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.043
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.152
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.212
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.130
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.151
Epoch: [17] [ 0/16] eta: 0:00:40 lr: 0.000010 class_error: 62.50 loss: 10.0417 (10.0417) loss_ce: 0.3982 (0.3982) loss_bbox: 0.4406 (0.4406) loss_giou: 0.7554 (0.7554) loss_ce_0: 0.4609 (0.4609) loss_bbox_0: 0.5372 (0.5372) loss_giou_0: 0.8465 (0.8465) loss_ce_1: 0.4403 (0.4403) loss_bbox_1: 0.4955 (0.4955) loss_giou_1: 0.8392 (0.8392) loss_ce_2: 0.4136 (0.4136) loss_bbox_2: 0.4532 (0.4532) loss_giou_2: 0.7399 (0.7399) loss_ce_3: 0.3972 (0.3972) loss_bbox_3: 0.4245 (0.4245) loss_giou_3: 0.7942 (0.7942) loss_ce_4: 0.3976 (0.3976) loss_bbox_4: 0.4247 (0.4247) loss_giou_4: 0.7830 (0.7830) loss_ce_unscaled: 0.3982 (0.3982) class_error_unscaled: 62.5000 (62.5000) loss_bbox_unscaled: 0.0881 (0.0881) loss_giou_unscaled: 0.3777 (0.3777) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4609 (0.4609) loss_bbox_0_unscaled: 0.1074 (0.1074) loss_giou_0_unscaled: 0.4233 (0.4233) cardinality_error_0_unscaled: 1.0000 (1.0000) loss_ce_1_unscaled: 0.4403 (0.4403) loss_bbox_1_unscaled: 0.0991 (0.0991) loss_giou_1_unscaled: 0.4196 (0.4196) cardinality_error_1_unscaled: 1.0000 (1.0000) loss_ce_2_unscaled: 0.4136 (0.4136) loss_bbox_2_unscaled: 0.0906 (0.0906) loss_giou_2_unscaled: 0.3699 (0.3699) cardinality_error_2_unscaled: 1.0000 (1.0000) loss_ce_3_unscaled: 0.3972 (0.3972) loss_bbox_3_unscaled: 0.0849 (0.0849) loss_giou_3_unscaled: 0.3971 (0.3971) cardinality_error_3_unscaled: 1.0000 (1.0000) loss_ce_4_unscaled: 0.3976 (0.3976) loss_bbox_4_unscaled: 0.0849 (0.0849) loss_giou_4_unscaled: 0.3915 (0.3915) cardinality_error_4_unscaled: 1.0000 (1.0000) time: 2.5134 data: 1.2170 max mem: 12069
Epoch: [17] [10/16] eta: 0:00:08 lr: 0.000010 class_error: 50.00 loss: 9.8559 (10.1362) loss_ce: 0.3900 (0.3882) loss_bbox: 0.4406 (0.4143) loss_giou: 0.7981 (0.8192) loss_ce_0: 0.4530 (0.4514) loss_bbox_0: 0.4846 (0.4769) loss_giou_0: 0.8501 (0.8968) loss_ce_1: 0.4266 (0.4258) loss_bbox_1: 0.4548 (0.4447) loss_giou_1: 0.8392 (0.8712) loss_ce_2: 0.4066 (0.4044) loss_bbox_2: 0.4438 (0.4364) loss_giou_2: 0.8113 (0.8418) loss_ce_3: 0.3961 (0.3945) loss_bbox_3: 0.4250 (0.4182) loss_giou_3: 0.8017 (0.8236) loss_ce_4: 0.3939 (0.3902) loss_bbox_4: 0.4247 (0.4164) loss_giou_4: 0.7945 (0.8221) loss_ce_unscaled: 0.3900 (0.3882) class_error_unscaled: 50.0000 (51.5909) loss_bbox_unscaled: 0.0881 (0.0829) loss_giou_unscaled: 0.3990 (0.4096) cardinality_error_unscaled: 1.0000 (0.9886) loss_ce_0_unscaled: 0.4530 (0.4514) loss_bbox_0_unscaled: 0.0969 (0.0954) loss_giou_0_unscaled: 0.4250 (0.4484) cardinality_error_0_unscaled: 1.0000 (0.9716) loss_ce_1_unscaled: 0.4266 (0.4258) loss_bbox_1_unscaled: 0.0910 (0.0889) loss_giou_1_unscaled: 0.4196 (0.4356) cardinality_error_1_unscaled: 1.0000 (0.9716) loss_ce_2_unscaled: 0.4066 (0.4044) loss_bbox_2_unscaled: 0.0888 (0.0873) loss_giou_2_unscaled: 0.4056 (0.4209) cardinality_error_2_unscaled: 1.0000 (0.9830) loss_ce_3_unscaled: 0.3961 (0.3945) loss_bbox_3_unscaled: 0.0850 (0.0836) loss_giou_3_unscaled: 0.4008 (0.4118) cardinality_error_3_unscaled: 1.0000 (0.9886) loss_ce_4_unscaled: 0.3939 (0.3902) loss_bbox_4_unscaled: 0.0849 (0.0833) loss_giou_4_unscaled: 0.3973 (0.4111) cardinality_error_4_unscaled: 1.0000 (0.9886) time: 1.3404 data: 0.1682 max mem: 12069
Epoch: [17] [15/16] eta: 0:00:01 lr: 0.000010 class_error: 60.00 loss: 9.8326 (10.0253) loss_ce: 0.3818 (0.3812) loss_bbox: 0.4013 (0.4035) loss_giou: 0.7554 (0.8090) loss_ce_0: 0.4456 (0.4470) loss_bbox_0: 0.4683 (0.4652) loss_giou_0: 0.8501 (0.8848) loss_ce_1: 0.4206 (0.4198) loss_bbox_1: 0.4175 (0.4402) loss_giou_1: 0.8392 (0.8675) loss_ce_2: 0.3995 (0.3980) loss_bbox_2: 0.4435 (0.4290) loss_giou_2: 0.8323 (0.8462) loss_ce_3: 0.3935 (0.3878) loss_bbox_3: 0.4245 (0.4151) loss_giou_3: 0.8017 (0.8249) loss_ce_4: 0.3869 (0.3835) loss_bbox_4: 0.4092 (0.4070) loss_giou_4: 0.7830 (0.8155) loss_ce_unscaled: 0.3818 (0.3812) class_error_unscaled: 50.0000 (51.4849) loss_bbox_unscaled: 0.0803 (0.0807) loss_giou_unscaled: 0.3777 (0.4045) cardinality_error_unscaled: 1.0000 (0.9648) loss_ce_0_unscaled: 0.4456 (0.4470) loss_bbox_0_unscaled: 0.0937 (0.0930) loss_giou_0_unscaled: 0.4250 (0.4424) cardinality_error_0_unscaled: 0.9375 (0.9727) loss_ce_1_unscaled: 0.4206 (0.4198) loss_bbox_1_unscaled: 0.0835 (0.0880) loss_giou_1_unscaled: 0.4196 (0.4338) cardinality_error_1_unscaled: 0.9375 (0.9648) loss_ce_2_unscaled: 0.3995 (0.3980) loss_bbox_2_unscaled: 0.0887 (0.0858) loss_giou_2_unscaled: 0.4162 (0.4231) cardinality_error_2_unscaled: 1.0000 (0.9609) loss_ce_3_unscaled: 0.3935 (0.3878) loss_bbox_3_unscaled: 0.0849 (0.0830) loss_giou_3_unscaled: 0.4008 (0.4124) cardinality_error_3_unscaled: 1.0000 (0.9648) loss_ce_4_unscaled: 0.3869 (0.3835) loss_bbox_4_unscaled: 0.0818 (0.0814) loss_giou_4_unscaled: 0.3915 (0.4078) cardinality_error_4_unscaled: 1.0000 (0.9648) time: 1.2881 data: 0.1332 max mem: 12069
Epoch: [17] Total time: 0:00:20 (1.2927 s / it)
Averaged stats: lr: 0.000010 class_error: 60.00 loss: 9.8326 (10.0253) loss_ce: 0.3818 (0.3812) loss_bbox: 0.4013 (0.4035) loss_giou: 0.7554 (0.8090) loss_ce_0: 0.4456 (0.4470) loss_bbox_0: 0.4683 (0.4652) loss_giou_0: 0.8501 (0.8848) loss_ce_1: 0.4206 (0.4198) loss_bbox_1: 0.4175 (0.4402) loss_giou_1: 0.8392 (0.8675) loss_ce_2: 0.3995 (0.3980) loss_bbox_2: 0.4435 (0.4290) loss_giou_2: 0.8323 (0.8462) loss_ce_3: 0.3935 (0.3878) loss_bbox_3: 0.4245 (0.4151) loss_giou_3: 0.8017 (0.8249) loss_ce_4: 0.3869 (0.3835) loss_bbox_4: 0.4092 (0.4070) loss_giou_4: 0.7830 (0.8155) loss_ce_unscaled: 0.3818 (0.3812) class_error_unscaled: 50.0000 (51.4849) loss_bbox_unscaled: 0.0803 (0.0807) loss_giou_unscaled: 0.3777 (0.4045) cardinality_error_unscaled: 1.0000 (0.9648) loss_ce_0_unscaled: 0.4456 (0.4470) loss_bbox_0_unscaled: 0.0937 (0.0930) loss_giou_0_unscaled: 0.4250 (0.4424) cardinality_error_0_unscaled: 0.9375 (0.9727) loss_ce_1_unscaled: 0.4206 (0.4198) loss_bbox_1_unscaled: 0.0835 (0.0880) loss_giou_1_unscaled: 0.4196 (0.4338) cardinality_error_1_unscaled: 0.9375 (0.9648) loss_ce_2_unscaled: 0.3995 (0.3980) loss_bbox_2_unscaled: 0.0887 (0.0858) loss_giou_2_unscaled: 0.4162 (0.4231) cardinality_error_2_unscaled: 1.0000 (0.9609) loss_ce_3_unscaled: 0.3935 (0.3878) loss_bbox_3_unscaled: 0.0849 (0.0830) loss_giou_3_unscaled: 0.4008 (0.4124) cardinality_error_3_unscaled: 1.0000 (0.9648) loss_ce_4_unscaled: 0.3869 (0.3835) loss_bbox_4_unscaled: 0.0818 (0.0814) loss_giou_4_unscaled: 0.3915 (0.4078) cardinality_error_4_unscaled: 1.0000 (0.9648)
Test: [0/4] eta: 0:00:08 class_error: 62.50 loss: 9.1057 (9.1057) loss_ce: 0.3874 (0.3874) loss_bbox: 0.3652 (0.3652) loss_giou: 0.8046 (0.8046) loss_ce_0: 0.4314 (0.4314) loss_bbox_0: 0.3355 (0.3355) loss_giou_0: 0.7719 (0.7719) loss_ce_1: 0.4072 (0.4072) loss_bbox_1: 0.3294 (0.3294) loss_giou_1: 0.7601 (0.7601) loss_ce_2: 0.3944 (0.3944) loss_bbox_2: 0.3443 (0.3443) loss_giou_2: 0.7543 (0.7543) loss_ce_3: 0.3829 (0.3829) loss_bbox_3: 0.3423 (0.3423) loss_giou_3: 0.7598 (0.7598) loss_ce_4: 0.3889 (0.3889) loss_bbox_4: 0.3584 (0.3584) loss_giou_4: 0.7879 (0.7879) loss_ce_unscaled: 0.3874 (0.3874) class_error_unscaled: 62.5000 (62.5000) loss_bbox_unscaled: 0.0730 (0.0730) loss_giou_unscaled: 0.4023 (0.4023) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4314 (0.4314) loss_bbox_0_unscaled: 0.0671 (0.0671) loss_giou_0_unscaled: 0.3859 (0.3859) cardinality_error_0_unscaled: 1.1250 (1.1250) loss_ce_1_unscaled: 0.4072 (0.4072) loss_bbox_1_unscaled: 0.0659 (0.0659) loss_giou_1_unscaled: 0.3801 (0.3801) cardinality_error_1_unscaled: 1.0625 (1.0625) loss_ce_2_unscaled: 0.3944 (0.3944) loss_bbox_2_unscaled: 0.0689 (0.0689) loss_giou_2_unscaled: 0.3771 (0.3771) cardinality_error_2_unscaled: 1.0000 (1.0000) loss_ce_3_unscaled: 0.3829 (0.3829) loss_bbox_3_unscaled: 0.0685 (0.0685) loss_giou_3_unscaled: 0.3799 (0.3799) cardinality_error_3_unscaled: 0.9375 (0.9375) loss_ce_4_unscaled: 0.3889 (0.3889) loss_bbox_4_unscaled: 0.0717 (0.0717) loss_giou_4_unscaled: 0.3940 (0.3940) cardinality_error_4_unscaled: 1.0000 (1.0000) time: 2.1649 data: 1.2452 max mem: 12069
Test: [3/4] eta: 0:00:01 class_error: 56.25 loss: 9.1057 (9.2654) loss_ce: 0.3874 (0.3913) loss_bbox: 0.3464 (0.3628) loss_giou: 0.7148 (0.7435) loss_ce_0: 0.4517 (0.4470) loss_bbox_0: 0.3955 (0.3973) loss_giou_0: 0.8194 (0.8212) loss_ce_1: 0.4223 (0.4212) loss_bbox_1: 0.3621 (0.3874) loss_giou_1: 0.7838 (0.7820) loss_ce_2: 0.3963 (0.4033) loss_bbox_2: 0.3452 (0.3794) loss_giou_2: 0.7472 (0.7392) loss_ce_3: 0.3829 (0.3929) loss_bbox_3: 0.3476 (0.3707) loss_giou_3: 0.6746 (0.7180) loss_ce_4: 0.3889 (0.3939) loss_bbox_4: 0.3584 (0.3704) loss_giou_4: 0.7123 (0.7437) loss_ce_unscaled: 0.3874 (0.3913) class_error_unscaled: 43.7500 (48.4375) loss_bbox_unscaled: 0.0693 (0.0726) loss_giou_unscaled: 0.3574 (0.3717) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4517 (0.4470) loss_bbox_0_unscaled: 0.0791 (0.0795) loss_giou_0_unscaled: 0.4097 (0.4106) cardinality_error_0_unscaled: 1.0000 (1.0469) loss_ce_1_unscaled: 0.4223 (0.4212) loss_bbox_1_unscaled: 0.0724 (0.0775) loss_giou_1_unscaled: 0.3919 (0.3910) cardinality_error_1_unscaled: 1.0000 (1.0156) loss_ce_2_unscaled: 0.3963 (0.4033) loss_bbox_2_unscaled: 0.0690 (0.0759) loss_giou_2_unscaled: 0.3736 (0.3696) cardinality_error_2_unscaled: 1.0000 (1.0000) loss_ce_3_unscaled: 0.3829 (0.3929) loss_bbox_3_unscaled: 0.0695 (0.0741) loss_giou_3_unscaled: 0.3373 (0.3590) cardinality_error_3_unscaled: 1.0000 (0.9844) loss_ce_4_unscaled: 0.3889 (0.3939) loss_bbox_4_unscaled: 0.0717 (0.0741) loss_giou_4_unscaled: 0.3562 (0.3719) cardinality_error_4_unscaled: 1.0000 (1.0000) time: 1.0294 data: 0.3476 max mem: 12069
Test: Total time: 0:00:04 (1.0457 s / it)
Averaged stats: class_error: 56.25 loss: 9.1057 (9.2654) loss_ce: 0.3874 (0.3913) loss_bbox: 0.3464 (0.3628) loss_giou: 0.7148 (0.7435) loss_ce_0: 0.4517 (0.4470) loss_bbox_0: 0.3955 (0.3973) loss_giou_0: 0.8194 (0.8212) loss_ce_1: 0.4223 (0.4212) loss_bbox_1: 0.3621 (0.3874) loss_giou_1: 0.7838 (0.7820) loss_ce_2: 0.3963 (0.4033) loss_bbox_2: 0.3452 (0.3794) loss_giou_2: 0.7472 (0.7392) loss_ce_3: 0.3829 (0.3929) loss_bbox_3: 0.3476 (0.3707) loss_giou_3: 0.6746 (0.7180) loss_ce_4: 0.3889 (0.3939) loss_bbox_4: 0.3584 (0.3704) loss_giou_4: 0.7123 (0.7437) loss_ce_unscaled: 0.3874 (0.3913) class_error_unscaled: 43.7500 (48.4375) loss_bbox_unscaled: 0.0693 (0.0726) loss_giou_unscaled: 0.3574 (0.3717) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4517 (0.4470) loss_bbox_0_unscaled: 0.0791 (0.0795) loss_giou_0_unscaled: 0.4097 (0.4106) cardinality_error_0_unscaled: 1.0000 (1.0469) loss_ce_1_unscaled: 0.4223 (0.4212) loss_bbox_1_unscaled: 0.0724 (0.0775) loss_giou_1_unscaled: 0.3919 (0.3910) cardinality_error_1_unscaled: 1.0000 (1.0156) loss_ce_2_unscaled: 0.3963 (0.4033) loss_bbox_2_unscaled: 0.0690 (0.0759) loss_giou_2_unscaled: 0.3736 (0.3696) cardinality_error_2_unscaled: 1.0000 (1.0000) loss_ce_3_unscaled: 0.3829 (0.3929) loss_bbox_3_unscaled: 0.0695 (0.0741) loss_giou_3_unscaled: 0.3373 (0.3590) cardinality_error_3_unscaled: 1.0000 (0.9844) loss_ce_4_unscaled: 0.3889 (0.3939) loss_bbox_4_unscaled: 0.0717 (0.0741) loss_giou_4_unscaled: 0.3562 (0.3719) cardinality_error_4_unscaled: 1.0000 (1.0000)
Accumulating evaluation results...
DONE (t=0.06s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.002
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.008
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.011
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.002
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.003
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.012
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.078
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.213
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.287
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.183
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.220
Epoch: [18] [ 0/16] eta: 0:00:40 lr: 0.000010 class_error: 56.25 loss: 10.4655 (10.4655) loss_ce: 0.3968 (0.3968) loss_bbox: 0.4578 (0.4578) loss_giou: 0.7664 (0.7664) loss_ce_0: 0.4477 (0.4477) loss_bbox_0: 0.5105 (0.5105) loss_giou_0: 0.9940 (0.9940) loss_ce_1: 0.4261 (0.4261) loss_bbox_1: 0.4889 (0.4889) loss_giou_1: 0.9128 (0.9128) loss_ce_2: 0.4084 (0.4084) loss_bbox_2: 0.4969 (0.4969) loss_giou_2: 0.8810 (0.8810) loss_ce_3: 0.4026 (0.4026) loss_bbox_3: 0.4645 (0.4645) loss_giou_3: 0.7616 (0.7616) loss_ce_4: 0.4005 (0.4005) loss_bbox_4: 0.4722 (0.4722) loss_giou_4: 0.7768 (0.7768) loss_ce_unscaled: 0.3968 (0.3968) class_error_unscaled: 56.2500 (56.2500) loss_bbox_unscaled: 0.0916 (0.0916) loss_giou_unscaled: 0.3832 (0.3832) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4477 (0.4477) loss_bbox_0_unscaled: 0.1021 (0.1021) loss_giou_0_unscaled: 0.4970 (0.4970) cardinality_error_0_unscaled: 1.0000 (1.0000) loss_ce_1_unscaled: 0.4261 (0.4261) loss_bbox_1_unscaled: 0.0978 (0.0978) loss_giou_1_unscaled: 0.4564 (0.4564) cardinality_error_1_unscaled: 1.0000 (1.0000) loss_ce_2_unscaled: 0.4084 (0.4084) loss_bbox_2_unscaled: 0.0994 (0.0994) loss_giou_2_unscaled: 0.4405 (0.4405) cardinality_error_2_unscaled: 1.0000 (1.0000) loss_ce_3_unscaled: 0.4026 (0.4026) loss_bbox_3_unscaled: 0.0929 (0.0929) loss_giou_3_unscaled: 0.3808 (0.3808) cardinality_error_3_unscaled: 1.0000 (1.0000) loss_ce_4_unscaled: 0.4005 (0.4005) loss_bbox_4_unscaled: 0.0944 (0.0944) loss_giou_4_unscaled: 0.3884 (0.3884) cardinality_error_4_unscaled: 1.0000 (1.0000) time: 2.5131 data: 1.1970 max mem: 12069
Epoch: [18] [10/16] eta: 0:00:08 lr: 0.000010 class_error: 40.00 loss: 9.9653 (10.0501) loss_ce: 0.3780 (0.3735) loss_bbox: 0.4114 (0.4262) loss_giou: 0.7775 (0.8125) loss_ce_0: 0.4261 (0.4305) loss_bbox_0: 0.5105 (0.4850) loss_giou_0: 0.9276 (0.8985) loss_ce_1: 0.4124 (0.4078) loss_bbox_1: 0.4551 (0.4517) loss_giou_1: 0.8600 (0.8654) loss_ce_2: 0.3963 (0.3892) loss_bbox_2: 0.4385 (0.4255) loss_giou_2: 0.8474 (0.8418) loss_ce_3: 0.3856 (0.3818) loss_bbox_3: 0.4354 (0.4205) loss_giou_3: 0.7950 (0.8251) loss_ce_4: 0.3824 (0.3762) loss_bbox_4: 0.4262 (0.4215) loss_giou_4: 0.7768 (0.8172) loss_ce_unscaled: 0.3780 (0.3735) class_error_unscaled: 40.0000 (38.8499) loss_bbox_unscaled: 0.0823 (0.0852) loss_giou_unscaled: 0.3888 (0.4062) cardinality_error_unscaled: 0.9375 (0.9432) loss_ce_0_unscaled: 0.4261 (0.4305) loss_bbox_0_unscaled: 0.1021 (0.0970) loss_giou_0_unscaled: 0.4638 (0.4493) cardinality_error_0_unscaled: 0.9375 (0.9545) loss_ce_1_unscaled: 0.4124 (0.4078) loss_bbox_1_unscaled: 0.0910 (0.0903) loss_giou_1_unscaled: 0.4300 (0.4327) cardinality_error_1_unscaled: 0.9375 (0.9261) loss_ce_2_unscaled: 0.3963 (0.3892) loss_bbox_2_unscaled: 0.0877 (0.0851) loss_giou_2_unscaled: 0.4237 (0.4209) cardinality_error_2_unscaled: 0.9375 (0.9375) loss_ce_3_unscaled: 0.3856 (0.3818) loss_bbox_3_unscaled: 0.0871 (0.0841) loss_giou_3_unscaled: 0.3975 (0.4125) cardinality_error_3_unscaled: 0.9375 (0.9432) loss_ce_4_unscaled: 0.3824 (0.3762) loss_bbox_4_unscaled: 0.0852 (0.0843) loss_giou_4_unscaled: 0.3884 (0.4086) cardinality_error_4_unscaled: 0.9375 (0.9432) time: 1.3493 data: 0.1653 max mem: 12069
Epoch: [18] [15/16] eta: 0:00:01 lr: 0.000010 class_error: 43.75 loss: 9.8303 (9.9573) loss_ce: 0.3686 (0.3726) loss_bbox: 0.4033 (0.4119) loss_giou: 0.7776 (0.8085) loss_ce_0: 0.4261 (0.4306) loss_bbox_0: 0.4926 (0.4755) loss_giou_0: 0.9204 (0.8980) loss_ce_1: 0.4070 (0.4078) loss_bbox_1: 0.4175 (0.4321) loss_giou_1: 0.8580 (0.8532) loss_ce_2: 0.3879 (0.3894) loss_bbox_2: 0.4214 (0.4185) loss_giou_2: 0.8124 (0.8367) loss_ce_3: 0.3755 (0.3805) loss_bbox_3: 0.4210 (0.4125) loss_giou_3: 0.7856 (0.8234) loss_ce_4: 0.3700 (0.3748) loss_bbox_4: 0.4095 (0.4130) loss_giou_4: 0.7923 (0.8183) loss_ce_unscaled: 0.3686 (0.3726) class_error_unscaled: 40.0000 (39.4883) loss_bbox_unscaled: 0.0807 (0.0824) loss_giou_unscaled: 0.3888 (0.4043) cardinality_error_unscaled: 0.9375 (0.9453) loss_ce_0_unscaled: 0.4261 (0.4306) loss_bbox_0_unscaled: 0.0985 (0.0951) loss_giou_0_unscaled: 0.4602 (0.4490) cardinality_error_0_unscaled: 0.9375 (0.9609) loss_ce_1_unscaled: 0.4070 (0.4078) loss_bbox_1_unscaled: 0.0835 (0.0864) loss_giou_1_unscaled: 0.4290 (0.4266) cardinality_error_1_unscaled: 0.9375 (0.9336) loss_ce_2_unscaled: 0.3879 (0.3894) loss_bbox_2_unscaled: 0.0843 (0.0837) loss_giou_2_unscaled: 0.4062 (0.4184) cardinality_error_2_unscaled: 0.9375 (0.9375) loss_ce_3_unscaled: 0.3755 (0.3805) loss_bbox_3_unscaled: 0.0842 (0.0825) loss_giou_3_unscaled: 0.3928 (0.4117) cardinality_error_3_unscaled: 0.9375 (0.9453) loss_ce_4_unscaled: 0.3700 (0.3748) loss_bbox_4_unscaled: 0.0819 (0.0826) loss_giou_4_unscaled: 0.3962 (0.4092) cardinality_error_4_unscaled: 0.9375 (0.9453) time: 1.3020 data: 0.1322 max mem: 12069
Epoch: [18] Total time: 0:00:20 (1.3064 s / it)
Averaged stats: lr: 0.000010 class_error: 43.75 loss: 9.8303 (9.9573) loss_ce: 0.3686 (0.3726) loss_bbox: 0.4033 (0.4119) loss_giou: 0.7776 (0.8085) loss_ce_0: 0.4261 (0.4306) loss_bbox_0: 0.4926 (0.4755) loss_giou_0: 0.9204 (0.8980) loss_ce_1: 0.4070 (0.4078) loss_bbox_1: 0.4175 (0.4321) loss_giou_1: 0.8580 (0.8532) loss_ce_2: 0.3879 (0.3894) loss_bbox_2: 0.4214 (0.4185) loss_giou_2: 0.8124 (0.8367) loss_ce_3: 0.3755 (0.3805) loss_bbox_3: 0.4210 (0.4125) loss_giou_3: 0.7856 (0.8234) loss_ce_4: 0.3700 (0.3748) loss_bbox_4: 0.4095 (0.4130) loss_giou_4: 0.7923 (0.8183) loss_ce_unscaled: 0.3686 (0.3726) class_error_unscaled: 40.0000 (39.4883) loss_bbox_unscaled: 0.0807 (0.0824) loss_giou_unscaled: 0.3888 (0.4043) cardinality_error_unscaled: 0.9375 (0.9453) loss_ce_0_unscaled: 0.4261 (0.4306) loss_bbox_0_unscaled: 0.0985 (0.0951) loss_giou_0_unscaled: 0.4602 (0.4490) cardinality_error_0_unscaled: 0.9375 (0.9609) loss_ce_1_unscaled: 0.4070 (0.4078) loss_bbox_1_unscaled: 0.0835 (0.0864) loss_giou_1_unscaled: 0.4290 (0.4266) cardinality_error_1_unscaled: 0.9375 (0.9336) loss_ce_2_unscaled: 0.3879 (0.3894) loss_bbox_2_unscaled: 0.0843 (0.0837) loss_giou_2_unscaled: 0.4062 (0.4184) cardinality_error_2_unscaled: 0.9375 (0.9375) loss_ce_3_unscaled: 0.3755 (0.3805) loss_bbox_3_unscaled: 0.0842 (0.0825) loss_giou_3_unscaled: 0.3928 (0.4117) cardinality_error_3_unscaled: 0.9375 (0.9453) loss_ce_4_unscaled: 0.3700 (0.3748) loss_bbox_4_unscaled: 0.0819 (0.0826) loss_giou_4_unscaled: 0.3962 (0.4092) cardinality_error_4_unscaled: 0.9375 (0.9453)
Test: [0/4] eta: 0:00:08 class_error: 68.75 loss: 8.9768 (8.9768) loss_ce: 0.3860 (0.3860) loss_bbox: 0.3513 (0.3513) loss_giou: 0.7724 (0.7724) loss_ce_0: 0.4226 (0.4226) loss_bbox_0: 0.3465 (0.3465) loss_giou_0: 0.7913 (0.7913) loss_ce_1: 0.4081 (0.4081) loss_bbox_1: 0.3170 (0.3170) loss_giou_1: 0.7470 (0.7470) loss_ce_2: 0.3909 (0.3909) loss_bbox_2: 0.3241 (0.3241) loss_giou_2: 0.7407 (0.7407) loss_ce_3: 0.3871 (0.3871) loss_bbox_3: 0.3372 (0.3372) loss_giou_3: 0.7614 (0.7614) loss_ce_4: 0.3885 (0.3885) loss_bbox_4: 0.3444 (0.3444) loss_giou_4: 0.7601 (0.7601) loss_ce_unscaled: 0.3860 (0.3860) class_error_unscaled: 68.7500 (68.7500) loss_bbox_unscaled: 0.0703 (0.0703) loss_giou_unscaled: 0.3862 (0.3862) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4226 (0.4226) loss_bbox_0_unscaled: 0.0693 (0.0693) loss_giou_0_unscaled: 0.3957 (0.3957) cardinality_error_0_unscaled: 1.0625 (1.0625) loss_ce_1_unscaled: 0.4081 (0.4081) loss_bbox_1_unscaled: 0.0634 (0.0634) loss_giou_1_unscaled: 0.3735 (0.3735) cardinality_error_1_unscaled: 0.9375 (0.9375) loss_ce_2_unscaled: 0.3909 (0.3909) loss_bbox_2_unscaled: 0.0648 (0.0648) loss_giou_2_unscaled: 0.3703 (0.3703) cardinality_error_2_unscaled: 0.9375 (0.9375) loss_ce_3_unscaled: 0.3871 (0.3871) loss_bbox_3_unscaled: 0.0674 (0.0674) loss_giou_3_unscaled: 0.3807 (0.3807) cardinality_error_3_unscaled: 1.0000 (1.0000) loss_ce_4_unscaled: 0.3885 (0.3885) loss_bbox_4_unscaled: 0.0689 (0.0689) loss_giou_4_unscaled: 0.3800 (0.3800) cardinality_error_4_unscaled: 1.0000 (1.0000) time: 2.1283 data: 1.1388 max mem: 12069
Test: [3/4] eta: 0:00:01 class_error: 56.25 loss: 8.8052 (8.8493) loss_ce: 0.3854 (0.3883) loss_bbox: 0.3299 (0.3343) loss_giou: 0.6740 (0.6903) loss_ce_0: 0.4427 (0.4385) loss_bbox_0: 0.3857 (0.3932) loss_giou_0: 0.8061 (0.8095) loss_ce_1: 0.4187 (0.4184) loss_bbox_1: 0.3582 (0.3817) loss_giou_1: 0.7333 (0.7484) loss_ce_2: 0.3920 (0.3992) loss_bbox_2: 0.3422 (0.3672) loss_giou_2: 0.6830 (0.6977) loss_ce_3: 0.3871 (0.3925) loss_bbox_3: 0.3160 (0.3244) loss_giou_3: 0.6261 (0.6665) loss_ce_4: 0.3885 (0.3906) loss_bbox_4: 0.3188 (0.3257) loss_giou_4: 0.6651 (0.6831) loss_ce_unscaled: 0.3854 (0.3883) class_error_unscaled: 37.5000 (46.8750) loss_bbox_unscaled: 0.0660 (0.0669) loss_giou_unscaled: 0.3370 (0.3451) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4427 (0.4385) loss_bbox_0_unscaled: 0.0771 (0.0786) loss_giou_0_unscaled: 0.4030 (0.4047) cardinality_error_0_unscaled: 1.0000 (1.0469) loss_ce_1_unscaled: 0.4187 (0.4184) loss_bbox_1_unscaled: 0.0716 (0.0763) loss_giou_1_unscaled: 0.3667 (0.3742) cardinality_error_1_unscaled: 1.0000 (0.9844) loss_ce_2_unscaled: 0.3920 (0.3992) loss_bbox_2_unscaled: 0.0684 (0.0734) loss_giou_2_unscaled: 0.3415 (0.3488) cardinality_error_2_unscaled: 1.0000 (0.9844) loss_ce_3_unscaled: 0.3871 (0.3925) loss_bbox_3_unscaled: 0.0632 (0.0649) loss_giou_3_unscaled: 0.3130 (0.3333) cardinality_error_3_unscaled: 1.0000 (1.0000) loss_ce_4_unscaled: 0.3885 (0.3906) loss_bbox_4_unscaled: 0.0638 (0.0651) loss_giou_4_unscaled: 0.3326 (0.3415) cardinality_error_4_unscaled: 1.0000 (1.0000) time: 1.0664 data: 0.3698 max mem: 12069
Test: Total time: 0:00:04 (1.0836 s / it)
Averaged stats: class_error: 56.25 loss: 8.8052 (8.8493) loss_ce: 0.3854 (0.3883) loss_bbox: 0.3299 (0.3343) loss_giou: 0.6740 (0.6903) loss_ce_0: 0.4427 (0.4385) loss_bbox_0: 0.3857 (0.3932) loss_giou_0: 0.8061 (0.8095) loss_ce_1: 0.4187 (0.4184) loss_bbox_1: 0.3582 (0.3817) loss_giou_1: 0.7333 (0.7484) loss_ce_2: 0.3920 (0.3992) loss_bbox_2: 0.3422 (0.3672) loss_giou_2: 0.6830 (0.6977) loss_ce_3: 0.3871 (0.3925) loss_bbox_3: 0.3160 (0.3244) loss_giou_3: 0.6261 (0.6665) loss_ce_4: 0.3885 (0.3906) loss_bbox_4: 0.3188 (0.3257) loss_giou_4: 0.6651 (0.6831) loss_ce_unscaled: 0.3854 (0.3883) class_error_unscaled: 37.5000 (46.8750) loss_bbox_unscaled: 0.0660 (0.0669) loss_giou_unscaled: 0.3370 (0.3451) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4427 (0.4385) loss_bbox_0_unscaled: 0.0771 (0.0786) loss_giou_0_unscaled: 0.4030 (0.4047) cardinality_error_0_unscaled: 1.0000 (1.0469) loss_ce_1_unscaled: 0.4187 (0.4184) loss_bbox_1_unscaled: 0.0716 (0.0763) loss_giou_1_unscaled: 0.3667 (0.3742) cardinality_error_1_unscaled: 1.0000 (0.9844) loss_ce_2_unscaled: 0.3920 (0.3992) loss_bbox_2_unscaled: 0.0684 (0.0734) loss_giou_2_unscaled: 0.3415 (0.3488) cardinality_error_2_unscaled: 1.0000 (0.9844) loss_ce_3_unscaled: 0.3871 (0.3925) loss_bbox_3_unscaled: 0.0632 (0.0649) loss_giou_3_unscaled: 0.3130 (0.3333) cardinality_error_3_unscaled: 1.0000 (1.0000) loss_ce_4_unscaled: 0.3885 (0.3906) loss_bbox_4_unscaled: 0.0638 (0.0651) loss_giou_4_unscaled: 0.3326 (0.3415) cardinality_error_4_unscaled: 1.0000 (1.0000)
Accumulating evaluation results...
DONE (t=0.07s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.003
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.010
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.013
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.002
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.003
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.011
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.072
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.243
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.290
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.234
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.248
Epoch: [19] [ 0/16] eta: 0:00:42 lr: 0.000010 class_error: 28.57 loss: 8.7708 (8.7708) loss_ce: 0.3386 (0.3386) loss_bbox: 0.3257 (0.3257) loss_giou: 0.6927 (0.6927) loss_ce_0: 0.3917 (0.3917) loss_bbox_0: 0.3621 (0.3621) loss_giou_0: 0.8576 (0.8576) loss_ce_1: 0.3675 (0.3675) loss_bbox_1: 0.3745 (0.3745) loss_giou_1: 0.7929 (0.7929) loss_ce_2: 0.3619 (0.3619) loss_bbox_2: 0.3646 (0.3646) loss_giou_2: 0.7161 (0.7161) loss_ce_3: 0.3468 (0.3468) loss_bbox_3: 0.3634 (0.3634) loss_giou_3: 0.7322 (0.7322) loss_ce_4: 0.3425 (0.3425) loss_bbox_4: 0.3241 (0.3241) loss_giou_4: 0.7158 (0.7158) loss_ce_unscaled: 0.3386 (0.3386) class_error_unscaled: 28.5714 (28.5714) loss_bbox_unscaled: 0.0651 (0.0651) loss_giou_unscaled: 0.3464 (0.3464) cardinality_error_unscaled: 0.8750 (0.8750) loss_ce_0_unscaled: 0.3917 (0.3917) loss_bbox_0_unscaled: 0.0724 (0.0724) loss_giou_0_unscaled: 0.4288 (0.4288) cardinality_error_0_unscaled: 0.8750 (0.8750) loss_ce_1_unscaled: 0.3675 (0.3675) loss_bbox_1_unscaled: 0.0749 (0.0749) loss_giou_1_unscaled: 0.3964 (0.3964) cardinality_error_1_unscaled: 0.8750 (0.8750) loss_ce_2_unscaled: 0.3619 (0.3619) loss_bbox_2_unscaled: 0.0729 (0.0729) loss_giou_2_unscaled: 0.3581 (0.3581) cardinality_error_2_unscaled: 0.8750 (0.8750) loss_ce_3_unscaled: 0.3468 (0.3468) loss_bbox_3_unscaled: 0.0727 (0.0727) loss_giou_3_unscaled: 0.3661 (0.3661) cardinality_error_3_unscaled: 0.8750 (0.8750) loss_ce_4_unscaled: 0.3425 (0.3425) loss_bbox_4_unscaled: 0.0648 (0.0648) loss_giou_4_unscaled: 0.3579 (0.3579) cardinality_error_4_unscaled: 0.8750 (0.8750) time: 2.6401 data: 1.3287 max mem: 12069
Epoch: [19] [10/16] eta: 0:00:08 lr: 0.000010 class_error: 31.25 loss: 9.8211 (9.8736) loss_ce: 0.3775 (0.3685) loss_bbox: 0.3965 (0.4075) loss_giou: 0.8010 (0.8023) loss_ce_0: 0.4283 (0.4281) loss_bbox_0: 0.4307 (0.4489) loss_giou_0: 0.8618 (0.8783) loss_ce_1: 0.4081 (0.4042) loss_bbox_1: 0.4242 (0.4419) loss_giou_1: 0.8833 (0.8635) loss_ce_2: 0.3966 (0.3865) loss_bbox_2: 0.4147 (0.4283) loss_giou_2: 0.8041 (0.8208) loss_ce_3: 0.3833 (0.3753) loss_bbox_3: 0.3897 (0.4084) loss_giou_3: 0.8210 (0.8223) loss_ce_4: 0.3846 (0.3730) loss_bbox_4: 0.3881 (0.4070) loss_giou_4: 0.8071 (0.8089) loss_ce_unscaled: 0.3775 (0.3685) class_error_unscaled: 50.0000 (45.1786) loss_bbox_unscaled: 0.0793 (0.0815) loss_giou_unscaled: 0.4005 (0.4012) cardinality_error_unscaled: 0.9375 (0.9375) loss_ce_0_unscaled: 0.4283 (0.4281) loss_bbox_0_unscaled: 0.0861 (0.0898) loss_giou_0_unscaled: 0.4309 (0.4391) cardinality_error_0_unscaled: 0.9375 (0.9773) loss_ce_1_unscaled: 0.4081 (0.4042) loss_bbox_1_unscaled: 0.0848 (0.0884) loss_giou_1_unscaled: 0.4417 (0.4317) cardinality_error_1_unscaled: 0.8750 (0.9148) loss_ce_2_unscaled: 0.3966 (0.3865) loss_bbox_2_unscaled: 0.0829 (0.0857) loss_giou_2_unscaled: 0.4020 (0.4104) cardinality_error_2_unscaled: 0.9375 (0.9318) loss_ce_3_unscaled: 0.3833 (0.3753) loss_bbox_3_unscaled: 0.0779 (0.0817) loss_giou_3_unscaled: 0.4105 (0.4111) cardinality_error_3_unscaled: 0.9375 (0.9375) loss_ce_4_unscaled: 0.3846 (0.3730) loss_bbox_4_unscaled: 0.0776 (0.0814) loss_giou_4_unscaled: 0.4035 (0.4044) cardinality_error_4_unscaled: 0.9375 (0.9375) time: 1.3664 data: 0.1760 max mem: 12069
Epoch: [19] [15/16] eta: 0:00:01 lr: 0.000010 class_error: 35.71 loss: 9.7237 (9.7250) loss_ce: 0.3773 (0.3720) loss_bbox: 0.3835 (0.4012) loss_giou: 0.7951 (0.7869) loss_ce_0: 0.4283 (0.4274) loss_bbox_0: 0.4307 (0.4371) loss_giou_0: 0.8618 (0.8722) loss_ce_1: 0.4081 (0.4064) loss_bbox_1: 0.3992 (0.4227) loss_giou_1: 0.8574 (0.8470) loss_ce_2: 0.3958 (0.3888) loss_bbox_2: 0.3690 (0.4115) loss_giou_2: 0.8026 (0.8047) loss_ce_3: 0.3829 (0.3784) loss_bbox_3: 0.3768 (0.3982) loss_giou_3: 0.8090 (0.7972) loss_ce_4: 0.3842 (0.3760) loss_bbox_4: 0.3860 (0.4028) loss_giou_4: 0.7994 (0.7945) loss_ce_unscaled: 0.3773 (0.3720) class_error_unscaled: 46.6667 (45.9487) loss_bbox_unscaled: 0.0767 (0.0802) loss_giou_unscaled: 0.3975 (0.3935) cardinality_error_unscaled: 0.9375 (0.9453) loss_ce_0_unscaled: 0.4283 (0.4274) loss_bbox_0_unscaled: 0.0861 (0.0874) loss_giou_0_unscaled: 0.4309 (0.4361) cardinality_error_0_unscaled: 0.9375 (0.9805) loss_ce_1_unscaled: 0.4081 (0.4064) loss_bbox_1_unscaled: 0.0798 (0.0845) loss_giou_1_unscaled: 0.4287 (0.4235) cardinality_error_1_unscaled: 0.9375 (0.9297) loss_ce_2_unscaled: 0.3958 (0.3888) loss_bbox_2_unscaled: 0.0738 (0.0823) loss_giou_2_unscaled: 0.4013 (0.4024) cardinality_error_2_unscaled: 0.9375 (0.9414) loss_ce_3_unscaled: 0.3829 (0.3784) loss_bbox_3_unscaled: 0.0754 (0.0796) loss_giou_3_unscaled: 0.4045 (0.3986) cardinality_error_3_unscaled: 0.9375 (0.9414) loss_ce_4_unscaled: 0.3842 (0.3760) loss_bbox_4_unscaled: 0.0772 (0.0806) loss_giou_4_unscaled: 0.3997 (0.3972) cardinality_error_4_unscaled: 0.9375 (0.9453) time: 1.3142 data: 0.1400 max mem: 12069
Epoch: [19] Total time: 0:00:21 (1.3185 s / it)
Averaged stats: lr: 0.000010 class_error: 35.71 loss: 9.7237 (9.7250) loss_ce: 0.3773 (0.3720) loss_bbox: 0.3835 (0.4012) loss_giou: 0.7951 (0.7869) loss_ce_0: 0.4283 (0.4274) loss_bbox_0: 0.4307 (0.4371) loss_giou_0: 0.8618 (0.8722) loss_ce_1: 0.4081 (0.4064) loss_bbox_1: 0.3992 (0.4227) loss_giou_1: 0.8574 (0.8470) loss_ce_2: 0.3958 (0.3888) loss_bbox_2: 0.3690 (0.4115) loss_giou_2: 0.8026 (0.8047) loss_ce_3: 0.3829 (0.3784) loss_bbox_3: 0.3768 (0.3982) loss_giou_3: 0.8090 (0.7972) loss_ce_4: 0.3842 (0.3760) loss_bbox_4: 0.3860 (0.4028) loss_giou_4: 0.7994 (0.7945) loss_ce_unscaled: 0.3773 (0.3720) class_error_unscaled: 46.6667 (45.9487) loss_bbox_unscaled: 0.0767 (0.0802) loss_giou_unscaled: 0.3975 (0.3935) cardinality_error_unscaled: 0.9375 (0.9453) loss_ce_0_unscaled: 0.4283 (0.4274) loss_bbox_0_unscaled: 0.0861 (0.0874) loss_giou_0_unscaled: 0.4309 (0.4361) cardinality_error_0_unscaled: 0.9375 (0.9805) loss_ce_1_unscaled: 0.4081 (0.4064) loss_bbox_1_unscaled: 0.0798 (0.0845) loss_giou_1_unscaled: 0.4287 (0.4235) cardinality_error_1_unscaled: 0.9375 (0.9297) loss_ce_2_unscaled: 0.3958 (0.3888) loss_bbox_2_unscaled: 0.0738 (0.0823) loss_giou_2_unscaled: 0.4013 (0.4024) cardinality_error_2_unscaled: 0.9375 (0.9414) loss_ce_3_unscaled: 0.3829 (0.3784) loss_bbox_3_unscaled: 0.0754 (0.0796) loss_giou_3_unscaled: 0.4045 (0.3986) cardinality_error_3_unscaled: 0.9375 (0.9414) loss_ce_4_unscaled: 0.3842 (0.3760) loss_bbox_4_unscaled: 0.0772 (0.0806) loss_giou_4_unscaled: 0.3997 (0.3972) cardinality_error_4_unscaled: 0.9375 (0.9453)
Test: [0/4] eta: 0:00:07 class_error: 62.50 loss: 8.9155 (8.9155) loss_ce: 0.3868 (0.3868) loss_bbox: 0.3284 (0.3284) loss_giou: 0.7881 (0.7881) loss_ce_0: 0.4221 (0.4221) loss_bbox_0: 0.3388 (0.3388) loss_giou_0: 0.8004 (0.8004) loss_ce_1: 0.4020 (0.4020) loss_bbox_1: 0.3214 (0.3214) loss_giou_1: 0.7298 (0.7298) loss_ce_2: 0.3911 (0.3911) loss_bbox_2: 0.3213 (0.3213) loss_giou_2: 0.7571 (0.7571) loss_ce_3: 0.3887 (0.3887) loss_bbox_3: 0.3224 (0.3224) loss_giou_3: 0.7613 (0.7613) loss_ce_4: 0.3867 (0.3867) loss_bbox_4: 0.3175 (0.3175) loss_giou_4: 0.7514 (0.7514) loss_ce_unscaled: 0.3868 (0.3868) class_error_unscaled: 62.5000 (62.5000) loss_bbox_unscaled: 0.0657 (0.0657) loss_giou_unscaled: 0.3940 (0.3940) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4221 (0.4221) loss_bbox_0_unscaled: 0.0678 (0.0678) loss_giou_0_unscaled: 0.4002 (0.4002) cardinality_error_0_unscaled: 1.0000 (1.0000) loss_ce_1_unscaled: 0.4020 (0.4020) loss_bbox_1_unscaled: 0.0643 (0.0643) loss_giou_1_unscaled: 0.3649 (0.3649) cardinality_error_1_unscaled: 1.0000 (1.0000) loss_ce_2_unscaled: 0.3911 (0.3911) loss_bbox_2_unscaled: 0.0643 (0.0643) loss_giou_2_unscaled: 0.3785 (0.3785) cardinality_error_2_unscaled: 1.0000 (1.0000) loss_ce_3_unscaled: 0.3887 (0.3887) loss_bbox_3_unscaled: 0.0645 (0.0645) loss_giou_3_unscaled: 0.3807 (0.3807) cardinality_error_3_unscaled: 1.0000 (1.0000) loss_ce_4_unscaled: 0.3867 (0.3867) loss_bbox_4_unscaled: 0.0635 (0.0635) loss_giou_4_unscaled: 0.3757 (0.3757) cardinality_error_4_unscaled: 1.0000 (1.0000) time: 1.9364 data: 1.1385 max mem: 12069
Test: [3/4] eta: 0:00:01 class_error: 56.25 loss: 8.9155 (8.8603) loss_ce: 0.3868 (0.3864) loss_bbox: 0.2812 (0.3121) loss_giou: 0.6701 (0.7015) loss_ce_0: 0.4329 (0.4330) loss_bbox_0: 0.3730 (0.3830) loss_giou_0: 0.7957 (0.8024) loss_ce_1: 0.4147 (0.4135) loss_bbox_1: 0.3879 (0.3805) loss_giou_1: 0.7522 (0.7644) loss_ce_2: 0.3911 (0.3976) loss_bbox_2: 0.3245 (0.3562) loss_giou_2: 0.7152 (0.7262) loss_ce_3: 0.3887 (0.3912) loss_bbox_3: 0.3201 (0.3223) loss_giou_3: 0.6758 (0.6948) loss_ce_4: 0.3867 (0.3890) loss_bbox_4: 0.2823 (0.3090) loss_giou_4: 0.6831 (0.6972) loss_ce_unscaled: 0.3868 (0.3864) class_error_unscaled: 37.5000 (48.4375) loss_bbox_unscaled: 0.0562 (0.0624) loss_giou_unscaled: 0.3350 (0.3507) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4329 (0.4330) loss_bbox_0_unscaled: 0.0746 (0.0766) loss_giou_0_unscaled: 0.3978 (0.4012) cardinality_error_0_unscaled: 1.0000 (1.0156) loss_ce_1_unscaled: 0.4147 (0.4135) loss_bbox_1_unscaled: 0.0776 (0.0761) loss_giou_1_unscaled: 0.3761 (0.3822) cardinality_error_1_unscaled: 1.0000 (1.0000) loss_ce_2_unscaled: 0.3911 (0.3976) loss_bbox_2_unscaled: 0.0649 (0.0712) loss_giou_2_unscaled: 0.3576 (0.3631) cardinality_error_2_unscaled: 1.0000 (1.0000) loss_ce_3_unscaled: 0.3887 (0.3912) loss_bbox_3_unscaled: 0.0640 (0.0645) loss_giou_3_unscaled: 0.3379 (0.3474) cardinality_error_3_unscaled: 1.0000 (1.0000) loss_ce_4_unscaled: 0.3867 (0.3890) loss_bbox_4_unscaled: 0.0565 (0.0618) loss_giou_4_unscaled: 0.3416 (0.3486) cardinality_error_4_unscaled: 1.0000 (1.0000) time: 1.0179 data: 0.3545 max mem: 12069
Test: Total time: 0:00:04 (1.0366 s / it)
Averaged stats: class_error: 56.25 loss: 8.9155 (8.8603) loss_ce: 0.3868 (0.3864) loss_bbox: 0.2812 (0.3121) loss_giou: 0.6701 (0.7015) loss_ce_0: 0.4329 (0.4330) loss_bbox_0: 0.3730 (0.3830) loss_giou_0: 0.7957 (0.8024) loss_ce_1: 0.4147 (0.4135) loss_bbox_1: 0.3879 (0.3805) loss_giou_1: 0.7522 (0.7644) loss_ce_2: 0.3911 (0.3976) loss_bbox_2: 0.3245 (0.3562) loss_giou_2: 0.7152 (0.7262) loss_ce_3: 0.3887 (0.3912) loss_bbox_3: 0.3201 (0.3223) loss_giou_3: 0.6758 (0.6948) loss_ce_4: 0.3867 (0.3890) loss_bbox_4: 0.2823 (0.3090) loss_giou_4: 0.6831 (0.6972) loss_ce_unscaled: 0.3868 (0.3864) class_error_unscaled: 37.5000 (48.4375) loss_bbox_unscaled: 0.0562 (0.0624) loss_giou_unscaled: 0.3350 (0.3507) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4329 (0.4330) loss_bbox_0_unscaled: 0.0746 (0.0766) loss_giou_0_unscaled: 0.3978 (0.4012) cardinality_error_0_unscaled: 1.0000 (1.0156) loss_ce_1_unscaled: 0.4147 (0.4135) loss_bbox_1_unscaled: 0.0776 (0.0761) loss_giou_1_unscaled: 0.3761 (0.3822) cardinality_error_1_unscaled: 1.0000 (1.0000) loss_ce_2_unscaled: 0.3911 (0.3976) loss_bbox_2_unscaled: 0.0649 (0.0712) loss_giou_2_unscaled: 0.3576 (0.3631) cardinality_error_2_unscaled: 1.0000 (1.0000) loss_ce_3_unscaled: 0.3887 (0.3912) loss_bbox_3_unscaled: 0.0640 (0.0645) loss_giou_3_unscaled: 0.3379 (0.3474) cardinality_error_3_unscaled: 1.0000 (1.0000) loss_ce_4_unscaled: 0.3867 (0.3890) loss_bbox_4_unscaled: 0.0565 (0.0618) loss_giou_4_unscaled: 0.3416 (0.3486) cardinality_error_4_unscaled: 1.0000 (1.0000)
Accumulating evaluation results...
DONE (t=0.07s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.003
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.009
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.013
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.002
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.003
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.009
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.081
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.228
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.318
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.193
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.234
Epoch: [20] [ 0/16] eta: 0:00:40 lr: 0.000010 class_error: 50.00 loss: 11.8922 (11.8922) loss_ce: 0.3844 (0.3844) loss_bbox: 0.4887 (0.4887) loss_giou: 0.9694 (0.9694) loss_ce_0: 0.4406 (0.4406) loss_bbox_0: 0.6781 (0.6781) loss_giou_0: 1.1430 (1.1430) loss_ce_1: 0.4105 (0.4105) loss_bbox_1: 0.5915 (0.5915) loss_giou_1: 1.0415 (1.0415) loss_ce_2: 0.4070 (0.4070) loss_bbox_2: 0.5180 (0.5180) loss_giou_2: 1.0181 (1.0181) loss_ce_3: 0.3971 (0.3971) loss_bbox_3: 0.5193 (0.5193) loss_giou_3: 1.0110 (1.0110) loss_ce_4: 0.3879 (0.3879) loss_bbox_4: 0.5170 (0.5170) loss_giou_4: 0.9691 (0.9691) loss_ce_unscaled: 0.3844 (0.3844) class_error_unscaled: 50.0000 (50.0000) loss_bbox_unscaled: 0.0977 (0.0977) loss_giou_unscaled: 0.4847 (0.4847) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4406 (0.4406) loss_bbox_0_unscaled: 0.1356 (0.1356) loss_giou_0_unscaled: 0.5715 (0.5715) cardinality_error_0_unscaled: 1.0000 (1.0000) loss_ce_1_unscaled: 0.4105 (0.4105) loss_bbox_1_unscaled: 0.1183 (0.1183) loss_giou_1_unscaled: 0.5207 (0.5207) cardinality_error_1_unscaled: 1.0000 (1.0000) loss_ce_2_unscaled: 0.4070 (0.4070) loss_bbox_2_unscaled: 0.1036 (0.1036) loss_giou_2_unscaled: 0.5090 (0.5090) cardinality_error_2_unscaled: 1.0000 (1.0000) loss_ce_3_unscaled: 0.3971 (0.3971) loss_bbox_3_unscaled: 0.1039 (0.1039) loss_giou_3_unscaled: 0.5055 (0.5055) cardinality_error_3_unscaled: 1.0000 (1.0000) loss_ce_4_unscaled: 0.3879 (0.3879) loss_bbox_4_unscaled: 0.1034 (0.1034) loss_giou_4_unscaled: 0.4845 (0.4845) cardinality_error_4_unscaled: 1.0000 (1.0000) time: 2.5313 data: 1.2702 max mem: 12069
Epoch: [20] [10/16] eta: 0:00:08 lr: 0.000010 class_error: 40.00 loss: 9.9418 (9.9823) loss_ce: 0.3749 (0.3764) loss_bbox: 0.4109 (0.4167) loss_giou: 0.8250 (0.8172) loss_ce_0: 0.4354 (0.4342) loss_bbox_0: 0.4696 (0.4731) loss_giou_0: 0.8896 (0.8850) loss_ce_1: 0.4130 (0.4121) loss_bbox_1: 0.4037 (0.4372) loss_giou_1: 0.8439 (0.8531) loss_ce_2: 0.3975 (0.3968) loss_bbox_2: 0.3871 (0.4233) loss_giou_2: 0.8232 (0.8328) loss_ce_3: 0.3851 (0.3846) loss_bbox_3: 0.4161 (0.4124) loss_giou_3: 0.8053 (0.8136) loss_ce_4: 0.3816 (0.3805) loss_bbox_4: 0.4389 (0.4227) loss_giou_4: 0.8196 (0.8107) loss_ce_unscaled: 0.3749 (0.3764) class_error_unscaled: 43.7500 (44.3506) loss_bbox_unscaled: 0.0822 (0.0833) loss_giou_unscaled: 0.4125 (0.4086) cardinality_error_unscaled: 1.0000 (0.9716) loss_ce_0_unscaled: 0.4354 (0.4342) loss_bbox_0_unscaled: 0.0939 (0.0946) loss_giou_0_unscaled: 0.4448 (0.4425) cardinality_error_0_unscaled: 1.0000 (1.0000) loss_ce_1_unscaled: 0.4130 (0.4121) loss_bbox_1_unscaled: 0.0807 (0.0874) loss_giou_1_unscaled: 0.4219 (0.4266) cardinality_error_1_unscaled: 1.0000 (0.9659) loss_ce_2_unscaled: 0.3975 (0.3968) loss_bbox_2_unscaled: 0.0774 (0.0847) loss_giou_2_unscaled: 0.4116 (0.4164) cardinality_error_2_unscaled: 1.0000 (0.9716) loss_ce_3_unscaled: 0.3851 (0.3846) loss_bbox_3_unscaled: 0.0832 (0.0825) loss_giou_3_unscaled: 0.4027 (0.4068) cardinality_error_3_unscaled: 1.0000 (0.9716) loss_ce_4_unscaled: 0.3816 (0.3805) loss_bbox_4_unscaled: 0.0878 (0.0845) loss_giou_4_unscaled: 0.4098 (0.4054) cardinality_error_4_unscaled: 1.0000 (0.9716) time: 1.3480 data: 0.1703 max mem: 12069
Epoch: [20] [15/16] eta: 0:00:01 lr: 0.000010 class_error: 50.00 loss: 9.8143 (9.7863) loss_ce: 0.3744 (0.3716) loss_bbox: 0.3872 (0.4007) loss_giou: 0.7821 (0.7973) loss_ce_0: 0.4337 (0.4300) loss_bbox_0: 0.4424 (0.4592) loss_giou_0: 0.8886 (0.8848) loss_ce_1: 0.4120 (0.4072) loss_bbox_1: 0.4066 (0.4236) loss_giou_1: 0.8293 (0.8385) loss_ce_2: 0.3938 (0.3909) loss_bbox_2: 0.3901 (0.4113) loss_giou_2: 0.8177 (0.8177) loss_ce_3: 0.3818 (0.3796) loss_bbox_3: 0.3922 (0.3988) loss_giou_3: 0.8013 (0.8006) loss_ce_4: 0.3794 (0.3758) loss_bbox_4: 0.3825 (0.4052) loss_giou_4: 0.7853 (0.7937) loss_ce_unscaled: 0.3744 (0.3716) class_error_unscaled: 43.7500 (44.2932) loss_bbox_unscaled: 0.0774 (0.0801) loss_giou_unscaled: 0.3911 (0.3987) cardinality_error_unscaled: 1.0000 (0.9609) loss_ce_0_unscaled: 0.4337 (0.4300) loss_bbox_0_unscaled: 0.0885 (0.0918) loss_giou_0_unscaled: 0.4443 (0.4424) cardinality_error_0_unscaled: 1.0000 (0.9961) loss_ce_1_unscaled: 0.4120 (0.4072) loss_bbox_1_unscaled: 0.0813 (0.0847) loss_giou_1_unscaled: 0.4147 (0.4192) cardinality_error_1_unscaled: 0.9375 (0.9570) loss_ce_2_unscaled: 0.3938 (0.3909) loss_bbox_2_unscaled: 0.0780 (0.0823) loss_giou_2_unscaled: 0.4088 (0.4088) cardinality_error_2_unscaled: 1.0000 (0.9609) loss_ce_3_unscaled: 0.3818 (0.3796) loss_bbox_3_unscaled: 0.0784 (0.0798) loss_giou_3_unscaled: 0.4007 (0.4003) cardinality_error_3_unscaled: 1.0000 (0.9609) loss_ce_4_unscaled: 0.3794 (0.3758) loss_bbox_4_unscaled: 0.0765 (0.0810) loss_giou_4_unscaled: 0.3926 (0.3969) cardinality_error_4_unscaled: 1.0000 (0.9609) time: 1.3037 data: 0.1356 max mem: 12069
Epoch: [20] Total time: 0:00:20 (1.3086 s / it)
Averaged stats: lr: 0.000010 class_error: 50.00 loss: 9.8143 (9.7863) loss_ce: 0.3744 (0.3716) loss_bbox: 0.3872 (0.4007) loss_giou: 0.7821 (0.7973) loss_ce_0: 0.4337 (0.4300) loss_bbox_0: 0.4424 (0.4592) loss_giou_0: 0.8886 (0.8848) loss_ce_1: 0.4120 (0.4072) loss_bbox_1: 0.4066 (0.4236) loss_giou_1: 0.8293 (0.8385) loss_ce_2: 0.3938 (0.3909) loss_bbox_2: 0.3901 (0.4113) loss_giou_2: 0.8177 (0.8177) loss_ce_3: 0.3818 (0.3796) loss_bbox_3: 0.3922 (0.3988) loss_giou_3: 0.8013 (0.8006) loss_ce_4: 0.3794 (0.3758) loss_bbox_4: 0.3825 (0.4052) loss_giou_4: 0.7853 (0.7937) loss_ce_unscaled: 0.3744 (0.3716) class_error_unscaled: 43.7500 (44.2932) loss_bbox_unscaled: 0.0774 (0.0801) loss_giou_unscaled: 0.3911 (0.3987) cardinality_error_unscaled: 1.0000 (0.9609) loss_ce_0_unscaled: 0.4337 (0.4300) loss_bbox_0_unscaled: 0.0885 (0.0918) loss_giou_0_unscaled: 0.4443 (0.4424) cardinality_error_0_unscaled: 1.0000 (0.9961) loss_ce_1_unscaled: 0.4120 (0.4072) loss_bbox_1_unscaled: 0.0813 (0.0847) loss_giou_1_unscaled: 0.4147 (0.4192) cardinality_error_1_unscaled: 0.9375 (0.9570) loss_ce_2_unscaled: 0.3938 (0.3909) loss_bbox_2_unscaled: 0.0780 (0.0823) loss_giou_2_unscaled: 0.4088 (0.4088) cardinality_error_2_unscaled: 1.0000 (0.9609) loss_ce_3_unscaled: 0.3818 (0.3796) loss_bbox_3_unscaled: 0.0784 (0.0798) loss_giou_3_unscaled: 0.4007 (0.4003) cardinality_error_3_unscaled: 1.0000 (0.9609) loss_ce_4_unscaled: 0.3794 (0.3758) loss_bbox_4_unscaled: 0.0765 (0.0810) loss_giou_4_unscaled: 0.3926 (0.3969) cardinality_error_4_unscaled: 1.0000 (0.9609)
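A quick sanity check on the Epoch [20] summary just above: each scaled term is the corresponding `*_unscaled` value times its loss weight, and the reported `loss` is the sum of the scaled terms over the final decoder output plus the five auxiliary outputs (`_0`–`_4`). The weights implied by the ratios here are roughly 1 for `loss_ce`, 5 for `loss_bbox`, and 2 for `loss_giou`; they are inferred from the log (and happen to match DETR's defaults), not stated in it.

```python
# Sanity check of the scaled vs. *_unscaled relationship, using the running
# averages from the Epoch [20] "Averaged stats" line above.
# The weights (1, 5, 2) are inferred from those ratios; treat them as an assumption.
ce_unscaled, bbox_unscaled, giou_unscaled = 0.3716, 0.0801, 0.3987
print(1 * ce_unscaled)    # ~0.3716 -> loss_ce   (reported running average 0.3716)
print(5 * bbox_unscaled)  # ~0.4005 -> loss_bbox (reported running average 0.4007)
print(2 * giou_unscaled)  # ~0.7974 -> loss_giou (reported running average 0.7973)
```

Summing the scaled running averages of all eighteen terms in that line gives about 9.787, which agrees with the reported total of 9.7863 up to rounding.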
Test: [0/4] eta: 0:00:07 class_error: 50.00 loss: 8.9290 (8.9290) loss_ce: 0.3858 (0.3858) loss_bbox: 0.3440 (0.3440) loss_giou: 0.7672 (0.7672) loss_ce_0: 0.4201 (0.4201) loss_bbox_0: 0.3430 (0.3430) loss_giou_0: 0.7959 (0.7959) loss_ce_1: 0.4014 (0.4014) loss_bbox_1: 0.3366 (0.3366) loss_giou_1: 0.7666 (0.7666) loss_ce_2: 0.3923 (0.3923) loss_bbox_2: 0.3210 (0.3210) loss_giou_2: 0.7619 (0.7619) loss_ce_3: 0.3864 (0.3864) loss_bbox_3: 0.3222 (0.3222) loss_giou_3: 0.7202 (0.7202) loss_ce_4: 0.3879 (0.3879) loss_bbox_4: 0.3202 (0.3202) loss_giou_4: 0.7563 (0.7563) loss_ce_unscaled: 0.3858 (0.3858) class_error_unscaled: 50.0000 (50.0000) loss_bbox_unscaled: 0.0688 (0.0688) loss_giou_unscaled: 0.3836 (0.3836) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4201 (0.4201) loss_bbox_0_unscaled: 0.0686 (0.0686) loss_giou_0_unscaled: 0.3979 (0.3979) cardinality_error_0_unscaled: 1.0625 (1.0625) loss_ce_1_unscaled: 0.4014 (0.4014) loss_bbox_1_unscaled: 0.0673 (0.0673) loss_giou_1_unscaled: 0.3833 (0.3833) cardinality_error_1_unscaled: 1.1250 (1.1250) loss_ce_2_unscaled: 0.3923 (0.3923) loss_bbox_2_unscaled: 0.0642 (0.0642) loss_giou_2_unscaled: 0.3809 (0.3809) cardinality_error_2_unscaled: 0.9375 (0.9375) loss_ce_3_unscaled: 0.3864 (0.3864) loss_bbox_3_unscaled: 0.0644 (0.0644) loss_giou_3_unscaled: 0.3601 (0.3601) cardinality_error_3_unscaled: 0.9375 (0.9375) loss_ce_4_unscaled: 0.3879 (0.3879) loss_bbox_4_unscaled: 0.0640 (0.0640) loss_giou_4_unscaled: 0.3781 (0.3781) cardinality_error_4_unscaled: 0.9375 (0.9375) time: 1.9128 data: 1.1276 max mem: 12069
Test: [3/4] eta: 0:00:01 class_error: 50.00 loss: 8.9290 (9.0379) loss_ce: 0.3849 (0.3844) loss_bbox: 0.3167 (0.3315) loss_giou: 0.6779 (0.7126) loss_ce_0: 0.4318 (0.4311) loss_bbox_0: 0.3840 (0.3869) loss_giou_0: 0.8150 (0.8304) loss_ce_1: 0.4144 (0.4142) loss_bbox_1: 0.3977 (0.3923) loss_giou_1: 0.7933 (0.7928) loss_ce_2: 0.3923 (0.3987) loss_bbox_2: 0.3688 (0.3714) loss_giou_2: 0.7006 (0.7381) loss_ce_3: 0.3864 (0.3915) loss_bbox_3: 0.3222 (0.3349) loss_giou_3: 0.6990 (0.7108) loss_ce_4: 0.3879 (0.3897) loss_bbox_4: 0.3018 (0.3212) loss_giou_4: 0.6618 (0.7051) loss_ce_unscaled: 0.3849 (0.3844) class_error_unscaled: 37.5000 (43.7500) loss_bbox_unscaled: 0.0633 (0.0663) loss_giou_unscaled: 0.3390 (0.3563) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4318 (0.4311) loss_bbox_0_unscaled: 0.0768 (0.0774) loss_giou_0_unscaled: 0.4075 (0.4152) cardinality_error_0_unscaled: 1.0000 (1.0469) loss_ce_1_unscaled: 0.4144 (0.4142) loss_bbox_1_unscaled: 0.0795 (0.0785) loss_giou_1_unscaled: 0.3966 (0.3964) cardinality_error_1_unscaled: 1.0000 (1.0312) loss_ce_2_unscaled: 0.3923 (0.3987) loss_bbox_2_unscaled: 0.0738 (0.0743) loss_giou_2_unscaled: 0.3503 (0.3691) cardinality_error_2_unscaled: 1.0000 (0.9844) loss_ce_3_unscaled: 0.3864 (0.3915) loss_bbox_3_unscaled: 0.0644 (0.0670) loss_giou_3_unscaled: 0.3495 (0.3554) cardinality_error_3_unscaled: 1.0000 (0.9844) loss_ce_4_unscaled: 0.3879 (0.3897) loss_bbox_4_unscaled: 0.0604 (0.0642) loss_giou_4_unscaled: 0.3309 (0.3526) cardinality_error_4_unscaled: 1.0000 (0.9844) time: 1.0070 data: 0.3376 max mem: 12069
Test: Total time: 0:00:04 (1.0239 s / it)
Averaged stats: class_error: 50.00 loss: 8.9290 (9.0379) loss_ce: 0.3849 (0.3844) loss_bbox: 0.3167 (0.3315) loss_giou: 0.6779 (0.7126) loss_ce_0: 0.4318 (0.4311) loss_bbox_0: 0.3840 (0.3869) loss_giou_0: 0.8150 (0.8304) loss_ce_1: 0.4144 (0.4142) loss_bbox_1: 0.3977 (0.3923) loss_giou_1: 0.7933 (0.7928) loss_ce_2: 0.3923 (0.3987) loss_bbox_2: 0.3688 (0.3714) loss_giou_2: 0.7006 (0.7381) loss_ce_3: 0.3864 (0.3915) loss_bbox_3: 0.3222 (0.3349) loss_giou_3: 0.6990 (0.7108) loss_ce_4: 0.3879 (0.3897) loss_bbox_4: 0.3018 (0.3212) loss_giou_4: 0.6618 (0.7051) loss_ce_unscaled: 0.3849 (0.3844) class_error_unscaled: 37.5000 (43.7500) loss_bbox_unscaled: 0.0633 (0.0663) loss_giou_unscaled: 0.3390 (0.3563) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4318 (0.4311) loss_bbox_0_unscaled: 0.0768 (0.0774) loss_giou_0_unscaled: 0.4075 (0.4152) cardinality_error_0_unscaled: 1.0000 (1.0469) loss_ce_1_unscaled: 0.4144 (0.4142) loss_bbox_1_unscaled: 0.0795 (0.0785) loss_giou_1_unscaled: 0.3966 (0.3964) cardinality_error_1_unscaled: 1.0000 (1.0312) loss_ce_2_unscaled: 0.3923 (0.3987) loss_bbox_2_unscaled: 0.0738 (0.0743) loss_giou_2_unscaled: 0.3503 (0.3691) cardinality_error_2_unscaled: 1.0000 (0.9844) loss_ce_3_unscaled: 0.3864 (0.3915) loss_bbox_3_unscaled: 0.0644 (0.0670) loss_giou_3_unscaled: 0.3495 (0.3554) cardinality_error_3_unscaled: 1.0000 (0.9844) loss_ce_4_unscaled: 0.3879 (0.3897) loss_bbox_4_unscaled: 0.0604 (0.0642) loss_giou_4_unscaled: 0.3309 (0.3526) cardinality_error_4_unscaled: 1.0000 (0.9844)
Accumulating evaluation results...
DONE (t=0.06s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.002
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.008
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.011
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.003
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.012
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.067
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.211
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.350
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.182
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.201
Epoch: [21] [ 0/16] eta: 0:00:40 lr: 0.000010 class_error: 42.86 loss: 9.4897 (9.4897) loss_ce: 0.3515 (0.3515) loss_bbox: 0.3689 (0.3689) loss_giou: 0.7695 (0.7695) loss_ce_0: 0.4077 (0.4077) loss_bbox_0: 0.4628 (0.4628) loss_giou_0: 0.9177 (0.9177) loss_ce_1: 0.3871 (0.3871) loss_bbox_1: 0.3987 (0.3987) loss_giou_1: 0.8618 (0.8618) loss_ce_2: 0.3687 (0.3687) loss_bbox_2: 0.3713 (0.3713) loss_giou_2: 0.8359 (0.8359) loss_ce_3: 0.3557 (0.3557) loss_bbox_3: 0.3635 (0.3635) loss_giou_3: 0.7803 (0.7803) loss_ce_4: 0.3558 (0.3558) loss_bbox_4: 0.3632 (0.3632) loss_giou_4: 0.7696 (0.7696) loss_ce_unscaled: 0.3515 (0.3515) class_error_unscaled: 42.8571 (42.8571) loss_bbox_unscaled: 0.0738 (0.0738) loss_giou_unscaled: 0.3847 (0.3847) cardinality_error_unscaled: 0.8750 (0.8750) loss_ce_0_unscaled: 0.4077 (0.4077) loss_bbox_0_unscaled: 0.0926 (0.0926) loss_giou_0_unscaled: 0.4588 (0.4588) cardinality_error_0_unscaled: 0.8750 (0.8750) loss_ce_1_unscaled: 0.3871 (0.3871) loss_bbox_1_unscaled: 0.0797 (0.0797) loss_giou_1_unscaled: 0.4309 (0.4309) cardinality_error_1_unscaled: 0.8750 (0.8750) loss_ce_2_unscaled: 0.3687 (0.3687) loss_bbox_2_unscaled: 0.0743 (0.0743) loss_giou_2_unscaled: 0.4179 (0.4179) cardinality_error_2_unscaled: 0.8750 (0.8750) loss_ce_3_unscaled: 0.3557 (0.3557) loss_bbox_3_unscaled: 0.0727 (0.0727) loss_giou_3_unscaled: 0.3902 (0.3902) cardinality_error_3_unscaled: 0.8750 (0.8750) loss_ce_4_unscaled: 0.3558 (0.3558) loss_bbox_4_unscaled: 0.0726 (0.0726) loss_giou_4_unscaled: 0.3848 (0.3848) cardinality_error_4_unscaled: 0.8750 (0.8750) time: 2.5506 data: 1.2420 max mem: 12069
Epoch: [21] [10/16] eta: 0:00:08 lr: 0.000010 class_error: 31.25 loss: 9.8021 (9.5887) loss_ce: 0.3704 (0.3734) loss_bbox: 0.3741 (0.3915) loss_giou: 0.8119 (0.7838) loss_ce_0: 0.4327 (0.4254) loss_bbox_0: 0.4475 (0.4603) loss_giou_0: 0.8654 (0.8620) loss_ce_1: 0.4066 (0.4020) loss_bbox_1: 0.4190 (0.4278) loss_giou_1: 0.8299 (0.8127) loss_ce_2: 0.3924 (0.3873) loss_bbox_2: 0.4112 (0.4072) loss_giou_2: 0.8008 (0.7762) loss_ce_3: 0.3804 (0.3791) loss_bbox_3: 0.3758 (0.3916) loss_giou_3: 0.7803 (0.7672) loss_ce_4: 0.3755 (0.3766) loss_bbox_4: 0.3840 (0.3863) loss_giou_4: 0.7696 (0.7786) loss_ce_unscaled: 0.3704 (0.3734) class_error_unscaled: 37.5000 (38.6418) loss_bbox_unscaled: 0.0748 (0.0783) loss_giou_unscaled: 0.4060 (0.3919) cardinality_error_unscaled: 1.0000 (0.9545) loss_ce_0_unscaled: 0.4327 (0.4254) loss_bbox_0_unscaled: 0.0895 (0.0921) loss_giou_0_unscaled: 0.4327 (0.4310) cardinality_error_0_unscaled: 0.9375 (0.9318) loss_ce_1_unscaled: 0.4066 (0.4020) loss_bbox_1_unscaled: 0.0838 (0.0856) loss_giou_1_unscaled: 0.4150 (0.4064) cardinality_error_1_unscaled: 1.0000 (0.9545) loss_ce_2_unscaled: 0.3924 (0.3873) loss_bbox_2_unscaled: 0.0822 (0.0814) loss_giou_2_unscaled: 0.4004 (0.3881) cardinality_error_2_unscaled: 1.0000 (0.9545) loss_ce_3_unscaled: 0.3804 (0.3791) loss_bbox_3_unscaled: 0.0752 (0.0783) loss_giou_3_unscaled: 0.3902 (0.3836) cardinality_error_3_unscaled: 1.0000 (0.9489) loss_ce_4_unscaled: 0.3755 (0.3766) loss_bbox_4_unscaled: 0.0768 (0.0773) loss_giou_4_unscaled: 0.3848 (0.3893) cardinality_error_4_unscaled: 1.0000 (0.9545) time: 1.3600 data: 0.1768 max mem: 12069
Epoch: [21] [15/16] eta: 0:00:01 lr: 0.000010 class_error: 25.00 loss: 9.3378 (9.4019) loss_ce: 0.3704 (0.3708) loss_bbox: 0.3576 (0.3729) loss_giou: 0.7695 (0.7646) loss_ce_0: 0.4231 (0.4221) loss_bbox_0: 0.4434 (0.4413) loss_giou_0: 0.8594 (0.8449) loss_ce_1: 0.4064 (0.4013) loss_bbox_1: 0.3987 (0.4120) loss_giou_1: 0.7890 (0.7966) loss_ce_2: 0.3892 (0.3855) loss_bbox_2: 0.3747 (0.3932) loss_giou_2: 0.7905 (0.7743) loss_ce_3: 0.3804 (0.3774) loss_bbox_3: 0.3635 (0.3759) loss_giou_3: 0.7490 (0.7613) loss_ce_4: 0.3755 (0.3744) loss_bbox_4: 0.3561 (0.3703) loss_giou_4: 0.7684 (0.7630) loss_ce_unscaled: 0.3704 (0.3708) class_error_unscaled: 37.5000 (38.8379) loss_bbox_unscaled: 0.0715 (0.0746) loss_giou_unscaled: 0.3847 (0.3823) cardinality_error_unscaled: 0.9375 (0.9492) loss_ce_0_unscaled: 0.4231 (0.4221) loss_bbox_0_unscaled: 0.0887 (0.0883) loss_giou_0_unscaled: 0.4297 (0.4225) cardinality_error_0_unscaled: 0.9375 (0.9336) loss_ce_1_unscaled: 0.4064 (0.4013) loss_bbox_1_unscaled: 0.0797 (0.0824) loss_giou_1_unscaled: 0.3945 (0.3983) cardinality_error_1_unscaled: 0.9375 (0.9492) loss_ce_2_unscaled: 0.3892 (0.3855) loss_bbox_2_unscaled: 0.0749 (0.0786) loss_giou_2_unscaled: 0.3952 (0.3872) cardinality_error_2_unscaled: 0.9375 (0.9492) loss_ce_3_unscaled: 0.3804 (0.3774) loss_bbox_3_unscaled: 0.0727 (0.0752) loss_giou_3_unscaled: 0.3745 (0.3807) cardinality_error_3_unscaled: 0.9375 (0.9453) loss_ce_4_unscaled: 0.3755 (0.3744) loss_bbox_4_unscaled: 0.0712 (0.0741) loss_giou_4_unscaled: 0.3842 (0.3815) cardinality_error_4_unscaled: 0.9375 (0.9492) time: 1.3127 data: 0.1397 max mem: 12069
Epoch: [21] Total time: 0:00:21 (1.3169 s / it)
Averaged stats: lr: 0.000010 class_error: 25.00 loss: 9.3378 (9.4019) loss_ce: 0.3704 (0.3708) loss_bbox: 0.3576 (0.3729) loss_giou: 0.7695 (0.7646) loss_ce_0: 0.4231 (0.4221) loss_bbox_0: 0.4434 (0.4413) loss_giou_0: 0.8594 (0.8449) loss_ce_1: 0.4064 (0.4013) loss_bbox_1: 0.3987 (0.4120) loss_giou_1: 0.7890 (0.7966) loss_ce_2: 0.3892 (0.3855) loss_bbox_2: 0.3747 (0.3932) loss_giou_2: 0.7905 (0.7743) loss_ce_3: 0.3804 (0.3774) loss_bbox_3: 0.3635 (0.3759) loss_giou_3: 0.7490 (0.7613) loss_ce_4: 0.3755 (0.3744) loss_bbox_4: 0.3561 (0.3703) loss_giou_4: 0.7684 (0.7630) loss_ce_unscaled: 0.3704 (0.3708) class_error_unscaled: 37.5000 (38.8379) loss_bbox_unscaled: 0.0715 (0.0746) loss_giou_unscaled: 0.3847 (0.3823) cardinality_error_unscaled: 0.9375 (0.9492) loss_ce_0_unscaled: 0.4231 (0.4221) loss_bbox_0_unscaled: 0.0887 (0.0883) loss_giou_0_unscaled: 0.4297 (0.4225) cardinality_error_0_unscaled: 0.9375 (0.9336) loss_ce_1_unscaled: 0.4064 (0.4013) loss_bbox_1_unscaled: 0.0797 (0.0824) loss_giou_1_unscaled: 0.3945 (0.3983) cardinality_error_1_unscaled: 0.9375 (0.9492) loss_ce_2_unscaled: 0.3892 (0.3855) loss_bbox_2_unscaled: 0.0749 (0.0786) loss_giou_2_unscaled: 0.3952 (0.3872) cardinality_error_2_unscaled: 0.9375 (0.9492) loss_ce_3_unscaled: 0.3804 (0.3774) loss_bbox_3_unscaled: 0.0727 (0.0752) loss_giou_3_unscaled: 0.3745 (0.3807) cardinality_error_3_unscaled: 0.9375 (0.9453) loss_ce_4_unscaled: 0.3755 (0.3744) loss_bbox_4_unscaled: 0.0712 (0.0741) loss_giou_4_unscaled: 0.3842 (0.3815) cardinality_error_4_unscaled: 0.9375 (0.9492)
Test: [0/4] eta: 0:00:07 class_error: 56.25 loss: 8.6333 (8.6333) loss_ce: 0.3837 (0.3837) loss_bbox: 0.3196 (0.3196) loss_giou: 0.7372 (0.7372) loss_ce_0: 0.4155 (0.4155) loss_bbox_0: 0.3213 (0.3213) loss_giou_0: 0.7830 (0.7830) loss_ce_1: 0.4009 (0.4009) loss_bbox_1: 0.3276 (0.3276) loss_giou_1: 0.7545 (0.7545) loss_ce_2: 0.3911 (0.3911) loss_bbox_2: 0.3054 (0.3054) loss_giou_2: 0.7260 (0.7260) loss_ce_3: 0.3848 (0.3848) loss_bbox_3: 0.2974 (0.2974) loss_giou_3: 0.6882 (0.6882) loss_ce_4: 0.3848 (0.3848) loss_bbox_4: 0.2967 (0.2967) loss_giou_4: 0.7154 (0.7154) loss_ce_unscaled: 0.3837 (0.3837) class_error_unscaled: 56.2500 (56.2500) loss_bbox_unscaled: 0.0639 (0.0639) loss_giou_unscaled: 0.3686 (0.3686) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4155 (0.4155) loss_bbox_0_unscaled: 0.0643 (0.0643) loss_giou_0_unscaled: 0.3915 (0.3915) cardinality_error_0_unscaled: 1.0625 (1.0625) loss_ce_1_unscaled: 0.4009 (0.4009) loss_bbox_1_unscaled: 0.0655 (0.0655) loss_giou_1_unscaled: 0.3773 (0.3773) cardinality_error_1_unscaled: 1.0625 (1.0625) loss_ce_2_unscaled: 0.3911 (0.3911) loss_bbox_2_unscaled: 0.0611 (0.0611) loss_giou_2_unscaled: 0.3630 (0.3630) cardinality_error_2_unscaled: 0.9375 (0.9375) loss_ce_3_unscaled: 0.3848 (0.3848) loss_bbox_3_unscaled: 0.0595 (0.0595) loss_giou_3_unscaled: 0.3441 (0.3441) cardinality_error_3_unscaled: 0.9375 (0.9375) loss_ce_4_unscaled: 0.3848 (0.3848) loss_bbox_4_unscaled: 0.0593 (0.0593) loss_giou_4_unscaled: 0.3577 (0.3577) cardinality_error_4_unscaled: 0.9375 (0.9375) time: 1.9736 data: 1.1920 max mem: 12069
Test: [3/4] eta: 0:00:01 class_error: 62.50 loss: 8.6820 (8.8939) loss_ce: 0.3758 (0.3814) loss_bbox: 0.3196 (0.3450) loss_giou: 0.6579 (0.7060) loss_ce_0: 0.4288 (0.4265) loss_bbox_0: 0.3656 (0.3627) loss_giou_0: 0.7952 (0.7974) loss_ce_1: 0.4109 (0.4102) loss_bbox_1: 0.3735 (0.3707) loss_giou_1: 0.7545 (0.7673) loss_ce_2: 0.3952 (0.3968) loss_bbox_2: 0.3388 (0.3524) loss_giou_2: 0.6931 (0.7096) loss_ce_3: 0.3850 (0.3889) loss_bbox_3: 0.3291 (0.3454) loss_giou_3: 0.6820 (0.7054) loss_ce_4: 0.3816 (0.3854) loss_bbox_4: 0.3231 (0.3393) loss_giou_4: 0.6604 (0.7033) loss_ce_unscaled: 0.3758 (0.3814) class_error_unscaled: 37.5000 (46.8750) loss_bbox_unscaled: 0.0639 (0.0690) loss_giou_unscaled: 0.3289 (0.3530) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4288 (0.4265) loss_bbox_0_unscaled: 0.0731 (0.0725) loss_giou_0_unscaled: 0.3976 (0.3987) cardinality_error_0_unscaled: 1.0000 (1.0312) loss_ce_1_unscaled: 0.4109 (0.4102) loss_bbox_1_unscaled: 0.0747 (0.0741) loss_giou_1_unscaled: 0.3773 (0.3837) cardinality_error_1_unscaled: 1.0000 (1.0156) loss_ce_2_unscaled: 0.3952 (0.3968) loss_bbox_2_unscaled: 0.0678 (0.0705) loss_giou_2_unscaled: 0.3465 (0.3548) cardinality_error_2_unscaled: 1.0000 (0.9844) loss_ce_3_unscaled: 0.3850 (0.3889) loss_bbox_3_unscaled: 0.0658 (0.0691) loss_giou_3_unscaled: 0.3410 (0.3527) cardinality_error_3_unscaled: 1.0000 (0.9844) loss_ce_4_unscaled: 0.3816 (0.3854) loss_bbox_4_unscaled: 0.0646 (0.0679) loss_giou_4_unscaled: 0.3302 (0.3517) cardinality_error_4_unscaled: 1.0000 (0.9844) time: 1.0173 data: 0.3634 max mem: 12069
Test: Total time: 0:00:04 (1.0348 s / it)
Averaged stats: class_error: 62.50 loss: 8.6820 (8.8939) loss_ce: 0.3758 (0.3814) loss_bbox: 0.3196 (0.3450) loss_giou: 0.6579 (0.7060) loss_ce_0: 0.4288 (0.4265) loss_bbox_0: 0.3656 (0.3627) loss_giou_0: 0.7952 (0.7974) loss_ce_1: 0.4109 (0.4102) loss_bbox_1: 0.3735 (0.3707) loss_giou_1: 0.7545 (0.7673) loss_ce_2: 0.3952 (0.3968) loss_bbox_2: 0.3388 (0.3524) loss_giou_2: 0.6931 (0.7096) loss_ce_3: 0.3850 (0.3889) loss_bbox_3: 0.3291 (0.3454) loss_giou_3: 0.6820 (0.7054) loss_ce_4: 0.3816 (0.3854) loss_bbox_4: 0.3231 (0.3393) loss_giou_4: 0.6604 (0.7033) loss_ce_unscaled: 0.3758 (0.3814) class_error_unscaled: 37.5000 (46.8750) loss_bbox_unscaled: 0.0639 (0.0690) loss_giou_unscaled: 0.3289 (0.3530) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4288 (0.4265) loss_bbox_0_unscaled: 0.0731 (0.0725) loss_giou_0_unscaled: 0.3976 (0.3987) cardinality_error_0_unscaled: 1.0000 (1.0312) loss_ce_1_unscaled: 0.4109 (0.4102) loss_bbox_1_unscaled: 0.0747 (0.0741) loss_giou_1_unscaled: 0.3773 (0.3837) cardinality_error_1_unscaled: 1.0000 (1.0156) loss_ce_2_unscaled: 0.3952 (0.3968) loss_bbox_2_unscaled: 0.0678 (0.0705) loss_giou_2_unscaled: 0.3465 (0.3548) cardinality_error_2_unscaled: 1.0000 (0.9844) loss_ce_3_unscaled: 0.3850 (0.3889) loss_bbox_3_unscaled: 0.0658 (0.0691) loss_giou_3_unscaled: 0.3410 (0.3527) cardinality_error_3_unscaled: 1.0000 (0.9844) loss_ce_4_unscaled: 0.3816 (0.3854) loss_bbox_4_unscaled: 0.0646 (0.0679) loss_giou_4_unscaled: 0.3302 (0.3517) cardinality_error_4_unscaled: 1.0000 (0.9844)
Accumulating evaluation results...
DONE (t=0.06s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.002
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.010
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.012
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.002
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.003
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.012
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.091
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.201
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.358
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.151
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.197
Epoch: [22] [ 0/16] eta: 0:00:42 lr: 0.000010 class_error: 43.75 loss: 10.1169 (10.1169) loss_ce: 0.3879 (0.3879) loss_bbox: 0.4229 (0.4229) loss_giou: 0.7980 (0.7980) loss_ce_0: 0.4342 (0.4342) loss_bbox_0: 0.4740 (0.4740) loss_giou_0: 0.8729 (0.8729) loss_ce_1: 0.4192 (0.4192) loss_bbox_1: 0.4501 (0.4501) loss_giou_1: 0.8562 (0.8562) loss_ce_2: 0.3964 (0.3964) loss_bbox_2: 0.4294 (0.4294) loss_giou_2: 0.9056 (0.9056) loss_ce_3: 0.3932 (0.3932) loss_bbox_3: 0.4381 (0.4381) loss_giou_3: 0.8562 (0.8562) loss_ce_4: 0.3907 (0.3907) loss_bbox_4: 0.4061 (0.4061) loss_giou_4: 0.7860 (0.7860) loss_ce_unscaled: 0.3879 (0.3879) class_error_unscaled: 43.7500 (43.7500) loss_bbox_unscaled: 0.0846 (0.0846) loss_giou_unscaled: 0.3990 (0.3990) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4342 (0.4342) loss_bbox_0_unscaled: 0.0948 (0.0948) loss_giou_0_unscaled: 0.4364 (0.4364) cardinality_error_0_unscaled: 0.9375 (0.9375) loss_ce_1_unscaled: 0.4192 (0.4192) loss_bbox_1_unscaled: 0.0900 (0.0900) loss_giou_1_unscaled: 0.4281 (0.4281) cardinality_error_1_unscaled: 1.0000 (1.0000) loss_ce_2_unscaled: 0.3964 (0.3964) loss_bbox_2_unscaled: 0.0859 (0.0859) loss_giou_2_unscaled: 0.4528 (0.4528) cardinality_error_2_unscaled: 1.0000 (1.0000) loss_ce_3_unscaled: 0.3932 (0.3932) loss_bbox_3_unscaled: 0.0876 (0.0876) loss_giou_3_unscaled: 0.4281 (0.4281) cardinality_error_3_unscaled: 1.0000 (1.0000) loss_ce_4_unscaled: 0.3907 (0.3907) loss_bbox_4_unscaled: 0.0812 (0.0812) loss_giou_4_unscaled: 0.3930 (0.3930) cardinality_error_4_unscaled: 1.0000 (1.0000) time: 2.6580 data: 1.3512 max mem: 12069
Epoch: [22] [10/16] eta: 0:00:08 lr: 0.000010 class_error: 28.57 loss: 9.4653 (9.3868) loss_ce: 0.3663 (0.3654) loss_bbox: 0.3798 (0.3680) loss_giou: 0.7133 (0.7471) loss_ce_0: 0.4144 (0.4138) loss_bbox_0: 0.4497 (0.4348) loss_giou_0: 0.8697 (0.8517) loss_ce_1: 0.4002 (0.3973) loss_bbox_1: 0.4153 (0.4128) loss_giou_1: 0.8150 (0.8110) loss_ce_2: 0.3826 (0.3799) loss_bbox_2: 0.3819 (0.3993) loss_giou_2: 0.7903 (0.7880) loss_ce_3: 0.3750 (0.3737) loss_bbox_3: 0.3826 (0.3883) loss_giou_3: 0.7725 (0.7683) loss_ce_4: 0.3691 (0.3683) loss_bbox_4: 0.3716 (0.3658) loss_giou_4: 0.7329 (0.7533) loss_ce_unscaled: 0.3663 (0.3654) class_error_unscaled: 31.2500 (36.2013) loss_bbox_unscaled: 0.0760 (0.0736) loss_giou_unscaled: 0.3566 (0.3735) cardinality_error_unscaled: 0.9375 (0.9432) loss_ce_0_unscaled: 0.4144 (0.4138) loss_bbox_0_unscaled: 0.0899 (0.0870) loss_giou_0_unscaled: 0.4348 (0.4259) cardinality_error_0_unscaled: 0.9375 (0.9375) loss_ce_1_unscaled: 0.4002 (0.3973) loss_bbox_1_unscaled: 0.0831 (0.0826) loss_giou_1_unscaled: 0.4075 (0.4055) cardinality_error_1_unscaled: 0.9375 (0.9375) loss_ce_2_unscaled: 0.3826 (0.3799) loss_bbox_2_unscaled: 0.0764 (0.0799) loss_giou_2_unscaled: 0.3952 (0.3940) cardinality_error_2_unscaled: 0.9375 (0.9375) loss_ce_3_unscaled: 0.3750 (0.3737) loss_bbox_3_unscaled: 0.0765 (0.0777) loss_giou_3_unscaled: 0.3862 (0.3841) cardinality_error_3_unscaled: 0.9375 (0.9432) loss_ce_4_unscaled: 0.3691 (0.3683) loss_bbox_4_unscaled: 0.0743 (0.0732) loss_giou_4_unscaled: 0.3665 (0.3766) cardinality_error_4_unscaled: 0.9375 (0.9432) time: 1.3681 data: 0.1843 max mem: 12069
Epoch: [22] [15/16] eta: 0:00:01 lr: 0.000010 class_error: 37.50 loss: 9.0262 (9.2477) loss_ce: 0.3663 (0.3653) loss_bbox: 0.3554 (0.3599) loss_giou: 0.7133 (0.7398) loss_ce_0: 0.4136 (0.4147) loss_bbox_0: 0.4240 (0.4181) loss_giou_0: 0.8097 (0.8345) loss_ce_1: 0.3968 (0.3978) loss_bbox_1: 0.3792 (0.3990) loss_giou_1: 0.7705 (0.7965) loss_ce_2: 0.3802 (0.3806) loss_bbox_2: 0.3702 (0.3854) loss_giou_2: 0.7695 (0.7833) loss_ce_3: 0.3739 (0.3739) loss_bbox_3: 0.3583 (0.3741) loss_giou_3: 0.7349 (0.7543) loss_ce_4: 0.3693 (0.3694) loss_bbox_4: 0.3482 (0.3576) loss_giou_4: 0.7322 (0.7434) loss_ce_unscaled: 0.3663 (0.3653) class_error_unscaled: 37.5000 (38.3333) loss_bbox_unscaled: 0.0711 (0.0720) loss_giou_unscaled: 0.3566 (0.3699) cardinality_error_unscaled: 0.9375 (0.9414) loss_ce_0_unscaled: 0.4136 (0.4147) loss_bbox_0_unscaled: 0.0848 (0.0836) loss_giou_0_unscaled: 0.4048 (0.4173) cardinality_error_0_unscaled: 0.9375 (0.9414) loss_ce_1_unscaled: 0.3968 (0.3978) loss_bbox_1_unscaled: 0.0758 (0.0798) loss_giou_1_unscaled: 0.3852 (0.3982) cardinality_error_1_unscaled: 0.9375 (0.9375) loss_ce_2_unscaled: 0.3802 (0.3806) loss_bbox_2_unscaled: 0.0740 (0.0771) loss_giou_2_unscaled: 0.3847 (0.3916) cardinality_error_2_unscaled: 0.9375 (0.9375) loss_ce_3_unscaled: 0.3739 (0.3739) loss_bbox_3_unscaled: 0.0717 (0.0748) loss_giou_3_unscaled: 0.3675 (0.3771) cardinality_error_3_unscaled: 0.9375 (0.9414) loss_ce_4_unscaled: 0.3693 (0.3694) loss_bbox_4_unscaled: 0.0696 (0.0715) loss_giou_4_unscaled: 0.3661 (0.3717) cardinality_error_4_unscaled: 0.9375 (0.9414) time: 1.3137 data: 0.1453 max mem: 12069
Epoch: [22] Total time: 0:00:21 (1.3180 s / it)
Averaged stats: lr: 0.000010 class_error: 37.50 loss: 9.0262 (9.2477) loss_ce: 0.3663 (0.3653) loss_bbox: 0.3554 (0.3599) loss_giou: 0.7133 (0.7398) loss_ce_0: 0.4136 (0.4147) loss_bbox_0: 0.4240 (0.4181) loss_giou_0: 0.8097 (0.8345) loss_ce_1: 0.3968 (0.3978) loss_bbox_1: 0.3792 (0.3990) loss_giou_1: 0.7705 (0.7965) loss_ce_2: 0.3802 (0.3806) loss_bbox_2: 0.3702 (0.3854) loss_giou_2: 0.7695 (0.7833) loss_ce_3: 0.3739 (0.3739) loss_bbox_3: 0.3583 (0.3741) loss_giou_3: 0.7349 (0.7543) loss_ce_4: 0.3693 (0.3694) loss_bbox_4: 0.3482 (0.3576) loss_giou_4: 0.7322 (0.7434) loss_ce_unscaled: 0.3663 (0.3653) class_error_unscaled: 37.5000 (38.3333) loss_bbox_unscaled: 0.0711 (0.0720) loss_giou_unscaled: 0.3566 (0.3699) cardinality_error_unscaled: 0.9375 (0.9414) loss_ce_0_unscaled: 0.4136 (0.4147) loss_bbox_0_unscaled: 0.0848 (0.0836) loss_giou_0_unscaled: 0.4048 (0.4173) cardinality_error_0_unscaled: 0.9375 (0.9414) loss_ce_1_unscaled: 0.3968 (0.3978) loss_bbox_1_unscaled: 0.0758 (0.0798) loss_giou_1_unscaled: 0.3852 (0.3982) cardinality_error_1_unscaled: 0.9375 (0.9375) loss_ce_2_unscaled: 0.3802 (0.3806) loss_bbox_2_unscaled: 0.0740 (0.0771) loss_giou_2_unscaled: 0.3847 (0.3916) cardinality_error_2_unscaled: 0.9375 (0.9375) loss_ce_3_unscaled: 0.3739 (0.3739) loss_bbox_3_unscaled: 0.0717 (0.0748) loss_giou_3_unscaled: 0.3675 (0.3771) cardinality_error_3_unscaled: 0.9375 (0.9414) loss_ce_4_unscaled: 0.3693 (0.3694) loss_bbox_4_unscaled: 0.0696 (0.0715) loss_giou_4_unscaled: 0.3661 (0.3717) cardinality_error_4_unscaled: 0.9375 (0.9414)
Test: [0/4] eta: 0:00:07 class_error: 62.50 loss: 8.3649 (8.3649) loss_ce: 0.3836 (0.3836) loss_bbox: 0.3073 (0.3073) loss_giou: 0.6958 (0.6958) loss_ce_0: 0.4093 (0.4093) loss_bbox_0: 0.3042 (0.3042) loss_giou_0: 0.7459 (0.7459) loss_ce_1: 0.4068 (0.4068) loss_bbox_1: 0.2929 (0.2929) loss_giou_1: 0.7058 (0.7058) loss_ce_2: 0.3915 (0.3915) loss_bbox_2: 0.2918 (0.2918) loss_giou_2: 0.6770 (0.6770) loss_ce_3: 0.3888 (0.3888) loss_bbox_3: 0.3029 (0.3029) loss_giou_3: 0.6888 (0.6888) loss_ce_4: 0.3865 (0.3865) loss_bbox_4: 0.2940 (0.2940) loss_giou_4: 0.6922 (0.6922) loss_ce_unscaled: 0.3836 (0.3836) class_error_unscaled: 62.5000 (62.5000) loss_bbox_unscaled: 0.0615 (0.0615) loss_giou_unscaled: 0.3479 (0.3479) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4093 (0.4093) loss_bbox_0_unscaled: 0.0608 (0.0608) loss_giou_0_unscaled: 0.3730 (0.3730) cardinality_error_0_unscaled: 1.0000 (1.0000) loss_ce_1_unscaled: 0.4068 (0.4068) loss_bbox_1_unscaled: 0.0586 (0.0586) loss_giou_1_unscaled: 0.3529 (0.3529) cardinality_error_1_unscaled: 1.0000 (1.0000) loss_ce_2_unscaled: 0.3915 (0.3915) loss_bbox_2_unscaled: 0.0584 (0.0584) loss_giou_2_unscaled: 0.3385 (0.3385) cardinality_error_2_unscaled: 0.9375 (0.9375) loss_ce_3_unscaled: 0.3888 (0.3888) loss_bbox_3_unscaled: 0.0606 (0.0606) loss_giou_3_unscaled: 0.3444 (0.3444) cardinality_error_3_unscaled: 0.9375 (0.9375) loss_ce_4_unscaled: 0.3865 (0.3865) loss_bbox_4_unscaled: 0.0588 (0.0588) loss_giou_4_unscaled: 0.3461 (0.3461) cardinality_error_4_unscaled: 0.9375 (0.9375) time: 1.9838 data: 1.1826 max mem: 12069
Test: [3/4] eta: 0:00:01 class_error: 62.50 loss: 8.3649 (8.5785) loss_ce: 0.3737 (0.3820) loss_bbox: 0.3073 (0.3232) loss_giou: 0.6940 (0.6841) loss_ce_0: 0.4278 (0.4243) loss_bbox_0: 0.3640 (0.3530) loss_giou_0: 0.7459 (0.7517) loss_ce_1: 0.4090 (0.4096) loss_bbox_1: 0.3297 (0.3415) loss_giou_1: 0.7058 (0.7279) loss_ce_2: 0.3915 (0.3942) loss_bbox_2: 0.3229 (0.3279) loss_giou_2: 0.6470 (0.6733) loss_ce_3: 0.3887 (0.3898) loss_bbox_3: 0.3047 (0.3288) loss_giou_3: 0.6381 (0.6794) loss_ce_4: 0.3803 (0.3851) loss_bbox_4: 0.2940 (0.3151) loss_giou_4: 0.6797 (0.6877) loss_ce_unscaled: 0.3737 (0.3820) class_error_unscaled: 37.5000 (46.8750) loss_bbox_unscaled: 0.0615 (0.0646) loss_giou_unscaled: 0.3470 (0.3421) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4278 (0.4243) loss_bbox_0_unscaled: 0.0728 (0.0706) loss_giou_0_unscaled: 0.3730 (0.3758) cardinality_error_0_unscaled: 1.0000 (1.0156) loss_ce_1_unscaled: 0.4090 (0.4096) loss_bbox_1_unscaled: 0.0659 (0.0683) loss_giou_1_unscaled: 0.3529 (0.3640) cardinality_error_1_unscaled: 1.0000 (1.0000) loss_ce_2_unscaled: 0.3915 (0.3942) loss_bbox_2_unscaled: 0.0646 (0.0656) loss_giou_2_unscaled: 0.3235 (0.3366) cardinality_error_2_unscaled: 1.0000 (0.9844) loss_ce_3_unscaled: 0.3887 (0.3898) loss_bbox_3_unscaled: 0.0609 (0.0658) loss_giou_3_unscaled: 0.3190 (0.3397) cardinality_error_3_unscaled: 1.0000 (0.9844) loss_ce_4_unscaled: 0.3803 (0.3851) loss_bbox_4_unscaled: 0.0588 (0.0630) loss_giou_4_unscaled: 0.3399 (0.3439) cardinality_error_4_unscaled: 1.0000 (0.9844) time: 1.0626 data: 0.3973 max mem: 12069
Test: Total time: 0:00:04 (1.0781 s / it)
Averaged stats: class_error: 62.50 loss: 8.3649 (8.5785) loss_ce: 0.3737 (0.3820) loss_bbox: 0.3073 (0.3232) loss_giou: 0.6940 (0.6841) loss_ce_0: 0.4278 (0.4243) loss_bbox_0: 0.3640 (0.3530) loss_giou_0: 0.7459 (0.7517) loss_ce_1: 0.4090 (0.4096) loss_bbox_1: 0.3297 (0.3415) loss_giou_1: 0.7058 (0.7279) loss_ce_2: 0.3915 (0.3942) loss_bbox_2: 0.3229 (0.3279) loss_giou_2: 0.6470 (0.6733) loss_ce_3: 0.3887 (0.3898) loss_bbox_3: 0.3047 (0.3288) loss_giou_3: 0.6381 (0.6794) loss_ce_4: 0.3803 (0.3851) loss_bbox_4: 0.2940 (0.3151) loss_giou_4: 0.6797 (0.6877) loss_ce_unscaled: 0.3737 (0.3820) class_error_unscaled: 37.5000 (46.8750) loss_bbox_unscaled: 0.0615 (0.0646) loss_giou_unscaled: 0.3470 (0.3421) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4278 (0.4243) loss_bbox_0_unscaled: 0.0728 (0.0706) loss_giou_0_unscaled: 0.3730 (0.3758) cardinality_error_0_unscaled: 1.0000 (1.0156) loss_ce_1_unscaled: 0.4090 (0.4096) loss_bbox_1_unscaled: 0.0659 (0.0683) loss_giou_1_unscaled: 0.3529 (0.3640) cardinality_error_1_unscaled: 1.0000 (1.0000) loss_ce_2_unscaled: 0.3915 (0.3942) loss_bbox_2_unscaled: 0.0646 (0.0656) loss_giou_2_unscaled: 0.3235 (0.3366) cardinality_error_2_unscaled: 1.0000 (0.9844) loss_ce_3_unscaled: 0.3887 (0.3898) loss_bbox_3_unscaled: 0.0609 (0.0658) loss_giou_3_unscaled: 0.3190 (0.3397) cardinality_error_3_unscaled: 1.0000 (0.9844) loss_ce_4_unscaled: 0.3803 (0.3851) loss_bbox_4_unscaled: 0.0588 (0.0630) loss_giou_4_unscaled: 0.3399 (0.3439) cardinality_error_4_unscaled: 1.0000 (0.9844)
Accumulating evaluation results...
DONE (t=0.06s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.003
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.010
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.015
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.002
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.004
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.003
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.101
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.222
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.398
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.138
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.236
Epoch: [23] [ 0/16] eta: 0:00:37 lr: 0.000010 class_error: 43.75 loss: 9.0238 (9.0238) loss_ce: 0.3762 (0.3762) loss_bbox: 0.3712 (0.3712) loss_giou: 0.6779 (0.6779) loss_ce_0: 0.4116 (0.4116) loss_bbox_0: 0.4301 (0.4301) loss_giou_0: 0.7732 (0.7732) loss_ce_1: 0.4076 (0.4076) loss_bbox_1: 0.3971 (0.3971) loss_giou_1: 0.7590 (0.7590) loss_ce_2: 0.3900 (0.3900) loss_bbox_2: 0.3932 (0.3932) loss_giou_2: 0.7594 (0.7594) loss_ce_3: 0.3865 (0.3865) loss_bbox_3: 0.3679 (0.3679) loss_giou_3: 0.6668 (0.6668) loss_ce_4: 0.3805 (0.3805) loss_bbox_4: 0.3928 (0.3928) loss_giou_4: 0.6828 (0.6828) loss_ce_unscaled: 0.3762 (0.3762) class_error_unscaled: 43.7500 (43.7500) loss_bbox_unscaled: 0.0742 (0.0742) loss_giou_unscaled: 0.3390 (0.3390) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4116 (0.4116) loss_bbox_0_unscaled: 0.0860 (0.0860) loss_giou_0_unscaled: 0.3866 (0.3866) cardinality_error_0_unscaled: 1.0625 (1.0625) loss_ce_1_unscaled: 0.4076 (0.4076) loss_bbox_1_unscaled: 0.0794 (0.0794) loss_giou_1_unscaled: 0.3795 (0.3795) cardinality_error_1_unscaled: 1.0000 (1.0000) loss_ce_2_unscaled: 0.3900 (0.3900) loss_bbox_2_unscaled: 0.0786 (0.0786) loss_giou_2_unscaled: 0.3797 (0.3797) cardinality_error_2_unscaled: 1.0000 (1.0000) loss_ce_3_unscaled: 0.3865 (0.3865) loss_bbox_3_unscaled: 0.0736 (0.0736) loss_giou_3_unscaled: 0.3334 (0.3334) cardinality_error_3_unscaled: 1.0000 (1.0000) loss_ce_4_unscaled: 0.3805 (0.3805) loss_bbox_4_unscaled: 0.0786 (0.0786) loss_giou_4_unscaled: 0.3414 (0.3414) cardinality_error_4_unscaled: 1.0000 (1.0000) time: 2.3432 data: 1.1860 max mem: 12069
Epoch: [23] [10/16] eta: 0:00:08 lr: 0.000010 class_error: 46.67 loss: 9.2553 (9.3515) loss_ce: 0.3681 (0.3681) loss_bbox: 0.3712 (0.3707) loss_giou: 0.7695 (0.7682) loss_ce_0: 0.4174 (0.4152) loss_bbox_0: 0.4228 (0.4310) loss_giou_0: 0.8210 (0.8336) loss_ce_1: 0.3949 (0.3965) loss_bbox_1: 0.3968 (0.3880) loss_giou_1: 0.7495 (0.7782) loss_ce_2: 0.3814 (0.3839) loss_bbox_2: 0.3932 (0.3889) loss_giou_2: 0.7869 (0.7911) loss_ce_3: 0.3793 (0.3771) loss_bbox_3: 0.3679 (0.3779) loss_giou_3: 0.7482 (0.7693) loss_ce_4: 0.3737 (0.3720) loss_bbox_4: 0.3778 (0.3723) loss_giou_4: 0.7685 (0.7696) loss_ce_unscaled: 0.3681 (0.3681) class_error_unscaled: 40.0000 (41.9435) loss_bbox_unscaled: 0.0742 (0.0741) loss_giou_unscaled: 0.3848 (0.3841) cardinality_error_unscaled: 0.9375 (0.9489) loss_ce_0_unscaled: 0.4174 (0.4152) loss_bbox_0_unscaled: 0.0846 (0.0862) loss_giou_0_unscaled: 0.4105 (0.4168) cardinality_error_0_unscaled: 0.9375 (0.9659) loss_ce_1_unscaled: 0.3949 (0.3965) loss_bbox_1_unscaled: 0.0794 (0.0776) loss_giou_1_unscaled: 0.3747 (0.3891) cardinality_error_1_unscaled: 0.9375 (0.9489) loss_ce_2_unscaled: 0.3814 (0.3839) loss_bbox_2_unscaled: 0.0786 (0.0778) loss_giou_2_unscaled: 0.3935 (0.3956) cardinality_error_2_unscaled: 0.9375 (0.9375) loss_ce_3_unscaled: 0.3793 (0.3771) loss_bbox_3_unscaled: 0.0736 (0.0756) loss_giou_3_unscaled: 0.3741 (0.3847) cardinality_error_3_unscaled: 0.9375 (0.9432) loss_ce_4_unscaled: 0.3737 (0.3720) loss_bbox_4_unscaled: 0.0756 (0.0745) loss_giou_4_unscaled: 0.3842 (0.3848) cardinality_error_4_unscaled: 0.9375 (0.9432) time: 1.3516 data: 0.1825 max mem: 12069
Epoch: [23] [15/16] eta: 0:00:01 lr: 0.000010 class_error: 13.33 loss: 9.1480 (9.3180) loss_ce: 0.3711 (0.3714) loss_bbox: 0.3565 (0.3674) loss_giou: 0.7347 (0.7685) loss_ce_0: 0.4174 (0.4180) loss_bbox_0: 0.4088 (0.4273) loss_giou_0: 0.8139 (0.8168) loss_ce_1: 0.4076 (0.4006) loss_bbox_1: 0.3917 (0.3861) loss_giou_1: 0.7495 (0.7795) loss_ce_2: 0.3900 (0.3862) loss_bbox_2: 0.3780 (0.3796) loss_giou_2: 0.7795 (0.7840) loss_ce_3: 0.3826 (0.3794) loss_bbox_3: 0.3606 (0.3762) loss_giou_3: 0.7371 (0.7680) loss_ce_4: 0.3737 (0.3750) loss_bbox_4: 0.3471 (0.3651) loss_giou_4: 0.7376 (0.7689) loss_ce_unscaled: 0.3711 (0.3714) class_error_unscaled: 37.5000 (36.7007) loss_bbox_unscaled: 0.0713 (0.0735) loss_giou_unscaled: 0.3673 (0.3842) cardinality_error_unscaled: 1.0000 (0.9609) loss_ce_0_unscaled: 0.4174 (0.4180) loss_bbox_0_unscaled: 0.0818 (0.0855) loss_giou_0_unscaled: 0.4070 (0.4084) cardinality_error_0_unscaled: 0.9375 (0.9766) loss_ce_1_unscaled: 0.4076 (0.4006) loss_bbox_1_unscaled: 0.0783 (0.0772) loss_giou_1_unscaled: 0.3747 (0.3898) cardinality_error_1_unscaled: 0.9375 (0.9609) loss_ce_2_unscaled: 0.3900 (0.3862) loss_bbox_2_unscaled: 0.0756 (0.0759) loss_giou_2_unscaled: 0.3898 (0.3920) cardinality_error_2_unscaled: 0.9375 (0.9531) loss_ce_3_unscaled: 0.3826 (0.3794) loss_bbox_3_unscaled: 0.0721 (0.0752) loss_giou_3_unscaled: 0.3686 (0.3840) cardinality_error_3_unscaled: 0.9375 (0.9570) loss_ce_4_unscaled: 0.3737 (0.3750) loss_bbox_4_unscaled: 0.0694 (0.0730) loss_giou_4_unscaled: 0.3688 (0.3845) cardinality_error_4_unscaled: 0.9375 (0.9570) time: 1.3056 data: 0.1445 max mem: 12069
Epoch: [23] Total time: 0:00:20 (1.3100 s / it)
Averaged stats: lr: 0.000010 class_error: 13.33 loss: 9.1480 (9.3180) loss_ce: 0.3711 (0.3714) loss_bbox: 0.3565 (0.3674) loss_giou: 0.7347 (0.7685) loss_ce_0: 0.4174 (0.4180) loss_bbox_0: 0.4088 (0.4273) loss_giou_0: 0.8139 (0.8168) loss_ce_1: 0.4076 (0.4006) loss_bbox_1: 0.3917 (0.3861) loss_giou_1: 0.7495 (0.7795) loss_ce_2: 0.3900 (0.3862) loss_bbox_2: 0.3780 (0.3796) loss_giou_2: 0.7795 (0.7840) loss_ce_3: 0.3826 (0.3794) loss_bbox_3: 0.3606 (0.3762) loss_giou_3: 0.7371 (0.7680) loss_ce_4: 0.3737 (0.3750) loss_bbox_4: 0.3471 (0.3651) loss_giou_4: 0.7376 (0.7689) loss_ce_unscaled: 0.3711 (0.3714) class_error_unscaled: 37.5000 (36.7007) loss_bbox_unscaled: 0.0713 (0.0735) loss_giou_unscaled: 0.3673 (0.3842) cardinality_error_unscaled: 1.0000 (0.9609) loss_ce_0_unscaled: 0.4174 (0.4180) loss_bbox_0_unscaled: 0.0818 (0.0855) loss_giou_0_unscaled: 0.4070 (0.4084) cardinality_error_0_unscaled: 0.9375 (0.9766) loss_ce_1_unscaled: 0.4076 (0.4006) loss_bbox_1_unscaled: 0.0783 (0.0772) loss_giou_1_unscaled: 0.3747 (0.3898) cardinality_error_1_unscaled: 0.9375 (0.9609) loss_ce_2_unscaled: 0.3900 (0.3862) loss_bbox_2_unscaled: 0.0756 (0.0759) loss_giou_2_unscaled: 0.3898 (0.3920) cardinality_error_2_unscaled: 0.9375 (0.9531) loss_ce_3_unscaled: 0.3826 (0.3794) loss_bbox_3_unscaled: 0.0721 (0.0752) loss_giou_3_unscaled: 0.3686 (0.3840) cardinality_error_3_unscaled: 0.9375 (0.9570) loss_ce_4_unscaled: 0.3737 (0.3750) loss_bbox_4_unscaled: 0.0694 (0.0730) loss_giou_4_unscaled: 0.3688 (0.3845) cardinality_error_4_unscaled: 0.9375 (0.9570)
Test: [0/4] eta: 0:00:07 class_error: 56.25 loss: 8.3492 (8.3492) loss_ce: 0.3808 (0.3808) loss_bbox: 0.2939 (0.2939) loss_giou: 0.6846 (0.6846) loss_ce_0: 0.4142 (0.4142) loss_bbox_0: 0.3293 (0.3293) loss_giou_0: 0.7328 (0.7328) loss_ce_1: 0.3994 (0.3994) loss_bbox_1: 0.3188 (0.3188) loss_giou_1: 0.7126 (0.7126) loss_ce_2: 0.3881 (0.3881) loss_bbox_2: 0.3004 (0.3004) loss_giou_2: 0.6852 (0.6852) loss_ce_3: 0.3888 (0.3888) loss_bbox_3: 0.3000 (0.3000) loss_giou_3: 0.6877 (0.6877) loss_ce_4: 0.3830 (0.3830) loss_bbox_4: 0.2855 (0.2855) loss_giou_4: 0.6643 (0.6643) loss_ce_unscaled: 0.3808 (0.3808) class_error_unscaled: 56.2500 (56.2500) loss_bbox_unscaled: 0.0588 (0.0588) loss_giou_unscaled: 0.3423 (0.3423) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4142 (0.4142) loss_bbox_0_unscaled: 0.0659 (0.0659) loss_giou_0_unscaled: 0.3664 (0.3664) cardinality_error_0_unscaled: 1.0000 (1.0000) loss_ce_1_unscaled: 0.3994 (0.3994) loss_bbox_1_unscaled: 0.0638 (0.0638) loss_giou_1_unscaled: 0.3563 (0.3563) cardinality_error_1_unscaled: 1.0625 (1.0625) loss_ce_2_unscaled: 0.3881 (0.3881) loss_bbox_2_unscaled: 0.0601 (0.0601) loss_giou_2_unscaled: 0.3426 (0.3426) cardinality_error_2_unscaled: 1.0000 (1.0000) loss_ce_3_unscaled: 0.3888 (0.3888) loss_bbox_3_unscaled: 0.0600 (0.0600) loss_giou_3_unscaled: 0.3439 (0.3439) cardinality_error_3_unscaled: 1.0000 (1.0000) loss_ce_4_unscaled: 0.3830 (0.3830) loss_bbox_4_unscaled: 0.0571 (0.0571) loss_giou_4_unscaled: 0.3321 (0.3321) cardinality_error_4_unscaled: 1.0000 (1.0000) time: 1.9444 data: 1.1663 max mem: 12069
Test: [3/4] eta: 0:00:01 class_error: 56.25 loss: 8.5897 (8.7890) loss_ce: 0.3761 (0.3807) loss_bbox: 0.3295 (0.3356) loss_giou: 0.6846 (0.6931) loss_ce_0: 0.4264 (0.4235) loss_bbox_0: 0.3747 (0.3728) loss_giou_0: 0.7708 (0.7697) loss_ce_1: 0.4069 (0.4077) loss_bbox_1: 0.3355 (0.3519) loss_giou_1: 0.7511 (0.7713) loss_ce_2: 0.3917 (0.3951) loss_bbox_2: 0.3304 (0.3477) loss_giou_2: 0.6793 (0.7040) loss_ce_3: 0.3869 (0.3891) loss_bbox_3: 0.3351 (0.3336) loss_giou_3: 0.6804 (0.6969) loss_ce_4: 0.3829 (0.3852) loss_bbox_4: 0.3405 (0.3343) loss_giou_4: 0.6712 (0.6968) loss_ce_unscaled: 0.3761 (0.3807) class_error_unscaled: 37.5000 (45.3125) loss_bbox_unscaled: 0.0659 (0.0671) loss_giou_unscaled: 0.3423 (0.3466) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4264 (0.4235) loss_bbox_0_unscaled: 0.0749 (0.0746) loss_giou_0_unscaled: 0.3854 (0.3848) cardinality_error_0_unscaled: 1.0000 (1.0156) loss_ce_1_unscaled: 0.4069 (0.4077) loss_bbox_1_unscaled: 0.0671 (0.0704) loss_giou_1_unscaled: 0.3755 (0.3857) cardinality_error_1_unscaled: 1.0000 (1.0156) loss_ce_2_unscaled: 0.3917 (0.3951) loss_bbox_2_unscaled: 0.0661 (0.0695) loss_giou_2_unscaled: 0.3396 (0.3520) cardinality_error_2_unscaled: 1.0000 (1.0000) loss_ce_3_unscaled: 0.3869 (0.3891) loss_bbox_3_unscaled: 0.0670 (0.0667) loss_giou_3_unscaled: 0.3402 (0.3484) cardinality_error_3_unscaled: 1.0000 (1.0000) loss_ce_4_unscaled: 0.3829 (0.3852) loss_bbox_4_unscaled: 0.0681 (0.0669) loss_giou_4_unscaled: 0.3356 (0.3484) cardinality_error_4_unscaled: 1.0000 (1.0000) time: 1.0075 data: 0.3534 max mem: 12069
Test: Total time: 0:00:04 (1.0254 s / it)
Averaged stats: class_error: 56.25 loss: 8.5897 (8.7890) loss_ce: 0.3761 (0.3807) loss_bbox: 0.3295 (0.3356) loss_giou: 0.6846 (0.6931) loss_ce_0: 0.4264 (0.4235) loss_bbox_0: 0.3747 (0.3728) loss_giou_0: 0.7708 (0.7697) loss_ce_1: 0.4069 (0.4077) loss_bbox_1: 0.3355 (0.3519) loss_giou_1: 0.7511 (0.7713) loss_ce_2: 0.3917 (0.3951) loss_bbox_2: 0.3304 (0.3477) loss_giou_2: 0.6793 (0.7040) loss_ce_3: 0.3869 (0.3891) loss_bbox_3: 0.3351 (0.3336) loss_giou_3: 0.6804 (0.6969) loss_ce_4: 0.3829 (0.3852) loss_bbox_4: 0.3405 (0.3343) loss_giou_4: 0.6712 (0.6968) loss_ce_unscaled: 0.3761 (0.3807) class_error_unscaled: 37.5000 (45.3125) loss_bbox_unscaled: 0.0659 (0.0671) loss_giou_unscaled: 0.3423 (0.3466) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4264 (0.4235) loss_bbox_0_unscaled: 0.0749 (0.0746) loss_giou_0_unscaled: 0.3854 (0.3848) cardinality_error_0_unscaled: 1.0000 (1.0156) loss_ce_1_unscaled: 0.4069 (0.4077) loss_bbox_1_unscaled: 0.0671 (0.0704) loss_giou_1_unscaled: 0.3755 (0.3857) cardinality_error_1_unscaled: 1.0000 (1.0156) loss_ce_2_unscaled: 0.3917 (0.3951) loss_bbox_2_unscaled: 0.0661 (0.0695) loss_giou_2_unscaled: 0.3396 (0.3520) cardinality_error_2_unscaled: 1.0000 (1.0000) loss_ce_3_unscaled: 0.3869 (0.3891) loss_bbox_3_unscaled: 0.0670 (0.0667) loss_giou_3_unscaled: 0.3402 (0.3484) cardinality_error_3_unscaled: 1.0000 (1.0000) loss_ce_4_unscaled: 0.3829 (0.3852) loss_bbox_4_unscaled: 0.0681 (0.0669) loss_giou_4_unscaled: 0.3356 (0.3484) cardinality_error_4_unscaled: 1.0000 (1.0000)
Accumulating evaluation results...
DONE (t=0.06s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.003
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.009
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.012
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.004
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.021
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.101
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.221
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.383
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.138
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.235
Epoch: [24] [ 0/16] eta: 0:00:38 lr: 0.000010 class_error: 33.33 loss: 9.7169 (9.7169) loss_ce: 0.3665 (0.3665) loss_bbox: 0.3393 (0.3393) loss_giou: 0.8017 (0.8017) loss_ce_0: 0.4017 (0.4017) loss_bbox_0: 0.4562 (0.4562) loss_giou_0: 0.8884 (0.8884) loss_ce_1: 0.3967 (0.3967) loss_bbox_1: 0.4183 (0.4183) loss_giou_1: 0.9096 (0.9096) loss_ce_2: 0.3827 (0.3827) loss_bbox_2: 0.4158 (0.4158) loss_giou_2: 0.8398 (0.8398) loss_ce_3: 0.3736 (0.3736) loss_bbox_3: 0.3731 (0.3731) loss_giou_3: 0.8186 (0.8186) loss_ce_4: 0.3697 (0.3697) loss_bbox_4: 0.3514 (0.3514) loss_giou_4: 0.8137 (0.8137) loss_ce_unscaled: 0.3665 (0.3665) class_error_unscaled: 33.3333 (33.3333) loss_bbox_unscaled: 0.0679 (0.0679) loss_giou_unscaled: 0.4009 (0.4009) cardinality_error_unscaled: 0.9375 (0.9375) loss_ce_0_unscaled: 0.4017 (0.4017) loss_bbox_0_unscaled: 0.0912 (0.0912) loss_giou_0_unscaled: 0.4442 (0.4442) cardinality_error_0_unscaled: 0.9375 (0.9375) loss_ce_1_unscaled: 0.3967 (0.3967) loss_bbox_1_unscaled: 0.0837 (0.0837) loss_giou_1_unscaled: 0.4548 (0.4548) cardinality_error_1_unscaled: 0.9375 (0.9375) loss_ce_2_unscaled: 0.3827 (0.3827) loss_bbox_2_unscaled: 0.0832 (0.0832) loss_giou_2_unscaled: 0.4199 (0.4199) cardinality_error_2_unscaled: 0.9375 (0.9375) loss_ce_3_unscaled: 0.3736 (0.3736) loss_bbox_3_unscaled: 0.0746 (0.0746) loss_giou_3_unscaled: 0.4093 (0.4093) cardinality_error_3_unscaled: 0.9375 (0.9375) loss_ce_4_unscaled: 0.3697 (0.3697) loss_bbox_4_unscaled: 0.0703 (0.0703) loss_giou_4_unscaled: 0.4068 (0.4068) cardinality_error_4_unscaled: 0.9375 (0.9375) time: 2.4017 data: 1.0965 max mem: 12069
Epoch: [24] [10/16] eta: 0:00:08 lr: 0.000010 class_error: 12.50 loss: 8.9528 (9.1083) loss_ce: 0.3742 (0.3739) loss_bbox: 0.3633 (0.3494) loss_giou: 0.7232 (0.7401) loss_ce_0: 0.4188 (0.4179) loss_bbox_0: 0.4043 (0.4188) loss_giou_0: 0.8034 (0.8047) loss_ce_1: 0.4025 (0.4034) loss_bbox_1: 0.3563 (0.3739) loss_giou_1: 0.7558 (0.7685) loss_ce_2: 0.3865 (0.3883) loss_bbox_2: 0.3588 (0.3636) loss_giou_2: 0.7293 (0.7497) loss_ce_3: 0.3781 (0.3790) loss_bbox_3: 0.3574 (0.3571) loss_giou_3: 0.7353 (0.7422) loss_ce_4: 0.3786 (0.3778) loss_bbox_4: 0.3561 (0.3552) loss_giou_4: 0.7486 (0.7448) loss_ce_unscaled: 0.3742 (0.3739) class_error_unscaled: 33.3333 (32.1212) loss_bbox_unscaled: 0.0727 (0.0699) loss_giou_unscaled: 0.3616 (0.3701) cardinality_error_unscaled: 0.9375 (0.9602) loss_ce_0_unscaled: 0.4188 (0.4179) loss_bbox_0_unscaled: 0.0809 (0.0838) loss_giou_0_unscaled: 0.4017 (0.4023) cardinality_error_0_unscaled: 0.9375 (0.9602) loss_ce_1_unscaled: 0.4025 (0.4034) loss_bbox_1_unscaled: 0.0713 (0.0748) loss_giou_1_unscaled: 0.3779 (0.3843) cardinality_error_1_unscaled: 0.9375 (0.9545) loss_ce_2_unscaled: 0.3865 (0.3883) loss_bbox_2_unscaled: 0.0718 (0.0727) loss_giou_2_unscaled: 0.3647 (0.3748) cardinality_error_2_unscaled: 0.9375 (0.9489) loss_ce_3_unscaled: 0.3781 (0.3790) loss_bbox_3_unscaled: 0.0715 (0.0714) loss_giou_3_unscaled: 0.3677 (0.3711) cardinality_error_3_unscaled: 0.9375 (0.9602) loss_ce_4_unscaled: 0.3786 (0.3778) loss_bbox_4_unscaled: 0.0712 (0.0710) loss_giou_4_unscaled: 0.3743 (0.3724) cardinality_error_4_unscaled: 0.9375 (0.9602) time: 1.3371 data: 0.1633 max mem: 12069
Epoch: [24] [15/16] eta: 0:00:01 lr: 0.000010 class_error: 20.00 loss: 8.9528 (9.1035) loss_ce: 0.3721 (0.3747) loss_bbox: 0.3633 (0.3421) loss_giou: 0.7232 (0.7441) loss_ce_0: 0.4161 (0.4163) loss_bbox_0: 0.4043 (0.4066) loss_giou_0: 0.8034 (0.8068) loss_ce_1: 0.4025 (0.4031) loss_bbox_1: 0.3563 (0.3722) loss_giou_1: 0.7653 (0.7761) loss_ce_2: 0.3864 (0.3895) loss_bbox_2: 0.3588 (0.3596) loss_giou_2: 0.7462 (0.7532) loss_ce_3: 0.3793 (0.3808) loss_bbox_3: 0.3574 (0.3552) loss_giou_3: 0.7353 (0.7489) loss_ce_4: 0.3778 (0.3787) loss_bbox_4: 0.3561 (0.3460) loss_giou_4: 0.7486 (0.7497) loss_ce_unscaled: 0.3721 (0.3747) class_error_unscaled: 33.3333 (34.5833) loss_bbox_unscaled: 0.0727 (0.0684) loss_giou_unscaled: 0.3616 (0.3720) cardinality_error_unscaled: 0.9375 (0.9609) loss_ce_0_unscaled: 0.4161 (0.4163) loss_bbox_0_unscaled: 0.0809 (0.0813) loss_giou_0_unscaled: 0.4017 (0.4034) cardinality_error_0_unscaled: 0.9375 (0.9414) loss_ce_1_unscaled: 0.4025 (0.4031) loss_bbox_1_unscaled: 0.0713 (0.0744) loss_giou_1_unscaled: 0.3827 (0.3880) cardinality_error_1_unscaled: 0.9375 (0.9531) loss_ce_2_unscaled: 0.3864 (0.3895) loss_bbox_2_unscaled: 0.0718 (0.0719) loss_giou_2_unscaled: 0.3731 (0.3766) cardinality_error_2_unscaled: 0.9375 (0.9492) loss_ce_3_unscaled: 0.3793 (0.3808) loss_bbox_3_unscaled: 0.0715 (0.0710) loss_giou_3_unscaled: 0.3677 (0.3744) cardinality_error_3_unscaled: 0.9375 (0.9609) loss_ce_4_unscaled: 0.3778 (0.3787) loss_bbox_4_unscaled: 0.0712 (0.0692) loss_giou_4_unscaled: 0.3743 (0.3748) cardinality_error_4_unscaled: 0.9375 (0.9609) time: 1.2971 data: 0.1307 max mem: 12069
Epoch: [24] Total time: 0:00:20 (1.3019 s / it)
Averaged stats: lr: 0.000010 class_error: 20.00 loss: 8.9528 (9.1035) loss_ce: 0.3721 (0.3747) loss_bbox: 0.3633 (0.3421) loss_giou: 0.7232 (0.7441) loss_ce_0: 0.4161 (0.4163) loss_bbox_0: 0.4043 (0.4066) loss_giou_0: 0.8034 (0.8068) loss_ce_1: 0.4025 (0.4031) loss_bbox_1: 0.3563 (0.3722) loss_giou_1: 0.7653 (0.7761) loss_ce_2: 0.3864 (0.3895) loss_bbox_2: 0.3588 (0.3596) loss_giou_2: 0.7462 (0.7532) loss_ce_3: 0.3793 (0.3808) loss_bbox_3: 0.3574 (0.3552) loss_giou_3: 0.7353 (0.7489) loss_ce_4: 0.3778 (0.3787) loss_bbox_4: 0.3561 (0.3460) loss_giou_4: 0.7486 (0.7497) loss_ce_unscaled: 0.3721 (0.3747) class_error_unscaled: 33.3333 (34.5833) loss_bbox_unscaled: 0.0727 (0.0684) loss_giou_unscaled: 0.3616 (0.3720) cardinality_error_unscaled: 0.9375 (0.9609) loss_ce_0_unscaled: 0.4161 (0.4163) loss_bbox_0_unscaled: 0.0809 (0.0813) loss_giou_0_unscaled: 0.4017 (0.4034) cardinality_error_0_unscaled: 0.9375 (0.9414) loss_ce_1_unscaled: 0.4025 (0.4031) loss_bbox_1_unscaled: 0.0713 (0.0744) loss_giou_1_unscaled: 0.3827 (0.3880) cardinality_error_1_unscaled: 0.9375 (0.9531) loss_ce_2_unscaled: 0.3864 (0.3895) loss_bbox_2_unscaled: 0.0718 (0.0719) loss_giou_2_unscaled: 0.3731 (0.3766) cardinality_error_2_unscaled: 0.9375 (0.9492) loss_ce_3_unscaled: 0.3793 (0.3808) loss_bbox_3_unscaled: 0.0715 (0.0710) loss_giou_3_unscaled: 0.3677 (0.3744) cardinality_error_3_unscaled: 0.9375 (0.9609) loss_ce_4_unscaled: 0.3778 (0.3787) loss_bbox_4_unscaled: 0.0712 (0.0692) loss_giou_4_unscaled: 0.3743 (0.3748) cardinality_error_4_unscaled: 0.9375 (0.9609)
Test: [0/4] eta: 0:00:07 class_error: 56.25 loss: 8.1445 (8.1445) loss_ce: 0.3808 (0.3808) loss_bbox: 0.2688 (0.2688) loss_giou: 0.6458 (0.6458) loss_ce_0: 0.4098 (0.4098) loss_bbox_0: 0.3036 (0.3036) loss_giou_0: 0.7034 (0.7034) loss_ce_1: 0.3938 (0.3938) loss_bbox_1: 0.3039 (0.3039) loss_giou_1: 0.7160 (0.7160) loss_ce_2: 0.3916 (0.3916) loss_bbox_2: 0.2942 (0.2942) loss_giou_2: 0.6917 (0.6917) loss_ce_3: 0.3843 (0.3843) loss_bbox_3: 0.2920 (0.2920) loss_giou_3: 0.6701 (0.6701) loss_ce_4: 0.3828 (0.3828) loss_bbox_4: 0.2772 (0.2772) loss_giou_4: 0.6346 (0.6346) loss_ce_unscaled: 0.3808 (0.3808) class_error_unscaled: 56.2500 (56.2500) loss_bbox_unscaled: 0.0538 (0.0538) loss_giou_unscaled: 0.3229 (0.3229) cardinality_error_unscaled: 0.9375 (0.9375) loss_ce_0_unscaled: 0.4098 (0.4098) loss_bbox_0_unscaled: 0.0607 (0.0607) loss_giou_0_unscaled: 0.3517 (0.3517) cardinality_error_0_unscaled: 1.0625 (1.0625) loss_ce_1_unscaled: 0.3938 (0.3938) loss_bbox_1_unscaled: 0.0608 (0.0608) loss_giou_1_unscaled: 0.3580 (0.3580) cardinality_error_1_unscaled: 0.9375 (0.9375) loss_ce_2_unscaled: 0.3916 (0.3916) loss_bbox_2_unscaled: 0.0588 (0.0588) loss_giou_2_unscaled: 0.3459 (0.3459) cardinality_error_2_unscaled: 1.0000 (1.0000) loss_ce_3_unscaled: 0.3843 (0.3843) loss_bbox_3_unscaled: 0.0584 (0.0584) loss_giou_3_unscaled: 0.3350 (0.3350) cardinality_error_3_unscaled: 0.9375 (0.9375) loss_ce_4_unscaled: 0.3828 (0.3828) loss_bbox_4_unscaled: 0.0554 (0.0554) loss_giou_4_unscaled: 0.3173 (0.3173) cardinality_error_4_unscaled: 1.0000 (1.0000) time: 1.9672 data: 1.1667 max mem: 12069
Test: [3/4] eta: 0:00:01 class_error: 56.25 loss: 8.6286 (8.6097) loss_ce: 0.3774 (0.3812) loss_bbox: 0.2973 (0.3172) loss_giou: 0.6517 (0.6714) loss_ce_0: 0.4226 (0.4203) loss_bbox_0: 0.3491 (0.3519) loss_giou_0: 0.7243 (0.7458) loss_ce_1: 0.4082 (0.4046) loss_bbox_1: 0.3286 (0.3406) loss_giou_1: 0.7160 (0.7630) loss_ce_2: 0.3916 (0.3954) loss_bbox_2: 0.3351 (0.3303) loss_giou_2: 0.6951 (0.7123) loss_ce_3: 0.3843 (0.3876) loss_bbox_3: 0.3270 (0.3243) loss_giou_3: 0.6701 (0.6832) loss_ce_4: 0.3828 (0.3848) loss_bbox_4: 0.3026 (0.3198) loss_giou_4: 0.6557 (0.6761) loss_ce_unscaled: 0.3774 (0.3812) class_error_unscaled: 37.5000 (45.3125) loss_bbox_unscaled: 0.0595 (0.0634) loss_giou_unscaled: 0.3258 (0.3357) cardinality_error_unscaled: 1.0000 (0.9844) loss_ce_0_unscaled: 0.4226 (0.4203) loss_bbox_0_unscaled: 0.0698 (0.0704) loss_giou_0_unscaled: 0.3622 (0.3729) cardinality_error_0_unscaled: 1.0000 (1.0469) loss_ce_1_unscaled: 0.4082 (0.4046) loss_bbox_1_unscaled: 0.0657 (0.0681) loss_giou_1_unscaled: 0.3580 (0.3815) cardinality_error_1_unscaled: 1.0000 (0.9844) loss_ce_2_unscaled: 0.3916 (0.3954) loss_bbox_2_unscaled: 0.0670 (0.0661) loss_giou_2_unscaled: 0.3475 (0.3561) cardinality_error_2_unscaled: 1.0000 (1.0000) loss_ce_3_unscaled: 0.3843 (0.3876) loss_bbox_3_unscaled: 0.0654 (0.0649) loss_giou_3_unscaled: 0.3350 (0.3416) cardinality_error_3_unscaled: 1.0000 (0.9844) loss_ce_4_unscaled: 0.3828 (0.3848) loss_bbox_4_unscaled: 0.0605 (0.0640) loss_giou_4_unscaled: 0.3279 (0.3380) cardinality_error_4_unscaled: 1.0000 (1.0000) time: 1.0187 data: 0.3508 max mem: 12069
Test: Total time: 0:00:04 (1.0351 s / it)
Averaged stats: class_error: 56.25 loss: 8.6286 (8.6097) loss_ce: 0.3774 (0.3812) loss_bbox: 0.2973 (0.3172) loss_giou: 0.6517 (0.6714) loss_ce_0: 0.4226 (0.4203) loss_bbox_0: 0.3491 (0.3519) loss_giou_0: 0.7243 (0.7458) loss_ce_1: 0.4082 (0.4046) loss_bbox_1: 0.3286 (0.3406) loss_giou_1: 0.7160 (0.7630) loss_ce_2: 0.3916 (0.3954) loss_bbox_2: 0.3351 (0.3303) loss_giou_2: 0.6951 (0.7123) loss_ce_3: 0.3843 (0.3876) loss_bbox_3: 0.3270 (0.3243) loss_giou_3: 0.6701 (0.6832) loss_ce_4: 0.3828 (0.3848) loss_bbox_4: 0.3026 (0.3198) loss_giou_4: 0.6557 (0.6761) loss_ce_unscaled: 0.3774 (0.3812) class_error_unscaled: 37.5000 (45.3125) loss_bbox_unscaled: 0.0595 (0.0634) loss_giou_unscaled: 0.3258 (0.3357) cardinality_error_unscaled: 1.0000 (0.9844) loss_ce_0_unscaled: 0.4226 (0.4203) loss_bbox_0_unscaled: 0.0698 (0.0704) loss_giou_0_unscaled: 0.3622 (0.3729) cardinality_error_0_unscaled: 1.0000 (1.0469) loss_ce_1_unscaled: 0.4082 (0.4046) loss_bbox_1_unscaled: 0.0657 (0.0681) loss_giou_1_unscaled: 0.3580 (0.3815) cardinality_error_1_unscaled: 1.0000 (0.9844) loss_ce_2_unscaled: 0.3916 (0.3954) loss_bbox_2_unscaled: 0.0670 (0.0661) loss_giou_2_unscaled: 0.3475 (0.3561) cardinality_error_2_unscaled: 1.0000 (1.0000) loss_ce_3_unscaled: 0.3843 (0.3876) loss_bbox_3_unscaled: 0.0654 (0.0649) loss_giou_3_unscaled: 0.3350 (0.3416) cardinality_error_3_unscaled: 1.0000 (0.9844) loss_ce_4_unscaled: 0.3828 (0.3848) loss_bbox_4_unscaled: 0.0605 (0.0640) loss_giou_4_unscaled: 0.3279 (0.3380) cardinality_error_4_unscaled: 1.0000 (1.0000)
Accumulating evaluation results...
DONE (t=0.06s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.003
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.009
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.013
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.002
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.004
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.014
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.115
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.237
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.378
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.157
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.251
Epoch: [25] [ 0/16] eta: 0:00:38 lr: 0.000010 class_error: 35.71 loss: 9.0425 (9.0425) loss_ce: 0.3532 (0.3532) loss_bbox: 0.3510 (0.3510) loss_giou: 0.6541 (0.6541) loss_ce_0: 0.4046 (0.4046) loss_bbox_0: 0.4267 (0.4267) loss_giou_0: 0.8993 (0.8993) loss_ce_1: 0.3802 (0.3802) loss_bbox_1: 0.4387 (0.4387) loss_giou_1: 0.8309 (0.8309) loss_ce_2: 0.3682 (0.3682) loss_bbox_2: 0.3828 (0.3828) loss_giou_2: 0.7791 (0.7791) loss_ce_3: 0.3554 (0.3554) loss_bbox_3: 0.3284 (0.3284) loss_giou_3: 0.7543 (0.7543) loss_ce_4: 0.3565 (0.3565) loss_bbox_4: 0.3179 (0.3179) loss_giou_4: 0.6611 (0.6611) loss_ce_unscaled: 0.3532 (0.3532) class_error_unscaled: 35.7143 (35.7143) loss_bbox_unscaled: 0.0702 (0.0702) loss_giou_unscaled: 0.3271 (0.3271) cardinality_error_unscaled: 0.8750 (0.8750) loss_ce_0_unscaled: 0.4046 (0.4046) loss_bbox_0_unscaled: 0.0853 (0.0853) loss_giou_0_unscaled: 0.4497 (0.4497) cardinality_error_0_unscaled: 1.1250 (1.1250) loss_ce_1_unscaled: 0.3802 (0.3802) loss_bbox_1_unscaled: 0.0877 (0.0877) loss_giou_1_unscaled: 0.4154 (0.4154) cardinality_error_1_unscaled: 0.8750 (0.8750) loss_ce_2_unscaled: 0.3682 (0.3682) loss_bbox_2_unscaled: 0.0766 (0.0766) loss_giou_2_unscaled: 0.3895 (0.3895) cardinality_error_2_unscaled: 0.8750 (0.8750) loss_ce_3_unscaled: 0.3554 (0.3554) loss_bbox_3_unscaled: 0.0657 (0.0657) loss_giou_3_unscaled: 0.3772 (0.3772) cardinality_error_3_unscaled: 0.8750 (0.8750) loss_ce_4_unscaled: 0.3565 (0.3565) loss_bbox_4_unscaled: 0.0636 (0.0636) loss_giou_4_unscaled: 0.3306 (0.3306) cardinality_error_4_unscaled: 0.8750 (0.8750) time: 2.3777 data: 1.0345 max mem: 12069
Epoch: [25] [10/16] eta: 0:00:08 lr: 0.000010 class_error: 25.00 loss: 9.0423 (9.0925) loss_ce: 0.3608 (0.3649) loss_bbox: 0.3439 (0.3494) loss_giou: 0.7285 (0.7316) loss_ce_0: 0.4180 (0.4157) loss_bbox_0: 0.4335 (0.4281) loss_giou_0: 0.8345 (0.8342) loss_ce_1: 0.3935 (0.3935) loss_bbox_1: 0.3471 (0.3829) loss_giou_1: 0.7689 (0.7773) loss_ce_2: 0.3791 (0.3792) loss_bbox_2: 0.3771 (0.3662) loss_giou_2: 0.7431 (0.7532) loss_ce_3: 0.3717 (0.3695) loss_bbox_3: 0.3387 (0.3544) loss_giou_3: 0.7406 (0.7399) loss_ce_4: 0.3669 (0.3668) loss_bbox_4: 0.3275 (0.3455) loss_giou_4: 0.7353 (0.7402) loss_ce_unscaled: 0.3608 (0.3649) class_error_unscaled: 33.3333 (34.8106) loss_bbox_unscaled: 0.0688 (0.0699) loss_giou_unscaled: 0.3643 (0.3658) cardinality_error_unscaled: 0.9375 (0.9375) loss_ce_0_unscaled: 0.4180 (0.4157) loss_bbox_0_unscaled: 0.0867 (0.0856) loss_giou_0_unscaled: 0.4172 (0.4171) cardinality_error_0_unscaled: 1.0000 (1.0170) loss_ce_1_unscaled: 0.3935 (0.3935) loss_bbox_1_unscaled: 0.0694 (0.0766) loss_giou_1_unscaled: 0.3845 (0.3887) cardinality_error_1_unscaled: 0.9375 (0.9318) loss_ce_2_unscaled: 0.3791 (0.3792) loss_bbox_2_unscaled: 0.0754 (0.0732) loss_giou_2_unscaled: 0.3715 (0.3766) cardinality_error_2_unscaled: 0.9375 (0.9375) loss_ce_3_unscaled: 0.3717 (0.3695) loss_bbox_3_unscaled: 0.0677 (0.0709) loss_giou_3_unscaled: 0.3703 (0.3700) cardinality_error_3_unscaled: 0.9375 (0.9375) loss_ce_4_unscaled: 0.3669 (0.3668) loss_bbox_4_unscaled: 0.0655 (0.0691) loss_giou_4_unscaled: 0.3677 (0.3701) cardinality_error_4_unscaled: 0.9375 (0.9375) time: 1.3334 data: 0.1550 max mem: 12069
Epoch: [25] [15/16] eta: 0:00:01 lr: 0.000010 class_error: 25.00 loss: 9.0108 (9.0410) loss_ce: 0.3657 (0.3685) loss_bbox: 0.3367 (0.3412) loss_giou: 0.7176 (0.7210) loss_ce_0: 0.4084 (0.4153) loss_bbox_0: 0.4267 (0.4182) loss_giou_0: 0.8235 (0.8315) loss_ce_1: 0.3935 (0.3953) loss_bbox_1: 0.3471 (0.3736) loss_giou_1: 0.7567 (0.7767) loss_ce_2: 0.3839 (0.3820) loss_bbox_2: 0.3434 (0.3602) loss_giou_2: 0.7326 (0.7520) loss_ce_3: 0.3736 (0.3741) loss_bbox_3: 0.3284 (0.3477) loss_giou_3: 0.7406 (0.7389) loss_ce_4: 0.3718 (0.3712) loss_bbox_4: 0.3275 (0.3404) loss_giou_4: 0.7287 (0.7331) loss_ce_unscaled: 0.3657 (0.3685) class_error_unscaled: 35.7143 (36.0417) loss_bbox_unscaled: 0.0673 (0.0682) loss_giou_unscaled: 0.3588 (0.3605) cardinality_error_unscaled: 0.9375 (0.9492) loss_ce_0_unscaled: 0.4084 (0.4153) loss_bbox_0_unscaled: 0.0853 (0.0836) loss_giou_0_unscaled: 0.4117 (0.4157) cardinality_error_0_unscaled: 1.0000 (1.0078) loss_ce_1_unscaled: 0.3935 (0.3953) loss_bbox_1_unscaled: 0.0694 (0.0747) loss_giou_1_unscaled: 0.3784 (0.3883) cardinality_error_1_unscaled: 0.9375 (0.9414) loss_ce_2_unscaled: 0.3839 (0.3820) loss_bbox_2_unscaled: 0.0687 (0.0720) loss_giou_2_unscaled: 0.3663 (0.3760) cardinality_error_2_unscaled: 0.9375 (0.9492) loss_ce_3_unscaled: 0.3736 (0.3741) loss_bbox_3_unscaled: 0.0657 (0.0695) loss_giou_3_unscaled: 0.3703 (0.3694) cardinality_error_3_unscaled: 0.9375 (0.9492) loss_ce_4_unscaled: 0.3718 (0.3712) loss_bbox_4_unscaled: 0.0655 (0.0681) loss_giou_4_unscaled: 0.3644 (0.3666) cardinality_error_4_unscaled: 0.9375 (0.9492) time: 1.2856 data: 0.1248 max mem: 12069
Epoch: [25] Total time: 0:00:20 (1.2899 s / it)
Averaged stats: lr: 0.000010 class_error: 25.00 loss: 9.0108 (9.0410) loss_ce: 0.3657 (0.3685) loss_bbox: 0.3367 (0.3412) loss_giou: 0.7176 (0.7210) loss_ce_0: 0.4084 (0.4153) loss_bbox_0: 0.4267 (0.4182) loss_giou_0: 0.8235 (0.8315) loss_ce_1: 0.3935 (0.3953) loss_bbox_1: 0.3471 (0.3736) loss_giou_1: 0.7567 (0.7767) loss_ce_2: 0.3839 (0.3820) loss_bbox_2: 0.3434 (0.3602) loss_giou_2: 0.7326 (0.7520) loss_ce_3: 0.3736 (0.3741) loss_bbox_3: 0.3284 (0.3477) loss_giou_3: 0.7406 (0.7389) loss_ce_4: 0.3718 (0.3712) loss_bbox_4: 0.3275 (0.3404) loss_giou_4: 0.7287 (0.7331) loss_ce_unscaled: 0.3657 (0.3685) class_error_unscaled: 35.7143 (36.0417) loss_bbox_unscaled: 0.0673 (0.0682) loss_giou_unscaled: 0.3588 (0.3605) cardinality_error_unscaled: 0.9375 (0.9492) loss_ce_0_unscaled: 0.4084 (0.4153) loss_bbox_0_unscaled: 0.0853 (0.0836) loss_giou_0_unscaled: 0.4117 (0.4157) cardinality_error_0_unscaled: 1.0000 (1.0078) loss_ce_1_unscaled: 0.3935 (0.3953) loss_bbox_1_unscaled: 0.0694 (0.0747) loss_giou_1_unscaled: 0.3784 (0.3883) cardinality_error_1_unscaled: 0.9375 (0.9414) loss_ce_2_unscaled: 0.3839 (0.3820) loss_bbox_2_unscaled: 0.0687 (0.0720) loss_giou_2_unscaled: 0.3663 (0.3760) cardinality_error_2_unscaled: 0.9375 (0.9492) loss_ce_3_unscaled: 0.3736 (0.3741) loss_bbox_3_unscaled: 0.0657 (0.0695) loss_giou_3_unscaled: 0.3703 (0.3694) cardinality_error_3_unscaled: 0.9375 (0.9492) loss_ce_4_unscaled: 0.3718 (0.3712) loss_bbox_4_unscaled: 0.0655 (0.0681) loss_giou_4_unscaled: 0.3644 (0.3666) cardinality_error_4_unscaled: 0.9375 (0.9492)
Test: [0/4] eta: 0:00:08 class_error: 50.00 loss: 8.0553 (8.0553) loss_ce: 0.3768 (0.3768) loss_bbox: 0.2717 (0.2717) loss_giou: 0.6293 (0.6293) loss_ce_0: 0.4071 (0.4071) loss_bbox_0: 0.3002 (0.3002) loss_giou_0: 0.7114 (0.7114) loss_ce_1: 0.3947 (0.3947) loss_bbox_1: 0.3054 (0.3054) loss_giou_1: 0.6963 (0.6963) loss_ce_2: 0.3856 (0.3856) loss_bbox_2: 0.2926 (0.2926) loss_giou_2: 0.6809 (0.6809) loss_ce_3: 0.3803 (0.3803) loss_bbox_3: 0.2731 (0.2731) loss_giou_3: 0.6680 (0.6680) loss_ce_4: 0.3791 (0.3791) loss_bbox_4: 0.2708 (0.2708) loss_giou_4: 0.6319 (0.6319) loss_ce_unscaled: 0.3768 (0.3768) class_error_unscaled: 50.0000 (50.0000) loss_bbox_unscaled: 0.0543 (0.0543) loss_giou_unscaled: 0.3147 (0.3147) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4071 (0.4071) loss_bbox_0_unscaled: 0.0600 (0.0600) loss_giou_0_unscaled: 0.3557 (0.3557) cardinality_error_0_unscaled: 1.0625 (1.0625) loss_ce_1_unscaled: 0.3947 (0.3947) loss_bbox_1_unscaled: 0.0611 (0.0611) loss_giou_1_unscaled: 0.3481 (0.3481) cardinality_error_1_unscaled: 0.9375 (0.9375) loss_ce_2_unscaled: 0.3856 (0.3856) loss_bbox_2_unscaled: 0.0585 (0.0585) loss_giou_2_unscaled: 0.3405 (0.3405) cardinality_error_2_unscaled: 1.0000 (1.0000) loss_ce_3_unscaled: 0.3803 (0.3803) loss_bbox_3_unscaled: 0.0546 (0.0546) loss_giou_3_unscaled: 0.3340 (0.3340) cardinality_error_3_unscaled: 0.9375 (0.9375) loss_ce_4_unscaled: 0.3791 (0.3791) loss_bbox_4_unscaled: 0.0542 (0.0542) loss_giou_4_unscaled: 0.3159 (0.3159) cardinality_error_4_unscaled: 0.9375 (0.9375) time: 2.0371 data: 1.2282 max mem: 12069
Test: [3/4] eta: 0:00:01 class_error: 56.25 loss: 8.4433 (8.4623) loss_ce: 0.3721 (0.3770) loss_bbox: 0.3049 (0.3116) loss_giou: 0.6387 (0.6583) loss_ce_0: 0.4216 (0.4194) loss_bbox_0: 0.3334 (0.3469) loss_giou_0: 0.7351 (0.7447) loss_ce_1: 0.4034 (0.4020) loss_bbox_1: 0.3185 (0.3456) loss_giou_1: 0.6963 (0.7461) loss_ce_2: 0.3856 (0.3925) loss_bbox_2: 0.3351 (0.3279) loss_giou_2: 0.6724 (0.6851) loss_ce_3: 0.3803 (0.3848) loss_bbox_3: 0.3116 (0.3073) loss_giou_3: 0.6423 (0.6602) loss_ce_4: 0.3789 (0.3805) loss_bbox_4: 0.3159 (0.3086) loss_giou_4: 0.6563 (0.6638) loss_ce_unscaled: 0.3721 (0.3770) class_error_unscaled: 31.2500 (42.1875) loss_bbox_unscaled: 0.0610 (0.0623) loss_giou_unscaled: 0.3194 (0.3291) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4216 (0.4194) loss_bbox_0_unscaled: 0.0667 (0.0694) loss_giou_0_unscaled: 0.3676 (0.3724) cardinality_error_0_unscaled: 1.0000 (1.0469) loss_ce_1_unscaled: 0.4034 (0.4020) loss_bbox_1_unscaled: 0.0637 (0.0691) loss_giou_1_unscaled: 0.3481 (0.3730) cardinality_error_1_unscaled: 1.0000 (0.9844) loss_ce_2_unscaled: 0.3856 (0.3925) loss_bbox_2_unscaled: 0.0670 (0.0656) loss_giou_2_unscaled: 0.3362 (0.3426) cardinality_error_2_unscaled: 1.0000 (1.0000) loss_ce_3_unscaled: 0.3803 (0.3848) loss_bbox_3_unscaled: 0.0623 (0.0615) loss_giou_3_unscaled: 0.3211 (0.3301) cardinality_error_3_unscaled: 1.0000 (0.9844) loss_ce_4_unscaled: 0.3789 (0.3805) loss_bbox_4_unscaled: 0.0632 (0.0617) loss_giou_4_unscaled: 0.3281 (0.3319) cardinality_error_4_unscaled: 1.0000 (0.9844) time: 1.0261 data: 0.3686 max mem: 12069
Test: Total time: 0:00:04 (1.0445 s / it)
Averaged stats: class_error: 56.25 loss: 8.4433 (8.4623) loss_ce: 0.3721 (0.3770) loss_bbox: 0.3049 (0.3116) loss_giou: 0.6387 (0.6583) loss_ce_0: 0.4216 (0.4194) loss_bbox_0: 0.3334 (0.3469) loss_giou_0: 0.7351 (0.7447) loss_ce_1: 0.4034 (0.4020) loss_bbox_1: 0.3185 (0.3456) loss_giou_1: 0.6963 (0.7461) loss_ce_2: 0.3856 (0.3925) loss_bbox_2: 0.3351 (0.3279) loss_giou_2: 0.6724 (0.6851) loss_ce_3: 0.3803 (0.3848) loss_bbox_3: 0.3116 (0.3073) loss_giou_3: 0.6423 (0.6602) loss_ce_4: 0.3789 (0.3805) loss_bbox_4: 0.3159 (0.3086) loss_giou_4: 0.6563 (0.6638) loss_ce_unscaled: 0.3721 (0.3770) class_error_unscaled: 31.2500 (42.1875) loss_bbox_unscaled: 0.0610 (0.0623) loss_giou_unscaled: 0.3194 (0.3291) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4216 (0.4194) loss_bbox_0_unscaled: 0.0667 (0.0694) loss_giou_0_unscaled: 0.3676 (0.3724) cardinality_error_0_unscaled: 1.0000 (1.0469) loss_ce_1_unscaled: 0.4034 (0.4020) loss_bbox_1_unscaled: 0.0637 (0.0691) loss_giou_1_unscaled: 0.3481 (0.3730) cardinality_error_1_unscaled: 1.0000 (0.9844) loss_ce_2_unscaled: 0.3856 (0.3925) loss_bbox_2_unscaled: 0.0670 (0.0656) loss_giou_2_unscaled: 0.3362 (0.3426) cardinality_error_2_unscaled: 1.0000 (1.0000) loss_ce_3_unscaled: 0.3803 (0.3848) loss_bbox_3_unscaled: 0.0623 (0.0615) loss_giou_3_unscaled: 0.3211 (0.3301) cardinality_error_3_unscaled: 1.0000 (0.9844) loss_ce_4_unscaled: 0.3789 (0.3805) loss_bbox_4_unscaled: 0.0632 (0.0617) loss_giou_4_unscaled: 0.3281 (0.3319) cardinality_error_4_unscaled: 1.0000 (0.9844)
Accumulating evaluation results...
DONE (t=0.07s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.003
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.010
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.001
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.017
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.002
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.004
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.017
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.121
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.243
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.440
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.169
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.240
Epoch: [26] [ 0/16] eta: 0:00:41 lr: 0.000010 class_error: 31.25 loss: 8.4646 (8.4646) loss_ce: 0.3842 (0.3842) loss_bbox: 0.2859 (0.2859) loss_giou: 0.6947 (0.6947) loss_ce_0: 0.4307 (0.4307) loss_bbox_0: 0.3141 (0.3141) loss_giou_0: 0.7597 (0.7597) loss_ce_1: 0.4070 (0.4070) loss_bbox_1: 0.3135 (0.3135) loss_giou_1: 0.7449 (0.7449) loss_ce_2: 0.4067 (0.4067) loss_bbox_2: 0.2760 (0.2760) loss_giou_2: 0.7034 (0.7034) loss_ce_3: 0.3843 (0.3843) loss_bbox_3: 0.2865 (0.2865) loss_giou_3: 0.7110 (0.7110) loss_ce_4: 0.3860 (0.3860) loss_bbox_4: 0.2852 (0.2852) loss_giou_4: 0.6907 (0.6907) loss_ce_unscaled: 0.3842 (0.3842) class_error_unscaled: 31.2500 (31.2500) loss_bbox_unscaled: 0.0572 (0.0572) loss_giou_unscaled: 0.3474 (0.3474) cardinality_error_unscaled: 1.0000 (1.0000) loss_ce_0_unscaled: 0.4307 (0.4307) loss_bbox_0_unscaled: 0.0628 (0.0628) loss_giou_0_unscaled: 0.3799 (0.3799) cardinality_error_0_unscaled: 1.0000 (1.0000) loss_ce_1_unscaled: 0.4070 (0.4070) loss_bbox_1_unscaled: 0.0627 (0.0627) loss_giou_1_unscaled: 0.3724 (0.3724) cardinality_error_1_unscaled: 1.0000 (1.0000) loss_ce_2_unscaled: 0.4067 (0.4067) loss_bbox_2_unscaled: 0.0552 (0.0552) loss_giou_2_unscaled: 0.3517 (0.3517) cardinality_error_2_unscaled: 1.0000 (1.0000) loss_ce_3_unscaled: 0.3843 (0.3843) loss_bbox_3_unscaled: 0.0573 (0.0573) loss_giou_3_unscaled: 0.3555 (0.3555) cardinality_error_3_unscaled: 1.0000 (1.0000) loss_ce_4_unscaled: 0.3860 (0.3860) loss_bbox_4_unscaled: 0.0570 (0.0570) loss_giou_4_unscaled: 0.3453 (0.3453) cardinality_error_4_unscaled: 1.0000 (1.0000) time: 2.5964 data: 1.3184 max mem: 12069
Epoch: [26] [10/16] eta: 0:00:07 lr: 0.000010 class_error: 31.25 loss: 9.3528 (9.2345) loss_ce: 0.3794 (0.3629) loss_bbox: 0.3737 (0.3738) loss_giou: 0.7380 (0.7387) loss_ce_0: 0.4094 (0.4092) loss_bbox_0: 0.4131 (0.4293) loss_giou_0: 0.8041 (0.8203) loss_ce_1: 0.3996 (0.3910) loss_bbox_1: 0.4074 (0.4085) loss_giou_1: 0.7997 (0.7812) loss_ce_2: 0.3847 (0.3777) loss_bbox_2: 0.3966 (0.3828) loss_giou_2: 0.8110 (0.7766) loss_ce_3: 0.3756 (0.3689) loss_bbox_3: 0.3861 (0.3841) loss_giou_3: 0.7313 (0.7449) loss_ce_4: 0.3820 (0.3652) loss_bbox_4: 0.3552 (0.3777) loss_giou_4: 0.7515 (0.7417) loss_ce_unscaled: 0.3794 (0.3629) class_error_unscaled: 31.2500 (33.9773) loss_bbox_unscaled: 0.0747 (0.0748) loss_giou_unscaled: 0.3690 (0.3694) cardinality_error_unscaled: 0.9375 (0.9489) loss_ce_0_unscaled: 0.4094 (0.4092) loss_bbox_0_unscaled: 0.0826 (0.0859) loss_giou_0_unscaled: 0.4021 (0.4102) cardinality_error_0_unscaled: 0.9375 (0.9489) loss_ce_1_unscaled: 0.3996 (0.3910) loss_bbox_1_unscaled: 0.0815 (0.0817) loss_giou_1_unscaled: 0.3999 (0.3906) cardinality_error_1_unscaled: 0.9375 (0.9489) loss_ce_2_unscaled: 0.3847 (0.3777) loss_bbox_2_unscaled: 0.0793 (0.0766) loss_giou_2_unscaled: 0.4055 (0.3883) cardinality_error_2_unscaled: 0.9375 (0.9489) loss_ce_3_unscaled: 0.3756 (0.3689) loss_bbox_3_unscaled: 0.0772 (0.0768) loss_giou_3_unscaled: 0.3657 (0.3724) cardinality_error_3_unscaled: 0.9375 (0.9489) loss_ce_4_unscaled: 0.3820 (0.3652) loss_bbox_4_unscaled: 0.0710 (0.0755) loss_giou_4_unscaled: 0.3758 (0.3708) cardinality_error_4_unscaled: 0.9375 (0.9489) time: 1.3323 data: 0.1791 max mem: 12069
###Markdown
ResultsQuick and easy overview of the training results
###Code
from util.plot_utils import plot_logs
from pathlib import Path
log_directory = [Path(outDir)]
fields_of_interest = (
'loss',
'mAP',
)
plot_logs(log_directory,
fields_of_interest)
fields_of_interest = (
'loss_ce',
'loss_bbox',
'loss_giou',
)
plot_logs(log_directory,
fields_of_interest)
fields_of_interest = (
'class_error',
'cardinality_error_unscaled',
)
plot_logs(log_directory,
fields_of_interest)
###Output
_____no_output_____ |
4.programmer/resources/OOPWithPythonExample.ipynb | ###Markdown
Object-Oriented Programming With Python
###Code
class Dog:
# Class Attribute
species = 'mammal'
# Initializer / Instance Attributes
def __init__(self, name, age):
self.name = name
self.age = age
# instance method
def description(self):
return "{} is {} years old".format(self.name, self.age)
# instance method
def speak(self, sound):
return "{} says {}".format(self.name, sound)
# Instantiate the Dog object
mikey = Dog("Mikey", 6)
# call our instance methods
print(mikey.description())
print(mikey.speak("Gruff Gruff"))
# Parent class
class Dog:
# Class attribute
species = 'mammal'
# Initializer / Instance attributes
def __init__(self, name, age):
self.name = name
self.age = age
# instance method
def description(self):
return "{} is {} years old".format(self.name, self.age)
# instance method
def speak(self, sound):
return "{} says {}".format(self.name, sound)
# Child class (inherits from Dog class)
class RussellTerrier(Dog):
def run(self, speed):
return "{} runs {}".format(self.name, speed)
# Child class (inherits from Dog class)
class Bulldog(Dog):
def run(self, speed):
return "{} runs {}".format(self.name, speed)
# Child classes inherit attributes and
# behaviors from the parent class
jim = Bulldog("Jim", 12)
print(jim.description())
# Child classes have specific attributes
# and behaviors as well
print(jim.run("slowly"))
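# (Added illustration, not part of the original example) Because Bulldog inherits
# from Dog, every Bulldog instance is also a Dog:
print(isinstance(jim, Dog))      # True
print(issubclass(Bulldog, Dog))  # True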
###Output
_____no_output_____ |
feature_engenearing.ipynb | ###Markdown
Challenge 6In this challenge, we will practice _feature engineering_, one of the most important and labor-intensive processes in ML. We will use the [Countries of the world](https://www.kaggle.com/fernandol/countries-of-the-world) _data set_, which contains data on the 227 countries of the world, with information on population size, area, immigration, and production sectors.> Note: Please do not modify the names of the answer functions. General _setup_
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import sklearn as sk
# Some matplotlib settings.
#%matplotlib inline
from IPython.core.pylabtools import figsize
figsize(12, 8)
sns.set()
countries = pd.read_csv("countries.csv")
new_column_names = [
"Country", "Region", "Population", "Area", "Pop_density", "Coastline_ratio",
"Net_migration", "Infant_mortality", "GDP", "Literacy", "Phones_per_1000",
"Arable", "Crops", "Other", "Climate", "Birthrate", "Deathrate", "Agriculture",
"Industry", "Service"
]
countries.columns = new_column_names
countries.head(5)
###Output
_____no_output_____
###Markdown
ObservationsThis _data set_ still needs a few initial adjustments. First, note that the numeric variables use a comma as the decimal separator and are encoded as strings. Fix this before continuing: convert these variables to proper numeric types.Also, the `Country` and `Region` variables have extra spaces at the beginning and end of the strings. You can use the `str.strip()` method to remove those spaces. Start your analysis from here
###Code
countries.dtypes
# All of the numeric columns in the data set
numericas = ["Population", "Area", "Pop_density", "Coastline_ratio",
    "Net_migration", "Infant_mortality", "GDP", "Literacy", "Phones_per_1000",
    "Arable", "Crops", "Other", "Climate", "Birthrate", "Deathrate", "Agriculture",
    "Industry", "Service"]
# Columns currently stored as strings with a comma as the decimal separator
nao_numericas = ["Pop_density", "Coastline_ratio",
    "Net_migration", "Infant_mortality", "Literacy", "Phones_per_1000",
    "Arable", "Crops", "Other", "Climate", "Birthrate", "Deathrate", "Agriculture",
    "Industry", "Service"]
# Replace the decimal comma and convert to float
for x in nao_numericas:
    countries[x] = countries[x].str.replace(',', '.')
    countries[x] = countries[x].astype('float64')
# Strip leading/trailing whitespace from the text columns
nominais = ["Country", "Region"]
for x in nominais:
    countries[x] = countries[x].str.strip()
###Output
_____no_output_____
###Markdown
Question 1Which regions (variable `Region`) are present in the _data set_? Return a list of the unique regions in the _data set_, with leading and trailing spaces removed from each string (but keep punctuation: period, hyphen, etc.), sorted in alphabetical order.
###Code
def q1():
lista_regioes = list(countries['Region'].unique())
lista_regioes = sorted(lista_regioes)
return lista_regioes
q1()
###Output
_____no_output_____
###Markdown
Question 2Discretizing the `Pop_density` variable into 10 intervals with `KBinsDiscretizer`, using the `ordinal` encode and the `quantile` strategy, how many countries are above the 90th percentile? Answer as a single integer scalar.
###Code
from sklearn.preprocessing import KBinsDiscretizer
def q2():
    # Keep only Pop_density (double brackets keep it as a DataFrame for the discretizer)
    target = countries[['Pop_density']]
    discretizer = KBinsDiscretizer(n_bins=10, encode='ordinal', strategy='quantile')
    target_discretizado = discretizer.fit_transform(target)
    target = pd.DataFrame(target_discretizado)
    # Countries falling in the top bin (ordinal code 9) are above the 90th percentile
    lista_final = list(target[target[0]==9].count())
    return int(lista_final[0])
q2()
###Output
_____no_output_____
###Markdown
Question 3If we encode the `Region` and `Climate` variables using _one-hot encoding_, how many new attributes would be created? Answer as a single scalar.
###Code
num_regions = int(countries['Region'].nunique())
num_climate = int(countries['Climate'].nunique(dropna=False))
num_novas_variaveis = num_regions + num_climate
def q3():
    num_regions = int(countries['Region'].nunique())
    # Count NaN as its own one-hot category for Climate, as in the computation above
    num_climate = int(countries['Climate'].nunique(dropna=False))
    num_novas_variaveis = num_regions + num_climate
    return num_novas_variaveis
###Output
_____no_output_____
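###Markdown
(Added sketch, not part of the original challenge) The count computed above can be cross-checked by actually one-hot encoding the two columns with `pd.get_dummies`; passing `dummy_na=True` for `Climate` gives its missing values their own column, matching the `dropna=False` counting used above.
###Code
# Illustrative cross-check only: encode each column and count the generated attributes
region_ohe = pd.get_dummies(countries['Region'])
climate_ohe = pd.get_dummies(countries['Climate'], dummy_na=True)  # NaN becomes its own column
region_ohe.shape[1] + climate_ohe.shape[1]
###Output
_____no_output_____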
###Markdown
Question 4Apply the following _pipeline_:1. Fill the `int64` and `float64` variables with their respective medians.2. Standardize these variables.After applying the _pipeline_ described above to the data (only to the variables of the specified types), apply the same _pipeline_ (or `ColumnTransformer`) to the data point below. What is the value of the `Arable` variable after the _pipeline_? Answer as a single float rounded to three decimal places.
###Code
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
countries.isna().sum()
test_country = [
'Test Country', 'NEAR EAST', -0.19032480757326514,
-0.3232636124824411, -0.04421734470810142, -0.27528113360605316,
0.13255850810281325, -0.8054845935643491, 1.0119784924248225,
0.6189182532646624, 1.0074863283776458, 0.20239896852403538,
-0.043678728558593366, -0.13929748680369286, 1.3163604645710438,
-0.3699637766938669, -0.6149300604558857, -0.854369594993175,
0.263445277972641, 0.5712416961268142
]
def q4():
pipeline = Pipeline(steps=[
('simpleimputer', SimpleImputer(missing_values =np.nan, strategy = "median")),
('standardscaler', StandardScaler())
])
x = countries.drop(nominais, axis=1)
pipeline.fit_transform(x)
df_test_country = pd.DataFrame([test_country], columns=new_column_names)
x_test = df_test_country.drop(nominais, axis=1)
x_test_transformed = pipeline.transform(x_test)
x_test_final = pd.DataFrame(x_test_transformed, columns=numericas)
answer = round(float(x_test_final['Arable'][0]),3)
return answer
q4()
###Output
_____no_output_____
###Markdown
Question 5Find the number of _outliers_ in the `Net_migration` variable according to the _boxplot_ method, that is, using the rule:$$x \notin [Q1 - 1.5 \times \text{IQR}, Q3 + 1.5 \times \text{IQR}] \Rightarrow x \text{ is an outlier}$$counting those in the lower group and in the upper group.Should the observations considered _outliers_ by this method be removed from the analysis? Answer as a three-element tuple `(outliers_below, outliers_above, would_remove?)` ((int, int, bool)).
###Code
countries['Net_migration'].plot(kind='box')
decision = False
quartile1 = countries['Net_migration'].quantile(0.25)
quartile3 = countries['Net_migration'].quantile(0.75)
iqr = quartile3 - quartile1
boxploteq = (quartile1 - 1.5*iqr, quartile3 + 1.5*iqr)
boxploteq
outliers_1 = len(countries[countries['Net_migration']<boxploteq[0]])
outliers_2 = len(countries[countries['Net_migration']>boxploteq[1]])
print(len(countries['Net_migration']), 'is the total number of observations')
print(outliers_1, 'is the number of outliers below the boxplot interval')
print(outliers_2, 'is the number of outliers above the boxplot interval')
def q5():
decision = False
return (outliers_1, outliers_2, decision)
###Output
_____no_output_____
###Markdown
Question 6For questions 6 and 7, use the `fetch_20newsgroups` loader from `sklearn`'s test datasets.Consider loading the following categories and the `newsgroups` dataset:```categories = ['sci.electronics', 'comp.graphics', 'rec.motorcycles']newsgroup = fetch_20newsgroups(subset="train", categories=categories, shuffle=True, random_state=42)```Apply `CountVectorizer` to the `newsgroups` _data set_ and find the number of times the word _phone_ appears in the corpus. Answer as a single scalar.
###Code
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
categories = ['sci.electronics', 'comp.graphics', 'rec.motorcycles']
newsgroup = fetch_20newsgroups(subset="train", categories=categories, shuffle=True, random_state=42)
CV = CountVectorizer()
df_vectorized = CV.fit_transform(newsgroup.data)
indice = CV.vocabulary_.get("phone")
phone = df_vectorized[:,indice]
phone_counts = int(phone.sum())
def q6():
return phone_counts
q6()
###Output
_____no_output_____
###Markdown
Question 7Apply `TfidfVectorizer` to the `newsgroups` _data set_ and find the TF-IDF of the word _phone_. Answer as a single scalar rounded to three decimal places.
###Code
from sklearn.feature_extraction.text import TfidfTransformer, TfidfVectorizer
tfidf_vec = TfidfVectorizer()
vectorized_data = tfidf_vec.fit_transform(newsgroup.data)
# Use the TfidfVectorizer's own vocabulary index for "phone" (safer than reusing the CountVectorizer's)
indice_tfidf = tfidf_vec.vocabulary_.get("phone")
phone_count_2 = vectorized_data[:, indice_tfidf]
phone_count_2[phone_count_2 != 0]
phone_count_2.sum()
def q7():
return float(round(phone_count_2.sum(),3))
q7()
###Output
_____no_output_____ |
Chapter 6 - Decision Trees.ipynb | ###Markdown
Classification
###Code
# Training a Decision Tree on Iris Dataset
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
iris = load_iris()
X = iris.data[:, 2:] # petal length and width
y = iris.target
tree_clf = DecisionTreeClassifier(max_depth=2)
tree_clf.fit(X, y)
# Let's visualize the Decision Tree using export_graphviz() method
from graphviz import Source
from sklearn.tree import export_graphviz
export_graphviz(
tree_clf,
out_file="iris_tree.dot",
feature_names=iris.feature_names[2:],
class_names=iris.target_names,
rounded=True,
filled=True
)
Source.from_file("iris_tree.dot")
iris.feature_names
iris.target_names
###Output
_____no_output_____
###Markdown
In the tree diagram above, `gini` measures a node's impurity (0.0 means every sample at the node belongs to the same class), `samples` is the number of training instances that reach the node, and `value` lists how many of those instances belong to each class.
###Code
# Let's do some actual prediction
tree_clf.predict_proba([[5, 1.5]])
tree_clf.predict([[5, 1.5]])
###Output
_____no_output_____
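###Markdown
(Added sketch, not from the original chapter) The `gini` value reported in each node can be recomputed by hand from that node's `value` counts; the class counts used below are purely illustrative.
###Code
import numpy as np
def gini_impurity(class_counts):
    # Gini = 1 - sum over classes of p_k**2, where p_k is the class proportion at the node
    counts = np.asarray(class_counts, dtype=float)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)
# e.g. a node holding [0, 49, 5] samples of the three iris classes
gini_impurity([0, 49, 5])
###Output
_____no_output_____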
###Markdown
Regression
###Code
# Create the data
import numpy as np
# Quadratic training set + noise
np.random.seed(42)
m = 200
X = np.random.rand(m, 1)
y = 4 * (X - 0.5) ** 2
y = y + np.random.randn(m, 1) / 10
from sklearn.tree import DecisionTreeRegressor
tree_reg = DecisionTreeRegressor(max_depth=2)
tree_reg.fit(X, y)
export_graphviz(
tree_reg,
out_file="regression_tree.dot",
feature_names=["x1"],
rounded=True,
filled=True
)
Source.from_file("regression_tree.dot")
['x1']
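# (Added usage check, not in the original chapter) The fitted regressor predicts the
# mean target value of the leaf that a sample falls into, e.g. for x1 = 0.6:
tree_reg.predict([[0.6]])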
###Output
_____no_output_____ |
prediction/single task/function documentation generation/ruby/small_model.ipynb | ###Markdown
**Predict the documentation for Ruby code using the CodeTrans single-task training model**You can make free predictions online through this link (when predicting online, you need to parse and tokenize the code first). **1. Load the necessary libraries, including Hugging Face transformers**
###Code
!pip install -q transformers sentencepiece
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
###Output
_____no_output_____
###Markdown
**2. Load the summarization pipeline and load it into the GPU if available**
###Code
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_ruby"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_ruby", skip_special_tokens=True),
device=0
)
###Output
/usr/local/lib/python3.6/dist-packages/transformers/models/auto/modeling_auto.py:852: FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models.
FutureWarning,
###Markdown
**3. Provide the code to be summarized, then parse and tokenize it**
###Code
code = "def add(severity, progname, &block)\n return true if io.nil? || severity < level\n message = format_message(severity, progname, yield)\n MUTEX.synchronize { io.write(message) }\n true\n end" #@param {type:"raw"}
!pip install tree_sitter
!git clone https://github.com/tree-sitter/tree-sitter-ruby
from tree_sitter import Language, Parser
Language.build_library(
'build/my-languages.so',
['tree-sitter-ruby']
)
RUBY_LANGUAGE = Language('build/my-languages.so', 'ruby')
parser = Parser()
parser.set_language(RUBY_LANGUAGE)
def get_string_from_code(node, lines):
line_start = node.start_point[0]
line_end = node.end_point[0]
char_start = node.start_point[1]
char_end = node.end_point[1]
if line_start != line_end:
code_list.append(' '.join([lines[line_start][char_start:]] + lines[line_start+1:line_end] + [lines[line_end][:char_end]]))
else:
code_list.append(lines[line_start][char_start:char_end])
def my_traverse(node, code_list):
lines = code.split('\n')
if node.child_count == 0:
get_string_from_code(node, lines)
elif node.type == 'string':
get_string_from_code(node, lines)
else:
for n in node.children:
my_traverse(n, code_list)
return ' '.join(code_list)
tree = parser.parse(bytes(code, "utf8"))
code_list=[]
tokenized_code = my_traverse(tree.root_node, code_list)
print("Output after tokenization: " + tokenized_code)
###Output
Output after tokenization: def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end
###Markdown
**4. Make Prediction**
###Code
pipeline([tokenized_code])
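# (Added usage note) The pipeline returns a list with one dict per input; the generated
# documentation string is stored under the 'summary_text' key:
result = pipeline([tokenized_code])
print(result[0]['summary_text'])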
###Output
Your max_length is set to 512, but you input_length is only 57. You might consider decreasing max_length manually, e.g. summarizer('...', max_length=50)
|
Voice Analysis/ML/.ipynb_checkpoints/VoiceDeploy-checkpoint.ipynb | ###Markdown
Deploy
###Code
app = Flask(__name__)
@app.route('/')
def home():
return render_template('index.html')
@app.route('/predict', methods=['POST'])
def predict():
    # The uploaded audio must be sent under the form field name "audio"
    audio = request.files["audio"]
    print(audio)
    audiofile = audio.filename
    # Save the upload into the current directory so predict_voice can read it from disk
    audio.save(audiofile)
    emotion = predict_voice(audiofile)
    return emotion
app.run()
###Output
* Serving Flask app "__main__" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
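###Markdown
(Added sketch, not part of the original notebook) With the server above running, the endpoint can be exercised from a separate process by POSTing a file under the form field `audio`. The host, port, and `sample.wav` filename below are illustrative assumptions.
###Code
import requests
# Assumes the Flask development server is listening on the default 127.0.0.1:5000
with open("sample.wav", "rb") as f:
    response = requests.post("http://127.0.0.1:5000/predict", files={"audio": f})
print(response.text)
###Output
_____no_output_____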
|
notebooks/race_analysis.ipynb | ###Markdown
Race analysis Kyle Willett ([@willettk](https://github.com/willettk))Some time-wasting ways of filtering and sorting my personal data from running races.
###Code
# Get them packages
import datetime
import re
from collections import Counter
from operator import itemgetter
import pandas as pd
import numpy as np
from IPython.display import display
# Running data can be either from live website or local file
getLocalData = True
localFile = "/Users/willettk/willettk.github.io/content/racelist.html"
url = "http://willettk.github.io/racelist.html"
dataIO = localFile if getLocalData else url
try:
# Select only data for solo running races; relays, bike races, triathlons are all in separate tables
attrs = {'id': 'runtable'}
races = pd.read_html(dataIO, attrs = attrs)
except ValueError:
print("Could not find table matching attributes {} on page.".format(attrs))
# Pre-process data as pandas DataFrame
run = races[0]
# Rename columns for some easier typing
rc = run.columns
run.rename(columns={rc[0]:"date",
rc[1]:"race",
rc[2]:"location",
rc[3]:"d_km",
rc[4]:"d_mi",
rc[5]:"pace",
rc[6]:"time",
rc[7]:"place_overall",
rc[8]:"finishers_overall",
rc[9]:"place_division",
rc[10]:"finishers_division",
rc[11]:"division"
},inplace=True)
run_columns = ['date','race','location','time','pace',
'place_overall','finishers_overall',
'place_division','finishers_division']
# Convert to int where possible
for col in ('place_overall','finishers_overall','place_division','finishers_division'):
run[col] = run[col].fillna(-1).astype(int)
# Ditch races where finishing data is probably inaccurate, based on lack of pace
run = run[[False if type(x) == float and np.isnan(x) else True for x in run.pace]]
def parse_races(dt):
# Transform date, time, and pace into numerical objects
try:
dt['date'] = pd.to_datetime(dt.date)
except AttributeError:
print(dt)
dt['pace'] = [datetime.timedelta(minutes=float(x.split()[0].split(':')[0]),
seconds=float(x.split()[0].split(':')[1])) for x in dt.pace]
try:
dt['time'] = [datetime.timedelta(hours=float(x.split(':')[0]),
minutes=float(x.split(':')[1]),
seconds=float(x.split(':')[2])) for x in dt.time]
except IndexError:
dt['time'] = [datetime.timedelta(minutes=float(x.split(':')[0]),
seconds=float(x.split(':')[1])) for x in dt.time]
# Restrict to races with data on overall and division placing
dtf = dt[np.isfinite(dt['finishers_overall']) & np.isfinite(dt['finishers_division'])].copy()
for c in dtf.columns[-4:]:
dtf[c] = dtf[c].astype(int)
return dt,dtf
def filter_races(distance=None):
# Filter for races at a given distance (rounded to nearest tenth of a mile)
if distance != None:
dt = run.copy()[run['d_mi'].round(1) == distance][run_columns]
else:
dt = run.copy()[run_columns]
dt,dtf = parse_races(dt)
return dt,dtf
def distinct_places(df):
# Find distinct states/polities for a set of races
return Counter([l.split(",")[-1].strip() for l in df.location])
def more_than_once(df):
# Find races run more than once
c = Counter(df.race)
races,count = [],[]
for r in c:
if c[r] > 1:
races.append(r)
count.append(c[r])
return pd.DataFrame({'race':races},index=count).sort_index(ascending=False)
def time_formatting(t,verbose=False):
# Output times in something sensibly human-readable
if t.seconds > 3600:
if verbose:
print("Formatting as HH:MM:SS")
timestr = "{:.0f}:{:02.0f}:{:02.0f}".format(int(t.seconds / 3600), int((t.seconds % 3600)/60), t.seconds % 60 )
elif t.seconds > 60:
if verbose:
print("Formatting as MM:SS")
timestr = "{:.0f}:{:02.0f}".format(int(t.seconds / 60), t.seconds % 60 )
else:
if verbose:
print("Formatting as SS")
timestr = "{:.0f}".format(t.seconds)
return timestr
def personal_best(df):
# Return personal best time at a given distance
best = df.sort_values("time").reset_index().loc[0]
timestr = time_formatting(best.time)
race = best.race
year = best.date.year
d = {'time':timestr,'race':race,'year':year}
return d
def plural_stem(s):
return "" if s == 1 else "s"
def summarize(distance=None):
# Print out everything prettily
dt,dtf = filter_races(distance)
n = len(dt)
placeListRaw = distinct_places(dt).items()
placeList = [(x[0],x[1],len(x[0])) for x in placeListRaw]
for key,reverseOrder in zip((0,2,1),(False,False,True)):
placeList.sort(key=itemgetter(key),reverse=reverseOrder)
polities = re.sub("['\[\]]","",str(["{} ({})".format(x[0], x[1]) for x in placeList]))
# Races split by location
if distance != None:
print("\nI've run {} race{} of {} mile{}.\n".format(n, plural_stem(n), distance, plural_stem(distance)))
if n > 0:
print("Personal best: {time}, set at {race} in {year}.\n".format(**personal_best(dt)))
print("I've run {} mile{} in {}.\n".format(distance, plural_stem(distance), polities))
distanceStr = '{} mile-'.format(distance)
else:
print("I've run races in {}.\n".format(polities))
distanceStr = ''
print("\nI've run {} total races.".format(len(dt)))
# Races run more than once
mo = more_than_once(dt)
if len(mo) > 0:
if distance != None:
print("\nRaces of {} mile{} that I've run more than once:".format(distance, plural_stem(distance)))
else:
print("\nRaces that I've run more than once:")
display(mo)
else:
print("\nI've never run the same {}race more than once.".format(distanceStr))
###Output
_____no_output_____
###Markdown
For a given distance, summarize:

* number of races
* locations
* personal best
* races run more than once
###Code
# Only summarize the N most common distances
nd = 6
mcd = [round(float(x[0]),1) for x in Counter(run['d_mi']).most_common(nd)]
mcd.sort()
for d in mcd:
summarize(d)
# Summarize races over all distances
summarize()
###Output
_____no_output_____
###Markdown
How has my personal best for each distance progressed?
###Code
def personal_best_progression(distance=13.1):
# In ascending chronological order for a given distance,
# print out all races which set or equalled a previous personal best time.
dt,dtf = filter_races(distance)
n = len(dt)
if n > 0:
firstrace = dt.iloc[0]
best = firstrace.time
bestyear = firstrace.date.year
timestr = time_formatting(firstrace.time)
print("Personal best progression of {} miles ({} race{}):\n".format(distance,n,plural_stem(n)))
print("\tFirst run {}: {} at {}.".format(firstrace.date.year,timestr,firstrace.race))
for i in range(n-1):
row = dt.iloc[i+1]
if row.time <= best:
timestr_new = time_formatting(row.time)
print("\tNew PB in {}: {} at {}.".format(row.date.year,timestr_new,row.race,))
best = row.time
else:
print("No races found for distance of {} miles.".format(distance))
return None
# Example of progression of personal bests
d = 13.1
personal_best_progression(d)
###Output
_____no_output_____
###Markdown
How have I done, year over year, in setting personal bests?
###Code
# Only consider PBs at the most common/iconic distances.
distances = {1:"1 mile",
3.1:"5 km",
6.2:"10 km",
13.1:"half marathon",
26.2:"marathon"}
distances_rev = {v:k for k,v in distances.items()}
# Find range of years of active running
pb = {}
start_year = pd.to_datetime(run.iloc[0].date).year
this_year = datetime.datetime.now().year
for year in range(start_year,this_year+1):
pb[year] = []
# Append if a PB is set for any of the selected distances
for distance in distances.keys():
dt,dtf = filter_races(distance)
n = len(dt)
if n > 0:
firstrace = dt.iloc[0]
best = firstrace.time
pb[firstrace.date.year].append(distances[distance])
for i in range(n-1):
row = dt.iloc[i+1]
if row.time <= best:
pb[row.date.year].append(distances[distance])
best = row.time
# Print list of results for each year
years = sorted(list(pb.keys()))
for year in years:
sorted_pbs = sorted(list(set(pb[year])),key = lambda x: distances_rev[x])
print(year, sorted_pbs if len(pb[year]) > 0 else None)
###Output
_____no_output_____
###Markdown
2016 was a really good year for me - PRs at four distances, from 1 mile up to the marathon. And I've been lucky to be consistently improving, even well into my 30s; except for my break from running in 2005 and 2006, I've set a PR at one of the standard distances every single year.

Number of races per year

Plot the total number of races per year and label the maximum.
###Code
from matplotlib import pyplot as plt
import seaborn as sns
%matplotlib inline
def label_max(histobj,ax,title):
counts,yearbins,desc = histobj
argmax = counts.argmax()
ax.text(yearbins[argmax],counts[argmax],"{:.0f}: {:.0f} {}".format(yearbins[argmax]+eps,counts[argmax],title)
,va='bottom'
,color='red')
# Number of races per year
years = [x.year for x in pd.to_datetime(run.date)]
# Number of total miles run each year in races
z = [(d.year,int(m)) for d,m in zip(pd.to_datetime(run.date),run.d_mi)]
l = [[a,]*b for a,b in z]
flat_list = [item for sublist in l for item in sublist]
fig,axarr = plt.subplots(1,2,figsize=(16,7))
eps = 0.5
bins = np.arange(min(years)-eps,max(years)+eps+1,1)
for ax,data,label in zip(axarr,(years,flat_list),('races','miles')):
histobj = ax.hist(data,bins=bins)
ax.set_xlabel("Year",fontsize=14)
ax.set_ylabel(label.capitalize(),fontsize=14)
label_max(histobj,ax,label);
###Output
_____no_output_____
###Markdown
Total race miles
###Code
print("I've run {:.0f} total miles in {:d} races.".format(run['d_mi'].sum(),len(run)))
d = 6.2
run[run['d_mi'] == d].sort_values(["d_km","time","date"],axis=0,ascending=[1,1,0])
###Output
_____no_output_____ |
notebooks/02_jmg_wr_exploration.ipynb | ###Markdown
Exploration of AI data for the RWJF project

Here we explore the AI data for the RWJF project. This involves:

* Analysing World Reporter
  * Load
  * Map
  * Trend analysis
  * MeSH exploration
* Consider other sources such as CrunchBase or GitHub to illustrate the range of sources we are working with.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import ast
import datetime
import numpy as np
import pandas as pd

# date stamp used in output file names further down (the exact format is an assumption)
today_str = datetime.datetime.today().strftime('%d_%m_%Y')
###Output
_____no_output_____
###Markdown
World Reporter

First import of the data - I chunked this because it was killing my Python
###Code
# my_files = os.listdir('../data/external/for_juan/')
# dfs_list = []
# dfs_in_5s = []
# counter = 0
# for file in my_files:
# counter+=1
# #Problem file!
# if any(x in file for x in ['590000','180000','1170000','490000']):
# continue
# df = pd.read_json(f'../data/external/for_juan/{file}')
# dfs_list.append(df)
# if counter % 10==0:
# print(counter)
# dfs_in_5s.append(pd.concat(dfs_list))
# dfs_list=[]
# wr = pd.concat(dfs_in_5s+dfs_list)
#wr.to_csv(f'../data/external/{today_str}wr_data.csv',compression='gzip')
wr = pd.read_csv('../data/external/8_11_2018wr_data.csv',error_bad_lines=False,compression='gzip',
converters={'terms_mesh_abstract':ast.literal_eval})
wr.shape
wr.reset_index(drop=True,inplace=True)
# Below we find that the IDs don't seem to relate to unique projects so I create my own here
wr['my_id'] = ['id_'+str(n) for n in range(len(wr))]
wr.set_index('my_id',inplace=True)
###Output
_____no_output_____
###Markdown
Initial exploration of World Reporter

Protocol:

* What is in it?
  * Missing values
  * Types of entities
  * Entities relevant for the project (in this case health innovations)
* When does it cover?
  * How many years
* Where are these organisations?
  * Geography

What is in it
###Code
#wr.apply(lambda x: x.isna().mean()).sort_values().plot.barh(color='navy')
###Output
_____no_output_____
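###Markdown
A quick tabular look at how much is missing per column (the commented-out cell above plots the same shares as a bar chart).
###Code
# Share of missing values per column, most incomplete first
wr.isna().mean().sort_values(ascending=False).head(10)
###Output
_____no_output_____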
###Markdown
Things to do:

* Why are there duplicates?
* What is the difference between project description and project abstract?
###Code
#On the duplicate abstracts
wr.booleanFlag_duplicate_abstract.sum()
abstract_short = wr['textBody_abstract_project'].apply(lambda x: x[:1000] if type(x)==str else np.nan)
abstract_short.value_counts()[:5]
###Output
_____no_output_____
###Markdown
A few missing values
###Code
print(sum(abstract_short.value_counts()>1))
print(sum(abstract_short.value_counts()==1))
###Output
371826
175807
###Markdown
There are 371,826 beginnings of abstracts that appear more than once, and 175,807 abstracts that appear *just once*. This means there are roughly 550K unique abstract texts. We will need to do some serious deduplication here.

Are any of the repeats because e.g. multiple organisations received a grant?
###Code
abstract_short.value_counts()[:10]
###Output
_____no_output_____
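###Markdown
Following up on the deduplication point above, a minimal sketch (an exact match on the abstract text, keeping the first record; near-duplicates would need fuzzier matching):
###Code
# Drop projects whose abstract text is an exact repeat of one seen earlier
wr_unique = wr.drop_duplicates(subset='textBody_abstract_project', keep='first')
print(len(wr), len(wr_unique))
###Output
_____no_output_____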
###Markdown
Some of the duplicates are repeating grants and awards involving multiple institutions; we check the top one from the table above in a moment.

First, let's find abstracts that are effectively missing (placeholder text rather than real abstracts).
###Code
missing_abs = [any(x in text.lower() for x in ['abstract','no','not']) & (len(text)<100) for
text in wr['textBody_abstract_project'].dropna()]
sum(missing_abs)
###Output
_____no_output_____
###Markdown
Not a crazy number of missing abstracts beyond the NAs.

Check one project where there are repeated abstracts
###Code
text_of_interest = "This contract is for a clinical center which is one of an estimated twenty-five centers that will conduct a large scale clinical trial on left ventricular dysfunction."
projects_with_text = wr.loc[[text_of_interest in text for text in wr.textBody_abstract_project.dropna()]]
projects_with_text.head(n=20)[['title_of_organisation','terms_mesh_abstract']]
###Output
_____no_output_____
###Markdown
So this gives different organisations with the same MeSH terms. We should use the set of 'unique abstracts' for any querying.

Differences between text abstracts and paper abstracts
###Code
import random
for x in random.sample(range(len(wr)),5):
print(wr.iloc[x]['textBody_descriptive_project'])
print('\n')
print(wr.iloc[x]['textBody_abstract_project'])
print('\n')
print('===')
###Output
nan
After infection of E. coli by bacteriophage T4, the host RNA polymerase acquires several small phage-induced polypeptides and its alfa subunits are ADP-ribosylated. The role of these modifications in transcription control will be studied using a combination of biochemical, genetic and physiological approaches. We have purified four of the associated polypeptides (15K, 19K, 25K and 29K proteins) as well as RNA polymerases differing in the state of ADP-ribosylation and propose to study their interactions in direct binding assays. The kinetics of in vivo formation of these proteins and their interactions with other intracellular components will be analysed in order to understand the coordination between transcription and other events of T4 development such as DNA replication. We will identify T4 genes involved in the modifications and use their mutants to determine their role in phage development. In vitro experiments are proposed to study the interaction of normal and modified RNA polymerase with individual promoters with emphasis given to the change of specificity of promoter recognition from early to late sites. We have already shown that 25K protein induces late promoter specificity while 15K protein decreases the utilization of early promoters. To understand the molecular basis of the specificity change we will determine the kinetic parameters of promoter functioning with different forms of RNA polymerase. Experiments directed at biochemical identification of T4-induced antitermination mechanism are also proposed. The in vitro experiments will be backed up by the analysis of in vivo functioning of the same transcription sites using recently sequenced tRNA gene region of T4 as the experimental system.
===
nan
The Escherichia coli uvr ABC system catalyzes the incision of damaged DNA. Having available reagent quantities of homogeneous UurB and UvrC proteins permits a detailed examination of the individual partial reactions leading to the dual incision stage of nucleotide excision repair. The pre-incision steps can be subdivided into a number of partial reactions which include: (a) UvrA dimerization stimulated by ATP binding, (b) UvrA-nucleoprotein formation at both damaged and undamaged sites, (c) the accompanying topological unwinding stimulated by ATP binding, (d) the participation of the UvrB cryptic ATPase in the UvrAB catalyzed strand displacement reaction and finally (e) dual incision catalyzed by the presence of UvrC. The incision mechanisms precede the multi-nucleoprotein complex requiring coordinated excision reactions catalyzed by UvrD, DNA polymerase I and polynucleotide ligase. In the principal direction for the proposed studies we will attempt to associate the anatomy, or structure of the respective uvr A and B genes to the catalytic and protein properties of the related gene product proteins. The focal points and role of ATP in the individual processes will also be addressed. The protein sequences, or domains, of interest include putative ATP binding regions, sites sensitive to a protease specific for the Ada protein of E. coli potential DNA binding sites and to a limited extent the "zinc finger"-like sites. These sites will be engineered by oligonucleotide-directed and deletion mutants, the individual clones sequenced and over-expressed in suitable expression vectors and the catalytic and protein properties of the mutant and "wild type" proteins examined for their enzymatic phenotypes in the respective pre-, post- and incision steps. The biological role and biochemical nature of UvrB proteolysis in regulation of repair will be further investigated by genetic, structural and catalytic methods.
===
nan
We propose to establish a center of biomedical research excellence at the University of Delaware that will focus on the prevention and treatment of osteoarthritis. This proposal involves three research projects, each lead by a junior faculty member who will be mentored by senior faculty. In Project 1, Dr. Manal proposes to use a biomedical model based on MRI and electromyographic data to examine knee kinematics and kinetics while performing activities of daily living. Project 2: In-shoe wedges have been shown to be an effective conservative approach in reducing pain in patients with osteoarthritis. The mechanism responsible for this pain reduction is not well understood. In our third project, Dr. Royer proposes to examine the examine the effects of in-shoe wedges in patients with knee OA on static alignment measures. In addition, the long-term effect of this treatment will also be examined. In Project 3, Dr. Rudolph proposes to study genu varum ("bowleggedness"), which often leads to arthritis on the medial compartment of the knee. Surgery is often performed to correct the alignment of the knee in people with genu varum by removing a wedge of bone on the lateral side of the tibia just below the knee joint to align the joint surfaces more normally (closing wedge osteotomy). Removing bone can increase the laxity of the joint capsule, muscles, and ligaments around the knee and may lead to increased problems with biomechanics and muscular responses. An alternative surgery, the opening wedge osteotomy involves the creation of an opening on the medial side of the tibia which can be distracted slowly over time. The purpose of this study is to characterize the differences in movement and muscle activity before and after an opening wedge osteotomy with callus distraction to correct genu varum.
===
nan
DESCRIPTION (provided by applicant): All viral pathogens utilize cellular protein synthesis machinery for translation of their own genes and evolved to adapt to tissue-specific variations in the components of this machinery. Furthermore, modification of translational components caused by the host innate defense response or viral infection contributes to either resolution of viral infection or success of the viral pathogen. This is of particular importance for those viruses whose mRNA translation is mediated by an Internal Ribosome Entry Segment (IRES), such as Hepatitis C virus (HCV) and Hepatitis A virus (HAV). We have demonstrated for the first time that the 20S proteasome endoproteolytically cleaves two critical translation factors, elF3 and elF4F. Cleavage of elF3 or elF4F differentially affects the entry of the small ribosomal subunit, the rate-limiting step of protein synthesis, on different viral and cellular mRNAs in vitro. We report further that 26S proteasomes also endoproteolytically cleave elFs via ubiquitin- and ATP-independent processes, and differentially affect the ribosome entry on viral mRNAs. These observations suggest a novel mechanism of host cell translational control of viral mRNAs by proteasomes. These novel activities were observed in vivo where they appear to differentially regulate translation of different viral IRES-containing mRNAs in living cells: the endoproteolytic activity of proteasomes is beneficial for translation of some viral RNAs (e.g. HCV) and detrimental for others (e.g., HAV). The abundance, composition, and endoproteolytic activity of proteasomes vary significantly between organ tissues and in response to environmental cues. We propose that elFs cleavage and, consequently, translation of different viral mRNAs, is significantly modulated by proteasomes in specific tissues and under different conditions. Further, we propose that these factors affect the pathogenesis of virus-induced disease. To test these hypotheses, three specific aims will investigate the mechanism of cleavage of elFs by proteasomes in vitro and in various host tissues as well as the role of proteasomes in translational regulation of different viral mRNAs in vivo. The penultimate goal of this research is the development of novel therapeutic strategies to combat viral diseases which is of critical importance to public health. Coordinated regulation of protein synthesis and degradation is critical for proper cellular and organism function and disturbances in this regulation largely contribute to the development of multiple human disorders including cancers and Alzheimer's disease. Proteasomes are responsible for the majority of non-lysosomal protein degradation in eukaryotic cells and our finding about their involvement in translational control establishes an interesting link between protein synthesis and degradation. Hence, our research may also aid to the development of new approaches for therapeutic intervention of medically important non-viral human diseases.
===
nan
Host defenses against invading microorganisms include targeting the pathogens for degradation in acidic lysosomal organelles, and generating adaptive immune responses by orchestrating successful antigen presentation. Infected cells employ evolutionarily-conserved cellular machinery that is normally used in the degradation of intracellular protein aggregates and damaged organelles to accomplish these responses. This self-digestion pathway, known as autophagy, helps cells survive under starvation conditions by restoring nutrient balance. During autophagy, cytoplasmic material is engulfed in de novo double-membrane vesicles (referred to as autophagosomes) and is delivered to lysosomes where the cargo is degraded into its constituent parts for reuse by the cell. This process is controlled by ATG5 and ATG7, which by mechanisms similar to ubiquitin-conjugation are involved in the conjugation of lipid to microtuble-associated protein 1 light chain 3 (LC3, a homolog of yeast ATG8). Conjugation with phosphatidylethnolamine (PE) converts the soluble form of LC3 (LC3-I) to another (LC3-II) that specifically associates with autophagic membrane vesicles and thus causes a shift from a diffuse staining pattern for LC3 to a punctuate pattern, which is often used to monitor autophagosome formation by fluorescence microscopy. Unlike classical autophagy that involves nonselective bulk degradation of cytosolic material, infected cells target intracellular bacteria (or bacteria-containing phagosomes) for sequestration into LC3-positive vacuoles for eventual destruction. This process of selective removal of invading microbes using autophagic machinery, termed xenophagy, plays a key role in the restriction by destruction of several kinds of bacteria, including Escherchia coli, Salmonella enterica, Mycobacterium tuberculosis, Listeria monocytogenes, and Group A Streptococcus as well as parasites such as Toxoplasma gondii. Genetic and pharmacological interference with the autophagic machinery has been shown to increase the number of intracellular bacteria. Xenophagy can also protect against infection by the Sindbis virus in mice and the singlestranded tobacco mosaic virus in plants. In addition to eliminating intracellular microbes, xenophagy elicits adaptive immune responses by contributing to the cross-presentation of microbial peptide antigens on both MHC class I and II molecules. The signaling pathways involved in the targeted elimination of microbes by autophagy are just beginning to be understood. Toll-like receptors (TLRs) on macrophages recognize pathogen-associated molecular patterns (PAMPs) and engage autophagic processes to clear pathogens. Lysosomal maturation of bacteria-containing phagosomes appear to be enhanced by TLR- mediated recruitment of the autophagosomal marker LC3 to phagosomes and their fusion with lysosomes, leading to rapid clearance of invading bacteria. Moreover, engagement of TLR-4 is also known to induce do novo formation of LC3-positive autophagosomes that contain bacteria, a process that requires p38 MAP kinase activity. Engagement of other TLRs, such as TLR-3 with poly I:C or TLR-7 with ssRNA, have also been shown to trigger autophagic responses. While these findings have linked TLRs and autophagy in host defense processes, many questions, such as how TLR-initiated signaling leads to the assembly of LC3-positive vesicles, remain unanswered. 
Given the importance of autophagy for pathogen clearance, cancer and neurodegenerative diseases we have focused our efforts on identifying molecules that regulate these processes. We found that p62 (also known as SQSTM1), a ubiquitin- and LC3- binding molecule, controls this innate immune autophagic process. Activation of primary macrophages with either E. coli or LPS triggered the formation of p62-associated LC3-positive vesicular compartments. Engagement of TLR-4 resulted in increased protein ubiquitination and accumulation in p62/LC3-positive vesicles. p62 expression was upregulated in response to TLR-4 activation, which required the functions of myeloid differentiation factor 88 (MyD88), Toll-interleukin-1 receptor domain-containing adaptor-inducing interferon-&#946; (TRIF), and p38 kinase, and knockdown of p62 expression in primary macrophages suppressed the assembly of LPS- or microbe-induced LC3-postive autophagosomes. These findings reveal an anti-microbial function for p62 that links the xenophagic process with ubiquitination. We have also found that FYVE motif-containing molecules have similar roles in distinct pathways. We have generated mice that lack the expression of these genes and experiments are underway in the laboratory to validate the physiological function of these molecules in autophagy using these knock out mice.
===
###Markdown
The descriptions are an executive summary and the abstract is the scientific abstract.

How many IDs do we have?
###Code
len(set(wr['id_of_project']))
pd.crosstab(wr['id_of_project'].duplicated(),wr['booleanFlag_duplicate_abstract'].isna(),normalize=1)
###Output
_____no_output_____
###Markdown
There isn't a visible correlation between abstract duplicates and id duplicates
###Code
wr['id_of_project'].value_counts().head()
wr.loc[wr['id_of_project']=='3P41RR001209-23S1']['title_of_project'].head()
###Output
_____no_output_____
###Markdown
I don't think that project id means what I think it does - the above are somewhat different although all seem to relate to crystallography.

Organisations receiving grants
###Code
print(len(set(wr.title_of_organisation)))
wr.title_of_organisation.value_counts().head(n=10)
###Output
9480
###Markdown
The organisations are mostly universities although we also find some companies.

Relevant entities: do we find social/digital innovations?

To get a handle on this I will explore the MeSH terms. What are the top terms? Can we cluster them and if so do we find a digital/social cluster?
###Code
def flatten(a_list):
'''
flattens a list
'''
out = [x for el in a_list for x in el]
return(out)
def term_distr(a_list):
'''
Counts elements in a flat list
'''
return(pd.Series(flatten(a_list)).value_counts())
mesh_distr = term_distr(wr.terms_mesh_abstract)
mesh_distr.head(n=10)
len(mesh_distr)
###Output
_____no_output_____
###Markdown
There are 53,263 MeSH terms
###Code
plt.hist(mesh_distr)
###Output
_____no_output_____
###Markdown
Most MeSH terms appear once. How do we cluster them? Options:

* Topic modelling
* Community detection on a topic co-occurrence network
* Semantic clustering on a w2v model

All these options might take a while. For now I'm going to take it very easy and simply look for digital and social terms and their synonyms.
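(For reference, the next cell is a minimal sketch of the co-occurrence-network option; it is not used in the rest of the notebook, is only run on a sample of projects, and its edge-weight threshold is arbitrary.)
###Code
# Sketch: build a MeSH co-occurrence graph on a sample and detect communities
import networkx as nx
from itertools import combinations
from collections import Counter
from networkx.algorithms.community import greedy_modularity_communities

pair_counts = Counter()
for terms in wr.terms_mesh_abstract.sample(20000, random_state=0):
    for a, b in combinations(sorted({t.lower() for t in terms}), 2):
        pair_counts[(a, b)] += 1

g = nx.Graph()
g.add_weighted_edges_from([(a, b, w) for (a, b), w in pair_counts.items() if w > 20])
communities = greedy_modularity_communities(g, weight='weight')
len(communities)
###Output
_____no_output_____
###Markdown
Back to the simple keyword-expansion approach: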
###Code
#Lowercase
tokenised = [[x.lower() for x in tokens] for tokens in wr.terms_mesh_abstract]
wr['mesh_lower'] = tokenised
tokenised[0]
from gensim.models import Word2Vec
w2v = Word2Vec(tokenised)
def expanded_query(w2v,term,steps,thres=0.6,counts=True):
'''
Runs an expanded query on a w2v model. Steps is the number of steps it takes beyond the initial expansion.
counts is whether it returns a count of term appearances, or a set.
'''
#Seed
seed = [x[0] for x in w2v.wv.most_similar(term) if x[1]>thres]
#return(seed)
counter =0
while counter<=steps:
expand = [w2v.wv.most_similar(x) for x in set(seed)]
new_set = list(set([x[0] for x in flatten(expand) if x[1]>thres]))
seed = seed+new_set
counter+=1
#return(term_distr(seed))
if counts==True:
return(pd.Series(seed).value_counts())
else:
return(set(seed))
software_synonyms =expanded_query(w2v,'software',steps=2)
software_synonyms.head(n=10)
online_synonyms = expanded_query(w2v,'internet',steps=2)
online_synonyms.head(n=10)
social_env = expanded_query(w2v,'social environment',steps=2)
social_env.head(n=10)
peer = expanded_query(w2v,'health behavior',steps=2)
peer.head(n=10)
health_pol = expanded_query(w2v,'health policy',steps=2)
health_pol.head(n=10)
###Output
/usr/local/lib/python3.7/site-packages/gensim/matutils.py:737: FutureWarning: Conversion of the second argument of issubdtype from `int` to `np.signedinteger` is deprecated. In future, it will be treated as `np.int64 == np.dtype(int).type`.
if np.issubdtype(vec.dtype, np.int):
###Markdown
Lots of relevant keywords...but how many projects?
###Code
def extract_projects(df,term_list,appearances,label):
'''
Returns projects if relevant terms appear in it. Relevance is based on the number of times a term
appeared in the expanded kw search
'''
terms = term_list.index[term_list>appearances]
print(terms)
projs = df.loc[[any(x in mesh for x in terms) for mesh in df.mesh_lower]]
projs[f'has_{label}'] =True
print(len(projs))
return(projs)
digital_projects = extract_projects(wr,
pd.concat([online_synonyms,software_synonyms]),
3,'digital')
social_projects = extract_projects(wr,
pd.concat([social_env,peer]),
3,'social')
policy_projects = extract_projects(wr,
health_pol,3,'policy')
###Output
Index(['quality of health care', 'health services accessibility',
'economics, medical', 'evidence-based practice', 'health personnel',
'policy making', 'insurance coverage', 'public health practice',
'public policy'],
dtype='object')
16375
###Markdown
Non-insignificant numbers. There will be duplicates though!
###Code
def sample_projects(df,variable,sample_size,length_text):
'''
Sample some projects
'''
rel= random.sample(list(df[variable]),sample_size)
for item in rel:
print(item[:length_text])
print('\n')
sample_projects(digital_projects,'textBody_abstract_project',5,500)
sample_projects(social_projects,'textBody_abstract_project',5,500)
sample_projects(policy_projects,'textBody_abstract_project',5,500)
###Output
[unreadable] DESCRIPTION (provided by applicant): South Africa currently has more HIV infected citizens than any other country in the world. Identification of HIV-infected individuals and rapid detection of tuberculosis co-infection is paramount for improving access to treatment, decreasing morbidity, and controlling the spread of the epidemic. Despite promotion of free voluntary counseling and testing (VCT) sites by the government and the near universal offer of antiretroviral therapy, only 1
DESCRIPTION (provided by applicant): In the report Five Years After to Err is Human, it was noted that "the combination of complexity, professional fragmentation, and a tradition of individualism, enhanced by a well-entrenched hierarchical authority structure and diffuse accountability, forms a daunting barrier to creating the habits and beliefs of common purpose, teamwork, and individual accountability for successful interdependence that a safe culture requires." Training physicians, nurses, a
DESCRIPTION (provided by applicant): Barron Associates, Inc. proposes to develop a TELEpHOne Monitor for the Elderly (TELEHOME), a low-profile, body-worn, wireless personal monitor with built-in speakerphone. Functions provided by the TELEHOME system include: (1) monitoring/alerting for fall events, personal emergencies, activity/inactivity, and wandering;(2) automatic connection of mobile speakerphone to call recipient in response to an emergency event, or to calling parties for incoming telep
The objective of the proposed research project is to determine whether services provided by a state-federal vocational rehabilitation program increase employability and reduce work-related disability in persons with arthritis and related musculoskeletal disorders (ARMD). One specific aim is to compare the vocational outcome and dynamics of work disability among persons receiving services from a state federal vocational rehabilitation program to a similar group of persons not receiving such servi
DESCRIPTION (provided by applicant): The Medical College of Georgia (MCG) Minority-Based Community Clinical Oncology Program (MB-CCOP) proposal is designed to increase the availability of state-of-the-art treatment and cancer prevention and control research to minority individuals in their own communities. The establishment of an operational base for extending cancer clinical trials and cancer prevention and control trials in this part of Georgia is aimed to improve reduce cancer incidence, mor
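###Markdown
To quantify the duplicates flagged above, a quick sketch counting distinct abstract texts within each extracted subset:
###Code
# Rows vs distinct abstracts per flagged subset
for label, df in [('digital', digital_projects), ('social', social_projects),
                  ('policy', policy_projects)]:
    print(label, len(df), df['textBody_abstract_project'].nunique())
###Output
_____no_output_____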
###Markdown
Check more interesting projects

Do they overlap? Who is doing them?
###Code
#This concatenates the dfs we extracted above.
interest_df = pd.concat([digital_projects,social_projects,policy_projects],
axis=1,sort=False)
for x in ['has_digital','has_social','has_policy']:
interest_df[x] = interest_df[x].fillna(False)
pd.crosstab(interest_df['has_digital'],interest_df['has_policy'],normalize=1)
pd.crosstab(interest_df['has_social'],interest_df['has_policy'],normalize=0)
pd.crosstab(interest_df['has_digital'],interest_df['has_policy'],normalize=0)
from itertools import combinations
combs = list(combinations(['has_digital','has_social','has_policy'],2))
for comb in combs:
ct = pd.crosstab(interest_df[comb[0]],interest_df[comb[1]],normalize=1)
ct.T.plot.bar(stacked=True)
top_200_institutions = wr.title_of_organisation.value_counts()[:200]
###Output
_____no_output_____
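###Markdown
And a quick look at who is doing them: the top recipient organisations within each of the flagged subsets (a sketch using the dataframes extracted above).
###Code
# Top organisations per flagged subset
for label, df in [('digital', digital_projects), ('social', social_projects),
                  ('policy', policy_projects)]:
    print(label)
    print(df.title_of_organisation.value_counts().head(5))
    print('---')
###Output
_____no_output_____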
###Markdown
Check time coverage: timeseries
###Code
def get_time_series(df,var):
'''
Produces a time-series of the data after splitting by year
'''
ts = df[var].dropna().apply(lambda x: int(x.split('-')[0])).value_counts()
ts =ts[sorted(ts.index)]
return(ts)
fig, ax =plt.subplots(figsize=(7,5))
#get_time_series(wr,'date_start_project').plot(ax=ax)
get_time_series(digital_projects,'date_start_project').plot(ax=ax,color='red')
get_time_series(social_projects,'date_start_project').plot(ax=ax,color='blue')
get_time_series(policy_projects,'date_start_project').plot(ax=ax,color='green')
ax.legend(labels=['digital','social','policy'],loc='upper left')
###Output
_____no_output_____
###Markdown
The coverage seems relevant for the 2000s. Recent coverage (post 2000) doesn't look brilliant.

Check spatial coverage
###Code
country_cov= pd.concat(
[df['id_iso3_country'].value_counts(normalize=True)
for df in [wr,digital_projects,social_projects,policy_projects]],axis=1,sort=True)
country_cov.columns= ['all','digital','social','policy']
top_20_countries = wr.id_iso3_country.value_counts()[:20].index
country_cov.loc[top_20_countries].plot.bar()
###Output
_____no_output_____
###Markdown
In terms of funded organisations, the coverage is quite stark.

And regions?
###Code
country_cov= pd.concat(
[df['placeName_city_organisation'].value_counts(normalize=True)
for df in [wr,digital_projects,social_projects,policy_projects]],axis=1,sort=True)
country_cov.columns= ['all','digital','social','policy']
top_20_countries = wr.placeName_city_organisation.value_counts()[:20].index
country_cov.loc[top_20_countries].plot.bar()
###Output
_____no_output_____
###Markdown
AI analysis on the WR data and other sources
###Code
from gensim import corpora, models
from string import punctuation
from string import digits
import re
import pandas as pd
import numpy as np
#Characters to drop
drop_characters = re.sub('-','',punctuation)+digits
#Stopwords
from nltk.corpus import stopwords
stop = stopwords.words('English')
#Stem functions
from nltk.stem import *
stemmer = PorterStemmer()
def clean_tokenise(string,drop_characters=drop_characters,stopwords=stop):
'''
Takes a string and cleans (makes lowercase and removes stopwords)
'''
#Lowercase
str_low = string.lower()
#Remove symbols and numbers
str_letters = re.sub('[{drop}]'.format(drop=drop_characters),'',str_low)
#Remove stopwords
    clean = [x for x in str_letters.split(' ') if (x not in stopwords) & (x!='')]
return(clean)
class CleanTokenize():
'''
This class takes a list of strings and returns a tokenised, clean list of token lists ready
to be processed with the LdaPipeline
It has a clean method to remove symbols and stopwords
It has a bigram method to detect collocated words
It has a stem method to stem words
'''
def __init__(self,corpus):
'''
Takes a corpus (list where each element is a string)
'''
#Store
self.corpus = corpus
def clean(self,drop=drop_characters,stopwords=stop):
'''
Removes strings and stopwords,
'''
        cleaned = [clean_tokenise(doc,drop_characters=drop,stopwords=stopwords) for doc in self.corpus]
self.tokenised = cleaned
return(self)
def stem(self):
'''
Optional: stems words
'''
#Stems each word in each tokenised sentence
stemmed = [[stemmer.stem(word) for word in sentence] for sentence in self.tokenised]
self.tokenised = stemmed
return(self)
def bigram(self,threshold=10):
'''
Optional Create bigrams.
'''
#Colocation detector trained on the data
phrases = models.Phrases(self.tokenised,threshold=threshold)
bigram = models.phrases.Phraser(phrases)
self.tokenised = bigram[self.tokenised]
return(self)
class LdaPipeline():
'''
This class processes lists of keywords.
How does it work?
-It is initialised with a list where every element is a collection of keywords
-It has a method to filter keywords removing those that appear less than a set number of times
-It has a method to process the filtered df into an object that gensim can work with
-It has a method to train the LDA model with the right parameters
-It has a method to predict the topics in a corpus
'''
def __init__(self,corpus):
'''
Takes the list of terms
'''
#Store the corpus
self.tokenised = corpus
def _filter(self,minimum=5):
'''
Removes keywords that appear less than 5 times.
'''
#Load
tokenised = self.tokenised
#Count tokens
token_counts = pd.Series([x for el in tokenised for x in el]).value_counts()
#Tokens to keep
keep = token_counts.index[token_counts>minimum]
#Filter
tokenised_filtered = [[x for x in el if x in keep] for el in tokenised]
#Store
self.tokenised = tokenised_filtered
self.empty_groups = np.sum([len(x)==0 for x in tokenised_filtered])
return(self)
def clean(self):
'''
Remove symbols and numbers
'''
def process(self):
'''
This creates the bag of words we use in the gensim analysis
'''
#Load the list of keywords
tokenised = self.tokenised
#Create the dictionary
dictionary = corpora.Dictionary(tokenised)
#Create the Bag of words. This converts keywords into ids
corpus = [dictionary.doc2bow(x) for x in tokenised]
self.corpus = corpus
self.dictionary = dictionary
return(self)
def tfidf(self):
'''
This is optional: We extract the term-frequency inverse document frequency of the words in
the corpus. The idea is to identify those keywords that are more salient in a document by normalising over
their frequency in the whole corpus
'''
#Load the corpus
corpus = self.corpus
#Fit a TFIDF model on the data
tfidf = models.TfidfModel(corpus)
#Transform the corpus and save it
self.corpus = tfidf[corpus]
return(self)
def fit_lda(self,num_topics=20,passes=5,iterations=75,random_state=1803):
'''
This fits the LDA model taking a set of keyword arguments.
#Number of passes, iterations and random state for reproducibility. We will have to consider
reproducibility eventually.
'''
#Load the corpus
corpus = self.corpus
#Train the LDA model with the parameters we supplied
lda = models.LdaModel(corpus,id2word=self.dictionary,
num_topics=num_topics,passes=passes,iterations=iterations,random_state=random_state)
#Save the outputs
self.lda_model = lda
self.lda_topics = lda.show_topics(num_topics=num_topics)
return(self)
def predict_topics(self):
'''
This predicts the topic mix for every observation in the corpus
'''
#Load the attributes we will be working with
lda = self.lda_model
corpus = self.corpus
#Now we create a df
predicted = lda[corpus]
#Convert this into a dataframe
predicted_df = pd.concat([pd.DataFrame({x[0]:x[1] for x in topics},
index=[num]) for num,topics in enumerate(predicted)]).fillna(0)
self.predicted_df = predicted_df
return(self)
###Output
/usr/local/lib/python3.7/site-packages/sklearn/utils/__init__.py:4: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
from collections import Sequence
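###Markdown
A minimal end-to-end sketch of how the classes above chain together (parameters are illustrative and it is run here on a small sample of the World Reporter abstracts rather than the full corpus):
###Code
# Clean and tokenise a sample of abstracts, then fit a small LDA model and predict topic mixes
docs = list(wr['textBody_abstract_project'].dropna().sample(1000, random_state=1803))
ct = CleanTokenize(docs).clean().bigram()
pipe = LdaPipeline(ct.tokenised)._filter(minimum=5).process().fit_lda(num_topics=10)
pipe.predict_topics()
pipe.predicted_df.head()
###Output
_____no_output_____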
###Markdown
Load the labelled dataset from George
###Code
gdb = pd.read_csv('../data/external/gdb.csv')
gdb.shape
gdb['source_detailed'] = [x if x!='GDB' else y for x,y in zip(gdb.source_id,
gdb.gdb_dataset_id)]
###Output
_____no_output_____
###Markdown
But no MeSH terms?
###Code
#Drop the gdbs with no descriptions (we can't use them)
gdb.dropna(axis=0,subset=['description'],inplace=True)
###Output
_____no_output_____
###Markdown
Load the AI ids and identify preliminary AI health projects
###Code
#Load the AI ids I got from other analysis (this is such a mess!)
ai_ids = pd.read_csv('../data/external/ai_ids.csv',header=None)
#gdb['has_ai'] = gdb.row_id.apply(lambda x: x in ai_ids[1])
gdb.set_index('row_id',inplace=True)
ai_projects = gdb.loc[set(ai_ids[1]) & set(gdb.index)]
###Output
_____no_output_____
###Markdown
Next steps:

1. Label variables with their source names
2. Group projects by source names and extract tfidf values
3. Produce wordclouds comparing different types

Label variables
###Code
ai_projects.source_detailed.value_counts()
#Create aggregated corpora
ai_corpus = ai_projects.groupby(['source_detailed'])['description'].apply(lambda x: ' '.join(list(x)))
###Output
_____no_output_____
###Markdown
Extract tfidf descriptions
###Code
#This cleans up the corpus
cleaner = CleanTokenize(ai_corpus)
cleaner.clean().bigram()
#This generates TFIDF terms by group in the corpus
salient = LdaPipeline(cleaner.tokenised)
salient._filter(minimum=30).process().tfidf()
processed_corpus = salient.corpus
id_token_lookup = {v:k for k,v in salient.dictionary.token2id.items()}
#This extracts the top 1000 tfidf words in each document
top_words = [[id_token_lookup[val[0]] for val in sorted(x,key=lambda x: x[1],reverse=True)[:1000]] for x in processed_corpus]
#Put them in the corpus
ai_corpus = pd.DataFrame(ai_corpus)
ai_corpus['high_tfidf'] = top_words
ai_corpus['tokenised'] = cleaner.tokenised
#Create list of high tfidf tokens
ai_corpus['tokenised_high_tfidf'] = [[el for el in x if el in y] for x,y in zip(ai_corpus['tokenised'],
ai_corpus['high_tfidf'])]
###Output
_____no_output_____
###Markdown
TODO: Reproduce all this analysis comparing salience of AI / non-AI projects as well as robustness to changes in parameters (i.e. number of terms we filter during pre-processing, number of TFIDF terms we focus on...). A rough sketch of the AI / non-AI comparison is included after the wordclouds below.

Visualise

Wordcloud! Sorry everyone
###Code
from wordcloud import WordCloud
def get_wordcloud(df,source,variable='tokenised_high_tfidf'):
'''
Extracts a wordcloud from the data
'''
#my_text = df[(df.has_ai==True)&(df.top_discipline==discipline)&([y in years for y in df.start_year])][variable]
my_text = df[(df.index==source)][variable]
my_text_string = ', '.join(list(my_text)[0])
wc = WordCloud()
out = wc.generate_from_text(my_text_string)
return(out)
def create_wordcloud(wc_object,source,ax):
'''
Creates a wordcloud plot from tw charts
'''
ax.imshow(wc_object)
ax.set_axis_off()
ax.set_title(f'{source}',size=18)
return(ax)
fig,ax = plt.subplots(ncols=2,nrows=2,figsize=(20,10))
create_wordcloud(get_wordcloud(ai_corpus,'gtr'),'gtr',ax=ax[0][0])
create_wordcloud(get_wordcloud(ai_corpus,'H2020'),'h2020',ax=ax[0][1])
create_wordcloud(get_wordcloud(ai_corpus,'ahrq'),'ahrq',ax=ax[1][0])
create_wordcloud(get_wordcloud(ai_corpus,'Crunchbase'),'Crunchbase',ax=ax[1][1])
plt.savefig(f'../reports/figures/{today_str}_salient_terms_funder.pdf')
###Output
_____no_output_____
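###Markdown
Going back to the TODO above, a rough sketch of the AI / non-AI salience comparison (illustrative only: it flags the GDB projects with the AI ids loaded earlier, splits the descriptions into two aggregated documents and reuses the TFIDF machinery; the cut-offs are arbitrary):
###Code
# Flag AI projects in gdb (which is indexed by row_id) using the ids loaded above
gdb['has_ai'] = gdb.index.isin(set(ai_ids[1]))

# One aggregated document per group (False = non-AI, True = AI)
ai_split_corpus = gdb.groupby('has_ai')['description'].apply(lambda x: ' '.join(list(x)))

# Reuse the cleaning + TFIDF pipeline defined earlier
split_cleaner = CleanTokenize(ai_split_corpus).clean().bigram()
split_salient = LdaPipeline(split_cleaner.tokenised)._filter(minimum=30).process().tfidf()
split_lookup = {v: k for k, v in split_salient.dictionary.token2id.items()}
top_split_words = [[split_lookup[val[0]] for val in sorted(doc, key=lambda x: x[1], reverse=True)[:30]]
                   for doc in split_salient.corpus]
dict(zip(ai_split_corpus.index, top_split_words))
###Output
_____no_output_____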
###Markdown
Examples

This takes 3 random examples from the GDB data
###Code
for x in set(ai_projects.source_detailed):
print(x)
sample_projects(ai_projects.loc[ai_projects.source_detailed==x],'description',3,1000)
print('====')
###Output
innovateuk
Can a better understanding of your personal characteristics help unlock new insights into disease? Could your credit score, your income and your shopping habits help predict whether you are about to have a heart attack? This project focuses on people with diabetes- specifically looking at whether linking big atypical, non-health data sets with health data can reliably predict who is likely to benefit from a health intervention or who is not. Outcomes Based Healthcare, Big Data Partnership, Camden Clinical Commissioning Group and the University of Surrey are working together to research exactly these questions over the next two years. These insights will enable clinicians to understand whether their preventative measures or interventions are likely to be effective, ineffective, wasteful, or even harmful. This moves from a disease-based to a personalised, data-driven health system. To paraphrase Aristotle, it is really important to understand what sort of person has a disease, in order t
Development of Artificial Intelligence to identify product and brand names in online sales listings so as to enable fully automated monitoring for infringement against registered trade marks, at minimal marginal cost per mark.
This project will generate an optical detection system and associated software for true realtime determination of fluorescence levels generated during real-time PCR. BioGene (BG) has a technology platform capable of completing the real-time PCR process in 10 minutes and this speed combined with independent reaction control lends itself to a number of novel applications that could revolutionise the diagnostic field. “True real-time” PCR is a step change in this sector - the ability to perform reactions whereby the instrumentation can keep up with the chemistry and hence observe and collect data missed by the current one reading per cycle approach. The ability to monitor all visible wavelengths on a milliseconds timescale when combined with Ultra-Rapid PCR facilitates new assays centred on a machine learning approach. Principally, the system can learn the ideal amplification characteristics of any reaction, allowing processes such as factorial optimisation - the ability to optimise any n
====
H2020
This project brings together a diverse group of subject matter experts from industry and academia under one umbrella, with the main aim of enhancing and advancing future healthcare processes and systems using sensory and machine learning technologies to provide emotional (affective) and cognitive insights into patients well-being so as to provide them with more effective treatment across multiple medical domains. The objective is to develop technologies and methods that will lessen the enormous and growing health care costs of dementia and related cognitive impairments that burden European citizens, which is estimated to cost over €250 Billion by 2030 [1].
From a technical perspective, the primary objective is to “develop a cloud based affective computing [2] operating system capable of processing and fusing multiple sensory data streams to provide cognitive and emotional intelligence for AI connected healthcare systems”. In particular the consortium intends to:
• Specify and engine
Cancer is considered as the second leading cause of death worldwide. It is important to develop methodologies that improve understanding of the disease condition and progression. Over the past few years, single cell biology has been performed using micro/nano robotics for exploration of the nanomechanical and electrophysiological properties of cells. However, most of the research so far has been empirical and the understanding of the mechanisms and thus possible for cancer therapy are limited. Therefore, a systematic approach to address this challenge using advanced micro/robotics techniques is timely and important to a wide range of the technologies where micro/nano manipulation and measurement are in demand. The proposed “Micro/nano robotics for single cancer cells (MNR4SCell)” project focuses on the staff exchange between the 8 world recognised institutions of EU and China, and the share of knowledge and ideas, and further the development of the leading edge technologies for the des
Mathematical, computational models are central in biomedical and biological systems engineering; models enable (i) mechanistically justifying experimental results via current knowledge and (ii) generating new testable hypotheses or novel intervention methods.
SyMBioSys is a joint academic/industrial training initiative supporting the convergence of engineering, biological and computational sciences. The consortium's mutual goal is developing a new generation of innovative and entrepreneurial early-stage researchers (ESRs) to develop and exploit cutting-edge dynamic (kinetic) mathematical models for biomedical and biotechnological applications. SyMBioSys integrates: (i) six academic beneficiaries with a strong record in biomedical and biological systems engineering research, these include four universities and two research centres; (ii) four industrial beneficiaries including key players in developing simulation software for process systems engineering, metabolic engineering and indus
====
ahrq
DESCRIPTION (provided by applicant): Although cervical cancer is preventable, it still continues to be a leading cause of death. Following the evidence- based guidelines for cervical cancer prevention is challenging for healthcare providers, due to which many patients do not receive the optimal preventive care. Clinical decision support (CDS) systems can potentially improve the care delivery. However, the current CDS systems only identify patients overdue for screening, and do not suggest the optimal screening interval. Moreover they do not help with surveillance of patients with abnormal screening results. This is because the existing systems lack the capability to process free- text clinical reports that contain information needed for applying the guidelines. Hence there is a critical need for natural language processing (NLP)-enabled CDS systems that can utilize discrete as well as free-text patient information for enhancing the decision support. Our long-term goal is to improv
DESCRIPTION (provided by the applicant): Patients with known or suspected cancers transition through several different settings of ambulatory care in order to receive a timely diagnosis and treatment. The survival benefit conferred by early diagnosis and treatment hinges on well-coordinated care. Our preliminary work, for instance, shows that breakdowns in care processes that occur while patients are navigating the ambulatory care system can lead to substantial delays in diagnosis and/or treatment of some cancers. The goal of this proposal is to test the use of health information technology (IT) to identify patients with these delays and facilitate their movement through the health care system. The Institute of Medicine has identified both timeliness and coordination of care as targets for improving health care quality. However, little or no other published literature addresses a comprehensive approach to prevent breakdowns and delays in cancer diagnosis and/or treatment related to
?
DESCRIPTION (provided by applicant): Millions of Americans undergo surgery every year, and postoperative pain is common and too often poorly managed. Poorly managed postoperative pain may cause severe functional impairment, adverse events, impaired care of the underlying diseases, transition to chronic pain, and decreased quality of life. Many controlled
studies have demonstrated a variety of interventions that benefit postoperative pain, yet their application in a large and more diverse population is unknown, and a nationally endorsed, concise quality process metric for postoperative pain management does not exist. One roadblock is that postoperative pain and its related outcomes are complex. The gathering of evidence from electronic health data, which draw from and inform real-world practice, could bypass this roadblock and inform decisions leading to more effective and efficient postoperative pain management. This project seeks to measure quality of various care processes for
====
rwjf
Under this grant, the Center for Digital Democracy (CDD), in partnership with the Berkeley Media Studies Group (BMSG), will carry out analysis, convening, and outreach activities to build awareness of trends in digital marketing; the goal will be to provide an updated and in-depth examination of practices that drive marketing of unhealthy foods and beverages and to identify the potential use of these digital-marketing technologies to shape consumer demand to drive the reduction of childhood obesity. Through interviews of key figures in the food and beverage industry, as well as engagement of core stakeholders in public health and priority populations most saturated with targeted marketing, CDD and BMSG will identify potential guidelines to harness these practices--with a focus on emerging issues such as the use of big data. Finally, CDD and BMSG will help cultivate stakeholders for future action through education and outreach activities, such as briefings and webinars.
The Foundation's Twenty-First Century Public Health initiative was designed to implement innovations in state and local public health agencies that will significantly enhance their contribution to building a Culture of Health.This grant supports the Public Health National Center for Innovations (PHNCI) to foster alignment and spread of innovations in public health practice that promise to advance a Culture of Health. PHNCI will connect a network of partners, including representatives from local, tribal, state, and federal public health; health policy makers; and consultants or organizations with expertise in health reform, public health law, health in all policies, modeling and predictive analytics, health financing, services and systems research, and cross-jurisdictional sharing of services. Initially, PHNCI will assist multistakeholder coalitions in three pilot states to implement the innovations required to provide foundational public health services (FPHS) such that all people have
This project is designed to tell human stories that illuminate issues of health and wellness for lay audiences in the greater Philadelphia and South Jersey metropolitan region. At this time of consumer confusion driven by federal health reform, people are thirsty for reliable, timely information that explains how they, their health, and their family's health fit into this rapidly changing landscape. WHYY, the largest public media outlet in the nation's fourth largest media market, is positioned to play a key role in this explanatory mission. This grant funds additional staff for a new radio and Web program, the Pulse, in order to bolster coverage of public health, trends in care delivery, and medical research. It will also help underwrite partnerships with other regional and national media efforts, such as NJ Spotlight and ClearHealthCosts.com, and will bolster the ability to do data-driven reports, with data visualizations, that convey complex trends at a glance to audiences. This ini
====
world_reporter
Cerebral Palsy (CP) is a frequent neurological disorder in childhood (2 per 1,000 births). The EU has 1.3m of the 15m persons with CP globally. Participation in daily activities with other individuals and inclusion in cooperative learning paths by playing digital games can promote health & well-being with such disabilities as is identified in Europe 2020. GABLE is a highly innovative online service platform to personalise games to enhance the living adjustment of people with CP avoiding a gamification process where one fits for all. GABLE will be an open platform combining (i) the results of phase from the very successful FP7-SME-2012 Research for SMEs project (GA 315032) that ended in Nov 2014, (ii) the results of RAGE project and (iii) the technologies derived from machine learning under a multiplayer approach combining fixed and mobile devices. It enables carers & clinical staff to easily create (using visual programming & no special IT skills) interactive games customised to indi
?
DESCRIPTION (provided by applicant): The application of high-throughput technologies in biomedical research has become widespread and requires specialized skills to effectively design experiments analyze large datasets and integrate new data with existing large datasets. These technologies are increasingly being applied in environmental health sciences to provide comprehensive and timely mechanistic knowledge on how the environment affects human health. With the increased application of these technologies, more researchers need training to conceptually develop, properly design and implement comprehensive, large-scale big data studies. Accordingly, our proposed program in Population-Scale Genomics Studies of Environmental Stress has the long-term goal of training a network of Big Data to Knowledge (BD2K) practitioners in the application of modern sequencing technologies, computational approaches and biostatistical methods. The program couples three annual training workshops with n
DESCRIPTION (provided by applicant):
The aim of this proposal is to conduct research on the foundational models and algorithms in computer vision and machine learning for an egocentric vision based active learning co-robot wheelchair system to improve the quality of life of elders and disabled who have limited hand functionality or no hand functionality at all, and rely on wheelchairs for mobility. In this co-robt system, the wheelchair users wear a pair of egocentric camera glasses, i.e., the camera is capturing the users' field-of-the-views. This project help reduce the patients' reliance on care-givers. It fits NINR's mission in addressing key issues raised by the Nation's aging population and shortages of healthcare workforces, and in supporting patient-focused research that encourage and enable individuals to become guardians of their own well-beings. The egocentric camera serves two purposes. On one hand, from vision based motion sensing, the system can capture unique he
====
rgov
PREEVENTS Track 2: Collaborative Research: Developing a Framework for Seamless Prediction of Sub-Seasonal to Seasonal Extreme Precipitation Events in the United States
Extreme precipitation is a natural hazard that poses risks to life, society, and the economy. Impacts include mortality and morbidity from fast-moving water, contaminated water supplies, and waterborne diseases as well as dam failures, power and transportation disruption, severe erosion, and damage to both natural and agro-ecosystems. These impacts span several sectors including water resource management, energy, infrastructure, transportation, health and safety, and agriculture. However, on the timeframe required by many decision makers for planning, preparing, and resilience-building "subseasonal to seasonal (S2S; 14 to 90 days)" forecasts have poor skill and thus adequate tools for prediction do not exist. Additionally, societal resilience to these events cannot be increased without established, two-way communicati
Most natural language processing (NLP) systems still have only a literal understanding of words and phrases. In particular, current NLP technology is oblivious to the psychological impact that events and situations have on people. For example, NLP systems can recognize phrases associated with being hired or fired, but they do not understand that being hired is usually desirable while being fired is generally undesirable. This exploratory research aims to endow NLP technology with the ability to recognize events and situations that will have a positive or negative impact on people. The proposed research could be transformative because it would enable a fundamentally deeper understanding of language beyond the literal meaning of words and phrases. New methods resulting from this research would benefit a wide spectrum of applications that require understanding of written texts and human conversation, document summarization, question answering, sarcasm recognition, and identification of th
This project aims at (1) precise geometric depictions of digital images arising in biology, medicine, machine vision and other fields of science and engineering and (2) providing their model-independent statistical analysis for purposes of identification, discrimination and diagnostics. One specific application is to discriminate between a normal organ and a diseased one in the human body. Among examples, one may refer to the diagnosis of glaucoma and certain types of schizophrenia based on shape changes. A subject that the project will especially look at and analyze in depth, concerns changes in the geometric structure of the white matter in the brain's cortex brought about by Parkinson's disease, Alzheimers, schizophrenia, autism, etc., and their progression. Important applications in the fields of graphics, robotics, etc., will be explored as well.<br/><br/>Advancements in imaging technology enable scientists and medical professionals today to view the inner functioning of organs a
====
Crunchbase
Finding unique medical insights with Artificial Intelligence tools
We create neurotechnology platforms that help people tell their health story using short videos paired with artificial intelligence and ML.
Densitas is a medical device software company focused on creating products that make big data meaningful for healthcare decision-makers.
====
gtr
Two areas and approaches to historical writing which captured imaginations during the later twentieth and early twenty-first centuries were microhistory and global history. Both have generated wide interest among historians and the general public. Microhistory connected with cultural history became a popular historical methodology from the later 1970s, especially in France and Italy. Inspired by anthropological approaches, microhistorians reduced the scale of their analyses as a way of testing the large-scale paradigms that had come to influence the study of the past, for example Marxism, modernization theory, and the quantitative focus of certain approaches derived from the social sciences. It gained a wide public reception around the film 'The Return of Martin Guerre' and Natalie Zemon Davis's historical contribution to this.
Global history, by contrast, developed from the late 1990s out of comparative economic history, especially of China, South Asia and Europe in a debate on 'dive
<p>This post-doctoral fellowship (PDF) aims to better understand how important socio-economic events (such as unemployment, promotion, pay rises, disability, and changes to marital status) impact on peoples’ lives. The PDF will enable the fellow to develop his writing skills and publish work, develop further interdisciplinary research skills, present his work at international conferences and plan out and apply for funding for a future programme of work.<br /><br />The research will integrate areas of economics and psychology through the use of subjective well-being (SWB) data and self report personality measures. Specifically this research will show that the changes in SWB following an event are dependent upon an individual’s pre-event personality. Further, integration between psychology and economics will be promoted through research showing how and why money relates to well-being, including the testing of models arising from cognitive science.<br /><br />The research will draw on nat
My research will develop data-driven modelling of human interaction dynamics, where experimental measurements are followed by mathematical modelling. I emphasise that real-world data needs to drive modelling. Such refined modelling will predict potential disease outbreaks and enables building synthetic networks, which will provide opportunities to scale up the network environment and experimentally control epidemics. I aim to build such a prediction system. Realising this vision will involve both sophisticated data collection and model construction. Especially data collection takes an important role. Current popular detection mechanisms using WiFi access points or short range radio involve high failures, communication protocol limitation and complex statistics. Without in-depth understanding of data collection mechanisms, modelling such networks will not be reliable. The derived epidemic models will need to be accurate and parameterised with data on human interaction patterns, modulari
====
###Markdown
Something on the World Reporter data

In this case, load the MeSH lookup and check the top MeSH categories (diseases etc.) mentioned by AI / non-AI projects.
###Code
import ast
import random

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Load AI data
###Code
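# The `terms_mesh_abstract` column stores Python-style lists as strings in the CSV;
# the `converters` argument uses `ast.literal_eval` to turn them back into real lists on load.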
ai = pd.read_csv('../data/external/world_reporter/ai_projects',converters={'terms_mesh_abstract':ast.literal_eval}
)
###Output
_____no_output_____
###Markdown
Load MeSH

This is a lookup created by George. I use it to split MeSH terms into their groups. There will be some double counting because several MeSH terms appear in multiple categories.
###Code
mesh = pd.read_csv('../data/processed/mesh_lookup.csv')
#Here we should create a collection of MeSH terms for each group
mesh_sets = mesh.groupby('tree_string_0')['DescriptorNameString'].apply(lambda x: list(x))
###Output
/usr/local/lib/python3.7/site-packages/IPython/core/interactiveshell.py:2785: DtypeWarning: Columns (23,28) have mixed types. Specify dtype option on import or set low_memory=False.
interactivity=interactivity, compiler=compiler, result=result)
###Markdown
Work with a sample of World Reporter projects because otherwise it takes too long
###Code
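# NOTE (assumption): `wr` is the full World Reporter projects dataframe, loaded in an
# earlier cell of the original notebook that is not shown here.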
random.seed(667)
selection = random.sample(range(0,len(wr)),75000)
wr_sample = wr.iloc[selection]
###Output
_____no_output_____
###Markdown
We simply count the total number of MeSH terms in each MeSH category, for the AI projects and for the overall WR sample
###Code
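# NOTE (assumption): `term_distr` is defined in an earlier, unshown cell of the original
# notebook. A minimal fallback sketch of what it presumably does -- count how often each
# term appears across a Series of term lists:
try:
    term_distr
except NameError:
    def term_distr(series_of_lists):
        # Flatten the lists of terms and count occurrences of each term.
        return pd.Series([t for terms in series_of_lists for t in terms]).value_counts()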
comparisons = []
for var in mesh_sets.index:
print(var)
print('ai')
#ai[var] = [[x for x in mesh if x.lower() in mesh_sets[var]] for mesh in ai['terms_mesh_abstract']]
ai_terms = term_distr(ai['terms_mesh_abstract'].apply(lambda x: [t for t in x if t in mesh_sets[var]]))
print('wr')
wr_terms = term_distr(wr_sample['terms_mesh_abstract'].apply(lambda x: [t for t in x if t in mesh_sets[var]]))
#Concatenate the outputs into a table
comp = pd.concat([ai_terms,wr_terms],axis=1).fillna(0)
comp.columns = [var[:10]+'_ai',var[:10]+'_wr']
comparisons.append(comp)
###Output
analytical, diagnostic, and therapeutic techniques, and equipment
ai
wr
###Markdown
We plot some of the findings
###Code
#This list controls what MeSH terms are plotted and how we refer to them (some of the names above are too large for plotting)
to_plot = [[0,'Techniques'],
#[2,'social_groups'],
[5,'Diseases'],[7,'Health_care'],[13,'Psychiatry_psychology']]
###Output
_____no_output_____
###Markdown
In the plot, we focus on the first four of the selected MeSH categories, comparing AI projects against the overall World Reporter sample
###Code
fig,ax = plt.subplots(figsize=(25,10),ncols=4)
for n,x in enumerate(to_plot[:4]):
top_categories = comparisons[x[0]].iloc[:,1].sort_values(ascending=True)[-30:].index
(100*comparisons[x[0]].loc[top_categories].apply(lambda x: x/x.sum(),axis=0)).plot.barh(ax=ax[n])
ax[n].set_title(f'{x[1]} \n AI projects vs total')
ax[n].set_yticklabels(ax[n].get_yticklabels(),size=14)
ax[n].legend(['ai','all'],loc='lower right')
plt.tight_layout()
fig.suptitle('MeSH categories as % of total in all World Reporter projects and AI projects',y=1.05,size=18)
plt.savefig(f'../reports/figures/{today_str}_ai_comparison.pdf',bbox_inches='tight')
ai[['title_of_organisation','title_of_project','textBody_abstract_project','date_start_project','terms_mesh_abstract','placeName_country_organisation']]
###Output
_____no_output_____
###Markdown
Line chart

Shows AI projects as a share of all WR projects over time
###Code
wr['wr_year'] = wr.date_start_project.apply(lambda x: int(x.split('-')[0]) if pd.isnull(x)==False else np.nan)
ai['ai_year'] = ai.date_start_project.apply(lambda x: int(x.split('-')[0]) if pd.isnull(x)==False else np.nan)
year_act = pd.concat([wr['wr_year'].value_counts(),ai['ai_year'].value_counts()],axis=1).fillna(0)
year_act['ai_share']= year_act['ai_year']/year_act['wr_year']
fig,ax = plt.subplots(figsize=(8,2))
(100*year_act['ai_share']).plot(title='% of AI projects in World Reporter',ax=ax)
plt.savefig('../reports/figures/ai_trends.pdf')
###Output
_____no_output_____ |
10_adv_problems/10b_vae.ipynb | ###Markdown
Enable GPU

This notebook and pretty much every other notebook in this repository will run faster if you are using a GPU. On Colab:

* Navigate to Edit→Notebook Settings
* Select GPU from the Hardware Accelerator drop-down

On Cloud AI Platform Notebooks:

* Navigate to https://console.cloud.google.com/ai-platform/notebooks
* Create an instance with a GPU, or select your instance and add a GPU

Next, we'll confirm that we can connect to the GPU with TensorFlow:
###Code
import tensorflow as tf
print(tf.version.VERSION)
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
raise SystemError('GPU device not found')
print('Found GPU at: {}'.format(device_name))
# Import libraries and modules
import matplotlib.pyplot as plt
import numpy as np
print(np.__version__)
np.set_printoptions(threshold=np.inf)
###Output
1.18.5
###Markdown
Sampling Layer

Unlike a non-variational autoencoder, where the encoder outputs a point in the latent space directly, here we sample from a probability distribution parameterized by the mean and (log-)variance vectors that the encoder produces; this helps regularize the latent space.
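Concretely, the layer below implements the reparameterization trick: given the encoder outputs $\mu$ (`z_mean`) and $\log\sigma^2$ (`z_log_var`), it draws $\epsilon \sim \mathcal{N}(0, I)$ and returns $z = \mu + \exp\left(\tfrac{1}{2}\log\sigma^2\right)\epsilon$, which keeps the sampling step differentiable with respect to $\mu$ and $\log\sigma^2$.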
###Code
latent_dim = 3
class Sampling(tf.keras.layers.Layer):
"""Uses (z_mean, z_log_var) to sample z, the vector encoding a digit.
"""
def call(self, inputs):
"""Calls sampling layer with prob. dist. inputs & samples from dist.
Returns:
Random sample from probability distribution.
"""
z_mean, z_log_var = inputs
batch = tf.shape(input=z_mean)[0]
dim = tf.shape(input=z_mean)[1]
epsilon = tf.random.normal(shape=(batch, dim))
return z_mean + tf.math.exp(x=0.5 * z_log_var) * epsilon
###Output
_____no_output_____
###Markdown
Vanilla Variational Autoencoders

For a vanilla variational autoencoder, we use just basic `Dense` layers for both the encoder and decoder.
###Code
def create_vanilla_encoder(latent_dim):
"""Creates vanilla encoder for VAE.
Args:
latent_dim: int, the latent vector dimension length.
Returns:
VAE vanilla encoder `Model`.
"""
encoder_inputs = tf.keras.Input(shape=(28, 28, 1))
x = tf.keras.layers.Flatten()(inputs=encoder_inputs)
x = tf.keras.layers.Dense(units=1024, activation="relu")(inputs=x)
x = tf.keras.layers.Dense(units=512, activation="relu")(inputs=x)
x = tf.keras.layers.Dense(units=256, activation="relu")(inputs=x)
z_mean = tf.keras.layers.Dense(units=latent_dim, name="z_mean")(inputs=x)
z_log_var = tf.keras.layers.Dense(
units=latent_dim, name="z_log_var"
)(inputs=x)
z = Sampling()(inputs=[z_mean, z_log_var])
return tf.keras.Model(
inputs=encoder_inputs,
outputs=[z_mean, z_log_var, z],
name="vae_vanilla_encoder"
)
vanilla_encoder = create_vanilla_encoder(latent_dim=latent_dim)
vanilla_encoder.summary()
# Plot encoder model.
tf.keras.utils.plot_model(
model=vanilla_encoder,
to_file="vae_vanilla_encoder_model.png",
show_shapes=True,
show_layer_names=True
)
def create_vanilla_decoder(latent_dim):
"""Creates vanilla decoder for VAE.
Args:
latent_dim: int, the latent vector dimension length.
Returns:
VAE vanilla decoder `Model`.
"""
decoder_inputs = tf.keras.Input(shape=(latent_dim,))
x = tf.keras.layers.Dense(units=256, activation="relu")(
inputs=decoder_inputs
)
x = tf.keras.layers.Dense(units=512, activation="relu")(inputs=x)
x = tf.keras.layers.Dense(units=1024, activation="relu")(inputs=x)
x = tf.keras.layers.Dense(units=28 * 28 * 1, activation="sigmoid")(
inputs=x
)
decoder_outputs = tf.keras.layers.Reshape(target_shape=(28, 28, 1))(
inputs=x
)
return tf.keras.Model(
inputs=decoder_inputs,
outputs=decoder_outputs,
name="vae_vanilla_decoder"
)
vanilla_decoder = create_vanilla_decoder(latent_dim=latent_dim)
vanilla_decoder.summary()
# Plot decoder model.
tf.keras.utils.plot_model(
model=vanilla_decoder,
to_file="vae_vanilla_decoder_model.png",
show_shapes=True,
show_layer_names=True
)
###Output
_____no_output_____
###Markdown
We'll create a custom `Model` class named `VAE` that we can use to facilitate training of our encoder and decoder networks.
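The training objective in `train_step` below is the standard VAE loss: a binary cross-entropy reconstruction term summed over the pixels of each image, plus the KL divergence between the approximate posterior $\mathcal{N}(\mu, \sigma^2)$ and the standard normal prior, $D_{\mathrm{KL}} = -\tfrac{1}{2}\sum_j\left(1 + \log\sigma_j^2 - \mu_j^2 - \sigma_j^2\right)$, both averaged over the batch.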
###Code
class VAE(tf.keras.Model):
"""Custom model for training a variational autoencoder.
Attributes:
encoder: Keras `Model`, the encoder network.
decoder: Keras `Model`, the decoder network.
reconstruction_loss_metric: `Metric`, to track reconstruction loss.
kl_loss_metric: `Metric`, to track KL loss.
total_loss_metric: `Metric`, to track total loss.
"""
def __init__(self, encoder, decoder, **kwargs):
"""Instantiates `VAE` model class.
Args:
encoder: Keras `Model`, the encoder network.
decoder: Keras `Model`, the decoder network.
"""
super(VAE, self).__init__(**kwargs)
self.encoder = encoder
self.decoder = decoder
self.reconstruction_loss_metric = tf.keras.metrics.Mean(
name="reconstruction_loss"
)
self.kl_loss_metric = tf.keras.metrics.Mean(name="kl_loss")
self.total_loss_metric = tf.keras.metrics.Mean(name="total_loss")
@property
def metrics(self):
"""Returns list of metrics.
Returns:
List of metrics.
"""
return [
self.reconstruction_loss_metric,
self.kl_loss_metric,
self.total_loss_metric,
]
def train_step(self, images):
"""Trains `VAE` for one step.
Args:
images: tensor, rank 4 tensor of images with shape
(batch_size, height, width, depth).
Returns:
losses: dict, dictionary containing scalar losses.
"""
with tf.GradientTape(persistent=True) as tape:
# Encode images into probability distribution parameters and
# latent vector/code.
z_mean, z_log_var, z = self.encoder(inputs=images)
# Generate reconstructed images from latent vectors.
reconstructed_images = self.decoder(inputs=z)
reconstruction_loss = tf.math.reduce_mean(
input_tensor=tf.math.reduce_sum(
input_tensor=tf.keras.losses.binary_crossentropy(
y_true=images, y_pred=reconstructed_images
),
axis=(1, 2)
)
)
kl_loss = -0.5 * (
1. + z_log_var - tf.square(x=z_mean) - tf.exp(x=z_log_var)
)
kl_loss = tf.reduce_mean(
input_tensor=tf.reduce_sum(input_tensor=kl_loss, axis=1)
)
total_loss = reconstruction_loss + kl_loss
# Get gradients from loss and apply to weights.
grads = tape.gradient(
target=total_loss, sources=self.trainable_weights
)
self.optimizer.apply_gradients(
grads_and_vars=zip(grads, self.trainable_weights)
)
# Update metrics.
self.reconstruction_loss_metric.update_state(
values=reconstruction_loss
)
self.kl_loss_metric.update_state(values=kl_loss)
self.total_loss_metric.update_state(values=total_loss)
losses = {
"reconstruction_loss": self.reconstruction_loss_metric.result(),
"kl_loss": self.kl_loss_metric.result(),
"loss": self.total_loss_metric.result(),
}
return losses
def create_dataset(batch_size, training):
"""Creates training dataset.
Args:
batch_size: int, number of elements in a mini-batch.
training: bool, whether training or not.
Returns:
dataset: `Dataset`, dataset object for training using MNIST.
"""
# Get and format MNIST data.
(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
if training:
images = x_train
else:
images = x_test
images = images.astype("float32") / 255.0
images = np.reshape(images, newshape=(-1, 28, 28, 1))
# Create tf.data.Dataset for training.
dataset = tf.data.Dataset.from_tensor_slices(tensors=images)
if training:
dataset = dataset.shuffle(buffer_size=70000)
dataset = dataset.batch(batch_size=batch_size)
return dataset
# Instantiate an VAE instance using our vanilla encoder and decoder.
vanilla_vae = VAE(
encoder=vanilla_encoder, decoder=vanilla_decoder
)
vanilla_vae.compile(
optimizer=tf.keras.optimizers.Adam(
learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-7
)
)
# Train vanilla variational autoencoder model.
vanilla_vae_history = vanilla_vae.fit(
create_dataset(batch_size=128, training=True), epochs=30
)
###Output
Epoch 1/30
469/469 [==============================] - 2s 4ms/step - reconstruction_loss: 167.1454 - kl_loss: 5.8514 - loss: 205.9385
Epoch 2/30
469/469 [==============================] - 2s 4ms/step - reconstruction_loss: 136.6066 - kl_loss: 7.3714 - loss: 146.3028
Epoch 3/30
469/469 [==============================] - 2s 4ms/step - reconstruction_loss: 130.8336 - kl_loss: 7.7967 - loss: 139.6543
Epoch 4/30
469/469 [==============================] - 2s 4ms/step - reconstruction_loss: 127.5951 - kl_loss: 8.0558 - loss: 135.7583
Epoch 5/30
469/469 [==============================] - 2s 4ms/step - reconstruction_loss: 125.4783 - kl_loss: 8.2111 - loss: 133.9135
Epoch 6/30
469/469 [==============================] - 2s 4ms/step - reconstruction_loss: 124.3622 - kl_loss: 8.3210 - loss: 133.0397
Epoch 7/30
469/469 [==============================] - 2s 4ms/step - reconstruction_loss: 122.8894 - kl_loss: 8.4606 - loss: 131.4232
Epoch 8/30
469/469 [==============================] - 2s 4ms/step - reconstruction_loss: 121.9176 - kl_loss: 8.5443 - loss: 130.6047
Epoch 9/30
469/469 [==============================] - 2s 4ms/step - reconstruction_loss: 121.1513 - kl_loss: 8.6023 - loss: 129.5638
Epoch 10/30
469/469 [==============================] - 2s 4ms/step - reconstruction_loss: 120.4412 - kl_loss: 8.6844 - loss: 128.7188
Epoch 11/30
469/469 [==============================] - 2s 4ms/step - reconstruction_loss: 120.0537 - kl_loss: 8.7217 - loss: 128.4621
Epoch 12/30
469/469 [==============================] - 2s 4ms/step - reconstruction_loss: 119.2182 - kl_loss: 8.7857 - loss: 128.3907
Epoch 13/30
469/469 [==============================] - 2s 4ms/step - reconstruction_loss: 118.4907 - kl_loss: 8.8183 - loss: 127.4828
Epoch 14/30
469/469 [==============================] - 2s 4ms/step - reconstruction_loss: 117.9334 - kl_loss: 8.8798 - loss: 126.6340
Epoch 15/30
469/469 [==============================] - 2s 4ms/step - reconstruction_loss: 117.5510 - kl_loss: 8.9038 - loss: 126.3496
Epoch 16/30
469/469 [==============================] - 2s 4ms/step - reconstruction_loss: 117.0453 - kl_loss: 8.9229 - loss: 125.7153
Epoch 17/30
469/469 [==============================] - 2s 4ms/step - reconstruction_loss: 116.9296 - kl_loss: 8.9458 - loss: 125.9631
Epoch 18/30
469/469 [==============================] - 2s 4ms/step - reconstruction_loss: 116.3198 - kl_loss: 8.9943 - loss: 125.3361
Epoch 19/30
469/469 [==============================] - 2s 4ms/step - reconstruction_loss: 116.1085 - kl_loss: 9.0155 - loss: 125.2003
Epoch 20/30
469/469 [==============================] - 2s 4ms/step - reconstruction_loss: 115.9034 - kl_loss: 9.0555 - loss: 124.5876
Epoch 21/30
469/469 [==============================] - 2s 4ms/step - reconstruction_loss: 115.3949 - kl_loss: 9.0718 - loss: 124.3461
Epoch 22/30
469/469 [==============================] - 2s 4ms/step - reconstruction_loss: 115.0904 - kl_loss: 9.1178 - loss: 124.0206
Epoch 23/30
469/469 [==============================] - 2s 4ms/step - reconstruction_loss: 115.0394 - kl_loss: 9.1068 - loss: 123.5618
Epoch 24/30
469/469 [==============================] - 2s 4ms/step - reconstruction_loss: 116.8088 - kl_loss: 9.1743 - loss: 125.1896
Epoch 25/30
469/469 [==============================] - 2s 4ms/step - reconstruction_loss: 116.5654 - kl_loss: 9.1969 - loss: 126.0862
Epoch 26/30
469/469 [==============================] - 2s 4ms/step - reconstruction_loss: 115.4081 - kl_loss: 9.2078 - loss: 124.7423
Epoch 27/30
469/469 [==============================] - 2s 4ms/step - reconstruction_loss: 114.8673 - kl_loss: 9.2243 - loss: 124.1473
Epoch 28/30
469/469 [==============================] - 2s 4ms/step - reconstruction_loss: 114.9758 - kl_loss: 9.2315 - loss: 124.2533
Epoch 29/30
469/469 [==============================] - 2s 4ms/step - reconstruction_loss: 114.4236 - kl_loss: 9.2574 - loss: 123.7667
Epoch 30/30
469/469 [==============================] - 2s 4ms/step - reconstruction_loss: 113.7834 - kl_loss: 9.2814 - loss: 123.2228
###Markdown
Let's plot the loss history and some generated images using our trained model.
###Code
def plot_loss_history(history):
"""Plots loss history.
Args:
history: `keras.callbacks.History`, history object from training job.
"""
plt.plot(history.history["loss"])
plt.title("Training loss")
plt.ylabel("Loss")
plt.xlabel("Epoch")
plt.legend(["train_loss"], loc="upper left")
plt.show()
def plot_images(images):
"""Plots images.
Args:
images: np.array, array of images of
[num_images, image_size, image_size, num_channels].
"""
num_images = len(images)
plt.figure(figsize=(20, 20))
for i in range(num_images):
image = images[i]
plt.subplot(1, num_images, i + 1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(
tf.reshape(image, image.shape[:-1]),
cmap="gray_r"
)
plt.show()
plot_loss_history(history=vanilla_vae_history)
###Output
_____no_output_____
###Markdown
Both non-variational and variational autoencoders are great at reconstruction.
###Code
dataset_iter = iter(create_dataset(batch_size=12, training=False))
batch = next(dataset_iter)
plot_images(
    images=vanilla_vae.decoder(
        # The encoder returns (z_mean, z_log_var, z); decode the sampled z.
        inputs=vanilla_vae.encoder(inputs=batch)[2]
    )
)
vanilla_vae.decoder(
inputs=tf.random.normal(shape=(12, latent_dim))
)
###Output
_____no_output_____
###Markdown
And now the latent space is nicely regularized thanks to the variational objective, so decoding a user-provided latent vector (here, samples from a standard normal) performs well.
###Code
plot_images(
images=vanilla_vae.decoder(
inputs=tf.random.normal(shape=(12, latent_dim))
)
)
###Output
_____no_output_____
###Markdown
DC-VAE

A deep convolutional variational autoencoder (DC-VAE) uses convolutional (`Conv2D`) and transposed-convolutional (`Conv2DTranspose`) layers instead of `Dense` layers for the encoder and decoder, respectively.
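With `strides=2` and `padding="same"`, each encoder convolution halves the spatial size ($28 \rightarrow 14 \rightarrow 7$), and the two strided `Conv2DTranspose` layers in the decoder reverse this ($7 \rightarrow 14 \rightarrow 28$), so the output matches the $28 \times 28 \times 1$ input shape.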
###Code
def create_dc_encoder(latent_dim):
"""Creates deep convolutional encoder for VAE.
Args:
latent_dim: int, the latent vector dimension length.
Returns:
VAE deep convolutional encoder `Model`.
"""
encoder_inputs = tf.keras.Input(shape=(28, 28, 1))
x = tf.keras.layers.Conv2D(
filters=32,
kernel_size=3,
strides=2,
padding="same",
activation="relu"
)(inputs=encoder_inputs)
x = tf.keras.layers.Conv2D(
filters=64,
kernel_size=3,
strides=2,
padding="same",
activation="relu"
)(inputs=x)
x = tf.keras.layers.Flatten()(inputs=x)
x = tf.keras.layers.Dense(units=16, activation="relu")(inputs=x)
z_mean = tf.keras.layers.Dense(units=latent_dim, name="z_mean")(inputs=x)
z_log_var = tf.keras.layers.Dense(units=latent_dim, name="z_log_var")(
inputs=x
)
z = Sampling()(inputs=[z_mean, z_log_var])
return tf.keras.Model(
inputs=encoder_inputs,
outputs=[z_mean, z_log_var, z],
name="vae_dc_encoder"
)
dc_encoder = create_dc_encoder(latent_dim=latent_dim)
dc_encoder.summary()
# Plot decoder model.
tf.keras.utils.plot_model(
model=dc_encoder,
to_file="vae_dc_encoder_model.png",
show_shapes=True,
show_layer_names=True
)
def create_dc_decoder(latent_dim):
"""Creates deep convolutional decoder for VAE.
Args:
latent_dim: int, the latent vector dimension length.
Returns:
VAE deep convolutional decoder `Model`.
"""
decoder_inputs = tf.keras.Input(shape=(latent_dim,))
x = tf.keras.layers.Dense(units=7 * 7 * 64, activation="relu")(
inputs=decoder_inputs
)
x = tf.keras.layers.Reshape(target_shape=(7, 7, 64))(inputs=x)
x = tf.keras.layers.Conv2DTranspose(
filters=64,
kernel_size=3,
strides=2,
padding="same",
activation="relu"
)(inputs=x)
x = tf.keras.layers.Conv2DTranspose(
filters=32,
kernel_size=3,
strides=2,
padding="same",
activation="relu"
)(inputs=x)
decoder_outputs = tf.keras.layers.Conv2DTranspose(
filters=1, kernel_size=3, padding="same", activation="sigmoid"
)(inputs=x)
return tf.keras.Model(
inputs=decoder_inputs,
outputs=decoder_outputs,
name="vae_dc_decoder"
)
dc_decoder = create_dc_decoder(latent_dim=latent_dim)
dc_decoder.summary()
# Plot decoder model.
tf.keras.utils.plot_model(
model=dc_decoder,
to_file="vae_dc_decoder_model.png",
show_shapes=True,
show_layer_names=True
)
# Instantiate an `VAE` instance using our DC encoder and decoder.
dc_vae = VAE(
encoder=dc_encoder, decoder=dc_decoder
)
dc_vae.compile(
optimizer=tf.keras.optimizers.Adam(
learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-7
)
)
# Train DC-VAE model.
dc_vae_history = dc_vae.fit(
create_dataset(batch_size=128, training=True), epochs=30
)
###Output
Epoch 1/30
469/469 [==============================] - 4s 9ms/step - reconstruction_loss: 209.1565 - kl_loss: 4.1338 - loss: 264.4867
Epoch 2/30
469/469 [==============================] - 4s 9ms/step - reconstruction_loss: 164.2599 - kl_loss: 7.1135 - loss: 177.4706
Epoch 3/30
469/469 [==============================] - 4s 9ms/step - reconstruction_loss: 146.7268 - kl_loss: 8.5374 - loss: 157.0586
Epoch 4/30
469/469 [==============================] - 4s 9ms/step - reconstruction_loss: 142.1685 - kl_loss: 8.7049 - loss: 151.0237
Epoch 5/30
469/469 [==============================] - 4s 9ms/step - reconstruction_loss: 139.7380 - kl_loss: 8.7948 - loss: 149.1933
Epoch 6/30
469/469 [==============================] - 4s 9ms/step - reconstruction_loss: 138.1725 - kl_loss: 8.8309 - loss: 147.0765
Epoch 7/30
469/469 [==============================] - 4s 9ms/step - reconstruction_loss: 136.8625 - kl_loss: 8.8469 - loss: 145.8864
Epoch 8/30
469/469 [==============================] - 4s 9ms/step - reconstruction_loss: 135.8532 - kl_loss: 8.8677 - loss: 145.0436
Epoch 9/30
469/469 [==============================] - 4s 9ms/step - reconstruction_loss: 134.9837 - kl_loss: 8.8759 - loss: 144.2765
Epoch 10/30
469/469 [==============================] - 4s 9ms/step - reconstruction_loss: 134.3635 - kl_loss: 8.8672 - loss: 143.2842
Epoch 11/30
469/469 [==============================] - 4s 8ms/step - reconstruction_loss: 133.8151 - kl_loss: 8.8571 - loss: 143.0146
Epoch 12/30
469/469 [==============================] - 4s 9ms/step - reconstruction_loss: 133.2811 - kl_loss: 8.8496 - loss: 142.2589
Epoch 13/30
469/469 [==============================] - 4s 9ms/step - reconstruction_loss: 132.7805 - kl_loss: 8.8172 - loss: 142.2175
Epoch 14/30
469/469 [==============================] - 4s 8ms/step - reconstruction_loss: 132.3395 - kl_loss: 8.7993 - loss: 141.5917
Epoch 15/30
469/469 [==============================] - 4s 8ms/step - reconstruction_loss: 132.0204 - kl_loss: 8.7880 - loss: 140.9512
Epoch 16/30
469/469 [==============================] - 4s 8ms/step - reconstruction_loss: 131.7324 - kl_loss: 8.7733 - loss: 140.2379
Epoch 17/30
469/469 [==============================] - 4s 8ms/step - reconstruction_loss: 131.3824 - kl_loss: 8.7391 - loss: 140.4529
Epoch 18/30
469/469 [==============================] - 4s 9ms/step - reconstruction_loss: 131.0637 - kl_loss: 8.7140 - loss: 140.0258
Epoch 19/30
469/469 [==============================] - 4s 9ms/step - reconstruction_loss: 130.8072 - kl_loss: 8.7136 - loss: 139.2236
Epoch 20/30
469/469 [==============================] - 4s 9ms/step - reconstruction_loss: 130.6477 - kl_loss: 8.6855 - loss: 139.4570
Epoch 21/30
469/469 [==============================] - 4s 9ms/step - reconstruction_loss: 130.4011 - kl_loss: 8.6726 - loss: 139.1106
Epoch 22/30
469/469 [==============================] - 4s 9ms/step - reconstruction_loss: 130.2065 - kl_loss: 8.6422 - loss: 138.8855
Epoch 23/30
469/469 [==============================] - 4s 8ms/step - reconstruction_loss: 130.0415 - kl_loss: 8.6625 - loss: 138.7476
Epoch 24/30
469/469 [==============================] - 4s 8ms/step - reconstruction_loss: 129.8052 - kl_loss: 8.6375 - loss: 138.2914
Epoch 25/30
469/469 [==============================] - 4s 9ms/step - reconstruction_loss: 129.6610 - kl_loss: 8.6415 - loss: 138.3956
Epoch 26/30
469/469 [==============================] - 4s 9ms/step - reconstruction_loss: 129.5196 - kl_loss: 8.6314 - loss: 138.4193
Epoch 27/30
469/469 [==============================] - 4s 9ms/step - reconstruction_loss: 129.3393 - kl_loss: 8.6235 - loss: 137.7918
Epoch 28/30
469/469 [==============================] - 4s 9ms/step - reconstruction_loss: 129.2037 - kl_loss: 8.6034 - loss: 137.7090
Epoch 29/30
469/469 [==============================] - 4s 9ms/step - reconstruction_loss: 129.0271 - kl_loss: 8.6415 - loss: 137.7265
Epoch 30/30
469/469 [==============================] - 4s 9ms/step - reconstruction_loss: 128.8743 - kl_loss: 8.6269 - loss: 137.4384
###Markdown
Let's plot the loss history and some generated images using our trained model.
###Code
plot_loss_history(history=dc_vae_history)
###Output
_____no_output_____
###Markdown
Both non-variational and variational autoencoders are great at reconstruction.
###Code
dataset_iter = iter(create_dataset(batch_size=12, training=False))
batch = next(dataset_iter)
plot_images(
    images=dc_vae.decoder(
        # The encoder returns (z_mean, z_log_var, z); decode the sampled z.
        inputs=dc_vae.encoder(inputs=batch)[2]
    )
)
###Output
_____no_output_____
###Markdown
And now the latent space is nicely regularized thanks to the variational objective, so decoding a user-provided latent vector (here, samples from a standard normal) performs well.
###Code
plot_images(
images=dc_vae.decoder(
inputs=tf.random.normal(shape=(12, latent_dim))
)
)
###Output
_____no_output_____ |
site/_build/html/_sources/notebooks/08-intro-nlp/04-what-cooking-python.ipynb | ###Markdown
[](http://rpi.analyticsdojo.com) What's Cooking in Python
rpi.analyticsdojo.com

What's Cooking in Python
This was adapted from https://www.kaggle.com/manuelatadvice/whats-cooking/noname/code
###Code
#This imports a bunch of packages.
import pandas as pd
import matplotlib.pyplot as plt
import nltk
from nltk.stem import WordNetLemmatizer
from collections import Counter
import json
from nltk.stem import WordNetLemmatizer
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
#from sklearn import grid_search
#If you import the codes locally, this seems to cause some issues.
import json
from urllib.request import urlopen
urltrain= 'https://raw.githubusercontent.com/RPI-Analytics/MGMT6963-2015/master/data/whatscooking/whatscookingtrain.json'
urltest = 'https://raw.githubusercontent.com/RPI-Analytics/MGMT6963-2015/master/data/whatscooking/whatscookingtest.json'
train = pd.read_json(urlopen(urltrain))
test = pd.read_json(urlopen(urltest))
#First we want to see the most popular cuisine for the naive model.
train.groupby('cuisine').size()
#Here we write the most popular selection. This is the baseline by which we will judge other models.
test['cuisine']='italian'
#This is a simpler version that selects just the id and cuisine columns
submission=test[['id' , 'cuisine' ]]
#This is a more complex method I showed that gives the same result.
#submission=pd.DataFrame(test.ix[:,['id' , 'cuisine' ]])
#This outputs the file.
submission.to_csv("1_cookingSubmission.csv",index=False)
from google.colab import files
files.download('1_cookingSubmission.csv')
#It seems that for some of the data we need to use the NLTK lemmatizer.
stemmer = WordNetLemmatizer()
nltk.download('wordnet')
train
#We see this in a Python Solution.
train['ingredients_clean_string1'] = [','.join(z).strip() for z in train['ingredients']]
#We also know that we can do something similar through a lambda function.
strip = lambda x: ' , '.join(x).strip()
#Finally, we call the function for name
train['ingredients_clean_string2'] = train['ingredients'].map(strip)
#Now that we used the lambda function, we can reuse this for the test dataset.
test['ingredients_clean_string1'] = test['ingredients'].map(strip)
#We see this in one of the solutions. We can reconstruct it in a way that makes it a bit easier to follow, but I found that doing so took forever.
#To interpret this, read from right to left.
train['ingredients_string1'] = [' '.join([WordNetLemmatizer().lemmatize(re.sub('[^A-Za-z]', ' ', line)) for line in lists]).strip() for lists in train['ingredients']]
test['ingredients_string1'] = [' '.join([WordNetLemmatizer().lemmatize(re.sub('[^A-Za-z]', ' ', line)) for line in lists]).strip() for lists in test['ingredients']]
train['ingredients_string1']
ingredients = train['ingredients'].apply(lambda x:','.join(x))
ingredients
#Now we will create a corpus.
corpustr = train['ingredients_string1']
corpusts = test['ingredients_string1']
corpustr
#http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html
#You could develop an understanding based on each.
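# For reference, with its default smooth idf scikit-learn computes
# tf-idf(t, d) = tf(t, d) * (ln((1 + n) / (1 + df(t))) + 1) and then L2-normalises each row.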
vectorizertr = TfidfVectorizer(stop_words='english',
ngram_range = ( 1 , 1 ),analyzer="word",
max_df = .57 , binary=False , token_pattern=r'\w+' , sublinear_tf=False)
vectorizerts = TfidfVectorizer(stop_words='english')
#Note that this doesn't work with the #todense option.
tfidftr=vectorizertr.fit_transform(corpustr)
predictors_tr = tfidftr
#Note that this doesn't work with the #todense option. This creates a matrix of predictors from the corpus.
tfidfts=vectorizertr.transform(corpusts)
predictors_ts= tfidfts
#This is target variable.
targets_tr = train['cuisine']
###Output
_____no_output_____
###Markdown
Logistic Regression and Regularization
- Regularization can help us with the large feature matrix by adding a penalty for each parameter.
- We find out how much regularization to use via grid search (a search across hyperparameters).

http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html

```
C : float, default: 1.0
Inverse of regularization strength; must be a positive float. Like in support vector machines, smaller values specify stronger regularization.
```
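As a sketch of scikit-learn's documented L2-penalised objective (treat the exact constants as an assumption), the solver minimises $\tfrac{1}{2}\lVert w\rVert_2^2 + C\sum_i \log\left(1 + \exp\left(-y_i\,(x_i^\top w + c)\right)\right)$, so a smaller `C` puts relatively more weight on the penalty, i.e. stronger regularization.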
###Code
#Logistic Regression.
parameters = {'C':[1, 10]}
#clf = LinearSVC()
clf = LogisticRegression()
predictors_tr
from sklearn.model_selection import GridSearchCV
#This uses the associated parameters to search a grid space.
classifier = GridSearchCV(clf, parameters)
classifier=classifier.fit(predictors_tr,targets_tr)
#This predicts the outcome for the test set.
predictions=classifier.predict(predictors_ts)
#This adds it to the resulting dataframe.
test['cuisine'] = predictions
#This creates the submission dataframe
submission2=test[['id' , 'cuisine' ]]
#This outputs the file.
submission2.to_csv("2_logisticSubmission.csv",index=False)
from google.colab import files
files.download('2_logisticSubmission.csv')
from sklearn.ensemble import RandomForestClassifier
# Create the random forest object which will include all the parameters
# for the fit
forest = RandomForestClassifier(n_estimators = 10)
# Fit the training data to the Survived labels and create the decision trees
forest = forest.fit(predictors_tr,targets_tr)
# Take the same decision trees and run it on the test data
predictions = forest.predict(predictors_ts)
#This adds it to the resulting dataframe.
test['cuisine'] = predictions
#This creates the submission dataframe
submission3=test[['id' , 'cuisine' ]]
submission3.to_csv("3_random_submission.csv",index=False)
from google.colab import files
files.download('3_random_submission.csv')
ingredients = train['ingredients'].apply(lambda x:','.join(x))
ingredients
train
###Output
_____no_output_____ |
CSP_norte.ipynb | ###Markdown
READ DATA
###Code
#Libraries
import numpy as np
import matplotlib.pyplot as plt
#Parametric Data
parametric = np.genfromtxt("sources/results_parametric.csv", delimiter = ',',
skip_header = 1)
parametric_wet = np.genfromtxt("sources/results_dt.csv", delimiter = ',',
skip_header = 1)
days = np.arange(365)
###Output
_____no_output_____
###Markdown
Cost distribution
###Code
# Pie chart, where the slices will be ordered and plotted counter-clockwise:
labels = ['Heliostat \n Field', 'Tower', 'Receiver', 'TES', 'Power Block']
sizes = [27.86, 12.31, 21.2, 17.42, 21.1]
explode = (.05, .05, .05, .05, .05)
fig1, ax1 = plt.subplots(figsize= (8,8))
ax1.pie(sizes, explode=explode, labels=labels, autopct='%1.1f%%',
shadow=True, startangle=120, textprops=dict(fontsize= 18))
ax1.axis('equal') # Equal aspect ratio ensures that pie is drawn as a circle.
#Save figure
plt.savefig('graphs/pie_costs.png', format='png', bbox_inches='tight')
plt.show()
###Output
_____no_output_____
###Markdown
Parametric analysis
###Code
#Hours of storage
h_storage = parametric[0:16,0]
#Solar multiple
parametric_sm17 = parametric[0:16,:]
parametric_sm18 = parametric[16:32,:]
parametric_sm19 = parametric[32:48,:]
parametric_sm20 = parametric[48:64,:]
parametric_sm21 = parametric[64:80,:]
parametric_sm22 = parametric[80:96,:]
parametric_sm23 = parametric[96:112,:]
parametric_sm24 = parametric[112:128,:]
parametric_sm25 = parametric[128:144,:]
parametric_sm26 = parametric[144:160,:]
parametric_sm27 = parametric[160:176,:]
parametric_sm28 = parametric[176:192,:]
parametric_sm29 = parametric[192:208,:]
parametric_sm30 = parametric[208:224,:]
###Output
_____no_output_____
###Markdown
LCOE
###Code
#Part 1
#Graph
plt.figure(figsize=(16, 8)) #Image Size
plt.plot(h_storage, parametric_sm17[:, 5], 'o-', label= 'SM: 1.7')
plt.plot(h_storage, parametric_sm20[:, 5], 'o-', label= 'SM: 2')
plt.plot(h_storage, parametric_sm23[:, 5], 'o-', label= 'SM: 2.3')
plt.plot(h_storage, parametric_sm25[:, 5], 'o-', linewidth= 3,
label= 'SM: 2.5')
plt.plot(h_storage, parametric_sm27[:, 5], 'o-', label= 'SM: 2.7')
plt.plot(h_storage, parametric_sm30[:, 5], 'o-', label= 'SM: 3')
plt.xticks(h_storage)
plt.ylim(7,14)
plt.xlim(0,15)
plt.xlabel(r'TES Hours [h]', fontsize=24)
plt.ylabel(r'Cost [¢/kWh]', fontsize=24)
plt.title("LCOE Parametric Analysis", fontsize=28)
plt.legend(fontsize= 16)
plt.grid()
#Save figure
plt.savefig('graphs/lcoe_parametric.png', format='png', bbox_inches='tight')
plt.show()
###Output
_____no_output_____
###Markdown
Wet-Cooling parametric analysis
###Code
#Graph
plt.figure(figsize=(16, 6)) #Image Size
plt.plot(parametric_wet[:,0], parametric_wet[:, 1], 'o-',
label= 'PPA (1st year)')
#plt.plot(parametric_wet[:,0], parametric_wet[:, 2], 'o-', label= 'LCOE (real)')
plt.xticks(parametric_wet[:,0])
plt.xlim(10, 40)
plt.xlabel(r'Temperature Rise [$\Delta^\circ$C]', fontsize=24)
plt.ylabel(r'Cost [¢/kWh]', fontsize=24)
plt.title("Effect of Temperature Rise in Cooling Water", fontsize=28)
plt.legend()
plt.grid()
#Save figure
plt.savefig('graphs/wet_metrics.png', format='png', bbox_inches='tight')
plt.show()
###Output
_____no_output_____ |
notebooks/inputcheck/MontecarloInputCheck.ipynb | ###Markdown
Simple Montecarlo Example
###Code
%load_ext autoreload
%autoreload 2
def all_vals(run, name):
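    """Collect the values of tensor `name` at every step of `run` and stack them into one numpy array."""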
vs = []
for st in run.getsteps():
v = run.step(st).get_tensor(name).as_numpy()
vs.append(v)
return np.array(vs)
###Output
_____no_output_____
###Markdown
This notebook will show how to run Montecarlo to capture the inputs, outputs, and intermediate layers of an inference run and compare them with the corresponding data from the training run.

Train a Simple Model

This very simple model learns the coefficients of a simple linear equation: `y_hat = Wx + b`. The data we feed to it follows the formula `y = 5x + 1`, so we should converge to `W == 5` and `b == 1`.
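The training output below confirms this: the final fit reaches $W \approx 5.000$ and $b \approx 0.999$, essentially recovering the generating equation $y = 5x + 1$. (The training script itself is not included here; judging by the magnitudes printed, the cost appears to be a squared error summed over each batch of 256 samples, but that is an assumption.)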
###Code
!rm -rf ts_outputs/train
!rm -rf model
!python ./simplemodel_train.py --mean 0 --stddev 2 --batchsize 256 --epochs 1000
###Output
2019-04-18 10:16:31.355871: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Epoch: 0001 cost= 6721.340820312 W= 2.3675923 b= 0.8477539
Epoch: 0101 cost= 123.101486206 W= 4.668736 b= 0.8997093
Epoch: 0201 cost= 2.549396992 W= 4.958542 b= 0.94027495
Epoch: 0301 cost= 0.329603940 W= 4.994855 b= 0.9643348
Epoch: 0401 cost= 0.118118107 W= 4.999462 b= 0.9786624
Epoch: 0501 cost= 0.041727878 W= 4.9999423 b= 0.9872289
Epoch: 0601 cost= 0.014956966 W= 4.999992 b= 0.9923562
Epoch: 0701 cost= 0.005347713 W= 5.000022 b= 0.9954251
Epoch: 0801 cost= 0.001918845 W= 4.9999933 b= 0.9972619
Epoch: 0901 cost= 0.000687572 W= 4.9999895 b= 0.9983698
Optimization Finished!
Training cost= 0.00024611675 W= 5.0000043 b= 0.9990243
###Markdown
This will deposit a model in `./model`
###Code
!ls -l ./model/
###Output
total 48
-rw-r--r-- 1 olg ANT\Domain Users 67 Apr 18 10:16 checkpoint
-rw-r--r-- 1 olg ANT\Domain Users 8 Apr 18 10:16 model.data-00000-of-00001
-rw-r--r-- 1 olg ANT\Domain Users 142 Apr 18 10:16 model.index
-rw-r--r-- 1 olg ANT\Domain Users 35877 Apr 18 10:16 model.meta
###Markdown
It will also deposit the Tornasole/Montecarlo data in `./ts_outputs/train/`
###Code
!ls -l ./ts_outputs/train/
###Output
total 0
drwxr-xr-x 104 olg ANT\Domain Users 3328 Apr 18 10:16 [34mecffef59-f2df-48e9-ae52-f81671adb1f2[m[m
###Markdown
Run Inference on the Trained Model

Now we load the model we previously deposited in `./model` and run inference. The Tornasole/Montecarlo traces will be deposited in `./ts_outputs/infer`. We only trace three tensors: `X:0` (the input), `product:0` (an intermediate result), and `Y_hat:0` (the output).

Let's pretend we have a run whose inputs have a different statistical distribution from the training data: `mean=3`, `stddev=6`. We also choose a batch size of 17 (just because) and run for 1000 steps, evaluating a total of 17,000 input samples.
###Code
!rm -rf ts_outputs/infer
!python simplemodel_infer.py --mean 3 --stddev 6 --batchsize 17 --steps 1000
###Output
2019-04-18 10:36:37.808473: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Running sample 0
Running sample 100
Running sample 200
Running sample 300
Running sample 400
Running sample 500
Running sample 600
Running sample 700
Running sample 800
Running sample 900
###Markdown
The output of the Montecarlo trace is deposited in `ts_outputs/infer/`. Each directory shown below contains the trace for a single batch (of size==17, in this case)
###Code
!ls -l ./ts_outputs/infer/*/ | head
###Output
total 0
drwxr-xr-x 3 olg ANT\Domain Users 96 Apr 18 10:35 0
drwxr-xr-x 3 olg ANT\Domain Users 96 Apr 18 10:35 1
drwxr-xr-x 3 olg ANT\Domain Users 96 Apr 18 10:35 10
drwxr-xr-x 3 olg ANT\Domain Users 96 Apr 18 10:35 100
drwxr-xr-x 3 olg ANT\Domain Users 96 Apr 18 10:35 101
drwxr-xr-x 3 olg ANT\Domain Users 96 Apr 18 10:35 102
drwxr-xr-x 3 olg ANT\Domain Users 96 Apr 18 10:35 103
drwxr-xr-x 3 olg ANT\Domain Users 96 Apr 18 10:35 104
drwxr-xr-x 3 olg ANT\Domain Users 96 Apr 18 10:35 105
###Markdown
Running MonteCarlo

Now we have traces for a single training run, which ran for multiple batches, in `./ts_outputs/train/`, and a single inference run, which ran for 1000 batches of size 17, in `./ts_outputs/infer/`. Let's say we want to compare the distributions of inputs, intermediate results, and outputs.
###Code
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import os
import sys
from tornasole_rules.run_catalog import LocalRunCatalog
from tornasole_rules.run import LocalRun
tdir = "./ts_outputs/train"
idir = "./ts_outputs/infer"
# We scan the disc to find the training run
training_run_catalog = LocalRunCatalog(tdir)
inference_run_catalog = LocalRunCatalog(idir)
###Output
_____no_output_____
###Markdown
First we ask Montecarlo to list all the training runs it knows something about
###Code
training_run_names = training_run_catalog.list_candidates()
for training_run_name in training_run_names:
print( f'Training Run {training_run_name}')
tr = LocalRun(training_run_name, os.path.join(tdir,training_run_name))
###Output
Training Run ecffef59-f2df-48e9-ae52-f81671adb1f2
None
###Markdown
Then all the inference runs it knows about. In the SageMaker world, a single endpoint would constitute a Run
###Code
inference_run_names = inference_run_catalog.list_candidates()
for inference_run_name in inference_run_names:
print( f'Inference Run {inference_run_name}')
ir = LocalRun(inference_run_name, os.path.join(idir,inference_run_name))
###Output
Inference Run 42cbf42b-b2c1-46eb-86a4-9d8d64549b21
None
###Markdown
We can plot the distributions of the tensor called `X:0` (the input) in the training and inference runs
###Code
xs_train = all_vals(tr,"X:0").flatten()
xs_infer = all_vals(ir,"X:0").flatten()
plt.hist(xs_train, bins=50, alpha=0.5, label="Training inputs")
plt.hist(xs_infer, bins=50, alpha=0.5, label="Inference inputs")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
We can also look at values _inside_ the model, for example a tensor named `product:0`. Again, note the difference between training and inference.
###Code
product_train = all_vals(tr, "product:0").flatten()
product_infer = all_vals(ir, "product:0").flatten()
plt.hist(product_train, bins=50, alpha=0.5, label="Training intermediate product")
plt.hist(product_infer, bins=50, alpha=0.5, label="Inference intermediate product")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
And finally, the outputs:- training (actual): `Y:0`- training (predicted): `Y_hat:0`- inference (predicted): `Y_hat:0`
###Code
output_actual_train = all_vals(tr, "Y:0").flatten()
output_predicted_train = all_vals(tr, "Y_hat:0").flatten()
output_predicted_infer = all_vals(ir, "Y_hat:0").flatten()
plt.hist(output_actual_train, bins=50, alpha=0.5, label="Training actual output")
plt.hist(output_predicted_train, bins=50, alpha=0.5, label="Training predicted output")
plt.hist(output_predicted_infer, bins=50, alpha=0.5, label="Inference predicted output")
plt.legend()
plt.show()
###Output
_____no_output_____ |
Model backlog/Train/22-jigsaw-train-2fold-xlm-roberta-large-numpy.ipynb | ###Markdown
Dependencies
###Code
import json, warnings, shutil
from jigsaw_utility_scripts import *
from transformers import TFXLMRobertaModel, XLMRobertaConfig
from tensorflow.keras.models import Model
from tensorflow.keras import optimizers, metrics, losses, layers
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
TPU configuration
###Code
strategy, tpu = set_up_strategy()
print("REPLICAS: ", strategy.num_replicas_in_sync)
AUTO = tf.data.experimental.AUTOTUNE
###Output
Running on TPU grpc://10.0.0.2:8470
REPLICAS: 8
###Markdown
Load data
###Code
database_base_path = '/kaggle/input/jigsaw-dataset-split-pb-roberta-large-192/'
k_fold = pd.read_csv(database_base_path + '5-fold.csv')
valid_df = pd.read_csv("/kaggle/input/jigsaw-multilingual-toxic-comment-classification/validation.csv", usecols=['comment_text', 'toxic', 'lang'])
print('Train set samples: %d' % len(k_fold))
print('Validation set samples: %d' % len(valid_df))
display(k_fold.head())
# Unzip files
!tar -xvf /kaggle/input/jigsaw-dataset-split-pb-roberta-large-192/fold_1.tar.gz
!tar -xvf /kaggle/input/jigsaw-dataset-split-pb-roberta-large-192/fold_2.tar.gz
# !tar -xvf /kaggle/input/jigsaw-dataset-split-pb-roberta-large-192/fold_3.tar.gz
# !tar -xvf /kaggle/input/jigsaw-dataset-split-pb-roberta-large-192/fold_4.tar.gz
# !tar -xvf /kaggle/input/jigsaw-dataset-split-pb-roberta-large-192/fold_5.tar.gz
###Output
Train set samples: 435775
Validation set samples: 8000
###Markdown
Model parameters
###Code
base_path = '/kaggle/input/jigsaw-transformers/XLM-RoBERTa/'
config = {
"MAX_LEN": 192,
"BATCH_SIZE": 16 * strategy.num_replicas_in_sync,
"EPOCHS": 2,
"LEARNING_RATE": 1e-5,
"ES_PATIENCE": 1,
"N_FOLDS": 2,
"base_model_path": base_path + 'tf-xlm-roberta-large-tf_model.h5',
"config_path": base_path + 'xlm-roberta-large-config.json'
}
with open('config.json', 'w') as json_file:
json.dump(json.loads(json.dumps(config)), json_file)
###Output
_____no_output_____
###Markdown
Model
###Code
module_config = XLMRobertaConfig.from_pretrained(config['config_path'], output_hidden_states=False)
def model_fn(MAX_LEN):
input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
base_model = TFXLMRobertaModel.from_pretrained(config['base_model_path'], config=module_config)
sequence_output = base_model({'input_ids': input_ids})
last_state = sequence_output[0]
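    # Use the hidden state of the first token (<s>, XLM-R's CLS equivalent)
    # as the pooled representation of the whole comment.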
cls_token = last_state[:, 0, :]
output = layers.Dense(1, activation='sigmoid', name='output')(cls_token)
model = Model(inputs=input_ids, outputs=output)
model.compile(optimizers.Adam(lr=config['LEARNING_RATE']),
loss=losses.BinaryCrossentropy(),
metrics=[metrics.BinaryAccuracy(), metrics.AUC()])
return model
# Datasets
def get_training_dataset(x_train, y_train, batch_size, buffer_size):
dataset = tf.data.Dataset.from_tensor_slices(({'input_ids': x_train}, y_train))
dataset = dataset.repeat()
dataset = dataset.shuffle(2048)
dataset = dataset.batch(batch_size, drop_remainder=True)
dataset = dataset.prefetch(buffer_size)
return dataset
def get_validation_dataset(x_valid, y_valid, batch_size, buffer_size):
dataset = tf.data.Dataset.from_tensor_slices(({'input_ids': x_valid}, y_valid))
dataset = dataset.batch(batch_size, drop_remainder=True)
dataset = dataset.cache()
dataset = dataset.prefetch(buffer_size)
return dataset
###Output
_____no_output_____
###Markdown
Train
###Code
history_list = []
for n_fold in range(config['N_FOLDS']):
tf.tpu.experimental.initialize_tpu_system(tpu)
print('\nFOLD: %d' % (n_fold+1))
# Load data
base_data_path = 'fold_%d/' % (n_fold+1)
x_train = np.load(base_data_path + 'x_train.npy')
y_train = np.load(base_data_path + 'y_train.npy')
# x_valid = np.load(base_data_path + 'x_valid.npy')
x_valid_ml = np.load(database_base_path + 'x_valid.npy')
y_valid_ml = np.load(database_base_path + 'y_valid.npy')
step_size = x_train.shape[0] // config['BATCH_SIZE']
### Delete data dir
shutil.rmtree(base_data_path)
# Train model
model_path = 'model_fold_%d.h5' % (n_fold+1)
es = EarlyStopping(monitor='val_loss', mode='min', patience=config['ES_PATIENCE'],
restore_best_weights=True, verbose=1)
checkpoint = ModelCheckpoint(model_path, monitor='val_loss', mode='min',
save_best_only=True, save_weights_only=True, verbose=1)
with strategy.scope():
model = model_fn(config['MAX_LEN'])
# history = model.fit(get_training_dataset(x_train, y_train, config['BATCH_SIZE'], AUTO),
# validation_data=(get_validation_dataset(x_valid_ml, y_valid_ml, config['BATCH_SIZE'], AUTO)),
history = model.fit(x_train, y_train,
validation_data=(x_valid_ml, y_valid_ml),
callbacks=[checkpoint, es],
epochs=config['EPOCHS'],
batch_size=config['BATCH_SIZE'],
steps_per_epoch=step_size,
verbose=2).history
history_list.append(history)
# Fine-tune on validation set
n_steps2 = x_valid_ml.shape[0] // config['BATCH_SIZE']
print('\nFine-tune on validation set')
# history2 = model.fit(get_training_dataset(x_valid_ml, y_valid_ml, config['BATCH_SIZE'], AUTO),
history2 = model.fit(x_valid_ml, y_valid_ml,
batch_size=config['BATCH_SIZE'],
steps_per_epoch=n_steps2,
epochs=config['EPOCHS'],
verbose=2).history
# Make predictions
# train_preds = model.predict(get_test_dataset(np.load(base_data_path + 'x_train.npy'), config['BATCH_SIZE'], AUTO))
# valid_preds = model.predict(get_test_dataset(np.load(base_data_path + 'x_valid.npy'), config['BATCH_SIZE'], AUTO))
# valid_ml_preds = model.predict(get_test_dataset(np.load(database_base_path + 'x_valid.npy'), config['BATCH_SIZE'], AUTO))
# k_fold.loc[k_fold['fold_%d' % (n_fold+1)] == 'train', 'pred_%d' % (n_fold+1)] = np.round(train_preds)
# k_fold.loc[k_fold['fold_%d' % (n_fold+1)] == 'validation', 'pred_%d' % (n_fold+1)] = np.round(valid_preds)
# valid_df['pred_%d' % (n_fold+1)] = np.round(valid_ml_preds)
###Output
FOLD: 1
Train on 348620 samples, validate on 8000 samples
Epoch 1/2
Epoch 00001: val_loss improved from inf to 0.29469, saving model to model_fold_1.h5
Epoch 2/2
Epoch 00002: val_loss did not improve from 0.29469
Restoring model weights from the end of the best epoch.
Epoch 00002: early stopping
Fine-tune on validation set
Train on 8000 samples
Epoch 1/2
Epoch 2/2
FOLD: 2
Train on 348620 samples, validate on 8000 samples
Epoch 1/2
Epoch 00001: val_loss improved from inf to 0.28263, saving model to model_fold_2.h5
Epoch 2/2
Epoch 00002: val_loss did not improve from 0.28263
Restoring model weights from the end of the best epoch.
Epoch 00002: early stopping
Fine-tune on validation set
Train on 8000 samples
Epoch 1/2
Epoch 2/2
###Markdown
Model loss graph
###Code
sns.set(style="whitegrid")
for n_fold in range(config['N_FOLDS']):
print('Fold: %d' % (n_fold+1))
plot_metrics(history_list[n_fold])
###Output
Fold: 1
###Markdown
Model evaluation
###Code
# display(evaluate_model(k_fold, config['N_FOLDS']).style.applymap(color_map))
###Output
_____no_output_____
###Markdown
Confusion matrix
###Code
# for n_fold in range(config['N_FOLDS']):
# print('Fold: %d' % (n_fold+1))
# train_set = k_fold[k_fold['fold_%d' % (n_fold+1)] == 'train']
# validation_set = k_fold[k_fold['fold_%d' % (n_fold+1)] == 'validation']
# plot_confusion_matrix(train_set['toxic'], train_set['pred_%d' % (n_fold+1)],
# validation_set['toxic'], validation_set['pred_%d' % (n_fold+1)])
###Output
_____no_output_____
###Markdown
Model evaluation by language
###Code
# display(evaluate_model_lang(valid_df, config['N_FOLDS']).style.applymap(color_map))
###Output
_____no_output_____
###Markdown
Visualize predictions
###Code
pd.set_option('max_colwidth', 120)
print('English validation set')
display(k_fold[['comment_text', 'toxic'] + [c for c in k_fold.columns if c.startswith('pred')]].head(10))
print('Multilingual validation set')
display(valid_df[['comment_text', 'toxic'] + [c for c in k_fold.columns if c.startswith('pred')]].head(10))
x_test = np.load(database_base_path + 'x_test.npy')
test_dataset = (
tf.data.Dataset
.from_tensor_slices(x_test)
.batch(config['BATCH_SIZE'])
)
import glob
model_path_list = glob.glob('/kaggle/working/' + '*.h5')
model_path_list.sort()
print('Models to predict:')
print(*model_path_list, sep = "\n")
NUM_TEST_IMAGES = len(x_test)
test_preds = np.zeros((NUM_TEST_IMAGES, 1))
for model_path in model_path_list:
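# re-initialize the TPU before loading each saved model (presumably to free memory between folds)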
tf.tpu.experimental.initialize_tpu_system(tpu)
print(model_path)
with strategy.scope():
model = model_fn(config['MAX_LEN'])
model.load_weights(model_path)
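# note: predictions are summed over all saved models; dividing by len(model_path_list)
# (as in the commented-out line below) would turn the sum into an ensemble average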
test_preds += model.predict(test_dataset)
# test_preds += model.predict(get_test_dataset(x_test, config['BATCH_SIZE'], AUTO)) / len(model_path_list)
submission = pd.read_csv('/kaggle/input/jigsaw-multilingual-toxic-comment-classification/sample_submission.csv')
# submission['toxic'] = model.predict(test_dataset)
submission['toxic'] = test_preds
submission.to_csv('submission.csv', index=False)
submission.head(10)
###Output
_____no_output_____ |
analysis/.ipynb_checkpoints/mr estimator-checkpoint.ipynb | ###Markdown
Example manual analysis
###Code
# imports assumed for this checkpoint cell (mrestimator is conventionally imported as mre; numpy is used in later cells)
import numpy as np
import mrestimator as mre
srcsub = mre.input_handler('/Users/harangju/Developer/activity_i5_j2.txt')
print('imported trials from wildcard: ', srcsub.shape[0])
oful = mre.OutputHandler()
oful.add_ts(srcsub)
rk = mre.coefficients(srcsub)
print(rk.coefficients)
print('this guy has the following attributes: ', rk._fields)
m = mre.fit(rk)
m.mre
###Output
INFO Unbound fit to $|A| e^{-k/\tau}$
INFO Finished 4 fit(s)
INFO Finished fitting the data to f_exponential, mre = 0.70596, tau = 2.87ms, ssres = 0.15697
WARNING The obtained autocorrelationtime is small compared to the fitrange: tmin~1ms, tmax~2904ms, tau~3ms
WARNING Consider fitting with smaller 'minstep' and 'maxstep'
###Markdown
Full analysis
###Code
# m = np.zeros((16,12))
mact = np.zeros((16,12))
for i in range(0,16):
for j in range(0,12):
fname = '/Users/harangju/Developer/avalanche paper data/mr estimation/activity/activity_i' +\
str(i+1) + '_j' + str(j+1) + '.txt'
act = np.loadtxt(fname)
mact[i,j] = max(act)
# srcsub = mre.input_handler(fname)
# rk = mre.coefficients(srcsub, steps=(1,10000))
# me = mre.fit(rk)
# m[i,j] = me.mre
import scipy.io as sio
sio.savemat('/Users/Developer/mre.mat',{'mre':m,'mact':mact})
###Output
_____no_output_____ |
1-intro-to-computer-vision/activities/5-cnn-layers-and-feature-visualization/1. Conv layer visualization.ipynb | ###Markdown
Convolutional Layer---In this notebook, we visualize four filtered outputs (a.k.a. feature maps) of a convolutional layer. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'images/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
### do not modify the code below this line ###
# visualize all four filters
fig = plt.figure(figsize=(10, 5))
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
width, height = filters[i].shape
for x in range(width):
for y in range(height):
ax.annotate(str(filters[i][x][y]), xy=(y,x),
horizontalalignment='center',
verticalalignment='center',
color='white' if filters[i][x][y]<0 else 'black')
###Output
_____no_output_____
###Markdown
Define a convolutional layer Initialize a single convolutional layer so that it contains all your created filters. Note that you are not training this network; you are initializing the weights in a convolutional layer so that you can visualize what happens after a forward pass through this network!
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a single convolutional layer with four filters
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# returns both layers
return conv_x, activated_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
)
###Markdown
Visualize the output of each filter. First, we'll define a helper function, `viz_layer`, that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1, xticks=[], yticks=[])
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer, before and after a ReLu activation function is applied.
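As a quick numeric reminder (an illustrative sketch, not part of the original notebook): ReLU simply zeroes out negative filter responses, `relu(x) = max(0, x)`.
```python
import numpy as np
conv_response = np.array([-0.8, 0.0, 0.4, -0.1, 1.2])
np.maximum(conv_response, 0)   # -> array([0. , 0. , 0.4, 0. , 1.2])
```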
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get the convolutional layer (pre and post activation)
conv_layer, activated_layer = model(gray_img_tensor)
# visualize the output of a conv layer
viz_layer(conv_layer)
# visualize the output of an activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____ |
projects/bach_project.ipynb | ###Markdown
Main Process
###Code
# import required libraries
from pandas import Series, DataFrame
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn import tree
from sklearn.model_selection import cross_val_score
from sklearn.metrics import accuracy_score
from collections import Counter
# download the zip with our bach dataset
!wget https://raw.githubusercontent.com/zacharski/ml-class/master/data/bach.zip
# extract the file in the zip
!unzip bach.zip
# load the csv file to a dataframe
chorales = pd.read_csv('bach.csv')
chorales
# sets the choral_ID column as the index for dataframe
chorales.set_index('choral_ID', inplace=True)
chorales
# evaluate the 12 note columns and replace accordingly
chorales.replace(('YES', 'NO'), (1, 0), inplace=True)
chorales
# one hot encode the Bass columns
one_hot_bass = pd.get_dummies(chorales['bass'], prefix='Bass')
one_hot_bass
# drop bass column and join to main dataframe
chorales = chorales.drop('bass', axis=1)
chorales = chorales.join(one_hot_bass)
chorales
# assign specific columns to features and labels
# divide the data into training and testing sets
featureCols = list(chorales.columns)
featureCols.remove('chord_label')
chorales_features = chorales[featureCols]
chorales_labels = chorales['chord_label']
bach_train_features, bach_test_features, bach_train_labels, bach_test_labels = train_test_split(chorales_features, chorales_labels, test_size = 0.2, random_state=42)
chorales_features
# create the classifier and fit it to training sets
clf = tree.DecisionTreeClassifier(criterion='entropy')
clf.fit(bach_train_features, bach_train_labels)
# print accuracy values of each fold and the average
scores = cross_val_score(clf, bach_train_features, bach_train_labels, cv=10)
print(scores)
print("The average accuracy is %5.3f" % (scores.mean()))
# evaluate how accurate the classifier is in guessing the correct chord_label
predictions = clf.predict(bach_test_features)
accuracy_score(bach_test_labels, predictions)
###Output
_____no_output_____
###Markdown
Original Process This is the original thought process I had but this method had low accuracy scores.
###Code
# read data into a dataframe
chorales_copy = pd.read_csv('bach.csv')
chorales_copy.set_index('choral_ID', inplace=True)
chorales_copy
# evaluate the chord_label and bass columns to help get a better understanding of data
print(chorales_copy.chord_label.unique().size)
print(chorales_copy.chord_label.unique())
print(chorales_copy.bass.unique().size)
print(chorales_copy.bass.unique())
# count how many unique bass notes there are
bass_results = Counter()
chorales_copy['bass'].str.split().apply(bass_results.update)
print("bass: ", len(bass_results.keys()))
print(bass_results)
# Original thought process was to assign a value to each Bass note
# dict is in order from most common to least common bass note
replace_bass = {'D' : 1, 'A' : 2, 'G' : 3, 'E' : 4, 'C' : 5, 'F' : 6, 'F#' : 7, 'B' : 8, 'Bb' : 9,'C#' : 10,'Eb' : 11, 'G#' : 12,
'D#': 13, 'Ab': 14, 'Db': 15, 'A#': 16}
chorales_copy.replace((replace_bass.keys()), (replace_bass.values()), inplace=True)
chorales_copy
# evaluate the 12 note columns and replace accordingly
chorales_copy.replace(('YES', 'NO'), (1, 0), inplace=True)
chorales_copy
# assign columns to features and labels
# divide data into training and testing sets
features = list(chorales_copy.columns)
features.remove('chord_label')
features_copy = chorales_copy[features]
labels_copy = chorales_copy['chord_label']
bach_train_featuresC, bach_test_featuresC, bach_train_labelsC, bach_test_labelsC = train_test_split(features_copy, labels_copy, test_size = 0.2, random_state=42)
# create the classifier and fit it to training sets
clf2 = tree.DecisionTreeClassifier(criterion='entropy')
clf2.fit(bach_train_featuresC, bach_train_labelsC)
# print accuracy values of each fold and the average
scores2 = cross_val_score(clf2, bach_train_featuresC, bach_train_labelsC, cv=10)
print(scores2)
print("The average accuracy is %5.3f" % (scores2.mean()))
# evaluate how accurate the classifier is in guessing the correct chord_label
predictions2 = clf2.predict(bach_test_featuresC)
accuracy_score(bach_test_labelsC, predictions2)
###Output
_____no_output_____ |
_notebooks/2022-01-07-license-plate-detection.ipynb | ###Markdown
License Plate Detection
###Code
# !git clone https://github.com/GuiltyNeuron/ANPR.git
# !wget -O weights.zip https://bit.ly/39RYf1u
# !unzip weights.zip
# !cp '/content/classes.names' '/content/ANPR/Licence_plate_detection/'
# %cd '/content/ANPR/Licence_plate_detection/'
# !python detector.py --image test.jpg
!git clone https://github.com/quangnhat185/Plate_detect_and_recognize.git
%cd Plate_detect_and_recognize
import cv2
import numpy as np
import matplotlib.pyplot as plt
from local_utils import detect_lp
from os.path import splitext,basename
from keras.models import model_from_json
import glob
def load_model(path):
try:
path = splitext(path)[0]
with open('%s.json' % path, 'r') as json_file:
model_json = json_file.read()
model = model_from_json(model_json, custom_objects={})
model.load_weights('%s.h5' % path)
print("Loading model successfully...")
return model
except Exception as e:
print(e)
wpod_net_path = "wpod-net.json"
wpod_net = load_model(wpod_net_path)
def preprocess_image(image_path,resize=False):
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = img / 255
if resize:
img = cv2.resize(img, (224,224))
return img
# !wget -q -O /content/plates/plate1.jpg 'https://images.squarespace-cdn.com/content/v1/5c981f3d0fb4450001fdde5d/1563727260863-E9JQC4UVO8IYCE6P19BO/ke17ZwdGBToddI8pDm48kDHPSfPanjkWqhH6pl6g5ph7gQa3H78H3Y0txjaiv_0fDoOvxcdMmMKkDsyUqMSsMWxHk725yiiHCCLfrh8O1z4YTzHvnKhyp6Da-NYroOW3ZGjoBKy3azqku80C789l0mwONMR1ELp49Lyc52iWr5dNb1QJw9casjKdtTg1_-y4jz4ptJBmI9gQmbjSQnNGng/cars+1.jpg'
# !wget -q -O /content/plates/plate2.jpg 'https://www.cars24.com/blog/wp-content/uploads/2018/12/High-Security-Registration-Plates-Feature-Cars24.com_.png'
# Create a list of image paths
image_paths = glob.glob("/content/plates/*.jpg")
print("Found %i images..."%(len(image_paths)))
# Visualize data in subplot
fig = plt.figure(figsize=(12,8))
cols = 5
rows = 4
fig_list = []
for i in range(len(image_paths)):
fig_list.append(fig.add_subplot(rows,cols,i+1))
title = splitext(basename(image_paths[i]))[0]
fig_list[-1].set_title(title)
img = preprocess_image(image_paths[i],True)
plt.axis(False)
plt.imshow(img)
plt.tight_layout(True)
plt.show()
# forward image through model and return plate's image and coordinates
# if error "No Licensese plate is founded!" pop up, try to adjust Dmin
def get_plate(image_path, Dmax=608, Dmin=256):
vehicle = preprocess_image(image_path)
ratio = float(max(vehicle.shape[:2])) / min(vehicle.shape[:2])
side = int(ratio * Dmin)
bound_dim = min(side, Dmax)
_ , LpImg, _, cor = detect_lp(wpod_net, vehicle, bound_dim, lp_threshold=0.5)
return LpImg, cor
# Obtain plate image and its coordinates from an image
test_image = image_paths[0]
LpImg,cor = get_plate(test_image)
print("Detect %i plate(s) in"%len(LpImg),splitext(basename(test_image))[0])
print("Coordinate of plate(s) in image: \n", cor)
# Visualize our result
plt.figure(figsize=(12,5))
plt.subplot(1,2,1)
plt.axis(False)
plt.imshow(preprocess_image(test_image))
plt.subplot(1,2,2)
plt.axis(False)
plt.imshow(LpImg[0])
#plt.savefig("part1_result.jpg",dpi=300)
# Viualize all obtained plate images
fig = plt.figure(figsize=(12,6))
cols = 5
rows = 4
fig_list = []
for i in range(len(image_paths)):
fig_list.append(fig.add_subplot(rows,cols,i+1))
title = splitext(basename(image_paths[i]))[0]
fig_list[-1].set_title(title)
LpImg,_ = get_plate(image_paths[i])
plt.axis(False)
plt.imshow(LpImg[0])
plt.tight_layout(True)
plt.show()
###Output
_____no_output_____ |
4-language-translation/dlnd_language_translation.ipynb | ###Markdown
Language Translation. In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the Data. Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
###Output
_____no_output_____
###Markdown
Explore the Data. Play around with view_sentence_range to view different parts of the data.
###Code
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 227
Number of sentences: 137861
Average number of words in a sentence: 13.225277634719028
English sentences 0 to 10:
new jersey is sometimes quiet during autumn , and it is snowy in april .
the united states is usually chilly during july , and it is usually freezing in november .
california is usually quiet during march , and it is usually hot in june .
the united states is sometimes mild during june , and it is cold in september .
your least liked fruit is the grape , but my least liked is the apple .
his favorite fruit is the orange , but my favorite is the grape .
paris is relaxing during december , but it is usually chilly in july .
new jersey is busy during spring , and it is never hot in march .
our least liked fruit is the lemon , but my least liked is the grape .
the united states is sometimes busy during january , and it is sometimes warm in november .
French sentences 0 to 10:
new jersey est parfois calme pendant l' automne , et il est neigeux en avril .
les états-unis est généralement froid en juillet , et il gèle habituellement en novembre .
california est généralement calme en mars , et il est généralement chaud en juin .
les états-unis est parfois légère en juin , et il fait froid en septembre .
votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme .
son fruit préféré est l'orange , mais mon préféré est le raisin .
paris est relaxant en décembre , mais il est généralement froid en juillet .
new jersey est occupé au printemps , et il est jamais chaude en mars .
notre fruit est moins aimé le citron , mais mon moins aimé est le raisin .
les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre .
###Markdown
Implement Preprocessing Function: Text to Word Ids. As you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of `target_text`. This will help the neural network predict when the sentence should end. You can get the `<EOS>` word id by doing:
```python
target_vocab_to_int['<EOS>']
```
You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`.
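As a quick illustration (a sketch with made-up ids — the real ids come from the vocabulary dictionaries built during preprocessing), one target line would be converted like this:
```python
# hypothetical vocabulary fragment: {'il': 14, 'dort': 27, '.': 5, '<EOS>': 3}
target_line = 'il dort .'
target_id_line = [14, 27, 5, 3]   # the word ids followed by the <EOS> id
```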
###Code
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
"""
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
"""
# TODO: Implement Function
source_id_text = [[source_vocab_to_int.get(word, source_vocab_to_int['<UNK>']) for word in line.split(' ')] for line in source_text.split('\n')]
target_id_text = [[target_vocab_to_int.get(word, target_vocab_to_int['<UNK>']) for word in line.split(' ')] + [target_vocab_to_int['<EOS>']] for line in target_text.split('\n')]
return source_id_text, target_id_text
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)
###Output
Tests Passed
###Markdown
Preprocess all the data and save it. Running the code cell below will preprocess all the data and save it to file.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
###Output
_____no_output_____
###Markdown
Check Point. This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper
import problem_unittests as tests
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Check the Version of TensorFlow and Access to GPU. This will check to make sure you have the correct version of TensorFlow and access to a GPU.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
###Output
TensorFlow Version: 1.1.0
###Markdown
Build the Neural Network. You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:- `model_inputs`- `process_decoder_input`- `encoding_layer`- `decoding_layer_train`- `decoding_layer_infer`- `decoding_layer`- `seq2seq_model` Input. Implement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.- Targets placeholder with rank 2.- Learning rate placeholder with rank 0.- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.- Target sequence length placeholder named "target_sequence_length" with rank 1.- Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.- Source sequence length placeholder named "source_sequence_length" with rank 1. Return the placeholders in the following tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length).
###Code
def model_inputs():
"""
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
"""
# TODO: Implement Function
input_ = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
lr = tf.placeholder(tf.float32, name='learning_rate')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
target_sequence_length = tf.placeholder(tf.int32, [None], name='target_sequence_length')
max_target_len = tf.reduce_max(target_sequence_length, name='max_target_len')
source_sequence_length = tf.placeholder(tf.int32, [None], name='source_sequence_length')
return input_, targets, lr, keep_prob, target_sequence_length, max_target_len, source_sequence_length
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
###Output
Tests Passed
###Markdown
Process Decoder Input. Implement `process_decoder_input` by removing the last word id from each batch in `target_data` and concatenating the GO ID to the beginning of each batch.
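As a quick illustration (hypothetical ids, assuming `<GO>` maps to 1), every target sequence in the batch loses its last id and gains the GO id at the front:
```python
# a toy batch of target id sequences
target_batch  = [[14, 27, 5, 3],
                 [18, 22, 5, 3]]
# what process_decoder_input should produce
decoder_input = [[1, 14, 27, 5],
                 [1, 18, 22, 5]]
```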
###Code
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
"""
Preprocess target data for encoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
"""
# TODO: Implement Function
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return dec_input
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_encoding_input(process_decoder_input)
###Output
Tests Passed
###Markdown
Encoding. Implement `encoding_layer()` to create an Encoder RNN layer: * Embed the encoder input using [`tf.contrib.layers.embed_sequence`](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/embed_sequence) * Construct a [stacked](https://github.com/tensorflow/tensorflow/blob/6947f65a374ebf29e74bb71e36fd82760056d82c/tensorflow/docs_src/tutorials/recurrent.md#stacking-multiple-lstms) [`tf.contrib.rnn.LSTMCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell) wrapped in a [`tf.contrib.rnn.DropoutWrapper`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper) * Pass cell and embedded input to [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn)
###Code
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
"""
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
"""
# TODO: Implement Function
# Encoder embedding
enc_embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)
# RNN cell
def make_cell(rnn_size):
enc_cell = tf.contrib.rnn.LSTMCell(rnn_size,
initializer=tf.random_uniform_initializer(-0.1, 0.1))
drop = tf.contrib.rnn.DropoutWrapper(enc_cell, output_keep_prob=keep_prob)
return drop
enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])
rnn_output, rnn_state = tf.nn.dynamic_rnn(enc_cell,
enc_embed_input,
sequence_length=source_sequence_length,
dtype=tf.float32)
return rnn_output, rnn_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)
###Output
Tests Passed
###Markdown
Decoding - Training. Create a training decoding layer:* Create a [`tf.contrib.seq2seq.TrainingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/TrainingHelper) * Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode)
###Code
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
"""
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
"""
# TODO: Implement Function
# Helper for the training process. Used by BasicDecoder to read inputs
training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,
sequence_length=target_sequence_length,
time_major=False)
# Basic decoder
training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,
training_helper,
encoder_state,
output_layer)
# Perform dynamic decoding using the decoder
training_decoder_output = tf.contrib.seq2seq.dynamic_decode(training_decoder,
impute_finished=True,
maximum_iterations=max_summary_length)[0]
return training_decoder_output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)
###Output
Tests Passed
###Markdown
Decoding - Inference. Create inference decoder:* Create a [`tf.contrib.seq2seq.GreedyEmbeddingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/GreedyEmbeddingHelper)* Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode)
###Code
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
"""
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
"""
# TODO: Implement Function
start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size])
# Helper for the inference process
inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,
start_tokens,
end_of_sequence_id)
# Basic encoder
inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,
inference_helper,
encoder_state,
output_layer)
# Perform dynamic decoding using decoder
inference_decoder_output = tf.contrib.seq2seq.dynamic_decode(inference_decoder,
impute_finished=True,
maximum_iterations=max_target_sequence_length)[0]
return inference_decoder_output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)
###Output
Tests Passed
###Markdown
Build the Decoding Layer. Implement `decoding_layer()` to create a Decoder RNN layer.* Embed the target sequences* Construct the decoder LSTM cell (just like you constructed the encoder cell above)* Create an output layer to map the outputs of the decoder to the elements of our vocabulary* Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob)` function to get the training logits.* Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob)` function to get the inference logits. Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference.
###Code
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
"""
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
"""
# TODO: Implement Function
# Decoder embedding
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
# Construct the decoder cell
def make_cell(rnn_size):
dec_cell = tf.contrib.rnn.LSTMCell(rnn_size,
initializer=tf.random_uniform_initializer(-0.1, 0.1))
return dec_cell
dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])
# Dense layer to translate the decoder's output at each time
# step into a choice from the target vocabulary
output_layer = Dense(target_vocab_size,
kernel_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1))
start_of_sequence_id = target_vocab_to_int['<GO>']
end_of_sequence_id = target_vocab_to_int['<EOS>']
# Set up a training decoder and inference decoder
with tf.variable_scope('decoding'):
training_decoder_output = decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_target_sequence_length,
output_layer, keep_prob)
with tf.variable_scope('decoding', reuse=True):
inference_decoder_output = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
target_vocab_size, output_layer, batch_size, keep_prob)
return training_decoder_output, inference_decoder_output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)
###Output
Tests Passed
###Markdown
Build the Neural Network. Apply the functions you implemented above to:- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size)`.- Process target data using your `process_decoder_input(target_data, target_vocab_to_int, batch_size)` function.- Decode the encoded input using your `decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size)` function.
###Code
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
"""
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Decoder embedding size
:param dec_embedding_size: Encoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
"""
# TODO: Implement Function
# Pass the input data through the encoder. We'll ignore the encoder output, but use the state
_, enc_state = encoding_layer(input_data,
rnn_size,
num_layers,
keep_prob,
source_sequence_length,
source_vocab_size,
enc_embedding_size)
# Prepare the target sequences we'll feed to the decoder in training mode
dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
# Pass encoder state and decoder inputs to the decoders
training_decoder_output, inference_decoder_output = decoding_layer(dec_input,
enc_state,
target_sequence_length,
max_target_sentence_length,
rnn_size,
num_layers,
target_vocab_to_int,
target_vocab_size,
batch_size,
keep_prob,
dec_embedding_size)
return training_decoder_output, inference_decoder_output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)
###Output
Tests Passed
###Markdown
Neural Network Training Hyperparameters. Tune the following parameters:- Set `epochs` to the number of epochs.- Set `batch_size` to the batch size.- Set `rnn_size` to the size of the RNNs.- Set `num_layers` to the number of layers.- Set `encoding_embedding_size` to the size of the embedding for the encoder.- Set `decoding_embedding_size` to the size of the embedding for the decoder.- Set `learning_rate` to the learning rate.- Set `keep_probability` to the Dropout keep probability.- Set `display_step` to state how many steps between each debug output statement
###Code
# Number of Epochs
epochs = 10
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 256
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 128
decoding_embedding_size = 128
# Learning Rate
learning_rate = 0.0005
# Dropout Keep Probability
keep_probability = 0.8
display_step = 80
###Output
_____no_output_____
###Markdown
Build the Graph. Build the graph using the neural network you implemented.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
###Output
_____no_output_____
###Markdown
Batch and pad the source and target sequences
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def pad_sentence_batch(sentence_batch, pad_int):
"""Pad sentences with <PAD> so that each sentence of a batch has the same length"""
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
"""Batch targets, sources, and the lengths of their sentences together"""
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
###Output
_____no_output_____
###Markdown
Train. Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def get_accuracy(target, logits):
"""
Calculate accuracy
"""
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
###Output
Epoch 0 Batch 80/1077 - Train Accuracy: 0.4453, Validation Accuracy: 0.4975, Loss: 2.2717
Epoch 0 Batch 160/1077 - Train Accuracy: 0.4664, Validation Accuracy: 0.4975, Loss: 1.6403
Epoch 0 Batch 240/1077 - Train Accuracy: 0.5008, Validation Accuracy: 0.5344, Loss: 1.3581
Epoch 0 Batch 320/1077 - Train Accuracy: 0.5125, Validation Accuracy: 0.5369, Loss: 1.1665
Epoch 0 Batch 400/1077 - Train Accuracy: 0.5484, Validation Accuracy: 0.5554, Loss: 0.9924
Epoch 0 Batch 480/1077 - Train Accuracy: 0.5518, Validation Accuracy: 0.5799, Loss: 0.8861
Epoch 0 Batch 560/1077 - Train Accuracy: 0.5719, Validation Accuracy: 0.5987, Loss: 0.7657
Epoch 0 Batch 640/1077 - Train Accuracy: 0.5967, Validation Accuracy: 0.6293, Loss: 0.6924
Epoch 0 Batch 720/1077 - Train Accuracy: 0.5683, Validation Accuracy: 0.6214, Loss: 0.7405
Epoch 0 Batch 800/1077 - Train Accuracy: 0.5988, Validation Accuracy: 0.6424, Loss: 0.6055
Epoch 0 Batch 880/1077 - Train Accuracy: 0.6754, Validation Accuracy: 0.6562, Loss: 0.5570
Epoch 0 Batch 960/1077 - Train Accuracy: 0.6570, Validation Accuracy: 0.6584, Loss: 0.5047
Epoch 0 Batch 1040/1077 - Train Accuracy: 0.6517, Validation Accuracy: 0.6843, Loss: 0.5144
Epoch 1 Batch 80/1077 - Train Accuracy: 0.7457, Validation Accuracy: 0.7269, Loss: 0.4057
Epoch 1 Batch 160/1077 - Train Accuracy: 0.7562, Validation Accuracy: 0.7489, Loss: 0.3546
Epoch 1 Batch 240/1077 - Train Accuracy: 0.7879, Validation Accuracy: 0.7727, Loss: 0.3146
Epoch 1 Batch 320/1077 - Train Accuracy: 0.7902, Validation Accuracy: 0.7812, Loss: 0.2972
Epoch 1 Batch 400/1077 - Train Accuracy: 0.8148, Validation Accuracy: 0.8004, Loss: 0.2634
Epoch 1 Batch 480/1077 - Train Accuracy: 0.8664, Validation Accuracy: 0.8175, Loss: 0.2119
Epoch 1 Batch 560/1077 - Train Accuracy: 0.8414, Validation Accuracy: 0.8430, Loss: 0.1790
Epoch 1 Batch 640/1077 - Train Accuracy: 0.8690, Validation Accuracy: 0.8395, Loss: 0.1577
Epoch 1 Batch 720/1077 - Train Accuracy: 0.8734, Validation Accuracy: 0.8249, Loss: 0.1776
Epoch 1 Batch 800/1077 - Train Accuracy: 0.8672, Validation Accuracy: 0.8672, Loss: 0.1330
Epoch 1 Batch 880/1077 - Train Accuracy: 0.9238, Validation Accuracy: 0.8654, Loss: 0.1190
Epoch 1 Batch 960/1077 - Train Accuracy: 0.8943, Validation Accuracy: 0.8672, Loss: 0.1017
Epoch 1 Batch 1040/1077 - Train Accuracy: 0.9120, Validation Accuracy: 0.8768, Loss: 0.1065
Epoch 2 Batch 80/1077 - Train Accuracy: 0.9234, Validation Accuracy: 0.8910, Loss: 0.0742
Epoch 2 Batch 160/1077 - Train Accuracy: 0.9160, Validation Accuracy: 0.8967, Loss: 0.0723
Epoch 2 Batch 240/1077 - Train Accuracy: 0.9344, Validation Accuracy: 0.8917, Loss: 0.0729
Epoch 2 Batch 320/1077 - Train Accuracy: 0.9273, Validation Accuracy: 0.8881, Loss: 0.0796
Epoch 2 Batch 400/1077 - Train Accuracy: 0.9105, Validation Accuracy: 0.9180, Loss: 0.0775
Epoch 2 Batch 480/1077 - Train Accuracy: 0.9071, Validation Accuracy: 0.8967, Loss: 0.0596
Epoch 2 Batch 560/1077 - Train Accuracy: 0.9398, Validation Accuracy: 0.9070, Loss: 0.0517
Epoch 2 Batch 640/1077 - Train Accuracy: 0.9442, Validation Accuracy: 0.9013, Loss: 0.0511
Epoch 2 Batch 720/1077 - Train Accuracy: 0.9112, Validation Accuracy: 0.9155, Loss: 0.0642
Epoch 2 Batch 800/1077 - Train Accuracy: 0.9328, Validation Accuracy: 0.9105, Loss: 0.0507
Epoch 2 Batch 880/1077 - Train Accuracy: 0.9348, Validation Accuracy: 0.9290, Loss: 0.0592
Epoch 2 Batch 960/1077 - Train Accuracy: 0.9315, Validation Accuracy: 0.9251, Loss: 0.0467
Epoch 2 Batch 1040/1077 - Train Accuracy: 0.9539, Validation Accuracy: 0.9187, Loss: 0.0483
Epoch 3 Batch 80/1077 - Train Accuracy: 0.9422, Validation Accuracy: 0.9336, Loss: 0.0363
Epoch 3 Batch 160/1077 - Train Accuracy: 0.9484, Validation Accuracy: 0.9208, Loss: 0.0370
Epoch 3 Batch 240/1077 - Train Accuracy: 0.9551, Validation Accuracy: 0.9219, Loss: 0.0393
Epoch 3 Batch 320/1077 - Train Accuracy: 0.9383, Validation Accuracy: 0.9201, Loss: 0.0485
Epoch 3 Batch 400/1077 - Train Accuracy: 0.9406, Validation Accuracy: 0.9329, Loss: 0.0490
Epoch 3 Batch 480/1077 - Train Accuracy: 0.9511, Validation Accuracy: 0.9233, Loss: 0.0347
Epoch 3 Batch 560/1077 - Train Accuracy: 0.9457, Validation Accuracy: 0.9229, Loss: 0.0356
Epoch 3 Batch 640/1077 - Train Accuracy: 0.9531, Validation Accuracy: 0.9315, Loss: 0.0321
Epoch 3 Batch 720/1077 - Train Accuracy: 0.9564, Validation Accuracy: 0.9304, Loss: 0.0362
Epoch 3 Batch 800/1077 - Train Accuracy: 0.9605, Validation Accuracy: 0.9371, Loss: 0.0307
Epoch 3 Batch 880/1077 - Train Accuracy: 0.9445, Validation Accuracy: 0.9283, Loss: 0.0414
Epoch 3 Batch 960/1077 - Train Accuracy: 0.9520, Validation Accuracy: 0.9400, Loss: 0.0311
Epoch 3 Batch 1040/1077 - Train Accuracy: 0.9757, Validation Accuracy: 0.9553, Loss: 0.0304
Epoch 4 Batch 80/1077 - Train Accuracy: 0.9578, Validation Accuracy: 0.9439, Loss: 0.0259
Epoch 4 Batch 160/1077 - Train Accuracy: 0.9598, Validation Accuracy: 0.9450, Loss: 0.0246
Epoch 4 Batch 240/1077 - Train Accuracy: 0.9711, Validation Accuracy: 0.9368, Loss: 0.0250
Epoch 4 Batch 320/1077 - Train Accuracy: 0.9566, Validation Accuracy: 0.9403, Loss: 0.0367
Epoch 4 Batch 400/1077 - Train Accuracy: 0.9715, Validation Accuracy: 0.9421, Loss: 0.0345
Epoch 4 Batch 480/1077 - Train Accuracy: 0.9622, Validation Accuracy: 0.9396, Loss: 0.0253
Epoch 4 Batch 560/1077 - Train Accuracy: 0.9473, Validation Accuracy: 0.9386, Loss: 0.0263
Epoch 4 Batch 640/1077 - Train Accuracy: 0.9728, Validation Accuracy: 0.9364, Loss: 0.0233
Epoch 4 Batch 720/1077 - Train Accuracy: 0.9634, Validation Accuracy: 0.9510, Loss: 0.0261
Epoch 4 Batch 800/1077 - Train Accuracy: 0.9598, Validation Accuracy: 0.9577, Loss: 0.0226
Epoch 4 Batch 880/1077 - Train Accuracy: 0.9668, Validation Accuracy: 0.9442, Loss: 0.0318
Epoch 4 Batch 960/1077 - Train Accuracy: 0.9691, Validation Accuracy: 0.9457, Loss: 0.0218
Epoch 4 Batch 1040/1077 - Train Accuracy: 0.9692, Validation Accuracy: 0.9489, Loss: 0.0209
Epoch 5 Batch 80/1077 - Train Accuracy: 0.9609, Validation Accuracy: 0.9506, Loss: 0.0192
Epoch 5 Batch 160/1077 - Train Accuracy: 0.9691, Validation Accuracy: 0.9478, Loss: 0.0186
Epoch 5 Batch 240/1077 - Train Accuracy: 0.9781, Validation Accuracy: 0.9482, Loss: 0.0205
Epoch 5 Batch 320/1077 - Train Accuracy: 0.9688, Validation Accuracy: 0.9503, Loss: 0.0254
Epoch 5 Batch 400/1077 - Train Accuracy: 0.9734, Validation Accuracy: 0.9439, Loss: 0.0251
Epoch 5 Batch 480/1077 - Train Accuracy: 0.9729, Validation Accuracy: 0.9513, Loss: 0.0174
Epoch 5 Batch 560/1077 - Train Accuracy: 0.9477, Validation Accuracy: 0.9442, Loss: 0.0206
Epoch 5 Batch 640/1077 - Train Accuracy: 0.9758, Validation Accuracy: 0.9489, Loss: 0.0169
Epoch 5 Batch 720/1077 - Train Accuracy: 0.9852, Validation Accuracy: 0.9560, Loss: 0.0218
Epoch 5 Batch 800/1077 - Train Accuracy: 0.9680, Validation Accuracy: 0.9620, Loss: 0.0167
Epoch 5 Batch 880/1077 - Train Accuracy: 0.9719, Validation Accuracy: 0.9592, Loss: 0.0257
Epoch 5 Batch 960/1077 - Train Accuracy: 0.9691, Validation Accuracy: 0.9545, Loss: 0.0166
Epoch 5 Batch 1040/1077 - Train Accuracy: 0.9745, Validation Accuracy: 0.9393, Loss: 0.0209
Epoch 6 Batch 80/1077 - Train Accuracy: 0.9715, Validation Accuracy: 0.9666, Loss: 0.0159
Epoch 6 Batch 160/1077 - Train Accuracy: 0.9738, Validation Accuracy: 0.9592, Loss: 0.0154
Epoch 6 Batch 240/1077 - Train Accuracy: 0.9898, Validation Accuracy: 0.9545, Loss: 0.0138
Epoch 6 Batch 320/1077 - Train Accuracy: 0.9762, Validation Accuracy: 0.9631, Loss: 0.0227
Epoch 6 Batch 400/1077 - Train Accuracy: 0.9777, Validation Accuracy: 0.9460, Loss: 0.0179
Epoch 6 Batch 480/1077 - Train Accuracy: 0.9782, Validation Accuracy: 0.9549, Loss: 0.0134
Epoch 6 Batch 560/1077 - Train Accuracy: 0.9555, Validation Accuracy: 0.9659, Loss: 0.0143
Epoch 6 Batch 640/1077 - Train Accuracy: 0.9762, Validation Accuracy: 0.9624, Loss: 0.0138
Epoch 6 Batch 720/1077 - Train Accuracy: 0.9778, Validation Accuracy: 0.9606, Loss: 0.0161
Epoch 6 Batch 800/1077 - Train Accuracy: 0.9793, Validation Accuracy: 0.9517, Loss: 0.0129
Epoch 6 Batch 880/1077 - Train Accuracy: 0.9660, Validation Accuracy: 0.9567, Loss: 0.0227
Epoch 6 Batch 960/1077 - Train Accuracy: 0.9747, Validation Accuracy: 0.9556, Loss: 0.0125
Epoch 6 Batch 1040/1077 - Train Accuracy: 0.9782, Validation Accuracy: 0.9474, Loss: 0.0158
Epoch 7 Batch 80/1077 - Train Accuracy: 0.9867, Validation Accuracy: 0.9691, Loss: 0.0122
Epoch 7 Batch 160/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9567, Loss: 0.0119
Epoch 7 Batch 240/1077 - Train Accuracy: 0.9930, Validation Accuracy: 0.9585, Loss: 0.0101
Epoch 7 Batch 320/1077 - Train Accuracy: 0.9820, Validation Accuracy: 0.9588, Loss: 0.0185
Epoch 7 Batch 400/1077 - Train Accuracy: 0.9852, Validation Accuracy: 0.9464, Loss: 0.0140
Epoch 7 Batch 480/1077 - Train Accuracy: 0.9708, Validation Accuracy: 0.9450, Loss: 0.0119
Epoch 7 Batch 560/1077 - Train Accuracy: 0.9711, Validation Accuracy: 0.9645, Loss: 0.0121
Epoch 7 Batch 640/1077 - Train Accuracy: 0.9851, Validation Accuracy: 0.9684, Loss: 0.0122
Epoch 7 Batch 720/1077 - Train Accuracy: 0.9848, Validation Accuracy: 0.9712, Loss: 0.0124
Epoch 7 Batch 800/1077 - Train Accuracy: 0.9789, Validation Accuracy: 0.9666, Loss: 0.0116
Epoch 7 Batch 880/1077 - Train Accuracy: 0.9832, Validation Accuracy: 0.9648, Loss: 0.0171
Epoch 7 Batch 960/1077 - Train Accuracy: 0.9795, Validation Accuracy: 0.9762, Loss: 0.0103
Epoch 7 Batch 1040/1077 - Train Accuracy: 0.9745, Validation Accuracy: 0.9773, Loss: 0.0134
Epoch 8 Batch 80/1077 - Train Accuracy: 0.9746, Validation Accuracy: 0.9698, Loss: 0.0109
Epoch 8 Batch 160/1077 - Train Accuracy: 0.9902, Validation Accuracy: 0.9698, Loss: 0.0110
Epoch 8 Batch 240/1077 - Train Accuracy: 0.9945, Validation Accuracy: 0.9815, Loss: 0.0078
Epoch 8 Batch 320/1077 - Train Accuracy: 0.9742, Validation Accuracy: 0.9723, Loss: 0.0137
Epoch 8 Batch 400/1077 - Train Accuracy: 0.9883, Validation Accuracy: 0.9418, Loss: 0.0105
Epoch 8 Batch 480/1077 - Train Accuracy: 0.9848, Validation Accuracy: 0.9577, Loss: 0.0090
Epoch 8 Batch 560/1077 - Train Accuracy: 0.9703, Validation Accuracy: 0.9673, Loss: 0.0109
Epoch 8 Batch 640/1077 - Train Accuracy: 0.9926, Validation Accuracy: 0.9709, Loss: 0.0108
Epoch 8 Batch 720/1077 - Train Accuracy: 0.9897, Validation Accuracy: 0.9616, Loss: 0.0104
Epoch 8 Batch 800/1077 - Train Accuracy: 0.9898, Validation Accuracy: 0.9670, Loss: 0.0092
Epoch 8 Batch 880/1077 - Train Accuracy: 0.9938, Validation Accuracy: 0.9688, Loss: 0.0141
Epoch 8 Batch 960/1077 - Train Accuracy: 0.9896, Validation Accuracy: 0.9680, Loss: 0.0087
Epoch 8 Batch 1040/1077 - Train Accuracy: 0.9856, Validation Accuracy: 0.9744, Loss: 0.0114
Epoch 9 Batch 80/1077 - Train Accuracy: 0.9797, Validation Accuracy: 0.9730, Loss: 0.0072
Epoch 9 Batch 160/1077 - Train Accuracy: 0.9844, Validation Accuracy: 0.9734, Loss: 0.0083
Epoch 9 Batch 240/1077 - Train Accuracy: 0.9910, Validation Accuracy: 0.9815, Loss: 0.0081
Epoch 9 Batch 320/1077 - Train Accuracy: 0.9852, Validation Accuracy: 0.9698, Loss: 0.0110
Epoch 9 Batch 400/1077 - Train Accuracy: 0.9887, Validation Accuracy: 0.9521, Loss: 0.0086
Epoch 9 Batch 480/1077 - Train Accuracy: 0.9922, Validation Accuracy: 0.9702, Loss: 0.0074
Epoch 9 Batch 560/1077 - Train Accuracy: 0.9809, Validation Accuracy: 0.9673, Loss: 0.0093
Epoch 9 Batch 640/1077 - Train Accuracy: 0.9974, Validation Accuracy: 0.9780, Loss: 0.0095
Epoch 9 Batch 720/1077 - Train Accuracy: 0.9848, Validation Accuracy: 0.9844, Loss: 0.0086
Epoch 9 Batch 800/1077 - Train Accuracy: 0.9992, Validation Accuracy: 0.9641, Loss: 0.0067
Epoch 9 Batch 880/1077 - Train Accuracy: 0.9875, Validation Accuracy: 0.9719, Loss: 0.0097
Epoch 9 Batch 960/1077 - Train Accuracy: 0.9862, Validation Accuracy: 0.9723, Loss: 0.0105
Epoch 9 Batch 1040/1077 - Train Accuracy: 0.9864, Validation Accuracy: 0.9716, Loss: 0.0112
Model Trained and Saved
###Markdown
Save Parameters. Save the `batch_size` and `save_path` parameters for inference.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)
###Output
_____no_output_____
###Markdown
Checkpoint
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
###Output
_____no_output_____
###Markdown
Sentence to Sequence. To feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.- Convert the sentence to lowercase- Convert words into ids using `vocab_to_int` - Convert words not in the vocabulary to the `<UNK>` word id.
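For example, using the word ids that appear in the translation demo at the end of this notebook:
```python
sentence_to_seq('he saw a old yellow truck .', source_vocab_to_int)
# -> [91, 223, 178, 188, 77, 158, 167]
# a word missing from the vocabulary would map to source_vocab_to_int['<UNK>'] instead
```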
###Code
def sentence_to_seq(sentence, vocab_to_int):
"""
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
"""
# TODO: Implement Function
return [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)
###Output
Tests Passed
###Markdown
Translate. This will translate `translate_sentence` from English to French.
###Code
translate_sentence = 'he saw a old yellow truck .'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
###Output
INFO:tensorflow:Restoring parameters from checkpoints/dev
Input
Word Ids: [91, 223, 178, 188, 77, 158, 167]
English Words: ['he', 'saw', 'a', 'old', 'yellow', 'truck', '.']
Prediction
Word Ids: [106, 259, 87, 228, 155, 245, 253, 1]
French Words: il a vu un camion jaune . <EOS>
|
MathematicsAndPython/materials/notebooks/2-6.MatrixOperations.ipynb | ###Markdown
`NumPy`: matrices and operations on them (Python 3 version)--- The only third-party library we will need in this notebook is `NumPy`. For convenience, we import it under a shorter name:
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
1. Creating matrices Here are several ways to create matrices in `NumPy`. The simplest is the function __`numpy.array(list, dtype=None, ...)`__. Its first argument must be an iterable whose elements are themselves iterables of equal length containing data of the same type. The second argument is optional and specifies the data type of the matrix. It may be omitted, in which case the data type is inferred from the elements of the first argument; if it is given, a type conversion is attempted. For example, a matrix can be created from a list of lists of integers as follows:
###Code
a = np.array([[1, 2, 3], [2, 5, 6], [6, 7, 4]])
print("Матрица:\n", a)
###Output
Матрица:
[[1 2 3]
[2 5 6]
[6 7 4]]
###Markdown
The second way is to use the built-in functions __`numpy.eye(N, M=None, ...)`__, __`numpy.zeros(shape, ...)`__, __`numpy.ones(shape, ...)`__. The first creates an identity matrix of size $N \times M$; if $M$ is not specified, then $M = N$. The second and third create matrices consisting entirely of zeros or ones, respectively. Their first argument is the array shape: a tuple of integers. In the two-dimensional case it is a pair of numbers, the number of rows and the number of columns of the matrix.__Examples:__
###Code
b = np.eye(5)
print("Единичная матрица:\n", b)
c = np.ones((7, 5))
print("Матрица, состоящая из одних единиц:\n", c)
###Output
Матрица, состоящая из одних единиц:
[[1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1.]]
###Markdown
__Note: the array shape is passed not as two separate arguments but as a single one, a tuple!__ A call like __`np.ones(7, 5)`__ will not create the desired array, because `7` alone is passed as the `shape` parameter rather than the tuple `(7, 5)`. Finally, the third way is the function __`numpy.arange([start, ]stop, [step, ], ...)`__, which creates a one-dimensional array of consecutive numbers from the interval __`[start, stop)`__ with the given step __`step`__, combined with the _method_ __`array.reshape(shape)`__. As in the previous example, the __`shape`__ parameter specifies the matrix dimensions (a tuple of numbers). How the method works is clear from the following example:
###Code
v = np.arange(0, 24, 2)
print("Вектор-столбец:\n", v)
d = v.reshape((3, 4))
print("Матрица:\n", d)
###Output
Матрица:
[[ 0 2 4 6]
[ 8 10 12 14]
[16 18 20 22]]
###Markdown
For more details on creating arrays in `NumPy`, see the [documentation](http://docs.scipy.org/doc/numpy-1.10.1/user/basics.creation.html). 2. Indexing There are several ways to access the elements of a matrix. Let us look at the simplest of them. As a reminder, here is what the matrix __d__ looks like:
###Code
print("Матрица:\n", d)
###Output
Матрица:
[[ 0 2 4 6]
[ 8 10 12 14]
[16 18 20 22]]
###Markdown
The element at the intersection of row __`i`__ and column __`j`__ can be obtained with the expression __`array[i, j]`__. __Note:__ rows and columns are numbered starting from zero!
###Code
print("Второй элемент третьей строки матрицы:", d[2, 1])
###Output
Второй элемент третьей строки матрицы: 18
###Markdown
Whole rows or columns can be extracted from a matrix with the expressions __`array[i, :]`__ and __`array[:, j]`__, respectively:
###Code
print("Вторая строка матрицы d:\n", d[1, :])
print("Четвертый столбец матрицы d:\n", d[:, 3])
###Output
Вторая строка матрицы d:
[ 8 10 12 14]
Четвертый столбец матрицы d:
[ 6 14 22]
###Markdown
Another way to access elements is the expression __`array[list1, list2]`__, where __`list1`__ and __`list2`__ are lists of integers. With this kind of addressing both lists are traversed simultaneously and the matrix elements with the corresponding coordinates are returned. The following example makes the mechanism of such indexing clearer:
###Code
print("Элементы матрицы d с координатами (1, 2) и (0, 3):\n", d[[1, 0], [2, 3]])
###Output
Элементы матрицы d с координатами (1, 2) и (0, 3):
[12 6]
###Markdown
For more details on the various ways of indexing arrays, see the [documentation](http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html). 3. Vectors, row vectors and column vectors The following two ways of defining an array may look identical:
###Code
a = np.array([1, 2, 3])
b = np.array([[1], [2], [3]])
###Output
_____no_output_____
###Markdown
However, they actually define a one-dimensional array (that is, a _vector_) and a two-dimensional array:
###Code
print("Вектор:\n", a)
print("Его размерность:\n", a.shape)
print("Двумерный массив:\n", b)
print("Его размерность:\n", b.shape)
###Output
Вектор:
[1 2 3]
Его размерность:
(3,)
Двумерный массив:
[[1]
[2]
[3]]
Его размерность:
(3, 1)
###Markdown
__Note:__ a _vector_ (a one-dimensional array) and a _column vector_ or _row vector_ (two-dimensional arrays) are different objects in `NumPy`, even though mathematically they describe the same thing. For a one-dimensional array the __`shape`__ tuple consists of a single number and has the form __`(n,)`__, where __`n`__ is the length of the vector. For two-dimensional vectors, __`shape`__ contains one more dimension, equal to one. In most cases it does not matter which representation is used, because type coercion often takes care of the difference, but some operations do not work for one-dimensional arrays, for example transposition (discussed below):
###Code
a = a.T
b = b.T
print("Вектор не изменился:\n", a)
print("Его размерность также не изменилась:\n", a.shape)
print("Транспонированный двумерный массив:\n", b)
print("Его размерность изменилась:\n", b.shape)
###Output
Вектор не изменился:
[1 2 3]
Его размерность также не изменилась:
(3,)
Транспонированный двумерный массив:
[[1 2 3]]
Его размерность изменилась:
(1, 3)
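###Markdown
A small practical note added here (a sketch, not part of the original notebook; the name `v1` is chosen only for this illustration): if you do need a column vector rather than a one-dimensional array, you can add the missing axis explicitly with `reshape` or `np.newaxis`:
###Code
v1 = np.array([1, 2, 3])
print(v1.reshape((3, 1)).shape)   # (3, 1): an explicit column vector
print(v1[:, np.newaxis].shape)    # (3, 1): the same result via np.newaxis
###Output
_____no_output_____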
###Markdown
4. Multiplying matrices and vectors __Theory reminder.__ The __multiplication__ operation is defined for two matrices such that the number of columns of the first equals the number of rows of the second. Let the matrices $A$ and $B$ be such that $A \in \mathbb{R}^{n \times k}$ and $B \in \mathbb{R}^{k \times m}$. The __product__ of $A$ and $B$ is the matrix $C$ such that $c_{ij} = \sum_{r=1}^{k} a_{ir}b_{rj}$, where $c_{ij}$ is the element of $C$ at the intersection of row $i$ and column $j$. In `NumPy` the matrix product is computed with the function __`numpy.dot(a, b, ...)`__ or with the _method_ __`array1.dot(array2)`__, where __`array1`__ and __`array2`__ are the matrices being multiplied.
###Code
a = np.array([[1, 0], [0, 1]])
b = np.array([[4, 1], [2, 2]])
r1 = np.dot(a, b)
r2 = a.dot(b)
print("Матрица A:\n", a)
print("Матрица B:\n", b)
print("Результат умножения функцией:\n", r1)
print("Результат умножения методом:\n", r2)
###Output
Матрица A:
[[1 0]
[0 1]]
Матрица B:
[[4 1]
[2 2]]
Результат умножения функцией:
[[4 1]
[2 2]]
Результат умножения методом:
[[4 1]
[2 2]]
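###Markdown
A brief usage note added here (not part of the original notebook): in Python 3.5+ with a recent `NumPy`, the `@` operator performs the same matrix multiplication as `dot`, so the one-line sketch below is equivalent to the two calls above:
###Code
print("A @ B:\n", a @ b)   # same result as np.dot(a, b) and a.dot(b)
###Output
_____no_output_____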
###Markdown
Matrices in `NumPy` can also be multiplied by vectors:
###Code
c = np.array([1, 2])
r3 = b.dot(c)
print("Матрица:\n", b)
print("Вектор:\n", c)
print("Результат умножения:\n", r3)
###Output
Матрица:
[[4 1]
[2 2]]
Вектор:
[1 2]
Результат умножения:
[6 6]
###Markdown
__Note:__ the __`*`__ operator performs element-wise multiplication of matrices, not matrix multiplication!
###Code
r = a * b
print("Матрица A:\n", a)
print("Матрица B:\n", b)
print("Результат покоординатного умножения через операцию *:\n", r)
###Output
Матрица A:
[[1 0]
[0 1]]
Матрица B:
[[4 1]
[2 2]]
Результат покоординатного умножения через операцию *:
[[4 0]
[0 2]]
###Markdown
For more details on matrix multiplication in `NumPy`, see the [documentation](http://docs.scipy.org/doc/numpy-1.10.0/reference/routines.linalg.html#matrix-and-vector-products). 5. Matrix transposition __Theory reminder.__ The __transpose__ $A^{T}$ is the matrix obtained from the original matrix $A$ by turning its rows into columns. Formally, the elements of $A^{T}$ are defined as $a^{T}_{ij} = a_{ji}$, where $a^{T}_{ij}$ is the element of $A^{T}$ at the intersection of row $i$ and column $j$. In `NumPy` the transpose is computed with the function __`numpy.transpose()`__ or with the _attribute_ __`array.T`__, where __`array`__ is the two-dimensional array in question.
###Code
a = np.array([[1, 2], [3, 4]])
b = np.transpose(a)
c = a.T
print("Матрица:\n", a)
print("Транспонирование функцией:\n", b)
print("Транспонирование методом:\n", c)
###Output
Матрица:
[[1 2]
[3 4]]
Транспонирование функцией:
[[1 3]
[2 4]]
Транспонирование методом:
[[1 3]
[2 4]]
###Markdown
See more about [numpy.transpose()](http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.transpose.html) and [array.T](http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.ndarray.T.html) in `NumPy`. The following sections make heavy use of the __`numpy.linalg`__ module, which implements a number of linear-algebra routines. More details on the functions described below, and on the many other functions of this module, can be found in its [documentation](http://docs.scipy.org/doc/numpy-1.10.0/reference/routines.linalg.html#linear-algebra-numpy-linalg). 6. Determinant of a matrix __Theory reminder.__ For square matrices there is the notion of a __determinant__. Let $A$ be a square matrix. The __determinant__ of the matrix $A \in \mathbb{R}^{n \times n}$ is the number $$\det A = \sum_{\alpha_{1}, \alpha_{2}, \dots, \alpha_{n}} (-1)^{N(\alpha_{1}, \alpha_{2}, \dots, \alpha_{n})} \cdot a_{\alpha_{1} 1} \cdots a_{\alpha_{n} n},$$ where $\alpha_{1}, \alpha_{2}, \dots, \alpha_{n}$ is a permutation of the numbers from $1$ to $n$, $N(\alpha_{1}, \alpha_{2}, \dots, \alpha_{n})$ is the number of inversions in the permutation, and the sum runs over all possible permutations of length $n$._Do not worry if this definition is not entirely clear; it will not be needed in this form later._For example, for a $2 \times 2$ matrix it gives:$$\det \left( \begin{array}{cc} a_{11} & a_{12} \\ a_{21} & a_{22} \end{array} \right) = a_{11} a_{22} - a_{12} a_{21}$$Computing the determinant directly from the definition takes on the order of $n!$ operations, so methods have been developed that compute it quickly and efficiently. In `NumPy` the determinant of a matrix is computed with the function __`numpy.linalg.det(a)`__, where __`a`__ is the original matrix.
###Code
a = np.array([[1, 2, 1], [1, 1, 4], [2, 3, 6]], dtype=np.float32)
det = np.linalg.det(a)
print("Матрица:\n", a)
print("Определитель:\n", det)
###Output
Матрица:
[[1. 2. 1.]
[1. 1. 4.]
[2. 3. 6.]]
Определитель:
-1.0
###Markdown
Let us look at one interesting property of the determinant. Suppose we have a parallelogram with corners at the points $(0, 0), (c,d), (a+c, b+d), (a, b)$ (the corners are listed in clockwise order). Then the area of this parallelogram equals the absolute value of the determinant of the matrix $\left( \begin{array}{cc} a & c \\ b & d \end{array} \right)$. In a similar way the volume of a parallelepiped can be expressed through the determinant of a $3 \times 3$ matrix. 7. Rank of a matrix __Theory reminder.__ The __rank__ of a matrix $A$ is the maximum number of linearly independent rows (columns) of that matrix. In `NumPy` the rank is computed with the function __`numpy.linalg.matrix_rank(M, tol=None)`__, where __`M`__ is the matrix and __`tol`__ is a parameter controlling the numerical tolerance of the computation. In simple cases it can be omitted, and the function will choose a suitable value itself.
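Before the rank example, here is a quick numerical check of the parallelogram-area property mentioned above (a small sketch added for illustration, not part of the original notebook), using a = 2, b = 0, c = 1, d = 3:
###Code
area_matrix = np.array([[2, 1], [0, 3]])   # columns are the sides (a, b) = (2, 0) and (c, d) = (1, 3)
print("Area of the parallelogram:", abs(np.linalg.det(area_matrix)))   # expected 6.0
###Output
_____no_output_____
###Markdown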
###Code
a = np.array([[1, 2, 3], [1, 1, 1], [2, 2, 2]])
r = np.linalg.matrix_rank(a)
print("Матрица:\n", a)
print("Ранг матрицы:", r)
###Output
Матрица:
[[1 2 3]
[1 1 1]
[2 2 2]]
Ранг матрицы: 2
###Markdown
Computing the rank of a matrix lets us check whether a system of vectors is linearly independent. Suppose we have several vectors. Put them into a matrix as its rows. Clearly, the vectors are linearly independent if and only if the rank of the resulting matrix equals the number of vectors. Here is an example:
###Code
a = np.array([1, 2, 3])
b = np.array([1, 1, 1])
c = np.array([2, 3, 5])
m = np.array([a, b, c])
print(np.linalg.matrix_rank(m) == m.shape[0])
###Output
True
###Markdown
8. Systems of linear equations __Theory reminder.__ A __system of linear algebraic equations__ is a system of the form $Ax = b$, where $A \in \mathbb{R}^{n \times m}, x \in \mathbb{R}^{m \times 1}, b \in \mathbb{R}^{n \times 1}$. For a square non-singular matrix $A$ the solution of the system is unique. In `NumPy` the solution of such a system can be found with the function __`numpy.linalg.solve(a, b)`__, whose first argument is the matrix $A$ and whose second argument is the column $b$.
###Code
a = np.array([[3, 1], [1, 2]])
b = np.array([9, 8])
x = np.linalg.solve(a, b)
print("Матрица A:\n", a)
print("Вектор b:\n", b)
print("Решение системы:\n", x)
###Output
Матрица A:
[[3 1]
[1 2]]
Вектор b:
[9 8]
Решение системы:
[2. 3.]
###Markdown
Let us make sure that the vector __x__ really is a solution of the system:
###Code
print(a.dot(x))
###Output
[9. 8.]
###Markdown
There are cases when the system has no solution, but we would still like to "solve" it in some sense. It is natural to look for the vector $x$ that minimizes the expression $\left\Vert Ax - b\right\Vert^{2}$, since this brings $Ax$ as close as possible to $b$. In `NumPy` such a pseudo-solution can be found with the function __`numpy.linalg.lstsq(a, b, ...)`__, whose first two arguments are the same as for __`numpy.linalg.solve()`__. Besides the solution, the function returns three more values, which we will not need here.
###Code
a = np.array([[0, 1], [1, 1], [2, 1], [3, 1]])
b = np.array([-1, 0.2, 0.9, 2.1])
x, res, r, s = np.linalg.lstsq(a, b)
print("Матрица A:\n", a)
print("Вектор b:\n", b)
print("Псевдорешение системы:\n", x)
###Output
Матрица A:
[[0 1]
[1 1]
[2 1]
[3 1]]
Вектор b:
[-1. 0.2 0.9 2.1]
Псевдорешение системы:
[ 1. -0.95]
###Markdown
9. Matrix inversion __Theory reminder.__ The notion of an __inverse__ matrix is defined for square non-singular matrices. Let $A$ be a square non-singular matrix. The matrix $A^{-1}$ is called the __inverse__ of $A$ if $$AA^{-1} = A^{-1}A = I,$$ where $I$ is the identity matrix. In `NumPy` inverse matrices are computed with the function __`numpy.linalg.inv(a)`__, where __`a`__ is the original matrix.
###Code
a = np.array([[1, 2, 1], [1, 1, 4], [2, 3, 6]], dtype=np.float32)
b = np.linalg.inv(a)
print("Матрица A:\n", a)
print("Обратная матрица к A:\n", b)
print("Произведение A на обратную должна быть единичной:\n", a.dot(b))
###Output
Матрица A:
[[1. 2. 1.]
[1. 1. 4.]
[2. 3. 6.]]
Обратная матрица к A:
[[ 6. 9. -7.]
[-2. -4. 3.]
[-1. -1. 1.]]
Произведение A на обратную должна быть единичной:
[[1. 0. 0.]
[0. 1. 0.]
[0. 0. 1.]]
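###Markdown
A short practical note added here (not part of the original notebook): in general the product of a matrix and its computed inverse is only approximately the identity because of floating-point rounding, so it is convenient to check it with `np.allclose`; for solving linear systems it is usually preferable to call `numpy.linalg.solve` directly rather than to form the inverse explicitly.
###Code
print(np.allclose(a.dot(b), np.eye(3)))   # True: A times its inverse equals the identity up to rounding
###Output
_____no_output_____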
###Markdown
10. Eigenvalues and eigenvectors of a matrix __Theory reminder.__ The notions of __eigenvector__ and __eigenvalue__ are defined for square matrices. Let $A$ be a square matrix with $A \in \mathbb{R}^{n \times n}$. An __eigenvector__ of $A$ is a nonzero vector $x \in \mathbb{R}^{n}$ such that $Ax = \lambda x$ for some $\lambda \in \mathbb{R}$; the number $\lambda$ is then called an __eigenvalue__ of $A$. Eigenvalues and eigenvectors play an important role in linear algebra and its practical applications. In `NumPy` they are computed with the function __`numpy.linalg.eig(a)`__, where __`a`__ is the original matrix. The function returns a one-dimensional array __`w`__ of eigenvalues and a two-dimensional array __`v`__ whose columns are the eigenvectors, so that the vector __`v[:, i]`__ corresponds to the eigenvalue __`w[i]`__.
###Code
a = np.array([[-1, -6], [2, 6]])
w, v = np.linalg.eig(a)
print("Матрица A:\n", a)
print("Собственные числа:\n", w)
print("Собственные векторы:\n", v)
###Output
Матрица A:
[[-1 -6]
[ 2 6]]
Собственные числа:
[2. 3.]
Собственные векторы:
[[-0.89442719 0.83205029]
[ 0.4472136 -0.5547002 ]]
###Markdown
__Note:__ a real matrix can have complex eigenvalues or eigenvectors. 11. Complex numbers in Python __Note: this material is supplementary, and studying it is not required for completing the tests.__ __Theory reminder.__ __Complex numbers__ are numbers of the form $x + iy$, where $x$ and $y$ are real numbers and $i$ is the imaginary unit (the quantity satisfying $i^{2} = -1$). The set of all complex numbers is denoted by $\mathbb{C}$ (for more on complex numbers see [Wikipedia](https://ru.wikipedia.org/wiki/%D0%9A%D0%BE%D0%BC%D0%BF%D0%BB%D0%B5%D0%BA%D1%81%D0%BD%D0%BE%D0%B5_%D1%87%D0%B8%D1%81%D0%BB%D0%BE)). In Python, complex numbers can be defined as follows (__j__ denotes the imaginary unit):
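One quick aside before the complex-number examples below (a sketch added for illustration, not part of the original notebook): the eigenvalues of the 90-degree rotation matrix are purely imaginary, which shows how complex values can arise from a real matrix.
###Code
rot = np.array([[0, -1], [1, 0]])          # rotation by 90 degrees
w_rot, v_rot = np.linalg.eig(rot)
print("Eigenvalues of the rotation matrix:\n", w_rot)   # [0.+1.j  0.-1.j]
###Output
_____no_output_____
###Markdown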
###Code
a = 3 + 2j
b = 1j
print("Комплексное число a:\n", a)
print("Комплексное число b:\n", b)
###Output
Комплексное число a:
(3+2j)
Комплексное число b:
1j
###Markdown
Basic arithmetic operations can be performed on complex numbers in Python in the same way as on real numbers:
###Code
c = a * a
d = a / (4 - 5j)
print("Комплексное число c:\n", c)
print("Комплексное число d:\n", d)
###Output
Комплексное число c:
(5+12j)
Комплексное число d:
(0.0487804878048781+0.5609756097560976j)
|
docs/contents/molmech/ForceFields.ipynb | ###Markdown
Force Fields
###Code
from molsysmt._private_tools.forcefields import forcefields, water_models, implicit_solvent_models
forcefields
water_models
implicit_solvent_models
from molsysmt.native.forcefields import get_forcefield_names
get_forcefield_names(['AMBER14'], water_model='TIP3P', engine='OpenMM')
###Output
_____no_output_____ |
examples/jury.ipynb | ###Markdown
Bayesian analysis of the Curtis Flowers trialsCopyright 2020 Allen B. DowneyLicense: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
###Code
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
import os
if not os.path.exists('utils.py'):
!wget https://github.com/AllenDowney/ThinkBayes2/raw/master/code/soln/utils.py
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from empiricaldist import Pmf
from utils import decorate, savefig
###Output
_____no_output_____
###Markdown
On September 5, 2020, prosecutors in Mississippi dropped charges against [Curtis Flowers](https://en.wikipedia.org/wiki/Curtis_Flowers), freeing him after 23 years of incarceration.Flowers had been tried six times for a 1996 multiple murder. Two trials ended in a mistrial due to a hung jury; four trials ended in convictions. According to [this NPR report](https://www.npr.org/2020/09/05/910061573/after-6-trials-prosecutors-drop-charges-against-curtis-flowers)> After each conviction, a higher court struck down the initial ruling. The latest ruling invalidating Flowers' conviction, and death sentence, came from the U.S. Supreme Court in June of last year. The justices noted the Mississippi Supreme Court had found that in three prior convictions the prosecution had misrepresented evidence and deliberately eliminated Black jurors.Since the racial composition of the juries was the noted reason the last conviction was invalidated, the purpose of this article is to explore the relationship between the composition of the juries and the outcome of the trials.Flowers' trials were the subject of the [In the Dark](https://www.apmreports.org/episode/2018/05/01/in-the-dark-s2e1) podcast, which reported the racial composition of the juries and the outcomes:```Trial Jury Outcome 1 All white Guilty 2 11 white, 1 black Guilty 3 11 white, 1 black Guilty 4 7 white, 5 black Hung jury 5 9 white, 3 black Hung jury 6 11 white, 1 black Guilty```We can use this data to estimate the probability that white and black jurors would vote to convict, and then use those estimates to compute the probability of a guilty verdict.As a modeling simplification, I'll assume:* The six juries were presented with essentially the same evidence, prosecution case, and defense;* The probabilities of conviction did not change over the years of the trials (from 1997 to 2010); and* Each juror votes independently of the others; that is, I ignore interactions between jurors.I'll use the same prior distribution for white and black jurors, a uniform distribution from 0 to 1.
###Code
ps = np.linspace(0, 1, 101)
prior_p1 = Pmf(1.0, ps)
prior_p1.index.name = 'p1'
prior_p2 = Pmf(1.0, ps)
prior_p2.index.name = 'p2'
###Output
_____no_output_____
###Markdown
To prepare for the updates, I'll form a joint distribution of the two probabilities.
###Code
from utils import make_joint
joint = make_joint(prior_p2, prior_p1)
prior_pmf = Pmf(joint.stack())
prior_pmf.head()
###Output
_____no_output_____
###Markdown
Here's how we compute the update.Assuming that a guilty verdict must be unanimous, the probability of conviction is$ p = p_1^{n_1} ~ p_2^{n_2}$where* $p_1$ is the probability a white juror votes to convict* $p_2$ is the probability a black juror votes to convict* $n_1$ is the number of white jurors* $n_2$ is the number of black jurorsThe probability of an acquittal or hung jury is the complement of $p$.The following function performs a Bayesian update given the composition of the jury and the outcome, either `'guilty'` or `'hung'`. We could also do an update for an acquittal, but since that didn't happen, I didn't implement it.
###Code
def update(prior, data):
n1, n2, outcome = data
likelihood = prior.copy()
for p1, p2 in prior.index:
like = p1**n1 * p2**n2
if outcome == 'guilty':
likelihood.loc[p1, p2] = like
elif outcome == 'hung':
likelihood.loc[p1, p2] = 1-like
else:
raise ValueError()
posterior = prior * likelihood
posterior.normalize()
return posterior
###Output
_____no_output_____
###Markdown
I'll use the following function to plot the marginal posterior distributions after each update.
###Code
from utils import pmf_marginal
def plot_marginals(posterior):
marginal0 = pmf_marginal(posterior, 0)
marginal0.plot(label='white')
marginal1 = pmf_marginal(posterior, 1)
marginal1.plot(label='black')
decorate(xlabel='Probability of voting to convict',
ylabel='PDF',
title='Marginal posterior distributions')
###Output
_____no_output_____
###Markdown
Here's the update for the first trial.
###Code
data1 = 12, 0, 'guilty'
posterior1 = update(prior_pmf, data1)
plot_marginals(posterior1)
###Output
_____no_output_____
###Markdown
Since there were no black jurors for the first trial, we learn nothing about their probability of conviction, so the posterior distribution is the same as the prior.The posterior distribution for white voters reflects the data that 12 of them voted to convict.Here are the posterior distributions after the second trial.
###Code
data2 = 11, 1, 'guilty'
posterior2 = update(posterior1, data2)
plot_marginals(posterior2)
###Output
_____no_output_____
###Markdown
And the third.
###Code
data3 = 11, 1, 'guilty'
posterior3 = update(posterior2, data3)
plot_marginals(posterior3)
###Output
_____no_output_____
###Markdown
Since the first three verdicts were guilty, we infer that all 36 jurors voted to convict, so the estimated probabilities for both groups are high.The fourth trial ended in a mistrial due to a hung jury, which implies that at least one juror refused to vote to convict. That decreases the estimated probabilities for both juror pools, but it has a bigger effect on the estimate for black jurors because the total prior data pertaining to black jurors is less, so the same amount of new data moves the needle more.
###Code
data4 = 7, 5, 'hung'
posterior4 = update(posterior3, data4)
plot_marginals(posterior4)
###Output
_____no_output_____
###Markdown
The effect of the fifth trial is similar; it decreases the estimates for both pools, but the effect on the estimate for black jurors is greater.
###Code
data5 = 9, 3, 'hung'
posterior5 = update(posterior4, data5)
plot_marginals(posterior5)
###Output
_____no_output_____
###Markdown
Finally, here are the posterior distributions after all six trials.
###Code
data6 = 11, 1, 'guilty'
posterior6 = update(posterior5, data6)
plot_marginals(posterior6)
###Output
_____no_output_____
###Markdown
The posterior distributions for the two pools are substantially different. Here are the posterior means.
###Code
marginal_p1 = pmf_marginal(posterior6, 0)
marginal_p2 = pmf_marginal(posterior6, 1)
marginal_p1.mean(), marginal_p2.mean(),
###Output
_____no_output_____
###Markdown
Based on the outcomes of all six trials, we estimate that the probability is 98% that a white juror would vote to convict, and the probability is 68% that a black juror would vote to convict.Again, those results are based on the modeling simplifications that* All six juries saw essentially the same evidence,* The probabilities we're estimating did not change over the period of the trials, and* Interactions between jurors did not have substantial effects on their votes. PredictionNow we can use the joint posterior distribution to estimate the probability of conviction as a function of the composition of the jury.I'll draw a sample from the joint posterior distribution.
###Code
sample = posterior6.sample(1000)
###Output
_____no_output_____
###Markdown
Here's the probability that white jurors were more likely to convict.
###Code
np.mean([p1 > p2 for p1, p2 in sample])
###Output
_____no_output_____
###Markdown
The following function takes this sample and a hypothetical composition and returns the posterior predictive distribution for the probability of conviction.
###Code
def prob_guilty(sample, n1, n2):
ps = [p1**n1 * p2**n2 for p1, p2 in sample]
return Pmf.from_seq(ps)
###Output
_____no_output_____
###Markdown
According to [Wikipedia](https://en.wikipedia.org/wiki/Montgomery_County,_Mississippi):> As of the 2010 United States Census, there were 10,925 people living in the county. 53.0% were White, 45.5% Black or African American, 0.4% Asian, 0.1% Native American, 0.5% of some other race and 0.5% of two or more races. 0.9% were Hispanic or Latino (of any race).A jury drawn at random from the population of Montgomery County would be expected to have 5 or 6 black jurors.Here's the probability of conviction with a panel of 7 white and 5 black jurors.
###Code
pmf = prob_guilty(sample, 7, 5)
pmf.mean(), pmf.credible_interval(0.9)
###Output
_____no_output_____
###Markdown
And with 6 white and 6 black jurors.
###Code
pmf = prob_guilty(sample, 6, 6)
pmf.mean(), pmf.credible_interval(0.9)
###Output
_____no_output_____
###Markdown
With a jury that represents the population of Montgomery County, the probability Flowers would be convicted is 14-15%.However, notice that the credible intervals for these estimates are quite wide. Based on the data, the actual probabilities could be in the range from near 0 to 50%. The following figure shows the probability of conviction as a function of the number of black jurors.The probability of conviction is highest with an all-white jury, and drops quickly if there are a few black jurors. After that, the addition of more black jurors has a relatively small effect.These results suggest that all-white juries have a substantially higher probability of convicting a defendant, compared to a jury with even a few non-white jurors.
###Code
pmf_seq = []
n2s = range(0, 13)
for n2 in n2s:
n1 = 12 - n2
pmf = prob_guilty(sample, n1, n2)
pmf_seq.append(pmf)
means = [pmf.mean() for pmf in pmf_seq]
lows = [pmf.quantile(0.05) for pmf in pmf_seq]
highs = [pmf.quantile(0.95) for pmf in pmf_seq]
means
plt.plot(n2s, means)
plt.fill_between(n2s, lows, highs, color='C0', alpha=0.1)
decorate(xlabel='Number of black jurors',
ylabel='Probability of a guilty verdict',
title='Probability of a guilty verdict vs jury composition',
ylim=[0, 1])
###Output
_____no_output_____
###Markdown
Double CheckLet's compute the results a different way to double check.For the four guilty verdicts, we don't need to make or update the joint distribution; we can update the distributions for the two pools separately.
###Code
from scipy.stats import binom
k1 = 12 + 11 + 11 + 11
like1 = binom(k1, ps).pmf(k1)
prior_p1 = Pmf(like1, ps)
k2 = 0 + 1 + 1 + 1
like2 = binom(k2, ps).pmf(k2)
prior_p2 = Pmf(like2, ps)
prior_p1.plot()
prior_p2.plot()
###Output
_____no_output_____
###Markdown
We can use the posteriors from those updates as priors and update them based on the two trials that resulted in a hung jury.
###Code
prior = Pmf(make_joint(prior_p2, prior_p1).stack())
posterior = update(prior, data4)
posterior = update(posterior, data5)
###Output
_____no_output_____
###Markdown
The posterior marginals look the same.
###Code
plot_marginals(posterior)
###Output
_____no_output_____
###Markdown
And yield the same posterior means.
###Code
marginal_p1 = pmf_marginal(posterior, 0)
marginal_p2 = pmf_marginal(posterior, 1)
marginal_p1.mean(), marginal_p2.mean(),
###Output
_____no_output_____
###Markdown
Here's the probability that a fair jury would convict four times out of six.
###Code
binom.pmf(4, 6, 0.15)
###Output
_____no_output_____ |
notebooks/enzyme/CNN_classification_estimators.ipynb | ###Markdown
Loading preprocessed data
###Code
LEVEL="Level_1"
import numpy as np
train_data = np.load("..//data//train_features_"+LEVEL+".npy")
train_label = np.load("..//data//train_labels_"+LEVEL+".npy")
val_data = np.load("..//data//val_features_"+LEVEL+".npy")
val_label = np.load("..//data//val_labels_"+LEVEL+".npy")
PATH = "..//weights//cnn_{}_v2//version8.ckpt".format(LEVEL)
train_data.shape, train_label.shape, val_data.shape, val_label.shape
NUM_OF_ACIDS = 21
EMBEDDING_SIZE = 8
NUM_CLASSES = np.amax(val_label, axis=0)+1
NUM_CLASSES
NUM_EPOCH=7
BATCH_SIZE=128
###Output
_____no_output_____
###Markdown
Model
###Code
import tensorflow as tf
tf.__version__
###Output
/home/donatasrep/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
###Markdown
Setting up model
###Code
def model(features, is_training):
acid_embeddings = tf.get_variable("acid_embeddings", [NUM_OF_ACIDS, EMBEDDING_SIZE])
embedded_acids = tf.nn.embedding_lookup(acid_embeddings, features)
embedded_acids = tf.expand_dims(embedded_acids, 3)
# Convolutional Layer #1
conv1 = tf.layers.conv2d(
inputs=embedded_acids,
filters=32,
kernel_size=(3,EMBEDDING_SIZE),
padding="same",
activation=tf.nn.selu)
# Pooling Layer #1
pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=2, strides=2)
# Convolutional Layer #2 and Pooling Layer #2
conv2 = tf.layers.conv2d(
inputs=pool1,
filters=64,
kernel_size=(3,EMBEDDING_SIZE),
padding="same",
activation=tf.nn.selu)
pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=2, strides=2)
# Dense Layer
pool2_flat = tf.layers.flatten(pool2)
dense = tf.layers.dense(inputs=pool2_flat, units=1024, activation=tf.nn.selu)
dropout = tf.layers.dropout(inputs=dense, rate=0.4, training=is_training)
# Logits Layer
x = tf.layers.dense(inputs=dropout, units=NUM_CLASSES)
return x
def model_fn(features, labels, mode, params):
"""The model_fn argument for creating an Estimator."""
if mode == tf.estimator.ModeKeys.PREDICT:
logits = model(features, is_training=False)
predictions = {
'classes': tf.argmax(logits, axis=1),
'probabilities': tf.nn.softmax(logits),
}
return tf.estimator.EstimatorSpec(
mode=tf.estimator.ModeKeys.PREDICT,
predictions=predictions,
export_outputs={
'classify': tf.estimator.export.PredictOutput(predictions)
})
if mode == tf.estimator.ModeKeys.TRAIN:
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4)
logits = model(features, is_training=True)
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
accuracy = tf.metrics.accuracy(labels=labels, predictions=tf.argmax(logits, axis=1))
# Name the accuracy tensor 'train_accuracy' to demonstrate the
# LoggingTensorHook.
tf.identity(accuracy[1], name='train_accuracy')
tf.summary.scalar('train_accuracy', accuracy[1])
return tf.estimator.EstimatorSpec(
mode=tf.estimator.ModeKeys.TRAIN,
loss=loss,
train_op=optimizer.minimize(loss, tf.train.get_or_create_global_step()))
if mode == tf.estimator.ModeKeys.EVAL:
logits = model(features, is_training=False)
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
return tf.estimator.EstimatorSpec(
mode=tf.estimator.ModeKeys.EVAL,
loss=loss,
eval_metric_ops={
'accuracy': tf.metrics.accuracy(labels=labels, predictions=tf.argmax(logits, axis=1))})
enzyme_classifier = tf.estimator.Estimator(
model_fn=model_fn,
model_dir=PATH)
###Output
INFO:tensorflow:Using default config.
INFO:tensorflow:Using config: {'_model_dir': '..//weights//cnn_Level_1_v2//version8.ckpt', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': None, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7fbdc8a67860>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
WARNING:tensorflow:Estimator's model_fn (<function model_fn at 0x7fbdd60b81e0>) includes params argument, but params are not passed to Estimator.
###Markdown
Training
###Code
def train_input():
return (tf.data.Dataset.from_tensor_slices((train_data, train_label))
.shuffle(buffer_size=10000, reshuffle_each_iteration=True)
.batch(BATCH_SIZE)
.repeat(1))
tensors_to_log = {'train_accuracy': 'train_accuracy'}
logging_hook = tf.train.LoggingTensorHook(tensors=tensors_to_log, every_n_iter=100)
enzyme_classifier.train(input_fn=train_input, hooks=[logging_hook])
###Output
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from ..//weights//cnn_Level_1_v2//version8.ckpt/model.ckpt-9009
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 9010 into ..//weights//cnn_Level_1_v2//version8.ckpt/model.ckpt.
INFO:tensorflow:train_accuracy = 0.9296875
INFO:tensorflow:loss = 0.25511307, step = 9010
INFO:tensorflow:global_step/sec: 45.9907
INFO:tensorflow:train_accuracy = 0.93359375 (2.176 sec)
INFO:tensorflow:loss = 0.20976786, step = 9110 (2.176 sec)
INFO:tensorflow:global_step/sec: 47.5782
INFO:tensorflow:train_accuracy = 0.9427083 (2.101 sec)
INFO:tensorflow:loss = 0.16500229, step = 9210 (2.101 sec)
INFO:tensorflow:global_step/sec: 47.6559
INFO:tensorflow:train_accuracy = 0.9394531 (2.099 sec)
INFO:tensorflow:loss = 0.23634267, step = 9310 (2.099 sec)
INFO:tensorflow:global_step/sec: 47.9071
INFO:tensorflow:train_accuracy = 0.934375 (2.088 sec)
INFO:tensorflow:loss = 0.20455964, step = 9410 (2.088 sec)
INFO:tensorflow:global_step/sec: 47.8602
INFO:tensorflow:train_accuracy = 0.93098956 (2.088 sec)
INFO:tensorflow:loss = 0.23760758, step = 9510 (2.088 sec)
INFO:tensorflow:global_step/sec: 47.8142
INFO:tensorflow:train_accuracy = 0.93191963 (2.093 sec)
INFO:tensorflow:loss = 0.19174632, step = 9610 (2.093 sec)
INFO:tensorflow:global_step/sec: 47.7125
INFO:tensorflow:train_accuracy = 0.9296875 (2.095 sec)
INFO:tensorflow:loss = 0.21559557, step = 9710 (2.095 sec)
INFO:tensorflow:global_step/sec: 47.6116
INFO:tensorflow:train_accuracy = 0.9279514 (2.100 sec)
INFO:tensorflow:loss = 0.27396557, step = 9810 (2.100 sec)
INFO:tensorflow:global_step/sec: 47.8825
INFO:tensorflow:train_accuracy = 0.93046874 (2.088 sec)
INFO:tensorflow:loss = 0.1502693, step = 9910 (2.088 sec)
INFO:tensorflow:global_step/sec: 47.5779
INFO:tensorflow:train_accuracy = 0.93039775 (2.102 sec)
INFO:tensorflow:loss = 0.1764364, step = 10010 (2.102 sec)
INFO:tensorflow:global_step/sec: 47.6992
INFO:tensorflow:train_accuracy = 0.9316406 (2.097 sec)
INFO:tensorflow:loss = 0.15421954, step = 10110 (2.096 sec)
INFO:tensorflow:global_step/sec: 47.5825
INFO:tensorflow:train_accuracy = 0.9332933 (2.101 sec)
INFO:tensorflow:loss = 0.22446254, step = 10210 (2.101 sec)
INFO:tensorflow:Saving checkpoints for 10296 into ..//weights//cnn_Level_1_v2//version8.ckpt/model.ckpt.
INFO:tensorflow:Loss for final step: 0.29072845.
###Markdown
Validate
###Code
def eval_input():
return (tf.data.Dataset.from_tensor_slices((val_data, val_label))
.batch(BATCH_SIZE).repeat(1))
eval_results = enzyme_classifier.evaluate(input_fn=eval_input)
print()
print('Evaluation results: %s' % eval_results)
###Output
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2018-04-22-10:14:25
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from ..//weights//cnn_Level_1_v2//version8.ckpt/model.ckpt-10296
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Finished evaluation at 2018-04-22-10:14:27
INFO:tensorflow:Saving dict for global step 10296: accuracy = 0.83120793, global_step = 10296, loss = 0.6537026
Evaluation results: {'accuracy': 0.83120793, 'loss': 0.6537026, 'global_step': 10296}
###Markdown
Predict
###Code
import pandas as pd
data = pd.read_csv("..//data//test_sequences.csv", sep='\t', skipinitialspace=True)
data["Sequence"] = data.Sequence.str.rjust(500, '0')
letterToIndex = {'0': 0, 'A': 1, 'C': 2, 'D': 3, 'E': 4, 'F': 5, 'G': 6, 'H': 7, 'I': 8, 'K': 9, 'L': 10, 'M': 11, 'N': 12,
'P': 13, 'Q': 14, 'R': 15, 'S': 16, 'T': 17, 'V': 18, 'W': 19, 'Y': 20}
data["Sequence_vector"] = [[letterToIndex[char] for char in val ] for index, val in data.Sequence.iteritems()]
test_data= np.asarray([ np.asarray(element) for element in data["Sequence_vector"].values])
len(test_data), test_data
def test_input():
test_data_for_tensorflow = np.append(test_data, np.zeros((BATCH_SIZE-len(test_data), 500)), axis=0).astype(int)
return (tf.data.Dataset.from_tensor_slices((test_data_for_tensorflow))).batch(BATCH_SIZE)
np.set_printoptions(suppress=True)
predict = enzyme_classifier.predict(input_fn=test_input)
count = 0
for p in predict:
if count == 0:
print("\n\r")
print("Oxidoreductases Transferases Hydrolases Lyases Isomerases Ligases")
count = count + 1
print(p["probabilities"])
print( p["classes"]+1)
if (count == 5):
break
###Output
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from ..//weights//cnn_Level_1_v2//version8.ckpt/model.ckpt-10296
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
Oxidoreductases Transferases Hydrolases Lyases Isomerases Ligases
[0.02524485 0.42460245 0.3180151 0.08731605 0.1447798 0.0000418 ]
2
[0.01956985 0.63964903 0.10866959 0.23204246 0.00002328 0.00004582]
2
[0.15433058 0.20506094 0.6196978 0.00744062 0.00508434 0.00838569]
3
[0.00855853 0.9912123 0.00000528 0.00011529 0.00000593 0.0001026 ]
2
[0.07928088 0.457729 0.4544403 0.0002656 0.0073913 0.0008929 ]
2
|
14.MultipleRegression.ipynb | ###Markdown
**Data Science from Scratch** (밑바닥부터 시작하는 데이터과학)- https://github.com/joelgrus/data-science-from-scratch/blob/master/first-edition/code-python3/simple_linear_regression.py **Importing the required functions**Loading the helper functions needed for this work
###Code
# with open('data.json', 'r') as f:
# data = json.load(f)
# data['x'] = x
# with open('data.json', 'w') as f:
# f.write(json.dumps(data))
# stats.py
def dot(v, w):
return sum(v_i * w_i for v_i, w_i in zip(v, w))
def standard_deviation(x):
return math.sqrt(variance(x))
def median(v):
n = len(v)
sorted_v = sorted(v)
midpoint = n // 2
if n % 2 == 1:
return sorted_v[midpoint]
else:
lo = midpoint - 1
hi = midpoint
return (sorted_v[lo] + sorted_v[hi]) / 2
def mean(x):
return sum(x) / len(x)
def de_mean(x):
x_bar = mean(x)
return [x_i - x_bar for x_i in x]
def sum_of_squares(v):
return dot(v, v)
def variance(x):
n = len(x)
deviations = de_mean(x)
return sum_of_squares(deviations) / (n - 1)
def covariance(x, y):
n = len(x)
return dot(de_mean(x), de_mean(y)) / (n - 1)
def correlation(x, y):
stdev_x = standard_deviation(x)
stdev_y = standard_deviation(y)
if stdev_x > 0 and stdev_y > 0:
return covariance(x, y) / stdev_x / stdev_y
else:
return 0 # if no variation, correlation is zero
# linear_algebra.py
def dot(v, w):
return sum(v_i * w_i for v_i, w_i in zip(v, w))
def vector_add(v, w):
return [v_i + w_i for v_i, w_i in zip(v,w)]
# gradient_descent.py
def scalar_multiply(c, v):
return [c * v_i for v_i in v]
def in_random_order(data):
indexes = [i for i, _ in enumerate(data)] # create a list of indexes
random.shuffle(indexes) # shuffle them
for i in indexes: # return the data in that order
yield data[i]
def minimize_stochastic(target_fn, gradient_fn, x, y, theta_0, alpha_0=0.01):
data = list(zip(x, y))
theta = theta_0 # initial guess
alpha = alpha_0 # initial step size
min_theta, min_value = None, float("inf") # the minimum so far
iterations_with_no_improvement = 0
while iterations_with_no_improvement < 100:
value = sum( target_fn(x_i, y_i, theta) for x_i, y_i in data )
if value < min_value:
min_theta, min_value = theta, value
iterations_with_no_improvement = 0
alpha = alpha_0
else:
iterations_with_no_improvement += 1
alpha *= 0.9
for x_i, y_i in in_random_order(data):
gradient_i = gradient_fn(x_i, y_i, theta)
theta = vector_subtract(theta, scalar_multiply(alpha, gradient_i))
return min_theta
# linear_regression.py
def total_sum_of_squares(y):
return sum(v ** 2 for v in de_mean(y))
###Output
_____no_output_____
###Markdown
**Chapter 15: Multiple Regression**When the data are in a linear relationship, simple regression analysis lets us examine the relationship in detail. **1 The model**$ y_i $ is the number of minutes user **i** spends on the site each day, $x_i$ is the number of friends of user **i**, and $\varepsilon_i$ is the error term:$$ y_i = \beta x_i + \alpha + \varepsilon_i $$
###Code
# linear_algebra.py
def vector_subtract(v, w):
return [v_i - w_i for v_i, w_i in zip(v,w)]
# beta : the parameter vector
# alpha : the constant (intercept) term
def predict(alpha, beta, x_i):
return beta * x_i + alpha
def predict(x_i, beta):
return dot(x_i, beta)
###Output
_____no_output_____
###Markdown
**2 Some additional assumptions for the least squares method**With the **simple regression** model alone, least squares gives an estimate of $\beta_1$ that is **biased (underestimated)** 1. the columns of $x$ must be linearly independent of one another1. every column of $x$ must be uncorrelated with $\varepsilon$ (the errors)1. otherwise the estimates of $\beta$ will be wrong **3 Training the model**We use SGD (Stochastic Gradient Descent) to find the $\beta$ that minimizes the sum of squared errors
###Code
def error(x_i, y_i, beta):
return y_i - predict(x_i, beta)
def squared_error(x_i, y_i, beta):
return error(x_i, y_i, beta) ** 2
# gradient of the squared error for one data point with respect to beta
def squared_error_gradient(x_i, y_i, beta):
return [-2 * x_ij * error(x_i, y_i, beta)
for x_ij in x_i]
# function that computes the optimal beta using SGD
def estimate_beta(x, y):
beta_initial = [random.random() for x_i in x[0]]
return minimize_stochastic(squared_error,
squared_error_gradient,
x, y,
beta_initial,
0.001)
import json, random, math, re
with open('./data/data.json', 'r') as f:
data = json.load(f)
x = data['x']
daily_minutes_good = data['daily_minutes_good']
random.seed(0)
beta = estimate_beta(x, daily_minutes_good)
print("beta :{}\n체류시간(분): {:.4f} + {:.4f}, 친구수: {:.4f} 근무시간: {:.4f}".format(
beta, beta[0], beta[1], beta[2], beta[3]))
###Output
beta :[30.619881701311712, 0.9702056472470465, -1.8671913880379478, 0.9163711597955347]
체류시간(분): 30.6199 + 0.9702, 친구수: -1.8672 근무시간: 0.9164
###Markdown
**4 Interpreting the model**1. All else being equal, **one additional friend** increases **time on the site by 0.97 minutes** per day1. All else being equal, **one more hour of work** per day decreases **time on the site by about 2 minutes**1. Having a PhD is associated with spending **about 1 more minute per day** on the site **5 Goodness of fit**- To measure the errors we assume that 1. the $\varepsilon_i$ are **independent**1. their **mean** is 01. and their **standard deviation** $\sigma$ is that of a **normally distributed random variable**
###Code
# the best multiple regression model: it has a smaller error than simple regression
# recomputing the model's R^2 shows that it increases to 0.68
def multiple_r_squared(x, y, beta):
sum_of_squared_errors = sum(error(x_i, y_i, beta)**2
for x_i, y_i in zip(x, y))
return 1.0 - sum_of_squared_errors / total_sum_of_squares(y)
print("r-squared :", multiple_r_squared(x, daily_minutes_good, beta))
###Output
r-squared : 0.6800074955952597
###Markdown
**6 Bootstrap**A way to generate more data when all we have is a sample drawn from an unknown distribution- it works by repeatedly **resampling with replacement** to **create new datasets, item by item**
###Code
# randomly resample len(data) items with replacement
def bootstrap_sample(data):
return [random.choice(data) for _ in data]
# apply stats_fn to num_samples bootstrap samples
def bootstrap_statistic(data, stats_fn, num_samples):
return [stats_fn(bootstrap_sample(data))
for _ in range(num_samples)]
# 101 data points, all close to 100: the standard deviation is close to 0
close_to_100 = [99.5 + random.random() for _ in range(101)]
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(bootstrap_statistic(close_to_100, median, 100))
plt.title("bootstrap_statistic(close_to_100, median, 100):")
plt.show()
# of the 101 data points, 50 are close to 0 and 50 are close to 200: the standard deviation is close to 100
far_from_100 = ([99.5 + random.random()] +
[random.random() for _ in range(50)] +
[200 + random.random() for _ in range(50)])
plt.plot(bootstrap_statistic(far_from_100, median, 100))
plt.title("bootstrap_statistic(far_from_100, median, 100)")
plt.show()
###Output
_____no_output_____
###Markdown
**7 Standard errors of the coefficients**The **bootstrap** can be applied to estimate the standard errors of the coefficients
###Code
# probability.py
def normal_cdf(x, mu=0,sigma=1):
return (1 + math.erf((x - mu) / math.sqrt(2) / sigma)) / 2
%%time
# so that you get the same results as me
def estimate_sample_beta(sample):
x_sample, y_sample = list(zip(*sample)) # zip 자료풀기
return estimate_beta(x_sample, y_sample)
random.seed(0)
bootstrap_betas = bootstrap_statistic(
list(zip(x, daily_minutes_good)), estimate_sample_beta, 100)
bootstrap_standard_errors = [
standard_deviation([beta[i] for beta in bootstrap_betas])
for i in range(4)]
# constant, number of friends, work hours, PhD
bootstrap_standard_errors
def p_value(beta_hat_j, sigma_hat_j):
if beta_hat_j > 0:
# 만약 계수가 양수인 경우, 더 큰값이 발생할 확률에 2를 곱한다
return 2 * (1 - normal_cdf(beta_hat_j / sigma_hat_j))
else:
# 아니면 더 작은값 발생할 확률에 2를 곱한다
return 2 * normal_cdf(beta_hat_j / sigma_hat_j)
print("p-value>>>\n상수: {}\n친구의 수: {}\n근무시간: {}\n박사취득여부: {}".format(
p_value(30.63, 1.174), p_value(0.972, 0.079),
p_value(-1.868, 0.131), p_value(0.911, 0.990)))
###Output
p-value>>>
상수: 0.0
친구의 수: 0.0
근무시간: 0.0
박사취득여부: 0.35746719881669264
###Markdown
**8 Regularization**Collecting more and more variables can lead to overfitting.1. That is why it is useful to work with a modest number of variables (here, three)1. **Regularization** adds a penalty to the model that grows as **beta** gets larger, which yields a better model1. **Ridge regression**, the **L2** form of regularization, adds a penalty proportional to the sum of the squares of the **beta_i**
###Code
# alpha is the variable that controls the strength of the penalty
def ridge_penalty(beta, alpha):
return alpha * dot(beta[1:], beta[1:])
# estimate the sum of the error and the penalty when using beta
def squared_error_ridge(x_i, y_i, beta, alpha):
return error(x_i, y_i, beta) ** 2 + ridge_penalty(beta, alpha)
# gradient of the penalty term
def ridge_penalty_gradient(beta, alpha):
return [0] + [2 * alpha * beta_j for beta_j in beta[1:]]
# gradient of the sum of the i-th squared error and the penalty
def squared_error_ridge_gradient(x_i, y_i, beta, alpha):
return vector_add(squared_error_gradient(x_i, y_i, beta),
ridge_penalty_gradient(beta, alpha))
# fit a ridge regression with penalty alpha using gradient descent
from functools import partial
def estimate_beta_ridge(x, y, alpha):
beta_initial = [random.random() for x_i in x[0]]
return minimize_stochastic(
partial(squared_error_ridge, alpha=alpha),
partial(squared_error_ridge_gradient, alpha=alpha),
x, y, beta_initial, 0.001)
def lasso_penalty(beta, alpha):
return alpha * sum(abs(beta_i) for beta_i in beta[1:])
random.seed(0)
for alpha in [0.0, 0.01, 0.1, 1, 10]:
beta = estimate_beta_ridge(x, daily_minutes_good, alpha=alpha)
print("alpha: {}\nbeta: {}".format(alpha, beta))
print("dot(beta[1:], beta[1:]): {:.5f}".format(dot(beta[1:], beta[1:])))
print("r-squared: {:.5f}\n".format(multiple_r_squared(x, daily_minutes_good, beta)))
###Output
alpha: 0.0
beta: [30.619881701311712, 0.9702056472470465, -1.8671913880379478, 0.9163711597955347]
dot(beta[1:], beta[1:]): 5.26744
r-squared: 0.68001
alpha: 0.01
beta: [30.55985204967343, 0.9730655363505671, -1.8624424625144256, 0.9317665551046306]
dot(beta[1:], beta[1:]): 5.28374
r-squared: 0.68001
alpha: 0.1
beta: [30.894860179735474, 0.9490275238632391, -1.8501720889216575, 0.5325129720515789]
dot(beta[1:], beta[1:]): 4.60736
r-squared: 0.67973
alpha: 1
beta: [30.666778908554885, 0.908635996761392, -1.6938673046100265, 0.09370161190283018]
dot(beta[1:], beta[1:]): 3.70359
r-squared: 0.67571
alpha: 10
beta: [28.372861060795607, 0.7307660860322116, -0.9212163182015426, -0.018495551723207087]
dot(beta[1:], beta[1:]): 1.38300
r-squared: 0.57521
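###Markdown
A brief note added here (not part of the original notebook): the `lasso_penalty` function defined above implements the L1 alternative, which penalizes the sum of the absolute values of the coefficients and tends to push small coefficients all the way to zero. As a minimal usage sketch, it can be evaluated on the last fitted `beta` just like the ridge penalty:
###Code
print(lasso_penalty(beta, 1))   # L1 penalty of the last fitted beta with alpha = 1
###Output
_____no_output_____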
|
coin-change-problem-suhel-kap.ipynb | ###Markdown
Coin Change Problem How to run the code and save your workThe recommended way to run this notebook is to click the "Run" button at the top of this page, and select "Run on Binder". This will run the notebook on [mybinder.org](https://mybinder.org), a free online service for running Jupyter notebooks. This tutorial is an executable [Jupyter notebook](https://jupyter.org). You can _run_ this tutorial and experiment with the code examples in a couple of ways: *using free online resources* (recommended) or *on your computer*. Option 1: Running using free online resources (1-click, recommended)The easiest way to start executing the code is to click the **Run** button at the top of this page and select **Run on Binder**. You can also select "Run on Colab" or "Run on Kaggle", but you'll need to create an account on [Google Colab](https://colab.research.google.com) or [Kaggle](https://kaggle.com) to use these platforms. Option 2: Running on your computer locallyTo run the code on your computer locally, you'll need to set up [Python](https://www.python.org), download the notebook and install the required libraries. We recommend using the [Conda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/) distribution of Python. Click the **Run** button at the top of this page, select the **Run Locally** option, and follow the instructions. Saving your workBefore staring the assignment, let's save a snapshot of the assignment to your [Jovian](https://jovian.ai) profile, so that you can access it later, and continue your work.
###Code
project_name = 'coin-change-problem-suhel-kap' # give it an appropriate name
!pip install jovian --upgrade --quiet
import jovian
jovian.commit(project=project_name)
###Output
_____no_output_____
###Markdown
Problem Statement> Given an amount and the denominations of coins available, determine how many ways change can be made for amount. There is a limitless supply of each coin type. **Example:**`n` = `3`,`c` = `[8,3,1,2]`There are `3` ways to make change for `n`, i.e., `{1,1,1},{1,2},{3}`https://www.hackerrank.com/challenges/coin-change/problem The MethodHere's the systematic strategy we'll apply for solving problems:1. State the problem clearly. Identify the input & output formats.2. Come up with some example inputs & outputs. Try to cover all edge cases.3. Come up with a correct solution for the problem. State it in plain English.4. Implement the solution and test it using example inputs. Fix bugs, if any.5. Analyze the algorithm's complexity and identify inefficiencies, if any.6. Apply the right technique to overcome the inefficiency. Repeat steps 3 to 6.This approach is explained in detail in [Lesson 1](https://jovian.ai/learn/data-structures-and-algorithms-in-python/lesson/lesson-1-binary-search-linked-lists-and-complexity) of the course. Let's apply this approach step-by-step. Solution 1. State the problem clearly. Identify the input & output formats.While this problem is stated clearly enough, it's always useful to try and express it in your own words, in a way that makes it most clear for you. **Problem**> We are given an integer `n` that we need to split as a sum of coins of different denominations, and we can use a coin as many times as we want, i.e., there is an unlimited supply of coins. We need to return an integer showing the number of ways that we can split the number `n`**Input**1. **First Line**: an integer `n` that needs to be changed and `m` denoting the different types of coins available2. **Second Line**: `m` spaced integers showing the available denominations**Output**1. Single integer denoting the number of ways to make the changeBased on the above, we can now create a signature of our function:
###Code
def coinChange(coins,lengthOfCoins,amt):
pass
###Output
_____no_output_____
###Markdown
Save and upload your work before continuing.
###Code
import jovian
jovian.commit()
###Output
_____no_output_____
###Markdown
2. Come up with some example inputs & outputs. Try to cover all edge cases.Our function should be able to handle any set of valid inputs we pass into it. Here's a list of some possible variations we might encounter:1. Generic possible case2. Only 1 type of coin3. All coins greater than `n`4. An impossible case5. A very large test caseWe'll express our test cases as dictionaries, to test them easily. Each dictionary will contain 2 keys: `input` (a dictionary itself containing one key for each argument to the function) and `output` (the expected result from the function).
###Code
#Generic Possible Case
test0 = {
'input': {
'n':3,'m':4,
'coins':[8,3,1,2]
},
'output': 3
}
#Only 1 type of coin
test1 = {
'input': {
'n':5,'m':1,
'coins':[1]
},
'output': 1
}
#Only 1 type of coin
test2 = {
'input': {
'n':6,'m':1,
'coins':[4]
},
'output': 0
}
#All coins greater than `n`
test3 = {
'input': {
'n':5,'m':4,
'coins':[9,6,7,8]
},
'output': 0
}
#An impossible case
test4 = {
'input': {
'n':10,'m':5,
'coins':[12,4,8,9,11]
},
'output': 0
}
#A large test case
test5 = {
'input': {
'n':245,'m':26,
'coins':[16,30,9, 17, 40, 13, 42, 5 ,25, 49, 7, 23, 1, 44, 4, 11, 33, 12, 27, 2 ,38, 24, 28, 32, 14, 50]
},
'output': 64027917156
}
###Output
_____no_output_____
###Markdown
Create one test case for each of the scenarios listed above. We'll store our test cases in an array called `tests`.
###Code
tests = [test0,test1,test2,test3,test4,test5]
# add more test cases
###Output
_____no_output_____
###Markdown
3. Come up with a correct solution for the problem. State it in plain English.Our first goal should always be to come up with a _correct_ solution to the problem, which may not necessarily be the most _efficient_ solution. Come up with a correct solution and explain it in simple words below:**Recursive Solution**Suppose we need to split 20 using coins of 5, 4, 2 and 1. 1. We can say that 20 can be split as 20 = ways using 0 coins of 5 + ways using 1 coin of 5 + ....2. This can also be read as 20 = ways using only [4,2,1] + ....3. Recursively we can write it as number of ways = coinChange(n - 0 * next denomination, remaining denominations) + coinChange(n - 1 * next denomination, remaining denominations) and so onLet's save and upload our work before continuing.
###Code
jovian.commit()
###Output
_____no_output_____
###Markdown
4. Implement the solution and test it using example inputs. Fix bugs, if any.
###Code
def numberOfways(n,m,coins):
# n,m = input().split()
# n,m = int(n),int(m)
# coins = list(map(int,input().strip().split()))[:m]
coins.sort(reverse=True)
return coinChange(coins,m,n)
def coinChange(coins,lengthCoins,amount):
if amount == 0:
return 1
if amount < 0 or (lengthCoins <= 0 and amount >=1):
return 0
return coinChange(coins,lengthCoins,amount-coins[lengthCoins-1]) + coinChange(coins,lengthCoins-1,amount)
###Output
_____no_output_____
###Markdown
We can test the function by passing the input to it directly or by using the `evaluate_test_case` function from `jovian`.
###Code
from jovian.pythondsa import evaluate_test_case
#evaluate_test_case(numberOfways,test5)
###Output
_____no_output_____
###Markdown
Evaluate your function against all the test cases together using the `evaluate_test_cases` (plural) function from `jovian`.
###Code
from jovian.pythondsa import evaluate_test_cases
#evaluate_test_cases(numberOfways,tests)
###Output
_____no_output_____
###Markdown
Verify that all the test cases were evaluated. We expect them all to fail, since we haven't implemented the function yet.Let's save our work before continuing.
###Code
jovian.commit()
###Output
_____no_output_____
###Markdown
5. Analyze the algorithm's complexity and identify inefficiencies, if any.
###Code
'''
The solution uses Python's built-in sort, which costs O(M log M) for M denominations, but the
dominant cost is the recursion: every call branches into two further calls, so the
recursion tree grows exponentially with the amount N.
'''
timeComplexity = 'O(2^N)'
'''
No auxiliary data structure is used, but the recursion consumes call-stack space
proportional to its depth (roughly the amount divided by the smallest denomination).
'''
spaceComplexity = 'O(N)'
jovian.commit()
###Output
_____no_output_____
###Markdown
6. Apply the right technique to overcome the inefficiency. Repeat steps 3 to 6. We can use the memoization technique to overcome the inefficiency
###Code
jovian.commit()
###Output
_____no_output_____
###Markdown
7. Come up with a correct solution for the problem. State it in plain English.
Come up with the optimized correct solution and explain it in simple words below:
1. We create an empty dictionary named `memo`.
2. Every time we come across a subproblem, we first check whether it is already in `memo`.
3. If it is, we simply return the answer that was stored as the value for that key in the dictionary.
4. If it is not present in the dictionary, we compute the result and then store the key-value pair in the dictionary.
Let's save and upload our work before continuing.
###Code
jovian.commit()
###Output
_____no_output_____
###Markdown
8. Implement the solution and test it using example inputs. Fix bugs, if any.
###Code
# n is the amount, m is the number of coin denominations, coins is the array of denominations
def numberOfWaysOptimized(n,m,coins):
    memo = {}
    return coinChangeOptimized(n,m-1,coins,memo)
def coinChangeOptimized(n,m,coins,memo):
    # if the amount equals 0 we've found a valid way and we return 1
    if n == 0:
        return 1
    # if the amount becomes negative or we run out of denominations,
    # there is no solution for this case and hence we return 0
    if n < 0 or m < 0:
        return 0
    # we create a key storing the denomination index and the amount
    key = (m,n)
    #print("Key:(length,amount)",key)
    # check whether we have come across this subproblem before
    if key not in memo:
        # include one coin of the denomination coins[m]
        include = coinChangeOptimized(n-coins[m],m,coins,memo)
        #print("Include: ",include)
        # exclude the denomination coins[m]
        exclude = coinChangeOptimized(n,m-1,coins,memo)
        #print("Exclude: ",exclude)
        memo[key] = include + exclude
        #print("memo[",key,"]: ",memo[key])
    return memo[key]
evaluate_test_case(numberOfWaysOptimized,test0)
evaluate_test_cases(numberOfWaysOptimized,tests)
###Output
[1mTEST CASE #0[0m
Input:
{'n': 3, 'm': 4, 'coins': [8, 3, 1, 2]}
Expected Output:
3
Actual Output:
3
Execution Time:
0.014 ms
Test Result:
[92mPASSED[0m
[1mTEST CASE #1[0m
Input:
{'n': 5, 'm': 1, 'coins': [1]}
Expected Output:
1
Actual Output:
1
Execution Time:
0.007 ms
Test Result:
[92mPASSED[0m
[1mTEST CASE #2[0m
Input:
{'n': 6, 'm': 1, 'coins': [4]}
Expected Output:
0
Actual Output:
0
Execution Time:
0.004 ms
Test Result:
[92mPASSED[0m
[1mTEST CASE #3[0m
Input:
{'n': 5, 'm': 4, 'coins': [9, 6, 7, 8]}
Expected Output:
0
Actual Output:
0
Execution Time:
0.017 ms
Test Result:
[92mPASSED[0m
[1mTEST CASE #4[0m
Input:
{'n': 10, 'm': 5, 'coins': [12, 4, 8, 9, 11]}
Expected Output:
0
Actual Output:
0
Execution Time:
0.034 ms
Test Result:
[92mPASSED[0m
[1mTEST CASE #5[0m
Input:
{'n': 245, 'm': 26, 'coins': [16, 30, 9, 17, 40, 13, 42, 5, 25, 49, 7, 23, 1, 44, 4, 11, 33, 12, 27,...
Expected Output:
64027917156
Actual Output:
64027917156
Execution Time:
5.852 ms
Test Result:
[92mPASSED[0m
[1mSUMMARY[0m
TOTAL: 6, [92mPASSED[0m: 6, [91mFAILED[0m: 0
###Markdown
9. We can also solve this problem using dynamic programming.
1. We create a list with amount+1 elements, with list[0] = 1 and the rest of the elements = 0.
2. The outer loop index i runs over the coin denominations, from 0 to the last element of the array.
3. For the inner loop index j (running from coins[i] to amount), we update the j'th element of the list. This update simply adds the number of solutions when we use the i'th coin, which equals the number of ways of making change for j - coins[i].
4. After all the coins have been used, we return the last element of the list.
###Code
def coinChangeDP(n,m,coins):
    # result[j] = number of ways to make amount j; there is one way to make 0 (use no coins)
    result = [1] + [0]*n
    for i in range(m):
        # for every amount j that the i'th coin can contribute to,
        # add the number of ways of making the remainder j - coins[i]
        for j in range(coins[i],n+1):
            result[j] = result[j] + result[j-coins[i]]
    return result[-1]
evaluate_test_cases(coinChangeDP,tests)
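# (Added sketch) Compare the memoized and the dynamic programming solutions on the
# largest test case; timings are illustrative and will vary between machines.
import time
large = test5['input']
for fn in (numberOfWaysOptimized, coinChangeDP):
    start = time.perf_counter()
    fn(large['n'], large['m'], list(large['coins']))
    print(fn.__name__, 'took', round((time.perf_counter() - start) * 1000, 3), 'ms')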
###Output
[1mTEST CASE #0[0m
Input:
{'n': 3, 'm': 4, 'coins': [8, 3, 1, 2]}
Expected Output:
3
Actual Output:
3
Execution Time:
0.007 ms
Test Result:
[92mPASSED[0m
[1mTEST CASE #1[0m
Input:
{'n': 5, 'm': 1, 'coins': [1]}
Expected Output:
1
Actual Output:
1
Execution Time:
0.003 ms
Test Result:
[92mPASSED[0m
[1mTEST CASE #2[0m
Input:
{'n': 6, 'm': 1, 'coins': [4]}
Expected Output:
0
Actual Output:
0
Execution Time:
0.009 ms
Test Result:
[92mPASSED[0m
[1mTEST CASE #3[0m
Input:
{'n': 5, 'm': 4, 'coins': [9, 6, 7, 8]}
Expected Output:
0
Actual Output:
0
Execution Time:
0.008 ms
Test Result:
[92mPASSED[0m
[1mTEST CASE #4[0m
Input:
{'n': 10, 'm': 5, 'coins': [12, 4, 8, 9, 11]}
Expected Output:
0
Actual Output:
0
Execution Time:
0.011 ms
Test Result:
[92mPASSED[0m
[1mTEST CASE #5[0m
Input:
{'n': 245, 'm': 26, 'coins': [16, 30, 9, 17, 40, 13, 42, 5, 25, 49, 7, 23, 1, 44, 4, 11, 33, 12, 27,...
Expected Output:
64027917156
Actual Output:
64027917156
Execution Time:
1.56 ms
Test Result:
[92mPASSED[0m
[1mSUMMARY[0m
TOTAL: 6, [92mPASSED[0m: 6, [91mFAILED[0m: 0
###Markdown
If you found the problem on an external platform, you can make a submission to test your solution.
Share your approach and start a discussion on the Jovian forum: https://jovian.ai/forum/c/data-structures-and-algorithms-in-python/78
###Code
jovian.commit()
jovian.submit(assignment="pythondsa-project")
###Output
_____no_output_____ |
experiments/VR-inserts/notebooks/PCR-nanodrop-results.ipynb | ###Markdown
PCR nanodrop results
Simple plots for visualizing nanodrop results of PCR products.
###Code
import pandas as pd
%load_ext rpy2.ipython
%%R
library(ggplot2)
library(ggpubr)
library(RColorBrewer)
library(reshape2)
%%R
# Basic barplot showing yields of all samples; requires the sample name column to
# be called "Sample" and the measured ng/ul DNA column to be called "Yield"
basic.bars <- function(df, title='', x_lab='', y_lab='ng/ul DNA'){
colors <- colorRampPalette(brewer.pal(8, "Dark2"))(nrow(df))
ggplot(df, aes(x=Sample, y=Yield, fill=Sample)) +
geom_bar(stat='identity', color='black', size=1, width=0.7) +
theme_pubr() + theme(legend.position='none') + scale_fill_manual(values=colors) +
theme(axis.text.x = element_text(angle = 45, vjust = 1, hjust=1)) +
labs(title=title, x=x_lab, y=y_lab)
}
%%R
# Boxplot showing median values for yeild and purity ratios
basic.box <- function(df, title='', x_lab='', y_lab=''){
df.melt <- melt(df)
colors <- colorRampPalette(brewer.pal(8, "Dark2"))(length(unique(df.melt$variable)))
ggplot(df.melt, aes(x=variable, y=value, fill=variable)) +
geom_boxplot(color='black', alpha=0.7) +
theme_pubr() + theme(legend.position='none') +
scale_fill_manual(values=colors) +
labs(title=title, x=x_lab, y=y_lab) +
facet_wrap(~variable, scales = "free") +
theme(axis.title.x=element_blank(),
axis.text.x=element_blank(),
axis.ticks.x=element_blank()
)
}
###Output
_____no_output_____
###Markdown
8-24-21
###Code
PCR_8_24 = pd.read_csv('../tables/VRn-PCR-nanodrop-unpurified-8-24-21.csv')
%%R -i PCR_8_24 -w 5 -h 5 --units in -r 200
bars.8.24 <- basic.bars(PCR_8_24, title='PCR yields: 25ul sample volume')
ggsave('../images/bars.8.24.PCR.plot.png')
bars.8.24
%%R -i PCR_8_24 -w 5 -h 5 --units in -r 200
basic.box(PCR_8_24)
###Output
R[write to console]: Using Sample as id variables
|
Notebook/MaskRCNN (1).ipynb | ###Markdown
**Import Mask R-CNN**
###Code
# Directory to save logs and trained model
MODEL_DIR = os.path.join(ROOT_DIR, "logs")
print(MODEL_DIR)
# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5")
print(COCO_MODEL_PATH)
# Download COCO trained weights from Releases if needed
if not os.path.exists(COCO_MODEL_PATH):
utils.download_trained_weights(COCO_MODEL_PATH)
# Directory of images to run detection on
IMAGE_DIR = os.path.join(ROOT_DIR, "images")
batch_size = 2
class InferenceConfig(coco.CocoConfig):
# Set batch size to 1 since we'll be running inference on
# one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU
GPU_COUNT = 1
IMAGES_PER_GPU = 2
NAME = 'Coco'
#BATCH_SIZE = batch_size
config = InferenceConfig()
config.display()
# Create model object in inference mode.
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)
# Load weights trained on MS-COCO
model.load_weights(COCO_MODEL_PATH, by_name=True)
###Output
_____no_output_____
###Markdown
Load COCO dataset:
COCO_DIR = os.path.join(ROOT_DIR, 'coco')
dataset = coco.CocoDataset()
dataset.load_coco(COCO_DIR, "train")
dataset.prepare()
Print class names:
print(dataset.class_names)
###Code
# COCO Class names
# Index of the class in the list is its ID. For example, to get ID of
# the teddy bear class, use: class_names.index('teddy bear')
class_names = ['BG', 'person', 'bicycle', 'car', 'motorcycle', 'airplane',
'bus', 'train', 'truck', 'boat', 'traffic light',
'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird',
'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear',
'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie',
'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',
'kite', 'baseball bat', 'baseball glove', 'skateboard',
'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup',
'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed',
'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote',
'keyboard', 'cell phone', 'microwave', 'oven', 'toaster',
'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors',
'teddy bear', 'hair drier', 'toothbrush']
###Output
_____no_output_____
###Markdown
IMAGE_DIR = os.path.join(ROOT_DIR, "Test_Data")
IMAGE_SAVE_DIR = os.path.join(ROOT_DIR, "Image_Results")
img = cv2.imread(os.path.join(IMAGE_DIR, "Test.jpg"))
image = []
image.append(img)
print(len(image))
print(config.BATCH_SIZE)
r = model.detect(image, verbose=0)
visualize.save_image(
    image_name = name, image = image, boxes = r['rois'], masks = r['masks'], class_ids = r['class_ids'], \
    class_names = class_names, scores = r['scores'], save_dir = IMAGE_SAVE_DIR
)
###Code
VIDEO_DIR = os.path.join(ROOT_DIR, "Test_Data")
VIDEO_SAVE_DIR = os.path.join(ROOT_DIR, "Results")
print(VIDEO_DIR)
capture = cv2.VideoCapture(os.path.join(VIDEO_DIR, 'Test-Clip2.mp4'))
frame_count = 0
frames = []
while True:
ret, frame = capture.read()
# Bail out when the video file ends
if not ret:
break
# Save each frame of the video to a list
frame_count += 1
frames.append(frame)
if len(frames) == batch_size:
results = model.detect(frames, verbose=0)
for i, item in enumerate(zip(frames, results)):
frame = item[0]
r = item[1]
name = '{0}.jpg'.format(frame_count + i - batch_size)
visualize.save_image(
image_name = name, image = frame, boxes = r['rois'], masks = r['masks'], class_ids = r['class_ids'], \
class_names = class_names, scores = r['scores'], save_dir = VIDEO_SAVE_DIR
)
# Clear the frames array to start the next batch
frames = []
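# (Added sketch, not part of the original notebook) Stitch the saved frames back into
# a video with OpenCV's VideoWriter. The output name and MJPG codec are assumptions,
# and the frames are expected to be named "<index>.jpg" as in the loop above.
frame_files = sorted(
    [f for f in os.listdir(VIDEO_SAVE_DIR) if f.endswith('.jpg')],
    key=lambda f: int(os.path.splitext(f)[0])
)
if frame_files:
    first_frame = cv2.imread(os.path.join(VIDEO_SAVE_DIR, frame_files[0]))
    height, width = first_frame.shape[:2]
    fps = capture.get(cv2.CAP_PROP_FPS) or 25.0  # fall back to 25 fps if unknown
    writer = cv2.VideoWriter(
        os.path.join(VIDEO_SAVE_DIR, 'masked_output.avi'),
        cv2.VideoWriter_fourcc(*'MJPG'), fps, (width, height)
    )
    for fname in frame_files:
        writer.write(cv2.imread(os.path.join(VIDEO_SAVE_DIR, fname)))
    writer.release()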
###Output
_____no_output_____ |
3_Merging_Data.ipynb | ###Markdown
3. Merging Datasets
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
plt.rcParams['font.size'] = 12
plt.rcParams['legend.frameon'] = False
plt.rcParams['axes.spines.top'] = False
plt.rcParams['axes.spines.right'] = False
%config InlineBackend.figure_format = 'retina'
###Output
_____no_output_____
###Markdown
Load in Penn World Tables Let's start by loading in the Penn World Tables Data
###Code
pwt0 = pd.read_excel('data/pwt100.xlsx', sheet_name='Data')
pwt0 = pwt0[['year', 'countrycode', 'rgdpna', 'pop', 'hc', 'avh']].dropna()
pwt0.head()
###Output
_____no_output_____
###Markdown
Load in World Bank inequality data Now we load in a second dataset that we wish to merge with the PWT. Here we will use the World Bank data on global inequality, which has Gini coefficient data for a large set of countries over many years.
###Code
gini = pd.read_excel('data/world_bank_gini.xls')
gini.head()
###Output
_____no_output_____
###Markdown
We can see that the World Bank data doesn't appear every year for many countries.
###Code
gini_pan = gini.pivot('year', 'countrycode')['gini']
gini_pan['VNM'].dropna().plot(marker='o');
###Output
_____no_output_____
###Markdown
Merge the two together Finally, we want to merge these two together. To do this, we will match any rows that have the same value for both `year` and `countrycode`.
###Code
full = pd.merge(pwt0, gini, how='left', on=('year', 'countrycode'))
full.head()
###Output
_____no_output_____
###Markdown
Now we can see how the Gini coefficient relates to other variables, like GDP per capita
###Code
full['lgdp_per'] = np.log(full['rgdpna']/full['pop'])
sns.jointplot('lgdp_per', 'gini', kind='reg', data=full);
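# (Added sketch) Quantify the relationship shown above: Pearson correlation between
# log GDP per capita and the Gini coefficient, using rows where both are available
print(full[['lgdp_per', 'gini']].corr())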
###Output
_____no_output_____ |
Haberman_EDA.ipynb | ###Markdown
###Code
from google.colab import files
upload=files.upload()
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
haberman=pd.read_csv('haberman.csv')
haberman
haberman.columns
haberman.columns=['age','year','node','survival_status']
print(haberman.shape)
haberman.columns
haberman['survival_status'].value_counts()
haberman.plot(kind='scatter',x='age',y='year')
plt.show()
sns.FacetGrid(haberman,hue='survival_status').map(plt.scatter,'age','year').add_legend()
plt.show()
sns.pairplot(haberman,hue='survival_status')
plt.show()
sns.FacetGrid(haberman,hue='survival_status').map(sns.distplot,'age').add_legend()
sns.FacetGrid(haberman,hue='survival_status').map(sns.distplot,'year').add_legend()
sns.FacetGrid(haberman,hue='survival_status').map(sns.distplot,'node').add_legend()
clss1=haberman.loc[haberman['survival_status']==1]
sns.boxplot(x='survival_status',y='age',data=haberman)
plt.show()
###Output
_____no_output_____ |
notebooks/model_arima.ipynb | ###Markdown
Conclusion: AR(3)
###Code
# Autocorrelation (ACF) plot of the training series
plt.figure(figsize=(14, 8))
acf = plot_acf(train, lags=30)
###Output
_____no_output_____
###Markdown
Conclusion: MA between 3 and 10
###Code
# setting the MLFlow connection and experiment
mlflow.set_tracking_uri(TRACKING_URI)
mlflow.set_experiment(EXPERIMENT_NAME)
mlflow.start_run()
run = mlflow.active_run()
train.head()
train.shape
test.shape
model = ARIMA(train, order=(3,1,1))
results = model.fit()
results.aic
forecast = results.forecast(steps=1)
# RMSE
rmse = np.sqrt(mean_squared_error(test.iloc[0:1], forecast))
rmse
# define ranges for p, d, q
p_range = range(0,6)
d_range = range(0,2)
q_range = range(0,11)
for p in p_range:
for d in d_range:
for q in q_range:
order = (p,d,q)
model = ARIMA(train, order=order)
model_fit = model.fit()
pred_y = model_fit.forecast(steps=1)
error = np.sqrt(mean_squared_error(test.iloc[0:1], pred_y))
print(f'ARIMA {order} RMSE = {error}')
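# (Added sketch) The loop above only prints each RMSE; this variant re-runs the same
# grid but keeps the (order, RMSE) pairs so the best order can be picked
# programmatically. It is as expensive as the loop above.
grid_results = []
for p in p_range:
    for d in d_range:
        for q in q_range:
            try:
                fit = ARIMA(train, order=(p, d, q)).fit()
                pred = fit.forecast(steps=1)
                grid_results.append(((p, d, q), np.sqrt(mean_squared_error(test.iloc[0:1], pred))))
            except Exception:
                # some (p, d, q) combinations may fail to converge
                continue
best_order, best_rmse = min(grid_results, key=lambda item: item[1])
print(f'Best ARIMA order: {best_order} with RMSE = {best_rmse}')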
# parameters of the ARIMA model fitted above (order p, d, q = 3, 1, 1), whose RMSE is logged below
mlflow_params = {
    "p": 3,
    "d": 1,
    "q": 1,
}
# logging params to mlflow
mlflow.log_params(mlflow_params)
# setting tags
mlflow.set_tag("model", "ARIMA")
mlflow.set_tag("features", "imbalance price")
# logging metrics
mlflow.log_metric("test-" + "RMSE", rmse)
mlflow.log_metric("AIC", results.aic)
# end run
mlflow.end_run()
###Output
_____no_output_____ |
feature_selection_and_analyses/recursive_feature_elimination_regression_testing.ipynb | ###Markdown
Testing the module for performing recursive feature elimination ('recursive_feature_elimination.py')
###Code
import recursive_feature_elimination as rfe
# Path to training data
path_to_file = '/Users/songyojung/data/classification/df_train_1.pkl'
path_to_save = '/Users/songyojung/data/classification'
feature_relevance_score = '/Users/songyojung/Documents/GitHub/GBSFS4MPP/data/classification/feature_relevance_score.pkl'
path_to_features = '/Users/songyojung/data/classification/features_selected_from_hierarchical_analysis.pkl'
problem = 'classification'
path_to_file2 = '/Users/songyojung/data/regression/df_train_2.pkl'
path_to_save2 = '/Users/songyojung/data/regression'
feature_relevance_score2 = '/Users/songyojung/Documents/GitHub/GBSFS4MPP/data/regression/feature_relevance_score.pkl'
path_to_features2 = '/Users/songyojung/data/regression/features_selected_from_hierarchical_analysis.pkl'
problem2 = 'regression'
# Initialize with four args as shown
feature_sel = rfe.recursive_feature_elimination(
path_to_file2,
path_to_save2,
path_to_features2,
problem2
)
# Choose baseline model
# boosting_method can be: lightGBM or XGBoost
feature_sel.base_model(boosting_method = 'lightGBM')
feature_sel.perform()
# Plot figure of the result
feature_sel.RFE_plot()
###Output
_____no_output_____ |
Error-Handling.ipynb | ###Markdown
Some of the most common errors in Python are:
* SyntaxError -> e.g. we forgot `:` after defining a function
* NameError -> trying to manipulate/access undefined variables
* IndexError -> trying to access an element from a data structure which doesn't have that particular index
* ZeroDivisionError -> dividing a number by zero
---
More exceptions: https://docs.python.org/3/library/exceptions.html
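Below is a quick illustrative sketch (not from the original notebook) of three of these exceptions being raised and caught; `SyntaxError` is left out because it occurs at parse time, before a `try`/`except` can run.
###Code
# Hypothetical one-liners that trigger the exception types listed above
try:
    print(undefined_variable)    # NameError: the name was never defined
except NameError as err:
    print("NameError:", err)
try:
    [1, 2, 3][10]                # IndexError: index 10 is out of range
except IndexError as err:
    print("IndexError:", err)
try:
    10 / 0                       # ZeroDivisionError
except ZeroDivisionError as err:
    print("ZeroDivisionError:", err)
###Output
_____no_output_____
###Markdown
Now let's see how to handle such errors when taking input from a user.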
###Code
# Let's consider we are writing code to take in the user's age
age = input('Enter the age')
print(age)
###Output
Enter the age10
10
###Markdown
But here we have an issue. What if we pass a string?
###Code
age = input('Enter the age')
print(age)
try:
age=int(input('Enter your age'))
except:
print("Enter a valid number")
while True:
try:
age=int(input('Enter your age'))
except:
print("Enter a valid number")
else:
break
###Output
Enter your agesid
Enter a valid number
Enter your age100
###Markdown
--- Checking For Multiple Exceptions
###Code
while True:
try:
age=int(input('Enter your age'))
10/age
except ZeroDivisionError:
print("Enter age > 0")
except ValueError:
print("Enter a number")
else:
break
###Output
Enter your age0
Enter age > 0
Enter your age34
###Markdown
--- Combining Multiple Exceptions
###Code
def add(n1,n2):
try:
return n1+n2
except (TypeError,ZeroDivisionError) as err:
print(err)
add(1,'s')
def add(n1,n2):
try:
return n1/n2
except (TypeError,ZeroDivisionError) as err:
print(err)
add(1,0)
###Output
division by zero
###Markdown
--- Creating Custom Errors
###Code
# Using raise
# finally executes at the end evertime
while True:
try:
age=int(input('Enter your age'))
10/age
raise Exception("Hey not possible")
except ZeroDivisionError:
print("Enter age > 0")
except ValueError:
print("Enter a number")
else:
break
finally:
print("end")
###Output
Enter your age3
end
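###Markdown
As an illustrative sketch (not from the original notebook), a common next step is to define our own exception type by subclassing `Exception`:
###Code
class InvalidAgeError(Exception):
    """Raised when the supplied age is not a sensible value."""
    pass

def check_age(age):
    # reject non-positive ages with our custom exception
    if age <= 0:
        raise InvalidAgeError(f"Age must be a positive number, got {age}")
    return age

try:
    check_age(-5)
except InvalidAgeError as err:
    print("Caught custom error:", err)
###Output
_____no_output_____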
|
tests/ipython-notebooks/NumPy.ipynb | ###Markdown
NumPy NumPy is the fundamental package for scientific computing with Python. It contains among other things:- a powerful N-dimensional array object- sophisticated (broadcasting) functions- tools for integrating C/C++ and Fortran code- useful linear algebra, Fourier transform, and random number capabilitiesBesides its obvious scientific uses, NumPy can also be used as an efficient multi-dimensional container of generic data. Arbitrary data-types can be defined. This allows NumPy to seamlessly and speedily integrate with a wide variety of databases.Library documentation: http://www.numpy.org/
###Code
from numpy import *
# declare a vector using a list as the argument
v = array([1,2,3,4])
v
# declare a matrix using a nested list as the argument
M = array([[1,2],[3,4]])
M
# still the same core type with different shapes
type(v), type(M)
M.size
# arguments: start, stop, step
x = arange(0, 10, 1)
x
linspace(0, 10, 25)
logspace(0, 10, 10, base=e)
x, y = mgrid[0:5, 0:5]
x
y
from numpy import random
random.rand(5,5)
# normal distribution
random.randn(5,5)
diag([1,2,3])
M.itemsize
M.nbytes
M.ndim
v[0], M[1,1]
M[1]
# assign new value
M[0,0] = 7
M
M[0,:] = 0
M
# slicing works just like with lists
A = array([1,2,3,4,5])
A[1:3]
A = array([[n+m*10 for n in range(5)] for m in range(5)])
A
row_indices = [1, 2, 3]
A[row_indices]
# index masking
B = array([n for n in range(5)])
row_mask = array([True, False, True, False, False])
B[row_mask]
###Output
_____no_output_____
###Markdown
Linear Algebra
###Code
v1 = arange(0, 5)
v1 + 2
v1 * 2
v1 * v1
dot(v1, v1)
dot(A, v1)
# cast changes behavior of + - * etc. to use matrix algebra
M = matrix(A)
M * M
# note: for a 1-D array this is element-wise multiplication; use dot(v, v) for the inner product
v.T * v
C = matrix([[1j, 2j], [3j, 4j]])
C
conjugate(C)
# inverse
C.I
###Output
_____no_output_____
###Markdown
Statistics
###Code
mean(A[:,3])
std(A[:,3]), var(A[:,3])
A[:,3].min(), A[:,3].max()
d = arange(1, 10)
sum(d), prod(d)
cumsum(d)
cumprod(d)
# sum of diagonal
trace(A)
m = random.rand(3, 3)
m
# use axis parameter to specify how function behaves
m.max(), m.max(axis=0)
A
# reshape without copying underlying data
n, m = A.shape
B = A.reshape((1,n*m))
B
# modify the array
B[0,0:5] = 5
B
# also changed
A
# creates a copy
B = A.flatten()
B
# can insert a dimension in an array
v = array([1,2,3])
v[:, newaxis], v[:,newaxis].shape, v[newaxis,:].shape
repeat(v, 3)
tile(v, 3)
w = array([5, 6])
concatenate((v, w), axis=0)
# deep copy
B = copy(A)
# Tested: gopala
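# (Added sketch) broadcasting, mentioned in the introduction: a shape-(3,) vector is
# stretched across each row of a (3, 3) array when the two are added together
grid = array([[0, 0, 0], [10, 10, 10], [20, 20, 20]])
offset = array([1, 2, 3])
grid + offset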
###Output
_____no_output_____ |
02_Random_Walk.ipynb | ###Markdown
Random systems, simulation, and validation
*Joël Foramitti, 09.02.2022*
This notebook introduces a random system that we try to represent with a model.
###Code
import random
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_theme()
###Output
_____no_output_____
###Markdown
A hypothetical climate system Below, we define a model of a hypothetical climate system. This climate has one variable **temperature** which starts at 0 and changes every year by a random amount. This is also called a random walk. The random step follows a normal distribution with a mean value **a** and a standard deviation **b**. These are the parameters of the model.
###Code
def simulate_temperature(a, b):
# Prepare simulation
temperature = 0
n_years = 100
data = {
'year': np.arange(n_years),
'temperature': np.zeros(n_years),
'temperature_change': np.zeros(n_years)
}
# Simulate year by year
for year in range(n_years):
# Generate random temperature change
temperature_change = random.gauss(a, b)
# Add temperature change to temperature
temperature += temperature_change
# Record data from the current year
data['temperature_change'][year] = temperature_change
data['temperature'][year] = temperature
# Return recorded data as a dataframe
return pd.DataFrame(data)
###Output
_____no_output_____
###Markdown
Imagine that the following data is an observation from our target system (the actual system, not the model). We observe the temperature of our climate for 100 years:
###Code
observed_data = simulate_temperature(0, 1)
sns.lineplot(data=observed_data, x='year', y='temperature');
###Output
_____no_output_____
###Markdown
A regression model We now try to represent this system by building a regression model to fit the observed data.
###Code
sns.regplot(data=observed_data, x='year', y='temperature', order=4);
###Output
_____no_output_____
###Markdown
This approach can sometimes be useful to see trends and make forecasts, but usually doesn't increase our understanding of the system since it does not attempt to represent the underlying mechanisms. In this case, it is also not useful to make forecasts, as we know that the system is completely random. A simulation model The following model is a correct representation of the target system. An experiment can run the model multiple times. We run the model 1000 times.
###Code
def experiment(a, b, runs):
data = pd.DataFrame()
for i in range(runs):
df = simulate_temperature(a, b)
df['run'] = i
data = data.append(df)
return data
simulated_data = experiment(0, 1, 1000)
###Output
_____no_output_____
###Markdown
Even though the model is correct, it is not able to reproduce the same data series (which is one random outcome of an infinite number of possible random outcomes).
###Code
sns.lineplot(data=simulated_data, x='year', y='temperature', units="run", estimator=None, color='grey', alpha=0.01);
sns.lineplot(data=simulated_data, x='year', y='temperature', label='Simulated data (average)')
sns.lineplot(data=observed_data, x='year', y='temperature', label='Observed data')
plt.legend();
###Output
_____no_output_____
###Markdown
However, we can validate the model by looking at the distribution of temperature change between years.
###Code
sns.kdeplot(data=observed_data, x='temperature_change', label='Observed data')
sns.kdeplot(data=simulated_data, x='temperature_change', label='Simulated data')
plt.legend();
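# (Added sketch, assumes SciPy is installed) Complement the visual comparison with a
# two-sample Kolmogorov-Smirnov test: a large p-value is consistent with the observed
# and simulated temperature changes coming from the same distribution.
from scipy import stats
ks_stat, p_value = stats.ks_2samp(
    observed_data['temperature_change'],
    simulated_data['temperature_change']
)
print(f"KS statistic: {ks_stat:.3f}, p-value: {p_value:.3f}")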
###Output
_____no_output_____ |
prediction/multitask/fine-tuning/function documentation generation/php/large_model.ipynb | ###Markdown
**Predict the documentation for php code using the codeTrans multitask finetuning model**
You can make a free prediction online through this Link (when using the online prediction, you need to parse and tokenize the code first.)
**1. Load necessary libraries including huggingface transformers**
###Code
!pip install -q transformers sentencepiece
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
###Output
_____no_output_____
###Markdown
**2. Load the summarization pipeline and load it into the GPU if available**
###Code
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_php_multitask_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_php_multitask_finetune", skip_special_tokens=True),
device=0
)
###Output
/usr/local/lib/python3.6/dist-packages/transformers/models/auto/modeling_auto.py:970: FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models.
FutureWarning,
###Markdown
**3. Give the code for summarization, parse and tokenize it**
###Code
code = "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }" #@param {type:"raw"}
!pip install tree_sitter
!git clone https://github.com/tree-sitter/tree-sitter-php
from tree_sitter import Language, Parser
Language.build_library(
'build/my-languages.so',
['tree-sitter-php']
)
PHP_LANGUAGE = Language('build/my-languages.so', 'php')
parser = Parser()
parser.set_language(PHP_LANGUAGE)
def get_string_from_code(node, lines):
line_start = node.start_point[0]
line_end = node.end_point[0]
char_start = node.start_point[1]
char_end = node.end_point[1]
if line_start != line_end:
code_list.append(' '.join([lines[line_start][char_start:]] + lines[line_start+1:line_end] + [lines[line_end][:char_end]]))
else:
code_list.append(lines[line_start][char_start:char_end])
def my_traverse(node, code_list):
lines = code.split('\n')
if node.child_count == 0:
get_string_from_code(node, lines)
elif node.type == 'string':
get_string_from_code(node, lines)
else:
for n in node.children:
my_traverse(n, code_list)
return ' '.join(code_list)
tree = parser.parse(bytes(code, "utf8"))
code_list=[]
tokenized_code = my_traverse(tree.root_node, code_list)
print("Output after tokenization: " + tokenized_code)
###Output
Output after tokenization: public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }
###Markdown
**4. Make Prediction**
###Code
pipeline([tokenized_code])
###Output
_____no_output_____ |