Mountain Car.ipynb
###Markdown Episodic Mountain Car with function approximation and control This notebook solves the episodic Mountain Car problem using semi-gradient Sarsa and tile coding. The description of the problem is given below: "A car is on a one-dimensional track, positioned between two "mountains". The goal is to drive up the mountain on the right; however, the car's engine is not strong enough to scale the mountain in a single pass. Therefore, the only way to succeed is to drive back and forth to build up momentum." An extensive description and solution of the problem can be found in [Section 10.1 of Reinforcement Learning: An Introduction](http://www.incompleteideas.net/book/RLbook2018.pdf#page=267). Image and text taken from the [official Mountain Car documentation](https://gym.openai.com/envs/MountainCar-v0/).
###Code
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import tiles3 as tc
from tqdm import tqdm
import gym
from gym.wrappers import Monitor
from utils import *

%matplotlib inline
###Output
_____no_output_____
###Markdown Understanding the Workflow of OpenAI Gym The following variables are used at each timestep and are returned by the Mountain Car environment. - **observation** (object): an environment-specific object representing your observation of the environment. For example, pixel data from a camera, joint angles and joint velocities of a robot, or the board state in a board game. - **reward** (float): amount of reward achieved by the previous action. The scale varies between environments, but the goal is always to increase your total reward. - **done** (boolean): whether it's time to reset the environment again. Most (but not all) tasks are divided up into well-defined episodes, and done being True indicates the episode has terminated. (For example, perhaps the pole tipped too far, or you lost your last life.) - **info** (dict): diagnostic information useful for debugging. It can sometimes be useful for learning (for example, it might contain the raw probabilities behind the environment's last state change). However, official evaluations of your agent are not allowed to use this for learning. As a quick recap, the diagram below explains the workflow of a Markov Decision Process (MDP). Image taken from [Section 3.1 of Reinforcement Learning: An Introduction](http://www.incompleteideas.net/book/RLbook2018.pdf#page=70) Environment and Agent specifications Below are the main features of the environment and agent. Overall, the action space of the problem is discrete, with three possible actions. The observation (state) space is continuous, so a function approximation technique is necessary to solve this challenge. The agent receives a reward of -1 at each timestep unless it reaches the goal. The episode ends when the agent reaches the goal or after a fixed number of timesteps. Additionally, the agent will always start at a random position between $-0.6$ and $-0.4$ with zero velocity.
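###Markdown As a minimal illustration of the `observation`, `reward`, `done`, and `info` values described above, the sketch below steps through one episode with a random policy (illustrative only; nothing here is reused later):
###Code
# Minimal interaction loop: sample random actions until the episode ends.
env = gym.make("MountainCar-v0")
observation = env.reset()
done = False
total_reward = 0
while not done:
    action = env.action_space.sample()                   # random policy
    observation, reward, done, info = env.step(action)   # the four values described above
    total_reward += reward
print("Episode finished with total reward: {0}".format(total_reward))
env.close()
###Output
_____no_output_____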
**Observation**: Type: Box(2)
    Num  Observation    Min    Max
    0    Car Position   -1.2   0.6
    1    Car Velocity   -0.07  0.07
**Actions**: Type: Discrete(3)
    Num  Action
    0    Accelerate to the left
    1    Don't accelerate
    2    Accelerate to the right
    Note: the action does not change the amount of velocity contributed by the gravitational pull acting on the car.
**Reward**: a reward of 0 is awarded if the agent reaches the flag (position = 0.5) on top of the mountain; a reward of -1 is awarded while the agent's position is less than 0.5.
**Starting State**: the position of the car is assigned a uniform random value in [-0.6, -0.4]; the velocity of the car is always set to 0.
**Episode Termination**: the car's position is greater than 0.5, or the episode length is greater than 200.
For further information see the [Github source code](https://github.com/openai/gym/blob/master/gym/envs/classic_control/mountain_car.py). The next cell shows how to interact with the action and observation spaces of the environment and extract relevant information from them. ###Code
env = gym.make("MountainCar-v0")
observation = env.reset()

# Object's type in the action space
print("The Action Space is an object of type: {0}\n".format(env.action_space))

# Shape of the action space
print("The shape of the action space is: {0}\n".format(env.action_space.n))

# Object's type in the observation space
print("The Environment Space is an object of type: {0}\n".format(env.observation_space))

# Shape of the observation space
print("The Shape of the dimension Space are: {0}\n".format(env.observation_space.shape))

# The high and low values in the observation space
print("The High values in the observation space are {0}, the low values are {1}\n".format(
    env.observation_space.high, env.observation_space.low))

# Minimum and maximum car position
print("The minimum and maximum car's position are: {0}, {1}\n".format(
    env.observation_space.low[0], env.observation_space.high[0]))

# Minimum and maximum car velocity
print("The minimum and maximum car's velocity are: {0}, {1}\n".format(
    env.observation_space.low[1], env.observation_space.high[1]))

# Example of observation
print("The Observations at a given timestep are {0}\n".format(env.observation_space.sample()))
###Output
The Action Space is an object of type: Discrete(3)

The shape of the action space is: 3

The Environment Space is an object of type: Box(2,)

The Shape of the dimension Space are: (2,)

The High values in the observation space are [0.6 0.07], the low values are [-1.2 -0.07]

The minimum and maximum car's position are: -1.2000000476837158, 0.6000000238418579

The minimum and maximum car's velocity are: -0.07000000029802322, 0.07000000029802322

The Observations at a given timestep are [-1.0707569 0.0590123]
###Markdown Tile Coding Class For a complete explanation of what tile coding is and how it works, see [Section 9.5.4 of Reinforcement Learning: An Introduction](http://www.incompleteideas.net/book/RLbook2018.pdf#page=239). Overall, this is a way to create features that can provide both good generalization and good discrimination for value function approximation. Tile coding consists of multiple overlapping tilings, where each tiling is a partitioning of the space into tiles. **Note**: the tile coder below is written for the 2-D observation space of Mountain Car (tile coding itself generalizes to other dimensionalities). This technique is implemented using Tiles3, which is a Python library written by Richard S. Sutton.
For the full documentation see the [Tiles3 documentation](http://incompleteideas.net/tiles/tiles3.html). Image taken from [Section 9.5.4 of Reinforcement Learning: An Introduction](http://www.incompleteideas.net/book/RLbook2018.pdf#page=239) ###Code
# Tile Coding Class
class MountainCarTileCoder:
    def __init__(self, iht_size=4096, num_tilings=8, num_tiles=8):
        """
        Initializes the MountainCar Tile Coder
        Initializers:
        iht_size -- int, the size of the index hash table, typically a power of 2
        num_tilings -- int, the number of tilings
        num_tiles -- int, the number of tiles. Here both the width and height of the tile coder are the same
        Class Variables:
        self.iht -- tc.IHT, the index hash table that the tile coder will use
        self.num_tilings -- int, the number of tilings the tile coder will use
        self.num_tiles -- int, the number of tiles the tile coder will use
        """
        self.iht = tc.IHT(iht_size)
        self.num_tilings = num_tilings
        self.num_tiles = num_tiles

    def get_tiles(self, position, velocity):
        """
        Takes in a position and velocity from the mountaincar environment
        and returns a numpy array of active tiles.
        Arguments:
        position -- float, the position of the agent between -1.2 and 0.5
        velocity -- float, the velocity of the agent between -0.07 and 0.07
        returns:
        tiles -- np.array, active tiles
        """
        # Set the max and min of position and velocity to scale the input
        # The max position is set to 0.5 as this is the goal position that ends the episode
        POSITION_MIN = -1.2
        POSITION_MAX = 0.5
        VELOCITY_MIN = -0.07
        VELOCITY_MAX = 0.07

        # Scale position and velocity by multiplying the inputs of each by their scale
        position_scale = self.num_tiles / (POSITION_MAX - POSITION_MIN)
        velocity_scale = self.num_tiles / (VELOCITY_MAX - VELOCITY_MIN)

        # Obtain active tiles for the current position and velocity
        tiles = tc.tiles(self.iht, self.num_tilings,
                         [position * position_scale, velocity * velocity_scale])

        return np.array(tiles)

# Test the TileCoder class
mctc = MountainCarTileCoder(iht_size = 1024, num_tilings = 8, num_tiles = 8)
tiles = mctc.get_tiles(position = -1.0, velocity = 0.01)

# Tiles obtained at a given pos and vel
print("The Tiles obtained are: {0}\n".format(tiles))
###Output
The Tiles obtained are: [0 1 2 3 4 5 6 7]
###Markdown Implementing Sarsa Agent To solve the Mountain Car problem, value function approximation and control will be used (owing to the continuous state space). As a quick recap, action-values can be computed using value function approximation, giving the following equation: \begin{equation} q_\pi(s, a) \approx \hat{q}(s, a, w) \doteq w^T x(s,a)\end{equation} where $w$ is a set of weights and $x(s,a)$ is the feature vector, computed here using tile coding. Using the tile coder implemented above it is possible to compute the action-values $\hat{q}(s, a, w)$ and solve this RL task. The equation to update the weights using the Sarsa algorithm is given below.
Here, $\nabla \hat{q}(S_t, A_t, w)$ is the gradient of the action-value approximation; because $\hat{q}$ is linear in $w$, the gradient is simply $x(s,a)$, i.e., one for the active features and zero elsewhere.\begin{equation} w \leftarrow w + \alpha[R_{t+1} + \gamma \hat{q}(S_{t+1}, A_{t+1}, w) - \hat{q}(S_t, A_t, w)]\nabla \hat{q}(S_t, A_t, w)\end{equation}Additionally, the update "target" is composed of the following terms:\begin{equation} \delta \leftarrow R_{t+1} + \gamma \hat{q}(S_{t+1}, A_{t+1}, w)\end{equation}\begin{equation} w \leftarrow w + \alpha[\delta - \hat{q}(S_t, A_t, w)]\nabla \hat{q}(S_t, A_t, w)\end{equation}The pseudo-code implementation of this algorithm is given below. For further details, see [Section 10.1 of Reinforcement Learning: An Introduction](http://www.incompleteideas.net/book/RLbook2018.pdf#page=266). Image taken from the last reference. ###Code
# SARSA
class SarsaAgent():
    """
    Initialization of Sarsa Agent. All values are set to None so they can
    be initialized in the agent_init method.
    """
    def __init__(self, agent_info={}):
        """Setup for the agent called when the experiment first starts."""
        self.last_action = None
        self.last_state = None
        self.epsilon = None
        self.gamma = None
        self.iht_size = None
        self.w = None
        self.alpha = None
        self.num_tilings = None
        self.num_tiles = None
        self.mctc = None
        self.initial_weights = None
        self.num_actions = None
        self.previous_tiles = None

    def agent_init(self, agent_info={}):
        """Setup for the agent called when the experiment first starts."""
        self.num_tilings = agent_info.get("num_tilings", 8)
        self.num_tiles = agent_info.get("num_tiles", 8)
        self.iht_size = agent_info.get("iht_size", 4096)
        self.epsilon = agent_info.get("epsilon", 0.0)
        self.gamma = agent_info.get("gamma", 1.0)
        self.alpha = agent_info.get("alpha", 0.5) / self.num_tilings
        self.initial_weights = agent_info.get("initial_weights", 0.0)
        self.num_actions = agent_info.get("num_actions", 3)

        # Initialize self.w to three times the iht_size. Recall this is because
        # we need one set of weights for each action (stacked values).
        self.w = np.ones((self.num_actions, self.iht_size)) * self.initial_weights

        # Initialize self.mctc to the MountainCar version of the tile coder created above
        self.mctc = MountainCarTileCoder(iht_size = self.iht_size,
                                         num_tilings = self.num_tilings,
                                         num_tiles = self.num_tiles)

    def select_action(self, tiles):
        """
        Selects an action using epsilon greedy
        Args:
        tiles -- np.array, an array of active tiles
        Returns:
        (chosen_action, action_value) -- (int, float), tuple of the chosen action and its value
        """
        action_values = []
        chosen_action = None

        # Obtain action values for all actions (sum across the active tiles)
        action_values = np.sum(self.w[:, tiles], axis = 1)

        # Epsilon-greedy action selection
        if np.random.random() < self.epsilon:
            # Select a random action among the three possible actions
            chosen_action = np.random.randint(self.num_actions)
        else:
            # Select the greedy action
            chosen_action = argmax(action_values)

        return chosen_action, action_values[chosen_action]

    def agent_start(self, state):
        """The first method called when the experiment starts, called after
        the environment starts.
        Args:
        state (Numpy array): the state observation from the environment's env.reset() function.
        Returns:
        The first action the agent takes.
""" # Current state position, velocity = state # Obtain tiles activated at state cero active_tiles = self.mctc.get_tiles(position = position, velocity = velocity) # Select an action and obtain action values of the state current_action, action_value = self.select_action(active_tiles) # Save action as last action self.last_action = current_action # Save tiles as previous tiles self.previous_tiles = np.copy(active_tiles) return self.last_action def agent_step(self, reward, state): """A step taken by the agent. Args: reward (float): the reward received for taking the last action taken state (Numpy array): the state observation from the environment's step based, where the agent ended up after the last step Returns: The action the agent is taking. """ # Current state position, velocity = state # Compute current tiles active_tiles = self.mctc.get_tiles(position = position, velocity = velocity) # Obtain new action and action value before updating actition values current_action, action_value = self.select_action(active_tiles) # Update the Sarsa Target (delta) target = reward + (self.gamma * action_value) # Compute last action values to update weights last_action_val = np.sum(self.w[self.last_action][self.previous_tiles]) # As we are using tile coding, which is a variant of linear function approximation # The gradient of the active tiles are one, otherwise cero. grad = 1 self.w[self.last_action][self.previous_tiles] = self.w[self.last_action][self.previous_tiles] + \ self.alpha * (target - last_action_val) * grad self.last_action = current_action self.previous_tiles = np.copy(active_tiles) return self.last_action def agent_end(self, reward): """Run when the agent terminates. Args: reward (float): the reward the agent received for entering the terminal state. """ # There is no action_value used here because this is the end # of the episode. # Compute delta target = reward # Compute last action value last_action_val = np.sum(self.w[self.last_action][self.previous_tiles]) grad = 1 # Update weights self.w[self.last_action][self.previous_tiles] = self.w[self.last_action][self.previous_tiles] + \ self.alpha * (target - last_action_val) * grad def return_action_value(self, state): """Run to obtain action-values for a given state. Args: state (Numpy array): the state observation Returns: The max action-value """ # Current state position, velocity = state # Obtain tiles activated at state cero active_tiles = self.mctc.get_tiles(position = position, velocity = velocity) # Obtain action values for all actions (sum through rows) action_values = np.sum(self.w[:, active_tiles], axis = 1) # Obtain max action value max_action_value = np.max(action_values) return max_action_value ###Output _____no_output_____ ###Markdown Running the experimentThe following lines solves the Mountain Car problem and plot the average reward obtained over episodes and steps taken to solve the challenge at a specific episode. 
###Markdown Running the experiment The following cells solve the Mountain Car problem and plot the average reward obtained over episodes and the number of steps taken to finish each episode. ###Code
# Test Sarsa Agent
num_runs = 10
num_episodes = 100
agent_info_options = {"num_tilings": 8, "num_tiles": 8, "iht_size": 4096,
                      "epsilon": 0.0, "gamma": 1.0, "alpha": 0.5,
                      "initial_weights": 0.0, "num_actions": 3}

# Variable to store the number of steps taken to solve the challenge
all_steps = []
# Variable to save the rewards in an episode
all_rewards = []

# Agent
agent = SarsaAgent(agent_info_options)

# Environment
env = gym.make('MountainCar-v0')
env.reset()

# Maximum number of possible iterations (default was 200)
env._max_episode_steps = 10000

# Each run resets the agent and repeats the full set of episodes
for n_runs in tqdm(range(num_runs)):
    # Reset environment
    observation = env.reset()
    # Reset agent
    agent.agent_init(agent_info_options)
    # Generate last state and action in the agent
    last_action = agent.agent_start(observation)
    # Steps taken at each episode to solve the challenge
    steps_per_episode = []
    rewards_per_episode = []
    # Episodes: the environment restarts without resetting the agent
    for t in range(num_episodes):
        # Number of steps taken to finish the episode
        n_steps = 0
        rewards = 0
        # Reset done flag
        done = False
        # Reset environment
        observation = env.reset()
        # Run until the episode is over
        while not done:
            # Take a step with the environment
            observation, reward, done, info = env.step(last_action)
            # Count the steps the agent takes to solve the challenge
            n_steps += 1
            # Accumulate reward
            rewards += reward
            # If the goal has been reached, stop
            if done:
                # Last step with the agent
                agent.agent_end(reward)
            else:
                # Take a step with the agent
                last_action = agent.agent_step(reward, observation)
        # Save the number of steps needed to finish the episode
        # without resetting the agent
        steps_per_episode.append(n_steps)
        # Save the reward obtained in each episode
        rewards_per_episode.append(rewards)
    # Save the list of steps needed to finish the experiment
    # in all the episodes
    all_steps.append(np.array(steps_per_episode))
    # Rewards obtained in every episode
    all_rewards.append(np.array(rewards_per_episode))

env.close()

steps_average = np.mean(np.array(all_steps), axis=0)
plt.plot(steps_average, label = 'Steps')
plt.xlabel("Episodes")
plt.ylabel("Iterations", rotation=0, labelpad=40)
plt.xlim(-0.2, num_episodes)
plt.ylim(steps_average.min(), steps_average.max())
plt.title("Average iterations to solve the experiment over runs")
plt.legend()
plt.show()

print("The minimum number of iterations used to solve the experiment was: {0}\n".format(np.array(all_steps).min()))
print("The maximum number of iterations used to solve the experiment was: {0}\n".format(np.array(all_steps).max()))

rewards_average = np.mean(all_rewards, axis=0)
plt.plot(rewards_average, label = 'Average Reward')
plt.xlabel("Episodes")
plt.ylabel("Sum of\n rewards\n during\n episode", rotation=0, labelpad=40)
plt.xlim(-0.2, num_episodes)
plt.ylim(rewards_average.min(), rewards_average.max())
plt.title("Average reward per episode over runs")
plt.legend()
plt.show()

print("The best reward obtained solving the experiment was: {0}\n".format(np.array(all_rewards).max()))
print("The worst reward obtained solving the experiment was: {0}\n".format(np.array(all_rewards).min()))
###Output
_____no_output_____
###Markdown Using the last trained Agent The following cell shows the performance of the last trained agent and saves a video with the results.
###Code
# Test Sarsa Agent
num_runs = 1
num_episodes = 1000

# Environment
env_to_wrap = gym.make('MountainCar-v0')

# Maximum number of possible iterations (default was 200)
env_to_wrap._max_episode_steps = 1500

env = Monitor(env_to_wrap, "./videos/mountainCar", video_callable=lambda episode_id: True, force=True)

# Each run repeats the recording with the already-trained agent
for n_runs in tqdm(range(num_runs)):
    # Reset environment
    observation = env.reset()
    # Generate last state and action in the agent
    last_action = agent.agent_start(observation)
    # Step through the episode (at most num_episodes steps)
    for t in tqdm(range(num_episodes)):
        # View environment
        env.render()
        # Take a step with the environment
        observation, reward, done, info = env.step(last_action)
        # If the goal has been reached, stop
        if done:
            # Last step with the agent
            agent.agent_end(reward)
            break
        else:
            # Take a step with the agent
            last_action = agent.agent_step(reward, observation)

env.close()
env_to_wrap.close()

print("Episode finished after {} timesteps".format(t+1))
###Output
0%| | 0/1 [00:00<?, ?it/s]
0%| | 0/1000 [00:00<?, ?it/s]
1%|▏ | 13/1000 [00:00<00:07, 124.86it/s]
3%|▎ | 26/1000 [00:00<00:07, 123.89it/s]
4%|▍ | 39/1000 [00:00<00:07, 123.81it/s]
5%|▌ | 52/1000 [00:00<00:07, 123.93it/s]
6%|▋ | 65/1000 [00:00<00:07, 123.64it/s]
8%|▊ | 79/1000 [00:00<00:07, 126.19it/s]
9%|▉ | 92/1000 [00:00<00:07, 127.29it/s]
10%|█ | 105/1000 [00:00<00:07, 126.48it/s]
12%|█▏ | 119/1000 [00:00<00:06, 128.02it/s]
13%|█▎ | 132/1000 [00:01<00:06, 128.56it/s]
14%|█▍ | 145/1000 [00:01<00:06, 127.71it/s]
16%|█▌ | 158/1000 [00:01<00:06, 126.67it/s]
17%|█▋ | 171/1000 [00:01<00:06, 125.91it/s]
18%|█▊ | 184/1000 [00:01<00:06, 125.88it/s]
20%|█▉ | 197/1000 [00:01<00:06, 124.24it/s]
22%|██▏ | 221/1000 [00:01<00:06, 125.14it/s]
100%|██████████| 1/1 [00:02<00:00, 2.15s/it]
###Markdown Plotting the Action-Values of the agent This final plot aims to show the action-values learned by the agent with Sarsa. The action value for a given state was calculated using $-\max_a \hat{q}(s, a, w)$. ###Code
# Resolution
values = 500

# Vector of positions
pos_vals = np.linspace(-1.2, 0.5, num = values)
# Vector of velocities
vel_vals = np.linspace(-0.07, 0.07, num = values)

# Z grid values
av_grid = np.zeros((values, values))

# Compute action-values for each pos - vel pair
for ix in range(len(pos_vals)):
    for iy in range(len(vel_vals)):
        av_grid[ix][iy] = -1 * agent.return_action_value([pos_vals[ix], vel_vals[iy]])

# Plot the 3D surface
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
Px, Vy = np.meshgrid(pos_vals, vel_vals)
ax.plot_surface(Vy, Px, av_grid, color = 'gray')
ax.set_title("Cost-to-go function learned", y = 1.1)
ax.set_xlabel('Velocity')
ax.set_ylabel('Position')
ax.set_zlabel('Cost-to-go')
ax.view_init(45, azim=30)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
notebooks/DLwPT_Chapter_4-timeseries.ipynb
###Markdown Chapter 4: Working with time series data ###Code
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

torch.set_printoptions(edgeitems=2)
torch.manual_seed(42)  # Life, the Universe, and Everything
###Output
_____no_output_____
###Markdown Let's start by loading the CSV data. We'll use numpy for this. ###Code
!pwd

# Load the hourly bike-sharing data; the converter turns the date string into a day-of-month float
bikes_numpy = np.loadtxt(
    "../data/external/bike_sharing/hour-fixed.csv",
    dtype=np.float32,
    delimiter=",",
    skiprows=1,
    converters={1: lambda x: float(x[8:10])})
bikes = torch.from_numpy(bikes_numpy)
bikes

bikes.shape, bikes.stride()

# Reshape into one row per day: (days, 24 hours, columns)
daily_bikes = bikes.view(-1, 24, bikes.shape[1])
daily_bikes.shape, daily_bikes.stride()

# Transpose to (days, columns, hours) so columns become channels
daily_bikes = daily_bikes.transpose(1, 2)
daily_bikes.shape, daily_bikes.stride()

# One-hot encode the weather-situation column (values 1..4) for the first day
first_day = bikes[:24].long()
weather_onehot = torch.zeros(first_day.shape[0], 4)
first_day[:, 9]

first_day[:, 9].shape

weather_onehot.scatter_(dim=1, index=first_day[:, 9].unsqueeze(1).long() - 1, value=1.0)

torch.cat((bikes[:24], weather_onehot), 1)[:2]
###Output
_____no_output_____
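###Markdown The same one-hot trick scales from a single day to the whole `daily_bikes` tensor (shape `days x channels x hours` after the transpose above). A sketch, assuming column 9 still holds the weather-situation code from 1 to 4:
###Code
# One-hot encode the weather column for every day at once, then concatenate
# the four new channels onto daily_bikes along the channel dimension.
daily_weather_onehot = torch.zeros(daily_bikes.shape[0], 4, daily_bikes.shape[2])
daily_weather_onehot.scatter_(dim=1, index=daily_bikes[:, 9, :].long().unsqueeze(1) - 1, value=1.0)
daily_bikes = torch.cat((daily_bikes, daily_weather_onehot), dim=1)
daily_bikes.shape
###Output
_____no_output_____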
cgames/01_ping_pong/ping_pong_rainbow.ipynb
###Markdown Pong with Dueling DQN Step 1: Import the libraries ###Code
import time
import gym
import random
import torch
import numpy as np
from collections import deque
import matplotlib.pyplot as plt
from IPython.display import clear_output
import math
%matplotlib inline

import sys
sys.path.append('../../')
from algos.agents import DDQNAgent
from algos.models import DDQNCnn
from algos.preprocessing.stack_frame import preprocess_frame, stack_frame
###Output
_____no_output_____
###Markdown Step 2: Create our environment Initialize the environment in the code cell below. ###Code
env = gym.make('Pong-v0')
env.seed(0)

# if gpu is to be used
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Device: ", device)
###Output
_____no_output_____
###Markdown Step 3: Viewing our Environment ###Code
print("The size of frame is: ", env.observation_space.shape)
print("No. of Actions: ", env.action_space.n)

env.reset()
plt.figure()
plt.imshow(env.reset())
plt.title('Original Frame')
plt.show()
###Output
_____no_output_____
###Markdown Execute the code cell below to play Pong with a random policy. ###Code
def random_play():
    score = 0
    env.reset()
    while True:
        env.render()
        action = env.action_space.sample()
        state, reward, done, _ = env.step(action)
        score += reward
        if done:
            env.close()
            print("Your Score at end of game is: ", score)
            break

random_play()
###Output
_____no_output_____
###Markdown Step 4: Preprocessing Frame ###Code
env.reset()
plt.figure()
plt.imshow(preprocess_frame(env.reset(), (30, -4, -12, 4), 84), cmap="gray")
plt.title('Pre Processed image')
plt.show()
###Output
_____no_output_____
###Markdown Step 5: Stacking Frame ###Code
def stack_frames(frames, state, is_new=False):
    frame = preprocess_frame(state, (30, -4, -12, 4), 84)
    frames = stack_frame(frames, frame, is_new)
    return frames
###Output
_____no_output_____
###Markdown Step 6: Creating our Agent ###Code
INPUT_SHAPE = (4, 84, 84)
ACTION_SIZE = env.action_space.n
SEED = 0
GAMMA = 0.99           # discount factor
BUFFER_SIZE = 100000   # replay buffer size
BATCH_SIZE = 1024      # update batch size
LR = 0.002             # learning rate
TAU = .1               # for soft update of target parameters
UPDATE_EVERY = 100     # how often to update the network
UPDATE_TARGET = 10000  # threshold of steps after which the target network is updated
EPS_START = 0.99       # starting value of epsilon
EPS_END = 0.01         # ending value of epsilon
EPS_DECAY = 100        # decay rate for epsilon

agent = DDQNAgent(INPUT_SHAPE, ACTION_SIZE, SEED, device, BUFFER_SIZE, BATCH_SIZE, GAMMA, LR, TAU, UPDATE_EVERY, UPDATE_TARGET, DDQNCnn)
###Output
_____no_output_____
###Markdown Step 7: Watching untrained agent play ###Code
# watch an untrained agent
state = stack_frames(None, env.reset(), True)
for j in range(200):
    env.render()
    action = agent.act(state, .9)
    next_state, reward, done, _ = env.step(action)
    state = stack_frames(state, next_state, False)
    if done:
        break

env.close()
###Output
_____no_output_____
###Markdown Step 8: Loading Agent Uncomment the lines below to load a pretrained agent. ###Code
start_epoch = 0
scores = []
scores_window = deque(maxlen=20)

# To load a checkpoint, uncomment this code
# checkpoint = torch.load('space_invader_dqn.pth')
# agent.policy_net.load_state_dict(checkpoint['state_dict'])
# agent.target_net.load_state_dict(checkpoint['state_dict'])
# agent.optimizer.load_state_dict(checkpoint['optimizer'])
# start_epoch = checkpoint['epoch']
# scores = checkpoint['scores']
# index = 1
# for i in reversed(scores):
#     scores_window.append(i)
#     if index == 100:
#         break
#     index += 1
###Output
_____no_output_____
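###Markdown The `DDQNAgent` and `DDQNCnn` classes come from this repository's local `algos` package, so their internals aren't shown here. For reference, the core double-DQN target such an agent typically computes looks roughly like the sketch below (tensor and network names are assumptions, not the package's actual code):
###Code
# Sketch of the double-DQN target: the online (policy) network chooses the
# greedy next action, while the target network evaluates it.
def double_dqn_target(policy_net, target_net, rewards, next_states, dones, gamma):
    with torch.no_grad():
        next_actions = policy_net(next_states).argmax(dim=1, keepdim=True)   # action selection
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)  # action evaluation
        return rewards + gamma * next_q * (1 - dones)
###Output
_____no_output_____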
###Markdown Step 9: Train the Agent with DQN ###Code
epsilon_by_episode = lambda frame_idx: EPS_END + (EPS_START - EPS_END) * math.exp(-1. * frame_idx / EPS_DECAY)
plt.plot([epsilon_by_episode(i) for i in range(1000)])

def train(n_episodes=1000):
    """
    Params
    ======
        n_episodes (int): maximum number of training episodes
    """
    for i_episode in range(start_epoch + 1, n_episodes+1):
        state = stack_frames(None, env.reset(), True)
        score = 0
        eps = epsilon_by_episode(i_episode)
        while True:
            action = agent.act(state, eps)
            next_state, reward, done, info = env.step(action)
            score += reward
            next_state = stack_frames(state, next_state, False)
            agent.step(state, action, reward, next_state, done)
            state = next_state
            if done:
                break
        scores_window.append(score)  # save most recent score in the rolling window
        scores.append(score)         # save most recent score

        clear_output(True)
        fig = plt.figure()
        ax = fig.add_subplot(111)
        plt.plot(np.arange(len(scores)), scores)
        plt.ylabel('Score')
        plt.xlabel('Episode #')
        plt.show()
        print('\rEpisode {}\tAverage Score: {:.2f}\tEpsilon: {:.2f}'.format(i_episode, np.mean(scores_window), eps), end="")

    return scores

scores = train(1000)

fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(len(scores)), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
###Output
_____no_output_____
###Markdown Step 10: Watch a Smart Agent! ###Code
score = 0
state = stack_frames(None, env.reset(), True)
while True:
    env.render()
    action = agent.act(state, .01)
    next_state, reward, done, _ = env.step(action)
    score += reward
    state = stack_frames(state, next_state, False)
    if done:
        print("Your final score is:", score)
        break
env.close()
###Output
_____no_output_____
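###Markdown The loading cell in Step 8 expects a checkpoint dictionary with `state_dict`, `optimizer`, `epoch`, and `scores` entries. A matching save sketch (the filename is an assumption; the field names mirror the loading cell):
###Code
# Save a checkpoint in the format the Step 8 loading cell expects.
checkpoint = {
    'epoch': len(scores),
    'state_dict': agent.policy_net.state_dict(),
    'optimizer': agent.optimizer.state_dict(),
    'scores': scores,
}
torch.save(checkpoint, 'ping_pong_ddqn.pth')
###Output
_____no_output_____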
C3/W3/Course 3 - Week 3 - Lesson 1b.ipynb
###Markdown Multiple Layer LSTM ###Code
from __future__ import absolute_import, division, print_function, unicode_literals

import tensorflow_datasets as tfds
import tensorflow as tf
print(tf.__version__)

# Get the data
dataset, info = tfds.load('imdb_reviews/subwords8k', with_info=True, as_supervised=True)
train_dataset, test_dataset = dataset['train'], dataset['test']

tokenizer = info.features['text'].encoder

BUFFER_SIZE = 10000
BATCH_SIZE = 64

train_dataset = train_dataset.shuffle(BUFFER_SIZE)
train_dataset = train_dataset.padded_batch(BATCH_SIZE, train_dataset.output_shapes)
test_dataset = test_dataset.padded_batch(BATCH_SIZE, test_dataset.output_shapes)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(tokenizer.vocab_size, 64),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

model.summary()

model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

NUM_EPOCHS = 10
history = model.fit(train_dataset, epochs=NUM_EPOCHS, validation_data=test_dataset)

import matplotlib.pyplot as plt

def plot_graphs(history, string):
    plt.plot(history.history[string])
    plt.plot(history.history['val_'+string])
    plt.xlabel("Epochs")
    plt.ylabel(string)
    plt.legend([string, 'val_'+string])
    plt.show()

plot_graphs(history, 'accuracy')

plot_graphs(history, 'loss')
###Output
_____no_output_____
project-brainwave/project-brainwave-scoreonly.ipynb
###Markdown Imports ###Code
import os
import tensorflow as tf

from azureml.core import Workspace

ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\n')

from azureml.core.webservice import Webservice
from azureml.exceptions import WebserviceException
from azureml.contrib.brainwave import BrainwaveWebservice, BrainwaveImage

service_name = "imagenet-infer"
service = Webservice(ws, service_name)

import requests
classes_entries = requests.get("https://raw.githubusercontent.com/Lasagne/Recipes/master/examples/resnet50/imagenet_classes.txt").text.splitlines()
###Output
_____no_output_____
###Markdown We can now send an image to the service and get the predictions. Let's see if it can identify a snow leopard.![title](snowleopardgaze.jpg)Snow leopard in a zoo. Photo by Peter Bolliger. ###Code
results = service.run('snowleopardgaze.jpg')
# map results [class_id] => [confidence]
results = enumerate(results)
# sort results by confidence
sorted_results = sorted(results, key=lambda x: x[1], reverse=True)
# print top 5 results
for top in sorted_results[:5]:
    print(classes_entries[top[0]], 'confidence:', top[1])
###Output
snow leopard, ounce, Panthera uncia confidence: 0.8942749
leopard, Panthera pardus confidence: 0.10466321
lynx, catamount confidence: 0.00037594116
jaguar, panther, Panthera onca, Felis onca confidence: 0.00027404528
cheetah, chetah, Acinonyx jubatus confidence: 0.00018314221
docs/source/user_guide/1-overview.ipynb
###Markdown Quick Start Install using ``pip`` and then import and use one of the tracking readers. This example loads a local file, sample.tcx. From the data file, we get time, altitude, distance, heart rate and geo position (lat/long). ###Code
import warnings
warnings.filterwarnings('ignore')

# !pip install runpandas
import runpandas as rpd
activity = rpd.read_file('./sample.tcx')

activity.head(5)
###Output
_____no_output_____
###Markdown The data frames returned by runpandas when loading files are similar across different file types. The dataframe in the above example is a subclass of the ``pandas.DataFrame`` and provides some additional features. Certain columns also return specific ``pandas.Series`` subclasses, which provide useful methods: ###Code
print(type(activity))
print(type(activity.alt))
###Output
<class 'runpandas.types.frame.Activity'>
<class 'runpandas.types.columns.Altitude'>
###Markdown For instance, if you want to get the base unit for the altitude ``alt`` data or the distance ``dist`` data: ###Code
print(activity.alt.base_unit)
print(activity.alt.sum())

print(activity.dist.base_unit)
print(activity.dist[-1])
###Output
m
4686.31103516
###Markdown The `Activity` dataframe also contains special properties that present some statistics from the workout, such as the elapsed time, the moving time, the mean heart rate, and the distance of the workout in meters. ###Code
#total time elapsed for the activity
print(activity.ellapsed_time)
#distance of workout in meters
print(activity.distance)
#mean heartrate
print(activity.mean_heart_rate())
###Output
0 days 00:33:11
4686.31103516
156.65274151436032
###Markdown Occasionally, some observations such as speed and distance must be derived from the data available in the given activity. In runpandas there are special accessors (`runpandas.acessors`) that compute some of these metrics. We will compute the `speed` and the `distance per position` observations using the latitude and longitude of each record, calculating the haversine distance in meters and the speed in meters per second. ###Code
#compute the distance using the haversine formula between two consecutive latitude, longitude observations.
activity['distpos'] = activity.compute.distance()
activity['distpos'].head()

#compute the speed (m/s) from the distances between consecutive observations.
activity['speed'] = activity.compute.speed(from_distances=True)
activity['speed'].head()
###Output
_____no_output_____
###Markdown Popular running metrics are also available through the runpandas accessors, such as gradient, pace, vertical speed, etc. ###Code
activity['vam'] = activity.compute.vertical_speed()
activity['vam'].head()
###Output
_____no_output_____
###Markdown Sporadically, there will be a large time difference between consecutive observations in the same workout. This can happen when the device is paused by the athlete, or when proprietary algorithms control the operating sampling rate of the device, which can auto-pause when the device detects no significant change in position. In runpandas there is an algorithm that will attempt to calculate the moving time based on the GPS locations, distances, and speed of the activity. To compute the moving time, there is a special accessor that detects the periods of inactivity and adds a boolean `moving` series flagging which observations are considered to be moving (stopped periods are `False`).
###Code
activity_only_moving = activity.only_moving()
print(activity_only_moving['moving'].head())
###Output
time
00:00:00    False
00:00:01    False
00:00:06    False
00:00:12     True
00:00:16     True
Name: moving, dtype: bool
###Markdown Now we can compute the moving time, i.e., how long the user was active. ###Code
activity_only_moving.moving_time
###Output
_____no_output_____
###Markdown Now let's play with the data. Let's plot distance over time as an example of what and how we can create visualizations. In this example, we will use the built-in, matplotlib-based plot function. ###Code
activity[['dist']].plot()
###Output
Matplotlib is building the font cache; this may take a moment.
###Markdown And here is altitude versus time. ###Code
activity[['alt']].plot()
###Output
_____no_output_____
###Markdown Next, let's show the altitude vs distance profile. Here is a scatterplot that shows altitude vs distance as recorded. ###Code
activity.plot.scatter(x='dist', y='alt', c='DarkBlue')
###Output
_____no_output_____
###Markdown Finally, let's watch a glimpse of the map route by plotting a 2D map using longitude vs latitude. ###Code
activity.plot(x='lon', y='lat')
###Output
_____no_output_____
###Markdown Ok, a 2D map is cool. But would it be possible to plot the route above on Google Maps? For this task, we will use a ready-made package called [gmplot](https://github.com/gmplot/gmplot). It uses the Google Maps API together with its Python library. ###Code
import gmplot

#let's get the min/max latitudes and longitudes
min_lat, max_lat, min_lon, max_lon = \
    min(activity['lat']), max(activity['lat']), \
    min(activity['lon']), max(activity['lon'])

## Create empty map with zoom level 16
mymap = gmplot.GoogleMapPlotter(
    min_lat + (max_lat - min_lat) / 2,
    min_lon + (max_lon - min_lon) / 2,
    16, apikey='yourapikey')

#To plot the data as a continuous line (or a polygon), we can use the plot method. It has two self-explanatory optional arguments: color and edge width.
mymap.plot(activity['lat'], activity['lon'], 'blue', edge_width=1)

#Draw the map to an HTML file.
mymap.draw('myroute.html')

#Show the map (here I saved myroute.html as an image for illustration purposes)
import IPython
IPython.display.Image(filename='images/myroute-min.png')
###Output
_____no_output_____
###Markdown The ``runpandas`` package also comes with extra batteries, such as our ``runpandas.datasets`` package, which includes a range of example data for testing purposes. There is a dedicated [repository](https://github.com/corriporai/runpandas-data) with all the data available.
An index of the data is kept [here](https://github.com/corriporai/runpandas-data/blob/master/activities/index.yml). You can use the example data available: ###Code
example_fit = rpd.activity_examples(path='Garmin_Fenix_6S_Pro-Running.fit')
print(example_fit.summary)
print('Included metrics:', example_fit.included_data)

rpd.read_file(example_fit.path).head()
###Output
_____no_output_____
###Markdown If you only want to see the activities in a specific file type, you can filter with ``rpd.activity_examples``, which returns an iterable that you can loop over: ###Code
fit_examples = rpd.activity_examples(file_type=rpd.FileTypeEnum.FIT)
for example in fit_examples:
    #Download and play with the filtered examples
    print(example.path)
###Output
https://raw.githubusercontent.com/corriporai/runpandas-data/master/activities/Garmin_Fenix_6S_Pro-Running.fit
https://raw.githubusercontent.com/corriporai/runpandas-data/master/activities/Garmin_Fenix2_running_with_hrm.fit
https://raw.githubusercontent.com/corriporai/runpandas-data/master/activities/Garmin_Forerunner_910XT-Running.fit
02_high-perf/03_dask-data-structures.ipynb
###Markdown Dask Data Structures Learning Objectives - List data structures Dask uses: `dask.bag`, `dask.array`, `dask.dataframe`- Compare these data structures to standard `numpy` arrays and `pandas` dataframes- Read in data and perform simple computation with `dask.dataframe` Dask Bags Dask-bag excels at processing data that can be represented as a sequence of arbitrary inputs. We'll refer to this as "messy" data, because it can contain complex nested structures, missing fields, mixtures of data types, etc. The *functional* programming style fits very nicely with standard Python iteration, such as can be found in the `itertools` module. Messy data is often encountered at the beginning of data processing pipelines when large volumes of raw data are first consumed. The initial set of data might be JSON, CSV, XML, or any other format that does not enforce strict structure and datatypes. For this reason, the initial data massaging and processing is often done with Python `list`s, `dict`s, and `set`s. These core data structures are optimized for general-purpose storage and processing. Adding streaming computation with iterators/generator expressions or libraries like `itertools` or [`toolz`](https://toolz.readthedocs.io/en/latest/) lets us process large volumes in a small space. If we combine this with parallel processing then we can churn through a fair amount of data. Dask.bag is a high-level Dask collection that automates common workloads of this form. In a nutshell: dask.bag = map, filter, toolz + parallel execution **Related Documentation*** [Bag Documentation](http://dask.pydata.org/en/latest/bag.html)* [Bag API](http://dask.pydata.org/en/latest/bag-api.html)**More resources and exercises can be found in `99_bag.ipynb`, but here we'll just give a brief overview.** Dask Arrays Dask array provides a parallel, larger-than-memory, n-dimensional array using blocked algorithms. Simply put: distributed NumPy.* **Parallel**: Uses all of the cores on your computer* **Larger-than-memory**: Lets you work on datasets that are larger than your available memory by breaking up your array into many small pieces, operating on those pieces in an order that minimizes the memory footprint of your computation, and effectively streaming data from disk.* **Blocked Algorithms**: Perform large computations by performing many smaller computations **Related Documentation*** [Documentation](http://dask.readthedocs.io/en/latest/array.html)* [API reference](http://dask.readthedocs.io/en/latest/array-api.html)**More resources and exercises can be found in `99_array.ipynb`, but here we'll just give a brief overview.** <img src="http://dask.readthedocs.io/en/latest/_images/dask_horizontal.svg" align="right" width="30%" alt="Dask logo\"> Dask DataFrames We built a parallel dataframe computation over a directory of CSV files using `dask.delayed`. In this section we use `dask.dataframe` to automatically build similar computations, for the common case of tabular data. Dask dataframes look and feel like Pandas dataframes, but they run on the same infrastructure that powers `dask.delayed`. In this notebook we use the same airline data as before, but now rather than write for-loops we let `dask.dataframe` construct our computations for us. The `dask.dataframe.read_csv` function can take a globstring like `"data/nycflights/*.csv"` and build parallel computations on all of our data at once. When to use `dask.dataframe` Pandas is great for tabular datasets that fit in memory.
Dask becomes useful when the dataset you want to analyze is larger than your machine's RAM. The demo dataset we're working with is only about 200MB, so that you can download it in a reasonable time, but `dask.dataframe` will scale to datasets much larger than memory. The `dask.dataframe` module implements a blocked parallel `DataFrame` object that mimics a large subset of the Pandas `DataFrame`. One Dask `DataFrame` is composed of many in-memory pandas `DataFrames` separated along the index. One operation on a Dask `DataFrame` triggers many pandas operations on the constituent pandas `DataFrame`s in a way that is mindful of potential parallelism and memory constraints. **Related Documentation*** [Dask DataFrame documentation](http://dask.pydata.org/en/latest/dataframe.html)* [Pandas documentation](http://pandas.pydata.org/)**Main Take-aways** 1. Dask.dataframe should be familiar to Pandas users 2. The partitioning of dataframes is important for efficient queries Setup With Pandas, we can use `read_csv()` to read in our dataset ###Code
import pandas as pd
import os
import dask

cols = ['UnitPriceMean', 'UnitPriceStd', 'TotalQuantity', 'NoOfUniqueItems',
        'NoOfInvoices', 'UniqueItemsPerInvoice', 'QuantityPerInvoice', 'SpendingPerInvoice']

df = pd.read_csv('customer_norm_data/customer_norm_01.csv', usecols=cols)
df.head()
###Output
_____no_output_____
###Markdown `Dask` works just like `pandas.read_csv`, except on multiple CSV files at once. This is helpful when our data is split into multiple files, as is the case when data no longer fits into memory. For teaching purposes, we have split the `customer_normalised.csv` dataset into four. We then use `dask` to read each CSV file. ###Code
import dask.dataframe as dd

cols = ['UnitPriceMean', 'UnitPriceStd', 'TotalQuantity', 'NoOfUniqueItems',
        'NoOfInvoices', 'UniqueItemsPerInvoice', 'QuantityPerInvoice', 'SpendingPerInvoice']

customer = dd.read_csv("customer_norm_data/customer_norm_*.csv", usecols=cols)

# load and count number of rows
customer.head()

len(customer)
###Output
_____no_output_____
###Markdown What happened here?- Dask investigated the input path and found that there are four matching files - a set of jobs was intelligently created for each chunk - one per original CSV file in this case- each file was loaded into a pandas dataframe, had `len()` applied to it- the subtotals were combined to give you the final grand total. Try Another Dataset Let's try this with an extract of flights in the USA across several years. This data is specific to flights out of the three airports in the New York City area. ###Code
df = dd.read_csv(os.path.join('dask_data', 'nycflights', '*.csv'),
                 parse_dates={'Date': [0, 1, 2]})
###Output
_____no_output_____
###Markdown Notice that the representation of the dataframe object contains no data - Dask has just done enough to read the start of the first file, and infer the column names and types. ###Code
df
###Output
_____no_output_____
###Markdown We can view the start and end of the data ###Code
df.head()

df.tail()  # this fails
###Output
_____no_output_____
###Markdown What just happened? Unlike `pandas.read_csv`, which reads in the entire file before inferring datatypes, `dask.dataframe.read_csv` only reads in a sample from the beginning of the file (or first file if using a glob). These inferred datatypes are then enforced when reading all partitions. In this case, the datatypes inferred in the sample are incorrect.
The first `n` rows have no value for `CRSElapsedTime` (which pandas infers as a `float`), and later on turn out to be strings (`object` dtype). Note that Dask gives an informative error message about the mismatch. When this happens you have a few options:- Specify dtypes directly using the `dtype` keyword. This is the recommended solution, as it's the least error prone (better to be explicit than implicit) and also the most performant.- Increase the size of the `sample` keyword (in bytes)- Use `assume_missing` to make `dask` assume that columns inferred to be `int` (which don't allow missing values) are actually floats (which do allow missing values). In our particular case this doesn't apply. In our case we'll use the first option and directly specify the `dtypes` of the offending columns. ###Code
df = dd.read_csv(os.path.join('dask_data', 'nycflights', '*.csv'),
                 parse_dates={'Date': [0, 1, 2]},
                 dtype={'TailNum': str,
                        'CRSElapsedTime': float,
                        'Cancelled': bool})

df.tail()  # now works
###Output
_____no_output_____
###Markdown Computations with `dask.dataframe` Suppose we want to compute the maximum of a column. With just pandas, we would loop over each file to find the individual maximums, then find the final maximum over all the individual maximums```pythonmaxes = []for fn in filenames: df = pd.read_csv(fn) maxes.append(df.DepDelay.max()) final_max = max(maxes)```We could wrap that `pd.read_csv` with `dask.delayed` so that it runs in parallel. Regardless, we're still having to think about loops, intermediate results (one per file) and the final reduction (`max` of the intermediate maxes). This is just noise around the real task, which pandas solves with```pythondf = pd.read_csv(filename, dtype=dtype)df.DepDelay.max()````dask.dataframe` lets us write pandas-like code that operates on larger-than-memory datasets in parallel. ###Code
%time customer.QuantityPerInvoice.max().compute()
###Output
_____no_output_____
###Markdown This writes the delayed computation for us and then runs it. Some things to note:1. As with `dask.delayed`, we need to call `.compute()` when we're done. Up until this point everything is lazy.2. Dask will delete intermediate results (like the full pandas dataframe for each file) as soon as possible. - This lets us handle datasets that are larger than memory - This means that repeated computations will have to load all of the data in each time (run the code above again; is it faster or slower than you would expect?) As with `Delayed` objects, you can view the underlying task graph using the `.visualize` method: ###Code
# notice the parallelism
customer.QuantityPerInvoice.max().visualize()
###Output
_____no_output_____
###Markdown Exercises In this section we do a few `dask.dataframe` computations. If you are comfortable with Pandas then these should be familiar. You will have to think about when to call `compute`. 1.) How many rows are in our dataset? If you aren't familiar with pandas, how would you check how many records are in a list of tuples? ###Code
# Your code here

%load solutions/03-dask-dataframe-rows.py
###Output
_____no_output_____
###Markdown 2.) In total, how many non-cancelled flights were taken? With pandas, you would use [boolean indexing](https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing). ###Code
# Your code here

%load solutions/03-dask-dataframe-non-cancelled.py
###Output
_____no_output_____
###Markdown 3.)
In total, how many non-cancelled flights were taken from each airport? *Hint*: use [`df.groupby`](https://pandas.pydata.org/pandas-docs/stable/groupby.html). ###Code
# Your code here

%load solutions/03-dask-dataframe-non-cancelled-per-airport.py
###Output
_____no_output_____
###Markdown 4.) What was the average departure delay from each airport? Note: this is the same computation you did in the previous notebook (is this approach faster or slower?) ###Code
# Your code here

df.columns

%load solutions/03-dask-dataframe-delay-per-airport.py
###Output
_____no_output_____
###Markdown 5.) What day of the week has the worst average departure delay? ###Code
# Your code here

%load solutions/03-dask-dataframe-delay-per-day.py
###Output
_____no_output_____
###Markdown Sharing Intermediate Results When computing all of the above, we sometimes did the same operation more than once. For most operations, `dask.dataframe` hashes the arguments, allowing duplicate computations to be shared and only computed once. For example, let's compute the mean and standard deviation for the departure delay of all non-cancelled flights. Since dask operations are lazy, those values aren't the final results yet. They're just the recipes required to get the results. If we compute them with two calls to compute, there is no sharing of intermediate computations. ###Code
non_cancelled = df[~df.Cancelled]

mean_delay = non_cancelled.DepDelay.mean()
std_delay = non_cancelled.DepDelay.std()

%%time
mean_delay_res = mean_delay.compute()
std_delay_res = std_delay.compute()
###Output
_____no_output_____
###Markdown But let's try passing both to a single `compute` call. ###Code
%%time
mean_delay_res, std_delay_res = dask.compute(mean_delay, std_delay)
###Output
_____no_output_____
###Markdown Using `dask.compute` takes roughly half the time. This is because the task graphs for both results are merged when calling `dask.compute`, allowing shared operations to only be done once instead of twice. In particular, using `dask.compute` only does the following once:- the calls to `read_csv`- the filter (`df[~df.Cancelled]`)- some of the necessary reductions (`sum`, `count`)To see what the merged task graphs between multiple results look like (and what's shared), you can use the `dask.visualize` function (we might want to use `filename='graph.pdf'` to zoom in on the graph better): ###Code
dask.visualize(mean_delay, std_delay)
###Output
_____no_output_____
###Markdown How does this compare to Pandas? Pandas is more mature and fully featured than `dask.dataframe`. If your data fits in memory then you should use Pandas. The `dask.dataframe` module gives you a limited `pandas` experience when you operate on datasets that don't fit comfortably in memory. During this tutorial we provide a small dataset consisting of a few CSV files. This dataset is 45MB on disk and expands to about 400MB in memory (the difference is caused by using `object` dtype for strings). This dataset is small enough that you would normally use Pandas. We've chosen this size so that exercises finish quickly. Dask.dataframe only really becomes meaningful for problems significantly larger than this, when Pandas breaks with the dreaded MemoryError: ... Furthermore, the distributed scheduler allows the same dataframe expressions to be executed across a cluster.
To enable massive "big data" processing, one could execute data ingestion functions such as `read_csv`, where the data is held on storage accessible to every worker node (e.g., amazon's S3), and because most operations begin by selecting only some columns, transforming and filtering the data, only relatively small amounts of data need to be communicated between the machines.Dask.dataframe operations use `pandas` operations internally. Generally they run at about the same speed except in the following two cases:1. Dask introduces a bit of overhead, around 1ms per task. This is usually negligible.2. When Pandas releases the GIL (coming to `groupby` in the next version) `dask.dataframe` can call several pandas operations in parallel within a process, increasing speed somewhat proportional to the number of cores. For operations which don't release the GIL, multiple processes would be needed to get the same speedup. Dask DataFrame Data ModelFor the most part, a Dask DataFrame feels like a pandas DataFrame.So far, the biggest difference we've seen is that Dask operations are lazy; they build up a task graph instead of executing immediately (more details coming in [Schedulers](04-schedulers.ipynb)).This lets Dask do operations in parallel and out of core.In [Dask Arrays](02-dask-arrays.ipynb), we saw that a `dask.array` was composed of many NumPy arrays, chunked along one or more dimensions.It's similar for `dask.dataframe`: a Dask DataFrame is composed of many pandas DataFrames. For `dask.dataframe` the chunking happens only along the index.We call each chunk a *partition*, and the upper / lower bounds are *divisions*.Dask *can* store information about the divisions. We'll cover this in more detail in [Distributed DataFrames](05-distributed-dataframes-and-efficiency.ipynb).For now, partitions come up when you write custom functions to apply to Dask DataFrames Converting `CRSDepTime` to a timestampThis dataset stores timestamps as `HHMM`, which are read in as integers in `read_csv`: ###Code crs_dep_time = df.CRSDepTime.head(10) crs_dep_time ###Output _____no_output_____ ###Markdown To convert these to timestamps of scheduled departure time, we need to convert these integers into `pd.Timedelta` objects, and then combine them with the `Date` column.In pandas we'd do this using the `pd.to_timedelta` function, and a bit of arithmetic: ###Code import pandas as pd # Get the first 10 dates to complement our `crs_dep_time` date = df.Date.head(10) # Get hours as an integer, convert to a timedelta hours = crs_dep_time // 100 hours_timedelta = pd.to_timedelta(hours, unit='h') # Get minutes as an integer, convert to a timedelta minutes = crs_dep_time % 100 minutes_timedelta = pd.to_timedelta(minutes, unit='m') # Apply the timedeltas to offset the dates by the departure time departure_timestamp = date + hours_timedelta + minutes_timedelta departure_timestamp ###Output _____no_output_____ ###Markdown Custom code and Dask DataframeWe could swap out `pd.to_timedelta` for `dd.to_timedelta` and do the same operations on the entire dask DataFrame. But let's say that Dask hadn't implemented a `dd.to_timedelta` that works on Dask DataFrames. 
What would you do then? `dask.dataframe` provides a few methods to make applying custom functions to Dask DataFrames easier:- [`map_partitions`](http://dask.pydata.org/en/latest/dataframe-api.html#dask.dataframe.DataFrame.map_partitions)- [`map_overlap`](http://dask.pydata.org/en/latest/dataframe-api.html#dask.dataframe.DataFrame.map_overlap)- [`reduction`](http://dask.pydata.org/en/latest/dataframe-api.html#dask.dataframe.DataFrame.reduction)Here we'll just be discussing `map_partitions`, which we can use to implement `to_timedelta` on our own: ###Code
# Look at the docs for `map_partitions`
help(df.CRSDepTime.map_partitions)
###Output
_____no_output_____
###Markdown The basic idea is to apply a function that operates on a DataFrame to each partition. In this case, we'll apply `pd.to_timedelta`. ###Code
hours = df.CRSDepTime // 100
# hours_timedelta = pd.to_timedelta(hours, unit='h')
hours_timedelta = hours.map_partitions(pd.to_timedelta, unit='h')

minutes = df.CRSDepTime % 100
# minutes_timedelta = pd.to_timedelta(minutes, unit='m')
minutes_timedelta = minutes.map_partitions(pd.to_timedelta, unit='m')

departure_timestamp = df.Date + hours_timedelta + minutes_timedelta

departure_timestamp

departure_timestamp.head()
###Output
_____no_output_____
###Markdown Exercise: Rewrite the above to use a single call to `map_partitions`. This will be slightly more efficient than two separate calls, as it reduces the number of tasks in the graph (one possible solution sketch follows after the cells below). ###Code
def compute_departure_timestamp(df):
    # TODO

departure_timestamp = df.map_partitions(compute_departure_timestamp)

departure_timestamp.head()

%load solutions/03-dask-dataframe-map-partitions.py
###Output
_____no_output_____
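###Markdown One possible way to write it (a sketch; the official answer lives in `solutions/03-dask-dataframe-map-partitions.py` and may differ):
###Code
# Do all of the timestamp arithmetic inside one function that runs once per
# partition, so the graph contains a single map_partitions task per chunk.
def compute_departure_timestamp(df):
    hours = pd.to_timedelta(df.CRSDepTime // 100, unit='h')
    minutes = pd.to_timedelta(df.CRSDepTime % 100, unit='m')
    return df.Date + hours + minutes

departure_timestamp = df.map_partitions(compute_departure_timestamp)
departure_timestamp.head()
###Output
_____no_output_____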
primary_school.ipynb
###Markdown We're looking at the [primary school temporal network dataset](http://www.sociopatterns.org/datasets/primary-school-temporal-network-data/).This notebook is for the analysis used in the dynamic SBM paper. It considers the graph as a dynamic object, and looks at distances between timesteps. Get data from web ###Code %%bash cd data curl -O http://www.sociopatterns.org/wp-content/uploads/2015/09/primaryschool.csv.gz gunzip primaryschool.csv.gz # grab metadata too ###Output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:02 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:03 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:04 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:04 --:--:-- 0 curl: (52) Empty reply from server gunzip: primaryschool.csv already exists -- skipping ###Markdown Load in dataWe have a bunch of time-stamped edges in a csv. We need to turn these into a sequence of graphs so that we can look at the resistance distance between timesteps. Let's see how this distance behaves over the course of the school day. ###Code # imports presumably lived in an earlier cell; added here so this cell runs standalone import numpy as np import pandas as pd import networkx as nx full_data = pd.read_csv('../primary_school/data/day1.csv',sep='\t',header=None) full_data.columns = ['Time','i','j','Class i','Class j'] data = full_data[full_data['Time'] <= 62300] # select only the first day metadata = pd.read_csv('data/metadata_primaryschool.txt',sep='\t',header=None) metadata.columns = ['ID','Class','Gender'] n_bins = 150 graphs = [] amats = [] amats_clean = [] bins = pd.cut(data['Time'],bins=n_bins,labels=np.arange(n_bins)) for group,df in data.groupby(bins): G = nx.Graph() G.add_nodes_from(metadata['ID']) G.add_edges_from(zip(df['i'],df['j'])) graphs.append(G) amats.append(nx.adjacency_matrix(G)) G.remove_nodes_from(list(nx.isolates(G))) # list(...) needed: nx.isolates returns a generator in networkx >= 2.0 amats_clean.append(nx.adjacency_matrix(G)) ###Output _____no_output_____ ###Markdown AnalysisLet's take the renormalized resistance distance between each graph, along with the edit distance, and see what we can see.
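The distance functions below are called via `nc`, and figures are saved to `figures_dir`; neither is defined in the cells shown, so presumably an earlier (omitted) cell set them up. A minimal setup cell, assuming the `netcomp` package (an assumption based on the function names used below): ###Code
# Hypothetical setup cell -- assumes the distances come from the `netcomp` package;
# `figures_dir` is a guess at the output directory used by the savefig calls below.
import os
import netcomp as nc

figures_dir = 'figures'
###Output _____no_output_____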
###Code def vol_dist(A1,A2): """Volume difference between two graphs.""" return np.abs(A1.sum() - A2.sum())/2 %%time r_dists = [] e_dists = [] d_dists = [] ns_dists = [] adj_dists = [] lap_dists = [] nlap_dists = [] vol_dists = [] for A_old,A in zip(amats[:-1],amats[1:]): r_dists.append(nc.resistance_distance(A,A_old,renormalized=True)) e_dists.append(nc.edit_distance(A,A_old)) d_dists.append(nc.deltacon0(A,A_old)) ns_dists.append(nc.netsimile(A,A_old)) adj_dists.append(nc.lambda_dist(A,A_old,kind='adjacency')) lap_dists.append(nc.lambda_dist(A,A_old,kind='laplacian')) nlap_dists.append(nc.lambda_dist(A,A_old,kind='laplacian_norm')) vol_dists.append(vol_dist(A,A_old)) # normalize by sample mean r_dists = r_dists/np.mean(r_dists) e_dists = e_dists/np.mean(e_dists) d_dists = d_dists/np.mean(d_dists) ns_dists = ns_dists/np.mean(ns_dists) adj_dists = adj_dists/np.mean(adj_dists) lap_dists = lap_dists/np.mean(lap_dists) nlap_dists = nlap_dists/np.mean(nlap_dists) vol_dists = vol_dists/np.mean(vol_dists) # Assume start at 9 AM, end at 5 PM, subdivide time bins accordingly times = [pd.to_datetime('10/1/2009 {}:{:.02f}'.format(int(time//1),60*(time%1))) for time in np.linspace(9,17,num=n_bins)] # our anomalies significant_times = [pd.to_datetime('10/1/2009 ' + time_str) for time_str in ['10:30','11:00','12:00','13:00','14:00','15:30','16:00']] time_labels = ['Morning Recess Begins', 'Morning Recess Ends','First Lunch Begins','Second Lunch Begins', 'Second Lunch Ends','Afternoon Recess Begins','Afternoon Recess Ends'] # we'll only mark the start of each bin, so we don't need the last one times = times[:-1] # distances all freak out towards the end of the day, as the graph dissolves cutoff = -8 xtick_times = [pd.to_datetime('10/1/2009 {}:{:.02f}'.format(int(time//1),60*(time%1))) for time in np.linspace(9,16,num=8)] xtick_labels = ['{} AM'.format(i) for i in range(9,13)] + ['{} PM'.format(i) for i in range(1,5)] # iterable of colors to use for anomaly indicators color_list = ['#1f77b4','#ff7f0e','#2ca02c','#d6272b','#9467bd','#8c564b','#e377c2'] # these are matplotlib.patch.Patch properties # see https://matplotlib.org/users/recipes.html props = [dict(boxstyle='round',color=color) for color in color_list] ###Output _____no_output_____ ###Markdown Volume DIfference ###Code data_max = max( e_dists[:cutoff].max(), vol_dists[:cutoff].max() ) # Make locations for text step = 0.1*data_max text_heights = [data_max+step*i for i,_ in enumerate(significant_times)] text_heights = text_heights[::-1] text_times = [time+pd.Timedelta(minutes=3) for time in significant_times] top = text_heights[0] + step # make horizontal & vertical lines [plt.axvline(dtime,color=color) for dtime,color in zip(significant_times,color_list)] [plt.text(x,y,label,bbox=prop) for x,y,label,prop in zip(text_times,text_heights,time_labels,props)] # plot curves plt.plot(times[:cutoff],e_dists[:cutoff],label='$\widehat{D}_E(t)$'); plt.plot(times[:cutoff],vol_dists[:cutoff],label='$\Delta_V(t)$'); # axes labels & title plt.ylim([0,top]); plt.xticks(xtick_times,xtick_labels); plt.xlabel('Time'); plt.ylabel('Normalized Distance'); plt.title('Volume Difference'); plt.legend(); # fig = plt.gcf(); # fig.savefig(os.path.join(figures_dir,'primary_school_matrix.pdf'), # dpi=300,bbox_inches='tight'); ###Output _____no_output_____ ###Markdown Matrix Distances ###Code data_max = max( r_dists[:cutoff].max(), e_dists[:cutoff].max(), d_dists[:cutoff].max() ) # Make locations for text step = 0.1*data_max text_heights = [data_max+step*i for i,_ in 
enumerate(significant_times)] text_heights = text_heights[::-1] text_times = [time+pd.Timedelta(minutes=3) for time in significant_times] top = text_heights[0] + step # make horizontal & vertical lines [plt.axvline(dtime,color=color) for dtime,color in zip(significant_times,color_list)] [plt.text(x,y,label,bbox=prop) for x,y,label,prop in zip(text_times,text_heights,time_labels,props)] # plot curves plt.plot(times[:cutoff],r_dists[:cutoff],label='$\widehat{D}_R(t)$'); plt.plot(times[:cutoff],e_dists[:cutoff],label='$\widehat{D}_E(t)$'); plt.plot(times[:cutoff],d_dists[:cutoff],label='$\widehat{D}_{DC}(t)$'); # axes labels & title plt.ylim([0,top]); plt.xticks(xtick_times,xtick_labels); plt.xlabel('Time'); plt.ylabel('Normalized Distance'); plt.title('Matrix Distances'); plt.legend(); fig = plt.gcf(); fig.savefig(os.path.join(figures_dir,'primary_school_matrix.pdf'), dpi=300,bbox_inches='tight'); data_max = max( r_dists[:cutoff].max(), e_dists[:cutoff].max(), d_dists[:cutoff].max() ) # Make locations for text step = 0.1*data_max text_heights = [data_max+step*i for i,_ in enumerate(significant_times)] text_heights = text_heights[::-1] text_times = [time+pd.Timedelta(minutes=3) for time in significant_times] top = text_heights[0] + step # make horizontal & vertical lines [plt.axvline(dtime,color=color) for dtime,color in zip(significant_times,color_list)] [plt.text(x,y,label,bbox=prop) for x,y,label,prop in zip(text_times,text_heights,time_labels,props)] # plot curves plt.plot(times[:cutoff],r_dists[:cutoff],label='Resistance'); plt.plot(times[:cutoff],e_dists[:cutoff],label='Edit'); plt.plot(times[:cutoff],d_dists[:cutoff],label='DeltaCon'); # axes labels & title plt.ylim([0,top]); plt.xticks(xtick_times,xtick_labels); plt.xlabel('Time'); plt.ylabel('Scaled Distance'); plt.legend(); fig = plt.gcf(); fig.savefig('/Users/peterwills/google-drive/jekyll/pwills.com/assets/images/research/school_distances.png', dpi=300,bbox_inches='tight',transparent=True); ###Output _____no_output_____ ###Markdown The two figured below won't have the lines labelled. The code to do it is there, but commented out. 
###Code data_max = max( ns_dists[:cutoff].max(), e_dists[:cutoff].max(), ) # Make locations for text step = 0.1*data_max text_heights = [data_max+step*i for i,_ in enumerate(significant_times)] text_heights = text_heights[::-1] text_times = [time+pd.Timedelta(minutes=3) for time in significant_times] top = text_heights[0] + step # make horizontal & vertical lines [plt.axvline(dtime,color=color) for dtime,color in zip(significant_times,color_list)] # [plt.text(x,y,label,bbox=prop) for x,y,label,prop in # zip(text_times,text_heights,time_labels,props)] # plot curves plt.plot(times[:cutoff],ns_dists[:cutoff],label='$\widehat{D}_{NS}(t)$'); plt.plot(times[:cutoff],e_dists[:cutoff],label='$\widehat{D}_E(t)$'); # axes labels & title # plt.ylim([0,top]); plt.xticks(xtick_times,xtick_labels); plt.xlabel('Time'); plt.ylabel('Normalized Distance'); plt.title('NetSimile Distance'); plt.legend(loc='upper left'); fig = plt.gcf(); fig.savefig(os.path.join(figures_dir,'primary_school_netsim.pdf'), dpi=300,bbox_inches='tight'); ###Output _____no_output_____ ###Markdown Lambda Distances ###Code data_max = max( lap_dists[:cutoff].max(), adj_dists[:cutoff].max(), nlap_dists[:cutoff].max() ) # Make locations for text step = 0.1*data_max text_heights = [data_max+step*i for i,_ in enumerate(significant_times)] text_heights = text_heights[::-1] text_times = [time+pd.Timedelta(minutes=3) for time in significant_times] top = text_heights[0] + step # make horizontal & vertical lines [plt.axvline(dtime,color=color) for dtime,color in zip(significant_times,color_list)] # [plt.text(x,y,label,bbox=prop) for x,y,label,prop in # zip(text_times,text_heights,time_labels,props)] # plot curves plt.plot(times[:cutoff],lap_dists[:cutoff],label='$\widehat{D}_{L}(t)$'); plt.plot(times[:cutoff],nlap_dists[:cutoff],label='$\widehat{D}_{\mathcal{L}}(t)$'); plt.plot(times[:cutoff],adj_dists[:cutoff],label='$\widehat{D}_{A}(t)$'); # axes labels & title # plt.ylim([0,top]); plt.xticks(xtick_times,xtick_labels); plt.xlabel('Time'); plt.ylabel('Normalized Distance'); plt.title('Spectral Distances'); plt.legend(loc='upper left'); fig = plt.gcf(); fig.savefig(os.path.join(figures_dir,'primary_school_lambda.pdf'), dpi=300,bbox_inches='tight'); ###Output _____no_output_____
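###Markdown The plots above mark the known schedule changes by hand. As a rough check that the distances actually pick these events out, one could flag time bins where a normalized distance jumps above a threshold -- a sketch, reusing the arrays computed above: ###Code
# Flag bins whose normalized resistance distance exceeds a simple threshold.
# Since the distances are mean-normalized, 1.5 means "1.5x the average distance";
# the exact cutoff is an arbitrary illustrative choice.
threshold = 1.5
flagged = [t for t, dist in zip(times[:cutoff], r_dists[:cutoff]) if dist > threshold]
for t in flagged:
    print('Possible anomaly near', t.strftime('%H:%M'))
###Output _____no_output_____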
Week_1/part_1_initialisation/Initialization.ipynb
###Markdown InitializationWelcome to the first assignment of "Improving Deep Neural Networks". Training your neural network requires specifying an initial value of the weights. A well chosen initialization method will help learning. If you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results. A well chosen initialization can:- Speed up the convergence of gradient descent- Increase the odds of gradient descent converging to a lower training (and generalization) error To get started, run the following cell to load the packages and the planar dataset you will try to classify. ###Code import numpy as np import matplotlib.pyplot as plt import sklearn import sklearn.datasets from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec %matplotlib inline plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # load image dataset: blue/red dots in circles train_X, train_Y, test_X, test_Y = load_dataset() ###Output _____no_output_____ ###Markdown You would like a classifier to separate the blue dots from the red dots. 1 - Neural Network model You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with: - *Zeros initialization* -- setting `initialization = "zeros"` in the input argument.- *Random initialization* -- setting `initialization = "random"` in the input argument. This initializes the weights to large random values. - *He initialization* -- setting `initialization = "he"` in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015. **Instructions**: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this `model()` calls. ###Code def model(X, Y, learning_rate = 0.01, num_iterations = 15000, print_cost = True, initialization = "he"): """ Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID. Arguments: X -- input data, of shape (2, number of examples) Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples) learning_rate -- learning rate for gradient descent num_iterations -- number of iterations to run gradient descent print_cost -- if True, print the cost every 1000 iterations initialization -- flag to choose which initialization to use ("zeros","random" or "he") Returns: parameters -- parameters learnt by the model """ grads = {} costs = [] # to keep track of the loss m = X.shape[1] # number of examples layers_dims = [X.shape[0], 10, 5, 1] # Initialize parameters dictionary. if initialization == "zeros": parameters = initialize_parameters_zeros(layers_dims) elif initialization == "random": parameters = initialize_parameters_random(layers_dims) elif initialization == "he": parameters = initialize_parameters_he(layers_dims) # Loop (gradient descent) for i in range(0, num_iterations): # Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID. 
a3, cache = forward_propagation(X, parameters) # Loss cost = compute_loss(a3, Y) # Backward propagation. grads = backward_propagation(X, Y, cache) # Update parameters. parameters = update_parameters(parameters, grads, learning_rate) # Print the loss every 1000 iterations if print_cost and i % 1000 == 0: print("Cost after iteration {}: {}".format(i, cost)) costs.append(cost) # plot the loss plt.plot(costs) plt.ylabel('cost') plt.xlabel('iterations (per thousands)') # the cost is recorded every 1000 iterations plt.title("Learning rate =" + str(learning_rate)) plt.show() return parameters ###Output _____no_output_____ ###Markdown 2 - Zero initializationThere are two types of parameters to initialize in a neural network:- the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$- the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$**Exercise**: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to "break symmetry", but let's try it anyway and see what happens. Use np.zeros((..,..)) with the correct shapes. ###Code # GRADED FUNCTION: initialize_parameters_zeros def initialize_parameters_zeros(layers_dims): """ Arguments: layer_dims -- python array (list) containing the size of each layer. Returns: parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL": W1 -- weight matrix of shape (layers_dims[1], layers_dims[0]) b1 -- bias vector of shape (layers_dims[1], 1) ... WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1]) bL -- bias vector of shape (layers_dims[L], 1) """ parameters = {} L = len(layers_dims) # number of layers in the network for l in range(1, L): ### START CODE HERE ### (≈ 2 lines of code) parameters['W' + str(l)] = np.zeros((layers_dims[l], layers_dims[l-1])) # note np.zeros takes a tuple parameters['b' + str(l)] = np.zeros((layers_dims[l],1)) ### END CODE HERE ### return parameters parameters = initialize_parameters_zeros([3,2,1]) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) ###Output W1 = [[ 0. 0. 0.] [ 0. 0. 0.]] b1 = [[ 0.] [ 0.]] W2 = [[ 0. 0.]] b2 = [[ 0.]] ###Markdown **Expected Output**: **W1** [[ 0. 0. 0.] [ 0. 0. 0.]] **b1** [[ 0.] [ 0.]] **W2** [[ 0. 0.]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using zeros initialization. ###Code parameters = model(train_X, train_Y, initialization = "zeros") print ("On the train set:") predictions_train = predict(train_X, train_Y, parameters) print ("On the test set:") predictions_test = predict(test_X, test_Y, parameters) ###Output Cost after iteration 0: 0.6931471805599453 Cost after iteration 1000: 0.6931471805599453 Cost after iteration 2000: 0.6931471805599453 Cost after iteration 3000: 0.6931471805599453 Cost after iteration 4000: 0.6931471805599453 Cost after iteration 5000: 0.6931471805599453 Cost after iteration 6000: 0.6931471805599453 Cost after iteration 7000: 0.6931471805599453 Cost after iteration 8000: 0.6931471805599453 Cost after iteration 9000: 0.6931471805599453 Cost after iteration 10000: 0.6931471805599455 Cost after iteration 11000: 0.6931471805599453 Cost after iteration 12000: 0.6931471805599453 Cost after iteration 13000: 0.6931471805599453 Cost after iteration 14000: 0.6931471805599453 ###Markdown The performance is really bad: the cost does not decrease, and the algorithm performs no better than random guessing. Why?
Let's look at the details of the predictions and the decision boundary: ###Code print ("predictions_train = " + str(predictions_train)) print ("predictions_test = " + str(predictions_test)) plt.title("Model with Zeros initialization") axes = plt.gca() axes.set_xlim([-1.5,1.5]) axes.set_ylim([-1.5,1.5]) plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y) ###Output _____no_output_____ ###Markdown The model is predicting 0 for every example. In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, and you might as well be training a neural network with $n^{[l]}=1$ for every layer, and the network is no more powerful than a linear classifier such as logistic regression. **What you should remember**:- The weights $W^{[l]}$ should be initialized randomly to break symmetry. - It is however okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly. 3 - Random initializationTo break symmetry, let's initialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are initialized randomly, but to very large values. **Exercise**: Implement the following function to initialize your weights to large random values (scaled by \*10) and your biases to zeros. Use `np.random.randn(..,..) * 10` for weights and `np.zeros((.., ..))` for biases. We are using a fixed `np.random.seed(..)` to make sure your "random" weights match ours, so don't worry if running your code several times always gives you the same initial values for the parameters. ###Code # GRADED FUNCTION: initialize_parameters_random def initialize_parameters_random(layers_dims): """ Arguments: layer_dims -- python array (list) containing the size of each layer. Returns: parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL": W1 -- weight matrix of shape (layers_dims[1], layers_dims[0]) b1 -- bias vector of shape (layers_dims[1], 1) ... WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1]) bL -- bias vector of shape (layers_dims[L], 1) """ np.random.seed(3) # This seed makes sure your "random" numbers will be the same as ours parameters = {} L = len(layers_dims) # integer representing the number of layers for l in range(1, L): ### START CODE HERE ### (≈ 2 lines of code) parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * 10 parameters['b' + str(l)] = np.zeros((layers_dims[l],1)) ### END CODE HERE ### return parameters parameters = initialize_parameters_random([3, 2, 1]) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) ###Output W1 = [[ 17.88628473 4.36509851 0.96497468] [-18.63492703 -2.77388203 -3.54758979]] b1 = [[ 0.] [ 0.]] W2 = [[-0.82741481 -6.27000677]] b2 = [[ 0.]] ###Markdown **Expected Output**: **W1** [[ 17.88628473 4.36509851 0.96497468] [-18.63492703 -2.77388203 -3.54758979]] **b1** [[ 0.] [ 0.]] **W2** [[-0.82741481 -6.27000677]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using random initialization.
###Code parameters = model(train_X, train_Y, initialization = "random") print ("On the train set:") predictions_train = predict(train_X, train_Y, parameters) print ("On the test set:") predictions_test = predict(test_X, test_Y, parameters) ###Output /home/jovyan/work/week5/Initialization/init_utils.py:145: RuntimeWarning: divide by zero encountered in log logprobs = np.multiply(-np.log(a3),Y) + np.multiply(-np.log(1 - a3), 1 - Y) /home/jovyan/work/week5/Initialization/init_utils.py:145: RuntimeWarning: invalid value encountered in multiply logprobs = np.multiply(-np.log(a3),Y) + np.multiply(-np.log(1 - a3), 1 - Y) ###Markdown If you see "inf" as the cost after iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this. But this isn't worth worrying about for our purposes. Anyway, it looks like you have broken symmetry, and this gives better results than before. The model is no longer outputting all 0s. ###Code print (predictions_train) print (predictions_test) plt.title("Model with large random initialization") axes = plt.gca() axes.set_xlim([-1.5,1.5]) axes.set_ylim([-1.5,1.5]) plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y) ###Output _____no_output_____ ###Markdown **Observations**:- The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets that example wrong it incurs a very high loss for that example. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity.- Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm. - If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization.**In summary**:- Initializing weights to very large random values does not work well. - Hopefully initializing with small random values does better. The important question is: how small should these random values be? Let's find out in the next part! Let's try random initialisation with smaller weights 3b - Random initialization with small weightsTo break symmetry, let's initialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are initialized randomly, but this time to smaller values. **Exercise**: Implement the following function to initialize your weights to small random values (scaled by \*2) and your biases to zeros. Use `np.random.randn(..,..) * 2` for weights and `np.zeros((.., ..))` for biases. We are using a fixed `np.random.seed(..)` to make sure your "random" weights match ours, so don't worry if running your code several times always gives you the same initial values for the parameters. ###Code # GRADED FUNCTION: initialize_parameters_random def initialize_parameters_random(layers_dims): """ Arguments: layer_dims -- python array (list) containing the size of each layer. Returns: parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL": W1 -- weight matrix of shape (layers_dims[1], layers_dims[0]) b1 -- bias vector of shape (layers_dims[1], 1) ... 
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1]) bL -- bias vector of shape (layers_dims[L], 1) """ np.random.seed(3) # This seed makes sure your "random" numbers will be the same as ours parameters = {} L = len(layers_dims) # integer representing the number of layers for l in range(1, L): ### START CODE HERE ### (≈ 2 lines of code) parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * 2 parameters['b' + str(l)] = np.zeros((layers_dims[l],1)) ### END CODE HERE ### return parameters # Run the following code to train your model on 15,000 iterations using the random initialization above. parameters = model(train_X, train_Y, initialization = "random") print ("On the train set:") predictions_train = predict(train_X, train_Y, parameters) print ("On the test set:") predictions_test = predict(test_X, test_Y, parameters) ###Output /home/jovyan/work/week5/Initialization/init_utils.py:145: RuntimeWarning: divide by zero encountered in log logprobs = np.multiply(-np.log(a3),Y) + np.multiply(-np.log(1 - a3), 1 - Y) /home/jovyan/work/week5/Initialization/init_utils.py:145: RuntimeWarning: invalid value encountered in multiply logprobs = np.multiply(-np.log(a3),Y) + np.multiply(-np.log(1 - a3), 1 - Y) ###Markdown We clearly have broken symmetry with the random initialisation of small weights, multiplied by 2 instead of 10. We achieved a training accuracy of 98% and a test accuracy of 94%. Let's see our predictions matrix ###Code print (predictions_train) print (predictions_test) plt.title("Model with small random initialization") axes = plt.gca() axes.set_xlim([-1.5,1.5]) axes.set_ylim([-1.5,1.5]) plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y) ###Output _____no_output_____ ###Markdown We can see that our model with small random initialisation fits the data well, classifying blue dots into the blue area and red dots into the red area. 4 - He initializationFinally, try "He Initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of `sqrt(1./layers_dims[l-1])` where He initialization would use `sqrt(2./layers_dims[l-1])`.)**Exercise**: Implement the following function to initialize your parameters with He initialization.**Hint**: This function is similar to the previous `initialize_parameters_random(...)`. The only difference is that instead of multiplying `np.random.randn(..,..)` by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation. ###Code # GRADED FUNCTION: initialize_parameters_he def initialize_parameters_he(layers_dims): """ Arguments: layer_dims -- python array (list) containing the size of each layer. Returns: parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL": W1 -- weight matrix of shape (layers_dims[1], layers_dims[0]) b1 -- bias vector of shape (layers_dims[1], 1) ... 
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1]) bL -- bias vector of shape (layers_dims[L], 1) """ np.random.seed(3) parameters = {} L = len(layers_dims) - 1 # integer representing the number of layers for l in range(1, L + 1): ### START CODE HERE ### (≈ 2 lines of code) parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * np.sqrt(2.0 /layers_dims[l-1] ) parameters['b' + str(l)] = np.zeros((layers_dims[l],1)) ### END CODE HERE ### return parameters parameters = initialize_parameters_he([2, 4, 1]) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) ###Output W1 = [[ 1.78862847 0.43650985] [ 0.09649747 -1.8634927 ] [-0.2773882 -0.35475898] [-0.08274148 -0.62700068]] b1 = [[ 0.] [ 0.] [ 0.] [ 0.]] W2 = [[-0.03098412 -0.33744411 -0.92904268 0.62552248]] b2 = [[ 0.]] ###Markdown **Expected Output**: **W1** [[ 1.78862847 0.43650985] [ 0.09649747 -1.8634927 ] [-0.2773882 -0.35475898] [-0.08274148 -0.62700068]] **b1** [[ 0.] [ 0.] [ 0.] [ 0.]] **W2** [[-0.03098412 -0.33744411 -0.92904268 0.62552248]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using He initialization. ###Code parameters = model(train_X, train_Y, initialization = "he") print ("On the train set:") predictions_train = predict(train_X, train_Y, parameters) print ("On the test set:") predictions_test = predict(test_X, test_Y, parameters) plt.title("Model with He initialization") axes = plt.gca() axes.set_xlim([-1.5,1.5]) axes.set_ylim([-1.5,1.5]) plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y) ###Output _____no_output_____
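###Markdown To see why the scaling factor matters so much, here is a small standalone sketch (pure NumPy, not part of the graded assignment) that pushes random data through a stack of ReLU layers and tracks the activation scale under the \*10, \*2, and He scalings: ###Code
# Standalone illustration (an addition, not part of the assignment):
# propagate data through 10 ReLU layers and watch the activation scale.
import numpy as np

np.random.seed(0)
x = np.random.randn(100, 64)  # 100 samples, 64 features

for name, scale in [('x10', lambda n: 10.0),
                    ('x2', lambda n: 2.0),
                    ('He', lambda n: np.sqrt(2.0 / n))]:
    a = x
    for _ in range(10):
        W = np.random.randn(a.shape[1], 64) * scale(a.shape[1])
        a = np.maximum(0, a.dot(W))  # ReLU layer
    print("{}: mean |activation| after 10 layers = {:.3g}".format(name, np.abs(a).mean()))
###Output _____no_output_____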
module2/unit project 2 .ipynb
###Markdown Data wrangling import data ###Code import pandas as pd import numpy as np url = "https://raw.githubusercontent.com/lopez-isaac/project-data-sets/master/world%202018%20lol.csv" main_df = pd.read_csv(url) main_df.head() ### If importing locally #path = "/Users/isaaclopez/Downloads/world 2018 lol.csv" #main_df = pd.read_csv(path) ###Output _____no_output_____ ###Markdown Clean up data frame ###Code #remove display limits pd.set_option('display.max_columns', None) pd.set_option('display.max_rows', None) #copy data (use .copy() so the cleaning steps don't mutate main_df) clean_up = main_df.copy() #drop time-based or unnecessary columns drop = ["gameid","url","league","split","date","week","game","patchno","playerid","player",] clean_up = main_df.drop(columns=drop) #check clean_up.head() #check for null values clean_up.isnull().sum() ## Fix nan values #fill nan values with either 1 or 0 fill_list =[1,0] clean_up['fbaron'] = clean_up['fbaron'].fillna(pd.Series(np.random.choice(fill_list, size=len(clean_up.index)))) #fill nan values with the column mean clean_up["fbarontime"] = clean_up["fbarontime"].fillna((clean_up["fbarontime"].mean())) #fill nan values with either 1 or 0 clean_up["herald"] = clean_up["herald"].fillna(pd.Series(np.random.choice(fill_list,size=len(clean_up.index)))) #drop the column, mostly nan values clean_up = clean_up.drop(columns=["heraldtime"]) #check clean_up.isnull().sum() clean_up.head() clean_up.dtypes #convert object columns to numeric (they are cast to float below) object_int = ["doubles","triples","quadras","pentas"] for x in object_int: clean_up[x] = clean_up[x].replace(' ', np.NaN) for x in object_int: #fill nan values with the column mode clean_up[x] = clean_up[x].fillna(clean_up[x].mode()[0]) for x in object_int: clean_up[x] = clean_up[x].astype(float) clean_up["dmgshare"] = clean_up["dmgshare"].replace(' ', np.NaN) clean_up["dmgshare"] = clean_up["dmgshare"].astype(float) clean_up["dmgshare"] = clean_up["dmgshare"].fillna((clean_up["dmgshare"].mean())) clean_up["earnedgoldshare"] = clean_up["earnedgoldshare"].replace(' ', np.NaN) clean_up["earnedgoldshare"] = clean_up["earnedgoldshare"].astype(float) clean_up["earnedgoldshare"] = clean_up["earnedgoldshare"].fillna((clean_up["earnedgoldshare"].mean())) #drop: too many blanks clean_up["visiblewardclearrate"].value_counts() #drop: too many blanks clean_up["invisiblewardclearrate"].value_counts() clean_up = clean_up.drop(columns=["visiblewardclearrate","invisiblewardclearrate"]) clean_up.isnull().sum() #need to drop towers destroyed, it causes indirect (target) leakage #teamtowerkills 0 #opptowerkills clean_up = clean_up.drop(columns=["opptowerkills","teamtowerkills"]) clean_up["side"].value_counts() ###Output _____no_output_____ ###Markdown machine learning, predict match winner ###Code print(clean_up.shape) clean_up.head() ###Output (1428, 83) ###Markdown majority value baseline ###Code target = clean_up["result"] #unique value percentages for target target.value_counts(normalize=True) ###Output _____no_output_____ ###Markdown train, validation, test split ###Code from sklearn.model_selection import train_test_split #train and test split: 85% / 15% train, test = train_test_split(clean_up, train_size=.85, test_size=.15, stratify=clean_up['result'], random_state=42) clean_up.shape,train.shape,test.shape #train and validation split, 
sized to match the test set train, val = train_test_split(train, test_size = len(test), stratify=train['result'], random_state=42) clean_up.shape,train.shape,test.shape,val.shape ###Output _____no_output_____ ###Markdown random forest classifier ###Code import category_encoders as ce from sklearn.impute import SimpleImputer from sklearn.ensemble import RandomForestClassifier from sklearn.pipeline import make_pipeline # assign features and target target = "result" features = train.columns.drop(target) X_train = train[features] y_train = train[target] X_val = val[features] y_val = val[target] X_test = test[features] y_test = test[target] train.select_dtypes(exclude='number').describe().T.sort_values(by='unique') # make the pipeline pipeline = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(strategy='median'), RandomForestClassifier(n_estimators=10, max_depth=3, min_samples_leaf=1, random_state=42) ) # Fit on train, score on val pipeline.fit(X_train, y_train) print('Validation Accuracy', pipeline.score(X_val, y_val)) ###Output Validation Accuracy 0.9348837209302325 ###Markdown Random forest importance ###Code import matplotlib.pyplot as plt pipeline.named_steps['randomforestclassifier'] rf = pipeline.named_steps['randomforestclassifier'] importances = pd.Series(rf.feature_importances_, X_train.columns) n = 20 plt.figure(figsize=(10,n/2)) plt.title(f'Top {n} features') importances.sort_values()[-n:].plot.barh(color='lightblue'); ###Output _____no_output_____ ###Markdown eli5 permutation ###Code import eli5 from eli5.sklearn import PermutationImportance #eli5 doesn't work with pipelines (workaround below) transformers = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(strategy='median') ) X_train_transformed = transformers.fit_transform(X_train) X_val_transformed = transformers.transform(X_val) model = RandomForestClassifier(n_estimators=10, max_depth=3, min_samples_leaf=1, random_state=42) model.fit(X_train_transformed, y_train) permuter = PermutationImportance( model, scoring='accuracy', n_iter=5, random_state=42 ) permuter.fit(X_val_transformed, y_val) feature_names = X_val.columns.tolist() eli5.show_weights( permuter, top=None, # show permutation importances for all features feature_names=feature_names ) from xgboost import XGBClassifier minimum_importance = 0 mask = permuter.feature_importances_ > minimum_importance features = X_train.columns[mask] X_train = X_train[features] print('Shape after removing features:', X_train.shape) X_val = X_val[features] pipeline = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(strategy='median'), RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1) ) # Fit on train, score on val pipeline.fit(X_train, y_train) print('Validation Accuracy', pipeline.score(X_val, y_val)) from sklearn.metrics import accuracy_score y_pred = pipeline.predict(X_val) print('Validation Accuracy', accuracy_score(y_val, y_pred)) encoder = ce.OrdinalEncoder() X_train_encoded = encoder.fit_transform(X_train) X_val_encoded = encoder.transform(X_val) model = XGBClassifier( n_estimators=500, # <= 1000 trees, depends on early stopping max_depth=7, # try deeper trees because of high cardinality categoricals learning_rate=0.5, # try higher learning rate objective='binary:logistic', # binary target, so num_class/softmax are not needed n_jobs=-1 ) eval_set = [(X_train_encoded, y_train), (X_val_encoded, y_val)] model.fit(X_train_encoded, y_train, eval_set=eval_set, eval_metric='error') # 'error' is the binary metric; 'merror'/softmax are for multiclass ###Output _____no_output_____
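###Markdown The notebook never scores the held-out test set, and after the permutation-importance step `X_test` still carries the full column set while the pipeline was refit on the reduced one. A possible final step (a sketch; `features` is the mask computed above): ###Code
# Align the test features with the selected columns, then score once at the end.
X_test = X_test[features]
print('Test Accuracy', pipeline.score(X_test, y_test))
###Output _____no_output_____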
Week 02/monty-hall.ipynb
###Markdown Monty Hall ###Code import numpy as np def selectdoor(): randomno = np.random.uniform(0.0, 3.0) if randomno < 1.0: return 1 elif randomno < 2.0: return 2 else: return 3 cars = [selectdoor() for i in range(1000)] print(cars) import seaborn as sns import matplotlib.pyplot as pl pl.figure(figsize=(10, 6)) sns.set(style="darkgrid") ax = sns.countplot(y=cars) pl.show() ###Output _____no_output_____ ###Markdown Student game ###Code import random doors = ['A', 'B', 'C'] car = random.choice(doors) pick = random.choice(doors) car, pick show = random.choice([door for door in doors if door != car and door != pick]) show ###Output _____no_output_____
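###Markdown The cell above plays a single round. To see why switching is the better strategy, here is a sketch that repeats the game many times and counts wins for staying versus switching (an addition; it reuses the `doors` list defined above): ###Code
# Simulate many rounds: staying wins ~1/3 of the time, switching ~2/3.
n_games = 10000
stay_wins = switch_wins = 0
for _ in range(n_games):
    car = random.choice(doors)
    pick = random.choice(doors)
    show = random.choice([d for d in doors if d != car and d != pick])
    switch = [d for d in doors if d != pick and d != show][0]
    stay_wins += (pick == car)
    switch_wins += (switch == car)
print("stay: {:.3f} switch: {:.3f}".format(stay_wins / n_games, switch_wins / n_games))
###Output _____no_output_____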
loading_images.ipynb
###Markdown Download data ###Code # imports (these presumably lived in an earlier cell; added here, assuming a tf.keras setup, so the cell runs standalone) import glob import os import tarfile import matplotlib.pyplot as plt from tensorflow.keras.utils import get_file from tensorflow.keras.preprocessing.image import ImageDataGenerator, img_to_array, load_img dataset_url = 'https://datashare.is.ed.ac.uk/bitstream/handle/10283/3192/CINIC-10.tar.gz?sequence=4&isAllowed=y' data_name = 'cinic10' file_extension = 'tar.gz' file_name = '.'.join([data_name, file_extension]) # download CINIC-10 dataset downloaded_file_location = get_file(origin = dataset_url, fname = file_name, extract = False) # decompress data data_directory, _ = downloaded_file_location.rsplit(os.path.sep, maxsplit = 1) data_directory = os.path.sep.join([data_directory, data_name]) if not os.path.exists(data_directory): tar = tarfile.open(downloaded_file_location) tar.extractall(data_directory) # load all image paths data_pattern = os.path.sep.join([data_directory, '*/*/*.png']) image_paths = list(glob.glob(data_pattern)) print(f'There are {len(image_paths):,} images in the dataset') ###Output There are 270,000 images in the dataset ###Markdown Display an image ###Code # load a single image and print its metadata sample_img = load_img(image_paths[0]) print(f'Image type: {type(sample_img)}') print(f'Image format: {sample_img.format}') print(f'Image mode: {sample_img.mode}') print(f'Image size: {sample_img.size}') # convert an image to a NumPy array sample_img_array = img_to_array(sample_img) print(f'Image type: {type(sample_img_array)}') print(f'Image array shape: {sample_img_array.shape}') # display the sample image plt.imshow(sample_img_array / 255.0) ###Output _____no_output_____ ###Markdown Iterate over dataset ###Code # load batches of images, flip horizontally at random, and rescale to range [0, 1] image_generator = ImageDataGenerator(horizontal_flip = True, rescale = 1.0 / 255.0) # display one batch of 10 images batch_size = 10 iterator = (image_generator.flow_from_directory(directory = data_directory, batch_size = batch_size)) for batch, _ in iterator: plt.figure(figsize = (5, 5)) for index, image in enumerate(batch, start = 1): ax = plt.subplot(5, 5, index) plt.imshow(image) plt.axis('off') plt.show() break ###Output Found 270000 images belonging to 3 classes.
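###Markdown Note that `flow_from_directory` reported 3 classes rather than 10: it treats the immediate subdirectories of `data_directory` (the `train`/`valid`/`test` splits in CINIC-10) as class labels. To iterate over the actual classes, point it at a single split -- a sketch, assuming the standard CINIC-10 layout: ###Code
# Point the generator at one split so its subdirectories are the 10 real classes.
train_directory = os.path.sep.join([data_directory, 'train'])
train_iterator = image_generator.flow_from_directory(directory = train_directory,
                                                     batch_size = batch_size)
print(train_iterator.num_classes, 'classes')
###Output _____no_output_____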
examples/Pandas Patch In Action.ipynb
###Markdown Pandas Patch In Action import packages and download test data ###Code from pandas_patch import * %psource structure def get_test_df_complete(): """ get the full test dataset from the Lending Club open source database, the purpose of this function is to be used in a demo ipython notebook """ import requests from zipfile import ZipFile from StringIO import StringIO zip_to_download = "https://resources.lendingclub.com/LoanStats3b.csv.zip" r = requests.get(zip_to_download) zipfile = ZipFile(StringIO(r.content)) file_csv = zipfile.namelist()[0] df = pd.read_csv(zipfile.open(file_csv), skiprows =[0], na_values = ['n/a','N/A',''], parse_dates = ['issue_d','last_pymnt_d','next_pymnt_d','last_credit_pull_d'] ) zipfile.close() df = df[:-2] nb_row = float(len(df.index)) df['na_col'] = np.nan df['constant_col'] = 'constant' df['duplicated_column'] = df.id df['many_missing_70'] = np.nan df.loc[1:int(0.3*nb_row),'many_missing_70'] = 1 df['bad'] = 1 index_good = df['loan_status'].isin(['Fully Paid', 'Current','In Grace Period']) df.loc[index_good,'bad'] = 0 return df # ipython tips # with psource you can see the source code of a function %psource pandas_patch ?nacount # to get info about the functions and docs #df = get_test_df_complete() # because no wifi connection df = get_test_df_complete() df.columns ###Output _____no_output_____ ###Markdown Basic data cleaning and exploration Basic helpers ###Code df.nrow() df.ncol() df.dfnum() #identify numeric variables df.dfchar() # identify character variables timeit df.factors() df.nacount(axis = 0) # count the number of missing values per column df.nacount(axis = 1) # count the number of missing values per row df.constantcol() # find the constant columns df.findupcol() # find the duplicate columns timeit df.detectkey(pct = 0.05) # the pct value was missing in the original; 0.05 matches the call further down timeit df.apply(lambda x: len(pd.unique(x))) timeit df.count_unique() df.manymissing(a = 0.7) ###Output _____no_output_____ ###Markdown Summary of structure, data info and cleaning functions ###Code timeit df.structure() df.psummary(dynamic = True) df.str %psource structure df.nacount(axis = 0).Napercentage %timeit df.nacount() timeit df.count_unique() df.count() df.nacount() 1 >= 2 df.int_rate.dtype df.sample_df(pct = 0.10).nrow() df.factors() timeit df.detectkey(pct = 0.05) timeit df.detectkey2() df.nearzerovar() def pandas_to_ndarray_wrap(X, copy=True): """ Converts X to a ndarray and provides a function to help convert back to pandas object. Parameters ---------- X : Series/DataFrame/ndarray copy : Boolean If True, return a copy. Returns ------- Xvals : ndarray If X is a Series/DataFrame, then Xvals = X.values, if ndarray, Xvals = X F : Function F(Xvals) = X """ if copy: X = X.copy() if isinstance(X, pd.Series): return X.values, lambda Z: pd.Series(np.squeeze(Z), index=X.index) elif isinstance(X, pd.DataFrame): return X.values, lambda Z: pd.DataFrame( Z, index=X.index, columns=X.columns) elif isinstance(X, np.ndarray) or isspmatrix(X): return X, lambda Z: Z else: raise ValueError("Unhandled type: %s" % type(X)) pandas_to_ndarray_wrap(df) pandas_to_ndarray_wrap(df)[0] df.size timeit df.duplicated() ###Output 1 loops, best of 3: 1.08 s per loop
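###Markdown Putting a few of these helpers together, a possible cleaning pass (a sketch -- it assumes `constantcol` and `manymissing` return column collections, as the calls above suggest): ###Code
# Drop constant columns and columns with more than 70% missing values in one pass.
to_drop = set(df.constantcol()) | set(df.manymissing(a = 0.7))
df_clean = df.drop(list(to_drop), axis = 1)
print(df.ncol(), '->', df_clean.ncol(), 'columns')
###Output _____no_output_____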
docs/examples/example_crb_xr.ipynb
###Markdown The Metagalactic X-ray Background In this example, we'll compute the Meta-Galactic X-ray background over a series of redshifts ($10 \leq z \leq 40$). To start, the usual imports: ###Code %pylab inline import ares import numpy as np import matplotlib.pyplot as pl from ares.physics.Constants import c, ev_per_hz, erg_per_ev # Initialize radiation background pars = \ { # Source properties 'pop_sfr_model': 'sfrd-func', 'pop_sfrd': lambda z: 0.1, 'pop_sed': 'pl', 'pop_alpha': -1.5, 'pop_Emin': 2e2, 'pop_Emax': 3e4, 'pop_EminNorm': 5e2, 'pop_EmaxNorm': 8e3, 'pop_rad_yield': 2.6e39, 'pop_rad_yield_units': 'erg/s/sfr', # Solution method 'pop_solve_rte': True, 'tau_redshift_bins': 400, 'initial_redshift': 60., 'final_redshift': 5., } ###Output _____no_output_____ ###Markdown To summarize these inputs, we've got:* A constant SFRD of $0.1 \ M_{\odot} \ \mathrm{yr}^{-1} \ \mathrm{cMpc}^{-3}$, given by the ``pop_sfrd`` parameter.* A power-law spectrum with index $\alpha=-1.5$, given by ``pop_sed`` and ``pop_alpha``, extending from 0.2 keV to 30 keV.* A yield of $2.6 \times 10^{39} \ \mathrm{erg} \ \mathrm{s}^{-1} \ (M_{\odot} \ \mathrm{yr})^{-1}$ in the $0.5 \leq h\nu / \mathrm{keV} \leq 8$ band, set by ``pop_EminNorm``, ``pop_EmaxNorm``, ``pop_yield``, and ``pop_yield_units``. This is the $L_X-\mathrm{SFR}$ relation found by [Mineo et al. (2012)](http://adsabs.harvard.edu/abs/2012MNRAS.419.2095M).See the complete listing of parameters relevant to :class:`ares.populations.GalaxyPopulation` objects [here](../params_populations.html). Now, to initialize a calculation and run it: ###Code mgb = ares.simulations.MetaGalacticBackground(**pars) mgb.run() ###Output _____no_output_____ ###Markdown We'll pull out the evolution of the background just as we did in the [UV background](example_crb_uv) example: ###Code z, E, flux = mgb.get_history(flatten=True) ###Output _____no_output_____ ###Markdown and plot up the result (at the final redshift): ###Code pl.semilogy(E, flux[0] * E * erg_per_ev, color='k') pl.xlabel(ares.util.labels['E']) pl.ylabel(ares.util.labels['flux_E']) ###Output _____no_output_____ ###Markdown Compare to the analytic solution, given by Equation A1 in [Mirocha (2014)](http://adsabs.harvard.edu/abs/2014arXiv1406.4120M>) (the *cosmologically-limited* solution to the radiative transfer equation)$J_{\nu}(z) = \frac{c}{4\pi} \frac{\epsilon_{\nu}(z)}{H(z)} \frac{(1 + z)^{9/2-(\alpha + \beta)}}{\alpha+\beta-3/2} \times \left[(1 + z_i)^{\alpha+\beta-3/2} - (1 + z)^{\alpha+\beta-3/2}\right]$with $\alpha = -1.5$, $\beta = 0$, $z=5$, and $z_i=60$, ###Code # Grab the GalaxyPopulation instance pop = mgb.pops[0] # Plot the numerical solution again pl.semilogy(E, flux[0] * E * erg_per_ev, color='k') # Compute cosmologically-limited solution e_nu = np.array([pop.Emissivity(10., energy) for energy in E]) e_nu *= c / 4. / np.pi / pop.cosm.HubbleParameter(5.) e_nu *= (1. + 5.)**6. / -3. e_nu *= ((1. + 60.)**-3. - (1. + 5.)**-3.) e_nu *= ev_per_hz # Plot it pl.semilogy(E, e_nu, color='b', ls='--') ###Output _____no_output_____ ###Markdown Neutral Absorption by the Diffuse IGMThe calculation above is basically identical to the optically-thin UV background calculations performed in the previous example, at least in the cases where we neglected any sawtooth effects. While there is no modification to the X-ray background due to resonant absorption in the Lyman series (of Hydrogen or Helium II), bound-free absorption by intergalactic hydrogen and helium atoms acts to harden the spectrum. 
By default, ARES will *not* include these effects. To "turn on" bound-free absorption in the IGM, modify the dictionary of parameters you've got already: ###Code pars['tau_approx'] = 'neutral' ###Output _____no_output_____ ###Markdown Now, initialize and run a new calculation: ###Code mgb2 = ares.simulations.MetaGalacticBackground(**pars) mgb2.run() ###Output # Loaded /Users/jordanmirocha/Dropbox/work/mods/ares/input/optical_depth/optical_depth_planck_TTTEEE_lowl_lowE_best_H_400x862_z_5-60_logE_2.3-4.5.hdf5. ###Markdown and plot the result on top of the previous one: ###Code z2, E2, flux2 = mgb2.get_history(flatten=True) # Plot the numerical solution again pl.semilogy(E, flux[0] * E * erg_per_ev, color='k') pl.loglog(E2, flux2[0] * E2 * erg_per_ev, color='b', ls=':', lw=3) ###Output _____no_output_____ ###Markdown The behavior at low photon energies ($h\nu \lesssim 0.3 \ \mathrm{keV}$) is an artifact that arises due to poor redshift resolution. This is a trade-off made for speed in solving the cosmological radiative transfer equation, discussed in detail in Section 3 of [Mirocha (2014)](http://adsabs.harvard.edu/abs/2014arXiv1406.4120M). For more accurate calculations, you must enhance the redshift sampling using the ``tau_redshift_bins`` parameter, e.g., ###Code pars['tau_redshift_bins'] = 1000 ###Output _____no_output_____
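###Markdown The cell above only updates the parameter dictionary; a sketch of the follow-up run with the finer redshift grid (this recomputes the optical-depth table, so it can take a while): ###Code
mgb3 = ares.simulations.MetaGalacticBackground(**pars)
mgb3.run()

z3, E3, flux3 = mgb3.get_history(flatten=True)

# Overplot the higher-resolution solution on the previous two
pl.semilogy(E, flux[0] * E * erg_per_ev, color='k')
pl.loglog(E2, flux2[0] * E2 * erg_per_ev, color='b', ls=':', lw=3)
pl.loglog(E3, flux3[0] * E3 * erg_per_ev, color='r', ls='--')
###Output _____no_output_____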
Day-5/Day 5 - Vanilla GAN Solved.ipynb
###Markdown Data Science Summer School - Split '17 5. Generating images of digits with Generative Adversarial Networks ###Code %matplotlib inline %load_ext autoreload %autoreload 2 import tensorflow as tf import numpy as np import matplotlib.pyplot as plt import os, util ###Output _____no_output_____ ###Markdown Goals:1. Implement the model from "[Generative Adversarial Networks](http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf)" by Goodfellow et al. (1284 citations since 2014.)2. **Understand** how the model learns to generate realistic imagesIn ~two hours. 5.1 Downloading the datasets and previewing data ###Code data_folder = 'data'; dataset = 'mnist' # the folder in which the dataset is going to be stored download_folder = util.download_mnist(data_folder, dataset) images, labels = util.load_mnist(download_folder) print("Folder:", download_folder) print("Image shape:", images.shape) # greyscale, so the last dimension (color channel) = 1 print("Label shape:", labels.shape) # one-hot encoded show_n_images = 25 sample_images, mode = util.get_sample_images(images, n=show_n_images) mnist_sample = util.images_square_grid(sample_images, mode) plt.imshow(mnist_sample, cmap='gray') sample = images[3]*50 # sample = sample.reshape((28, 28)) print(np.array2string(sample.astype(int), max_line_width=100, separator=',', precision=0)) plt.imshow(sample, cmap='gray') ###Output _____no_output_____ ###Markdown What are we going to do with the data?- We **have** $70000$ images of hand-written digits generated from some distribution $X \sim P_{real}$- We **have** $70000$ labels $y_i \in \{0,..., 9\}$ indicating which digit is written on the image $x_i$**Problem:** Imagine that the number of images we have **is not enough** - a common issue in computer vision and machine learning.1. We can pay experts to create new images * **Expensive** * Slow * Reliable 2. We can generate new images ourselves * **Cheap** * **Fast** * Unreliable?**Problem:** Not every image that we generate is going to be perfect (or even close to perfect). Therefore, we need some method to determine which images are realistic. 1. We can pay experts to determine which images are good enough * Expensive * Slow * **Reliable** 2. We can train a model to determine which images are good enough * **Cheap** * **Fast** * Unreliable? Formalization* $X \sim P_{real}$ : existing images of shape $s$* $Z \sim P_z$ : a $k$-dimensional random vector* $G(z; \theta_G): Z \to \hat{X}$ : the **generator**, a function that transforms the random vector $z$ into an image of shape $s$* $D(x, \theta_D): X \to (Real, Fake)$ : the **discriminator**, a function that, given an image of shape $s$, decides if the image is real or fake DetailsThe existing images $X$ in our setup are images from the mnist dataset. We will arbitrarily decide that vectors $z$ will be sampled from a uniform distribution, and $G$ and $D$ will both be *'deep'* neural networks. For simplicity, and since we are using the mnist dataset, both $G$ and $D$ will be multi-layer perceptrons (and not deep convolutional networks) with one hidden layer. The generated images $G(z) \sim P_{fake}$ as well as real images $x \sim P_{real}$ will be passed on to the discriminator, which will classify them into $(Real, Fake)$. Figure 1. Generative adversarial network architecture Discriminator**The goal** of the discriminator is to successfully recognize which image is sampled from the true distribution, and which image is sampled from the generator. Figure 2. 
Discriminator network sketch Generator**The goal** of the generator is that the discriminator *misclassifies* the images that the generator generated as if they were generated by the true distribution.Figure 3. Generator network sketch 5.2 Data transformationSince we are going to use a fully connected network (we are not going to use local convolutional filters), we are going to flatten the input images for simplicity. Also, the pixel values are scaled to the interval $[0,1]$ (this was already done beforehand). We will also use a pre-made `Dataset` class to iterate over the dataset in batches. The class is defined in `util.py`, and only consists of a constructor and a method `next_batch`.**Question:** Having seen the architecture of the network, why are the pixels scaled to $[0,1]$ and not, for example, $[-1, 1]$, or left at $[0, 255]$?**Answer:** The generator's output layer uses a sigmoid activation, so the generated pixels lie in $(0, 1)$; scaling the real pixels to $[0,1]$ puts real and generated images on the same scale for the discriminator. ###Code # the mnist dataset is stored in the variable 'images', and the labels are stored in 'labels' images = images.reshape(-1, 28*28) # 70000 x 784 print (images.shape, labels.shape) mnist = util.Dataset(images, labels) print ("Number of samples:", mnist.n) ###Output (70000, 784) (70000, 10) Number of samples: 70000 ###Markdown 5.3 The generator network ###Code class Generator: """The generator network the generator network takes as input a vector z of dimension input_dim, and transforms it to a vector of size output_dim. The network has one hidden layer of size hidden_dim. We will define the following methods: __init__: initializes all variables by using tf.get_variable(...) and stores them to the class, as well as a list in self.theta forward: defines the forward pass of the network - how do the variables interact with respect to the inputs """ def __init__(self, input_dim, hidden_dim, output_dim): """Constructor for the generator network. In the constructor, we will just initialize all the variables in the network. Args: input_dim: The dimension of the input data vector (z). hidden_dim: The dimension of the hidden layer of the neural network (h) output_dim: The dimension of the output layer (equivalent to the size of the image) """ with tf.variable_scope("generator"): self.W1 = tf.get_variable(name="W1", shape=[input_dim, hidden_dim], initializer=tf.contrib.layers.xavier_initializer()) self.b1 = tf.get_variable(name="b1", shape=[hidden_dim], initializer=tf.zeros_initializer()) self.W2 = tf.get_variable(name="W2", shape=[hidden_dim, output_dim], initializer=tf.contrib.layers.xavier_initializer()) self.b2 = tf.get_variable(name="b2", shape=[output_dim], initializer=tf.zeros_initializer()) self.theta = [self.W1, self.W2, self.b1, self.b2] def forward(self, z): """The forward pass of the network -- here we will define the logic of how we combine the variables through multiplication and activation functions in order to get the output. """ h1 = tf.nn.relu(tf.matmul(z, self.W1) + self.b1) log_prob = tf.matmul(h1, self.W2) + self.b2 prob = tf.nn.sigmoid(log_prob) return prob ###Output _____no_output_____ ###Markdown 5.4 The basic network for the discriminator ###Code class Discriminator: """The discriminator network the discriminator network takes as input a vector x of dimension input_dim, and transforms it to a vector of size output_dim. The network has one hidden layer of size hidden_dim. You will define the following methods: __init__: initializes all variables by using tf.get_variable(...) 
and stores them to the class, as well as a list in self.theta forward: defines the forward pass of the network - how do the variables interact with respect to the inputs """ def __init__(self, input_dim, hidden_dim, output_dim): with tf.variable_scope("discriminator"): self.W1 = tf.get_variable(name="W1", shape=[input_dim, hidden_dim], initializer=tf.contrib.layers.xavier_initializer()) self.b1 = tf.get_variable(name="b1", shape=[hidden_dim], initializer=tf.zeros_initializer()) self.W2 = tf.get_variable(name="W2", shape=[hidden_dim, output_dim], initializer=tf.contrib.layers.xavier_initializer()) self.b2 = tf.get_variable(name="b2", shape=[output_dim], initializer=tf.zeros_initializer()) self.theta = [self.W1, self.W2, self.b1, self.b2] def forward(self, x): """The forward pass of the network -- here we will define the logic of how we combine the variables through multiplication and activation functions in order to get the output. """ h1 = tf.nn.relu(tf.matmul(x, self.W1) + self.b1) logit = tf.matmul(h1, self.W2) + self.b2 prob = tf.nn.sigmoid(logit) return prob, logit ###Output _____no_output_____ ###Markdown Intermezzo: Xavier initialization of weightsGlorot, X., & Bengio, Y. (2010, March). [Understanding the difficulty of training deep feedforward neural networks](http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf). In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (pp. 249-256).Implemented in tensorflow, as part of the standard library: https://www.tensorflow.org/api_docs/python/tf/contrib/layers/xavier_initializer 1. Idea:- If the weights in a network are initialized to values that are too small, then the signal shrinks as it passes through each layer until it’s too tiny to be useful.- If the weights in a network are initialized to values that are too large, then the signal grows as it passes through each layer until it’s too massive to be useful. 2. Goal: - We need initial weight values that are *just right* for the signal not to explode or vanish during the forward pass 3. Math- Trivial 4. Solution- $v = \frac{2}{n_{in} + n_{out}}$In the case of a Gaussian distribution, we set the **variance** to $v$.In the case of a uniform distribution, we set the **interval** to $\pm\sqrt{3v}$, so that the variance is again $v$ (the default distr. in tensorflow is the uniform).http://andyljones.tumblr.com/post/110998971763/an-explanation-of-xavier-initialization 5.5 Define the model parametersWe will take a brief break to set the values for the parameters of the model. Since we know the dataset we are working with, as well as the shape of the generator and discriminator networks, your task is to fill in the values of the following variables. 
###Code image_dim = 784 # The dimension of the input image vector to the discriminator discriminator_hidden_dim = 128 # The dimension of the hidden layer of the discriminator discriminator_output_dim = 1 # The dimension of the output layer of the discriminator random_sample_dim = 100 # The dimension of the random noise vector z generator_hidden_dim = 128 # The dimension of the hidden layer of the generator generator_output_dim = 784 # The dimension of the output layer of the generator ###Output _____no_output_____ ###Markdown 5.6 Check the implementation of the classes ###Code d = Discriminator(image_dim, discriminator_hidden_dim, discriminator_output_dim) for param in d.theta: print (param) g = Generator(random_sample_dim, generator_hidden_dim, generator_output_dim) for param in g.theta: print (param) ###Output <tf.Variable 'generator/W1:0' shape=(100, 128) dtype=float32_ref> <tf.Variable 'generator/W2:0' shape=(128, 784) dtype=float32_ref> <tf.Variable 'generator/b1:0' shape=(128,) dtype=float32_ref> <tf.Variable 'generator/b2:0' shape=(784,) dtype=float32_ref> ###Markdown Drawing samples from the latent space ###Code def sample_Z(m, n): return np.random.uniform(-1., 1., size=[m, n]) plt.imshow(sample_Z(16, 100), cmap='gray') ###Output _____no_output_____ ###Markdown 5.7 Define the model loss -- Vanilla GANThe objective for the vanilla version of the GAN was defined as follows:$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{real}} [log(D(x))] + \mathbb{E}_{z \sim p_{z}} [log(1 -D(G(z)))]$The function contains a *minimax* formulation, and cannot be directly optimized. However, if we freeze $D$, we can derive the loss for $G$ and vice versa.**Discriminator loss:**$p_{fake} = G(p_z)$$D_{loss} = \mathbb{E}_{x \sim p_{real}} [log(D(x))] + \mathbb{E}_{\hat{x} \sim p_{fake}} [log(1 -D(\hat{x}))]$We estimate the expectation over each minibatch and arrive at the following formulation:$D_{loss} = \frac{1}{m}\sum_{i=1}^{m} log(D(x_i)) + \frac{1}{m}\sum_{i=1}^{m} log(1 -D(\hat{x_i}))$**Generator loss:**$G_{loss} = - \mathbb{E}_{z \sim p_{z}} [log(1 -D(G(z)))]$In practice this loss saturates early in training, when the discriminator confidently rejects the generated samples; following the original paper, we therefore use the *non-saturating* variant and instead maximize the probability of fooling the discriminator, estimated over a minibatch as:$G_{loss} = \frac{1}{m}\sum_{i=1}^{m} log(D(G(z_i)))$ Model loss, translated from mathThe **discriminator** wants to:- **maximize** the (log) probability of a **real** image being classified as **real**,- **minimize** the (log) probability of a **fake** image being classified as **real**.The **generator** wants to:- **maximize** the (log) probability of a **fake** image being classified as **real**. Model loss, translated to practical machine learningThe output of the discriminator is a scalar, $p$, which we interpret as the probability that an input image is **real** ($1-p$ is the probability that the image is fake).The **discriminator** takes as input:- a minibatch of images from our training set with a vector of **ones** for class labels: $D_{loss\_real}$. - a minibatch of images from the generator with a vector of **zeros** for class labels: $D_{loss\_fake}$. 
- a minibatch of images from the generator with a vector of **ones** for class labels: $G_{loss}$.
The **generator** takes as input:
- a minibatch of vectors sampled from the latent space and transforms them into a minibatch of generated images
###Code
def gan_model_loss(X, Z, discriminator, generator):
    # Use the networks passed in as arguments
    G_sample = generator.forward(Z)
    D_real, D_logit_real = discriminator.forward(X)
    D_fake, D_logit_fake = discriminator.forward(G_sample)

    # Real images should be classified as real (labels = 1)
    D_loss_real = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(
            logits=D_logit_real, labels=tf.ones_like(D_logit_real)))
    # Generated images should be classified as fake (labels = 0)
    D_loss_fake = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(
            logits=D_logit_fake, labels=tf.zeros_like(D_logit_fake)))
    D_loss = D_loss_real + D_loss_fake

    # The generator wants its images to be classified as real (labels = 1)
    G_loss = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(
            logits=D_logit_fake, labels=tf.ones_like(D_logit_fake)))
    return G_sample, D_loss, G_loss
###Output _____no_output_____ ###Markdown Intermezzo: sigmoid cross entropy with logits
We defined the loss of the model as the log of the probability, but we are not using a $log$ function or the model probabilities anywhere. How so?
Enter sigmoid cross entropy with logits: https://www.tensorflow.org/api_docs/python/tf/nn/sigmoid_cross_entropy_with_logits
The exact formulation it computes is given in the tensorflow documentation.
Putting it all together
###Code
X = tf.placeholder(tf.float32, name="input", shape=[None, image_dim])
Z = tf.placeholder(tf.float32, name="latent_sample", shape=[None, random_sample_dim])

G_sample, D_loss, G_loss = gan_model_loss(X, Z, d, g)

with tf.variable_scope('optim'):
    # Each optimizer only updates the variables of its own network
    D_solver = tf.train.AdamOptimizer(name='discriminator').minimize(D_loss, var_list=d.theta)
    G_solver = tf.train.AdamOptimizer(name='generator').minimize(G_loss, var_list=g.theta)

saver = tf.train.Saver()

# Some runtime parameters predefined for you
minibatch_size = 128  # The size of the minibatch
num_epoch = 500  # For how many epochs do we run the training
plot_every_epochs = 5  # After this many epochs we will save & display samples of generated images
print_every_batches = 1000  # After this many minibatches we will print the losses

restore = False
checkpoint = 'fc_2layer_e100_2.170.ckpt'

model = 'gan'
model_save_folder = os.path.join('data', 'chkp', model)
print ("Model checkpoints will be saved to:", model_save_folder)
image_save_folder = os.path.join('data', 'model_output', model)
print ("Image samples will be saved to:", image_save_folder)

minibatch_counter = 0
epoch_counter = 0
d_losses = []
g_losses = []

with tf.device("/gpu:0"), tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    if restore:
        saver.restore(sess, os.path.join(model_save_folder, checkpoint))
        print("Restored model:", checkpoint, "from:", model_save_folder)

    while epoch_counter < num_epoch:
        new_epoch, X_mb = mnist.next_batch(minibatch_size)

        # Alternate one discriminator update and one generator update per minibatch
        _, D_loss_curr = sess.run([D_solver, D_loss], feed_dict={
            X: X_mb,
            Z: sample_Z(minibatch_size, random_sample_dim)
        })
        _, G_loss_curr = sess.run([G_solver, G_loss], feed_dict={
            Z: sample_Z(minibatch_size, random_sample_dim)
        })

        # Plotting and saving images and the model
        if new_epoch and epoch_counter % plot_every_epochs == 0:
            samples = sess.run(G_sample, feed_dict={Z: sample_Z(16, random_sample_dim)})

            fig = util.plot(samples)
            figname = '{}.png'.format(str(minibatch_counter).zfill(3))
            plt.savefig(os.path.join(image_save_folder, figname), bbox_inches='tight')
            plt.show()
            plt.close(fig)

            im = util.plot_single(samples[0], epoch_counter)
            plt.savefig(os.path.join(image_save_folder, 'single_' + figname), bbox_inches='tight')
            plt.show()

            chkpname = "fc_2layer_e{}_{:.3f}.ckpt".format(epoch_counter, G_loss_curr)
            saver.save(sess, os.path.join(model_save_folder, chkpname))

        # Printing runtime statistics
        if minibatch_counter % print_every_batches == 0:
            print('Epoch: {}/{}'.format(epoch_counter, num_epoch))
            print('Iter: {}/{}'.format(mnist.position_in_epoch, mnist.n))
            print('Discriminator loss: {:.4}'.format(D_loss_curr))
            print('Generator loss: {:.4}'.format(G_loss_curr))
            print()

        # Bookkeeping
        minibatch_counter += 1
        if new_epoch:
            epoch_counter += 1
        d_losses.append(D_loss_curr)
        g_losses.append(G_loss_curr)

    chkpname = "fc_2layer_e{}_{:.3f}.ckpt".format(epoch_counter, G_loss_curr)
    saver.save(sess, os.path.join(model_save_folder, chkpname))

disc_line, = plt.plot(range(len(d_losses[:10000])), d_losses[:10000], c='b', label="Discriminator loss")
gen_line, = plt.plot(range(len(d_losses[:10000])), g_losses[:10000], c='r', label="Generator loss")
plt.legend([disc_line, gen_line], ["Discriminator loss", "Generator loss"])
###Output _____no_output_____
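###Markdown The sigmoid cross entropy intermezzo above can be checked numerically: `tf.nn.sigmoid_cross_entropy_with_logits` computes $max(x, 0) - xz + log(1 + e^{-|x|})$, which for labels $z=1$ reduces to $-log(D(x))$ and for $z=0$ to $-log(1 - D(x))$, exactly the terms in the loss derivation. A quick sanity check of that identity in plain NumPy:
###Code
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sce_with_logits(logits, labels):
    # Numerically stable form used by tf.nn.sigmoid_cross_entropy_with_logits
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

logits = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])

# labels = 1  ->  -log(sigmoid(x));  labels = 0  ->  -log(1 - sigmoid(x))
print(np.allclose(sce_with_logits(logits, np.ones_like(logits)), -np.log(sigmoid(logits))))
print(np.allclose(sce_with_logits(logits, np.zeros_like(logits)), -np.log(1.0 - sigmoid(logits))))
###Output _____no_output_____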
Db2_Data_Management_Console_Run_Workload_Two.ipynb
###Markdown Run SQL Workloads
This Jupyter Notebook contains code to run SQL workloads across databases. The Db2 Data Management Console is more than a graphical user interface. It is a set of microservices that you can use to build custom applications to automate your use of Db2.
This Jupyter Notebook contains examples of how to use the Open APIs and the composable interface that are available in the Db2 Data Management Console. Everything in the User Interface is also available through an open and fully documented RESTful Services API. The full set of APIs is documented as part of the Db2 Data Management Console user interface. In this hands-on lab you can connect to the documentation directly through this link: [Db2 Data Management Console RESTful APIs](http://localhost:11080/dbapi/api/index_enterprise.html). You can also embed elements of the full user interface into an IFrame by constructing the appropriate URL.
This hands-on lab will be calling the Db2 Data Management Console as a service. However, you can explore it through the user interface as well. Just click on the following link to try out the console that is already set up in this lab: http://localhost:11080/console. If you have not already logged in you can use the following:
* Userid: db2inst1
* Password: db2inst1
Import Helper Classes
For more information on these classes, see the lab: [Automate Db2 with Open Console Services](http://localhost:8888/notebooks/Db2_Data_Management_Console_Overview.ipynb)
###Code
%run ./dmc_setup.ipynb
###Output _____no_output_____ ###Markdown Db2 Data Management Console Connection
To connect to the Db2 Data Management Console service you need to provide the URL, the service name (v4), and the console user name and password, as well as the name of the connection profile used in the console to connect to the database you want to work with. For this lab we are assuming that the following values are used for the connection:
* Userid: db2inst1
* Password: db2inst1
* Connection: sample
**Note:** If the Db2 Data Management Console has not completed initialization, the connection below will fail. Wait for a few moments and then try it again.
###Code
# Connect to the Db2 Data Management Console service
Console = 'http://localhost:11080'
profile = 'SAMPLE'
user = 'DB2INST1'
password = 'db2inst1'

# Set up the required connection
profileURL = "?profile="+profile
databaseAPI = Db2(Console+'/dbapi/v4')
if databaseAPI.authenticate(user, password, profile) :
    print("Token Created")
else :
    print("Token Creation Failed")
database = Console
###Output Token Created ###Markdown Confirm the connection
To confirm that your connection is working you can check the status of the monitoring service. You can also check your console connection to get the details of the specific database connection you are working with. Since your console user id and password may be limited as to which databases they can access, you need to provide the connection profile name to drill down on any detailed information for the database.
###Code # List Monitoring Profile r = databaseAPI.getProfile(profile) json = databaseAPI.getJSON(r) print(json) ###Output {'name': 'SAMPLE', 'disableDataCollection': 'false', 'databaseVersion': '11.5.0', 'databaseName': 'SAMPLE', 'timeZone': '-50000', 'DB2Instance': 'db2inst1', 'db2license': 'AESE,DEC', 'isInstPureScale': 'false', 'databaseVersion_VRMF': '11.5.0.0', 'sslConnection': 'false', 'userProfileRole': 'OWNER', 'timeZoneDiff': '0', 'host': 'localhost', '_PROFILE_INIT_': 'true', 'dataServerType': 'DB2LUW', 'port': '50000', 'URL': 'jdbc:db2://localhost:50000/SAMPLE', 'edition': 'AESE,DEC', 'isInstPartitionable': 'false', 'dataServerExternalType': 'DB2LUW', 'capabilities': '["DSM_ENTERPRISE_LUW"]', 'OSType': 'Linux', 'location': ''} ###Markdown SQL Scripts Used to Generate WorkWe are going to define a few scripts that we will use during this lab. ###Code sqlScriptWorkload1 = \ ''' WITH SALARYBY (DEPTNO, TOTAL) AS (SELECT DEPT.DEPTNO DNO, SUM(BIGINT(EMP.SALARY)) TOTAL_SALARY FROM EMPLOYEES EMP, DEPARTMENTS DEPT WHERE DEPT.DEPTNO = EMP.DEPTNO AND EMP.SALARY > 190000 GROUP BY DEPT.DEPTNO ORDER BY DNO) SELECT DEPT.DEPTNAME NAME, SALARYBY.TOTAL COST, DEPT.REVENUE, DEPT.REVENUE-SALARYBY.TOTAL PROFIT FROM SALARYBY, DEPARTMENTS DEPT WHERE DEPT.DEPTNO = SALARYBY.DEPTNO AND REVENUE > TOTAL ORDER BY PROFIT ''' print("Defined Workload 1 Script") sqlScriptWorkload2 = \ ''' SELECT DEPT.DEPTNO DNO, SUM(FLOAT(EMP.SALARY)) TOTAL_SALARY FROM EMPLOYEES EMP, DEPARTMENTS DEPT WHERE DEPT.DEPTNO = EMP.DEPTNO AND EMP.SALARY < 50000 AND YEAR(EMP.HIREDATA) > 2010 GROUP BY DEPT.DEPTNO ORDER BY DNO; SELECT DEPT.DEPTNO DNO, SUM(FLOAT(EMP.SALARY)) TOTAL_SALARY FROM EMPLOYEES EMP, DEPARTMENTS DEPT WHERE DEPT.DEPTNO = EMP.DEPTNO AND EMP.SALARY < 190000 AND YEAR(EMP.HIREDATA) > 2010 GROUP BY DEPT.DEPTNO ORDER BY DNO; SELECT DEPT.DEPTNO, DEPT.REVENUE FROM DEPARTMENTS DEPT WHERE DEPT.REVENUE > 450000000; ''' print("Defined Workload 2 Script") ###Output Defined Workload 2 Script ###Markdown Creating a Routine to Run an SQL ScriptTo make things easier we can create reusable routines that will included everything we have developed so far. By running the next two steps, you create two routines that you can call by passing parameters to them. While we could create a single routine to run SQL and then display the results, we are creating two different routines so that we can display the results differently later in the lab. 
###Code
def runSQL(profile, user, password, sqlText):
    if databaseAPI.authenticate(user, password, profile) :
        # Run the SQL Script and return the runID for later reference
        runID = databaseAPI.getJSON(databaseAPI.runSQL(sqlText))['id']
        # See if there are any results yet for this job
        json = databaseAPI.getJSON(databaseAPI.getSQLJobResult(runID))
        # If the REST call returns an error return the json with the error to the calling routine
        if 'errors' in json :
            return json
        # Append the results from each statement in the script to the overall combined JSON result set
        fulljson = json
        while json['results'] != [] or (json['status'] != "completed" and json['status'] != "failed") :
            json = databaseAPI.getJSON(databaseAPI.getSQLJobResult(runID))
            # Get the results from each statement as they return and append the results to the full JSON
            for results in json['results'] :
                fulljson['results'].append(results)
            # Wait 250 ms for more results
            time.sleep(0.25)
        return fulljson
    else :
        print('Could not authenticate')
print('runSQL routine defined')

def displayResults(json):
    for results in json['results']:
        print('Statement: '+str(results['index'])+': '+results['command'])
        print('Runtime ms: '+str(results['runtime_seconds']*1000))
        if 'error' in results :
            print(results['error'])
        elif 'rows' in results :
            df = pd.DataFrame(results['rows'],columns=results['columns'])
            print(df)
        else :
            print('No errors. Rows Affected: '+str(results['rows_affected']))
        print()
print('displayResults routine defined')
###Output displayResults routine defined ###Markdown Running multiple scripts across multiple databases - Summarized Results
Now that we have our tables created on both databases, we can run workloads and measure their performance. By repeatedly running the scripts across multiple databases in a single Db2 instance we can simulate a real database environment. Instead of using the displayResults routine, we are going to capture runtime metrics for each run of the SQL query workloads so that we can analyze performance. The appendResults routine builds this dataframe with each pass. runScripts lets us define the database connection profiles we want to run against, the scripts to run, and how many times to repeat the runs for each profile and for each script.
###Code
# This routine builds up a Data Frame containing the run results as we run workloads across databases
def appendResults(df, profile, json) :
    error = ''
    rows = 0
    if 'error' in json :
        print('SQL Service Failed')
    else :
        for results in json['results']:
            if 'error' in results :
                error = results['error']
            if 'rows_affected' in results :
                rows = results['rows_affected']
            df = df.append({'profile':profile,
                            'index':results['index'],
                            'statement':results['command'],
                            'error':error,
                            'rows_affected': rows,
                            'runtime_ms':(results['runtime_seconds']*1000)}, ignore_index=True)
    return df
print('appendResults routine defined')

# This routine runs multi-statement scripts across multiple databases.
# The scripts are run repeatedly for each profile (database)
def runScripts(profileList, scriptList, user, password, profileReps, scriptReps) :
    df = pd.DataFrame(columns=['profile', 'index', 'statement', 'error', 'rows_affected', 'runtime_ms'])
    for x in range(0, profileReps):
        print("Running repetition: "+str(x))
        for profile in profileList :
            print(" Running scripts against: "+profile)
            for y in range(0, scriptReps) :
                print(" Running script repetition: "+str(y))
                for script in scriptList :
                    json = runSQL(profile, user, password, script)
                    while 'errors' in json:
                        print(' * Trying again *')
                        json = runSQL(profile, user, password, script)
                    df = appendResults(df, profile, json)
    return df
print('runScripts routine defined')
###Output runScripts routine defined ###Markdown The next cell loops through a list of databases as well as a list of scripts and runs them repeatedly. You can set the number of times the scripts are run against each database and the number of times the runs against both databases are repeated.
###Code
profileList = ['SAMPLE','HISTORY']
scriptList = [sqlScriptWorkload1, sqlScriptWorkload2]
user = 'DB2INST1'
password = 'db2inst1'
profileReps = 20
scriptReps = 5
df = runScripts(profileList, scriptList, user, password, profileReps, scriptReps)
display(df)
###Output Running repetition: 0 Running scripts against: SAMPLE Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running scripts against: HISTORY Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running repetition: 1 Running scripts against: SAMPLE Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running scripts against: HISTORY Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running repetition: 2 Running scripts against: SAMPLE Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running scripts against: HISTORY Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running repetition: 3 Running scripts against: SAMPLE Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running scripts against: HISTORY Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running repetition: 4 Running scripts against: SAMPLE Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running scripts against: HISTORY Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running repetition: 5 Running scripts against: SAMPLE Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running scripts against: HISTORY Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4
Running repetition: 6 Running scripts against: SAMPLE Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running scripts against: HISTORY Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running repetition: 7 Running scripts against: SAMPLE Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running scripts against: HISTORY Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running repetition: 8 Running scripts against: SAMPLE Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running scripts against: HISTORY Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running repetition: 9 Running scripts against: SAMPLE Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running scripts against: HISTORY Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running repetition: 10 Running scripts against: SAMPLE Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running scripts against: HISTORY Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running repetition: 11 Running scripts against: SAMPLE Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running scripts against: HISTORY Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running repetition: 12 Running scripts against: SAMPLE Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running scripts against: HISTORY Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running repetition: 13 Running scripts against: SAMPLE Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running scripts against: HISTORY Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running repetition: 14 Running scripts against: SAMPLE Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running scripts against: HISTORY Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running repetition: 15 Running scripts against: SAMPLE Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script 
repetition: 3 Running script repetition: 4 Running scripts against: HISTORY Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running repetition: 16 Running scripts against: SAMPLE Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running scripts against: HISTORY Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running repetition: 17 Running scripts against: SAMPLE Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running scripts against: HISTORY Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running repetition: 18 Running scripts against: SAMPLE Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running scripts against: HISTORY Running script repetition: 0 Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 Running script repetition: 4 Running repetition: 19 Running scripts against: SAMPLE Running script repetition: 0 * Trying again * Running script repetition: 1 Running script repetition: 2 Running script repetition: 3 ###Markdown Analyze Results
Now we can use the results in the dataframe to analyze the runs statistically. First we can see the average runtime for each statement across the databases.
###Code
print('Mean runtime in ms')
pd.set_option('display.max_colwidth', 100)
stmtMean = df.groupby(['statement']).mean()
print(stmtMean)
###Output _____no_output_____ ###Markdown We can also display the total runtime for each statement across databases.
###Code
print('Total runtime in ms')
pd.set_option('display.max_colwidth', 100)
stmtSum = df.groupby(['statement']).sum()
print(stmtSum)
###Output _____no_output_____ ###Markdown We can even graph the total run time for all the statements and compare database performance. Since there are more rows in the employees table in the SAMPLE database it takes longer for the queries to run.
###Code
print('Total runtime in ms')
pd.set_option('display.max_colwidth', 100)
profileSum = df.groupby(['profile']).sum()
profileSum.plot(kind='bar')
plt.show()
###Output _____no_output_____
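###Markdown Beyond means and totals, the distribution of runtimes is often more revealing when comparing databases. One natural extension is to summarize percentiles of the same dataframe with standard pandas:
###Code
# Median, 95th percentile and worst-case runtime per database profile
runtime_stats = df.groupby('profile')['runtime_ms'].describe(percentiles=[0.5, 0.95])
print(runtime_stats[['mean', '50%', '95%', 'max']])
###Output _____no_output_____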
.ipynb_checkpoints/Day2 Exercise 3.2-checkpoint.ipynb
###Markdown Exercise 3.2: Olympics ###Code import requests from bs4 import BeautifulSoup ###Output _____no_output_____ ###Markdown - Retrieve a list of Olympic **summer sports** using requests.py and beautifulsoup.py and print them ###Code www = 'https://www.olympic.org' r = requests.get(f'{www}/sport') soup = BeautifulSoup(r.text,'lxml') for sport in soup.select('ul.countries li'): print(sport.text.strip()) ###Output Archery Athletics Badminton Basketball Beach Volleyball Boxing Canoe Slalom Canoe Sprint Cycling BMX Cycling Mountain Bike Cycling Road Cycling Track Diving Equestrian / Dressage Equestrian Eventing Equestrian Jumping Fencing Football Golf Gymnastics Artistic Gymnastics Rhythmic Handball Hockey Judo Marathon Swimming Modern Pentathlon Rowing Rugby Sailing Shooting Swimming Synchronized Swimming Table Tennis Taekwondo Tennis Trampoline Triathlon Volleyball Water Polo Weightlifting Wrestling Freestyle Wrestling Greco-Roman Alpine Skiing Biathlon Bobsleigh Cross Country Skiing Curling Figure skating Freestyle Skiing Ice Hockey Luge Nordic Combined Short Track Speed Skating Skeleton Ski Jumping Snowboard Speed skating ###Markdown - For each summer sport, retrieve all **men’s** and **women’s events** (one level deeper) and print them ###Code # First try your code for one sport r = requests.get('https://www.olympic.org/archery') s = BeautifulSoup(r.text, 'lxml') for sp in s.select('section.sport-events ul li'): print(sp.text.strip()) # Then loop through all sports for sport in soup.select('ul.countries li'): sp = sport.a['href'] r = requests.get(f'{www}{sp}') s = BeautifulSoup(r.text,'lxml') for i in s.select('section.sport-events ul li'): print(sport.text.strip(),i.text.strip()) ###Output Archery individual (Olympic round - 70m) men Archery team (Olympic round - 70m) men Archery individual (Olympic round - 70m) women Archery team (Olympic round - 70m) women Athletics 10000m men Athletics 100m men Athletics 110m hurdles men Athletics 1500m men Athletics 200m men Athletics 20km race walk men Athletics 3000m steeplechase men Athletics 400m hurdles men Athletics 400m men Athletics 4x100m relay men Athletics 4x400m relay men Athletics 5000m men Athletics 50km race walk men Athletics 800m men Athletics decathlon men Athletics discus throw men Athletics hammer throw men Athletics high jump men Athletics javelin throw men Athletics long jump men Athletics marathon men Athletics pole vault men Athletics shot put men Athletics triple jump men Athletics 10000m women Athletics 100m hurdles women Athletics 100m women Athletics 1500m women Athletics 200m women Athletics 20km race walk women Athletics 3000m steeplechase women Athletics 400m hurdles women Athletics 400m women Athletics 4x100m relay women Athletics 4x400m relay women Athletics 5000m women Athletics 800m women Athletics discus throw women Athletics hammer throw women Athletics heptathlon women Athletics high jump women Athletics javelin throw women Athletics long jump women Athletics marathon women Athletics pole vault women Athletics shot put women Athletics triple jump women Badminton doubles men Badminton singles men Badminton doubles women Badminton singles women Badminton doubles mixed Basketball basketball men Basketball basketball women Beach Volleyball Tournament men Beach Volleyball tournament women Boxing + 91kg (super heavyweight) men Boxing 46 - 49 kg (Light Fly weight) men Boxing Up to 52 kg (Fly weight) men Boxing Up to 56 kg (Bantam weight) men Boxing Up to 60 kg (Light weight) men Boxing Up to 64 kg (Light Welter 
weight) men Boxing Up to 69 kg (Welter weight) men Boxing Up to 75 kg (Middle weight) men Boxing Up to 81 kg (Light Heavy weight) men Boxing Up to 91 kg (Heavy weight) men Boxing 48 to 51 kg (Fly weight) women Boxing 57 to 60 kg (Light weight) women Boxing 69 to 75 kg (Middle weight) women Canoe Slalom C-1 (canoe single) men Canoe Slalom C-2 (canoe double) men Canoe Slalom K-1 (kayak single) men Canoe Slalom K-1 (kayak single) women Canoe Sprint C-1 1000m (canoe single) men Canoe Sprint C-1 200m (canoe single) men Canoe Sprint C-2 1000m (canoe double) men Canoe Sprint K-1 1000m (kayak single) men Canoe Sprint K-1 200m (kayak single) men Canoe Sprint K-2 1000m (kayak double) men Canoe Sprint K-2 200m (kayak double) men Canoe Sprint K-4 1000m (kayak four) men Canoe Sprint K-1 200m (kayak single) women Canoe Sprint K-1 500m (kayak single) women Canoe Sprint K-2 500m (kayak double) women Canoe Sprint K-4 500m (kayak four) women Cycling BMX Race men Cycling BMX Race women Cycling Mountain Bike cross-country men Cycling Mountain Bike cross-country women Cycling Road individual time trial men Cycling Road Road race men Cycling Road individual time trial women Cycling Road Road race women Cycling Track Keirin men Cycling Track Omnium men Cycling Track Sprint men Cycling Track Team Pursuit men Cycling Track Team Sprint men Cycling Track Keirin women Cycling Track Omnium women Cycling Track Sprint women Cycling Track Team Pursuit women Cycling Track Team Sprint women Diving 10m platform men Diving 3m springboard men Diving synchronized diving 10m platform men Diving synchronized diving 3m springboard men Diving 10m platform women Diving 3m springboard women Diving synchronized diving 10m platform women Diving synchronized diving 3m springboard women Equestrian / Dressage individual mixed Equestrian / Dressage team mixed Equestrian Eventing individual mixed Equestrian Eventing team mixed Equestrian Jumping individual mixed Equestrian Jumping team mixed Fencing épée individual men Fencing épée team men Fencing foil individual men Fencing sabre individual men Fencing sabre team men Fencing épée individual women Fencing foil individual women Fencing foil team women Fencing sabre individual women Fencing sabre team women Football 16 team tournament men Football 12 team tournament women Golf stroke play men Golf stroke play women Gymnastics Artistic floor exercises men Gymnastics Artistic horizontal bar men Gymnastics Artistic individual all-round men Gymnastics Artistic parallel bars men Gymnastics Artistic pommel horse men Gymnastics Artistic rings men Gymnastics Artistic team competition men Gymnastics Artistic vault men Gymnastics Artistic beam women Gymnastics Artistic floor exercises women Gymnastics Artistic individual all-round women Gymnastics Artistic team competition women Gymnastics Artistic uneven bars women Gymnastics Artistic vault women Gymnastics Rhythmic group all-around competition Gymnastics Rhythmic individual all-round competition Handball 12 team tournament men Handball 12 team tournament women Hockey 12 team tournament men Hockey 12 team tournament women Judo - 60 kg men Judo + 100kg (heavyweight) men Judo 60 - 66kg (half-lightweight) men Judo 66 - 73kg (lightweight) men Judo 73 - 81kg (half-middleweight) men Judo 81 - 90kg (middleweight) men Judo 90 - 100kg (half-heavyweight) men Judo - 48kg (extra-lightweight) women Judo + 78kg (heavyweight) women Judo 48 - 52kg (half-lightweight) women Judo 52 - 57kg (lightweight) women Judo 57 - 63kg (half-middleweight) women Judo 63 - 70kg 
(middleweight) women Judo 70 - 78kg (half-heavyweight) women Marathon Swimming Marathon - 10 km Marathon Swimming Marathon - 10 km Modern Pentathlon Individual competition men Modern Pentathlon Individual competition women Rowing coxless pair (2-) men Rowing double sculls (2x) men Rowing eight with coxswain (8+) men Rowing four without coxswain (4-) men Rowing lightweight coxless four (4-) men Rowing lightweight double sculls (2x) men Rowing quadruple sculls without coxsw men Rowing single sculls (1x) men Rowing double sculls (2x) women Rowing eight with coxswain (8+) women Rowing lightweight double sculls (2x) women Rowing pair without coxswain (2-) women Rowing quadruple sculls without coxsw women Rowing single sculls (1x) women Rugby rugby-7 men Rugby rugby-7 women Sailing 470 - Two Person Dinghy men Sailing 49er - Skiff men Sailing Finn - One Person Dinghy (Heavyweight) men Sailing Laser - One Person Dinghy men Sailing RS:X - Windsurfer men Sailing 470 - Two Person Dinghy women Sailing 49er FX Women Sailing Laser Radial - One Person Dinghy women Sailing RS:X - Windsurfer women Sailing Nacra 17 Mixed Shooting 10m air pistol men Shooting 10m air rifle men Shooting 25m rapid fire pistol men Shooting 50m pistol men Shooting 50m rifle 3 positions men Shooting 50m rifle prone men Shooting double trap men Shooting skeet men Shooting trap men Shooting 10m air pistol women Shooting 10m air rifle women Shooting 25m pistol women Shooting 50m rifle 3 positions women Shooting skeet women Shooting trap women Swimming 100m backstroke men Swimming 100m breaststroke men Swimming 100m butterfly men Swimming 100m freestyle men Swimming 1500m freestyle men Swimming 200m backstroke men Swimming 200m breaststroke men Swimming 200m butterfly men Swimming 200m freestyle men Swimming 200m individual medley men Swimming 400m freestyle men Swimming 400m individual medley men Swimming 4x100m freestyle relay men Swimming 4x100m medley relay men Swimming 4x200m freestyle relay men Swimming 50m freestyle men Swimming marathon 10km men Swimming 100m backstroke women Swimming 100m breaststroke women Swimming 100m butterfly women Swimming 100m freestyle women Swimming 200m backstroke women Swimming 200m breaststroke women Swimming 200m butterfly women Swimming 200m freestyle women Swimming 200m individual medley women Swimming 400m freestyle women Swimming 400m individual medley women Swimming 4x100m freestyle relay women Swimming 4x100m medley relay women Swimming 4x200m freestyle relay women Swimming 50m freestyle women Swimming 800m freestyle women Swimming marathon 10km women Synchronized Swimming duet Synchronized Swimming team Table Tennis singles men Table Tennis team men Table Tennis singles women Table Tennis team women Taekwondo - 58 kg men Taekwondo + 80 kg men Taekwondo 58 - 68 kg men Taekwondo 68 - 80 kg men Taekwondo - 49 kg women Taekwondo + 67 kg women Taekwondo 49 - 57 kg women Taekwondo 57 - 67 kg women Tennis doubles men Tennis singles men Tennis doubles women Tennis singles women Tennis doubles mixed ###Markdown - Option: retrieve a list of all **cities** where summer Olympic games have been held together with their **year**, number of **athletes** and number of participating **countries**. ###Code # Try your code for one O.G. 
r = requests.get('https://www.olympic.org/rio-2016') s = BeautifulSoup(r.text, 'html.parser') a = s.select_one('div.frame ul') a = a.select('li') date = a[0].findAll(text=True)[3].strip() city = a[1].findAll(text=True)[4].strip() athletes = a[2].findAll(text=True)[3].strip() countries = a[3].findAll(text=True)[3].strip() print(date, city, athletes, countries) # Then loop through all O.G.'s www = 'https://www.olympic.org' r = requests.get(f'{www}/summer-games') s = BeautifulSoup(r.text, 'lxml') g = s.select('section.games-section a') for i in g: sg = i.get('href', '') r = requests.get(f'{www}{sg}') s = BeautifulSoup(r.text, 'html.parser') a = s.select_one('div.frame ul') a = a.select('li') if len(a) > 3: date = a[0].findAll(text=True)[3].strip() city = a[1].findAll(text=True)[4].strip() athletes = a[2].findAll(text=True)[3].strip() countries = a[3].findAll(text=True)[3].strip() print(i.text.strip(), date, city, athletes, countries) ###Output Paris 2024 26 Jul - 11 Aug France Tokyo 2020 24 Jul - 09 Aug Japan Rio 2016 05 Aug - 21 Aug Brazil 11238 207 London 2012 27 Jul - 12 Aug Great Britain 10568 204 Beijing 2008 08 Aug - 24 Aug People's Republic of China 10942 204 Athens 2004 13 Aug - 29 Aug Greece 10625 201 Sydney 2000 15 Sep - 01 Oct Australia 10651 199 Atlanta 1996 19 Jul - 04 Aug United States of America 10318 197 Barcelona 1992 25 Jul - 09 Aug Spain 9356 169 Seoul 1988 17 Sep - 02 Oct Republic of Korea 8397 159 Los Angeles 1984 28 Jul - 12 Aug United States of America 6829 140 Moscow 1980 19 Jul - 03 Aug USSR 5179 80 Montreal 1976 17 Jul - 01 Aug Canada 6084 92 Munich 1972 26 Aug - 11 Sep Germany 7134 121 Mexico 1968 12 Oct - 27 Oct Mexico 5516 112 Tokyo 1964 10 Oct - 24 Oct Japan 5151 93 Rome 1960 25 Aug - 11 Sep Italy 5338 83 Melbourne - Stockholm 1956 22 Nov - 08 Dec Australia 3314 72 Helsinki 1952 19 Jul - 03 Aug Finland 4955 69 London 1948 29 Jul - 14 Aug Great Britain 4104 59 Berlin 1936 01 Aug - 16 Aug Germany 3963 49 Los Angeles 1932 30 Jul - 14 Aug United States of America 1334 37 Amsterdam 1928 17 May - 12 Aug Netherlands 2883 46 Paris 1924 04 May - 27 Jul France 3088 44 Antwerp 1920 20 Apr - 12 Sep Belgium 2622 29 Stockholm 1912 05 May - 27 Jul Sweden 2407 28 London 1908 27 Apr - 31 Oct Great Britain 2008 22 St Louis 1904 01 Jul - 23 Nov United States of America 651 12 Paris 1900 14 May - 28 Oct France 997 24 Athens 1896 06 Apr - 15 Apr Greece 241 14
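###Markdown One way to take this further is to collect the scraped values into a pandas DataFrame instead of printing them, so they can be sorted and analyzed. The sketch below reuses the exact selectors from the cell above; the column names are illustrative choices, and the `host` field actually holds the host country name, as the printed output shows.
###Code
import pandas as pd

rows = []
r = requests.get(f'{www}/summer-games')
s = BeautifulSoup(r.text, 'lxml')
for g in s.select('section.games-section a'):
    r = requests.get(f"{www}{g.get('href', '')}")
    detail = BeautifulSoup(r.text, 'html.parser')
    facts = detail.select_one('div.frame ul').select('li')
    if len(facts) > 3:
        rows.append({'games': g.text.strip(),
                     'date': facts[0].findAll(text=True)[3].strip(),
                     'host': facts[1].findAll(text=True)[4].strip(),
                     'athletes': facts[2].findAll(text=True)[3].strip(),
                     'countries': facts[3].findAll(text=True)[3].strip()})

games_df = pd.DataFrame(rows)
games_df.head()
###Output _____no_output_____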
docs/examples/nbs/timeVariation.ipynb
###Markdown Time Variation
> In air pollution, the variation of a pollutant by time of day and day of week can reveal useful information concerning the likely sources. For example, road vehicle emissions tend to follow very regular patterns both on a daily and weekly basis. By contrast, some industrial emissions or pollutants from natural sources (e.g. sea salt aerosol) may well have very different patterns.
Standard libraries to be imported for usage
###Code
import pandas as pd
from vayu import timeVariation
###Output _____no_output_____ ###Markdown Reading the data and using the function from the library.
###Code
df = pd.read_csv("../data/mydata.csv")
timeVariation(df, ['pm10','no2','pm25', 'so2'])
###Output /home/apoorv/Desktop/github/.env/lib/python3.6/site-packages/vayu-0.0.134-py3.6.egg/vayu/timeVariation.py:50: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy df_day["hour"] = df_day['date'].dt.hour
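###Markdown The `SettingWithCopyWarning` above is raised inside the library because a new column is assigned on a slice of the original dataframe. It is typically harmless, but the usual pandas remedy is to work on an explicit copy. A minimal sketch of that pattern, assuming a parseable `date` column as the traceback suggests:
###Code
df_day = df[['date', 'pm10']].copy()            # explicit copy instead of a view
df_day['date'] = pd.to_datetime(df_day['date'])
df_day['hour'] = df_day['date'].dt.hour         # no warning when assigning on a copy
###Output _____no_output_____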
ThinkingLikeAScientist/PowerOfTest.ipynb
###Markdown Power of Tests
As you have already seen, there is a significant risk of finding falsely significant results when testing hypotheses. On the other hand, there is also a significant chance that an actual effect will not be detected. We call the ability of a test to detect an actual effect the **power** of the test.
>The **power of a test** is formally defined as:
$$power = P(reject\ H_0\ |\ H_a\ is\ true)$$
>In plain language, the power is the probability of getting a positive result when the null hypothesis is not true. Conversely, a test with insufficient power will not detect a real effect. Clearly, we want the most powerful test we can find for the situation.
>Computing test power can be a bit complex, and analytical solutions can be difficult or impossible. Often, a simulation is used to compute power.
In this exercise you will compute the power for a two sample t-test. The power of this test depends on several parameters:
- The number of samples.
- The anticipated difference in the population means, which we call the **effect size**.
- The significance level chosen.
When running a power test, you can ask several questions, which will assist you in designing an experiment. Usually, you will determine how big a sample you need to detect an effect of the expected size. You can also determine how big an effect needs to be given a fixed sample size (all the samples you have or can afford) to detect an effect of the expected size.
The Python [statsmodels package](https://www.statsmodels.org/dev/generated/statsmodels.stats.power.tt_ind_solve_power.html) provides power calculations for a limited set of hypothesis tests. We can use these capabilities to examine the characteristics of test power.
Execute the code in the cell below to import the packages you will need for this exercise.
###Code
import pandas as pd
import numpy as np
import numpy.random as nr
import statsmodels.api as sm
import matplotlib.pyplot as plt
%matplotlib inline
###Output _____no_output_____ ###Markdown The code in the cell below does the following:
- Create a sequence of effect sizes.
- Compute a vector of power values with a fixed sample size and cutoff value.
- Plot the power vs. effect size.
Execute this code and examine the result.
###Code
import statsmodels.stats.power as smsp
nr.seed(seed=23344)
diffs = np.arange(start = 0.0, stop = 0.5, step = .01)
powers = [smsp.tt_ind_solve_power(effect_size = x, nobs1 = 1000, alpha = 0.05,
                                  power = None, ratio = 1.0, alternative = 'two-sided')
          for x in diffs]

def plot_power(x, y, xlabel, title):
    import matplotlib.pyplot as plt
    plt.plot(x, y, color = 'red', linewidth = 2)
    plt.title(title)
    plt.xlabel(xlabel)
    plt.ylabel('Power')
plot_power(diffs, powers, xlabel = 'Difference of means', title = 'Power vs. difference of means')
###Output _____no_output_____ ###Markdown Examine the results displayed above. Notice that for small effect sizes the chance of detecting the effect is quite small. The code in the cell below does the following:
- Create a sequence of cut-off values.
- Compute a vector of power values with a fixed sample size and effect size.
- Plot the power vs. cut-off value.
Execute this code and examine the result.
###Code
nr.seed(seed=1234)
alphas = np.arange(start = 0.001, stop = 0.05, step = .001)
powers = [smsp.tt_ind_solve_power(effect_size = 0.1, nobs1 = 1000, alpha = x,
                                  power = None, ratio = 1.0, alternative = 'two-sided')
          for x in alphas]
plot_power(alphas, powers, xlabel = 'significance level', title = 'Power vs. significance level')
###Output _____no_output_____ ###Markdown Examine the results displayed above. Notice that the probability of detecting the effect drops rapidly as the significance level is made smaller. In most cases of experiment design determining a sufficient sample size is of primary importance. In the cell below create the code to do the following:
- Create a sequence of sample sizes from 100 to 5,000 in steps of 100.
- Compute a vector of power values for a fixed effect size of 0.1 and a cutoff of 0.01.
- Plot the power vs. sample size.
Execute this code and examine the result. One possible solution is sketched after the empty cell below.
###Code
nr.seed(seed=1234)
###Output _____no_output_____
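###Markdown One possible solution, reusing `tt_ind_solve_power` and the `plot_power` function defined above:
###Code
nobs = np.arange(start = 100, stop = 5001, step = 100)
powers = [smsp.tt_ind_solve_power(effect_size = 0.1, nobs1 = int(n), alpha = 0.01,
                                  power = None, ratio = 1.0, alternative = 'two-sided')
          for n in nobs]
plot_power(nobs, powers, xlabel = 'Sample size', title = 'Power vs. sample size')
###Output _____no_output_____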
HARNet_All_Models.ipynb
###Markdown Import Statements
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from multiprocessing import Pool, cpu_count
import keras
from keras.models import Sequential, Input, Model
from keras.layers import Dense, Conv1D, Conv2D, MaxPooling1D, MaxPooling2D, Flatten, Dropout
from keras.layers import BatchNormalization, LSTM, Merge, Concatenate, merge, Reshape
from keras.utils import to_categorical
from keras.callbacks import ModelCheckpoint, EarlyStopping
from keras import regularizers
from keras import backend as K
from matplotlib.pyplot import imshow
from scipy.signal import decimate
import pywt
from sklearn.metrics import f1_score
import seaborn as sn
###Output _____no_output_____ ###Markdown Data Extraction
###Code
pa = pd.read_csv("Phones_accelerometer.csv")
# pa=pd.read_csv("Phones_gyroscope.csv")
# wa=pd.read_csv("Watch_accelerometer.csv")
# wg=pd.read_csv("Watch_gyroscope.csv")
print (pa.shape)
print("Done")

pa = pa[pa['gt'] != 'null']
acts = pa['gt'].unique()
users = pa['User'].unique()
devices = pa['Device'].unique()
print(devices)
print (acts)
print (users)

# Map each activity, device and user to an integer index
l = {}
for i, act in enumerate(acts):
    l[act] = i
print (l)

dev = {}
for i, d in enumerate(devices):
    dev[d] = i
print (dev)

use = {}
for i, u in enumerate(users):
    use[u] = i
print (use)

devs, acc, labels = [], [], []
for act in acts:
    pa_act = pa[pa['gt'] == act]
    for device in devices:
        pa_dev = pa_act[(pa_act['Device'] == device)]
        for user in users:
            pa_user = pa_dev[pa_dev['User'] == user]
            pa_user = pa_user[['x', 'y','z']]
            # Window length depends on the sampling rate of each device
            if str(device) == 'nexus4_1' or str(device) == 'nexus4_2':
                min_win = 400
            elif str(device) == 's3_1' or str(device) == 's3_2':
                min_win = 300
            elif str(device) == 's3mini_1' or str(device) == 's3mini_2':
                min_win = 200
            else:
                min_win = 100
            if(pa_user.shape[0] >= min_win):
                acc.append(pa_user.values)
                devs.append(device)
                labels.append(l[act])
    print (f'{act} done')

acc = np.array(acc)
labels = np.array(labels)
devs = np.array(devs)
print ("Done")

print(acc.shape, labels.shape, devs.shape)

acc[390].shape
###Output _____no_output_____ ###Markdown Getting the Windowed Data
###Code
def getWindowedData(index, w_min):
    windowData, windowLabels = [], []
    num_windows = acc[index].shape[0] // w_min
    if num_windows == 0:
        print(acc[index].shape[0], w_min)
    k = 0
    for _ in range(num_windows):
        windowData.append(acc[index][k:k+w_min])
        k += w_min
        windowLabels.append(labels[index])
    return windowData, windowLabels

# Getting 2 seconds of data for all devices (the window length varies with each device's sampling rate)
windowedData = []
for i in range(len(acc)):
    if str(devs[i]) == 'nexus4_1' or str(devs[i]) == 'nexus4_2':
        w_min = 400
    elif str(devs[i]) == 's3_1' or str(devs[i]) == 's3_2':
        w_min = 300
    elif str(devs[i]) == 's3mini_1' or str(devs[i]) == 's3mini_2':
        w_min = 200
    else:
        w_min = 100
    windowedData.append((getWindowedData(i, w_min)))

np.array(windowedData).shape

windowedData = np.array(windowedData)
print (windowedData.shape)
###Output _____no_output_____ ###Markdown Decimating the Windowed Data
###Code
def decimateThatSignal(i):
    decimatedSignalData, decimatedSignalLabels = [], []
    if windowedData[i][0][0].shape[0] == 400:
        for j in range(len(windowedData[i][0])):
            decimatedX = decimate(windowedData[i][0][j][:,0], 4, zero_phase=True)
            decimatedY = decimate(windowedData[i][0][j][:,1], 4, zero_phase=True)
            decimatedZ = decimate(windowedData[i][0][j][:,2], 4, zero_phase=True)
            decimatedSignal
= np.dstack((decimatedX, decimatedY, decimatedZ)) decimatedSignalData.append(decimatedSignal) decimatedSignalLabels.append(windowedData[i][1][j]) return np.array(decimatedSignalData), np.array(decimatedSignalLabels) elif windowedData[i][0][0].shape[0] == 300: for j in range(len(windowedData[i][0])): decimatedX = decimate(windowedData[i][0][j][:,0], 3, zero_phase=True) decimatedY = decimate(windowedData[i][0][j][:,1], 3, zero_phase=True) decimatedZ = decimate(windowedData[i][0][j][:,2], 3, zero_phase=True) decimatedSignal = np.dstack((decimatedX, decimatedY, decimatedZ)) decimatedSignalData.append(decimatedSignal) decimatedSignalLabels.append(windowedData[i][1][j]) return np.array(decimatedSignalData), np.array(decimatedSignalLabels) elif windowedData[i][0][0].shape[0] == 200: for j in range(len(windowedData[i][0])): decimatedX = decimate(windowedData[i][0][j][:,0], 2, zero_phase=True) decimatedY = decimate(windowedData[i][0][j][:,1], 2, zero_phase=True) decimatedZ = decimate(windowedData[i][0][j][:,2], 2, zero_phase=True) decimatedSignal = np.dstack((decimatedX, decimatedY, decimatedZ)) decimatedSignalData.append(decimatedSignal) decimatedSignalLabels.append(windowedData[i][1][j]) return np.array(decimatedSignalData), np.array(decimatedSignalLabels) else: return np.array(windowedData[i][0]), np.array(windowedData[i][1]) decimateThatSignal(0)[0].shape w_min = 100 decimatedData = decimateThatSignal(0)[0].reshape((-1, w_min, 3)) decimatedLabels = decimateThatSignal(0)[1] for i in range(1, len(windowedData)): print (i) decimatedData = np.vstack((decimatedData, decimateThatSignal(i)[0].reshape((-1, w_min, 3)))) decimatedLabels = np.hstack((decimatedLabels, decimateThatSignal(i)[1])) decimatedData = np.array(decimatedData) decimatedLabels = np.array(decimatedLabels) print (decimatedData.shape, decimatedLabels.shape) ###Output _____no_output_____ ###Markdown Getting the DWTed Data ###Code pywt.wavelist() # Change window size here as and when DWT wavelet changes w_min = 50+3 DWTData = [] for i in range(len(decimatedData)): Xca, Xda = pywt.dwt(decimatedData[i].reshape((-1, 3))[:,0], wavelet='db4', mode='periodic') Yca, Yda = pywt.dwt(decimatedData[i].reshape((-1, 3))[:,1], wavelet='db4', mode='periodic') Zca, Zda = pywt.dwt(decimatedData[i].reshape((-1, 3))[:,2], wavelet='db4', mode='periodic') coef = np.hstack((Xca, Yca, Zca)).reshape((-1, w_min, 3)) DWTData.append((coef, decimatedLabels[i])) print (i) DWTData = np.array(DWTData) print(DWTData.shape) a = 46548 a = 34509 print (DWTData[a][0][0].shape) plt.plot(decimatedData[a]) plt.ylabel('Inertial g-values') plt.xlabel(r'Data points in a single $w_a$') # plt.legend(['X-axis','Y-axis','Z-axis'],loc=1) plt.legend(['X-axis','Y-axis','Z-axis'],bbox_to_anchor=(0., 1.02, 1., .102), loc=1,ncol=3, mode="expand", borderaxespad=0.) plt.savefig('plotBeforeDWT.png') plt.show() plt.plot(DWTData[a][0][0]) plt.ylabel('Inertial g-values') plt.xlabel(r'Data points in a single $w_a$') plt.legend(['X-axis','Y-axis','Z-axis'],bbox_to_anchor=(0., 1.02, 1., .102), loc=1,ncol=3, mode="expand", borderaxespad=0.) 
# plt.legend(['X-axis','Y-axis','Z-axis'],loc=1) plt.savefig('plotAfterDWT.png') plt.show() # np.save('beforeDWT',DWTData[a][0][0]) # np.save('afterDWT',decimatedData[a]) labels = [] data = np.zeros((DWTData.shape[0], 1, w_min, 3)) for i in range(DWTData.shape[0]): data[i, :, :] = DWTData[i][0][:] labels.append(DWTData[i][1]) data = data.reshape((-1, w_min, 3)).astype('float32') labels = np.array(labels).astype('float32') print (data.shape, labels.shape) ###Output _____no_output_____ ###Markdown Train-Test Split and Normalization ###Code Xtrain, Xtest, ytrain, ytest = train_test_split(data, labels, stratify=labels, test_size=0.2, random_state=5233) # Run only once ytrain = to_categorical(ytrain, len(acts)) ytest = to_categorical(ytest, len(acts)) print (Xtrain.shape, Xtest.shape, ytrain.shape, ytest.shape) np.save('Xtrain_53.npy', Xtrain) np.save('Xtest_53.npy', Xtest) # Delete if necessary # del acc, pa, windowedData, decimatedData, decimatedLabels, data, labels, DWTData Xtrain[:, :, 0].shape # Standard Scaler Xtrain_fit0 = StandardScaler().fit(Xtrain[:, :, 0]) Xtrain0 = Xtrain_fit0.transform(Xtrain[:, :, 0]) Xtrain_fit1 = StandardScaler().fit(Xtrain[:, :, 1]) Xtrain1 = Xtrain_fit1.transform(Xtrain[:, :, 1]) Xtrain_fit2 = StandardScaler().fit(Xtrain[:, :, 2]) Xtrain2 = Xtrain_fit2.transform(Xtrain[:, :, 2]) Xtest0 = Xtrain_fit0.transform(Xtest[:, :, 0]) Xtest1 = Xtrain_fit1.transform(Xtest[:, :, 1]) Xtest2 = Xtrain_fit2.transform(Xtest[:, :, 2]) print (Xtrain0.shape, Xtest0.shape) X_train = np.dstack((Xtrain0, Xtrain1, Xtrain2)) X_test = np.dstack((Xtest0, Xtest1, Xtest2)) print (X_train.shape, X_test.shape) del Xtrain0, Xtrain1, Xtrain2, Xtest0, Xtest1, Xtest2 print (np.min(X_train), np.max(X_train), np.min(X_test), np.max(X_test)) X_train = X_train.reshape((-1, w_min, 3, 1)) X_test = X_test.reshape((-1, w_min, 3, 1)) print (X_train.shape) print (X_test.shape) print (ytrain.shape) print (ytest.shape) ###Output _____no_output_____ ###Markdown Define params for model ###Code num_classes = len(acts) learning_rate = 2e-4 ###Output _____no_output_____ ###Markdown Conv1D (Subnet) -> Conv2D ###Code X_train0 = X_train[:, :, 0].reshape((-1, w_min, 1)) X_train1 = X_train[:, :, 1].reshape((-1, w_min, 1)) X_train2 = X_train[:, :, 2].reshape((-1, w_min, 1)) X_test0 = X_test[:, :, 0].reshape((-1, w_min, 1)) X_test1 = X_test[:, :, 1].reshape((-1, w_min, 1)) X_test2 = X_test[:, :, 2].reshape((-1, w_min, 1)) print (X_train0.shape, X_test0.shape) inputX = Input(shape=(X_train0.shape[1], X_train0.shape[2])) convX1 = Conv1D(filters=8, kernel_size=2, padding='causal', activation='relu')(inputX) batchX1 = BatchNormalization()(convX1) poolX1 = MaxPooling1D(pool_size=2, padding='same')(batchX1) convX2 = Conv1D(filters=16, kernel_size=2, padding='causal', activation='relu')(poolX1) batchX2 = BatchNormalization()(convX2) poolX2 = MaxPooling1D(pool_size=2, padding='same')(batchX2) # convX3 = Conv1D(filters=128, kernel_size=2, padding='causal', activation='relu')(poolX2) # batchX3 = BatchNormalization()(convX3) # poolX3 = MaxPooling1D(pool_size=2, padding='same')(batchX3) modelX = Flatten()(poolX2) inputY = Input(shape=(X_train1.shape[1], X_train2.shape[2])) convY1 = Conv1D(filters=8, kernel_size=2, padding='causal', activation='relu')(inputY) batchY1 = BatchNormalization()(convY1) poolY1 = MaxPooling1D(pool_size=2, padding='same')(batchY1) convY2 = Conv1D(filters=16, kernel_size=2, padding='causal', activation='relu')(poolY1) batchY2 = BatchNormalization()(convY2) poolY2 = MaxPooling1D(pool_size=2, 
padding='same')(batchY2) # convY3 = Conv1D(filters=128, kernel_size=2, padding='causal', activation='relu')(poolY2) # batchY3 = BatchNormalization()(convY3) # poolY3 = MaxPooling1D(pool_size=2, padding='same')(batchY3) modelY = Flatten()(poolY2) inputZ = Input(shape=(X_train2.shape[1], X_train2.shape[2])) convZ1 = Conv1D(filters=8, kernel_size=2, padding='causal', activation='relu')(inputZ) batchZ1 = BatchNormalization()(convZ1) poolZ1 = MaxPooling1D(pool_size=2, padding='same')(batchZ1) convZ2 = Conv1D(filters=16, kernel_size=5, padding='causal', activation='relu')(poolZ1) batchZ2 = BatchNormalization()(convZ2) poolZ2 = MaxPooling1D(pool_size=2, padding='same')(batchZ2) # convZ3 = Conv1D(filters=128, kernel_size=2, padding='causal', activation='relu')(poolZ2) # batchZ3 = BatchNormalization()(convZ3) # poolZ3 = MaxPooling1D(pool_size=2, padding='same')(batchZ3) modelZ = Flatten()(poolZ2) merged_model = merge([modelX, modelY, modelZ], mode='concat') print (K.int_shape(merged_model)) final_merge = Reshape((K.int_shape(merged_model)[1]//3, 3, 1))(merged_model) print (K.int_shape(final_merge)) conv1 = Conv2D(filters=8, kernel_size=(3, 3), padding='same')(final_merge) batch1 = BatchNormalization()(conv1) pool1 = MaxPooling2D(pool_size=(2, 2), padding='same')(batch1) conv2 = Conv2D(filters=16, kernel_size=(3, 3), padding='same')(pool1) batch2 = BatchNormalization()(conv2) pool2 = MaxPooling2D(pool_size=(2, 2), padding='same')(batch2) flatten = Flatten()(pool2) # fc1 = Dense(128, activation='relu')(flatten) # fc1 = Dropout(0.25)(fc1) # fc2 = Dense(64, activation='relu')(flatten) # fc2 = Dropout(0.4)(fc2) fc1 = Dense(32, activation='relu', kernel_initializer='glorot_normal')(flatten) fc1 = Dropout(0.25)(fc1) output = Dense(num_classes, activation='softmax')(fc1) model = Model([inputX, inputY, inputZ], output) model.summary() model.compile(loss='categorical_crossentropy', optimizer=keras.optimizers.Adam(lr=learning_rate), metrics=['accuracy']) #beta_1=0.9, beta_2=0.999)) filepath = "bestWeightsHeterogeneityMixed1D.h5" checkpoint = ModelCheckpoint(filepath, monitor='val_acc', save_best_only=True, mode='max') callback = [checkpoint] model.fit([X_train0, X_train1, X_train2], ytrain, epochs=35, batch_size=128, \ validation_data=([X_test0, X_test1, X_test2], ytest), callbacks=callback) model.fit([X_train0, X_train1, X_train2], ytrain, epochs=30, batch_size=128, \ validation_data=([X_test0, X_test1, X_test2], ytest))#, callbacks=callback) ###Output _____no_output_____ ###Markdown LSTM -> Conv1D (Subnet) -> Conv2D ###Code X_train0 = X_train[:, :, 0].reshape((-1, w_min, 1)) X_train1 = X_train[:, :, 1].reshape((-1, w_min, 1)) X_train2 = X_train[:, :, 2].reshape((-1, w_min, 1)) X_test0 = X_test[:, :, 0].reshape((-1, w_min, 1)) X_test1 = X_test[:, :, 1].reshape((-1, w_min, 1)) X_test2 = X_test[:, :, 2].reshape((-1, w_min, 1)) print (X_train1.shape, X_test1.shape) # Subnet X inputX = Input(shape=(53, 1)) #inputX = Input(shape=(X_train0.shape[1], X_train0.shape[2])) lstmX = LSTM(32, return_sequences=True)(inputX) convX1 = Conv1D(filters=8, kernel_size=5, padding='causal', activation='relu')(lstmX) batchX1 = BatchNormalization()(convX1) poolX1 = MaxPooling1D(pool_size=2, padding='same')(batchX1) modelX = poolX1 # Subnet Y inputY = Input(shape=(53, 1)) #inputY = Input(shape=(X_train1.shape[1], X_train1.shape[2])) lstmY = LSTM(32, return_sequences=True)(inputY) convY1 = Conv1D(filters=8, kernel_size=5, padding='causal', activation='relu')(lstmY) batchY1 = BatchNormalization()(convY1) poolY1 = 
MaxPooling1D(pool_size=2, padding='same')(batchY1)
modelY = poolY1

# Subnet Z
inputZ = Input(shape=(53, 1))
#inputZ = Input(shape=(X_train2.shape[1], X_train2.shape[2]))
lstmZ = LSTM(32, return_sequences=True)(inputZ)
convZ1 = Conv1D(filters=8, kernel_size=5, padding='causal', activation='relu')(lstmZ)
batchZ1 = BatchNormalization()(convZ1)
poolZ1 = MaxPooling1D(pool_size=2, padding='same')(batchZ1)
modelZ = poolZ1

merged_model = merge([modelX, modelY, modelZ], mode='concat')
print (K.int_shape(merged_model))
# final_merge = Reshape((K.int_shape(merged_model)[1]//3, 3, 1))(merged_model)
final_merge = Reshape((K.int_shape(merged_model)[1], K.int_shape(merged_model)[2], 1))(merged_model)
print (K.int_shape(final_merge))

conv1 = Conv2D(filters=8, kernel_size=(3, 3), padding='same')(final_merge)
batch1 = BatchNormalization()(conv1)
pool1 = MaxPooling2D(pool_size=(3, 2), padding='same')(batch1)

conv2 = Conv2D(filters=16, kernel_size=(3, 3), padding='same')(pool1)
batch2 = BatchNormalization()(conv2)
pool2 = MaxPooling2D(pool_size=(3, 2), padding='same')(batch2)

flatten = Flatten()(pool2)
fc1 = Dense(16, activation='relu')(flatten)
fc1 = Dropout(0.25)(fc1)
fc2 = Dense(8, activation='relu')(fc1)
# fc2 = Dropout(0.25)(fc2)
# fc3 = Dense(32, activation='relu')(fc2)
# fc3 = Dropout(0.4)(fc3)
output = Dense(6, activation='softmax')(fc2)

model = Model([inputX, inputY, inputZ], output)
model.summary()
model.compile(loss='categorical_crossentropy', optimizer=keras.optimizers.Adam(learning_rate), metrics=['accuracy']) #beta_1=0.9, beta_2=0.999))

filepath = "bestWeightsHeterogeneityMixedLSTM1D.h5"
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', save_best_only=True, mode='max')
callback = [checkpoint]
model.fit([X_train0, X_train1, X_train2], ytrain, epochs=15, batch_size=128, \
          validation_data=([X_test0, X_test1, X_test2], ytest), callbacks=callback)

model.fit([X_train0, X_train1, X_train2], ytrain, epochs=15, batch_size=128, \
          validation_data=([X_test0, X_test1, X_test2], ytest))#, callbacks=callback)
###Output _____no_output_____ ###Markdown Conv1D -> LSTM (Subnet) -> Conv2D
###Code
X_train0 = X_train[:, :, 0].reshape((-1, w_min, 1))
X_train1 = X_train[:, :, 1].reshape((-1, w_min, 1))
X_train2 = X_train[:, :, 2].reshape((-1, w_min, 1))
X_test0 = X_test[:, :, 0].reshape((-1, w_min, 1))
X_test1 = X_test[:, :, 1].reshape((-1, w_min, 1))
X_test2 = X_test[:, :, 2].reshape((-1, w_min, 1))
print (X_train1.shape, X_test1.shape)

# Subnet X
inputX = Input(shape=(X_train0.shape[1], X_train0.shape[2]))
convX1 = Conv1D(filters=8, kernel_size=3, padding='causal', activation='relu')(inputX)
batchX1 = BatchNormalization()(convX1)
poolX1 = MaxPooling1D(pool_size=2, padding='same')(batchX1)
lstmX = LSTM(32, return_sequences=True)(poolX1)
modelX = lstmX

# Subnet Y
inputY = Input(shape=(X_train1.shape[1], X_train1.shape[2]))
convY1 = Conv1D(filters=8, kernel_size=3, padding='causal', activation='relu')(inputY)
batchY1 = BatchNormalization()(convY1)
poolY1 = MaxPooling1D(pool_size=2, padding='same')(batchY1)
lstmY = LSTM(32, return_sequences=True)(poolY1)
modelY = lstmY

# Subnet Z
inputZ = Input(shape=(X_train2.shape[1], X_train2.shape[2]))
convZ1 = Conv1D(filters=8, kernel_size=3, padding='causal', activation='relu')(inputZ)
batchZ1 = BatchNormalization()(convZ1)
poolZ1 = MaxPooling1D(pool_size=2, padding='same')(batchZ1)
lstmZ = LSTM(32, return_sequences=True)(poolZ1)
modelZ = lstmZ

merged_model = merge([modelX, modelY, modelZ], mode='concat')
print (K.int_shape(merged_model))
# final_merge =
Reshape((K.int_shape(merged_model)[1]//3, 3, 1))(merged_model) final_merge = Reshape((K.int_shape(merged_model)[1], K.int_shape(merged_model)[2], 1))(merged_model) print (K.int_shape(final_merge)) conv1 = Conv2D(filters=8, kernel_size=(2, 2), padding='same')(final_merge) batch1 = BatchNormalization()(conv1) pool1 = MaxPooling2D(pool_size=(2, 2), padding='same')(batch1) conv2 = Conv2D(filters=16, kernel_size=(2, 2), padding='same')(pool1) batch2 = BatchNormalization()(conv2) pool2 = MaxPooling2D(pool_size=(2, 2), padding='same')(batch2) conv3 = Conv2D(filters=16, kernel_size=(2, 2), padding='same')(pool2) batch3 = BatchNormalization()(conv3) pool3 = MaxPooling2D(pool_size=(2, 2), padding='same')(batch3) flatten = Flatten()(pool3) fc1 = Dense(16, activation='relu')(flatten) fc1 = Dropout(0.25)(fc1) fc2 = Dense(8, activation='relu')(fc1) # fc2 = Dropout(0.25)(fc2) # fc3 = Dense(32, activation='relu')(fc2) # fc3 = Dropout(0.4)(fc3) output = Dense(num_classes, activation='softmax')(fc2) model = Model([inputX, inputY, inputZ], output) model.summary() model.compile(loss='categorical_crossentropy', optimizer=keras.optimizers.Adam(learning_rate), metrics=['accuracy']) #beta_1=0.9, beta_2=0.999)) filepath = "bestWeightsHeterogeneityMixed1DLSTM.h5" checkpoint = ModelCheckpoint(filepath, monitor='val_acc', save_best_only=True, mode='max') callback = [checkpoint] model.fit([X_train0, X_train1, X_train2], ytrain, epochs=15, batch_size=128, \ validation_data=([X_test0, X_test1, X_test2], ytest), callbacks=callback) model.fit([X_train0, X_train1, X_train2], ytrain, epochs=15, batch_size=128, \ validation_data=([X_test0, X_test1, X_test2], ytest))#, callbacks=callback) model.fit([X_train0, X_train1, X_train2], ytrain, epochs=15, batch_size=128, \ validation_data=([X_test0, X_test1, X_test2], ytest))#, callbacks=callback) ###Output _____no_output_____ ###Markdown Predictions and Confusion Matrix ###Code # yPred = model.predict_classes(X_test) y_prob = model.predict([X_test0, X_test1, X_test2]) yPred = y_prob.argmax(axis=-1) yTrue = [np.argmax(y) for y in ytest] print (yPred[:20]) print (yTrue[:20]) f1 = f1_score(yTrue, yPred, average = 'weighted') print (f1) print (classification_report(yTrue, yPred, digits=4)) conf_matrix = confusion_matrix(yTrue, yPred)#, labels=[0,1,2,3,4,5]) print (conf_matrix) df_conf_matrix = pd.DataFrame(conf_matrix, index=list(acts), columns=list(acts)) plt.figure(figsize = (8, 8)) conf_heatmap = sn.heatmap(conf_matrix, annot=True, fmt='g', xticklabels=list(acts), yticklabels=list(acts)) fig = conf_heatmap.get_figure() ###Output _____no_output_____
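###Markdown The `merge([...], mode='concat')` helper used throughout this notebook is the Keras 1.x API and was removed in later releases. If these cells are ever rerun on a current TensorFlow/Keras install, the functional `concatenate` layer is the replacement. Below is a minimal sketch of the same three-branch merge under that assumption; the 53-step window, the 6 output classes and the layer sizes mirror the cells above, and everything else is illustrative rather than a faithful port. ###Code
# Hypothetical port of the three-branch merge to tf.keras (Keras >= 2).
from tensorflow.keras.layers import (Input, Conv1D, BatchNormalization,
                                     MaxPooling1D, Flatten, Dense, concatenate)
from tensorflow.keras.models import Model

def branch(inp):
    # One Conv1D block per accelerometer axis, as in the subnets above
    x = Conv1D(filters=8, kernel_size=2, padding='causal', activation='relu')(inp)
    x = BatchNormalization()(x)
    x = MaxPooling1D(pool_size=2, padding='same')(x)
    return Flatten()(x)

inputs = [Input(shape=(53, 1)) for _ in range(3)]  # X, Y, Z windows
merged = concatenate([branch(i) for i in inputs])  # replaces merge([...], mode='concat')
hidden = Dense(32, activation='relu')(merged)
output = Dense(6, activation='softmax')(hidden)
model = Model(inputs, output)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
###Output _____no_output_____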
module2-regression-2/Ben_Whitman_212_assignment_regression_classification_2.ipynb
###Markdown Lambda School Data Science*Unit 2, Sprint 1, Module 2*--- Regression 2 AssignmentYou'll continue to **predict how much it costs to rent an apartment in NYC,** using the dataset from renthop.com.- [ ] Do train/test split. Use data from April & May 2016 to train. Use data from June 2016 to test.- [ ] Engineer at least two new features. (See below for explanation & ideas.)- [ ] Fit a linear regression model with at least two features.- [ ] Get the model's coefficients and intercept.- [ ] Get regression metrics RMSE, MAE, and $R^2$, for both the train and test data.- [ ] What's the best test MAE you can get? Share your score and features used with your cohort on Slack!- [ ] As always, commit your notebook to your fork of the GitHub repo. [Feature Engineering](https://en.wikipedia.org/wiki/Feature_engineering)> "Some machine learning projects succeed and some fail. What makes the difference? Easily the most important factor is the features used." — Pedro Domingos, ["A Few Useful Things to Know about Machine Learning"](https://homes.cs.washington.edu/~pedrod/papers/cacm12.pdf)> "Coming up with features is difficult, time-consuming, requires expert knowledge. 'Applied machine learning' is basically feature engineering." — Andrew Ng, [Machine Learning and AI via Brain simulations](https://forum.stanford.edu/events/2011/2011slides/plenary/2011plenaryNg.pdf) > Feature engineering is the process of using domain knowledge of the data to create features that make machine learning algorithms work. Feature Ideas- Does the apartment have a description?- How long is the description?- How many total perks does each apartment have?- Are cats _or_ dogs allowed?- Are cats _and_ dogs allowed?- Total number of rooms (beds + baths)- Ratio of beds to baths- What's the neighborhood, based on address or latitude & longitude? Stretch Goals- [ ] If you want more math, skim [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 3.1, Simple Linear Regression, & Chapter 3.2, Multiple Linear Regression- [ ] If you want more introduction, watch [Brandon Foltz, Statistics 101: Simple Linear Regression](https://www.youtube.com/watch?v=ZkjP5RJLQF4)(20 minutes, over 1 million views)- [ ] Add your own stretch goal(s) ! ###Code %%capture import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/' !pip install category_encoders==2.* # If you're working locally: else: DATA_PATH = '../data/' # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. 
import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') import numpy as np import pandas as pd # Read New York City apartment rental listing data df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv') assert df.shape == (49352, 34) # Remove the most extreme 1% prices, # the most extreme .1% latitudes, & # the most extreme .1% longitudes df = df[(df['price'] >= np.percentile(df['price'], 0.5)) & (df['price'] <= np.percentile(df['price'], 99.5)) & (df['latitude'] >= np.percentile(df['latitude'], 0.05)) & (df['latitude'] < np.percentile(df['latitude'], 99.95)) & (df['longitude'] >= np.percentile(df['longitude'], 0.05)) & (df['longitude'] <= np.percentile(df['longitude'], 99.95))] df.dtypes import datetime df['created_datetime'] = pd.to_datetime(df['created']) df.describe() df.describe(exclude='number') df.head() # Boolean mask for April and May. Note that `.dt.month == (4 or 5)` would only test for April, # because the Python expression `(4 or 5)` evaluates to 4. df[df['created_datetime'].dt.month.isin([4, 5])].describe(exclude='number') # Train/test split (.copy() so that adding feature columns below doesn't trigger # pandas' SettingWithCopyWarning on a view of df) train = df[(df['created_datetime'].dt.month == 4) | (df['created_datetime'].dt.month == 5)].copy() train['created_datetime'].describe() test = df[df['created_datetime'].dt.month == 6].copy() test['created_datetime'].describe() train.head() # Engineer 'num_perks' feature perks = ['elevator', 'cats_allowed', 'hardwood_floors', 'dogs_allowed', 'doorman', 'dishwasher', 'no_fee', 'laundry_in_building', 'fitness_center', 'pre-war', 'laundry_in_unit', 'roof_deck', 'outdoor_space', 'dining_room', 'high_speed_internet', 'balcony', 'swimming_pool', 'new_construction', 'terrace', 'exclusive', 'loft', 'garden_patio', 'wheelchair_access', 'common_outdoor_space'] def count_perks(df): # Row-wise sum over the one-hot perk columns return df[perks].sum(axis=1) train['num_perks'] = count_perks(train) test['num_perks'] = count_perks(test) train.sample(5) train.head() # Engineer bed-bath ratio def bed_bath_ratio(df): # Plain vectorized ratio; listings with 0 bedrooms produce inf/NaN, # which get cleaned up below return df['bathrooms'] / df['bedrooms'] train['bed_bath_ratio'] = bed_bath_ratio(train) train.head() train['bed_bath_ratio'].describe() train[train['bed_bath_ratio'].isna()].head() train[np.isinf(train['bed_bath_ratio'])].head() train['bed_bath_ratio'] = train['bed_bath_ratio'].replace([np.inf, -np.inf], np.nan) train['bed_bath_ratio'] = train['bed_bath_ratio'].replace(np.nan, 0) test['bed_bath_ratio'] = bed_bath_ratio(test) test['bed_bath_ratio'] = test['bed_bath_ratio'].replace([np.inf, -np.inf], np.nan) test['bed_bath_ratio'] = test['bed_bath_ratio'].replace(np.nan, 0) # Fit linear regression model with at least two features # Arrange target y vector target = 'price' y_train = train[target] y_test = test[target] # Arrange X matrices features = ['num_perks', 'bed_bath_ratio'] X_train = train[features] X_test = test[features] # Scale the features.
from sklearn.preprocessing import StandardScaler scaler = StandardScaler() scaler.fit(X_train) X_train_scaled = scaler.transform(X_train) X_test_scaled = scaler.transform(X_test) from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_absolute_error model = LinearRegression() # Fit the model model.fit(X_train_scaled, y_train) y_pred_train = model.predict(X_train_scaled) mae_train = mean_absolute_error(y_train, y_pred_train) ###Output _____no_output_____
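###Markdown The assignment also asks for RMSE, MAE and $R^2$ on both splits, plus the model's coefficients and intercept, and the notebook stops at the training MAE. A minimal sketch of the remaining metrics, reusing the fitted `model`, the scaled matrices and the targets defined above: ###Code
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
import numpy as np

def report(name, y_true, y_pred):
    # RMSE is the square root of sklearn's mean squared error
    rmse = np.sqrt(mean_squared_error(y_true, y_pred))
    mae = mean_absolute_error(y_true, y_pred)
    r2 = r2_score(y_true, y_pred)
    print(f'{name}: RMSE={rmse:.2f}  MAE={mae:.2f}  R^2={r2:.3f}')

print('Coefficients:', model.coef_, ' Intercept:', model.intercept_)
report('train', y_train, model.predict(X_train_scaled))
report('test', y_test, model.predict(X_test_scaled))
###Output _____no_output_____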
site/public/courses/DS-1.1/Notebooks/Challenges_Time Series.ipynb
###Markdown Introduction to time series What is time? One definition of time as we perceive it is the observation of **events** as features of those events change. Think of a movie reel and how for every frame a different set of actions occurs. These 'actions' can be quantified as features in any dataset based on the time at which they occur. What time series analysis really seeks to point out is **"what events occurred at this point in time"**. This is incredibly powerful, as time series is the backbone of nearly every industry. Here are some examples in which time series can be used:> - Looking at your GitHub data to see which days you do what type of programming, for how long, etc.> - In fintech, its uses can be seen in studying past trends in cryptocurrency> - In the banking industry, for fraud detection and stocks and bonds trading> - In scientific analysis, to study the effects of inputs in experiments over time> - Many more... Discussion: Discuss how you conceptualize time, as well as give three examples of when measuring time in your everyday life could be interesting. Using Quandl for Time Series analysis The library that we will use for retrieving financial, economic and sociology data is the Python API for Quandl. Head over to https://www.quandl.com/ to sign up for an account and get an API key first! Install it via anaconda with the following command: conda install -c anaconda quandl (you may have to use sudo). For today we'll be taking a look at Tesla's stock data! ###Code # quandl for financial data import quandl import pandas as pd import matplotlib.pyplot as plt # Use your API Key quandl.ApiConfig.api_key = '' # Retrieve TSLA data from Quandl tesla = quandl.get('WIKI/TSLA') ###Output _____no_output_____ ###Markdown Let's take a look at the head of our dataframe to view what kind of data we will be working with. ###Code tesla.head(5) ###Output _____no_output_____ ###Markdown Time series organization One of the first things you likely noticed is that instead of the index just being a count, the index in this case is the actual timestamp. In certain fields or sciences, this may be the more prevalent form of working with time series. Other forms of time series may contain the temporal features as a regular column in your data. ###Code tesla.describe() ###Output _____no_output_____ ###Markdown Activity: Using Google/Wikipedia/etc., pick 3 columns from this dataset and define them. This will give you a deeper understanding of how your data works and what is really being measured. A big part of what makes one more proficient at data science is being nosy... No seriously, GET NOSY! Don't follow my usual advice to "Mind your business". Time series and graphs One of the most frequent ways that you will see a time series represented is through the use of graphs, usually to track the change of a variable over time. In this case, it is the change of the Tesla stock over time. Let's look at a few just to get into the swing of it! To plot a time series we'll be using matplotlib.pyplot, which we imported as "plt". Let's look at how the trading volume of the stock changes over time! ###Code plt.plot(tesla.index, tesla['Volume'].astype(int), 'r') plt.title('Tesla Stock Volume by Year') plt.ylabel('Number of shares traded (in 10\'s of millions)'); plt.xlabel('Time (in Years)'); plt.show(); ###Output _____no_output_____ ###Markdown Activity: using plt.plot, make 3 visualizations of this time series. Based on these, discuss with a neighbor any odd trends you see and try to explain them. Slicing and time intervals Viewing our data is all fine and dandy, but that's just viewing the entire set; what if we wanted to view it over certain time intervals? Most time series are indexed by the time at which the event in question took place. This differs from the bulk of your other datasets, in which the index is just the position at which a row appears in the dataset, which can be completely arbitrary. Time series are sliced Pythonically, based on the time of occurrence. Let's do some slicing just based on the year! ###Code tesla['2010'].head(10) ###Output _____no_output_____ ###Markdown Let's query the data based on just what happened in January of 2011. ###Code tesla['2011-1'].head(10) ###Output _____no_output_____ ###Markdown More precise slicing Most of your time series can be indexed in the following format:> month/day/year Let's slice our dataframe down to just the summer of 2015, shall we? ###Code # Let's give summer a start date of May 1st summer_start = '05/1/2015' # Let's give it an end date of September 30th summer_end = '09/30/2015' # let's set it to another variable tesla_summer = tesla[summer_start:summer_end] tesla_summer.head(10) ###Output _____no_output_____ ###Markdown Database example Fintech is nice and all, but we build products here!!! One of the ways that you'll be dealing with data in the industry and your own apps is your own user data. Knowing when and how often events occur, and whether your data indicates something odd is happening in your application, is a skill that supports things like testing, fraud detection, and even tracking crashes and uptime. The example below is a selection of user comments and button clicks observed in a fictional database over the month of September. The columns contain the unique usernames, emails, comments, creation dates for those comments, as well as the number of button clicks each account makes per minute. We'll be using this dataset to answer a few questions using the skills we've gained in time series, as well as basic data manipulation. ###Code user_comments_df = pd.read_csv('user_comments.csv') user_comments_df ###Output _____no_output_____ ###Markdown Challenges 1. How many government accounts have made comments in the month this dataset covers? 2. What day had the most comments? Which had the least? 3. We'll define a fraudulent account as one that has made more than 40 average clicks per minute. How many fraudulent accounts are there in this dataset? How many are students in college? Do non-government accounts have a higher probability of being fraudulent than government accounts? 4. How many comments mention 'gradiva', and were there more mentions during the day or at night? 5. Was the latter half of the month more popular for commenting than the earlier half? 6. What is the average length of a comment coming from people in the IRS? ###Code # Here is an example of how to use the pandas.Series.str.contains() method to parse every string in a series and return the rows containing certain data. This will help you out with some of the exercises above user_comments_df[(user_comments_df['user_name'].str.contains('this',regex=False))] ###Output _____no_output_____
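###Markdown As a nudge for the challenges, here is a hedged sketch for the first two questions. The exact column names in `user_comments.csv` are not shown above, so `'email'` and `'date_created'` are assumptions; swap in whatever the dataframe actually calls them. ###Code
# Challenge 1: assume government accounts are identifiable by a .gov email address.
is_gov = user_comments_df['email'].str.contains('.gov', regex=False)
print('Government commenters:', is_gov.sum())

# Challenge 2: busiest and quietest days, via a parsed datetime column.
dates = pd.to_datetime(user_comments_df['date_created'])
per_day = dates.dt.date.value_counts()
print('Most comments:', per_day.idxmax(), per_day.max())
print('Fewest comments:', per_day.idxmin(), per_day.min())
###Output _____no_output_____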
decades_octaves_fractions.ipynb
###Markdown Decades, octaves and fractions thereof In this notebook we will explore the relations between [decades](https://en.wikipedia.org/wiki/Decade_(log_scale)) and [octaves](https://en.wikipedia.org/wiki/Octave_(electronics)), two scales that are commonly used when working with frequency bands. In fact, they correspond to logarithmic scales, to base 10 and base 2 respectively, and are therefore related. Furthermore, fractions (1/N, with N integer and larger than one) are also used to refine the frequency bands. Table of contents[Preamble](#Preamble)[Introduction](#Introduction)[Conclusions](#Conclusions)[Odds and ends](#Odds-and-ends) Preamble The computational environment set up for this Python notebook is the following: ###Code # Ipython 'magic' commands %matplotlib inline # Python standard library import sys # 3rd party modules import numpy as np import scipy as sp import matplotlib as mpl import pandas as pd import matplotlib.pyplot as plt # Computational lab set up print(sys.version) for package in (np, sp, mpl, pd): print('{:.<15} {}'.format(package.__name__, package.__version__)) ###Output 3.6.1 |Anaconda 4.4.0 (64-bit)| (default, May 11 2017, 13:09:58) [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] numpy.......... 1.12.1 scipy.......... 0.19.0 matplotlib..... 2.0.2 pandas......... 0.20.1 ###Markdown [Back to top](#top) Introduction In signal processing, decades and octaves are usually applied to frequency bands. A decade is defined as a 10-fold increase in frequency and an octave as a 2-fold increase (doubling) in frequency. In mathematical terms, this can be translated to:$$ \frac{f_{10}}{f_1} = 10 $$$$ \frac{f_2}{f_1} = 2 $$where $f_{10}$ is a frequency one decade above $f_1$ and $f_{2}$ is a frequency one octave above $f_1$. Similarly, $f_{1}$ is a frequency one decade below $f_{10}$ and one octave below $f_2$. These relations can be further expanded to express *n* decades or octaves:$$ \frac{f_{10n}}{f_1} = 10^n $$$$ \frac{f_{2n}}{f_1} = 2^n $$Please bear in mind that, in the previous expressions, *n* is an integer value that can be either positive (for decades or octaves above $f_1$) or negative (for decades or octaves below $f_1$). Each decade, or octave, can be divided into fractions, that is, into multiple steps. For that, the exponent in the previous expressions is no longer an integer but is expressed as a fraction with an integer denominator *N*:$$ \frac{f_{10/N}}{f_1} = 10^{1/N} $$$$ \frac{f_{2/N}}{f_1} = 2^{1/N} $$This means that in between $f_1$ and $f_{10}$ or $f_2$ there will be *N*-1 intermediate frequencies:$$ {f_1} \times 10^{1/N}\ ,\ {f_1} \times 10^{2/N}\ ,\ \dots\ ,\ {f_1} \times 10^{(N-1)/N} $$$$ {f_1} \times 2^{1/N}\ ,\ {f_1} \times 2^{2/N}\ ,\ \dots\ ,\ {f_1} \times 2^{(N-1)/N} $$The denominator *N* usually assumes the following [values](http://blog.prosig.com/2006/02/17/standard-octave-bands/): 3, 6, 12, 24. These are closely related to the [Renard numbers](https://en.wikipedia.org/wiki/Renard_series). With these relations in mind, we are now ready to explore decades, octaves and fractions thereof further.[Back to top](#top) Decades Let us start with the decades. We will generate a series of frequency bands covering three decades, starting at 1 Hz: ###Code f1 = 1 # starting frequency nf = 3 # number of decades fn = f1*10**np.arange(nf+1) print(fn) ###Output [   1   10  100 1000] ###Markdown These are actually no more than a sequence of frequencies like the one generated by the [numpy logspace function](https://docs.scipy.org/doc/numpy/reference/generated/numpy.logspace.html): ###Code fn = f1*np.logspace(0, nf, num=nf+1) print(fn) ###Output [    1.    10.   100.  1000.] ###Markdown If we plot this sequence, we will see the skeleton of an exponential function: ###Code fig, ax = plt.subplots() ax.plot(np.arange(nf+1), fn) ###Output _____no_output_____
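###Markdown The fractional bands from the introduction can be generated the same way. As a quick check of the $f_1 \times 2^{k/N}$ expression, the sketch below computes a 1/3-octave split ($N = 3$) between 1 Hz and one octave above; the two middle values are the $N-1$ intermediate frequencies. ###Code
import numpy as np

N = 3                        # octave fraction (1/3-octave)
f1 = 1                       # starting frequency in Hz
k = np.arange(N + 1)         # 0 .. N
f_steps = f1 * 2 ** (k / N)  # f1, f1*2^(1/3), f1*2^(2/3), f1*2
print(f_steps)
###Output _____no_output_____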
2020/Day07.ipynb
###Markdown Day 07Let's get the data first. ###Code import aocd rules = aocd.get_data(day=7, year=2020).splitlines() # rules = """light red bags contain 1 bright white bag, 2 muted yellow bags. # dark orange bags contain 3 bright white bags, 4 muted yellow bags. # bright white bags contain 1 shiny gold bag. # muted yellow bags contain 2 shiny gold bags, 9 faded blue bags. # shiny gold bags contain 1 dark olive bag, 2 vibrant plum bags. # dark olive bags contain 3 faded blue bags, 4 dotted black bags. # vibrant plum bags contain 5 faded blue bags, 6 dotted black bags. # faded blue bags contain no other bags. # dotted black bags contain no other bags.""".splitlines() # rules = """shiny gold bags contain 2 dark red bags. # dark red bags contain 2 dark orange bags. # dark orange bags contain 2 dark yellow bags. # dark yellow bags contain 2 dark green bags. # dark green bags contain 2 dark blue bags. # dark blue bags contain 2 dark violet bags. # dark violet bags contain no other bags.""".splitlines() rule_dict = {} for rule in rules: colour = rule.split(' bags contain ')[0] contains = rule.split(' bags contain ')[1] if contains == 'no other bags.': contains = [] else: contains = contains[:-1].split(', ') contains = [{'qty': x.split(' ')[0], 'bag': ' '.join(x.split(' ')[1:-1])} for x in contains] # contains = [' '.join(x.split(' ')[1:-1]) for x in contains] # print(colour) rule_dict[colour] = contains # rule_dict ###Output _____no_output_____ ###Markdown Part 1 ###Code def contains_gold(bag): # print('contains_gold('+bag+')') return_val = False for inner_bag in rule_dict[bag]: if inner_bag['bag'] == 'shiny gold' or contains_gold(inner_bag['bag']): return_val = True return return_val count = 0 for colour, contains in rule_dict.items(): # print(colour, contains_gold(colour)) if contains_gold(colour): count += 1 print(count) ###Output 112 ###Markdown Part 2 ###Code def count_bags(bag): return_val = 1 for inner_bag in rule_dict[bag]: return_val += int(inner_bag['qty']) * count_bags(inner_bag['bag']) return return_val print(count_bags('shiny gold')-1) ###Output 6260
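###Markdown `contains_gold` re-walks the same inner bags every time they are reachable from a new colour. The input here is small enough not to care, but memoizing makes the search roughly linear in the number of rules; a sketch with `functools.lru_cache`, equivalent in logic and cached per colour: ###Code
from functools import lru_cache

@lru_cache(maxsize=None)
def contains_gold_cached(bag):
    # Each colour is resolved at most once; results are reused for shared sub-bags.
    return any(inner['bag'] == 'shiny gold' or contains_gold_cached(inner['bag'])
               for inner in rule_dict[bag])

print(sum(contains_gold_cached(colour) for colour in rule_dict))
###Output _____no_output_____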
Code/Evaluation_LDA.ipynb
###Markdown Evaluation of the LDA models **OVERALL OBJECTIVE** **Evaluate the results of the LDA models with automatic measures** **SPECIFIC OBJECTIVES** - Compute **the perplexity and coherence measures** for the models trained on the 3 journals- Compare **the influence of the number of topics, of lemmatization and of the word representation on the perplexity** of the models- Study the **reliability** of the model through the **variability of the perplexity, coherence and semantic similarity measures** ---------------------------------- **Importing the libraries and data** ###Code from gensim.models import LdaModel from gensim import models from gensim.models import CoherenceModel import numpy as np import matplotlib.pyplot as plt from tqdm import tqdm ###Output _____no_output_____ ###Markdown **AE imports** ###Code # list of tokens for each article %store -r tokens_bigrams_Corpus_LDA_AE_clean %store -r tokens_bigrams_Corpus_LDA_AE_clean_lemma # Train/test sets and dictionary %store -r train_texts_AE %store -r test_texts_AE %store -r dictionary_AE_2 %store -r dictionary_AE_2_lemma # Document/token matrix # Bag of words without lemmatization %store -r corpus_train_AE %store -r corpus_test_AE # Bag of words with lemmatization %store -r corpus_train_AE_lemma %store -r corpus_test_AE_lemma # Tf-idf without lemmatization %store -r corpus_train_AE_tfidf %store -r corpus_test_AE_tfidf # Tf-idf with lemmatization %store -r corpus_train_AE_tfidf_lemma %store -r corpus_test_AE_tfidf_lemma ###Output _____no_output_____ ###Markdown **EI imports** ###Code # list of tokens for each article %store -r tokens_bigrams_Corpus_LDA_EI_clean %store -r tokens_bigrams_Corpus_LDA_EI_clean_lemma # Train/test sets and dictionary %store -r train_texts_EI %store -r test_texts_EI %store -r dictionary_EI_2 %store -r dictionary_EI_2_lemma # Document/token matrix # Bag of words without lemmatization %store -r corpus_train_EI %store -r corpus_test_EI # Bag of words with lemmatization %store -r corpus_train_EI_lemma %store -r corpus_test_EI_lemma # Tf-idf without lemmatization %store -r corpus_train_EI_tfidf %store -r corpus_test_EI_tfidf # Tf-idf with lemmatization %store -r corpus_train_EI_tfidf_lemma %store -r corpus_test_EI_tfidf_lemma ###Output _____no_output_____ ###Markdown **RI imports** ###Code # list of tokens for each article %store -r tokens_bigrams_Corpus_LDA_RI_clean %store -r tokens_bigrams_Corpus_LDA_RI_clean_lemma # Train/test sets and dictionary %store -r train_texts_RI %store -r test_texts_RI %store -r dictionary_RI_2 %store -r dictionary_RI_2_lemma # Document/token matrix # Bag of words without lemmatization %store -r corpus_train_RI %store -r corpus_test_RI # Bag of words with lemmatization %store -r corpus_train_RI_lemma %store -r corpus_test_RI_lemma # Tf-idf without lemmatization %store -r corpus_train_RI_tfidf %store -r corpus_test_RI_tfidf # Tf-idf with lemmatization %store -r corpus_train_RI_tfidf_lemma %store -r corpus_test_RI_tfidf_lemma ###Output _____no_output_____ ###Markdown ------------------ I. PERPLEXITY MEASURE FOR EACH JOURNAL ###Code num_topics = [x for x in range(1,10)] + [x for x in range(10,110,10)] def plot_perplexités (num_topics, perplexité, perplexité_lemma, perplexité_tfidf, perplexité_tfidf_lemma, revue ='AE'): perplexité = [element[1] for element in perplexité] perplexité_lemma = [element[1] for element in perplexité_lemma] perplexité_tfidf = [element[1] for element in perplexité_tfidf] perplexité_tfidf_lemma = [element[1] for element in perplexité_tfidf_lemma] fig = plt.figure(figsize=(15,8)) plt.plot(num_topics, perplexité, 'kx', label = 'bag-of-words',markersize=10) plt.plot(num_topics, perplexité_lemma, 'yx', label = 'bag-of-words_lemma',markersize=10) # plot the lemmatized bag-of-words series, not tfidf_lemma twice plt.plot(num_topics, perplexité_tfidf, 'ko', label = 'tfidf',markersize=10) plt.plot(num_topics, perplexité_tfidf_lemma, 'yo', label = 'tfidf_lemma',markersize=10) plt.legend() plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.xlabel('Number of topics',fontsize=20) plt.ylabel('Model perplexity',fontsize=20) plt.title(revue,fontsize=40) plt.savefig("Plots/LDA/perplexité/perplexités_" + revue + ".png") plt.show() ###Output _____no_output_____ ###Markdown AE JOURNAL **BAG-OF-WORDS MODEL** ###Code # Retrieve values already computed without lemmatization %store -r perplexité_AE # Retrieve values already computed with lemmatization %store -r perplexité_AE_lemma #perplexité_AE = [] for num_topic in tqdm(num_topics): model = LdaModel.load('Résultats_LDA/AE/lda_ae_'+ str(num_topic)) perplexité_AE.append((num_topic,model.log_perplexity(corpus_test_AE))) #perplexité_AE_lemma = [] for num_topic in tqdm(num_topics): model = LdaModel.load('Résultats_LDA/AE/lda_ae_lemma_'+ str(num_topic)) perplexité_AE_lemma.append((num_topic,model.log_perplexity(corpus_test_AE_lemma))) ###Output _____no_output_____ ###Markdown **TF-IDF MODEL** ###Code # Retrieve values already computed without lemmatization %store -r perplexité_AE_tfidf # Retrieve values already computed with lemmatization %store -r perplexité_AE_tfidf_lemma # without lemmatization #perplexité_AE_tfidf = [] for num_topic in tqdm(num_topics): model = LdaModel.load('Résultats_LDA/AE/lda_ae_tfidf'+ str(num_topic)) perplexité_AE_tfidf.append((num_topic,model.log_perplexity(corpus_test_AE_tfidf))) # with lemmatization #perplexité_AE_tfidf_lemma = [] for num_topic in tqdm(num_topics): if num_topic <10: model = LdaModel.load('Résultats_LDA/AE/lda_ae_tfidf_lemma_'+ str(num_topic)) else: model = LdaModel.load('Résultats_LDA/AE/lda_ae_tfidf_lemma'+ str(num_topic)) perplexité_AE_tfidf_lemma.append((num_topic,model.log_perplexity(corpus_test_AE_tfidf_lemma))) plot_perplexités (num_topics, perplexité_AE, perplexité_AE_lemma, perplexité_AE_tfidf, perplexité_AE_tfidf_lemma) ###Output _____no_output_____ ###Markdown EI JOURNAL **BAG-OF-WORDS MODEL** ###Code # Retrieve values already computed without lemmatization %store -r perplexité_EI # Retrieve values already computed with lemmatization %store -r perplexité_EI_lemma #perplexité_EI = [] for num_topic in tqdm(num_topics): model = LdaModel.load('Résultats_LDA/EI/lda_ei_'+ str(num_topic)) perplexité_EI.append((num_topic,model.log_perplexity(corpus_test_EI))) #perplexité_EI_lemma = [] for num_topic in tqdm(num_topics): model = LdaModel.load('Résultats_LDA/EI/lda_ei_lemma_'+ str(num_topic)) perplexité_EI_lemma.append((num_topic,model.log_perplexity(corpus_test_EI_lemma))) ###Output _____no_output_____ ###Markdown **TF-IDF MODEL** ###Code # Retrieve values already computed without lemmatization %store -r perplexité_EI_tfidf # Retrieve values already computed with lemmatization %store -r perplexité_EI_tfidf_lemma # without lemmatization #perplexité_EI_tfidf = [] for num_topic in tqdm(num_topics): model = LdaModel.load('Résultats_LDA/EI/lda_ei_tfidf'+ str(num_topic)) perplexité_EI_tfidf.append((num_topic,model.log_perplexity(corpus_test_EI_tfidf))) # with lemmatization #perplexité_EI_tfidf_lemma = [] for num_topic in tqdm(num_topics): if num_topic <10: model = LdaModel.load('Résultats_LDA/EI/lda_ei_tfidf_lemma_'+ str(num_topic)) else: model = LdaModel.load('Résultats_LDA/EI/lda_ei_tfidf_lemma'+ str(num_topic)) perplexité_EI_tfidf_lemma.append((num_topic,model.log_perplexity(corpus_test_EI_tfidf_lemma))) plot_perplexités (num_topics, perplexité_EI, perplexité_EI_lemma, perplexité_EI_tfidf, perplexité_EI_tfidf_lemma,revue='EI') ###Output _____no_output_____ ###Markdown RI JOURNAL **BAG-OF-WORDS MODEL** ###Code # Retrieve values already computed without lemmatization %store -r perplexité_RI # Retrieve values already computed with lemmatization %store -r perplexité_RI_lemma #perplexité_RI = [] for num_topic in tqdm(num_topics): model = LdaModel.load('Résultats_LDA/RI/lda_ri_'+ str(num_topic)) perplexité_RI.append((num_topic,model.log_perplexity(corpus_test_RI))) #perplexité_RI_lemma = [] for num_topic in tqdm(num_topics): model = LdaModel.load('Résultats_LDA/RI/lda_ri_lemma_'+ str(num_topic)) perplexité_RI_lemma.append((num_topic,model.log_perplexity(corpus_test_RI_lemma))) ###Output _____no_output_____ ###Markdown **TF-IDF MODEL** ###Code # Retrieve values already computed without lemmatization %store -r perplexité_RI_tfidf # Retrieve values already computed with lemmatization %store -r perplexité_RI_tfidf_lemma # without lemmatization #perplexité_RI_tfidf = [] for num_topic in tqdm(num_topics): model = LdaModel.load('Résultats_LDA/RI/lda_ri_tfidf'+ str(num_topic)) perplexité_RI_tfidf.append((num_topic,model.log_perplexity(corpus_test_RI_tfidf))) # with lemmatization #perplexité_RI_tfidf_lemma = [] for num_topic in tqdm(num_topics): model = LdaModel.load('Résultats_LDA/RI/lda_ri_tfidf_lemma'+ str(num_topic)) perplexité_RI_tfidf_lemma.append((num_topic,model.log_perplexity(corpus_test_RI_tfidf_lemma))) plot_perplexités (num_topics, perplexité_RI, perplexité_RI_lemma, perplexité_RI_tfidf, perplexité_RI_tfidf_lemma, revue='RI') ###Output _____no_output_____ ###Markdown ---------------------------------- II. MODEL COHERENCE FOR EACH JOURNAL Comparison of the 4 coherence measures (cf. Röder et al. 2015):- UCI coherence: uses a co-occurrence measure based on Wikipedia articles (extrinsic measure)- UMass coherence: intrinsic.- NPMI: uses the context vector around each topic top word- CV: combination of NPMI and a sliding context window; best correlation with human judgment **Note**: only the "bag of words without lemmatization" configuration was evaluated. To evaluate the lemmatized models, the lemmatized dictionary must be used instead. The corpora must then be adapted to the tfidf/lemma configuration. Average running time for each measure over the 10 models of each journal: - 15 s for the UMass coherence measure- 15 min for the c_uci coherence measure- 15 min for the c_npmi coherence measure- 1 h for the c_v coherence measure ###Code def plot_coherence (dico_coherences_AE, dico_coherences_EI, dico_coherences_RI, label = 'umass', size=(10,5)): coherences_ae, coherences_ei,coherences_ri = dico_coherences_AE[label],dico_coherences_EI[label], dico_coherences_RI[label] x, y_ae = zip(*coherences_ae) _, y_ei = zip(*coherences_ei) _, y_ri = zip(*coherences_ri) fig = plt.figure(figsize=size) plt.plot(x,y_ae,'gx', label ='AE') plt.plot(x,y_ei,'yx', label ='EI') plt.plot(x,y_ri,'rx', label ='RI') plt.legend() plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.xlabel('Number of topics',fontsize=20) plt.ylabel('Model coherence',fontsize=20) plt.savefig("Plots/LDA/cohérence/Comparatif/cohérences_" + label + ".png") plt.show() ###Output _____no_output_____ ###Markdown AE JOURNAL **BAG OF WORDS** ###Code %store -r dico_coherences_AE #dico_coherences_AE = {'umass':[], 'c_v': [], 'uci':[], 'npmi' : []} dictionary = dictionary_AE_2 temp = dictionary[0] # load the dictionary for num_topic in tqdm(range(1,10)): model = LdaModel.load('Résultats_LDA/AE/lda_ae_'+ str(num_topic)) #dico_coherences_AE['umass'].append((num_topic,CoherenceModel(model=model,corpus=corpus_train_AE, coherence="u_mass").get_coherence())) dico_coherences_AE['c_v'].append((num_topic,CoherenceModel(model = model,dictionary=dictionary, coherence="c_v", texts=train_texts_AE).get_coherence())) dico_coherences_AE['uci'].append((num_topic,CoherenceModel(model = model,dictionary=dictionary, coherence="c_uci", texts=train_texts_AE).get_coherence())) dico_coherences_AE['npmi'].append((num_topic,CoherenceModel(model = model,dictionary=dictionary, coherence="c_npmi", texts=train_texts_AE).get_coherence())) ###Output _____no_output_____ ###Markdown EI JOURNAL **BAG OF WORDS** ###Code %store -r dico_coherences_EI #dico_coherences_EI = {'umass':[], 'c_v': [], 'uci':[], 'npmi' : []} dictionary = dictionary_EI_2 temp = dictionary[0] # load the dictionary for num_topic in tqdm(range(1,10)): model = LdaModel.load('Résultats_LDA/EI/lda_ei_'+ str(num_topic)) #dico_coherences_EI['umass'].append((num_topic,CoherenceModel(model=model,corpus=corpus_train_EI, coherence="u_mass").get_coherence())) dico_coherences_EI['c_v'].append((num_topic,CoherenceModel(model = model,dictionary=dictionary, coherence="c_v", texts=train_texts_EI).get_coherence())) #dico_coherences_EI['uci'].append((num_topic,CoherenceModel(model = model,dictionary=dictionary, coherence="c_uci", texts=train_texts_EI).get_coherence())) #dico_coherences_EI['npmi'].append((num_topic,CoherenceModel(model = model,dictionary=dictionary, coherence="c_npmi", texts=train_texts_EI).get_coherence())) ###Output _____no_output_____ ###Markdown RI JOURNAL **BAG OF WORDS** ###Code %store -r dico_coherences_RI #dico_coherences_RI = {'umass':[], 'c_v': [], 'uci':[], 'npmi' : []} dictionary = dictionary_RI_2 temp = dictionary[0] # load the dictionary for num_topic in tqdm(range(1,10)): model = LdaModel.load('Résultats_LDA/RI/lda_ri_'+ str(num_topic)) dico_coherences_RI['umass'].append((num_topic,CoherenceModel(model=model,corpus=corpus_train_RI, coherence="u_mass").get_coherence())) dico_coherences_RI['c_v'].append((num_topic,CoherenceModel(model = model,dictionary=dictionary, coherence="c_v", texts=train_texts_RI).get_coherence())) dico_coherences_RI['uci'].append((num_topic,CoherenceModel(model = model,dictionary=dictionary, coherence="c_uci", texts=train_texts_RI).get_coherence())) dico_coherences_RI['npmi'].append((num_topic,CoherenceModel(model = model,dictionary=dictionary, coherence="c_npmi", texts=train_texts_RI).get_coherence())) ###Output _____no_output_____ ###Markdown PLOTS FOR THE 3 JOURNALS ###Code def plot_coherence (dico_coherences_AE, dico_coherences_EI, dico_coherences_RI, label = 'umass', size=(10,5)): coherences_ae, coherences_ei,coherences_ri = dico_coherences_AE[label],dico_coherences_EI[label], dico_coherences_RI[label] x, y_ae = zip(*coherences_ae) _, y_ei = zip(*coherences_ei) _, y_ri = zip(*coherences_ri) fig = plt.figure(figsize=size) plt.plot(x,y_ae,'gx', label ='AE') plt.plot(x,y_ei,'yo', label ='EI') plt.plot(x,y_ri,'rs', label ='RI') plt.legend() plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.title(label,fontsize=20) plt.xlabel('Number of topics',fontsize=20) plt.ylabel('Model coherence',fontsize=20) plt.savefig("Plots/LDA/cohérence/Comparatif/cohérences_" + label + ".png") plt.show() %store -r dico_coherences_AE %store -r dico_coherences_EI %store -r dico_coherences_RI plot_coherence (dico_coherences_AE, dico_coherences_EI, dico_coherences_RI, label = 'umass') plot_coherence (dico_coherences_AE, dico_coherences_EI, dico_coherences_RI, label = 'c_v') plot_coherence (dico_coherences_AE, dico_coherences_EI, dico_coherences_RI, label = 'uci') plot_coherence (dico_coherences_AE, dico_coherences_EI, dico_coherences_RI, label = 'npmi') ###Output _____no_output_____ ###Markdown ---------------------------------- III.
RELIABILITY STUDY FOR EACH JOURNAL **PERPLEXITY STUDY** AE JOURNAL ###Code # Retrieve values if they have already been computed %store -r perplexité_AE_10 %store -r perplexité_AE_40 # reliability, 10-topic LDA #perplexité_AE_10 = [] indexes = [0,1,2,3,4] for index in tqdm(indexes): model = LdaModel.load('Résultats_LDA/Fiabilité/lda_ae_10_'+ str(index)) perplexité_AE_10.append((index,model.log_perplexity(corpus_test_AE))) # reliability, 40-topic LDA #perplexité_AE_40 = [] indexes = [0,1,2,3,4] for index in tqdm(indexes): model = LdaModel.load('Résultats_LDA/Fiabilité/lda_ae_40_'+ str(index)) perplexité_AE_40.append((index,model.log_perplexity(corpus_test_AE))) ###Output _____no_output_____ ###Markdown EI JOURNAL ###Code # Retrieve values if they have already been computed %store -r perplexité_EI_10 %store -r perplexité_EI_40 # reliability, 10-topic LDA #perplexité_EI_10 = [] indexes = [0,1,2,3,4] for index in tqdm(indexes): model = LdaModel.load('Résultats_LDA/Fiabilité/lda_ei_10_'+ str(index)) perplexité_EI_10.append((index,model.log_perplexity(corpus_test_EI))) # reliability, 40-topic LDA #perplexité_EI_40 = [] indexes = [0,1,2,3,4] for index in tqdm(indexes): model = LdaModel.load('Résultats_LDA/Fiabilité/lda_ei_40_'+ str(index)) perplexité_EI_40.append((index,model.log_perplexity(corpus_test_EI))) ###Output _____no_output_____ ###Markdown RI JOURNAL ###Code # Retrieve values if they have already been computed %store -r perplexité_RI_10 %store -r perplexité_RI_40 # reliability, 10-topic LDA #perplexité_RI_10 = [] indexes = [0,1,2,3,4] for index in tqdm(indexes): model = LdaModel.load('Résultats_LDA/Fiabilité/lda_ri_10_'+ str(index)) perplexité_RI_10.append((index,model.log_perplexity(corpus_test_RI))) # reliability, 40-topic LDA #perplexité_RI_40 = [] indexes = [0,1,2,3,4] for index in tqdm(indexes): model = LdaModel.load('Résultats_LDA/Fiabilité/lda_ri_40_'+ str(index)) perplexité_RI_40.append((index,model.log_perplexity(corpus_test_RI))) ###Output _____no_output_____ ###Markdown PLOTS FOR THE 3 JOURNALS ###Code %store -r perplexité_AE_10 %store -r perplexité_AE_40 %store -r perplexité_EI_10 %store -r perplexité_EI_40 %store -r perplexité_RI_10 %store -r perplexité_RI_40 AE_10 = [element[1] for element in perplexité_AE_10] AE_40 = [element[1] for element in perplexité_AE_40] EI_10 = [element[1] for element in perplexité_EI_10] EI_40 = [element[1] for element in perplexité_EI_40] RI_10 = [element[1] for element in perplexité_RI_10] RI_40 = [element[1] for element in perplexité_RI_40] numpy_AE_10 = np.array(AE_10) numpy_AE_40 = np.array(AE_40) numpy_EI_10 = np.array(EI_10) numpy_EI_40 = np.array(EI_40) numpy_RI_10 = np.array(RI_10) numpy_RI_40 = np.array(RI_40) fig = plt.figure(figsize=(10,6)) BoxName = ['LDA AE 10', 'LDA EI 10', 'LDA RI 10','LDA AE 40', 'LDA EI 40', 'LDA RI 40'] data = [AE_10,EI_10,RI_10,AE_40,EI_40,RI_40] plt.boxplot(data) plt.ylim(-11,-9.5) plt.xticks([1,2,3,4,5,6], BoxName) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.ylabel('Perplexity',fontsize=20) plt.savefig("Plots/LDA/perplexité/perplexités_fiabilités_3_REVUES.png") plt.show() print("Perplexity of the 10-topic models for AE:", round(np.mean(numpy_AE_10),3), "+/-", round(np.std(numpy_AE_10),3)) print("Perplexity of the 40-topic models for AE:", round(np.mean(numpy_AE_40),3), "+/-", round(np.std(numpy_AE_40),3)) print("Perplexity of the 10-topic models for EI:", round(np.mean(numpy_EI_10),3), "+/-", round(np.std(numpy_EI_10),3)) print("Perplexity of the 40-topic models for EI:", round(np.mean(numpy_EI_40),3), "+/-", round(np.std(numpy_EI_40),3)) print("Perplexity of the 10-topic models for RI:", round(np.mean(numpy_RI_10),3), "+/-", round(np.std(numpy_RI_10),3)) print("Perplexity of the 40-topic models for RI:", round(np.mean(numpy_RI_40),3), "+/-", round(np.std(numpy_RI_40),3)) ###Output _____no_output_____ ###Markdown ---------------- **COHERENCE STUDY** For each journal, running time ~ 30 min AE JOURNAL ###Code dico_coherences_AE_fiabilité = {'10' : {'umass':[], 'c_v': [], 'uci':[], 'npmi' : []},'40' : {'umass':[], 'c_v': [], 'uci':[], 'npmi' : []}} dictionary = dictionary_AE_2 temp = dictionary[0] # load the dictionary indexes = [0,1,2,3,4] for num_topic in tqdm(['10','40']): for index in tqdm(indexes): #model = LdaModel.load('Résultats_LDA/Fiabilité/lda_ae_'+ num_topic+ '_'+str(index)) model = LdaModel.load('Résultats_LDA/lda_ae_'+ num_topic+ '_'+str(index)) try : dico_coherences_AE_fiabilité[num_topic]['umass'].append(CoherenceModel(model=model,corpus=corpus_train_AE, coherence="u_mass").get_coherence()) except: print('Problem with ' + 'lda_ae_'+ num_topic+ '_'+str(index)+ ' for umass') try : dico_coherences_AE_fiabilité[num_topic]['c_v'].append(CoherenceModel(model = model,dictionary=dictionary, coherence="c_v", texts=train_texts_AE).get_coherence()) except: print('Problem with ' + 'lda_ae_'+ num_topic+ '_'+str(index) + ' for c_v') try : dico_coherences_AE_fiabilité[num_topic]['uci'].append(CoherenceModel(model = model,dictionary=dictionary, coherence="c_uci", texts=train_texts_AE).get_coherence()) except: print('Problem with ' + 'lda_ae_'+ num_topic+ '_'+str(index) + ' for uci') try: dico_coherences_AE_fiabilité[num_topic]['npmi'].append(CoherenceModel(model = model,dictionary=dictionary, coherence="c_npmi", texts=train_texts_AE).get_coherence()) except : print('Problem with ' + 'lda_ae_'+ num_topic+ '_'+str(index) + ' for npmi') %store dico_coherences_AE_fiabilité dico_coherences_AE_fiabilité ###Output _____no_output_____ ###Markdown EI JOURNAL ###Code dico_coherences_EI_fiabilité = {'10' : {'umass':[], 'c_v': [], 'uci':[], 'npmi' : []},'40' : {'umass':[], 'c_v': [], 'uci':[], 'npmi' : []}} dictionary = dictionary_EI_2 temp = dictionary[0] # load the dictionary indexes = [0,1,2,3,4] for num_topic in tqdm(['10','40']): for index in tqdm(indexes): model = LdaModel.load('Résultats_LDA/Fiabilité/lda_ei_'+ num_topic+ '_'+str(index)) try : dico_coherences_EI_fiabilité[num_topic]['umass'].append(CoherenceModel(model=model,corpus=corpus_train_EI, coherence="u_mass").get_coherence()) except: print('Problem with ' + 'lda_ei_'+ num_topic+ '_'+str(index)+ ' for umass') try : dico_coherences_EI_fiabilité[num_topic]['c_v'].append(CoherenceModel(model = model,dictionary=dictionary, coherence="c_v", texts=train_texts_EI).get_coherence()) except: print('Problem with ' + 'lda_ei_'+ num_topic+ '_'+str(index) + ' for c_v') try : dico_coherences_EI_fiabilité[num_topic]['uci'].append(CoherenceModel(model = model,dictionary=dictionary, coherence="c_uci", texts=train_texts_EI).get_coherence()) except: print('Problem with ' + 'lda_ei_'+ num_topic+ '_'+str(index) + ' for uci') try: dico_coherences_EI_fiabilité[num_topic]['npmi'].append(CoherenceModel(model = model,dictionary=dictionary, coherence="c_npmi", texts=train_texts_EI).get_coherence()) except : print('Problem with ' + 'lda_ei_'+ num_topic+ '_'+str(index) + ' for npmi') dico_coherences_EI_fiabilité ###Output _____no_output_____ ###Markdown RI JOURNAL ###Code dico_coherences_RI_fiabilité = {'10' : {'umass':[], 'c_v': [], 'uci':[], 'npmi' : []},'40' : {'umass':[], 'c_v': [], 'uci':[], 'npmi' : []}} dictionary = dictionary_RI_2 temp = dictionary[0] # load the dictionary indexes = [0,1,2,3,4] for num_topic in tqdm(['10','40']): for index in tqdm(indexes): model = LdaModel.load('Résultats_LDA/Fiabilité/lda_ri_'+ num_topic+ '_'+str(index)) try : dico_coherences_RI_fiabilité[num_topic]['umass'].append(CoherenceModel(model=model,corpus=corpus_train_RI, coherence="u_mass").get_coherence()) except: print('Problem with ' + 'lda_ri_'+ num_topic+ '_'+str(index)+ ' for umass') try : dico_coherences_RI_fiabilité[num_topic]['c_v'].append(CoherenceModel(model = model,dictionary=dictionary, coherence="c_v", texts=train_texts_RI).get_coherence()) except: print('Problem with ' + 'lda_ri_'+ num_topic+ '_'+str(index) + ' for c_v') try : dico_coherences_RI_fiabilité[num_topic]['uci'].append(CoherenceModel(model = model,dictionary=dictionary, coherence="c_uci", texts=train_texts_RI).get_coherence()) except: print('Problem with ' + 'lda_ri_'+ num_topic+ '_'+str(index) + ' for uci') try: dico_coherences_RI_fiabilité[num_topic]['npmi'].append(CoherenceModel(model = model,dictionary=dictionary, coherence="c_npmi", texts=train_texts_RI).get_coherence()) except : print('Problem with ' + 'lda_ri_'+ num_topic+ '_'+str(index) + ' for npmi') dico_coherences_RI_fiabilité ###Output _____no_output_____ ###Markdown PLOTS FOR THE 3 JOURNALS ###Code %store -r dico_coherences_AE_fiabilité %store -r dico_coherences_EI_fiabilité %store -r dico_coherences_RI_fiabilité def plot_variabilité(label='umass'): fig = plt.figure(figsize=(10,6)) BoxName = ['LDA AE 10', 'LDA EI 10', 'LDA RI 10','LDA AE 40', 'LDA EI 40', 'LDA RI 40'] AE_10 = dico_coherences_AE_fiabilité['10'][label] AE_40 = dico_coherences_AE_fiabilité['40'][label] EI_10 = dico_coherences_EI_fiabilité['10'][label] EI_40 = dico_coherences_EI_fiabilité['40'][label] RI_10 = dico_coherences_RI_fiabilité['10'][label] RI_40 = dico_coherences_RI_fiabilité['40'][label] data = [AE_10,EI_10,RI_10,AE_40,EI_40,RI_40] plt.boxplot(data) plt.ylim(-0.5,0.5) plt.xticks([1,2,3,4,5,6], BoxName, fontsize=14) plt.yticks(fontsize=14) plt.title(label,fontsize=20) plt.ylabel('Coherence',fontsize=20) plt.savefig("Plots/LDA/cohérence/comparatif/cohérence_fiabilité_" + label + ".png") plt.show() plot_variabilité(label='umass') plot_variabilité(label='npmi') plot_variabilité(label='c_v') plot_variabilité(label='uci') AE_umass_10 = np.array(dico_coherences_AE_fiabilité['10']['umass']) AE_uci_10 = np.array(dico_coherences_AE_fiabilité['10']['uci']) AE_npmi_10 = np.array(dico_coherences_AE_fiabilité['10']['npmi']) AE_cv_10 = np.array(dico_coherences_AE_fiabilité['10']['c_v']) AE_umass_40 = np.array(dico_coherences_AE_fiabilité['40']['umass']) AE_uci_40 = np.array(dico_coherences_AE_fiabilité['40']['uci']) AE_npmi_40 = np.array(dico_coherences_AE_fiabilité['40']['npmi']) AE_cv_40 = np.array(dico_coherences_AE_fiabilité['40']['c_v']) EI_umass_10 = np.array(dico_coherences_EI_fiabilité['10']['umass']) EI_uci_10 = np.array(dico_coherences_EI_fiabilité['10']['uci']) EI_npmi_10 = np.array(dico_coherences_EI_fiabilité['10']['npmi']) EI_cv_10 = np.array(dico_coherences_EI_fiabilité['10']['c_v']) EI_umass_40 = np.array(dico_coherences_EI_fiabilité['40']['umass']) EI_uci_40 = np.array(dico_coherences_EI_fiabilité['40']['uci']) EI_npmi_40 = np.array(dico_coherences_EI_fiabilité['40']['npmi']) EI_cv_40 = np.array(dico_coherences_EI_fiabilité['40']['c_v']) RI_umass_10 = np.array(dico_coherences_RI_fiabilité['10']['umass']) RI_uci_10 = np.array(dico_coherences_RI_fiabilité['10']['uci']) RI_npmi_10 = np.array(dico_coherences_RI_fiabilité['10']['npmi']) RI_cv_10 = np.array(dico_coherences_RI_fiabilité['10']['c_v']) RI_umass_40 = np.array(dico_coherences_RI_fiabilité['40']['umass']) RI_uci_40 = np.array(dico_coherences_RI_fiabilité['40']['uci']) RI_npmi_40 = np.array(dico_coherences_RI_fiabilité['40']['npmi']) RI_cv_40 = np.array(dico_coherences_RI_fiabilité['40']['c_v']) print("Coherence (umass) of the 10-topic models for AE:", round(np.mean(AE_umass_10),3), "+/-", round(np.std(AE_umass_10),3)) print("Coherence (umass) of the 40-topic models for AE:", round(np.mean(AE_umass_40),3), "+/-", round(np.std(AE_umass_40),3)) print("Coherence (uci) of the 10-topic models for AE:", round(np.mean(AE_uci_10),3), "+/-", round(np.std(AE_uci_10),3)) print("Coherence (uci) of the 40-topic models for AE:", round(np.mean(AE_uci_40),3), "+/-", round(np.std(AE_uci_40),3)) print("Coherence (npmi) of the 10-topic models for AE:", round(np.mean(AE_npmi_10),3), "+/-", round(np.std(AE_npmi_10),3)) print("Coherence (npmi) of the 40-topic models for AE:", round(np.mean(AE_npmi_40),3), "+/-", round(np.std(AE_npmi_40),3)) print("Coherence (c_v) of the 10-topic models for AE:", round(np.mean(AE_cv_10),3), "+/-", round(np.std(AE_cv_10),3)) print("Coherence (c_v) of the 40-topic models for AE:", round(np.mean(AE_cv_40),3), "+/-", round(np.std(AE_cv_40),3)) print("Coherence (umass) of the 10-topic models for EI:", round(np.mean(EI_umass_10),3), "+/-", round(np.std(EI_umass_10),3)) print("Coherence (umass) of the 40-topic models for EI:", round(np.mean(EI_umass_40),3), "+/-", round(np.std(EI_umass_40),3)) print("Coherence (uci) of the 10-topic models for EI:", round(np.mean(EI_uci_10),3), "+/-", round(np.std(EI_uci_10),3)) print("Coherence (uci) of the 40-topic models for EI:", round(np.mean(EI_uci_40),3), "+/-", round(np.std(EI_uci_40),3)) print("Coherence (npmi) of the 10-topic models for EI:", round(np.mean(EI_npmi_10),3), "+/-", round(np.std(EI_npmi_10),3)) print("Coherence (npmi) of the 40-topic models for EI:", round(np.mean(EI_npmi_40),3), "+/-", round(np.std(EI_npmi_40),3)) print("Coherence (c_v) of the 10-topic models for EI:", round(np.mean(EI_cv_10),3), "+/-", round(np.std(EI_cv_10),3)) print("Coherence (c_v) of the 40-topic models for EI:", round(np.mean(EI_cv_40),3), "+/-", round(np.std(EI_cv_40),3)) print("Coherence (umass) of the 10-topic models for RI:", round(np.mean(RI_umass_10),3), "+/-", round(np.std(RI_umass_10),3)) print("Coherence (umass) of the 40-topic models for RI:", round(np.mean(RI_umass_40),3), "+/-", round(np.std(RI_umass_40),3)) print("Coherence (uci) of the 10-topic models for RI:", round(np.mean(RI_uci_10),3), "+/-", round(np.std(RI_uci_10),3)) print("Coherence (uci) of the 40-topic models for RI:", round(np.mean(RI_uci_40),3), "+/-", round(np.std(RI_uci_40),3)) print("Coherence (npmi) of the 10-topic models for RI:", round(np.mean(RI_npmi_10),3), "+/-", round(np.std(RI_npmi_10),3)) print("Coherence (npmi) of the 40-topic models for RI:", round(np.mean(RI_npmi_40),3), "+/-", round(np.std(RI_npmi_40),3)) print("Coherence (c_v) of the 10-topic models for RI:", round(np.mean(RI_cv_10),3), "+/-", round(np.std(RI_cv_10),3)) print("Coherence (c_v) of the 40-topic models for RI:", round(np.mean(RI_cv_40),3), "+/-", round(np.std(RI_cv_40),3)) ###Output _____no_output_____ ###Markdown ---------------- **SEMANTIC SIMILARITY STUDY** ###Code def compte_similarité (revue ='ae'): dico_similarité = {'10':[],'40':[]} compte_similarité = {'10':[],'40':[]} for num_topic in ['10','40']: # Collect all the top tokens produced by each model for num_model in range(5): model = LdaModel.load('Résultats_LDA/Fiabilité/lda_' + revue + '_' + num_topic +'_' + str(num_model)) jetons = [] for topic in range(model.num_topics): topics = model.show_topic(topic) jetons.extend([element[0] for element in topics]) dico_similarité[num_topic].append(jetons) # Count token overlaps between distinct pairs of models, then express them as percentages for i in range (5): for j in range(i + 1, 5): compte_similarité[num_topic].append(len(set(dico_similarité[num_topic][i]) & set(dico_similarité[num_topic][j]))) # each model contributes num_topic*10 tokens, so element/int(num_topic)*10 is a percentage compte_similarité[num_topic] = [element/int(num_topic)*10 for element in compte_similarité[num_topic]] #compte_similarité[num_topic] = round(np.mean(compte_similarité[num_topic]),1) return dico_similarité, compte_similarité dico_similarité_ae, compte_similarité_ae = compte_similarité(revue='ae') dico_similarité_ei, compte_similarité_ei = compte_similarité(revue='ei') dico_similarité_ri, compte_similarité_ri = compte_similarité(revue='ri') def print_resultats_stats (compte_similarité): for topic in ['10','40']: print("Overall similarity for the", topic, "topic models:", round(np.mean(compte_similarité[topic]),1), "+/-", round(np.std(compte_similarité[topic]),1)) print_resultats_stats(compte_similarité_ae) print_resultats_stats(compte_similarité_ei) print_resultats_stats(compte_similarité_ri) ###Output _____no_output_____
Wk6/Simple Convergence Graphs using Plotly.ipynb
###Markdown Week6 Version/Date: Oct 24, 2017 Exercise> PREDICT_400-DL_SEC56> Week6 Discussion File(s) Simple Convergence Graphs using Plotly.ipynb Instructions In your own words, explain the difference between average rate of change and instantaneous rate of change. Do they have anything in common? How is average rate of change used in determining instantaneous rate of change? Share with us a specific application of average rate of change and a specific application of instantaneous rate of change. Ideally choose something you've encountered in your work or perhaps something that interests you. Description Why Plotly? Well, it's an interesting alternative to matplotlib pyplot that I've been meaning to try. Explanation & Example In my own simplistic view, the only difference between average rate of change and instantaneous rate of change is simply the length of the interval over which the observation is made. I took an example involving rate of change in descent. When the weather starts getting colder, nothing interests me more than looking forward to my next ski trip. When it starts snowing I want to be outside skiing down the mountain. Let's say I take a given ski run down a mountain and want to know what my rate of descent will be through a given portion of the run. If I assume my speed down a section of the hill will be directly proportional to the slope of the run through that same section, then I can predict which part of the run will be my favorite based on which portion is the *fastest*... childish perhaps, but let's go with it. Also, to keep this example simple and relevant, I'll ignore other variables like changing surface conditions, obstacles, or the fact that my speed at any given section is dependent upon my speed entering that section of the run. I've made up a completely fictitious run profile below to illustrate. As you see from the various slopes, the interval span determines the slope, and as the interval span decreases, it gets closer to resembling the slope at a single point on the hill. Thus the instantaneous rate of change at a given point will be determined by the average rate of change for the two points on either side of that instant *as the span between those two points diminishes to zero*. So if I want to know how quickly I'm moving down the, um, slope (pun absolutely intended) at a given point, then I simply need to look at the average rate of descent across two points of equal distance from me up the hill and down the hill from my exact location. The more points I define on the continuous ski slope, the easier it is to approximate the instantaneous rate of descent at a given point. ###Code # pip install plotly first if you haven't already done so. # This cell will produce an error on the imports if not installed. import plotly.plotly from plotly.graph_objs import Scatter, Layout # simple offline plot for slope - note the iplot method for jupyter as opposed to the normal plot. plotly.offline.init_notebook_mode(connected=True) # Made up scatter plot of the entire mountain ski run profile full_run = Scatter(x=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], y=[5, 4.9, 4.2, 4.1, 4, 2.8, 2.6, 2.6, 2.5, 1.8, 1.2, 1], name='entire run', line=dict(color='#bc42f4')) top_half_name_slope = 'top half - slope is ' + str(round((2.8-5)/(6-1),2)) top_half = Scatter(x=[1, 6], y=[5, 2.8], name=top_half_name_slope, line=dict(color='#41c1f4')) bottom_half_name_slope = 'bottom half - slope is ' + str(round((1-2.8)/(12-6),2)) bottom_half = Scatter(x=[6,12], y=[2.8,1], name=bottom_half_name_slope, line=dict(color='#41f1f4')) top_third_name_slope = 'top third - slope is ' + str(round((4.1-5)/(4-1),2)) top_third = Scatter(x=[1, 4], y=[5, 4.1], name=top_third_name_slope, line=dict(color='#41f49d')) middle_third_name_slope = 'middle third - slope is ' + str(round((2.6-4.1)/(8-4),2)) middle_third = Scatter(x=[4,8], y=[4.1,2.6], name=middle_third_name_slope, line=dict(color='#4ff441')) bottom_third_name_slope = 'bottom third - slope is ' + str(round((1-2.6)/(12-8),2)) bottom_third = Scatter(x=[8,12], y=[2.6,1], name=bottom_third_name_slope, line=dict(color='#b5f441')) top_fourth_name_slope = 'top fourth - slope is ' + str(round((4.2-5)/(3-1),2)) top_fourth = Scatter(x=[1, 3], y=[5, 4.2], name=top_fourth_name_slope, line=dict(color='#f4cd41')) midhigh_fourth_name_slope = 'midhigh fourth - slope is ' + str(round((2.8-4.2)/(6-3),2)) midhigh_fourth = Scatter(x=[3, 6], y=[4.2, 2.8], name=midhigh_fourth_name_slope, line=dict(color='#f4ac41')) midlow_fourth_name_slope = 'midlow fourth - slope is ' + str(round((2.5-2.8)/(9-6),2)) midlow_fourth = Scatter(x=[6, 9], y=[2.8, 2.5], name=midlow_fourth_name_slope, line=dict(color='#f48241')) bottom_fourth_name_slope = 'bottom fourth - slope is ' + str(round((1-2.5)/(12-9),2)) bottom_fourth = Scatter(x=[9, 12], y=[2.5, 1], name=bottom_fourth_name_slope, line=dict(color='#f44641')) # slope between the second point and the sixth point slope_part1_label = 'slope pt1 - slope is ' + str(round((2.8-4.9)/(6-2),2)) slope_part1 = Scatter(x=[2, 6], y=[4.9, 2.8], name=slope_part1_label, line=dict(color='#41f49d')) # slope between the third point and the fifth point slope_part2_label = 'slope pt2 - slope is ' + str(round((4-4.2)/(5-3),2)) slope_part2 = Scatter(x=[3, 5], y=[4.2, 4], name=slope_part2_label, line=dict(color='#f44641')) #slope_part3_label = 'slope pt3 - slope is ' + str(round((4.1-4.2)/(4-3),2)) #slope_part3 = Scatter(x=[3, 4], y=[4.2, 4.1], name=slope_part3_label, line=dict(color='#f48241')) #slope_part4_label = 'slope pt4 - slope is ' + str(round((4-4.1)/(5-4),2)) #slope_part4 = Scatter(x=[4, 5], y=[4.1, 4], name=slope_part4_label, line=dict(color='#f44641')) plotly.offline.iplot({ "data": [full_run, top_half, slope_part1, slope_part2], "layout": Layout(title="mountain run profile") }) print('From this graph we would assume the rate of descent at point (4,4.1) is ' + str(round((4-4.2)/(5-3),2))) print('As we converge on a given point, the average rate approaches the instantaneous rate.') ###Output _____no_output_____
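###Markdown The shrinking-interval idea is easy to check numerically as well. The sketch below uses a smooth made-up stand-in for the run profile (not the scatter data above) and shows the average rate over a narrowing span settling on the instantaneous rate at x = 4: ###Code
import numpy as np

f = lambda x: 5 * np.exp(-0.13 * x)  # smooth stand-in for the run profile

x0 = 4.0
for h in [2.0, 1.0, 0.5, 0.1, 0.01]:
    avg_rate = (f(x0 + h) - f(x0 - h)) / (2 * h)  # average rate over [x0-h, x0+h]
    print(h, round(avg_rate, 5))

# analytic instantaneous rate at x0, for comparison
print('derivative at 4:', round(-0.13 * 5 * np.exp(-0.13 * x0), 5))
###Output _____no_output_____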
OptionsPricingEvaluation.ipynb
###Markdown Black-Scholes European Option Pricing Script ###Code # File Contains: Python code containing closed-form solutions for the valuation of European Options, # for backward compatibility with Python 2.7 from __future__ import division # import necessary libraries import math import numpy as np from scipy.stats import norm from scipy.stats import mvn # Plotting import matplotlib.pylab as pl ###Output _____no_output_____ ###Markdown Option Pricing Theory: Black-Scholes modelBlack Scholes genre option models are widely used to value European options. The original “Black Scholes” model was published in 1973 for non-dividend paying stocks. Since that time, a wide variety of extensions to the original Black Scholes model have been created. Modifications of the formula are used to price other financial instruments like dividend paying stocks, commodity futures, and FX forwards. Mathematically, these formulas are nearly identical. The primary difference between these models is whether the asset has a carrying cost (if the asset has a cost or benefit associated with holding it) and how the asset gets present valued. To illustrate this relationship, a “generalized” form of the Black Scholes equation is shown below.The Black Scholes model is based on a number of assumptions about how financial markets operate. Black Scholes style models assume:1. **Arbitrage Free Markets**. Black Scholes formulas assume that traders try to maximize their personal profits and don’t allow arbitrage opportunities (riskless opportunities to make a profit) to persist. 2. **Frictionless, Continuous Markets**. This assumption of frictionless markets assumes that it is possible to buy and sell any amount of the underlying at any time without transaction costs.3. **Risk Free Rates**. It is possible to borrow and lend money at a risk-free interest rate4. **Log-normally Distributed Price Movements**. Prices are log-normally distributed and described by Geometric Brownian Motion5. **Constant Volatility**. The Black Scholes genre options formulas assume that volatility is constant across the life of the option contract. In practice, these assumptions are not particularly limiting. The primary limitation imposed by these models is that it is possible to (reasonably) describe the dispersion of prices at some point in the future in a mathematical equation. An important concept of Black Scholes models is that the actual way that the underlying asset drifts over time isn't important to the valuation. Since European options can only be exercised when the contract expires, it is only the distribution of possible prices on that date that matters - the path that the underlying took to that point doesn't affect the value of the option. This is why the primary limitation of the model is being able to describe the dispersion of prices at some point in the future, not that the dispersion process is simplistic.The generalized Black-Scholes formula can be found below (see *Figure 1 – Generalized Black Scholes Formula*). While these formulas may look complicated at first glance, most of the terms can be found as part of an options contract or are prices readily available in the market. The only term that is difficult to calculate is the implied volatility (σ).
Implied volatility is typically calculated using prices of other options that have recently been traded.>*Call Price*>\begin{equation}C = Fe^{(b-r)T} N(D_1) - Xe^{-rT} N(D_2)\end{equation}>*Put Price*>\begin{equation}P = Xe^{-rT} N(-D_2) - Fe^{(b-r)T} N(-D_1)\end{equation}>*with the following intermediate calculations*>\begin{equation}D_1 = \frac{\ln\frac{F}{X} + (b+\frac{V^2}{2})T}{V\sqrt{T}}\end{equation}>\begin{equation}D_2 = D_1 - V\sqrt{T}\end{equation}>*and the following inputs*>| Symbol | Meaning |>|--------|---------|>| F or S | **Underlying Price**. The price of the underlying asset on the valuation date. S is commonly used to represent a spot price, F a forward price |>| X | **Strike Price**. The strike, or exercise, price of the option. |>| T | **Time to expiration**. The time to expiration in years. This can be calculated by comparing the time between the expiration date and the valuation date. T = (t_1 - t_0)/365 |>| t_0 | **Valuation Date**. The date on which the option is being valued. For example, it might be today’s date if the option is being valued today. |>| t_1 | **Expiration Date**. The date on which the option must be exercised. |>| V | **Volatility**. The volatility of the underlying security. This factor usually cannot be directly observed in the market. It is most often calculated by looking at the prices for recent option transactions and back-solving a Black Scholes style equation to find the volatility that would result in the observed price. This is commonly abbreviated with the Greek letter sigma, σ, although V is used here for consistency with the code below. |>| q | **Continuous Yield**. Used in the Merton model, this is the continuous yield of the underlying security. Option holders are typically not paid dividends or other payments until they exercise the option. As a result, this factor decreases the value of an option. |>| r | **Risk Free Rate**. This is the expected return on a risk-free investment, commonly approximated by the yield on a low-risk government bond or the rate that large banks borrow between themselves (LIBOR). The rate depends on the tenor of the cash flow. For example, a 10-year risk-free bond is likely to have a different rate than a 20-year risk-free bond. |>| rf | **Foreign Risk Free Rate**. Used in the Garman Kohlhagen model, this is the risk free rate of the foreign currency. Each currency will have a risk free rate. |>*Figure 1 - Generalized Black Scholes Formula* Model ImplementationThese functions encapsulate a generic version of the pricing formulas. They are primarily intended to be called by the other functions within this library. The following functions will have a fixed interface so that they can be called directly for academic applications that use the cost-of-carry (b) notation: _GBS() A generalized European option model _GBS_ImpliedVol() A generalized European option implied vol calculator The other functions in this library are called by these main functions and are not expected to be interface safe (the implementation and interface may change over time).
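###Markdown Before moving to the implementation, it may help to see how the cost-of-carry term b specializes the generalized formula to the named models. The mapping below follows the standard convention; only the Black Scholes (b = r) and Merton (b = r - q) cases are actually exercised later in this notebook, so treat the other rows as the usual textbook assignments rather than part of this library. ###Code
# Standard cost-of-carry (b) conventions for Black Scholes genre models:
#   Black Scholes  (stock, no dividend):     b = r
#   Merton         (stock, yield q):         b = r - q
#   Black 76       (commodity futures):      b = 0
#   Garman-Kohlhagen (FX, foreign rate rf):  b = r - rf
def cost_of_carry(model, r, q=0.0, rf=0.0):
    b_map = {
        "black_scholes": r,
        "merton": r - q,
        "black_76": 0.0,
        "garman_kohlhagen": r - rf,
    }
    return b_map[model]

print(cost_of_carry("merton", r=0.05, q=0.02))  # 0.03
###Output _____no_output_____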
Implementation for European Options ###Code # The primary function for calculating Generalized Black Scholes option prices and deltas # It is not intended to be part of this module's public interface # Inputs: option_type = "p" or "c", fs = price of underlying, x = strike, t = time to expiration, r = risk free rate # b = cost of carry, v = implied volatility # Outputs: value, delta, gamma, theta, vega, rho def _gbs(option_type, fs, x, t, r, b, v): _debug("Debugging Information: _gbs()") # ----------- # Create preliminary calculations t__sqrt = math.sqrt(t) d1 = (math.log(fs / x) + (b + (v * v) / 2) * t) / (v * t__sqrt) d2 = d1 - v * t__sqrt if option_type == "c": # it's a call _debug(" Call Option") value = fs * math.exp((b - r) * t) * norm.cdf(d1) - x * math.exp(-r * t) * norm.cdf(d2) delta = math.exp((b - r) * t) * norm.cdf(d1) gamma = math.exp((b - r) * t) * norm.pdf(d1) / (fs * v * t__sqrt) theta = -(fs * v * math.exp((b - r) * t) * norm.pdf(d1)) / (2 * t__sqrt) - (b - r) * fs * math.exp( (b - r) * t) * norm.cdf(d1) - r * x * math.exp(-r * t) * norm.cdf(d2) vega = math.exp((b - r) * t) * fs * t__sqrt * norm.pdf(d1) rho = x * t * math.exp(-r * t) * norm.cdf(d2) else: # it's a put _debug(" Put Option") value = x * math.exp(-r * t) * norm.cdf(-d2) - (fs * math.exp((b - r) * t) * norm.cdf(-d1)) delta = -math.exp((b - r) * t) * norm.cdf(-d1) gamma = math.exp((b - r) * t) * norm.pdf(d1) / (fs * v * t__sqrt) theta = -(fs * v * math.exp((b - r) * t) * norm.pdf(d1)) / (2 * t__sqrt) + (b - r) * fs * math.exp( (b - r) * t) * norm.cdf(-d1) + r * x * math.exp(-r * t) * norm.cdf(-d2) vega = math.exp((b - r) * t) * fs * t__sqrt * norm.pdf(d1) rho = -x * t * math.exp(-r * t) * norm.cdf(-d2) _debug(" d1= {0}\n d2 = {1}".format(d1, d2)) _debug(" delta = {0}\n gamma = {1}\n theta = {2}\n vega = {3}\n rho={4}".format(delta, gamma, theta, vega, rho)) return value, delta, gamma, theta, vega, rho ###Output _____no_output_____ ###Markdown Implementation: Implied VolatilityThis section implements implied volatility calculations. It contains an implementation of a **Newton-Raphson Search.** This is a fast implied volatility search that can be used when there is a reliable estimate of Vega (i.e., European options) ###Code # ---------- # Find the Implied Volatility of a European (GBS) Option given a price # using Newton-Raphson method for greater speed since Vega is available #def _gbs_implied_vol(option_type, fs, x, t, r, b, cp, precision=.00001, max_steps=100): # return _newton_implied_vol(_gbs, option_type, x, fs, t, b, r, cp, precision, max_steps) ###Output _____no_output_____ ###Markdown Public Interface for valuation functionsThis section encapsulates the functions that the user will call to value certain options. These functions primarily figure out the cost-of-carry term (b) and then call the generic version of the function (like _GBS() or _American). All of these functions return an array containing the premium and the greeks. ###Code # --------------------------- # Black Scholes: stock Options (no dividend yield) # Inputs: # option_type = "p" or "c" # fs = price of underlying # x = strike # t = time to expiration # v = implied volatility # r = risk free rate # q = dividend payment # b = cost of carry # Outputs: # value = price of the option # delta = first derivative of value with respect to price of underlying # gamma = second derivative of value w.r.t price of underlying # theta = first derivative of value w.r.t. time to expiration # vega = first derivative of value w.r.t.
implied volatility # rho = first derivative of value w.r.t. risk free rates def BlackScholes(option_type, fs, x, t, r, v): b = r return _gbs(option_type, fs, x, t, r, b, v) ###Output _____no_output_____ ###Markdown Public Interface for implied Volatility Functions ###Code # Inputs: # option_type = "p" or "c" # fs = price of underlying # x = strike # t = time to expiration # v = implied volatility # r = risk free rate # q = dividend payment # b = cost of carry # Outputs: # value = price of the option # delta = first derivative of value with respect to price of underlying # gamma = second derivative of value w.r.t price of underlying # theta = first derivative of value w.r.t. time to expiration # vega = first derivative of value w.r.t. implied volatility # rho = first derivative of value w.r.t. risk free rates #def euro_implied_vol(option_type, fs, x, t, r, q, cp): # b = r - q # return _gbs_implied_vol(option_type, fs, x, t, r, b, cp) ###Output _____no_output_____ ###Markdown Implementation: Helper FunctionsThese functions aren't part of the main code but serve as utility functions mostly used for debugging ###Code # --------------------------- # Helper Function for Debugging # Prints a message if running code from this module and _DEBUG is set to true # otherwise, do nothing # Developer can toggle _DEBUG to True for more messages # normally this is set to False _DEBUG = False def _debug(debug_input): if (__name__ == "__main__") and (_DEBUG is True): print(debug_input) ###Output _____no_output_____ ###Markdown Real Calculations of Option Prices ###Code bs = BlackScholes('c', fs=60, x=65, t=0.25, r=0.08, v=0.30) optionPrice = bs[0] optionPrice ###Output _____no_output_____ ###Markdown Option price charts ###Code stockPrices = np.arange(50, 100, 1) prices = stockPrices * 0.0 # float array so option prices aren't truncated to integers stockPrice = 60 strike = 65 timeToExpiration = 0.25 impliedVolatility = 0.30 riskFreeRate = 0.05 pl.title('Stock Option Price') for i in range(len(stockPrices)): prices[i] = BlackScholes('c', stockPrices[i], strike, t = timeToExpiration, r = riskFreeRate, v = impliedVolatility)[0] pl.plot(stockPrices, prices, label = 'Option Price') pl.xlabel("Stock Price") pl.ylabel("Option Price") pl.grid(True) pl.show() timeToExpiration = np.arange(0.1, 1, 0.05) prices = timeToExpiration * 0 stockPrice = 60 strike = 65 #timeToExpiration = 0.25 impliedVolatility = 0.30 riskFreeRate = 0.05 pl.title('Stock Option Price') for i in range(len(prices)): prices[i] = BlackScholes('c', stockPrice, strike, t = timeToExpiration[i], r = riskFreeRate, v = impliedVolatility)[0] pl.plot(timeToExpiration, prices, label = 'Option Price') pl.xlabel("Time to Expiry") pl.ylabel("Option Price") pl.grid(True) pl.show() strikes = np.arange(50, 80, 1) prices = strikes * 0.0 stockPrice = 60 strike = 65 timeToExpiration = 0.25 impliedVolatility = 0.30 riskFreeRate = 0.05 pl.title('Stock Option Price') for i in range(len(prices)): prices[i] = BlackScholes('c', stockPrice, strikes[i], t = timeToExpiration, r = riskFreeRate, v = impliedVolatility)[0] pl.plot(strikes, prices, label = 'Option Price') pl.xlabel("Striking Price") pl.ylabel("Option Price") pl.grid(True) pl.show() strikes = np.arange(50, 80, 1) prices = strikes * 0.0 stockPrice = 60 strike = 65 timeToExpiration = 0.25 impliedVolatility = 0.30 riskFreeRate = 0.05 pl.title('Stock Put Option Price') for i in range(len(prices)): prices[i] = BlackScholes('p', stockPrice, strikes[i], t = timeToExpiration, r = riskFreeRate, v = impliedVolatility)[0] pl.plot(strikes, prices, label = 'Option Price') pl.xlabel("Striking
Price") pl.ylabel("Option Price") pl.grid(True) pl.show() ###Output _____no_output_____
03_USO_DEL_PAQUETE_GEEMAP.ipynb
###Markdown ** **04 - Converting GEE JavaScripts to Python Notebooks >>** 3. USING THE "Geemap" PACKAGE The most general functions provided by the **geemap** package are described below: a) Creating an interactive map with the "geemap" package for GEE: * Creating interactive maps based on `ipyleaflet`: ###Code import geemap Map = geemap.Map(center=[40,-100], zoom=4) Map ###Output _____no_output_____ ###Markdown * Creating an interactive map based on `folium`: ###Code import geemap.eefolium as emap Map = emap.Map(center=[40,-100], zoom=4) Map ###Output _____no_output_____
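###Markdown As a hedged follow-up (not in the original notebook, and it assumes an authenticated Earth Engine account), a dataset can be drawn on the interactive map with `Map.addLayer`, mirroring the GEE JavaScript API: ###Code
import ee
import geemap

ee.Initialize()  # requires a prior one-time ee.Authenticate()

Map = geemap.Map(center=[40, -100], zoom=4)
dem = ee.Image('USGS/SRTMGL1_003')  # SRTM digital elevation model
vis_params = {'min': 0, 'max': 4000,
              'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5']}
Map.addLayer(dem, vis_params, 'SRTM DEM')
Map
###Output _____no_output_____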
02-Regression/01-Simple-Linear-Regression/simple_linear_regression.ipynb
###Markdown Simple Linear Regression Importing the libraries ###Code import matplotlib.pyplot as plt import pandas as pd ###Output _____no_output_____ ###Markdown Importing the dataset ###Code dataset = pd.read_csv('Salary_Data.csv') X = dataset.iloc[ :, :-1].values y = dataset.iloc[:, -1].values print(X) print(y) ###Output [[ 1.1] [ 1.3] [ 1.5] [ 2. ] [ 2.2] [ 2.9] [ 3. ] [ 3.2] [ 3.2] [ 3.7] [ 3.9] [ 4. ] [ 4. ] [ 4.1] [ 4.5] [ 4.9] [ 5.1] [ 5.3] [ 5.9] [ 6. ] [ 6.8] [ 7.1] [ 7.9] [ 8.2] [ 8.7] [ 9. ] [ 9.5] [ 9.6] [10.3] [10.5]] [ 39343. 46205. 37731. 43525. 39891. 56642. 60150. 54445. 64445. 57189. 63218. 55794. 56957. 57081. 61111. 67938. 66029. 83088. 81363. 93940. 91738. 98273. 101302. 113812. 109431. 105582. 116969. 112635. 122391. 121872.] ###Markdown Splitting the dataset into the Training set and Test set ###Code from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0) print(X_train) print(X_test) print(y_train) print(y_test) ###Output [[ 9.6] [ 4. ] [ 5.3] [ 7.9] [ 2.9] [ 5.1] [ 3.2] [ 4.5] [ 8.2] [ 6.8] [ 1.3] [10.5] [ 3. ] [ 2.2] [ 5.9] [ 6. ] [ 3.7] [ 3.2] [ 9. ] [ 2. ] [ 1.1] [ 7.1] [ 4.9] [ 4. ]] [[ 1.5] [10.3] [ 4.1] [ 3.9] [ 9.5] [ 8.7]] [112635. 55794. 83088. 101302. 56642. 66029. 64445. 61111. 113812. 91738. 46205. 121872. 60150. 39891. 81363. 93940. 57189. 54445. 105582. 43525. 39343. 98273. 67938. 56957.] [ 37731. 122391. 57081. 63218. 116969. 109431.] ###Markdown Training the Simple Linear Regression model on the Training set ###Code from sklearn.linear_model import LinearRegression regressor = LinearRegression() regressor.fit(X_train, y_train) ###Output _____no_output_____ ###Markdown Predicting the Test set results ###Code y_pred = regressor.predict(X_test) print(y_pred) ###Output [ 40748.96184072 122699.62295594 64961.65717022 63099.14214487 115249.56285456 107799.50275317] ###Markdown Visualising the Training set results ###Code plt.scatter(X_train, y_train, color = 'red') plt.plot(X_train, regressor.predict(X_train), color = 'blue') plt.title('Salary vs Experience (Training set)') plt.xlabel('Years of Experience') plt.ylabel('Salary') plt.show() ###Output _____no_output_____ ###Markdown Visualising the Test set results ###Code plt.scatter(X_test, y_test, color = 'red') plt.plot(X_train, regressor.predict(X_train), color = 'blue') plt.title('Salary vs Experience (Test set)') plt.xlabel('Years of Experience') plt.ylabel('Salary') plt.show() ###Output _____no_output_____
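###Markdown As an added check (not part of the original course notebook), we can quantify how well the fitted line generalizes using standard sklearn metrics, and inspect the fitted coefficients: ###Code
from sklearn.metrics import mean_absolute_error, r2_score

print('R^2 on test set:', r2_score(y_test, y_pred))
print('MAE on test set:', mean_absolute_error(y_test, y_pred))

# The fitted line is: salary = intercept + slope * years_of_experience
print('intercept:', regressor.intercept_)
print('slope:', regressor.coef_[0])
###Output _____no_output_____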
examples/non-geospatial/classification_mnist/003_train_classifier.ipynb
###Markdown Train/fine-tune computer vision (CV) classifiersIn this notebook, we use the annotated images (see, e.g., notebooks `001` and `002`) to train/fine-tune CV classifiers. ###Code # solve issue with autocomplete %config Completer.use_jedi = False %load_ext autoreload %autoreload 2 %matplotlib inline from mapreader import classifier from mapreader import loadAnnotations from mapreader import patchTorchDataset import numpy as np import torch from torch import nn import torchvision from torchvision import transforms from torchvision import models ###Output _____no_output_____ ###Markdown Read annotations ###Code annotated_images = loadAnnotations() annotated_images.load("./annotations_mnist/mnist_#kasra#.csv", path2dir="../../../mapreader/datasets/small_mnist") annotated_images.annotations.columns.tolist() print(annotated_images) # We need to shift these labels so that they start from 0: annotated_images.adjust_labels(shiftby=-1) # show sample images for target label (tar_label) annotated_images.show_image_labels(tar_label=0, num_sample=6) # show an image based on its index annotated_images.show_image(indx=1, cmap="Greys_r") ###Output _____no_output_____ ###Markdown Split annotations into train/val or train/val/testWe use a stratified method for splitting the annotations, that is, each set contains approximately the same percentage of samples of each target label as the original set. ###Code annotated_images.split_annotations(frac_train=0.8, frac_val=0.2, frac_test=0.0) ###Output _____no_output_____ ###Markdown Dataframes for train, validation and test sets can be accessed via:```pythonannotated_images.trainannotated_images.valannotated_images.test``` ###Code annotated_images.train["label"].value_counts() annotated_images.val["label"].value_counts() # annotated_images.test["label"].value_counts() ###Output _____no_output_____ ###Markdown Classifier Dataset Define transformations to be applied to images before being used in training or validation/inference.`patchTorchDataset` has some default transformations. 
However, it is possible to define your own transformations and pass them to `patchTorchDataset`: ###Code # ------------------ # --- Transformation # ------------------ # FOR INCEPTION #resize2 = 299 # otherwise: resize2 = 224 # mean and standard deviations of pixel intensities in ImageNet, # the standard normalization for pretrained torchvision models normalize_mean = [0.485, 0.456, 0.406] normalize_std = [0.229, 0.224, 0.225] data_transforms = { 'train': transforms.Compose( [transforms.Resize(resize2), transforms.RandomApply([ transforms.RandomHorizontalFlip(p=0.5), transforms.RandomVerticalFlip(p=0.5), ], p=0.5), # transforms.RandomApply([ # transforms.GaussianBlur(21, sigma=(0.5, 5.0)), # ], p=0.25), transforms.RandomApply([ #transforms.RandomPerspective(distortion_scale=0.5, p=0.5), transforms.Resize((50, 50)), ], p=0.25), # transforms.RandomApply([ # transforms.RandomAffine(180, translate=None, scale=None, shear=20), # ], p=0.25), transforms.Resize((resize2, resize2)), transforms.ToTensor(), transforms.Normalize(normalize_mean, normalize_std) ]), 'val': transforms.Compose( [transforms.Resize((resize2, resize2)), transforms.ToTensor(), transforms.Normalize(normalize_mean, normalize_std) ]), } ###Output _____no_output_____ ###Markdown Now, we can use these transformations to instantiate `patchTorchDataset`: ###Code train_dataset = patchTorchDataset(annotated_images.train, transform=data_transforms["train"]) valid_dataset = patchTorchDataset(annotated_images.val, transform=data_transforms["val"]) # test_dataset = patchTorchDataset(annotated_images.test, # transform=data_transforms["val"]) ###Output _____no_output_____ ###Markdown Sampler ###Code # ----------- # --- Sampler # ----------- # We define a sampler as we have a highly imbalanced dataset label_counts_dict = annotated_images.train["label"].value_counts().to_dict() class_sample_count = [] for i in range(0, len(label_counts_dict)): class_sample_count.append(label_counts_dict[i]) # inverse-frequency class weights weights = 1. / torch.Tensor(class_sample_count) weights = weights.double() print(f"Weights: {weights}") train_sampler = torch.utils.data.sampler.WeightedRandomSampler( weights[train_dataset.patchframe["label"].to_list()], num_samples=len(train_dataset.patchframe)) valid_sampler = torch.utils.data.sampler.WeightedRandomSampler( weights[valid_dataset.patchframe["label"].to_list()], num_samples=len(valid_dataset.patchframe)) ###Output _____no_output_____
Use `.initialize_model` method Method 1: Define a model using `from torchvision import models` ###Code # # Choose a model from the supported PyTorch models # model_ft = models.resnet18(pretrained=True) # # Add FC based on the number of classes # num_ftrs = model_ft.fc.in_features # model_ft.fc = nn.Linear(num_ftrs, myclassifier.num_classes) # # Add the model to myclassifier # myclassifier.add_model(model_ft) # myclassifier.model_summary() ###Output _____no_output_____ ###Markdown Method 2: use `.initialize_model` ###Code myclassifier.del_model() myclassifier.initialize_model("resnet18", pretrained=True, last_layer_num_classes="default", add_model=True) myclassifier.model_summary(only_trainable=False) ###Output _____no_output_____ ###Markdown (Un)freeze layers in the neural network architecture ###Code # myclassifier.freeze_layers(["conv1.weight", "bn1.weight", "bn1.bias", "layer1*", "layer2*", "layer3*"]) # myclassifier.model_summary(only_trainable=False) # myclassifier.unfreeze_layers(["layer3*"]) # myclassifier.model_summary(only_trainable=False) # myclassifier.only_keep_layers(["fc.weight", "fc.bias"]) # myclassifier.model_summary(only_trainable=True) ###Output _____no_output_____ ###Markdown Define optimizer, scheduler and criterion We can either use one learning rate for all the layers in the neural network or define layerwise learning rates, that is, the learning rate of each layer is different. This is normally used in fine-tuning pretrained models in which a smaller learning rate is assigned to the first layers.`MapReader` has a `.layerwise_lr` method to define layerwise learning rates. By default, `MapReader` uses a linear function to distribute the learning rates (using `min_lr` for the first layer and `max_lr` for the last layer). The linear function can be changed using `ltype="geomspace"` argument. 
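###Markdown To see roughly what `.layerwise_lr` produces under the hood, here is a hedged sketch in plain PyTorch — not MapReader's actual implementation, and using a hypothetical toy model — of building per-parameter groups with linearly spaced learning rates (smallest for the earliest layers, largest for the last, as the default does). MapReader's own call is used in the next cell. ###Code
import numpy as np
import torch

# Toy stand-in model, purely for illustration
model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU(), torch.nn.Linear(8, 2))

named_params = [(n, p) for n, p in model.named_parameters() if p.requires_grad]
lrs = np.linspace(1e-4, 1e-3, len(named_params))  # np.geomspace would mimic ltype="geomspace"

# One optimizer parameter group per parameter tensor, each with its own lr
param_groups = [{"params": [p], "lr": lr} for (n, p), lr in zip(named_params, lrs)]
optimizer = torch.optim.Adam(param_groups, betas=(0.9, 0.999))
print([g["lr"] for g in optimizer.param_groups])
###Output _____no_output_____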
###Code list2optim = myclassifier.layerwise_lr(min_lr=1e-4, max_lr=1e-3) # #list2optim = myclassifier.layerwise_lr(min_lr=1e-4, max_lr=1e-3, ltype="geomspace") optim_param_dict = { "lr": 1e-3, "betas": (0.9, 0.999), "eps": 1e-08, "weight_decay": 0, "amsgrad": False } # --- if list2optim is defined, e.g., by using `.layerwise_lr` method (see the previous cell): myclassifier.initialize_optimizer(optim_type="adam", params2optim=list2optim, optim_param_dict=optim_param_dict, add_optim=True) # --- otherwise: # myclassifier.initialize_optimizer(optim_type="adam", # optim_param_dict=optim_param_dict, # add_optim=True) ###Output _____no_output_____ ###Markdown Other optimizers can also be used in the above cell, e.g.:```pythonoptim_param_dict = { "lr": 1e-3, "momentum": 0, "dampening": 0, "weight_decay": 0, "nesterov": False}myclassifier.initialize_optimizer(optim_type="sgd", optim_param_dict=optim_param_dict, add_optim=True)``` ###Code scheduler_param_dict = { "step_size": 10, "gamma": 0.1, "last_epoch": -1, # "verbose": False } myclassifier.initialize_scheduler(scheduler_type="steplr", scheduler_param_dict=scheduler_param_dict, add_scheduler=True) ###Output _____no_output_____ ###Markdown Other schedulers can also be used in the above cell, e.g.:```pythonscheduler_param_dict = { "max_lr": 1e-2, "steps_per_epoch": len(myclassifier.dataloader["train"]), "epochs": 5}myclassifier.initialize_scheduler(scheduler_type="OneCycleLR", scheduler_param_dict=scheduler_param_dict, add_scheduler=True)``` ###Code # Add criterion criterion = nn.CrossEntropyLoss() myclassifier.add_criterion(criterion) ###Output _____no_output_____ ###Markdown Train/fine-tune a model ###Code myclassifier.train_component_summary() ###Output _____no_output_____ ###Markdown **Note:** it is possible to interrupt a training (using Kernel/Interrupt in Jupyter Notebook or ctrl+C). ###Code myclassifier.train(num_epochs=5, save_model_dir="./models_mnist", tensorboard_path="tboard_mnist", verbosity_level=0, tmp_file_save_freq=2, remove_after_load=False, print_info_batch_freq=5) ###Output _____no_output_____ ###Markdown Plot results ###Code list(myclassifier.metrics.keys()) myclassifier.plot_metric(y_axis=["epoch_loss_train", "epoch_loss_val"], y_label="Loss", legends=["Train", "Valid"], colors=["k", "tab:red"]) myclassifier.plot_metric(y_axis=["epoch_rocauc_macro_train", "epoch_rocauc_macro_val"], y_label="ROC AUC", legends=["Train", "Valid"], colors=["k", "tab:red"]) myclassifier.plot_metric(y_axis=["epoch_fscore_macro_train", "epoch_fscore_macro_val", "epoch_fscore_0_val", "epoch_fscore_1_val"], y_label="F-score", legends=["Train", "Valid", "Valid (label: 0)", "Valid (label: 1)",], colors=["k", "tab:red", "tab:red", "tab:red"], styles=["-", "-", "--", ":"], markers=["o", "o", "", ""], plt_yrange=[0, 100]) myclassifier.plot_metric(y_axis=["epoch_recall_macro_train", "epoch_recall_macro_val", "epoch_recall_0_val", "epoch_recall_1_val"], y_label="Recall", legends=["Train", "Valid", "Valid (label: 0)", "Valid (label: 1)",], colors=["k", "tab:red", "tab:red", "tab:red"], styles=["-", "-", "--", ":"], markers=["o", "o", "", ""], plt_yrange=[0, 100]) ###Output _____no_output_____
code/subsampling_example.ipynb
###Markdown Subsampling Example - using bootstrap methods[![Latest release](https://badgen.net/github/release/Naereen/Strapdown.js)](https://github.com/eabarnes1010/course_objective_analysis/tree/main/code)[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eabarnes1010/course_objective_analysis/blob/main/code/subsampling_example.ipynb)An illustration of using Bootstrapping. The problemThis example is based on the "Resampling: bootstrap" example in Chapter 1 of the lecture notes, and is actually an example taken from my research life where I got to be a co-author on the paper for doing this analysis! The idea is as follows...You believe that aerosols grow the most when you have high geopotential heights nearby. You composite the 500 hPa geopotential height on the 20 August days when you have aerosol formation and growth over a site in Egbert, Canada, and you find that the average geopotential on these days is 5900 m. The long-term mean at this station is 5886 m, so the heights appear higher during aerosol growth. Are these results significant? Or is this just random chance? The approachThe first thing to notice is that you may have no idea what the distribution of daily 500 hPa geopotential heights even looks like. Now, you do have years and years of data, so you can plot it (and we will), but it doesn't look particularly normal. Next, you might say "that's okay. I'll just use the Central Limit Theorem", but with only N = 20, you may not have enough points to invoke that yet. So...what do you do? The idea is that while you do not know the underlying distribution (and thus, cannot use Monte Carlo where you have to know the underlying distribution in order to code it up), you do have _a lot_ of data, so why not use the data instead?Here's what we will do in the steps below: Run 2,500 experiments; within each experiment, randomly grab 20 days from the historical geopotential height data, and take the mean of the 20 days. After 2,500 iterations, you will have a distribution of the N = 20 sample means under the null hypothesis of random chance. Now, you can look at this distribution and determine the 95% confidence bounds on the N = 20 sample means - if the observed value of 5900 m is outside of this range, you have reason to believe it may be more than random chance. ###Code try: import google.colab IN_COLAB = True except: IN_COLAB = False print('IN_COLAB = ' + str(IN_COLAB)) import numpy as np import matplotlib.pyplot as plt import scipy.stats as stats import scipy.io as sio import matplotlib as mpl # set figure defaults mpl.rcParams['figure.dpi'] = 150 plt.rcParams['figure.figsize'] = (12.0/2, 8.0/2) ###Output _____no_output_____ ###Markdown We will start by loading our (ahem) Matlab data. ###Code if IN_COLAB: !pip install wget import wget filename = wget.download("https://raw.githubusercontent.com/eabarnes1010/course_objective_analysis/main/data/subsampling_example_Z500_August.mat") else: filename = '../data/subsampling_example_Z500_August.mat' DATA = sio.loadmat(filename) X = DATA['X'][:,0] LAT = DATA['LAT'][0][0] LONG = DATA['LONG'][0][0] ###Output _____no_output_____ ###Markdown Next, as all good scientists do, we plot our data (histogram of 500 hPa heights for all August days) to get a feeling for what it looks like.
###Code plt.figure() h, bins = np.histogram(X,20) plt.plot(bins[:-1],h, color = 'black') plt.xlabel('geopotential height (m)') plt.ylabel('frequency') plt.plot([np.mean(X), np.mean(X)],[0., 150],'--', color = 'black') plt.title('daily August Z500 at ' + str(np.round(LAT)) + '$^o$N, ' + str(round(LONG)) + '$^o$E') Z = (X-np.mean(X))/np.std(X) plt.text(5710, 130, 'skewness = ' + str(np.around(stats.skew(Z[:]), 2))) plt.xlim(5700, 6000) plt.show() ###Output _____no_output_____ ###Markdown The black line shows the histogram of the August days, and the dashed black line is the mean. As you can see, the data is skewed and not particularly normal looking - so, we probably shouldn't use the Central Limit Theorem here. Next, we define the variable "sample_length" so that we can change it later. For this problem, the initial size of our sample is N = 20. ###Code sample_length = 20 ###Output _____no_output_____ ###Markdown This is where the magic happens. We are now going to _randomly_ grab 2500 different samples of size N = sample_length. That is, we are going to pretend we know nothing about aerosol growth and just grab random combinations of samples. We do this 2500 times, but it could be 5000, or 10,000. Etc. However many zeros you wish :)After randomly grabbing each sample we will calculate the mean geopotential height over the sample, save it in an array called "P", and then rinse and repeat. ###Code #note, P = np.random.choice() is also an option for writing this code P = np.empty(2500) for j in range(len(P)): # randint's upper bound is exclusive, so use len(X) to allow every index to be drawn ir = stats.randint.rvs(0, len(X), size = sample_length) P[j] = np.nanmean(X[ir]) ###Output _____no_output_____ ###Markdown Our hard work is done. Now we just plot the results. That is, we plot the histogram of the means of our random samples of size N = 20. ###Code mp = 0. plt.figure() h, bins = np.histogram(P-mp,20) plt.plot(bins[:-1],h, color = 'black', label = 'means') plt.plot((np.mean(X), np.mean(X)),(0., 400),'--', color = 'black', label = 'mean of sample means'); a1 = np.percentile(P-mp,2.5) a2 = np.percentile(P-mp,100.-2.5) plt.plot((a1,a1),(0,400),'--',color = 'red', linewidth = 2, label = '95% confidence bounds') plt.plot((a2,a2),(0,400),'--',color = 'red', linewidth = 2) t_inc = (stats.t.ppf(0.975, sample_length - 1))*np.std(X)/np.sqrt(sample_length-1) plt.plot(np.ones((2,))*(np.mean(X)-t_inc), (0,400), '--',color = 'cornflowerblue', label = 'critical t if assumed Normal') plt.plot(np.ones((2,))*(np.mean(X)+t_inc), (0,400), '--',color = 'cornflowerblue') plt.xlabel('geopotential height (m)') plt.ylabel('frequency') plt.legend(fontsize = 8, frameon = False) plt.title('distribution of random sample means of ' + str(sample_length) + ' days') plt.xlim(5700, 6000) plt.show() ###Output _____no_output_____ ###Markdown The black line above shows us the range of random sampled means of length N = 20 we can get when the data is totally random. The dashed black line shows the mean of these means of random samples. The red lines denote the actual 95% bounds of the data - that is, the true bounds where 95% of the means fall within the range. The dashed blue lines are the 95% bounds _if we had assumed the daily data was normal_. See how they are different?Also notice that the x-axis scale on this plot is the same as the plot above. Notice how the distribution of sample means is narrower than the full daily distribution? This is because when you average things together you get a better estimate of the true mean (a narrower distribution). Let's zoom in a bit more.
###Code plt.figure() h, bins = np.histogram(P-mp,20) plt.plot(bins[:-1],h, color = 'black', label = 'means') plt.plot((np.mean(X), np.mean(X)),(0., 400),'--', color = 'black', label = 'mean of sample means'); plt.plot((a1,a1),(0,400),'--',color = 'red', linewidth = 2, label = '95% confidence bounds') plt.plot((a2,a2),(0,400),'--',color = 'red', linewidth = 2) plt.plot(np.ones((2,))*(np.mean(X)-t_inc), (0,400), '--',color = 'cornflowerblue', label = 'critical t if assumed Normal') plt.plot(np.ones((2,))*(np.mean(X)+t_inc), (0,400), '--',color = 'cornflowerblue') plt.xlabel('geopotential height (m)') plt.ylabel('frequency') plt.legend(fontsize = 8, frameon = False) plt.title('distribution of random sample means of ' + str(sample_length) + ' days') plt.xlim(5700, 6000) plt.xlim(5820,5910) plt.plot((5900,5900),(0,400),'-',color='darkorange', label='*actual data*') plt.legend(fontsize = 5, frameon = False) plt.show() ###Output _____no_output_____
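###Markdown The comment in the resampling cell notes that `np.random.choice` is an alternative; as a hedged sketch (an added cell, with a seed chosen here only for reproducibility), the whole bootstrap loop collapses to a single vectorized draw with replacement: ###Code
# Vectorized bootstrap using np.random.choice (sampling with replacement),
# equivalent in spirit to the loop above.
rng = np.random.default_rng(42)
samples = rng.choice(X, size=(2500, sample_length), replace=True)
P_alt = samples.mean(axis=1)

print(np.percentile(P_alt, [2.5, 97.5]))  # 95% bounds of the sample means
###Output _____no_output_____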
minimap_data/HMM_Teams-All_Groups-Minimap-PositiveEmotions.ipynb
###Markdown Load Data ###Code import pickle with open('minimap_group_to_state_data_AGG.pickle', 'rb') as handle: group_to_state_id_data = pickle.load(handle) with open('minimap_group_to_state_data_w_pid_AGG.pickle', 'rb') as handle: group_to_state_id_data_w_pid = pickle.load(handle) with open('minimap_group_to_binary_state_data_AGG.pickle', 'rb') as handle: group_to_binary_state_data = pickle.load(handle) with open('minimap_group_to_state_mapping_AGG.pickle', 'rb') as handle: group_to_state_mapping = pickle.load(handle) ###Output _____no_output_____ ###Markdown Run HMMs ###Code group_no_to_hmm = {} for group_no in [1]: group_no_to_hmm[group_no] = {} state_id_data = group_to_state_id_data_w_pid[group_no] state_binary_data = group_to_binary_state_data[group_no] state_id_to_state = group_to_state_mapping[group_no]['id_to_vec'] state_to_state_id = group_to_state_mapping[group_no]['vec_to_id'] hmm_input_X = [] index_to_team = {} counter = 0 for p_uid in state_id_data: hmm_input_X.append(state_id_data[p_uid]) index_to_team[counter] = p_uid counter += 1 hmm_input_X = np.array(hmm_input_X) hmm_input_X.shape N_iters = 100 n_states = 6 test_unsuper_hmm = unsupervised_HMM(hmm_input_X, n_states, N_iters) # print('emission', test_unsuper_hmm.generate_emission(10)) hidden_seqs = {} team_num_to_seq_probs = {} for j in range(len(hmm_input_X)): # print("team", team_numbers[j]) # print("reindex", X[j][:50]) team_id = index_to_team[j] viterbi_output, all_sequences_and_probs = test_unsuper_hmm.viterbi_all_probs(hmm_input_X[j]) team_num_to_seq_probs[team_id] = all_sequences_and_probs hidden_seqs[team_id] = [int(x) for x in viterbi_output] group_no_to_hmm[group_no]['hmm'] = test_unsuper_hmm # hidden_state_to_mean_value = {1:[], 2:[], 3:[], 4:[]} hidden_state_to_mean_value_list = {} # team_id_map_to_state_id_sequence for team_id in hidden_seqs: for i in range(len(hidden_seqs[team_id])): hidden_state = hidden_seqs[team_id][i] state = state_binary_data[team_id][i] mean_value = sum(list(state)) if hidden_state not in hidden_state_to_mean_value_list: hidden_state_to_mean_value_list[hidden_state] = [] hidden_state_to_mean_value_list[hidden_state].append(mean_value) hidden_state_to_mean_value = {} for hidden_state in hidden_state_to_mean_value_list: hidden_state_to_mean_value[hidden_state] = np.mean(hidden_state_to_mean_value_list[hidden_state]) print("hidden_state_to_mean_value", hidden_state_to_mean_value) sorted_hidden_states = {k: v for k, v in sorted(hidden_state_to_mean_value.items(), key=lambda item: item[1])} new_hidden_state_to_old = dict(enumerate(sorted_hidden_states.keys())) old_hidden_state_to_new = {v: k for k, v in new_hidden_state_to_old.items()} team_id_to_new_hidden = {} for team_id in hidden_seqs: team_id_to_new_hidden[team_id] = [old_hidden_state_to_new[x] for x in hidden_seqs[team_id]] # Plot group_num_to_title = { 1: "High Anger, High Anxiety Teams", 2: "Low Anger, High Anxiety Teams", 3: "High Anger, Low Anxiety Teams", 4: "Low Anger, Low Anxiety Teams", } # for team_id in hidden_seqs: # for i in range(len(index_to_team_map)): fig, ax = plt.subplots(1, 1, figsize=(10,5)) legend_labels = [] # print("index_to_team_map = ", index_to_team_map) # print("team_id_map_to_new_hidden = ", team_id_map_to_new_hidden) for i in range(4): team_id = index_to_team[i] legend_labels.append(team_id) # if team_id not in team_id_map_to_new_hidden: # continue plt.plot(range(len(team_id_to_new_hidden[team_id])), team_id_to_new_hidden[team_id]) plt.legend(legend_labels) y = [0, 1, 2, 3, 4, 5] labels = ['State 1: Poor', 
'State 2', 'State 3', 'State 4', 'State 5', 'State 6: Good'] plt.yticks(y, labels, rotation='horizontal') ax.set_yticklabels(labels) plt.ylabel("Hidden State (Collective Intelligence)") plt.xlabel("Time (Minute)") plt.title(group_num_to_title[group_no]) plt.show() plt.close() A = np.array(test_unsuper_hmm.A) O = np.array(test_unsuper_hmm.O) new_A = [] new_O = [] for new_h_i in range((A.shape[0])): new_row = [] for new_h_j in range(A.shape[0]): old_h_i = new_hidden_state_to_old[new_h_i] old_h_j = new_hidden_state_to_old[new_h_j] new_row.append(A[old_h_i, old_h_j]) new_A.append(new_row) new_O.append(O[old_h_i, :]) group_no_to_hmm[group_no]['new_A'] = np.array(new_A) group_no_to_hmm[group_no]['new_O'] = np.array(new_O) group_no_to_hmm[group_no]['new_hidden_state_to_old'] = new_hidden_state_to_old group_no_to_hmm[group_no]['old_hidden_state_to_new'] = old_hidden_state_to_new # state_id_to_state = dict(enumerate(unique_states_list)) # state_to_state_id = {v: k for k, v in state_id_to_state.items()} group_no_to_hmm[group_no]['state_id_to_state'] = state_id_to_state group_no_to_hmm[group_no]['state_to_state_id'] = state_to_state_id group_no_to_hmm[group_no]['old_hidden_state_to_mean_value'] = hidden_state_to_mean_value group_no_to_hmm[group_no]['hidden_seqs'] = hidden_seqs visualize_sparsities(new_A, new_O) visualize_O(group_no_to_hmm[group_no]) with open('minimap_group_no_to_hmm_AGG.pickle', 'wb') as handle: pickle.dump(group_no_to_hmm, handle, protocol=pickle.HIGHEST_PROTOCOL) ###Output _____no_output_____ ###Markdown Compute Loss ###Code df_anx = pd.read_csv('minimap_hmm.csv') # blank or whitespace-only entries become NaN (note the \s character class) df_anx = df_anx.replace(r'^\s*$', float('NaN'), regex = True) df_anx = df_anx.replace(r' ', float('NaN'), regex = True) df_anx = df_anx[df_anx['anger_premeasure'].notna()] df_anx = df_anx[df_anx['anxiety_premeasure'].notna()] mean_anger = np.mean([float(elem) for elem in df_anx['anger_premeasure'].to_numpy()]) mean_anx = np.mean([float(elem) for elem in df_anx['anxiety_premeasure'].to_numpy()]) uid_to_anger_anx = {} uid_to_group = {} for index, row in df_anx.iterrows(): uid = row['uid'] anger = float(row['anger_premeasure']) anx = float(row['anxiety_premeasure']) if anger >= mean_anger: if anx >= mean_anx: group = 1 else: group = 3 else: if anx >= mean_anx: group = 2 else: group = 4 uid_to_anger_anx[uid] = {'anger': anger, 'anxiety': anx, 'group': group} uid_to_group[uid] = group group_to_loss = {1:[], 2:[], 3:[], 4:[]} # for group_no in [1]: group_no = 1 group_hmm = group_no_to_hmm[group_no]['hmm'] A = np.array(group_hmm.A) O = np.array(group_hmm.O) # state_id_to_state = group_no_to_team_data[group_no]['state_id_to_state'] # team_id_map_to_state_id_sequence = group_no_to_team_data[group_no]['teams_state_id'] team_id_to_state_id_sequence = group_to_state_id_data_w_pid[group_no] state_binary_data = group_to_binary_state_data[group_no] state_id_to_state = group_to_state_mapping[group_no]['id_to_vec'] state_to_state_id = group_to_state_mapping[group_no]['vec_to_id'] total_loss = [] for team_id in team_id_to_state_id_sequence: seq = team_id_to_state_id_sequence[team_id] seq = np.array(seq) team_group = uid_to_group[team_id] average_loss = [] # recommendations = [] for t in range(1, len(seq)): partial_seq = seq[:t] viterbi_output, all_sequences_and_probs = group_hmm.viterbi_all_probs(partial_seq) current_hidden = int(viterbi_output[-1]) curr_obs_state = state_id_to_state[seq[t-1]] normalized_hidden_probs = A[current_hidden, :]/sum(A[current_hidden, :]) next_hidden_predicted, next_hidden_prob =
np.argmax(normalized_hidden_probs), max(normalized_hidden_probs) valid_obs = [] for j in range(O.shape[1]): obs = state_id_to_state[j] if obs[3:] == curr_obs_state[0:3]: valid_obs.append(O[current_hidden, j]) else: valid_obs.append(0) # convert to an array so the normalization below is valid valid_obs = np.array(valid_obs) if valid_obs.sum() > 0: valid_obs = valid_obs / valid_obs.sum() # next_obs_predicted_idx, next_obs_prob = np.argmax(O[current_hidden, :]), max(O[current_hidden, :]) # print("valid_obs", valid_obs) next_obs_predicted_idx, next_obs_prob = np.argmax(valid_obs), max(valid_obs) next_obs_predicted_state = state_id_to_state[next_obs_predicted_idx] true_next_obs_state = state_id_to_state[seq[t]] loss = np.array(next_obs_predicted_state[0:3]) - np.array(true_next_obs_state[0:3]) loss = sum([abs(elem) for elem in loss]) average_loss.append(loss) # print("curr_obs_state", curr_obs_state) # print(next_hidden_predicted, next_hidden_prob) # print("next_obs_predicted_state", (next_obs_predicted_state, next_obs_prob)) # print("CONFIDENCE", next_obs_prob) # print("ADJUSTED CONFIDENCE", next_obs_prob*next_hidden_prob) # print("true_next_obs_state", true_next_obs_state) # print("loss", loss) # print() # break # average_loss = np.mean(average_loss) # print("GROUP", group_no) # print("average_loss", average_loss) group_to_loss[team_group].extend(average_loss) total_loss.extend(average_loss) group_no_to_title = { 1: 'High Anger, High Anxiety', 2: 'Low Anger, High Anxiety', 3: 'High Anger, Low Anxiety', 4: 'Low Anger, Low Anxiety', } from scipy import stats # Create lists for the plot groups = [group_no_to_title[group] for group in group_to_loss] x_pos = np.arange(len(groups)) means = [np.mean(group_to_loss[group]) for group in group_to_loss] std_devs = [stats.sem(group_to_loss[group]) for group in group_to_loss] # Build the plot fig, ax = plt.subplots(figsize=(15,5)) ax.bar(x_pos, means, yerr=std_devs, align='center', alpha=0.5, ecolor='black', capsize=10) ax.set_ylabel('L1 Loss in Predicting Next State') ax.set_xticks(x_pos) ax.set_xticklabels(groups) ax.set_title('Loss of HMM Predictions') ax.yaxis.grid(True) # Save the figure and show plt.tight_layout() plt.savefig('minimap_losses_with_error_bars_with_agg.png') plt.show() print("Avg loss ", np.mean(total_loss)) print("std loss ", np.std(total_loss)) ###Output Avg loss 1.1199238578680204 std loss 0.928378857927649 ###Markdown Plot with Other HMM Models ###Code agg_model_group_to_loss = group_to_loss # minimap_group_no_to_hmm_2.pickle with open('minimap_group_no_to_hmm.pickle', 'rb') as handle: group_no_to_hmm = pickle.load(handle) df_anx = pd.read_csv('minimap_hmm.csv') df_anx = df_anx.replace(r'^\s*$', float('NaN'), regex = True) df_anx = df_anx.replace(r' ', float('NaN'), regex = True) df_anx = df_anx[df_anx['anger_premeasure'].notna()] df_anx = df_anx[df_anx['anxiety_premeasure'].notna()] mean_anger = np.mean([float(elem) for elem in df_anx['anger_premeasure'].to_numpy()]) mean_anx = np.mean([float(elem) for elem in df_anx['anxiety_premeasure'].to_numpy()]) uid_to_anger_anx = {} uid_to_group = {} for index, row in df_anx.iterrows(): uid = row['uid'] anger = float(row['anger_premeasure']) anx = float(row['anxiety_premeasure']) if anger >= mean_anger: if anx >= mean_anx: group = 1 else: group = 3 else: if anx >= mean_anx: group = 2 else: group = 4 uid_to_anger_anx[uid] = {'anger': anger, 'anxiety': anx, 'group': group} uid_to_group[uid] = group import pickle with open('minimap_group_to_state_data.pickle', 'rb') as handle: group_to_state_id_data = pickle.load(handle) with open('minimap_group_to_state_data_w_pid.pickle', 'rb') as handle:
group_to_state_id_data_w_pid = pickle.load(handle) with open('minimap_group_to_binary_state_data.pickle', 'rb') as handle: group_to_binary_state_data = pickle.load(handle) with open('minimap_group_to_state_mapping.pickle', 'rb') as handle: group_to_state_mapping = pickle.load(handle) group_to_loss = {} for group_no in [1,2,3,4]: group_hmm = group_no_to_hmm[group_no]['hmm'] A = np.array(group_hmm.A) O = np.array(group_hmm.O) # state_id_to_state = group_no_to_team_data[group_no]['state_id_to_state'] # team_id_map_to_state_id_sequence = group_no_to_team_data[group_no]['teams_state_id'] team_id_to_state_id_sequence = group_to_state_id_data_w_pid[group_no] state_binary_data = group_to_binary_state_data[group_no] state_id_to_state = group_to_state_mapping[group_no]['id_to_vec'] state_to_state_id = group_to_state_mapping[group_no]['vec_to_id'] average_loss = [] for team_id in team_id_to_state_id_sequence: seq = team_id_to_state_id_sequence[team_id] seq = np.array(seq) # recommendations = [] for t in range(1, len(seq)): partial_seq = seq[:t] viterbi_output, all_sequences_and_probs = group_hmm.viterbi_all_probs(partial_seq) current_hidden = int(viterbi_output[-1]) curr_obs_state = state_id_to_state[seq[t-1]] normalized_hidden_probs = A[current_hidden, :]/sum(A[current_hidden, :]) next_hidden_predicted, next_hidden_prob = np.argmax(normalized_hidden_probs), max(normalized_hidden_probs) valid_obs = [] for j in range(O.shape[1]): obs = state_id_to_state[j] if obs[3:] == curr_obs_state[0:3]: valid_obs.append(O[current_hidden, j]) else: valid_obs.append(0) if sum(valid_obs)>0: valid_obs /= sum(valid_obs) # next_obs_predicted_idx, next_obs_prob = np.argmax(O[current_hidden, :]), max(O[current_hidden, :]) # print("valid_obs", valid_obs) next_obs_predicted_idx, next_obs_prob = np.argmax(valid_obs), max(valid_obs) next_obs_predicted_state = state_id_to_state[next_obs_predicted_idx] true_next_obs_state = state_id_to_state[seq[t]] loss = np.array(next_obs_predicted_state[0:3]) - np.array(true_next_obs_state[0:3]) loss = sum([abs(elem) for elem in loss]) average_loss.append(loss) # print("curr_obs_state", curr_obs_state) # print(next_hidden_predicted, next_hidden_prob) # print("next_obs_predicted_state", (next_obs_predicted_state, next_obs_prob)) # print("CONFIDENCE", next_obs_prob) # print("ADJUSTED CONFIDENCE", next_obs_prob*next_hidden_prob) # print("true_next_obs_state", true_next_obs_state) # print("loss", loss) # print() # break # average_loss = np.mean(average_loss) # print("GROUP", group_no) # print("average_loss", average_loss) group_to_loss[group_no] = average_loss from scipy import stats # Create lists for the plot groups = [group_no_to_title[group] for group in group_to_loss] x_pos = np.arange(len(groups)) means = [np.mean(group_to_loss[group]) for group in group_to_loss] std_devs = [stats.sem(group_to_loss[group]) for group in group_to_loss] ind = np.arange(4) # the x locations for the groups width = 0.35 # the width of the bars # Build the plot fig, ax = plt.subplots(figsize=(15,5)) ax.bar(x_pos, means, yerr=std_devs, width=width, align='center', alpha=0.5, ecolor='black', capsize=10, label='Grouped HMM') groups = [group_no_to_title[group] for group in agg_model_group_to_loss] x_pos = np.arange(len(groups)) means = [np.mean(agg_model_group_to_loss[group]) for group in agg_model_group_to_loss] std_devs = [stats.sem(agg_model_group_to_loss[group]) for group in agg_model_group_to_loss] ax.bar(ind+width, means, width=width, yerr=std_devs, align='center', alpha=0.5, ecolor='black', capsize=10, 
label='Aggregate HMM') ax.set_ylabel('L1 Loss in Predicting Next State') ax.set_xticks(x_pos+(width/2)) ax.set_xticklabels(groups) ax.set_title('Round 1: Loss of HMM Predictions') ax.yaxis.grid(True) # Save the figure and show plt.legend() plt.tight_layout() plt.savefig('minimap_losses_with_error_bars_with_agg.png') plt.show() ###Output _____no_output_____
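###Markdown As a hedged add-on (not part of the original analysis), the long-run occupancy of the hidden states implied by a fitted transition matrix is its stationary distribution — the left eigenvector of A with eigenvalue 1. The sketch below computes it for group 1's model loaded above, using only the `['hmm'].A` attribute that the loss code already relies on: ###Code
import numpy as np

A1 = np.array(group_no_to_hmm[1]['hmm'].A)
A1 = A1 / A1.sum(axis=1, keepdims=True)  # ensure rows are proper probabilities

# Left eigenvector of A with eigenvalue 1 == right eigenvector of A.T
eigvals, eigvecs = np.linalg.eig(A1.T)
stationary = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
stationary = stationary / stationary.sum()  # normalize to a probability vector

print("stationary distribution over hidden states:", np.round(stationary, 3))
###Output _____no_output_____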
sklearn/sklearn learning/demonstration/auto_examples_jupyter/calibration/plot_compare_calibration.ipynb
###Markdown Comparison of Calibration of ClassifiersWell calibrated classifiers are probabilistic classifiers for which the outputof the predict_proba method can be directly interpreted as a confidence level.For instance a well calibrated (binary) classifier should classify the samplessuch that among the samples to which it gave a predict_proba value close to0.8, approx. 80% actually belong to the positive class.LogisticRegression returns well calibrated predictions as it directlyoptimizes log-loss. In contrast, the other methods return biased probabilities,with different biases per method:* GaussianNaiveBayes tends to push probabilities to 0 or 1 (note the counts in the histograms). This is mainly because it makes the assumption that features are conditionally independent given the class, which is not the case in this dataset which contains 2 redundant features.* RandomForestClassifier shows the opposite behavior: the histograms show peaks at approx. 0.2 and 0.9 probability, while probabilities close to 0 or 1 are very rare. An explanation for this is given by Niculescu-Mizil and Caruana [1]_: "Methods such as bagging and random forests that average predictions from a base set of models can have difficulty making predictions near 0 and 1 because variance in the underlying base models will bias predictions that should be near zero or one away from these values. Because predictions are restricted to the interval [0,1], errors caused by variance tend to be one- sided near zero and one. For example, if a model should predict p = 0 for a case, the only way bagging can achieve this is if all bagged trees predict zero. If we add noise to the trees that bagging is averaging over, this noise will cause some trees to predict values larger than 0 for this case, thus moving the average prediction of the bagged ensemble away from 0. We observe this effect most strongly with random forests because the base-level trees trained with random forests have relatively high variance due to feature subsetting." As a result, the calibration curve shows a characteristic sigmoid shape, indicating that the classifier could trust its "intuition" more and return probabilities closer to 0 or 1 typically.* Support Vector Classification (SVC) shows an even more sigmoid curve as the RandomForestClassifier, which is typical for maximum-margin methods (compare Niculescu-Mizil and Caruana [1]_), which focus on hard samples that are close to the decision boundary (the support vectors)... topic:: References: .. [1] Predicting Good Probabilities with Supervised Learning, A. Niculescu-Mizil & R. Caruana, ICML 2005 ###Code print(__doc__) # Author: Jan Hendrik Metzen <[email protected]> # License: BSD Style. 
import numpy as np np.random.seed(0) import matplotlib.pyplot as plt from sklearn import datasets from sklearn.naive_bayes import GaussianNB from sklearn.linear_model import LogisticRegression from sklearn.ensemble import RandomForestClassifier from sklearn.svm import LinearSVC from sklearn.calibration import calibration_curve X, y = datasets.make_classification(n_samples=100000, n_features=20, n_informative=2, n_redundant=2) train_samples = 100 # Samples used for training the models X_train = X[:train_samples] X_test = X[train_samples:] y_train = y[:train_samples] y_test = y[train_samples:] # Create classifiers lr = LogisticRegression() gnb = GaussianNB() svc = LinearSVC(C=1.0) rfc = RandomForestClassifier() # ############################################################################# # Plot calibration plots plt.figure(figsize=(10, 10)) ax1 = plt.subplot2grid((3, 1), (0, 0), rowspan=2) ax2 = plt.subplot2grid((3, 1), (2, 0)) ax1.plot([0, 1], [0, 1], "k:", label="Perfectly calibrated") for clf, name in [(lr, 'Logistic'), (gnb, 'Naive Bayes'), (svc, 'Support Vector Classification'), (rfc, 'Random Forest')]: clf.fit(X_train, y_train) if hasattr(clf, "predict_proba"): prob_pos = clf.predict_proba(X_test)[:, 1] else: # use decision function prob_pos = clf.decision_function(X_test) prob_pos = \ (prob_pos - prob_pos.min()) / (prob_pos.max() - prob_pos.min()) fraction_of_positives, mean_predicted_value = \ calibration_curve(y_test, prob_pos, n_bins=10) ax1.plot(mean_predicted_value, fraction_of_positives, "s-", label="%s" % (name, )) ax2.hist(prob_pos, range=(0, 1), bins=10, label=name, histtype="step", lw=2) ax1.set_ylabel("Fraction of positives") ax1.set_ylim([-0.05, 1.05]) ax1.legend(loc="lower right") ax1.set_title('Calibration plots (reliability curve)') ax2.set_xlabel("Mean predicted value") ax2.set_ylabel("Count") ax2.legend(loc="upper center", ncol=2) plt.tight_layout() plt.show() ###Output _____no_output_____
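###Markdown As a hedged follow-up (an added cell, not part of the original scikit-learn example), the sigmoid shape of the SVC curve suggests Platt-style calibration should help. The sketch below wraps the LinearSVC in `CalibratedClassifierCV` and compares Brier scores against the min-max-scaled decision function used above: ###Code
from sklearn.calibration import CalibratedClassifierCV
from sklearn.metrics import brier_score_loss

calibrated_svc = CalibratedClassifierCV(LinearSVC(C=1.0), method='sigmoid', cv=3)
calibrated_svc.fit(X_train, y_train)
prob_cal = calibrated_svc.predict_proba(X_test)[:, 1]

raw = LinearSVC(C=1.0).fit(X_train, y_train)
scores = raw.decision_function(X_test)
prob_raw = (scores - scores.min()) / (scores.max() - scores.min())  # same min-max trick as above

print("Brier score, min-max scaled SVC:      %.4f" % brier_score_loss(y_test, prob_raw))
print("Brier score, sigmoid-calibrated SVC:  %.4f" % brier_score_loss(y_test, prob_cal))
###Output _____no_output_____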
Datacamp/IntermediatePython/Matplotlib.ipynb
###Markdown Matplotlib ###Code import matplotlib.pyplot as plt year=[2000,2001,2002,2006,2014,2020] pop=[1,4,6,2,7,3] plt.plot(year,pop) plt.show() import matplotlib.pyplot as plt year=[2000,2001,2002,2006,2014,2020] pop=[1,4,6,2,7,3] plt.scatter(year,pop) # plt.grid(True) plt.show() year=[2000,2001,2002,2006,2014,2020] pop=[1,4,6,2,7,3] plt.plot(year,pop) plt.yscale('log') plt.show() year=[2000,2001,2002,2006,2014,2020] pop=[1,4,6,2,7,3] plt.plot(year,pop) plt.xscale('log') plt.show() ###Output _____no_output_____ ###Markdown Interpretation ###Code pop[year.index(2006)] ###Output _____no_output_____ ###Markdown Histogram ###Code values=[2,4,6,2,5,7,9,4,3,6,7,8,9,4,3,2,6,9] plt.hist(values,bins=20) plt.show() plt.clf() ###Output _____no_output_____ ###Markdown Customization ###Code year=[2000,2001,2002,2006,2014,2020] pop=[1,4,6,2,7,3] plt.plot(year,pop) plt.xlabel('Year') plt.ylabel('pop') plt.title('World Population Projections') plt.show() ###Output _____no_output_____ ###Markdown Ticks ###Code plt.yticks([1,6,4,2,3,5,7]) plt.show() plt.yticks([1,4,7,3,5,8],['a','b','c','d','e','f']) plt.show() plt.yticks([1,4,7,3,5,8],['a','b','c','d','e','f']) plt.grid(True) plt.show() ###Output _____no_output_____ ###Markdown Sizes ###Code import numpy as np year=[2000,2001,2002,2006,2014,2020] pop=[1,4,6,2,7,3] # convert to a numpy array np_pop=np.array(pop) np_pop*=40 plt.scatter(year,pop,s=np_pop) plt.show() ###Output _____no_output_____ ###Markdown Color ###Code import numpy as np year=[2000,2001,2002,2006,2014,2020] pop=[1,4,6,2,7,3] # convert to a numpy array np_pop=np.array(pop) np_pop*=40 plt.scatter(c='green',x=year,y=pop,s=np_pop,alpha=0.8) plt.show() ###Output _____no_output_____ ###Markdown Additional customization text ###Code import matplotlib.pyplot as plt import numpy as np year=[2000,2001,2002,2006,2014,2020] pop=[1,4,6,2,7,3] # convert to a numpy array np_pop=np.array(pop) np_pop*=40 plt.scatter(c='green',x=year,y=pop,s=np_pop,alpha=0.8) # additional customization plt.text(2000,1,'India') plt.text(2014,7,'KKKKK') plt.grid(True) plt.show()
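###Markdown As one more added example (not in the original DataCamp notes), the plots above can be combined into a single figure with `plt.subplot`, reusing the same `year`, `pop`, and `values` lists: ###Code
# 1 row, 2 columns: line plot on the left, histogram on the right
plt.subplot(1, 2, 1)
plt.plot(year, pop)
plt.title('line plot')

plt.subplot(1, 2, 2)
plt.hist(values, bins=10)
plt.title('histogram')

plt.tight_layout()
plt.show()
###Output _____no_output_____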
MaterialCursoPython/Fase 1 - Fundamentos de programacion/Tema 03 - Controlando el flujo/Apuntes/Leccion 4 (Apuntes) - Post analisis.ipynb
###Markdown
Header example
###Code
n = 0
while n < 10:
    if (n % 2) == 0:
        print(n, 'is an even number')
    else:
        print(n, 'is an odd number')
    n = n + 1
###Output
_____no_output_____
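###Markdown
The same parity check reads more idiomatically with a `for` loop over `range`, which removes the manual counter; a minimal equivalent sketch:
###Code
for n in range(10):
    parity = 'even' if n % 2 == 0 else 'odd'   # n % 2 is 0 exactly for even n
    print(n, 'is an', parity, 'number')
###Output
_____no_output_____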
HackerRank/Python/set/Untitled.ipynb
###Markdown
[Check Subset](https://www.hackerrank.com/challenges/py-check-subset/problem)
###Code
for _ in range(int(input())):
    a1, l1, a2, l2 = input(), set(map(int, input().split())), input(), set(map(int, input().split()))
    print(l1.issubset(l2))
###Output
1
5
1 2 3 5 6
9
9 8 5 6 3 2 1 4 7
###Markdown
[No Idea!](https://www.hackerrank.com/challenges/no-idea/problem)
###Code
a1, a2 = input().split()
a = input().split()
set_1 = set(input().split())
set_2 = set(input().split())
# print(abs(len(a&set_1)-len(a&set_2)))
print(sum([(i in set_1) - (i in set_2) for i in a]))

a

set_1

set_2

[(i in set_2) for i in a]
###Output
_____no_output_____
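###Markdown
To see why the membership sum solves "No Idea!", here is a tiny worked example with made-up values: each array element contributes +1 if it is in the liked set, -1 if it is in the disliked set, and 0 otherwise.
###Code
arr = ['1', '5', '3']     # hypothetical array, kept as strings like above
liked = {'3', '1'}        # plays the role of set_1
disliked = {'5', '7'}     # plays the role of set_2

# '1' -> +1, '5' -> -1, '3' -> +1, so happiness is 1
happiness = sum((x in liked) - (x in disliked) for x in arr)
print(happiness)  # 1
###Output
_____no_output_____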
matplotlilb-challenge/pymaceuticals_starter.ipynb
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st

# Study data files
mouse_metadata_path = "Resources/Mouse_metadata.csv"
study_results_path = "Resources/Study_results.csv"

# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)

# Combine the data into a single dataset
clinical_data_complete = pd.merge(mouse_metadata, study_results, how="left", on="Mouse ID")

# Display the data table for preview
clinical_data_complete.head()

# Mean tumor volume per drug regimen and timepoint
tumor_vols_mean = clinical_data_complete.groupby(["Drug Regimen", "Timepoint"]).mean()["Tumor Volume (mm3)"]
tumor_vols_mean = pd.DataFrame(tumor_vols_mean)
tumor_vols_mean = tumor_vols_mean.reset_index()
tumor_vols_mean.head()

# Standard error of tumor volume per drug regimen and timepoint
tumor_vols_sem = clinical_data_complete.groupby(["Drug Regimen", "Timepoint"]).sem()["Tumor Volume (mm3)"]
tumor_vols_sem = pd.DataFrame(tumor_vols_sem)
tumor_vols_sem = tumor_vols_sem.reset_index()
tumor_vols_sem.head()

# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.

# Optional: Get all the data for the duplicate mouse ID.

# Create a clean DataFrame by dropping the duplicate mouse by its ID.

# Checking the number of mice in the clean DataFrame.
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# (tumor_vols_mean and tumor_vols_sem were already reset above, so they can be pivoted directly)
tumor_vols_pivot_mean = tumor_vols_mean.pivot(index="Timepoint", columns="Drug Regimen")["Tumor Volume (mm3)"]
tumor_vols_pivot_mean.head()

# Pivot the standard errors the same way; the error-bar plot further down uses tumor_vols_pivot_sem
tumor_vols_pivot_sem = tumor_vols_sem.pivot(index="Timepoint", columns="Drug Regimen")["Tumor Volume (mm3)"]

# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
# Assemble the resulting series into a single summary dataframe.

# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Using the aggregation method, produce the same summary statistics in a single line
###Output
_____no_output_____
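###Markdown
The "single line" asked for in the last comment could look like the following; a minimal sketch reusing the column names from the cells above:
###Code
# One-line summary table: mean, median, variance, std and SEM per regimen
summary_stats = (
    clinical_data_complete
    .groupby("Drug Regimen")["Tumor Volume (mm3)"]
    .agg(["mean", "median", "var", "std", "sem"])
)
summary_stats.head()
###Output
_____no_output_____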
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas.

# Generate a bar plot showing the total number of measurements taken on each drug regimen using pyplot.

# Generate a pie plot showing the distribution of female versus male mice using pandas

# Generate a pie plot showing the distribution of female versus male mice using pyplot
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin

# Start by getting the last (greatest) timepoint for each mouse

# Merge this group df with the original dataframe to get the tumor volume at the last timepoint

# Put treatments into a list for a for loop (and later for plot labels)

# Create empty list to fill with tumor vol data (for plotting)

# Calculate the IQR and quantitatively determine if there are any potential outliers.

# Locate the rows which contain mice on each drug and get the tumor volumes

# add subset

# Determine outliers using upper and lower bounds

# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Tumor response over time for four regimens, with SEM error bars
plt.errorbar(tumor_vols_pivot_mean.index, tumor_vols_pivot_mean["Capomulin"], yerr=tumor_vols_pivot_sem["Capomulin"],
             color="r", marker="o", markersize=5, linestyle="dashed", linewidth=.50)
plt.errorbar(tumor_vols_pivot_mean.index, tumor_vols_pivot_mean["Infubinol"], yerr=tumor_vols_pivot_sem["Infubinol"],
             color="b", marker="^", markersize=5, linestyle="dashed", linewidth=.5)
plt.errorbar(tumor_vols_pivot_mean.index, tumor_vols_pivot_mean["Ketapril"], yerr=tumor_vols_pivot_sem["Ketapril"],
             color="g", marker="s", markersize=5, linestyle="dashed", linewidth=.5)
plt.errorbar(tumor_vols_pivot_mean.index, tumor_vols_pivot_mean["Placebo"], yerr=tumor_vols_pivot_sem["Placebo"],
             color="k", marker="d", markersize=5, linestyle="dashed", linewidth=.5)

plt.title("Tumor Response to Treatment")
plt.ylabel("Tumor Volume (mm3)")
plt.xlabel("Time (Days)")
plt.grid(axis='y')
plt.legend(["Capomulin", "Infubinol", "Ketapril", "Placebo"], loc="best", fontsize="small", fancybox=True)

# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
###Output
_____no_output_____
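###Markdown
A minimal sketch of one way to fill in the correlation cell above, using the scipy.stats alias already imported; the metadata column name "Weight (g)" is assumed from the CSV layout and may need adjusting:
###Code
# Average tumor volume per mouse on Capomulin vs. that mouse's weight (column names assumed)
capomulin = clinical_data_complete[clinical_data_complete["Drug Regimen"] == "Capomulin"]
per_mouse = capomulin.groupby("Mouse ID").agg(
    weight=("Weight (g)", "mean"),
    avg_tumor_vol=("Tumor Volume (mm3)", "mean"),
)
slope, intercept, r, p, stderr = st.linregress(per_mouse["weight"], per_mouse["avg_tumor_vol"])
print(f"correlation r = {r:.2f}, regression: vol = {slope:.2f} * weight + {intercept:.2f}")
###Output
_____no_output_____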
4.cv_example.ipynb
###Markdown
Beyond Hello World: a computer-vision example

In the previous exercise we saw how to build a neural network that learns the relationship between two numbers. Admittedly, it was overkill to use a neural network for a known function, but the example is useful for understanding the concept of learning.

Now consider a task where finding the rules that map inputs to outputs is much harder, if not impossible: a computer-vision problem. In this example we will build a neural network that can recognize different items of clothing, trained on a dataset containing 10 different types.

Let's start with the code

We begin by importing TensorFlow.
###Code
import tensorflow as tf
print(tf.__version__)
###Output
_____no_output_____
###Markdown
The Fashion MNIST dataset is available in TensorFlow by default.
###Code
mnist = tf.keras.datasets.fashion_mnist
###Output
_____no_output_____
###Markdown
Calling the "load_data" method returns two tuples, each holding two arrays: the training and test sets of images and their labels.
###Code
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
###Output
_____no_output_____
###Markdown
We can take a look at some of the examples in the training set. Experiment with different indices to visualize other examples.
###Code
import numpy as np
np.set_printoptions(linewidth=200)
import matplotlib.pyplot as plt
plt.imshow(training_images[0])
print(training_labels[0])
print(training_images[0])
###Output
_____no_output_____
###Markdown
Notice that all pixel values are between 0 and 255. For several reasons, training a neural network is easier if we normalize the values to the range 0 to 1.
###Code
training_images = training_images / 255.0
test_images = test_images / 255.0
###Output
_____no_output_____
###Markdown
You may wonder why there are two kinds of sets, training and testing. In machine learning it is common to keep one set to train the model and another set of never-seen examples to evaluate its performance and ability to generalize. After all, the goal is a model that performs well on examples it never saw during training.

Let's start designing the model, reviewing a few concepts along the way.
###Code
model = tf.keras.models.Sequential([tf.keras.layers.Flatten(),
                                    tf.keras.layers.Dense(128, activation=tf.nn.relu),
                                    tf.keras.layers.Dense(10, activation=tf.nn.softmax)])
###Output
_____no_output_____
###Markdown
**Sequential**: defines a SEQUENCE of layers in the neural network.

**Flatten**: the images have 2 dimensions; to feed these values to the network we need to "flatten" them into a one-dimensional vector.

**Dense**: adds a layer of "densely connected" neurons.

Each layer of neurons needs an **activation function**. There are many options, but we will use these for now:

**Relu** means "if X > 0 return X, else return 0"; in other words, it only passes positive values on to the next layer of the network.

**Softmax** takes a set of values and squashes them into the shape of a probability distribution, so that they all sum to 1. This lets us pick the largest value in the distribution as the network's prediction.
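A quick numeric sketch of what softmax does (the input values below are made up):
###Code
import numpy as np

z = np.array([2.0, 1.0, 0.1])            # made-up raw outputs from a layer
probs = np.exp(z) / np.sum(np.exp(z))    # softmax
print(probs)                             # approx. [0.659 0.242 0.099]
print(probs.sum())                       # 1.0
###Output
_____no_output_____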
###Markdown
Once the model is defined, it must be compiled. As in the previous example, we specify a loss function and an optimizer. Then we can train the network by running the training loop for the chosen number of epochs. In this case we pass an extra argument so we can monitor performance during training.
###Code
model.compile(optimizer = tf.optimizers.Adam(),
              loss = 'sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(training_images, training_labels, epochs=5)
###Output
_____no_output_____
###Markdown
Once trained, you should see an accuracy figure at the end of each epoch, with a value close to 0.91. You can read this as roughly 91% accuracy in classifying the training data.

Considering how simple the network is and the small number of epochs, 91% accuracy is quite good. But how will it behave when predicting examples it has never seen? We can explore this with **model.evaluate**, which takes a set of examples and evaluates the network's performance, in this case on the test set.
###Code
model.evaluate(test_images, test_labels)
###Output
_____no_output_____
###Markdown
You should get a value close to 0.88, i.e. 88% accuracy. As expected, performance is not as good on never-seen examples.

Here are some exploration exercises:

Exploration exercises

Exercise 1:
Run the cell below; the code produces a set of predictions for every example in the test set and then prints the result of the first prediction. The output after running the cell is a list of values. What do those numbers represent?
###Code
classifications = model.predict(test_images)
print(classifications[0])
###Output
_____no_output_____
###Markdown
Hint: try running
```
print(test_labels[0])
```
and you will get a 9. What happens with the prediction?
###Code
print(test_labels[0])
###Output
_____no_output_____
###Markdown
Which class does the prediction belong to? Does it match the true class?
See [this link](https://github.com/zalandoresearch/fashion-mnist#labels).

Exercise 2*:
Let's take a look at the layers of our model. Experiment with different values for the densely connected layer with 512 neurons. What is the difference in the final loss, training time, etc.?
###Code
import tensorflow as tf
print(tf.__version__)
mnist = tf.keras.datasets.mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images = training_images/255.0
test_images = test_images/255.0
model = tf.keras.models.Sequential([tf.keras.layers.Flatten(),
                                    tf.keras.layers.Dense(1024, activation=tf.nn.relu),
                                    tf.keras.layers.Dense(10, activation=tf.nn.softmax)])
model.compile(optimizer = 'adam', loss = 'sparse_categorical_crossentropy')
model.fit(training_images, training_labels, epochs=5)
model.evaluate(test_images, test_labels)
classifications = model.predict(test_images)
print(classifications[0])
print(test_labels[0])
###Output
_____no_output_____
###Markdown
Question 1. Increase to 1024 neurons; what is the impact?

1. Training takes longer, but the model is more accurate
2. Training takes longer, but performance does not improve
3. Training takes the same time, but the model is more accurate

Exercise 3:
What happens if you remove the Flatten() layer?
###Code
import tensorflow as tf
print(tf.__version__)
mnist = tf.keras.datasets.mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images = training_images/255.0
test_images = test_images/255.0
model = tf.keras.models.Sequential([#tf.keras.layers.Flatten(),
                                    tf.keras.layers.Dense(64, activation=tf.nn.relu),
                                    tf.keras.layers.Dense(10, activation=tf.nn.softmax)])
model.compile(optimizer = 'adam', loss = 'sparse_categorical_crossentropy')
model.fit(training_images, training_labels, epochs=5)
model.evaluate(test_images, test_labels)
classifications = model.predict(test_images)
print(classifications[0])
print(test_labels[0])
###Output
_____no_output_____
###Markdown
Exercise 4:
What happens if you add an extra hidden layer between the 512-neuron layer and the output layer?
###Code
import tensorflow as tf
print(tf.__version__)
mnist = tf.keras.datasets.mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images = training_images/255.0
test_images = test_images/255.0
model = tf.keras.models.Sequential([tf.keras.layers.Flatten(),
                                    tf.keras.layers.Dense(512, activation=tf.nn.relu),
                                    tf.keras.layers.Dense(256, activation=tf.nn.relu),
                                    tf.keras.layers.Dense(10, activation=tf.nn.softmax)])
model.compile(optimizer = 'adam', loss = 'sparse_categorical_crossentropy')
model.fit(training_images, training_labels, epochs=5)
model.evaluate(test_images, test_labels)
classifications = model.predict(test_images)
print(classifications[0])
print(test_labels[0])
###Output
_____no_output_____
###Markdown
Exercise 5:
Train with a different number of epochs. What happens in those cases?
###Code
import tensorflow as tf
print(tf.__version__)
mnist = tf.keras.datasets.mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images = training_images/255.0
test_images = test_images/255.0
model = tf.keras.models.Sequential([tf.keras.layers.Flatten(),
                                    tf.keras.layers.Dense(128, activation=tf.nn.relu),
                                    tf.keras.layers.Dense(10, activation=tf.nn.softmax)])
model.compile(optimizer = 'adam', loss = 'sparse_categorical_crossentropy')
model.fit(training_images, training_labels, epochs=30)
model.evaluate(test_images, test_labels)
classifications = model.predict(test_images)
print(classifications[34])
print(test_labels[34])
###Output
_____no_output_____
###Markdown
Exercise 6:
What happens if you don't normalize the data?
###Code
import tensorflow as tf
print(tf.__version__)
mnist = tf.keras.datasets.mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
# training_images=training_images/255.0
# test_images=test_images/255.0
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation=tf.nn.relu),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.fit(training_images, training_labels, epochs=5)
model.evaluate(test_images, test_labels)
classifications = model.predict(test_images)
print(classifications[0])
print(test_labels[0])
###Output
_____no_output_____
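###Markdown
Regarding exercise 6, a tiny illustrative sketch of what the commented-out normalization changes (the pixel values below are made up):
###Code
import numpy as np

pixels = np.array([0, 128, 255], dtype=np.float32)  # made-up raw pixel values
print(pixels.min(), pixels.max())                    # 0.0 255.0
scaled = pixels / 255.0
print(scaled.min(), scaled.max())                    # 0.0 1.0, a range gradient descent handles much better
###Output
_____no_output_____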
notebooks/039_exp.ipynb
###Markdown ![](https://images.unsplash.com/photo-1602084551218-a28205125639?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=2070&q=80) <div class = 'alert alert-block alert-info' style = 'background-color:4c1c84; color:eeebf1; border-width:5px; border-color:4c1c84; font-family:Comic Sans MS; border-radius: 50px 50px'> Exp 039 <a href = "Config" style = "color:eeebf1; font-size:14px">1.Config <a href = "Settings" style = "color:eeebf1; font-size:14px">2.Settings <a href = "Data-Load" style = "color:eeebf1; font-size:14px">3.Data Load <a href = "Pytorch-Settings" style = "color:eeebf1; font-size:14px">4.Pytorch Settings <a href = "Training" style = "color:eeebf1; font-size:14px">5.Training<p style = 'font-size:24px; color:4c1c84'> 実施したこと <li style = "color:4c1c84; font-size:14px">Albert-base Config ###Code import sys sys.path.append("../src/utils/iterative-stratification/") sys.path.append("../src/utils/detoxify") sys.path.append("../src/utils/coral-pytorch/") sys.path.append("../src/utils/pyspellchecker") !pip install --no-index --find-links ../src/utils/faiss/ faiss-gpu==1.6.3 import warnings warnings.simplefilter('ignore') import os import gc gc.enable() import sys import glob import copy import math import time import random import string import psutil import pathlib from pathlib import Path from contextlib import contextmanager from collections import defaultdict from box import Box from typing import Optional from pprint import pprint import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import japanize_matplotlib from tqdm.auto import tqdm as tqdmp from tqdm.autonotebook import tqdm as tqdm tqdmp.pandas() ## Model from sklearn.metrics import mean_squared_error from sklearn.model_selection import StratifiedKFold, KFold, GroupKFold import torch import torch.nn as nn import torch.nn.functional as F from torch.utils.data import Dataset, DataLoader from transformers import AutoTokenizer, AutoModel, AdamW, AutoModelForSequenceClassification from transformers import RobertaModel, RobertaForSequenceClassification from transformers import RobertaTokenizer from transformers import LukeTokenizer, LukeModel, LukeConfig from transformers import get_linear_schedule_with_warmup, get_cosine_schedule_with_warmup from transformers import BertTokenizer, BertForSequenceClassification, BertForMaskedLM from transformers import RobertaTokenizer, RobertaForSequenceClassification from transformers import XLMRobertaTokenizer, XLMRobertaForSequenceClassification from transformers import DebertaTokenizer, DebertaModel from transformers import DistilBertTokenizer, DistilBertModel, DistilBertForSequenceClassification from transformers import AlbertTokenizer, AlbertModel # Pytorch Lightning import pytorch_lightning as pl from pytorch_lightning.utilities.seed import seed_everything from pytorch_lightning import callbacks from pytorch_lightning.callbacks.progress import ProgressBarBase from pytorch_lightning import LightningDataModule, LightningDataModule from pytorch_lightning import Trainer from pytorch_lightning.callbacks import ModelCheckpoint, EarlyStopping, LearningRateMonitor from pytorch_lightning.loggers import WandbLogger from pytorch_lightning.loggers.csv_logs import CSVLogger from pytorch_lightning.callbacks import RichProgressBar from sklearn.linear_model import Ridge from sklearn.svm import SVC, SVR from sklearn.feature_extraction.text import TfidfVectorizer from scipy.stats import rankdata from cuml.svm import SVR as 
from cuml.linear_model import Ridge as cuml_Ridge
import cudf

from detoxify import Detoxify
from iterstrat.ml_stratifiers import MultilabelStratifiedKFold
from ast import literal_eval
from nltk.tokenize import TweetTokenizer
import spacy
from scipy.stats import sem
from copy import deepcopy
from spellchecker import SpellChecker
from typing import Text, Set, List
import faiss
import cudf, cuml, cupy
from cuml.feature_extraction.text import TfidfVectorizer as cuTfidfVectorizer
from cuml.neighbors import NearestNeighbors as cuNearestNeighbors
from cuml.decomposition.tsvd import TruncatedSVD as cuTruncatedSVD
import torch

config = {
    "exp_comment": "Pseudo Labeling",
    "seed": 42,
    "root": "/content/drive/MyDrive/kaggle/Jigsaw/raw",
    "n_fold": 10,
    "epoch": 5,
    "max_length": 256,
    "environment": "AWS",
    "project": "Jigsaw",
    "entity": "dataskywalker",
    "exp_name": "039_exp",
    "margin": 0.5,
    "train_fold": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
    "trainer": {
        "gpus": 1,
        "accumulate_grad_batches": 8,
        "progress_bar_refresh_rate": 1,
        "fast_dev_run": True,
        "num_sanity_val_steps": 0,
    },
    "train_loader": {
        "batch_size": 8,
        "shuffle": True,
        "num_workers": 1,
        "pin_memory": True,
        "drop_last": True,
    },
    "valid_loader": {
        "batch_size": 2,
        "shuffle": False,
        "num_workers": 1,
        "pin_memory": True,
        "drop_last": False,
    },
    "test_loader": {
        "batch_size": 2,
        "shuffle": False,
        "num_workers": 1,
        "pin_memory": True,
        "drop_last": False,
    },
    "backbone": {
        "name": "../data/processed/albert-base-v2/",
        "output_dim": 1,
    },
    "optimizer": {
        "name": "torch.optim.AdamW",
        "params": {
            "lr": 1e-6,
        },
    },
    "scheduler": {
        "name": "torch.optim.lr_scheduler.CosineAnnealingWarmRestarts",
        "params": {
            "T_0": 20,
            "eta_min": 0,
        },
    },
    "loss": "nn.MarginRankingLoss",
}

config = Box(config)
config.tokenizer = AlbertTokenizer.from_pretrained(config.backbone.name)
config.model = AlbertModel.from_pretrained(config.backbone.name)
# pprint(config)

config.tokenizer.save_pretrained(f"../data/processed/{config.backbone.name}")
pretrain_model = AlbertModel.from_pretrained(config.backbone.name)
pretrain_model.save_pretrained(f"../data/processed/{config.backbone.name}")

# I move back and forth between AWS, Kaggle, and Google Colab, so the environment-specific paths are gathered here
import os
import sys
from pathlib import Path

if config.environment == 'AWS':
    INPUT_DIR = Path('/mnt/work/data/kaggle/Jigsaw/')
    MODEL_DIR = Path(f'../models/{config.exp_name}/')
    OUTPUT_DIR = Path(f'../data/interim/{config.exp_name}/')
    UTIL_DIR = Path('/mnt/work/shimizu/kaggle/PetFinder/src/utils')
    os.makedirs(MODEL_DIR, exist_ok=True)
    os.makedirs(OUTPUT_DIR, exist_ok=True)
    print(f"Your environment is 'AWS'.\nINPUT_DIR is {INPUT_DIR}\nMODEL_DIR is {MODEL_DIR}\nOUTPUT_DIR is {OUTPUT_DIR}\nUTIL_DIR is {UTIL_DIR}")
elif config.environment == 'Kaggle':
    INPUT_DIR = Path('../input/*****')
    MODEL_DIR = Path('./')
    OUTPUT_DIR = Path('./')
    print(f"Your environment is 'Kaggle'.\nINPUT_DIR is {INPUT_DIR}\nMODEL_DIR is {MODEL_DIR}\nOUTPUT_DIR is {OUTPUT_DIR}")
elif config.environment == 'Colab':
    INPUT_DIR = Path('/content/drive/MyDrive/kaggle/Jigsaw/raw')
    BASE_DIR = Path("/content/drive/MyDrive/kaggle/Jigsaw/interim")
    MODEL_DIR = BASE_DIR / f'{config.exp_name}'
    OUTPUT_DIR = BASE_DIR / f'{config.exp_name}/'
    os.makedirs(MODEL_DIR, exist_ok=True)
    os.makedirs(OUTPUT_DIR, exist_ok=True)
    if not os.path.exists(INPUT_DIR):
        print('Please Mount your Google Drive.')
    else:
        print(f"Your environment is 'Colab'.\nINPUT_DIR is {INPUT_DIR}\nMODEL_DIR is {MODEL_DIR}\nOUTPUT_DIR is {OUTPUT_DIR}")
else:
    print("Please choose 'AWS' or 'Kaggle' or 'Colab'.\nINPUT_DIR is not found.")

# Fix random seeds
seed_everything(config.seed)

## Measure processing time and memory
@contextmanager
def timer(name: str, slack: bool = False):
    t0 = time.time()
    p = psutil.Process(os.getpid())
    m0 = p.memory_info()[0] / 2. ** 30
    print(f'<< {name} >> Start')
    yield
    m1 = p.memory_info()[0] / 2. ** 30
    delta = m1 - m0
    sign = '+' if delta >= 0 else '-'
    delta = math.fabs(delta)
    print(f"<< {name} >> {m1:.1f}GB({sign}{delta:.1f}GB):{time.time() - t0:.1f}sec", file=sys.stderr)
###Output
_____no_output_____
###Markdown
Data Load
###Code
## Data Check
for dirnames, _, filenames in os.walk(INPUT_DIR):
    for filename in filenames:
        print(f'{dirnames}/{filename}')

val_df = pd.read_csv("/mnt/work/data/kaggle/Jigsaw/validation_data.csv")
test_df = pd.read_csv("/mnt/work/data/kaggle/Jigsaw/comments_to_score.csv")
display(val_df.head())
display(test_df.head())

with timer("Count less text & more text"):
    less_df = val_df.groupby(["less_toxic"])["worker"].agg("count").reset_index()
    less_df.columns = ["text", "less_count"]
    more_df = val_df.groupby(["more_toxic"])["worker"].agg("count").reset_index()
    more_df.columns = ["text", "more_count"]

    text_df = pd.merge(
        less_df, more_df,
        on="text",
        how="outer"
    )
    text_df["less_count"] = text_df["less_count"].fillna(0)
    text_df["more_count"] = text_df["more_count"].fillna(0)
    text_df["target"] = text_df["more_count"]/(text_df["less_count"] + text_df["more_count"])

display(text_df)
###Output
<< Count less text & more text >> Start
###Markdown
Make Fold
###Code
texts = set(val_df["less_toxic"].to_list() + val_df["more_toxic"].to_list())
text2id = {t: id for id, t in enumerate(texts)}

val_df['less_id'] = val_df['less_toxic'].map(text2id)
val_df['more_id'] = val_df['more_toxic'].map(text2id)
val_df.head()

len_ids = len(text2id)
idarr = np.zeros((len_ids, len_ids), dtype=bool)
for lid, mid in val_df[['less_id', 'more_id']].values:
    min_id = min(lid, mid)
    max_id = max(lid, mid)
    idarr[max_id, min_id] = True

def add_ids(i, this_list):
    for j in range(len_ids):
        if idarr[i, j]:
            idarr[i, j] = False
            this_list.append(j)
            this_list = add_ids(j, this_list)
            # print(j, i)
    for j in range(i+1, len_ids):
        if idarr[j, i]:
            idarr[j, i] = False
            this_list.append(j)
            this_list = add_ids(j, this_list)
            # print(j, i)
    return this_list

group_list = []
for i in tqdm(range(len_ids)):
    for j in range(i+1, len_ids):
        if idarr[j, i]:
            this_list = add_ids(i, [i])
            # print(this_list)
            group_list.append(this_list)

id2groupid = {}
for gid, ids in enumerate(group_list):
    for id in ids:
        id2groupid[id] = gid

val_df['less_gid'] = val_df['less_id'].map(id2groupid)
val_df['more_gid'] = val_df['more_id'].map(id2groupid)
display(val_df.head())
display(val_df.tail())

val_df[val_df["more_gid"]==109]

print('unique text counts:', len_ids)
print('grouped text counts:', len(group_list))

# now we can use GroupKFold with group id
group_kfold = GroupKFold(n_splits=config.n_fold)

# Since df.less_gid and df.more_gid are the same, let's use df.less_gid here.
for fold, (trn, val) in enumerate(group_kfold.split(val_df, val_df, val_df.less_gid)):
    val_df.loc[val, "fold"] = fold
val_df["fold"] = val_df["fold"].astype(int)
val_df

val_df[val_df["less_gid"]==1751]
###Output
_____no_output_____
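###Markdown
To see why the group ids matter for the split, here is a tiny sketch of the same connected-component idea on made-up pairs (a simple union-find instead of the recursive add_ids): texts linked by any comparison chain must land in the same fold, otherwise a text seen in training could reappear in validation.
###Code
pairs = [(0, 1), (1, 2), (3, 4)]   # hypothetical (less_id, more_id) pairs over 5 texts

parent = list(range(5))            # minimal union-find
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path compression
        x = parent[x]
    return x
def union(a, b):
    parent[find(a)] = find(b)

for a, b in pairs:
    union(a, b)

# texts 0, 1, 2 end up sharing one root; texts 3 and 4 share another
print([find(i) for i in range(5)])
###Output
_____no_output_____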
###Markdown
Pytorch Dataset
###Code
class JigsawDataset:
    def __init__(self, df, tokenizer, max_length, mode):
        self.df = df
        self.max_len = max_length
        self.tokenizer = tokenizer
        self.mode = mode
        if self.mode == "train":
            self.more_toxic = df["more_toxic"].values
            self.less_toxic = df["less_toxic"].values
        elif self.mode == "valid":
            self.more_toxic = df["more_toxic"].values
            self.less_toxic = df["less_toxic"].values
        else:
            self.text = df["text"].values

    def __len__(self):
        return len(self.df)

    def __getitem__(self, index):
        if self.mode == "train":
            more_toxic = self.more_toxic[index]
            less_toxic = self.less_toxic[index]
            inputs_more_toxic = self.tokenizer.encode_plus(
                more_toxic,
                truncation=True,
                return_attention_mask=True,
                return_token_type_ids=True,
                max_length=self.max_len,
                padding="max_length",
            )
            inputs_less_toxic = self.tokenizer.encode_plus(
                less_toxic,
                truncation=True,
                return_attention_mask=True,
                return_token_type_ids=True,
                max_length=self.max_len,
                padding="max_length",
            )
            target = 1

            more_toxic_ids = inputs_more_toxic["input_ids"]
            more_toxic_mask = inputs_more_toxic["attention_mask"]
            more_token_type_ids = inputs_more_toxic["token_type_ids"]

            less_toxic_ids = inputs_less_toxic["input_ids"]
            less_toxic_mask = inputs_less_toxic["attention_mask"]
            less_token_type_ids = inputs_less_toxic["token_type_ids"]

            return {
                'more_toxic_ids': torch.tensor(more_toxic_ids, dtype=torch.long),
                'more_toxic_mask': torch.tensor(more_toxic_mask, dtype=torch.long),
                'more_token_type_ids': torch.tensor(more_token_type_ids, dtype=torch.long),
                'less_toxic_ids': torch.tensor(less_toxic_ids, dtype=torch.long),
                'less_toxic_mask': torch.tensor(less_toxic_mask, dtype=torch.long),
                'less_token_type_ids': torch.tensor(less_token_type_ids, dtype=torch.long),
                'target': torch.tensor(target, dtype=torch.float)
            }

        elif self.mode == "valid":
            more_toxic = self.more_toxic[index]
            less_toxic = self.less_toxic[index]
            inputs_more_toxic = self.tokenizer.encode_plus(
                more_toxic,
                truncation=True,
                return_attention_mask=True,
                return_token_type_ids=True,
                max_length=self.max_len,
                padding="max_length",
            )
            inputs_less_toxic = self.tokenizer.encode_plus(
                less_toxic,
                truncation=True,
                return_attention_mask=True,
                return_token_type_ids=True,
                max_length=self.max_len,
                padding="max_length",
            )
            target = 1

            more_toxic_ids = inputs_more_toxic["input_ids"]
            more_toxic_mask = inputs_more_toxic["attention_mask"]
            more_token_type_ids = inputs_more_toxic["token_type_ids"]

            less_toxic_ids = inputs_less_toxic["input_ids"]
            less_toxic_mask = inputs_less_toxic["attention_mask"]
            less_token_type_ids = inputs_less_toxic["token_type_ids"]

            return {
                'more_toxic_ids': torch.tensor(more_toxic_ids, dtype=torch.long),
                'more_toxic_mask': torch.tensor(more_toxic_mask, dtype=torch.long),
                'more_token_type_ids': torch.tensor(more_token_type_ids, dtype=torch.long),
                'less_toxic_ids': torch.tensor(less_toxic_ids, dtype=torch.long),
                'less_toxic_mask': torch.tensor(less_toxic_mask, dtype=torch.long),
                'less_token_type_ids': torch.tensor(less_token_type_ids, dtype=torch.long),
                'target': torch.tensor(target, dtype=torch.float)
            }

        else:
            text = self.text[index]
            inputs_text = self.tokenizer.encode_plus(
                text,
                truncation=True,
                return_attention_mask=True,
                return_token_type_ids=True,
                max_length=self.max_len,
                padding="max_length",
            )
            text_ids = inputs_text["input_ids"]
            text_mask = inputs_text["attention_mask"]
            text_token_type_ids = inputs_text["token_type_ids"]

            return {
                'text_ids': torch.tensor(text_ids, dtype=torch.long),
                'text_mask': torch.tensor(text_mask, dtype=torch.long),
                'text_token_type_ids': torch.tensor(text_token_type_ids, dtype=torch.long),
            }
###Output
_____no_output_____
###Markdown
<h2 style = "font-size:45px; font-family:Comic Sans MS ; font-weight : normal; background-color: eeebf1 ; color : 4c1c84; text-align: center; border-radius: 100px 100px;"> DataModule
###Code
class JigsawDataModule(LightningDataModule):
    def __init__(self, train_df, valid_df, test_df, cfg):
        super().__init__()
        self._train_df = train_df
        self._valid_df = valid_df
        self._test_df = test_df
        self._cfg = cfg

    def train_dataloader(self):
        dataset = JigsawDataset(
            df=self._train_df,
            tokenizer=self._cfg.tokenizer,
            max_length=self._cfg.max_length,
            mode="train",
        )
        return DataLoader(dataset, **self._cfg.train_loader)

    def val_dataloader(self):
        dataset = JigsawDataset(
            df=self._valid_df,
            tokenizer=self._cfg.tokenizer,
            max_length=self._cfg.max_length,
            mode="valid",
        )
        return DataLoader(dataset, **self._cfg.valid_loader)

    def test_dataloader(self):
        dataset = JigsawDataset(
            df=self._test_df,
            tokenizer=self._cfg.tokenizer,
            max_length=self._cfg.max_length,
            mode="test",
        )
        return DataLoader(dataset, **self._cfg.test_loader)

## DataCheck
seed_everything(config.seed)
sample_dataloader = JigsawDataModule(val_df, val_df, test_df, config).train_dataloader()
for data in sample_dataloader:
    break

print(data["more_toxic_ids"].size())
print(data["more_toxic_mask"].size())
print(data["more_token_type_ids"].size())
print(data["target"].size())
print(data["target"])

output = config.model(
    data["more_toxic_ids"],
    data["more_toxic_mask"],
    data["more_token_type_ids"],
    output_hidden_states=True,
    output_attentions=True,
)
print(output["hidden_states"][-1].size(), output["attentions"][-1].size())
print(output["hidden_states"][-1][:, 1, :].size(), output["attentions"][-1].size())
###Output
torch.Size([8, 256])
torch.Size([8, 256])
torch.Size([8, 256])
torch.Size([8])
tensor([1., 1., 1., 1., 1., 1., 1., 1.])
torch.Size([8, 256, 768]) torch.Size([8, 12, 256, 256])
torch.Size([8, 768]) torch.Size([8, 12, 256, 256])
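###Markdown
Before defining the model below, a minimal numeric sketch (scores made up) of how nn.MarginRankingLoss behaves with target 1: it penalizes max(0, -(x1 - x2) + margin), so the more-toxic score x1 must beat the less-toxic score x2 by at least the margin.
###Code
import torch
import torch.nn as nn

loss_fn = nn.MarginRankingLoss(margin=0.5)
x1 = torch.tensor([1.2, 0.3])   # hypothetical more-toxic scores
x2 = torch.tensor([0.4, 0.2])   # hypothetical less-toxic scores
y = torch.ones(2)               # target 1: x1 should rank above x2

# per pair: max(0, -(1.2-0.4)+0.5) = 0.0 and max(0, -(0.3-0.2)+0.5) = 0.4, mean = 0.2
print(loss_fn(x1, x2, y))       # tensor(0.2000)
###Output
_____no_output_____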
###Markdown
<h2 style = "font-size:45px; font-family:Comic Sans MS ; font-weight : normal; background-color: eeebf1 ; color : 4c1c84; text-align: center; border-radius: 100px 100px;"> LightningModule
###Code
def criterion(outputs1, outputs2, targets):
    return nn.MarginRankingLoss(margin=config.margin)(outputs1, outputs2, targets)

class JigsawModel(pl.LightningModule):
    def __init__(self, cfg, fold_num):
        super().__init__()
        self.cfg = cfg
        self.__build_model()
        self.criterion = criterion
        self.save_hyperparameters(cfg)
        self.fold_num = fold_num

    def __build_model(self):
        self.base_model = AlbertModel.from_pretrained(
            self.cfg.backbone.name
        )
        print(f"Use Model: {self.cfg.backbone.name}")
        self.norm = nn.LayerNorm(768)
        self.drop = nn.Dropout(p=0.3)
        self.head = nn.Linear(768, self.cfg.backbone.output_dim)

    def forward(self, ids, mask, token_type_ids):
        output = self.base_model(
            input_ids=ids,
            attention_mask=mask,
            token_type_ids=token_type_ids,
            output_hidden_states=True,
            output_attentions=True
        )
        feature = self.norm(output["hidden_states"][-1][:, 1, :])
        out = self.drop(feature)
        out = self.head(out)
        return {
            "logits": out,
            "feature": feature,
            "attention": output["attentions"],
            "mask": mask,
        }

    def training_step(self, batch, batch_idx):
        more_toxic_ids = batch['more_toxic_ids']
        more_toxic_mask = batch['more_toxic_mask']
        more_text_token_type_ids = batch['more_token_type_ids']
        less_toxic_ids = batch['less_toxic_ids']
        less_toxic_mask = batch['less_toxic_mask']
        less_text_token_type_ids = batch['less_token_type_ids']
        targets = batch['target']

        more_outputs = self.forward(
            more_toxic_ids,
            more_toxic_mask,
            more_text_token_type_ids
        )
        less_outputs = self.forward(
            less_toxic_ids,
            less_toxic_mask,
            less_text_token_type_ids
        )
        loss = self.criterion(more_outputs["logits"], less_outputs["logits"], targets)
        return {
            "loss": loss,
            "targets": targets,
        }

    def training_epoch_end(self, training_step_outputs):
        loss_list = []
        for out in training_step_outputs:
            loss_list.extend([out["loss"].cpu().detach().tolist()])
        meanloss = sum(loss_list)/len(loss_list)
        logs = {f"train_loss/fold{self.fold_num+1}": meanloss,}
        self.log_dict(
            logs,
            on_step=False,
            on_epoch=True,
            prog_bar=True,
            logger=True
        )

    def validation_step(self, batch, batch_idx):
        more_toxic_ids = batch['more_toxic_ids']
        more_toxic_mask = batch['more_toxic_mask']
        more_text_token_type_ids = batch['more_token_type_ids']
        less_toxic_ids = batch['less_toxic_ids']
        less_toxic_mask = batch['less_toxic_mask']
        less_text_token_type_ids = batch['less_token_type_ids']
        targets = batch['target']

        more_outputs = self.forward(
            more_toxic_ids,
            more_toxic_mask,
            more_text_token_type_ids
        )
        less_outputs = self.forward(
            less_toxic_ids,
            less_toxic_mask,
            less_text_token_type_ids
        )
        outputs = more_outputs["logits"] - less_outputs["logits"]
        logits = outputs.clone()[:, 0]
        logits[logits > 0] = 1
        loss = nn.BCEWithLogitsLoss()(logits, targets)
        return {
            "loss": loss,
            "pred": outputs,
            "targets": targets,
        }

    def validation_epoch_end(self, validation_step_outputs):
        loss_list = []
        pred_list = []
        target_list = []
        for out in validation_step_outputs:
            loss_list.extend([out["loss"].cpu().detach().tolist()])
            pred_list.append(out["pred"][:, 0].detach().cpu().numpy())
            target_list.append(out["targets"].detach().cpu().numpy())
        meanloss = sum(loss_list)/len(loss_list)
        pred_list = np.concatenate(pred_list)
        pred_count = sum(x > 0 for x in pred_list)/len(pred_list)
        logs = {
            f"valid_loss/fold{self.fold_num+1}": meanloss,
            f"valid_acc/fold{self.fold_num+1}": pred_count,
        }
        self.log_dict(
            logs,
            on_step=False,
            on_epoch=True,
            prog_bar=True,
            logger=True
        )

    def configure_optimizers(self):
        optimizer = eval(self.cfg.optimizer.name)(
            self.parameters(), **self.cfg.optimizer.params
        )
        self.scheduler = eval(self.cfg.scheduler.name)(
            optimizer,
            **self.cfg.scheduler.params
        )
        scheduler = {"scheduler": self.scheduler, "interval": "step",}
        return [optimizer], [scheduler]
###Output
_____no_output_____
###Markdown
<h2 style = "font-size:45px; font-family:Comic Sans MS ; font-weight : normal; background-color: eeebf1 ; color : 4c1c84; text-align: center; border-radius: 100px 100px;"> Training
###Code
# skf = StratifiedKFold(n_splits=config.n_fold, shuffle=True, random_state=config.seed)
# for fold, (_, val_idx) in enumerate(skf.split(X=val_df, y=val_df["worker"])):
#     val_df.loc[val_idx, "kfold"] = int(fold)
# val_df["kfold"] = val_df["kfold"].astype(int)
val_df.head()
## Debug
config.trainer.fast_dev_run = True
config.backbone.output_dim = 1

for fold in config.train_fold:
    print("★"*25, f" Fold{fold+1} ", "★"*25)

    df_train = val_df[val_df.fold != fold].reset_index(drop=True)
    df_valid = val_df[val_df.fold == fold].reset_index(drop=True)

    datamodule = JigsawDataModule(df_train, df_valid, test_df, config)
    sample_dataloader = JigsawDataModule(df_train, df_valid, test_df, config).train_dataloader()
    config.scheduler.params.T_0 = config.epoch * len(sample_dataloader)
    model = JigsawModel(config, fold)

    lr_monitor = callbacks.LearningRateMonitor()
    loss_checkpoint = callbacks.ModelCheckpoint(
        filename=f"best_acc_fold{fold+1}",
        monitor=f"valid_acc/fold{fold+1}",
        save_top_k=1,
        mode="max",
        save_last=False,
        dirpath=MODEL_DIR,
        save_weights_only=True,
    )
    wandb_logger = WandbLogger(
        project=config.project,
        entity=config.entity,
        name=f"{config.exp_name}",
        tags=['AlbertModel-Base', "MarginRankLoss"]
    )
    lr_monitor = LearningRateMonitor(logging_interval='step')

    trainer = pl.Trainer(
        max_epochs=config.epoch,
        callbacks=[loss_checkpoint, lr_monitor, RichProgressBar()],
        # deterministic=True,
        logger=[wandb_logger],
        **config.trainer
    )
    trainer.fit(model, datamodule=datamodule)

# Training
config.trainer.fast_dev_run = False
config.backbone.output_dim = 1

for fold in config.train_fold:
    print("★"*25, f" Fold{fold+1} ", "★"*25)

    df_train = val_df[val_df.fold != fold].reset_index(drop=True)
    df_valid = val_df[val_df.fold == fold].reset_index(drop=True)

    datamodule = JigsawDataModule(df_train, df_valid, test_df, config)
    sample_dataloader = JigsawDataModule(df_train, df_valid, test_df, config).train_dataloader()
    config.scheduler.params.T_0 = config.epoch * len(sample_dataloader)
    model = JigsawModel(config, fold)

    lr_monitor = callbacks.LearningRateMonitor()
    loss_checkpoint = callbacks.ModelCheckpoint(
        filename=f"best_acc_fold{fold+1}",
        monitor=f"valid_acc/fold{fold+1}",
        save_top_k=1,
        mode="max",
        save_last=False,
        dirpath=MODEL_DIR,
        save_weights_only=True,
    )
    wandb_logger = WandbLogger(
        project=config.project,
        entity=config.entity,
        name=f"{config.exp_name}",
        tags=['Albert-Base', "MarginRankLoss"]
    )
    lr_monitor = LearningRateMonitor(logging_interval='step')

    trainer = pl.Trainer(
        max_epochs=config.epoch,
        callbacks=[loss_checkpoint, lr_monitor, RichProgressBar()],
        # deterministic=True,
        logger=[wandb_logger],
        **config.trainer
    )
    trainer.fit(model, datamodule=datamodule)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(f"Device == {device}")

MORE = np.zeros(len(val_df))
LESS = np.zeros(len(val_df))
PRED = np.zeros(len(test_df))
attention_array = np.zeros((len(val_df), 256))  # store the attentions
mask_array = np.zeros((len(val_df), 256))       # store the masks; multiplied with the attentions later

for fold in config.train_fold:
    pred_list = []
    print("★"*25, f" Fold{fold+1} ", "★"*25)
    val_idx = val_df[val_df.fold == fold].index.tolist()
    df_train = val_df[val_df.fold != fold].reset_index(drop=True)
    df_valid = val_df[val_df.fold == fold].reset_index(drop=True)

    datamodule = JigsawDataModule(df_train, df_valid, test_df, config)
    valid_dataloader = JigsawDataModule(df_train, df_valid, test_df, config).val_dataloader()

    model = JigsawModel(config, fold)
    loss_checkpoint = callbacks.ModelCheckpoint(
        filename=f"best_acc_fold{fold+1}",
        monitor=f"valid_acc/fold{fold+1}",
        save_top_k=1,
        mode="max",
        save_last=False,
        dirpath="../input/toxicroberta/",
    )
    model = model.load_from_checkpoint(MODEL_DIR/f"best_acc_fold{fold+1}.ckpt", cfg=config, fold_num=fold)
    model.to(device)
    model.eval()

    more_list = []
    less_list = []
    for step, data in tqdm(enumerate(valid_dataloader), total=len(valid_dataloader)):
        more_toxic_ids = data['more_toxic_ids'].to(device)
        more_toxic_mask = data['more_toxic_mask'].to(device)
        more_text_token_type_ids = data['more_token_type_ids'].to(device)
        less_toxic_ids = data['less_toxic_ids'].to(device)
        less_toxic_mask = data['less_toxic_mask'].to(device)
        less_text_token_type_ids = data['less_token_type_ids'].to(device)

        more_outputs = model(
            more_toxic_ids,
            more_toxic_mask,
            more_text_token_type_ids,
        )
        less_outputs = model(
            less_toxic_ids,
            less_toxic_mask,
            less_text_token_type_ids
        )
        more_list.append(more_outputs["logits"][:, 0].detach().cpu().numpy())
        less_list.append(less_outputs["logits"][:, 0].detach().cpu().numpy())

    MORE[val_idx] += np.concatenate(more_list)
    LESS[val_idx] += np.concatenate(less_list)
    # PRED += pred_list/len(config.train_fold)
plt.figure(figsize=(12, 5))
plt.scatter(LESS, MORE)
plt.xlabel("less-toxic")
plt.ylabel("more-toxic")
plt.grid()
plt.show()

val_df["less_attack"] = LESS
val_df["more_attack"] = MORE
val_df["diff_attack"] = val_df["more_attack"] - val_df["less_attack"]

attack_score = val_df[val_df["diff_attack"]>0]["diff_attack"].count()/len(val_df)
print(f"HATE-BERT Jigsaw-Classification Score: {attack_score:.6f}")

val_df[val_df["fold"]==0]
###Output
_____no_output_____
###Markdown
<h2 style = "font-size:45px; font-family:Comic Sans MS ; font-weight : normal; background-color: eeebf1 ; color : 4c1c84; text-align: center; border-radius: 100px 100px;"> Attention Visualize
###Code
text_df = pd.DataFrame()
text_df["text"] = list(set(val_df["less_toxic"].unique().tolist() + val_df["more_toxic"].unique().tolist()))
display(text_df.head())
display(text_df.shape)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(f"Device == {device}")

attention_array = np.zeros((len(text_df), config.max_length))  # store the attentions
mask_array = np.zeros((len(text_df), config.max_length))       # store the masks; multiplied with the attentions later
feature_array = np.zeros((len(text_df), 768))
PRED = np.zeros(len(text_df))

for fold in config.train_fold:
    pred_list = []
    print("★"*25, f" Fold{fold+1} ", "★"*25)
    test_dataloader = JigsawDataModule(val_df, val_df, text_df, config).test_dataloader()

    model = JigsawModel(config, fold)
    loss_checkpoint = callbacks.ModelCheckpoint(
        filename=f"best_acc_fold{fold+1}",
        monitor=f"valid_acc/fold{fold+1}",
        save_top_k=1,
        mode="max",
        save_last=False,
        dirpath="../input/toxicroberta/",
    )
    model = model.load_from_checkpoint(MODEL_DIR/f"best_acc_fold{fold+1}.ckpt", cfg=config, fold_num=fold)
    model.to(device)
    model.eval()

    attention_list = []
    feature_list = []
    mask_list = []
    pred_list = []
    for step, data in tqdm(enumerate(test_dataloader), total=len(test_dataloader)):
        text_ids = data["text_ids"].to(device)
        text_mask = data["text_mask"].to(device)
        text_token_type_ids = data["text_token_type_ids"].to(device)
        mask_list.append(text_mask.detach().cpu().numpy())

        outputs = model(
            text_ids,
            text_mask,
            text_token_type_ids,
        )

        ## Attention of the last layer on the CLS token, summed over all heads
        last_attention = outputs["attention"][-1].detach().cpu().numpy()
        total_attention = np.zeros((last_attention.shape[0], config.max_length))
        for batch in range(last_attention.shape[0]):
            for n_head in range(12):
                total_attention[batch, :] += last_attention[batch, n_head, 0, :]
        attention_list.append(total_attention)
        pred_list.append(outputs["logits"][:, 0].detach().cpu().numpy())
        feature_list.append(outputs["feature"].detach().cpu().numpy())

    attention_array += np.concatenate(attention_list)/config.n_fold
    mask_array += np.concatenate(mask_list)/config.n_fold
    feature_array += np.concatenate(feature_list)/config.n_fold
    PRED += np.concatenate(pred_list)/len(config.train_fold)

text_df["target"] = PRED
text_df.to_pickle(OUTPUT_DIR/f"{config.exp_name}__text_df.pkl")
np.save(OUTPUT_DIR/'toxic-attention.npy', attention_array)
np.save(OUTPUT_DIR/'toxic-mask.npy', mask_array)
np.save(OUTPUT_DIR/'toxic-feature.npy', feature_array)

plt.figure(figsize=(12, 5))
sns.histplot(text_df["target"], color="#4c1c84")
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
<h2 style = "font-size:45px; font-family:Comic Sans MS ; font-weight : normal; background-color: eeebf1 ; color : 4c1c84; text-align: center; border-radius: 100px 100px;"> Attention Load
###Code
text_df = pd.read_pickle(OUTPUT_DIR/"text_df.pkl")
attention_array = np.load(OUTPUT_DIR/'toxic-attention.npy')
mask_array = np.load(OUTPUT_DIR/'toxic-mask.npy')

from IPython.display import display, HTML

def highlight_r(word, attn):
    # map attention weight to a shade of red: higher attention means a deeper red
    html_color = '#%02X%02X%02X' % (255, int(255*(1 - attn)), int(255*(1 - attn)))
    return '<span style="background-color: {}">{}</span>'.format(html_color, word)

num = 12
ids = config.tokenizer(text_df.loc[num, "text"])["input_ids"]
tokens = config.tokenizer.convert_ids_to_tokens(ids)
attention = attention_array[num, :][np.nonzero(mask_array[num, :])]

html_outputs = []
for word, attn in zip(tokens, attention):
    html_outputs.append(highlight_r(word, attn))

print(f"Offensive Score is {PRED[num]}")
display(HTML(' '.join(html_outputs)))
display(text_df.loc[num, "text"])

text_df.sort_values("target", ascending=False).head(20)

high_score_list = text_df.sort_values("target", ascending=False).head(20).index.tolist()
for num in high_score_list:
    ids = config.tokenizer(text_df.loc[num, "text"])["input_ids"]
    tokens = config.tokenizer.convert_ids_to_tokens(ids)
    attention = attention_array[num, :][np.nonzero(mask_array[num, :])]

    html_outputs = []
    for word, attn in zip(tokens, attention):
        html_outputs.append(highlight_r(word, attn))

    print(f"Offensive Score is {PRED[num]}")
    display(HTML(' '.join(html_outputs)))
    display(text_df.loc[num, "text"])
###Output
Offensive Score is 1.8255026936531067
1_Neural Networks and Deep Learning/Week_2/Logistic Regression as a Neural Network/Logistic_Regression_with_a_Neural_Network_mindset_v6a.ipynb
###Markdown
Logistic Regression with a Neural Network mindset

Welcome to your first (required) programming assignment! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a Neural Network mindset, and so will also hone your intuitions about deep learning.

**Instructions:**
- Do not use loops (for/while) in your code, unless the instructions explicitly ask you to do so.

**You will learn to:**
- Build the general architecture of a learning algorithm, including:
    - Initializing parameters
    - Calculating the cost function and its gradient
    - Using an optimization algorithm (gradient descent)
- Gather all three functions above into a main model function, in the right order.

Updates

This notebook has been updated over the past few months. The prior version was named "v5", and the current version is now named '6a'.

If you were working on a previous version:
* You can find your prior work by looking in the file directory for the older files (named by version name).
* To view the file directory, click on the "Coursera" icon in the top left corner of this notebook.
* Please copy your work from the older versions to the new version, in order to submit your work for grading.

List of Updates
* Forward propagation formula, indexing now starts at 1 instead of 0.
* Optimization function comment now says "print cost every 100 training iterations" instead of "examples".
* Fixed grammar in the comments.
* Y_prediction_test variable name is used consistently.
* Plot's axis label now says "iterations (hundred)" instead of "iterations".
* When testing the model, the test image is normalized by dividing by 255.

1 - Packages

First, let's run the cell below to import all the packages that you will need during this assignment.
- [numpy](http://www.numpy.org) is the fundamental package for scientific computing with Python.
- [h5py](http://www.h5py.org) is a common package to interact with a dataset that is stored on an H5 file.
- [matplotlib](http://matplotlib.org) is a famous library to plot graphs in Python.
- [PIL](http://www.pythonware.com/products/pil/) and [scipy](https://www.scipy.org/) are used here to test your model with your own picture at the end.
###Code
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset

%matplotlib inline
###Output
_____no_output_____
###Markdown
2 - Overview of the Problem set

**Problem Statement**: You are given a dataset ("data.h5") containing:
- a training set of m_train images labeled as cat (y=1) or non-cat (y=0)
- a test set of m_test images labeled as cat or non-cat
- each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB). Thus, each image is square (height = num_px) and (width = num_px).

You will build a simple image-recognition algorithm that can correctly classify pictures as cat or non-cat.

Let's get more familiar with the dataset. Load the data by running the following code.
###Code
# Loading the data (cat/non-cat)
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
###Output
_____no_output_____
###Markdown
We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).

Each line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can visualize an example by running the following code. Feel free also to change the `index` value and re-run to see other images.
###Code
# Example of a picture
index = 25
plt.imshow(train_set_x_orig[index])
print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.")
###Output
y = [1], it's a 'cat' picture.
###Markdown
Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs.

**Exercise:** Find the values for:
- m_train (number of training examples)
- m_test (number of test examples)
- num_px (= height = width of a training image)

Remember that `train_set_x_orig` is a numpy-array of shape (m_train, num_px, num_px, 3). For instance, you can access `m_train` by writing `train_set_x_orig.shape[0]`.
###Code
### START CODE HERE ### (≈ 3 lines of code)
m_train = train_set_x_orig.shape[0]
m_test = test_set_x_orig.shape[0]
num_px = train_set_x_orig.shape[1]
### END CODE HERE ###

print ("Number of training examples: m_train = " + str(m_train))
print ("Number of testing examples: m_test = " + str(m_test))
print ("Height/Width of each image: num_px = " + str(num_px))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_set_x shape: " + str(train_set_x_orig.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x shape: " + str(test_set_x_orig.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
###Output
Number of training examples: m_train = 209
Number of testing examples: m_test = 50
Height/Width of each image: num_px = 64
Each image is of size: (64, 64, 3)
train_set_x shape: (209, 64, 64, 3)
train_set_y shape: (1, 209)
test_set_x shape: (50, 64, 64, 3)
test_set_y shape: (1, 50)
###Markdown
**Expected Output for m_train, m_test and num_px**:

**m_train** = 209
**m_test** = 50
**num_px** = 64

For convenience, you should now reshape images of shape (num_px, num_px, 3) in a numpy-array of shape (num_px $*$ num_px $*$ 3, 1). After this, our training (and test) dataset is a numpy-array where each column represents a flattened image. There should be m_train (respectively m_test) columns.

**Exercise:** Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num\_px $*$ num\_px $*$ 3, 1).

A trick when you want to flatten a matrix X of shape (a,b,c,d) to a matrix X_flatten of shape (b$*$c$*$d, a) is to use:
```python
X_flatten = X.reshape(X.shape[0], -1).T      # X.T is the transpose of X
```
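A tiny sketch of the flattening trick on made-up data, just to check the shapes:
###Code
import numpy as np

X = np.random.rand(5, 4, 4, 3)           # 5 made-up "images" of shape (4, 4, 3)
X_flatten = X.reshape(X.shape[0], -1).T
print(X_flatten.shape)                    # (48, 5): one column per example
###Output
_____no_output_____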
There should be m_train (respectively m_test) columns.**Exercise:** Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num\_px $*$ num\_px $*$ 3, 1).A trick when you want to flatten a matrix X of shape (a,b,c,d) to a matrix X_flatten of shape (b$*$c$*$d, a) is to use: ```pythonX_flatten = X.reshape(X.shape[0], -1).T X.T is the transpose of X``` ###Code # Reshape the training and test examples ### START CODE HERE ### (≈ 2 lines of code) train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T ### END CODE HERE ### print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape)) print ("train_set_y shape: " + str(train_set_y.shape)) print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape)) print ("test_set_y shape: " + str(test_set_y.shape)) print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0])) ###Output train_set_x_flatten shape: (12288, 209) train_set_y shape: (1, 209) test_set_x_flatten shape: (12288, 50) test_set_y shape: (1, 50) sanity check after reshaping: [17 31 56 22 33] ###Markdown **Expected Output**: **train_set_x_flatten shape** (12288, 209) **train_set_y shape** (1, 209) **test_set_x_flatten shape** (12288, 50) **test_set_y shape** (1, 50) **sanity check after reshaping** [17 31 56 22 33] To represent color images, the red, green and blue channels (RGB) must be specified for each pixel, and so the pixel value is actually a vector of three numbers ranging from 0 to 255.One common preprocessing step in machine learning is to center and standardize your dataset, meaning that you substract the mean of the whole numpy array from each example, and then divide each example by the standard deviation of the whole numpy array. But for picture datasets, it is simpler and more convenient and works almost as well to just divide every row of the dataset by 255 (the maximum value of a pixel channel). Let's standardize our dataset. ###Code train_set_x = train_set_x_flatten/255. test_set_x = test_set_x_flatten/255. ###Output _____no_output_____ ###Markdown **What you need to remember:**Common steps for pre-processing a new dataset are:- Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...)- Reshape the datasets such that each example is now a vector of size (num_px \* num_px \* 3, 1)- "Standardize" the data 3 - General Architecture of the learning algorithm It's time to design a simple algorithm to distinguish cat images from non-cat images.You will build a Logistic Regression, using a Neural Network mindset. The following Figure explains why **Logistic Regression is actually a very simple Neural Network!****Mathematical expression of the algorithm**:For one example $x^{(i)}$:$$z^{(i)} = w^T x^{(i)} + b \tag{1}$$$$\hat{y}^{(i)} = a^{(i)} = sigmoid(z^{(i)})\tag{2}$$ $$ \mathcal{L}(a^{(i)}, y^{(i)}) = - y^{(i)} \log(a^{(i)}) - (1-y^{(i)} ) \log(1-a^{(i)})\tag{3}$$The cost is then computed by summing over all training examples:$$ J = \frac{1}{m} \sum_{i=1}^m \mathcal{L}(a^{(i)}, y^{(i)})\tag{6}$$**Key steps**:In this exercise, you will carry out the following steps: - Initialize the parameters of the model - Learn the parameters for the model by minimizing the cost - Use the learned parameters to make predictions (on the test set) - Analyse the results and conclude 4 - Building the parts of our algorithm The main steps for building a Neural Network are:1. 
4 - Building the parts of our algorithm

The main steps for building a Neural Network are:
1. Define the model structure (such as number of input features)
2. Initialize the model's parameters
3. Loop:
    - Calculate current loss (forward propagation)
    - Calculate current gradient (backward propagation)
    - Update parameters (gradient descent)

You often build 1-3 separately and integrate them into one function we call `model()`.

4.1 - Helper functions

**Exercise**: Using your code from "Python Basics", implement `sigmoid()`. As you've seen in the figure above, you need to compute $sigmoid( w^T x + b) = \frac{1}{1 + e^{-(w^T x + b)}}$ to make predictions. Use np.exp().
###Code
# GRADED FUNCTION: sigmoid

def sigmoid(z):
    """
    Compute the sigmoid of z

    Arguments:
    z -- A scalar or numpy array of any size.

    Return:
    s -- sigmoid(z)
    """

    ### START CODE HERE ### (≈ 1 line of code)
    s = 1/(1 + np.exp(-z))
    ### END CODE HERE ###

    return s

print ("sigmoid([0, 2]) = " + str(sigmoid(np.array([0,2]))))
###Output
sigmoid([0, 2]) = [ 0.5         0.88079708]
###Markdown
**Expected Output**:

**sigmoid([0, 2])** = [ 0.5  0.88079708]

4.2 - Initializing parameters

**Exercise:** Implement parameter initialization in the cell below. You have to initialize w as a vector of zeros. If you don't know what numpy function to use, look up np.zeros() in the Numpy library's documentation.
###Code
# GRADED FUNCTION: initialize_with_zeros

def initialize_with_zeros(dim):
    """
    This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.

    Argument:
    dim -- size of the w vector we want (or number of parameters in this case)

    Returns:
    w -- initialized vector of shape (dim, 1)
    b -- initialized scalar (corresponds to the bias)
    """

    ### START CODE HERE ### (≈ 1 line of code)
    w = np.zeros((dim,1))
    b = 0
    ### END CODE HERE ###

    assert(w.shape == (dim, 1))
    assert(isinstance(b, float) or isinstance(b, int))

    return w, b

dim = 2
w, b = initialize_with_zeros(dim)
print ("w = " + str(w))
print ("b = " + str(b))
###Output
w = [[ 0.]
 [ 0.]]
b = 0
###Markdown
**Expected Output**:

**w** = [[ 0.] [ 0.]]
**b** = 0

For image inputs, w will be of shape (num_px $\times$ num_px $\times$ 3, 1).

4.3 - Forward and Backward propagation

Now that your parameters are initialized, you can do the "forward" and "backward" propagation steps for learning the parameters.

**Exercise:** Implement a function `propagate()` that computes the cost function and its gradient.

**Hints**:

Forward Propagation:
- You get X
- You compute $A = \sigma(w^T X + b) = (a^{(1)}, a^{(2)}, ..., a^{(m-1)}, a^{(m)})$
- You calculate the cost function: $J = -\frac{1}{m}\sum_{i=1}^{m}y^{(i)}\log(a^{(i)})+(1-y^{(i)})\log(1-a^{(i)})$

Here are the two formulas you will be using:

$$ \frac{\partial J}{\partial w} = \frac{1}{m}X(A-Y)^T\tag{7}$$
$$ \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^m (a^{(i)}-y^{(i)})\tag{8}$$
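Where formulas (7) and (8) come from, in one step of the chain rule: for a single example $\frac{\partial \mathcal{L}}{\partial z^{(i)}} = a^{(i)} - y^{(i)}$, and since $z^{(i)} = w^T x^{(i)} + b$, we get $\frac{\partial \mathcal{L}}{\partial w} = x^{(i)}\,(a^{(i)} - y^{(i)})$ and $\frac{\partial \mathcal{L}}{\partial b} = a^{(i)} - y^{(i)}$; averaging over the $m$ examples stacked as columns of $X$ gives exactly (7) and (8).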
np.log(), np.dot()
    """

    m = X.shape[1]

    # FORWARD PROPAGATION (FROM X TO COST)
    ### START CODE HERE ### (≈ 2 lines of code)
    A = sigmoid(np.dot(w.T,X) + b)                                                        # compute activation
    cost = -(1/m)*(np.dot(Y, np.log(A).T) + np.dot((1 - Y), np.log(1 - A).T))             # compute cost
    ### END CODE HERE ###

    # BACKWARD PROPAGATION (TO FIND GRAD)
    ### START CODE HERE ### (≈ 2 lines of code)
    dw = (np.dot(X, (A - Y).T))/m
    db = np.sum(A - Y)/m
    ### END CODE HERE ###

    assert(dw.shape == w.shape)
    assert(db.dtype == float)
    cost = np.squeeze(cost)
    assert(cost.shape == ())

    grads = {"dw": dw,
             "db": db}

    return grads, cost

w, b, X, Y = np.array([[1.],[2.]]), 2., np.array([[1.,2.,-1.],[3.,4.,-3.2]]), np.array([[1,0,1]])
grads, cost = propagate(w, b, X, Y)
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
print ("cost = " + str(cost))
###Output
dw = [[ 0.99845601] [ 2.39507239]] db = 0.00145557813678 cost = 5.801545319394553
###Markdown
**Expected Output**: ** dw ** [[ 0.99845601] [ 2.39507239]] ** db ** 0.00145557813678 ** cost ** 5.801545319394553 4.4 - Optimization
- You have initialized your parameters.
- You are also able to compute a cost function and its gradient.
- Now, you want to update the parameters using gradient descent.

**Exercise:** Write down the optimization function. The goal is to learn $w$ and $b$ by minimizing the cost function $J$. For a parameter $\theta$, the update rule is $ \theta = \theta - \alpha \text{ } d\theta$, where $\alpha$ is the learning rate. ###Code
# GRADED FUNCTION: optimize

def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
    """
    This function optimizes w and b by running a gradient descent algorithm

    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of shape (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
    num_iterations -- number of iterations of the optimization loop
    learning_rate -- learning rate of the gradient descent update rule
    print_cost -- True to print the loss every 100 steps

    Returns:
    params -- dictionary containing the weights w and bias b
    grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
    costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.

    Tips:
    You basically need to write down two steps and iterate through them:
        1) Calculate the cost and the gradient for the current parameters. Use propagate().
        2) Update the parameters using gradient descent rule for w and b. 
""" costs = [] for i in range(num_iterations): # Cost and gradient calculation (≈ 1-4 lines of code) ### START CODE HERE ### grads, cost = propagate(w, b, X, Y) ### END CODE HERE ### # Retrieve derivatives from grads dw = grads["dw"] db = grads["db"] # update rule (≈ 2 lines of code) ### START CODE HERE ### w = w - learning_rate*(dw) b = b - learning_rate*(db) ### END CODE HERE ### # Record the costs if i % 100 == 0: costs.append(cost) # Print the cost every 100 training iterations if print_cost and i % 100 == 0: print ("Cost after iteration %i: %f" %(i, cost)) params = {"w": w, "b": b} grads = {"dw": dw, "db": db} return params, grads, costs params, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False) print ("w = " + str(params["w"])) print ("b = " + str(params["b"])) print ("dw = " + str(grads["dw"])) print ("db = " + str(grads["db"])) ###Output w = [[ 0.19033591] [ 0.12259159]] b = 1.92535983008 dw = [[ 0.67752042] [ 1.41625495]] db = 0.219194504541 ###Markdown **Expected Output**: **w** [[ 0.19033591] [ 0.12259159]] **b** 1.92535983008 **dw** [[ 0.67752042] [ 1.41625495]] **db** 0.219194504541 **Exercise:** The previous function will output the learned w and b. We are able to use w and b to predict the labels for a dataset X. Implement the `predict()` function. There are two steps to computing predictions:1. Calculate $\hat{Y} = A = \sigma(w^T X + b)$2. Convert the entries of a into 0 (if activation 0.5), stores the predictions in a vector `Y_prediction`. If you wish, you can use an `if`/`else` statement in a `for` loop (though there is also a way to vectorize this). ###Code # GRADED FUNCTION: predict def predict(w, b, X): ''' Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b) Arguments: w -- weights, a numpy array of size (num_px * num_px * 3, 1) b -- bias, a scalar X -- data of size (num_px * num_px * 3, number of examples) Returns: Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X ''' m = X.shape[1] Y_prediction = np.zeros((1,m)) w = w.reshape(X.shape[0], 1) # Compute vector "A" predicting the probabilities of a cat being present in the picture ### START CODE HERE ### (≈ 1 line of code) A = sigmoid(np.dot(w.T,X) + b) ### END CODE HERE ### for i in range(A.shape[1]): # Convert probabilities A[0,i] to actual predictions p[0,i] ### START CODE HERE ### (≈ 4 lines of code) if A[0,i] >= 0.5: Y_prediction[0,i] = 1 else: Y_prediction[0,i] = 0 ### END CODE HERE ### assert(Y_prediction.shape == (1, m)) return Y_prediction w = np.array([[0.1124579],[0.23106775]]) b = -0.3 X = np.array([[1.,-1.1,-3.2],[1.2,2.,0.1]]) print ("predictions = " + str(predict(w, b, X))) ###Output predictions = [[ 1. 1. 0.]] ###Markdown **Expected Output**: **predictions** [[ 1. 1. 0.]] **What to remember:**You've implemented several functions that:- Initialize (w,b)- Optimize the loss iteratively to learn parameters (w,b): - computing the cost and its gradient - updating the parameters using gradient descent- Use the learned (w,b) to predict the labels for a given set of examples 5 - Merge all functions into a model You will now see how the overall model is structured by putting together all the building blocks (functions implemented in the previous parts) together, in the right order.**Exercise:** Implement the model function. 
###Markdown
5 - Merge all functions into a model You will now see how the overall model is structured by putting all the building blocks (functions implemented in the previous parts) together, in the right order. **Exercise:** Implement the model function. Use the following notation:
- Y_prediction_test for your predictions on the test set
- Y_prediction_train for your predictions on the train set
- w, costs, grads for the outputs of optimize() ###Code
# GRADED FUNCTION: model

def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False):
    """
    Builds the logistic regression model by calling the function you've implemented previously

    Arguments:
    X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)
    Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)
    X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)
    Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)
    num_iterations -- hyperparameter representing the number of iterations to optimize the parameters
    learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()
    print_cost -- Set to true to print the cost every 100 iterations

    Returns:
    d -- dictionary containing information about the model.
    """

    ### START CODE HERE ###

    # initialize parameters with zeros (≈ 1 line of code)
    w, b = initialize_with_zeros(X_train.shape[0])

    # Gradient descent (≈ 1 line of code)
    parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)

    # Retrieve parameters w and b from dictionary "parameters"
    w = parameters["w"]
    b = parameters["b"]

    # Predict test/train set examples (≈ 2 lines of code)
    Y_prediction_test = predict(w, b, X_test)
    Y_prediction_train = predict(w, b, X_train)

    ### END CODE HERE ###

    # Print train/test Errors
    print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
    print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))

    d = {"costs": costs,
         "Y_prediction_test": Y_prediction_test,
         "Y_prediction_train" : Y_prediction_train,
         "w" : w,
         "b" : b,
         "learning_rate" : learning_rate,
         "num_iterations": num_iterations}

    return d
###Output
_____no_output_____
###Markdown
Run the following cell to train your model. ###Code
d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)
###Output
Cost after iteration 0: 0.693147
Cost after iteration 100: 0.584508
Cost after iteration 200: 0.466949
Cost after iteration 300: 0.376007
Cost after iteration 400: 0.331463
Cost after iteration 500: 0.303273
Cost after iteration 600: 0.279880
Cost after iteration 700: 0.260042
Cost after iteration 800: 0.242941
Cost after iteration 900: 0.228004
Cost after iteration 1000: 0.214820
Cost after iteration 1100: 0.203078
Cost after iteration 1200: 0.192544
Cost after iteration 1300: 0.183033
Cost after iteration 1400: 0.174399
Cost after iteration 1500: 0.166521
Cost after iteration 1600: 0.159305
Cost after iteration 1700: 0.152667
Cost after iteration 1800: 0.146542
Cost after iteration 1900: 0.140872
train accuracy: 99.04306220095694 %
test accuracy: 70.0 %
###Markdown
**Expected Output**: **Cost after iteration 0 ** 0.693147 $\vdots$ $\vdots$ **Train Accuracy** 99.04306220095694 % **Test Accuracy** 70.0 %

**Comment**: Training accuracy is close to 100%. This is a good sanity check: your model is working and has high enough capacity to fit the training data. Test accuracy is 70%. It is actually not bad for this simple model, given the small dataset we used and that logistic regression is a linear classifier. 
But no worries, you'll build an even better classifier next week! Also, you see that the model is clearly overfitting the training data. Later in this specialization you will learn how to reduce overfitting, for example by using regularization. Using the code below (and changing the `index` variable) you can look at predictions on pictures of the test set. ###Code
# Example of a picture from the test set and the model's prediction
# (try other values of `index` to find misclassified examples).
index = 2
plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3)))
print ("y = " + str(test_set_y[0,index]) + ", you predicted that it is a \"" + classes[int(d["Y_prediction_test"][0,index])].decode("utf-8") + "\" picture.")
###Output
y = 1, you predicted that it is a "cat" picture.
###Markdown
Let's also plot the cost function and the gradients. ###Code
# Plot learning curve (with costs)
costs = np.squeeze(d['costs'])
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(d["learning_rate"]))
plt.show()
###Output
_____no_output_____
###Markdown
**Interpretation**: You can see the cost decreasing. It shows that the parameters are being learned. However, you see that you could train the model even more on the training set. Try to increase the number of iterations in the cell above and rerun the cells. You might see that the training set accuracy goes up, but the test set accuracy goes down. This is called overfitting.

6 - Further analysis (optional/ungraded exercise) Congratulations on building your first image classification model. Let's analyze it further, and examine possible choices for the learning rate $\alpha$. Choice of learning rate **Reminder**: In order for Gradient Descent to work you must choose the learning rate wisely. The learning rate $\alpha$ determines how rapidly we update the parameters. If the learning rate is too large we may "overshoot" the optimal value. Similarly, if it is too small we will need too many iterations to converge to the best values. That's why it is crucial to use a well-tuned learning rate. Let's compare the learning curve of our model with several choices of learning rates. Run the cell below. This should take about 1 minute. Feel free also to try different values than the three we have initialized the `learning_rates` variable to contain, and see what happens. ###Code
learning_rates = [0.01, 0.001, 0.0001]
models = {}
for i in learning_rates:
    print ("learning rate is: " + str(i))
    models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False)
    print ('\n' + "-------------------------------------------------------" + '\n')

for i in learning_rates:
    plt.plot(np.squeeze(models[str(i)]["costs"]), label= str(models[str(i)]["learning_rate"]))

plt.ylabel('cost')
plt.xlabel('iterations (hundreds)')

legend = plt.legend(loc='upper center', shadow=True)
frame = legend.get_frame()
frame.set_facecolor('0.90')
plt.show()
###Output
learning rate is: 0.01
train accuracy: 99.52153110047847 %
test accuracy: 68.0 %

-------------------------------------------------------

learning rate is: 0.001
train accuracy: 88.99521531100478 %
test accuracy: 64.0 %

-------------------------------------------------------

learning rate is: 0.0001
train accuracy: 68.42105263157895 %
test accuracy: 36.0 %

-------------------------------------------------------

###Markdown
**Interpretation**:
- Different learning rates give different costs and thus different prediction results.
- If the learning rate is too large (0.01), the cost may oscillate up and down. 
It may even diverge (though in this example, using 0.01 still eventually ends up at a good value for the cost).
- A lower cost doesn't mean a better model. You have to check if there is possibly overfitting. It happens when the training accuracy is a lot higher than the test accuracy.
- In deep learning, we usually recommend that you:
  - Choose the learning rate that better minimizes the cost function.
  - If your model overfits, use other techniques to reduce overfitting. (We'll talk about this in later videos.)

7 - Test with your own image (optional/ungraded exercise) Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Change your image's name in the following code 4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)! ###Code
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "grey-and-white-short-fur-cat-104827.jpg"   # change this to the name of your image file
## END CODE HERE ##

# We preprocess the image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
image = image/255.
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((1, num_px*num_px*3)).T
my_predicted_image = predict(d["w"], d["b"], my_image)

plt.imshow(image)
print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") +  "\" picture.")
###Output
y = 1.0, your algorithm predicts a "cat" picture.
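###Markdown
A hedged aside for newer environments: `scipy.ndimage.imread` and `scipy.misc.imresize` were removed in recent SciPy releases, so the cell above only runs on older SciPy versions. A minimal sketch of the same preprocessing with Pillow, assuming the `fname`, `num_px`, `d`, and `predict` names from the cells above:
###Code
# Sketch: load and resize with Pillow instead of the removed scipy helpers
from PIL import Image
import numpy as np

img = Image.open(fname).convert('RGB')                # force 3 channels
img = img.resize((num_px, num_px))                    # resize to num_px x num_px
image = np.array(img) / 255.                          # scale to [0, 1], matching the training preprocessing
my_image = image.reshape((1, num_px * num_px * 3)).T  # flatten to a (num_px*num_px*3, 1) column
my_predicted_image = predict(d["w"], d["b"], my_image)
print("y = " + str(np.squeeze(my_predicted_image)))
###Output
_____no_output_____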
titanic/XGB-titanic.ipynb
###Markdown **EXPLORATORY DATA ANALYSIS FOR THE TITANIC DATASET** **INITIALIZATION** ###Code
# load required packages
import os
import numpy as np
import pandas as pd
import pylab as pl
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('bmh')
color = sns.color_palette()
sns.set_style('darkgrid')
from scipy import stats
from scipy.stats import norm, skew

# ignore warnings from sklearn and seaborn
import warnings
def ignore_warn(*args, **kwargs):
    pass
warnings.warn = ignore_warn

# pandas output format
pd.set_option('display.float_format', lambda x: '{:.3f}'.format(x))

# check files available
from subprocess import check_output
print(check_output(['ls', os.getcwd()]).decode('utf8'))
###Output
gender_submission.csv
model.R
outcomes.csv
submission_SVC_1.csv
submission_SVC_2.csv
test.csv
test_engineered.csv
titanic-eda.ipynb
train.csv
train_engineered.csv
yhat.csv

###Markdown
**EXPLORATION** ###Code
# load train and test data
train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')
print(train.shape, test.shape)

# drop identifier column
train_id = train['PassengerId']
test_id = test['PassengerId']
del train['PassengerId']
del test['PassengerId']

train.info()

# distribution plot of outcomes by sex and age
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(10,4))
women = train[train['Sex'] == 'female']
men = train[train['Sex'] == 'male']
ax = sns.distplot(women[women['Survived'] == 1].Age.dropna(), bins=18, label = 'survived',\
                  ax = axes[0], kde=False)
ax = sns.distplot(women[women['Survived'] == 0].Age.dropna(), bins=40, label = 'not survived',\
                  ax = axes[0], kde=False)
ax.legend()
ax.set_title('Female')

ax = sns.distplot(men[men['Survived'] == 1].Age.dropna(), bins=18, label = 'survived',\
                  ax = axes[1], kde=False)
ax = sns.distplot(men[men['Survived'] == 0].Age.dropna(), bins=40, label = 'not survived',\
                  ax = axes[1], kde=False)
ax.legend()
ax.set_title('Male')

# outcomes by embarked, pclass and sex
FacetGrid = sns.FacetGrid(train, row='Embarked', size=4.5, aspect=1.6)
FacetGrid.map(sns.pointplot, 'Pclass', 'Survived', 'Sex', palette=None, order=None, hue_order=None)
FacetGrid.add_legend()

# outcome by pclass
sns.barplot(x='Pclass', y='Survived', data = train)

# outcome by class and age
grid = sns.FacetGrid(train, col = 'Survived', row = 'Pclass', size=2.2, aspect=1.6)
grid.map(plt.hist, 'Age', alpha=0.5, bins=20)
grid.add_legend();
###Output
_____no_output_____
###Markdown
**FEATURE ENGINEERING** ###Code
# data manipulation
n_train = train.shape[0]
n_test = test.shape[0]
y = train['Survived'].values
df = pd.concat((train, test)).reset_index(drop=True)
del df['Survived']
print(n_train)

# deal with missing data
df_nan = df.isnull().sum() / len(df) * 100
df_nan = df_nan.drop(df_nan[df_nan == 0].index).sort_values(ascending=False)
print(df_nan[:10])
# f, ax = plt.subplots(figsize=(10,10))
# plt.xticks(rotation='90')
# sns.barplot(x=df_nan.index[:10], y=df_nan[:10])
# plt.xlabel('Features', fontsize=12)
# plt.ylabel('% missing', fontsize=12)
# plt.title('% missing by feature', fontsize=12)

from itertools import groupby
def split_text(s):
    # split a string into alternating alphabetic / non-alphabetic runs, e.g. 'C85' -> ['C', '85']
    for k, g in groupby(s, str.isalpha):
        yield ''.join(g)

# helper functions I
# add 'deck' and 'room' based on the cabin number provided
#deck =['Boat', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'Orlop', 'Tank Top'] # 10 decks
cabin = df['Cabin'].tolist()
d = {i:list(split_text(str(e))) for i, e in enumerate(cabin)}
deck = []
room = []
for i, (k,v) in enumerate(d.items()):
    if v[0] != 'nan':
        deck.append(v[0])
        if len(v) > 1:
            if v[1].isnumeric():
                room.append(int(v[1]))
            else:
                room.append(np.nan)
        else:
            room.append(np.nan)
    else:
        deck.append(None)
        room.append(np.nan)
#print(deck)
#print(room)

# some tickets have prefixes, not sure about meaning
ticketvar = []
for i in df['Ticket']:
    tmp = i.split(' ')
    if len(set(tmp)) == 2:
        ticketvar.append(tmp[0].split('/')[0].replace('.',''))
    else:
        ticketvar.append('None')

# adding features
df2 = df.copy()
df2['FamilySize'] = df2['SibSp'] + df2['Parch'] + 1
df2['Deck'] = pd.Series(deck)
df2['Room'] = pd.Series(room)
df2['TicketVar'] = pd.Series(ticketvar)
df2['FamilyName'] = [i.split(',')[0] for i in df2['Name']]
#print(df2.head())

# replace missing values for deck
mask = df2['Pclass'] == 3
df2.loc[mask, 'Deck'] = df2.loc[mask, 'Deck'].fillna('F') # most F, a few on G deck
mask = df2['Pclass'] == 2
df2.loc[mask, 'Deck'] = df2.loc[mask, 'Deck'].fillna(df2.loc[mask, 'Deck'].mode()[0]) # most D to F deck
mask = df2['Pclass'] == 1
df2.loc[mask, 'Deck'] = df2.loc[mask, 'Deck'].fillna(df2.loc[mask, 'Deck'].mode()[0]) # most D to F deck

# replace missing values for age
mask = ((df2['FamilySize'] == 1) | (df2['Parch'] == 0)) # most likely adults
df2.loc[mask, 'Age'] = df2.loc[mask, 'Age'].fillna(df2.loc[mask, 'Age'].mean())
df2.loc[~mask, 'Age'] = df2.loc[~mask, 'Age'].fillna(df2.loc[~mask, 'Age'].median())

# bin age feature
bins = [0, 2, 5, 10, 18, 35, 65, np.inf]
names = ['<2', '2-5', '5-10', '10-18', '18-35', '35-65', '65+']
df2['AgeRange'] = pd.cut(df2['Age'], bins, labels=names)

df2['Fare'].describe()

# clean up fare feature
df2['Fare'].fillna(df2['Fare'].median(), inplace=True)
df2['Fare'] = df2['Fare'].astype(int)
bins = [0, 5, 10, 15, 30, 50, np.inf]
names = ['<5', '5-10', '10-15', '15-30', '30-50', '50+']
df2['FareRange'] = pd.cut(df2['Fare'], bins, labels=names)

# clean up familyname feature
df2['FamilyName'] = df2['FamilyName'] + '_' + df2['FamilySize'].astype(str)

# drop 'cabin' and replace remaining missing values
df2.drop(['Cabin', 'Name', 'Ticket'], axis=1, inplace=True)
df2['Embarked'].fillna(df2['Embarked'].mode()[0], inplace=True)
df2['Deck'].fillna('None', inplace=True)
df2['Room'].fillna(0, inplace=True)
#print(df2[df2['Age'].isnull()])

# check if remaining missing values
df2_nan = df2.isnull().sum() / len(df2) * 100
#print(df2_nan)

# categorical features to label encode
ls = ['Sex', 'Embarked', 'Deck', 'AgeRange', 'FareRange', 'TicketVar', 'FamilyName']

# label encoding for categorical variables
from sklearn.preprocessing import LabelEncoder
for f in ls:
    print(f)
    lbl = LabelEncoder()
    lbl.fit(list(df2[f].values))
    df2[f] = lbl.transform(list(df2[f].values))
print(df2.shape)

# create dummies for categorical variables
df3 = df2.copy() # keep original df
df3 = pd.get_dummies(df3)
print(df3.shape)
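# Added sanity check (a hedged aside): the MODEL section below assumes df3 is fully
# numeric with no missing values left, so verify that before splitting and saving.
assert df3.isnull().sum().sum() == 0, 'missing values remain after feature engineering'
print(df3.dtypes.value_counts())   # should list only numeric dtypes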
# split between train and test after feature engineering
train = df3[:n_train]; train['PassengerId'] = train_id.values; train = train.set_index('PassengerId')
test = df3[n_train:]; test['PassengerId'] = test_id.values; test = test.set_index('PassengerId')
outcomes = pd.DataFrame({'Survived': y})
outcomes['PassengerId'] = train_id.values; outcomes = outcomes.set_index('PassengerId')

train.to_csv('train_engineered.csv')
test.to_csv('test_engineered.csv')
outcomes.to_csv('outcomes.csv')
###Output
_____no_output_____
###Markdown
**MODEL** ###Code
train = pd.read_csv('train_engineered.csv')
test = pd.read_csv('test_engineered.csv')
outcomes = pd.read_csv('outcomes.csv')
y = np.asarray(outcomes['Survived'].values)

train_id = train['PassengerId']; test_id = test['PassengerId']
del train['PassengerId']
del test['PassengerId']

X = np.asarray(train)
X_forecast = np.asarray(test)
print(X.shape, y.shape, X_forecast.shape)

# split the dataset in train and validation sets
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=4)
print ('Train set:', X_train.shape, y_train.shape)
print ('Test set:', X_test.shape, y_test.shape)

# set the parameters by cross-validation
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV
from xgboost.sklearn import XGBClassifier
#y_scaler = StandardScaler()
#y_train = y_scaler.fit_transform(y_train.reshape(-1, 1))

pipe = Pipeline(steps=[('scaler', StandardScaler()), ('classifier', XGBClassifier())])
score = 'roc_auc'
param_grid = dict(classifier__n_estimators = [500, 1000, 5000],
                  classifier__learning_rate = [0.01, 0.02, 0.03, 0.04, 0.05, 0.6],
                  classifier__max_depth = [3, 4, 5, 6, 7, 8, 9, 10])

search = GridSearchCV(pipe, param_grid, scoring=score)
search.fit(X_train, y_train)
print(search.best_params_)

yhat = search.predict(X_test)
print(sum(yhat == y_test) / len(yhat) * 100)

prediction = search.predict(X_forecast)
pd.DataFrame(prediction).to_csv('yhat.csv')

# performance evaluation
from sklearn.svm import SVC

def f_importances(coef, names, top=-1):
    imp = coef
    imp, names = zip(*sorted(list(zip(imp, names))))

    # show features
    if top == -1:
        top = len(names)
    plt.barh(range(top), imp[::-1][0:top], align='center')
    plt.yticks(range(top), names[::-1][0:top])
    plt.show()

labels = train.columns.tolist()
svm = SVC(kernel='linear')
svm.fit(X_train, y_train)
f_importances(abs(svm.coef_[0]), labels, top=10)
###Output
_____no_output_____
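###Markdown
The cell above ranks features with a separate linear SVC rather than the tuned model. As a hedged alternative (a sketch, assuming the fitted `search` pipeline, `labels`, and the `f_importances` helper from above; `best_estimator_` is available because `GridSearchCV` refits on the full training split by default), the gradient-boosted model's own importances can be pulled out of the pipeline:
###Code
# Sketch: feature importances from the best XGB model found by the grid search
best_xgb = search.best_estimator_.named_steps['classifier']   # the fitted XGBClassifier
f_importances(best_xgb.feature_importances_, labels, top=10)  # reuse the plotting helper
###Output
_____no_output_____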
src/testing_framework/jupyter notebooks/Hacky FP calc.ipynb
###Markdown Imports ###Code
import pandas as pd
import json
from pyteomics import fasta
import random
import os
import sys

module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
    sys.path.append(module_path)

module_path = os.path.abspath(os.path.join('../..'))
if module_path not in sys.path:
    sys.path.append(module_path)

from src import runner
###Output
_____no_output_____
###Markdown
Hacky False Positive Calculation To keep things simple, we compute the false positive (FP) estimate in a slightly unusual way. We have a dataset with 280 proteins we know to be the actual source. We will run hypedsearch with both this set and a set of 280 decoy proteins that are NOT in this set. Then we will compare results.
1. Load in the truth set
2. Create another set of proteins that are exclusive of the truth set
3. Run hypedsearch on both sets
4. Count how many spectra score better against the non-truth set than against the truth set

Constants ###Code
specPath = '/Users/zacharymcgrath/Desktop/nod2 data/filteredSpec/'
truth_set = '/Users/zacharymcgrath/Desktop/raw inputs/NOD2_E3/filtered_mouse_database.fasta'
not_truth_set = '/Users/zacharymcgrath/Desktop/nod2 data/not_truth_subset.fasta'
whole_db = '/Users/zacharymcgrath/Desktop/raw inputs/mouse_database.fasta'
outputDir = '/Users/zacharymcgrath/Desktop/Experiment output/FP/'

minPep = 3
maxPep = 30
tolerance = 20
relative_abundance_filter = 0.0
precursor_tolerance = 3
peak_filter = 25
verbose = True

truth_run_params = {
    'spectra_folder': specPath,
    'database_file': truth_set,
    'output_dir': outputDir + 'truth/',
    'min_peptide_len': minPep,
    'max_peptide_len': maxPep,
    'tolerance': tolerance,
    'precursor_tolerance': precursor_tolerance,
    'peak_filter': peak_filter,
    'relative_abundance_filter': relative_abundance_filter,
    'digest': 'trypsin',
    'verbose': verbose,
    'DEBUG': True,
    'truth_set': '',
    'cores': 16,
    'n': 5
}

non_truth_run_params = {
    'spectra_folder': specPath,
    'database_file': not_truth_set,
    'output_dir': outputDir + 'not_truth/',
    'min_peptide_len': minPep,
    'max_peptide_len': maxPep,
    'tolerance': tolerance,
    'precursor_tolerance': precursor_tolerance,
    'peak_filter': peak_filter,
    'relative_abundance_filter': relative_abundance_filter,
    'digest': 'trypsin',
    'verbose': verbose,
    'DEBUG': False,
    'truth_set': '',
    'cores': 16,
    'n': 5
}
###Output
_____no_output_____
###Markdown
1. Load in the truth set ###Code
get_name = lambda x: x.split('|')[-1].split(' ')[0]
ts = {get_name(entry.description): None for entry in fasta.read(truth_set)}
###Output
_____no_output_____
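###Markdown
A quick check (added as a hedged aside) that the FASTA parsed as expected — `get_name` assumes descriptions follow the `db|ACCESSION|NAME ...` convention, so eyeball the count and a couple of parsed names:
###Code
# Sketch: confirm the truth set loaded and the names parsed sensibly
print(f'loaded {len(ts)} truth proteins')
print(list(ts.keys())[:3])   # spot-check a few parsed names
###Output
_____no_output_____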
###Markdown 2. Create the other set of 280 proteins ###Code
num_keep = 280

all_prots = {get_name(entry.description): entry for entry in fasta.read(whole_db)}
all_prot_keys = list(all_prots.keys())

saving = {}

# pick num_keep proteins that are not in the truth set
for i in range(num_keep):
    print(f'\rchoosing prot {i+1}/{num_keep}', end='')

    while True:

        # pick a protein at random
        selected = all_prot_keys[random.randint(0, len(all_prot_keys) - 1)]

        # skip it if it's already chosen or in the truth set
        if selected in ts or selected in saving:
            continue

        # reverse the sequence and prefix the description with "REVERSE_"
        entry = all_prots[selected]
        rev_seq = entry.sequence[::-1]
        split_desc = entry.description.split('|')
        split_desc[-1] = 'REVERSE_' + split_desc[-1]
        new_desc = '|'.join(split_desc)

        saving[selected] = entry._replace(description=new_desc, sequence=rev_seq)
        break

# write to file
fasta.write([v for _, v in saving.items()], open(not_truth_set, 'w'))
###Output
choosing prot 280/280
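###Markdown
Before running anything expensive, a hedged sanity check that the decoy file just written really is disjoint from the truth set (stripping the `REVERSE_` prefix added above so the original accession names are compared):
###Code
# Sketch: assert no overlap between the decoy set and the truth set
decoy_names = {get_name(entry.description).replace('REVERSE_', '') for entry in fasta.read(not_truth_set)}
assert len(decoy_names & set(ts.keys())) == 0, 'decoy set overlaps the truth set!'
print(f'{len(decoy_names)} decoy proteins, no overlap with the truth set')
###Output
_____no_output_____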
###Markdown 3. Run hypedsearch on both sets ###Code
%%time
runner.run(truth_run_params)
%%time
runner.run(non_truth_run_params)
###Output
Loading database...
Done
Loading spectra... 
File /Users/zacharymcgrath/Desktop/nod2 data/filteredSpec/summary.json is not of supported types (mzML, mzXML) Done On batch 1/1 On protein 271/271 [100%] Sorting the set of protein masses... Done Initializing other processors... Done. Creating an alignment for 1085/1086 [100%] Finished search. Writting results to /Users/zacharymcgrath/Desktop/Experiment output/FP/not_truth/... Could not make an alignment for 83/1086 spectra (7%) CPU times: user 5min 48s, sys: 27.5 s, total: 6min 15s Wall time: 17min 1s ###Markdown 4. Compare and count ###Code truth_results_json = outputDir + 'truth/summary.json' non_truth_results_json = outputDir + 'not_truth/summary.json' truth_results = json.load(open(truth_results_json, 'r')) non_truth_results = json.load(open(non_truth_results_json, 'r')) # save them in a dictionary by their id so that we can compare ided_truth = {v['spectrum']['id'].replace('.pkl', ''): v for _, v in truth_results.items()} ided_non_truth = {v['spectrum']['id'].replace('.pkl', ''): v for _, v in non_truth_results.items()} truth_greater = {} non_truth_greater = {} for k, v in ided_truth.items(): if k not in ided_non_truth: truth_greater[k] = v else: nt_v = ided_non_truth[k] # see if we have anything in our alignments if len(v['alignments']) == 0 and len(nt_v['alignments']) == 0: continue elif len(v['alignments']) == 0 and len(nt_v['alignments']) > 0: non_truth_greater[k] = v continue elif len(v['alignments']) >0 and len(nt_v['alignments']) == 0: truth_greater[k] = v continue # see which score was greater best_truth = sorted(v['alignments'], key=lambda x: x['total_score'], reverse=True)[0]['total_score'] best_non_truth = sorted(nt_v['alignments'], key=lambda x: x['total_score'], reverse=True)[0]['total_score'] # get all sequences with this score best_truth_seqs = [x['sequence'] for x in v['alignments'] if x['total_score'] >= best_truth] best_non_truth_seqs = [x['sequence'] for x in nt_v['alignments'] if x['total_score'] >= best_non_truth] entry = (v, nt_v) if best_truth > best_non_truth: truth_greater[k] = entry elif any([x == y for x in best_truth_seqs for y in best_non_truth_seqs]): truth_greater[k] = entry else: non_truth_greater[k] = entry for k, v in ided_non_truth.items(): if k not in ided_truth and k not in truth_greater and k not in non_truth_greater: non_truth_greater[k] = v print(len(truth_greater)) print(len(non_truth_greater)) for k, v in non_truth_greater.items(): if isinstance(v, dict): continue t_seqs = [x['sequence'] for x in v[0]['alignments']] nt_seqs = [x['sequence'] for x in v[1]['alignments']] print(f'Truth: {t_seqs} \t non truth: {nt_seqs}') ###Output Truth: ['DLQTLALE'] non truth: ['TLNELALE', 'TINELALE', 'EVQITAIE', 'ENLTLALE', 'GLADTLALE', 'QLDTIAIE', 'QLDTIALE', 'TVEQLALE', 'LDAGTLALE', 'EVTQIALE', 'EVTQIAIE', 'EVQTIAIE', 'EVQTLALE', 'EVQTIALE', 'TIAEGVALE', 'TLADVAAIE', 'TLAGLDALE', 'TIADGLALE', 'TIADGIALE', 'TIADGIAIE', 'GLADTLAIE', 'TIAGDLALE', 'TLAGDLALE', 'TVIQEALE', 'TVIQEAIE', 'NEITIALE', 'NEITIAIE', 'IENTLAIE', 'IENTLALE', 'QDLTLAIE', 'QDLTLALE', 'LGEGTLALE', 'LGEGTLAIE', 'TDQLIAIE', 'TDQLIALE', 'EVQLTAIE', 'EVQLTALE', 'EGISALALE', 'EGISALAIE', 'ETVQLALE', 'ETVQLAIE', 'ENLTLAIE', 'LDAGTLAIE', 'TIADAVALE', 'TIAPTSALE', 'TIAADVALE', 'TLAPSTAIE', 'TLAVADAIE', 'TLAGLDAIE', 'TLAAVDAIE'] Truth: ['DEILRVV', 'SKPTLLAN', 'DKKPEVK'] non truth: ['EDLLRVV', 'EDLIRVV', 'EDILRVV', 'EDLLVRV', 'EDLRLVV', 'EDIRLVV', 'EDILVRV', 'EDLIVRV', 'SSGAVKPVV', 'EDLVLRV', 'EDLVRIV', 'EDLLVVR', 'EDLIVVR', 'EDILVVR', 'SGKEPKVV', 'SQAVLQVV', 'EDRLLVV', 'AAANVLSVV', 'LKVGGDGVV', 
'SSLLAPKQ', 'LIVDREV'] Truth: ['DLQTLAL', 'DLQTLAL', 'QTLDLAL', 'TQLDLAL', 'TQIDLAL', 'DIAGLTAL', 'VEQTLAL', 'LDQTLAL', 'VEQLTAL', 'LDQLTAL', 'QTLEVAL', 'TQIEVAL', 'TQLEVAL'] non truth: ['LDAGTIAL', 'LDAGTLAI', 'IDAGTLAI', 'LDAGTLAL', 'IDAGTLAL', 'IDAGTIAL', 'LGEGTLAI', 'LGEGTLAL', 'EVQTIAL', 'EVQTLAI', 'EVQTLAL', 'DLQTIAL', 'LDQTIAL', 'DLQTLAL', 'LDQTLAI', 'LDQTLAL', 'DLQTLAI', 'QTIDLAI', 'TQIDLAL', 'TQLDIAL', 'QTLDLAI', 'TQLDLAL', 'TQLDLAI', 'QTIDIAL', 'QTIDLAL', 'TQIDIAL', 'QTLDIAL', 'QTLDLAL', 'TQIDLAI', 'VEQITAL'] Truth: ['ELQKQKEVE', 'ELRLKDEVE', 'ELQKQKEDL', 'ELQKQKEDL', 'ELLRDKEVE', 'ELRLKDEDL', 'ELLRDKEDL'] non truth: ['EVKERLEVE', 'EVKERIEVE', 'EVKERLDLE', 'EVKERELVE', 'EVKEREIVE', 'EVKERELDL'] Truth: ['LATALTSPLL', 'LATALTSPIL', 'LATALTSPII', 'LATALTSPLI', 'DLQTLALII', 'DLQTLALIL', 'ALTITTGPLL', 'ALTITTLGPI', 'ALTITTPGIL', 'ALTITTPGLL', 'ALTITTGPII', 'ALTITTGPIL', 'ITALAPSTLL', 'ALTITTLAVP', 'ALTITTLAPV', 'ALTITTIAVP', 'TLAITAPSLL', 'TLAITAPSII', 'TLAITAPSIL', 'TLAITASPLL', 'TLAITASPIL', 'ITALATSPLL', 'TLALATSPLL', 'TLAITASPLI', 'LLQESALIL', 'LLQESALII', 'ALTITTLPGL', 'ALTITTIPGL', 'ALTITTLGPL', 'ALTITTIGPL', 'ALTITTIPGI', 'ALTITTLPGI', 'SIQVVDILL', 'SIQVVDILI', 'ITALAPSTII', 'ITALAPSTLI', 'ITALAPSTIL', 'LASGVVDLII', 'LASGVVDLIL', 'ALTLATSPLL', 'LATLATSPLL', 'IIDDKGILL', 'IIDDKGILI', 'LATALTPSIL', 'LATALTPSLL', 'LATALTPSII', 'VVEKADILL', 'VVEKADILI'] non truth: ['TIALATSPIL', 'LATITAPSLL', 'LATITASPLL', 'LDLDGKLLL', 'LDLDGKLLL', 'LATLNELIL', 'LATLNELII', 'TIALATPSLI', 'TIALATPSII', 'TIALATPSIL', 'TIALATPSLL', 'TLALATSPIL', 'TIALATSPLL', 'TIALATSPLI', 'TIALATSPII', 'LATITAPSII', 'LATITAPSIL', 'LATITAPSLI', 'LATITASPIL', 'LATITASPII', 'LATITASPLI', 'IIEQSAILL', 'IIEQSAILI', 'DLDGKLLLI', 'ALTVEQLIL', 'ALTVEQLII', 'IATLTAPSLL', 'AITIATSPIL', 'LATLATSPIL', 'LATIATSPIL', 'IATLTASPLL', 'SLSLVAPSIL', 'SLSLVAPSLL', 'SLSLVAPSII', 'SLSLVAPSLI', 'SLSLVASPLL', 'SLSLVASPLI', 'SLSLVASPII', 'SLSLVASPIL', 'TLAILGVAEL'] Truth: ['TVEQIEDL', 'TINEIEDL', 'TLNEIEDL', 'ELTNIEDL', 'EITNIEDL', 'ELGGTIEDL', 'LDQTIEDL', 'DLQTIEDL', 'EELLSQDL', 'EELLSQDI', 'EELLSQVE', 'EIQSELDL', 'ELGGTELDL', 'LDEVKDDI', 'LDQTELDL', 'TDIAGLEDL', 'LDEVKDVE', 'EKEALEDL', 'ELNLTEDL', 'EIQSLEDL', 'EIQSIEDL', 'TTPGTIEDL', 'TLQEDIDL', 'DLQTLEDL', 'EKVDVEDL', 'DLQTELDL', 'TLNELEDL', 'TINEIEDI', 'TINEIEVE', 'ELTNIEDI', 'ELTNIEVE', 'EVQITEDL', 'TLQEVEDL', 'TVEQELDL', 'ELTNELDL', 'EITNELDL', 'TTPGTELDL', 'TDIAGLEDI', 'TDIAGIEDL', 'EEKAIEDL', 'EEKALEDL', 'EEAKLEDL', 'EEAKIEDL', 'EKEAIEDL', 'EKEALEDI', 'TDIAGLEVE', 'EKEALEVE', 'TLQELDVE', 'EKVDVEDI', 'EIQTLDVE'] non truth: ['TVEQLEDL', 'ELDVKDVE', 'LDAGTLEDL', 'TLQEVEDL', 'IENTLEDL', 'ELKAEEDL', 'LSGEALEDL', 'TATPSLEDL', 'TEVQLEDL', 'EAEKLEDL', 'LESQIEDL', 'ENLTLEDL', 'LDAGTLEDI', 'LDAGTLEVE', 'ETQIVEDL', 'LDVGSVEDL', 'TVEQLEDI', 'TLNELEDL', 'TINELEDL', 'TVEQLEVE', 'EVTQIEDL', 'ELTIQDDL', 'ELTIQDDI', 'EKEAELDL', 'EKEAELVE', 'LSGEALEDI', 'TATPSLEDI', 'TEVQLEDI', 'EAEKLEDI', 'LSGEALEVE', 'TATPSLEVE', 'TEVQLEVE', 'LESQIEDI', 'EAEKLEVE', 'LESQIEVE', 'TLQELDVE', 'ETQIVEDI', 'LDVGSVEDI', 'EIDAASIDL', 'EIDAASIDI', 'EIDAASLDL', 'EIDAASIVE', 'EIDAASLVE', 'ELKAEEDI', 'ELKAEEVE', 'ELDVKDEV', 'LDVGSLDVE', 'EKEAELDI', 'EEVLTQDL', 'EEVLTQVE'] Truth: ['PGGEEVLREQADALASAAGHLDDLPGALSA', 'TQAGVEELWEPKPTQAFVKQHLCGPH', 'NPPYDKGALLEDNLPQFRIDADSGAIT', 'AAAPEAAGVFALEDNLPQFRIDADSGAIT', 'NPELFNVKNEELEAMVKEAPGPINFT', 'TQAGVEELDPEVDKASLGPIEDFKELT', 'TGAAGRNSGAVRAEEEDKKEDVGTVVGID', 'GKDGNAGDLQTLALEVAQQKRGIVDQCC', 'GADPVAGPEAGAYLSAAARPGLGQYLCNQL', 'NPEAGVATTDLGTRNIYYLCAPNRHLA', 'NPPYAGDLQTLALEVAQQKRGIVDQCC', 
'AAGTGVSGNVKNEELEAMVKEAPGPINFT', 'GPAVDHPLPPAQPTSEREGILQEESIY', 'NPPYDKVKNEELEAMVKEAPGPINFT', 'PGDAVTEKSNLSDLIDLVPSLCEDLLSS', 'TQAGVEEVKNEELEAMVKEAPGPINFT', 'EKFPPEETLKSYWEDNKNSLISYL', 'EKFPPEETLKSYWEDNKNSLISYL', 'LVEEEDTRPAPWKDVRDLPDGPDAPI', 'KFPPEETLKSYWEDNKNSLISYLE'] non truth: ['GPGPNKGTTNRSPHPKTFIQPSSEGGGQP', 'GDKGNINNQYVYALKSGVPSWCLYQI', 'GPGPYINNQYVYALKSGVPSWCLYQI', 'PGGPYINNQYVYALKSGVPSWCLYQI', 'SAAQNLNNQYVYALKSGVPSWCLYQI', 'QASAGGLNNQYVYALKSGVPSWCLYQI', 'QTANGLNNQYVYALKSGVPSWCLYQI', 'PGNVDIGEVIGDSKEPSKNLQADGLFKC', 'GDKGDIGPPGTIGKPGTIGMEAELKAEYE', 'GPGEGLTPELRNKDAMVGQEAVFVMER', 'GPGGRRGGGFGGRDGAKIANVSAPGCQFVGQ', 'PGAVDVSPDLRNKDAMVGQEAVFVMER', 'PGAVDLGCGSRRGVTPSWIRWPNYTQG', 'GPGESCRAVRRGVTPSWIRWPNYTQG', 'GPGPPQNRAFFDAFGGSRRARATTGGPGQ', 'GPGGRRGGGFGGREWYKLYMWMLKGN', 'GPGPNKGTTNRSPYLIGNKISNSIDNES'] Truth: ['SLVSNYLQTQEGEAKSPGEAK', 'SLVSNYLQTQEREGILQEE', 'SYGDLGGPIITTDAATAVEQEK', 'SEAAKVNTDPALAIYESVDDK', 'SSLEKSYELPEREGILQEE', 'SSEPTQGSYKVVREGILQEE', 'SSEPTQGSYKVVGEAKSPGEAK', 'SEVKTDVNKIEEVYDPKNE', 'STRIIYGGSVTGATCGAPGNKPE', 'SSEPTQGSYKVVIRTEPTGAE', 'SSEPTQGSYKVVIRTEPQTE', 'SSEPTQGSYKVVIRTPEGATE', 'SSEPTQGSYKVVIRTEPAGTE', 'STRIIYGGSVTGAEELDPENK', 'STRIIYGGSVTGAEEDLPENK', 'SSEPTQGSYKVVIRTEPDDK', 'SSEERAAPFEALSRTGRSRE', 'SSEELQEPLYRLNTKAASAE', 'SSEERAAPFVESKHKSDFGK', 'QQYGVRGYPTIKFASYQTE', 'QQYGVRGYPTIKFSAYGATE', 'SSEELLVEAGGAFGKREKAEE', 'SSEELLVAEGGAFGKREKAEE', 'QQELPSLSVGPSLHVATDQTE', 'SYKALLDSQSIPTDRPSTQE', 'SSQVVLPAPTGIIHNESESAAE', 'SSQVVLPAPTGIIHQSEDDDK', 'LSERDAIYKEFSFKNFNQ'] non truth: ['STSPKNECKPAWAYIEVANK', 'SYMEVVQLPESLLDLNKEE', 'STMKVSGSAREEAQAARAEKE', 'SYMEVVQLPESPTVNNVMAK', 'SYMEVVQLPESLVSSPLTQE', 'SSTPKELDDPAPTPPAAEERK'] Truth: ['ACRNLFGPVNHEGHRAVPAASYGRIYAGGG', 'LPLDTDKGVHYISVSAATSEEQIKEYFG', 'EVRKALSRQEMQEVQSSRSGRGGNFGFG', 'TNPESKVFYLKMKGDYFVSASSKWLDG', 'TKQWYLFNTGQADGTPGIEAIVKNYPEG', 'TKQWYLFNTGQADGTPGLEAIVKNYPEG', 'PKPGPRSGRFCCEPAGGLTSLTEPPKGPGFG'] non truth: ['SGSTLPGRRQPSGQCFKMLCRSTWTKGAG', 'RIWNWFYGSTISFKTGISGREMKEFG', 'EIDKWTERTLHCGDLYFEKLHLFFG', 'FGTVSQSIELCPLPCLPDFTVGAETIRFG', 'GFTSNLSTSNAVVAYFKMLCRSTWTKGAG', 'SILSNGPSTEPRLGPEGKSGAPGMKGPSGQFG', 'ESTPLDLPIESGQSCMLSVGLLALHPWFG', 'ADELAKQGAEKAMRSSFYNRKQFFAFG', 'SGSTLPGRRQPSGQPEGKSGAPGMKGPSGQFG', 'SGSTLPGRRQPSGQNSYTINMKQLIGCFG', 'YSMPKLESLLYNLRFTDASYQLPSGFG', 'EIDKWTERTLHNSYTINMKQLIGCFG'] Truth: ['DRETQRSRGFGFITFTNPEHAS'] non truth: ['PTDEPYAVVARMIRLDGEKYDD'] Truth: ['DSEAVSVRKLAG', 'SDEAVSVRKLAG', 'SDEAAATRLKGI', 'SSNNVVKVEKQ', 'SSVIATNAKVNQ', 'SSILASNAKVNQ', 'SSLISANAKVNQ', 'SSLALSNAKVNQ', 'SEATVRSLQAAV', 'GDSEAVSVRKLA'] non truth: ['SSNNVDKVKIQ', 'SDQKTLAERLA', 'SDQLKSAIKNQ', 'SDQGKLQKISQ', 'SSKKQKVPSDQ', 'SIVAENKKQSQ', 'SSLVQRVISDQ', 'SSIQVRVISDQ', 'SSVQLRVISDQ', 'SSLGLGRVISDQ', 'SSNLLRVISDQ', 'SSINIIRVSDQ', 'SSINIRVISDQ', 'SSNILVRLDSQ', 'SSLNLVRLDSQ', 'SSNILRVISDQ', 'SSLNLRVISDQ', 'SSTPARKVGELS', 'SSTPAKSAIKNQ'] Truth: ['DNKEFVRF', 'NKEFVRFD', 'NYDVQRFI'] non truth: ['SSQPSFVRF', 'SAAEQFVRF', 'SSQPSFVFR', 'SSQPSFRVF', 'SAAEQFVFR', 'SAAEQFRVF', 'TTGATAMVRF', 'TQQEFVRF', 'SSQPSFFVR', 'SAAEQFFVR', 'TQQEFVFR', 'SSRQIMSVF', 'SSRCGVLSVF', 'TTQLASMRF', 'TQQEFRVF', 'SSGVVTCFVR', 'TQQEFFVR', 'SAATVTCFVR', 'SAATVTCVRF', 'TQTSLCVRF', 'TQTTAMVRF', 'SSGVAFSPRF', 'TQTSAFPRF', 'SSRGIMKEF', 'TQTTCVVFR', 'TQTLASMRF', 'TQTLTGMRF', 'SSGVCVTVRF', 'SSGVVTCVRF', 'SAATVTCRVF', 'TTQSLCVRF', 'TTQLSCVRF', 'TQTSLCRVF', 'SAATVTAMFR', 'TQTTLGMFR', 'TTGATCVVFR', 'TTGATAMVFR', 'TTQLSCFVR', 'SSRLAGMSVF', 'SSRALGMSVF', 'SSRIQMSVF', 'SSRCVGLVSF', 'SSRAGIMSVF', 'SSRVMAASVF', 'SSRCVAVVSF', 'SSGVLASMRF', 'SSGVLTGMRF', 'SSRAGFPSVF', 'SSRVWSSVF', 
'SSVGRICSVF'] Truth: ['DALASAAGHL'] non truth: ['GEIASAQLH', 'GEIASAQHL', 'GEIASAQIH', 'ADLASAQHL', 'ADLASAQLH', 'ADLASAQIH', 'DALASAQLH', 'DALASAQHL', 'DALASAQIH', 'GELASAQHL', 'EGIASAQLH', 'GELASAQLH', 'EGLASAQLH', 'EGIASAQHL', 'EGLASAQHL', 'ADIASAQHL', 'ADIASAQLH', 'GELANATHI', 'EGIANATHI', 'GEIASALHQ', 'EGLASALHQ', 'EGIASALHQ', 'GELASALHQ', 'ADLASALHQ', 'ADIASALHQ', 'DALASALHQ', 'EGLANATHI', 'GEIANATHI', 'ADLANATHI', 'ADIANATHI', 'DALANATHI', 'DAALSAQHL', 'DAALSAQLH', 'DAAIANTLH', 'GEIAAPDRP', 'EGLAAPDRP', 'EGIAAPDRP', 'GELAAPDRP', 'DALAAPDRP', 'ADIAAPDRP', 'ADLAAPDRP', 'GEIAGHTVAA', 'EGIAGHTVAA', 'EGLAGHTVAA', 'GELAGHTVAA', 'ADIAGHTVAA', 'ADLAGHTVAA', 'DALAGHTVAA', 'DAALSALHQ', 'VAEASAQLH'] Truth: ['DFTRSVFSV', 'DFTPAVHASL', 'DFTPAVHASL', 'DFTEYRVK', 'DFTIYDRK', 'DFTQKYQK', 'DFTKAFSNK', 'DFTASNFKK', 'DFTPPPQQK', 'SFEIYDRK', 'SFEASNFKK', 'SFEEYRVK', 'DFTTYVGVR', 'DFTHEAKPL', 'SFERSVFSV', 'SFEPAVHASL', 'SFEPPPQQK', 'SFEKAFSNK', 'SFEQKYQK', 'FSERSVFSV', 'FSEPAVHASL', 'SFEHEAKPL', 'SFETYVGVR', 'FSEPPPQQK', 'FSEIYDRK', 'FSEKAFSNK', 'FSEASNFKK', 'FSEQKYQK', 'FSEHEAKPL', 'FSETYVGVR', 'QGIQLPGELC', 'QGIQLPGECL', 'SVLKEHELC', 'SVLKEHECL', 'SKEVIHELC', 'SKEVIHECL', 'HLDSLKELC', 'LQNLAPGELC', 'HLDSLKECL', 'LQNLAPGECL', 'SFRESVFSV', 'FSTRYLGDV'] non truth: ['FDTRYIDK', 'DFTRYIDK', 'FDTLPAEHK', 'DFTLPAEHK', 'SFERYIDK', 'DFTDRYLK', 'FDTDRYLK', 'SFELPAEHK', 'DFTVRTYVG', 'FDTVRTYVG', 'DFTIPHASLG', 'FDTIPHASLG', 'SEFRYIDK', 'SFEVRTYVG', 'SFEDRYLK', 'SFEIPHASLG', 'SFEVGYLRS', 'SEFLPAEHK', 'SEFDRYLK', 'SEFIPHASLG', 'SEFVRTYVG', 'LDKHLSEIC', 'LDKHLSELC', 'LDKHLSECL', 'SFREYIDK'] Truth: ['DVIVSTFHKYSGKEG', 'VDLVSTFHKYSGKEG', 'DVLVSTFHKYSGKEG', 'VDIVSTFHKYSGKEG', 'DVIVSTFHKYSGKEG', 'VDLVEDEVTLSAYIT', 'DVLVEDEVTLSAYIT', 'VDIVEDEVTLSAYIT', 'DVIVEDEVTLSAYIT', 'VDIVMNHPLAQTESL', 'DVVIENNLDIHWVT', 'DVVIENNLDLHWVT', 'DVVIENNLDHIWSL', 'DVLVSLSTFQQMWI', 'DVIVSLSTFQQMWI', 'VDIVSLSTFQQMWI', 'VDLVSLSTFQQMWI', 'VDLNNESFPYRLLS', 'VDLNNHYRHIWSL', 'DVVISTFHKYSGKEG', 'DVVIEDEVTLSAYIT', 'DVVISLSTFQQMWI', 'VDRNALVFQTFEVE', 'KSPNELVDDLAVHMV', 'KSPNELVDDLAFRY', 'KSPNELVDDLARFY', 'KCPQVLVFQTFEVE', 'AVFTHLVFQTFEVE'] non truth: ['VEVVDQPSTRAKNHS', 'VDIVSFLDSPVNPQH', 'VDLVSFLDSPVNPQH', 'DVIVSFLDSPVNPQH', 'DVLVSFLDSPVNPQH', 'IVDVDNPTLGSAYFR', 'VDIVSDFVKLESRGC', 'DVIVSDFVKLESRGC', 'VDLVSDFVKLESRGC', 'DVLVSDFVKLESRGC', 'VDIVEANRPGERPSQ', 'DVIVEANRPGERPSQ', 'VDLVEANRPGERPSQ', 'DVLVEANRPGERPSQ', 'VDLVSVCPPKFFTNT', 'DVLVSVCPPKFFTNT', 'DVIVSVCPPKFFTNT', 'VDIVSVCPPKFFTNT', 'IVDVDNPTLGSAHCLL', 'IVDVDNPTLGSALCLH', 'IVDVDNPTLGSAHVMV', 'DVLVGTVPIGYFQQM', 'VDLVGTVPIGYFQQM', 'VDIVGTVPIGYFQQM', 'DVIVGTVPIGYFQQM', 'DVLNGGAAKVADFLYN', 'VDINNAAKVADFLYN', 'VEVVSFLDSPVNPQH', 'VDLEFLAMVMAEVSI', 'DVLEFLAMVMAEVSI', 'IVDVDNPTLGSWHLT', 'IVDVDNPTLGSRSFF', 'VEVVSDFVKLESRGC', 'VEVVEANRPGERPSQ', 'VEVVSVCPPKFFTNT', 'VDQSKERYFSTPPI', 'DVIEFLAMVMAEVSI', 'VDIEFLAMVMAEVSI', 'KQGTVYNTGFGSTPPI', 'DLVPYITPDPAKNHS', 'KVGAPCNPSTRAKNHS', 'DLVPYITPDPAYFR', 'DLVPYITPDPAHCLL', 'DLVPYITPDPALCLH', 'DLVPYITPDPAHVMV', 'KKFYCPSTRAKNHS', 'DVLNGVTLHPSESWL', 'VEVVGTVPIGYFQQM', 'DLVPYITPDPRSFF', 'DLVPYITPDPWHLT'] Truth: ['ELLTRELPSFLGKRT', 'TLPPLSPYLGFLRKSA', 'HPILLTHYTIAALLSP', 'TIAHRLNTVLDPLLPS', 'TTQVTIPKDLRNKFV', 'PVKEPRSLSTHVLLPS'] non truth: ['TLRFTAKSQAGVALISP', 'VPKEQAAVASRPPLLPS', 'TLFLVRSVPALNYAPV', 'TLRFTAKSQAINIIPS', 'TLLTEPTVYLISLLSP', 'TLRFTAKSQALNLTVP', 'TLRFTAKSQAGAVIISP', 'TLRFTAKSQAGVALLSP', 'TLRFTAKSQAGVAILPS', 'TLRFTAKSQAGVALTVP'] Truth: ['GPDVQGTIHF', 'DGPVQGTIHF', 'GPDVGKDVFH', 'SSPPQGTIHF', 'SPPSQGTIHF', 'SSPIPTQGHF'] non truth: ['SSPPQAFHLS', 'SSPPQAFHVT', 
'SSPPQAHFVT', 'GPDVGAAFHLS', 'GPDVGAAFHVT', 'SSPPQIASFH', 'GPDVGAWLQQ', 'GPDVGAIPKNC', 'GPDVGDHFVK', 'GPDVGHDKVF', 'SSPPGAIPKNC', 'SSIFFVSFH', 'SSIFFVSHF', 'SSETIHLFH', 'SSIFFSVFH', 'SSIFFVFSH'] Truth: ['SSIEQKTMNFKPEK', 'SSSARLGSFRASSPEK', 'PPMNPATTLPSLMPLS', 'SSSARLGSFLPCLSLE'] non truth: ['SSSGVAFSLGAKQMPSL', 'SSSGVAFSLGAKQPMLS', 'SSFWITIQAVSPMSI', 'SSLQSTTSLSKSSPEK'] Truth: ['SSRFPGLGVREPVFMRLRVGRQN', 'SSFRPGLGVREPVFMRLRVGRQN', 'SLVYDLLPVKDLTGALRQAKQESN', 'SLVSGMGLGVREPVFMRLRVGRQN', 'SLNGPYLGVREPVFMRLRVGRQN', 'SSFNPVLGVREPVFMRLRVGRQN', 'SLKFIPGKPVLETRTMDILLGDSQ', 'SSQVVLPAPTGIIHQGVQTILLGDSQ', 'SLRSGVWWPQTKFGVLAILLGDSQ', 'ALGIPMYSIITPNVLRLESEETIV'] non truth: ['SSELARTEGDITISLKAPYLLRPQ', 'SSNQVKTEGDITISLKAPYLLRPQ', 'SSELARTPDGLIGRKGPHGKVGQLGQ', 'SSSRLDVPDGLIGRKGPHGKVGQLGQ', 'SSKQLSNPDGLIGRKGPHGKVGQLGQ', 'SSLGLGMVPDGLIGRKGPHGKVGQLGQ', 'SSRVLKEVYPVPIMGVRGDKNRAS', 'SSQNVKTEGDITISLKAPYLLRPQ', 'SSRVTVSALAVSLSVTAIRGEPKESN', 'SSRVTVSALAVSLSVPKGARLETDSAG', 'SSTWILGPDGLIGRKGPHGKVGQLGQ', 'SSGQGVKTEGDITISLKAPYLLRPQ', 'SSGGAGVKTEGDITISLKAPYLLRPQ', 'SSGGGLGKTEGDITISLKAPYLLRPQ', 'SSNAGVKTEGDITISLKAPYLLRPQ', 'SSGAVNKTEGDITISLKAPYLLRPQ', 'SSGLGNKTEGDITISLKAPYLLRPQ', 'SSNKQVTEGDITISLKAPYLLRPQ', 'SSSRLDVEGDITISLKAPYLLRPQ', 'SSKQLSNEGDITISLKAPYLLRPQ', 'SSPMLKTEGDITISLKAPYLLRPQ', 'SSLPMKTEGDITISLKAPYLLRPQ', 'SSLGLGMVEGDITISLKAPYLLRPQ', 'SLSLWTGKKIWGCLQLALIREEN', 'SSPVNIFILVTGLSTGDGRVVQLTGQ', 'SSLGLGMVILVTGLSTGDGRVVQLTGQ', 'SSELARTILVTGLSTGDGRVVQLTGQ', 'SSSRLDVILVTGLSTGDGRVVQLTGQ', 'SSKQLSNILVTGLSTGDGRVVQLTGQ', 'SSLGLGMVKKIWGCLQLALIREEN', 'SSIWLTGKKIWGCLQLALIREEN', 'SSTWILGKKIWGCLQLALIREEN', 'SSKQTRGKKIWGCLQLALIREEN', 'SSQVKGTGKKIWGCLQLALIREEN', 'SSELARTKKIWGCLQLALIREEN', 'SSSRLDVKKIWGCLQLALIREEN', 'SSKQLSNKKIWGCLQLALIREEN'] Truth: ['DDIPFGITSNSGVFSKYQL', 'DDIPFGITSNSGVFSKYQL', 'DDLNEIFSKKEMPPTNPI', 'DDLITRFFRLCTVSECGL', 'DDLITRFFRLCTESVCGI', 'IDDIPFGITSNSGVFSKYQ', 'LDDLDPESATLPAGFRQKD', 'DIPFGITSNSGVFSKYQLD', 'KLQQLTNTIMDPHAMPYS'] non truth: ['DDLLNYLLHQGQGTSAETI', 'DDLKFFTNTAAGFEKGVDI', 'DDIKFFTNTAAGFEKGVDI', 'DDLYIVQQKWAPGMSSHL', 'DDLLNYLQKWAPGMSSHL', 'DDIEEKSLFPCLSQITHL', 'DDLEEKSLFPCLSQITHL', 'DDLLNYILHQGQGTSAETI', 'DIDKFFTNTAAGFEKGVDI', 'DIDEEKSLFPCLSQITHL'] Truth: ['DDLGKGGNEESTKTGNAGSRLA', 'TCVVERPPSNQTSATSKSTSP', 'TTSDAVIGGDEDIVTISQATPS', 'SSSSLEKSYELTSATSKSTSP'] non truth: ['STSARGQTHGVEFVNCRNTL', 'SIKSSLCKLSHDCSSHRTTS', 'SSTPAASVQHTRFSNCRNTL', 'SSPVADPCGATPKFVNCRNTL'] Truth: ['DSTANEVEAVKVHSFPTLKFFPASA', 'SDKKGICVAETSSKIIGQFGVGFYSA', 'SDTLITTVAETSSKIIGQFGVGFYSA', 'KSEKTAWAETSSKIIGQFGVGFYSA', 'DSPLRYVAETSSKIIGQFGVGFYSA', 'SDKKSREAETSSKIIGQFGVGFYSA', 'DSTALVCAEVATGDDKKRIIDSARSA', 'SDTTIKLDPQNHVLYSNRSAAYAK', 'SDTTLKLDPQNHVLYSNRSAAYAK', 'SSETIKLDPQNHVLYSNRSAAYAK', 'SSEITKLDPQNHVLYSNRSAAYAK', 'SSELTKLDPQNHVLYSNRSAAYAK', 'SSEILDVSLSTSSKIIGQFGVGFYSA', 'SDTGSAGLMLVEGDDKKRIIDSARSA', 'SDTGSAGLMLVENKVIRHYHGHLSA', 'SSEILDVSLSFMILWLRGENGTGPA', 'SSEGNEKGNKVLGICGNVGSGKSSLISA', 'APAPEFTAAENRVLILCGGYSVVDVT', 'SSEELQEQAPPPELLEIINEDIAK'] non truth: ['SSMKLRQTGLLQQSCILEEELQSA', 'SSGINNEAFIKQLEGAPFRIQNTSA', 'SSLGEIEDVTDASSKVVKRKEQESA', 'DSTLKRNLNTFQLVHFLQSSADSA', 'DSTLKRNLNTFQLSYLATASYQSA', 'DSTLKRNLNTFQLSYDQSYLKSA', 'SSLGLPQNYSGQLTFLQLQQPNTSA', 'SSLGLPQNYSGQLTFSDLHKQTLSA', 'SSLGLPQNYSGQLTFSDLSVAPPRSA', 'SSLGLPQNYSGQLTFSDLVSAPRPSA', 'SSEQSYRLLPGLNLTQLLGSACSPSA', 'SSMKLRQTGLLQQSCILEPSSDLSA', 'SSMKLRQTGLLQQSCILELAAEDSA', 'SSMKLRQTGLLQQSCILEPLCWSA', 'SSMKLRQTGLLQQSCIARPWSGSSA', 'SSMKLRQTGLLQQSCIEERRNGSA'] Truth: ['SCPIIVHCSDGAGRSGATSSSRSELEGR', 'DFPEFLTMMARKMKDTDSEEEIR', 'KWTEGGGHSREGVDDAPVFEQAQYR', 
'EQVPAAEGA', 'EQVPAAEQ', 'PGAAAEVEQ', 'AGPAAEVEQ', 'APGAAEVEAG', 'QPAAEVEQ', 'PQAAEVEQ', 'PQAAIDEGA', 'PGAAALDEQ', 'AGPAALDEQ', 'APGAALDEQ', 'PGAAAIDEGA', 'QPAALDEQ', 'APGAAIDEGA', 'AGPAAIDEGA', 'QPAAIDEGA', 'PQAALDEQ', 'AGPAAVEEQ', 'AGPAAVEEAG', 'AGPAAVEEGA', 'EQAAPVEAG', 'EQAAPVEQ', 'PQAVEAEGA', 'PQAVEAEQ', 'PQAVEAEAG', 'EQLGPAEQ', 'EQLGPAEAG', 'EQLGPAEGA', 'EQPAVAEQ', 'EQPAVAEGA', 'EQVPAAQE', 'PQAADIEGA', 'PGAAALDEAG', 'PGAAAIDEAG', 'PQAALDEAG', 'PQAAIDEAG', 'PGAAADLEQ', 'PGAAAIDEQ', 'QPAALDEAG', 'QPAAIDEAG', 'AGPAADLEQ', 'AGPAAIDEQ', 'APGAADLEQ', 'APGAAIDEQ', 'PGAAADIEGA', 'APGAALDEAG', 'APGAAIDEAG', 'QPAADLEQ', 'QPAAIDEQ'] Truth: ['DVKPWDDET', 'DVKPWDDET', 'TEPPKGMDET', 'TEWVGAPDET', 'TEVWQPDET', 'TEPTQPMAET', 'TEPTQPAMET', 'TTEPPGCGLET', 'TEWVGAETDP', 'DVKPWDDTE', 'TEWVGAPDTE', 'TEVWQPDTE', 'TEWVGADPTE', 'TEVWQDPTE', 'TEKMPGPDTE', 'TEWVGAEPDT', 'TEVWQEPDT', 'TEWVGAPEDT', 'TEVWQPEDT', 'DVKPWDEDT', 'TTSWAPPDTE', 'TTWTPGPDTE', 'TTQCPLPDTE', 'TEYLHTEPD', 'TELYHTEPD', 'TEWVGAETPD', 'TEVWQETPD', 'TEWVGATEPD', 'TEVWQTEPD', 'TEVWQETDP', 'TTSWAPTEPD', 'TTQCLPTEPD', 'VKPWDDETD'] non truth: ['TEPCLPGSAET', 'TEPCLPTNET', 'TTPEPLCNET', 'TEPCLPNTET', 'TEPCLPSQET', 'TEPCLGSAPET', 'TEPCLSGAPET', 'TEPCLSQPET', 'TEHLYPDET', 'TEHIYPDET', 'TELWNPDET', 'TETPSGWPET', 'TTPEPLNCET', 'TTEPNLCPET', 'TTEPNCPLET', 'TTPWSAPDET'] Truth: ['EAQMAAKL', 'KKEMAAPS', 'KKEMQPT', 'TTGCPALKA', 'TTGCPAKAL', 'KAKETMPG', 'KKEMAASP', 'KKMEAASP', 'KKEMATGP', 'KKEMATPG', 'KKEMTAPG', 'KKAETMPG'] non truth: ['TAPSCAAKL', 'TASPCAAKL', 'TAPCSAAKL', 'TAPCSAAKI', 'TAPCSAAIK', 'TAPCSAALK', 'TTPGCAAKL', 'TTGPCAAKL', 'TAPCSAKAL', 'TAPCSAIAK', 'TAPCSAKIA', 'TAPCSAIKA', 'TAPCSALAK', 'TAPCSAKAI', 'TAPCSALKA', 'TAPCSAKLA', 'TTCPGAKIA', 'TTPCGAKIA', 'TTCPGALAK', 'TTPCGALAK', 'TDSARGRV', 'KAGMLGDIG', 'KAAKMESP', 'KAAKEMPS', 'KAKAMESP', 'KAKAEMPS', 'KKAAMESP', 'KKAAEMPS', 'KKEMAAPS', 'KKMEAAPS', 'KKMEAASP', 'KKMEATGP', 'KKEMATPG', 'KKMEATPG', 'KKMETAPG', 'GKAGMLGDI'] Truth: ['IDLSDVELDPAEVLQD', 'LSDVELDDLPAEVLQD', 'DIRAMSPQTLVPENEA', 'FEHFLPQTLVPENEA', 'DIRAMSPLDNNDVLTP', 'DIRAMSPLDATEQQVP', 'LSDVELDDLPIPGSVDS', 'LSDVELDDLPSDVPEK', 'IDLSDVELDPSDVPEK', 'LSDVELDDLPEQIGLD', 'IDLSDVELDPEQIGLD', 'LSDVELDDLPETTPLN', 'LSDVELDDLPPTETNL', 'IDLSDVELDPPTETNL', 'IDLSDVELDPETTPLN', 'LSDVELDDLPGQVLEE', 'IDLSDVELDPGQVLEE', 'AAFNNRPLDDIPEADL', 'AAFNNRPDIPDRTFH', 'TLLDVNDLDDIPEADL', 'TLEEGLQLDDIPEADL', 'NLDELSVLDDIPEADL', 'NLDELSVDIPDRTFH', 'RNQWSVDIPDRTFH', 'RNQWSVLDDIPEADL', 'FEHFLPEQACHLAKT', 'IDLSDVELDLVPENEA', 'TINVEDVDIPDRTFH', 'TINVEDVLDDIPEADL', 'LTLEDQADIPDRTFH', 'LTLEDQALDDIPEADL', 'TLEEGLQDIPDRTFH', 'NLDELSVEQACHLAKT', 'REGQLTSMPLVPENEA', 'AAFNNRPEQACHLAKT', 'RNQWSVEQACHLAKT', 'TLLDVNDEQACHLAKT', 'TINVEDVEQACHLAKT', 'LTLEDQAEQACHLAKT', 'TLEEGLQEQACHLAKT'] non truth: ['NLLNQGMQTPAVAGLDE', 'NLLNQGMQTPNLEDAL', 'NLLNQGMQTPTEPQSI', 'NLLNQGMQTPETISQP', 'NLLNQGMQTPQIVEAD', 'NLLNQGMQTPTEVPQT', 'NLLNQGMQTPSHFRQ', 'NLLNQGMQTAPEDLLN', 'RFPSDAPTQGPPHHNI', 'PVLGPGSFACAPAMAEPR', 'RSMDALPTQGPPHHNI', 'HIYAVSTPMPAVAGLDE', 'LSWQAVSPMPAVAGLDE', 'NLLNQGMQTGPPHHNI', 'NLLNQGMTQGPPHHNI', 'RFPSDAPEVQPLEQAS', 'RFPSDAPDLEPQSALQ', 'RFPSDAPIDQQPTAEV', 'TLQNWQTQGPPHHNI', 'DDNIVKSIDPRCVPEA', 'DDNIVKSIDPQCQKPA', 'CLNPCKLVEPRCVPEA', 'CLNPCKLVEPQCQKPA', 'DDNIVKSIDPWDRPT', 'DDNIVKSIDPSPWER', 'DDNIVKSIDPHNLSFG', 'DDNIVKSIDPVQQYH', 'CLNPCKLVEPWDRPT', 'CLNPCKLVEPHNLSFG', 'CLNPCKLVEPSPWER', 'CLNPCKLVEPVQQYH', 'PVLGPGSFACAPAEDLLN', 'PVLGPGSFACAPAVAGLDE', 'RSMDALPEVQPLEQAS', 'RSMDALPDLEPQSALQ', 'RSMDALPIDQQPTAEV', 'PQMGFLPNGGIYRASY', 'PQMGFLPDLEPQSALQ', 'PQMGFLPEVQPLEQAS', 
'PQMGFLPIDQQPTAEV', 'SDLKTEPNGGIYRASY', 'HIYAVSTPMPRCVPEA', 'HIYAVSTPMPQCQKPA', 'LSWQAVSPMPRCVPEA', 'LSWQAVSPMPQCQKPA', 'HPPREGPNGGIYRASY', 'HIYAVSTPMPWDRPT', 'HIYAVSTPMPHNLSFG', 'HIYAVSTPMPSPWER', 'HIYAVSTPMPVQQYH'] Truth: ['TEETPPPIFEN', 'TEETPPPVMTTA', 'PNQTENVTKEN', 'PNQTKTLNDEN', 'TYEEIKFTNE', 'PNQTTAKEAAEN', 'PNQTTAITQGEN', 'PNQTTATQVAEN', 'TTHTKTLNDEN', 'TYVAHTFNHSP', 'TYEEIKTFEN', 'LELEPMEIGEN', 'IEIEPMEIGEN', 'IELEPMEIGEN', 'ELLEPMEIGEN', 'ELLEEVCVPEN', 'ELLEEVCPVEN', 'ELIEPMEIGEN', 'TTHSSPATKTDQ', 'NPPGGKTSTATQD', 'TYEEIKSMSSV', 'TTEEQRLPGEN', 'TEETQRLPGEN', 'TEETPPPLFEN', 'TTEEYKLFEN', 'TEETYKLFEN', 'LELERQNEEN', 'ELLEANDAREN', 'LELERAGEGGEN', 'TTHSDALKGTNE', 'TEETPRHFWA', 'TEETHPRWFA', 'TEETPPPVSLTC'] non truth: ['IEIEMTPPTNE', 'TTHSKQEVETN', 'TTHTDNKLTNE', 'THTTTQGALTNE', 'TYEELFKTQD', 'TYEELFKTDQ', 'TYEELFKTNE', 'THTTQISKDEN', 'LELEVEPVCEN', 'EILEEVVCPEN', 'ELLEQRQDEN', 'ELLEQRGEGEN', 'IEIERQGEGEN', 'TTHSKLDQTNE', 'TTHSKTSAPETN', 'TYEELFKETN', 'TTEERPHWAF', 'EILEEMGPIEN', 'EIIELMGPEEN', 'ELLELMGPEEN', 'EIIEPFGPEEN'] Truth: ['KWFSLEREVALAKTLRLTCKNRVQNMAL', 'KWFSLEREVALAKTLRRKQEAQELFKN', 'RNLYRQLLQRQPHGERFGSLQALPTRAI', 'KWFSLEREVALAKTLRRYGSPGDLQTLAL', 'KKPMSLASGSVPAAPHKRPELEKVAHQLQAL', 'KKYVANTVFHSVLAGLRISKQEAQELFKN', 'EKKLEKVKAYREKIEKELETVCNDVLAL', 'PRKKSPNELVDDLFKGAKEHGAVAVERVTK'] non truth: ['KKFQIGSTTLLEISDQRPDKTLLGQLFNK', 'KKTGKSSSSSKRISAQDPAILRPSGVMQFVK', 'KKKPLADFPMDTSVTTEIIKPGIKNSIYLA', 'KKELPRQEQRQHQLSVLYNLTNEALVLA', 'KKTGKSSSSSKRISAQDPAIPKLLSDSFINK', 'KKKLTEISSDLSSFPQRPDKTLLGQLFNK', 'FISKKKGRMSKNKSKKDGPSAGVASPGVVSSVG', 'EAVMGKKARLVRGLRQAEQRRSGGHSSIQI'] Truth: ['DPRLRQFLQKSLAAATGKQ', 'KKDLSLEEIQKLGAMAIIGA', 'KKDLSLEEIQKLQNSVKQ', 'KKQHRKTTTVQSSFPVKQ', 'KKKKEDALEDVIRDSLLQ', 'KKQHRKTTTTLCVLGKGTQ'] non truth: ['KPHFLLSQLYLAAAAAVALM', 'KGKIAVSEKPLSMLAVESLQ', 'KAAQTVKTIDQEIIGVSAKQ', 'KAAQTVKTIDQAVIEVAKSQ', 'KKTISKPGTGARLDELSKAQ', 'KKTISKPGTGEPALVTSTKGQ', 'KGDVLASLTLARLDELSKAQ', 'KKDVTRFIQFIYRAKSQ', 'KGKLQSKDIARLDELSKAQ', 'KGKLQSKDIEPALVTSTKGQ', 'KKLEELMALQEIIGVSAKQ', 'KGGRLRPELSYVAVMPVKQ', 'KKAVSLIEEARLDELSKAQ', 'KKLEELMALQAVIEVAKSQ'] Truth: ['APSDPRLRQFLQKSLAAATGKQELAKYFLA', 'KKEEIPEFSKIQTTAIHGNAPVGTELLLALA', 'EKKLEKVKAYREKIEKELETVCNDVLAL', 'PRKKSPNELVDDLFKGAKEHGAVAVERVTK'] non truth: ['KKAVVPQNCIAEIALVAPGSLEKGGRQDRSIA', 'KKGAESNHLLSRLIANGSLANAAGQLTWTLLA', 'KKVAVEEGHFRIALVAPGSLEKGGRQDRSIA', 'KRQFTGDLANETKILELLSSQEYPLLKLA', 'LWLRSIGRRMQPGQQRVWKGGAAMAALPLA', 'KKMWSEVKNETKILELLSSQEYPLLKLA', 'FISKKKGRMSKNKSKKDGPSAGVASPGVVSSVG', 'EAVMGKKARLVRGLRQAEQRRSGGHSSIQI'] Truth: ['SDPRLRQFLQKSLAAATGKQELAKYFLA', 'KDRKNPLPPSVGEAGITEKVVFEQTKVIA', 'KKSPETLKCRLPEGNKNDALLLAWNKAL', 'LVKKLAFAFGDRNCSSITLQNITRGSIVV', 'PKKRKTIHQDVHINTINLFLCVAFLCV'] non truth: ['KMYAAVGKAALPVLNLISGFAKRQFTGDLA', 'KRQGPSSKRAKTDKIWSNFKILNTYIA', 'KQFVQQQKGAKRRGFRDVTVGLFVCALA', 'DTPIEKKANLLKKSKPKGQSLESEQKEP'] Truth: ['KRLIYATSSLDSKLSAPEAGPPRRGPLPRGGA', 'KDRKNPLPPSVGVADKKEEPPRRGPLPRGGA', 'APKKEEVKSPVKEEVKAKEPPKKVEEEKT'] non truth: ['KKHLEAILAETGEADRPGRLRHAPALRELA', 'ARRRERQERQERRKQKLDKHLSTKGE', 'EAVMGKKARLVRGLRQAEQRRSGGHSSIQI', 'RKQSAKKKKITTSGIHHGVGEGMQEKAAGKK'] Truth: ['APSDPRLRQFLQKSLAAATGKQELAKYFLAELLSEPNQTEN', 'TELNEPLSNEERNLLSVAYKNVVGARRSSWRVISSIEQKT', 'VAKTYSIGRSFEGKDLVVIEFSSRPGQHELMEPEVKLIGNI', 'SRVTEVVVKLFDSDPITVVLPEEVSKDNPKFMDTVAEKALQ'] non truth: ['TLSDKIERNPFPIFLGKNSEMLDVGDKTLGTLLIGELNGAEK', 'KRGSVNALANLIDGLIRPPEQVMHQILLDLCRALYENGSQQ'] Truth: ['SSILASFNVNGPLAYVSQQA', 'SLLSDPTYRGPLAYVSQQA', 'SLLSDPTYRFLEALQNQA', 'SSFLVYLVEKDYYLQAGA', 'QQLFTESKAFLEALQNQA', 'SSFLVYLVEKDANPAREGA', 'SSFLVYLVEKDAKEYFGA', 
'SLISGVLSYLNCVEAEAAVGA', 'SSMAYPNLVAMASQRQAKI', 'MLKNQLDQEVEFLSTSIA', 'FLVYLVEKDANKEFSTY'] non truth: ['SSLLFSPGPFERWCLAKGA', 'SSLISFASATSAVPSRDKNGA', 'SSLFRDAGAKVEAVADSKSGA', 'SSMSLLQNIGDLELLSFAQ', 'SSMSLLQMKLVDVTESVAQ', 'SSMSLLQGSFLPSITQLEQ', 'QQLFTSCPKQFVQQQKGA', 'SLFSAQACPKQFVQQQKGA'] Truth: ['DQLLHYRLDITNPPRTN', 'DQIIGRIDDQLIAGKEPAN', 'QDILPNLRAVGWNEVEGR', 'AGDLLPNLRAVGWNEVEGR', 'QDLLPNLRAVGWNEVEGR', 'GADLLPNLRAVGWNEVEGR', 'GADILPNLRAVGWNEVEGR', 'QQAVLQMEQRKQQLSPPG', 'QQAVLQMEQRKQQLPPGS', 'QQAVLQMEQRKQQLPGPS', 'QQAVLQMEQRKQQLGSPP', 'QDLIPNLRAVGWNEVEGR', 'ENILKVLTWQAGRNFYN', 'GADINQLLQKEPDLRLEN', 'AGDLNQLLQKEPDLRLEN', 'GADLNQLLQKEPDLRLEN', 'QDIVSIILNKDHSTALNAN', 'GADIVSIILNKDHSTALNAN', 'ENLLPNLRAVGWNEVEGR', 'NELLPNLRAVGWNEVEGR', 'ENILPNLRAVGWNEVEGR', 'ENLIPNLRAVGWNEVEGR', 'NELIPNLRAVGWNEVEGR', 'ENIIPNLRAVGWNEVEGR', 'DQLLPNLRAVGWNEVEGR', 'DQLLPNLRAVGWNEVEGR', 'DQILPNLRAVGWNEVEGR', 'QDINQLLQKEPDLRLEN', 'QDLNQLLQKEPDLRLEN', 'GADLVSIILNKDHSTALNAN', 'AGDLVSIILNKDHSTALNAN', 'AGDIVSIILNKDHSTALNAN', 'NELNQLLQKEPDLRLEN', 'ENLNQLLQKEPDLRLEN', 'ENINQLLQKEPDLRLEN', 'DQINQLLQKEPDLRLEN', 'DQLNQLLQKEPDLRLEN', 'ENLDETIALQLHLEKSVN', 'QVELPNLRAVGWNEVEGR', 'QLDLPNLRAVGWNEVEGR', 'QEVLPNLRAVGWNEVEGR', 'ENIVSIILNKDHSTALNAN', 'DQIVSIILNKDHSTALNAN', 'QDLVSIILNKDHSTALNAN', 'LENLDETIALQHLEKSVN', 'QLDIPNLRAVGWNEVEGR', 'QEVIPNLRAVGWNEVEGR', 'QVEIPNLRAVGWNEVEGR', 'NEIVSIILNKDHSTALNAN', 'NELVSIILNKDHSTALNAN'] non truth: ['QDLLMQALDVIHLKSPMN', 'DQLILCGGLDVIHLKSPMN', 'QDLLFVFVVSPDSYPLLN', 'QDILFVFVVSPDSYPLLN', 'AGDIINEQLGPPLTTLNRGS', 'QDLINEQLGPPLTTLNRGS', 'QDIINEQLGPPLTTLNRGS', 'ELNLITDLGASRLSRSYGT', 'AGDILFVFVVSPDSYPLLN', 'QDLLGFESLNRNFQKRT', 'QDILGFESLNRNFQKRT', 'AGDILGFESLNRNFQKRT', 'GADILGFESLNRNFQKRT', 'GADLLQGSGDVLKFGLELSF', 'AGDLLQGSGDVLKFGLELSF', 'QDLLQGSGDVLKFGLELSF', 'QDILQGSGDVLKFGLELSF', 'AGDILQGSGDVLKFGLELSF', 'ENILFVFVVSPDSYPLLN', 'NELLFVFVVSPDSYPLLN', 'ENLLFVFVVSPDSYPLLN', 'NEILFVFVVSPDSYPLLN', 'DQILFVFVVSPDSYPLLN', 'DQLLFVFVVSPDSYPLLN', 'QDILVAKYQPTFRRMEA', 'AGDILVAKYQPTFRRMEA', 'QDLLVAKYQPTFRRMEA', 'GADILVAKYQPTFRRMEA', 'QDLIFVFVVSPDSYPLLN', 'NELINEQLGPPLTTLNRGS', 'ENIINEQLGPPLTTLNRGS', 'NEIINEQLGPPLTTLNRGS', 'ENLINEQLGPPLTTLNRGS', 'DQLINEQLGPPLTTLNRGS', 'DQIINEQLGPPLTTLNRGS', 'AGDLINEQLGPPLTTLNRGS', 'GADIINEQLGPPLTTLNRGS', 'GADLINEQLGPPLTTLNRGS', 'QKKNLDERLLDQDLGPSP', 'QDLIGFESLNRNFQKRT', 'QDLIQGSGDVLKFGLELSF', 'NELIFVFVVSPDSYPLLN', 'NEIIFVFVVSPDSYPLLN', 'ENLIFVFVVSPDSYPLLN', 'DQLIFVFVVSPDSYPLLN', 'QDLIVAKYQPTFRRMEA', 'QDLLNEQLGPPLTTLNRGS', 'QDILNEQLGPPLTTLNRGS', 'QDLLVSGESDAPKVQIPRN', 'QDLKVVDEHSAKKVNIEN'] Truth: ['SLSLALSQICSF', 'DEGKLFVGGLSF', 'DEGKLFVGGLSF', 'SLNCLLRCLFS', 'KKNGAGLMLSCF', 'DEGKLFVGGLAY', 'DEGKLFVGGLFS', 'SVQEKTMVLAY', 'SVQEKTMVLFS', 'KKNGAGLMCLAY', 'KKNGAGLMCLFS', 'SLSLALSQCLAY', 'SLSLALSQCLFS', 'SVQEKTMVLSF', 'KKNGAGLMICSF', 'KKNIMANCLAY', 'KKNGAGLMLCAY', 'KKNGAGLMLCFS', 'KDSNVYVLPSF', 'SLSLALSQLCAY', 'SLSLALSQLCFS', 'DEGKLFVGGVTF', 'SLSLALSQCVTF', 'KKNGAGLMCVTF', 'SLVTSKLADMGF', 'SLSLALSQLSCF', 'SLSLALSQMATF', 'SLSLALSQTAMF', 'KKNGAGLMTAMF', 'EGKLFVGGLSFD'] non truth: ['SSLLFSPGVQSF', 'SFLEQALGGLSF', 'SSLLFSPGEKGF', 'SSLLFSPGNLSF', 'SSLLFSPGNISF', 'SFLEQALGGISF', 'SSLLFSPGLNSF', 'SSLLFSPGLGGSF', 'SLNKFLPESSF', 'SLNKFIPESSF', 'KDSNPLTLFSF', 'SLSLGSIAAICSF', 'SLSLEKAAICSF', 'SSLLFSPGIGSGF', 'SSLLFSPGISGGF', 'SSLLFSPGASVGF', 'SSLLFSPGAVSGF', 'VPPVSPSSPASFP'] Truth: ['EWIPNNVKVAVCDIPPRGLGYAAIAQADRLTQEPESIRK', 'SEKLQLLQRAIQAQTEYTGYAAIAQADRLTQEPESIRK', 'QNPDAALRTLFQAHPQLKQCVRQAIERAVQELVHPVVD', 'RTNSENIKRTLSITGMAIDNHLLALRELARDLCKEPPE', 'GWLKSNVNDGVAQSTRIIYGGERKKQQEKEEKEEKKK', 
'QGINEFLRELSFGRGSTAPVGERKKQQEKEEKEEKKK', 'EWIPNNVKVAVCDIPPRGLDADGLRRVLLTTQVENQFAA', 'QPNAEVVVQALGSREQEAVIPVAQAQGIQLPGQPTAVQTQPA', 'EWIPNNVKVAVCDIPPRGLNHLLALRELARDLCKEPPE', 'QPNAEVVVQALGSREQEANVEVLIDVLKELNPSLNFKEV', 'SEKLQLLQRAIQAQTEYTDADGLRRVLLTTQVENQFAA', 'PPDLLYQIRHDVTVSPNGVNHLLALRELARDLCKEPPE', 'SEKLQLLQRAIQAQTEYTNHLLALRELARDLCKEPPE', 'REWLSTLPPPRRNRDLDNFKLLSHCLLVTLASHHPAD', 'KIFQEKVQSLAYLRHSSSGGKTAMAYKEKMKELSMLSL', 'AVEDPVQRRALETMIKTYGHPTLLTEAPLNPKANREKM', 'SEKLQLLQRAIQAQTEYTGLVPSEKPPTMSQAQPSIPRP', 'AEDKEEESLHSIISNKSAKKKPGKQHRTNSENIKRTLS', 'QPNAEVVVQALGSRCFLLVPCMLTALEAVPIDVDKTKVHN', 'AVEDPVQRRALETMIKTYNHLLALRELARDLCKEPPE', 'AVEDPVQRRALETMIKTYGKSLSRKHAIPADEKMKPHS', 'DPRLRQFLQKSLAAATGKQELAKYFLAELLSEPNQTEN', 'VNKDIVSGLKYVQHTYRTNFKLLSHCLLVTLASHHPAD', 'KIFQEKVQSLAYLRHSSSNFKLLSHCLLVTLASHHPAD', 'CPAVRLITLEEEMTKYKIPVAQAQGIQLPGQPTAVQTQPA', 'AVEDPVQRRALETMIKTYGDVPATVELIAAAEVAVIGFFQ', 'AHYKLTSTVMLWLQTNKAEEFEVRVKENSIVGSVVAQI'] non truth: ['EVFSPTLAARQPQLNRPSTMPDSQYLKKDKLILLFASD', 'EVFSPTLAARQPQLNRPSTMHLQLGEPIETTSGETKLKI', 'ILAAGYQLCFRRLHAQIFDRLATDLSHQLKTWHAEPV', 'KLELITKNMEMVQKATNEGVKQFSYSRLALANCYIRK', 'EVFSPTLAARQPQLNRPSTMQMEAKTLLGAKELGRSYAK', 'ILAAGYQLCFRRLHAQIFDSFLAHLKEEIERKEASSK', 'EVFSPTLAARQPQLNRPSTMARLQNKRAPVPPSQARSMS', 'ELQKTIRQHKWETMNAVIVGPNGPSLPSLPSSVKGPTPSH', 'ESARLGQLIKPDNRDRPVDPKQFSYSRLALANCYIRK'] Truth: ['TDEMVELRILLQSKNAGAVIGKGGKNIKALR'] non truth: ['KGDKPPLAEAIIVKDQPLIMGHQILLQFLP', 'KGDKPPLAEAIIVKDQPLIMGKVGTVRKHSP', 'KGDKPPLAEAIIVKDQPLIMGPLLLTSALPSP', 'KGVALIGTVKFKGSIEFPPPPGPPPPPPPPPPL', 'TTHSKLRQRLLPPPLCPYLKVGTVRKHSP', 'LKTLVSDRYPIHLVKKGMALDALASIVNGLS'] Truth: ['DRKEAAENSLVAYKAAS', 'TTILSSPDGELVDYILS', 'TTILSSPDGELVYLDSL', 'TTILSSPDGELVYIDSL', 'TTILSSPDGELVIYESV', 'TTILSSPDGELVYLISD', 'TTILSSPDGELVYLSEV', 'TTILSSPDGELVELYVS', 'TTILSSPDGELVIDIYS'] non truth: ['KYYRPLFNNGPRNPS', 'KQCLHCTIPLGAQLNPS', 'KKSEHTKSGETLPFFS'] Truth: ['DIVLDHCKLSDSINSFHPLTSH', 'LVVHEDMDTAKVYTGAGAIVDHH', 'DIVLDHCKLSDLFVGDSMVQLM', 'LVVHEDMDTAKLFVGDSMVQLM', 'KKAWGNNEPDVLFVGDSMVQLM', 'EVATGEKRATVVESSEKAYSEAH', 'KKEMPNLFCFFSLPHVEFGAH', 'SLSLHCTVSPTVPGGLSDGQWHTV', 'KKPYCNYKAASDIAMTELPPTH', 'KKDRETMGHRYVEVETYHVT', 'VTSIFTSEDIPGPLQGSGQDMVSI', 'VEEVKEEGPKEMTLDPAAAQVAH', 'KVRLDPPCTNTTAPSNYLNNPY', 'VEEVKEEGPKEDIAMTELPPTH', 'VATGEKRATVVESSEKAYSEAHE', 'IRYEEFQEHKMEKPSLSQQA'] non truth: ['NWLSLHAGACSVITGSVDRHFAH', 'ISHREQFIGDTTEICKYTTVH', 'DLNERPGNTLETLPTQETELAH', 'DLNERPGNTLETLPTRCLGGNAH', 'KKYQGDSKTSDFTVDFTRFAH', 'ISHREQFIGDTTEIRDQPQAH', 'TVLQVCDHMVKMTVDFTRFAH', 'KKYMQWIDHYEGKCSLALGAH', 'EVHAGQASFFASLVGNSHELGLAH', 'SILSNGPSTEPRLDDGPVADTHSL', 'KKALEDEAKQEDKFDSIAEFH', 'TVLQVCDHMVKMTVLGHVMEAH', 'TVLQVCDHMVKMTVLGGAHSSGAH', 'KKDNYVTQPYLAPTSYGFGESL', 'KKVGGYDTQPYLAPTSYGFGESL', 'KKSQFETQPYLAPTSYGFGESL', 'LSSLFRDASPYLAPTSYGFGESL', 'KKTMPFDIEFDEFPKERSPH', 'KKLWEFEEFDEFPKERSPH', 'KKYSFTFTSFDEFPKERSPH', 'IEIEMTVGVGDPFINDDHRIAH', 'KKLDSAQQQCTEFLEKGSVMAH', 'KKGFNKGCETDIEDIFLICPAH', 'KKHTCTEPPADIEDIFLICPAH', 'KKEHEAQKEERFWSLMHGAH', 'KKSRMPAWHDRFWSLMHGAH', 'KKALEDEAKQEDKFFCLGGNAH', 'KKLDSAQQQCTEFGCQLLTTVH', 'TVGAQERIDGPNRRAHYANLCH', 'ISSISYGDPGESDGLSISRTTKAH', 'KKYSSPGTQPYLAPTSYGFGESL', 'KKLMNIEQKIEEEEDGTVDMV', 'KKYQGDSKTSDRRAHYANLCH', 'SLSLVSSDPQSHRRAHYANLCH', 'KKVSTGPSEAHDRFWSLMHGAH', 'KKAMHRGPASPVHDDCKYTTVH'] Truth: ['DSTLIATDILK', 'SDTLIATDILK', 'SSEILATDILK', 'SDTLELTGILK', 'SDTIELTGILK', 'DSTLELTGILK', 'DSTIELTGILK', 'SSEIELTGILK', 'SESLELTGILK', 'TDSLELTGILK', 'SNLTKDRSLR'] non truth: ['DSTILDALTLK', 'SDTLLDALTLK', 'SDTILDALTLK', 'SSELLDALTLK', 'SSEILDALTLK', 'DSTILLALSEK', 'SDTILLALSEK', 'SSEILLALSEK', 'ESSLLDALTLK', 
'TDSLLDALTLK', 'STDLLDALTLK', 'DSTLILALSEK', 'DSTLIDALTLK', 'ESSLILALSEK', 'TDSLILALSEK', 'ESSLIDALTLK', 'ESSILDALTLK', 'TDSILDALTLK', 'TDSLIDALTLK', 'STDILDALTLK', 'SDITLLALSEK', 'SDITLDALTLK', 'TLDSLLALSEK', 'ITDSLLALSEK', 'LTSDLLALSEK', 'ITSDLLALSEK', 'ITDSLDALTLK', 'TLDSLDALTLK', 'LTSDLDALTLK', 'ITSDLDALTLK', 'SSIELLALSEK', 'SSIELDALTLK', 'SSSSVRGLQLR'] Truth: ['DVNRTLETVVALPEGRQ', 'SHITFLQRQNGWLPIS', 'TALHYLQRQNGWLPIS', 'TWQGLIQRQNGWLPIS', 'TKPMQIQRQNGWLPIS', 'TALHYLQLSSVSPPSLSP', 'TWQGLIQLSSVSPPSLSP', 'SHITFLQLSSVSPPSLSP', 'TKPMQIQLSSVSPPSLSP'] non truth: ['TIWGQKKEVEPIDGLPS', 'IIGGCAAALNARSLARDGPA', 'TIWGQKKEVEPDLAVSP', 'TIWGQKKEVEDPLGLSP', 'TIWGQKKEVEPDLGLSP', 'TIWGQKKEVEPDIGLPS', 'TIWGQKKEVEPLDGLPS', 'TIWGQKKEVEPDLVASP', 'TIWGQKKEVEPDLVAPS', 'TIWGQKKEVEPDILGPS', 'TIWGQKKEVEPDILGSP', 'TIWGQKKEVEPDLIGSP', 'SRVNELSERLVQVEAPA', 'SLNDRLSERLVQVEAPA', 'SSLLAPKQTNASLARDGPA', 'SQQKNLSERLVQVEAPA', 'SGQAKNLSERLVQVEAPA', 'TAERQLSERLVQVEAPA', 'TIWGQKKEVEPDLPGSL', 'TIWGQKKEVEPDIGPSL', 'SGQAKNLIEIGNGNKGPLS', 'SGQAKNLIEIGREQGPSL', 'SGQAKNLIEIGRQEGPSL', 'TIWGQKKEVEPDISVAP', 'SVAIWGLAAISEIKYFE', 'SDRINSRLGTVPIDGLPS', 'SDRINSRLGTVDPLGLSP', 'SSLLAPKQTNASIKYFE', 'IIGGCAAALNAPMKRLNPS'] Truth: ['DAAIVGYK', 'AAIVGYKD', 'PVTTYKAG'] non truth: ['GEAIVGYK', 'GEALVGYK', 'EGALVGYK', 'DAALVGYK', 'ADALVGYK', 'EGAIVGYK', 'ADAIVGYK', 'DAAIVGYK', 'EGAIGVYK', 'GEAIGVYK', 'EGALGVYK', 'DAAIGVYK', 'ADAIGVYK', 'GEAIVYGK', 'EGALVYGK', 'EGALVGKY', 'GEAIVGKY', 'EGALVGPPP', 'GEAIVGPPP', 'GEALGVYK', 'DAALGVYK', 'ADALGVYK', 'EGAIVYGK', 'GEALVYGK', 'DAAIVYGK', 'DAALVYGK', 'ADALVYGK', 'ADAIVYGK', 'SSPVVGYK', 'GEAIFATK', 'EGAIFATK', 'DAAIFATK', 'ADAIFATK', 'EGALVKGY', 'EGAIVKGY', 'GEALVKGY', 'GEAIVKGY', 'GEALVYKG', 'EGAIVYKG', 'EGALVYKG', 'GEAIVYKG', 'DAALVKGY', 'DAAIVYKG', 'ADALVKGY', 'DAAIVKGY', 'DAALVYKG', 'ADAIVKGY', 'ADAIVYKG', 'ADALVYKG', 'GEALYGVK'] Truth: ['DSAFKAVPVSNIAPALRDENFPLFVTS', 'PGIPDTGSASRPHRIDSTEVIYQPRR', 'DDAPRAVFPSIVGRPRHQGVMVGMGQK', 'AIFHKYSGKETVQNDRRGVEKMVNV', 'DSKIRYAYRNTAQRQEIRSAYKST', 'AIFHKYSGKEGDKHTLSKSQLKSQTS'] non truth: ['IFAKGDSTIAFDSTLAARQPQLNRPST', 'FFQGLLGKTRASMRFSDVTLQCLVST', 'AFIQGVKAIQAGDNLCAQVASTAKRCEL', 'INGLVEHGVLEPVEQVGSRESRETGTL', 'SQYGALVLLCLTIGQSLHGSTMKSLPST', 'IHVFHEIEIAIPTNVGSRESRETGTL', 'RYLRKWSPTGGAANAQTLNGGILAMATS', 'DSTFSLEKSTLPLLPNNRSLTAGMKGQ', 'FALASLSKRTTNFIHQQKNEAGMKGQ', 'QEAIQELLEQRYKAEEVGQRTKAST', 'AFIQGVKAIQAGDNLCAKDRSPRKMTS', 'DSTFSLEKSTLPLLPNNAEVTKEALST', 'FALIALNYRASCCVPAVVPPGGILAMATS', 'INGLVEHGVLEPVEQIQETTIAKAEST', 'LELITKNMEMVQKATNGIGDNTIIAST', 'NYNKEVNLVHILRDRSASAGSLSQPH', 'IHVFHEIEIAIVCCHKAAKDKELAST', 'DSTFSLEKSTLPLLPAEEVGQRTKAST', 'DSGPLRHIIPNLTINQPTVELQFDST', 'NYNKEVNLVHILRDRSASAPRQHTS', 'AFINSIPTSVRHNNVSFMAGVIAFNKA', 'GSRKKEEATERIMLAEEVGQRTKAST', 'RQHQILNTSQTFAKGCIRCGIAFNKA'] Truth: ['FLFERQARRY', 'HALGSARKFHVY', 'AQAAMKGALPLDTV', 'GFIQVALQLAYY'] non truth: ['TARVHQVHKFY', 'SLCPKKPGDLGASL', 'SLCPKKEGPVQIS', 'AQACRVKRKYY', 'GPRPAAPRMAKAY', 'GPRPAAPRKMAAY', 'SIPAAPSLPWLAY'] Truth: ['TTSPNLGYIPRILFLDPS', 'DVTEFEITPEGRRITKL', 'TTPGSIGGYIPRILFLDPS', 'TTFQEQVKSPAKEKAKSP', 'TTVSAPGGYIPRILFLDPS', 'TTVYQSPGIPRILFLDPS', 'TTPGTVGGYIPRILFLDPS', 'TTVNGDVFIPRILFLDPS', 'TTDIFQTQKKLQQGIAPS', 'TTAFSGLEQKKLQQGIAPS', 'TTAFSGIEQKKLQQGIAPS', 'TTFQISEQKKLQQGIAPS', 'TTVNGDVFKSPAKEKAKSP', 'TTVYQSPGKSPAKEKAKSP', 'TTAPSNYVKSPAKEKAKSP', 'TTTTLCVNALSKQSPKKSP', 'TTVNGDVFALSKQSPKKSP', 'TTVYQSPGALSKQSPKKSP', 'TTYVGVRNSPIEVKSPEK', 'TTFLLMVQLSSVSPPSLSP', 'TTLFMLVQLSSVSPPSLSP', 'TTGFEANLAAIIERLQEK', 'TTTLCVLNTPRILFLDPS'] non truth: ['TIGTFVIGAWFIPIDGLPS', 'TTSQGCLDKRLKKRWPS', 
'TISNVKGKIQLTCSASLGPS', 'TTPCCSKLAKWIIRSLPS', 'TALTRVVTSLLQCSASLGPS', 'TTADSPFLAMIIINSILPS', 'TTLNRGSCRSTKGPRIEK', 'LLGQGQLAANLNAITPEPPS', 'TTTKGAGNCTNIVARNRVK', 'TTAFLNESAVKRAALLDPS', 'TTEAAIYNAVKRAALLDPS', 'TTVDGIYNAVKRAALLDPS', 'TTGDVIYNAVKRAALLDPS', 'TTSYGGSPLAVKRAALLDPS', 'TTDVGIYNAVKRAALLDPS', 'TTAEAIYNAVKRAALLDPS', 'TTSATTATQARRVSKQLPS', 'TTLKCMANARRVSKQLPS', 'TTTTVMGGVARFRGRLPPS', 'TTCKQIRCPSISTIRISP', 'TTCKQIRCIIPKNSKSSP', 'TTSWLGSSNRLKKRWPS', 'TTENECKKRLKKRWPS', 'TTQDECKKRLKKRWPS', 'TTQESVGCKRLKKRWPS', 'TTLNFGPCTNIVARNRVK', 'TTAEALYNAVKRAALLDPS', 'TTHLAPDPTPSISTIRISP'] Truth: ['GEIQAGASQFETSAAK', 'EGLQAGASQFETSAAK', 'GELQAGASQFETSAAK', 'DALQAGASQFETSAAK', 'DALQAGASQFETSAAK', 'ADIQAGASQFETSAAK', 'DAIQAGASQFETSAAK', 'ADLQAGASQFETSAAK', 'EGIQAGASQFETSAAK', 'IADQAGASQFETSAAK', 'DLAQAGASQFETSAAK', 'LEGQAGASQFETSAAK', 'VEAQAGASQFETSAAK', 'LDAQAGASQFETSAAK', 'DIAQAGASQFETSAAK', 'LGEQAGASQFETSAAK', 'EVAQAGASQFETSAAK', 'VAEQAGASQFETSAAK', 'TPTGAAGASQFETSAAK', 'TPTQAGASQFETSAAK', 'GEGALAGASQFETSAAK', 'ADQLAGASQFETSAAK', 'KPDSAGASQFETSAAK', 'AGLDAAGASQFETSAAK', 'LQDAAGASQFETSAAK', 'QVTEVNSPGSPAQPSP', 'QVTEVNSPGSPAGAPPS', 'DAQLAGASQFETSAAK', 'QLADAGASQFETSAAK', 'GEIQAVMEMMSQKI', 'EGLQAVMEMMSQKI', 'GELQAVMEMMSQKI', 'EGIQAVMEMMSQKI', 'ADLQAVMEMMSQKI', 'ADIQAVMEMMSQKI', 'DALQAVMEMMSQKI', 'DAIQAVMEMMSQKI', 'ADLAGAVMEMMSQKI', 'DALAGAVMEMMSQKI', 'DAIQQASQFETSAAK', 'IADQQASQFETSAAK', 'DLAQQASQFETSAAK', 'LGEQQASQFETSAAK', 'EVAQQASQFETSAAK', 'QVTEVNSPGSPAPSGPA', 'QVTEVNSPGSPATNPP', 'QLQEGASQFETSAAK', 'LQQEGASQFETSAAK', 'IADQAVMEMMSQKI', 'DIAQAVMEMMSQKI'] non truth: ['DALQAQSDIPGELTH', 'DALQAQSDIPQPQSP', 'DALQAQSDIPQSPQP', 'DKAPSGEIGSPGELTH', 'DAQLALQPSDGPQASP', 'DAQLALQPSDGHETI', 'DAQLALQPSDQSPPQ', 'DAQLALQPSDQPQPS', 'ADLPTDKGGSPGELTH', 'QLEGQTSNDLGYAKA'] Truth: ['KAALLAQYATGKQELAKYFLA', 'KAALLAQYAEVLQDPTLRLNA', 'FLQKSLAAATGKQELAKYFLA', 'KKEVDVGLLIESLSNTGLAQLA', 'KAPAEVQITAEQLYVRRAALA', 'KKEDEVQAIATLIEKQALSIA', 'KKEDEVQAIATLIEKQASLLA', 'KKEDEVQAIATLIEKQAISLA', 'KKEDEVQAIATLIEKQATVLA', 'KKEDEVQAIATLIQTLSQVLA', 'KKEEVKSPVKEEVKAKEEIA', 'KKEEVKSPVKEEVKAKEELA', 'KAVQSLDKNGVDLKSHYIVLA', 'KAVKEHEGKTIFARLSKGDIA', 'KAVKEHEGKTIFAREKDKLA', 'KAVKEHEGKTIFAQTLSQVLA', 'KTLVQQLKEQNEAMKAVGVLA', 'KTLVQQLKEQNEALNMVKLA', 'KTLVQQLKEQNEAMLNKVLA', 'KKALEELATKRFYIFQELA', 'KEADVATVRLGEKRSLRADLA', 'KEADVATVRLGEKRSDLARLA', 'KEADVATVRLGEKRSREVALA', 'KEADVATVRLGEKRGSEIRLA', 'KEADVATVRLGEKCAVVAGVVLA', 'KEADVATVRLGEKIITEELLA', 'KEADVATVRLGEKRKIWDLA', 'KEADVATVRLGEKLWAGSALLA', 'KEELQKSLNILTEGVSINVIA', 'KEELQKSLNILTKNIEDVIA', 'KEELQKSLNILTSVSQSPLIA', 'KEELQKSLNILTATTPGSIALA', 'KEELQKSLNILTAVGTLEQLA', 'KEELQKSLNILTPRNPAYIA', 'KKEVDVGLLIESLSVVNNTGLA', 'KKEVDVGLLIESLSEREVALA', 'KQIAWEQKQRDHIHIIALA', 'KQIAWEQKQRGFPRVSRLA', 'KAALLAQYRLTSPAAAEKPDLA', 'KAKLKEIVADSDVEGVSINVIA', 'KAKLKEIVADSDVKNIEDVIA', 'KAKLKEIVADSDVSVSQSPLIA', 'KAKLKEIVADSDVPRNPAYIA', 'KAFQQHILNVLSRAERFGIA', 'KAFQQHILNVLSKTKETELA', 'KAPAEVQITAEQLLRYRQIA', 'KAEGSEIRLAKVDCAVVAGVVLA', 'KAEGSEIRLAKVDIITEELLA', 'KAEGSEIRLAKVDLWAGSALLA', 'KAAQIKRRYANPIGYDLHLA'] non truth: ['KKLREVEQELDPQFRKALA', 'KLKQEEAANKLDPQFRKALA', 'IFPPPFGNVPTSLIKPDFKIA', 'KAAIRVKEEDLDPQFRKALA', 'KAMITRPRGELDPQFRKALA', 'KATKTKGTNFATFIAMALSLLA', 'KATKTKGTNFALEKVIMYAIA', 'KATKTKGTNFALQTYVFVPLA', 'KAFKYVTKMAPNAKFWLAIA', 'KKKLFQYTFNVTTSPVAAALA', 'KKKLFQYTFNVTTSPKSPIA', 'KKKQEELQQLDPQFRKALA', 'KKGSKSTGRNALSQVVNSGVPLA'] Truth: ['IDSDISGTLKFACESIVEEYEQINEEMKRSG', 'PEADRHLLRCLFSHVDFSGDGSMARQFQDVS', 'DSLLQDGEFTMDLRTKSTGGAPTFNVTANFGET', 'PEGAGPYVDQAAAEKAVTKEEFQGEWTAPAPEF', 'ISVETDIVVDHYKEEKFEDTSPADESQLEPA', 
'DSLIGGNASAEGPEGEGTESTVVTGVDIVMNHHLQ', 'DSLLQDGEFTMDLRTKSTGRSEETKQNEAFS', 'ESVLCVQYTDEPRIRNIISTTENYENPSSY'] non truth: ['EPASGQGMQPMLQQYDLFKQELMDWLVSPGP', 'TTTSATTATVFPNGGFQEQVANTGERSISGGFTDA', 'EPATYAVSGCFTQQYDLFKQELMDWLVSPGP'] Truth: ['KKKEEREAEAQAPRGQSIKSRQ', 'KKPAVVAKSSILLDVKPWDDETD'] non truth: ['KKSQFEEFFQQIAAAVALMAALL', 'KKLGLKREKSLEPGGPQGMRSEE', 'GKKIWGCLQLALIREENLNQID', 'RTVQKKIIQVCICLTLGLACGMY', 'TKKELGGIVREIAQEFHREQVS'] Truth: ['DFQLIGIQ', 'FQLIGIQD', 'QEFLNLIG'] non truth: ['FDQIGLLQ', 'FDQIIGIQ', 'FDQLIGIQ', 'FDQLIGIAG', 'FDQIIGIAG', 'FDQLLGLAG', 'FDQLIGLAG', 'FDQIIGLQ', 'FDQLIGLQ', 'FDQIIGLAG', 'FQDLLGLAG', 'FDQIGILGA', 'FDQIGLIQ', 'FDQLGLIQ', 'FDQIGIIGA', 'FDQLGIIGA', 'FDQIGLLGA', 'FDQLGILGA', 'FDQIGILQ', 'FDQLGLLQ', 'FDQLGILQ', 'FDQLGLLGA', 'FQDLGILQ', 'FQDLGIIGA', 'FQDLGLLGA', 'FQDLGILGA', 'FQDLGLIQ', 'FQDLGLLQ', 'FDQIIGIGA', 'FDQLIGIGA', 'FDQILGLQ', 'FDQIIGLGA', 'FDAGLLGLAG', 'FDQLIGLGA', 'FDQILGLAG', 'FQDLIGIQ', 'FQDLIGIAG', 'FQDLIGLAG', 'FQDLIGLQ', 'FAGDLLGLAG', 'FQLDGLIQ', 'FQLDGLLQ', 'FDQLAVLQ', 'FDAGLVALQ', 'FDQLVALQ', 'FQDLAVLQ', 'FQDLVALQ', 'FDAGIGLLQ', 'FAGDIGLLQ', 'FQDIGLLQ'] Truth: ['TSYALLALPGPLQ', 'LKDGPLIAEFLQ', 'LKDGPIMKTLID', 'DTVIKYLPGPLQ', 'DTVLKYLPGPLQ', 'DTVIKYLPGPLQ', 'DTVLKYLPGPIQ', 'DTVLKYLPGPLQ', 'DTVIKYLPGPIQ', 'DTVLKYLPGLPQ', 'DTVLKYLPGIPQ', 'DTVIKYLPGLPQ', 'DTVIKYLPGIPQ', 'KLDPGLIAEFLQ', 'KLDPGIMKTLID', 'KIGDSKITSGIPQ', 'KLLKEGEADNVK', 'DTVIKYLPPGLQ', 'DTVLKYLPPGLQ', 'DTVIKYLPAPVQ', 'DTVLKYLPAPVQ', 'KDVAPLIAEFLQ', 'KGLDPLIAEFLQ', 'TSYALLALPPGLQ', 'TSYALLALPAPVQ', 'STPKPLIAEFLQ', 'KLPDGLIAEFLQ', 'KDVAPIMKTLID', 'KGLDPIMKTLID', 'STPKPIMKTLID', 'KLPDGIMKTLID', 'DTVLKYLPVAPQ', 'DTVIKYLPVPAQ', 'DTVLKYLPVPAQ', 'DTVIKYLPVAPQ', 'DTVIKYLPLGPQ', 'DTVLKYLPAVPQ', 'DTVIKYLPAVPQ', 'DTVLKYLPLGPQ', 'VGGLSPLIAEFLQ', 'VGGLSPIMKTLID', 'KLDGKLAAVATEQ', 'KLDGKLIAAGDDK', 'KIISSSLNSGPIQ', 'KLLDGTTASPAAAK', 'KLIKMYAMAFK', 'KLLKEGEEGVNK', 'KKLLEDEQVNK', 'KILKEQEDNVK', 'KKLLEDGEANVK'] non truth: ['KLGDPLIELAFQ', 'KIGDPLIELAFQ', 'KLDGPLIELAFQ', 'KIDGPLIELAFQ', 'EYVISKIPGPLQ', 'EYVISKIPGLPQ', 'EYVISKIPGLPGA', 'KIDPGILFNLDV', 'LKGDPLVTSTKGQ', 'LKGDPIVVVEFQ', 'LKGDPLIELAFQ', 'KLSEYLVPGPLQ', 'KLSEYLVPGLPQ', 'KLSEYLVPGLPGA', 'KLDPGILFNLDV', 'LKGDPLTMELVK', 'EGVKPLVTSTKGQ', 'KPSTPLVTSTKGQ', 'KPTSPLVTSTKGQ', 'KSTPPLVTSTKGQ', 'KGDLPLVTSTKGQ', 'TSPKPLVTSTKGQ', 'KIPGDLVTSTKGQ', 'EYVISKIPPLGGA', 'EYVISKIPPAVQ', 'EYVISKIPPAVAG', 'EYVISKIPPLGQ', 'EYVISKIPLGPQ', 'EGVKPIVVVEFQ', 'EGVKPLIELAFQ', 'KPSTPIVVVEFQ', 'KSTPPIVVVEFQ', 'KPTSPIVVVEFQ', 'KGDLPIVVVEFQ', 'KPSTPLIELAFQ', 'KPTSPLIELAFQ', 'KGDLPLIELAFQ', 'KSTPPLIELAFQ', 'LKGDPILFNLDV', 'TSPKPIVVVEFQ', 'TSPKPLIELAFQ', 'KIPGDIVVVEFQ', 'KIPGDLIELAFQ', 'EGVKPLTMELVK', 'KGDLPLTMELVK', 'KPTSPLTMELVK', 'KSTPPLTMELVK', 'KPSTPLTMELVK', 'TSPKPLTMELVK', 'KIPGDLTMELVK'] Truth: ['PANGNANTKPYKHR', 'PANEDQKLVETRPA', 'APNFLHCLVETRPA', 'KHHNSSVVTGQAFR', 'KHHNSSKIENRFA', 'KSKEMMNQVVFAR', 'KSKEMMQIQAAFR', 'KSKEMMAQLQFRA', 'VSVSIGFTREWMR', 'KKYLEEREWMR'] non truth: ['KKVWDQLEEQPPA', 'KRTQLVNGFYCAPA', 'KRTQLVNGFYACPA', 'KKPQSMKDMTRFA', 'KKVWDQLEAADPPA', 'KKVWDQLEQPEPA', 'KKVWDQLEAGEPPA', 'KKAFSCTPFTNRPA', 'WQTGAQLELVPSAPA', 'KKVWDQLEEAGPPA', 'KKVWDQLEEGPAPA', 'KKVWDQLEEAPGPA', 'KKVWDQLEEPAGPA', 'KKVWDQLEEPQPA', 'KKTFGCTPFTNRPA', 'NGVFSHTPTLDVLPA', 'GHRKAKREQTNQD'] Truth: ['KKEFDKKYPPGSPTQNVGL', 'TTTDKNGLARFSIKLFSAY', 'TTTDKNGLARFSLPPKAASGT', 'TTTDKNGLARFSIPPKAASGT', 'TTTDKNGLARFSIPTQNVGL', 'KKVIQWCTHHQLTALGAAQ', 'KKEPKPEPGIESRGIHETT', 'KKDVVERGHRRSFAERY', 'KKALEERGHRRSFAERY', 'KKEVDVRGHRRSFAERY', 'KKVDISKVSSKCGSKAHETT', 'KKVDISKVSSKCGSKAPGGQE', 
'KKVDISKVSSKCGSKAENPQ', 'KKVDISKVSSKCGSKAQNPE', 'KKVDISKVSSKCGSKAQENP', 'KKVDISKVSSKCGSKAPNGEA', 'KKVDISKVSSKCGSKAHHGH', 'KKVDISKVSSKCGSKAYAYA', 'KKVDISKVSSKCGSKAFSAY', 'KKVDISKVSSKCGSKAFTYG', 'KKVDISKVSSKCGSKAFSSF', 'KKVDISKVSSKCGSKASFSF', 'TGRNFDEILRVVDSLQLTG'] non truth: ['TVGNDLVLSQAKQQLFTDR', 'TVGNDLVKIADEIKFWER', 'KNGPHPLQELTLQFPALTE', 'TGAQLELVPSAPAARAPSNVPS', 'KKDQSPKISHERAQTPVSP', 'ASVNDLVKIADEIKFWER', 'HFAMLVKIADEIKFWER', 'KKESGTLAGRVAKGLCCGARGA', 'KKTSNQMQIAAISTPPFLGT', 'KKANRSSTWAALTSTHRFA', 'KEGQLFKMPYVIVEPNIQ', 'KKDDVLIHSQPVGQLEGRN', 'TTLDCSLAQAKRITTVQVNA', 'KKSYFLTMVTGVFTRFAH', 'TVGNDEPLERFFQKLLGAV', 'KKRKDEDASTTHPLSHGLI', 'TTAFLGSHEKKSKSPVSSKN', 'KKSYFLTMVTGVTFVDLPS', 'KEGQLFKMPYVIAQTPVSP', 'TVGNDGPAPPSPPKPPPKPVSP', 'KKWNPGLPGSPTEVPQTLSP', 'ASVNDEPLERFFQKLLGAV', 'HFAMEPLERFFQKLLGAV', 'KKSSEAVAEQTTHPLSHGLI', 'KDQLDVKIADEIKFWER', 'TGAQLEVKIADEIKFWER', 'KKDQSPKISHERVEPNIQ', 'KKSSEAVAKRLLEDHTHSP', 'ASVNDGPAPPSPPKPPPKPVSP', 'KKDDVLKGLLSMRSDDAKN', 'HFAMGPAPPSPPKPPPKPVSP', 'KKDDFIPQWTLNLATQSK', 'TTQHQPKPRVESTLPLDVS', 'TGAQLELVSQAKQQLFTDR', 'KKDDTNIFLTNEVNGVVRA', 'KKDDKAVSISSLAQHQLHQ', 'KKDDKEKEKTKEQKELE', 'KKDDPKHQPTEPQSIEKK', 'KKDDRVVFSNTGLVPTSVNG', 'KLGQCIVVFSNTGLVPTSVNG', 'KKQPSLSSDSTTHPLSHGLI', 'KKDKVEENSTTHPLSHGLI', 'KKRKDEDAVKELMNKSVD', 'TTDGKEQLKGTTHPLSHGLI', 'TTSYGGSPLRKLVGQLEGRN', 'KKDQSPKFRHYSGNIART', 'KKDQSPKSQAKQQLFTDR'] Truth: ['DLGKEIEQKY', 'KGEVEIEQKY', 'TVAEPLSSQKY', 'SIPEALSSQKY', 'TVPEALSSQKY', 'TVEPALSSQKY', 'SIPEAGETKKY', 'SSLIETGPQPPP', 'TVLSESAPQPPP', 'SLTAFTASALGPS', 'SSSVKPVVQEY', 'SSSVKPVVEQY', 'LGKEIEQKYD'] non truth: ['SSIIDEVAFKN', 'SIAEPNITSYK', 'TTPTPNITSYK', 'SISATPDLQYK', 'SSLLLDDAKFGG', 'SLAPENITSYK', 'SLAPEVTSQPPP', 'SIAEPVTSQPPP', 'SIVTEDVAKFN', 'SLTVDEVAFKN', 'SLVTVDEAFNK', 'SIVTEDVAFKN', 'SSIIDEVAKFN', 'SLSLDEVAFKN', 'SSLLLDDANKF', 'SSLLLDDAKFN', 'SLISDEVAFKN', 'SILSDEVAFKN', 'SSLLLDDAFKN', 'SSLLLDDAFNK', 'TVTVDEVAFKN', 'TVVTDEVAFKN', 'SSLLEASPQPPP', 'SSTSPILEKFGG', 'SSTSPLEIKFGG', 'SSTSPLEINKF', 'SSTSPLEIFNK', 'SSTSPILENKF', 'SSTSPLEIKFN', 'SSTSPILEKFN', 'SSTSPILEFNK', 'SSTSPLEIFKN', 'SSTSPILEFKN', 'TTPTPVTSQPPP', 'SSLLLDDNFAK', 'SSLLAPSKEQY', 'KKNDRAPCKY', 'KKECNPRGKY', 'KKECNGPRKY', 'SLTVGFESKAPS', 'SSLLLDDNAKF', 'TVVTSLMKETN'] Truth: ['DIEEIRSGRLSQEL', 'IDEEIRSGRLSQEL', 'IDEVFDGRDAKAPVL', 'DITNPPRTNVLYEL', 'IDEPAVRTAAAAAFEI', 'SIVDVEDEIEAIISI', 'SLAELEDEIEAIISI', 'IDEVVSDEIEAIISI', 'IDTNPPRTNVLYEL', 'SLENPPRTNVLYEL', 'SLFHFPSPEMIRAL', 'SIFHFPSPEMIRAL', 'TVFHFPSPEMIRAL', 'TVLDVNRSNEKQEL', 'SLEDVVDEIEAIISI', 'ELVDSVDEIEAIISI', 'ELVDFDGRDAKAPVL', 'TAEDLIDEIEAIISI', 'LEEIASDEIEAIISI'] non truth: ['SSLISFASAHGKPTEL', 'SLYYNRMKRATEI', 'SLYYNRMKRATEL', 'SLFHVVKYHICGEL', 'SIFHVVKYHICGEL', 'TVFHVVKYHICGEL', 'SLSIGGHVYSVGLNEI', 'SSLLSSFAAHGKPTEL', 'SLISSWLSQERPEL', 'SLISSWLSQREPEL', 'SLISSWLTGGQQGTPL', 'SISLRPFSNPLADDL', 'SLLSKGNSTGGQQGTPL', 'SLYYNRMKRATIE', 'SLIMQLGSLSQPHPH', 'DLIESKIIEEIVDE'] Truth: ['DMLIKAATA', 'KLIAAMDTA', 'KLIAADMTA', 'KLLAAMDTA', 'KLLAADMTA', 'KALLAMDTA', 'KALLADMTA', 'KLALAMDTA', 'KLALADMTA', 'KVVVAMDTA', 'KVVVADMTA', 'MLIKAATAD'] non truth: ['DMIIKAATA', 'DMIIKAAAT', 'MDIIGVSKA', 'DMIIGVSKA', 'DMLIGVSKA', 'MDILKDKA', 'MDLLKDKA', 'DMLLKDKA', 'DMLLVKGSA', 'DMIIKKDA', 'MDILGSKVA', 'MDLLVKGSA', 'MDILVKGSA', 'DMIIKDKA', 'DMIIKATAA', 'MDILKAATA', 'MDLLKAATA', 'DMLLKAATA', 'DMLIKAATA', 'MDILKAAAT', 'DMLLKAAAT', 'DMLIKAAAT', 'DMIIAAKAT', 'MDILAAKAT', 'MDLLAAKAT', 'DMLLAAKAT', 'DMLIAAKAT', 'MDLLKAAAT', 'MDLIGVSKA', 'MDLLGSKVA', 'DMLIKDKA', 'DMILKDKA', 'DMLLGSKVA', 'DMLIGSKVA', 'MDLLKKDA', 'DMLLKKDA', 'DMLIKKDA', 'DMIIGSKVA', 
'MDILKKDA', 'DMILVKGSA', 'DMIIATAKA', 'DMLLATAKA', 'DMLIATAKA', 'MDILATAKA', 'MDILKATAA', 'DMLLKATAA', 'DMLIKATAA', 'MDLLATAKA', 'MDLLKATAA', 'DMLVKVSAA'] Truth: ['VDFKSKENPRNFS', 'DVFKSKENPRNFS', 'DVFREDKIENRF', 'DVFYETRRQEKP', 'DVFLRDVRQQYE', 'VDFLRDVRQQYE', 'VDFQGRSTMVLFAP', 'KHEQNVQKFPSPE', 'KHAGENVQKFPSPE', 'KAAHDNVQKFPSPE', 'KAAHDMELVETRPA', 'AFRFYDVFKDLF', 'RAFFYDVFKDLF', 'ARFFYDVFKDLF', 'RFAFYDVFKDLF', 'KKCYNVQKFPSPE', 'KYCKNVQKFPSPE', 'KKCYTLHYKTDAP', 'KYCKTLHYKTDAP', 'KHAEGNVQKFPSPE', 'KHEAGNVQKFPSPE', 'KEQHNVQKFPSPE', 'KAHEGNVQKFPSPE', 'KHEAGMELVETRPA', 'KHAEGMELVETRPA', 'KAHEGMELVETRPA', 'KHAGEMELVETRPA', 'KEQHMELVETRPA', 'KHEQMELVETRPA', 'KEQHTLHYKTDAP', 'KAHEGTLHYKTDAP', 'KHAGETLHYKTDAP', 'KHEQTLHYKTDAP', 'KHEAGTLHYKTDAP', 'KHAEGTLHYKTDAP', 'KAAHDTLHYKTDAP', 'DVVDKWFLDQFR', 'KKMTSGEQLCTVSR', 'KKEEMPFLRDQF', 'KKEEMPFLDQFR', 'KKEMPEFLNRFE', 'KKEEMPFLNRFE', 'KKDYWKDCKEPK', 'KKYRGDPEYFPPA', 'KKRGYDPEYFPPA'] non truth: ['DVFPIGFNLFADGR', 'VDFTAGGKNLFADGR', 'VDFTANKNLFADGR', 'VDFAQNSKLHPPSQ', 'VDFAQIESLRGWF', 'VSSVVDSCIISTSGLT', 'AFRFHCKIHPPQS', 'AFRFHCKLHPPSQ', 'KKRNKCSCPRQYG', 'KKETNASMLFADGR', 'KKDEQFNLFADGR', 'KWYSPVEISVCTR', 'KWYSPVELNREF', 'KKETNASMISVCTR', 'KKAEAVMSEHNGVAP', 'KKVWDTSEHNGVAP', 'KKETNASMLVFYH', 'KKETNASMINFWV', 'KKETNASMFTSRPA', 'VSVSDCVSIISTSGLT', 'VSVSDCVSLLVGMMK', 'KKETNASMLHPPSQ', 'KKETNASMLSQPHP', 'KKETNASMIHPPQS', 'KKETNASMIQRDF', 'KKETNASMIDRFGA', 'KKETNASMLFAGDR', 'KKCSNETVLFAGDR', 'KKETNASMLREFN', 'KKSMQASELNREF', 'KKASMQSELNREF', 'KKETNASMLNREF', 'KKLCSPTSEHNGVAP', 'KKGCDSLLTGRFDQ', 'KKAEMPNRFGMMK'] Truth: ['GAGANLRQLASPEMIRALEYIEK', 'KYDIIANATNPESKVFYLKMKG', 'KVEYFLKWKGFTDADAEAVKGK', 'ANTTIGRIRFHDEPSVEVTFLK', 'KKDRVTDALNATRAAVEEAWTGK', 'SKHEFQAETKKLLDSGKRENGK', 'DKYLIANATNPESKVFYLKMKG', 'VVAYSLANATNPESKVFYLKMKG', 'SVGYILANATNPESKVFYLKMKG', 'KDYLLANATNPESKVFYLKMKG', 'KVEYLANATNPESKVFYLKMKG', 'KEYVLANATNPESKVFYLKMKG', 'PETKAIMKWIQTIDSGKRENGK', 'PETKAIMKWIQTIPFIADENGK', 'PETKAIMKWIQTIPFVDGDRGK', 'SEYLLGNPRIQRLLNDSQGQTK', 'SSGPERILSISADIETIQLEMLK', 'SKLDPGKPESILKENRCGGNFLK', 'KVEYFLKWKGFTDADRLTANK', 'KKGVPIIFADELDDSKPKDTEGK', 'KKGVPIIFADELDDSAISELNAGK', 'KQLFSFPIKHCPDMQSARVLGK', 'SEYLLGNPRIQRLLNTTPMEGK', 'SEYLLGNPRIQRLLNTTNQDGK', 'SEYLLGNPRIQRLLNTQNTDGK', 'SEYLLGNPRIQRLDSGKRENGK', 'KRFGAPGGLGETLQEKLDRTGDGK', 'SKHEFQAETKKLLDHPSFLFK', 'PEEHPVLLTEAPLNPKAQHPQGK', 'PEEHPVLLTEAPLNPKFGSRNGK', 'PEEHPVLLTEAPLNDLKSELTGK', 'PEEHPVLLTEAPLNPKAMYVPGK', 'PEEHPVLLTEAPLNRPGGCLFLK', 'STTYVGVRNHRRVWRTLEDGK', 'KVEYFLKWKGFTDADKKQNGK', 'KVEYFLKWKGFQKHPHTGDGK', 'KVEYFLKWKGFTDAGQVLDTGK', 'KVEYFLKWKGFTDADQELKGK', 'STTYVGVRNHRRVTYWFIFK', 'KKQHRKTTTTLCVLEDSQGQTK', 'KKQHRKTTTTLCVLNTDEAQSK', 'SKHEFQAETKKLLDIVARCEGK', 'PEEHPVLLTEAPLNPKANRYGGK', 'PEEHPVLLTEAPLNPKANRGYGK', 'KKNRIAQWQNDEPSVEVTFLK', 'SSNLEKIVNPKGDEPSVEVTFLK', 'SSGPERILSISADIEQLEPTFIK', 'KKGLVSGGVYNSHVGCLPEPTFIK', 'KVEYFLKWKGFTDADNKQKGK', 'KVEYFLKWKGFTDADNTLVAGK'] non truth: ['SSNIRRISKGQTYREQRHEGK', 'KWVEHVLTGNYVITWATVTANK', 'KEVYPVPIMGVRGDKNQPDFIK', 'KKKFSVTTVMDWEVLLEWYK', 'KKHMDLKCVIAGDLCPNRTFLK', 'KWVEHVLTGNYVITWAISKGQT', 'KWVEHVLTGNYVITWALSGKQT', 'KWVEHVLTGNYVITWAIATNSK', 'KWVEHVLTGNYVITWALASTNK', 'SPWLEAELLPERPLYQAYGLGK', 'KWVEHVLTGNYVITWAGCRLGK', 'KEVYPVPIMGVRGDKTAQDNKGK', 'KEVYPVPIMGVRGDKQSLGSQGGK', 'KEVYPVPIMGVRGDTFTGPKGPGK', 'SSIKAFGNGGANPPSLVGSTMKKPGK', 'KWVEHVLTGNYVITWATLTNGK', 'KWVEHVLTGNYVITWATAGSLGK', 'KWVEHVLTGNYVITWATSLQGK', 'KWVEHVLTGNYVITWATIAGSGK', 'KWVEHVLTGNYVITWATNITGK', 'KWVEHVLTGNYVITSSDLRQGK', 'KWVEHVLTGNYVDSVSKVERGK', 'KWVEHVLTGNYVIGISSFAPQGK', 'KWVEHVLTGNYVDAVVRCTRGK', 
'KWVEHVLTGNYVITAAGISGVMGK', 'KWVEHVLTGNYVITDCLKIGGGK', 'KWVEHVLTGNYVITPEEQFLK', 'KKANLHSYNEKLSLAWSQVEGK', 'KKANLHSYNEKLWEEKLTNGK', 'KKANLHSYNEKLFSLGYSVFGK', 'KKHVELLDSASGLKDNRECLFK', 'KKHVELLDSASGLCYEPVPGLFK', 'SSHFVRVGLQGVGDLCPNRTFLK', 'KKLHDRYLNPNGVIASVEEYGK', 'LDSRDLLLELDEDSIVKERSGK', 'KKHMDLKCVIAGVCEPNRTFLK'] Truth: ['DIEIATYRKLLEGEGA', 'DIVEVLFTQPTGFLAQ', 'DLGLKRGYRMVVNEGA', 'DLGLKRGYRMVVNEGA', 'TPEKRTDKIFRQMGA', 'DIEIATYRKLLEGEQ', 'TVYQTLAVNADGVLVSGA', 'TVEELHAENLSVPVIGA', 'TYRTALTYYLDITLA', 'TYRTALTYYLDITIA', 'DLGLKRGYRMVVNEQ', 'TTVYQLAVNADGVLVSGA', 'TTIGYGLAVNADGVLVSGA', 'TTVYQKELQDLALQGA', 'TTIGYGKELQDLALQGA', 'TTYNLKELQDLALQGA', 'LSSLPEIDKSLVVFCGA', 'TPEKRTDKIFRQMQ', 'TYRTALTYYLDILTA', 'LSSLPEIDKSLVVFCQ', 'KTTSASSVKRNTTPTGAA', 'TTSASSVKRNTTPTGAKA'] non truth: ['TTAASALSLSSGVPFQLGA', 'TVGYVDALLHHTRPAGA', 'TVPRASIKSEFAGMRGA', 'TTSPHLHGQLLLWFGA', 'TVQKQIIEEVSFTQGA', 'TVQKQIIEEVSFATGGA', 'TVQKQIIEEVSFQTGA', 'TVQKQIIEEVSFAASGA', 'TVQKQIIEEVSFSAAGA', 'TVQKQIIEEVSFASAGA', 'TVQKQIIEEMTTTKGA', 'TVPGWPHNPAKIYVAGA', 'TVPGWPHNPAKAYVIGA', 'TVPGWPHNPAKSVLFGA', 'TVPGWPHNPAKSVFIGA', 'TVPGWPHNPAKSVIFGA', 'TVPGWPHNPAKSVFLGA', 'TVPGWPHNPAKSFVLGA', 'TVPGWPHNPAKSFVIGA', 'TVGYVDALLHHTPARGA'] Truth: ['DIILKNF', 'VELFVKAG', 'QVELFVK'] non truth: ['LDIIKNF', 'IDIIKNF', 'DLIIKNF', 'IDILKNF', 'DILIKNF', 'DLLIKNF', 'DLILKNF', 'IDLIKNF', 'LDLIKNF', 'DIILKNF', 'LDILKNF', 'EVLIKNF', 'VELIKNF', 'IDLLKFN', 'IDILKFN', 'DLIIKFN', 'DILLKFN', 'IDIIKFN', 'LDLLKFN', 'LDIIKFN', 'LDLIKFN', 'IDLIKFN', 'DILIKFN', 'DIILKFN', 'LDILKFN', 'DLILKFN', 'DLLIKFN', 'VELLKFN', 'EVLLKFN', 'EVLIKFN', 'VELIKFN', 'DLLKINF', 'DLLKLNF', 'IDILNKF', 'EVIIKNF', 'VEIIKNF', 'VEIIKFN', 'EVIIKFN', 'DILKLNF', 'LDLKLNF', 'IDIKLNF', 'IDLKLNF', 'DLIKLNF', 'DIIKLNF', 'LDIKLNF', 'VELKLNF', 'EVLKLNF', 'DLILNKF', 'LDILNKF', 'DIILNKF'] Truth: ['SHPPSLVHER', 'SFDRLVHER', 'SFRDLVHER', 'SFEVRVHER', 'KFSQQVHER', 'SFEVRVEHR', 'DYRALVHER', 'KGFQASVHER', 'SIPSHPVHER', 'SDFLRVHER', 'SFQQKVHER', 'KFSQQVEHR', 'SFEVRVREH', 'SFEVRVHRE', 'KFSQQVREH', 'SFEVRVHWV', 'SFEVRVHVW', 'KFSQQVWHV', 'KGFQASVEHR', 'SHPPSLVEHR', 'SFRDLVEHR', 'SIPSHPVEHR', 'SFDRLVEHR', 'SDFLRVEHR', 'SFQQKVEHR', 'KGFQASVREH', 'SFDRLVREH', 'SFRDLVREH', 'SHPPSLVREH', 'SIPSHPVREH', 'SDFLRVREH', 'SFQQKVREH', 'KGFQASVWHV', 'SIPSHPVWHV', 'SFDRLVWHV', 'SHPPSLVWHV', 'SFRDLVWHV', 'SFEVRVWHV', 'SDFLRVWHV', 'SFQQKVWHV', 'KSYIDFSVIS', 'KSYIDFSVLS', 'YRALVHERD'] non truth: ['KQFSQVHER', 'SRFVEVHER', 'KQFSQVEHR', 'KAQQYVHER', 'KSQQFVHER', 'SRDFLVHER', 'SLFDRVHER', 'SRLDFVHER', 'SIRFDVHER', 'SLDRFVHER', 'SFLRDVHER', 'SDRFIVHER', 'SLFRDVHER', 'SDLFRVHER', 'SKQQFVHER', 'SWVVFVHER', 'SVFVWVHER', 'KQFSQVRHE', 'KQFSQVHRE', 'KQFSQVHWV', 'KAQQYVEHR', 'KSQQFVEHR', 'SLFDRVEHR', 'SRLDFVEHR', 'SRDFLVEHR', 'SDRFIVEHR', 'SFLRDVEHR', 'SIRFDVEHR', 'SLDRFVEHR', 'SLFRDVEHR', 'SDLFRVEHR', 'SKQQFVEHR', 'SWVVFVEHR', 'SVFVWVEHR', 'KAQQYVRHE', 'KSQQFVRHE', 'SLFDRVRHE', 'SRDFLVRHE', 'SRLDFVRHE', 'SFLRDVRHE', 'SIRFDVRHE', 'SLFRDVRHE', 'SLDRFVRHE', 'SDRFIVRHE', 'SRFVEVRHE', 'SDLFRVRHE', 'SKQQFVRHE', 'SWVVFVRHE', 'SVFVWVRHE', 'SVFVWVMKY'] Truth: ['KPDSAFAMLLAAEGLQGVALQQLEFSSP', 'TLKIFRDGEEAGAPGQLVAVFWDKSSP', 'KPCLKHTCMKFYADKMRKLFSFVP', 'KPASADFQRASKATSPSTLVSTGPSSRSP', 'ITFQCKHTSALSSHVLNKDGLGHIQSP', 'TLFMLFIFTEGKGSRPKNMTPYRSP', 'KKYDEELEERLVEWINSRVGVMVP', 'KKYDEELEERLVEWIGNNITMLVP', 'TITIQDTGIGMGISPGQLVAVFWDKSSP', 'EAKSPAEPKSPAEAKSPAEVKSPAEAKSP', 'KKYDEELEERLVEWIVVQCGKEVP', 'KKYDEELEERLVEWIVVQCGEKVP', 'KKYDEELEERLVEWIVVQCGAVSVP', 'KKYDEELEERLVEWIVVQCADKVP', 'KKYDEELEERLVEWIVVQCGLGSVP', 'KKYDEELEERLVEWIVVQCGSVAVP', 
'KKYDEELEERLVEWIVVQCGVASVP', 'KKYDEELEERLVEWIVVQCVTNVP', 'KKYDEELEERLVEWIVVQCNTVVP', 'KKYDEELEERLVEWIVVQCSNIVP', 'KKYDEELEERLVEWIVVQCNLSVP'] non truth: ['TLLTVLSYGGTICGSIVSNKWDSKGRY', 'TTLISLFSTTVSPHNLFSSTMHSLLSP', 'TLFRLPHGEMVNAIVSNKWDSKGRY', 'QKQPIPARVEEETHPRVTSVSASSTSP'] Truth: ['HVLETFMTKIVTNLKYWGRCEPAQIFGFLLLLFQGTRC', 'DFQANLAQMEEKPVPAAPVRQEPEAVRTRSLRDRLVQDI', 'APVLHPLSRPGSQERLDTMMPEFQKSSVRIKNPTRVEEI'] non truth: ['PAPPSQMWEQQPSGGRPYAAVLLLSPSPPLSQARTALLANLY', 'FDQAYKYVPSFAELFIDGKEQLKGRLRDLVQLVQMKEA'] Truth: ['SLGRQLGIARPR', 'SLRGQLGIARPR', 'SIGRQLGIARPR', 'SIRGQLGIARPR', 'SIRKWPPKAQI', 'SIVHRVSRIRT', 'SIRHVVSRIRT', 'SLRHVVSRIRT', 'SIVHRVRTRSL', 'SIVHRVRSTRI', 'SIVHRVRSLRT', 'SIRVHVKVWIS', 'SIVRHVKVWIS', 'SIVHRVKVWIS', 'SLRVHVKVWIS'] non truth: ['SRRVPGALRPKS', 'SRRVPGALRPSK', 'SRRVPGALRAQI', 'SRRVPGARLKPS', 'SRRVPGARIQAL', 'SRRVPGARLQAL', 'SRRVPGARIAQL', 'SRRVPGARLLAQ', 'SRRVPGARIALQ', 'SRRVPGARLALQ', 'SRRVPGLARALQ', 'SRRVPGARIAIQ', 'SRRVPGARIIAQ', 'SRRVPGAIRIAQ', 'SRRVPGALRKPS', 'SRRVPGALRKSP', 'SRRVPGALRLQA', 'SRRVPGALRAQL', 'SRRVPGALRLAQ', 'SRRVPGALRALQ', 'SRRVPGALRIAQ', 'SRRVPGALRQIA', 'NVIRWPVKSPK', 'PTKRWPVKSPK', 'KPTRWPVKSPK', 'GGVLRWPVKSPK', 'NVIRRVSQRVP', 'KPTRRVSQRVP', 'PTKRRVSQRVP', 'GGVLRRVSQRVP'] Truth: ['PSKEHPYDAAKDSILRRAKIKGFPTIKIFQ', 'VPVSNIAPAAVPQRESRGRWRGRPRSRRAAT', 'EEKPVPAAPVPSPVAPAPVPSRRNPPGGKSSLVLG', 'PVVPERGKKDTIVNKTLKSLNVESNFITGTGI', 'SPKKLISPCMLRDSDSILETLQRKLGIKVGE', 'PSKAMLRLLQPPSTDSYMVIVAVLGVLGAMAII', 'PSKAMLRLLQDICGPGTKKVHVIFNYKGKNV', 'PVAFQRNARRSEGQAAKEFIAWLVKGRGRR', 'PSSPAARRYVLRDSDSILETLQRKLGIKVGE', 'SPSPLRGNVVYLEGQAAKEFIAWLVKGRGRR', 'PVVPIKPGRKFISILLCLYGADGNIVMTQSPK'] non truth: ['SPKDKRTSRAFKNETRALQLWKRATQGLPA'] Truth: ['EKIVVLLQRLKPEIK', 'KKGRVIPRRSGRLLAV', 'KKGRVLIKRCKAPLVP', 'KDLLVLLQRLKPEIK', 'KKRGVIPRRSGRLLAV', 'KKRGVLIKRCKAPLVP', 'KKLTPLLKANVEKPVK', 'KKDVPAKPVDKVKLIK', 'KLINLIEQGIPKAKLK', 'KKTLKVVDKYKLSKK', 'KKVVKITDKYKLSKK', 'KKKLISVDKYKLSKK', 'KKLIDLLSIHKLKEK', 'KKGLVSGIKNIPIPTLK'] non truth: ['KVALGLSTLVKKGPGPLK', 'KKEGPKLILNLIQAIK', 'KKEGPKLILNLKSPLK', 'KKQPSLPVIEKIGKLK', 'KVALGLSTLVKKNPPLK', 'KKEGPKLILNAVGVVLK', 'KKEGPKLILNLIQALK', 'KKEGPKLILNLIAQLK', 'KKEGPKLILNLQALLK', 'KKEGPKLILNLIAAGLK', 'KKEGPKLILNLIAQIK', 'KKEGPKLILNLPKSLK', 'KKEGPKLILNLSPKLK', 'KKEGPKLILNLPKSIK', 'KKEGPKLILVVLPMLK', 'KKFKKLEKTTKSVLK', 'KKELIYVVKFKLGLK', 'KKLVKEKGPPNTLIIK', 'KKPQSLPVIEKIGKLK'] Truth: ['EDGAELEALFTKELEKVY', 'EDAGELEALFTKELEKVY', 'DEQELEALFTKELEKVY', 'DEQELEALFTKELEKVY', 'EDQELEALFTKELEKVY', 'QEDELEALFTKELEKVY', 'EQDELEALFTKELEKVY', 'EDGAERAAPFTLEYRVFL', 'EDAGERAAPFTLEYRVFL', 'EDQERAAPFTLEYRVFL', 'DEQERAAPFTLEYRVFL', 'GGEEELEALFTKELEKVY', 'EQDERAAPFTLEYRVFL', 'QEDERAAPFTLEYRVFL', 'EDGALELGGSPGDLQTLALEV', 'DEQLELGGSPGDLQTLALEV', 'EDQLELGGSPGDLQTLALEV', 'EDAGLELGGSPGDLQTLALEV', 'GGEEERAAPFTLEYRVFL', 'EDGAQFRRYLEKSGVLDT', 'EDAGQFRRYLEKSGVLDT', 'KQNKKKVEEVLEEENHC', 'EQDLELGGSPGDLQTLALEV', 'QEDLELGGSPGDLQTLALEV', 'QEDQFRRYLEKSGVLDT', 'EQDQFRRYLEKSGVLDT', 'NEELELGGSPGDLQTLALEV', 'GGEELELGGSPGDLQTLALEV', 'SELIVRITSLEVENQNHC', 'NEEQFRRYLEKSGVLDT', 'GGEEQFRRYLEKSGVLDT', 'KVAQPKEVYRQQQQNHC', 'SSEEGYLERIQWKAAGFL', 'SSEEGYLERIQRFGLNVS', 'SELNGNKEPLLNFLEMVH', 'SELNGKNEPLLNFLEMVH'] non truth: ['DEQEILVFHSRKPDKNE', 'QEDEILVFHSRKPDKNE', 'AGEDEILVFHSRKPDKNE', 'EQDEILVFHSRKPDKNE', 'KKEVSYYLVESKKEHCN', 'KKEVSYYLVESKKECHN', 'KKEVSYYLVESKKEHNC', 'SSPDDILVFHSRKPDKNE', 'SSEEKSRSADRRSKGYLQ', 'SEESFTPVAERRSKGYLQ'] Truth: ['VEDILNVLGMLPYL', 'VEDILNVLGMIPYL', 'VEDILNVLGMIYPL', 'VEEVRKNKEPTFI', 'KKKEGDPKQSAPYI', 'KKQESAQPATKYPI', 'LDELKRVGGGSFPTI', 'VEEVRKNKEFTPI', 
'VEEVRKNKETFPI', 'KKEDEVQAIRYPL', 'VEEVRKNKEFPTI', 'VEEVRKNKEFPTL', 'VEEVRKNKEPFTL', 'DEIVLVGVYANRQL', 'KKKEGDPKGGSFPTI', 'VEVEATVMRTLDLL'] non truth: ['DIELQQFILPPYL', 'DIELQQFILPYPI', 'DLEKLERQGKPYI', 'DLEKLFPLPSGPYL', 'EVEKVITRNQYPI', 'EVEKVITRNQYPL', 'DLEKLEARNKPYL', 'EVEKVITRNQPYI', 'EVEKVITRNQPYL', 'DLEKLFPMTLTAPL', 'KADKLEQIDRPYI', 'KADKLEQIDRPYL', 'KKFYCLVCLVYPL', 'SLSLDLGPVCLVYPL', 'DIELQQFILPYPL', 'EVEKLNFTARPTAL', 'KKLNEEALDRPYL', 'KKLNEEALDRPYI', 'KKLNEEALDRYPL', 'KKLNEEALDRYPI', 'KADKLEQIDRYPI', 'KADKLEQIDRYPL', 'EVEVPLIIGSSFWI', 'LDEVPKIETVWFL', 'IDELGLFSLCPLVGL', 'IDELGLFSLGCPLVI'] Truth: ['EVINVLGME', 'EVNLVLGME', 'EVLNVLGME', 'QVVDILGME', 'INVEVLGME', 'NIVEVLGME', 'LNVEVLGME', 'VQVEVLGME', 'TPKDILGME', 'TKPEVLGME', 'DILNVLGME', 'DILNVLGME', 'DLVQVLGME', 'VEQVVLGME', 'EVINVIGME', 'EVNLVIGME', 'EVVQVLGME', 'QVVDIAVME', 'INDVLIGME', 'IAAAEVLGME', 'IAAAEVIGME', 'DLVQVIGME', 'VEQVVIGME', 'TVGPISVAME', 'INDVLAVME', 'QVVDIVAEM', 'QVVDIAVEM', 'QVVDIGLEM', 'TPKDIAVME', 'IAAAEVAVME', 'EVINVAVME', 'EVNLVAVME', 'TTVPGVLGME', 'DILNVLGEM', 'DLVQVLGEM', 'VEQVVLGEM', 'EVINVLGEM', 'TPKDIVAEM', 'TPKDIAVEM', 'TPKDIGLEM', 'TVGPISLGME', 'TVGPISIGME', 'DILNVAVME', 'DLVQVAVME', 'VEQVVAVME', 'EVNLVAVEM', 'EVINVAVEM', 'EVNLVGLEM', 'EVINVGLEM', 'EVAAAVLAME'] non truth: ['LNVDLIGME', 'EVGGVILGME', 'NIVEVIGME', 'GGIVEVLGME', 'GGIVEVIGME', 'KTPDLLGME', 'TVPASVAVME', 'LNVDLIGEM', 'LNVDLLGEM', 'NIVEVAVME', 'GGIVEVAVME', 'VQVEVAVME', 'NLVDILGME', 'INVDILGME', 'LNVDILGME', 'VQVDILGME', 'QVVDILGME', 'EVNLVAVME', 'NIVEVIGEM', 'GGIVEVLGEM', 'GGIVEVIGEM', 'NIVEVLGME', 'VQVEVLGME', 'VQVEVIGME', 'EVNLVLGME', 'EVNLVIGME', 'LNVDLVAME', 'LNVDLAVME', 'LNVDLGLME', 'LNVDLGIME', 'KTPDLIGME', 'NLVDLLGEM', 'INVDLLGEM', 'VQVDLLGEM', 'QVVDLLGEM', 'INVEVAVME', 'LNVEVAVME', 'NLVEVAVME', 'GGIVEVGIME', 'GGIVEVGLME', 'TPKDILGME', 'TKPDILGME', 'EVGGVILGEM', 'TVPASVLGME', 'TVPASVIGME', 'NIVEVLGEM', 'VQVEVLGEM', 'VQVEVIGEM', 'EVNLVLGEM', 'EVNLVIGEM'] Truth: ['DTPNRQYL', 'SEPNRQYL', 'SENTTWRL', 'SERNTTWL', 'CIVGMLEEI'] non truth: ['SEESIETVI', 'SEESIETVI', 'SETWRSLQ', 'SEQKFDPR', 'STLSIEEVE', 'SSKQSHGYL', 'SSTKNHGYL'] Truth: ['DVESRQAPYENLN', 'DVESRQAPYENLN', 'DVESRQAPYENIN', 'DVESRQAPYENLGG', 'DVESRQAPYEGGLN', 'MLQREEAESTNIN', 'MLQREEAESTNLN', 'MLQREEAESTNLGG', 'MLQREEAESTGGLN', 'KADIGCTPGSGTSGGLN', 'LNSWTDQDSKNLN', 'DVESRQAPYENNL', 'GIVTNWDPYENLN', 'AKQEMNESEKNLN', 'AKQEMNESVASNLN', 'LNSWTDQDSKNIN', 'LNSWTDQDSKNLGG', 'LNSWTDQDSKGGLN', 'MLQREEAESTNNL', 'KADIGCTPSSGGASGIGG', 'DVESRQAPYELNN', 'VDGTTNGCGLVASNLN', 'LNSWTDQDSKNNL', 'MLQREEAESTLNN', 'MLQREEAESASGIGG', 'EQSDKGSSKGLGPMN', 'KADIGCTGEREELN', 'LNSWTDQDSKLNN', 'EQSDKGSSKNFGPGP'] non truth: ['SVAWSNGDGLSTNIN', 'VQESTMGQVNTGGIN', 'VQESTMGQVNTGGLN', 'VQESTMGQVNTGGLGG', 'AGVYDFPGPSSPNLN', 'AGVYDFPGPSSPNIN', 'AGVYDFPGPSSPNLGG', 'AGVYDFPGPSSPGGLGG', 'VQESTMGQVNTGGIGG', 'NLWSSNDGLSTNIN', 'SVAWSNGDVASTNIN', 'VQESTMGQVNTNLN', 'VQESTMGQVNTNIN', 'VQESTMGQVNTNLGG', 'KESWGGGDGLSTNIN', 'SLQGVCGGDGLSTNIN', 'ASVCVQGGDGLSTNIN', 'AGVYDFPGPSSPNNL', 'QVCEKGESNSLNLN', 'QVCEKGESLNSNIN', 'NFDWVDGILDNIN', 'NFDWVDGILDNLN', 'NFDWVDGILDNLGG', 'NFDWVDGILDGGLGG', 'VSTYHGSTVENQVN', 'DVVDTSQRGMAELN', 'DVVDTSQRGMAEIN', 'KESWGGGNSLESGIN', 'KESWGGGNLGSDTLN', 'VQESTMGQVNTNNL', 'VQESTMGQVNTGGNL', 'QVCEKGSTVENQVN', 'AGVYDFPGPSSPLNN', 'AGVYDFPGPSSPQVN', 'NFDWVDGILDNNL', 'EGSEESIETVIDIGG', 'EGSEESIETVIDLGG', 'EGSEESIETVIDIN', 'DVVDTSQRGMAELGG', 'EGSEESIETVIDLN', 'VQESTMGQVNTLNN', 'VQESTMGQVNTQVN', 'NFDWVDGILDQVN', 'NFDWVDGILDLNN', 'EGSEESIETVIDNL', 'DVVDTSQRGMAENL'] Truth: ['SSSVSALLKEPTP', 'DFVSHNVRTKL', 
'DFVSHNVRTKL', 'SHHSIIAKFLY', 'FDVSHNVRTKL', 'SFVDLLYRRF', 'SFDVLLYRRF', 'RWLAALRDDSL', 'RWLAALRDDLS', 'SHHGQLKVALEP', 'SFHVDINKRLS', 'SFVHDINKRLS', 'SHVDFRLKNSI', 'SHVDFNKRLSL', 'SHVDFINKRLS', 'SHVDFQRVKLS', 'SHVDFVKQRSL', 'SHVDFRINKTV', 'SLLATALTSPVQD', 'SHSHIIAKFLY', 'SHVRGVDFKALS', 'HSSHIIAKFLY', 'LDFVSHNVRTK', 'FVSHNVRTKLD', 'TADIVIQLLDTN'] non truth: ['SSSVEAPALALASL', 'SSSVSEALPVLNL', 'SHRQELAKFVT', 'LILSVNVQDDLS', 'LILSVNVQDDSL', 'LILSVNVQDDSI', 'LILSVNVQDDIS', 'LILSVNVQDDTV', 'LILSVNVQDDVT', 'SSSFKTISFRR', 'SLPKLVTQDDLS', 'SLPKLVTQDDIS', 'SLPKLVTQDDSL', 'SLPKLVTQDDSI', 'SLPKLVTQDDVT', 'SLPKLVTQDDTV', 'SHRGAELAKFVT', 'SHLNLWAKFLS', 'SHINLWAKFLS', 'SHGVLAWAKFLS', 'SHRVSSPFAKIS', 'SHRVSSPAKFLS', 'SHLHYLAKFVT', 'SHREQLAKFVT', 'SHRVSSPKAFVT', 'SHGNNKLAKFVT', 'SHRVSSPAKFVT', 'SSTPPPAISRRF', 'SSPPTAPISRRF', 'SSTPPPALSFRR', 'SSPPTAPLSFRR', 'SSPPTAPISFRR', 'SSPPPATISFRR', 'SSFTKSISRRF', 'SSFTKSLSFRR', 'SSFTKSISFRR', 'SSYKATISFRR'] Truth: ['NEELGEYLARMLVKYPELL', 'ENELGEYLARMLVKYPELL', 'GDAELGEYLARMLVKYPELL', 'DQELGEYLARMLVKYPELL', 'QDELGEYLARMLVKYPELL', 'GEGELGEYLARMLVKYPELL', 'GGEELGEYLARMLVKYPELL', 'DQELGEYLARMLVKYPELL', 'GADELGEYLARMLVKYPELL', 'EGGELGEYLARMLVKYPELL', 'NEEIGEYLARMLVKYPELL', 'QDEIGEYLARMLVKYPELL', 'ENELTIAHRLNTVLNCDLVL', 'NEELTIAHRLNTVLNCDLVL', 'GDAELTIAHRLNTVLNCDLVL', 'DQELTIAHRLNTVLNCDLVL', 'QDELTIAHRLNTVLNCDLVL', 'GADELTIAHRLNTVLNCDLVL', 'NEEITIAHRLNTVLNCDLVL', 'EGGELTIAHRLNTVLNCDLVL', 'QDEITIAHRLNTVLNCDLVL', 'GGEELTIAHRLNTVLNCDLVL', 'GEGELTIAHRLNTVLNCDLVL', 'DQEITIAHRLNTVLNCDLVL', 'NEEIVYLFLNAIANQLRYP', 'NEEIQQLQPRADLLQDITR', 'NEELQQLQPRADLLQDITR', 'ENELQQLQPRADLLQDITR', 'ENEIQQLQPRADLLQDITR', 'GDAEIQQLQPRADLLQDITR', 'GDAELQQLQPRADLLQDITR', 'QDEIQQLQPRADLLQDITR', 'DQEIQQLQPRADLLQDITR', 'GGEEIQQLQPRADLLQDITR', 'GADELQQLQPRADLLQDITR', 'GADEIQQLQPRADLLQDITR', 'EGGEIQQLQPRADLLQDITR', 'DQELQQLQPRADLLQDITR', 'QDELQQLQPRADLLQDITR', 'GEGEIQQLQPRADLLQDITR', 'GDAERLEARDRGSTLPRRQP', 'QDERLEARDRGSTLPRRQP', 'DQERLEARDRGSTLPRRQP', 'GADERLEARDRGSTLPRRQP', 'GEGERLEARDRGSTLPRRQP', 'EGGERLEARDRGSTLPRRQP', 'GGEERLEARDRGSTLPRRQP', 'ENERLEARDRGSTLPRRQP', 'NEERLEARDRGSTLPRRQP', 'DQQILNLASGVVDLLQFYFP'] non truth: ['ENELLVPSALLGSSKKQLCHN', 'DQEILVPSALLGSSKKQLCHN', 'QDEILVPSALLGSSKKQLCHN', 'NEEILVPSALLGSSKKQLCHN', 'NEELRQPVQCKLLEKGLGEP', 'ENELRQPVQCKLLEKGLGEP', 'DGAELRQPVQCKLLEKGLGEP', 'DQELRQPVQCKLLEKGLGEP', 'QDELRQPVQCKLLEKGLGEP', 'NEEIRQPVQCKLLEKGLGEP', 'QDEIRQPVQCKLLEKGLGEP', 'GEGELRQPVQCKLLEKGLGEP', 'DQEIRQPVQCKLLEKGLGEP', 'NEELSRPLQAPLSELRNNSI', 'ENELSRPLQAPLSELRNNSI', 'DGAELSRPLQAPLSELRNNSI', 'QDELSRPLQAPLSELRNNSI', 'DQELSRPLQAPLSELRNNSI', 'NEELTVHNDVIEDLTLLVTL', 'ENELTVHNDVIEDLTLLVTL', 'DGAELTVHNDVIEDLTLLVTL', 'QDELTVHNDVIEDLTLLVTL', 'DQELTVHNDVIEDLTLLVTL', 'NEEISRPLQAPLSELRNNSI', 'QDEISRPLQAPLSELRNNSI', 'DQEISRPLQAPLSELRNNSI', 'GEGELSRPLQAPLSELRNNSI', 'NEEIEVTYKSEVLQKTKEI', 'NEELEVTYKSEVLQKTKEI', 'ENELEVTYKSEVLQKTKEI', 'DGAELEVTYKSEVLQKTKEI', 'DQELEVTYKSEVLQKTKEI', 'QDEIEVTYKSEVLQKTKEI', 'QDELEVTYKSEVLQKTKEI', 'DQEIEVTYKSEVLQKTKEI', 'NEEITVHNDVIEDLTLLVTL', 'GEGELTVHNDVIEDLTLLVTL', 'QDEITVHNDVIEDLTLLVTL', 'DQEITVHNDVIEDLTLLVTL', 'KVEQATLQGIELQLDKLHCN', 'SSPDVLVPSALLGSSKKQLCHN', 'ASPETLVPSALLGSSKKQLCHN', 'KKQQEAIQELLELDKLHCN', 'QDQLVQLLQQPQGCVAIANTL', 'GEGQLVQLLQQPQGCVAIANTL', 'DGAQLVQLLQQPQGCVAIANTL', 'NEQLVQLLQQPQGCVAIANTL', 'ENQLVQLLQQPQGCVAIANTL', 'SSPDVRQPVQCKLLEKGLGEP', 'KKNDRAANGALENGATTPEPLL'] Truth: ['DLIEMLKAGEKPNGLVEPEQDLELAV', 'TLTHISAGEVLNILSSDSYSLFEALAV', 'TLTHISAGEVLNILSSDSYSLFEAIAV', 'DLIEMLKAGEKPNGLVEPEQDLEIAV', 'DLIEMLKAGEKPNGLVEPEQDLELAV', 
'LDQIEELFFKQVAEADKL', 'IDQIEELFFKQVAEADKL', 'DIQIEELFFKQVAEADKL', 'DLQIEELFFKQVAEADKL', 'VEVEVGLEPHLLYSVLGEGP', 'DIVEVGLEPHLLYSVLGEGP', 'VELDVGLEPHLLYSVLGEGP', 'LDDLVGLEPHLLYSVLGEGP', 'TVSIRENGPALGASNLAHVMV', 'TVQLGPRGPKGPPGPPGSPGEPG'] Truth: ['DIASLTLLEIS', 'DIASLTLLELS', 'DIASLTLLEIS', 'TVRGSLGRELS', 'TKEELIELLS', 'TKEELIELIS', 'TELKELLELS', 'DIASLTLLEVT', 'DIASLTLLETV', 'TKEELIELVT', 'TKEELIEILS', 'TKEELIEIIS', 'TVRGSLGREIS', 'TKEELIELTV', 'TVRGSLGREVT', 'TILTMWLPIS', 'IASLTLLEISD'] non truth: ['TLTLGLDELIS', 'TLTLGLDELLS', 'TTEVIGELILS', 'TTEVIGELIIS', 'VTRSGRGIELS', 'ISGELTILELS', 'ELLLASSIELS', 'VTRSGRGIEIS', 'TLTLGLDEILS', 'TLTLGLDEIIS', 'TLLTDTTLPLS', 'FAKARIPTSPS'] Truth: ['SPTHKVKGA', 'SPTHKVKQ', 'SPTHVKKQ', 'DPLPRAKQ', 'SSKHLPKQ', 'SSKHPIKQ', 'SSHKPIKQ', 'SSHKLPKQ', 'SSHKPLKQ', 'SSKHPLKQ', 'KGLVSGNHL', 'EKVKNGHI', 'KKIDNGHI', 'KKEVNGHI', 'KKLDNGHI', 'KKVENGHI', 'KKDINGHI', 'KKDLNGHI', 'KGLVSNGHI', 'EPVPRAKQ', 'SSKHLPQK', 'SSHKLPQK', 'KQPAQPKQ', 'EKVKHGVQ', 'EKVKGNHL', 'EKVKHNGL', 'KGLVSHGVQ', 'KKDLHGVQ', 'KKLDHGVQ', 'KKDIHGVQ', 'KKIDHGVQ', 'KKEVHGVQ', 'KKVEHGVQ', 'KGLVSHNGL', 'KKEVGNHL', 'KKDLGNHL', 'KKLDGNHL', 'KKDIGNHL', 'KKVEGNHL', 'KKIDGNHL', 'KKEVHNGL', 'KKLDHNGL', 'KKIDHNGL', 'KKVEHNGL', 'KKDLHNGL', 'KKDIHNGL', 'HGLVKDKAG', 'KKEAAAGHI', 'KDLNKGHI', 'KKSPSAGHI'] non truth: ['SSHKPIKAG', 'KKEVVGHQ', 'KKEVVHGQ', 'SPHTVKKQ', 'SPHTKVKQ', 'SPHTKVKGA', 'SPTHKVKGA', 'SPTHVKKQ', 'SPTHKVKQ', 'SSHKPLKGA', 'SSHKPIKQ', 'SSHKPLKQ', 'SSHKLPKQ', 'KSHPSLKQ', 'SSHKAPGKL', 'SSHKGKPAL', 'EVKKVGHQ', 'EVKKHGVQ', 'EVKKVHGQ', 'EVKKGHNL', 'EVKKHGNL', 'EVKKGNHI', 'KKDLVGHQ', 'KKDIVGHQ', 'KKDLHGVQ', 'KKVEVGHQ', 'KKDIHGVQ', 'KKIDHGVQ', 'KKLDVGHQ', 'KKIDVGHQ', 'KKLDHGVQ', 'KKEVHGVQ', 'KKVEHGVQ', 'KKLDVHGQ', 'KKDIVHGQ', 'KKIDVHGQ', 'KKDLVHGQ', 'KKVEVHGQ', 'KKDIGHNL', 'KKLDGHNL', 'KKEVGHNL', 'KKVEGHNL', 'KKIDGHNL', 'KKDLGHNL', 'KKIDGNHI', 'KKVEGNHI', 'KKDLGNHI', 'KKLDGNHI', 'KKEVGNHI', 'KKDIGNHI'] Truth: ['KKSSSVKPVVDFT'] non truth: ['APAELQVNLLEPK', 'APAELQVLLNEPK', 'TFLTFVLLNEPK', 'APAELQVQLVEPK', 'APAELQVNIIEPK', 'APAELQVNLIEPK', 'APAELQVNLIPEK', 'APAELQVNLIGLPS', 'APAELQVLNLIGSP', 'APAELQVNLILGSP', 'APAELQVNLILGPS', 'TFLTFVLNLIGSP', 'TFLTFVQLVEPK', 'TFKVIAFLEPEK', 'TFKVIASKSSPEK', 'TTFFPKLTLPEK', 'APAELQVNLISVAP', 'TFFTKPTLLEPK', 'TFFTKPLTLEPK', 'TFFTKPLTLPEK', 'KKGFNKDRGMLK'] Truth: ['RDAIIFVST', 'DRAIIFVST', 'DRALIFVST', 'RDALIFVST'] non truth: ['RDALFIVST', 'RDAIFIVST', 'RDAIFVLST', 'RDALFVLST', 'DRALFIVST', 'DRALFVLST', 'KGKNEYALV', 'LRTGYDVVV'] Truth: ['DFYPLSVSARARS', 'DFYPPMKPFILT', 'DFYAIQNKDRAK', 'DFYVDQQRKAKA'] non truth: ['FDYPDFKIAPQK', 'FDYPQVTKLNPF', 'DFYPQVTKLNPF', 'FDYPLFEPARLT', 'DFYPDFKIAPQK', 'DFYPLFEPARLT', 'FDYPLEPPIHIQ', 'DFYPLEPPIHIQ', 'FDYKPDFKIAPQ', 'DFYKPDFKIAPQ', 'FDYASKQTWPLL', 'DFYASKQTWPLL'] Truth: ['AKISKPAPYWEGTAVINGEFPEQNGSIVSQPNSRLKEA', 'NNRPPSTWLTAYVVKVFSMSGALDVLQMKEEDVLKF', 'EVEVTFRVTGRGRIGADGLAIWYTENQSLATLESVFQ', 'DPKKTIQMGSFRINPDGSQSVVEVPYARSEAHLTELL', 'AKISKPAPYWEGTAVINGEFKELYTPQSLATLESVFQ', 'YGFLKDMGLKVFTNLNISVASSQDSTRPSRVRQNFH', 'AKISKPAPYWEGTAVINGASVASSQDSTRPSRVRQNFH', 'EVEVTFRVTGRGRIGADGLAIWYTENQSLSGVFPSSLT', 'EVEVTFRVTGRGRIGADGLAIWYTENQGLTVDLYEGK'] non truth: ['SQDLYQPLLEALVKETNKQADEVKSPSSLLFSPGPFH', 'AEHNKGLLVPDSIPQEQRSQEIFIQNNLNYLYRSV', 'SCLNPCKLVELIGEVTKRNETWVDVTVATLHWSSVSV', 'AEHNKGLLVPDSIPQEQRSGDLEKSALCAMRIRWKH', 'AEHNKGLLVPDSIPQEQRSGEATRETRPQEKGNKNKA', 'CQDLLVSGVGGTVLTGVQSGAGVTSVQLRSNVEIFYHKGH', 'CQDLLVSGVGGTVLTGVQSGAGVTSVESEIPINGEVVTRER', 'HQQPGQVEKEKIPIDVEPYAVVARDTSSFTTDKSLTV', 'CQDLLVSGVGGTVLTGVQSGAGVTSVEWNPLIENIWIVMA', 
'LACELDKQTQKKNLDERLLDQDGIDKTSNILEPVTE', 'LACELDKQTQKKNLDERLNIEDGIDKTSNILEPVTE', 'DLKFDLYLTPTRTYVEPYAVVARDTSSFTTDKSLTV', 'EVRTQKEQCQQKLERVEHRNLSVSEAALTIFKDEA', 'HQQPGQVEKEKIPIDVVGAVPQNCIAEYKNVALDFVH', 'KHALLAAAAHADGATTVLDCKNGVLVLEGCYTVGKVEIFD', 'CQDLLVSGVGGTVLTGVQSGAGVTSVVARDTSSFTTDKSLTV', 'LDSRDLLLELDEDSIVKVVPQNCIAEYKNVALDFVH', 'VEPAPLTDALKPYMSQEKLSSVQKSCPTGIVHAKTEMV', 'EVIKAAQTVKTIDDSQSEMSTLKRMAMIKEFCKKAGP', 'VTEKGEVRCNKKQINVKNPEFNMTPNHINIIPSFSS', 'KERRTMNERAQQAKTRMFSQLSWCLLPVQCITFL', 'EVEVQLSAQASLTELDVRMGDLEKSALCAMRIRWKH'] Truth: ['FRAIDKVYDPK', 'FRFSDLLATGPK', 'FRFSDLLSAAPK', 'RFFLDVAQDLK', 'RFFPGTGTIDIK', 'RFFPQTTVDLK', 'FRFAPETKLDK', 'RFFPQISTLDK', 'RFFAPETKLDK', 'FRFATVSAPDLK', 'RFFATVSAPDLK', 'FRFAEAAIAVEK', 'RFFAEAAIAVEK', 'KKNGADPAIYFK', 'RFAFPETKLDK', 'RFANFIEVEVK', 'RFFDTQVTIPK', 'FRFSDLLQTPK', 'RFFLESAGLSPK', 'RFFLESAGISPK', 'FRFASLDASLPK', 'FRFSDKVIDPK', 'RFFSDKVIDPK', 'RFFLESAGSLPK', 'RFFQELVGLDK', 'FRFLDVAQDLK', 'FRALTEMKIDK', 'RFADVFLQDIK', 'RFAFEAAIAVEK', 'RFAFTVSAPDLK', 'RFANFEIVEVK', 'RFANEFLVVEK', 'RFANFELVDLK', 'RFAGYITPAVEK', 'RFVAEKVYDPK', 'RFFQELVSTPK', 'FRFASEGIISPK', 'FRFSDLDKVPK', 'RFFLESKEAPK', 'RFFLESINTPK', 'FRFSDLDVKPK', 'FRFSDLISAAPK', 'RFFLESAKEPK', 'RFFLESAEKPK', 'RFFLESEAKPK', 'RFAASLISMEVK', 'RFAASLISMDIK', 'RFAASLISMVEK', 'FRAFPETKLDK', 'RFADVLGFADIK'] non truth: ['KKPFDERFIAT', 'KKPFDERFTAL', 'RFFRIRCEPK', 'FRFTLNESIPK', 'RFFDIDVKSPK', 'RFFNELVADLK', 'RFFNEVIALDK', 'RFFAGGIVEVEK', 'FRFAGGIVEVEK', 'RFFTKVDGPITA', 'FRFADLNLDLK', 'RFFNEALVLDK', 'RFFNEIAVIDK', 'FRFTLSGPAVEK', 'RFFGLQVEVEK', 'RFFQGIVEVEK', 'FRFDNVVVVEK', 'RFFNAIVEVEK', 'KKLYTFNYFK', 'KKYQSFIYFK', 'KKGSFSFIYFK', 'KKPPSDNKFYK', 'RFVFQTPTLDK', 'RFVFQIDADIK', 'RFFDSVKAPTAL', 'FRFSGIKDPSVV', 'FRFRIRCEPK', 'FRFRKNMQPK', 'RFFRKNMQPK', 'FRFADSALVTPK', 'FRFADIKETPK', 'FRFSGTLAIDPK', 'FRFSGITIDAPK', 'FRFDDKTVVPK', 'RFFDITQLSPK', 'RFFDSQLTLPK', 'RFFALEKESPK', 'FRFTLNLSEPK', 'FRFDIDVKSPK', 'FRFADSSLLAPK', 'RFFTLNESIPK', 'RFFALEKSEPK', 'RFFALQVDEVK', 'FRFDNAILDLK', 'FRFDNALLDLK', 'RFFDSVPKLDK', 'RFFALQVDVEK', 'FRASSMVVVVEK', 'RFADGIVAFLDK', 'FRASSAPFIDIK'] Truth: ['DETPDVDPELLRYLLGRILTGSSEPEAAPAPR', 'DTERLSRGSMAAFLIQTKDNPMKAVGVLAGVMA', 'TNRERNDQKMLADLDDLNRTKKYLEERL'] non truth: ['DETFIRKEQKDRSPRERPVDVSRLYQVM', 'AQARSILYPLDFNQINYGTDKLFGKYWQK', 'AQARSILYPLDFNQINYGTFLDKFYQPVR', 'ESTLHLQSSSIVGFLMELLQNRANETWVKF', 'GVQGLPWAEQSLFAVFLPTQGDVEVEMVFRK', 'ESTLHLQSSSIVGFLMRERPVDVSRLYQVM', 'HKFPNLAVLTCGANTYRERPVDVSRLYQVM', 'TDESPRPRAQLRLPPLGDVTSASSHHHFRRG', 'TEDSLLAERVGGPSIANDVLFDQLKLNSNPHK', 'PKKAQACGIGWSVIGALYFVGPTVFRIANSSCH'] Truth: ['KKQIEELKGQEVSTTQEPIWLTDVP', 'KKDRVTDALNATRAAEEEAVAREKAGP', 'KKFLQGTVEHCIISKESVPDFPLSPP', 'KKFLQGTVEHCIITTQEPIWLTDVP', 'KKIELDRTIMPDGTVVERYSIQAHP', 'KKKAVASEEETPAKSKESVPDFPLSPP', 'KKKAVASEEETPAKTTQEPIWLTDVP', 'SVITKKLEDVKNSPTFKSFEEKVEN'] non truth: ['KVMVEKQLDYGRSDSSALLLTWIFP', 'LPLPPPPPPEPGEEGPNSARIPQSGLRP', 'PIPISRGPCSDYLKEGTIQAITIAQQP', 'KKEGVEPEVTNNLFYVRLQLHTQGP', 'PLIPGFIEETNNLFYVRLQLHTQGP'] Truth: ['EEGLKHEASVADHSLHLSKAKISKPAP', 'LVQCTEHLLKHMMTGNKKLVRVDSV'] non truth: ['EEASRDKRLPKAQQSPPVIPVGDPPAP', 'EEASRDKRLPKAQQSPPVIPVGPDPAP'] Truth: ['GPVDQKFQSIVIGGRGGARGRGRGQGQ', 'PSSPVSRKLSTTQGVSVPLQLKENGE', 'PLLMSISTNLKNSFCAVVAGVVLAQY', 'DPPLAPDDDPDAPAAQLARALLRARL', 'CALLRCIPALDSLGVSVPLQLKENGE', 'PKYEVNGVKPSIGGRGGARGRGRGQGQ', 'CALLRCIPALDSLAPSTAAAPAEEKKV', 'PLTLGIETVGGVMTGVSVPLQLKENGE', 'PLTLGIETVGGVMTGRGGARGRGRGQGQ', 'PLTLGIETVGGVMTAPSTAAAPAEEKKV', 'PLAPDDDPDAPAAQLARALLRARLDP', 'PPLAPDDDPDAPAAQLARALLRARLD', 'ALVPIPPSPDTKDRWRSMTVVPYL'] non truth: ['DPPLVRNVTFVMINIADVHISFLQ', 
'DPPAPSLRHAAKQGKAQGNKPSQLLE', 'PDPAPSLRHAAKQGKAQGNKPSQLLE', 'PDVSLPAPDVPLPAQFITSEIKLQY', 'SPPSKTASTLSALQGFAALRLAHQASQ', 'LPDPSLAKTSPRQGPLFQERPLYQ', 'PSSPLVTPLNIYSGFAALRLAHQASQ', 'PDVSLPAPDVPLPAGFAALRLAHQASQ', 'SPSPPSPLAHAAKQGKAQGNKPSQLLE', 'DPVGCRPKVLGACGLVAPVARQGAAGYL', 'PDHPINLVTYLNHSLVKLNFIYQ', 'IPLPASQLQSSFYILFQERPLYQ', 'SPSPVLKPCRGACGLVAPVARQGAAGYL', 'SPPSNGISIVLGACGLVAPVARQGAAGYL', 'LPPLVERESTACGLVAPVARQGAAGYL', 'IPLPASQLQSSACGLVAPVARQGAAGYL'] Truth: ['DETPDVDPELLRYLLGRILTGSSEPEAAPAPRRL', 'PLSEERRPSPKESKEADVATVRLGEKRSHHLAH'] non truth: ['ESEQKEPVAPRGKHKETDERLLPQLGWWKFK'] Truth: ['TLQPVPLQATMSAAKLGQLLAATCKELPGPKE', 'AVGVVRLCRRFCHVATPHTFKQLIAGKEPA', 'DVDPELLRYLLGRILTGSSEPEAAPAPRRL', 'TTQVTIPKDLAGSIIGKGGFNQAILVSRHDPA'] non truth: ['SVFVQALPKSSPDLKSRHEIEGTATNLLVAP', 'TTPIVESALLTSLIASAQSSFLELSFSPKVAP'] Truth: ['NLENLAPGTHPPFITFNSEVKT', 'PQKFLSDAVQDLFPGQAIDLHS', 'PQKFLSDAVQDLFPGQAIDLSH', 'NLENLAVLVMQDPSMNLQGLAVG', 'KDINAYNGETPTEKLPGQKVSH', 'DLQNLAPGTHPPFITFNSEVKT', 'LNEINVNIAAVGSFSPSVTNSVHG', 'LNELNVNIAAVGSFSPSVTNSVHG', 'GVAELNVNIAAVGSFSPSVTNSVHG', 'NLENALPGTHPPFITFNSEVKT', 'NLENALVLVMQDPSMNLQGLAVG', 'TVTVLDVNDNRPEFIDKINSH', 'NQDTKKEILKSLDEEIAIDSH', 'LNEANLVLVMQDPSMNLQGLAVG', 'NLNIQAELKSNPTLNEYHTRA', 'LNEAQVPGTHPPFITFNSEVKT', 'LNEANLPGTHPPFITFNSEVKT', 'PQKFLSDAVQDLFPGQIAIDSH', 'KGATPAEVLVMQDPSMNLQGLAVG', 'LNIQNAANVPAGTEVVCAPPTAYI', 'KGATPAEPGTHPPFITFNSEVKT', 'KKTQDQISNIKYHEDKINSH', 'KKEGDPVLVMQDPSMNLQGLAVG', 'SLSNGVLSQKPNPTLNEYHTRA', 'NLNIQAANVPAGTEVVCAPPTAYI', 'TVTVLDVNDNRPEFILNKDHS', 'KKHGCTVLVMQDPSMNLQGLAVG', 'KKHGCTEAVRGQWANLSWELL', 'KKHGCTPGTHPPFITFNSEVKT', 'KKTQDQISNIKYHEILDESH', 'KKTQDQISNIKYHELNKDHS', 'NLNIQPKRTNPPGGKGSGIFDES', 'TVAQGVVSQQLNPTLNEYHTRA', 'LNIQSGQTVGINPTLNEYHTRA', 'KKVRNNNADANPTLNEYHTRA', 'VTVTDGLHSVTHRGRGREPGAHS', 'VTVTDGLHSVTGQWANLSWELL'] non truth: ['NIENIATDLLIFAFVGDLMCIT', 'NIENIAVAAAREWEGEQVLSNL', 'NIENLAVAAAREWEGEQVLSNL', 'LNELNATDLLIFAFVGDLMCIT', 'VQELNATDLLIFAFVGDLMCIT', 'NIENIAIALNNGSNAKLAANWGGT', 'TSVPEDKMWTQLNKLALDLHS', 'TSVPEDKMWTQLNKLALDLSH', 'QNLVEATDLLIFAFVGDLMCIT', 'VQELGGAIALNNGSNAKLAANWGGT', 'QNLVEAIALNNGSNAKLAANWGGT', 'QNLVEAVAAAREWEGEQVLSNL', 'VQELGGATDLLIFAFVGDLMCIT', 'VQELGGAVAAAREWEGEQVLSNL', 'VQERAAVAAAREWEGEQVLSNL', 'NLEVMASTRPTPRSATRCVLHS', 'NIENLSEVDNPVLLGAEKLETE', 'NELALTFYVGRTDSIYLNLSH', 'NELALTFYVGRTDSIYLNLHS', 'NIENISEVDNPVLLGAEKLETE', 'QNLVESEVDNPVLLGAEKLETE', 'QVQNANCIIGQVWPLTGASPMAK', 'QNLNANCIIGQVWPLTGASPMAK', 'QVQNNCPSKPSVREKPICTIQG', 'VQELGGSEVDNPVLLGAEKLETE', 'QNLNNCPSKPSVREKPICTIQG', 'QVQNNQDVMKYKIVELYTNV', 'QNLNNQDVMKYKIVELYTNV', 'INEQLGIALNNGSNAKLAANWGGT', 'NLEVAQIALNNGSNAKLAANWGGT', 'NLVQEATDLLIFAFVGDLMCIT', 'NLVQEAIALNNGSNAKLAANWGGT', 'VQNIEATDLLIFAFVGDLMCIT', 'VQNIEAIALNNGSNAKLAANWGGT', 'INEQLGTDLLIFAFVGDLMCIT', 'NLEVAQTDLLIFAFVGDLMCIT', 'NLEVAQVAAAREWEGEQVLSNL', 'INEQLGVAAAREWEGEQVLSNL', 'NLVQEAVAAAREWEGEQVLSNL', 'VQNIEAVAAAREWEGEQVLSNL', 'VQERASEVDNPVLLGAEKLETE', 'QDLGLQIALNNGSNAKLAANWGGT', 'VQERAFSVAERRSKGYLQMGN', 'QDLGLQTDLLIFAFVGDLMCIT', 'QDLGLQVAAAREWEGEQVLSNL', 'VGAEPVSEMTRREVSELLHTVS', 'KKDPWAGAGSTPPHPHHHGKAGR', 'NLVQESEVDNPVLLGAEKLETE', 'VQNIESEVDNPVLLGAEKLETE', 'SIVTEHTKAREWEGEQVLSNL'] Truth: ['DQQARVLAQLLRAWGSPRASDPPLAPDDDP', 'TEAKESKKDEAPKEAPKPKVEDLAEYLDP', 'TEAKESKKDEAPKEAPKPKVEEYAIEVDP', 'TEETHPVRVISSIEQKTSADGNEKKIEMV', 'ETETTLPIKMDLAPPEDVLLTKMKCINDP', 'ETETTLPIKMDLAPPEDVLLTKEAFEIDP', 'QQARVLAQLLRAWGSPRASDPPLAPDDDPD', 'REIMIAAQKGLDPYNMLPPKAASGTKEDPN', 'ESDPFEVLKAAENKKKEAGGGGVGGPGAKSAAQA'] non truth: ['VLLDCLKDRNVDYTVKFEGTPTKIFEDP', 'PEEDSAKELGPRFICDKRLDRRKGVEDP', 'QSHQIGPKGGGEGLSAASGGVIRTLAGPPALGCDP', 
'KMKEEKRHQRVWDKPNIYESTIDLDP', 'FFDHPKKFVLVTRRDNCHDAQKQLTDP', 'RERCLELFEGLSAASGGVIRTLAGPPALGCDP', 'FFDHPKKFEGLSAASGGVIRTLAGPPALGCDP', 'KMKEEKRHQRVWDKPTEDIPYTTKDP', 'KMKEEKRHQRVWDVLEEFSVIDELDP', 'KMKEEKRHQRVWDKPKGFLDTCQRDP', 'ETIGCCSLKVQSRDAIDIHVIEVTKWTNP', 'LAERFSSTRNPKIGPLDESPSKVDDVSVDP', 'KRSDREIYMVLRAQCKAVIDQYKDTEP', 'KMKEEKRHQRVWDKPNKEHQNAGVADP', 'ETLLREAIHRGDRGAGLKGHDSVPQTESEP'] Truth: ['RVTLLYGSPGATAVLFVFGWITLC', 'KKLEAAEERRKSHATFSWLTPV'] non truth: ['KSEELRRQSRPAAKRVENLTLC', 'KSEELRRQSRPAAKRGPLTTSLC', 'KKAWERTVDQTVELPSSSLLISP', 'KKAWERTVDQTVELQETLTLVP'] Truth: ['SDKEGHKYGRLAWLIYLV', 'SDLARYYTYLVMNKGKLL', 'SDALRYYTYLVMNKGKLL', 'SDIARYYTYLVMNKGKLL', 'SDEADLVPALYNKKPVIYL', 'SSDIRAMSPGRLAWLIYLV', 'SDGRMIVGEGRLAWLIYLV', 'SDAVMAIQEGRLAWLIYLV', 'SDLGMAIQEGRLAWLIYLV', 'SDGLMAIQEGRLAWLIYLV', 'SDGIMAIQEGRLAWLIYLV', 'SDVAMAIQEGRLAWLIYLV', 'SDAIRYYTYLVMNKGKLL', 'SDRALYYTYLVMNKGKLL', 'SDRAIYYTYLVMNKGKLL', 'SDIRAYYTYLVMNKGKLL', 'SSHLLYFYQQKIKDLHK', 'SSTLATSYSLYNKKPVIYL', 'SDPSIVRNMVLFAPNIYVL', 'SSPTPRATALLQWHEKALAA', 'SSSAHTKAVLLQWHEKALAA', 'DDDPDAPAAQLARALLRARL', 'SSTQGARQEGRLAWLIYLV', 'SSIPSHPYTYLVMNKGKLL', 'SDMQLIQEGRLAWLIYLV', 'SSPTPRATAMVLFAPNIYVL', 'SSPPRSSAVLLQWHEKALAA', 'SSTPMAIQEGRLAWLIYLV', 'SSPTMAIQEGRLAWLIYLV'] non truth: ['SDSLGGPALASMVGSIIIYLAL', 'SPCHFILASMVGSIIIYLAL', 'SPHCLFLASMVGSIIIYLAL', 'SDESLRLSGSLTISILSIWA', 'SSLKVSKESWGNSRLGTVRG', 'SDRYPIHSMVGSIIIYLAL', 'SDRYPIHVDRLPPLVQDK', 'SDEKQPLASMVGSIIIYLAL', 'SDQEPKLASMVGSIIIYLAL', 'SPCHLFLASMVGSIIIYLAL', 'SSYSKSKTLTYAITKKLND', 'SSEVFYRAALEKDHVLKR', 'SDSEVRSALSLTISILSIWA', 'SSLTPPTNASMVGSIIIYLAL', 'SSSSPPPKTLTYAITKKLND', 'SDVVETKADIMKKTISKPGT', 'SDVEIGTKDIMKKTISKPGT', 'SDTIALQTDIMKKTISKPGT', 'SDQRSRQSIMKKTISKPGT'] Truth: ['DAGAPTQFHKHDLLLARTIGD', 'DAGAETMTQRLLKRGETSGRV', 'ADGNEKKIEHSTRIHIMPSL', 'ADGNEKKLEHSTRIHIMPSL', 'ENENEPLKVRQLEAHNRSL', 'SAASAGEPLHEKVDPIIEEGKV', 'SSPTPRATAQASQKGRSFNLTA', 'SAPDLKSVRSNRGPSYGLSRE', 'ENENEPLGVAGVVLAQYIFTL', 'ENENEPAVVAGVVLAQYIFTL'] non truth: ['SAGTKPSHTADKIERFFRID', 'GEQVRNNRSPNEIVAIIHET', 'TGGTRPYAVFPRQFHDWKL', 'DAGAPKGAGAAADKIERFFRID', 'SASAFIMAKLFSTDKTKSLTE', 'SAGTKPSHTADFERLLRINY', 'SAGTIATFELRHFFEHALKT', 'EGGEGPGDLLQGELKLLFKEF', 'GEGGEGPDLLQGELKLLFKEF', 'SASAFIMAKLFSRNSMTTLLS', 'GQQAYVTFYVLFGGLAQMLGI'] Truth: ['ELILAANYL'] non truth: ['LELIAANYL', 'LELIAAGGYL', 'LELIAANYI', 'ELLAALNYI', 'ELLAALNYL', 'LELIAQGYL', 'ELLLAQGYL', 'ELLIAQGYL', 'LELLAQGYL', 'ELILAQGYL', 'LELALANYL', 'ELLALANYL', 'LELAIANYL', 'LEIALANYL', 'IEIAIANYL', 'ELLAIANYL', 'IELAIANYL', 'ELIAALNYI', 'ELLAAINYL', 'ELIAAINYL', 'ELIAALNYL', 'LELIQQYL', 'LELLQQYL', 'IELLQQYL', 'ELLLQQYL', 'LELIQQYI', 'ELLIGAAGYL', 'ELLIQQYL', 'IEILNAAYL', 'IEILNAAYI', 'ELLALNAYL', 'ELLALNAYI', 'ELLAANLYL', 'ELIAANLYL', 'ELLAALGGYL', 'LIELAQGYL', 'IELIQQYL', 'IELIGAAGYL', 'IEILQQYL', 'IEILGAAGYL', 'EILLQQYL', 'EILLGAAGYL', 'EIILQQYL', 'EIILGAAGYL', 'LELLGAAGYL', 'LELLQAGYL', 'IELLGAAGYL', 'IELLQAGYL', 'ELLLGAAGYL', 'ELLLQAGYL'] Truth: ['TEQQVPLVLWSS', 'TEQQVPLVLWSS', 'DFLHLTGARGSRG', 'TQPWASLARGSRG'] non truth: ['TTVNGSKLFFVSS', 'TTESLLRFFVSS', 'TTQLMLLESIHT', 'TTQLMLLVTEHT', 'TTQLMLLETVTH', 'TTPVFAALFFVSS', 'TTPPVIAKAEWSS'] Truth: ['TTLSDTAAAESLVDS'] non truth: ['TTDLQMCTGTLLPS', 'TTRAEDLQPHESP'] Truth: ['SLLSHEFQDETDLKSEPIPESNEGPVKVVVAENF', 'SLLSHEFQDETDLEDLRHSLMPMLETLKTQVQ'] non truth: ['SILSSTDGQMTSMENRIDIFRNFFRIPNLSFF', 'SILSSTDGQMTSMENLPHGLSGYLDPRRHQFVPL', 'SILSSTDGHEDYMFQSFIYFKKKLFQYTFNV', 'SILSSTDGQMTSMENHPPLAARLGPLDGCSGKASAKR'] Truth: ['KAQSTNVVQDLGKSLSVQNLIRSTEELNLQ', 'PSPSYRFQGILPALQGALTMKQVNPSKRLD', 'KIFDKNKDGRLDLNASQPESKVFYLKMK', 
'KIFDKNKDGRLDLNATNPESKVFYLKMK', 'KAAGVSVEPFWPGLFARSAGRWLAALRSPGAS', 'KAAGVSVEPFWPGLFAMADRINIKRKGAWP', 'KIETQTQEEVRKGKEPPKKVFVGGLSPDTS', 'KIQNGAQGIRFIYTREGRPSIILNKDHST', 'MQLKPMEINPEMLNKVLAKLGVAGQWRFA', 'KLAMSKKQELEEELRSAGRWLAALRSPGAS', 'GMALFRLVEPASGTIIVATVRCQAAEPQIITG', 'KLLAIEVNTPEKFSSASQPESKVFYLKMK', 'KLLAIEVNTPEKFSSATNPESKVFYLKMK', 'KLLAIEVNTPEKFSSVDDLFKGAKEHGAVAV', 'GMALFRLVEPASGTIIPGPKTTPTVSKATSPST', 'KLQSLICSPSLDVASIEPPKKVFVGGLSPDTS', 'KAQSTNVVQDSALKGKEPPKKVFVGGLSPDTS', 'KLQIQCVVEDDKVGTELLLPRLPELFETS', 'KLAMSKKQELEEELMADRINIKRKGAWP', 'KKPDDGKMKGLAFIQASQPESKVFYLKMK', 'KKPDDGKMKGLAFIQATNPESKVFYLKMK', 'KARLANSSVVTNAEAAARALARFAQEAEAARV', 'KKPDDGKMKGLAFIQVDDLFKGAKEHGAVAV', 'KALTQDDGVAKLSINTGALTMKQVNPSKRLD', 'KAYPFVLSANLHGGSLGALTMKQVNPSKRLD', 'EPPENIQEKIAFIFELLLPRLPELFETS', 'PEMYKLLNGELKQLYTAITRKLDPQNHV', 'KLLAIEVNTPEKFSSTADIVIQLGRGHSTST', 'KLLKEGEEPTVYRLSSLEKSSPTPRATAPQ', 'KILVEDELSSPVVVFLPTDAAYPAVNVRDR', 'KIQNGAQGIRFIYTREGRPSGELPAIPGVTS', 'KIQNGAQGIRFIYTREGRPLPTGSPLPSAST', 'KIQNGAQGIRFIYTREGRKLGFLSERSTS', 'KIQNGAQGIRFIYTREGRSKLGFLSERST', 'KKLIGLQETSTKSTKGWQDVTATNAYKKTS', 'KIQNGAQGIRFIYTREGIISSSPNKKGHVN', 'KLTSTVMLWLQTNKSGSGTMNLGGLLVLVNH', 'KLKERIDTRNELESYAYSKKLIGLQETS', 'KLKERIDTRNELESYAYSLKLQQVSLST', 'KLKERIDTRNELESYAYSLNKVQITLTS', 'KLKERIDTRNELESYAYSLKNVQITLTS', 'KLKERIDTRNELESYAYSLKNTLVAGLTS', 'KLKERIDTRNELESYAYSLKVDKKPTST', 'KLKERIDTRNELESYAYSLKNLEALKST', 'KLKERIDTRNELESYAYSLKNVVKLDST', 'KLKERIDTRNELESYALRDNVLVRHGST', 'KKILDSVGIEADDDRVLGPAVTFKVSANIQN', 'KKVTHAVVTVPAYFNLPTDAAYPAVNVRDR', 'KLIKKMDALREFVAVTGTEEDRARFFLE', 'KKILDSVGIEADDDRSKNALSKQSPKKSPSA'] non truth: ['KLLKQYLEPVQNSNVASNQKVPAGKQKDTS', 'KKLANEGSVISTAATGELQQIKESLVLNEEL', 'KISVEPSSVISTAATGELQQIKESLVLNEEL', 'KKISYHSHESTFLVLQQIKESLVLNEEL', 'KQSTNRLDFIALNRCKMNKIRHGEDVIL', 'KKILEDIERAAEYPLVYTLEHGIDEVLR', 'KAQTIGQTVISTAATGELQQIKESLVLNEEL', 'KLESNVASEVHYLALRELAQLLSGTNKLSE', 'KLTRKSVPMEDLEQELVHIQELFLRFN', 'KLDAGTLEVISTAATGELQQIKESLVLNEEL', 'KLTRKSVPMEDLEQSYRLLSLPLEPLGAE', 'KLFLGYQMRGGHRLCKMNKIRHGEDVIL', 'KAEELFKEVLKLDNCKMNKIRHGEDVIL', 'KINQQDSTTQLSRRSPLPYARAPVWLVTS', 'KINQQDSTTQLSRRGKLKKVEEDRAPACL', 'EPMLRFLHYLLKNVASNQKVPAGKQKDTS', 'EPMLRFLHYLLKNCHFKISGQPLARIST', 'KLSELLRENSTVMRPVTIEFLTKSLYGTS', 'KLEFRFICELHVAANYLRVKHTVHELF', 'KLFNFFYRLIDVTPVTIEFLTKSLYGTS', 'KLFLGYQMRGGHRLPVTIEFLTKSLYGTS', 'KLLKQYLEPVQNSNALGGQILQHAMFLATS', 'KLDAGTLEGDSVIKRAENLKAQEQRLLEST', 'KQSTNRLDFIALNRGLSAPPTGFPRAPTSSL', 'KLESNVASEVHYALLSACALLTRAGVPRRTS', 'KLTRKSVPMEDLEQITKTDQLIANGASVQV', 'KLTRKSVPMEDLEQLSRSRLISNPLSLDS', 'KKPFDERGTIKQLNALGGQILQHAMFLATS', 'KKSPLKFWKGGDRECKMNKIRHGEDVIL', 'KKTKIAVYERYFEFTMRSKRKRRCTS', 'KLKLCQASFKVSHGESPLPYARAPVWLVTS', 'KKHYGMSVFVDVERLYDSTLKTWKIRI', 'KKTKIAVYERYFELGLGQSVKETAASIDAP', 'KKKQEELQQQEMKSPLPYARAPVWLVTS', 'KLEFRFICELHVAAIRKYATKLGYQDSL', 'KLKLPEPGHNRPSSNVASNQKVPAGKQKDTS', 'KKKQEELQQQEMKGKLKKVEEDRAPACL', 'KLSELLRENSTVMRGAVKPGCSKKPAPKSSGG', 'KLKLDVNGEKAAEYPLVYTLEHGIDEVLR', 'KLFNFFYRLIDVTGAVKPGCSKKPAPKSSGG', 'KLFLGYQMRGGHRLGAVKPGCSKKPAPKSSGG', 'KISTDRQSKLELEKQLSHVKEELEKVSM', 'KLDAGTLEGDVSIKRAENLKAQEQRLLEST', 'KIKLSGPTDLEWTWTLGQSVSRIRNWQL', 'KLTRKSVPMEDLEQTLGQSVSRIRNWQL', 'KLFNFFYRLIDVTGLSAPPTGFPRAPTSSL', 'KLFLGYQMRGGHRLGLSAPPTGFPRAPTSSL', 'KLKLCQDARATAATGELQQIKESLVLNEEL', 'KLESNVASEVSGRVVVPDPSYKLLQEELAR', 'KKISYHSHESTFLVPPEKLNKLRGMKGTS'] Truth: ['TTTTTFKGVDPRPGSP', 'MIRSYKHTLNQSSP', 'TVPTGGPHLVQSDGTVP', 'DIPVNAHQLIQTESP', 'DLPVWEYAHKHGIP', 'TVHSKGAGHMVPTDKP', 'LDPNVAHQLIQTESP', 'TWAIPKNMTPYRSP', 'TVSAPGPHLVQSDGTVP', 'LDPESATLPAGPRPGSP', 'TWIQHLIVVCEHSP', 'VPSLNPDGRELSPPSP', 'VPSLNPDGREPPSLSP', 'MIRSYKHSVTQQSP', 
'TVQPSPHLVQSDGTVP', 'MIRSYKHTLTMPSP', 'MIRSYKHTQKGESP', 'MIRSYKHTLTNNSP', 'SLTVRLENMWQER', 'LPVWEYAHKHGIPD'] non truth: ['TVDRFEVEERVVPS', 'TVDRFYVYKMRSP', 'PVEVQKIADNVEHSP', 'TVPVGNTGGPGQEAAPLP', 'TWPPHGGNIGMIVVSP', 'LSEPLPPSEQQPRSP', 'TVDRFEVEERLTAP', 'TVDRFEVEERTLAP', 'TVDRFEVEERVSVP', 'TIQNGHLEGSEVIAPP', 'TWPPHGGNIYAALAPP', 'TWPPHGGNIYAAPLAP']
.ipynb_checkpoints/1.2 列表,元组,集合,字典--checkpoint.ipynb
###Markdown Lists (List)

A list is an ordered container (a Sequence Type) whose elements are indexed by position. Lists are mutable sequences.

Adding elements

* Append a single element to the end of a list: list.append(x)

###Code
list = ['000001', '000002']  # note: naming a variable `list` shadows the built-in list type
list.append('000003')
list

###Output
_____no_output_____

###Markdown
* Append another list to the end of a list: list.extend(list)

###Code
list1 = ['000001', '000002', '000003']
list2 = ['000004', '000005', '000006']
list1.extend(list2)
list1

###Output
_____no_output_____

###Markdown
* Insert an element at a given position: list.insert(i, x), where i is the insertion index and x is the value to insert

###Code
list1 = ['000001', '000002', '000003', '000004', '000005', '000006']
list1.insert(1, '600519')
list1

###Output
_____no_output_____

###Markdown
Removing elements

* Remove the first element whose value is x: list.remove(x). If there is no such element, an error is raised.

###Code
list = [4,3,7,9,5,3,6,8,0,4,3]
list.remove(4)
list  # if the value occurs more than once, only the first occurrence is removed

###Output
_____no_output_____

###Markdown
* Remove the element at a given position and return it: list.pop(i)

###Code
list = [4,3,7,9,5,3,6,8,0,4,3]
list_pop = list.pop(4)
list_pop, list

###Output
_____no_output_____

###Markdown
* Remove all items from a list: list.clear()

###Code
list = [4,3,7,9,5,3,6,8,0,4,3]
list.clear()
list

###Output
_____no_output_____

###Markdown
* Delete elements or slices of a list with del

###Code
a = [-1, 1, 66.25, 333, 333, 1234.5]
del a[0]
a

a = [-1, 1, 66.25, 333, 333, 1234.5]
del a[2:4]
a

###Output
_____no_output_____

###Markdown
Lookup and indexing

###Code
a = [-1, 1, 66.25, 333, 333, 1234.5]
a[:], a[2:5]

###Output
_____no_output_____

###Markdown
Return the index of the first element whose value is x: list.index(x). If there is no matching element, an error is raised.

###Code
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"

a = [-1, 1, 66.25, 333, 333, 1234.5]
a.index(333)
# a.index(27)

###Output
_____no_output_____

###Markdown
Return the number of times element x appears in a list: list.count(x)

Sorting

Sort the elements of a list in place: list.sort()

###Code
list = [4,3,7,9,5,3,6,8,0,4,3]
list.sort()
list

###Output
_____no_output_____

###Markdown
Reversing

Reverse the elements of a list in place: list.reverse()

###Code
list1 = ['000001', '000002', '000003', '000004', '000005', '000006']
list1.reverse()
list1

###Output
_____no_output_____

###Markdown
Iterating

###Code
seasons = ['Spring', 'Summer', 'Fall', 'Winter']
for i in seasons:
    print(i)

for i, e in enumerate(seasons):
    print(i, e)

###Output
0 Spring
1 Summer
2 Fall
3 Winter

###Markdown
To iterate over two or more sequences at the same time, combine them with zip():

###Code
questions = ['name', 'quest', 'favorite color']
answers = ['lancelot', 'the holy grail', 'blue']
for q, a in zip(questions, answers):
    print('What is your {0}? It is {1}.'.format(q, a))

###Output
What is your name? It is lancelot.
What is your quest? It is the holy grail.
What is your favorite color? It is blue.
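###Markdown
One caveat worth noting (a small aside, not part of the original notebook): zip() stops at the end of the shortest input. When the sequences can have different lengths, itertools.zip_longest pads the shorter one with a fill value instead:

###Code
# Hedged sketch: zip() truncates at the shortest input, zip_longest() pads.
from itertools import zip_longest

questions = ['name', 'quest', 'favorite color', 'airspeed']
answers = ['lancelot', 'the holy grail', 'blue']  # one item shorter

for q, a in zip_longest(questions, answers, fillvalue='(no answer)'):
    print('What is your {0}? It is {1}.'.format(q, a))

###Output
What is your name? It is lancelot.
What is your quest? It is the holy grail.
What is your favorite color? It is blue.
What is your airspeed? It is (no answer).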
###Markdown
Iterating over a list in reverse

###Code
seasons = ['Spring', 'Summer', 'Fall', 'Winter']
for i in reversed(seasons):
    print(i)

###Output
Winter
Fall
Summer
Spring

###Markdown
List comprehensions

###Code
a_range = range(10)
a_list = [x * x for x in a_range]
a_list

###Output
_____no_output_____

###Markdown
Other operations

###Code
# keep only the numbers in the list that are greater than 0
list1 = [-1, 1, 66.25, 333, 333, 1234.5]
[i for i in list1 if i > 0]

import numpy
list = [-1, -3, 66.25, 333, 333, 1234.5]
(numpy.array(list) > 0).all()

import numpy
list = [-1, 1, 66.25, 333, 333, 1234.5]
(numpy.array(list) > 0).any()

###Output
_____no_output_____

###Markdown
Tuples (Tuple)

A tuple is a container similar to a list, written with parentheses (). The differences between them:

* A List is a mutable ordered container; a Tuple is an immutable ordered container (mutable/immutable, ordered/unordered, repeatable/non-repeatable).
* A List is written with square brackets []; a Tuple is written with parentheses ().

To create a tuple, use parentheses: a = ()

Tuples are immutable sequences, so you cannot delete or replace elements in place. You can, however, append elements at the end by building a new tuple; the part that already exists stays immutable.

###Code
a = (1,2,3)
b = a + (3,4,5,6)
b

b[1]
b[2:5]

###Output
_____no_output_____

###Markdown
When to use lists vs. tuples:

> Create a List when the data will need to change later.
> Create a Tuple when the data will not need to change.
> A Tuple takes up less memory than a List (see the small sizeof sketch at the end of this notebook).

Iterating

###Code
seasons = ('Spring', 'Summer', 'Fall', 'Winter')
for i in seasons:
    print(i)

for i, e in enumerate(seasons):
    print(i, e)

###Output
0 Spring
1 Summer
2 Fall
3 Winter

###Markdown
Sets (Set)

> Strings are immutable, ordered containers: ' ', " "
> Lists are mutable, ordered containers: []
> Tuples are immutable, ordered containers: ()

There are two kinds of sets: Set, which is mutable, and Frozen Set, which is immutable. Sets are unordered and contain no duplicate elements. Only mutable sets are covered here.

Creating a set

###Code
a = {2, 3, 5, 7, 11, 13, 17}
a

###Output
_____no_output_____

###Markdown
To create an empty set you must use set(), not {}:

###Code
b = set()
b

###Output
_____no_output_____

###Markdown
Deduplication

###Code
a = [2,3,3,5,5,2,8,7,7]
set(a)

b = (2,3,3,5,5,2,8,7,7)
set(b)

###Output
_____no_output_____

###Markdown
Sets are not ordered containers, so they cannot be indexed.

Adding elements

###Code
a = {1,2,3}
a.add('sh')
a

###Output
_____no_output_____

###Markdown
Removing elements

###Code
a = {1, 2, 3, 'sh'}
a.remove('sh')
a

a = {1, 2, 3, 'sh'}
a.discard(4)  # unlike remove(), discard() does not raise an error if the element is missing
a

###Output
_____no_output_____

###Markdown
Set operations

###Code
a = {'000001', '000002', '000003', '000004'}
b = {'000003', '000004', '000005', '000006'}

a - b   # elements in a but not in b
b - a   # elements in b but not in a
a | b   # all elements in either a or b
a & b   # elements common to a and b
a ^ b   # elements in a or b but not in both

for i in a:
    print(i)

###Output
000004
000001
000002
000003

###Markdown
Dictionaries (Dictionary)

A dictionary is a mapping container of keys and values.

> Strings are immutable, ordered containers: '', ""
> Lists are mutable, ordered containers: []
> Tuples are immutable, ordered containers: ()
> Sets are mutable/immutable, unordered, non-repeating containers: {}

A dictionary is a mutable, non-repeating container: an unordered collection of key => value pairs. Each element of a dictionary consists of two parts, a key and a value, joined by a colon; the key acts as the index and maps to its matching value.

Dictionary keys are unique; if a key is repeated, the duplicates are removed automatically and the last one is kept.

###Code
sq = {'one':6575, 'two':8982, 'three':2598, 'four':1225, 'four':6585}
sq

###Output
_____no_output_____

###Markdown
Adding and updating

###Code
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"

a = {'a':1, 'b':2, 'c':3}
a['d'] = 4  # add
a
a['a'] = 5  # update
a

a = {'a':1, 'b':2, 'c':3}
b = {'d':4, 'e':5, 'f':6}
a.update(b)
a

###Output
_____no_output_____

###Markdown
Removing

###Code
a = {'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5, 'f': 6}
del a['a']
a

a = {'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5, 'f': 6}
b = a.pop('a', 1)  # pop returns the removed value (or the default if the key is missing)
b, a

###Output
_____no_output_____

###Markdown
Lookup and membership tests

###Code
# get a value by its key
a = {'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5, 'f': 6}
a['a']

a = {'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5, 'f': 6}
'a' in a

a = {'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5, 'f': 6}
a.keys(), a.values()

# convert the keys and values to lists
a = {'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5, 'f': 6}
list(a.keys()), list(a.values())

# build a dictionary from two lists
a = ['a', 'b', 'c', 'd', 'e', 'f']
b = [1, 2, 3, 4, 5, 6]
d = dict(zip(a, b))
d

# note: `a` was just rebound to a list above, so the membership tests
# run against the new dict `d` (the original cell still used `a` here)
'a' in d.keys(), 1 in d.values()
('a', 1) in d.items()

###Output
_____no_output_____

###Markdown
Iterating

###Code
a = {'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5, 'f': 6}
for i in a:
    print(i, a[i])

for key, value in a.items():
    print(key, value)

###Output
a 1
b 2
c 3
d 4
e 5
f 6
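###Markdown
As promised above, a quick check of the claim that tuples take less memory than lists (a small aside, not part of the original notebook; the exact byte counts vary by Python version and platform):

###Code
# Hedged sketch: compare the shallow sizes of a list and a tuple holding the same elements.
import sys

as_list = [1, 2, 3, 4, 5, 6]
as_tuple = (1, 2, 3, 4, 5, 6)

# sys.getsizeof reports the container's own size, not the elements';
# the tuple should come out smaller because it keeps no spare capacity.
sys.getsizeof(as_list), sys.getsizeof(as_tuple)

###Output
_____no_output_____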
B3S1 - Artifical Intelligence/Homework 3 - IMDB Review (RNN)/train.ipynb
###Markdown Homework 3 - IMDB Review (RNN) Define constants, functions and set random seed

###Code
NUM_WORDS = 4000
MAXLEN = 400
BATCH_SIZE = 64
EPOCHS = 8
SEED = 0xFFFF
SHOULD_RANDOMLY_SPLIT_VAL = False
VAL_SPLIT = 0
# note: with VAL_SPLIT = 0 and SHOULD_RANDOMLY_SPLIT_VAL = False, fit() produces no
# validation metrics, so the callbacks below that monitor "val_loss" only warn and
# never trigger; set VAL_SPLIT > 0 (or SHOULD_RANDOMLY_SPLIT_VAL = True) to get them

import os
from os import path
from datetime import datetime

def create_result_dir() -> str:
    result_dir = "result." + datetime.today().strftime("%y%m%d-%H%M%S")
    if not path.exists(result_dir):
        os.makedirs(result_dir)
    else:
        result_dir_suffix = 0
        while path.exists(result_dir + "-" + str(result_dir_suffix)):
            result_dir_suffix += 1
        result_dir = result_dir + "-" + str(result_dir_suffix)
        os.makedirs(result_dir)
    return result_dir

import pandas, numpy
from keras.preprocessing import sequence, text

def prepare_data(
        train_data: pandas.DataFrame,
        test_data: pandas.DataFrame,
) -> (numpy.ndarray, numpy.ndarray, numpy.ndarray):
    train_labels = train_data["Sentiment"].values.astype("int32")
    train_texts = train_data["SentimentText"]
    test_texts = test_data["SentimentText"]

    tokenizer = text.Tokenizer(num_words=NUM_WORDS)
    tokenizer.fit_on_texts(train_texts)

    def pad_sequences_from_texts(texts: pandas.Series) \
            -> numpy.ndarray:
        return sequence.pad_sequences(
            tokenizer.texts_to_sequences(texts),
            maxlen=MAXLEN,
        )

    return (
        train_labels,
        pad_sequences_from_texts(train_texts),
        pad_sequences_from_texts(test_texts),
    )

from matplotlib import pyplot
from keras import callbacks

def show_train_history(history: callbacks.History):
    fig, ax = pyplot.subplots(nrows=1, ncols=2)
    fig.set_size_inches(18, 6)

    ax[0].set_title("Model accuracy")
    ax[0].plot(history.history["acc"])
    ax[0].plot(history.history["val_acc"])
    ax[0].set_ylabel("accuracy")
    ax[0].set_xlabel("epoch")
    ax[0].legend(["result", "validation"], loc="upper left")

    ax[1].set_title("Model loss")
    ax[1].plot(history.history["loss"])
    ax[1].plot(history.history["val_loss"])
    ax[1].set_ylabel("loss")
    ax[1].set_xlabel("epoch")
    ax[1].legend(["result", "validation"], loc="upper left")

    pyplot.show()

from numpy import random
random.seed(SEED)

###Output
_____no_output_____

###Markdown Read files and prepare data

###Code
import zipfile
from keras.utils import np_utils

train = pandas.read_csv("data/train_data.csv")
test = pandas.read_csv("data/test_data_ans.csv")

train_labels, train_texts, test_texts = prepare_data(train, test)

train.head()
test.head()

from keras import models, layers, initializers, activations

model = models.Sequential([
    layers.Embedding(
        input_dim=NUM_WORDS,
        output_dim=16,
        input_length=MAXLEN,
    ),
    layers.Dropout(0.3),
    layers.LSTM(16),
    layers.Dense(
        units=512,
        activation=activations.relu,
    ),
    layers.Dropout(0.4),
    layers.Dense(
        units=1,
        activation=activations.sigmoid,
    ),
])

from keras import optimizers, losses

model.compile(
    optimizer=optimizers.Adam(amsgrad=True),
    loss=losses.binary_crossentropy,
    metrics=["accuracy"],
)

model.summary()

###Output
_____no_output_____

###Markdown Train the model

###Code
result_dir = create_result_dir()

model_filename = "model.{epoch:0%dd}-{val_loss:.4f}.hdf5" \
    % len(str(abs(EPOCHS)))
model_path = path.join(result_dir, model_filename)

model_checkpoint = callbacks.ModelCheckpoint(
    model_path,
    monitor="val_loss",
    verbose=1,
    save_best_only=True,
)
early_stopping = callbacks.EarlyStopping(
    monitor="val_loss",
    patience=64,
    verbose=1,
)

if SHOULD_RANDOMLY_SPLIT_VAL:
    from sklearn import model_selection
    train_x, val_x, train_y, val_y = model_selection.train_test_split(
        train_texts,
        train_labels,
        test_size=VAL_SPLIT,
        random_state=SEED,
    )
    val_data = (val_x, val_y)
else:
    train_x, train_y = train_texts, train_labels
    val_data = None
history = model.fit(
    x=train_x,
    y=train_y,
    batch_size=BATCH_SIZE,
    epochs=EPOCHS,
    verbose=1,
    callbacks=[
        model_checkpoint,
        early_stopping,
    ],
    validation_split=VAL_SPLIT,
    validation_data=val_data,
)

import pickle

with open(path.join(result_dir, "history.pickle"), "wb") as file:
    pickle.dump(history.history, file, pickle.HIGHEST_PROTOCOL)

import glob

# the checkpoints are written with save_best_only, so the last file in sorted
# order (latest epoch) is also the best one seen during training
model_files = sorted(glob.glob(path.join(result_dir, "model.*-*.hdf5")))
model = models.load_model(model_files[-1])

predictions = model.predict_classes(test_texts)
pandas.DataFrame(data={"sentiment": predictions.flatten()}).to_csv(
    path.join(result_dir, "predictions.csv"),
    index_label="id",
)

show_train_history(history)

###Output
_____no_output_____
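###Markdown
Since the test file is named test_data_ans.csv, it presumably ships with ground-truth labels, so the saved predictions can be scored directly. A minimal sketch, assuming the answer column is called "Sentiment" like in the training file (that column name is an assumption — adjust it to whatever the CSV actually uses):

###Code
# Hedged sketch: score predictions.csv against the answers in test_data_ans.csv.
from sklearn import metrics

answers = test["Sentiment"].values.astype("int32")  # assumed column name
predicted = pandas.read_csv(path.join(result_dir, "predictions.csv"))["sentiment"].values

print("accuracy: {:.4f}".format(metrics.accuracy_score(answers, predicted)))
print(metrics.classification_report(answers, predicted))

###Output
_____no_output_____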
kaggle/src/kaggle_global_wheat_detector.ipynb
###Markdown Helper Functions

###Code
# NOTE: the imports for this notebook are not included in this dump; the code below
# assumes something like the following ran earlier (TensorFlow 1.x APIs: tf.layers,
# tf.placeholder, sessions), along with cells defining `kaggledata_root`, the
# train.csv dataframe `data`, and the image-id series `images_ids`:
#   import numpy as np
#   import tensorflow as tf
#   import matplotlib.pyplot as plt
#   import matplotlib.patches as patches
#   from skimage import io
#   from PIL import Image, ImageDraw, ImageFont
#   from seaborn import color_palette
#   from IPython.display import display

# Helper functions

# @brief get a list of floats from a string
# @param array_str format : [float, ... , float]
# @return list of floats
def get_array_from_string(array_str):
    # Strip the brackets from the string
    array_str = array_str.replace("[", "")
    array_str = array_str.replace("]", "")
    array_f_str = array_str.split(",")
    # Convert to float
    array_f = [float(e) for e in array_f_str]
    return array_f

# @brief Get bbox arrays from (cropped) pandas (train.csv)
# @param data
# @return np.array with bounding boxes
def get_img_bbox(data):
    bbox_all = np.array([])
    # Iterate boxes
    for index, row in data.iterrows():
        bbox_f = get_array_from_string(row['bbox'])
        bbox_all = np.concatenate([bbox_all, bbox_f])
    return bbox_all

# @brief Plot image with bounding boxes
# @param image_str image name without *.jpg ending
def plot_boundingbox(image_str):
    # Get image and its bounding boxes
    image = io.imread(kaggledata_root + '/kaggle/input/global-wheat-detection/train/' + image_str + '.jpg')
    bbox_all = get_img_bbox(data[data.image_id == image_str])

    # Plot data
    %matplotlib inline
    fig, ax = plt.subplots(1)
    ax.imshow(image)
    for bbox in bbox_all.reshape(-1, 4):
        rect = patches.Rectangle((bbox[0], bbox[1]), bbox[2], bbox[3],
                                 linewidth=1, edgecolor='r', facecolor='none')
        ax.add_patch(rect)
    plt.show()

plot_boundingbox('b6ab77fd7')

# !DO NOT EXECUTE!
#images = []
#for image in images_ids:
#    images.append(mpimg.imread("/kaggle/input/global-wheat-detection/train/" + str(image) + ".jpg"))

# yolo parameters (taken from https://www.kaggle.com/aruchomu/yolo-v3-object-detection-in-tensorflow/notebook)
_BATCH_NORM_DECAY = 0.9
_BATCH_NORM_EPSILON = 1e-05
_LEAKY_RELU = 0.1
_ANCHORS = [(10, 13), (16, 30), (33, 23),
            (30, 61), (62, 45), (59, 119),
            (116, 90), (156, 198), (373, 326)]
_MODEL_SCALE = 1
_MODEL_SIZE = (1024 // _MODEL_SCALE, 1024 // _MODEL_SCALE)  # integer division: PIL resize and the placeholder need int sizes

# yolo helper functions
def batch_norm(inputs, training, data_format):
    """Performs a batch normalization using a standard set of parameters."""
    return tf.layers.batch_normalization(
        inputs=inputs, axis=1 if data_format == 'channels_first' else 3,
        momentum=_BATCH_NORM_DECAY, epsilon=_BATCH_NORM_EPSILON,
        scale=True, training=training)

def fixed_padding(inputs, kernel_size, data_format):
    """ResNet implementation of fixed padding.

    Pads the input along the spatial dimensions independently of input size.

    Args:
        inputs: Tensor input to be padded.
        kernel_size: The kernel to be used in the conv2d or max_pool2d.
        data_format: The input format.
    Returns:
        A tensor with the same format as the input.
    """
    pad_total = kernel_size - 1
    pad_beg = pad_total // 2
    pad_end = pad_total - pad_beg

    if data_format == 'channels_first':
        padded_inputs = tf.pad(inputs, [[0, 0], [0, 0],
                                        [pad_beg, pad_end],
                                        [pad_beg, pad_end]])
    else:
        padded_inputs = tf.pad(inputs, [[0, 0], [pad_beg, pad_end],
                                        [pad_beg, pad_end], [0, 0]])
    return padded_inputs

def conv2d_fixed_padding(inputs, filters, kernel_size, data_format, strides=1):
    """Strided 2-D convolution with explicit padding."""
    if strides > 1:
        inputs = fixed_padding(inputs, kernel_size, data_format)

    return tf.layers.conv2d(
        inputs=inputs, filters=filters, kernel_size=kernel_size,
        strides=strides, padding=('SAME' if strides == 1 else 'VALID'),
        use_bias=False, data_format=data_format)

# Feature extraction with pre-trained Darknet-53 NN
def darknet53_residual_block(inputs, filters, training, data_format, strides=1):
    """Creates a residual block for Darknet."""
    shortcut = inputs

    inputs = conv2d_fixed_padding(
        inputs, filters=filters, kernel_size=1, strides=strides,
        data_format=data_format)
    inputs = batch_norm(inputs, training=training, data_format=data_format)
    inputs = tf.nn.leaky_relu(inputs, alpha=_LEAKY_RELU)

    inputs = conv2d_fixed_padding(
        inputs, filters=2 * filters, kernel_size=3, strides=strides,
        data_format=data_format)
    inputs = batch_norm(inputs, training=training, data_format=data_format)
    inputs = tf.nn.leaky_relu(inputs, alpha=_LEAKY_RELU)

    inputs += shortcut
    return inputs

def darknet53(inputs, training, data_format):
    """Creates Darknet53 model for feature extraction."""
    inputs = conv2d_fixed_padding(inputs, filters=32, kernel_size=3,
                                  data_format=data_format)
    inputs = batch_norm(inputs, training=training, data_format=data_format)
    inputs = tf.nn.leaky_relu(inputs, alpha=_LEAKY_RELU)
    inputs = conv2d_fixed_padding(inputs, filters=64, kernel_size=3,
                                  strides=2, data_format=data_format)
    inputs = batch_norm(inputs, training=training, data_format=data_format)
    inputs = tf.nn.leaky_relu(inputs, alpha=_LEAKY_RELU)
    inputs = darknet53_residual_block(inputs, filters=32, training=training,
                                      data_format=data_format)
    inputs = conv2d_fixed_padding(inputs, filters=128, kernel_size=3,
                                  strides=2, data_format=data_format)
    inputs = batch_norm(inputs, training=training, data_format=data_format)
    inputs = tf.nn.leaky_relu(inputs, alpha=_LEAKY_RELU)

    for _ in range(2):
        inputs = darknet53_residual_block(inputs, filters=64,
                                          training=training,
                                          data_format=data_format)

    inputs = conv2d_fixed_padding(inputs, filters=256, kernel_size=3,
                                  strides=2, data_format=data_format)
    inputs = batch_norm(inputs, training=training, data_format=data_format)
    inputs = tf.nn.leaky_relu(inputs, alpha=_LEAKY_RELU)

    for _ in range(8):
        inputs = darknet53_residual_block(inputs, filters=128,
                                          training=training,
                                          data_format=data_format)

    route1 = inputs

    inputs = conv2d_fixed_padding(inputs, filters=512, kernel_size=3,
                                  strides=2, data_format=data_format)
    inputs = batch_norm(inputs, training=training, data_format=data_format)
    inputs = tf.nn.leaky_relu(inputs, alpha=_LEAKY_RELU)

    for _ in range(8):
        inputs = darknet53_residual_block(inputs, filters=256,
                                          training=training,
                                          data_format=data_format)

    route2 = inputs

    inputs = conv2d_fixed_padding(inputs, filters=1024, kernel_size=3,
                                  strides=2, data_format=data_format)
    inputs = batch_norm(inputs, training=training, data_format=data_format)
    inputs = tf.nn.leaky_relu(inputs, alpha=_LEAKY_RELU)

    for _ in range(4):
        inputs = darknet53_residual_block(inputs, filters=512,
                                          training=training,
                                          data_format=data_format)

    return route1, route2, inputs

# yolo cnn blocks
def yolo_convolution_block(inputs, filters, training, data_format):
    """Creates convolution operations layer used after Darknet."""
    inputs = conv2d_fixed_padding(inputs, filters=filters, kernel_size=1,
                                  data_format=data_format)
    inputs = batch_norm(inputs, training=training, data_format=data_format)
    inputs = tf.nn.leaky_relu(inputs, alpha=_LEAKY_RELU)

    inputs = conv2d_fixed_padding(inputs, filters=2 * filters, kernel_size=3,
                                  data_format=data_format)
    inputs = batch_norm(inputs, training=training, data_format=data_format)
    inputs = tf.nn.leaky_relu(inputs, alpha=_LEAKY_RELU)

    inputs = conv2d_fixed_padding(inputs, filters=filters, kernel_size=1,
                                  data_format=data_format)
    inputs = batch_norm(inputs, training=training, data_format=data_format)
    inputs = tf.nn.leaky_relu(inputs, alpha=_LEAKY_RELU)

    inputs = conv2d_fixed_padding(inputs, filters=2 * filters, kernel_size=3,
                                  data_format=data_format)
    inputs = batch_norm(inputs, training=training, data_format=data_format)
    inputs = tf.nn.leaky_relu(inputs, alpha=_LEAKY_RELU)

    inputs = conv2d_fixed_padding(inputs, filters=filters, kernel_size=1,
                                  data_format=data_format)
    inputs = batch_norm(inputs, training=training, data_format=data_format)
    inputs = tf.nn.leaky_relu(inputs, alpha=_LEAKY_RELU)

    route = inputs

    inputs = conv2d_fixed_padding(inputs, filters=2 * filters, kernel_size=3,
                                  data_format=data_format)
    inputs = batch_norm(inputs, training=training, data_format=data_format)
    inputs = tf.nn.leaky_relu(inputs, alpha=_LEAKY_RELU)

    return route, inputs

# yolo detection layer
'''
Yolo has 3 detection layers, that detect on 3 different scales using respective anchors.
For each cell in the feature map the detection layer predicts n_anchors * (5 + n_classes)
values using 1x1 convolution. For each scale we have n_anchors = 3.
5 + n_classes means that respectively to each of 3 anchors we are going to predict 4
coordinates of the box, its confidence score (the probability of containing an object)
and class probabilities.
'''
def yolo_layer(inputs, n_classes, anchors, img_size, data_format):
    """Creates Yolo final detection layer.

    Detects boxes with respect to anchors.

    Args:
        inputs: Tensor input.
        n_classes: Number of labels.
        anchors: A list of anchor sizes.
        img_size: The input size of the model.
        data_format: The input format.
    Returns:
        Tensor output.
    """
    n_anchors = len(anchors)

    inputs = tf.layers.conv2d(inputs, filters=n_anchors * (5 + n_classes),
                              kernel_size=1, strides=1, use_bias=True,
                              data_format=data_format)

    shape = inputs.get_shape().as_list()
    grid_shape = shape[2:4] if data_format == 'channels_first' else shape[1:3]
    if data_format == 'channels_first':
        inputs = tf.transpose(inputs, [0, 2, 3, 1])
    inputs = tf.reshape(inputs, [-1, n_anchors * grid_shape[0] * grid_shape[1],
                                 5 + n_classes])

    strides = (img_size[0] // grid_shape[0], img_size[1] // grid_shape[1])

    box_centers, box_shapes, confidence, classes = \
        tf.split(inputs, [2, 2, 1, n_classes], axis=-1)

    x = tf.range(grid_shape[0], dtype=tf.float32)
    y = tf.range(grid_shape[1], dtype=tf.float32)
    x_offset, y_offset = tf.meshgrid(x, y)
    x_offset = tf.reshape(x_offset, (-1, 1))
    y_offset = tf.reshape(y_offset, (-1, 1))
    x_y_offset = tf.concat([x_offset, y_offset], axis=-1)
    x_y_offset = tf.tile(x_y_offset, [1, n_anchors])
    x_y_offset = tf.reshape(x_y_offset, [1, -1, 2])
    box_centers = tf.nn.sigmoid(box_centers)
    box_centers = (box_centers + x_y_offset) * strides

    anchors = tf.tile(anchors, [grid_shape[0] * grid_shape[1], 1])
    box_shapes = tf.exp(box_shapes) * tf.to_float(anchors)

    confidence = tf.nn.sigmoid(confidence)

    classes = tf.nn.sigmoid(classes)

    inputs = tf.concat([box_centers, box_shapes,
                        confidence, classes], axis=-1)

    return inputs

# yolo upsample
'''
In order to concatenate with shortcut outputs from Darknet-53 before applying detection
on a different scale, we are going to upsample the feature map using nearest neighbor
interpolation.
'''
def upsample(inputs, out_shape, data_format):
    """Upsamples to `out_shape` using nearest neighbor interpolation."""
    if data_format == 'channels_first':
        inputs = tf.transpose(inputs, [0, 2, 3, 1])
        new_height = out_shape[3]
        new_width = out_shape[2]
    else:
        new_height = out_shape[2]
        new_width = out_shape[1]

    inputs = tf.image.resize_nearest_neighbor(inputs, (new_height, new_width))

    if data_format == 'channels_first':
        inputs = tf.transpose(inputs, [0, 3, 1, 2])

    return inputs

# yolo non-max suppression
'''
The model is going to produce a lot of boxes, so we need a way to discard the boxes with
low confidence scores. Also, to avoid having multiple boxes for one object, we will
discard the boxes with high overlap as well, using non-max suppression for each class.
'''
def build_boxes(inputs):
    """Computes top left and bottom right points of the boxes."""
    center_x, center_y, width, height, confidence, classes = \
        tf.split(inputs, [1, 1, 1, 1, 1, -1], axis=-1)

    top_left_x = center_x - width / 2
    top_left_y = center_y - height / 2
    bottom_right_x = center_x + width / 2
    bottom_right_y = center_y + height / 2

    boxes = tf.concat([top_left_x, top_left_y,
                       bottom_right_x, bottom_right_y,
                       confidence, classes], axis=-1)

    return boxes

def non_max_suppression(inputs, n_classes, max_output_size, iou_threshold,
                        confidence_threshold):
    """Performs non-max suppression separately for each class.

    Args:
        inputs: Tensor input.
        n_classes: Number of classes.
        max_output_size: Max number of boxes to be selected for each class.
        iou_threshold: Threshold for the IOU.
        confidence_threshold: Threshold for the confidence score.
    Returns:
        A list containing class-to-boxes dictionaries
            for each sample in the batch.
    """
    batch = tf.unstack(inputs)
    boxes_dicts = []
    for boxes in batch:
        boxes = tf.boolean_mask(boxes, boxes[:, 4] > confidence_threshold)
        classes = tf.argmax(boxes[:, 5:], axis=-1)
        classes = tf.expand_dims(tf.to_float(classes), axis=-1)
        boxes = tf.concat([boxes[:, :5], classes], axis=-1)

        boxes_dict = dict()
        for cls in range(n_classes):
            mask = tf.equal(boxes[:, 5], cls)
            mask_shape = mask.get_shape()
            if mask_shape.ndims != 0:
                class_boxes = tf.boolean_mask(boxes, mask)
                boxes_coords, boxes_conf_scores, _ = tf.split(class_boxes,
                                                              [4, 1, -1],
                                                              axis=-1)
                boxes_conf_scores = tf.reshape(boxes_conf_scores, [-1])
                indices = tf.image.non_max_suppression(boxes_coords,
                                                       boxes_conf_scores,
                                                       max_output_size,
                                                       iou_threshold)
                class_boxes = tf.gather(class_boxes, indices)
                boxes_dict[cls] = class_boxes[:, :5]

        boxes_dicts.append(boxes_dict)

    return boxes_dicts

# yolo v3 model
class Yolo_v3:
    """Yolo v3 model class."""

    def __init__(self, n_classes, model_size, max_output_size, iou_threshold,
                 confidence_threshold, data_format=None):
        """Creates the model.

        Args:
            n_classes: Number of class labels.
            model_size: The input size of the model.
            max_output_size: Max number of boxes to be selected for each class.
            iou_threshold: Threshold for the IOU.
            confidence_threshold: Threshold for the confidence score.
            data_format: The input format.
        Returns:
            None.
        """
        if not data_format:
            if tf.test.is_built_with_cuda():
                data_format = 'channels_first'
            else:
                data_format = 'channels_last'

        self.n_classes = n_classes
        self.model_size = model_size
        self.max_output_size = max_output_size
        self.iou_threshold = iou_threshold
        self.confidence_threshold = confidence_threshold
        self.data_format = data_format

    def __call__(self, inputs, training):
        """Add operations to detect boxes for a batch of input images.

        Args:
            inputs: A Tensor representing a batch of input images.
            training: A boolean, whether to use in training or inference mode.
        Returns:
            A list containing class-to-boxes dictionaries
                for each sample in the batch.
        """
        with tf.variable_scope('yolo_v3_model'):
            if self.data_format == 'channels_first':
                inputs = tf.transpose(inputs, [0, 3, 1, 2])

            inputs = inputs / 255

            route1, route2, inputs = darknet53(inputs, training=training,
                                               data_format=self.data_format)

            route, inputs = yolo_convolution_block(
                inputs, filters=512, training=training,
                data_format=self.data_format)
            detect1 = yolo_layer(inputs, n_classes=self.n_classes,
                                 anchors=_ANCHORS[6:9],
                                 img_size=self.model_size,
                                 data_format=self.data_format)

            inputs = conv2d_fixed_padding(route, filters=256, kernel_size=1,
                                          data_format=self.data_format)
            inputs = batch_norm(inputs, training=training,
                                data_format=self.data_format)
            inputs = tf.nn.leaky_relu(inputs, alpha=_LEAKY_RELU)
            upsample_size = route2.get_shape().as_list()
            inputs = upsample(inputs, out_shape=upsample_size,
                              data_format=self.data_format)
            axis = 1 if self.data_format == 'channels_first' else 3
            inputs = tf.concat([inputs, route2], axis=axis)
            route, inputs = yolo_convolution_block(
                inputs, filters=256, training=training,
                data_format=self.data_format)
            detect2 = yolo_layer(inputs, n_classes=self.n_classes,
                                 anchors=_ANCHORS[3:6],
                                 img_size=self.model_size,
                                 data_format=self.data_format)

            inputs = conv2d_fixed_padding(route, filters=128, kernel_size=1,
                                          data_format=self.data_format)
            inputs = batch_norm(inputs, training=training,
                                data_format=self.data_format)
            inputs = tf.nn.leaky_relu(inputs, alpha=_LEAKY_RELU)
            upsample_size = route1.get_shape().as_list()
            inputs = upsample(inputs, out_shape=upsample_size,
                              data_format=self.data_format)
            inputs = tf.concat([inputs, route1], axis=axis)
            route, inputs = yolo_convolution_block(
                inputs, filters=128, training=training,
                data_format=self.data_format)
            detect3 = yolo_layer(inputs, n_classes=self.n_classes,
                                 anchors=_ANCHORS[0:3],
                                 img_size=self.model_size,
                                 data_format=self.data_format)

            inputs = tf.concat([detect1, detect2, detect3], axis=1)

            inputs = build_boxes(inputs)

            boxes_dicts = non_max_suppression(
                inputs, n_classes=self.n_classes,
                max_output_size=self.max_output_size,
                iou_threshold=self.iou_threshold,
                confidence_threshold=self.confidence_threshold)

            return boxes_dicts

# utility functions
def load_images(img_names, model_size):
    """Loads images in a 4D array.

    Args:
        img_names: A list of images names.
        model_size: The input size of the model.
        data_format: A format for the array returned
            ('channels_first' or 'channels_last').
    Returns:
        A 4D NumPy array.
    """
    imgs = []

    for img_name in img_names:
        img = Image.open(img_name)
        img = img.resize(size=model_size)
        img = np.array(img, dtype=np.float32)
        img = np.expand_dims(img, axis=0)
        imgs.append(img)

    imgs = np.concatenate(imgs)

    return imgs

def load_class_names(file_name):
    """Returns a list of class names read from `file_name`."""
    with open(file_name, 'r') as f:
        class_names = f.read().splitlines()
    return class_names

def draw_boxes(img_names, boxes_dicts, class_names, model_size):
    """Draws detected boxes.

    Args:
        img_names: A list of input images names.
        boxes_dicts: A list of class-to-boxes dictionaries.
        class_names: A class names list.
        model_size: The input size of the model.
    Returns:
        None.
    """
    colors = ((np.array(color_palette("hls", 80)) * 255)).astype(np.uint8)
    for num, img_name, boxes_dict in zip(range(len(img_names)), img_names,
                                         boxes_dicts):
        img = Image.open(img_name)
        draw = ImageDraw.Draw(img)
        font = ImageFont.truetype(font='../input/futur.ttf',
                                  size=(img.size[0] + img.size[1]) // 100)
        resize_factor = \
            (img.size[0] / model_size[0], img.size[1] / model_size[1])
        for cls in range(len(class_names)):
            boxes = boxes_dict[cls]
            if np.size(boxes) != 0:
                color = colors[cls]
                for box in boxes:
                    xy, confidence = box[:4], box[4]
                    xy = [xy[i] * resize_factor[i % 2] for i in range(4)]
                    x0, y0 = xy[0], xy[1]
                    thickness = (img.size[0] + img.size[1]) // 200
                    for t in np.linspace(0, 1, thickness):
                        xy[0], xy[1] = xy[0] + t, xy[1] + t
                        xy[2], xy[3] = xy[2] - t, xy[3] - t
                        draw.rectangle(xy, outline=tuple(color))
                    text = '{} {:.1f}%'.format(class_names[cls],
                                               confidence * 100)
                    text_size = draw.textsize(text, font=font)
                    draw.rectangle(
                        [x0, y0 - text_size[1], x0 + text_size[0], y0],
                        fill=tuple(color))
                    draw.text((x0, y0 - text_size[1]), text, fill='black',
                              font=font)
        display(img)

# load yolo v3 weights into tensorflow format
def load_weights(variables, file_name):
    """Reshapes and loads official pretrained Yolo weights.

    Args:
        variables: A list of tf.Variable to be assigned.
        file_name: A name of a file containing weights.
    Returns:
        A list of assign operations.
    """
    with open(file_name, "rb") as f:
        # Skip first 5 values containing irrelevant info
        np.fromfile(f, dtype=np.int32, count=5)
        weights = np.fromfile(f, dtype=np.float32)

        assign_ops = []
        ptr = 0

        # Load weights for Darknet part.
        # Each convolution layer has batch normalization.
        for i in range(52):
            conv_var = variables[5 * i]
            gamma, beta, mean, variance = variables[5 * i + 1:5 * i + 5]
            batch_norm_vars = [beta, gamma, mean, variance]

            for var in batch_norm_vars:
                shape = var.shape.as_list()
                num_params = np.prod(shape)
                var_weights = weights[ptr:ptr + num_params].reshape(shape)
                ptr += num_params
                assign_ops.append(tf.assign(var, var_weights))

            shape = conv_var.shape.as_list()
            num_params = np.prod(shape)
            var_weights = weights[ptr:ptr + num_params].reshape(
                (shape[3], shape[2], shape[0], shape[1]))
            var_weights = np.transpose(var_weights, (2, 3, 1, 0))
            ptr += num_params
            assign_ops.append(tf.assign(conv_var, var_weights))

        # Loading weights for Yolo part.
        # 7th, 15th and 23rd convolution layer has biases and no batch norm.
        ranges = [range(0, 6), range(6, 13), range(13, 20)]
        unnormalized = [6, 13, 20]
        for j in range(3):
            for i in ranges[j]:
                current = 52 * 5 + 5 * i + j * 2
                conv_var = variables[current]
                gamma, beta, mean, variance = \
                    variables[current + 1:current + 5]
                batch_norm_vars = [beta, gamma, mean, variance]

                for var in batch_norm_vars:
                    shape = var.shape.as_list()
                    num_params = np.prod(shape)
                    var_weights = weights[ptr:ptr + num_params].reshape(shape)
                    ptr += num_params
                    assign_ops.append(tf.assign(var, var_weights))

                shape = conv_var.shape.as_list()
                num_params = np.prod(shape)
                var_weights = weights[ptr:ptr + num_params].reshape(
                    (shape[3], shape[2], shape[0], shape[1]))
                var_weights = np.transpose(var_weights, (2, 3, 1, 0))
                ptr += num_params
                assign_ops.append(tf.assign(conv_var, var_weights))

            bias = variables[52 * 5 + unnormalized[j] * 5 + j * 2 + 1]
            shape = bias.shape.as_list()
            num_params = np.prod(shape)
            var_weights = weights[ptr:ptr + num_params].reshape(shape)
            ptr += num_params
            assign_ops.append(tf.assign(bias, var_weights))

            conv_var = variables[52 * 5 + unnormalized[j] * 5 + j * 2]
            shape = conv_var.shape.as_list()
            num_params = np.prod(shape)
            var_weights = weights[ptr:ptr + num_params].reshape(
                (shape[3], shape[2], shape[0], shape[1]))
            var_weights = np.transpose(var_weights, (2, 3, 1, 0))
            ptr += num_params
            assign_ops.append(tf.assign(conv_var, var_weights))

    return assign_ops

# create array with images path
img_names = []
for image in images_ids.unique():
    img_names.append(kaggledata_root + "/kaggle/input/global-wheat-detection/train/" + str(image) + ".jpg")

# yolo v3 model
batch_size = 30  #len(img_names)
batch = load_images(img_names[:batch_size], model_size=_MODEL_SIZE)
#class_names = load_class_names('../input/coco.names')
class_names = ['wheat']  # fix: class_names was otherwise undefined below; this competition has a single class
n_classes = 1
# caveat: the official yolov3.weights were trained on 80 COCO classes, so with
# n_classes=1 the detection-layer weight slices will not line up and those layers
# would need retraining for wheat heads
max_output_size = 10
iou_threshold = 0.5
confidence_threshold = 0.5

model = Yolo_v3(n_classes=n_classes, model_size=_MODEL_SIZE,
                max_output_size=max_output_size,
                iou_threshold=iou_threshold,
                confidence_threshold=confidence_threshold)

inputs = tf.placeholder(tf.float32, [batch_size, 1024 // _MODEL_SCALE, 1024 // _MODEL_SCALE, 3])

detections = model(inputs, training=False)

model_vars = tf.global_variables(scope='yolo_v3_model')
assign_ops = load_weights(model_vars, kaggledata_root + '/kaggle/input/global-wheat-detection/models/yolov3.weights')

with tf.Session() as sess:
    sess.run(assign_ops)
    detection_result = sess.run(detections, feed_dict={inputs: batch})

draw_boxes(img_names[:20], detection_result, class_names, _MODEL_SIZE)

print(detection_result)

###Output
_____no_output_____
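###Markdown
For the competition itself, the detected boxes still have to be turned into the submission format (image_id plus a PredictionString of "confidence xmin ymin width height" groups). A minimal sketch, assuming detection_result keeps the layout produced above — one class-to-boxes dict per image, each box a [x0, y0, x1, y1, confidence] row in model coordinates:

###Code
# Hedged sketch: convert detection_result into a Kaggle submission dataframe.
import os
import pandas as pd

rows = []
for img_name, boxes_dict in zip(img_names[:batch_size], detection_result):
    image_id = os.path.splitext(os.path.basename(img_name))[0]
    parts = []
    for cls, boxes in boxes_dict.items():
        for x0, y0, x1, y1, conf in np.asarray(boxes).reshape(-1, 5):
            # rescale from model size back to the original 1024x1024 images
            x0, x1 = x0 * _MODEL_SCALE, x1 * _MODEL_SCALE
            y0, y1 = y0 * _MODEL_SCALE, y1 * _MODEL_SCALE
            parts.append("{:.4f} {:.0f} {:.0f} {:.0f} {:.0f}".format(
                conf, x0, y0, x1 - x0, y1 - y0))
    rows.append({"image_id": image_id, "PredictionString": " ".join(parts)})

submission = pd.DataFrame(rows, columns=["image_id", "PredictionString"])
submission.head()

###Output
_____no_output_____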
sam/notebooks/Aiko-Q2.ipynb
###Markdown Warmup Question 2: Study the relationship between the patient ages (at the time of their service) and the counts of medical claims. ###Code hue_feature = 'Status'
age_claims = data.groupby(['Age',hue_feature])['ClaimID'].count().reset_index()

plt.figure(figsize=(6,8))
ax1 = plt.subplot(211)
ax1 = sns.scatterplot(x='Age',y='ClaimID', hue=hue_feature, data=age_claims)
ax1.set(ylabel='Number of Claims')
plt.tight_layout()
ax1.figure.savefig("./visualizations/aiko/age_Claims.png",bbox_inches='tight')

ax2 = plt.subplot(212)
ax2 = sns.scatterplot(x='Age',y=np.log(age_claims.ClaimID), hue=hue_feature, data=age_claims)
ax2.set(ylabel='Log Number of Claims')
plt.tight_layout()
ax2.figure.savefig("./visualizations/aiko/age_logClaims.png",bbox_inches='tight')
###Output
_____no_output_____
###Markdown The plot shows a sharp rise after retirement age. The shape of the rise and fall across all ages is consistent across inpatient & outpatient claims, and across the train & test sets. Questions:- Is this due to some sudden onset of disease after 65?- Is there a retirement scheme in the works? ###Code plt.figure(figsize=(6,8))
ax1 = plt.subplot(211)
ax1 = sns.scatterplot(x='Age',y='ClaimID', hue=hue_feature, data=age_claims[age_claims.Age >= 65])
ax1.set(ylabel='Number of Claims')
plt.tight_layout()

ax2 = plt.subplot(212)
ax2 = sns.scatterplot(x='Age',y=np.log(age_claims.ClaimID), hue=hue_feature, data=age_claims[age_claims.Age >= 65])
ax2.set(ylabel='Log Number of Claims')
plt.tight_layout()
###Output
_____no_output_____
###Markdown Study the relationship between the patient age and their chronic conditions. - Within the train samples, do these chronic conditions show a definite trend with respect to increasing ages? ###Code age_chronics = data.groupby(['Age','Set'])['Alzheimer', 'HeartFailure', 'KidneyDisease', 'Cancer',
       'ObstrPulmonary', 'Depression', 'Diabetes', 'IschemicHeart',
       'Osteoporasis', 'RheumatoidArthritis', 'Stroke'].sum().reset_index()
age_chronics = age_chronics.melt(id_vars=['Age','Set'])

plt.figure(figsize=(8,8))
ax1 = plt.subplot(211)
ax1 = sns.lineplot(x='Age',y='value', hue='variable', data=age_chronics[age_chronics.Set=='Train'])
ax1.set(ylabel='Cases of Chronic Disease')
plt.tight_layout()
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
ax1.figure.savefig("./visualizations/aiko/age_Chronic.png",bbox_inches='tight')

ax2 = plt.subplot(212)
ax2 = sns.lineplot(x='Age',y=np.log(age_chronics.value), hue='variable', data=age_chronics[age_chronics.Set=='Train'])
ax2.set(ylabel='Log Cases of Chronic Disease')
plt.tight_layout()
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
ax2.figure.savefig("./visualizations/aiko/age_logChronic.png",bbox_inches='tight')
###Output
_____no_output_____
###Markdown Yes, they show the same trend as the number of claims. In order to make sure the insurance premiums can cover the claims, the insurance company would need to categorize the patients according to their resource usage. In answering the question of what types of patients would make more outpatient visits, please provide your findings. Not sure how best to tackle this. Going with the theme of Age and adding Gender and Number of Chronic Diseases...
###Code out = data[data.Status=='out']  # outpatient subset, used by the merge below

arg = data.groupby(['Age','Gender','Race'])['BeneID'].nunique().reset_index()
arg

arg_ = arg.melt(id_vars=['Age','Gender','Race'], value_vars=['BeneID'])
arg_

plt.figure(figsize=(8,8))
ax1 = plt.subplot(211)
ax1 = sns.lineplot(x = 'Age', y='BeneID', hue='Race', style='Gender', estimator=None, lw=1, data=arg)
ax1.set(ylabel='Number of Beneficiaries')
plt.tight_layout()
#plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
#ax1.figure.savefig("./visualizations/aiko/age_Bene.png",bbox_inches='tight')

ax2 = plt.subplot(212)
ax2 = sns.lineplot(x = 'Age', y='BeneID', hue='Race', style='Gender', estimator=None, lw=1, data=arg[arg.Age>60])
ax2.set(ylabel='Number of Beneficiaries')
plt.tight_layout()
#plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
#ax2.figure.savefig("./visualizations/aiko/age_Bene60.png",bbox_inches='tight')

# create the risk tiers first, then plot their counts
# tiers come from the distinct-beneficiary counts (arg has no ClaimID column)
arg['Risk'] = pd.qcut(arg.BeneID, 3, labels=['Low','Medium','High'], duplicates='drop')

sns.countplot(x = 'Risk', data=arg)

arg.columns

outR = out.merge(arg[['Age', 'Gender', 'Race','Risk']],
                 on = ['Age', 'Gender', 'Race'],
                 how='left')
###Output
_____no_output_____
###Markdown Let's see how separable the data is ###Code sns.catplot(x='Risk', hue='Gender', col='Race',
            kind='count', col_wrap=2,
            height=3.5, aspect=1.5, sharey=False,
            data=outR)
###Output
_____no_output_____
###Markdown Men (what I'm guessing Gender 0 to be) are overrepresented in the High Risk category we created, across all racial groups. There is an imbalance in Race5 for women in the Medium risk category. In answering what types of patients would make more inpatient service claims, please provide your findings. From the perspective of the insurance company, the reimbursed amounts are their coverage on the claims. Please analyze the patterns of the total reimbursed amounts (or average reimbursed amounts/visit) vs different types of patients. ###Code dataR = data.merge(arg[['Age', 'Gender', 'Race','Risk']],
                   on = ['Age', 'Gender', 'Race'],
                   how='left')

reimMean = dataR.groupby(['Risk','Gender','Race'])['InscClaimAmtReimbursed'].mean().reset_index()
reimTotal = dataR.groupby(['Risk','Gender','Race'])['InscClaimAmtReimbursed'].sum().reset_index()
reimTotal['logReim'] = np.log(reimTotal.InscClaimAmtReimbursed)

#ax1 = plt.subplot(211)
ax1= sns.catplot(x='Risk', y='InscClaimAmtReimbursed',
            hue = 'Gender', palette="rocket",
            col='Race', kind='bar', #col_wrap=2,
            height=7, aspect=0.7,
            data=reimMean)
plt.tight_layout()
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
ax1.savefig("./visualizations/aiko/risk_meanReim.png",bbox_inches='tight')

#ax2 = plt.subplot(212)
ax2= sns.catplot(x='Risk', y='logReim',
            hue = 'Gender', palette="rocket",
            col='Race', kind='bar', #col_wrap=2,
            height=7, aspect=0.7,
            data=reimTotal)
plt.tight_layout()
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
ax2.savefig("./visualizations/aiko/risk_TotalReim.png",bbox_inches='tight')
###Output
_____no_output_____
###Markdown Race2 has noticeably higher mean insurance reimbursements for low risk patients. From the perspective of the providers, the sum of reimbursed amounts and deductibles are flowing to the providers. Based on this, analyze which types of patients contribute more to the providers in terms of the aggregate charges or the average charge per visit.
###Code d= {1: 'Alabama', 2: 'Alaska', 3: 'Arizona', 4: 'Arkansas', 5: 'California', 6: 'Colorado', 7: 'Connecticut', 8: 'Delaware', 9: 'District of Columbia', 10: 'Florida', 11: 'Georgia', 12: 'Hawaii', 13: 'Idaho', 14: 'Illinois', 15: 'Indiana', 16: 'Iowa', 17: 'Kansas', 18: 'Kentucky', 19: 'Louisiana', 20: 'Maine', 21: 'Maryland', 22: 'Massachusetts', 23: 'Michigan', 24: 'Minnesota', 25: 'Mississippi', 26: 'Missouri', 27: 'Montana', 28: 'Nebraska', 29: 'Nevada', 30: 'New Hampshire', 31: 'New Jersey', 32: 'New Mexico', 33: 'New York', 34: 'North Carolina', 35: 'North Dakota', 36: 'Ohio', 37: 'Oklahoma', 38: 'Oregon', 39: 'Pennsylvania', 41: 'Rhode Island', 42: 'South Carolina', 43: 'South Dakota', 44: 'Tennessee', 45: 'Texas', 46: 'Utah', 47: 'Vermont', 49: 'Virginia', 50: 'Washington', 51: 'West Virginia', 52: 'Wisconsin', 53: 'Wyoming', 54: 'Puerto Rico'} len(d) ###Output _____no_output_____
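###Markdown A small added sketch of how the dictionary above could be used: it maps numeric state codes to state names, so a code column can be turned into readable labels with `Series.map`. The `State` column name mentioned in the comment is an assumption for illustration, not taken from this notebook. ###Code import pandas as pd

# Hypothetical usage on a real frame: dataR['StateName'] = dataR['State'].map(d)
demo = pd.Series([5, 33, 45])
print(demo.map(d).tolist())  # ['California', 'New York', 'Texas']
###Output
_____no_output_____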
Photo-z/gtr_NW_colors+dcr_propmotion.ipynb
###Markdown This notebook produces photo-z predictions for a set of 119,899 objects. The objective is to compare regressions trained on photometric colors alone, colors and dcr offset, colors and proper motions, and colors + dcr offset + GAIA proper motions. We also want to attempt to use the regression to do object classification as well. Matching summary for data table used: gtr match to spAll-dr12 table from SDSS dr12 match resulting table to gaia with query: SELECT gaia.ra, gaia.dec, gaia.pmra, gaia.pmdec FROM gaiadr2.gaia_source as gaia INNER JOIN user_[USERNAME].xmatch_gaia_source_[TABLENAME] ON user_[USERNAME].xmatch_gaia_source_[TABLENAME].gaia_source_source_id = gaia.source_id Produces file gtr_all+dr12offset+gaia.fits ###Code import numpy as np
from astropy.table import Table
from sklearn.model_selection import train_test_split, cross_val_predict
from sklearn.metrics import classification_report
from astroML.linear_model import NadarayaWatson
from dask import compute, delayed
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import palettable
import richardsplot as rplot
%matplotlib inline

def absoluteMagnitude(xarray, yarray):
    if len(xarray) != len(yarray):
        print("The two arrays must be the same length")
        return
    else:
        absMag = []
        for i in range(len(xarray)):
            if (not np.isnan(xarray[i])) and (not np.isnan(yarray[i])):
                absMag = np.append(absMag, np.sqrt((xarray[i]*xarray[i])+(yarray[i]*yarray[i])))
            else:
                absMag = np.append(absMag, -1000000)
        return absMag

data = Table.read('gtr_all+dr12offset+gaia.fits')
data=data.filled()
print(data.keys())

def process(Xin):
    return model.predict(Xin)

OFFSETABS_u = absoluteMagnitude(data['OFFSETRA'].T[0], data['OFFSETDEC'].T[0])
OFFSETABS_g = absoluteMagnitude(data['OFFSETRA'].T[1], data['OFFSETDEC'].T[1])
pmabs = absoluteMagnitude(data['pmra'], data['pmdec'])

print(data['OFFSETRA'].T[0])
print(data['OFFSETRA'].T[1])

#1 is colors, 2 is colors+offset, 3 is colors+proper motion, 4 is colors+offset+proper motion
X1 = np.vstack([ data['ug'], data['gr'], data['ri'], data['iz'] ]).T
X2 = np.vstack([ data['ug'], data['gr'], data['ri'], data['iz'], OFFSETABS_u, OFFSETABS_g ]).T
X3 = np.vstack([ data['ug'], data['gr'], data['ri'], data['iz'], pmabs ]).T
X4 = np.vstack([ data['ug'], data['gr'], data['ri'], data['iz'], OFFSETABS_u, OFFSETABS_g, pmabs ]).T
y = np.array(data['zspec'])

X1_train, X1_test, y1_train, y1_test = train_test_split(X1, y, test_size = 0.2, random_state=57)
X2_train, X2_test, y2_train, y2_test = train_test_split(X2, y, test_size = 0.2, random_state=57)
X3_train, X3_test, y3_train, y3_test = train_test_split(X3, y, test_size = 0.2, random_state=57)
X4_train, X4_test, y4_train, y4_test = train_test_split(X4, y, test_size = 0.2, random_state=57)

import dask.threaded

model = NadarayaWatson('gaussian', 0.05)
model.fit(X1_train, y1_train)
dobjs = [delayed(process)(x.reshape(1,-1)) for x in X1_test]
ypredselfNW1 = compute(*dobjs, get=dask.threaded.get)
ypredselfNW1 = np.array(ypredselfNW1).reshape(1,-1)[0]

del model
del dobjs

model = NadarayaWatson('gaussian', 0.05)
model.fit(X2_train, y2_train)
dobjs = [delayed(process)(x.reshape(1,-1)) for x in X2_test]
ypredselfNW2 = compute(*dobjs, get=dask.threaded.get)
ypredselfNW2 = np.array(ypredselfNW2).reshape(1,-1)[0]

del model
del dobjs

model = NadarayaWatson('gaussian', 0.05)
model.fit(X3_train, y3_train)
dobjs = [delayed(process)(x.reshape(1,-1)) for x in X3_test]
ypredselfNW3 =
np.array(ypredselfNW3).reshape(1,-1)[0] del model del dobjs model = NadarayaWatson('gaussian', 0.05) model.fit(X4_train, y4_train) dobjs = [delayed(process)(x.reshape(1,-1)) for x in X4_test] ypredselfNW4 = compute(*dobjs, get=dask.threaded.get) ypredselfNW4 = np.array(ypredselfNW4).reshape(1,-1)[0] plt.figure(figsize=(13,13)) plt.subplot(221) plt.scatter(ypredselfNW1, y1_test, s=2) plt.plot([0,1,2,3,4,5], color='red') plt.xlabel('NW trained on colors') plt.ylabel('zspec') plt.xlim(0,5) plt.ylim(0,5) plt.subplot(222) plt.scatter(ypredselfNW2, y2_test, s=2) plt.plot([0,1,2,3,4,5], color='red') plt.xlabel('NW trained on colors+dcr') plt.ylabel('zspec') plt.xlim(0,5) plt.ylim(0,5) plt.subplot(223) plt.scatter(ypredselfNW3, y3_test, s=2) plt.plot([0,1,2,3,4,5], color='red') plt.xlabel('NW trained on colors+proper motion') plt.ylabel('zspec') plt.xlim(0,5) plt.ylim(0,5) plt.subplot(224) plt.scatter(ypredselfNW4, y4_test, s=2) plt.plot([0,1,2,3,4,5], color='red') plt.xlabel('NW trained on colors+dcr+proper motion') plt.ylabel('zspec') plt.xlim(0,5) plt.ylim(0,5) plt.savefig("gtr_all_NWout.png", dpi = 400) ###Output _____no_output_____
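###Markdown As an added numeric check (a sketch, reusing the arrays defined above), the four feature sets can be compared with a simple RMSE against the spectroscopic redshifts, complementing the scatter plots. ###Code # Sketch: RMSE of each Nadaraya-Watson regression on its held-out test split.
for label, pred, truth in [('colors', ypredselfNW1, y1_test),
                           ('colors+dcr', ypredselfNW2, y2_test),
                           ('colors+pm', ypredselfNW3, y3_test),
                           ('colors+dcr+pm', ypredselfNW4, y4_test)]:
    rmse = np.sqrt(np.mean((pred - truth) ** 2))
    print('{}: RMSE = {:.4f}'.format(label, rmse))
###Output
_____no_output_____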
Notebooks/successful-models/mlp-subreddits-4000.ipynb
###Markdown Data Mining Challenge: *Reddit Gender Text-Classification* (MLP) Modules ###Code # Numpy & matplotlib for notebooks
%pylab inline
# Pandas for data analysis and manipulation
import pandas as pd
# Sklearn
from sklearn.neural_network import MLPClassifier # Multi-layer Perceptron classifier which optimizes the log-loss function using LBFGS or SGD.
from sklearn.model_selection import train_test_split # to split arrays or matrices into random train and test subsets
from sklearn.model_selection import KFold # K-Folds cross-validator providing train/test indices to split data in train/test sets.
from sklearn.decomposition import PCA, TruncatedSVD # Principal component analysis (PCA); dimensionality reduction using truncated SVD.
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import CountVectorizer # Convert a collection of text documents to a matrix of token counts
from sklearn.metrics import roc_auc_score as roc # Compute Area Under the Receiver Operating Characteristic Curve from prediction scores
from sklearn.metrics import roc_curve, auc # Compute ROC; Compute Area Under the Curve (AUC) using the trapezoidal rule
from sklearn.model_selection import cross_val_score
# Matplotlib
import matplotlib # Data visualization
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
# Seaborn
import seaborn as sns # Statistical data visualization (based on matplotlib)
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown Data Collection ###Code # Import the training dataset and target. Create a list of authors

# Import the training dataset
train_data = pd.read_csv("../input/dataset/train_data.csv", encoding="utf8")
# Import the training target
target = pd.read_csv("../input/dataset/train_target.csv")

# Create author's gender dictionary
author_gender = {}
# Populate the dictionary with keys ("authors") and values ("gender")
for i in range(len(target)):
    author_gender[target.author[i]] = target.gender[i]
###Output
_____no_output_____
###Markdown Data Manipulation ###Code # Create a list of aggregated binary subreddits
Xs = []
# Create a list of genders
y = []
# Create a list of authors
a = []

# Populate the lists
for author, group in train_data.groupby("author"):
    Xs.append(group.subreddit.str.cat(sep = " "))
    y.append(author_gender[author])
    a.append(author)

# Lower text in comments
clean_train_subreddits = [xs.lower() for xs in Xs]
###Output
_____no_output_____
###Markdown Models Definition & Training CountVectorizer ###Code # Define CountVectorizer
vectorizer_ = CountVectorizer(analyzer = "word",
                             tokenizer = None,
                             preprocessor = None,
                             stop_words = None,
                             binary=True
                            ) #500

# Train CountVectorizer
train_data_subreddits = vectorizer_.fit_transform(clean_train_subreddits).toarray()

sum(train_data_subreddits[1])

y = np.array(y)

# Plot the test data along the 2 dimensions of largest variance
def plot_LSA(test_data, test_labels, plot=True):
        lsa = TruncatedSVD(n_components=2)
        lsa.fit(test_data)
        lsa_scores = lsa.transform(test_data)
        color_mapper = {label:idx for idx,label in enumerate(set(test_labels))}
        color_column = [color_mapper[label] for label in test_labels]
        colors = ['orange','blue']
        if plot:
            plt.scatter(lsa_scores[:,0], lsa_scores[:,1], s=8, alpha=.8, c=test_labels, cmap=matplotlib.colors.ListedColormap(colors))
            orange_patch = mpatches.Patch(color='orange', label='M')
            blue_patch = mpatches.Patch(color='blue', label='F')
            plt.legend(handles=[orange_patch, blue_patch], prop={'size': 20})
            plt.title('Binary
Subreddits only')
            plt.savefig('foo.pdf')

plt.figure(figsize=(8, 8))
plot_LSA(train_data_subreddits, y)
plt.show()
###Output
_____no_output_____
###Markdown MLP Classifier ###Code # Split the data for training
X_train, X_valid, y_train, y_valid = train_test_split(train_data_subreddits, y, train_size=0.8, test_size=0.2, random_state=0)

# Define MLP Classifier:
## Activation function for the hidden layer: "rectified linear unit function"
## Solver for weight optimization: "stochastic gradient-based optimizer"
## Alpha: regularization parameter
## Learning rate schedule for weight updates: "gradually decreases the learning rate at each time step t using an inverse scaling exponent of power_t"
## Verbose: "True" in order to print progress messages to stdout.
## Early stopping: "True" in order to use early stopping to terminate training when validation score is not improving. It automatically sets aside 10% of training data as validation and terminates training when the validation score is not improving by at least tol for n_iter_no_change consecutive epochs.
mlpClf = MLPClassifier(activation= 'relu', solver = 'adam', alpha = 0.05, learning_rate = 'invscaling', verbose = True,
                       early_stopping = True, max_iter = 400, random_state=0)

# K-fold for cross-validation
kfold = KFold(n_splits= 10)

# Manual training and validation on all K folds (kept for reference)
# for train_indices, test_indices in kfold.split(X_train):
#     mlpClf.fit(X_train[train_indices], y_train[train_indices])
#     print(mlpClf.score(X_train[test_indices], y_train[test_indices]))

# cross_val_score clones mlpClf and fits the clones on train_data_subreddits and y with cross-validation (we did it for consistency).
results = cross_val_score(mlpClf, train_data_subreddits, y, cv= kfold, scoring= 'roc_auc')
print("roc = ", np.mean(results))

# Model fit
mlpClf.fit(X_train, y_train)

# Prediction
y_score = mlpClf.predict_proba(X_valid)[:,1]

# Display score
print(roc(y_valid,y_score))
###Output
Iteration 1, loss = 0.60519812
Validation score: 0.726667
Iteration 2, loss = 0.49356004
Validation score: 0.780000
Iteration 3, loss = 0.40826604
Validation score: 0.851111
Iteration 4, loss = 0.34901344
Validation score: 0.862222
Iteration 5, loss = 0.31110970
Validation score: 0.864444
Iteration 6, loss = 0.28548940
Validation score: 0.871111
Iteration 7, loss = 0.26582918
Validation score: 0.873333
Iteration 8, loss = 0.25112270
Validation score: 0.873333
Iteration 9, loss = 0.23981873
Validation score: 0.864444
Iteration 10, loss = 0.22957836
Validation score: 0.866667
Iteration 11, loss = 0.22126490
Validation score: 0.864444
Iteration 12, loss = 0.21459015
Validation score: 0.868889
Iteration 13, loss = 0.20804896
Validation score: 0.866667
Iteration 14, loss = 0.20266724
Validation score: 0.866667
Iteration 15, loss = 0.19800401
Validation score: 0.866667
Iteration 16, loss = 0.19439373
Validation score: 0.862222
Iteration 17, loss = 0.19090965
Validation score: 0.868889
Iteration 18, loss = 0.18725276
Validation score: 0.862222
Validation score did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.60713516 Validation score: 0.724444 Iteration 2, loss = 0.49532957 Validation score: 0.786667 Iteration 3, loss = 0.41057434 Validation score: 0.857778 Iteration 4, loss = 0.34806334 Validation score: 0.864444 Iteration 5, loss = 0.30867577 Validation score: 0.866667 Iteration 6, loss = 0.28189177 Validation score: 0.866667 Iteration 7, loss = 0.26300150 Validation score: 0.871111 Iteration 8, loss = 0.24711541 Validation score: 0.868889 Iteration 9, loss = 0.23558131 Validation score: 0.875556 Iteration 10, loss = 0.22539743 Validation score: 0.866667 Iteration 11, loss = 0.21711170 Validation score: 0.862222 Iteration 12, loss = 0.21031035 Validation score: 0.855556 Iteration 13, loss = 0.20419508 Validation score: 0.855556 Iteration 14, loss = 0.19908709 Validation score: 0.853333 Iteration 15, loss = 0.19449144 Validation score: 0.853333 Iteration 16, loss = 0.19072728 Validation score: 0.857778 Iteration 17, loss = 0.18769351 Validation score: 0.851111 Iteration 18, loss = 0.18363201 Validation score: 0.846667 Iteration 19, loss = 0.18093759 Validation score: 0.844444 Iteration 20, loss = 0.17798208 Validation score: 0.844444 Validation score did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping. Iteration 1, loss = 0.60572593 Validation score: 0.724444 Iteration 2, loss = 0.49269410 Validation score: 0.802222 Iteration 3, loss = 0.40726564 Validation score: 0.842222 Iteration 4, loss = 0.34727180 Validation score: 0.851111 Iteration 5, loss = 0.30805438 Validation score: 0.860000 Iteration 6, loss = 0.28146397 Validation score: 0.855556 Iteration 7, loss = 0.26274239 Validation score: 0.855556 Iteration 8, loss = 0.24879462 Validation score: 0.857778 Iteration 9, loss = 0.23715569 Validation score: 0.848889 Iteration 10, loss = 0.22783390 Validation score: 0.851111 Iteration 11, loss = 0.21988823 Validation score: 0.842222 Iteration 12, loss = 0.21289148 Validation score: 0.844444 Iteration 13, loss = 0.20686763 Validation score: 0.831111 Iteration 14, loss = 0.20209538 Validation score: 0.831111 Iteration 15, loss = 0.19731711 Validation score: 0.828889 Iteration 16, loss = 0.19357185 Validation score: 0.824444 Validation score did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping. Iteration 1, loss = 0.60915427 Validation score: 0.722222 Iteration 2, loss = 0.49312872 Validation score: 0.773333 Iteration 3, loss = 0.40481327 Validation score: 0.835556 Iteration 4, loss = 0.34492944 Validation score: 0.842222 Iteration 5, loss = 0.30766272 Validation score: 0.848889 Iteration 6, loss = 0.28219744 Validation score: 0.853333 Iteration 7, loss = 0.26424319 Validation score: 0.848889 Iteration 8, loss = 0.24998039 Validation score: 0.846667 Iteration 9, loss = 0.23923767 Validation score: 0.848889 Iteration 10, loss = 0.22954690 Validation score: 0.851111 Iteration 11, loss = 0.22266089 Validation score: 0.840000 Iteration 12, loss = 0.21620817 Validation score: 0.842222 Iteration 13, loss = 0.21039172 Validation score: 0.837778 Iteration 14, loss = 0.20581384 Validation score: 0.837778 Iteration 15, loss = 0.20196251 Validation score: 0.835556 Iteration 16, loss = 0.19763705 Validation score: 0.837778 Iteration 17, loss = 0.19432982 Validation score: 0.828889 Validation score did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping. 
Iteration 1, loss = 0.60415471 Validation score: 0.728889 Iteration 2, loss = 0.49019196 Validation score: 0.775556 Iteration 3, loss = 0.40247410 Validation score: 0.844444 Iteration 4, loss = 0.34330874 Validation score: 0.853333 Iteration 5, loss = 0.30533714 Validation score: 0.848889 Iteration 6, loss = 0.27925772 Validation score: 0.853333 Iteration 7, loss = 0.26086658 Validation score: 0.851111 Iteration 8, loss = 0.24666655 Validation score: 0.851111 Iteration 9, loss = 0.23540549 Validation score: 0.835556 Iteration 10, loss = 0.22586151 Validation score: 0.844444 Iteration 11, loss = 0.21807683 Validation score: 0.842222 Iteration 12, loss = 0.21216869 Validation score: 0.846667 Iteration 13, loss = 0.20601932 Validation score: 0.835556 Iteration 14, loss = 0.20092077 Validation score: 0.842222 Iteration 15, loss = 0.19657577 Validation score: 0.840000 Validation score did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping. Iteration 1, loss = 0.60314285 Validation score: 0.733333 Iteration 2, loss = 0.48956292 Validation score: 0.777778 Iteration 3, loss = 0.40592068 Validation score: 0.813333 Iteration 4, loss = 0.34444953 Validation score: 0.840000 Iteration 5, loss = 0.30468572 Validation score: 0.842222 Iteration 6, loss = 0.27781771 Validation score: 0.835556 Iteration 7, loss = 0.25942207 Validation score: 0.835556 Iteration 8, loss = 0.24478070 Validation score: 0.840000 Iteration 9, loss = 0.23301084 Validation score: 0.837778 Iteration 10, loss = 0.22355183 Validation score: 0.833333 Iteration 11, loss = 0.21619526 Validation score: 0.840000 Iteration 12, loss = 0.20903106 Validation score: 0.828889 Iteration 13, loss = 0.20321302 Validation score: 0.828889 Iteration 14, loss = 0.19834626 Validation score: 0.822222 Iteration 15, loss = 0.19356318 Validation score: 0.822222 Iteration 16, loss = 0.18992257 Validation score: 0.824444 Validation score did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping. Iteration 1, loss = 0.60160344 Validation score: 0.737778 Iteration 2, loss = 0.49134389 Validation score: 0.768889 Iteration 3, loss = 0.40693951 Validation score: 0.848889 Iteration 4, loss = 0.34653798 Validation score: 0.855556 Iteration 5, loss = 0.30807883 Validation score: 0.853333 Iteration 6, loss = 0.27996377 Validation score: 0.848889 Iteration 7, loss = 0.26067121 Validation score: 0.855556 Iteration 8, loss = 0.24563418 Validation score: 0.848889 Iteration 9, loss = 0.23362419 Validation score: 0.851111 Iteration 10, loss = 0.22400361 Validation score: 0.848889 Iteration 11, loss = 0.21624213 Validation score: 0.846667 Iteration 12, loss = 0.20901022 Validation score: 0.840000 Iteration 13, loss = 0.20294414 Validation score: 0.842222 Iteration 14, loss = 0.19804547 Validation score: 0.833333 Iteration 15, loss = 0.19362548 Validation score: 0.828889 Validation score did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping. 
Iteration 1, loss = 0.60050182 Validation score: 0.742222 Iteration 2, loss = 0.48671187 Validation score: 0.784444 Iteration 3, loss = 0.40390351 Validation score: 0.855556 Iteration 4, loss = 0.34328843 Validation score: 0.875556 Iteration 5, loss = 0.30426020 Validation score: 0.877778 Iteration 6, loss = 0.27670439 Validation score: 0.864444 Iteration 7, loss = 0.25773230 Validation score: 0.866667 Iteration 8, loss = 0.24294222 Validation score: 0.868889 Iteration 9, loss = 0.23159069 Validation score: 0.871111 Iteration 10, loss = 0.22218745 Validation score: 0.871111 Iteration 11, loss = 0.21417301 Validation score: 0.868889 Iteration 12, loss = 0.20751911 Validation score: 0.864444 Iteration 13, loss = 0.20175142 Validation score: 0.860000 Iteration 14, loss = 0.19744842 Validation score: 0.857778 Iteration 15, loss = 0.19264898 Validation score: 0.855556 Iteration 16, loss = 0.18875589 Validation score: 0.860000 Validation score did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping. Iteration 1, loss = 0.59848040 Validation score: 0.737778 Iteration 2, loss = 0.48546106 Validation score: 0.780000 Iteration 3, loss = 0.40181588 Validation score: 0.846667 Iteration 4, loss = 0.34048926 Validation score: 0.853333 Iteration 5, loss = 0.30180163 Validation score: 0.848889 Iteration 6, loss = 0.27461504 Validation score: 0.842222 Iteration 7, loss = 0.25549353 Validation score: 0.855556 Iteration 8, loss = 0.24086228 Validation score: 0.844444 Iteration 9, loss = 0.22919604 Validation score: 0.848889 Iteration 10, loss = 0.21969364 Validation score: 0.846667 Iteration 11, loss = 0.21167934 Validation score: 0.846667 Iteration 12, loss = 0.20485362 Validation score: 0.851111 Iteration 13, loss = 0.19916244 Validation score: 0.848889 Iteration 14, loss = 0.19404413 Validation score: 0.851111 Iteration 15, loss = 0.18943788 Validation score: 0.853333 Iteration 16, loss = 0.18556226 Validation score: 0.853333 Iteration 17, loss = 0.18198083 Validation score: 0.848889 Iteration 18, loss = 0.17903184 Validation score: 0.840000 Validation score did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping. 
Iteration 1, loss = 0.59855763 Validation score: 0.737778 Iteration 2, loss = 0.48469279 Validation score: 0.777778 Iteration 3, loss = 0.40299411 Validation score: 0.842222 Iteration 4, loss = 0.34257251 Validation score: 0.855556 Iteration 5, loss = 0.30330475 Validation score: 0.848889 Iteration 6, loss = 0.27683817 Validation score: 0.851111 Iteration 7, loss = 0.25758905 Validation score: 0.851111 Iteration 8, loss = 0.24296337 Validation score: 0.855556 Iteration 9, loss = 0.23171368 Validation score: 0.855556 Iteration 10, loss = 0.22263622 Validation score: 0.857778 Iteration 11, loss = 0.21456259 Validation score: 0.853333 Iteration 12, loss = 0.20806902 Validation score: 0.855556 Iteration 13, loss = 0.20220837 Validation score: 0.860000 Iteration 14, loss = 0.19797839 Validation score: 0.862222 Iteration 15, loss = 0.19329325 Validation score: 0.864444 Iteration 16, loss = 0.18985083 Validation score: 0.866667 Iteration 17, loss = 0.18635492 Validation score: 0.864444 Iteration 18, loss = 0.18341282 Validation score: 0.862222 Iteration 19, loss = 0.18039613 Validation score: 0.862222 Iteration 20, loss = 0.17790492 Validation score: 0.855556 Iteration 21, loss = 0.17578645 Validation score: 0.860000 Iteration 22, loss = 0.17373198 Validation score: 0.860000 Iteration 23, loss = 0.17140256 Validation score: 0.855556 Iteration 24, loss = 0.16973979 Validation score: 0.860000 Iteration 25, loss = 0.16810221 Validation score: 0.860000 Iteration 26, loss = 0.16609072 Validation score: 0.862222 Iteration 27, loss = 0.16459771 Validation score: 0.853333 Validation score did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping. roc = 0.899400332588652 Iteration 1, loss = 0.60720573 Validation score: 0.732500 Iteration 2, loss = 0.50065820 Validation score: 0.762500 Iteration 3, loss = 0.42322966 Validation score: 0.820000 Iteration 4, loss = 0.36064135 Validation score: 0.845000 Iteration 5, loss = 0.31731490 Validation score: 0.857500 Iteration 6, loss = 0.28802893 Validation score: 0.857500 Iteration 7, loss = 0.26707265 Validation score: 0.857500 Iteration 8, loss = 0.25150091 Validation score: 0.860000 Iteration 9, loss = 0.23846592 Validation score: 0.857500 Iteration 10, loss = 0.22879330 Validation score: 0.855000 Iteration 11, loss = 0.21975116 Validation score: 0.852500 Iteration 12, loss = 0.21304780 Validation score: 0.852500 Iteration 13, loss = 0.20704653 Validation score: 0.850000 Iteration 14, loss = 0.20153542 Validation score: 0.847500 Iteration 15, loss = 0.19691662 Validation score: 0.850000 Iteration 16, loss = 0.19292959 Validation score: 0.845000 Iteration 17, loss = 0.18901319 Validation score: 0.845000 Iteration 18, loss = 0.18559416 Validation score: 0.842500 Iteration 19, loss = 0.18293977 Validation score: 0.837500 Validation score did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping. 0.8949591183427125 ###Markdown ROC Visualization ###Code # ROC Curve for validation data fpr, tpr, thresholds = roc_curve(y_valid, y_score) roc_auc = auc(fpr, tpr) plt.figure() lw = 2 plt.plot(fpr, tpr, color='darkorange', lw=lw, label='ROC curve (area = %0.4f)'% roc_auc ) plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.0]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver operating characteristic example') plt.legend(loc="lower right") plt.show() np.save("y_score_MLPs",y_score) ###Output _____no_output_____
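###Markdown A short added sketch: beyond AUC, it can help to look at hard 0.5-threshold predictions on the validation split. This assumes the gender labels in `y_valid` are encoded as 0/1 integers, as the rest of the notebook suggests. ###Code # Sketch: confusion matrix and accuracy at a 0.5 probability threshold.
from sklearn.metrics import confusion_matrix, accuracy_score

y_pred = (y_score >= 0.5).astype(int)
print(confusion_matrix(y_valid, y_pred))
print('accuracy:', accuracy_score(y_valid, y_pred))
###Output
_____no_output_____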
notebooks/Timeslice.ipynb
###Markdown http://www.gregreda.com/2015/08/23/cohort-analysis-with-python/ ###Code for e in l: e.to_sql("olx_houses", sqlite3.connect("./olx_houses.db"), if_exists='append') ###Output /home/rolisz/.local/share/virtualenvs/olx_scrapy-O7wnURrw/lib/python3.6/site-packages/pandas/core/generic.py:1534: UserWarning: The spaces in these column names will not be changed. In pandas versions < 0.14, spaces were converted to underscores. chunksize=chunksize, dtype=dtype)
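###Markdown A quick added sketch to confirm the append worked, reading the row count back from the SQLite file written above. ###Code # Sketch: count the rows now stored in the olx_houses table.
import sqlite3
import pandas as pd

conn = sqlite3.connect("./olx_houses.db")
print(pd.read_sql_query("SELECT COUNT(*) AS n FROM olx_houses", conn))
conn.close()
###Output
_____no_output_____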
R_basics/Dics10_PS5.ipynb
###Markdown A.1 Import libraries ###Code library(dplyr) ###Output
_____no_output_____
###Markdown Problem 1 1.A ###Code n = 164+160
#Change x number to get answer for assignment
x = n-10-5 #-NUMBER #-NUMBER
p = x/n
table_1 = cbind(x,n,p) # cbind() columns
table_1
###Output
_____no_output_____
###Markdown 1.B$$SE_{\hat{p}}= \sqrt{\frac{p(1-p)}{n}}$$ ###Code std_error = sqrt(p*(1-p)/n)
std_error
###Output
_____no_output_____
###Markdown 1.C $$\hat{p}\pm z_{\alpha/2}SE_{\hat{p}}$$ ###Code #Change Z number to get answer for assignment
z = 1.5

p+(z*std_error)
p-(z*std_error)
###Output
_____no_output_____
###Markdown 1.E ###Code table_1e = matrix(nrow=2, ncol=2, c(151,13,120,40))
table_1e
###Output
_____no_output_____
###Markdown 1.E.1 Add column and add col names ###Code #rowSums function creates row sum
#cbind function adds column
table_1e = cbind(table_1e, rowSums(table_1e))
colnames(table_1e) = c('T:AZT', 'C: NO AZT', 'Total')
table_1e
###Output
_____no_output_____
###Markdown 1.E.2 Add row and add row names ###Code #colSums function creates col sum
#rbind function adds row
table_1e = rbind(table_1e, colSums(table_1e))
rownames(table_1e) = c('HIV -', 'HIV +', 'Total')
table_1e
###Output
_____no_output_____
###Markdown 1.G$$\frac{D}{\sqrt{\hat{p}(1-\hat{p})(1/n_1+ 1/n_2)}}$$- Where D is the difference between sample proportions of HIV negative births between the treatment and control groups ###Code prop.test(c(151,120),c(164,160),correct=F) ###Output
_____no_output_____
###Markdown Problem 2 ###Code prop.test(c(151,120),c(164,160),correct=F) ###Output
_____no_output_____
###Markdown 2.A Recorded values! ###Code table_21 = matrix(nrow=2, ncol=3, c(68,164,128,204,173,165))
table_21
###Output
_____no_output_____
###Markdown 2.C Expected values! If you want to review how expected values are computed, refer to the previous lesson [2.2.2](https://github.com/corybaird/PLCY_610_public/blob/master/Discussion_sections/Disc9_PS5/Disc9_PS5.ipynb) ###Code chisq.test(table_21 , correct=F)$expected ###Output
_____no_output_____
###Markdown 2.D Test to see whether differences in 2.A and 2.C are statistically different ###Code chisq.test(table_21 , correct=F) ###Output
_____no_output_____
###Markdown Problem 3 3.B Recorded values!
- Similar to 2.A ###Code fridays = c(315641 , 298749 ,322631) fridays ###Output _____no_output_____ ###Markdown 3.C Expected values- Similar to 2.C ###Code chisq.test(fridays, correct=F)$expected ###Output _____no_output_____ ###Markdown 3.D Test to see whether differences in 2.A and 2.C are statistically different- Similar to 2.D ###Code probabilities = c(1/3,1/3,1/3) chisq.test(fridays, correct=F, p=probabilities) ###Output _____no_output_____ ###Markdown Problem 4 ###Code library(foreign) url = 'http://fmwww.bc.edu/ec-p/data/wooldridge/bwght.dta' df_birth = read.dta(url) ###Output _____no_output_____ ###Markdown 4.1 Show plot ###Code plot(df_birth$cigs, df_birth$bwght) abline(lm(bwght~cigs, data=df_birth), col="red") ###Output _____no_output_____ ###Markdown 4.2 Find the slope (red line) and intercept (point where red line touches y axis)- This is a regression line- We can find regression slope and intercept using the lm() function - You can think of this function as the regression function ###Code regression = lm(bwght ~ cigs, data = df_birth) regression ###Output _____no_output_____ ###Markdown 4.3 Extract coefs to calculate 4.D$$Y = \beta_0+\beta_1\cdot x \\\text{Birth weight}=\beta_0+\beta_1\cdot\text{Cigs/Day}$$ ###Code coefs = regression$coefficients coefs beta0 = coefs[c(1)] %>% as.double() beta0 beta1 =coefs[c(2)] %>% as.double() beta1 cigs_day = 10 beta0 + cigs_day*beta1 ###Output _____no_output_____
torch/install_pytorch.ipynb
###Markdown Install PyTorch https://pytorch.org/ Check GPU compatibility https://developer.nvidia.com/cuda-gpus Install CUDA https://developer.nvidia.com/cuda-zone Download it from here; when choosing, pick the offline package, i.e. `exe (local)`. Install cuDNN https://developer.nvidia.com/cudnn Take care to pick the version that matches your CUDA install. **Registration and login are required**. The install guide is at https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#install-windows; the basic steps are as follows:> Copy \cuda\bin\cudnn64_7.6.5.32.dll to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2\bin.> Copy \cuda\include\cudnn.h to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2\include.> Copy \cuda\lib\x64\cudnn.lib to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2\lib\x64. Install pytorch https://pytorch.org/get-started/locally/ Look up the matching version there; once selected, the page shows the corresponding install command. conda install pytorch torchvision cudatoolkit=10.1 -c pytorch Verify the installation succeeded```python
import torch
x = torch.tensor([10.0])
x = x.cuda()
print(x)
y = torch.randn(2, 3)
y = y.cuda()
print(y)
z = x + y
print(z)
from torch.backends import cudnn
print(cudnn.is_acceptable(x))
print(cudnn.version())
print(cudnn.is_acceptable(torch.cuda.FloatTensor(1)))
print(cudnn.version())
``` ###Code import torch
print(torch.cuda.is_available())
x = torch.tensor([10.0])
x = x.cuda()
print(x)
y = torch.randn(2, 3)
y = y.cuda()
print(y)
z = x + y
print(z)
from torch.backends import cudnn
print(cudnn.is_acceptable(x))
print(cudnn.version())
print(cudnn.is_acceptable(torch.cuda.FloatTensor(1)))
print(cudnn.version())
###Output
True
tensor([10.], device='cuda:0')
tensor([[-0.0344,  1.8270, -1.1743],
        [-1.7450, -0.5376,  0.9639]], device='cuda:0')
tensor([[ 9.9656, 11.8270,  8.8257],
        [ 8.2550,  9.4624, 10.9639]], device='cuda:0')
True
7501
True
7501
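###Markdown A few extra checks that are often useful right after a fresh install (an added sketch; it assumes at least one CUDA device is visible). ###Code import torch

print(torch.__version__)              # PyTorch version
print(torch.version.cuda)             # CUDA version PyTorch was built against
print(torch.cuda.device_count())      # number of visible GPUs
print(torch.cuda.get_device_name(0))  # name of the first GPU
###Output
_____no_output_____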
docs/examples/vanderpol/vanderpol.ipynb
###Markdown The Van der Pol Oscillator In dynamics, the Van der Pol oscillator {cite}`wikivanderpol` is a non-conservative oscillator with non-linear damping. It evolves in time according to the second-order differential equation:\begin{align} \frac{d^2x}{dt^2} - u (1 - x^2) \frac{dx}{dt} + x &= 0\end{align}where $x$ is the position coordinate (a function of the time $t$), and $u$ is a scalar parameter indicating the nonlinearity and the strength of the damping.To make this an optimal control problem, we want to find the smallest control that will dampen the oscillation (drive the state variables to zero). We can express this as an objective function $J$ to minimize:\begin{align} J &= \int x^2_0 + x^2_1 + u^2\end{align}In other words, we want to find the optimal (smallest) trajectory of the control $u$ such that the oscillation and the oscillation's rate of change are driven to zero. State Variables Three _state_ variables are used to define the configuration of the system at any given instant in time.- $x_1$: The primary output of the oscillator.- $x_0$: The rate of change of the primary output.- $J$: The objective function to be minimized.The objective function is included as a state variable so that Dymos will do the integration.The $x_1$ and $x_0$ state variables are also inputs to the system, along with the control $u$. System Dynamics The evolution of the state variables is given by the following ordinary differential equations (ODE):\begin{align} \frac{dx_0}{dt} &= (1 - x^2_1) x_0 - x_1 + u \\ \frac{dx_1}{dt} &= x_0 \\ \frac{dJ}{dt} &= x^2_0 + x^2_1 + u^2\end{align} Control Variables This system has a single control variable:- $u$: The control input.The control variable has a constraint: $-0.75 \leq u \leq 1.0$ The initial and final conditions The initial conditions are:\begin{align} x_0 &= 1 \\ x_1 &= 1 \\ u &= -0.75\end{align}The final conditions are:\begin{align} x_0 &= 0 \\ x_1 &= 0 \\ u &= 0\end{align} Defining the ODE as an OpenMDAO System In Dymos, the ODE is an OpenMDAO System (a Component, or a Group of components).The following _ExplicitComponent_ computes the state rates for the Van der Pol problem.More detail on the workings of an _ExplicitComponent_ can be found in the OpenMDAO documentation. In summary:- **initialize**: Called at setup, and used to define options for the component. **ALL** Dymos ODE components should have the property `num_nodes`, which defines the number of points at which the outputs are simultaneously computed.- **setup**: Used to add inputs and outputs to the component, and declare which outputs (and indices of outputs) are dependent on each of the inputs.- **compute**: Used to compute the outputs, given the inputs.- **compute_partials**: Used to compute the derivatives of the outputs with respect to each of the inputs analytically. This method may be omitted if finite difference or complex-step approximations are used, though analytic partials are recommended.```{Note} Things to note about the Van der Pol ODE system- Only the _vanderpol_ode_ class below is important for defining the basic problem. The other classes are used to demonstrate Message Passing Interface (MPI) parallel calculation of the system. They can be ignored.- $x_1$, $x_0$, and $u$ are inputs.- $\dot{x_1}$, $\dot{x_0}$, and $\dot{J}$ are outputs.- **declare_partials** is called for every output with respect to every input.- For efficiency, partial derivatives that are constant have values specified in the **setup** method rather than the **compute_partials** method.
So although 9 partials are declared, only 5 are computed in **compute_partials**.``` ###Code import numpy as np
import openmdao.api as om
import time
from openmdao.utils.array_utils import evenly_distrib_idxs


class VanderpolODE(om.ExplicitComponent):
    """An intentionally slow version of the Van der Pol ODE, for demonstrating distributed component calculations.

    MPI can run this component in multiple processes, distributing the calculation of derivatives.
    This code has a delay in it to simulate a longer computation. It should run faster with more processes.
    """

    def __init__(self, *args, **kwargs):
        self.progress_prints = False
        super().__init__(*args, **kwargs)

    def initialize(self):
        self.options.declare('num_nodes', types=int)
        self.options.declare('distrib', types=bool, default=False)
        self.options.declare('delay', types=(float,), default=0.0)

    def setup(self):
        nn = self.options['num_nodes']
        comm = self.comm
        rank = comm.rank

        sizes, offsets = evenly_distrib_idxs(comm.size, nn)  # (#cpus, #inputs) -> (size array, offset array)
        self.start_idx = offsets[rank]
        self.io_size = sizes[rank]  # number of inputs and outputs managed by this distributed process
        self.end_idx = self.start_idx + self.io_size

        # inputs: 2 states and a control
        self.add_input('x0', val=np.ones(nn), desc='derivative of Output', units='V/s')
        self.add_input('x1', val=np.ones(nn), desc='Output', units='V')
        self.add_input('u', val=np.ones(nn), desc='control', units=None)

        # outputs: derivative of states
        # the objective function will be treated as a state for computation, so its derivative is an output
        self.add_output('x0dot', val=np.ones(self.io_size), desc='second derivative of Output',
                        units='V/s**2', distributed=self.options['distrib'])
        self.add_output('x1dot', val=np.ones(self.io_size), desc='derivative of Output',
                        units='V/s', distributed=self.options['distrib'])
        self.add_output('Jdot', val=np.ones(self.io_size), desc='derivative of objective',
                        units='1.0/s', distributed=self.options['distrib'])

        # self.declare_coloring(method='cs')

        # partials
        r = np.arange(self.io_size, dtype=int)
        c = r + self.start_idx

        self.declare_partials(of='x0dot', wrt='x0', rows=r, cols=c)
        self.declare_partials(of='x0dot', wrt='x1', rows=r, cols=c)
        self.declare_partials(of='x0dot', wrt='u', rows=r, cols=c, val=1.0)

        self.declare_partials(of='x1dot', wrt='x0', rows=r, cols=c, val=1.0)

        self.declare_partials(of='Jdot', wrt='x0', rows=r, cols=c)
        self.declare_partials(of='Jdot', wrt='x1', rows=r, cols=c)
        self.declare_partials(of='Jdot', wrt='u', rows=r, cols=c)

    def compute(self, inputs, outputs):
        # introduce slowness proportional to size of computation
        time.sleep(self.options['delay'] * self.io_size)

        # The inputs contain the entire vector, but each rank will only operate on a portion of it.
x0 = inputs['x0'][self.start_idx:self.end_idx] x1 = inputs['x1'][self.start_idx:self.end_idx] u = inputs['u'][self.start_idx:self.end_idx] outputs['x0dot'] = (1.0 - x1**2) * x0 - x1 + u outputs['x1dot'] = x0 outputs['Jdot'] = x0**2 + x1**2 + u**2 def compute_partials(self, inputs, jacobian): time.sleep(self.options['delay'] * self.io_size) x0 = inputs['x0'][self.start_idx:self.end_idx] x1 = inputs['x1'][self.start_idx:self.end_idx] u = inputs['u'][self.start_idx:self.end_idx] jacobian['x0dot', 'x0'] = 1.0 - x1 * x1 jacobian['x0dot', 'x1'] = -2.0 * x1 * x0 - 1.0 jacobian['Jdot', 'x0'] = 2.0 * x0 jacobian['Jdot', 'x1'] = 2.0 * x1 jacobian['Jdot', 'u'] = 2.0 * u ###Output _____no_output_____ ###Markdown Defining the Dymos ProblemOnce the ODEs are defined, they are used to create a Dymos _Problem_ object that allows solution.```{Note} Things to note about the Van der Pol Dymos Problem definition- The **vanderpol** function creates and returns a Dymos _Problem_ instance that can be used for simulation or optimization.- The **vanderpol** function has optional arguments for specifying options for the type of transcription, number of segments, optimizer, etc. These can be ignored when first trying to understand the code.- The _Problem_ object has a _Trajectory_ object, and the trajectory has a single _Phase_. Most of the problem setup is performed by calling methods on the phase (**set_time_options**, **add_state**, **add_boundary_constraint**, **add_objective**).- The **add_state** and **add_control** calls include the _target_ parameter for $x_0$, $x_1$, and $u$. This is required so that the inputs are correctly calculated.- Initial (linear) guesses are supplied for the states and control.``` ###Code import openmdao.api as om import dymos as dm def vanderpol(transcription='gauss-lobatto', num_segments=40, transcription_order=3, compressed=True, optimizer='SLSQP', use_pyoptsparse=False, delay=0.0, distrib=True, solve_segments=False): """Dymos problem definition for optimal control of a Van der Pol oscillator""" # define the OpenMDAO problem p = om.Problem(model=om.Group()) if not use_pyoptsparse: p.driver = om.ScipyOptimizeDriver() else: p.driver = om.pyOptSparseDriver() p.driver.options['optimizer'] = optimizer if use_pyoptsparse: if optimizer == 'SNOPT': p.driver.opt_settings['iSumm'] = 6 # show detailed SNOPT output elif optimizer == 'IPOPT': p.driver.opt_settings['print_level'] = 4 p.driver.declare_coloring() # define a Trajectory object and add to model traj = dm.Trajectory() p.model.add_subsystem('traj', subsys=traj) # define a Transcription if transcription == 'gauss-lobatto': t = dm.GaussLobatto(num_segments=num_segments, order=transcription_order, compressed=compressed, solve_segments=solve_segments) elif transcription == 'radau-ps': t = dm.Radau(num_segments=num_segments, order=transcription_order, compressed=compressed, solve_segments=solve_segments) # define a Phase as specified above and add to Phase phase = dm.Phase(ode_class=VanderpolODE, transcription=t, ode_init_kwargs={'delay': delay, 'distrib': distrib}) traj.add_phase(name='phase0', phase=phase) t_final = 15 phase.set_time_options(fix_initial=True, fix_duration=True, duration_val=t_final, units='s') # set the State time options phase.add_state('x0', fix_initial=False, fix_final=False, rate_source='x0dot', units='V/s', targets='x0') # target required because x0 is an input phase.add_state('x1', fix_initial=False, fix_final=False, rate_source='x1dot', units='V', targets='x1') # target required because x1 is an input 
phase.add_state('J', fix_initial=False, fix_final=False, rate_source='Jdot', units=None) # define the control phase.add_control(name='u', units=None, lower=-0.75, upper=1.0, continuity=True, rate_continuity=True, targets='u') # target required because u is an input # add constraints phase.add_boundary_constraint('x0', loc='initial', equals=1.0) phase.add_boundary_constraint('x1', loc='initial', equals=1.0) phase.add_boundary_constraint('J', loc='initial', equals=0.0) phase.add_boundary_constraint('x0', loc='final', equals=0.0) phase.add_boundary_constraint('x1', loc='final', equals=0.0) # define objective to minimize phase.add_objective('J', loc='final') # setup the problem p.setup(check=True) p['traj.phase0.t_initial'] = 0.0 p['traj.phase0.t_duration'] = t_final # add a linearly interpolated initial guess for the state and control curves p['traj.phase0.states:x0'] = phase.interp('x0', [1, 0]) p['traj.phase0.states:x1'] = phase.interp('x1', [1, 0]) p['traj.phase0.states:J'] = phase.interp('J', [0, 1]) p['traj.phase0.controls:u'] = phase.interp('u', [-0.75, -0.75]) return p if __name__ == '__main__': # just set up the problem, test it elsewhere p = vanderpol(transcription='radau-ps', num_segments=30, transcription_order=3, compressed=True, optimizer='SLSQP', delay=0.005, distrib=True, use_pyoptsparse=True) dm.run_problem(p, run_driver=True, simulate=False) ###Output _____no_output_____ ###Markdown Simulating the Problem (without control)The following script creates an instance of the Dymos vanderpol problem and simulates it.Since the problem was only simulated and not solved, the solution lines in the plots show onlythe initial guesses for $x_0$, $x_1$, and $u$. The simulation lines shown in the plots are thesystem response with the control variable $u$ held constant.The first two plots shows the variables $x_0$ and $x_1$ vs time. The third plots shows $x_0$ vs. $x_1$(which will be mostly circular in the case of undamped oscillation). The final plot is the (fixed)control variable $u$ vs time. ###Code from dymos.examples.plotting import plot_results import matplotlib.pyplot as plt # Create the Dymos problem instance p = vanderpol(transcription='gauss-lobatto', num_segments=75) # Run the problem (simulate only) p.run_model() # check validity by using scipy.integrate.solve_ivp to integrate the solution exp_out = p.model.traj.simulate() # Display the results plot_results([('traj.phase0.timeseries.time', 'traj.phase0.timeseries.states:x1', 'time (s)', 'x1 (V)'), ('traj.phase0.timeseries.time', 'traj.phase0.timeseries.states:x0', 'time (s)', 'x0 (V/s)'), ('traj.phase0.timeseries.states:x0', 'traj.phase0.timeseries.states:x1', 'x0 vs x1', 'x0 vs x1'), ('traj.phase0.timeseries.time', 'traj.phase0.timeseries.controls:u', 'time (s)', 'control u'), ], title='Van Der Pol Simulation', p_sol=p, p_sim=exp_out) plt.show() ###Output _____no_output_____ ###Markdown Solving the Optimal Control ProblemThe next example shows optimization followed by simulation.With a successful optimization, the resulting plots show a good match between the simulated (with varying control)and optimized results. The state variables $x_0$ and $x_1$ as well as the control variable $u$ are all driven to zero. 
###Code import dymos as dm from dymos.examples.plotting import plot_results # Create the Dymos problem instance p = vanderpol(transcription='gauss-lobatto', num_segments=75, transcription_order=3, compressed=True, optimizer='SLSQP') # Find optimal control solution to stop oscillation dm.run_problem(p) # check validity by using scipy.integrate.solve_ivp to integrate the solution exp_out = p.model.traj.simulate() # Display the results plot_results([('traj.phase0.timeseries.time', 'traj.phase0.timeseries.states:x1', 'time (s)', 'x1 (V)'), ('traj.phase0.timeseries.time', 'traj.phase0.timeseries.states:x0', 'time (s)', 'x0 (V/s)'), ('traj.phase0.timeseries.states:x0', 'traj.phase0.timeseries.states:x1', 'x0 vs x1', 'x0 vs x1'), ('traj.phase0.timeseries.time', 'traj.phase0.timeseries.controls:u', 'time (s)', 'control u'), ], title='Van Der Pol Optimization', p_sol=p, p_sim=exp_out) plt.show() ###Output _____no_output_____ ###Markdown Solving the Optimal Control Problem with Grid RefinementRepeating the optimization with grid refinement enabled requires changing only two lines in the code. For the sakeof grid refinement demonstration, the initial number of segments is also reduced by a factor of 5.Optimization with grid refinement gets results similar to the example without grid refinement, but runs fasterand does not require supplying a good guess for the number segments. ###Code import dymos as dm from dymos.examples.plotting import plot_results # Create the Dymos problem instance p = vanderpol(transcription='gauss-lobatto', num_segments=15, transcription_order=3, compressed=True, optimizer='SLSQP') # Enable grid refinement and find optimal control solution to stop oscillation p.model.traj.phases.phase0.set_refine_options(refine=True) dm.run_problem(p, refine_iteration_limit=10) # check validity by using scipy.integrate.solve_ivp to integrate the solution exp_out = p.model.traj.simulate() # Display the results plot_results([('traj.phase0.timeseries.time', 'traj.phase0.timeseries.states:x1', 'time (s)', 'x1 (V)'), ('traj.phase0.timeseries.time', 'traj.phase0.timeseries.states:x0', 'time (s)', 'x0 (V/s)'), ('traj.phase0.timeseries.states:x0', 'traj.phase0.timeseries.states:x1', 'x0 vs x1', 'x0 vs x1'), ('traj.phase0.timeseries.time', 'traj.phase0.timeseries.controls:u', 'time (s)', 'control u'), ], title='Van Der Pol Optimization with Grid Refinement', p_sol=p, p_sim=exp_out) plt.show() ###Output _____no_output_____
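###Markdown An added sketch: the optimized histories can also be pulled directly from the problem with standard OpenMDAO `get_val` calls, using the same timeseries paths as the plots above. ###Code # Sketch: inspect the optimized control history without plotting.
t = p.get_val('traj.phase0.timeseries.time')
u = p.get_val('traj.phase0.timeseries.controls:u')
print(t.shape, u.shape)
print('max |u| =', abs(u).max())
###Output
_____no_output_____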
Metaspace_batch_submit_json_metadata.ipynb
###Markdown Log in to METASPACE ###Code import json
from pathlib import Path
from metaspace import SMInstance

sm = SMInstance(host='https://staging.metaspace2020.eu', api_key='API_KEY')
###Output
_____no_output_____
###Markdown Directory, Databases, Projects ###Code directory = 'PATH_TO_DIRECTORY'  # directory which contains the batch of imzML and ibd files
is_public = False  ## or True
databases = ['HMDB-v4']  # list of databases
project_ids = []  # list of project IDs, optionally; example ['a4b713a0-918f-11eb-b5f0-c32b469a3ef0']
###Output
_____no_output_____
###Markdown Metadata ###Code metadata_filepath = Path(directory) / 'metadata.json'

with open(metadata_filepath, 'r') as f:
    metadata = json.loads(f.read())
###Output
_____no_output_____
###Markdown Launch submission ###Code names = set(x.stem for x in Path(directory).iterdir() if x.is_file() and x.suffix.lower() in ('.ibd', '.imzml'))

# [:1] submits only the first dataset; drop the slice to submit the whole batch
for name in list(names)[:1]:
    sm.submit_dataset(
        directory / Path(f'{name}.imzML'),
        directory / Path(f'{name}.ibd'),
        name, json.dumps(metadata), is_public, databases,
        project_ids=project_ids
    )
###Output
Uploading 1 part of 2021-03-21_HTvsAA_Hela_AAS1_s25a32_75x75_DANneg.imzML file...
Uploading 2 part of 2021-03-21_HTvsAA_Hela_AAS1_s25a32_75x75_DANneg.imzML file...
Uploading 1 part of 2021-03-21_HTvsAA_Hela_AAS1_s25a32_75x75_DANneg.ibd file...
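###Markdown An added pre-flight sketch (plain pathlib, no extra METASPACE calls): verify that every .imzML file in the directory has a matching .ibd before submitting. ###Code # Sketch: flag dataset names that are missing one half of the imzML/ibd pair.
stems = {}
for f in Path(directory).iterdir():
    if f.is_file() and f.suffix.lower() in ('.imzml', '.ibd'):
        stems.setdefault(f.stem, set()).add(f.suffix.lower())

missing = [s for s, exts in stems.items() if len(exts) < 2]
print('incomplete pairs:', missing or 'none')
###Output
_____no_output_____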
Trabalho_Extra_de_IC.ipynb
###Markdown Copyright 2019 The TensorFlow Authors. Extra CI Assignment with TensorFlow - 356726 Universidade Federal do Ceará - Campus Sobral Lucas Gabriel Guilherme dos Santos - 356726 Computer Engineering Course: Computational Intelligence (IC) Prof. Jarbas Joaci Extra Assignment - Due 11/12/2019 Semester: 2019.2 This document uses [Keras](https://www.tensorflow.org/guide/keras/overview) to: 1. Build a neural network that classifies images. 2. Train this neural network. 3. Evaluate the model's accuracy. 1 Introduction This is a [Google Colaboratory](https://colab.research.google.com/notebooks/welcome.ipynb) notebook. It lets you run Python programs directly on the server, which is a great way to learn how to use TensorFlow. The imports are done in the code block below. The TensorFlow and Keras libraries are used to build the convolutional neural network models. The Matplotlib library is used to display numeric values, and the numpy library for auxiliary mathematical operations. The Google Drive file system is also imported; the image dataset, available at the url [IC datasets](https://www.dropbox.com/s/n1elpgp4xxagbrv/mnist.rar?dl=0), was uploaded there. To access files on Google Drive from Google Colab you need to generate an identification token; normally, after the first run, an access link to Drive is displayed, where the owner's permission is requested. The token generated for the MNIST folder was: 4/uAHrA5VaxjyK5wl-dwQ5BjfL3qBR5inzX0l2Ux18kvmlkdVU585ae5o ###Code from __future__ import absolute_import, division, print_function, unicode_literals

import numpy as np
import os
import cv2
from google.colab import drive
from google.colab import files
import matplotlib.pyplot as plt

# Lucas Gabriel - 356726
# Computer Engineering - UFC Sobral
# mounting the Google Drive file system
# The image_path variable is global, as is the image_data variable
drive.mount('/content/drive')
image_path = 'drive/My Drive/mnist'

def loadImages(path):
    '''Put files into lists and return them as one list with all images in the folder'''
    image_files = sorted([os.path.join(path, 'train', file)
                          for file in os.listdir(path + "/train")
                          if file.endswith('.png')])
    return image_files

# Next, check the best TensorFlow version for Google Colab
try:
  # % Versions 2.0 and 2.x+ only exist on Google Colab; they cannot be obtained for local execution.
  %tensorflow_version 2.x
except Exception:
  pass

import tensorflow as tf
###Output
_____no_output_____
###Markdown The functions defined below (display and display_one) are responsible for showing images obtained from the test set. The set lives in a folder called MNIST on Google Drive, inside a subfolder called test. The first function takes two images and two previously defined titles as parameters. The second function takes a single image as a parameter and returns its visualization with the predefined title.
###Code # Shows two images and their titles, side by side
def display(a, b, title1 = "mnist_100", title2 = "mnist_5"):
    plt.subplot(121), plt.imshow(a), plt.title(title1)
    plt.xticks([]), plt.yticks([])
    plt.subplot(122), plt.imshow(b), plt.title(title2)
    plt.xticks([]), plt.yticks([])
    plt.show()

# Shows a single image and its title
def display_one(a, title1 = "Loading an image from the DropBox set"):
    plt.imshow(a), plt.title(title1)
    plt.show()
###Output
_____no_output_____
###Markdown The function below preprocesses the images of the dataset provided on Dropbox, resizing them from 30 x 30 pixels to 28 x 28 pixels. The procedure uses a function from version 2 of the OpenCV library. Load and prepare the [MNIST dataset](http://yann.lecun.com/exdb/mnist/). Convert the samples from integers to floating-point numbers: ###Code # reads images from 0 up to the length of the list passed in data
def preprocess(data):
    lim = len(data)
    # the next line reads the images from the working directory defined in image_path, located on Google Drive
    img = [cv2.imread(i, cv2.IMREAD_UNCHANGED) for i in data[:lim]]
    #img = [cv2.imread(data, cv2.IMREAD_UNCHANGED)]
    try:
        print('Image at original dimensions:', img.shape)
    except AttributeError:
        print('Dimensions not processed')

    # From here on, the new dimensions are defined
    altura, largura = 28, 28
    dim = (altura, largura)
    res_image = []
    for i in range(len(img)):
        res = cv2.resize(img[i], dim, interpolation=cv2.INTER_LINEAR)
        res_image.append(res)

    # checking whether the resize succeeded
    try:
        print('Resizing image 1', res_image[0].shape)
    except AttributeError:
        print('Dimensions not found')

    original = res_image[0]
    display_one(original)
# end of the function implementation

mnist = tf.keras.datasets.mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

"""
train data -> mnist.test_data()
#test data -> dropbox.data()
#labels = y_test
"""
print("#----------------------------------")
print('Images from the MNIST test set')
display(x_test[1], x_test[2])
print("#----------------------------------")
print('Image from the DROPBOX test set')
imagens = loadImages(image_path)
preprocess(imagens)
###Output
_____no_output_____
###Markdown Next, a function is defined to handle the sample images loaded into the list returned by the loadImages function: Build the `tf.keras.Sequential` model by stacking layers. Choose an optimizer and loss function for training: ###Code model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(28, 28)),
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dropout(0.2),
  tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
###Output
_____no_output_____
###Markdown Train and evaluate the model: ###Code # using the 10k set
model.fit(x_test, y_test, epochs=5)
model.evaluate(x_test, y_test, verbose=2)

"""
Using the 60k set
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test, verbose=2)
"""
###Output
_____no_output_____
###Markdown The image classifier is now trained to ~98% accuracy on this dataset. To learn more, read the [TensorFlow tutorials](https://www.tensorflow.org/tutorials/).
###Code
# Compute the model's loss and accuracy
var_loss, var_accuracy = model.evaluate(x_test, y_test)
print(var_loss, var_accuracy)
###Output
_____no_output_____
###Markdown The model trained above is then saved so that it can be reused later in new implementations ###Code
#save a model
model.save('356726.model')
novo_modelo = tf.keras.models.load_model('356726.model')
predicoes = novo_modelo.predict([x_test])
print(predicoes[100])
###Output
_____no_output_____
###Markdown Using numpy to inspect the predictions ###Code
# This sample comes from mnist.test (the base with 10k digit images)
# Since the 10k set was used for training, another set has to be used
# for validation -- in this case, the Dropbox one
print(np.argmax(predicoes[1]))
plt.imshow(x_test[1])
plt.show()
images = loadImages(image_path)
print(images[1])
preprocess([images[1]])  # preprocess expects a list of paths, so wrap the single path in a list
###Output
_____no_output_____
###Markdown Some auxiliary functions were taken from the tutorial available at:https://colab.research.google.com/drive/1b8pVMMoR37a3b9ICo8TMqMLVD-WvbzTk#scrollTo=3E94MjxkR_hIOther relevant information was obtained from Stack Overflow:https://stackoverflow.com/questions/48257255/how-to-import-pre-downloaded-mnist-dataset-from-a-specific-directory-or-folder ###Code
###Output
_____no_output_____
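###Markdown To close the loop, here is a minimal sketch of how the resized Dropbox images could be pushed through the trained network. `predict_external` is a hypothetical helper, not part of the assignment: it assumes `preprocess` is refactored to return its `res_image` list, and the grayscale conversion is a guarded assumption in case `cv2.IMREAD_UNCHANGED` yields color images: ###Code
# Hypothetical helper (not part of the assignment): classify externally loaded,
# already-resized 28 x 28 images with the trained network.
def predict_external(image_list, trained_model):
    digits = []
    for img in image_list:
        if img.ndim == 3:  # color image -> grayscale (guarded assumption)
            img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        x = img.astype('float32') / 255.0   # same scaling as the MNIST training data
        x = x.reshape(1, 28, 28)            # batch of one sample
        digits.append(int(np.argmax(trained_model.predict(x))))
    return digits
###Output
_____no_output_____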
notebooks/demo_model_evaluation.ipynb
###Markdown Best Model evaluation ###Code
# The imports below are restated here so this section runs on its own; the data
# splits (X_train, Y_train, X_test, Y_test) and no_of_classes are assumed to be
# defined in earlier (not shown) cells of this notebook.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, confusion_matrix, precision_recall_curve, roc_curve
from sklearn.preprocessing import label_binarize

best_svm_clf = SVC(kernel = 'rbf', C = 14.1, gamma= 'scale', class_weight = 'balanced', probability=True)
best_svm_clf.fit(X_train, Y_train)
Y_predict = best_svm_clf.predict(X_test)
accuracy_score(Y_test, Y_predict)
###Output
_____no_output_____
###Markdown The number of correctly classified samples in the test set ###Code
accuracy_score(Y_test, Y_predict, normalize=False)
###Output
_____no_output_____
###Markdown Confusion Matrix ###Code
conf_mx = confusion_matrix(Y_test, Y_predict)
conf_mx
plt.matshow(conf_mx, cmap=plt.cm.gray)
plt.show()
###Output
_____no_output_____
###Markdown Classification Error plot ###Code
row_sums = conf_mx.sum(axis=1, keepdims=True)
norm_conf_mx = conf_mx / row_sums
np.fill_diagonal(norm_conf_mx, 0)
plt.matshow(norm_conf_mx, cmap=plt.cm.gray)
plt.show()
###Output
_____no_output_____
###Markdown Precision vs. Recall curve ###Code
Y_score = best_svm_clf.predict_proba(X_test)
Y_test_bin = label_binarize(Y_test, classes=[*range(no_of_classes)])

precision = dict()
recall = dict()
x = range(no_of_classes)
print(x)  # quick sanity check of the class count
for i in range(no_of_classes):
    precision[i], recall[i], _ = precision_recall_curve(Y_test_bin[:,i], Y_score[:, i])
    plt.plot(recall[i], precision[i], lw=2, label='class {}'.format(i))

plt.xlabel("recall")
plt.ylabel("precision")
plt.legend(loc="best")
plt.title("precision vs. recall curve")
plt.show()
###Output
_____no_output_____
###Markdown ROC curve ###Code
fpr = dict()
tpr = dict()
x = range(no_of_classes)
print(x)  # quick sanity check of the class count
for i in range(no_of_classes):
    fpr[i], tpr[i], _ = roc_curve(Y_test_bin[:, i], Y_score[:, i])
    plt.plot(fpr[i], tpr[i], lw=2, label='class {}'.format(i))

plt.xlabel("false positive rate")
plt.ylabel("true positive rate")
plt.legend(loc="best")
plt.title("ROC curve")
plt.show()
###Output
range(0, 8)
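###Markdown As a hedged extension of the evaluation above (not part of the original set of cells), the per-class curves can be condensed into scalar scores with scikit-learn's `roc_auc_score` and `average_precision_score`, reusing the `Y_test_bin` and `Y_score` arrays already computed: ###Code
from sklearn.metrics import roc_auc_score, average_precision_score

# One scalar summary per class, from the same binarized labels and scores
for i in range(no_of_classes):
    roc_auc = roc_auc_score(Y_test_bin[:, i], Y_score[:, i])
    avg_prec = average_precision_score(Y_test_bin[:, i], Y_score[:, i])
    print('class {}: ROC AUC = {:.3f}, average precision = {:.3f}'.format(i, roc_auc, avg_prec))
###Output
_____no_output_____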
mini-projects/aic-5_6_6-sql-at-scale-with-spark-mini-project/Mini_Project_SQL_with_Spark.ipynb
###Markdown SQL at Scale with Spark SQLWelcome to the SQL mini project. For this project, you will use the Domino Data Lab Platform and work through a series of exercises using Spark SQL. The dataset size may not be too big, but the intent here is to familiarize yourself with the Spark SQL interface, which scales easily to huge datasets without you having to worry about changing your SQL queries. The data you need is present in the mini-project folder in the form of three CSV files. You need to make sure that these datasets are uploaded and present in the same directory as this notebook file, since we will be importing these files into Spark and creating the following tables under the __`country_club`__ database using Spark SQL.1. The __`bookings`__ table,2. The __`facilities`__ table, and3. The __`members`__ table.You will be uploading these datasets shortly into Spark to understand how to create a database within minutes! Once the database and the tables are populated, you will be focusing on the mini-project questions.In the mini project, you'll be asked a series of questions. You can solve them using the Domino platform, but for the final deliverable, please download this notebook as an IPython notebook (__`File -> Export -> IPython Notebook`__) and upload it to your GitHub. Run the following if you failed to open a notebook in the PySpark WorkspaceThis will work if you are using Spark in the cloud on Domino; otherwise you may need to configure your own Spark instance if you are working offline ###Code
if 'sc' not in locals():
    from pyspark.context import SparkContext
    from pyspark.sql.context import SQLContext
    from pyspark.sql.session import SparkSession

    sc = SparkContext()
    sqlContext = SQLContext(sc)
    spark = SparkSession(sc)
###Output
_____no_output_____
###Markdown Checking Existence of Spark Environment VariablesMake sure your notebook is loaded using a PySpark Workspace. 
If you open up a regular Jupyter workspace the following variables might not exist ###Code spark sqlContext ###Output _____no_output_____ ###Markdown Create a utility function to run SQL commandsInstead of typing the same python functions repeatedly, we build a small function where you can just pass your query to get results.- Remember we are using Spark SQL in PySpark- We can't run multiple SQL statements in one go (no semi-colon ';' separated SQL statements)- We can run multi-line SQL queries (but still has to be a single statement) ###Code def run_sql(statement): try: result = sqlContext.sql(statement) except Exception as e: print(e.desc, '\n', e.stackTrace) return return result ###Output _____no_output_____ ###Markdown Creating the DatabaseWe will first create our database in which we will be creating our three tables of interest ###Code run_sql('drop database if exists country_club cascade') run_sql('create database country_club') dbs = run_sql('show databases') dbs.toPandas() ###Output _____no_output_____ ###Markdown Creating the TablesIn this section, we will be creating the three tables of interest and populate them with the data from the CSV files already available to you.To get started, first make sure you have already uploaded the three CSV files and they are present in the same directory as the notebook.Once you have done this, please remember to execute the following code to build the dataframes which will be saved as tables in our database ###Code # File location and type file_location_bookings = "./Bookings.csv" file_location_facilities = "./Facilities.csv" file_location_members = "./Members.csv" file_type = "csv" # CSV options infer_schema = "true" first_row_is_header = "true" delimiter = "," # The applied options are for CSV files. For other file types, these will be ignored. 
bookings_df = (spark.read.format(file_type) .option("inferSchema", infer_schema) .option("header", first_row_is_header) .option("sep", delimiter) .load(file_location_bookings)) facilities_df = (spark.read.format(file_type) .option("inferSchema", infer_schema) .option("header", first_row_is_header) .option("sep", delimiter) .load(file_location_facilities)) members_df = (spark.read.format(file_type) .option("inferSchema", infer_schema) .option("header", first_row_is_header) .option("sep", delimiter) .load(file_location_members)) ###Output _____no_output_____ ###Markdown Viewing the dataframe schemasWe can take a look at the schemas of our potential tables to be written to our database soon ###Code print('Bookings Schema') bookings_df.printSchema() print('Facilities Schema') facilities_df.printSchema() print('Members Schema') members_df.printSchema() type(bookings_df) ###Output _____no_output_____ ###Markdown Create permanent tablesWe will be creating three permanent tables here in our __`country_club`__ database as we discussed previously with the following code ###Code permanent_table_name_bookings = "country_club.Bookings" bookings_df.write.format("parquet").saveAsTable(permanent_table_name_bookings) permanent_table_name_facilities = "country_club.Facilities" facilities_df.write.format("parquet").saveAsTable(permanent_table_name_facilities) permanent_table_name_members = "country_club.Members" members_df.write.format("parquet").saveAsTable(permanent_table_name_members) ###Output _____no_output_____ ###Markdown Refresh tables and check them ###Code run_sql('use country_club') run_sql('REFRESH table bookings') run_sql('REFRESH table facilities') run_sql('REFRESH table members') tbls = run_sql('show tables') tbls.toPandas() ###Output _____no_output_____ ###Markdown Test a sample SQL query__Note:__ You can use multi-line SQL queries (but still a single statement) as follows ###Code result = run_sql(''' SELECT * FROM bookings LIMIT 3 ''') result.toPandas() ###Output _____no_output_____ ###Markdown Your Turn: Solve the following questions with Spark SQL- Make use of the `run_sql(...)` function as seen in the previous example- You can write multi-line SQL queries but it has to be a single statement (no use of semi-colons ';')- Make use of the `toPandas()` function as depicted in the previous example to display the query results Q1: Some of the facilities charge a fee to members, but some do not. Please list the names of the facilities that do. ###Code result = run_sql(''' SELECT facid, name, membercost FROM facilities WHERE membercost > 0.0 ''') result.toPandas() ###Output _____no_output_____ ###Markdown Q2: How many facilities do not charge a fee to members? ###Code result = run_sql(''' SELECT facid, name, membercost FROM facilities WHERE membercost = 0.0 ''') result.toPandas().shape[0] ###Output _____no_output_____ ###Markdown Q3: How can you produce a list of facilities that charge a fee to members, where the fee is less than 20% of the facility's monthly maintenance cost? Return the facid, facility name, member cost, and monthly maintenance of the facilities in question. ###Code result = run_sql(''' SELECT facid, name, membercost, monthlymaintenance FROM facilities WHERE membercost > 0.0 AND membercost < monthlymaintenance * 0.2 ''') result.toPandas() ###Output _____no_output_____ ###Markdown Q4: How can you retrieve the details of facilities with ID 1 and 5? Write the query without using the OR operator. 
###Code
# Select facilities with facid 1 and 5 directly in SQL; IN avoids the OR operator.
# (Selecting everything and picking pandas rows by position only works while the
# row order happens to line up with the facid values.)
result = run_sql('''
                 SELECT *
                 FROM facilities
                 WHERE facid IN (1, 5)
                 ''')
result.toPandas()
###Output
_____no_output_____
###Markdown Q5: How can you produce a list of facilities, with each labelled as 'cheap' or 'expensive', depending on if their monthly maintenance cost is more than $100? Return the name and monthly maintenance of the facilities in question. ###Code
result = run_sql('''
                 SELECT name, 
                        CASE WHEN monthlymaintenance < 100 THEN 'cheap' 
                             ELSE 'expensive' END AS monthlymaintenance
                 FROM facilities
                 ''')
result.toPandas()
###Output
_____no_output_____
###Markdown Q6: You'd like to get the first and last name of the last member(s) who signed up. Do not use the LIMIT clause for your solution. ###Code
# A subquery on MAX(joindate) avoids both LIMIT and pandas row-picking, and it
# correctly returns every member who shares the latest join date
result = run_sql('''
                 SELECT firstname, surname, joindate
                 FROM members
                 WHERE joindate = (SELECT MAX(joindate) FROM members)
                 ''')
result.toPandas()
###Output
_____no_output_____
###Markdown Q7: How can you produce a list of all members who have used a tennis court?- Include in your output the name of the court, and the name of the member formatted as a single column. - Ensure no duplicate data- Also order by the member name. ###Code
result = run_sql('''
                 SELECT f.name AS facility, 
                        CASE WHEN b.memid = 0 THEN 'Guest' 
                             ELSE CONCAT(m.firstname, ' ', m.surname) END AS member 
                 FROM bookings AS b 
                 INNER JOIN members AS m USING (memid) 
                 INNER JOIN facilities AS f USING (facid) 
                 WHERE f.name LIKE 'Tennis Court%' 
                 GROUP BY member, facility 
                 ORDER BY member, facility
                 ''')
result.toPandas()
###Output
_____no_output_____
###Markdown Q8: How can you produce a list of bookings on the day of 2012-09-14 which will cost the member (or guest) more than $30? - Remember that guests have different costs to members (the listed costs are per half-hour 'slot')- The guest user's ID is always 0. Include in your output the name of the facility, the name of the member formatted as a single column, and the cost.- Order by descending cost, and do not use any subqueries. ###Code
# The listed costs are per half-hour 'slot', so the cost of a booking is the
# applicable rate multiplied by the number of slots booked
result = run_sql('''
                 SELECT f.name AS facility, 
                        CASE WHEN b.memid = 0 THEN 'Guest' 
                             ELSE CONCAT(m.firstname, ' ', m.surname) END AS member, 
                        CASE WHEN b.memid = 0 THEN f.guestcost * b.slots 
                             ELSE f.membercost * b.slots END AS cost 
                 FROM bookings AS b 
                 INNER JOIN facilities AS f USING (facid) 
                 INNER JOIN members AS m USING (memid) 
                 WHERE date_format(b.starttime, 'yyyy-MM-dd') = '2012-09-14' 
                   AND (CASE WHEN b.memid = 0 THEN f.guestcost * b.slots 
                             ELSE f.membercost * b.slots END) > 30 
                 ORDER BY cost DESC
                 ''')
result.toPandas()
###Output
_____no_output_____
###Markdown Q9: This time, produce the same result as in Q8, but using a subquery. ###Code
# The per-booking cost is computed once in the subquery and filtered outside it
result = run_sql('''
                 SELECT facility, member, cost
                 FROM (SELECT f.name AS facility, 
                              CASE WHEN b.memid = 0 THEN 'Guest' 
                                   ELSE CONCAT(m.firstname, ' ', m.surname) END AS member, 
                              CASE WHEN b.memid = 0 THEN f.guestcost * b.slots 
                                   ELSE f.membercost * b.slots END AS cost 
                       FROM bookings AS b 
                       INNER JOIN facilities AS f USING (facid) 
                       INNER JOIN members AS m USING (memid) 
                       WHERE date_format(b.starttime, 'yyyy-MM-dd') = '2012-09-14') AS booking_costs 
                 WHERE cost > 30 
                 ORDER BY cost DESC
                 ''')
result.toPandas()
###Output
_____no_output_____
###Markdown Q10: Produce a list of facilities with a total revenue less than 1000.- The output should have facility name and total revenue, sorted by revenue. - Remember that there's a different cost for guests and members! 
###Code
# With revenue computed per slot and per member type (guest vs. member),
# the < 1000 cut from the question works as intended
result = run_sql('''
                 SELECT f.name AS facility, r.revenue AS revenue
                 FROM facilities AS f
                 INNER JOIN (SELECT facid, 
                                    SUM(CASE WHEN memid = 0 THEN guestcost * slots 
                                             ELSE membercost * slots END) AS revenue 
                             FROM bookings 
                             INNER JOIN facilities USING (facid) 
                             GROUP BY facid) AS r USING (facid)
                 WHERE r.revenue < 1000
                 ORDER BY r.revenue
                 ''')
result.toPandas()
###Output
_____no_output_____
###Markdown Trying to answer Q10 by calculating income (net profit) instead of revenue- Calculate revenue collected between the earliest booking 2012-07-03 08:00:00 and the latest one 2012-09-30 19:30:00- Account for monthly maintenance as the expense over the roughly 3 months of bookings ###Code
# Same corrected revenue subquery as above, minus ~3 months of maintenance
result = run_sql('''
                 SELECT f.name AS facility, 
                        (r.revenue - f.monthlymaintenance * 
                            (SELECT round(months_between(
                                (SELECT MAX(starttime) FROM bookings), 
                                (SELECT MIN(starttime) FROM bookings))
                            ))
                        ) AS income
                 FROM facilities AS f
                 INNER JOIN (SELECT facid, 
                                    SUM(CASE WHEN memid = 0 THEN guestcost * slots 
                                             ELSE membercost * slots END) AS revenue 
                             FROM bookings 
                             INNER JOIN facilities USING (facid) 
                             GROUP BY facid) AS r USING (facid)
                 ORDER BY income
                 ''')
df = result.toPandas()
df[df.income < 1000]
###Output
_____no_output_____
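###Markdown As a cross-check of the corrected revenue numbers, a sketch that redoes the same aggregation in pandas from the dataframes loaded earlier; it assumes the Bookings data has a `slots` column (as in the classic country-club dataset), and the data is small enough that `toPandas()` is safe: ###Code
import numpy as np

# Recompute each facility's revenue from the raw dataframes
bookings_pd = bookings_df.toPandas()
facilities_pd = facilities_df.toPandas()

merged = bookings_pd.merge(facilities_pd, on='facid')
merged['cost'] = np.where(merged['memid'] == 0,
                          merged['guestcost'] * merged['slots'],
                          merged['membercost'] * merged['slots'])
revenue = merged.groupby('name')['cost'].sum().sort_values()
print(revenue[revenue < 1000])
###Output
_____no_output_____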
12 Pandas Teil 4/Homework 3.ipynb
###Markdown Homework 3 **Content:** Applying regular expressions in pandas**Required skills:** Regex in Python, regex in pandas**Learning goals:**- Get to know a practical example where regex can be useful The example A list of US senators that was delivered to us in a rather awkward form.The goal is to generate a clean table out of a jumble of characters.The list is stored at `dataprojects/Senatoren/Senator-List.xlsx` Preparation Import the required libraries: pandas, re Load the file Tip: `header=None` Check: what is in it? Give the column a new name: "Eintrag" Show the first two entries of the 'Eintrag' series Which pieces of information are contained in the text strings? (Hint: there should be eight) ###Code
# Answer, in words:
# Name
# ...
#
###Output
_____no_output_____
###Markdown Developing regex expressions Here is a regex expression that fishes the senators' names out of the "Eintrag" column: ###Code
# Name:
df['Eintrag'].str.extract(r"^(.+),").head(5)
###Output
_____no_output_____
###Markdown Develop further regex expressions for all the other pieces of information contained in the column! Creating new columns Save each piece of information for which you developed a regex into a new column. Check: how does it look? Bonus: create all the columns at once Reload the file so that you again have only a single column named "Eintrag" With several capture groups we can also extract several pieces of information at once: ###Code
# Last name, first name
df['Eintrag'].str.extract(r"^(.+), (.+) - \(").head(5)
###Output
_____no_output_____
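###Markdown For readers who want to check their own attempts, here is a sketch of two more patterns. It assumes the entries follow the shape implied by the bonus regex above, e.g. `Alexander, Lamar - (R - TN) ...`, with the party letter and the two-letter state code inside the parentheses; adjust the patterns if your file differs: ###Code
# Party: the token right after the opening parenthesis, e.g. "R" or "D" (assumed format)
df['Party'] = df['Eintrag'].str.extract(r"\((\w+) - ", expand=False)

# State: the two-letter code right before the closing parenthesis (assumed format)
df['State'] = df['Eintrag'].str.extract(r"- ([A-Z]{2})\)", expand=False)

df.head(5)
###Output
_____no_output_____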
NLP/Document parsing.ipynb
###Markdown Using another library (slate3k) to extract the text instead:

    import slate3k as slate
    with open(file_path, 'rb') as f:
        extracted_text = slate.PDF(f)
    print(extracted_text)

###Code
# PyPDF2 provides the PdfFileReader/PdfFileWriter classes used below;
# `file` is assumed to be an open binary handle from an earlier cell,
# e.g. file = open(file_path, 'rb')
import PyPDF2 as pdf

pdf_reader = pdf.PdfFileReader(file)
help(pdf_reader)
pdf_reader.getNumPages()
page1 = pdf_reader.getPage(0)
page1.extractText()
page2 = pdf_reader.getPage(1)
page2.extractText()
# Modify pages to get the first and second pages only
pdf_writer = pdf.PdfFileWriter()
pdf_writer.addPage(page1)
pdf_writer.addPage(page2)
output = open('pages.pdf', 'wb')
pdf_writer.write(output)
output.close()  # close() must be called, not merely referenced
###Output
_____no_output_____
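###Markdown A small follow-up sketch: instead of pulling pages one at a time, the same `getNumPages`/`getPage`/`extractText` calls can walk the whole document: ###Code
# Concatenate the text of every page into a single string
all_text = "\n".join(
    pdf_reader.getPage(i).extractText() for i in range(pdf_reader.getNumPages())
)
print(all_text[:500])  # preview the first 500 characters
###Output
_____no_output_____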
lightautoml.ipynb
###Markdown Step 0.1. Import necessary libraries ###Code
# Standard python libraries
import logging
import os
import time
import requests
logging.basicConfig(format='[%(asctime)s] (%(levelname)s): %(message)s', level=logging.INFO)

# Installed libraries
import numpy as np
import pandas as pd
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
import torch
from sklearn.metrics import mean_absolute_percentage_error, r2_score, mean_squared_error
import typing

# Imports from our package
from lightautoml.automl.presets.tabular_presets import TabularAutoML, TabularUtilizedAutoML
from lightautoml.tasks import Task
from lightautoml.dataset.roles import DatetimeRole

THRESHOLD = 0.15
NEGATIVE_WEIGHT = 1.1
###Output
_____no_output_____
###Markdown Step 0.3. Data load ###Code
%%time

train_data = pd.read_csv('./data/train.csv')
train_data.head()
test_data = pd.read_csv('./data/test.csv')
test_data.head()
submission = pd.read_csv('./data/test_submission.csv')
submission.head()
print(train_data.shape)
print(test_data.shape)
###Output
(279792, 77)
(2974, 76)
###Markdown Step 0.2. Parameters ###Code
N_THREADS = 16 # threads cnt for lgbm and linear models
N_FOLDS = 7 # folds cnt for AutoML, best 7
RANDOM_STATE = 42 # fixed random state for various reasons
TEST_SIZE = 0.1 # Test size for metric check
TIMEOUT = 3*3600 # Time in seconds for automl run
TARGET_NAME = 'per_square_meter_price' # Target column name
###Output
_____no_output_____
###Markdown Step 0.4. Some user feature preparation ###Code
def deviation_metric_one_sample(y_true: typing.Union[float, int], y_pred: typing.Union[float, int]) -> float:
    """
    Implementation of the custom metric for the hackathon.

    :param y_true: float, the actual price
    :param y_pred: float, the predicted price
    :return: float, the metric value
    """
    deviation = (y_pred - y_true) / np.maximum(1e-8, y_true)
    if np.abs(deviation) <= THRESHOLD:
        return 0
    elif deviation <= - 4 * THRESHOLD:
        return 9 * NEGATIVE_WEIGHT
    elif deviation < -THRESHOLD:
        return NEGATIVE_WEIGHT * ((deviation / THRESHOLD) + 1) ** 2
    elif deviation < 4 * THRESHOLD:
        return ((deviation / THRESHOLD) - 1) ** 2
    else:
        return 9

def deviation_metric(y_true: np.array, y_pred: np.array) -> float:
    return np.array([deviation_metric_one_sample(y_true[n], y_pred[n]) for n in range(len(y_true))]).mean()

def median_absolute_percentage_error(y_true: np.array, y_pred: np.array) -> float:
    return np.median(np.abs(y_pred-y_true)/y_true)
###Output
_____no_output_____
###Markdown ========= AutoML preset usage ========= Step 1. Create Task ###Code
%%time

task = Task('reg', loss='mse', metric=deviation_metric, greater_is_better=False)
###Output
CPU times: user 311 µs, sys: 33 µs, total: 344 µs
Wall time: 351 µs
###Markdown Step 2. 
Setup columns roles The roles setup here sets the target column; a base date role (used to calculate date differences) is available but left commented out: ###Code
NUM_FEATURES = ['lat', 'lng', 'osm_amenity_points_in_0.001', 'osm_amenity_points_in_0.005',
                'osm_amenity_points_in_0.0075', 'osm_amenity_points_in_0.01', 'osm_building_points_in_0.001',
                'osm_building_points_in_0.005', 'osm_building_points_in_0.0075', 'osm_building_points_in_0.01',
                'osm_catering_points_in_0.001', 'osm_catering_points_in_0.005', 'osm_catering_points_in_0.0075',
                'osm_catering_points_in_0.01', 'osm_city_closest_dist', 'osm_city_nearest_population',
                'osm_crossing_closest_dist', 'osm_crossing_points_in_0.001', 'osm_crossing_points_in_0.005',
                'osm_crossing_points_in_0.0075', 'osm_crossing_points_in_0.01', 'osm_culture_points_in_0.001',
                'osm_culture_points_in_0.005', 'osm_culture_points_in_0.0075', 'osm_culture_points_in_0.01',
                'osm_finance_points_in_0.001', 'osm_finance_points_in_0.005', 'osm_finance_points_in_0.0075',
                'osm_finance_points_in_0.01', 'osm_healthcare_points_in_0.005', 'osm_healthcare_points_in_0.0075',
                'osm_healthcare_points_in_0.01', 'osm_historic_points_in_0.005', 'osm_historic_points_in_0.0075',
                'osm_historic_points_in_0.01', 'osm_hotels_points_in_0.005', 'osm_hotels_points_in_0.0075',
                'osm_hotels_points_in_0.01', 'osm_leisure_points_in_0.005', 'osm_leisure_points_in_0.0075',
                'osm_leisure_points_in_0.01', 'osm_offices_points_in_0.001', 'osm_offices_points_in_0.005',
                'osm_offices_points_in_0.0075', 'osm_offices_points_in_0.01', 'osm_shops_points_in_0.001',
                'osm_shops_points_in_0.005', 'osm_shops_points_in_0.0075', 'osm_shops_points_in_0.01',
                'osm_subway_closest_dist', 'osm_train_stop_closest_dist', 'osm_train_stop_points_in_0.005',
                'osm_train_stop_points_in_0.0075', 'osm_train_stop_points_in_0.01', 'osm_transport_stop_closest_dist',
                'osm_transport_stop_points_in_0.005', 'osm_transport_stop_points_in_0.0075',
                'osm_transport_stop_points_in_0.01', 'reform_count_of_houses_1000', 'reform_count_of_houses_500',
                'reform_house_population_1000', 'reform_house_population_500', 'reform_mean_floor_count_1000',
                'reform_mean_floor_count_500', 'reform_mean_year_building_1000', 'reform_mean_year_building_500',
                'total_square']

CATEGORICAL_STE_FEATURES = ['region', 'city', 'realty_type']

NUM_FEATURES.append(TARGET_NAME)
train_new = train_data[NUM_FEATURES]
%%time

roles = {'target': TARGET_NAME,
         #'drop': ['id', 'floor'],
         #'numeric': NUM_FEATURES,
         #'category': CATEGORICAL_STE_FEATURES,
         #DatetimeRole(base_date=False, base_feats=True, seasonality=('y', 'm', 'd')): 'date'
        }
###Output
CPU times: user 0 ns, sys: 6 µs, total: 6 µs
Wall time: 11.7 µs
###Markdown Step 3. 
Create AutoML from preset To create the AutoML model here we use the `TabularAutoML` preset, which looks like:![TabularAutoML preset pipeline](https://github.com/sberbank-ai-lab/LightAutoML/raw/master/imgs/tutorial_2_pipeline.png)All params we set above can be sent inside the preset to change its configuration: ###Code
%%time

automl = TabularAutoML(task = task,
                       timeout = TIMEOUT,
                       cpu_limit = N_THREADS,
                       reader_params = {'n_jobs': N_THREADS, 'cv': N_FOLDS, 'random_state': RANDOM_STATE},
                       general_params = {'use_algos': [['lgb_tuned', 'lgb','cd']]},
                       #tuning_params = {'fit_on_holdout':True, 'max_tuning_iter': 201, 'max_tuning_time': 500},
                       lgb_params = {'default_params': {'num_threads': N_THREADS}},
                       cb_params = {'default_params':{'task_type': "GPU"}}
                      )
oof_pred = automl.fit_predict(train_new, roles = roles)
logging.info('oof_pred:\n{}\nShape = {}'.format(oof_pred, oof_pred.shape))
%%time

# Fast feature importances calculation
fast_fi = automl.get_feature_scores('fast')
fast_fi.set_index('Feature')['Importance'].plot.bar(figsize = (20, 12), grid = True)
###Output
CPU times: user 110 ms, sys: 3.96 ms, total: 114 ms
Wall time: 117 ms
###Markdown Step 4. Predict on test data and check scores ###Code
%%time

test_pred = automl.predict(test_data)
logging.info('Prediction for test data:\n{}\nShape = {}'
              .format(test_pred, test_pred.shape))

logging.info('Check scores...')
logging.info('OOF score: {}'.format(mean_absolute_error(train_data[TARGET_NAME].values, oof_pred.data[:, 0])))
###Output
[2021-09-25 11:43:59,500] (INFO): Prediction for test data:
array([[38519.965],
       [54549.316],
       [51106.117],
       ...,
       [51706.305],
       [58667.97 ],
       [47155.79 ]], dtype=float32)
Shape = (2974, 1)
[2021-09-25 11:43:59,501] (INFO): Check scores...
[2021-09-25 11:43:59,502] (INFO): OOF score: 29267.990025748266
###Markdown Step 5. Generate submission ###Code
submission[TARGET_NAME] = test_pred.data[:, 0]
submission.head()
submission.to_csv('submission_test.csv', index = False)
###Output
_____no_output_____
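###Markdown Since the competition is scored with the custom deviation metric rather than MAE, a natural sanity check (not part of the original run) is to score the out-of-fold predictions with `deviation_metric` from Step 0.4 as well: ###Code
# Score the out-of-fold predictions with the competition metric itself,
# in addition to the MAE logged above
oof_dev = deviation_metric(train_data[TARGET_NAME].values, oof_pred.data[:, 0])
print('OOF deviation metric: {:.5f}'.format(oof_dev))
###Output
_____no_output_____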
HA02_LogisticRegression_NeuralNetwork.ipynb
###Markdown Logistic Regression with a Neural Network mindsetWelcome! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a Neural Network mindset, and so will also hone your intuitions about deep learning.**Instructions:**- Do not use loops (for/while) in your code, unless the instructions explicitly ask you to do so.**You will learn to:**- Build the general architecture of a learning algorithm, including: - Initializing parameters - Calculating the cost function and its gradient - Using an optimization algorithm (gradient descent) - Gather all three functions above into a main model function, in the right order. 1 - Packages First, let's run the cell below to import all the packages that you will need during this assignment. - [numpy](www.numpy.org) is the fundamental package for scientific computing with Python.- [h5py](http://www.h5py.org) is a common package to interact with a dataset that is stored on an H5 file.- [matplotlib](http://matplotlib.org) is a famous library to plot graphs in Python.- [PIL](http://www.pythonware.com/products/pil/) and [scipy](https://www.scipy.org/) are used here to test your model with your own picture at the end. ###Code import numpy as np import matplotlib.pyplot as plt import h5py import scipy from PIL import Image from scipy import ndimage from lr_utils import load_dataset %matplotlib inline ###Output _____no_output_____ ###Markdown 2 - Overview of the Problem set **Problem Statement**: You are given a dataset ("data.h5") containing: - a training set of m_train images labeled as cat (y=1) or non-cat (y=0) - a test set of m_test images labeled as cat or non-cat - each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB). Thus, each image is square (height = num_px) and (width = num_px).You will build a simple image-recognition algorithm that can correctly classify pictures as cat or non-cat.Let's get more familiar with the dataset. Load the data by running the following code. ###Code # Loading the data (cat/non-cat) train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset() ###Output _____no_output_____ ###Markdown We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).Each line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can visualize an example by running the following code. Feel free also to change the `index` value and re-run to see other images. ###Code # Example of a picture index = 25 plt.imshow(train_set_x_orig[index]) print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.") print(train_set_x_orig.shape) ###Output y = [1], it's a 'cat' picture. (209, 64, 64, 3) ###Markdown Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs. **1. 
Exercise:** Find and print the values for:- m_train (number of training examples)- m_test (number of test examples)- num_px (= height = width of a training image) ###Code
### START CODE HERE ###
m_train = train_set_x_orig.shape[ 0 ]
m_test = test_set_x_orig.shape[ 0 ]
num_px = train_set_x_orig[ 0 ].shape[ 0 ]
### END CODE HERE ###

print(m_train)
print(m_test)
print(num_px)
###Output
209
50
64
###Markdown **Expected Output for m_train, m_test and num_px**: **m_train** 209 **m_test** 50 **num_px** 64 For convenience, you should now reshape images of shape (num_px, num_px, 3) into a numpy-array of shape (num_px $*$ num_px $*$ 3, 1). After this, our training (and test) dataset is a numpy-array where each column represents a flattened image. There should be m_train (respectively m_test) columns.**2. Exercise:** Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num\_px $*$ num\_px $*$ 3, 1).A trick when you want to flatten a matrix X of shape (a,b,c,d) to a matrix X_flatten of shape (b$*$c$*$d, a) is to use: ```python
X_flatten = X.reshape(X.shape[0], -1).T      # X.T is the transpose of X
``` ###Code
# Reshape the training and test examples

### START CODE HERE ### (≈ 2 lines of code)
train_set_x_flatten = train_set_x_orig.reshape( m_train, -1 ).T
test_set_x_flatten = test_set_x_orig.reshape( m_test, -1 ).T
### END CODE HERE ###

print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0]))
###Output
train_set_x_flatten shape: (12288, 209)
train_set_y shape: (1, 209)
test_set_x_flatten shape: (12288, 50)
test_set_y shape: (1, 50)
sanity check after reshaping: [17 31 56 22 33]
###Markdown **Expected Output**: **train_set_x_flatten shape** (12288, 209) **train_set_y shape** (1, 209) **test_set_x_flatten shape** (12288, 50) **test_set_y shape** (1, 50) **sanity check after reshaping** [17 31 56 22 33] To represent color images, the red, green and blue channels (RGB) must be specified for each pixel, and so the pixel value is actually a vector of three numbers ranging from 0 to 255.One common preprocessing step in machine learning is to center and standardize your dataset, meaning that you subtract the mean of the whole numpy array from each example, and then divide each example by the standard deviation of the whole numpy array. But for picture datasets, it is simpler and more convenient and works almost as well to just divide every row of the dataset by 255 (the maximum value of a pixel channel). Let's standardize our dataset. ###Code
train_set_x = train_set_x_flatten/255.
test_set_x = test_set_x_flatten/255.
###Output
_____no_output_____
###Markdown **What you need to remember:**Common steps for pre-processing a new dataset are:- Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...)- Reshape the datasets such that each example is now a vector of size (num_px \* num_px \* 3, 1)- "Standardize" the data 3 - General Architecture of the learning algorithm It's time to design a simple algorithm to distinguish cat images from non-cat images.You will build a Logistic Regression, using a Neural Network mindset. 
The following Figure explains why **Logistic Regression is actually a very simple Neural Network!****Mathematical expression of the algorithm**:For one example $x^{(i)}$:$$z^{(i)} = w^T x^{(i)} + b \tag{1}$$$$\hat{y}^{(i)} = a^{(i)} = sigmoid(z^{(i)})\tag{2}$$ $$ \mathcal{L}(a^{(i)}, y^{(i)}) = - y^{(i)} \log(a^{(i)}) - (1-y^{(i)} ) \log(1-a^{(i)})\tag{3}$$The cost is then computed by summing over all training examples:$$ J = \frac{1}{m} \sum_{i=1}^m \mathcal{L}(a^{(i)}, y^{(i)})\tag{6}$$**Key steps**:In this exercise, you will carry out the following steps: - Initialize the parameters of the model - Learn the parameters for the model by minimizing the cost - Use the learned parameters to make predictions (on the test set) - Analyse the results and conclude 4 - Building the parts of our algorithm The main steps for building a Neural Network are:1. Define the model structure (such as number of input features) 2. Initialize the model's parameters3. Loop: - Calculate current loss (forward propagation) - Calculate current gradient (backward propagation) - Update parameters (gradient descent)You often build 1-3 separately and integrate them into one function we call `model()`. 4.1 - Helper functions**3. Exercise**: Implement `sigmoid()`. As you've seen in the figure above, you need to compute $sigmoid( w^T x + b) = \frac{1}{1 + e^{-(w^T x + b)}}$ to make predictions. Use np.exp(). ###Code # GRADED FUNCTION: sigmoid def sigmoid(z): """ Compute the sigmoid of z Arguments: z -- A scalar or numpy array of any size. Return: s -- sigmoid(z) """ ### START CODE HERE ### (≈ 1 line of code) s = 1 / ( 1 + np.exp( -z ) ) ### END CODE HERE ### return s print ("sigmoid([0, 2]) = " + str(sigmoid(np.array([0,2])))) ###Output sigmoid([0, 2]) = [0.5 0.88079708] ###Markdown **Expected Output**: **sigmoid([0, 2])** [ 0.5 0.88079708] 4.2 - Initializing parameters**4. Exercise:** Implement parameter initialization in the cell below. You have to initialize w as a vector of zeros. If you don't know what numpy function to use, look up np.zeros() in the Numpy library's documentation. ###Code # GRADED FUNCTION: initialize_with_zeros def initialize_with_zeros(dim): """ This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0. Argument: dim -- size of the w vector we want (or number of parameters in this case) Returns: w -- initialized vector of shape (dim, 1) b -- initialized scalar (corresponds to the bias) """ ### START CODE HERE ### (≈ 1 line of code) w = np.zeros( ( dim, 1 ) ) b = 0 ### END CODE HERE ### assert(w.shape == (dim, 1)) assert(isinstance(b, float) or isinstance(b, int)) return w, b dim = 2 w, b = initialize_with_zeros(dim) print ("w = " + str(w)) print ("b = " + str(b)) ###Output w = [[0.] [0.]] b = 0 ###Markdown **Expected Output**: ** w ** [[ 0.] [ 0.]] ** b ** 0 For image inputs, w will be of shape (num_px $\times$ num_px $\times$ 3, 1). 4.3 - Forward and Backward propagationNow that your parameters are initialized, you can do the "forward" and "backward" propagation steps for learning the parameters.**5. Exercise:** Implement a function `propagate()` that computes the cost function and its gradient. 
###Code # GRADED FUNCTION: propagate def propagate(w, b, X, Y): """ Implement the cost function and its gradient for the propagation explained above Arguments: w -- weights, a numpy array of size (num_px * num_px * 3, 1) b -- bias, a scalar X -- data of size (num_px * num_px * 3, number of examples) Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples) Return: cost -- negative log-likelihood cost for logistic regression dw -- gradient of the loss with respect to w, thus same shape as w db -- gradient of the loss with respect to b, thus same shape as b Tips: - Write your code step by step for the propagation. np.log(), np.dot() """ m = X.shape[1] # FORWARD PROPAGATION (FROM X TO COST) ### START CODE HERE ### (≈ 2 lines of code) A = sigmoid( np.dot( w.T, X ) + b ) # compute activation cost = ( -1 / m ) * np.sum( Y * np.log( A ) + ( 1 - Y ) * np.log( 1 - A ) ) # compute cost ### END CODE HERE ### # BACKWARD PROPAGATION (TO FIND GRAD) ### START CODE HERE ### (≈ 2 lines of code) dw = ( 1 / m ) * np.dot( X, ( A - Y ).T ) db = ( 1 / m ) * np.sum( A - Y ) ### END CODE HERE ### assert(dw.shape == w.shape) assert(db.dtype == float) cost = np.squeeze(cost) assert(cost.shape == ()) grads = {"dw": dw, "db": db} return grads, cost w, b, X, Y = np.array([[1.],[2.]]), 2., np.array([[1.,2.,-1.],[3.,4.,-3.2]]), np.array([[1,0,1]]) grads, cost = propagate(w, b, X, Y) print ("dw = " + str(grads["dw"])) print ("db = " + str(grads["db"])) print ("cost = " + str(cost)) ###Output dw = [[0.99845601] [2.39507239]] db = 0.001455578136784208 cost = 5.801545319394553 ###Markdown **Expected Output**: ** dw ** [[ 0.99845601] [ 2.39507239]] ** db ** 0.00145557813678 ** cost ** 5.801545319394553 4.4 - Optimization- You have initialized your parameters.- You are also able to compute a cost function and its gradient.- Now, you want to update the parameters using gradient descent.**6. Exercise:** Write down the optimization function. The goal is to learn $w$ and $b$ by minimizing the cost function $J$. For a parameter $\theta$, the update rule is $ \theta = \theta - \alpha \text{ } d\theta$, where $\alpha$ is the learning rate. ###Code # GRADED FUNCTION: optimize def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False): """ This function optimizes w and b by running a gradient descent algorithm Arguments: w -- weights, a numpy array of size (num_px * num_px * 3, 1) b -- bias, a scalar X -- data of shape (num_px * num_px * 3, number of examples) Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples) num_iterations -- number of iterations of the optimization loop learning_rate -- learning rate of the gradient descent update rule print_cost -- True to print the loss every 100 steps Returns: params -- dictionary containing the weights w and bias b grads -- dictionary containing the gradients of the weights and bias with respect to the cost function costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve. 
""" costs = [] for i in range(num_iterations): # Cost and gradient calculation ### START CODE HERE ### grads, cost = propagate( w, b, X, Y ) ### END CODE HERE ### # update rule for w and b ### START CODE HERE ### dw = grads["dw"] db = grads["db"] w = w - learning_rate * dw b = b - learning_rate * db ### END CODE HERE ### # Record the costs if i % 100 == 0: costs.append(cost) # Print the cost every 100 training iterations if print_cost and i % 100 == 0: print ("Cost after iteration %i: %f" %(i, cost)) params = {"w": w, "b": b} grads = {"dw": dw, "db": db} return params, grads, costs params, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False) print ("w = " + str(params["w"])) print ("b = " + str(params["b"])) print ("dw = " + str(grads["dw"])) print ("db = " + str(grads["db"])) ###Output w = [[0.19033591] [0.12259159]] b = 1.9253598300845747 dw = [[0.67752042] [1.41625495]] db = 0.21919450454067652 ###Markdown **Expected Output**: **w** [[ 0.19033591] [ 0.12259159]] **b** 1.92535983008 **dw** [[ 0.67752042] [ 1.41625495]] **db** 0.219194504541 **7. Exercise:** The previous function will output the learned w and b. We are able to use w and b to predict the labels for a dataset X. Implement the `predict()` function. There are two steps to computing predictions:1. Calculate $\hat{Y} = A = \sigma(w^T X + b)$2. Convert the entries of a into 0 (if activation 0.5), stores the predictions in a vector `Y_prediction`. If you wish, you can use an `if`/`else` statement in a `for` loop (though there is also a way to vectorize this). ###Code # GRADED FUNCTION: predict def predict(w, b, X): ''' Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b) Arguments: w -- weights, a numpy array of size (num_px * num_px * 3, 1) b -- bias, a scalar X -- data of size (num_px * num_px * 3, number of examples) Returns: Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X ''' m = X.shape[1] Y_prediction = np.zeros((1,m)) w = w.reshape(X.shape[0], 1) # Compute vector "A" predicting the probabilities of a cat being present in the picture ### START CODE HERE ### A = sigmoid( np.dot( w.T, X ) + b ) ### END CODE HERE ### # Convert probabilities to actual predictions ### START CODE HERE ### # Convert probabilities A[0,i] to actual predictions p[0,i] Y_prediction = np.where( A > 0.5, 1, 0 ) ### END CODE HERE ### assert(Y_prediction.shape == (1, m)) return Y_prediction w = np.array([[0.1124579],[0.23106775]]) b = -0.3 X = np.array([[1.,-1.1,-3.2],[1.2,2.,0.1]]) print ("predictions = " + str(predict(w, b, X))) ###Output predictions = [[1 1 0]] ###Markdown **Expected Output**: **predictions** [[ 1. 1. 0.]] **What to remember:**You've implemented several functions that:- Initialize (w,b)- Optimize the loss iteratively to learn parameters (w,b): - computing the cost and its gradient - updating the parameters using gradient descent- Use the learned (w,b) to predict the labels for a given set of examples 5 - Merge all functions into a model You will now see how the overall model is structured by putting together all the building blocks (functions implemented in the previous parts) together, in the right order.**8. Exercise:** Implement the model function. 
Use the following notation: - Y_prediction_test for your predictions on the test set - Y_prediction_train for your predictions on the train set - w, costs, grads for the outputs of optimize() ###Code # GRADED FUNCTION: model def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False): """ Builds the logistic regression model by calling the function you've implemented previously Arguments: X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train) Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train) X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test) Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test) num_iterations -- hyperparameter representing the number of iterations to optimize the parameters learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize() print_cost -- Set to true to print the cost every 100 iterations Returns: d -- dictionary containing information about the model. """ ### START CODE HERE ### # initialize parameters with zeros (≈ 1 line of code) w, b = initialize_with_zeros( X_train.shape[ 0 ] ) # Gradient descent (≈ 1 line of code) parameters, grads, costs = optimize( w, b, X_train, Y_train, num_iterations, learning_rate, print_cost) # Retrieve parameters w and b from dictionary "parameters" w = parameters["w"] b = parameters["b"] # Predict test/train set examples (≈ 2 lines of code) Y_prediction_test = predict( w, b, X_test ) Y_prediction_train = predict( w, b, X_train ) ### END CODE HERE ### # Print train/test Errors print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100)) print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100)) d = {"costs": costs, "Y_prediction_test": Y_prediction_test, "Y_prediction_train" : Y_prediction_train, "w" : w, "b" : b, "learning_rate" : learning_rate, "num_iterations": num_iterations} return d ###Output _____no_output_____ ###Markdown Run the following cell to train your model. ###Code d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True) ###Output Cost after iteration 0: 0.693147 Cost after iteration 100: 0.584508 Cost after iteration 200: 0.466949 Cost after iteration 300: 0.376007 Cost after iteration 400: 0.331463 Cost after iteration 500: 0.303273 Cost after iteration 600: 0.279880 Cost after iteration 700: 0.260042 Cost after iteration 800: 0.242941 Cost after iteration 900: 0.228004 Cost after iteration 1000: 0.214820 Cost after iteration 1100: 0.203078 Cost after iteration 1200: 0.192544 Cost after iteration 1300: 0.183033 Cost after iteration 1400: 0.174399 Cost after iteration 1500: 0.166521 Cost after iteration 1600: 0.159305 Cost after iteration 1700: 0.152667 Cost after iteration 1800: 0.146542 Cost after iteration 1900: 0.140872 train accuracy: 99.04306220095694 % test accuracy: 70.0 % ###Markdown **Expected Output**: **Cost after iteration 0 ** 0.693147 ... ... **Train Accuracy** 99.04306220095694 % **Test Accuracy** 70.0 % **Comment**: Training accuracy is close to 100%. This is a good sanity check: your model is working and has high enough capacity to fit the training data. Test accuracy is 70%. It is actually not bad for this simple model, given the small dataset we used and that logistic regression is a linear classifier. 
###Code # Example of a picture that was wrongly classified. index = 1 plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3))) print ("y = " + str(test_set_y[0,index]) + ", you predicted that it is a \"" + classes[d["Y_prediction_test"][0,index]].decode("utf-8") + "\" picture.") ###Output y = 1, you predicted that it is a "cat" picture. ###Markdown Let's also plot the cost function and the gradients. ###Code # Plot learning curve (with costs) costs = np.squeeze(d['costs']) plt.plot(costs) plt.ylabel('cost') plt.xlabel('iterations (per hundreds)') plt.title("Learning rate =" + str(d["learning_rate"])) plt.show() ###Output _____no_output_____
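###Markdown As an optional further experiment (a sketch, not one of the graded exercises), the learning rate $\alpha$ can be varied to see how strongly it shapes the cost curve; all names used below are already defined in this notebook: ###Code
# Compare several learning rates on shorter runs; `model` is defined above
learning_rates = [0.01, 0.001, 0.0001]
models = {}
for lr in learning_rates:
    print("learning rate is: " + str(lr))
    models[str(lr)] = model(train_set_x, train_set_y, test_set_x, test_set_y,
                            num_iterations=1500, learning_rate=lr, print_cost=False)
    print('\n-------------------------------------------------------\n')

for lr in learning_rates:
    plt.plot(np.squeeze(models[str(lr)]["costs"]), label=str(lr))

plt.ylabel('cost')
plt.xlabel('iterations (hundreds)')
plt.legend(loc='upper center', shadow=True)
plt.show()
###Output
_____no_output_____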
Smiles_Testing.ipynb
###Markdown RDKitRDKit is the library that allows us to visualize our molecules. ###Code
# Install RDKit. Takes 2-3 minutes
!wget -c https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
!chmod +x Miniconda3-latest-Linux-x86_64.sh
!time bash ./Miniconda3-latest-Linux-x86_64.sh -b -f -p /usr/local
!time conda install -q -y -c conda-forge rdkit

from rdkit import Chem
from rdkit.Chem import Draw

M = Chem.MolFromSmiles('O=C(O)CC(=O)NCC(c1ccccc1)C(O)CC(Cc1ccccc1)NC(=O)OC1CCCCC1') # ok
m = Chem.MolFromSmiles('CNC(=O)C1CCCC1C(=O)NC(Cc1ccccc1)C(=O)NC(Cc1ccccc1)C(=O)NC(Cc1ccccc1)C(=O)NC(Cc1ccccc1)C(=O)NC(Cc1ccccc1)C(=O)NC(Cc1ccccc1)C(=O)O') # ok
Mole = Chem.MolFromSmiles('CC(C)(C)NC(=O)C1CCCC1C(=O)NC(Cc1ccccc1)C(=O)NC(Cc1ccccc1)C(=O)NC(Cc1ccccc1)C(=O)NC(Cc1ccccc1)C(=O)NC') # ok
mole = Chem.MolFromSmiles('CNC(=O)C(Cc1ccccc1)NC(=O)C(Cc1ccccc1)NC(=O)C(Cc1ccccc1)NC(=O)C(CC(N)=O)NC(=O)c1ccccc1') # ok
A = Chem.MolFromSmiles('CNCC(=O)CCc1ccccc1NC(=O)CCc1ccccc1NC(=O)OC1COC2OCCC12S(=O)(=O)c1cccNC(=O)c2ccccc2cc1') # not ok
a = Chem.MolFromSmiles('CNS(=O)(=O)c1ccccc1C(=O)CCc1ccccc1NC(=O)OC1COC2OCCC12') # ok
B = Chem.MolFromSmiles('CCCCNCC(=O)CCc1ccccc1NC(=O)OC1COC2OCCC12S(=O)(=O)c1cccNC(=O)CCc2ccccc2C(=O)cc1') # not ok
b = Chem.MolFromSmiles('CCCCNCC(=O)CCc1ccccc1') # ok
C = Chem.MolFromSmiles('c1cccNC(=O)OCCCCcc1') # not ok
c = Chem.MolFromSmiles('CCCCCC(=O)N1CCCCC1CC1CCCC1C(=O)NCCc1ccccc1C(=O)NCCc1ccccc1C(=O)NCCc1ccccc1C(=O)') # not ok
D = Chem.MolFromSmiles('C(=O)CCc1ccccc1NC(=O)CCc1ccccc1C(=O)NCCc1ccccc1C(=O)NCCc1ccccc1C(=O)NCCc1ccccc1C(=O)') # ok
d = Chem.MolFromSmiles('Cc1ccccNS(=O)(=O)c2ccccc2c1C(=O)NC') # not ok
E = Chem.MolFromSmiles('c1ccC(=O)N2CCCCCC2c1') # not ok
e = Chem.MolFromSmiles('C(=O)CCCCCc1ccC(=O)NCCC(=O)ccc1C') # not ok
F = Chem.MolFromSmiles('CCCCCCCCNC(=O)CCCCC(=O)') # ok
f = Chem.MolFromSmiles('CNCCC(=O)CCc1ccccc1') # ok
G = Chem.MolFromSmiles('Cc1cccC(=O)NCCc2ccccc2C(=O)NCCc2ccccc2C(=O)cc1') # not ok
g = Chem.MolFromSmiles('PC1CCCC1C(=O)OC1COC2OCCC12') # ok
H = Chem.MolFromSmiles('c1cccNCC(=O)CCCC(=O)NCCC(=O)C(=O)NCCC(=O)cc1') # not ok
h = Chem.MolFromSmiles('Cc1ccccc1C(=O)NCCc1ccccc1C(=O)NCCc1ccccc1C(=O)NCCc1ccccc1C(=O)') # ok

# Visualizing the SMILES
Draw.MolsToGridImage([M, m, Mole, mole, A, a, B, b, C, c, D, d, E, e, F, f, G, g, H, h],
                     molsPerRow=2, maxMols=100,
                     subImgSize=(400, 400),
                     legends=["M", "m", "Mole", "mole", "A", "a", "B", "b", "C", "c",
                              "D", "d", "E", "e", "F", "f", "G", "g", "H", "h"])
###Output
_____no_output_____
###Markdown ok, not ok...what's going on?* Essentially, the compounds whose SMILES close one giant aromatic ring around the whole structure cannot be sanitized, so `Chem.MolFromSmiles` returns `None` for them and they are excluded. **Smiles for testing*** O=C(O)CC(=O)NCC(c1ccccc1)C(O)CC(Cc1ccccc1)NC(=O)OC1CCCCC1* CNC(=O)C1CCCC1C(=O)NC(Cc1ccccc1)C(=O)NC(Cc1ccccc1)C(=O)NC(Cc1ccccc1)C(=O)NC(Cc1ccccc1)C(=O)NC(Cc1ccccc1)C(=O)NC(Cc1ccccc1)C(=O)O* CC(C)(C)NC(=O)C1CCCC1C(=O)NC(Cc1ccccc1)C(=O)NC(Cc1ccccc1)C(=O)NC(Cc1ccccc1)C(=O)NC(Cc1ccccc1)C(=O)NC* CNC(=O)C(Cc1ccccc1)NC(=O)C(Cc1ccccc1)NC(=O)C(Cc1ccccc1)NC(=O)C(CC(N)=O)NC(=O)c1ccccc1* CNS(=O)(=O)c1ccccc1C(=O)CCc1ccccc1NC(=O)OC1COC2OCCC12* CCCCNCC(=O)CCc1ccccc1* C(=O)CCc1ccccc1NC(=O)CCc1ccccc1C(=O)NCCc1ccccc1C(=O)NCCc1ccccc1C(=O)NCCc1ccccc1C(=O)* CCCCCCCCNC(=O)CCCCC(=O)* PC1CCCC1C(=O)OC1COC2OCCC12* Cc1ccccc1C(=O)NCCc1ccccc1C(=O)NCCc1ccccc1C(=O)NCCc1ccccc1C(=O) ###Code
###Output
_____no_output_____
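###Markdown Since `Chem.MolFromSmiles` returns `None` (and prints a parser error) when a SMILES string cannot be sanitized, the ok / not ok split above can be checked programmatically instead of by eye. A minimal sketch, reusing two of the strings above; the expectation that 'M' parses and 'A' does not is an assumption based on the notes above: ###Code
checks = {
    'M': 'O=C(O)CC(=O)NCC(c1ccccc1)C(O)CC(Cc1ccccc1)NC(=O)OC1CCCCC1',
    'A': 'CNCC(=O)CCc1ccccc1NC(=O)CCc1ccccc1NC(=O)OC1COC2OCCC12S(=O)(=O)c1cccNC(=O)c2ccccc2cc1',
}
for name, smi in checks.items():
    mol = Chem.MolFromSmiles(smi)
    print(name, 'ok' if mol is not None else 'not ok (failed to parse)')
###Output
_____no_output_____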
reference notebooks/kaggle-learn-reference-feature-engineering.ipynb
###Markdown Feature EngineeringThis notebook is an attempt to summarize the feature engineering techniques covered in the [Kaggle Learn](https://www.kaggle.com/learn/feature-engineering) course (the notes, the exercises and the [bonus notebook](https://www.kaggle.com/ryanholbrook/feature-engineering-for-house-prices)) into one notebook for easier reference. We conclude with a sample competition submission to the [Housing Prices Competition for Kaggle Learn Users](https://www.kaggle.com/c/home-data-for-ml-course). ###Code # Global variables for testing changes to this notebook quickly RANDOM_SEED = 0 NUM_FOLDS = 12 MAX_TREES = 2000 EARLY_STOP = 50 NUM_TRIALS = 100 SUBMIT = True # Essentials import os import warnings import numpy as np import pandas as pd import time from collections import defaultdict # Models from sklearn.base import clone from xgboost import XGBRegressor from lightgbm import LGBMRegressor from catboost import CatBoostRegressor # Model Evaluation from sklearn.preprocessing import KBinsDiscretizer from sklearn.model_selection import StratifiedKFold, KFold from sklearn.metrics import mean_absolute_error # Preprocessing from functools import partial, reduce from sklearn.impute import SimpleImputer from category_encoders import OrdinalEncoder, OneHotEncoder # Feature Engineering from sklearn.feature_selection import mutual_info_regression from sklearn.preprocessing import StandardScaler from sklearn.cluster import KMeans from sklearn.decomposition import PCA from category_encoders import MEstimateEncoder # Hyperparameter Tuning import optuna from optuna.visualization import plot_param_importances, plot_parallel_coordinate from optuna.pruners import PercentilePruner # Plotting import matplotlib.pyplot as plt import seaborn as sns # Set Matplotlib defaults plt.style.use("seaborn-whitegrid") plt.rc("figure", autolayout=True) plt.rc( "axes", labelweight="bold", labelsize="large", titleweight="bold", titlesize=14, titlepad=10, ) # Mute warnings warnings.filterwarnings('ignore') ###Output _____no_output_____ ###Markdown Load Data ###Code # Load the training data train = pd.read_csv("../input/home-data-for-ml-course/train.csv") test = pd.read_csv("../input/home-data-for-ml-course/test.csv") submission = pd.read_csv("../input/home-data-for-ml-course/sample_submission.csv") # Remove rows with missing target train.dropna(axis=0, subset=['SalePrice'], inplace=True) # Columns of interest features = [x for x in train.columns if x not in ['SalePrice','Id']] categorical = [x for x in features if train[x].dtype == "object"] numerical = [x for x in features if train[x].dtype in ['int64', 'float64']] # Bin target for stratified cross-validation binner = KBinsDiscretizer(n_bins = 45, encode = 'ordinal', strategy = 'quantile') y_bins = binner.fit_transform(pd.DataFrame(data=train['SalePrice'])) ###Output _____no_output_____ ###Markdown Preliminaries: Preprocessing This section involves preprocessing our data prior to feature engineering, in particular, dealing with missing values and encoding categorical variables. This content is covered in the [Intermediate Machine Learning](https://www.kaggle.com/learn/intermediate-machine-learning) course, and is the subject of my [previous notebook](https://www.kaggle.com/rsizem2/kaggle-learn-reference-intermediate-ml). In this section we will do the following:1. Clean data by fixing typos and erroneous values2. Impute numerical data with the column mean3. Ordinally encode the *ordinal* variables4. 
Use a mix of ordinal and one-hot encoding for the *nominal* variables 1. Data CleaningWe fix some typos and bad values in our raw data. ###Code def data_cleaning(input_df): df = input_df.copy() # Data cleaning: fix typos and bad values df['MSZoning'] = df['MSZoning'].replace({'C (all)': 'C'}) df["Exterior2nd"] = df["Exterior2nd"].replace({"Brk Cmn":"BrkComm","Wd Shng": "WdShing"}) df['Neighborhood'] = df['Neighborhood'].replace({'NAmes':'Names'}) df["GarageYrBlt"] = df["GarageYrBlt"].where(df.GarageYrBlt <= 2010, df.YearBuilt) #df["MSClass"] = df['MSZoning'].map({'A': 'A','C': 'C',"FV": 'R','I': 'I',"RH": 'R',"RL": 'R',"RP": 'R',"RM": 'R', np.nan:np.nan}) return df ###Output _____no_output_____ ###Markdown 2. ImputationWe replace numerical NA values with the column mean and categorical NAs with a placeholder value. ###Code # Transformations that depend only on the input data (no fear of leakage) def imputation(X_train, X_valid, X_test = None, num_strategy = 'mean', cat_strategy = 'constant'): X_train, X_valid = X_train.copy(), X_valid.copy() if X_test is not None: X_test = X_test.copy() # 1. impute numerical data assert num_strategy in ['median','mean'] columns = [col for col in X_train.columns if X_train[col].dtype != "object"] num_imputer = SimpleImputer(strategy = num_strategy) X_train[columns] = num_imputer.fit_transform(X_train[columns]) X_valid[columns] = num_imputer.transform(X_valid[columns]) if X_test is not None: X_test[columns] = num_imputer.transform(X_test[columns]) # 2. impute categorical data assert cat_strategy in ['constant','most_frequent'] cat_imputer = SimpleImputer(strategy = cat_strategy, fill_value = 'None') columns = [col for col in X_train.columns if X_train[col].dtype == "object"] X_train[columns] = cat_imputer.fit_transform(X_train[columns]) X_valid[columns] = cat_imputer.transform(X_valid[columns]) if X_test is not None: X_test[columns] = cat_imputer.transform(X_test[columns]) return X_train, X_valid, X_test ###Output _____no_output_____ ###Markdown 3. Encoding Ordinal VariablesA few of our variables are based on ratings (e.g. poor, fair, good) of various qualities of the property. We hard code this ordering: ###Code def ordinal_encoding(X_train, X_valid, X_test = None): X_train, X_valid = X_train.copy(), X_valid.copy() if X_test is not None: X_test = X_test.copy() # 1. Encode 1-10 ratings cols = ["OverallQual","OverallCond"] cols = [x for x in cols if x in X_train.columns] ratings = {float(a):b for b,a in enumerate(range(1,11))} mapping = [{'col':x, 'mapping': ratings} for x in cols] encoder = OrdinalEncoder(cols = cols, mapping = mapping, handle_missing = 'return_nan') X_train = encoder.fit_transform(X_train) X_valid = encoder.transform(X_valid) if X_test is not None: X_test = encoder.transform(X_test) # 2. Encode Poor, Fair, Avg, Good, Ex ratings cols = ["ExterQual","ExterCond","BsmtQual","BsmtCond","HeatingQC", "KitchenQual","FireplaceQu","GarageQual","GarageCond",'PoolQC'] cols = [x for x in cols if x in X_train.columns] ratings = {"Po":0, "Fa":1, "TA":2, "Gd":3, "Ex":4} mapping = [{'col':x, 'mapping': ratings} for x in cols] encoder = OrdinalEncoder(cols = cols, mapping = mapping, handle_missing = 'return_nan') X_train = encoder.fit_transform(X_train) X_valid = encoder.transform(X_valid) if X_test is not None: X_test = encoder.transform(X_test) # 3. 
Encode remaining ordinal data cols = ["LotShape","LandSlope","BsmtExposure","BsmtFinType1","BsmtFinType2", "Functional","GarageFinish","PavedDrive","Utilities","CentralAir","Electrical", "Fence"] cols = [x for x in cols if x in X_train.columns] mapping = [{'col':"LotShape", 'mapping': {"Reg":0, "IR1":1, "IR2":2, "IR3":3}}, {'col':"LandSlope", 'mapping': {"Sev":0, "Mod":1, "Gtl":2}}, {'col':"BsmtExposure", 'mapping': {"No":0, "Mn":1, "Av":2, "Gd":3}}, {'col':"BsmtFinType1", 'mapping': {"Unf":0, "LwQ":1, "Rec":2, "BLQ":3, "ALQ":4, "GLQ":5}}, {'col':"BsmtFinType2", 'mapping': {"Unf":0, "LwQ":1, "Rec":2, "BLQ":3, "ALQ":4, "GLQ":5}}, {'col':"Functional", 'mapping': {"Sal":0, "Sev":1, "Maj1":2, "Maj2":3, "Mod":4, "Min2":5, "Min1":6, "Typ":7}}, {'col':"GarageFinish", 'mapping': {"Unf":0, "RFn":1, "Fin":2}}, {'col':"PavedDrive", 'mapping': {"N":0, "P":1, "Y":2}}, {'col':"Utilities", 'mapping': {"NoSeWa":0, "NoSewr":1, "AllPub":2}}, {'col':"CentralAir", 'mapping': {"N":0, "Y":1}}, {'col':"Electrical", 'mapping': {"Mix":0, "FuseP":1, "FuseF":2, "FuseA":3, "SBrkr":4}}, {'col':"Fence", 'mapping': {"MnWw":0, "GdWo":1, "MnPrv":2, "GdPrv":3}}] mapping = [x for x in mapping if x['col'] in X_train.columns] encoder = OrdinalEncoder(cols = cols, mapping = mapping, handle_missing = 'return_nan') X_train = encoder.fit_transform(X_train) X_valid = encoder.transform(X_valid) if X_test is not None: X_test = encoder.transform(X_test) return X_train, X_valid, X_test ###Output _____no_output_____ ###Markdown 4. Encoding Nominal DataFor the remaining categorical data we one-hot encode the low cardinality variables and ordinally encode the high cardinality variables. Recall the *cardinality* refers to the number of unique values. ###Code # Not ordinal categorical data #columns = ["MSSubClass", "MSZoning", "Street", "Alley", "LandContour", "LotConfig", # "Neighborhood", "Condition1", "Condition2", "BldgType", "HouseStyle", # "RoofStyle", "RoofMatl", "Exterior1st", "Exterior2nd", "MasVnrType", "Foundation", # "Heating", "CentralAir", "GarageType", "MiscFeature", "SaleType", "SaleCondition"] def nominal_encoding(X_train, X_valid, X_test = None, threshold = 10): X_train, X_valid = X_train.copy(), X_valid.copy() if X_test is not None: X_test = X_test.copy() # 1. Determine high and low cardinality data columns = [col for col in X_train.columns if X_train[col].dtype == 'object'] high_cols = [col for col in columns if X_train[col].nunique() >= threshold] low_cols = [col for col in columns if X_train[col].nunique() < threshold] # label encode high cardinality data if high_cols: encoder = OrdinalEncoder(cols = high_cols, handle_missing = 'return_nan') X_train = encoder.fit_transform(X_train) X_valid = encoder.transform(X_valid) if X_test is not None: X_test = encoder.transform(X_test) if low_cols: encoder = OneHotEncoder(cols = low_cols, use_cat_names = True, handle_missing = 'return_nan') X_train = encoder.fit_transform(X_train) X_valid = encoder.transform(X_valid) if X_test is not None: X_test = encoder.transform(X_test) return X_train, X_valid, X_test ###Output _____no_output_____ ###Markdown Full Preprocessing The following function combines all of the above preprocessing steps. We will use this preprocessing for every model we test. ###Code def preprocessing(X_train, X_valid, X_test = None): # 1. Data cleaning X_train = data_cleaning(X_train) X_valid = data_cleaning(X_valid) if X_test is not None: X_test = data_cleaning(X_test) # 2. Imputation X_train, X_valid, X_test = imputation(X_train, X_valid, X_test) # 3. 
Ordinal Encoding X_train, X_valid, X_test = ordinal_encoding(X_train, X_valid, X_test) # 4. Nominal Encoding X_train, X_valid, X_test = nominal_encoding(X_train, X_valid, X_test) return X_train, X_valid, X_test ###Output _____no_output_____ ###Markdown Feature EngineeringIn this section we consider the feature engineering techniques covered in the course and benchmark. 1. Scoring FunctionThis function takes a function which transforms our data (e.g. preprocessing or feature engineering) and scores it using cross-validation and returns the mean absolute error (MAE). ###Code def score_xgboost(xgb_model = XGBRegressor(random_state = RANDOM_SEED, n_estimators = 500, learning_rate = 0.05), processing = preprocessing, trial = None, verbose = True): # Drop high cardinality categorical variables features = [x for x in train.columns if x not in ['Id','SalePrice']] X_temp = train[features].copy() y_temp = train['SalePrice'].copy() # Data structure for storing scores and times scores = np.zeros(NUM_FOLDS) times = np.zeros(NUM_FOLDS) # Stratified k-fold cross-validation kfold = StratifiedKFold(n_splits = NUM_FOLDS, shuffle = True, random_state = RANDOM_SEED) for fold, (train_idx, valid_idx) in enumerate(kfold.split(X_temp, y_bins)): # Training and Validation Sets X_train, X_valid = X_temp.iloc[train_idx], X_temp.iloc[valid_idx] y_train, y_valid = y_temp.iloc[train_idx], y_temp.iloc[valid_idx] # Preprocessing start = time.time() X_train, X_valid, _ = processing(X_train, X_valid) # Create model model = clone(xgb_model) model.fit( X_train, y_train, early_stopping_rounds=EARLY_STOP, eval_set=[(X_valid, y_valid)], verbose=False ) # validation predictions valid_preds = np.ravel(model.predict(X_valid)) scores[fold] = mean_absolute_error(y_valid, valid_preds) end = time.time() times[fold] = end - start time.sleep(0.5) if trial: # Use pruning on fold AUC trial.report( value = scores[fold], step = fold ) # prune slow trials and bad fold AUCs if trial.should_prune(): raise optuna.TrialPruned() if verbose: print(f'\n{NUM_FOLDS}-Fold Average MAE: {round(scores.mean(), 5)} in {round(times.sum(),2)}s.\n') return scores.mean() ###Output _____no_output_____ ###Markdown 2. XGBoost BaselineThis model performs no feature engineering other than the preprocessing defined in the previous section. ###Code # Data structure for saving our training scores benchmarks = defaultdict(list) # Baseline score score = score_xgboost() # Save scores benchmarks['feature'].append('Baseline') benchmarks['score'].append(score) ###Output 12-Fold Average MAE: 15838.47199 in 42.97s. ###Markdown Lesson 2: Mutual InformationWe use mutual information to perform feature selection, discarding features with low/no mutual information score. We use the [mutual_info_regression](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.mutual_info_regression.html) function from scikit-learn. 
2.1 Most Informative FeaturesThe following function returns a sorted series of features and their mutual information scores: ###Code def get_mi_scores(): # Preprocessing columns = [x for x in train.columns if x not in ['Id','SalePrice']] X_train, X_test, _ = preprocessing(train[columns], test[columns]) discrete = [i for i,x in enumerate(X_train.columns) if x not in numerical] y_train = train['SalePrice'] # Get Score scores = mutual_info_regression( X_train, y_train, discrete_features = discrete ) scores = pd.Series(scores, name = "MI Scores", index = X_train.columns) return scores.sort_values(ascending=False) # Get sorted list of features get_mi_scores() ###Output _____no_output_____ ###Markdown 2.2 Plot Informative FeaturesThe following function plots all the features with mutual information scores above a threshold (0.1 by default): ###Code def plot_informative(threshold = 0.1): scores = get_mi_scores() scores.drop(scores[scores <= threshold].index, inplace = True) width = np.arange(len(scores)) ticks = list(scores.index) plt.figure(dpi=100, figsize=(8,6)) plt.barh(width, scores) plt.yticks(width, ticks) plt.title("Mutual Information Scores") plot_informative(threshold = 0.1) ###Output _____no_output_____ ###Markdown 2.3 Drop Uninformative FeaturesA function which transforms our data by removing features with low mutual information scores: ###Code # X_train must have no NA values def remove_uninformative(X_train, X_valid, X_test = None, threshold = 1e-5, verbose = False): # 0. Preprocessing X_train, X_valid, X_test = preprocessing(X_train, X_valid, X_test) # 1. Get discrete columns and target discrete = [i for i,x in enumerate(X_train.columns) if x not in numerical] y_train = train['SalePrice'].iloc[X_train.index] # 2. Get mutual information scores scores = mutual_info_regression(X_train, y_train, discrete_features = discrete) cols = [x for i, x in enumerate(X_train.columns) if scores[i] < threshold] # 3. Drop the uninformative columns X_train.drop(cols, axis = 1, inplace = True) X_valid.drop(cols, axis = 1, inplace = True) if X_test is not None: X_test.drop(cols, axis = 1, inplace = True) if verbose: print("Dropped columns:", *cols) return X_train, X_valid, X_test # Drop uninformative score = score_xgboost(processing = remove_uninformative) # Save scores benchmarks['feature'].append('Mutual_Info') benchmarks['score'].append(score) ###Output 12-Fold Average MAE: 15743.80477 in 72.63s. ###Markdown Lesson 3: Creating FeaturesIn this section we create features using static transformations of existing features:1. Mathematical Transformations2. Feature Interactions3. Count Features4. Building and breaking down features5. 
Group transformations 3.1 Mathematical TransformationsCreate features by combining and/or transforming already existing features ###Code def transformations(input_df, test_data = False): df = input_df.copy() temp = train.iloc[df.index] if test_data: temp = test.copy() df["LivLotRatio"] = temp["GrLivArea"] / temp["LotArea"] df["Spaciousness"] = (temp["1stFlrSF"]+temp["2ndFlrSF"]) / temp["TotRmsAbvGrd"] df["TotalOutsideSF"] = temp["WoodDeckSF"] + temp["OpenPorchSF"] + temp["EnclosedPorch"] + temp["3SsnPorch"] + temp["ScreenPorch"] df['TotalLot'] = temp['LotFrontage'] + temp['LotArea'] df['TotalBsmtFin'] = temp['BsmtFinSF1'] + temp['BsmtFinSF2'] df['TotalSF'] = temp['TotalBsmtSF'] + temp['2ndFlrSF'] + temp['1stFlrSF'] df['TotalBath'] = temp['FullBath'] + temp['HalfBath'] * 0.5 + temp['BsmtFullBath'] + temp['BsmtHalfBath'] * 0.5 df['TotalPorch'] = temp['OpenPorchSF'] + temp['EnclosedPorch'] + temp['ScreenPorch'] + temp['WoodDeckSF'] return df def mathematical_transformations(X_train, X_valid, X_test = None): # 0. Preprocessing X_train, X_valid, X_test = preprocessing(X_train, X_valid, X_test) X_train = transformations(X_train) X_valid = transformations(X_valid) if X_test is not None: X_test = transformations(X_test) return X_train, X_valid, X_test # Mathematical transformations score = score_xgboost(processing = mathematical_transformations) # Save scores benchmarks['feature'].append('Transformations') benchmarks['score'].append(score) ###Output 12-Fold Average MAE: 15508.78515 in 42.56s. ###Markdown 3.2 Encode Feature InteractionsAttempt to encode an interaction between a categorical variable and a numerical variable ###Code def interaction(input_df, cat_col = "BldgType", num_col = "GrLivArea"): df = input_df.copy() try: # will fail if column already one-hot encoded X = pd.get_dummies(df[cat_col], prefix=cat_col) for col in X.columns: df[col+"_"+num_col] = X[col]*df[num_col] except: # if column already one-hot encoded for col in df.columns: if col.startswith(cat_col): df[col+"_"+num_col] = df[col]*df[num_col] return df def encode_interaction(X_train, X_valid, X_test = None, cat_col = "BldgType", num_col = "GrLivArea"): X_train, X_valid, X_test = preprocessing(X_train, X_valid, X_test) X_train = interaction(X_train, cat_col, num_col) X_valid = interaction(X_valid, cat_col, num_col) if X_test is not None: X_test = interaction(X_test, cat_col, num_col) return X_train, X_valid, X_test # Categorical interactions score = score_xgboost(processing = encode_interaction) # Save scores benchmarks['feature'].append('Interactions') benchmarks['score'].append(score) ###Output 12-Fold Average MAE: 15853.57222 in 43.29s. ###Markdown 3.3 Generate a Count FeatureWe combine several related features into an aggregate feature counting the presence of the components. ###Code def count_features(X_train, X_valid, X_test = None, features = ["WoodDeckSF","OpenPorchSF","EnclosedPorch","3SsnPorch","ScreenPorch"]): X_train, X_valid, X_test = preprocessing(X_train, X_valid, X_test) X_train["PorchTypes"] = X_train[features].gt(0).sum(axis=1) X_valid["PorchTypes"] = X_valid[features].gt(0).sum(axis=1) if X_test is not None: X_test["PorchTypes"] = X_test[features].gt(0).sum(axis=1) return X_train, X_valid, X_test # New count features score = score_xgboost(processing = count_features) # Save scores benchmarks['feature'].append('Count') benchmarks['score'].append(score) ###Output 12-Fold Average MAE: 15755.98938 in 42.38s. 
###Markdown 3.4 Break Down a Categorical FeatureNote: The column `HouseStyle` already serves the purpose of the `MSClass` feature in the notes, so we create `MSClass` use the `Zoning` column instead: ###Code def breakdown_zoning(X_train, X_valid, X_test = None): mapping = {'A': 'A','C': 'C',"FV": 'R','I': 'I',"RH": 'R',"RL": 'R',"RP": 'R',"RM": 'R', np.nan:np.nan} X_train["MSClass"] = train['MSZoning'].iloc[X_train.index].map(mapping) X_valid["MSClass"] = train['MSZoning'].iloc[X_valid.index].map(mapping) if X_test is not None: X_test["MSClass"] = test['MSZoning'].map(mapping) X_train, X_valid, X_test = preprocessing(X_train, X_valid, X_test) return X_train, X_valid, X_test # New count features score = score_xgboost(processing = breakdown_zoning) # Save scores benchmarks['feature'].append('MSZoning') benchmarks['score'].append(score) ###Output 12-Fold Average MAE: 15838.47199 in 44.47s. ###Markdown 3.5 Use a Grouped TransformWe create a feature from a statistic calculated on a group. In this case the above ground living area per neighborhood. Note that the statistic is calculated on the training data only. ###Code def group_transformation(X_train, X_valid, X_test = None): X_train, X_valid, X_test = preprocessing(X_train, X_valid, X_test) X_train["MedNhbdLvArea"] = X_train.groupby("Neighborhood")["GrLivArea"].transform('median') # we use the medians from the training data to impute the test data mapping = {y:x for x,y in zip(X_train["MedNhbdLvArea"].values, X_train['Neighborhood'].values)} X_valid["MedNhbdLvArea"] = X_valid['Neighborhood'].map(mapping) if X_test is not None: X_test["MedNhbdLvArea"] = X_test['Neighborhood'].map(mapping) return X_train, X_valid, X_test # New count features score = score_xgboost(processing = group_transformation) # Save scores benchmarks['feature'].append('Group') benchmarks['score'].append(score) ###Output 12-Fold Average MAE: 15523.25086 in 40.59s. ###Markdown Lesson 4: K-means ClusteringIn this section we use unsupervised clustering techniques to build new features for our model. 4.1 Cluster Label FeaturesWe create a new feature by clustering a subset of the numerical data and using the resulting cluster labels to create a new categorical feature. ###Code def cluster_labels(X_train, X_valid, X_test = None, name = "Area", features = ['LotArea', 'TotalBsmtSF', '1stFlrSF', '2ndFlrSF','GrLivArea']): X_train, X_valid, X_test = preprocessing(X_train, X_valid, X_test) # 1. normalize based on training data scaler = StandardScaler() X_scaled = scaler.fit_transform(X_train[features]) X_valid_scaled = scaler.transform(X_valid[features]) if X_test is not None: X_test_scaled = scaler.transform(X_test[features]) # 2. create cluster labels (use predict) kmeans = KMeans(n_clusters = 10, n_init = 10, random_state=0) X_train[name + "_Cluster"] = kmeans.fit_predict(X_scaled) X_valid[name + "_Cluster"] = kmeans.predict(X_valid_scaled) if X_test is not None: X_test[name + "_Cluster"] = kmeans.predict(X_test_scaled) return X_train, X_valid, X_test # Cluster label features score = score_xgboost(processing = cluster_labels) # Save scores benchmarks['feature'].append('Cluster_Labels') benchmarks['score'].append(score) ###Output 12-Fold Average MAE: 15843.71768 in 128.66s. ###Markdown 4.2 Cluster Distance FeaturesWe create new numerical features by clustering a subset of the numerical data and using the distances to each respective cluster centroid. 
###Code def cluster_distances(X_train, X_valid, X_test = None, name = "Area", features = ['LotArea', 'TotalBsmtSF', '1stFlrSF', '2ndFlrSF','GrLivArea']): X_train, X_valid, X_test = preprocessing(X_train, X_valid, X_test) # 1. normalize based on training data scaler = StandardScaler() X_scaled = scaler.fit_transform(X_train[features]) X_valid_scaled = scaler.transform(X_valid[features]) if X_test is not None: X_test_scaled = scaler.transform(X_test[features]) # 2. generate cluster distances (use transform) kmeans = KMeans(n_clusters = 10, n_init = 10, random_state=0) X_cd = kmeans.fit_transform(X_scaled) X_valid_cd = kmeans.transform(X_valid_scaled) if X_test is not None: X_test_cd = kmeans.transform(X_test_scaled) # 3. column labels X_cd = pd.DataFrame(X_cd, columns=[name + "_Centroid_" + str(i) for i in range(X_cd.shape[1])]) X_valid_cd = pd.DataFrame(X_valid_cd, columns=[name + "_Centroid_" + str(i) for i in range(X_valid_cd.shape[1])]) if X_test is not None: X_test_cd = pd.DataFrame(X_test_cd, columns=[name + "_Centroid_" + str(i) for i in range(X_test_cd.shape[1])]) if X_test is not None: return X_train.join(X_cd), X_valid.join(X_valid_cd), X_test.join(X_test_cd) return X_train.join(X_cd), X_valid.join(X_valid_cd), X_test # Cluster distance features score = score_xgboost(processing = cluster_distances) # Save scores benchmarks['feature'].append('Cluster_Dist') benchmarks['score'].append(score) ###Output 12-Fold Average MAE: 16114.38487 in 139.65s. ###Markdown Lesson 5: Principal Component AnalysisFollowing the bonus notebook, we use PCA to examine the following four features: `GarageArea`, `YearRemodAdd`, `TotalBsmtSF`, and `GrLivArea`. ###Code # Assumes data is standardized def apply_pca(X): # Standardize input scaler = StandardScaler() X_scaled = scaler.fit_transform(X) X_scaled = pd.DataFrame(X_scaled, columns = X.columns) pca = PCA() X_pca = pca.fit_transform(X_scaled) component_names = [f"PC{i+1}" for i in range(X_pca.shape[1])] X_pca = pd.DataFrame(X_pca, columns=component_names) # Create loadings loadings = pd.DataFrame( pca.components_.T, # transpose the matrix of loadings columns=component_names, # so the columns are the principal components index=X_scaled.columns, # and the rows are the original features ) return pca, X_pca, loadings ###Output _____no_output_____ ###Markdown 5.1 Plot VarianceThe following function adapted from the notes plots the explained variance and culmulative variance of each component. ###Code def plot_variance(X, width=8, dpi=100): pca, _, _ = apply_pca(X) # Create figure fig, axs = plt.subplots(1, 2) n = pca.n_components_ grid = np.arange(1, n + 1) # Explained variance evr = pca.explained_variance_ratio_ axs[0].bar(grid, evr) axs[0].set( xlabel="Component", title="% Explained Variance", ylim=(-0.1, 1.1) ) # Cumulative Variance cv = np.cumsum(evr) axs[1].plot(np.r_[0, grid], np.r_[0, cv], "o-") axs[1].set( xlabel="Component", title="% Cumulative Variance", ylim=(-0.1, 1.1) ) # Set up figure fig.set(figwidth=8, dpi=100) plot_variance( train[["GarageArea","YearRemodAdd","TotalBsmtSF","GrLivArea"]] ) ###Output _____no_output_____ ###Markdown 5.2 LoadingsView the loadings for each feature/component. 
###Code pca, X_pca, loadings = apply_pca( train[["GarageArea","YearRemodAdd","TotalBsmtSF","GrLivArea"]] ) loadings ###Output _____no_output_____ ###Markdown 5.3 Outlier DetectionFind observations with outlier values in each component, this can be useful for finding outlier values that might not be readily apparent in the original feature space: ###Code def check_outliers(X_pca, component = "PC1"): idx = X_pca[component].sort_values(ascending=False).index return train.loc[idx, ["SalePrice", "Neighborhood", "SaleCondition"] + features].head(10) check_outliers(X_pca, component = "PC1") ###Output _____no_output_____ ###Markdown 5.4 PCA FeaturesThe following function adds the first two principal components as features to our original data: ###Code # Performs PCA on the whole dataframe def pca_transform(X_train, X_valid, X_test = None, features = ["GarageArea","YearRemodAdd","TotalBsmtSF","GrLivArea"], n_components = 2): assert n_components <= len(features) X_train, X_valid, X_test = preprocessing(X_train, X_valid, X_test) # Normalize based on training data scaler = StandardScaler() X_scaled = scaler.fit_transform(X_train[features]) X_valid_scaled = scaler.transform(X_valid[features]) if X_test is not None: X_test_scaled = scaler.transform(X_test[features]) # Create principal components pca = PCA(n_components) X_pca = pca.fit_transform(X_scaled) X_valid_pca = pca.transform(X_valid_scaled) if X_test is not None: X_test_pca = pca.transform(X_test_scaled) # Convert to dataframe component_names = [f"PC{i+1}" for i in range(X_pca.shape[1])] X_pca = pd.DataFrame(X_pca, columns=component_names) X_valid_pca = pd.DataFrame(X_valid_pca, columns=component_names) if X_test is not None: X_test_pca = pd.DataFrame(X_test_pca, columns=component_names) if X_test is not None: return X_train.join(X_pca), X_valid.join(X_valid_pca), X_test.join(X_test_pca) return X_train.join(X_pca), X_valid.join(X_valid_pca), X_test # PCA components score = score_xgboost(processing = pca_transform) # Save scores benchmarks['feature'].append('PCA_Comps') benchmarks['score'].append(score) ###Output 12-Fold Average MAE: 16108.08229 in 45.07s. ###Markdown 5.5 PCA Inspired FeaturesFollowing the bonus notebook, adds two new features inspired by the principal component analysis: ###Code def pca_inspired(X_train, X_valid, X_test = None): X_train, X_valid, X_test = preprocessing(X_train, X_valid, X_test) X_train["Feature1"] = X_train.GrLivArea + X_train.TotalBsmtSF X_train["Feature2"] = X_train.YearRemodAdd * X_train.TotalBsmtSF X_valid["Feature1"] = X_valid.GrLivArea + X_valid.TotalBsmtSF X_valid["Feature2"] = X_valid.YearRemodAdd * X_valid.TotalBsmtSF if X_test is not None: X_test["Feature1"] = X_test.GrLivArea + X_test.TotalBsmtSF X_test["Feature2"] = X_test.YearRemodAdd * X_test.TotalBsmtSF return X_train, X_valid, X_test # PCA inspired features score = score_xgboost(processing = pca_inspired) # Save scores benchmarks['feature'].append('PCA_Inspired') benchmarks['score'].append(score) ###Output 12-Fold Average MAE: 15575.02949 in 46.89s. ###Markdown Lesson 6: Target EncodingWe use the wrapper from the [bonus notebook](https://www.kaggle.com/ryanholbrook/feature-engineering-for-house-prices) to target encode a few categorical variables. ###Code class CrossFoldEncoder: def __init__(self, encoder, **kwargs): self.encoder_ = encoder self.kwargs_ = kwargs # keyword arguments for the encoder self.cv_ = KFold(n_splits=5) # Fit an encoder on one split and transform the feature on the # other. 
Iterating over the splits in all folds gives a complete # transformation. We also now have one trained encoder on each # fold. def fit_transform(self, X, y, cols): self.fitted_encoders_ = [] self.cols_ = cols X_encoded = [] for idx_encode, idx_train in self.cv_.split(X): fitted_encoder = self.encoder_(cols=cols, **self.kwargs_) fitted_encoder.fit( X.iloc[idx_encode, :], y.iloc[idx_encode], ) X_encoded.append(fitted_encoder.transform(X.iloc[idx_train, :])[cols]) self.fitted_encoders_.append(fitted_encoder) X_encoded = pd.concat(X_encoded) X_encoded.columns = [name + "_encoded" for name in X_encoded.columns] return X_encoded # To transform the test data, average the encodings learned from # each fold. def transform(self, X): from functools import reduce X_encoded_list = [] for fitted_encoder in self.fitted_encoders_: X_encoded = fitted_encoder.transform(X) X_encoded_list.append(X_encoded[self.cols_]) X_encoded = reduce( lambda x, y: x.add(y, fill_value=0), X_encoded_list ) / len(X_encoded_list) X_encoded.columns = [name + "_encoded" for name in X_encoded.columns] return X_encoded ###Output _____no_output_____ ###Markdown 6.1 Encode NeighborhoodWe use `Neighborhood` to target encode since it is a high cardinality nominal feature which is likely very important for determining the target `SalePrice`. ###Code def encode_neighborhood(X_train, X_valid, X_test = None): X_train, X_valid, X_test = preprocessing(X_train, X_valid, X_test) y_train = train['SalePrice'].iloc[X_train.index] encoder = CrossFoldEncoder(MEstimateEncoder, m=1) X1_train = encoder.fit_transform(X_train, y_train, cols=["Neighborhood"]) X1_valid = encoder.transform(X_valid) if X_test is not None: X1_test = encoder.transform(X_test) if X_test is not None: return X_train.join(X1_train), X_valid.join(X1_valid), X_test.join(X1_test) return X_train.join(X1_train), X_valid.join(X1_valid), X_test score = score_xgboost(processing = encode_neighborhood) benchmarks['feature'].append('Encode_Neighborhood') benchmarks['score'].append(score) ###Output 12-Fold Average MAE: 15373.05694 in 43.28s. ###Markdown 10.2 Encode SubclassThe example in the notes uses `MSSubClass` for target encoding so we test it out here for sake of completeness. ###Code def encode_subclass(X_train, X_valid, X_test = None): X_train, X_valid, X_test = preprocessing(X_train, X_valid, X_test) y_train = train['SalePrice'].iloc[X_train.index] encoder = CrossFoldEncoder(MEstimateEncoder, m=1) X1_train = encoder.fit_transform(X_train, y_train, cols=["MSSubClass"]) X1_valid = encoder.transform(X_valid) if X_test is not None: X1_test = encoder.transform(X_test) if X_test is not None: return X_train.join(X1_train), X_valid.join(X1_valid), X_test.join(X1_test) return X_train.join(X1_train), X_valid.join(X1_valid), X_test score = score_xgboost(processing = encode_subclass) benchmarks['feature'].append('Encode_Subclass') benchmarks['score'].append(score) ###Output 12-Fold Average MAE: 15777.96425 in 43.37s. ###Markdown Determining Feature Engineering StrategyIn this section we compare all the above strategies versus the baseline and choose which we will use in our final model. ###Code pd.DataFrame(benchmarks).sort_values('score', ascending = False) ###Output _____no_output_____ ###Markdown Final Feature EngineeringWe include all of the techniques which resulted in improvements over the baseline. 
###Code def feature_engineering(X_train, X_valid, X_test = None): X_train, X_valid, X_test = preprocessing(X_train, X_valid, X_test) y_train = train['SalePrice'].iloc[X_train.index] og_columns = [x for x in X_train.columns] # Drop the uninformative columns discrete = [i for i,x in enumerate(X_train.columns) if x not in numerical] scores = mutual_info_regression(X_train, y_train, discrete_features = discrete) cols = [x for i, x in enumerate(X_train.columns) if scores[i] < 1e-5] X_train.drop(cols, axis = 1, inplace = True) X_valid.drop(cols, axis = 1, inplace = True) if X_test is not None: X_test.drop(cols, axis = 1, inplace = True) # Cluster Labels features = ['LotArea', 'TotalBsmtSF', '1stFlrSF', '2ndFlrSF','GrLivArea'] scaler = StandardScaler() X_scaled = scaler.fit_transform(X_train[features]) X_valid_scaled = scaler.transform(X_valid[features]) if X_test is not None: X_test_scaled = scaler.transform(X_test[features]) kmeans = KMeans(n_clusters = 10, n_init = 10, random_state=0) X_train["Cluster"] = kmeans.fit_predict(X_scaled) X_valid["Cluster"] = kmeans.predict(X_valid_scaled) if X_test is not None: X_test["Cluster"] = kmeans.predict(X_test_scaled) # Group Transformation X_train["MedNhbdLvArea"] = X_train.groupby("Neighborhood")["GrLivArea"].transform('median') mapping = {y:x for x,y in zip(X_train["MedNhbdLvArea"].values, X_train['Neighborhood'].values)} X_valid["MedNhbdLvArea"] = X_valid['Neighborhood'].map(mapping) if X_test is not None: X_test["MedNhbdLvArea"] = X_test['Neighborhood'].map(mapping) # PCA Inspired X_train["Feature1"] = X_train.GrLivArea + X_train.TotalBsmtSF X_train["Feature2"] = X_train.YearRemodAdd * X_train.TotalBsmtSF X_valid["Feature1"] = X_valid.GrLivArea + X_valid.TotalBsmtSF X_valid["Feature2"] = X_valid.YearRemodAdd * X_valid.TotalBsmtSF if X_test is not None: X_test["Feature1"] = X_test.GrLivArea + X_test.TotalBsmtSF X_test["Feature2"] = X_test.YearRemodAdd * X_test.TotalBsmtSF # Transformations X_train = transformations(X_train) X_valid = transformations(X_valid) if X_test is not None: X_test = transformations(X_test, test_data = True) # Target Encode Subclass encoder = CrossFoldEncoder(MEstimateEncoder, m=1) X1_train = encoder.fit_transform(X_train, y_train, cols=["MSSubClass"]) X1_valid = encoder.transform(X_valid) if X_test is not None: X1_test = encoder.transform(X_test) if X_test is not None: X_train, X_valid, X_test = X_train.join(X1_train), X_valid.join(X1_valid), X_test.join(X1_test) else: X_train, X_valid = X_train.join(X1_train), X_valid.join(X1_valid) # Target Encode Neighborhood encoder = CrossFoldEncoder(MEstimateEncoder, m=1) X2_train = encoder.fit_transform(X_train, y_train, cols=["Neighborhood"]) X2_valid = encoder.transform(X_valid) if X_test is not None: X2_test = encoder.transform(X_test) if X_test is not None: return X_train.join(X2_train), X_valid.join(X2_valid), X_test.join(X2_test) return X_train.join(X2_train), X_valid.join(X2_valid), X_test _ = score_xgboost(processing = feature_engineering) ###Output 12-Fold Average MAE: 15580.38715 in 167.4s. ###Markdown Hyperparameter SearchNow that we have established our preprocessing and feature engineering strategies we want to optimize our model parameters. PruningWe use a pruner to skip unpromising trials (in the lower 33% of scores for that fold). 
###Code # Tweak Pruner settings pruner = PercentilePruner( percentile = 33, n_startup_trials = 10, n_warmup_steps = 0, interval_steps = 1, n_min_trials = 10, ) ###Output _____no_output_____ ###Markdown Search FunctionFunction which actually performs the hyperparameter search, based on [optuna](https://optuna.org/): ###Code def parameter_search(trials): # Optuna objective function def objective(trial): model_params = dict( # default 6 max_depth = trial.suggest_int( "max_depth", 2, 12 ), # default 0.3 learning_rate = trial.suggest_loguniform( "learning_rate", 0.01, 0.3 ), # default 0 gamma = trial.suggest_loguniform( "gamma", 1e-10, 100 ), # default 1 min_child_weight = trial.suggest_loguniform( "min_child_weight", 1e-2, 1e2 ), # default 1 subsample = trial.suggest_discrete_uniform( "subsample", 0.2, 1.0, 0.01 ), # default 1 colsample_bytree = trial.suggest_discrete_uniform( "colsample_bytree", 0.2, 1.0, 0.01 ), # default 1 colsample_bylevel = trial.suggest_discrete_uniform( "colsample_bylevel", 0.2, 1.0, 0.01 ), # default 1 reg_lambda = trial.suggest_loguniform( "reg_lambda", 1e-10, 100 ), # default 0 reg_alpha = trial.suggest_loguniform( "reg_alpha", 1e-10, 100 ), ) return score_xgboost( xgb_model = XGBRegressor( random_state = RANDOM_SEED, n_estimators = MAX_TREES, n_jobs = 4, **model_params ), processing = feature_engineering, trial = trial ) optuna.logging.set_verbosity(optuna.logging.DEBUG) study = optuna.create_study( pruner = pruner, direction = "minimize" ) # close to default parameters study.enqueue_trial({ 'max_depth': 6, 'learning_rate': 0.05, 'gamma': 1e-10, 'min_child_weight': 1, 'subsample': 1, 'colsample_bytree': 1, 'colsample_bylevel': 1, 'reg_lambda': 1, 'reg_alpha': 1, }) study.optimize(objective, n_trials=trials) return study ###Output _____no_output_____ ###Markdown Hyperparameter Evaluation ###Code study = parameter_search(NUM_TRIALS) print("Best Parameters:", study.best_params) plot_parallel_coordinate(study) ###Output _____no_output_____ ###Markdown Generate Submission ###Code def make_submission(model_params): # Features features = [x for x in train.columns if x not in ['Id','SalePrice']] X_temp = train[features].copy() y_temp = train['SalePrice'].copy() # Data structure for storing scores and times test_preds = np.zeros((test.shape[0],)) scores = np.zeros(NUM_FOLDS) times = np.zeros(NUM_FOLDS) # Stratified k-fold cross-validation kfold = StratifiedKFold(n_splits = NUM_FOLDS, shuffle = True, random_state = RANDOM_SEED) for fold, (train_idx, valid_idx) in enumerate(kfold.split(X_temp, y_bins)): # Training and Validation Sets X_train, X_valid = X_temp.iloc[train_idx], X_temp.iloc[valid_idx] y_train, y_valid = y_temp.iloc[train_idx], y_temp.iloc[valid_idx] X_test = test[features].copy() # Preprocessing X_train, X_valid, X_test = feature_engineering(X_train, X_valid, X_test) # Create model start = time.time() model = XGBRegressor( random_state = RANDOM_SEED, n_estimators = MAX_TREES, n_jobs = 4, **model_params ) model.fit( X_train, y_train, early_stopping_rounds=EARLY_STOP, eval_set=[(X_valid, y_valid)], verbose=False ) # validation predictions valid_preds = np.ravel(model.predict(X_valid)) test_preds += model.predict(X_test) / NUM_FOLDS scores[fold] = mean_absolute_error(y_valid, valid_preds) end = time.time() times[fold] = end - start print(f'Fold {fold} MAE: {round(scores[fold], 5)} in {round(end-start,2)}s.') time.sleep(0.5) print(f'\n{NUM_FOLDS}-Fold Average MAE: {round(scores.mean(), 5)} in {round(times.sum(),2)}s.') output = pd.DataFrame({'Id': 
test.Id,'SalePrice': test_preds}) output.to_csv('new_submission.csv', index=False) # Make final submission make_submission(study.best_params) ###Output Fold 0 MAE: 17036.57345 in 6.13s. Fold 1 MAE: 10517.46299 in 4.47s. Fold 2 MAE: 13275.18897 in 4.19s. Fold 3 MAE: 13847.10102 in 4.05s. Fold 4 MAE: 13120.46958 in 3.72s. Fold 5 MAE: 15351.65154 in 3.45s. Fold 6 MAE: 14933.21933 in 4.42s. Fold 7 MAE: 11838.48092 in 5.59s. Fold 8 MAE: 13737.39834 in 4.68s. Fold 9 MAE: 15418.44909 in 3.46s. Fold 10 MAE: 14154.5011 in 3.7s. Fold 11 MAE: 13662.08807 in 4.79s. 12-Fold Average MAE: 13907.71537 in 52.64s.
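###Markdown Before uploading, it can help to sanity-check the submission file. This is an added step, not part of the original workflow; it only relies on the `new_submission.csv` written above. ###Code
# Quick sanity check of the generated submission (illustrative addition)
submission = pd.read_csv('new_submission.csv')
print(submission.shape)   # expect one row per test Id
print(submission.head())
assert list(submission.columns) == ['Id', 'SalePrice']
###Output _____no_output_____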
examples/dev_blog_example.ipynb
###Markdown “Hello, Sionna!"The following notebook implements the “Hello, Sionna!” example from the [Sionna whitepaper](https://arxiv.org/pdf/2203.11854.pdf) and the [NVIDIA blog post](https://developer.nvidia.com/blog/jumpstarting-link-level-simulations-with-sionna/). The transmission of a batch of LDPC codewords over an AWGN channel using 16QAM modulation is simulated. This example shows how Sionna layers are instantiated and applied to a previously defined tensor. The coding style follows the [functional API](https://www.tensorflow.org/guide/keras/functional) of Keras. The [official documentation](https://nvlabs.github.io/sionna) provides key material on how to use Sionna and how its components are implemented.Many more [tutorials](https://nvlabs.github.io/sionna/tutorials.html) are available online. ###Code # Import Sionna try: import sionna except ImportError as e: # Install Sionna if package is not already installed import os os.system("pip install sionna") import sionna # IPython "magic function" for inline plots %matplotlib inline import matplotlib.pyplot as plt # Import required Sionna components from sionna.mapping import Constellation, Mapper, Demapper from sionna.utils import BinarySource, compute_ber, BinaryCrossentropy from sionna.channel import AWGN from sionna.fec.ldpc import LDPC5GEncoder, LDPC5GDecoder # For the implementation of the neural receiver import tensorflow as tf from tensorflow.keras.layers import Dense, Layer ###Output _____no_output_____ ###Markdown Let us define the required transmitter and receiver components for the transmission. ###Code batch_size = 1024 n = 1000 # codeword length k = 500 # information bits per codeword m = 4 # number of bits per symbol snr = 10 c = Constellation("qam", m) b = BinarySource()([batch_size, k]) u = LDPC5GEncoder(k, n)(b) x = Mapper(constellation=c)(u) y = AWGN()([x, 1/snr]) llr = Demapper("app", constellation=c)([y, 1/snr]) b_hat = LDPC5GDecoder(LDPC5GEncoder(k, n))(llr) ###Output _____no_output_____ ###Markdown We can now directly calculate the simulated bit-error-rate (BER) for the whole batch of 1024 codewords. ###Code ber = compute_ber(b, b_hat) print(f"Coded BER = {ber :.3f}") ###Output Coded BER = 0.000 ###Markdown Automatic Gradient Computation One of the key advantages of Sionna is that components can be made trainable or replaced by neural networks. In the following example, we have made the [Constellation](https://nvlabs.github.io/sionna/api/mapping.htmlconstellation) trainable and replaced [Demapper](https://nvlabs.github.io/sionna/api/mapping.htmldemapping) with a NeuralDemapper, which is just a neural network defined through [Keras](https://keras.io). ###Code # Let us define the Neural Demapper class NeuralDemapper(Layer): def build(self, input_shape): # Initialize the neural network layers self._dense1 = Dense(16, activation="relu") self._dense2 = Dense(m) def call(self, inputs): y, no = inputs # Stack noise variance, real and imaginary # parts of each symbol. The input to the # neural network is [Re(y_i), Im(y_i), no]. no = no*tf.ones(tf.shape(y)) llr = tf.stack([tf.math.real(y), tf.math.imag(y), no], axis=-1) # Compute neural network output llr = self._dense1(llr) llr = self._dense2(llr) # Reshape to [batch_size, n] llr = tf.reshape(llr, [batch_size, -1]) return llr ###Output _____no_output_____ ###Markdown Now, we can simply replace the *classical* demapper with the previously defined NeuralDemapper and set the Constellation object trainable. 
###Code with tf.GradientTape() as tape: # Watch gradients c = Constellation("qam", m, trainable=True) # Constellation object is now trainable b = BinarySource()([batch_size, k]) u = LDPC5GEncoder(k, n)(b) x = Mapper(constellation=c)(u) y = AWGN()([x, 1/snr]) llr = NeuralDemapper()([y, 1/snr]) # Replaced by the NeuralDemapper loss = BinaryCrossentropy(from_logits=True)(u, llr) # Use TensorFlows automatic gradient computation grad = tape.gradient(loss, tape.watched_variables()) print("Gradients of the Constellation object: ", grad[0]) ###Output Gradients of the Constellation: tf.Tensor( [[-0.00329827 -0.00322376 -0.00074452 0.00339787 -0.00513376 -0.00157427 0.00207564 0.00579527 -0.00197416 -0.01357635 0.00363045 -0.00814517 0.00536535 0.00455021 0.0116162 0.00674398] [-0.00412695 0.00536264 0.00231561 0.00733877 -0.00158195 0.00073293 0.00541865 0.00879211 -0.0044257 0.00550126 -0.00092697 0.00692451 -0.00466044 -0.00021787 -0.00036102 0.00797399]], shape=(2, 16), dtype=float32)
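###Markdown With the gradients available, the natural next step is to apply them with an optimizer. Below is a minimal sketch of such a training loop — an added example, not from the original post; the iteration count and learning rate are arbitrary choices. ###Code
# Instantiate the layers once so their weights persist across iterations
c = Constellation("qam", m, trainable=True)
source = BinarySource()
encoder = LDPC5GEncoder(k, n)
mapper = Mapper(constellation=c)
channel = AWGN()
demapper = NeuralDemapper()
bce = BinaryCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

for it in range(100):
    with tf.GradientTape() as tape:
        u = encoder(source([batch_size, k]))
        x = mapper(u)
        y = channel([x, 1/snr])
        llr = demapper([y, 1/snr])
        loss = bce(u, llr)
    # Update both the constellation points and the demapper weights
    weights = tape.watched_variables()
    grads = tape.gradient(loss, weights)
    optimizer.apply_gradients(zip(grads, weights))
###Output _____no_output_____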
Topics_Master/17-K-Means-Clustering/02-K Means Clustering Project.ipynb
###Markdown ______ K Means Clustering Project For this project we will attempt to use KMeans Clustering to cluster Universities into two groups, Private and Public.___It is **very important to note, we actually have the labels for this data set, but we will NOT use them for the KMeans clustering algorithm, since that is an unsupervised learning algorithm.** When using the Kmeans algorithm under normal circumstances, it is because you don't have labels. In this case we will use the labels to try to get an idea of how well the algorithm performed, but you won't usually do this for Kmeans, so the classification report and confusion matrix at the end of this project don't truly make sense in a real world setting!___ The Data We will use a data frame with 777 observations on the following 18 variables.* Private A factor with levels No and Yes indicating private or public university* Apps Number of applications received* Accept Number of applications accepted* Enroll Number of new students enrolled* Top10perc Pct. new students from top 10% of H.S. class* Top25perc Pct. new students from top 25% of H.S. class* F.Undergrad Number of fulltime undergraduates* P.Undergrad Number of parttime undergraduates* Outstate Out-of-state tuition* Room.Board Room and board costs* Books Estimated book costs* Personal Estimated personal spending* PhD Pct. of faculty with Ph.D.’s* Terminal Pct. of faculty with terminal degree* S.F.Ratio Student/faculty ratio* perc.alumni Pct. alumni who donate* Expend Instructional expenditure per student* Grad.Rate Graduation rate Import Libraries** Import the libraries you usually use for data analysis.** ###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
###Output _____no_output_____ ###Markdown Get the Data ** Read in the College_Data file using read_csv. Figure out how to set the first column as the index.** ###Code
# index_col=0 makes the first column (the university name) the index
df = pd.read_csv('College_Data', index_col=0)
###Output _____no_output_____ ###Markdown **Check the head of the data** ###Code
df.head()
###Output _____no_output_____ ###Markdown ** Check the info() and describe() methods on the data.** ###Code
df.info()
df.describe()
###Output _____no_output_____ ###Markdown EDA It's time to create some data visualizations!** Create a scatterplot of Grad.Rate versus Room.Board where the points are colored by the Private column. ** ###Code
sns.scatterplot(data=df, x='Grad.Rate', y='Room.Board', hue='Private')
###Output _____no_output_____ ###Markdown **Create a scatterplot of F.Undergrad versus Outstate where the points are colored by the Private column.** ###Code
sns.scatterplot(data=df, y='F.Undergrad', x='Outstate', hue='Private')
###Output _____no_output_____ ###Markdown ** Create a stacked histogram showing Out of State Tuition based on the Private column. Try doing this using [sns.FacetGrid](https://stanford.edu/~mwaskom/software/seaborn/generated/seaborn.FacetGrid.html). If that is too tricky, see if you can do it just by using two instances of pandas.plot(kind='hist'). ** ###Code
# FacetGrid needs a plotting function mapped onto it;
# hue='Private' overlays the two groups on the same axes
g = sns.FacetGrid(df, hue='Private', palette='coolwarm', height=6, aspect=2)
g = g.map(plt.hist, 'Outstate', bins=20, alpha=0.7)
###Output _____no_output_____
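###Markdown The exercise also suggests the pure-pandas route. Here is a quick sketch of the same plot using two `.plot(kind='hist')` calls — an added example; the figure size and bin count are arbitrary choices. ###Code
plt.figure(figsize=(10, 6))
# One histogram per group, drawn on the same axes so the two overlap
df[df['Private'] == 'Yes']['Outstate'].plot(kind='hist', alpha=0.7, bins=20, label='Private')
df[df['Private'] == 'No']['Outstate'].plot(kind='hist', alpha=0.7, bins=20, label='Public')
plt.xlabel('Outstate')
plt.legend()
###Output _____no_output_____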
code_listings/02.00-Introduction-to-NumPy.ipynb
###Markdown Introduction to NumPy ###Code import numpy numpy.__version__ import numpy as np ###Output _____no_output_____
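###Markdown Once NumPy is imported under the `np` alias, its namespace and built-in documentation can be explored interactively — a small addition in the spirit of the chapter: ###Code
# Display all the contents of the numpy namespace with tab completion:
# np.<TAB>
# And display NumPy's built-in documentation:
# np?
###Output _____no_output_____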
publication-manager/publication_manager.ipynb
###Markdown Configuration ###Code ### RUN THIS CELL TO BEGIN CONFIGURATION ### %run configuration_form.ipynb ### LEGACY CONFIGURATION CELL -- SKIP THIS IF USING THE CONFIGURATION FORM ABOVE ### ## CHOOSE YOUR ACTION. Options are "import", "delete", and "update". action = 'import' ## CONFIGURE IMPORT FILES """ List paths to CSV or Excel files to insert. Files must contain the following headers in order: _id, publication, description, publisher, edition,contentType,language,country, startdate,enddate,altTitle,authors Excel spreadsheets must contain data only in sheet 1. """ source_files = ['path_to_source_files'] ## CONFIGURE `_id` TO DELETE A MANIFEST delete_id = '' # A string ## CONFIGURE `_id` AND PROPERTIES TO UPDATE A MANIFEST """ Properties must be given in Python dict form. E.g.: {'description': 'Some other text', 'edition': 'online'} """ # Configure Database ID and Properties to Update update_id = '' # A string properties = {'key1': 'value1', 'key2': 'value2'} # A dict ###Output _____no_output_____ ###Markdown Basic Setup ###Code # If using the configuration form, get the values from the form try: action = config.values['action'] delete_id = config.values['delete_id'] update_id = config.values['update_id'] properties = config.values['properties'] except: pass namespace = 'we1s1.1' # Import dependencies import os, datetime, tabulator, itertools, requests, json import pymongo from pymongo import MongoClient import pandas as pd from tableschema_pandas import Storage from jsonschema import validate, FormatChecker # Set up the MongoDB client, configure the databases, and assign variables to the "collections" client = MongoClient('mongodb://localhost:27017') db = client.we1s publications = db.Publications we1s_log = db.we1s_log # Set up the storage functions for pandas dataframes storage = Storage() storage.create('data', { 'primaryKey': '_id', 'fields': [ {'name': '_id', 'type': 'string'}, {'name': 'namespace', 'type': 'string'}, {'name': 'publication', 'type': 'string'}, {'name': 'description', 'type': 'string'}, {'name': 'publisher', 'type': 'string'}, {'name': 'edition', 'type': 'string'}, {'name': 'contentType', 'type': 'string'}, {'name': 'language', 'type': 'string'}, {'name': 'country', 'type': 'string'}, {'name': 'startdate', 'type': 'string'}, {'name': 'enddate', 'type': 'string'}, {'name': 'altTitle', 'type': 'string'}, {'name': 'authors', 'type': 'string'} ] }) ###Output _____no_output_____ ###Markdown API Methods ###Code def clear(collection): """ Removes all documents from the specified collection. """ collection.delete_many({}) print('All documents in the specified collection have been deleted.') def delete_publication(id): """ Deletes a publication manifest based on id. """ result = publications.delete_one({'_id': id}) if result.deleted_count != 0: we1s_log.insert_one({'id': id, 'date': datetime.datetime.utcnow(), 'type': 'delete'}) print('Document "' + str(id) + ' was deleted.') else: print('Error: The document could not be deleted. Make sure you have the correct "id" by running `list_publications()`.') def get_page(pages, page): """ Takes a list of paginated results form `paginate()` and returns a single page from the list. """ try: return pages[page-1] except: print('The requested page does not exist.') def import_manifests(source_files): """ Loops through the source files and streams them into a dataframe, then converts the dataframe to a list of manifest dicts. 
""" for item in source_files: if item.endswith('.xlsx') or item.endswith('.xls'): options = {'format': 'xlsx', 'sheet': 1, 'headers': 1} else: options = {'headers': 1} try: with tabulator.Stream(item, **options) as stream: try: validate_headers(stream.headers) storage.write('data', stream) except: print('Error: The table headings in ' + item + ' do not match the Publications schema.') except: print('Error: Could not stream tabular data.') manifests = [] for key, properties in storage['data'].to_dict('index').items(): properties['_id'] = key properties['namespace'] = namespace properties['date'] = [] try: assert len(properties['enddate']) > 0 start = properties.pop('startdate', None) end = properties.pop('enddate', None) properties['date'].append({"start": start}) properties['date'].append({"end": end}) except: properties['date'].append(properties['startdate']) if validate_manifest(properties) == True: manifests.append(properties) else: print('Could not produce a valid manifest for ' + key + '.') return manifests def insert_publication(manifest): """ Inserts a publication manifest after checking for a unique `_id`. """ try: assert manifest['_id'] not in publications.distinct("_id") publications.insert_one(manifest) we1s_log.insert_one({'id': manifest['_id'], 'date': datetime.datetime.utcnow(), 'type': 'insert'}) print('Inserted manifest with `_id` "' + manifest['_id'] + '".') except: print('The `_id` "' + manifest['_id'] + '" already exists in the database.') def list_publications(page_size=10, page=1): """ Prints a list of all publications. """ if len(list(publications.find())) > 0: result = list(publications.find()) pages = list(paginate(result, page_size=page_size)) page = get_page(pages, page) print(page) else: print('The Publications database is empty.') def paginate(iterable, page_size): """ Returns a generator with a list sliced into pages by the designated size. If the generator is converted to a list called `pages`, and individual page can be called with `pages[0]`, `pages[1]`, etc. """ while True: i1, i2 = itertools.tee(iterable) iterable, page = (itertools.islice(i1, page_size, None), list(itertools.islice(i2, page_size))) if len(page) == 0: break yield page def search(values): """ Returns search results. """ print(values) if len(list(publications.find())) > 0: if value['regex'] == True: query = {} for k, v in value['query'].items(): REGEX = re.compile(v) query[k] = {'$regex': REGEX} else: query = values['query'] result = list(publications.find( query, limit=values['limit'], projection=values['show_properties'])) pages = list(paginate(result, page_size=page_size)) page = get_page(pages, page) print(page) else: print('The Publications database is empty.') def show_databases(): """ Lists all databases in the current client. """ if len(client.database_names()) > 0: print(client.database_names()) else: print('The WE1S database is empty.') def show_log(): """ Prints the log of database transactions. """ print(list(we1s_log.find())) def update_publication(id, properties): """ Updates a publication manifest based on id. Takes a dict containing all the properties to be updated. """ publications.update_one({"_id": id}, {"$set": properties}, upsert=False) we1s_log.insert_one({'id': id, 'date': datetime.datetime.utcnow(), 'type': 'update'}) print('The manifest for `_id` "' + id + '" has been updated.') def validate_headers(headers): """ Verifies that the headers in the tabular stream match the Publications schema. 
""" assert headers == ['_id', 'publication', 'description', 'publisher', 'edition', 'contentType', 'language', 'country', 'startdate', 'enddate', 'altTitle', 'authors'] def validate_manifest(manifest): """ Validates a manifest against the online manifest schema. """ schema_file = 'https://raw.githubusercontent.com/whatevery1says/manifest/master/schema/Publications/Publications.json' schema = json.loads(requests.get(schema_file).text) try: validate(manifest, schema, format_checker=FormatChecker()) return True except: return False ###Output _____no_output_____ ###Markdown Execute Action ###Code # Call relevant functions based on action configuration if action == 'import': print('Processing...') for manifest in import_manifests(source_files): insert_publication(manifest) elif action == 'Delete': delete_publication(delete_id) elif action == 'Update': update_publication(update_id, properties) elif action == 'Search': search(config.values) else: print('Please set the `action` configuration at the top of the notebook.') ###Output _____no_output_____ ###Markdown Extra Function Calls ###Code # Run these API methods if desired # list_publications() # show_log() # clear(publications) # clear(we1s_log) ###Output _____no_output_____
Data Transforms/Developing Transforms.ipynb
###Markdown Horizontal Flips ###Code fig, ax = plt.subplots(1,1,figsize=(25,25),facecolor='w') cmap = plt.cm.viridis cmap.set_bad(color='black') ev = np.zeros((40,40,19)) for i in range(19): ev[:,:,i]=i+1 # ev[:,:,5]=3+1 # ev[:,:,7]=3+1 a=get_plot_array(ev) ax.imshow(a, origin="upper", cmap=cmap) plt.show() horizontal_map_array_idxs=[0,11,10,9,8,7,6,5,4,3,2,1,12,17,16,15,14,13,18] fig, ax = plt.subplots(1,1,figsize=(25,25),facecolor='w') cmap = plt.cm.viridis cmap.set_bad(color='black') ev = np.zeros((40,40,19)) for i in range(19): ev[:,:,i]=4*i+1 ev[:,:20,:] = ev[:,:20,horizontal_map_array_idxs] a=get_plot_array(ev) ax.imshow(a, origin="upper", cmap=cmap) plt.show() fig, ax = plt.subplots(1,1,figsize=(25,25),facecolor='w') cmap = plt.cm.viridis cmap.set_bad(color='black') a=get_plot_array(original_eventdata[index,:,:,0:19]) ax.imshow(a, origin="upper", cmap=cmap, norm=colors.LogNorm(vmax=np.amax(event), clip=True)) plt.show() def horizontal_flip(data): return np.flip(data[:,:,horizontal_map_array_idxs],axis=1) fig, ax = plt.subplots(1,1,figsize=(25,25),facecolor='w') cmap = plt.cm.viridis cmap.set_bad(color='black') event=horizontal_flip(original_eventdata[index,:,:,:19]) a=get_plot_array(event) ax.imshow(a, origin="upper", cmap=cmap, norm=colors.LogNorm(vmax=np.amax(event), clip=True)) plt.show() ###Output _____no_output_____ ###Markdown Vertical Flipping ###Code vertical_map_array_idxs=[6,5,4,3,2,1,0,11,10,9,8,7,15,14,13,12,17,16,18] def vertical_flip(data): return np.flip(data[:,:,vertical_map_array_idxs],axis=0) fig, ax = plt.subplots(1,1,figsize=(25,25),facecolor='w') cmap = plt.cm.viridis cmap.set_bad(color='black') ev = np.zeros((40,40,19)) for i in range(19): ev[:,:,i]=4*i+1 ev[:20,:,:] = ev[:20,:,vertical_map_array_idxs] a=get_plot_array(ev) ax.imshow(a, origin="upper", cmap=cmap) plt.show() fig, ax = plt.subplots(1,1,figsize=(25,25),facecolor='w') cmap = plt.cm.viridis cmap.set_bad(color='black') event=vertical_flip(original_eventdata[index,:,:,:19]) a=get_plot_array(event) ax.imshow(a, origin="upper", cmap=cmap, norm=colors.LogNorm(vmax=np.amax(event), clip=True)) plt.show() geo_file = np.load('/fast_scratch/WatChMaL/data/mPMT_full_geo.npz', allow_pickle=True) geo_file.files geo_file['position'].shape ###Output _____no_output_____
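###Markdown As a possible next step (an added sketch, not in the original notebook), the two flips can be combined into a random augmentation helper — the function name and probability below are illustrative: ###Code
import numpy as np

def random_flip(data, p=0.5):
    # data is expected to carry the 19 mPMT channels on its last axis,
    # matching horizontal_flip / vertical_flip above
    if np.random.rand() < p:
        data = horizontal_flip(data)
    if np.random.rand() < p:
        data = vertical_flip(data)
    return data

# e.g. augmented = random_flip(original_eventdata[index, :, :, :19])
###Output _____no_output_____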
dati_2017/wt05/2DOF.ipynb
###Markdown 2 DOF System The dynamical system in figure is composed of two massless, uniform beams supporting a dimensionless body of mass $m$. With $\omega_0^2=(EJ)/(mL^3)$, determine the system's eigenvalues $\omega_i^2=\lambda_i^2\,\omega_0^2$ and the mass normalized eigenvectors. The system is subjected to a horizontal motion $u_\mathcal{B}= u_\mathcal{B}(t)$; determine the mass displacements $\boldsymbol x_\mathcal{B} = \boldsymbol r\,u_\mathcal{B}$ and write the modal equations of dynamic equilibrium as$$\ddot q_i + \lambda_i^2\omega_0^2\,q_i = \alpha_i\,\ddot u_\mathcal{B}$$determining the numerical values of the $\alpha_i$. Structural matrices The flexibility is computed using the Principle of Virtual Displacements, the stiffness is computed by inversion, and the mass matrix is the unit matrix multiplied by $m$, $\ \boldsymbol M=m\,\boldsymbol I$. ###Code
l = [1, 1, 1, 2, 1]
m = [[p( 2/3, 0), p(-1/3, 1), p(1, 0), p(2/3, 0), p(4/3, 0)],
     [p(-2/3, 0), p(-2/3, 0), p(0, 0), p(1/3, 0), p(2/3, 0)]]
F = array([[vw(emme, chi, l) for emme in m] for chi in m])
K = inv(F)
M = eye(2)
dl(dmat(r'\boldsymbol{F}=\frac{1}{27}\frac{L^3}{EJ}', F*27, r','))
dl(dmat(r'\boldsymbol{K}=\frac{1}{53}\frac{EJ}{L^3}', K*53, r','))
dl(dmat(r'\boldsymbol{M}=m', M,'.', fmt='%d'))
###Output _____no_output_____ ###Markdown The eigenvalue problem ###Code
wn2, Psi = eigh(K, M) ; Psi[:,0] *= -1
Lambda2 = diag(wn2)
dl(dmat(r'\boldsymbol{\Lambda^2}=', Lambda2, r'.'))
dl(dmat(r'\boldsymbol{\Psi}=', Psi, r'.'))
###Output _____no_output_____ ###Markdown Mass Displacements and Inertial Forces We downgrade the hinge in $\mathcal B$ to permit a unit horizontal displacement and observe that the centre of instantaneous rotation for the lower beam is at the intersection of the vertical in $\mathcal B$ and the line connecting $\mathcal A$ and $\mathcal C$. The angle of rotation of the lower beam is $\theta_2=1/(3L)$, anti-clockwise, and the angle of rotation of the upper beam (that rotates about the hinge in $\mathcal A$) is equal but clockwise, due to the continuity of displacements in the internal hinge $\mathcal C$. Knowing the angle of rotation of the upper beam, the mass displacements are$$ x_1 = 2L\,\frac{1}{3L}=\frac{2}{3}\quad\text{ and }\quad x_2=L\,\frac{1}{3L}=\frac{1}{3}.$$We can eventually write$$ \boldsymbol x_\text{tot} = \boldsymbol x + \begin{Bmatrix} 2/3\\1/3 \end{Bmatrix}\,u_\mathcal{B}$$and the inertial force can be written as$$\boldsymbol f_\text{I} =\boldsymbol M \, \ddot{\boldsymbol x}_\text{tot} =\boldsymbol M \, \ddot{\boldsymbol x} +\boldsymbol M \, \begin{Bmatrix}2/3\\1/3\end{Bmatrix}\,\ddot u_\mathcal{B}$$ The Modal Equations of Motion The equation of motion, in structural coordinates, is$$\boldsymbol M \, \ddot{\boldsymbol x} +\boldsymbol K \, \boldsymbol x = - \boldsymbol M \, \begin{Bmatrix}2/3\\1/3\end{Bmatrix}\,\ddot u_\mathcal{B}$$or, because $\boldsymbol M = m\, \boldsymbol I$,$$\boldsymbol M \, \ddot{\boldsymbol x} +\boldsymbol K \, \boldsymbol x = - m\, \begin{Bmatrix}2/3\\1/3\end{Bmatrix}\,\ddot u_\mathcal{B}$$Using the modal expansion $\boldsymbol x = \boldsymbol\Psi\, \boldsymbol q$ and premultiplying term by term by $\boldsymbol\Psi^T$ we have, because the eigenvectors are normalized with respect to the mass matrix,$$m\,\boldsymbol I \, \ddot{\boldsymbol q} + m\omega_0^2\,\boldsymbol \Lambda^2 \, \boldsymbol q = - m\,\boldsymbol\Psi^T\, \begin{Bmatrix}2/3\\1/3\end{Bmatrix}\,\ddot u_\mathcal{B}.$$ ###Code
r = array((2/3, 1/3))
a = -Psi.T@r
print('The modal equations of motion are')
for i, (ai, l2i) in
enumerate(zip(a, wn2), 1): dl(r'$$\ddot q_{%d} %+.6f\,\omega_0^2\,q_{%d} = %+.6f\,\ddot u_\mathcal{B}$$' % (i, l2i, i, ai)) ###Output The modal equations of motion are ###Markdown Initialization ###Code from numpy import array, diag, eye, poly1d from scipy.linalg import eigh, inv from IPython.display import Latex, HTML display(HTML(open('01.css').read())) def p(*l): return poly1d(l) def vw(M, Χ, L): return sum(p(l)-p(0) for (m, χ, l) in zip(M, Χ, L) for p in ((m*χ).integ(),)) def dmat(pre, mat, post, mattype='b', fmt='%+.6g'): s = r'\begin{align}' + pre + r'\begin{%smatrix}'%mattype s += r'\\'.join('&'.join(fmt%val for val in row) for row in mat) s += r'\end{%smatrix}'%mattype + post + r'\end{align}' return s def dl(ls): display(Latex(ls)) return None ###Output _____no_output_____
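###Markdown As a quick numerical check (an addition, not part of the original solution), the mass normalization and the spectral decomposition can be verified directly: $\boldsymbol\Psi^T\boldsymbol M\,\boldsymbol\Psi$ should be the identity and $\boldsymbol\Psi^T\boldsymbol K\,\boldsymbol\Psi$ should reproduce $\boldsymbol\Lambda^2$. ###Code
# Both checks should hold to numerical precision
print(Psi.T@M@Psi)   # expect the 2x2 identity
print(Psi.T@K@Psi)   # expect diag(wn2), i.e. Lambda2
###Output _____no_output_____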
Edited1_3_diabetic_NeuralNet_Exercise_TF.ipynb
###Markdown Predict Diabetes from Medical Records: Dataset (Pima Indians Database): (https://www.kaggle.com/uciml/pima-indians-diabetes-database) - Features: Pregnancies, Glucose, BloodPressure, SkinThickness, Insulin, BMI, DiabetesPedigreeFunction, and Age - Outcome: Normal(0) vs Diabetes(1) Simple Neural Network ###Code
from google.colab import drive
drive.mount('/content/drive/')
ls
cd drive/My\ Drive
ls
cd CITREP_Data+Code/
ls
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import Imputer
import tensorflow as tf
import matplotlib.pyplot as plt
# Build model with learning rate = 0.01
# Number of epochs = 500
learning_rate = 0.01
training_epochs = 500
# if you don't want to transfer the file out to the same folder as the notebook, just use Data/...
df = pd.read_csv("Data/diabetes.csv")
df.head()
###Output _____no_output_____ ###Markdown Checking Missing/Null Values for different Features ###Code
df1 = df.iloc[:, :-1]
print("\nColumn Name   % of Null Values\n")
# print((df1[:] == 0).sum())
((df1[:] == 0).sum())/768*100

# this plots the correlation data between each of the parameters
corr = df[df.columns].corr()
sns.heatmap(corr, cmap="Greens", annot = False)
###Output _____no_output_____ ###Markdown From the correlation plot, it is clear that the features with null values (Insulin and SkinThickness) are not correlated with the outcome. Prepare Dataset for Training and Testing ###Code
# google the iloc() function
X = df.iloc[:, :-1]  # everything in the dataset df except for the last column "Outcome"
y = df.iloc[:, -1]   # y contains only the "Outcome" column
X_trainold, X_testold, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
print("X_shape:",X.shape,"Y_shape:",y.shape)
print("X_Train:",X_trainold.shape,"X_Test:", X_testold.shape)
###Output X_shape: (768, 8) Y_shape: (768,) X_Train: (614, 8) X_Test: (154, 8) ###Markdown Missing Values are replaced by Median ###Code
imputer = Imputer(missing_values=0,strategy='median')
X_train = imputer.fit_transform(X_trainold)
X_test = imputer.transform(X_testold)
print("X_Train:",X_train.shape,"X_Test:", X_test.shape)
mean = X_train.mean(axis=0)
std = X_train.std(axis=0)
print("Mean: ",mean)
print("Std: ",std)
###Output Mean: [ 4.43973941 121.82899023 72.26221498 28.84201954 141.33224756 32.19723127 0.46379479 33.2980456 ] Std: [ 2.99817272 30.47429215 12.40618104 8.5871349 87.34394387 6.81501567 0.327095 11.80062782] ###Markdown Preparing the data To deal with the different scales of the features, we need to do feature-wise normalization: for each feature, subtract the feature mean and divide by the feature standard deviation, so that each feature is centered around 0 and has a unit standard deviation.
###Code
X_train -= mean
X_train /= std
X_test -= mean
X_test /= std
X_train.mean(axis=0)
X_train.std(axis=0)

# preparing our labels as one hot encoding
from keras.utils import to_categorical
y_train1 = to_categorical(y_train)
y_test1 = to_categorical(y_test)
y_train1[1]
###Output _____no_output_____ ###Markdown Build the Network Model ###Code
# Build a FcNN model with 2 hidden layers (20 and 10 neurons respectively)
# use sigmoid as activation function
#-----------------------------------------------------------------------------------------MY ANSWER----------------------------------------------------------------------------------------------
import tensorflow as tf

# Hyper Parameters
learning_rate = 0.01
training_epochs = 500
# mini-batches (each contains 100 random data points from the training set);
# based on how much RAM your computer has, you can increase the batch size
batch_size = 100

tf.set_random_seed(25)

# each sample has 8 features (768 is the number of rows, not the input size)
X = tf.placeholder(tf.float32, [None, 8])
y = tf.placeholder(tf.float32, [None, 2])

L1 = 20
L2 = 10

# in between the input layer and hidden layer 1
W1 = tf.Variable(tf.truncated_normal([8, L1], stddev=0.1))
B1 = tf.Variable(tf.truncated_normal([L1],stddev=0.1))  # one bias per neuron in hidden layer 1

# in between hidden layer 1 and hidden layer 2
W2 = tf.Variable(tf.truncated_normal([L1, L2], stddev=0.1))
B2 = tf.Variable(tf.truncated_normal([L2],stddev=0.1))

# in between hidden layer 2 and the output layer
W3 = tf.Variable(tf.truncated_normal([L2, 2], stddev=0.1))
B3 = tf.Variable(tf.truncated_normal([2],stddev=0.1))

Y1 = tf.nn.sigmoid(tf.matmul(X, W1) + B1)
Y2 = tf.nn.sigmoid(tf.matmul(Y1, W2) + B2)
Ylogits = tf.matmul(Y2, W3) + B3
yhat = tf.nn.softmax(Ylogits)

loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=Ylogits))
train = tf.train.AdamOptimizer(learning_rate).minimize(loss)

is_correct = tf.equal(tf.argmax(y,1),tf.argmax(yhat,1))
accuracy = tf.reduce_mean(tf.cast(is_correct,tf.float32))

init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)

# Step 5: Training Loop (mini-batch gradient descent over the diabetes training set)
num_batches = X_train.shape[0] // batch_size
for epoch in range(training_epochs):
    perm = np.random.permutation(X_train.shape[0])
    for i in range(num_batches):
        idx = perm[i*batch_size:(i+1)*batch_size]
        train_data = {X: X_train[idx], y: y_train1[idx]}
        sess.run(train, feed_dict=train_data)
    print(epoch, "Training accuracy =", sess.run(accuracy, feed_dict={X: X_train, y: y_train1}),
          "Loss =", sess.run(loss, feed_dict={X: X_train, y: y_train1}))

# calculate the performance with Test data
test_data = {X: X_test, y: y_test1}
print("Testing Accuracy = ", sess.run(accuracy, feed_dict = test_data))

#---------------------------------------------------------------------------------------TEACHER'S ANSWERS---------------------------------------------------------------------------------
# Hyper Parameters
learning_rate = 0.01
training_epochs = 500

tf.set_random_seed(50)

X = tf.placeholder(tf.float32, [None, 8])
y = tf.placeholder(tf.float32, [None, 2])

L1 = 20
L2 = 10

# in between the input layer
W1 = tf.Variable(tf.truncated_normal([8, L1], stddev=0.1))
B1 = tf.Variable(tf.truncated_normal([L1],stddev=0.1))

# in between hidden layer 1 and hidden layer 2
W2 = tf.Variable(tf.truncated_normal([L1, L2], stddev=0.1))
B2 = tf.Variable(tf.truncated_normal([L2],stddev=0.1))

# in between hidden layer 2 and the output layer
W3 = tf.Variable(tf.truncated_normal([L2, 2], stddev=0.1))
B3 = tf.Variable(tf.truncated_normal([2],stddev=0.1))

Y1 = tf.nn.sigmoid(tf.matmul(X, W1) + B1)
Y2 = tf.nn.sigmoid(tf.matmul(Y1, W2) + B2)
Ylogits = tf.matmul(Y2, W3) + B3
yhat = tf.nn.softmax(Ylogits)

loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=Ylogits)) train = tf.train.AdamOptimizer(learning_rate).minimize(loss) is_correct = tf.equal(tf.argmax(y,1),tf.argmax(yhat,1)) accuracy = tf.reduce_mean(tf.cast(is_correct,tf.float32)) init = tf.global_variables_initializer() sess = tf.Session() sess.run(init) # Step 5: Training Loop for epoch in range(training_epochs): train_data = {X: X_train, y: y_train1} sess.run(train, feed_dict=train_data) print(epoch, "Training accuracy =", sess.run(accuracy, feed_dict=train_data), "Loss =", sess.run(loss, feed_dict=train_data)) # Step 6: Evaluation test_data = {X: X_test, y: y_test1} print("Testing Accuracy = ", sess.run(accuracy, feed_dict = test_data)) ###Output _____no_output_____
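###Markdown
As a quick usage check, we can look at the network's predicted probabilities for a few individual patients. This is a small sketch — it assumes the teacher's graph and `sess` from the cell above are still live in the notebook:
###Code
# Hypothetical usage example: predicted probabilities for the first 3 test patients
probs = sess.run(yhat, feed_dict={X: X_test[:3]})
preds = probs.argmax(axis=1)  # 0 = normal, 1 = diabetes
print("P(normal), P(diabetes):\n", probs)
print("Predicted:", preds, " True:", y_test.values[:3])
###Output
_____no_output_____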
NutritionOptimizer.ipynb
###Markdown ###Code import numpy as np from bokeh.plotting import figure, show ###Output _____no_output_____
notebooks/WIDERFACE_Detectron2_DD_mobilenet_v2_test.ipynb
###Markdown
Install detectron2
Detectron2 https://github.com/facebookresearch/detectron2
Detectron2 Beginner's Tutorial https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5
Documentation https://detectron2.readthedocs.io
Detectron2 Model Zoo and Baselines https://github.com/facebookresearch/detectron2/blob/master/MODEL_ZOO.md
Rethinking ImageNet Pre-training https://arxiv.org/pdf/1811.08883.pdf
Code from https://github.com/youngwanLEE/vovnet-detectron2 was used.
###Code
import os
def Wider_load(val=True,train=True,test=False):
    os.makedirs('WIDER/', exist_ok=True)
    if val:
        #!gdown https://drive.google.com/uc?id=0B6eKvaijfFUDd3dIRmpvSk8tLUk
        !gdown https://drive.google.com/uc?id=1-5A_pa_jDS7gk8mHVCBB7ApV5KN8jWDr -O WIDER/tempv.zip
        !unzip -q WIDER/tempv.zip -d WIDER
        !rm WIDER/tempv.zip
    if train:
        ### WIDER Face Training Images
        #!gdown https://drive.google.com/uc?id=0B6eKvaijfFUDQUUwd21EckhUbWs
        !gdown https://drive.google.com/uc?id=1-1iJfmXKYvAx9uLdRDX5W6HHG_KZv1jH -O WIDER/temptr.zip
        !unzip -q WIDER/temptr.zip -d WIDER
        !rm WIDER/temptr.zip
    if test:
        #!gdown https://drive.google.com/uc?id=0B6eKvaijfFUDbW4tdGpaYjgzZkU
        !gdown https://drive.google.com/uc?id=1tTpUJZEQMKDVxKT6100V5FwDuGX_8sDi -O WIDER/tempt.zip
        !unzip -q WIDER/tempt.zip -d WIDER
        !rm WIDER/tempt.zip
    ### Face annotations
    !wget mmlab.ie.cuhk.edu.hk/projects/WIDERFace/support/bbx_annotation/wider_face_split.zip -O WIDER/tempa.zip
    !unzip -q WIDER/tempa.zip -d WIDER
    !rm WIDER/tempa.zip
    #annotations tool
    !gdown https://drive.google.com/uc?id=1_9ydMZlTNFXBOMl16xsU8FSBmK2PW4lN -O WIDER/tools.py
    #set with the training-results data
    !gdown https://drive.google.com/uc?id=1Sk6JWWZFHfxvAJtF7JKOk9ptfyVZWNoU -O WIDER/WIDER_test_set.json
    #mAP tools
    !wget -q -O WIDER/mAP.py https://drive.google.com/uc?id=1PtEsobTFah3eiCDbSsYblOGbe2fmkjGR
    ### Examples and formats of the submissions
    #!wget mmlab.ie.cuhk.edu.hk/projects/WIDERFace/support/example/Submission_example.zip
def repo_load():
    !pip install cython pyyaml==5.1
    !pip install -U 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
    # install detectron2:
    !git clone https://github.com/facebookresearch/detectron2 detectron2_repo
    !pip install -q -e detectron2_repo
    !gdown https://drive.google.com/uc?id=1U0SVkSaSio4TBiXvF1QfTZI65WYpXpZ9
    !unzip -qo mobilenet.zip
    !rm -f mobilenet.zip
repo_load()
Wider_load(train=False)
#Faces_DD set
!pip install gdown
import gdown
url = 'https://drive.google.com/uc?export=download&id=1XwVm-2EMFdy9Zq39pKFr5UoSJvgTOm-7'
output = 'Faces_DD.zip'
gdown.download(url, output, False)
!unzip -qo Faces_DD.zip
!rm Faces_DD.zip
url = 'https://drive.google.com/uc?export=download&id=1gIIUK518Ft9zi3VDVQZLRVozI-Hkpgt2'
output = 'Faces_DD/Faces_DD_metrics.json'
gdown.download(url, output, False)
###Output
Requirement already satisfied: gdown in /usr/local/lib/python3.6/dist-packages (3.6.4)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from gdown) (1.12.0)
Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from gdown) (2.23.0)
Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from gdown) (4.41.1)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->gdown) (1.24.3)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->gdown) (2.9)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->gdown)
(2020.4.5.1)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->gdown) (3.0.4)
###Markdown
Restart runtime to continue... Ctrl+M.
###Code
!nvidia-smi
from psutil import virtual_memory
print('Your runtime has {:.1f} gigabytes of available RAM\n'.format(virtual_memory().total / 1e9))
#!gdown https://drive.google.com/uc?id=1Sk6JWWZFHfxvAJtF7JKOk9ptfyVZWNoU -O WIDER/WIDER_test_set.json
import time
import torch, torchvision
import detectron2
from detectron2.utils.logger import setup_logger
setup_logger()
from google.colab import drive
import os
import cv2
import random
import itertools
import shutil
import glob
import json
import numpy as np
import pandas as pd
from PIL import ImageDraw, Image
from collections import defaultdict
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse
from matplotlib import collections as mc
from google.colab.patches import cv2_imshow
from detectron2 import model_zoo
import detectron2.utils.comm as comm
from detectron2.engine import DefaultPredictor, DefaultTrainer, HookBase
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer
from detectron2.data import DatasetCatalog, MetadataCatalog, build_detection_train_loader
from detectron2.structures import BoxMode
from detectron2.data import build_detection_test_loader
from detectron2.data.datasets import register_coco_instances
from detectron2.evaluation import COCOEvaluator, inference_on_dataset
from mobilenet.utils import add_mobilenet_config, build_mobilenetv2_fpn_backbone
from WIDER.mAP import mAP, plot_mAP
from WIDER.tools import annotations,output_Files
#paths to the trained weights of the mobilenet models
with open('WIDER/WIDER_test_set.json','r') as f:
    test_set=json.load(f)
output_files=output_Files()
with open('Faces_DD/Faces_DD_metrics.json','r') as f:
    Faces_DD=json.load(f)
resF,resW ={},{} # results for Faces and Wider {gbxs,dbxs,metric}
test_set['BN_Mish_V2_250+F_2_100k'] #best model BN+Mish V2 250k iterations + FrozenBN 100k iterations
len(test_set.keys()),test_set.keys()
###Output
_____no_output_____
###Markdown
800k - training with the default parameters, up to 800k iterations
BN_800k - with BatchNormalization, up to 800k
BN_V2_300k - BN + normalization in preprocessing + corrected RGB format
BN_Mish - activation function changed from ReLU6 to Mish
BN_Mish + F - fine-tuning with BN changed to FrozenBN
Prepare the dataset
WIDER FACE: A Face Detection Benchmark http://shuoyang1213.me/WIDERFACE/ https://arxiv.org/pdf/1511.06523.pdf
###Code
train = annotations("val")
val = annotations('val')
for d in ["train","val"]:
    DatasetCatalog.register("face_" + d, lambda d=d: train if d == "train" else val)
    MetadataCatalog.get("face_" + d).set(thing_classes = ['face'])
faces_metadata = MetadataCatalog.get("face_train")
###Output
_____no_output_____
###Markdown
"Base-RCNN-MobileNet-FPN_v1.yaml" mobilenet/configs
Using the model
###Code
# full set of codes and results
# https://drive.google.com/drive/folders/1ApEnn3br2Z2Nt3-0Ve9DKYKMiDkFVpAR?usp=sharing
#sets with the cfg configuration
cfg_set = {
    'FrozenBN':'https://drive.google.com/uc?id=1rZFzJaR_g7uYuTguTdbUuCQYD4eXLeqw',
    'BN': 'https://drive.google.com/uc?id=1-doXtwe5iZHoqPzKGc2ZZbxj6Ebhxsn4',
    'BN_V2':'https://drive.google.com/uc?id=1wywB8UAaOO5KZx3IS35kV-rLsvJMIse6',
    'BN_Mish':'https://drive.google.com/uc?id=1-axV3KKg8-YiZZ7uDBh_2v181JoC9Nj3',
    'BN_Mish_V2':'https://drive.google.com/uc?id=1WoESx5RYvpapNicpSrmMoNJeE2GVm3zK',
    'BN_Mish_V3':'https://drive.google.com/uc?id=1-Kgd_2AS4EsD_ZPqP7SxkscyDjP-Qhnr',
    'BN_Mish_V2F':'https://drive.google.com/uc?id=1pCwyYCjIoduro2vIKMZi5HhlpaypH0_x',
    'R50_C4':'https://drive.google.com/uc?id=1-5P5Xyx5GM26p7g89CY-SSX0aluB7H9U',
}
def test_set_add(config,count,pth,ext=''): #config name or index of cfg_set.keys(), count @ .000
    set_list=list(cfg_set.keys())
    if (isinstance(config,str) and not config in set_list) or (isinstance(config,int)) and config >=len(set_list):
        print('No ',config,' on the list: ', cfg_set.keys())
    else:
        config=list(cfg_set.keys())[config] if isinstance(config,int) else config
        set_choice=config+ext+'_'+str(count)+'k'
        weights='model_{:07}.pth'.format((count*1000)-1)
        test_set[set_choice]={'pth':'https://drive.google.com/uc?id='+pth,
                              'weights_name': weights,
                              'config': config,}
        return set_choice,test_set[set_choice]
#Weights and cfg configuration from training
cfg = get_cfg()
add_mobilenet_config(cfg)
def cfg_write(cfg,cfg_all):
    for key in cfg_all.keys():
        if isinstance(cfg_all[key],dict):
            cfg_write(cfg[key],cfg_all[key])
        else:
            cfg[key]=cfg_all[key]
    return cfg
def set_predictor(set_choice,device='cuda'):
    print('PREPARATION: ',set_choice)
    path = test_set[set_choice]['pth'] #path to the model weights
    out = test_set[set_choice]['weights_name']
    !gdown $path #-O $out #without -O, we keep a check that the file name matches
    path=cfg_set[test_set[set_choice]['config']] #path to the configuration
    !gdown -q $path -O 'Base-RCNN-MobileNet-FPN_V1_ALL.json'
    with open('Base-RCNN-MobileNet-FPN_V1_ALL.json','r') as f:
        cfg_all=json.load(f)
    cfg = get_cfg(); add_mobilenet_config(cfg)
    cfg=cfg_write(cfg,cfg_all)
    cfg.MODEL.WEIGHTS = test_set[set_choice]['weights_name']
    cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 # set the testing threshold for this model
    cfg.MODEL.DEVICE=device
    return DefaultPredictor(cfg),cfg,set_choice
###Output
_____no_output_____
###Markdown
AP metric on val
###Code
def plot_marks(img,metric,dbxs=[],figsize=(14,14)):
    fig, ax = plt.subplots(1,1,figsize=figsize); ax.imshow(img)
    Ms=np.array(metric['marks'])
    color,color_det,lw=('white','red',1)
    for k,dbx in enumerate(metric['marks']):
        annot=str(k)+' - '+metric['persons'][k]
        ax.annotate(annot,np.array(dbx[:2])+2,color='k')
        ax.annotate(annot,dbx[:2],color='w')
    if dbxs != []:
        cbox = mc.LineCollection([[[l,t],[r,t],[r,b],[l,b],[l,t]] for l,t,r,b,_ in dbxs], linewidth=lw+1, colors=color_det)
        ax.add_collection(cbox)
    cbox = mc.LineCollection([[[l,t],[r,t],[r,b],[l,b],[l,t]] for l,t,r,b in Ms], linewidth=lw, colors=color)
    ax.add_collection(cbox)
    print(len(metric['marks']),end=', ')
    print('Annotated Boxes',end=', ')
    print(' '+metric['path'])
    for name in metric['persons']:
        if name !='': print(name,end=', ')
    #plt.axis('off')
    plt.show()
def results(d_set,predictor,name='mAP_',ext='---',pres=False):
    start=time.time()
    gbxs=[]; dbxs=[]; dset=d_set
    if 'annotations' in d_set[0].keys(): #convert the metric from the Wider to the Faces format, selected fields
        dset=[]
        for r in d_set:
            dset.append({'path' : r['file_name'],'marks' : [b['bbox'] for b in r['annotations']],
                         'persons': ['' for b in r['annotations']]})
    Len=len(dset); print(ext,' ',Len,' imgs : 0 +',end='')
    for i,d in enumerate(dset): #build the lists of detected and ground-truth boxes
        im = cv2.imread(d["path"])
        outputs = predictor(im)
        pbxs=outputs['instances'].pred_boxes.tensor.tolist()
        pconfs=outputs['instances'].scores.tolist()
        dbx=[[*(np.array(bx)+0.5).astype('int'),pconfs[i]] for i,bx in enumerate(pbxs)]
        dbxs.append(dbx)
        gbxs.append(d['marks'])
        if not i%100 : print('\r',ext,' ',Len,' imgs : ',i//100*100,' +',end='')
        if not (i+1)%10 : print(' ',(i+1)%100,end='')
    total=time.time()-start
    print('\r',name,ext,' - total time {:.2f} s per img {:.4f} s'.format(total,total/(i+1)))
    # IoUs=[.5,.55,.6,.65,.7,.75,.8,.85,.9,.95,.0]
    # keys ['All {:.2f}' for x in IoUs] + ['large','medium','small'] for IoUs[0]
    # r_p: 0 - interpolated, 1 - full
    m,d=mAP(gbxs,dbxs,data=True)
    plot_mAP(m,d,['All 0.50','All 0.00','large','medium','small'],1,name+ext+' conf>0',file=name+ext)
    if pres:
        for l in [58,233,259,365,388,394,413,424,446,455,483,532,683,709,722,802,809,874,759]:
            if l < Len:
                m=dset[l]
                print('------------------------ idx ',l,' gtx/dbxs',len(m['marks']),'/',len(dbxs[l]))
                plot_marks(Image.open(m['path']),m,dbxs[l],figsize=(18,14))
    return {'gbxs':gbxs,'dbxs':dbxs,'metric':dset}
#['FrozenBN', 'BN', 'BN_V2', 'BN_Mish', 'BN_Mish_V2', 'BN_Mish_V3', 'BN_Mish_V2F', 'R50_C4'] cfg sets
test_set_add(6,300,'10JXHtaSjBtDt0b0Sa6X4esBSysnBb2v-','_250+F_22')
test_set_add(4,250,'1-NY6ZeI_0YsI6axbBG9kGpcAr-M0S4wT',) #best result for further fine-tuning
test_set_add(6,50,'1-I6YSAs9NrORI4cFISfK_-4Yhm1twfDi','_250+F_2') # best fine-tuning result
test_set_add(7,270,'1-I4m091opkwIRHhvMLftt0eAQp6G0CmJ')
with open('WIDER_test_set.json','w')as f:
    json.dump(test_set,f)
###Output
_____no_output_____
###Markdown
Selection of the sets for analysis
###Code
for key in test_set.keys():
    if 'F_2' in key: print(key)
set_choices=['BN_Mish_V2_250+F_2_100k','BN_Mish_V2F_250+F_22_300k']
for c in set_choices:
    if c in test_set.keys(): print(c,' is OK, ',end='')
    else: print('\n!!!!!!! ',c,' is not in test_set!, ',end='')
set_choices[0],test_set[set_choices[0]]
predictor,cfg,ext=set_predictor(set_choices[-1],device='cuda') #normally 'cuda'
print(cfg.dump())
l = [58,233,259,365,388,394,413,424,446,455,483,532,683,709,722,802,809,874,759]
for i in l:
    d=val[i]
    im = cv2.imread(d["file_name"])
    outputs = predictor(im[:, :, ::])
    v = Visualizer(im,metadata=faces_metadata, scale=.7)
    v = v.draw_instance_predictions(outputs["instances"].to("cpu"))
    cv2_imshow(v.get_image()[:, :, ::])
for d in random.sample(val, 5):
    im = cv2.imread(d["file_name"])
    outputs = predictor(im[:, :, ::])
    v = Visualizer(im,metadata=faces_metadata, scale=0.7)
    v = v.draw_instance_predictions(outputs["instances"].to("cpu"))
    cv2_imshow(v.get_image()[:, :, ::])
!rm D*
for c in set_choices:
    predictor,cfg,ext =set_predictor(c)
    resF[c]=results(Faces_DD,predictor,'Detectron2_Faces_DD_mAP_MN2_',ext,pres=False)
    resW[c]=results(val,predictor,'Detectron2_Wider_Face_mAP_MN2_',ext,pres=False)
for c in set_choices:
    res=mAP(resW[c]['gbxs'],resW[c]['dbxs'],data=False)
    Ap=[]
    for i in range(10):
        Ap.append(res[0]['All {:.2f}'.format(0.5+i*0.05)][0])
    print(c,' mAP: ',sum(Ap)/10,'\nset: ', Ap) #mean AP over conf [.5 ...
0.95]
resF[set_choices[0]]['metric'][0]
import json
def to_int(data):
    if isinstance(data,dict):
        for key in data.keys(): data[key]=to_int(data[key])
    if isinstance(data,list):
        for k in range(len(data)): data[k]=to_int(data[k])
    if isinstance(data,tuple):
        data=list(data)
        for k in range(len(data)): data[k]=to_int(data[k])
    if 'int' in data.__class__.__name__: data=int(data)
    return data #json only handles whole numbers of the plain int type
iresF = to_int(resF)
iresW = to_int(resW)
with open('Results Detectron2_mobilenetV2_Faces.json','w') as f:
    json.dump(iresF,f)
with open('Results Detectron2_mobilenetV2_Wider.json','w') as f:
    json.dump(iresW,f)
for c in list(iresF.keys())[:1]:
    _,d,ms=iresF[c].values(); num=len(ms) #d: dbxs, ms: metrics
    for l in [58,233,259,365,388,394,413,424,446,455,483,532,683,709,722,802,809,874,759]:
        if l < num:
            m=ms[l]
            print('------------------------ idx ',l,' gtx/dbxs',len(m['marks']),'/',len(d[l]))
            plot_marks(Image.open(m['path']),m,d[l],figsize=(16,12))
for c in list(iresW.keys())[-1:]:
    _,d,ms=iresW[c].values(); num=len(ms) #d: dbxs, ms: metrics
    for l in [58,233,259,365,388,394,413,424,446,455,483,532,683,709,722,802,809,874,759]:
        if l < num:
            m=ms[l]
            print('------------------------ idx ',l,' gtx/dbxs',len(m['marks']),'/',len(d[l]))
            plot_marks(Image.open(m['path']),m,d[l],figsize=(16,12))
!zip -u PNG D*.png
#initialization of the mtcnn network; the full description is available via: help(MTCNN)
#mtcnn = MTCNN(image_size=224, margin=0, keep_all=True)
###Output
_____no_output_____
###Markdown
total time 1317.56 per img 0.4084
###Code
from detectron2.evaluation import COCOEvaluator, inference_on_dataset
from detectron2.data import build_detection_test_loader
evaluator = COCOEvaluator("face_val", cfg, False, output_dir="./OUTPUT/")
val_loader = build_detection_test_loader(cfg, "face_val")
inference_on_dataset(predictor.model, val_loader, evaluator)
###Output
WARNING [06/07 11:09:19 d2.evaluation.coco_evaluation]: json_file was not found in MetaDataCatalog for 'face_val'. Trying to convert it to COCO format ...
WARNING [06/07 11:09:19 d2.data.datasets.coco]: Using previously cached COCO format annotations at './OUTPUT/face_val_coco_format.json'. You need to clear the cache file if your dataset has been modified.
[06/07 11:09:19 d2.data.build]: Distribution of instances among all 1 categories:
| category | #instances |
|:----------:|:-------------|
| face | 39708 |
| | |
[06/07 11:09:19 d2.data.common]: Serializing 3226 elements to byte tensors and concatenating them all ...
[06/07 11:09:19 d2.data.common]: Serialized dataset takes 3.16 MiB
[06/07 11:09:19 d2.evaluation.evaluator]: Start inference on 3226 images
[06/07 11:09:20 d2.evaluation.evaluator]: Inference done 11/3226. 0.0498 s / img. ETA=0:02:43
[06/07 11:09:25 d2.evaluation.evaluator]: Inference done 111/3226. 0.0490 s / img. ETA=0:02:36
[06/07 11:09:30 d2.evaluation.evaluator]: Inference done 209/3226. 0.0496 s / img. ETA=0:02:33
[06/07 11:09:35 d2.evaluation.evaluator]: Inference done 307/3226. 0.0497 s / img. ETA=0:02:29
[06/07 11:09:40 d2.evaluation.evaluator]: Inference done 406/3226. 0.0496 s / img. ETA=0:02:23
[06/07 11:09:45 d2.evaluation.evaluator]: Inference done 505/3226. 0.0496 s / img. ETA=0:02:18
[06/07 11:09:50 d2.evaluation.evaluator]: Inference done 602/3226. 0.0497 s / img. ETA=0:02:13
[06/07 11:09:55 d2.evaluation.evaluator]: Inference done 701/3226. 0.0497 s / img. ETA=0:02:08
[06/07 11:10:01 d2.evaluation.evaluator]: Inference done 801/3226. 0.0496 s / img. ETA=0:02:03
[06/07 11:10:06 d2.evaluation.evaluator]: Inference done 898/3226.
0.0497 s / img. ETA=0:01:58 [06/07 11:10:11 d2.evaluation.evaluator]: Inference done 997/3226. 0.0496 s / img. ETA=0:01:53 [06/07 11:10:16 d2.evaluation.evaluator]: Inference done 1097/3226. 0.0496 s / img. ETA=0:01:48 [06/07 11:10:21 d2.evaluation.evaluator]: Inference done 1196/3226. 0.0495 s / img. ETA=0:01:43 [06/07 11:10:26 d2.evaluation.evaluator]: Inference done 1295/3226. 0.0495 s / img. ETA=0:01:38 [06/07 11:10:31 d2.evaluation.evaluator]: Inference done 1394/3226. 0.0495 s / img. ETA=0:01:33 [06/07 11:10:36 d2.evaluation.evaluator]: Inference done 1496/3226. 0.0494 s / img. ETA=0:01:27 [06/07 11:10:41 d2.evaluation.evaluator]: Inference done 1597/3226. 0.0493 s / img. ETA=0:01:22 [06/07 11:10:46 d2.evaluation.evaluator]: Inference done 1695/3226. 0.0494 s / img. ETA=0:01:17 [06/07 11:10:51 d2.evaluation.evaluator]: Inference done 1794/3226. 0.0494 s / img. ETA=0:01:12 [06/07 11:10:56 d2.evaluation.evaluator]: Inference done 1893/3226. 0.0494 s / img. ETA=0:01:07 [06/07 11:11:01 d2.evaluation.evaluator]: Inference done 1992/3226. 0.0494 s / img. ETA=0:01:02 [06/07 11:11:06 d2.evaluation.evaluator]: Inference done 2089/3226. 0.0494 s / img. ETA=0:00:57 [06/07 11:11:11 d2.evaluation.evaluator]: Inference done 2188/3226. 0.0494 s / img. ETA=0:00:52 [06/07 11:11:16 d2.evaluation.evaluator]: Inference done 2286/3226. 0.0494 s / img. ETA=0:00:47 [06/07 11:11:21 d2.evaluation.evaluator]: Inference done 2384/3226. 0.0495 s / img. ETA=0:00:42 [06/07 11:11:26 d2.evaluation.evaluator]: Inference done 2482/3226. 0.0495 s / img. ETA=0:00:37 [06/07 11:11:31 d2.evaluation.evaluator]: Inference done 2581/3226. 0.0495 s / img. ETA=0:00:32 [06/07 11:11:36 d2.evaluation.evaluator]: Inference done 2680/3226. 0.0495 s / img. ETA=0:00:27 [06/07 11:11:41 d2.evaluation.evaluator]: Inference done 2778/3226. 0.0495 s / img. ETA=0:00:22 [06/07 11:11:46 d2.evaluation.evaluator]: Inference done 2877/3226. 0.0495 s / img. ETA=0:00:17 [06/07 11:11:51 d2.evaluation.evaluator]: Inference done 2975/3226. 0.0495 s / img. ETA=0:00:12 [06/07 11:11:56 d2.evaluation.evaluator]: Inference done 3074/3226. 0.0495 s / img. ETA=0:00:07 [06/07 11:12:01 d2.evaluation.evaluator]: Inference done 3172/3226. 0.0495 s / img. ETA=0:00:02 [06/07 11:12:04 d2.evaluation.evaluator]: Total inference time: 0:02:44.062716 (0.050935 s / img per device, on 1 devices) [06/07 11:12:04 d2.evaluation.evaluator]: Total inference pure compute time: 0:02:39 (0.049505 s / img per device, on 1 devices) [06/07 11:12:04 d2.evaluation.coco_evaluation]: Preparing results for COCO format ... [06/07 11:12:04 d2.evaluation.coco_evaluation]: Saving results to ./OUTPUT/coco_instances_results.json [06/07 11:12:04 d2.evaluation.coco_evaluation]: Evaluating predictions ... Loading and preparing results... DONE (t=0.03s) creating index... index created! Running per image evaluation... Evaluate annotation type *bbox* DONE (t=44.45s). Accumulating evaluation results... DONE (t=0.44s). 
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.305 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.545 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.315 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.197 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.558 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.639 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.056 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.211 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.334 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.225 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.603 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.685 [06/07 11:12:49 d2.evaluation.coco_evaluation]: Evaluation results for bbox: | AP | AP50 | AP75 | APs | APm | APl | |:------:|:------:|:------:|:------:|:------:|:------:| | 30.492 | 54.485 | 31.527 | 19.708 | 55.829 | 63.922 |
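###Markdown
The `inference_on_dataset` call above also returns these metrics as a dictionary keyed by task, which is convenient for comparing checkpoints programmatically. A small sketch (it assumes the `predictor`, `val_loader` and `evaluator` objects from the previous cell are still available, and it simply re-runs the evaluation):
###Code
# inference_on_dataset returns an OrderedDict, e.g. {'bbox': {'AP': ..., 'AP50': ..., ...}}
metrics = inference_on_dataset(predictor.model, val_loader, evaluator)
print("AP50 on face_val: {:.2f}".format(metrics["bbox"]["AP50"]))
###Output
_____no_output_____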
notebooks_AR/Module1_unit4/2_Preprocess_WaPOR_data.ipynb
###Markdown
Preprocessing WaPOR data using Python
This notebook contains the steps to preprocess WaPOR data
- [1. Read and write raster data](1.-Read-and-write-raster-data)
- [2. Aggregate data in a period](2.-Aggregate-data-in-a-period)
- [3. Warp raster data](3.-Warp-raster-data)
- [4. Clip to Shapefile cutline](4.-Clip-to-Shapefile-cutline)
In each step, you will find code examples and exercises to apply the example.
First, import the libraries/packages so that the functions from these libraries can be used.
###Code
import os
import sys
import shapefile
import glob
import gdal
import osgeo
import osr
import ogr
import numpy as np
from matplotlib import pyplot as plt
import pandas as pd
folder=r"..\..\modules"
sys.path.append(folder) #add a folder with local modules to the system paths
import WaPOR #import the local module "WaPOR"
WaPOR.API.getCatalog()
###Output
Loading WaPOR catalog...
Loading WaPOR catalog...Done
###Markdown
1. Read and write raster data
To clip a raster map to a certain extent, a shapefile or extent coordinates are needed; for this exercise, a shapefile has been prepared.
In order to work with raster data in Python, we need to open the raster dataset as a numpy array and do the calculations with this array. The gdal package contains functions for working with raster map datasets that can be used for this task. Below are the steps to get this information from a GeoTIFF file. In this example, we will use the monthly Level-1 Actual EvapoTranspiration and Interception (AETI) raster datasets downloaded in the exercise of the previous [notebook](1_Bulk_download_WaPOR_data.ipynb).
First, to open a raster file, we need the path to that file. In the code cell below, the code gets the list of raster files (in GeoTIFF format) in an input folder and then prints the path of the first file.
###Code
input_folder=r'.\data\WAPOR.v2_monthly_L1_AETI_M' #specify the input folder
input_fhs=sorted(glob.glob(input_folder+'\*.tif')) #get the list of GeoTIFF raster files in the input folder
in_fh=input_fhs[0] #get the path of the first file
print(in_fh)
###Output
.\data\WAPOR.v2_monthly_L1_AETI_M\L1_AETI_0901M.tif
###Markdown
A raster file has many properties, including the size, nodata value, geotransformation, spatial reference, projection, etc. This information is stored in the dataset and can be read using the gdal package. Below is the code to get this information from the GeoTIFF file in_fh.
###Code
DataSet = gdal.Open(in_fh, gdal.GA_ReadOnly) #open the raster at the path in_fh
Type = DataSet.GetDriver().ShortName #GDAL driver
bandnumber=1
Subdataset = DataSet.GetRasterBand(bandnumber) # get the subdataset of band 1
NDV = Subdataset.GetNoDataValue() # nodata value
xsize = DataSet.RasterXSize # number of columns in the raster
ysize = DataSet.RasterYSize # number of rows in the raster
GeoT = DataSet.GetGeoTransform() # geotransformation (coordinates of the corner pixels)
Projection = osr.SpatialReference() # spatial reference system of the raster
Projection.ImportFromWkt(DataSet.GetProjectionRef())
driver = gdal.GetDriverByName(Type)
print('driver: {0} \nNodata Value: {1}\nxsize: {2}\nysize: {3}\nGeoT: {4}\nProjection: {5}'.format(
    driver, NDV, xsize, ysize, GeoT, Projection)) #print metadata
###Output
driver: <osgeo.gdal.Driver; proxy of <Swig Object of type 'GDALDriverShadow *' at 0x000001C9F13A2090> >
Nodata Value: -9999.0
xsize: 2851
ysize: 2461
GeoT: (37.45758935778001, 0.0022321428599999995, 0.0, 12.888392836720001, 0.0, -0.0022321428599999995)
Projection: GEOGCS["WGS 84",
 DATUM["WGS_1984",
 SPHEROID["WGS 84",6378137,298.257223563,
 AUTHORITY["EPSG","7030"]],
 AUTHORITY["EPSG","6326"]],
 PRIMEM["Greenwich",0],
 UNIT["degree",0.0174532925199433],
 AUTHORITY["EPSG","4326"]]
###Markdown
The data in a gdal dataset can be read with the ReadAsArray function. For example, the code below reads the opened GeoTIFF dataset as a numpy array. You can plot the data in this array using the matplotlib library. See the example code and the output plot below, and note where high evapotranspiration occurs.
Since this is monthly AETI data, the unit of the colorbar is mm per month.
###Code
Array = Subdataset.ReadAsArray().astype(np.float32) # read the subdataset as a numpy array
Array[Array == NDV] = np.nan # replace the nodata value with NaN
plt.imshow(Array) # plot the array as an image
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
Once opened as a NumPy array, the raster dataset can be used in pixel-wise calculations by applying arithmetic operations such as addition (+), subtraction (-), multiplication (*) and division (/). For example, the code below computes NewArray by dividing Array by 30 (the average number of days in a month). NewArray is then the average daily AETI (mm/day).
###Code
NewArray=Array/30
plt.imshow(NewArray)
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
We can save this NewArray as a new raster file. To save the result as a raster image file (GeoTIFF), we need to know the properties of the original raster file (driver, nodata value, size, extent, projection). Below is an example of saving the array above as a new raster file. First, the output file name is defined and the output folder is created if it does not exist.
###Code
output_folder=r'.\data\daily_average_L1_AETI_M'
if not os.path.exists(output_folder): # check whether output_folder exists
    os.makedirs(output_folder) # if not, create the output_folder
filename=os.path.basename(in_fh) #get the name of the input file
out_fh=os.path.join(output_folder,filename) #define the path of the output file
print(out_fh)
###Output
.\data\daily_average_L1_AETI_M\L1_AETI_0901M.tif
###Markdown
Then the NewArray values are saved as out_fh in the [output folder](data/daily_average_L1_AETI_M) using the code below.
###Code
datatypes = {"uint8": 1, "int8": 1, "uint16": 2, "int16": 3, "Int16": 3, "uint32": 4,
             "int32": 5, "float32": 6, "float64": 7, "complex64": 10, "complex128": 11,
             "Int32": 5, "Float32": 6, "Float64": 7, "Complex64": 10, "Complex128": 11,}
driver, NDV, xsize, ysize, GeoT, Projection
DataSet = driver.Create(out_fh,xsize,ysize,1,datatypes['float32']) #create dataset from driver
DataSet.GetRasterBand(1).SetNoDataValue(NDV) #set Nodata value of the new dataset
DataSet.SetGeoTransform(GeoT) #set Geotransformation
DataSet.SetProjection(Projection.ExportToWkt())
DataSet.GetRasterBand(1).WriteArray(NewArray) #write the NewArray values to the new dataset
print(out_fh)
###Output
.\data\daily_average_L1_AETI_M\L1_AETI_0901M.tif
###Markdown
Exercise 1
You can create your own functions to read the metadata, open a raster file as an array, and write a NumPy array as a raster file. A function is defined in Python with *def*:
**def** Function(Inputs): define the function name and the input parameters
 Do something with the Inputs
 return Output
**Hint**: use the scripts in the examples above. To double-check your functions, you can open the quiz/assignment of this unit in the OCW course to see the completed examples.
###Code
import glob
import os
from osgeo import gdal
import numpy as np
import osr
def GetGeoInfo(fh, subdataset = 0):
    ''' This function extracts metadata from a GeoTIFF, HDF4 or netCDF file. '''
    ''' Write your code here '''
    return driver, NDV, xsize, ysize, GeoT, Projection
def OpenAsArray(fh, bandnumber = 1, dtype = 'float32', nan_values = False):
    ''' This function reads a GeoTIFF or HDF4 file as a numpy array. '''
    ''' Write your code here '''
    return Array
def CreateGeoTiff(fh, Array, driver, NDV, xsize, ysize, GeoT, Projection, explicit = True, compress = None):
    ''' This function saves a numpy array as a GeoTIFF raster file. '''
    ''' Write your code here '''
###Output
_____no_output_____
###Markdown
After completing and defining the functions above (a reference sketch follows right below), you can reuse them in the code below to compute the daily-average AETI rasters from all the raster files in the monthly AETI folder. To apply the same function to all raster files, we use a for-loop:
**for** in_fh **in** input_fhs: functions
This means that the script will iterate over all file paths in **input_fhs**, which is the list of GeoTIFF file paths in the input folder. In each iteration, the raster file is opened as an array, used to compute a new array, and the new array is saved as a new raster file in the output folder.
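###Markdown
For reference, one possible minimal implementation of the three helpers is sketched below. It simply mirrors the example cells above and assumes GeoTIFF inputs; treat it as a sketch rather than the official course solution:
###Code
def GetGeoInfo(fh, subdataset=0):
    ''' Sketch: extract metadata from a GeoTIFF file (mirrors the example above). '''
    DataSet = gdal.Open(fh, gdal.GA_ReadOnly)
    driver = gdal.GetDriverByName(DataSet.GetDriver().ShortName)
    band = DataSet.GetRasterBand(1)
    NDV = band.GetNoDataValue()
    xsize = DataSet.RasterXSize
    ysize = DataSet.RasterYSize
    GeoT = DataSet.GetGeoTransform()
    Projection = osr.SpatialReference()
    Projection.ImportFromWkt(DataSet.GetProjectionRef())
    return driver, NDV, xsize, ysize, GeoT, Projection

def OpenAsArray(fh, bandnumber=1, dtype='float32', nan_values=False):
    ''' Sketch: read a GeoTIFF file as a numpy array. '''
    DataSet = gdal.Open(fh, gdal.GA_ReadOnly)
    band = DataSet.GetRasterBand(bandnumber)
    Array = band.ReadAsArray().astype(dtype)
    if nan_values:
        NDV = band.GetNoDataValue()
        Array[Array == NDV] = np.nan
    return Array

def CreateGeoTiff(fh, Array, driver, NDV, xsize, ysize, GeoT, Projection, explicit=True, compress=None):
    ''' Sketch: save a numpy array as a GeoTIFF raster file. '''
    datatypes = {"uint8": 1, "int16": 3, "int32": 5, "float32": 6, "float64": 7}
    DataSet = driver.Create(fh, xsize, ysize, 1, datatypes['float32'])
    Array[np.isnan(Array)] = NDV  # write NDV where the array has NaNs
    DataSet.GetRasterBand(1).SetNoDataValue(NDV)
    DataSet.SetGeoTransform(GeoT)
    DataSet.SetProjection(Projection.ExportToWkt())
    DataSet.GetRasterBand(1).WriteArray(Array)
    DataSet = None  # flush to disk
###Output
_____no_output_____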
###Code
### get the input rasters
input_folder=r'.\data\WAPOR.v2_monthly_L1_AETI_M' #input folder
input_fhs=sorted(glob.glob(input_folder+'\*.tif')) #get the list of .tif files in the input folder
output_folder=r'.\data\daily_average_L1_AETI_M' #output folder
if not os.path.exists(output_folder): #create the output folder if it does not exist
    os.makedirs(output_folder)
### get the GeoInfo
in_fh=input_fhs[0]
driver, NDV, xsize, ysize, GeoT, Projection=GetGeoInfo(in_fh)
### calculation
for in_fh in input_fhs:
    ### divide the raster by 30
    Array=OpenAsArray(in_fh,nan_values=True)
    NewArray=Array/30
    ### save the computed array as a raster file
    filename=os.path.basename(in_fh)
    out_fh=os.path.join(output_folder,filename)
    CreateGeoTiff(out_fh, NewArray, driver, NDV, xsize, ysize, GeoT, Projection)
    print(out_fh)
###Output
.\data\daily_average_L1_AETI_M\L1_AETI_0901M.tif
.\data\daily_average_L1_AETI_M\L1_AETI_0902M.tif
.\data\daily_average_L1_AETI_M\L1_AETI_0903M.tif
.\data\daily_average_L1_AETI_M\L1_AETI_0904M.tif
.\data\daily_average_L1_AETI_M\L1_AETI_0905M.tif
.\data\daily_average_L1_AETI_M\L1_AETI_0906M.tif
.\data\daily_average_L1_AETI_M\L1_AETI_0907M.tif
.\data\daily_average_L1_AETI_M\L1_AETI_0908M.tif
.\data\daily_average_L1_AETI_M\L1_AETI_0909M.tif
.\data\daily_average_L1_AETI_M\L1_AETI_0910M.tif
.\data\daily_average_L1_AETI_M\L1_AETI_0911M.tif
.\data\daily_average_L1_AETI_M\L1_AETI_0912M.tif
.\data\daily_average_L1_AETI_M\L1_AETI_1001M.tif
.\data\daily_average_L1_AETI_M\L1_AETI_1002M.tif
.\data\daily_average_L1_AETI_M\L1_AETI_1003M.tif
.\data\daily_average_L1_AETI_M\L1_AETI_1004M.tif
.\data\daily_average_L1_AETI_M\L1_AETI_1005M.tif
.\data\daily_average_L1_AETI_M\L1_AETI_1006M.tif
.\data\daily_average_L1_AETI_M\L1_AETI_1007M.tif
.\data\daily_average_L1_AETI_M\L1_AETI_1008M.tif
.\data\daily_average_L1_AETI_M\L1_AETI_1009M.tif
.\data\daily_average_L1_AETI_M\L1_AETI_1010M.tif
.\data\daily_average_L1_AETI_M\L1_AETI_1011M.tif
.\data\daily_average_L1_AETI_M\L1_AETI_1012M.tif
###Markdown
2. Aggregate data in a period
To aggregate the data in a period, all the raster files in the period must be combined to compute the new raster. For example, we add the pixel values of all AETI raster datasets in one season together to compute the total seasonal AETI. In the exercise of the previous notebook, you downloaded the data of the years 2009 and 2010. In this example, we will compute the total AETI for the season from June 2009 to June 2010. [previous notebook](1_Bulk_download_WaPOR_data.ipynb)
First, we get the list of the raster files available in the monthly AETI folder.
###Code
input_folder=r'.\data\WAPOR.v2_monthly_L1_AETI_M'
input_fhs=sorted(glob.glob(input_folder+'\*.tif'))
input_fhs
###Output
_____no_output_____
###Markdown
Next, we will use the WaPOR.API.getAvailData function to list the IDs of all available rasters from June 2009 to June 2010.
###Code
start='2009-06-01'
end='2010-06-30'
time_range=f'{start},{end}'
cube_code='L1_AETI_M'
df_avail=WaPOR.API.getAvailData(cube_code, time_range)
df_avail
###Output
_____no_output_____
###Markdown
The raster ID is also used as the name of the raster file, so we can use this list to select only the files that belong to this period. Loop over the list of raster files in the folder and check whether the raster_id in the raster file name belongs to the list of raster_ids between 6/2009 and 6/2010. If so, the file path is appended to the list of all files in the period.
###Code
period_fhs=[]
for in_fh in input_fhs:
    #get the raster id from the file name
    raster_id=os.path.basename(in_fh).split('.tif')[0] #get the file name in the path and exclude the extension ".tif"
    #check whether the raster belongs to the period
    if raster_id in list(df_avail.raster_id): #if raster_id is in the list
        period_fhs.append(in_fh) #append the file path to period_fhs
period_fhs
###Output
_____no_output_____
###Markdown
Now, we can loop over this list of all raster files of the period and add all pixel values together to compute the total AETI of the period.
###Code
driver, NDV, xsize, ysize, GeoT, Projection=GetGeoInfo(period_fhs[0])
SumArray=np.zeros((ysize,xsize)) #create a zero array with the same size as the input rasters
for fh in period_fhs: #repeat for all raster files in the period
    Array=OpenAsArray(fh,nan_values=True)
    SumArray+=Array
out_fh=os.path.join(f'.\data\L1_AETI_{start}_{end}.tif')
CreateGeoTiff(out_fh, SumArray, driver, NDV, xsize, ysize, GeoT, Projection)
###Output
_____no_output_____
###Markdown
The resulting raster can be plotted as follows
###Code
Array=OpenAsArray(out_fh,nan_values=True)
plt.imshow(Array)
plt.colorbar()
plt.title('Total AETI (mm) from 6/2009 to 6/2010')
plt.show()
###Output
_____no_output_____
###Markdown
3. Warp raster data
Spatial data comes in different sizes, resolutions and spatial reference systems. For example, when opened as arrays, the sizes of the precipitation map and the actual evapotranspiration and interception map are different. Below is the size of the precipitation raster layer.
###Code
P_fh=r".\data\WAPOR.v2_monthly_L1_PCP_M\L1_PCP_0901M.tif"
P=OpenAsArray(P_fh,nan_values=True)
print(P.shape)
plt.imshow(P)
plt.colorbar()
plt.show()
###Output
(111, 128)
###Markdown
And below this cell is the size of the AETI raster layer
###Code
ET_fh=r".\data\WAPOR.v2_monthly_L1_AETI_M\L1_AETI_0901M.tif"
ET=OpenAsArray(ET_fh,nan_values=True)
print(ET.shape)
plt.imshow(ET)
plt.colorbar()
plt.show()
###Output
(2461, 2851)
###Markdown
As we can see, the precipitation data has a lower resolution (0.05 degree) and therefore a smaller array size (111x128) than the AETI data (250 m and 2851x2461). For these two rasters to be used in the same calculation, they must be converted to the same size, resolution and projection. Here, we use the AETI map as the template to warp the precipitation data into the same format.
This can be done using the gdal.Warp function, which can mosaic, reproject and warp an image. gdal.Warp is used to warp the given raster to a predefined data size and spatial extent, and to reproject it to a predefined spatial reference system. To gdal.Warp a raster file to match the projection, size and extent of another raster file, the information of the target file must be obtained first. The following code gets the nodata value, the spatial reference system, the data size and the spatial extent of a raster file using the gdal package.
In this example, the raster file of the AETI map is the target file. We want to match the projection, size and extent of the source file, the precipitation, with the target file.
###Code
dst_info=gdal.Info(gdal.Open(ET_fh),format='json')
src_info=gdal.Info(gdal.Open(P_fh),format='json')
print('Target info: ',dst_info,'\n')
print('Source info: ',src_info)
###Output
Source info: {'description': '.\\data\\WAPOR.v2_monthly_L1_PCP_M\\L1_PCP_0901M.tif', 'driverShortName': 'GTiff', 'driverLongName': 'GeoTIFF', 'files': ['.\\data\\WAPOR.v2_monthly_L1_PCP_M\\L1_PCP_0901M.tif'], 'size': [128, 111], 'coordinateSystem': {'wkt': 'GEOGCS["WGS 84",\n DATUM["WGS_1984",\n SPHEROID["WGS 84",6378137,298.257223563,\n AUTHORITY["EPSG","7030"]],\n AUTHORITY["EPSG","6326"]],\n PRIMEM["Greenwich",0],\n UNIT["degree",0.0174532925199433],\n AUTHORITY["EPSG","4326"]]'}, 'geoTransform': [37.45000000000002, 0.05, 0.0, 12.9, 0.0, -0.05], 'metadata': {'': {'AREA_OR_POINT': 'Area'}, 'IMAGE_STRUCTURE': {'INTERLEAVE': 'BAND'}}, 'cornerCoordinates': {'upperLeft': [37.45, 12.9], 'lowerLeft': [37.45, 7.35], 'lowerRight': [43.85, 7.35], 'upperRight': [43.85, 12.9], 'center': [40.65, 10.125]}, 'wgs84Extent': {'type': 'Polygon', 'coordinates': [[[37.45, 12.9], [37.45, 7.35], [43.85, 7.35], [43.85, 12.9], [37.45, 12.9]]]}, 'bands': [{'band': 1, 'block': [128, 16], 'type': 'Float32', 'colorInterpretation': 'Gray', 'noDataValue': -9999.0, 'metadata': {}}]}
Target info: {'description': '.\\data\\WAPOR.v2_monthly_L1_AETI_M\\L1_AETI_0901M.tif', 'driverShortName': 'GTiff', 'driverLongName': 'GeoTIFF', 'files': ['.\\data\\WAPOR.v2_monthly_L1_AETI_M\\L1_AETI_0901M.tif'], 'size': [2851, 2461], 'coordinateSystem': {'wkt': 'GEOGCS["WGS 84",\n DATUM["WGS_1984",\n SPHEROID["WGS 84",6378137,298.257223563,\n AUTHORITY["EPSG","7030"]],\n AUTHORITY["EPSG","6326"]],\n PRIMEM["Greenwich",0],\n UNIT["degree",0.0174532925199433],\n AUTHORITY["EPSG","4326"]]'}, 'geoTransform': [37.45758935778001, 0.00223214286, 0.0, 12.888392836720001, 0.0, -0.00223214286], 'metadata': {'': {'AREA_OR_POINT': 'Area'}, 'IMAGE_STRUCTURE': {'INTERLEAVE': 'BAND'}}, 'cornerCoordinates': {'upperLeft': [37.4575894, 12.8883928], 'lowerLeft': [37.4575894, 7.3950893], 'lowerRight': [43.8214287, 7.3950893], 'upperRight': [43.8214287, 12.8883928], 'center': [40.639509, 10.141741]}, 'wgs84Extent': {'type': 'Polygon', 'coordinates': [[[37.4575894, 12.8883928], [37.4575894, 7.3950893], [43.8214287, 7.3950893], [43.8214287, 12.8883928], [37.4575894, 12.8883928]]]}, 'bands': [{'band': 1, 'block': [2851, 1], 'type': 'Float32', 'colorInterpretation': 'Gray', 'noDataValue': -9999.0, 'metadata': {}}]}
###Markdown
After getting the source info src_info and the target info dst_info, we can pass this information to gdal.Warp as follows. If gdal.
Warp finishes successfully, it returns an osgeo.gdal.Dataset object and the resulting raster file is saved in the [output folder](data/L1_PCP_M_warped).
###Code
source_file=P_fh
output_folder=r'.\data\L1_PCP_M_warped'
if not os.path.exists(output_folder): # create output_folder if it does not exist
    os.makedirs(output_folder)
filename=os.path.basename(source_file) # get the file name from the source file path
output_file=os.path.join(output_folder,filename) # build the output file path
gdal.Warp(output_file,P_fh,format='GTiff',
          srcSRS=src_info['coordinateSystem']['wkt'],
          dstSRS=dst_info['coordinateSystem']['wkt'],
          srcNodata=src_info['bands'][0]['noDataValue'],
          dstNodata=dst_info['bands'][0]['noDataValue'],
          width=dst_info['size'][0],
          height=dst_info['size'][1],
          outputBounds=(dst_info['cornerCoordinates']['lowerLeft'][0],
                        dst_info['cornerCoordinates']['lowerLeft'][1],
                        dst_info['cornerCoordinates']['upperRight'][0],
                        dst_info['cornerCoordinates']['upperRight'][1]),
          outputBoundsSRS=dst_info['coordinateSystem']['wkt'],
          resampleAlg='near')
###Output
_____no_output_____
###Markdown
The precipitation raster file will now have the same size as the AETI raster.
###Code
P=OpenAsArray(output_file,nan_values=True)
print(P.shape)
plt.imshow(P)
plt.colorbar()
plt.show()
###Output
(2461, 2851)
###Markdown
Exercise 2
Create a function to warp any raster file using a given template raster. Use this function to match the projection, size and extent of all precipitation data with the ET data. (One possible sketch is shown right after the cell below.)
**Hint**: use the scripts in the examples above. To double-check your functions, you can open the quiz/assignment of this unit in the OCW course to see the completed examples.
###Code
def MatchProjResNDV(target_file, source_fhs, output_dir, resample = 'near', dtype = 'float32', scale = None, ndv_to_zero = False):
    """
    This function warps all raster files in a list to the same size,
    resolution and projection as a target raster file.
    """
    ''' Write your code here '''
    return output_files
input_folder=r".\data\WAPOR.v2_monthly_L1_PCP_M"
input_fhs=sorted(glob.glob(os.path.join(input_folder,'*.tif')))
output_folder=r'.\data\L1_PCP_M_warped'
MatchProjResNDV(ET_fh, input_fhs, output_folder)
###Output
_____no_output_____
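###Markdown
For reference, one possible sketch of this function, built directly from the gdal.Warp example above (the dtype, scale and ndv_to_zero arguments are accepted but not used in this minimal version):
###Code
def MatchProjResNDV(target_file, source_fhs, output_dir, resample='near', dtype='float32', scale=None, ndv_to_zero=False):
    """ Sketch: warp all rasters in a list to the size, resolution and projection of a target raster. """
    dst_info = gdal.Info(gdal.Open(target_file), format='json')
    if not os.path.exists(output_dir):
        os.makedirs(output_dir)
    output_files = []
    for source_file in source_fhs:
        src_info = gdal.Info(gdal.Open(source_file), format='json')
        output_file = os.path.join(output_dir, os.path.basename(source_file))
        gdal.Warp(output_file, source_file, format='GTiff',
                  srcSRS=src_info['coordinateSystem']['wkt'],
                  dstSRS=dst_info['coordinateSystem']['wkt'],
                  srcNodata=src_info['bands'][0]['noDataValue'],
                  dstNodata=dst_info['bands'][0]['noDataValue'],
                  width=dst_info['size'][0],
                  height=dst_info['size'][1],
                  outputBounds=(dst_info['cornerCoordinates']['lowerLeft'][0],
                                dst_info['cornerCoordinates']['lowerLeft'][1],
                                dst_info['cornerCoordinates']['upperRight'][0],
                                dst_info['cornerCoordinates']['upperRight'][1]),
                  outputBoundsSRS=dst_info['coordinateSystem']['wkt'],
                  resampleAlg=resample)
        output_files.append(output_file)
    return output_files
###Output
_____no_output_____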
###Markdown
4. Clip to Shapefile cutline
When we need to exclude the pixels outside an area of interest, for example a hydrological basin or an irrigation block, we can use *gdal.Warp* with the cutline option to clip a raster to the polygon boundary of a shapefile. The example code below shows how this is done with the precipitation raster maps. First, we get the path to the input raster file and create an [output folder](data/L1_PCP_M_clipped).
###Code
input_fh=r".\data\WAPOR.v2_monthly_L1_PCP_M\L1_PCP_0901M.tif" # path to the raster to be clipped
shp_fh=r".\data\Awash_shapefile.shp" # path to the shapefile containing the polygon of the area of interest
output_folder=r'.\data\L1_PCP_M_clipped' # path to the output folder
if not os.path.exists(output_folder): # create output_folder if it does not exist
    os.makedirs(output_folder)
filename=os.path.basename(source_file) # get the file name from the source file path
output_fh=os.path.join(output_folder,filename) # build the output file path
print(output_fh)
###Output
.\data\L1_PCP_M_clipped\L1_PCP_0901M.tif
###Markdown
After defining the input and output raster paths and the shapefile path, we can use the ogr library to read the shapefile dataset as a gdal object and get the layer name. Then, the shapefile path and layer are used as option arguments of the gdal.Warp function. Check the output file in the [output folder](data\L1_PCP_M_clipped).
###Code
inDriver = ogr.GetDriverByName("ESRI Shapefile")
inDataSource = inDriver.Open(shp_fh, 1) # read the shapefile as a gdal dataset
inLayer = inDataSource.GetLayer()
options = gdal.WarpOptions(cutlineDSName = shp_fh,
                           cutlineLayer = inLayer.GetName(),
                           cropToCutline = True,
                           dstNodata=NDV
                           )
sourceds = gdal.Warp(output_fh, input_fh, options = options)
###Output
_____no_output_____
###Markdown
Below is the input raster before clipping
###Code
input_array=OpenAsArray(input_fh,nan_values=True)
plt.imshow(input_array)
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
And below is the output raster after clipping.
###Code
output_array=OpenAsArray(output_fh,nan_values=True)
plt.imshow(output_array)
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
Note that the original precipitation raster file has a spatial resolution of 5 kilometers. Therefore, when clipping to the cutline, we can see coarse pixels on the boundary. If we need the cutline to be smoother, we can use the precipitation raster file that was warped to the higher spatial resolution (250 m) as the input file. For example, see the difference between the map clipped from the 5 km raster (above) and the map clipped from the 250 m raster (below).
###Code
input_fh=r".\data\L1_PCP_M_warped\L1_PCP_0901M.tif"
inDriver = ogr.GetDriverByName("ESRI Shapefile")
inDataSource = inDriver.Open(shp_fh, 1) # read the shapefile as a gdal dataset
inLayer = inDataSource.GetLayer()
options = gdal.WarpOptions(cutlineDSName = shp_fh,
                           cutlineLayer = inLayer.GetName(),
                           cropToCutline = True,
                           dstNodata=NDV
                           )
sourceds = gdal.Warp(output_fh, input_fh, options = options)
output_array=OpenAsArray(output_fh,nan_values=True)
plt.imshow(output_array)
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
Exercise 3
Create a CliptoShp() function to clip all the warped monthly precipitation data, all the monthly AETI data and the yearly LCC raster data downloaded in the exercise of the [previous notebook](1_Bulk_download_WaPOR_data.ipynb). (A reference sketch of this function is given at the end of this notebook.)
**Hint**: use the scripts in the examples above. To double-check your functions, you can open the quiz/assignment of this unit in the OCW course to see the completed examples.
###Code
def CliptoShp(input_fhs,output_folder,shp_fh,NDV=-9999):
    """
    Clip rasters to the boundary line of a shapefile.
    """
    ''' Write your code here '''
    return output_fhs
###Output
_____no_output_____
###Markdown
Using the defined **CliptoShp** function, all raster files in the input folders can be clipped to the polygon boundary of the shapefile. The output files will be saved in the output folders.
###Code
input_folder=r".\data\L1_PCP_M_warped"
input_fhs=sorted(glob.glob(os.path.join(input_folder,'*.tif')))
output_folder=r'.\data\L1_PCP_M_clipped'
CliptoShp(input_fhs, output_folder,shp_fh)
input_folder=r".\data\WAPOR.v2_monthly_L1_AETI_M"
input_fhs=sorted(glob.glob(os.path.join(input_folder,'*.tif')))
output_folder=r'.\data\L1_AETI_M_clipped'
if not os.path.exists(output_folder):
    os.makedirs(output_folder)
CliptoShp(input_fhs, output_folder,shp_fh)
input_folder=r".\data\WAPOR.v2_yearly_L1_LCC_A"
input_fhs=sorted(glob.glob(os.path.join(input_folder,'*.tif')))
output_folder=r'.\data\L1_LCC_A_clipped'
if not os.path.exists(output_folder):
    os.makedirs(output_folder)
CliptoShp(input_fhs, output_folder,shp_fh)
###Output
_____no_output_____
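###Markdown
For reference, one possible sketch of the CliptoShp function used in the cells above, built from the clipping example earlier in this section:
###Code
def CliptoShp(input_fhs, output_folder, shp_fh, NDV=-9999):
    """ Sketch: clip rasters to the boundary line of a shapefile. """
    if not os.path.exists(output_folder):
        os.makedirs(output_folder)
    inDriver = ogr.GetDriverByName("ESRI Shapefile")
    inDataSource = inDriver.Open(shp_fh, 1)  # read the shapefile as a gdal dataset
    inLayer = inDataSource.GetLayer()
    options = gdal.WarpOptions(cutlineDSName=shp_fh,
                               cutlineLayer=inLayer.GetName(),
                               cropToCutline=True,
                               dstNodata=NDV)
    output_fhs = []
    for input_fh in input_fhs:
        output_fh = os.path.join(output_folder, os.path.basename(input_fh))
        gdal.Warp(output_fh, input_fh, options=options)
        output_fhs.append(output_fh)
    return output_fhs
###Output
_____no_output_____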
psql_db_creation_prod.ipynb
###Markdown
Testing for connection
Run the cell below to test if your machine is able to connect to the sanjose database. If the connection is successful, the expected output should be:
`Connecting to the PostgreSQL database...PostgreSQL database version:('PostgreSQL 10.10 (Ubuntu 10.10-0ubuntu0.18.04.1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0, 64-bit',)Database connection closed.`
###Code
import psycopg2
conn = psycopg2.connect(host="sanjose",
                        database="atlas",
                        user="student"
                        )
def connect():
    ''' Connect to the PostgreSQL database server '''
    #conn = None
    try:
        # read connection parameters
        #params = config()
        # connect to the PostgreSQL server
        print('Connecting to the PostgreSQL database...')
        #conn = psycopg2.connect(**params)
        # create a cursor
        cur = conn.cursor()
        # execute a statement
        print('PostgreSQL database version:')
        cur.execute('SELECT version()')
        # display the PostgreSQL database server version
        db_version = cur.fetchone()
        print(db_version)
        # close the communication with the PostgreSQL
        cur.close()
    except (Exception, psycopg2.DatabaseError) as error:
        print(error)
    finally:
        if conn is not None:
            conn.close()
            print('Database connection closed.')
if __name__ == '__main__':
    connect()
###Output
Connecting to the PostgreSQL database...
PostgreSQL database version:
('PostgreSQL 10.10 (Ubuntu 10.10-0ubuntu0.18.04.1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0, 64-bit',)
Database connection closed.
###Markdown
Part 1: Database Creation
Before running, make sure the file path below points to where all your folders live
###Code
directory = 'C:/<YOUR FILE PATH>'
###Output
_____no_output_____
###Markdown
Define Create Table function
The cell below creates a connection to the psql database. It returns a connection object that can be used by other functions.
###Code
'''
arg should ideally be a config() file.
'''
def create_connection():
    """ create a database connection to the PostgreSQL database
        specified by the connection parameters
    :return: Connection object or None
    """
    import psycopg2
    import logging  # needed for the warning below
    conn = None
    try:
        conn = psycopg2.connect(host="sanjose",
                                database="atlas",
                                user="student"
                                )
        return conn
    except:
        logging.warning('unable to connect to database')
        exit(1)
    return conn
###Output
_____no_output_____
###Markdown
The cell below creates a cursor object from the connection object created above. It then takes in a SQL query to create new tables in the psql database.
###Code
def create_table(conn, create_table_sql):
    """ create a table from the create_table_sql statement
    :param conn: Connection object
    :param create_table_sql: a CREATE TABLE statement
    :return:
    """
    try:
        c = conn.cursor()
        c.execute(create_table_sql)
    except:
        print('Table creation failed.')
###Output
_____no_output_____
###Markdown
Below, the main function is where the table creation function is executed. It also commits (saves) and closes the connection to the psql database (this is recommended every time a connection is opened and changes are made to a database).
If successful, the expected output is:
`Table created.Database committed.Database connection closed.`
###Code
def main(sql_query):
    # create a database connection
    conn = create_connection()
    # create tables
    if conn is not None:
        # create weather table
        create_table(conn, sql_query)
        print('Table created.')
        conn.commit()
        print('Database committed.')
        conn.close()
        print('Database connection closed.')
    else:
        print("Error! cannot create the database connection.")
###Output
Table created.
Database committed.
Database connection closed.
###Markdown
Table creation query
Define your table creation query below. Note, data types in PSQL are different from SQLite. More about data types [here](http://www.postgresqltutorial.com/postgresql-data-types/) and [here](https://www.postgresql.org/docs/9.5/datatype.html).
###Code
'''
This is a variable that holds the SQL query as a string object.
This variable then gets passed into the main() function created above
and creates a table based off the SQL query you specify.
'''
sql_create_weather_table = """
    CREATE TABLE IF NOT EXISTS your_table (
        record_id SERIAL PRIMARY KEY,
        region VARCHAR(64),
        latitude NUMERIC,
        longitude NUMERIC,
        date DATE,
        precipitation NUMERIC,
        max_temp NUMERIC,
        min_temp NUMERIC,
        wind NUMERIC
    );
    """
###Output
_____no_output_____
###Markdown
Run Table Creation Query
###Code
if __name__ == '__main__':
    main(sql_create_weather_table)
###Output
_____no_output_____
###Markdown
Define INSERT statements
Example of data the insert statement accepts:
`your_data = [(col_1,col_2,col_3),(col1,col2,col3),(col1,col2,col3),...]`
###Code
'''
The function below ideally takes in rows of data formatted as a list of tuples.
'''
def insert_data(chunk):
    try:
        # create a database connection
        conn = create_connection()
        cur = conn.cursor()
        with conn:
            # The number of %s below should match the number of columns you pass in.
            args_str = ','.join(cur.mogrify("(%s,%s,%s,%s,%s,%s,%s,%s)", x).decode("utf-8") for x in tuple(chunk))
            cur.execute("INSERT INTO your_table (region,latitude,longitude,date,precipitation,max_temp,min_temp,wind) VALUES " + args_str)
        conn.commit()
        conn.close()
    except:
        print("Data insertion failed.")
'''
This function splits the contents of a text file into individual rows.
Returns a list of tuples.
'''
def row_split(cont):
    measurements = None
    try:
        rows = cont.split('\n')
        rows = rows[:-1]
        measurements = [tuple(x.split()) for x in rows]
        return measurements
    except:
        print("Check that file contents are correct!")
def process_file(filename, folder_dir):
    data_chunk = None
    try:
        if filename.startswith('data'):
            with open(folder_dir + '//' + filename, 'r') as f:
                cont = f.read()
                records = row_split(cont)
                data_chunk = [x for x in records]
            return data_chunk
        else:
            return data_chunk
    except:
        print("Failed to extract data for insertion.")
import os
def main(directory):
    import time
    start_time = time.time()
    counter = 0
    print('Reading in files from %s' % directory)
    for filename in os.listdir(directory):
        try:
            # process_file needs the folder path as well; skip files that yield no data
            chunk = process_file(filename, directory)
            if chunk:
                insert_data(chunk)
            counter += 1
            if (counter % 1000) == 0:
                print("Still working...")
                continue
            else:
                continue
        except:
            print("You broke the Internet.")
    end_time = time.time()
    total_time = end_time - start_time
    print("Congratulations, Mr. Stark. All data successfully extracted from all folders.")
    print("Time elapsed: %.2f minutes" % (total_time/60))
###Output
_____no_output_____
###Markdown
Run Data Insertion function
###Code
'''
Takes in the directory path specified at the top of this notebook
'''
if __name__ == '__main__':
    main(directory)
###Output
_____no_output_____
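###Markdown
To sanity-check that the rows actually landed, a quick count query can be run against the table. This is a sketch; it reuses the create_connection() helper defined above:
###Code
conn = create_connection()
cur = conn.cursor()
cur.execute("SELECT COUNT(*) FROM your_table;")
print("Rows in your_table:", cur.fetchone()[0])
conn.close()
###Output
_____no_output_____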
fusion/fusion.ipynb
###Markdown
fusion
Importing the libraries
###Code
import glob
import os
import pandas as pd
###Output
_____no_output_____
###Markdown
Import all the CSV files (US, CA, DE, GB, FR)
###Code
files = [i for i in glob.glob('../data/*.{}'.format('csv'))]
sorted(files)
###Output
_____no_output_____
###Markdown
Group the data of the different zones
###Code
dfs = list()
for csv in files:
    df = pd.read_csv(csv, index_col='video_id')
    # the first two characters of the file name give the country code (e.g. USvideos.csv -> US)
    df['country'] = os.path.basename(csv)[:2]
    dfs.append(df)
my_df = pd.concat(dfs)
my_df.head(3)
###Output
_____no_output_____
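###Markdown
A quick check that every zone contributed rows to the merged frame:
###Code
# rows per country in the combined DataFrame
my_df['country'].value_counts()
###Output
_____no_output_____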
01 Machine Learning/scikit_examples_jupyter/plot_multilabel.ipynb
###Markdown Multilabel classificationThis example simulates a multi-label document classification problem. Thedataset is generated randomly based on the following process: - pick the number of labels: n ~ Poisson(n_labels) - n times, choose a class c: c ~ Multinomial(theta) - pick the document length: k ~ Poisson(length) - k times, choose a word: w ~ Multinomial(theta_c)In the above process, rejection sampling is used to make sure that n is morethan 2, and that the document length is never zero. Likewise, we reject classeswhich have already been chosen. The documents that are assigned to bothclasses are plotted surrounded by two colored circles.The classification is performed by projecting to the first two principalcomponents found by PCA and CCA for visualisation purposes, followed by usingthe :class:`sklearn.multiclass.OneVsRestClassifier` metaclassifier using twoSVCs with linear kernels to learn a discriminative model for each class.Note that PCA is used to perform an unsupervised dimensionality reduction,while CCA is used to perform a supervised one.Note: in the plot, "unlabeled samples" does not mean that we don't know thelabels (as in semi-supervised learning) but that the samples simply do *not*have a label. ###Code print(__doc__) import numpy as np import matplotlib.pyplot as plt from sklearn.datasets import make_multilabel_classification from sklearn.multiclass import OneVsRestClassifier from sklearn.svm import SVC from sklearn.decomposition import PCA from sklearn.cross_decomposition import CCA def plot_hyperplane(clf, min_x, max_x, linestyle, label): # get the separating hyperplane w = clf.coef_[0] a = -w[0] / w[1] xx = np.linspace(min_x - 5, max_x + 5) # make sure the line is long enough yy = a * xx - (clf.intercept_[0]) / w[1] plt.plot(xx, yy, linestyle, label=label) def plot_subfigure(X, Y, subplot, title, transform): if transform == "pca": X = PCA(n_components=2).fit_transform(X) elif transform == "cca": X = CCA(n_components=2).fit(X, Y).transform(X) else: raise ValueError min_x = np.min(X[:, 0]) max_x = np.max(X[:, 0]) min_y = np.min(X[:, 1]) max_y = np.max(X[:, 1]) classif = OneVsRestClassifier(SVC(kernel='linear')) classif.fit(X, Y) plt.subplot(2, 2, subplot) plt.title(title) zero_class = np.where(Y[:, 0]) one_class = np.where(Y[:, 1]) plt.scatter(X[:, 0], X[:, 1], s=40, c='gray', edgecolors=(0, 0, 0)) plt.scatter(X[zero_class, 0], X[zero_class, 1], s=160, edgecolors='b', facecolors='none', linewidths=2, label='Class 1') plt.scatter(X[one_class, 0], X[one_class, 1], s=80, edgecolors='orange', facecolors='none', linewidths=2, label='Class 2') plot_hyperplane(classif.estimators_[0], min_x, max_x, 'k--', 'Boundary\nfor class 1') plot_hyperplane(classif.estimators_[1], min_x, max_x, 'k-.', 'Boundary\nfor class 2') plt.xticks(()) plt.yticks(()) plt.xlim(min_x - .5 * max_x, max_x + .5 * max_x) plt.ylim(min_y - .5 * max_y, max_y + .5 * max_y) if subplot == 2: plt.xlabel('First principal component') plt.ylabel('Second principal component') plt.legend(loc="upper left") plt.figure(figsize=(8, 6)) X, Y = make_multilabel_classification(n_classes=2, n_labels=1, allow_unlabeled=True, random_state=1) plot_subfigure(X, Y, 1, "With unlabeled samples + CCA", "cca") plot_subfigure(X, Y, 2, "With unlabeled samples + PCA", "pca") X, Y = make_multilabel_classification(n_classes=2, n_labels=1, allow_unlabeled=False, random_state=1) plot_subfigure(X, Y, 3, "Without unlabeled samples + CCA", "cca") plot_subfigure(X, Y, 4, "Without unlabeled samples + PCA", "pca") plt.subplots_adjust(.04, .02, 
.97, .94, .09, .2) plt.show() ###Output _____no_output_____
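###Markdown
Outside the plotting helper, the same metaclassifier can be fit directly to inspect the raw multilabel output — each predicted row is a binary indicator over the two classes. A small sketch using the last X, Y generated above:
###Code
clf = OneVsRestClassifier(SVC(kernel='linear')).fit(X, Y)
print(clf.predict(X[:5]))  # rows like [1 0], [0 1], or [1 1] for doubly-labeled samples
###Output
_____no_output_____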
src/notebook/(1_2)LastImg_MobileNet.ipynb
###Markdown Connect Google Drive ###Code from google.colab import drive drive.mount('/content/gdrive') ###Output Drive already mounted at /content/gdrive; to attempt to forcibly remount, call drive.mount("/content/gdrive", force_remount=True). ###Markdown Import ###Code import tensorflow as tf import numpy as np import pandas as pd import matplotlib.pyplot as plt import cv2 import os from tensorflow import keras from tensorflow.keras.layers import Input from tensorflow.keras.layers import Conv2D from tensorflow.keras.layers import MaxPool2D from tensorflow.keras.layers import ReLU from tensorflow.keras.layers import Softmax from tensorflow.keras.layers import Flatten from tensorflow.keras.layers import Dropout from tensorflow.keras.metrics import sparse_top_k_categorical_accuracy from tensorflow.keras.callbacks import CSVLogger from ast import literal_eval ###Output _____no_output_____ ###Markdown Parameters and Work-Space Paths ###Code # parameters BATCH_SIZE = 200 EPOCHS = 50 STEPS_PER_EPOCH = 850 VALIDATION_STEPS = 100 EVALUATE_STEPS = 850 IMAGE_SIZE = 128 LINE_SIZE = 3 # load path TRAIN_DATA_PATH = 'gdrive/My Drive/QW/Data/Data_10000/All_classes_10000.csv' VALID_DATA_PATH = 'gdrive/My Drive/QW/Data/My_test_data/My_test_data.csv' LABEL_DICT_PATH = 'gdrive/My Drive/QW/Data/labels_dict.npy' # save path CKPT_PATH = 'gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(1-2)LastImg_MobileNet/best_model_1_2.ckpt' LOSS_PLOT_PATH = 'gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(1-2)LastImg_MobileNet/loss_plot_1_2.png' ACC_PLOT_PATH = 'gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(1-2)LastImg_MobileNet/acc_plot_1_2.png' LOG_PATH = 'gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(1-2)LastImg_MobileNet/Log_1_2.log' print('finish!') ###Output finish! ###Markdown Generator ###Code def generate_data(data, batch_size, choose_recognized): data = data.sample(frac = 1) while 1: # get columns' values named 'drawing', 'word' and 'recognized' drawings = data["drawing"].values drawing_recognized = data["recognized"].values drawing_class = data["word"].values # initialization cnt = 0 data_X =[] data_Y =[] # generate batch for i in range(len(drawings)): if choose_recognized: if drawing_recognized[i] == 'False': #Choose according to recognized value continue draw = drawings[i] label = drawing_class[i] stroke_vec = literal_eval(draw) img = np.zeros([256, 256]) for j in range(len(stroke_vec)): line = np.array(stroke_vec[j]).T cv2.polylines(img, [line], False, 1, LINE_SIZE) img = cv2.resize(img, (IMAGE_SIZE,IMAGE_SIZE), interpolation = cv2.INTER_NEAREST) img = img[:,:, np.newaxis] x = img y = labels2nums_dict[label] data_X.append(x) data_Y.append(y) cnt += 1 if cnt==batch_size: #generate a batch when cnt reaches batch_size cnt = 0 yield (np.array(data_X), np.array(data_Y)) data_X = [] data_Y = [] print('finish!') ###Output finish! 
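###Markdown
A quick sanity check of the generator (a sketch — note that it can only be run after the Load Data section below has created `train_data` and `labels2nums_dict`):
###Code
gen = generate_data(train_data, BATCH_SIZE, choose_recognized=True)
X_batch, y_batch = next(gen)
print(X_batch.shape, y_batch.shape)  # expected: (200, 128, 128, 1) (200,)
###Output
_____no_output_____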
###Markdown Callbacks ###Code # define a class named LossHistory class LossHistory(keras.callbacks.Callback): def on_train_begin(self, logs={}): self.losses = {'batch':[], 'epoch':[]} self.accuracy = {'batch':[], 'epoch':[]} self.val_loss = {'batch':[], 'epoch':[]} self.val_acc = {'batch':[], 'epoch':[]} def on_batch_end(self, batch, logs={}): self.losses['batch'].append(logs.get('loss')) self.accuracy['batch'].append(logs.get('acc')) self.val_loss['batch'].append(logs.get('val_loss')) self.val_acc['batch'].append(logs.get('val_acc')) def on_epoch_end(self, batch, logs={}): self.losses['epoch'].append(logs.get('loss')) self.accuracy['epoch'].append(logs.get('acc')) self.val_loss['epoch'].append(logs.get('val_loss')) self.val_acc['epoch'].append(logs.get('val_acc')) def loss_plot(self, loss_type, loss_fig_save_path, acc_fig_save_path): iters = range(len(self.losses[loss_type])) plt.figure('acc') plt.plot(iters, self.accuracy[loss_type], 'r', label='train acc') plt.plot(iters, self.val_acc[loss_type], 'b', label='val acc') plt.grid(True) plt.xlabel(loss_type) plt.ylabel('acc') plt.legend(loc="upper right") plt.savefig(acc_fig_save_path) plt.show() plt.figure('loss') plt.plot(iters, self.losses[loss_type], 'g', label='train loss') plt.plot(iters, self.val_loss[loss_type], 'k', label='val loss') plt.grid(True) plt.xlabel(loss_type) plt.ylabel('loss') plt.legend(loc="upper right") plt.savefig(loss_fig_save_path) plt.show() # create an object from the LossHistory class History = LossHistory() print("finish!") cp_callback = tf.keras.callbacks.ModelCheckpoint( CKPT_PATH, verbose = 1, monitor='val_acc', mode = 'max', save_best_only=True) print("finish!") ReduceLR = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_acc', factor=0.5, patience=3, min_delta=0.005, mode='max', cooldown=3, verbose=1) csv_logger = CSVLogger(LOG_PATH, separator=',', append=True) ###Output _____no_output_____ ###Markdown Load Data ###Code # load train data and valid data # labels_dict and data path # convert labels into integer ids labels_dict = np.load(LABEL_DICT_PATH) labels2nums_dict = {v: k for k, v in enumerate(labels_dict)} # read csv train_data = pd.read_csv(TRAIN_DATA_PATH) valid_data = pd.read_csv(VALID_DATA_PATH) print('finish!') ###Output finish!
###Markdown Model ###Code MODEL = tf.keras.applications.MobileNet( input_shape=(IMAGE_SIZE,IMAGE_SIZE,1), alpha=1.0, include_top=True, weights=None, classes=340 ) MODEL.summary() ###Output _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) (None, 128, 128, 1) 0 _________________________________________________________________ conv1_pad (ZeroPadding2D) (None, 129, 129, 1) 0 _________________________________________________________________ conv1 (Conv2D) (None, 64, 64, 32) 288 _________________________________________________________________ conv1_bn (BatchNormalization (None, 64, 64, 32) 128 _________________________________________________________________ conv1_relu (ReLU) (None, 64, 64, 32) 0 _________________________________________________________________ conv_dw_1 (DepthwiseConv2D) (None, 64, 64, 32) 288 _________________________________________________________________ conv_dw_1_bn (BatchNormaliza (None, 64, 64, 32) 128 _________________________________________________________________ conv_dw_1_relu (ReLU) (None, 64, 64, 32) 0 _________________________________________________________________ conv_pw_1 (Conv2D) (None, 64, 64, 64) 2048 _________________________________________________________________ conv_pw_1_bn (BatchNormaliza (None, 64, 64, 64) 256 _________________________________________________________________ conv_pw_1_relu (ReLU) (None, 64, 64, 64) 0 _________________________________________________________________ conv_pad_2 (ZeroPadding2D) (None, 65, 65, 64) 0 _________________________________________________________________ conv_dw_2 (DepthwiseConv2D) (None, 32, 32, 64) 576 _________________________________________________________________ conv_dw_2_bn (BatchNormaliza (None, 32, 32, 64) 256 _________________________________________________________________ conv_dw_2_relu (ReLU) (None, 32, 32, 64) 0 _________________________________________________________________ conv_pw_2 (Conv2D) (None, 32, 32, 128) 8192 _________________________________________________________________ conv_pw_2_bn (BatchNormaliza (None, 32, 32, 128) 512 _________________________________________________________________ conv_pw_2_relu (ReLU) (None, 32, 32, 128) 0 _________________________________________________________________ conv_dw_3 (DepthwiseConv2D) (None, 32, 32, 128) 1152 _________________________________________________________________ conv_dw_3_bn (BatchNormaliza (None, 32, 32, 128) 512 _________________________________________________________________ conv_dw_3_relu (ReLU) (None, 32, 32, 128) 0 _________________________________________________________________ conv_pw_3 (Conv2D) (None, 32, 32, 128) 16384 _________________________________________________________________ conv_pw_3_bn (BatchNormaliza (None, 32, 32, 128) 512 _________________________________________________________________ conv_pw_3_relu (ReLU) (None, 32, 32, 128) 0 _________________________________________________________________ conv_pad_4 (ZeroPadding2D) (None, 33, 33, 128) 0 _________________________________________________________________ conv_dw_4 (DepthwiseConv2D) (None, 16, 16, 128) 1152 _________________________________________________________________ conv_dw_4_bn (BatchNormaliza (None, 16, 16, 128) 512 _________________________________________________________________ conv_dw_4_relu (ReLU) (None, 16, 16, 128) 0 _________________________________________________________________ conv_pw_4 (Conv2D) 
(None, 16, 16, 256) 32768 _________________________________________________________________ conv_pw_4_bn (BatchNormaliza (None, 16, 16, 256) 1024 _________________________________________________________________ conv_pw_4_relu (ReLU) (None, 16, 16, 256) 0 _________________________________________________________________ conv_dw_5 (DepthwiseConv2D) (None, 16, 16, 256) 2304 _________________________________________________________________ conv_dw_5_bn (BatchNormaliza (None, 16, 16, 256) 1024 _________________________________________________________________ conv_dw_5_relu (ReLU) (None, 16, 16, 256) 0 _________________________________________________________________ conv_pw_5 (Conv2D) (None, 16, 16, 256) 65536 _________________________________________________________________ conv_pw_5_bn (BatchNormaliza (None, 16, 16, 256) 1024 _________________________________________________________________ conv_pw_5_relu (ReLU) (None, 16, 16, 256) 0 _________________________________________________________________ conv_pad_6 (ZeroPadding2D) (None, 17, 17, 256) 0 _________________________________________________________________ conv_dw_6 (DepthwiseConv2D) (None, 8, 8, 256) 2304 _________________________________________________________________ conv_dw_6_bn (BatchNormaliza (None, 8, 8, 256) 1024 _________________________________________________________________ conv_dw_6_relu (ReLU) (None, 8, 8, 256) 0 _________________________________________________________________ conv_pw_6 (Conv2D) (None, 8, 8, 512) 131072 _________________________________________________________________ conv_pw_6_bn (BatchNormaliza (None, 8, 8, 512) 2048 _________________________________________________________________ conv_pw_6_relu (ReLU) (None, 8, 8, 512) 0 _________________________________________________________________ conv_dw_7 (DepthwiseConv2D) (None, 8, 8, 512) 4608 _________________________________________________________________ conv_dw_7_bn (BatchNormaliza (None, 8, 8, 512) 2048 _________________________________________________________________ conv_dw_7_relu (ReLU) (None, 8, 8, 512) 0 _________________________________________________________________ conv_pw_7 (Conv2D) (None, 8, 8, 512) 262144 _________________________________________________________________ conv_pw_7_bn (BatchNormaliza (None, 8, 8, 512) 2048 _________________________________________________________________ conv_pw_7_relu (ReLU) (None, 8, 8, 512) 0 _________________________________________________________________ conv_dw_8 (DepthwiseConv2D) (None, 8, 8, 512) 4608 _________________________________________________________________ conv_dw_8_bn (BatchNormaliza (None, 8, 8, 512) 2048 _________________________________________________________________ conv_dw_8_relu (ReLU) (None, 8, 8, 512) 0 _________________________________________________________________ conv_pw_8 (Conv2D) (None, 8, 8, 512) 262144 _________________________________________________________________ conv_pw_8_bn (BatchNormaliza (None, 8, 8, 512) 2048 _________________________________________________________________ conv_pw_8_relu (ReLU) (None, 8, 8, 512) 0 _________________________________________________________________ conv_dw_9 (DepthwiseConv2D) (None, 8, 8, 512) 4608 _________________________________________________________________ conv_dw_9_bn (BatchNormaliza (None, 8, 8, 512) 2048 _________________________________________________________________ conv_dw_9_relu (ReLU) (None, 8, 8, 512) 0 _________________________________________________________________ conv_pw_9 (Conv2D) (None, 8, 8, 512) 262144 
_________________________________________________________________ conv_pw_9_bn (BatchNormaliza (None, 8, 8, 512) 2048 _________________________________________________________________ conv_pw_9_relu (ReLU) (None, 8, 8, 512) 0 _________________________________________________________________ conv_dw_10 (DepthwiseConv2D) (None, 8, 8, 512) 4608 _________________________________________________________________ conv_dw_10_bn (BatchNormaliz (None, 8, 8, 512) 2048 _________________________________________________________________ conv_dw_10_relu (ReLU) (None, 8, 8, 512) 0 _________________________________________________________________ conv_pw_10 (Conv2D) (None, 8, 8, 512) 262144 _________________________________________________________________ conv_pw_10_bn (BatchNormaliz (None, 8, 8, 512) 2048 _________________________________________________________________ conv_pw_10_relu (ReLU) (None, 8, 8, 512) 0 _________________________________________________________________ conv_dw_11 (DepthwiseConv2D) (None, 8, 8, 512) 4608 _________________________________________________________________ conv_dw_11_bn (BatchNormaliz (None, 8, 8, 512) 2048 _________________________________________________________________ conv_dw_11_relu (ReLU) (None, 8, 8, 512) 0 _________________________________________________________________ conv_pw_11 (Conv2D) (None, 8, 8, 512) 262144 _________________________________________________________________ conv_pw_11_bn (BatchNormaliz (None, 8, 8, 512) 2048 _________________________________________________________________ conv_pw_11_relu (ReLU) (None, 8, 8, 512) 0 _________________________________________________________________ conv_pad_12 (ZeroPadding2D) (None, 9, 9, 512) 0 _________________________________________________________________ conv_dw_12 (DepthwiseConv2D) (None, 4, 4, 512) 4608 _________________________________________________________________ conv_dw_12_bn (BatchNormaliz (None, 4, 4, 512) 2048 _________________________________________________________________ conv_dw_12_relu (ReLU) (None, 4, 4, 512) 0 _________________________________________________________________ conv_pw_12 (Conv2D) (None, 4, 4, 1024) 524288 _________________________________________________________________ conv_pw_12_bn (BatchNormaliz (None, 4, 4, 1024) 4096 _________________________________________________________________ conv_pw_12_relu (ReLU) (None, 4, 4, 1024) 0 _________________________________________________________________ conv_dw_13 (DepthwiseConv2D) (None, 4, 4, 1024) 9216 _________________________________________________________________ conv_dw_13_bn (BatchNormaliz (None, 4, 4, 1024) 4096 _________________________________________________________________ conv_dw_13_relu (ReLU) (None, 4, 4, 1024) 0 _________________________________________________________________ conv_pw_13 (Conv2D) (None, 4, 4, 1024) 1048576 _________________________________________________________________ conv_pw_13_bn (BatchNormaliz (None, 4, 4, 1024) 4096 _________________________________________________________________ conv_pw_13_relu (ReLU) (None, 4, 4, 1024) 0 _________________________________________________________________ global_average_pooling2d (Gl (None, 1024) 0 _________________________________________________________________ reshape_1 (Reshape) (None, 1, 1, 1024) 0 _________________________________________________________________ dropout (Dropout) (None, 1, 1, 1024) 0 _________________________________________________________________ conv_preds (Conv2D) (None, 1, 1, 340) 348500 
_________________________________________________________________ act_softmax (Activation) (None, 1, 1, 340) 0 _________________________________________________________________ reshape_2 (Reshape) (None, 340) 0 ================================================================= Total params: 3,576,788 Trainable params: 3,554,900 Non-trainable params: 21,888 _________________________________________________________________ ###Markdown Compile ###Code model = MODEL model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy']) print('finish') ###Output finish ###Markdown Train ###Code print('start training') # callbacks = [History, cp_callback] history = model.fit_generator(generate_data(train_data, BATCH_SIZE, True), steps_per_epoch = STEPS_PER_EPOCH, epochs = EPOCHS, validation_data = generate_data(valid_data, BATCH_SIZE, False) , validation_steps = VALIDATION_STEPS, verbose = 1, initial_epoch = 0, callbacks = [History,cp_callback,ReduceLR, csv_logger] ) print("finish training") History.loss_plot('epoch', LOSS_PLOT_PATH, ACC_PLOT_PATH) print('finish!') ###Output start training Epoch 1/50 849/850 [============================>.] - ETA: 0s - loss: 3.2709 - acc: 0.2955 Epoch 00001: val_acc improved from -inf to 0.32985, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(1-2)LastImg_MobileNet/best_model_1_2.ckpt 850/850 [==============================] - 540s 635ms/step - loss: 3.2695 - acc: 0.2957 - val_loss: 3.0095 - val_acc: 0.3299 Epoch 2/50 849/850 [============================>.] - ETA: 0s - loss: 1.8670 - acc: 0.5488 Epoch 00002: val_acc improved from 0.32985 to 0.52070, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(1-2)LastImg_MobileNet/best_model_1_2.ckpt 850/850 [==============================] - 523s 616ms/step - loss: 1.8669 - acc: 0.5488 - val_loss: 2.0488 - val_acc: 0.5207 Epoch 3/50 849/850 [============================>.] - ETA: 0s - loss: 1.5905 - acc: 0.6088 Epoch 00003: val_acc improved from 0.52070 to 0.56900, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(1-2)LastImg_MobileNet/best_model_1_2.ckpt 850/850 [==============================] - 522s 614ms/step - loss: 1.5901 - acc: 0.6089 - val_loss: 1.8013 - val_acc: 0.5690 Epoch 4/50 849/850 [============================>.] - ETA: 0s - loss: 1.4628 - acc: 0.6379 Epoch 00004: val_acc improved from 0.56900 to 0.60120, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(1-2)LastImg_MobileNet/best_model_1_2.ckpt 850/850 [==============================] - 521s 613ms/step - loss: 1.4625 - acc: 0.6380 - val_loss: 1.6183 - val_acc: 0.6012 Epoch 5/50 849/850 [============================>.] - ETA: 0s - loss: 1.3735 - acc: 0.6571 Epoch 00005: val_acc improved from 0.60120 to 0.63890, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(1-2)LastImg_MobileNet/best_model_1_2.ckpt 850/850 [==============================] - 514s 605ms/step - loss: 1.3738 - acc: 0.6571 - val_loss: 1.4546 - val_acc: 0.6389 Epoch 6/50 849/850 [============================>.] - ETA: 0s - loss: 1.3197 - acc: 0.6701 Epoch 00006: val_acc improved from 0.63890 to 0.66605, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(1-2)LastImg_MobileNet/best_model_1_2.ckpt 850/850 [==============================] - 517s 609ms/step - loss: 1.3197 - acc: 0.6702 - val_loss: 1.3522 - val_acc: 0.6661 Epoch 7/50 849/850 [============================>.]
- ETA: 0s - loss: 1.2787 - acc: 0.6806 Epoch 00007: val_acc improved from 0.66605 to 0.67050, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(1-2)LastImg_MobileNet/best_model_1_2.ckpt 850/850 [==============================] - 516s 607ms/step - loss: 1.2786 - acc: 0.6806 - val_loss: 1.3222 - val_acc: 0.6705 Epoch 8/50 849/850 [============================>.] - ETA: 0s - loss: 1.2393 - acc: 0.6893 Epoch 00008: val_acc did not improve from 0.67050 850/850 [==============================] - 515s 606ms/step - loss: 1.2390 - acc: 0.6893 - val_loss: 1.3127 - val_acc: 0.6666 Epoch 9/50 849/850 [============================>.] - ETA: 0s - loss: 1.2071 - acc: 0.6965 Epoch 00009: val_acc improved from 0.67050 to 0.67100, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(1-2)LastImg_MobileNet/best_model_1_2.ckpt Epoch 00009: ReduceLROnPlateau reducing learning rate to 0.0005000000237487257. 850/850 [==============================] - 514s 605ms/step - loss: 1.2069 - acc: 0.6965 - val_loss: 1.3119 - val_acc: 0.6710 Epoch 10/50 849/850 [============================>.] - ETA: 0s - loss: 1.1188 - acc: 0.7187 Epoch 00010: val_acc improved from 0.67100 to 0.71895, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(1-2)LastImg_MobileNet/best_model_1_2.ckpt 850/850 [==============================] - 513s 604ms/step - loss: 1.1187 - acc: 0.7187 - val_loss: 1.1142 - val_acc: 0.7190 Epoch 11/50 849/850 [============================>.] - ETA: 0s - loss: 1.0864 - acc: 0.7254 Epoch 00011: val_acc did not improve from 0.71895 850/850 [==============================] - 512s 602ms/step - loss: 1.0863 - acc: 0.7254 - val_loss: 1.1139 - val_acc: 0.7174 Epoch 12/50 849/850 [============================>.] - ETA: 0s - loss: 1.0803 - acc: 0.7279 Epoch 00012: val_acc improved from 0.71895 to 0.72665, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(1-2)LastImg_MobileNet/best_model_1_2.ckpt 850/850 [==============================] - 514s 605ms/step - loss: 1.0806 - acc: 0.7278 - val_loss: 1.0738 - val_acc: 0.7266 Epoch 13/50 849/850 [============================>.] - ETA: 0s - loss: 1.0569 - acc: 0.7331 Epoch 00013: val_acc did not improve from 0.72665 850/850 [==============================] - 514s 604ms/step - loss: 1.0567 - acc: 0.7331 - val_loss: 1.0951 - val_acc: 0.7246 Epoch 14/50 849/850 [============================>.] - ETA: 0s - loss: 1.0596 - acc: 0.7336 Epoch 00014: val_acc improved from 0.72665 to 0.73180, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(1-2)LastImg_MobileNet/best_model_1_2.ckpt 850/850 [==============================] - 517s 608ms/step - loss: 1.0596 - acc: 0.7335 - val_loss: 1.0793 - val_acc: 0.7318 Epoch 15/50 849/850 [============================>.] - ETA: 0s - loss: 1.0419 - acc: 0.7362 Epoch 00015: val_acc did not improve from 0.73180 850/850 [==============================] - 510s 601ms/step - loss: 1.0419 - acc: 0.7362 - val_loss: 1.0637 - val_acc: 0.7295 Epoch 16/50 849/850 [============================>.] - ETA: 0s - loss: 1.0293 - acc: 0.7394 Epoch 00016: val_acc improved from 0.73180 to 0.73470, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(1-2)LastImg_MobileNet/best_model_1_2.ckpt 850/850 [==============================] - 510s 600ms/step - loss: 1.0294 - acc: 0.7394 - val_loss: 1.0414 - val_acc: 0.7347 Epoch 17/50 849/850 [============================>.] 
- ETA: 0s - loss: 1.0219 - acc: 0.7411 Epoch 00017: val_acc did not improve from 0.73470 Epoch 00017: ReduceLROnPlateau reducing learning rate to 0.0002500000118743628. 850/850 [==============================] - 515s 606ms/step - loss: 1.0220 - acc: 0.7410 - val_loss: 1.0510 - val_acc: 0.7325 Epoch 18/50 849/850 [============================>.] - ETA: 0s - loss: 0.9938 - acc: 0.7507 Epoch 00018: val_acc improved from 0.73470 to 0.74830, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(1-2)LastImg_MobileNet/best_model_1_2.ckpt 850/850 [==============================] - 517s 609ms/step - loss: 0.9936 - acc: 0.7508 - val_loss: 1.0015 - val_acc: 0.7483 Epoch 19/50 849/850 [============================>.] - ETA: 0s - loss: 0.9703 - acc: 0.7527 Epoch 00019: val_acc improved from 0.74830 to 0.75395, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(1-2)LastImg_MobileNet/best_model_1_2.ckpt 850/850 [==============================] - 516s 607ms/step - loss: 0.9704 - acc: 0.7527 - val_loss: 0.9592 - val_acc: 0.7539 Epoch 20/50 849/850 [============================>.] - ETA: 0s - loss: 0.9743 - acc: 0.7536 Epoch 00020: val_acc improved from 0.75395 to 0.75560, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(1-2)LastImg_MobileNet/best_model_1_2.ckpt 850/850 [==============================] - 516s 607ms/step - loss: 0.9744 - acc: 0.7536 - val_loss: 0.9614 - val_acc: 0.7556 Epoch 21/50 849/850 [============================>.] - ETA: 0s - loss: 0.9585 - acc: 0.7562 Epoch 00021: val_acc improved from 0.75560 to 0.75955, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(1-2)LastImg_MobileNet/best_model_1_2.ckpt 850/850 [==============================] - 517s 608ms/step - loss: 0.9586 - acc: 0.7562 - val_loss: 0.9403 - val_acc: 0.7595 Epoch 22/50 849/850 [============================>.] - ETA: 0s - loss: 0.9555 - acc: 0.7578 Epoch 00022: val_acc improved from 0.75955 to 0.75990, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(1-2)LastImg_MobileNet/best_model_1_2.ckpt 850/850 [==============================] - 522s 614ms/step - loss: 0.9555 - acc: 0.7578 - val_loss: 0.9567 - val_acc: 0.7599 Epoch 23/50 849/850 [============================>.] - ETA: 0s - loss: 0.9424 - acc: 0.7607 Epoch 00023: val_acc improved from 0.75990 to 0.76140, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(1-2)LastImg_MobileNet/best_model_1_2.ckpt 850/850 [==============================] - 524s 617ms/step - loss: 0.9423 - acc: 0.7608 - val_loss: 0.9521 - val_acc: 0.7614 Epoch 24/50 849/850 [============================>.] - ETA: 0s - loss: 0.9336 - acc: 0.7622 Epoch 00024: val_acc improved from 0.76140 to 0.76145, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(1-2)LastImg_MobileNet/best_model_1_2.ckpt Epoch 00024: ReduceLROnPlateau reducing learning rate to 0.0001250000059371814. 850/850 [==============================] - 521s 613ms/step - loss: 0.9333 - acc: 0.7622 - val_loss: 0.9419 - val_acc: 0.7614 Epoch 25/50 849/850 [============================>.] - ETA: 0s - loss: 0.9061 - acc: 0.7694 Epoch 00025: val_acc did not improve from 0.76145 850/850 [==============================] - 520s 612ms/step - loss: 0.9063 - acc: 0.7693 - val_loss: 0.9336 - val_acc: 0.7596 Epoch 26/50 849/850 [============================>.] 
- ETA: 0s - loss: 0.9013 - acc: 0.7716 Epoch 00026: val_acc improved from 0.76145 to 0.76160, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(1-2)LastImg_MobileNet/best_model_1_2.ckpt 850/850 [==============================] - 521s 613ms/step - loss: 0.9014 - acc: 0.7715 - val_loss: 0.9417 - val_acc: 0.7616 Epoch 27/50 849/850 [============================>.] - ETA: 0s - loss: 0.8912 - acc: 0.7729 Epoch 00027: val_acc improved from 0.76160 to 0.76825, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(1-2)LastImg_MobileNet/best_model_1_2.ckpt 850/850 [==============================] - 522s 614ms/step - loss: 0.8911 - acc: 0.7729 - val_loss: 0.9228 - val_acc: 0.7682 Epoch 28/50 849/850 [============================>.] - ETA: 0s - loss: 0.8808 - acc: 0.7750 Epoch 00028: val_acc did not improve from 0.76825 850/850 [==============================] - 521s 613ms/step - loss: 0.8805 - acc: 0.7751 - val_loss: 0.9232 - val_acc: 0.7655 Epoch 29/50 849/850 [============================>.] - ETA: 0s - loss: 0.8627 - acc: 0.7801 Epoch 00029: val_acc did not improve from 0.76825 850/850 [==============================] - 519s 610ms/step - loss: 0.8624 - acc: 0.7801 - val_loss: 0.9049 - val_acc: 0.7666 Epoch 30/50 849/850 [============================>.] - ETA: 0s - loss: 0.8885 - acc: 0.7748 Epoch 00030: val_acc improved from 0.76825 to 0.77050, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(1-2)LastImg_MobileNet/best_model_1_2.ckpt Epoch 00030: ReduceLROnPlateau reducing learning rate to 6.25000029685907e-05. 850/850 [==============================] - 521s 613ms/step - loss: 0.8884 - acc: 0.7748 - val_loss: 0.9115 - val_acc: 0.7705 Epoch 31/50 849/850 [============================>.] - ETA: 0s - loss: 0.8655 - acc: 0.7785 Epoch 00031: val_acc improved from 0.77050 to 0.77245, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(1-2)LastImg_MobileNet/best_model_1_2.ckpt 850/850 [==============================] - 520s 612ms/step - loss: 0.8655 - acc: 0.7784 - val_loss: 0.9136 - val_acc: 0.7725 Epoch 32/50 849/850 [============================>.] - ETA: 0s - loss: 0.8607 - acc: 0.7813 Epoch 00032: val_acc improved from 0.77245 to 0.77660, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(1-2)LastImg_MobileNet/best_model_1_2.ckpt 850/850 [==============================] - 520s 611ms/step - loss: 0.8609 - acc: 0.7813 - val_loss: 0.8806 - val_acc: 0.7766 Epoch 33/50 849/850 [============================>.] - ETA: 0s - loss: 0.8424 - acc: 0.7843 Epoch 00033: val_acc did not improve from 0.77660 850/850 [==============================] - 518s 609ms/step - loss: 0.8423 - acc: 0.7843 - val_loss: 0.9031 - val_acc: 0.7707 Epoch 34/50 849/850 [============================>.] - ETA: 0s - loss: 0.8482 - acc: 0.7854 Epoch 00034: val_acc did not improve from 0.77660 850/850 [==============================] - 513s 603ms/step - loss: 0.8482 - acc: 0.7854 - val_loss: 0.9108 - val_acc: 0.7689 Epoch 35/50 849/850 [============================>.] - ETA: 0s - loss: 0.8322 - acc: 0.7884 Epoch 00035: val_acc did not improve from 0.77660 Epoch 00035: ReduceLROnPlateau reducing learning rate to 3.125000148429535e-05. 850/850 [==============================] - 512s 602ms/step - loss: 0.8323 - acc: 0.7885 - val_loss: 0.9091 - val_acc: 0.7728 Epoch 36/50 849/850 [============================>.] 
- ETA: 0s - loss: 0.8185 - acc: 0.7919 Epoch 00036: val_acc did not improve from 0.77660 850/850 [==============================] - 513s 604ms/step - loss: 0.8186 - acc: 0.7919 - val_loss: 0.8909 - val_acc: 0.7721 Epoch 37/50 849/850 [============================>.] - ETA: 0s - loss: 0.8106 - acc: 0.7932 Epoch 00037: val_acc did not improve from 0.77660 850/850 [==============================] - 515s 606ms/step - loss: 0.8106 - acc: 0.7931 - val_loss: 0.8856 - val_acc: 0.7745 Epoch 38/50 849/850 [============================>.] - ETA: 0s - loss: 0.8531 - acc: 0.7835 Epoch 00038: val_acc improved from 0.77660 to 0.77785, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(1-2)LastImg_MobileNet/best_model_1_2.ckpt 850/850 [==============================] - 518s 609ms/step - loss: 0.8530 - acc: 0.7835 - val_loss: 0.8726 - val_acc: 0.7778 Epoch 39/50 849/850 [============================>.] - ETA: 0s - loss: 0.8369 - acc: 0.7859 Epoch 00039: val_acc did not improve from 0.77785 850/850 [==============================] - 516s 607ms/step - loss: 0.8370 - acc: 0.7859 - val_loss: 0.9076 - val_acc: 0.7744 Epoch 40/50 849/850 [============================>.] - ETA: 0s - loss: 0.8407 - acc: 0.7864 Epoch 00040: val_acc improved from 0.77785 to 0.77820, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(1-2)LastImg_MobileNet/best_model_1_2.ckpt Epoch 00040: ReduceLROnPlateau reducing learning rate to 1.5625000742147677e-05. 850/850 [==============================] - 522s 615ms/step - loss: 0.8407 - acc: 0.7864 - val_loss: 0.8784 - val_acc: 0.7782 Epoch 41/50 849/850 [============================>.] - ETA: 0s - loss: 0.8278 - acc: 0.7887 Epoch 00041: val_acc improved from 0.77820 to 0.77955, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(1-2)LastImg_MobileNet/best_model_1_2.ckpt 850/850 [==============================] - 518s 610ms/step - loss: 0.8279 - acc: 0.7887 - val_loss: 0.8737 - val_acc: 0.7795 Epoch 42/50 849/850 [============================>.] - ETA: 0s - loss: 0.8243 - acc: 0.7896 Epoch 00042: val_acc did not improve from 0.77955 850/850 [==============================] - 520s 611ms/step - loss: 0.8243 - acc: 0.7896 - val_loss: 0.8958 - val_acc: 0.7715 Epoch 43/50 849/850 [============================>.] - ETA: 0s - loss: 0.8112 - acc: 0.7935 Epoch 00043: val_acc did not improve from 0.77955 850/850 [==============================] - 517s 608ms/step - loss: 0.8110 - acc: 0.7935 - val_loss: 0.8955 - val_acc: 0.7738 Epoch 44/50 849/850 [============================>.] - ETA: 0s - loss: 0.8009 - acc: 0.7953 Epoch 00044: val_acc did not improve from 0.77955 850/850 [==============================] - 518s 610ms/step - loss: 0.8007 - acc: 0.7953 - val_loss: 0.8825 - val_acc: 0.7762 Epoch 45/50 849/850 [============================>.] - ETA: 0s - loss: 0.8247 - acc: 0.7899 Epoch 00045: val_acc did not improve from 0.77955 Epoch 00045: ReduceLROnPlateau reducing learning rate to 7.812500371073838e-06. 850/850 [==============================] - 520s 611ms/step - loss: 0.8249 - acc: 0.7899 - val_loss: 0.8916 - val_acc: 0.7730 Epoch 46/50 849/850 [============================>.] - ETA: 0s - loss: 0.8233 - acc: 0.7901 Epoch 00046: val_acc did not improve from 0.77955 850/850 [==============================] - 519s 610ms/step - loss: 0.8233 - acc: 0.7901 - val_loss: 0.8750 - val_acc: 0.7770 Epoch 47/50 849/850 [============================>.] 
- ETA: 0s - loss: 0.8131 - acc: 0.7926 Epoch 00047: val_acc did not improve from 0.77955 850/850 [==============================] - 518s 609ms/step - loss: 0.8129 - acc: 0.7926 - val_loss: 0.8850 - val_acc: 0.7769 Epoch 48/50 849/850 [============================>.] - ETA: 0s - loss: 0.8038 - acc: 0.7937 Epoch 00048: val_acc did not improve from 0.77955 850/850 [==============================] - 517s 608ms/step - loss: 0.8035 - acc: 0.7937 - val_loss: 0.8852 - val_acc: 0.7784 Epoch 49/50 849/850 [============================>.] - ETA: 0s - loss: 0.7846 - acc: 0.8002 Epoch 00049: val_acc improved from 0.77955 to 0.78180, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(1-2)LastImg_MobileNet/best_model_1_2.ckpt 850/850 [==============================] - 519s 610ms/step - loss: 0.7844 - acc: 0.8002 - val_loss: 0.8644 - val_acc: 0.7818 Epoch 50/50 849/850 [============================>.] - ETA: 0s - loss: 0.8066 - acc: 0.7956 Epoch 00050: val_acc did not improve from 0.78180 850/850 [==============================] - 518s 610ms/step - loss: 0.8065 - acc: 0.7957 - val_loss: 0.8810 - val_acc: 0.7738 finish training ###Markdown Evaluate ###Code def top_3_accuracy(X, Y): return sparse_top_k_categorical_accuracy(X, Y, 3) def top_5_accuracy(X, Y): return sparse_top_k_categorical_accuracy(X, Y, 5) model_E = MODEL model_E.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy',top_3_accuracy, top_5_accuracy]) model_weights_path = CKPT_PATH model_E.load_weights(model_weights_path) print('finish') result = model_E.evaluate_generator( generate_data(valid_data, BATCH_SIZE, False), steps = EVALUATE_STEPS, verbose = 1 ) # evaluate_generator returns one value per metric: loss + top1/top3/top5 accuracy print('number of metrics returned:', len(result)) print('loss:', result[0]) print('top1 accuracy:', result[1]) print('top3 accuracy:', result[2]) print('top5 accuracy:', result[3]) ###Output 850/850 [==============================] - 150s 176ms/step number of metrics returned: 4 loss: 0.8837472029994515 top1 accuracy: 0.7758647046369664 top3 accuracy: 0.9089058828353882 top5 accuracy: 0.9344470597014708
5-basic-parallel/parallel-tutorial-master/notebooks/05-distributed-services.ipynb
###Markdown Distributed Services====================**NOTE:** This notebook and those that follow are to be run on the [pycon-parallel.jovyan.org](https://pycon-parallel.jovyan.org) cluster.The cluster contains the following distributed frameworks:* **Spark**, running at `spark://schedulers:7077`* **Dask**, running at `schedulers:9000`Each of these systems has a central controller/scheduler/master that manages worker processes on other computers on your cluster. This notebook contains setup information for connecting to each of these services.Dask and Spark also provide web interfaces with a variety of feedback:- [Dask dashboard](../../../9002/status) - [Spark UI](../../../9070) Spark ###Code from pyspark.sql import SparkSession spark = SparkSession.builder.master('spark://schedulers:7077').getOrCreate() spark ###Output _____no_output_____ ###Markdown Dask ###Code from dask.distributed import Client client = Client('schedulers:9000') client ###Output _____no_output_____
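###Markdown As a quick sanity check (not part of the original notebook), we can run one trivial job against each service; both calls below use only standard Spark and Dask APIs and should return almost immediately if the connections work. ###Code # a tiny Spark job: count a generated range of 100 rows
print(spark.range(100).count())

# a tiny Dask task: ship a function to the cluster and fetch the result
future = client.submit(lambda x: x + 1, 10)
print(future.result()) ###Output _____no_output_____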
Semi-Finals/underline_trainning/Step1 itemCF_based_on_Apriori/7_generate_recall.ipynb
###Markdown Note: the line of code below was run on a machine with 256 GB of RAM. If your machine has less than 256 GB, here are two workarounds: 1. Run on only the most recent days of data; this still uses a lot of memory and has some impact on the results: data = data[data['day'] >= K], where K is set to a late day such as 12 or 13. 2. [Recommended] In Step 6_Sta_for_SparseMatrix.ipynb, reduce the parameter 500 in the following line to 300; this has little impact on the results and needs far less memory: item_dict[i] = sorted(tmp_list,key=lambda x:x[1], reverse=True)[:500] Of course, this step could be processed in batches with no loss of precision at all, but it was not optimized due to time constraints during the competition, and a large-memory machine was available (a chunked variant is sketched after the code cell below). ###Code recall_logs = get_recall_list(data, targetDay=targetday, k=lenth) recall_df = reshape_recall_to_dataframe(recall_logs) temp = pd.merge(left=recall_df, right=data[data['day'] == targetday][['userID','itemID','behavior']], on=['userID','itemID'], how='left').rename(columns={'behavior':'label'}) len(set(recall_df['userID']) & set(data[data['day'] == targetday]['userID'])) len(set(recall_df['userID'])) recall_df.to_csv(name, index=False) ###Output _____no_output_____
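###Markdown The chunked variant mentioned above, as a hypothetical sketch: it assumes `get_recall_list` only needs each user's own history (plus the precomputed item statistics), so the recall can be produced per chunk of users and concatenated. The helper name `get_recall_df_chunked` and the chunk count are illustrative, not part of the original code. ###Code import numpy as np

def get_recall_df_chunked(data, targetday, lenth, n_chunks=10):
    # split the users into chunks so only a slice of `data` is live at once
    users = data['userID'].unique()
    parts = []
    for chunk in np.array_split(users, n_chunks):
        sub = data[data['userID'].isin(chunk)]
        logs = get_recall_list(sub, targetDay=targetday, k=lenth)
        parts.append(reshape_recall_to_dataframe(logs))
    return pd.concat(parts, ignore_index=True) ###Output _____no_output_____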
notebooks/04-miscellaneous/dense-vs-sparse-vectors.ipynb
###Markdown Dense vector and sparse vector A vector can be represented in dense and sparse formats. A dense vector is a regular vector that stores every element explicitly. A sparse vector uses three components to represent the same vector while using less memory. ###Code # DenseVector and SparseVector live in pyspark.ml.linalg
# (older code may import them from pyspark.mllib.linalg instead)
from pyspark.ml.linalg import DenseVector, SparseVector

dv = DenseVector([1.0,0.,0.,0.,4.5,0]) dv ###Output _____no_output_____ ###Markdown Three components of a sparse vector* vector size* indices of active elements* values of active elementsIn the above dense vector:* vector size = 6* indices of active elements = [0, 4]* values of active elements = [1.0, 4.5] We can use the `SparseVector()` function to create a sparse vector. The first argument is the vector size, the second argument is a dictionary. The keys are indices of active elements and the values are values of active elements. ###Code sv = SparseVector(6, {0:1.0, 4:4.5}) sv ###Output _____no_output_____ ###Markdown Convert sparse vector to dense vector ###Code DenseVector(sv.toArray()) ###Output _____no_output_____ ###Markdown Convert dense vector to sparse vector ###Code active_elements_dict = {index: value for index, value in enumerate(dv) if value != 0} active_elements_dict SparseVector(len(dv), active_elements_dict) ###Output _____no_output_____
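###Markdown As a quick check (not part of the original notebook), we can inspect the three stored components directly; `size`, `indices`, and `values` are attributes of `SparseVector`. ###Code print(sv.size)     # expected: 6
print(sv.indices)  # expected: [0 4]
print(sv.values)   # expected: [1.  4.5] ###Output _____no_output_____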
2_4_LSTMs/3. Chararacter-Level RNN, Exercise.ipynb
###Markdown Character-Level LSTM in PyTorchIn this notebook, I'll construct a character-level LSTM with PyTorch. The network will train character by character on some text, then generate new text character by character. As an example, I will train on Anna Karenina. **This model will be able to generate new text based on the text from the book!**This network is based off of Andrej Karpathy's [post on RNNs](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) and [implementation in Torch](https://github.com/karpathy/char-rnn). Below is the general architecture of the character-wise RNN. First let's load in our required resources for data loading and model creation. ###Code import numpy as np import torch from torch import nn import torch.nn.functional as F ###Output _____no_output_____ ###Markdown Load in DataThen, we'll load the Anna Karenina text file and convert it into integers for our network to use. TokenizationIn the second cell, below, I'm creating a couple **dictionaries** to convert the characters to and from integers. Encoding the characters as integers makes them easier to use as input to the network. ###Code # open text file and read in data as `text` with open('data/anna.txt', 'r') as f: text = f.read() ###Output _____no_output_____ ###Markdown Now that we have the text, we can encode it as integers. ###Code # encode the text and map each character to an integer and vice versa # we create two dictionaries: # 1. int2char, which maps integers to characters # 2. char2int, which maps characters to unique integers chars = tuple(set(text)) int2char = dict(enumerate(chars)) char2int = {ch: ii for ii, ch in int2char.items()} encoded = np.array([char2int[ch] for ch in text]) ###Output _____no_output_____ ###Markdown Let's check out the first 100 characters and make sure everything is peachy. According to the [American Book Review](http://americanbookreview.org/100bestlines.asp), this is the 6th best first line of a book ever. ###Code text[:100] ###Output _____no_output_____ ###Markdown And we can see those same characters encoded as integers. ###Code encoded[:100] ###Output _____no_output_____ ###Markdown Pre-processing the dataAs you can see in our char-RNN image above, our LSTM expects an input that is **one-hot encoded**, meaning that each character is converted into an integer (via our created dictionary) and *then* converted into a column vector where only its corresponding integer index will have the value of 1 and the rest of the vector will be filled with 0's. Since we're one-hot encoding the data, let's make a function to do that! ###Code def one_hot_encode(arr, n_labels): # Initialize the encoded array one_hot = np.zeros((np.multiply(*arr.shape), n_labels), dtype=np.float32) # Fill the appropriate elements with ones one_hot[np.arange(one_hot.shape[0]), arr.flatten()] = 1. # Finally reshape it to get back to the original array one_hot = one_hot.reshape((*arr.shape, n_labels)) return one_hot ###Output _____no_output_____ ###Markdown Making training mini-batchesTo train on this data, we also want to create mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:In this example, we'll take the encoded characters (passed in as the `arr` parameter) and split them into multiple sequences, given by `n_seqs` (also referred to as "batch size" in other places). Each of those sequences will be `n_steps` long. Creating Batches**1.
The first thing we need to do is discard some of the text so we only have completely full batches. **Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the total number of batches, $K$, that we can make from the array `arr`, you divide the length of `arr` by the number of characters per batch. Once you know the number of batches, you can get the total number of characters to keep from `arr`, $N * M * K$.**2. After that, we need to split `arr` into $N$ sequences. ** You can do this using `arr.reshape(size)` where `size` is a tuple containing the dimension sizes of the reshaped array. We know we want $N$ sequences, so let's make that the size of the first dimension. For the second dimension, you can use `-1` as a placeholder in the size; it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$.**3. Now that we have this array, we can iterate through it to get our batches. **The idea is each batch is an $N \times M$ window on the $N \times (M * K)$ array. For each subsequent batch, the window moves over by `n_steps`. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. The way I like to do this window is to use `range` to take steps of size `n_steps` from $0$ to `arr.shape[1]`, the total number of steps in each sequence. That way, the integers you get from `range` always point to the start of a batch, and each window is `n_steps` wide.> **TODO:** Write the code for creating batches in the function below. The exercises in this notebook _will not be easy_. I've provided a notebook with solutions alongside this notebook. If you get stuck, check out the solutions. The most important thing is that you don't copy and paste the code into here, **type out the solution code yourself.** ###Code def get_batches(arr, n_seqs, n_steps): '''Create a generator that returns batches of size n_seqs x n_steps from arr. Arguments --------- arr: Array you want to make batches from n_seqs: Batch size, the number of sequences per batch n_steps: Number of sequence steps per batch ''' # Get the number of characters per batch batch_size = n_seqs * n_steps n_batches = len(arr)//batch_size # Keep only enough characters to make full batches arr = arr[:n_batches * batch_size] # Reshape into n_seqs rows arr = arr.reshape((n_seqs, -1)) for n in range(0, arr.shape[1], n_steps): # The features x = arr[:, n:n+n_steps] # The targets, shifted by one y = np.zeros_like(x) try: y[:, :-1], y[:, -1] = x[:, 1:], arr[:, n+n_steps] except IndexError: y[:, :-1], y[:, -1] = x[:, 1:], arr[:, 0] yield x, y ###Output _____no_output_____ ###Markdown Test Your ImplementationNow I'll make some data sets and we can check out what's going on as we batch data. Here, as an example, I'm going to use a batch size of 10 and 50 sequence steps.
###Code batches = get_batches(encoded, 10, 50) x, y = next(batches) print('x\n', x[:10, :10]) print('\ny\n', y[:10, :10]) ###Output x [[72 28 46 8 11 2 6 4 63 52] [ 4 46 79 4 10 66 11 4 53 66] [29 82 10 5 52 52 73 38 2 9] [10 4 24 7 6 82 10 53 4 28] [ 4 82 11 4 82 9 1 4 9 82] [ 4 33 11 4 12 46 9 52 66 10] [28 2 10 4 21 66 79 2 4 70] [18 4 71 7 11 4 10 66 12 4] [11 4 82 9 10 40 11 5 4 31] [ 4 9 46 82 24 4 11 66 4 28]] y [[28 46 8 11 2 6 4 63 52 52] [46 79 4 10 66 11 4 53 66 82] [82 10 5 52 52 73 38 2 9 1] [ 4 24 7 6 82 10 53 4 28 82] [82 11 4 82 9 1 4 9 82 6] [33 11 4 12 46 9 52 66 10 75] [ 2 10 4 21 66 79 2 4 70 66] [ 4 71 7 11 4 10 66 12 4 9] [ 4 82 9 10 40 11 5 4 31 28] [ 9 46 82 24 4 11 66 4 28 2]] ###Markdown If you implemented `get_batches` correctly, the above output should look something like ```x [[55 63 69 22 6 76 45 5 16 35] [ 5 69 1 5 12 52 6 5 56 52] [48 29 12 61 35 35 8 64 76 78] [12 5 24 39 45 29 12 56 5 63] [ 5 29 6 5 29 78 28 5 78 29] [ 5 13 6 5 36 69 78 35 52 12] [63 76 12 5 18 52 1 76 5 58] [34 5 73 39 6 5 12 52 36 5] [ 6 5 29 78 12 79 6 61 5 59] [ 5 78 69 29 24 5 6 52 5 63]]y [[63 69 22 6 76 45 5 16 35 35] [69 1 5 12 52 6 5 56 52 29] [29 12 61 35 35 8 64 76 78 28] [ 5 24 39 45 29 12 56 5 63 29] [29 6 5 29 78 28 5 78 29 45] [13 6 5 36 69 78 35 52 12 43] [76 12 5 18 52 1 76 5 58 52] [ 5 73 39 6 5 12 52 36 5 78] [ 5 29 78 12 79 6 61 5 59 63] [78 69 29 24 5 6 52 5 63 76]] ``` although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`. --- Defining the network with PyTorchBelow is where you'll define the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.Next, you'll use PyTorch to define the architecture of the network. We start by defining the layers and operations we want. Then, define a method for the forward pass. You've also been given a method for predicting characters. Model StructureIn `__init__` the suggested structure is as follows:* Create and store the necessary dictionaries (this has been done for you)* Define an LSTM layer that takes as params: an input size (the number of characters), a hidden layer size `n_hidden`, a number of layers `n_layers`, a dropout probability `drop_prob`, and a batch_first boolean (True, since we are batching)* Define a dropout layer with `drop_prob`* Define a fully-connected layer with params: input size `n_hidden` and output size (the number of characters)* Finally, initialize the weights (again, this has been given)Note that some parameters have been named and given in the `__init__` function, and we use them and store them by doing something like `self.drop_prob = drop_prob`. --- LSTM Inputs/OutputsYou can create a basic LSTM cell as follows```pythonself.lstm = nn.LSTM(input_size, n_hidden, n_layers, dropout=drop_prob, batch_first=True)```where `input_size` is the number of characters this cell expects to see as sequential input, and `n_hidden` is the number of units in the hidden layers in the cell. And we can add dropout by adding a dropout parameter with a specified probability; this will automatically add dropout to the inputs or outputs. Finally, in the `forward` function, we use `.view` to flatten the LSTM outputs across the batch and sequence dimensions so they can be fed through the fully-connected layer.We also need to create initial hidden and cell states of all zeros.
This is done with the `init_hidden` method defined below:```pythonself.init_hidden(n_seqs)``` ###Code class CharRNN(nn.Module): def __init__(self, tokens, n_steps=100, n_hidden=256, n_layers=2, drop_prob=0.5, lr=0.001): super().__init__() self.drop_prob = drop_prob self.n_layers = n_layers self.n_hidden = n_hidden self.lr = lr # creating character dictionaries self.chars = tokens self.int2char = dict(enumerate(self.chars)) self.char2int = {ch: ii for ii, ch in self.int2char.items()} ## TODO: define the LSTM, self.lstm self.lstm = nn.LSTM(len(self.chars), n_hidden, n_layers, dropout=drop_prob, batch_first=True) ## TODO: define a dropout layer, self.dropout self.dropout = nn.Dropout(p=drop_prob) ## TODO: define the final, fully-connected output layer, self.fc self.fc = nn.Linear(n_hidden, len(self.chars)) # initialize the weights self.init_weights() def forward(self, x, hc): ''' Forward pass through the network. These inputs are x, and the hidden/cell state `hc`. ''' ## TODO: Get x, and the new hidden state (h, c) from the lstm x, (h, c) = self.lstm(x, hc) ## TODO: pass x through a droupout layer x = self.dropout(x) # Stack up LSTM outputs using view x = x.view(x.size()[0]*x.size()[1], self.n_hidden) ## TODO: put x through the fully-connected layer x = self.fc(x) # return x and the hidden state (h, c) return x, (h, c) def predict(self, char, h=None, cuda=False, top_k=None): ''' Given a character, predict the next character. Returns the predicted character and the hidden state. ''' if cuda: self.cuda() else: self.cpu() if h is None: h = self.init_hidden(1) x = np.array([[self.char2int[char]]]) x = one_hot_encode(x, len(self.chars)) inputs = torch.from_numpy(x) if cuda: inputs = inputs.cuda() h = tuple([each.data for each in h]) out, h = self.forward(inputs, h) p = F.softmax(out, dim=1).data if cuda: p = p.cpu() if top_k is None: top_ch = np.arange(len(self.chars)) else: p, top_ch = p.topk(top_k) top_ch = top_ch.numpy().squeeze() p = p.numpy().squeeze() char = np.random.choice(top_ch, p=p/p.sum()) return self.int2char[char], h def init_weights(self): ''' Initialize weights for fully connected layer ''' initrange = 0.1 # Set bias tensor to all zeros self.fc.bias.data.fill_(0) # FC weights as random uniform self.fc.weight.data.uniform_(-1, 1) def init_hidden(self, n_seqs): ''' Initializes hidden state ''' # Create two new tensors with sizes n_layers x n_seqs x n_hidden, # initialized to zero, for hidden state and cell state of LSTM weight = next(self.parameters()).data return (weight.new(self.n_layers, n_seqs, self.n_hidden).zero_(), weight.new(self.n_layers, n_seqs, self.n_hidden).zero_()) ###Output _____no_output_____ ###Markdown A note on the `predict` functionThe output of our RNN is from a fully-connected layer and it outputs a **distribution of next-character scores**.To actually get the next character, we apply a softmax function, which gives us a *probability* distribution that we can then sample to predict the next character.
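###Markdown Here is a tiny NumPy-only sketch of the top-$K$ sampling step that `predict` performs (the scores below are made up for illustration): softmax the scores, keep the $k$ most probable characters, renormalize, and sample. ###Code # made-up next-character scores for a 5-character vocabulary
scores = np.array([2.0, 0.5, 1.0, -1.0, 0.0])

# softmax: turn scores into a probability distribution
p = np.exp(scores - scores.max())
p /= p.sum()

# keep the k most probable entries, renormalize, and sample one index
k = 3
top_idx = p.argsort()[-k:]
top_p = p[top_idx] / p[top_idx].sum()
np.random.choice(top_idx, p=top_p) ###Output _____no_output_____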
###Code ## ---- keep notebook from crashing during training --- ## import os import requests import time def train(net, data, epochs=10, n_seqs=10, n_steps=50, lr=0.001, clip=5, val_frac=0.1, cuda=False, print_every=10): ''' Training a network Arguments --------- net: CharRNN network data: text data to train the network epochs: Number of epochs to train n_seqs: Number of mini-sequences per mini-batch, aka batch size n_steps: Number of character steps per mini-batch lr: learning rate clip: gradient clipping val_frac: Fraction of data to hold out for validation cuda: Train with CUDA on a GPU print_every: Number of steps for printing training and validation loss ''' net.train() opt = torch.optim.Adam(net.parameters(), lr=lr) criterion = nn.CrossEntropyLoss() # create training and validation data val_idx = int(len(data)*(1-val_frac)) data, val_data = data[:val_idx], data[val_idx:] if cuda: net.cuda() counter = 0 n_chars = len(net.chars) old_time = time.time() for e in range(epochs): h = net.init_hidden(n_seqs) for x, y in get_batches(data, n_seqs, n_steps): if time.time() - old_time > 60: old_time = time.time() requests.request("POST", "https://nebula.udacity.com/api/v1/remote/keep-alive", headers={'Authorization': "STAR " + response.text}) counter += 1 # One-hot encode our data and make them Torch tensors x = one_hot_encode(x, n_chars) inputs, targets = torch.from_numpy(x), torch.from_numpy(y) if cuda: inputs, targets = inputs.cuda(), targets.cuda() # Creating new variables for the hidden state, otherwise # we'd backprop through the entire training history h = tuple([each.data for each in h]) net.zero_grad() output, h = net.forward(inputs, h) loss = criterion(output, targets.view(n_seqs*n_steps)) loss.backward() # `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs. nn.utils.clip_grad_norm_(net.parameters(), clip) opt.step() if counter % print_every == 0: # Get validation loss val_h = net.init_hidden(n_seqs) val_losses = [] for x, y in get_batches(val_data, n_seqs, n_steps): # One-hot encode our data and make them Torch tensors x = one_hot_encode(x, n_chars) x, y = torch.from_numpy(x), torch.from_numpy(y) # Creating new variables for the hidden state, otherwise # we'd backprop through the entire training history val_h = tuple([each.data for each in val_h]) inputs, targets = x, y if cuda: inputs, targets = inputs.cuda(), targets.cuda() output, val_h = net.forward(inputs, val_h) val_loss = criterion(output, targets.view(n_seqs*n_steps)) val_losses.append(val_loss.item()) print("Epoch: {}/{}...".format(e+1, epochs), "Step: {}...".format(counter), "Loss: {:.4f}...".format(loss.item()), "Val Loss: {:.4f}".format(np.mean(val_losses))) ###Output _____no_output_____ ###Markdown Time to trainNow we can actually train the network. First we'll create the network itself, with some given hyperparameters. Then, define the mini-batch sizes (number of sequences and number of steps), and start the training. With the train function, we can set the number of epochs, the learning rate, and other parameters. Also, we can run the training on a GPU by setting `cuda=True`. ###Code if 'net' in locals(): del net # define and print the net net = CharRNN(chars, n_hidden=512, n_layers=2) print(net) n_seqs, n_steps = 128, 100 # you may change cuda to True if you plan on using a GPU! # also, if you do, please INCREASE the epochs to 25 # Open the training log file.
log_file = 'training_log.txt' f = open(log_file, 'w') response = requests.request("GET", "http://metadata.google.internal/computeMetadata/v1/instance/attributes/keep_alive_token", headers={"Metadata-Flavor":"Google"}) # TRAIN cuda = True train(net, encoded, epochs=1, n_seqs=n_seqs, n_steps=n_steps, lr=0.001, cuda=cuda, print_every=10) # Close the training log file. f.close() ###Output Epoch: 1/1... Step: 10... Loss: 3.3286... Val Loss: 3.2957 Epoch: 1/1... Step: 20... Loss: 3.1690... Val Loss: 3.1895 Epoch: 1/1... Step: 30... Loss: 3.0809... Val Loss: 3.0669 Epoch: 1/1... Step: 40... Loss: 2.8979... Val Loss: 2.9058 Epoch: 1/1... Step: 50... Loss: 2.7654... Val Loss: 2.7368 Epoch: 1/1... Step: 60... Loss: 2.6145... Val Loss: 2.6272 Epoch: 1/1... Step: 70... Loss: 2.5468... Val Loss: 2.5560 Epoch: 1/1... Step: 80... Loss: 2.4760... Val Loss: 2.5039 Epoch: 1/1... Step: 90... Loss: 2.4521... Val Loss: 2.4607 Epoch: 1/1... Step: 100... Loss: 2.3978... Val Loss: 2.4222 Epoch: 1/1... Step: 110... Loss: 2.3391... Val Loss: 2.3908 Epoch: 1/1... Step: 120... Loss: 2.2915... Val Loss: 2.3601 Epoch: 1/1... Step: 130... Loss: 2.3100... Val Loss: 2.3328 ###Markdown Getting the best modelTo set your hyperparameters to get the best performance, you'll want to watch the training and validation losses. If your training loss is much lower than the validation loss, you're overfitting. Increase regularization (more dropout) or use a smaller network. If the training and validation losses are close, you're underfitting so you can increase the size of the network. HyperparametersHere are the hyperparameters for the network.In defining the model:* `n_hidden` - The number of units in the hidden layers.* `n_layers` - Number of hidden LSTM layers to use.We assume that dropout probability and learning rate will be kept at the default, in this example.And in training:* `n_seqs` - Number of sequences running through the network in one pass.* `n_steps` - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.* `lr` - Learning rate for trainingHere's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to [where it originally came from](https://github.com/karpathy/char-rnn#tips-and-tricks).> Tips and Tricks> Monitoring Validation Loss vs. Training Loss>If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:> - If your training loss is much lower than validation loss then this means the network might be **overfitting**. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.> - If your training/validation loss are about equal then your model is **underfitting**. Increase the size of your model (either number of layers or the raw number of neurons per layer)> Approximate number of parameters> The two most important parameters that control the model are `n_hidden` and `n_layers`. I would advise that you always use `n_layers` of either 2/3. The `n_hidden` can be adjusted based on how much data you have.
The two important quantities to keep track of here are:> - The number of parameters in your model. This is printed when you start training.> - The size of your dataset. 1MB file is approximately 1 million characters.>These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:> - I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make `n_hidden` larger.> - I have a 10MB dataset and I'm running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.> Best models strategy>The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0 and 1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.>It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.>By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative. After training, we'll save the model so we can load it again later if we need to. Here I'm saving the parameters needed to create the same architecture, the hidden layer hyperparameters and the text characters. ###Code # change the name, for saving multiple files model_name = 'rnn_1_epoch.net' checkpoint = {'n_hidden': net.n_hidden, 'n_layers': net.n_layers, 'state_dict': net.state_dict(), 'tokens': net.chars} with open(model_name, 'wb') as f: torch.save(checkpoint, f) ###Output _____no_output_____ ###Markdown SamplingNow that the model is trained, we'll want to sample from it. To sample, we pass in a character and have the network predict the next character. Then we take that character, pass it back in, and get another predicted character. Just keep doing this and you'll generate a bunch of text! Top K samplingOur predictions come from a categorical probability distribution over all the possible characters. We can make the sampled text more reasonable to handle (with fewer variables) by only considering the $K$ most probable characters. This will prevent the network from giving us completely absurd characters while allowing it to introduce some noise and randomness into the sampled text.Typically you'll want to prime the network so you can build up a hidden state. Otherwise the network will start out generating characters at random. In general the first bunch of characters will be a little rough since it hasn't built up a long history of characters to predict from.
###Code def sample(net, size, prime='The', top_k=None, cuda=False): if cuda: net.cuda() else: net.cpu() net.eval() # First off, run through the prime characters chars = [ch for ch in prime] h = net.init_hidden(1) for ch in prime: char, h = net.predict(ch, h, cuda=cuda, top_k=top_k) chars.append(char) # Now pass in the previous character and get a new one for ii in range(size): char, h = net.predict(chars[-1], h, cuda=cuda, top_k=top_k) chars.append(char) return ''.join(chars) print(sample(net, 2000, prime='Anna', top_k=5, cuda=cuda)) ###Output Annad thes and the his andere he sele and thou the and the thang he hed of hing thit tha dardisth we thand he ald of he histhe to to he sorong, he hamd sine as tho sed he herad ally at her and and at ime astene thin whe her sald of her ale fer and te tee him thas iof ald aly and ontere the too hat hat and tating anerand as as at of the saing the tiot are, the ad on win the soud he and ang the to sain the sher, atte shis, but of the hems tere sald he womle, and to sadin the and th alle frond somad hur some the here an tha ghed hes thing and ond tiont and of he wand sine he aly of her tere he the saist, whous thoud hed saines the wing the as ome too too she her on and seat toute she all the sailes ad the and hevingine hes ind he thathe anding thas that and hat wis ad to has one and onthing her her has ad to his aly on the wer the sand tat at allat outh sap tie and to her tho gont whe he shis dreatsing ant in tout him sel ading ofetre hor and then hamd the thes to he wous the pones teas the anther. The camare to aneded and has alighere the ward ton thas and sat the hes hinge to hers and an tie the chat his and allyoved he wing the and ane wand the thad dear on tione ard ting ha said hor ther ad at he her of hes, whoug, and tiel to tere the had houd at here ang athang hat wast his has thas ard heres at ald and her shed ond to hin wand sing of the hersasted tor and heversend ontthe said and thin wore and of ald to him thand all weran sadintene at her tor the ardid tonerstid, and ofr had hes and te has had saing had binte was hithit houg somand sate an the hase faris, bet anded on he his show what ang att of hem herded. I and the wored of and he sated ton ine ware, he son arsithed tha ding to har aditha dout of har and so the sand tary aly of of the that her ther whe hes and ous here sing athed, bet of an to tha ghom he sould ton the andeng of aringe ate and anthe hesed to the weres thang and her the her has and the an the ardingere, "sain thanded till hin and, and hat ond ###Markdown Loading a checkpoint ###Code # Here we have loaded in a model that trained over 1 epoch `rnn_1_epoch.net` with open('rnn_1_epoch.net', 'rb') as f: checkpoint = torch.load(f) loaded = CharRNN(checkpoint['tokens'], n_hidden=checkpoint['n_hidden'], n_layers=checkpoint['n_layers']) loaded.load_state_dict(checkpoint['state_dict']) # Change cuda to True if you are using GPU! 
print(sample(loaded, 2000, cuda=cuda, top_k=5, prime="And Levin said")) ###Output And Levin said shing ot and an tite ald hom then was thers ing tin ther wis her ange as tea he singted in this seand her ware to him the woud an and thit astene," his and she the was and he what hin sored tho sore his the se tanted, sad her that he sald at ham and on thou hed to sear tare silat in that her tere and of ha d ome thar ant as to tit has te tha to thas the the wher as then to and the peathe sade thim the shad the somithe hard the he sere sonded and the sald sor the hat tout, and the ad at ather ond thit se tio thour has he tha geroung to his har he singt the sald ter the ware and there fou her shomed anderine har ald houred on seers ome the har tarestally sad ting the som his and homed he his tale fom this. I wand so the wast at in woth tere hed, hes of and som her of ing toonger, at the was ou tha se ard her to he hat thed an to the frit ane he has dadine saler an tome, and the hes ithis sat athan whas and the he thad his her tontt tho gho has douted tar at arind tor ith he mused. "Whe d of to har wars the for ther he was then ad tha gelid anters ine to hiss an out and tha terater ton ith there has war ther has doule, ant the hear on the toud to tare ald the hor andered at he shed of tout hat and sad him of the tared he wead ald to satin ho wentine her shad singt and the and ovedred to had ham the that how has dond, hat has ad thing of the thes ing he mante he wand ofere fread in satt the wared of in ane his andinge son in the withe hed ho had, and that ing him as and ta come to the somend the her and and the harss allonthe, was hid, he heare wald the songer ofren hed he ward tho hat saded too than the and ter the wall and tore his serent ou sher tien this to and saterer hing ath his toul fas hime shate fot his sat at ond thas in and of ar the soud ovey took the cerere tin waster and, wis ta ther ant hin tore his he anden to at toulde at he waredeng, at and allo ghed him and to has the sithe herd, and thig out of the sorithed ontered the han and he sher the sored h
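###Markdown
A note on portability: if the network was trained on a GPU, the saved `state_dict` contains CUDA tensors, and `torch.load` can fail on a CPU-only machine. A minimal sketch of the usual remedy, assuming the same `rnn_1_epoch.net` file as above:
###Code
# remap all storages onto the CPU while loading the checkpoint
with open('rnn_1_epoch.net', 'rb') as f:
    checkpoint = torch.load(f, map_location='cpu')
###Output
_____no_output_____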
Advanced Transforms.ipynb
###Markdown
Download Dataset: https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv Choose Save As on Windows and save the file as housing.csv.
###Code
# load and summarize the dataset
from numpy import loadtxt
# load data
dataset = loadtxt('housing.csv', delimiter=",")
# split into inputs and outputs
X, y = dataset[:, :-1], dataset[:, -1]
# summarize dataset
print(X.shape, y.shape)
# NOT EXECUTABLE: fragment for illustration; the full runnable example is below
# prepare the model with input scaling
pipeline = Pipeline(steps=[('normalize', MinMaxScaler()), ('model', HuberRegressor())])
# NOT EXECUTABLE: fragment for illustration; the full runnable example is below
# prepare the model with target scaling
model = TransformedTargetRegressor(regressor=pipeline, transformer=MinMaxScaler())
# example of normalizing input and output variables for regression.
from numpy import mean
from numpy import absolute
from numpy import loadtxt
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedKFold
from sklearn.pipeline import Pipeline
from sklearn.linear_model import HuberRegressor
from sklearn.preprocessing import MinMaxScaler
from sklearn.compose import TransformedTargetRegressor
# load data
dataset = loadtxt('housing.csv', delimiter=",")
# split into inputs and outputs
X, y = dataset[:, :-1], dataset[:, -1]
# prepare the model with input scaling
pipeline = Pipeline(steps=[('normalize', MinMaxScaler()), ('model', HuberRegressor())])
# prepare the model with target scaling
model = TransformedTargetRegressor(regressor=pipeline, transformer=MinMaxScaler())
# evaluate model
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
scores = cross_val_score(model, X, y, scoring='neg_mean_absolute_error', cv=cv, n_jobs=-1)
# convert scores to positive
scores = absolute(scores)
# summarize the result
s_mean = mean(scores)
print('Mean MAE: %.3f' % (s_mean))
# example of power transform input and output variables for regression.
from numpy import mean from numpy import absolute from numpy import loadtxt from sklearn.model_selection import cross_val_score from sklearn.model_selection import RepeatedKFold from sklearn.pipeline import Pipeline from sklearn.linear_model import HuberRegressor from sklearn.preprocessing import PowerTransformer from sklearn.preprocessing import MinMaxScaler from sklearn.compose import TransformedTargetRegressor # load data dataset = loadtxt('housing.csv', delimiter=",") # split into inputs and outputs X, y = dataset[:, :-1], dataset[:, -1] # prepare the model with input scaling and power transform steps = list() steps.append(('scale', MinMaxScaler(feature_range=(1e-5,1)))) steps.append(('power', PowerTransformer())) steps.append(('model', HuberRegressor())) pipeline = Pipeline(steps=steps) # prepare the model with target scaling model = TransformedTargetRegressor(regressor=pipeline, transformer=PowerTransformer()) # evaluate model cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1) scores = cross_val_score(model, X, y, scoring='neg_mean_absolute_error', cv=cv, n_jobs=-1) # convert scores to positive scores = absolute(scores) # summarize the result s_mean = mean(scores) print('Mean MAE: %.3f' % (s_mean)) # example of creating a test dataset and splitting it into train and test sets from sklearn.datasets import make_blobs from sklearn.model_selection import train_test_split # prepare dataset X, y = make_blobs(n_samples=100, centers=2, n_features=2, random_state=1) # split data into train and test sets X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1) # summarize the scale of each input variable for i in range(X_test.shape[1]): print('>%d, train: min=%.3f, max=%.3f, test: min=%.3f, max=%.3f' % (i, X_train[:, i].min(), X_train[:, i].max(), X_test[:, i].min(), X_test[:, i].max())) # example of scaling the dataset from sklearn.datasets import make_blobs from sklearn.model_selection import train_test_split from sklearn.preprocessing import MinMaxScaler # prepare dataset X, y = make_blobs(n_samples=100, centers=2, n_features=2, random_state=1) # split data into train and test sets X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1) # define scaler scaler = MinMaxScaler() # fit scaler on the training dataset scaler.fit(X_train) # transform both datasets X_train_scaled = scaler.transform(X_train) X_test_scaled = scaler.transform(X_test) # summarize the scale of each input variable for i in range(X_test.shape[1]): print('>%d, train: min=%.3f, max=%.3f, test: min=%.3f, max=%.3f' % (i, X_train_scaled[:, i].min(), X_train_scaled[:, i].max(), X_test_scaled[:, i].min(), X_test_scaled[:, i].max())) # example of fitting a model on the scaled dataset from sklearn.datasets import make_blobs from sklearn.model_selection import train_test_split from sklearn.preprocessing import MinMaxScaler from sklearn.linear_model import LogisticRegression from pickle import dump # prepare dataset X, y = make_blobs(n_samples=100, centers=2, n_features=2, random_state=1) # split data into train and test sets X_train, _, y_train, _ = train_test_split(X, y, test_size=0.33, random_state=1) # define scaler scaler = MinMaxScaler() # fit scaler on the training dataset scaler.fit(X_train) # transform the training dataset X_train_scaled = scaler.transform(X_train) # define model model = LogisticRegression(solver='lbfgs') model.fit(X_train_scaled, y_train) # save the model dump(model, open('model.pkl', 'wb')) # save the scaler dump(scaler, 
open('scaler.pkl', 'wb'))
# load model and scaler and make predictions on new data
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from pickle import load
# prepare dataset
X, y = make_blobs(n_samples=100, centers=2, n_features=2, random_state=1)
# split data into train and test sets
_, X_test, _, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
# load the model
model = load(open('model.pkl', 'rb'))
# load the scaler
scaler = load(open('scaler.pkl', 'rb'))
# check scale of the test set before scaling
print('Raw test set range')
for i in range(X_test.shape[1]):
    print('>%d, min=%.3f, max=%.3f' % (i, X_test[:, i].min(), X_test[:, i].max()))
# transform the test dataset
X_test_scaled = scaler.transform(X_test)
print('Scaled test set range')
for i in range(X_test_scaled.shape[1]):
    print('>%d, min=%.3f, max=%.3f' % (i, X_test_scaled[:, i].min(), X_test_scaled[:, i].max()))
# make predictions on the test set
yhat = model.predict(X_test_scaled)
# evaluate accuracy
acc = accuracy_score(y_test, yhat)
print('Test Accuracy:', acc)
###Output
Raw test set range
>0, min=-11.270, max=0.085
>1, min=-5.581, max=5.926
Scaled test set range
>0, min=0.047, max=0.964
>1, min=0.063, max=0.955
Test Accuracy: 1.0
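###Markdown
Pickling the scaler and the model separately works, but it is easy to ship one file and forget the other. A sketch of an alternative, assuming the same `X_train`/`y_train`/`X_test` split as above: bundle both steps into a single sklearn `Pipeline` and pickle that one object (the `pipeline.pkl` filename is our choice for illustration).
###Code
from pickle import dump, load
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LogisticRegression
# fit scaler and model together as one estimator
pipe = Pipeline(steps=[('scaler', MinMaxScaler()), ('model', LogisticRegression(solver='lbfgs'))])
pipe.fit(X_train, y_train)
# a single artifact to save; scaling and prediction stay in sync by construction
dump(pipe, open('pipeline.pkl', 'wb'))
loaded_pipe = load(open('pipeline.pkl', 'rb'))
yhat = loaded_pipe.predict(X_test)
###Output
_____no_output_____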
gnn-tracking/scripts/ipynb/train_nx_graph.ipynb
###Markdown Get data--------- ###Code #TODO: @tf.function def get_data(n_graphs, is_trained=True): inputs, targets = generate_input_target(n_graphs, is_trained) if isinstance(inputs[0], dict): input_graphs = utils_np.data_dicts_to_graphs_tuple(inputs) target_graphs = utils_np.data_dicts_to_graphs_tuple(targets) else: input_graphs = utils_np.networkxs_to_graphs_tuple(inputs) target_graphs = utils_np.networkxs_to_graphs_tuple(targets) return input_graphs, target_graphs ###Output _____no_output_____ ###Markdown Initialize input_graphs and target_graphs as instances of GraphsTuple--------------- ###Code model = get_model(config['model_name']) input_graphs, target_graphs = get_data(n_graphs) # dir(model) ###Output _____no_output_____ ###Markdown Optimizer------------ ###Code global_step = tf.Variable(0, trainable=False) start_learning_rate = config_tr['learning_rate'] # 0.001 learning_rate = tf.compat.v1.train.exponential_decay( start_learning_rate, global_step, decay_steps=500, decay_rate=0.97, staircase=True) # using a constant learning rate instead: # snt Adam doesn't seem to support decaying learning rate # use tf.keras.optimizers.adam for decaying learning rate optimizer = snt.optimizers.Adam(1e-3) ###Output _____no_output_____ ###Markdown loss_weights matrix------------------ ###Code loss_weights = 1.0 if config_tr['real_weight']: real_weight = config_tr['real_weight'] fake_weight = config_tr['fake_weight'] loss_weights = target_graphs.edges * real_weight + (1 - target_graphs.edges)*fake_weight ###Output _____no_output_____ ###Markdown Training step:----------------- ###Code def update_step(inputs_tr, targets_tr, loss_weights): with tf.GradientTape() as tape: outputs_tr = model._build(inputs_tr, num_processing_steps_tr) # print(outputs_tr) # print('ccc') # Loss: losses_tr = utils_train.create_loss_ops(targets_tr, outputs_tr, loss_weights) loss_tr = sum(losses_tr) / num_processing_steps_tr gradients = tape.gradient(loss_tr, model.trainable_variables) # print('aaa') optimizer.apply(gradients, model.trainable_variables) # print('ddd') return outputs_tr, loss_tr ###Output _____no_output_____ ###Markdown Compiling update_step with tf_function to speed up code:------------------- ###Code example_input_data, example_target_data = get_data(n_graphs) input_signature = [ utils_tf.specs_from_graphs_tuple(example_input_data), utils_tf.specs_from_graphs_tuple(example_target_data), tf.TensorSpec.from_tensor(loss_weights) ] # # use this function instead of update_step: compiled_update_step = tf.function(update_step, input_signature=input_signature) ###Output _____no_output_____ ###Markdown Restore Previous Checkpoint------- ###Code # both optimizer and model are snt version, they should be trackable ckpt = tf.train.Checkpoint(step=tf.Variable(1), optimizer=optimizer, model=model) manager = tf.train.CheckpointManager(ckpt, os.path.join(output_dir, ckpt_name.format(last_iteration)), max_to_keep=3) ckpt.restore(manager.latest_checkpoint) if manager.latest_checkpoint: print("loading checkpoint:", os.path.join(output_dir, ckpt_name.format(last_iteration))) else: print("Initializing from scratch.") ###Output loading checkpoint: ../out/segments_100/v0_kaggle/checkpoint_00000.ckpt ###Markdown Initialize-------- ###Code logged_iterations = [] losses_tr = [] corrects_tr = [] solveds_tr = [] ###Output _____no_output_____ ###Markdown Log Training Process----------- ###Code out_str = time.strftime('%d %b %Y %H:%M:%S', time.localtime()) out_str += '\n' out_str += "# (iteration number), T (elapsed seconds), Ltr 
(training loss), Precision, Recall\n" log_name = os.path.join(output_dir, config['log_name']) with open(log_name, 'a') as f: f.write(out_str) # instead of using feed_dict, manually take the values out inputs_tr, targets_tr = get_data(batch_size) # print(inputs_tr) outputs_tr, loss_tr = update_step(inputs_tr, targets_tr, loss_weights) ###Output _____no_output_____ ###Markdown Run Training Steps:--------------- ###Code start_time = time.time() last_log_time = start_time ## loop over iterations, each iteration generating a batch of data for training iruns = 0 all_run_time = start_time all_data_taking_time = start_time print("# (iteration number), TD (get graph), TR (TF run)") for iteration in range(last_iteration, num_training_iterations): if iruns > iter_per_job: print("runs larger than {} iterations per job, stop".format(iter_per_job)) break else: iruns += 1 last_iteration = iteration data_start_time = time.time() # instead of using feed_dict, manually take the values out inputs_tr, targets_tr = get_data(batch_size) loss_weights = targets_tr.edges * real_weight + (1 - targets_tr.edges)*fake_weight all_data_taking_time += time.time() - data_start_time # timing the run time only run_start_time = time.time() # added this: not sure if correct # print('aaa') outputs_tr, loss_tr = update_step(inputs_tr, targets_tr, loss_weights) # print('bbb') run_time = time.time() - run_start_time all_run_time += run_time the_time = time.time() elapsed_since_last_log = the_time - last_log_time if elapsed_since_last_log > log_every_seconds: # save a checkpoint # print('in here') last_log_time = the_time inputs_ge, targets_ge = get_data(batch_size, is_trained=False) loss_weights = targets_ge.edges * real_weight + (1 - targets_ge.edges)*fake_weight outputs_ge = model._build(inputs_ge, num_processing_steps_tr) losses_ge = utils_train.create_loss_ops(targets_ge, outputs_ge, loss_weights) loss_ge = sum(losses_ge) / num_processing_steps_tr correct_tr, solved_tr = utils_train.compute_matrics( targets_ge, outputs_ge[-1]) elapsed = time.time() - start_time losses_tr.append(loss_tr) corrects_tr.append(correct_tr) solveds_tr.append(solved_tr) logged_iterations.append(iteration) out_str = "# {:05d}, T {:.1f}, Ltr {:.4f}, Lge {:.4f}, Precision {:.4f}, Recall {:.4f}\n".format( iteration, elapsed, loss_tr, loss_ge, correct_tr, solved_tr) run_cost_time = all_run_time - start_time data_cost_time = all_data_taking_time - start_time print("# {:05d}, TD {:.1f}, TR {:.1f}".format(iteration, data_cost_time, run_cost_time)) with open(log_name, 'a') as f: f.write(out_str) ckpt.step.assign_add(1) save_path = manager.save() ###Output # (iteration number), TD (get graph), TR (TF run)
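###Markdown
As noted in the optimizer cell above, `snt.optimizers.Adam` is used with a constant rate because it does not accept a schedule. A minimal sketch of the decaying-rate alternative mentioned in that comment, using `tf.keras`; the decay constants mirror the `exponential_decay` call above, and the Keras optimizer applies gradients with `apply_gradients` rather than sonnet's `apply`.
###Code
# exponential decay schedule matching the tf.compat.v1 call above
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.001, decay_steps=500, decay_rate=0.97, staircase=True)
keras_optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
# inside update_step, replace optimizer.apply(...) with:
# keras_optimizer.apply_gradients(zip(gradients, model.trainable_variables))
###Output
_____no_output_____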
appendix/defining-suspected-infection.ipynb
###Markdown
Defining suspected infectionThe Sepsis 3 guidelines specify that among patients with suspected infection, a SOFA value >= 2 indicates sepsis. This notebook overviews the implementation of suspicion of infection in the MIMIC-III database.The appendix of Seymour et al. details the algorithm used, which is:* prescription of an antibiotic (excluding single dose for surgical patients)* acquisition of a fluid cultureIf these two occur in close temporal proximity, then this is seen as evidence that the clinician suspects infection in the patient.First we import all the necessary libraries. This notebook requires MIMIC-III v1.4 available in a PostgreSQL instance (you can specify the connection details below).
###Code
# Import libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import psycopg2
import sys

from IPython.display import display, HTML # used to print out pretty pandas dataframes

# colours for prettier plots
col = [[0.9047, 0.1918, 0.1988],
    [0.2941, 0.5447, 0.7494],
    [0.3718, 0.7176, 0.3612],
    [1.0000, 0.5482, 0.1000],
    [0.4550, 0.4946, 0.4722],
    [0.6859, 0.4035, 0.2412],
    [0.9718, 0.5553, 0.7741],
    [0.5313, 0.3359, 0.6523]];
marker = ['v','o','d','^','s','o','+']
ls = ['-','-','-','-','-','s','--','--']
%matplotlib inline
# create a database connection (you may need to update this)
sqluser = 'alistairewj'
dbname = 'mimic'
schema_name = 'public,mimiciii'
query_schema = 'set search_path to ' + schema_name + ';'

# Connect to local postgres version of mimic
con = psycopg2.connect(dbname=dbname, user=sqluser)
# you can also specify host and port in the above call
###Output
_____no_output_____
###Markdown
Blood cultureBlood cultures are relatively simple to identify in MIMIC-III. The `microbiologyevents` table contains all blood cultures made from patients along with sensitivities. The table is structured as follows: there is a `spec_type_desc` column which describes the specimen analyzed (usually blood, but it can be any fluid, e.g. urine, pleural, etc.).If a culture is negative (i.e. no bacteria grew), then the remaining columns are null.If a culture is positive, then for each organism that grew, a row exists and the organism is named in `org_name`. Optionally, antibiotics can be tested for sensitivity, and these are listed in `ab_name`.Here's an example.
###Code
query = query_schema + \
"""
SELECT hadm_id
, TO_CHAR(charttime, 'HH24:MI') as tm -- hour:minute of the culture
, spec_type_desc, org_name, ab_name
, interpretation
FROM microbiologyevents
WHERE hadm_id = 145167
ORDER BY charttime;
"""
df = pd.read_sql_query(query, con)
df.head(n=6)
###Output
_____no_output_____
###Markdown
Note that we have excluded some columns from this display (in particular, dilution data for the antibiotic sensitivities is also available).Reading from left to right, at the top row, we can see:* The patient's sputum was tested at 09:08 * it was positive for staphylococcus aureus * multiple antibiotics were tested * bacteria was sensitive ('S') to gentamicin, oxacillin, and levofloxacin * bacteria was resistant ('R') to erythromycin and penicillin.* The patient's urine was tested and was negative for bacterial growthUsing this logic, we can make a simple query that groups cultures together and assigns a flag indicating whether the culture was positive or negative (this is just for our information - the sepsis-3 criteria make no mention of positive vs. negative as a requirement).
###Code query_me = """ select hadm_id , chartdate, charttime , spec_type_desc -- if organism is present, then this is a positive culture, otherwise it's negative , max(case when org_name is not null and org_name != '' then 1 else 0 end) as PositiveCulture from microbiologyevents group by hadm_id, chartdate, charttime, spec_type_desc """ ###Output _____no_output_____ ###Markdown We will later combine this with antibiotics usage to define suspected infection. Antibiotics usageWe can extract antibiotic usage from two sources:* `prescriptions` table - prescribed medications* `inputevents_mv` and `inputevents_cv` tables - intravenously delivered medicationsBoth of these approaches have upsides and downsides. For prescriptions, this is sourced from a hospital wide database, so it captures antibiotic prescriptions before ICU admission. However, as these are prescriptions and not administrations, we have no guarantee that the patient received the medication (though it is very likely in the case of antibiotics). We also only have a start *date* (i.e. the day of prescription), as the exact time of prescription is not available.For the input events tables, these tables only contain antibiotics given intravenously. It's not unreasonable to assume that antibiotics given to patients admitted to the ICU will be almost entirely intravenously. We have much higher time resolution (to the hour) for antibiotic administration, but this data is only recorded in the ICU.Since we have two possible sources of antibiotic usage, we will extract antibiotics using both and compare them at the end. Antibiotics - intravenous administrationHere we use the same logic, except we extract antibiotics from the `inputevents_mv` and `inputevents_cv` tables. ###Code query_ab_ditems = \ """ with di as ( select di.* , case when lower(label) like '%' || lower('adoxa') || '%' then 1 when lower(label) like '%' || lower('ala-tet') || '%' then 1 when lower(label) like '%' || lower('alodox') || '%' then 1 when lower(label) like '%' || lower('amikacin') || '%' then 1 when lower(label) like '%' || lower('amikin') || '%' then 1 when lower(label) like '%' || lower('amoxicillin') || '%' then 1 when lower(label) like '%' || lower('amoxicillin%clavulanate') || '%' then 1 when lower(label) like '%' || lower('clavulanate') || '%' then 1 when lower(label) like '%' || lower('ampicillin') || '%' then 1 when lower(label) like '%' || lower('augmentin') || '%' then 1 when lower(label) like '%' || lower('avelox') || '%' then 1 when lower(label) like '%' || lower('avidoxy') || '%' then 1 when lower(label) like '%' || lower('azactam') || '%' then 1 when lower(label) like '%' || lower('azithromycin') || '%' then 1 when lower(label) like '%' || lower('aztreonam') || '%' then 1 when lower(label) like '%' || lower('axetil') || '%' then 1 when lower(label) like '%' || lower('bactocill') || '%' then 1 when lower(label) like '%' || lower('bactrim') || '%' then 1 when lower(label) like '%' || lower('bethkis') || '%' then 1 when lower(label) like '%' || lower('biaxin') || '%' then 1 when lower(label) like '%' || lower('bicillin l-a') || '%' then 1 when lower(label) like '%' || lower('cayston') || '%' then 1 when lower(label) like '%' || lower('cefazolin') || '%' then 1 when lower(label) like '%' || lower('cedax') || '%' then 1 when lower(label) like '%' || lower('cefoxitin') || '%' then 1 when lower(label) like '%' || lower('ceftazidime') || '%' then 1 when lower(label) like '%' || lower('cefaclor') || '%' then 1 when lower(label) like '%' || lower('cefadroxil') 
|| '%' then 1 when lower(label) like '%' || lower('cefdinir') || '%' then 1 when lower(label) like '%' || lower('cefditoren') || '%' then 1 when lower(label) like '%' || lower('cefepime') || '%' then 1 when lower(label) like '%' || lower('cefotetan') || '%' then 1 when lower(label) like '%' || lower('cefotaxime') || '%' then 1 when lower(label) like '%' || lower('cefpodoxime') || '%' then 1 when lower(label) like '%' || lower('cefprozil') || '%' then 1 when lower(label) like '%' || lower('ceftibuten') || '%' then 1 when lower(label) like '%' || lower('ceftin') || '%' then 1 when lower(label) like '%' || lower('cefuroxime ') || '%' then 1 when lower(label) like '%' || lower('cefuroxime') || '%' then 1 when lower(label) like '%' || lower('cephalexin') || '%' then 1 when lower(label) like '%' || lower('chloramphenicol') || '%' then 1 when lower(label) like '%' || lower('cipro') || '%' then 1 when lower(label) like '%' || lower('ciprofloxacin') || '%' then 1 when lower(label) like '%' || lower('claforan') || '%' then 1 when lower(label) like '%' || lower('clarithromycin') || '%' then 1 when lower(label) like '%' || lower('cleocin') || '%' then 1 when lower(label) like '%' || lower('clindamycin') || '%' then 1 when lower(label) like '%' || lower('cubicin') || '%' then 1 when lower(label) like '%' || lower('dicloxacillin') || '%' then 1 when lower(label) like '%' || lower('doryx') || '%' then 1 when lower(label) like '%' || lower('doxycycline') || '%' then 1 when lower(label) like '%' || lower('duricef') || '%' then 1 when lower(label) like '%' || lower('dynacin') || '%' then 1 when lower(label) like '%' || lower('ery-tab') || '%' then 1 when lower(label) like '%' || lower('eryped') || '%' then 1 when lower(label) like '%' || lower('eryc') || '%' then 1 when lower(label) like '%' || lower('erythrocin') || '%' then 1 when lower(label) like '%' || lower('erythromycin') || '%' then 1 when lower(label) like '%' || lower('factive') || '%' then 1 when lower(label) like '%' || lower('flagyl') || '%' then 1 when lower(label) like '%' || lower('fortaz') || '%' then 1 when lower(label) like '%' || lower('furadantin') || '%' then 1 when lower(label) like '%' || lower('garamycin') || '%' then 1 when lower(label) like '%' || lower('gentamicin') || '%' then 1 when lower(label) like '%' || lower('kanamycin') || '%' then 1 when lower(label) like '%' || lower('keflex') || '%' then 1 when lower(label) like '%' || lower('ketek') || '%' then 1 when lower(label) like '%' || lower('levaquin') || '%' then 1 when lower(label) like '%' || lower('levofloxacin') || '%' then 1 when lower(label) like '%' || lower('lincocin') || '%' then 1 when lower(label) like '%' || lower('macrobid') || '%' then 1 when lower(label) like '%' || lower('macrodantin') || '%' then 1 when lower(label) like '%' || lower('maxipime') || '%' then 1 when lower(label) like '%' || lower('mefoxin') || '%' then 1 when lower(label) like '%' || lower('metronidazole') || '%' then 1 when lower(label) like '%' || lower('minocin') || '%' then 1 when lower(label) like '%' || lower('minocycline') || '%' then 1 when lower(label) like '%' || lower('monodox') || '%' then 1 when lower(label) like '%' || lower('monurol') || '%' then 1 when lower(label) like '%' || lower('morgidox') || '%' then 1 when lower(label) like '%' || lower('moxatag') || '%' then 1 when lower(label) like '%' || lower('moxifloxacin') || '%' then 1 when lower(label) like '%' || lower('myrac') || '%' then 1 when lower(label) like '%' || lower('nafcillin sodium') || '%' then 1 when lower(label) 
like '%' || lower('nicazel doxy 30') || '%' then 1 when lower(label) like '%' || lower('nitrofurantoin') || '%' then 1 when lower(label) like '%' || lower('noroxin') || '%' then 1 when lower(label) like '%' || lower('ocudox') || '%' then 1 when lower(label) like '%' || lower('ofloxacin') || '%' then 1 when lower(label) like '%' || lower('omnicef') || '%' then 1 when lower(label) like '%' || lower('oracea') || '%' then 1 when lower(label) like '%' || lower('oraxyl') || '%' then 1 when lower(label) like '%' || lower('oxacillin') || '%' then 1 when lower(label) like '%' || lower('pc pen vk') || '%' then 1 when lower(label) like '%' || lower('pce dispertab') || '%' then 1 when lower(label) like '%' || lower('panixine') || '%' then 1 when lower(label) like '%' || lower('pediazole') || '%' then 1 when lower(label) like '%' || lower('penicillin') || '%' then 1 when lower(label) like '%' || lower('periostat') || '%' then 1 when lower(label) like '%' || lower('pfizerpen') || '%' then 1 when lower(label) like '%' || lower('piperacillin') || '%' then 1 when lower(label) like '%' || lower('tazobactam') || '%' then 1 when lower(label) like '%' || lower('primsol') || '%' then 1 when lower(label) like '%' || lower('proquin') || '%' then 1 when lower(label) like '%' || lower('raniclor') || '%' then 1 when lower(label) like '%' || lower('rifadin') || '%' then 1 when lower(label) like '%' || lower('rifampin') || '%' then 1 when lower(label) like '%' || lower('rocephin') || '%' then 1 when lower(label) like '%' || lower('smz-tmp') || '%' then 1 when lower(label) like '%' || lower('septra') || '%' then 1 when lower(label) like '%' || lower('septra ds') || '%' then 1 when lower(label) like '%' || lower('septra') || '%' then 1 when lower(label) like '%' || lower('solodyn') || '%' then 1 when lower(label) like '%' || lower('spectracef') || '%' then 1 when lower(label) like '%' || lower('streptomycin sulfate') || '%' then 1 when lower(label) like '%' || lower('sulfadiazine') || '%' then 1 when lower(label) like '%' || lower('sulfamethoxazole') || '%' then 1 when lower(label) like '%' || lower('trimethoprim') || '%' then 1 when lower(label) like '%' || lower('sulfatrim') || '%' then 1 when lower(label) like '%' || lower('sulfisoxazole') || '%' then 1 when lower(label) like '%' || lower('suprax') || '%' then 1 when lower(label) like '%' || lower('synercid') || '%' then 1 when lower(label) like '%' || lower('tazicef') || '%' then 1 when lower(label) like '%' || lower('tetracycline') || '%' then 1 when lower(label) like '%' || lower('timentin') || '%' then 1 when lower(label) like '%' || lower('tobi') || '%' then 1 when lower(label) like '%' || lower('tobramycin') || '%' then 1 when lower(label) like '%' || lower('trimethoprim') || '%' then 1 when lower(label) like '%' || lower('unasyn') || '%' then 1 when lower(label) like '%' || lower('vancocin') || '%' then 1 when lower(label) like '%' || lower('vancomycin') || '%' then 1 when lower(label) like '%' || lower('vantin') || '%' then 1 when lower(label) like '%' || lower('vibativ') || '%' then 1 when lower(label) like '%' || lower('vibra-tabs') || '%' then 1 when lower(label) like '%' || lower('vibramycin') || '%' then 1 when lower(label) like '%' || lower('zinacef') || '%' then 1 when lower(label) like '%' || lower('zithromax') || '%' then 1 when lower(label) like '%' || lower('zmax') || '%' then 1 when lower(label) like '%' || lower('zosyn') || '%' then 1 when lower(label) like '%' || lower('zyvox') || '%' then 1 else 0 end as antibiotic from d_items di where linksto 
in ('inputevents_mv','inputevents_cv') ) """ ###Output _____no_output_____ ###Markdown The above (crudely) extracts all possible antibiotics in the data. Now we can isolate antibiotic administrations from `inputevents_mv` and `inputevents_cv`. ###Code # metavision query query_abx_iv_mv = """ , mv as ( select icustay_id , label as first_antibiotic_name , starttime as first_antibiotic_time , ROW_NUMBER() over (partition by icustay_id order by starttime, endtime) as rn from inputevents_mv mv inner join di on mv.itemid = di.itemid and di.antibiotic = 1 where statusdescription != 'Rewritten' ) """ query_abx_iv_cv = """ , cv as ( select icustay_id , label as first_antibiotic_name , charttime as first_antibiotic_time , ROW_NUMBER() over (partition by icustay_id order by charttime) as rn from inputevents_cv cv inner join di on cv.itemid = di.itemid and di.antibiotic = 1 ) """ # join these two together query_abtbl = \ """ , ab_tbl as ( select ie.icustay_id, ie.hadm_id , case when mv.first_antibiotic_time is not null and cv.first_antibiotic_time is not null then case when mv.first_antibiotic_time < cv.first_antibiotic_time then mv.first_antibiotic_name else cv.first_antibiotic_name end else coalesce(mv.first_antibiotic_name, cv.first_antibiotic_name) end first_antibiotic_name , case when mv.first_antibiotic_time is not null and cv.first_antibiotic_time is not null then case when mv.first_antibiotic_time < cv.first_antibiotic_time then mv.first_antibiotic_time else cv.first_antibiotic_time end else coalesce(mv.first_antibiotic_time, cv.first_antibiotic_time) end first_antibiotic_time from icustays ie left join mv on ie.icustay_id = mv.icustay_id and mv.rn = 1 left join cv on ie.icustay_id = cv.icustay_id and cv.rn = 1 ) """ query_abx_iv = query_ab_ditems + query_abx_iv_mv + query_abx_iv_cv + query_abtbl # select all from the last temp view abx_iv = pd.read_sql_query(query_schema + query_abx_iv + 'select * from ab_tbl',con) abx_iv.describe(include='all') ###Output _____no_output_____ ###Markdown Defining suspected infectionSuspected infection is defined as:* Antibiotics within 72 hours of a culture* A culture within 24 hours of antibioticsSo the next step is to left join to this table from the antibiotic table with the following rules:* If first_antibiotic_time is null, do not join* If charttime is null, use chartdate and add an extra day to allow for fuzziness* If charttime is not null, use charttimeHere's the first stab: ###Code # add micro table query_abx_iv_me = query_abx_iv + ", me as ( " + query_me + " ) " query = query_abx_iv_me + \ """ select ab_tbl.* , me72.spec_type_desc as last72_specimen , coalesce(me72.charttime, me72.chartdate) as last72_time , me72.PositiveCulture as last72_positive , me24.spec_type_desc as next24_specimen , coalesce(me24.charttime, me24.chartdate)as next24_time , me24.PositiveCulture as next24_positive from ab_tbl -- blood culture in last 72 hours left join me me72 on ab_tbl.hadm_id = me72.hadm_id and ab_tbl.first_antibiotic_time is not null and ( -- if charttime is available, use it ( ab_tbl.first_antibiotic_time > me72.charttime and ab_tbl.first_antibiotic_time <= me72.charttime + interval '72' hour ) OR ( -- if charttime is not available, use chartdate me72.charttime is null and ab_tbl.first_antibiotic_time > me72.chartdate and ab_tbl.first_antibiotic_time < me72.chartdate + interval '96' hour -- could equally do this with a date_trunc, but that's less portable ) ) -- blood culture in subsequent 24 hours left join me me24 on ab_tbl.hadm_id = me24.hadm_id and 
ab_tbl.first_antibiotic_time is not null
and me24.charttime is not null
and
(
  -- if charttime is available, use it
  (
      ab_tbl.first_antibiotic_time > me24.charttime - interval '24' hour
  and ab_tbl.first_antibiotic_time <= me24.charttime
  )
  OR
  (
  -- if charttime is not available, use chartdate
      me24.charttime is null
  and ab_tbl.first_antibiotic_time > me24.chartdate
  and ab_tbl.first_antibiotic_time <= me24.chartdate + interval '24' hour
  )
)
"""
ab = pd.read_sql_query(query_schema + query,con)
ab.head()
###Output
_____no_output_____
###Markdown
For now, let's just define it based on the antibiotic dose, and look for *all* cultures before/after. This will not affect the definition of suspected infection (as the definition only requires one).We'll group and count how often charttime is available, and how often chartdate is available. We're a bit worried about quantization due to chartdate.
###Code
query = query_abx_iv_me + \
"""
, ab_fnl as
(
select
  ab_tbl.icustay_id
  , ab_tbl.first_antibiotic_name
  , ab_tbl.first_antibiotic_time
  , min(me72.charttime) as last72_charttime
  , min(me72.chartdate) as last72_chartdate
  , min(me24.charttime) as next24_charttime
  , min(me24.chartdate) as next24_chartdate
from ab_tbl
-- blood culture in last 72 hours
left join me me72
  on ab_tbl.hadm_id = me72.hadm_id
  and ab_tbl.first_antibiotic_time is not null
  and
  (
    -- if charttime is available, use it
    (
        ab_tbl.first_antibiotic_time > me72.charttime
    and ab_tbl.first_antibiotic_time <= me72.charttime + interval '72' hour
    )
    OR
    (
    -- if charttime is not available, use chartdate
        me72.charttime is null
    and ab_tbl.first_antibiotic_time > me72.chartdate
    and ab_tbl.first_antibiotic_time < me72.chartdate + interval '96' hour -- could equally do this with a date_trunc, but that's less portable
    )
  )
-- blood culture in subsequent 24 hours
left join me me24
  on ab_tbl.hadm_id = me24.hadm_id
  and ab_tbl.first_antibiotic_time is not null
  and me24.charttime is not null
  and
  (
    -- if charttime is available, use it
    (
        ab_tbl.first_antibiotic_time > me24.charttime - interval '24' hour
    and ab_tbl.first_antibiotic_time <= me24.charttime
    )
    OR
    (
    -- if charttime is not available, use chartdate
        me24.charttime is null
    and ab_tbl.first_antibiotic_time > me24.chartdate
    and ab_tbl.first_antibiotic_time <= me24.chartdate + interval '24' hour
    )
  )
group by ab_tbl.icustay_id, ab_tbl.first_antibiotic_name, ab_tbl.first_antibiotic_time
)
select
    count(icustay_id) as NumICU
  , count(first_antibiotic_name) as HasAB
  , count(last72_charttime) as last72_charttime
  , count(last72_chartdate) as last72_chartdate
  , count(case when last72_charttime is null then last72_chartdate end) as last72_chartdate_only
  , count(next24_charttime) as next24_charttime
  , count(next24_chartdate) as next24_chartdate
  , count(case when next24_charttime is null then next24_chartdate end) as next24_chartdate_only
from ab_fnl
"""
ab = pd.read_sql_query(query_schema + query,con)
ab.head()
###Output
_____no_output_____
###Markdown
This tells us something: if the culture was drawn in the ICU (as it would be for next 24 hours), then it always has a charttime. But if it's drawn before the ICU, then it is (very infrequently) missing a charttime. Let's look at these chartdates.
###Code query = query_abx_iv_me + \ """ , ab_fnl as ( select ab_tbl.icustay_id , ab_tbl.first_antibiotic_name , ab_tbl.first_antibiotic_time , min(me72.charttime) as last72_charttime , min(me72.chartdate) as last72_chartdate , min(me24.charttime) as next24_charttime , min(me24.chartdate) as next24_chartdate from ab_tbl -- blood culture in last 72 hours left join me me72 on ab_tbl.hadm_id = me72.hadm_id and ab_tbl.first_antibiotic_time is not null and ( -- if charttime is available, use it ( ab_tbl.first_antibiotic_time > me72.charttime and ab_tbl.first_antibiotic_time <= me72.charttime + interval '72' hour ) OR ( -- if charttime is not available, use chartdate me72.charttime is null and ab_tbl.first_antibiotic_time > me72.chartdate and ab_tbl.first_antibiotic_time < me72.chartdate + interval '96' hour -- could equally do this with a date_trunc, but that's less portable ) ) -- blood culture in subsequent 24 hours left join me me24 on ab_tbl.hadm_id = me24.hadm_id and ab_tbl.first_antibiotic_time is not null and me24.charttime is not null and ( -- if charttime is available, use it ( ab_tbl.first_antibiotic_time > me24.charttime - interval '24' hour and ab_tbl.first_antibiotic_time <= me24.charttime ) OR ( -- if charttime is not available, use chartdate me24.charttime is null and ab_tbl.first_antibiotic_time > me24.chartdate and ab_tbl.first_antibiotic_time <= me24.chartdate + interval '24' hour ) ) group by ab_tbl.icustay_id, ab_tbl.first_antibiotic_name, ab_tbl.first_antibiotic_time ) select ab_fnl.* from ab_fnl where last72_charttime is null and last72_chartdate is not null; """ ab = pd.read_sql_query(query_schema + query,con) ab.head(n=10) ###Output _____no_output_____ ###Markdown For these cases, let's assume the first antibiotic time is the starting time of suspicion of infection.With this rule, we can create a table for the antibiotic definition using IV. 
###Code query = query_schema + \ """ DROP TABLE IF EXISTS abx_micro_iv CASCADE; CREATE TABLE abx_micro_iv as """ + query_abx_iv_me + \ """ , ab_fnl as ( select ab_tbl.icustay_id , ab_tbl.first_antibiotic_name , ab_tbl.first_antibiotic_time , me72.charttime as last72_charttime , me72.chartdate as last72_chartdate , me24.charttime as next24_charttime , me24.chartdate as next24_chartdate , me72.positiveculture as last72_positiveculture , me72.spec_type_desc as last72_specimen , me24.positiveculture as next24_positiveculture , me24.spec_type_desc as next24_specimen , ROW_NUMBER() over (partition by ab_tbl.icustay_id order by coalesce(me72.charttime, me24.charttime, me72.chartdate)) as rn from ab_tbl -- blood culture in last 72 hours left join me me72 on ab_tbl.hadm_id = me72.hadm_id and ab_tbl.first_antibiotic_time is not null and ( -- if charttime is available, use it ( ab_tbl.first_antibiotic_time > me72.charttime and ab_tbl.first_antibiotic_time <= me72.charttime + interval '72' hour ) OR ( -- if charttime is not available, use chartdate me72.charttime is null and ab_tbl.first_antibiotic_time > me72.chartdate and ab_tbl.first_antibiotic_time < me72.chartdate + interval '96' hour -- could equally do this with a date_trunc, but that's less portable ) ) -- blood culture in subsequent 24 hours left join me me24 on ab_tbl.hadm_id = me24.hadm_id and ab_tbl.first_antibiotic_time is not null and me24.charttime is not null and ( -- if charttime is available, use it ( ab_tbl.first_antibiotic_time > me24.charttime - interval '24' hour and ab_tbl.first_antibiotic_time <= me24.charttime ) OR ( -- if charttime is not available, use chartdate me24.charttime is null and ab_tbl.first_antibiotic_time > me24.chartdate and ab_tbl.first_antibiotic_time <= me24.chartdate + interval '24' hour ) ) ) select ab_fnl.icustay_id , first_antibiotic_name as antibiotic_name , first_antibiotic_time as antibiotic_time , coalesce(last72_charttime, last72_chartdate) as last72_charttime , coalesce(next24_charttime, next24_chartdate) as next24_charttime -- time of suspected infection: either the culture time (if before antibiotic), or the antibiotic time , case when last72_charttime is not null then last72_charttime when next24_charttime is not null or last72_chartdate is not null then first_antibiotic_time else null end as suspected_infection_time -- the specimen that was cultured , case when last72_charttime is not null or last72_chartdate is not null then last72_specimen when next24_charttime is not null then next24_specimen else null end as specimen -- whether the cultured specimen ended up being positive or not , case when last72_charttime is not null or last72_chartdate is not null then last72_positiveculture when next24_charttime is not null then next24_positiveculture else null end as positiveculture from ab_fnl where rn = 1 order by icustay_id; """ cur = con.cursor() cur.execute(query) cur.execute('COMMIT;') cur.close() print('abx_micro_iv table created!') ###Output abx_micro_iv table created! ###Markdown Antibiotics using `prescriptions`Now we follow the same logic, except we will define antibiotics using `prescriptions`.First, we call a query available in this repository that generates a list of antibiotics available in the `prescriptions` table. This query can take some time, so we add some code that checks if the table already exists, and only creates the table if we need to. 
###Code
# check if abx_poe_list table exists
# "abx_poe_list"
#   abx == antibiotics
#   poe == provider order entry == the hospital interface that doctors use to prescribe medications
query = query_schema + \
"select * from information_schema.tables where table_name = 'abx_poe_list'"
df = pd.read_sql_query(query,con)
if df.shape[0]>0:
    print('Table exists - skipping regeneration of table.')
else:
    # read the query from the file into a single string
    with open('../query/tbls/abx-poe-list.sql') as fp:
        query = fp.read() # read(), not readlines(): cur.execute needs a single string
    # call the query to create the table using a cursor
    cur = con.cursor()
    cur.execute(query_schema)
    cur.execute(query)
    cur.execute('COMMIT;')
    cur.close()
    print('abx_poe_list table created!')

# load in the table
query = query_schema + \
"select * from abx_poe_list;"
abx_poe = pd.read_sql_query(query, con)
abx_poe.head()
###Output
_____no_output_____
###Markdown
Here we can see a list of the antibiotics along with a count of how frequently that antibiotic occurred in the prescription table. Vancomycin is by far the most common, though it is not consistently documented using the same drug name. Just for reference: the query to find these antibiotics was generated using regular expressions and then manually reviewed by a clinical data scientist and an intensivist.With this table in hand, we can easily isolate prescribed antibiotics for all patients using an inner join:
###Code
query_abx_poe = """
select pr.hadm_id
, pr.drug as antibiotic_name
, pr.startdate as antibiotic_time
, pr.enddate as antibiotic_endtime
from prescriptions pr
-- inner join to subselect to only antibiotic prescriptions
inner join abx_poe_list ab
  on pr.drug = ab.drug
"""
###Output
_____no_output_____
###Markdown
Now we recreate the rules as outlined by sepsis-3:* antibiotics up to 72 hours *before* blood culture* blood culture up to 24 hours *before* antibioticsThis query would be relatively straightforward, except we only have prescriptions on a daily basis and sometimes blood cultures are only recorded at day resolution. In these cases, we have decided to be a bit more flexible with the time windows by extending them 24 hours. The result is as follows: if I have a blood culture on `01-20 00:00`, then the query looks back *96* hours, not *72*, which allows a prescription dated `01-17 00:00` to be included. Note that the prescription dated `01-17 00:00` was likely done sometime during the day, and so this rule is conservative.
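###Markdown
To make the widened window concrete, a small sketch of the same rule in plain Python (the dates are hypothetical, chosen to match the example above; a date-only record is treated as midnight, so the 72-hour look-back is widened to 96 hours).
###Code
from datetime import datetime, timedelta
# hypothetical values matching the example above (day resolution => midnight)
culture_time = datetime(2101, 1, 20, 0, 0)   # blood culture
abx_date = datetime(2101, 1, 17)             # prescription startdate (date only)
# widen the 72-hour look-back to 96 hours to absorb the missing time-of-day
in_window = culture_time - timedelta(hours=96) <= abx_date <= culture_time
print(in_window)
###Output
True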
###Code query = query_schema + \ """ DROP TABLE IF EXISTS abx_micro_poe CASCADE; CREATE TABLE abx_micro_poe as with abx as ( """ + query_abx_poe + """ ) -- get cultures for each icustay -- note this duplicates prescriptions -- each ICU stay in the same hospitalization will get a copy of all prescriptions for that hospitalization , ab_tbl as ( select ie.subject_id, ie.hadm_id, ie.icustay_id , ie.intime, ie.outtime , abx.antibiotic_name , abx.antibiotic_time , abx.antibiotic_endtime from icustays ie left join abx on ie.hadm_id = abx.hadm_id ) , me as ( """ + query_me + """ ) , ab_fnl as ( select ab_tbl.icustay_id, ab_tbl.intime, ab_tbl.outtime , ab_tbl.antibiotic_name , ab_tbl.antibiotic_time , coalesce(me72.charttime,me72.chartdate) as last72_charttime , coalesce(me24.charttime,me24.chartdate) as next24_charttime , me72.positiveculture as last72_positiveculture , me72.spec_type_desc as last72_specimen , me24.positiveculture as next24_positiveculture , me24.spec_type_desc as next24_specimen from ab_tbl -- blood culture in last 72 hours left join me me72 on ab_tbl.hadm_id = me72.hadm_id and ab_tbl.antibiotic_time is not null and ( -- if charttime is available, use it ( ab_tbl.antibiotic_time > me72.charttime and ab_tbl.antibiotic_time <= me72.charttime + interval '72' hour ) OR ( -- if charttime is not available, use chartdate me72.charttime is null and ab_tbl.antibiotic_time > me72.chartdate and ab_tbl.antibiotic_time < me72.chartdate + interval '96' hour -- could equally do this with a date_trunc, but that's less portable ) ) -- blood culture in subsequent 24 hours left join me me24 on ab_tbl.hadm_id = me24.hadm_id and ab_tbl.antibiotic_time is not null and me24.charttime is not null and ( -- if charttime is available, use it ( ab_tbl.antibiotic_time > me24.charttime - interval '24' hour and ab_tbl.antibiotic_time <= me24.charttime ) OR ( -- if charttime is not available, use chartdate me24.charttime is null and ab_tbl.antibiotic_time > me24.chartdate and ab_tbl.antibiotic_time <= me24.chartdate + interval '24' hour ) ) ) , ab_laststg as ( select icustay_id , antibiotic_name , antibiotic_time , last72_charttime , next24_charttime -- time of suspected infection: either the culture time (if before antibiotic), or the antibiotic time , case when coalesce(last72_charttime,next24_charttime) is null then 0 else 1 end as suspected_infection , coalesce(last72_charttime,next24_charttime) as suspected_infection_time -- the specimen that was cultured , case when last72_charttime is not null then last72_specimen when next24_charttime is not null then next24_specimen else null end as specimen -- whether the cultured specimen ended up being positive or not , case when last72_charttime is not null then last72_positiveculture when next24_charttime is not null then next24_positiveculture else null end as positiveculture -- used to identify the *first* occurrence of suspected infection , ROW_NUMBER() over ( PARTITION BY ab_fnl.icustay_id ORDER BY coalesce(last72_charttime, next24_charttime) ) as rn from ab_fnl ) select icustay_id , antibiotic_name , antibiotic_time , last72_charttime , next24_charttime , suspected_infection_time , specimen, positiveculture from ab_laststg where rn=1 order by icustay_id, antibiotic_time; """ cur = con.cursor() cur.execute(query) cur.execute('COMMIT;') cur.close() print('abx_micro_poe table created!') ###Output abx_micro_poe table created! ###Markdown Compare definitionsFirst, let's load the data in and look at a few rows. 
###Code df_abx_poe = pd.read_sql_query(query_schema + "select * from abx_micro_poe", con) df_abx_iv = pd.read_sql_query(query_schema + "select * from abx_micro_iv", con) print('Prescriptions') display(HTML(df_abx_poe.head().to_html())) print('IV') display(HTML(df_abx_iv.head().to_html())) ###Output Prescriptions ###Markdown Right away we can see there may be a substantial disagreement in the frequency. ###Code print('Prescriptions: {}'.format(df_abx_poe['suspected_infection_time'].count())) print('Intravenous: {}'.format(df_abx_iv['suspected_infection_time'].count())) ###Output Prescriptions: 35389 Intravenous: 13689 ###Markdown On a hunch, let's stratify this based off year: 2001-2008 ('carevue') and 2008-2012 ('metavision'). ###Code query_add_dbsource = " abx inner join icustays ie on abx.icustay_id = ie.icustay_id" df_abx_poe = pd.read_sql_query(query_schema + "select abx.*, ie.dbsource from abx_micro_poe" + query_add_dbsource, con) df_abx_iv = pd.read_sql_query(query_schema + "select abx.*, ie.dbsource from abx_micro_iv" + query_add_dbsource, con) print('Prescriptions') print(df_abx_poe.groupby('dbsource')['suspected_infection_time'].count()) print('') print('Intravenous') print(df_abx_iv.groupby('dbsource')['suspected_infection_time'].count()) ###Output Prescriptions dbsource both 117 carevue 18423 metavision 16849 Name: suspected_infection_time, dtype: int64 Intravenous dbsource both 62 carevue 25 metavision 13602 Name: suspected_infection_time, dtype: int64
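###Markdown
To quantify how often the two definitions flag the same ICU stay, a sketch merging the two tables on `icustay_id` (assuming the `df_abx_poe` and `df_abx_iv` dataframes loaded above, which each have one row per `icustay_id`):
###Code
# merge the two suspicion-of-infection definitions and count the overlap
cmp = df_abx_poe[['icustay_id', 'suspected_infection_time']].merge(
    df_abx_iv[['icustay_id', 'suspected_infection_time']],
    on='icustay_id', suffixes=('_poe', '_iv'))
both = cmp['suspected_infection_time_poe'].notnull() & cmp['suspected_infection_time_iv'].notnull()
print('Suspected infection under both definitions: {}'.format(both.sum()))
###Output
_____no_output_____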
notebooks/torch-gpu-test.ipynb
###Markdown Validating PyTorch installation ###Code # First, import PyTorch import torch # Check PyTorch version torch.__version__ # Instant validation # True - Success!!! # False - Something is wrong!!! torch.cuda.is_available() device_cnt = torch.cuda.device_count() cur_device = torch.cuda.current_device() device_name = torch.cuda.get_device_name(cur_device) print(f"Number of CUDA-enabled GPUs: {device_cnt}\nCurrent Device ID: {cur_device}\nCurrent Device Name: {device_name}") ###Output Number of CUDA-enabled GPUs: 1 Current Device ID: 0 Current Device Name: NVIDIA GeForce RTX 2060 with Max-Q Design
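###Markdown
Beyond the device queries above, a quick smoke test (a minimal sketch) that actually allocates a tensor on the GPU and runs an operation on it:
###Code
# allocate a tensor directly on the GPU and run a small matrix multiply
x = torch.rand(3, 3, device='cuda')
y = x @ x.t()
print(y.device)  # expect cuda:0 if the installation is working
###Output
_____no_output_____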
notebooks/Render cyto clusters.ipynb
###Markdown
Read the ventricle segmentation
###Code
with open(os.path.join(working_dir, 'mesh_ventricles.pkl'), mode='rb') as f:
    mesh = pickle.load(f)
mesh.keys()
verts = mesh['verts']
faces = mesh['faces']
normals = mesh['normals']
values = mesh['values']
normals.shape
###Output
_____no_output_____
###Markdown
Read the profile labels
###Code
labels = np.load(os.path.join(working_dir, 'cyto_labels.npy'))
labels.shape, np.unique(labels)
###Output
_____no_output_____
###Markdown
Make the rendered ventricles
###Code
n_clusters = 6

# render the mesh, coloring each vertex by its cluster label
surf = mlab.triangular_mesh([vert[0] for vert in verts],
                            [vert[1] for vert in verts],
                            [vert[2] for vert in verts],
                            faces,
                            colormap='cool',
                            vmin=0,
                            vmax=255,
                            scalars=labels)

lut = surf.module_manager.scalar_lut_manager.lut.table.to_array()

# # Class 0 RGBA color
# lut[0, 0] = palette[0][0] * 255
# lut[0, 1] = palette[0][1] * 255
# lut[0, 2] = palette[0][2] * 255
# lut[0, 3] = 255

# Use seaborn color palette: overwrite the RGB entries of the first n_clusters LUT rows
for i in range(n_clusters):
#     if i != 3:
#         lut[i, 3] = 0
#         continue
    color = palette[i]
    print(color)
    for j, c in enumerate(color):
        val = int(c * 255)
        lut[i, j] = val

surf.module_manager.scalar_lut_manager.lut.table = lut
mlab.show()
sns.palplot(palette)
palette
np.unique(labels)
###Output
_____no_output_____
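###Markdown
If you want to keep a copy of the rendering, mayavi can write the current scene to an image file; a minimal sketch (the filename is our choice for illustration, and `mlab.savefig` should be called while the scene is still open, i.e. before closing the window):
###Code
# save a screenshot of the current mayavi scene to disk
mlab.savefig(os.path.join(working_dir, 'ventricle_clusters.png'), magnification=2)
###Output
_____no_output_____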
Model backlog/Deep Learning/[24th] Bi GRU - Map and replace preprocess.ipynb
###Markdown
Dependencies
###Code
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn import metrics
from sklearn.model_selection import train_test_split
from keras import optimizers
from keras.models import Model
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, LearningRateScheduler
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Dense, Input, Embedding, Dropout, Activation, CuDNNGRU, Conv1D, Bidirectional, GlobalMaxPool1D

# Set seeds to make the experiment more reproducible.
from tensorflow import set_random_seed
from numpy.random import seed
set_random_seed(0)
seed(0)

%matplotlib inline
sns.set_style("whitegrid")
pd.set_option('display.float_format', lambda x: '%.4f' % x)
warnings.filterwarnings("ignore")
train = pd.read_csv("../input/train.csv")
test = pd.read_csv("../input/test.csv")
print("Train shape : ", train.shape)
print("Test shape : ", test.shape)
###Output
Train shape :  (1804874, 45)
Test shape :  (97320, 2)
###Markdown
Preprocess
###Code
train['target'] = np.where(train['target'] >= 0.5, 1, 0)
train['comment_text'] = train['comment_text'].astype(str)
X_test = test['comment_text'].astype(str)

# Lower comments
train['comment_text'] = train['comment_text'].apply(lambda x: x.lower())
X_test = X_test.apply(lambda x: x.lower())

# Mapping Punctuation (each key appears exactly once)
def map_punctuation(data):
    punct_mapping = {"‘": "'", "₹": "e", "´": "'", "°": "", "€": "e", "™": "tm", "√": " sqrt ", "×": "x", "²": "2",
                     "—": "-", "–": "-", "’": "'", "_": "-", "`": "'", '“': '"', '”': '"', "£": "e",
                     '∞': 'infinity', 'θ': 'theta', '÷': '/', 'α': 'alpha', '•': '.', 'à': 'a', '−': '-',
                     'β': 'beta', '∅': '', '³': '3', 'π': 'pi'}
    def clean_special_chars(text, mapping):
        for p in mapping:
            text = text.replace(p, mapping[p])
        return text
    return data.apply(lambda x: clean_special_chars(x, punct_mapping))

train['comment_text'] = map_punctuation(train['comment_text'])
X_test = map_punctuation(X_test)

# Removing Punctuation
def remove_punctuation(data):
    punct = "/-'?!.,#$%\'()*+-/:;<=>@[\\]^_`{|}~`" + '""“”’' + '∞θ÷α•à−β∅³π‘₹´°£€\×™√²—–&'
    def clean_special_chars(text, punct):
        for p in punct:
            text = text.replace(p, ' ')
        return text
    return data.apply(lambda x: clean_special_chars(x, punct))

train['comment_text'] = remove_punctuation(train['comment_text'])
X_test = remove_punctuation(X_test)

# Clean contractions
def clean_contractions(text):
    specials = ["’", "‘", "´", "`"]
    for s in specials:
        text = text.replace(s, "'")
    return text

train['comment_text'] = train['comment_text'].apply(lambda x: clean_contractions(x))
X_test = X_test.apply(lambda x: clean_contractions(x))

# Mapping contraction
def map_contraction(data):
    contraction_mapping = {
        "Trump's" : 'trump is',"'cause": 'because',',cause': 'because',';cause': 'because',"ain't": 'am not','ain,t': 'am not',
        'ain;t': 'am not','ain´t': 'am not','ain’t': 'am not',"aren't": 'are not',
        'aren,t': 'are not','aren;t': 'are not','aren´t': 'are not','aren’t': 'are not',"can't": 'cannot',"can't've": 'cannot have','can,t': 'cannot','can,t,ve': 'cannot have',
        'can;t': 'cannot','can;t;ve': 'cannot have',
        'can´t': 'cannot','can´t´ve': 'cannot have','can’t': 'cannot','can’t’ve': 'cannot have',
        "could've": 'could have','could,ve': 'could have','could;ve': 'could have',"couldn't": 'could not',"couldn't've": 'could not have','couldn,t': 'could not','couldn,t,ve': 'could not have','couldn;t': 'could not',
'couldn;t;ve': 'could not have','couldn´t': 'could not', 'couldn´t´ve': 'could not have','couldn’t': 'could not','couldn’t’ve': 'could not have','could´ve': 'could have', 'could’ve': 'could have',"didn't": 'did not','didn,t': 'did not','didn;t': 'did not','didn´t': 'did not', 'didn’t': 'did not',"doesn't": 'does not','doesn,t': 'does not','doesn;t': 'does not','doesn´t': 'does not', 'doesn’t': 'does not',"don't": 'do not','don,t': 'do not','don;t': 'do not','don´t': 'do not','don’t': 'do not', "hadn't": 'had not',"hadn't've": 'had not have','hadn,t': 'had not','hadn,t,ve': 'had not have','hadn;t': 'had not', 'hadn;t;ve': 'had not have','hadn´t': 'had not','hadn´t´ve': 'had not have','hadn’t': 'had not','hadn’t’ve': 'had not have',"hasn't": 'has not','hasn,t': 'has not','hasn;t': 'has not','hasn´t': 'has not','hasn’t': 'has not', "haven't": 'have not','haven,t': 'have not','haven;t': 'have not','haven´t': 'have not','haven’t': 'have not',"he'd": 'he would', "he'd've": 'he would have',"he'll": 'he will', "he's": 'he is','he,d': 'he would','he,d,ve': 'he would have','he,ll': 'he will','he,s': 'he is','he;d': 'he would', 'he;d;ve': 'he would have','he;ll': 'he will','he;s': 'he is','he´d': 'he would','he´d´ve': 'he would have','he´ll': 'he will', 'he´s': 'he is','he’d': 'he would','he’d’ve': 'he would have','he’ll': 'he will','he’s': 'he is',"how'd": 'how did',"how'll": 'how will', "how's": 'how is','how,d': 'how did','how,ll': 'how will','how,s': 'how is','how;d': 'how did','how;ll': 'how will', 'how;s': 'how is','how´d': 'how did','how´ll': 'how will','how´s': 'how is','how’d': 'how did','how’ll': 'how will', 'how’s': 'how is',"i'd": 'i would',"i'll": 'i will',"i'm": 'i am',"i've": 'i have','i,d': 'i would','i,ll': 'i will', 'i,m': 'i am','i,ve': 'i have','i;d': 'i would','i;ll': 'i will','i;m': 'i am','i;ve': 'i have',"isn't": 'is not', 'isn,t': 'is not','isn;t': 'is not','isn´t': 'is not','isn’t': 'is not',"it'd": 'it would',"it'll": 'it will',"It's":'it is', "it's": 'it is','it,d': 'it would','it,ll': 'it will','it,s': 'it is','it;d': 'it would','it;ll': 'it will','it;s': 'it is','it´d': 'it would','it´ll': 'it will','it´s': 'it is', 'it’d': 'it would','it’ll': 'it will','it’s': 'it is', 'i´d': 'i would','i´ll': 'i will','i´m': 'i am','i´ve': 'i have','i’d': 'i would','i’ll': 'i will','i’m': 'i am', 'i’ve': 'i have',"let's": 'let us','let,s': 'let us','let;s': 'let us','let´s': 'let us', 'let’s': 'let us',"ma'am": 'madam','ma,am': 'madam','ma;am': 'madam',"mayn't": 'may not','mayn,t': 'may not','mayn;t': 'may not', 'mayn´t': 'may not','mayn’t': 'may not','ma´am': 'madam','ma’am': 'madam',"might've": 'might have','might,ve': 'might have','might;ve': 'might have',"mightn't": 'might not','mightn,t': 'might not','mightn;t': 'might not','mightn´t': 'might not', 'mightn’t': 'might not','might´ve': 'might have','might’ve': 'might have',"must've": 'must have','must,ve': 'must have','must;ve': 'must have', "mustn't": 'must not','mustn,t': 'must not','mustn;t': 'must not','mustn´t': 'must not','mustn’t': 'must not','must´ve': 'must have', 'must’ve': 'must have',"needn't": 'need not','needn,t': 'need not','needn;t': 'need not','needn´t': 'need not','needn’t': 'need not',"oughtn't": 'ought not','oughtn,t': 'ought not','oughtn;t': 'ought not', 'oughtn´t': 'ought not','oughtn’t': 'ought not',"sha'n't": 'shall not','sha,n,t': 'shall not','sha;n;t': 'shall not',"shan't": 'shall not', 'shan,t': 'shall not','shan;t': 'shall not','shan´t': 'shall not','shan’t': 'shall not','sha´n´t': 'shall not','sha’n’t': 
'shall not', "she'd": 'she would',"she'll": 'she will',"she's": 'she is','she,d': 'she would','she,ll': 'she will', 'she,s': 'she is','she;d': 'she would','she;ll': 'she will','she;s': 'she is','she´d': 'she would','she´ll': 'she will', 'she´s': 'she is','she’d': 'she would','she’ll': 'she will','she’s': 'she is',"should've": 'should have','should,ve': 'should have','should;ve': 'should have', "shouldn't": 'should not','shouldn,t': 'should not','shouldn;t': 'should not','shouldn´t': 'should not','shouldn’t': 'should not','should´ve': 'should have', 'should’ve': 'should have',"that'd": 'that would',"that's": 'that is','that,d': 'that would','that,s': 'that is','that;d': 'that would', 'that;s': 'that is','that´d': 'that would','that´s': 'that is','that’d': 'that would','that’s': 'that is',"there'd": 'there had', "there's": 'there is','there,d': 'there had','there,s': 'there is','there;d': 'there had','there;s': 'there is', 'there´d': 'there had','there´s': 'there is','there’d': 'there had','there’s': 'there is', "they'd": 'they would',"they'll": 'they will',"they're": 'they are',"they've": 'they have', 'they,d': 'they would','they,ll': 'they will','they,re': 'they are','they,ve': 'they have','they;d': 'they would','they;ll': 'they will','they;re': 'they are', 'they;ve': 'they have','they´d': 'they would','they´ll': 'they will','they´re': 'they are','they´ve': 'they have','they’d': 'they would','they’ll': 'they will', 'they’re': 'they are','they’ve': 'they have',"wasn't": 'was not','wasn,t': 'was not','wasn;t': 'was not','wasn´t': 'was not', 'wasn’t': 'was not',"we'd": 'we would',"we'll": 'we will',"we're": 'we are',"we've": 'we have','we,d': 'we would','we,ll': 'we will', 'we,re': 'we are','we,ve': 'we have','we;d': 'we would','we;ll': 'we will','we;re': 'we are','we;ve': 'we have', "weren't": 'were not','weren,t': 'were not','weren;t': 'were not','weren´t': 'were not','weren’t': 'were not','we´d': 'we would','we´ll': 'we will', 'we´re': 'we are','we´ve': 'we have','we’d': 'we would','we’ll': 'we will','we’re': 'we are','we’ve': 'we have',"what'll": 'what will',"what're": 'what are',"what's": 'what is', "what've": 'what have','what,ll': 'what will','what,re': 'what are','what,s': 'what is','what,ve': 'what have','what;ll': 'what will','what;re': 'what are', 'what;s': 'what is','what;ve': 'what have','what´ll': 'what will', 'what´re': 'what are','what´s': 'what is','what´ve': 'what have','what’ll': 'what will','what’re': 'what are','what’s': 'what is', 'what’ve': 'what have',"where'd": 'where did',"where's": 'where is','where,d': 'where did','where,s': 'where is','where;d': 'where did', 'where;s': 'where is','where´d': 'where did','where´s': 'where is','where’d': 'where did','where’s': 'where is', "who'll": 'who will',"who's": 'who is','who,ll': 'who will','who,s': 'who is','who;ll': 'who will','who;s': 'who is', 'who´ll': 'who will','who´s': 'who is','who’ll': 'who will','who’s': 'who is',"won't": 'will not','won,t': 'will not','won;t': 'will not', 'won´t': 'will not','won’t': 'will not',"wouldn't": 'would not','wouldn,t': 'would not','wouldn;t': 'would not','wouldn´t': 'would not', 'wouldn’t': 'would not',"you'd": 'you would',"you'll": 'you will',"you're": 'you are','you,d': 'you would','you,ll': 'you will', 'you,re': 'you are','you;d': 'you would','you;ll': 'you will', 'you;re': 'you are','you´d': 'you would','you´ll': 'you will','you´re': 'you are','you’d': 'you would','you’ll': 'you will','you’re': 'you are', '´cause': 'because','’cause': 'because',"you've": "you have","could'nt": 'could 
not', "havn't": 'have not',"here’s": "here is",'i""m': 'i am',"i'am": 'i am',"i'l": "i will","i'v": 'i have',"wan't": 'want',"was'nt": "was not","who'd": "who would", "who're": "who are","who've": "who have","why'd": "why would","would've": "would have","y'all": "you all","y'know": "you know","you.i": "you i", "your'e": "you are","arn't": "are not","agains't": "against","c'mon": "common","doens't": "does not",'don""t': "do not","dosen't": "does not", "dosn't": "does not","shoudn't": "should not","that'll": "that will","there'll": "there will","there're": "there are", "this'll": "this all","u're": "you are", "ya'll": "you all","you'r": "you are","you’ve": "you have","d'int": "did not","did'nt": "did not","din't": "did not","dont't": "do not","gov't": "government", "i'ma": "i am","is'nt": "is not","‘I":'I', 'ᴀɴᴅ':'and','ᴛʜᴇ':'the','ʜᴏᴍᴇ':'home','ᴜᴘ':'up','ʙʏ':'by','ᴀᴛ':'at','…and':'and','civilbeat':'civil beat',\ 'TrumpCare':'Trump care','Trumpcare':'Trump care', 'OBAMAcare':'Obama care','ᴄʜᴇᴄᴋ':'check','ғᴏʀ':'for','ᴛʜɪs':'this','ᴄᴏᴍᴘᴜᴛᴇʀ':'computer',\ 'ᴍᴏɴᴛʜ':'month','ᴡᴏʀᴋɪɴɢ':'working','ᴊᴏʙ':'job','ғʀᴏᴍ':'from','Sᴛᴀʀᴛ':'start','gubmit':'submit','CO₂':'carbon dioxide','ғɪʀsᴛ':'first',\ 'ᴇɴᴅ':'end','ᴄᴀɴ':'can','ʜᴀᴠᴇ':'have','ᴛᴏ':'to','ʟɪɴᴋ':'link','ᴏғ':'of','ʜᴏᴜʀʟʏ':'hourly','ᴡᴇᴇᴋ':'week','ᴇɴᴅ':'end','ᴇxᴛʀᴀ':'extra',\ 'Gʀᴇᴀᴛ':'great','sᴛᴜᴅᴇɴᴛs':'student','sᴛᴀʏ':'stay','ᴍᴏᴍs':'mother','ᴏʀ':'or','ᴀɴʏᴏɴᴇ':'anyone','ɴᴇᴇᴅɪɴɢ':'needing','ᴀɴ':'an','ɪɴᴄᴏᴍᴇ':'income',\ 'ʀᴇʟɪᴀʙʟᴇ':'reliable','ғɪʀsᴛ':'first','ʏᴏᴜʀ':'your','sɪɢɴɪɴɢ':'signing','ʙᴏᴛᴛᴏᴍ':'bottom','ғᴏʟʟᴏᴡɪɴɢ':'following','Mᴀᴋᴇ':'make',\ 'ᴄᴏɴɴᴇᴄᴛɪᴏɴ':'connection','ɪɴᴛᴇʀɴᴇᴛ':'internet','financialpost':'financial post', 'ʜaᴠᴇ':' have ', 'ᴄaɴ':' can ', 'Maᴋᴇ':' make ', 'ʀᴇʟɪaʙʟᴇ':' reliable ', 'ɴᴇᴇᴅ':' need ', 'ᴏɴʟʏ':' only ', 'ᴇxᴛʀa':' extra ', 'aɴ':' an ', 'aɴʏᴏɴᴇ':' anyone ', 'sᴛaʏ':' stay ', 'Sᴛaʀᴛ':' start', 'SHOPO':'shop', } def clean_special_chars(text, mapping): for p in mapping: text = text.replace(p, mapping[p]) return text return data.apply(lambda x: clean_special_chars(x, contraction_mapping)) train['comment_text'] = map_contraction(train['comment_text']) X_test = map_contraction(X_test) # Mapping misspelling def map_misspelling(data): misspelling_mapping = {'SB91':'senate bill','tRump':'trump','utmterm':'utm term','FakeNews':'fake news','Gʀᴇat':'great','ʙᴏᴛtoᴍ':'bottom','washingtontimes':'washington times', 'garycrum':'gary crum','htmlutmterm':'html utm term','RangerMC':'car','TFWs':'tuition fee waiver','SJWs':'social justice warrior','Koncerned':'concerned', 'Vinis':'vinys','Yᴏᴜ':'you','Trumpsters':'trump','Trumpian':'trump','bigly':'big league','Trumpism':'trump','Yoyou':'you','Auwe':'wonder','Drumpf':'trump', 'utmterm':'utm term','Brexit':'british exit','utilitas':'utilities','ᴀ':'a', '😉':'wink','😂':'joy','😀':'stuck out tongue', 'theguardian':'the guardian', 'deplorables':'deplorable', 'theglobeandmail':'the globe and mail', 'justiciaries': 'justiciary','creditdation': 'Accreditation','doctrne':'doctrine', 'fentayal': 'fentanyl','designation-': 'designation','CONartist' : 'con-artist','Mutilitated' : 'Mutilated','Obumblers': 'bumblers', 'negotiatiations': 'negotiations','dood-': 'dood','irakis' : 'iraki','cooerate': 'cooperate','COx':'cox','racistcomments':'racist comments', 'envirnmetalists': 'environmentalists'} def clean_special_chars(text, mapping): for p in mapping: text = text.replace(p, mapping[p]) return text return data.apply(lambda x: clean_special_chars(x, misspelling_mapping)) train['comment_text'] = 
map_misspelling(train['comment_text']) X_test = map_misspelling(X_test) # Train/validation split train_ids, val_ids = train_test_split(train['id'], test_size=0.2, random_state=2019) train_df = pd.merge(train_ids.to_frame(), train) validate_df = pd.merge(val_ids.to_frame(), train) Y_train = train_df['target'].values Y_val = validate_df['target'].values X_train = train_df['comment_text'] X_val = validate_df['comment_text'] # Hyperparameters maxlen = 150 # max number of words in a comment to use embed_size = 300 # how big is each word vector max_features = 30000 # how many unique words to use (i.e. num rows in embedding vector) learning_rate = 0.01 decay_factor = 0.25 epochs = 65 batch_size = 512 # Fill missing values X_train = X_train.fillna("_na_").values X_val = X_val.fillna("_na_").values X_test = X_test.fillna("_na_").values # Tokenize the sentences tokenizer = Tokenizer(num_words=max_features) tokenizer.fit_on_texts(list(X_train)) X_train = tokenizer.texts_to_sequences(X_train) X_val = tokenizer.texts_to_sequences(X_val) X_test = tokenizer.texts_to_sequences(X_test) # Pad the sentences X_train = pad_sequences(X_train, maxlen=maxlen) X_val = pad_sequences(X_val, maxlen=maxlen) X_test = pad_sequences(X_test, maxlen=maxlen) ###Output _____no_output_____ ###Markdown Model ###Code inp = Input(shape=(maxlen,)) x = Embedding(max_features, embed_size)(inp) x = Bidirectional(CuDNNGRU(64, return_sequences=True))(x) x = GlobalMaxPool1D()(x) x = Dense(32, activation="relu")(x) x = Dropout(0.5)(x) x = Dense(1, activation="sigmoid")(x) model = Model(inputs=inp, outputs=x) optimizer = optimizers.SGD(lr=learning_rate, momentum=0.9, nesterov=True) model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy']) model.summary() es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=7) rlrop = ReduceLROnPlateau(monitor='val_loss', factor=decay_factor, patience=5) history = model.fit(X_train, Y_train, batch_size=batch_size, epochs=epochs, validation_data=(X_val, Y_val), callbacks=[es, rlrop]) fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(20, 8)) ax1.plot(history.history['acc'], label='Train Accuracy') ax1.plot(history.history['val_acc'], label='Validation accuracy') ax1.legend(loc='best') ax1.set_title('Accuracy') ax2.plot(history.history['loss'], label='Train loss') ax2.plot(history.history['val_loss'], label='Validation loss') ax2.legend(loc='best') ax2.set_title('Loss') plt.xlabel('Epochs') sns.despine() plt.show() ###Output _____no_output_____ ###Markdown Model evaluation ###Code identity_columns = [ 'male', 'female', 'homosexual_gay_or_lesbian', 'christian', 'jewish', 'muslim', 'black', 'white', 'psychiatric_or_mental_illness'] # Convert target and identity columns to booleans def convert_to_bool(df, col_name): df[col_name] = np.where(df[col_name] >= 0.5, True, False) def convert_dataframe_to_bool(df): bool_df = df.copy() for col in ['target'] + identity_columns: convert_to_bool(bool_df, col) return bool_df SUBGROUP_AUC = 'subgroup_auc' BPSN_AUC = 'bpsn_auc' # stands for background positive, subgroup negative BNSP_AUC = 'bnsp_auc' # stands for background negative, subgroup positive def compute_auc(y_true, y_pred): try: return metrics.roc_auc_score(y_true, y_pred) except ValueError: return np.nan def compute_subgroup_auc(df, subgroup, label, model_name): subgroup_examples = df[df[subgroup]] return compute_auc(subgroup_examples[label], subgroup_examples[model_name]) def compute_bpsn_auc(df, subgroup, label, model_name): """Computes the AUC of the within-subgroup
negative examples and the background positive examples.""" subgroup_negative_examples = df[df[subgroup] & ~df[label]] non_subgroup_positive_examples = df[~df[subgroup] & df[label]] examples = subgroup_negative_examples.append(non_subgroup_positive_examples) return compute_auc(examples[label], examples[model_name]) def compute_bnsp_auc(df, subgroup, label, model_name): """Computes the AUC of the within-subgroup positive examples and the background negative examples.""" subgroup_positive_examples = df[df[subgroup] & df[label]] non_subgroup_negative_examples = df[~df[subgroup] & ~df[label]] examples = subgroup_positive_examples.append(non_subgroup_negative_examples) return compute_auc(examples[label], examples[model_name]) def compute_bias_metrics_for_model(dataset, subgroups, model, label_col, include_asegs=False): """Computes per-subgroup metrics for all subgroups and one model.""" records = [] for subgroup in subgroups: record = { 'subgroup': subgroup, 'subgroup_size': len(dataset[dataset[subgroup]]) } record[SUBGROUP_AUC] = compute_subgroup_auc(dataset, subgroup, label_col, model) record[BPSN_AUC] = compute_bpsn_auc(dataset, subgroup, label_col, model) record[BNSP_AUC] = compute_bnsp_auc(dataset, subgroup, label_col, model) records.append(record) return pd.DataFrame(records).sort_values('subgroup_auc', ascending=True) # validate_df = pd.merge(val_ids.to_frame(), train) validate_df['preds'] = model.predict(X_val) validate_df = convert_dataframe_to_bool(validate_df) bias_metrics_df = compute_bias_metrics_for_model(validate_df, identity_columns, 'preds', 'target') print('Validation bias metric by group') display(bias_metrics_df) def power_mean(series, p): total = sum(np.power(series, p)) return np.power(total / len(series), 1 / p) def get_final_metric(bias_df, overall_auc, POWER=-5, OVERALL_MODEL_WEIGHT=0.25): bias_score = np.average([ power_mean(bias_df[SUBGROUP_AUC], POWER), power_mean(bias_df[BPSN_AUC], POWER), power_mean(bias_df[BNSP_AUC], POWER) ]) return (OVERALL_MODEL_WEIGHT * overall_auc) + ((1 - OVERALL_MODEL_WEIGHT) * bias_score) # train_df = pd.merge(train_ids.to_frame(), train) train_df['preds'] = model.predict(X_train) train_df = convert_dataframe_to_bool(train_df) print('Train ROC AUC: %.4f' % get_final_metric(bias_metrics_df, metrics.roc_auc_score(train_df['target'].values, train_df['preds'].values))) print('Validation ROC AUC: %.4f' % get_final_metric(bias_metrics_df, metrics.roc_auc_score(validate_df['target'].values, validate_df['preds'].values))) ###Output Train ROC AUC: 0.9129 Validation ROC AUC: 0.9092 ###Markdown Predictions ###Code Y_test = model.predict(X_test) submission = pd.read_csv('../input/sample_submission.csv') submission['prediction'] = Y_test submission.to_csv('submission.csv', index=False) submission.head(10) ###Output _____no_output_____
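###Markdown A note on the final metric: the power mean with exponent p = -5 in `get_final_metric` penalizes the worst-performing identity subgroup far more than a plain average would. A small numeric illustration using the `power_mean` function defined above (toy AUC values, not taken from this run): ###Code
# Toy illustration of the p = -5 power mean used above (hypothetical AUC values)
toy_aucs = np.array([0.95, 0.70, 0.90])
print('Arithmetic mean  : %.3f' % toy_aucs.mean())           # 0.850
print('Power mean (p=-5): %.3f' % power_mean(toy_aucs, -5))  # ~0.804, pulled toward the worst subgroup (0.70)
###Output _____no_output_____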
.ipynb_checkpoints/Praktikum 03 - Flip-checkpoint.ipynb
###Markdown Practicum 3 | Image Processing Image Flipping Fadhil Yori Hibatullah | 2103161037 | 2 D3 Teknik Informatika B ------------------------------------------------------- Import dependencies ###Code import numpy as np import imageio import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown Reading the image ###Code img = imageio.imread("woman.jpg") ###Output _____no_output_____ ###Markdown Getting the resolution and type of the image ###Code img_height = img.shape[0] img_width = img.shape[1] img_channel = img.shape[2] img_type = img.dtype ###Output _____no_output_____ ###Markdown Creating variables with the same resolution and type as the image ###Code img_flip_horizontal = np.zeros(img.shape, img_type) img_flip_vertical = np.zeros(img.shape, img_type) ###Output _____no_output_____ ###Markdown Flipping the image horizontally ###Code for y in range(0, img_height): for x in range(0, img_width): for c in range(0, img_channel): img_flip_horizontal[y][x][c] = img[y][img_width-1-x][c] ###Output _____no_output_____ ###Markdown Flipping the image vertically ###Code for y in range(0, img_height): for x in range(0, img_width): for c in range(0, img_channel): img_flip_vertical[y][x][c] = img[img_height-1-y][x][c] ###Output _____no_output_____ ###Markdown Displaying the flipped images ###Code plt.imshow(img_flip_horizontal) plt.title("Flip Horizontal") plt.show() plt.imshow(img_flip_vertical) plt.title("Flip Vertical") plt.show() ###Output _____no_output_____
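###Markdown Note: the triple loops above make the indexing explicit, but the same flips can be done in one vectorized step with NumPy slicing (a minimal sketch using the `img` array loaded above; `cv2.flip` would work too, if OpenCV is available): ###Code
# Horizontal flip: reverse the column (x) axis
img_flip_h = img[:, ::-1]
# Vertical flip: reverse the row (y) axis
img_flip_v = img[::-1, :]
# Both should match the loop-based results exactly
print((img_flip_h == img_flip_horizontal).all(), (img_flip_v == img_flip_vertical).all())
###Output _____no_output_____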
deep-learning-school/[5]oop_neuron/seminar/old_seminar/[seminar]neuron_logloss.ipynb
###Markdown Phystech School of Applied Mathematics and Informatics (FPMI) MIPT --- Training a neuron with the LogLoss loss function: solution --- Recap Almost any machine learning algorithm that solves a *classification* or *regression* task works like this: 1. (*initialization stage*) Its **hyperparameters** are set, that is, the quantities the algorithm does not "learn" on its own during training 2. (*training stage*) The algorithm is run on the data, **learning** from it and changing its **parameters** (not to be confused with *hyper*parameters) in some particular way (for example, using *gradient descent*), based on the loss function. The loss function, in essence, tells us where and how the model makes mistakes 3. (*prediction stage*) The model is ready, and it can now be used to make **predictions** on new objects In this notebook we solve a **binary classification** task with a neuron: - *Input data*: a matrix $X$ of size $(n, m)$ and a column $y$ of zeros and ones of size $(n, 1)$. The rows of the matrix correspond to objects and the columns to features (that is, row $i$ is the set of features (the *feature description*) of object $x_i$). - *Output data*: a column $\hat{y}$ of zeros and ones of size $(n, 1)$ - the algorithm's predictions. The neuron model in biology and in deep learning: ![title](http://lamda.nju.edu.cn/weixs/project/CNNTricks/imgs/neuron.png) A neuron with a sigmoid Let us again consider a neuron with a sigmoid, that is $$f(x) = \sigma(x)=\frac{1}{1+e^{-x}}$$ as in the picture above. In the previous session we established that training a sigmoid neuron with the quadratic loss function: $$J(w, x) = \frac{1}{2n}\sum_{i=1}^{n} (\hat{y_i} - y_i)^2 = \frac{1}{2n}\sum_{i=1}^{n} (\sigma(w \cdot x_i) - y_i)^2$$ where $w \cdot x_i$ is the dot product and $\sigma(w \cdot x_i) =\frac{1}{1+e^{-w \cdot x_i}} $ is the sigmoid, is **inefficient**: we saw that even after a large number of iterations the neuron still predicts poorly. Let us look once more at the gradient-descent formula for the loss $J$ with respect to the neuron's weights: $$ \frac{\partial J}{\partial w} = \frac{1}{n} X^T (\sigma(w \cdot X) - y)\sigma(w \cdot X)(1 - \sigma(w \cdot X))$$ Now look at the graph of the sigmoid: **its values are numbers between 0 and 1.** If we analyze the formula more carefully, we notice that since the sigmoid takes values between 0 and 1 (and hence $(1-\sigma)$ also takes values between 0 and 1), we multiply $X^T$ by the column $(\sigma(w \cdot X) - y)$ of numbers between -1 and 1, and then also by the columns $\sigma(w \cdot X)$ and $(1 - \sigma(w \cdot X))$ of numbers between 0 and 1. So in the best case $\frac{\partial{J}}{\partial{w}}$ is a column of numbers of order at most 0.01 (on average; clearly, if the sigmoid outputs all zeros the gradient is 0, and if it outputs all ones it is 0 as well). After that we multiply by the gradient-descent step, which is usually of order 0.001, or 0.01 at most. That is, we subtract numbers of order ~0.0001 from the weights. A rather slow descent, isn't it? Exactly. In classification tasks where the model is a neuron with a sigmoid activation function predicting class "probabilities", the quadratic loss $J$ is almost never used. Instead, another wonderful loss function was invented.
Please welcome **LogLoss**: $$J(\hat{y}, y) = -\frac{1}{n} \sum_{i=1}^n y_i \log(\hat{y_i}) + (1 - y_i) \log(1 - \hat{y_i}) = -\frac{1}{n} \sum_{i=1}^n y_i \log(\sigma(w \cdot x_i)) + (1 - y_i) \log(1 - \sigma(w \cdot x_i))$$ where, as before, $y$ is the $(n, 1)$ column of true class labels and $\hat{y}$ is the $(n, 1)$ column of the neuron's predictions. ###Code from matplotlib import pyplot as plt from matplotlib.colors import ListedColormap # various utilities for color magic live here import numpy as np import pandas as pd def J(y_pred, y): return -np.mean(y * np.log(y_pred) + (1 - y) * np.log(1 - y_pred)) ###Output _____no_output_____ ###Markdown Note that here we are talking specifically about **binary classification (two classes)**; in multiclass classification one uses the loss function called *cross-entropy*, which is the generalization of LogLoss to the case of several classes. So why will everything be better now? Previously the problem was the multiplication of small numbers in the gradient. Let's see what we get now (in the formula below, $X^T$ is written this way so that the matrix form becomes immediately apparent afterwards; strictly speaking, "multiplying $X^T$ by a sum of numbers" is not a well-defined expression): $$ \frac{\partial J}{\partial w} = -\frac{1}{n} X^T \sum_{i=1}^n \left(\frac{y_i}{\sigma(w \cdot x_i)} - \frac{1 - y_i}{1 - \sigma(w \cdot x_i)}\right)\sigma'(w \cdot x_i) = -\frac{1}{n} X^T \sum_{i=1}^n \left(\frac{y_i}{\sigma(w \cdot x_i)} - \frac{1 - y_i}{1 - \sigma(w \cdot x_i)}\right)\sigma(w \cdot x_i)(1 - \sigma(w \cdot x_i)) = \\=-\frac{1}{n} X^T \sum_{i=1}^n \left(y_i - \sigma(w \cdot x_i)\right) = \frac{1}{n} X^T \left(\hat{y} - y\right)$$ We have obtained a new update rule. Interesting... But wait, this is exactly the gradient-descent weight-update rule we used for the perceptron! It turns out we arrived at this rule "honestly", by making the activation function a sigmoid and introducing a new loss function, LogLoss, whereas when implementing the perceptron we simply said (using *Hebb's rule*) that we would take $f'(x)$ to be one; so there is an interesting connection here between gradient descent and the error-correction method. It is clear, then, that the code for a neuron with this loss function will not differ in any way from the perceptron code, except for the `self.activate()` method.
Let's write it: ###Code def sigmoid(x): """Sigmoid function""" return 1 / (1 + np.exp(-x)) ###Output _____no_output_____ ###Markdown Let's implement a neuron with the LogLoss loss function: ###Code class Neuron: def __init__(self, w=None, b=0): """ :param: w -- weight vector :param: b -- bias """ # We do not know the size of matrix X yet, so we do not know how many weights there will be self.w = w self.b = b def activate(self, x): return sigmoid(x) def forward_pass(self, X): """ Computes the neuron's answer for a batch of objects :param: X -- an object matrix of size (n, m), each row is a separate object :return: a vector of size (n, 1) with the neuron's answers (probabilities) """ n = X.shape[0] y_pred = np.zeros((n, 1)) # y_pred == y_predicted - the predicted classes y_pred = self.activate(X @ self.w.reshape(X.shape[1], 1) + self.b) return y_pred.reshape(-1, 1) def backward_pass(self, X, y, y_pred, learning_rate=1): """ Updates the neuron's weights for this batch :param: X -- an object matrix of size (n, m) y -- a vector of correct answers of size (n, 1) learning_rate - the "learning rate" (the alpha symbol in the formulas above) This method does not need to return anything; it only has to update the weights correctly via gradient descent. """ n = len(y) y = np.array(y).reshape(-1, 1) self.w = self.w - learning_rate * (X.T @ (y_pred - y) / n) self.b = self.b - learning_rate * np.mean(y_pred - y) def fit(self, X, y, num_epochs=5000): """ Descend to the minimum :param: X -- an object matrix of size (n, m) y -- a vector of correct answers of size (n, 1) num_epochs -- the number of training iterations :return: J_values -- the vector of loss values """ self.w = np.zeros((X.shape[1], 1)) # column (m, 1) self.b = 0 # bias J_values = [] # loss values at the different weight-update iterations for i in range(num_epochs): # predictions with the current weights y_pred = self.forward_pass(X) # compute the loss with the current weights J_values.append(J(y_pred, y)) # update the weights based on where we made mistakes before self.backward_pass(X, y, y_pred) return J_values ###Output _____no_output_____ ###Markdown Testing Let's test the neuron trained with the new loss function on the same data as in the previous notebook: **Checking forward_pass()** ###Code w = np.array([1., 2.]).reshape(2, 1) b = 2.
X = np.array([[1., 3.], [2., 4.], [-1., -3.2]]) neuron = Neuron(w, b) y_pred = neuron.forward_pass(X) print ("y_pred = " + str(y_pred)) ###Output y_pred = [[0.99987661] [0.99999386] [0.00449627]] ###Markdown **Checking backward_pass()** ###Code y = np.array([1, 0, 1]).reshape(3, 1) neuron.backward_pass(X, y, y_pred) print ("w = " + str(neuron.w)) print ("b = " + str(neuron.b)) ###Output w = [[ 0.00154399] [-0.39507239]] b = 1.9985444218632158 ###Markdown Let's load the data (apples and pears): ###Code data = pd.read_csv("./data/apples_pears.csv") data.head() plt.figure(figsize=(10, 8)) plt.scatter(data.iloc[:, 0], data.iloc[:, 1], c=data['target'], cmap='rainbow') plt.title('Apples and pears', fontsize=15) plt.xlabel('symmetry', fontsize=14) plt.ylabel('yellowness', fontsize=14) plt.show(); ###Output _____no_output_____ ###Markdown Let's mark which columns are the features and which one is the class label: ###Code X = data.iloc[:,:2].values # object-feature matrix y = data['target'].values.reshape((-1, 1)) # classes (a column of zeros and ones) ###Output _____no_output_____ ###Markdown **Plotting the loss function** The loss should decrease and eventually get close to 0 ###Code %%time neuron = Neuron() J_values = neuron.fit(X, y) plt.figure(figsize=(10, 8)) plt.plot(J_values) plt.title('Loss function', fontsize=15) plt.xlabel('iteration number', fontsize=14) plt.ylabel('$J(\hat{y}, y)$', fontsize=14) plt.show() ###Output _____no_output_____ ###Markdown Let's see how the neuron now classifies the objects in the sample: ###Code plt.figure(figsize=(10, 8)) plt.scatter(data.iloc[:, 0], data.iloc[:, 1], c=np.array(neuron.forward_pass(X) > 0.5), cmap='spring') plt.title('Apples and pears', fontsize=15) plt.xlabel('symmetry', fontsize=14) plt.ylabel('yellowness', fontsize=14) plt.show(); ###Output _____no_output_____
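###Markdown Finally, a quick sanity check (a minimal sketch using the neuron trained above): threshold the predicted probabilities at 0.5 and compute the share of correctly classified objects: ###Code
# Predicted classes: probability > 0.5 means class 1
y_pred_classes = neuron.forward_pass(X) > 0.5
# Share of objects where the predicted class matches the true label
accuracy = np.mean(y_pred_classes == y)
print("Accuracy: {:.3f}".format(accuracy))
###Output _____no_output_____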
PRC_GFN_UCM_UNET-3D_PATCH_FULLY3D.ipynb
###Markdown DEEP POSITRON RANGE CORRECTION WITH PATCHES - FULLY3D CONFERENCE J.L.Herraiz et al. GFN - UCM - 2021 STEP 1 ) Load Libraries ###Code import tensorflow as tf tf.version.VERSION ###Output _____no_output_____ ###Markdown !pip install opencv-python !pip install keras-unet ###Code import numpy as np import os import time import matplotlib.pyplot as plt from IPython.display import clear_output import cv2 import sklearn.model_selection as sk #!pip install -q tfa-nightly import tensorflow_addons as tfa ###Output C:\Users\herra\anaconda3\lib\site-packages\tensorflow_addons\utils\ensure_tf_install.py:53: UserWarning: Tensorflow Addons supports using Python ops for all Tensorflow versions above or equal to 2.2.0 and strictly below 2.4.0 (nightly versions are not supported). The versions of TensorFlow you are currently using is 2.4.0 and is not supported. Some things might work, some things might not. If you were to encounter a bug, do not file an issue. If you want to make sure you're using a tested and supported configuration, either change the TensorFlow version or the TensorFlow Addons's version. You can find the compatibility matrix in TensorFlow Addon's readme: https://github.com/tensorflow/addons warnings.warn( C:\Users\herra\anaconda3\lib\site-packages\typeguard\__init__.py:885: UserWarning: no type annotations present -- not typechecking tensorflow_addons.layers.max_unpooling_2d.MaxUnpooling2D.__init__ warn('no type annotations present -- not typechecking {}'.format(function_name(func))) ###Markdown IMAGE PARAMETERS ###Code # IMAGE PARAMETERS Nx0 = 154 Ny0 = 154 Nz0 = 80 Nt1 = 7 Nt2 = 8 Nt = Nt1+Nt2 dx = 0.0280 dy = 0.0280 dz = 0.0280 #VOXEL SIZE (cm) ALONG X,Y,Z # EXTENDED FOV TO MAKE IT MORE DIVISIBLE (PADDED WITH ZEROS) Nx1 = 160 Ny1 = 160 Nz1 = 81 ksize_planes = 9 ksize_rows = 32 ksize_cols = 32 stride_planes = 4 strides_rows = 16 strides_cols = 16 ksizes = [1, ksize_planes, ksize_rows, ksize_cols, 1] #The size of the sliding window for each dimension of input. strides = [1, stride_planes, strides_rows, strides_cols, 1] #How far the centers of two consecutive patches are in input.
padding = "VALID" # EXTENDED FOV # FROM EACH PATCH WE EXTRACT 16X16X1 OUTPUT --> THEFORE WE NEED ADDITIONAL 8X8X4 IN EACH EDGE outputx = ksize_rows//2 #16 outputy = ksize_cols//2 #16 outputz = 1 Zc = ksize_planes//2 #4 extrax = ksize_rows - outputx extray = ksize_cols - outputy extraz = ksize_planes - 1 Nx2=Nx1 + extrax Ny2=Ny1 + extray Nz2=Nz1 + extraz Npatchesx = Nx1 // outputx Npatchesy = Ny1 // outputy ###Output _____no_output_____ ###Markdown LOAD IMAGES ###Code URL = 'https://tomografia.es/data/Ga68_V2_ALL.raw' PATH_Ga68_V2 = tf.keras.utils.get_file('Ga68_V2.raw',origin=URL) Ga68_V2_file = np.fromfile(PATH_Ga68_V2, dtype='float32') Ga68_V2 = Ga68_V2_file.reshape((Nt2,Nz0,Ny0,Nx0)) URL = 'https://tomografia.es/data/F18_V2_ALL.raw' PATH_F18_V2 = tf.keras.utils.get_file('F18_V2_ALL.raw',origin=URL) F18_V2_file = np.fromfile(PATH_F18_V2, dtype='float32') F18_V2 = F18_V2_file.reshape((Nt2,Nz0,Ny0,Nx0)) URL = 'https://tomografia.es/data/MUNORM_V2_ALL.raw' PATH_MU_V2 = tf.keras.utils.get_file('MUNORM_V2_ALL.raw',origin=URL) MU_V2_file = np.fromfile(PATH_MU_V2, dtype='float32') MU_V2 = MU_V2_file.reshape((Nt2,Nz0,Ny0,Nx0)) ###Output _____no_output_____ ###Markdown NORMALIZATION OF MUNORM (GLOBAL) ###Code MAXM = np.max(MU_V2) MUN = MU_V2/MAXM print(MAXM) ###Output 0.27039027 ###Markdown PATCHES FROM VOLUME ###Code Ga68_tf = tf.convert_to_tensor(Ga68_V2, tf.float32) Ga68_tf = tf.image.pad_to_bounding_box(Ga68_tf,stride_planes,0,Nz2,Nx0) Ga68_tf = tf.transpose(Ga68_tf, perm=[0,2,3,1]) Ga68_tf = tf.image.pad_to_bounding_box(Ga68_tf,3+strides_rows,3+strides_cols,Nx2,Ny2) Ga68_tf = tf.transpose(Ga68_tf, perm=[0,3,1,2]) Ga68_tf = tf.expand_dims(Ga68_tf,axis=-1) #5-D Tensor with shape [batch, in_planes, in_rows, in_cols, depth]. Ga68_tf_patches = tf.extract_volume_patches(Ga68_tf,ksizes,strides,padding) #Extract patches from input and put them in the "depth" output dimension. F18_tf = tf.convert_to_tensor(F18_V2, tf.float32) F18_tf = tf.image.pad_to_bounding_box(F18_tf,stride_planes,0,Nz2,Nx0) F18_tf = tf.transpose(F18_tf, perm=[0,2,3,1]) F18_tf = tf.image.pad_to_bounding_box(F18_tf,3+strides_cols,3+strides_cols,Nx2,Ny2) F18_tf = tf.transpose(F18_tf, perm=[0,3,1,2]) F18_tf = tf.expand_dims(F18_tf,axis=-1) #5-D Tensor with shape [batch, in_planes, in_rows, in_cols, depth]. F18_tf_patches = tf.extract_volume_patches(F18_tf,ksizes,strides,padding) #Extract patches from input and put them in the "depth" output dimension. MU_tf = tf.convert_to_tensor(MUN, tf.float32) MU_tf = tf.image.pad_to_bounding_box(MU_tf,stride_planes,0,Nz2,Nx0) MU_tf = tf.transpose(MU_tf, perm=[0,2,3,1]) MU_tf = tf.image.pad_to_bounding_box(MU_tf,3+strides_cols,3+strides_cols,Nx2,Ny2) MU_tf = tf.transpose(MU_tf, perm=[0,3,1,2]) MU_tf = tf.expand_dims(MU_tf,axis=-1) #5-D Tensor with shape [batch, in_planes, in_rows, in_cols, depth]. MU_tf_patches = tf.extract_volume_patches(MU_tf,ksizes,strides,padding) #Extract patches from input and put them in the "depth" output dimension. 
###Output _____no_output_____ ###Markdown MAX OF EACH INPUT PATCH ###Code Ga = tf.reshape(Ga68_tf_patches,[-1,Ga68_tf_patches.shape[4]]) Ga_max = tf.math.reduce_max(Ga,1)+tf.constant(1.0e-5) #Avoid division by zero if patch is null Ga_max = tf.expand_dims(Ga_max,axis=-1) maximo = tf.tile(Ga_max,tf.constant([1,Ga.shape[1]], tf.int32)) ###Output _____no_output_____ ###Markdown NORMALIZE EACH PATCH AND RESHAPE ###Code Ga = tf.divide(Ga,maximo) Ga = tf.reshape(Ga,[Ga.shape[0],ksize_planes,ksize_rows,ksize_cols]) Ga = tf.transpose(Ga, perm=[0, 2, 1, 3]) Ga = tf.transpose(Ga, perm=[0, 1, 3, 2]) Ga68 = Ga.numpy() M = tf.reshape(MU_tf_patches,[-1,MU_tf_patches.shape[4]]) M = tf.reshape(M,[M.shape[0],ksize_planes,ksize_rows,ksize_cols]) M = tf.transpose(M, perm=[0, 2, 1, 3]) M = tf.transpose(M, perm=[0, 1, 3, 2]) MU = M.numpy() F = tf.reshape(F18_tf_patches,[-1,F18_tf_patches.shape[4]]) F = tf.divide(F,maximo) F = tf.reshape(F,[F.shape[0],ksize_planes,ksize_rows,ksize_cols]) F = tf.transpose(F, perm=[0, 2, 1, 3]) F = tf.transpose(F, perm=[0, 1, 3, 2]) F18 = F[:,:,:,Zc:Zc+1].numpy() Ga68.shape ###Output _____no_output_____ ###Markdown INPUT / OUTPUT 4-D Tensor of shape [batch, height, width, channels] ###Code inp_np = Ga68 #np.concatenate((Ga68,MU),axis=-1) out_np = F18 Npatches = F18.shape[0] # Number of patches ###Output _____no_output_____ ###Markdown SPLIT DATA INTO TRAIN AND VALIDATION ###Code x_train, x_val, y_train, y_val = sk.train_test_split(inp_np, out_np, test_size=0.3, shuffle=True) ###Output _____no_output_____ ###Markdown NUMPY TO TENSOR ###Code x_train_tf = tf.convert_to_tensor(x_train, tf.float32) x_val_tf = tf.convert_to_tensor(x_val, tf.float32) y_train_tf = tf.convert_to_tensor(y_train, tf.float32) y_val_tf = tf.convert_to_tensor(y_val, tf.float32) x_train_tf.shape ###Output _____no_output_____ ###Markdown SHOW EXAMPLE ###Code k=17 #SLICE images = np.concatenate((x_val_tf[k,:,:,Zc], y_val_tf[k,:,:,0]), axis=1) fig, ax = plt.subplots(figsize=(20,5)) im = ax.imshow(images,cmap='hot') _ = fig.colorbar(im, ax=ax) _ = ax.set_xlabel('[Ga68 / F18 ]', fontsize=24) ###Output _____no_output_____ ###Markdown U-NET ###Code from keras_unet.models import custom_unet from keras_unet.utils import get_augmented #from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping #from keras_unet.metrics import iou #from tensorflow.keras.optimizers import Adam, SGD ###Output _____no_output_____ ###Markdown train_gen = get_augmented(x_train_tf, y_train_tf, batch_size=32, data_gen_args = dict(width_shift_range=0.3,height_shift_range=0.3,rotation_range=90.0, horizontal_flip=True,vertical_flip=True,fill_mode='nearest')) ###Code train_gen = get_augmented(x_train_tf, y_train_tf, batch_size=64, data_gen_args = dict(horizontal_flip=True,vertical_flip=True)) model = custom_unet( input_shape=(x_train_tf.shape[1], x_train_tf.shape[2], x_train_tf.shape[3]), use_batch_norm=True, activation='swish', filters=64, num_layers=4, use_attention=True, dropout=0.2, output_activation='relu') opt = tfa.optimizers.RectifiedAdam(lr=1e-3) opt = tfa.optimizers.Lookahead(opt) L1 = tf.keras.losses.MeanAbsoluteError(reduction=tf.keras.losses.Reduction.NONE) L2 = tf.keras.losses.MeanSquaredError(reduction=tf.keras.losses.Reduction.NONE) def make_my_loss(): def my_loss(y_true, y_pred): yt = tf.image.central_crop(y_true,0.5) # TRAIN WITH THE CENTRAL PART OF THE PATCH yp = tf.image.central_crop(y_pred,0.5) # TRAIN WITH THE CENTRAL PART OF THE PATCH return L1(yt, yp) return my_loss my_loss = make_my_loss() 
model.compile(optimizer=opt,loss=my_loss) history = model.fit(train_gen,steps_per_epoch=100, epochs=200, validation_data=(x_val_tf, y_val_tf)) fig, ax = plt.subplots(figsize=(20,5)) loss = np.array(history.history['loss']) var_loss = np.array(history.history['val_loss']) ax.plot(20*np.log(1+loss), 'orange', label='Training Loss') ax.plot(20*np.log(1+var_loss), 'green', label='Validation loss') ax.set_ylim([0, 1]) ax.legend() fig.show() ###Output <ipython-input-55-15d81941667f>:9: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure. fig.show() ###Markdown SAVE LOSS HISTORY ###Code loss = np.array(history.history['loss']) var_loss = np.array(history.history['val_loss']) loss_info = np.transpose(100*np.array([loss,var_loss])) np.savetxt(("LOSS_%d.csv" % 0), loss_info, fmt='%.3f', delimiter='\t') ###Output _____no_output_____ ###Markdown SAVE MODEL ###Code # Save the entire model as a HDF5 file. model.save('PRC_FULLY3D.h5') ###Output _____no_output_____ ###Markdown RESTORING MODEL ###Code new_model = tf.keras.models.load_model('PRC_FULLY3D.h5',compile=False) # LOADING KERAS MODEL opt = tfa.optimizers.RectifiedAdam(lr=1e-3) opt = tfa.optimizers.Lookahead(opt) new_model.compile(optimizer=opt,loss='MeanAbsoluteError') img_index = 34 test = np.expand_dims(x_val[img_index,:,:,:],axis=0) estim = new_model.predict(test) Ga68_img = np.squeeze(test[:,:,:,Zc]) estim_img = np.squeeze(estim[:,:,:,0]) F18_img = np.squeeze(y_val[img_index,:,:,0]) plt.figure(figsize=(20, 10)) plt.subplot(1,3,1) plt.imshow(Ga68_img, cmap=plt.cm.bone) plt.axis('off') plt.title('Ga68') plt.subplot(1,3,2) plt.imshow(estim_img, cmap=plt.cm.bone) plt.axis('off') plt.title('Estimated') plt.subplot(1,3,3) plt.imshow(F18_img, cmap=plt.cm.bone) plt.axis('off') plt.title('F18') ###Output _____no_output_____ ###Markdown DEPLOYMENT - STEP 1 ) ADAPTING INPUT --> PATCHES ###Code import time t0 = time.time() TGa68 = Ga68_V2[0,:,:,:] TGa68_tf = tf.convert_to_tensor(TGa68, tf.float32) TGa68_tf = tf.expand_dims(TGa68_tf,axis=0) Tstride_planes = 1 TGa68_tf = tf.image.pad_to_bounding_box(TGa68_tf,stride_planes,0,Nz2,Nx0) TGa68_tf = tf.transpose(TGa68_tf, perm=[0,2,3,1]) TGa68_tf = tf.image.pad_to_bounding_box(TGa68_tf,3+strides_rows//2,3+strides_cols//2,Nx2,Ny2) TGa68_tf = tf.transpose(TGa68_tf, perm=[0,3,1,2]) TGa68_tf = tf.expand_dims(TGa68_tf,axis=-1) # 5-D Tensor with shape [batch, in_planes, in_rows, in_cols, depth]. Tstrides = [1, Tstride_planes, strides_rows, strides_cols, 1] # How far the centers of two consecutive patches are in input. TGa68_tf_patches = tf.extract_volume_patches(TGa68_tf,ksizes,Tstrides,padding) #Extract patches from input and put them in the "depth" output dimension. 
TGa68_tf_patches.shape TGa = tf.reshape(TGa68_tf_patches,[-1,TGa68_tf_patches.shape[4]]) TGa_max = tf.math.reduce_max(TGa,1)+tf.constant(1.0e-5) TGa_max = tf.expand_dims(TGa_max,axis=-1) Tmaximo = tf.tile(TGa_max,tf.constant([1,TGa.shape[1]], tf.int32)) TGa = tf.divide(TGa,Tmaximo) TGa = tf.reshape(TGa,[TGa.shape[0],ksize_planes,ksize_rows,ksize_cols]) TGa = tf.transpose(TGa, perm=[0, 2, 3, 1]) TGa.shape ###Output _____no_output_____ ###Markdown DEPLOYMENT - STEP 2 ) NORMALIZATION INPUT ###Code Tmaxim2 = tf.tile(TGa_max,tf.constant([1,ksize_rows*ksize_cols//4], tf.int32)) Tmaxim2 = tf.reshape(Tmaxim2,[Tmaxim2.shape[0],ksize_rows//2,ksize_cols//2,1]) Tmaxim2.shape ###Output _____no_output_____ ###Markdown DEPLOYMENT - STEP 3 ) MODEL PREDICT ###Code t1 = time.time() TEst = model.predict(TGa) t2 = time.time() TEst = tf.image.central_crop(TEst,0.5) # KEEP JUST THE CENTRAL PART TEst = tf.multiply(TEst,Tmaxim2) #Restore Normalization #SHAPE 8100,16,16,1 TEst2 = tf.reshape(TEst,[-1,Npatchesy,Npatchesx,16,16]) #SHAPE 81X10X10X16X16 TEst3 = tf.transpose(TEst2,perm=[0,1,3,2,4]) #SHAPE 81X10X16X10X16 (REMEMBER: VOLUME [Z,Y,X] ORDER) TEst3 = tf.reshape(TEst3,[-1,Ny1,Nx1]) #SHAPE 81X160X160 print(t2-t1) ###Output 2.8044540882110596 ###Markdown SAVING RESULT Raw format (Nx0 = 154, Ny0 = 154, Nz0 = 80) Float32 ###Code d = np.array(TGa68,'float32') f=open("Initial.raw","wb") f.write(d) f.close() d = np.array(F18_V2[0,:,:,:],'float32') f=open("Reference.raw","wb") f.write(d) f.close() d = np.squeeze(np.array(TEst3[0:Nz0,3:157,3:157],'float32')) f=open("Corrected.raw","wb") f.write(d) f.close() ###Output _____no_output_____
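###Markdown OPTIONAL QUALITY CHECK A sketch (assuming the arrays from the cells above are still in memory) comparing the normalized RMSE of the uncorrected and corrected volumes against the F18 reference: ###Code
ref = F18_V2[0]                                          # F18 reference volume (80,154,154)
ini = TGa68                                              # uncorrected Ga68 input volume
cor = np.squeeze(np.array(TEst3[0:Nz0, 3:157, 3:157]))   # network-corrected volume, cropped back to the original FOV

def nrmse(x, ref):
    # Root-mean-square error normalized by the reference dynamic range
    return np.sqrt(np.mean((x - ref)**2)) / (ref.max() - ref.min())

print('NRMSE before correction: %.4f' % nrmse(ini, ref))
print('NRMSE after correction : %.4f' % nrmse(cor, ref))
###Output _____no_output_____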
MNIST digit classification/digits_classification.ipynb
###Markdown MNIST digits classification with TensorFlow ###Code import numpy as np from sklearn.metrics import accuracy_score from matplotlib import pyplot as plt %matplotlib inline import tensorflow.compat.v1 as tf tf.disable_v2_behavior() print("We're using TF", tf.__version__) from Utils import matplotlib_utils from importlib import reload reload(matplotlib_utils) from Utils import keras_utils from Utils.keras_utils import reset_tf_session ###Output _____no_output_____ ###Markdown Look at the data In this task we have 50000 28x28 images of digits from 0 to 9. We will train a classifier on this data. ###Code from Utils import preprocessed_mnist X_train, y_train, X_val, y_val, X_test, y_test = preprocessed_mnist.laod_dataset() # X contains rgb values divided by 255 print("X_train [shape %s] sample patch:\n" % (str(X_train.shape)), X_train[1, 15:20, 5:10]) print("A closeup of a sample patch:") plt.imshow(X_train[1, 15:20, 5:10], cmap="Greys") plt.show() print("And the whole sample:") plt.imshow(X_train[1], cmap="Greys") plt.show() print("y_train [shape %s] 10 samples:\n" % (str(y_train.shape)), y_train[:10]) ###Output X_train [shape (50000, 28, 28)] sample patch: [[0. 0.29803922 0.96470588 0.98823529 0.43921569] [0. 0.33333333 0.98823529 0.90196078 0.09803922] [0. 0.33333333 0.98823529 0.8745098 0. ] [0. 0.33333333 0.98823529 0.56862745 0. ] [0. 0.3372549 0.99215686 0.88235294 0. ]] A closeup of a sample patch: ###Markdown Linear model Our task is to train a linear classifier $\vec{x} \rightarrow y$ with SGD using TensorFlow. We will need to calculate a logit (a linear transformation) $z_k$ for each class: $$z_k = \vec{x} \cdot \vec{w_k} + b_k \quad k = 0..9$$ And transform logits $z_k$ to valid probabilities $p_k$ with softmax: $$p_k = \frac{e^{z_k}}{\sum_{i=0}^{9}{e^{z_i}}} \quad k = 0..9$$ We will use a cross-entropy loss to train our multi-class classifier: $$\text{cross-entropy}(y, p) = -\sum_{k=0}^{9}{\log(p_k)[y = k]}$$ where $$[x]=\begin{cases} 1, \quad \text{if $x$ is true} \\ 0, \quad \text{otherwise} \end{cases}$$ Cross-entropy minimization pushes $p_k$ close to 1 when $y = k$, which is what we want. Here's the plan: * Flatten the images (28x28 -> 784) with `X_train.reshape((X_train.shape[0], -1))` to simplify our linear model implementation * Use a matrix placeholder for flattened `X_train` * Convert `y_train` to one-hot encoded vectors that are needed for cross-entropy * Use a shared variable `W` for all weights (a column $\vec{w_k}$ per class) and `b` for all biases. * Aim for ~0.93 validation accuracy ###Code X_train_flat = X_train.reshape((X_train.shape[0], -1)) print(X_train_flat.shape) X_val_flat = X_val.reshape((X_val.shape[0], -1)) print(X_val_flat.shape) import keras y_train_oh = keras.utils.to_categorical(y_train, 10) y_val_oh = keras.utils.to_categorical(y_val, 10) print(y_train_oh.shape) print(y_train_oh[:3], y_train[:3]) # run this again if you remake your graph s = reset_tf_session() # Model parameters: W and b W = tf.get_variable(name='W', shape=(X_train_flat.shape[1], y_train_oh.shape[1]), dtype=tf.float32)### YOUR CODE HERE ### tf.get_variable(...) with shape[0] = 784 b = tf.get_variable(name='b', shape=(y_train_oh.shape[1], ), dtype=tf.float32)### YOUR CODE HERE ### tf.get_variable(...) # Placeholders for the input data input_X = tf.placeholder('float32', shape=(None, X_train_flat.shape[1]), name='input_x')### YOUR CODE HERE ### tf.placeholder(...)
for flat X with shape[0] = None for any batch size input_y = tf.placeholder('float32', shape=(None, y_train_oh.shape[1]), name='input_y')### YOUR CODE HERE ### tf.placeholder(...) for one-hot encoded true labels # Compute predictions logits = input_X @ W + b### YOUR CODE HERE ### logits for input_X, resulting shape should be [input_X.shape[0], 10] probas = tf.nn.softmax(logits)### YOUR CODE HERE ### apply tf.nn.softmax to logits classes = tf.argmax(probas, 1)### YOUR CODE HERE ### apply tf.argmax to find a class index with highest probability # Loss should be a scalar number: average loss over all the objects with tf.reduce_mean(). # Use tf.nn.softmax_cross_entropy_with_logits on top of one-hot encoded input_y and logits. # It is identical to calculating cross-entropy on top of probas, but is more numerically friendly (read the docs). loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=input_y, logits=logits))### YOUR CODE HERE ### cross-entropy loss # Use a default tf.train.AdamOptimizer to get an SGD step step = tf.train.AdamOptimizer(learning_rate=0.01).minimize(loss)### YOUR CODE HERE ### optimizer step that minimizes the loss print(logits.shape) s.run(tf.global_variables_initializer()) BATCH_SIZE = 512 EPOCHS = 40 # for logging the progress right here in Jupyter (for those who don't have TensorBoard) simpleTrainingCurves = matplotlib_utils.SimpleTrainingCurves("cross-entropy", "accuracy") for epoch in range(EPOCHS): # we finish an epoch when we've looked at all training samples batch_losses = [] for batch_start in range(0, X_train_flat.shape[0], BATCH_SIZE): # data is already shuffled _, batch_loss = s.run([step, loss], {input_X: X_train_flat[batch_start:batch_start+BATCH_SIZE], input_y: y_train_oh[batch_start:batch_start+BATCH_SIZE]}) # collect batch losses, this is almost free as we need a forward pass for backprop anyway batch_losses.append(batch_loss) train_loss = np.mean(batch_losses) val_loss = s.run(loss, {input_X: X_val_flat, input_y: y_val_oh}) # this part is usually small train_accuracy = accuracy_score(y_train, s.run(classes, {input_X: X_train_flat})) # this is slow and usually skipped valid_accuracy = accuracy_score(y_val, s.run(classes, {input_X: X_val_flat})) simpleTrainingCurves.add(train_loss, val_loss, train_accuracy, valid_accuracy) ###Output _____no_output_____ ###Markdown MLP with hidden layers Previously we've coded a dense layer with matrix multiplication by hand. But this is not convenient, you have to create a lot of variables and your code becomes a mess. In TensorFlow there's an easier way to make a dense layer: ```python hidden1 = tf.layers.dense(inputs, 256, activation=tf.nn.sigmoid) ``` That will create all the necessary variables automatically. Here you can also choose an activation function (remember that we need it for a hidden layer!). Now define the MLP with 2 hidden layers and restart training with the cell above. ###Code # write the code here to get a new `step` operation and then run the cell with training loop above. # name your variables in the same way (e.g. logits, probas, classes, etc) for safety.
### YOUR CODE HERE ### # define a two layer MLP hidden1 = tf.layers.dense(input_X, 256, activation=tf.nn.sigmoid) # layer 1 after input layer hidden2 = tf.layers.dense(hidden1, 256, activation=tf.nn.sigmoid) # layer 2 adjacent to layer 1 logits = tf.layers.dense(hidden2, 10) # as we have 10 classes as output # define the prediction probabilities and classes layer probas = tf.nn.softmax(logits) # as we have multiple classes here classes = tf.argmax(probas, 1) # 1 is the axis, i.e. a column vector containing 10 probability values # define the loss function loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=input_y)) # define our gradient descent step step = tf.train.AdamOptimizer().minimize(loss) # Now let's move forward to train our network #sess = reset_tf_session() s.run(tf.global_variables_initializer()) # define the batch size and number of epochs BATCH_SIZE_MLP = 512 EPOCHS_MLP = 40 # for logging the progress right here simpleTrainingCurvesMLP = matplotlib_utils.SimpleTrainingCurves('cross-entropy', 'accuracy') # iterate for the number of epochs for epoch in range(EPOCHS_MLP): batch_losses_MLP = [] for start in range(0, X_train_flat.shape[0], BATCH_SIZE_MLP): _, batch_loss = s.run([step, loss], {input_X:X_train_flat[start : start + BATCH_SIZE_MLP], input_y : y_train_oh[start : start + BATCH_SIZE_MLP]}) batch_losses_MLP.append(batch_loss) train_loss = np.mean(batch_losses_MLP) val_loss = s.run(loss, {input_X: X_val_flat, input_y: y_val_oh}) training_accuracy = accuracy_score(y_train, s.run(classes, {input_X: X_train_flat})) val_accuracy = accuracy_score(y_val, s.run(classes, {input_X: X_val_flat})) simpleTrainingCurvesMLP.add(train_loss, val_loss, training_accuracy, val_accuracy) ###Output _____no_output_____ ###Markdown MNIST digit classification using Keras ###Code # building a model with keras from keras.layers import Dense, Activation from keras.models import Sequential # we still need to clear a graph though s = reset_tf_session() model = Sequential() # it is a feed-forward network without loops like in RNN model.add(Dense(256, input_shape=(784,))) # the first layer must specify the input shape (replacing placeholders) model.add(Activation('sigmoid')) model.add(Dense(256)) model.add(Activation('sigmoid')) model.add(Dense(10)) model.add(Activation('softmax')) # inspect the params model.summary() # now we "compile" the model specifying the loss and optimizer model.compile( loss='categorical_crossentropy', # this is our cross-entropy optimizer='adam', metrics=['accuracy'] # report accuracy during training ) # and now we can fit the model with model.fit() # and we don't have to write loops and batching manually as in TensorFlow model.fit( X_train_flat, y_train_oh, batch_size=512, epochs=40, validation_data=(X_val_flat, y_val_oh), callbacks=[keras_utils.TqdmProgressCallback()], verbose=0 ) ###Output Epoch 1/40
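###Markdown Once training finishes, a minimal sketch for evaluating the Keras model on the held-out test split (flattening X_test the same way as the train and validation data, and assuming y_test holds integer class labels like y_train): ###Code
# Flatten the test images to match the model's 784-dimensional input
X_test_flat = X_test.reshape((X_test.shape[0], -1))
# Predicted class = index of the largest softmax probability
test_classes = model.predict(X_test_flat).argmax(axis=-1)
print("Test accuracy: %.4f" % accuracy_score(y_test, test_classes))
###Output _____no_output_____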
py/lab_01_transformations.ipynb
###Markdown Step 2: Image transformations with NumPy and OpenCV We will now use NumPy and OpenCV to transform an image by applying transformation matrices in homogeneous form. 1. Eigen and homogeneous representations For different values of **t**, *&theta;* and **u**, use NumPy to define a 2D Euclidean transformation matrix $\mathbf{E} = \begin{bmatrix}\mathbf{R} & \mathbf{t} \\\mathbf{0} & 1\end{bmatrix}$ where $\mathbf{R} = \begin{bmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{bmatrix}$ is the rotation matrix corresponding to a counterclockwise rotation with an angle *&theta;* about the origin, and $\mathbf{t} = \begin{bmatrix} dx \\ dy \\ \end{bmatrix}$ is the translation vector. Define a pixel $\mathbf{u} = \begin{bmatrix} u \\ v \\ \end{bmatrix}$ and obtain the transformed pixel $\mathbf{u}_{trans}$ by computing the transformation using homogeneous coordinates $\mathbf{\tilde u}_{trans} = \mathbf{E} \mathbf{\tilde u}$ **Tips**: You might want to convert between degrees and radians, so see if you can find the appropriate values and functions in NumPy. Hint: [https://numpy.org/doc/stable/reference/routines.math.html](https://numpy.org/doc/stable/reference/routines.math.html) ###Code import numpy as np # For convenience, we define two functions for converting a vector to homogeneous coordinates and back # If you are not at all familiar with the lambda, see e.g. # https://www.w3schools.com/python/python_lambda.asp homogeneous = lambda x: np.append(x, [[1]], axis=0) hnormalized = lambda x: x[:-1]/x[-1] # TODO: Translation t = None # TODO: Rotation theta = None R = None # TODO: Euclidean transformation that rotates and then translates E = None # TODO: Perform the transformation on a pixel u. # Hint: What operator is used for matrix multiplication in NumPy? u = None u_transformed = None print(f"Euclidean transformation E = \n{E}\n") print(f"Original pixel u = \n{u}\n") print(f"Transformed pixel u_transformed = \n{u_transformed}") ###Output _____no_output_____ ###Markdown 2. Transform images We will now use the **E** matrix to transform the image above. Notice that you can use the grid and protractor to check your transformations. There are 100 pixels between each grid line, and you can check the rotation by recognizing which protractor line is parallel with the new y-axis. - Read the image using [cv2.imread()](https://docs.opencv.org/4.5.5/d4/da8/group__imgcodecs.html#ga288b8b3da0892bd651fce07b3bbd3a56) - Perform the transformation using [cv2.warpPerspective()](https://docs.opencv.org/4.5.5/da/d54/group__imgproc__transform.html#gaf73673a7e8e18ec6963e3774e6a94b87) - Try different transformations. - Try the *inverse* transformation (how can you easily compute that?). Hint: See the OpenCV Python tutorial, [Getting Started with Images](https://docs.opencv.org/4.5.5/db/deb/tutorial_display_image.html) (Click the `Python` button).
###Code %matplotlib inline import cv2 import matplotlib.pyplot as plt plt.rcParams['figure.figsize'] = [15, 15] # TODO: Read the image filename = 'img-grid.png' img = None # TODO: Perform transformation on the image # TODO: Repeat exercise 1 and create the transformation matrix E #t = None #< Todo: repeat from step 1 #R = None #< Todo: repeat from step 1 #E = None #< Todo: repeat from step 1 # img_size must be (cols, rows) img_size = None img_trans_E = None # Display the original and the transformed image axes = plt.subplots(1, 2)[1] ax1, ax2 = axes ax1.set_title('Original image') ax2.set_title('Transformed image') ax1.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB)) ax2.imshow(cv2.cvtColor(img_trans_E, cv2.COLOR_BGR2RGB)) plt.show() # TODO: Try different transformations (rotations and translations) ###Output _____no_output_____ ###Markdown 3. Composing transformations Have you noticed that the image is rotated around the upper left corner? Why is it so? We can rotate around the image center by first translating the origin to the center, rotating, and then translating back by performing the opposite translation. We can compose these transformations into a single transformation by multiplying all corresponding transformation matrices together: $\mathbf{E}_{composed} = \mathbf{E}_{corner \leftarrow center} \; \mathbf{E}_{rotate} \; \mathbf{E}_{center \leftarrow corner}$ - Rotate the image about its center by computing the composed transformation above. - Finally, try adding a scaling transformation (zoom) after the rotation. What kind of composed transformation do we obtain then? ###Code %matplotlib inline import cv2 import matplotlib.pyplot as plt plt.rcParams['figure.figsize'] = [15, 15] # Todo: Compose transformations to rotate and scale around the centre of the image. R = None #< Todo: repeat from step 1 img = None img_rotated = None img_rotated_scaled = None # Display the transformed and the scaled image axes = plt.subplots(1, 2)[1] ax1, ax2 = axes ax1.set_title('Rotated image') ax2.set_title('Rotated and scaled image') ax1.imshow(cv2.cvtColor(img_rotated, cv2.COLOR_BGR2RGB)) ax2.imshow(cv2.cvtColor(img_rotated_scaled, cv2.COLOR_BGR2RGB)) # remove the x and y ticks plt.setp(axes, xticks=[], yticks=[]) plt.show() ###Output _____no_output_____
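###Markdown For self-checking, here is one possible way to fill in the TODOs from exercise 1 (a sketch with arbitrarily chosen values for t, theta and u; any similar choice works). The inverse transformation asked about in exercise 2 can be obtained with `np.linalg.inv(E)` (or, more cheaply, from $\mathbf{R}^T$ and $-\mathbf{R}^T\mathbf{t}$): ###Code
import numpy as np

homogeneous = lambda x: np.append(x, [[1]], axis=0)
hnormalized = lambda x: x[:-1] / x[-1]

# Translation in pixels -- example values
t = np.array([[100.0], [50.0]])

# Counterclockwise rotation about the origin -- example angle
theta = np.radians(30.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Euclidean transformation that rotates and then translates
E = np.block([[R, t],
              [np.zeros((1, 2)), np.ones((1, 1))]])

# Transform an example pixel via homogeneous coordinates
u = np.array([[120.0], [80.0]])
u_transformed = hnormalized(E @ homogeneous(u))
print(u_transformed)

# Inverse transformation (useful for the last part of exercise 2)
E_inv = np.linalg.inv(E)
###Output _____no_output_____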
applications/classification/sentiment_classification/Sentimix with XLM-Roberta-CNN.ipynb
###Markdown Initial Setup ###Code from google.colab import drive drive.mount('/content/drive') train_file = '/content/drive/My Drive/train_14k_split_conll.txt' test_file = '/content/drive/My Drive/dev_3k_split_conll.txt' # Data containing transliteration using google's api is taken from here # https://github.com/keshav22bansal/BAKSA_IITK processed_train_file = '/content/drive/My Drive/hinglish_train.txt' processed_test_file = '/content/drive/My Drive/hinglish_test.txt' !pip install indic_transliteration -q !pip install contractions -q !pip install transformers -q ###Output  |████████████████████████████████| 102kB 3.6MB/s  |████████████████████████████████| 911kB 9.5MB/s  |████████████████████████████████| 245kB 6.3MB/s  |████████████████████████████████| 317kB 13.5MB/s [?25h Building wheel for pyahocorasick (setup.py) ... [?25l[?25hdone  |████████████████████████████████| 778kB 5.5MB/s  |████████████████████████████████| 890kB 29.6MB/s  |████████████████████████████████| 3.0MB 20.5MB/s  |████████████████████████████████| 1.1MB 52.5MB/s [?25h Building wheel for sacremoses (setup.py) ... [?25l[?25hdone ###Markdown Imports ###Code import re import time import string import contractions import numpy as np import pandas as pd from indic_transliteration import sanscript from indic_transliteration.sanscript import SchemeMap, SCHEMES, transliterate from collections import Counter from sklearn.model_selection import train_test_split from sklearn import metrics import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torch.utils.data import Dataset, DataLoader from torch.optim.lr_scheduler import ReduceLROnPlateau from transformers import XLMRobertaTokenizer, XLMRobertaModel, AdamW, get_linear_schedule_with_warmup import matplotlib.pyplot as plt import seaborn as sns device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') device ###Output _____no_output_____ ###Markdown Processing the DataSkip this step and move to Using the processed sentences.The processed data is taken from [here](https://github.com/keshav22bansal/BAKSA_IITK)The major difference is that transliteration of hinglish words to hindi is done using google's api instead of indic_transliteration module ###Code with open(train_file) as f: data = f.readlines() with open(test_file, 'r') as f: test_data = f.readlines() def parse_data(data): uids, sentences, sentences_info, sentiment = [], [], [], [] single_sentence, single_sentence_info = [], [] sent = "" uid = 0 for idx, each_line in enumerate(data): line = each_line.strip() tokens = line.split('\t') num_tokens = len(tokens) if num_tokens == 2: # add the word single_sentence.append(tokens[0]) # add the word info(lang) single_sentence_info.append(tokens[1]) elif num_tokens == 3 and idx > 0: # append the sentence data sentences.append(single_sentence) sentences_info.append(single_sentence_info) sentiment.append(sent) uids.append(uid) sent = tokens[-1] uid = int(tokens[1]) # clear the single sentence single_sentence = [] single_sentence_info = [] # new line after the sentence elif num_tokens == 1: continue else: sent = tokens[-1] uid = int(tokens[1]) # for the last sentence if len(single_sentence) > 0: sentences.append(single_sentence) sentences_info.append(single_sentence_info) sentiment.append(sent) uids.append(uid) assert len(sentences) == len(sentences_info) == len(sentiment) == len(uids) return sentences, sentences_info, sentiment, uids sentences, sentences_info, sentiment, uids = parse_data(data) test_sentences, 
test_sentences_info, test_sentiment, test_uids = parse_data(test_data) list(zip(sentences[0], sentences_info[0])) data = "jen klid takhle vypad" transliterate(data, sanscript.ITRANS, sanscript.DEVANAGARI) def translate(sentences, sentences_info): translated = [] for sent, sent_info in zip(sentences, sentences_info): partial_translated = [] for word, word_info in zip(sent, sent_info): if word_info == "Hin": partial_translated.append(transliterate(word, sanscript.ITRANS, sanscript.DEVANAGARI)) else: partial_translated.append(word) translated.append(partial_translated) return translated translated_sentences = translate(sentences, sentences_info) test_translated_sentences = translate(test_sentences, test_sentences_info) url_pattern = r'https(.*)/\s[\w\u0900-\u097F]+' special_chars = r'[_…\*\[\]\(\)&“]' names_with_numbers = r'([A-Za-z\u0900-\u097F]+)\d{3,}' apostee = r"([\w]+)\s'\s([\w]+)" names = r"@[\s]*[\w\u0900-\u097F]+[\s]*[_]+[\s]*[\w\u0900-\u097F]+|@[\s]*[\w\u0900-\u097F]+" hashtags = r"#[\s]*[\w\u0900-\u097F]+[\s]*" def preprocess_data(sentence_tokens): sentence = " ".join(sentence_tokens) sentence = " " + sentence # remove rt and … from string sentence = sentence.replace(" RT ", "") sentence = sentence.replace("…", "") # replace apostee sentence = sentence.replace("’", "'") # replace _ sentence = sentence.replace("_", " ") # replace names sentence = re.sub(re.compile(names), " ", sentence) # remove hashtags sentence = re.sub(re.compile(hashtags), " ", sentence) # remove urls sentence = re.sub(re.compile(url_pattern), "", sentence) # combine only ' related words => ... it ' s ... -> ... it's ... sentence = re.sub(re.compile(apostee), r"\1'\2", sentence) # fix contractions sentence = contractions.fix(sentence) # replace names ending with numbers with only names (remove numbers) sentence = re.sub(re.compile(names_with_numbers), r" ", sentence) sentence = " ".join(sentence.split()).strip() return sentence MODEL_NAME = "xlm-roberta-base" tokenizer = XLMRobertaTokenizer.from_pretrained(MODEL_NAME) print(tokenizer.sep_token, tokenizer.sep_token_id) print(tokenizer.cls_token, tokenizer.cls_token_id) print(tokenizer.pad_token, tokenizer.pad_token_id) print(tokenizer.unk_token, tokenizer.unk_token_id) " ".join(sentences[32]), sentiment[32] " ".join(translated_sentences[32]) preprocess_data(translated_sentences[32]) encoding = tokenizer.encode_plus( preprocess_data(translated_sentences[32]), max_length=100, add_special_tokens=True, # Add '[CLS]' and '[SEP]' return_token_type_ids=False, truncation=True, pad_to_max_length=True, return_attention_mask=True, return_tensors='pt', # Return PyTorch tensors ) print(len(encoding['input_ids'][0])) encoding['input_ids'][0] print(len(encoding['attention_mask'][0])) encoding['attention_mask'] tokenizer.convert_ids_to_tokens(encoding['input_ids'][0]) " ".join(sentences[29]), sentiment[29] " ".join(translated_sentences[29]) preprocess_data(translated_sentences[29]) " ".join(sentences[10]), sentiment[10] " ".join(translated_sentences[10]) preprocess_data(translated_sentences[10]) %%time processed_sentences = [] for sent in translated_sentences: processed_sentences.append(preprocess_data(sent)) test_data = [] for sent in test_translated_sentences: test_data.append(preprocess_data(sent)) sentiment_mapping = { "negative": 0, "neutral": 1, "positive": 2 } labels = [sentiment_mapping[sent] for sent in sentiment] test_label = [sentiment_mapping[sent] for sent in test_sentiment] ###Output _____no_output_____ ###Markdown Using the Processed sentences ###Code uids = [] 
processed_sentences = [] labels = [] with open(processed_train_file, 'r') as f: for line in f.readlines()[1:]: items = line.strip().split('\t') uids.append(items[0]) processed_sentences.append(str(items[1])) labels.append(int(items[2])) test_uids = [] test_data = [] test_label = [] with open(processed_test_file, 'r') as f: for line in f.readlines()[1:]: items = line.strip().split('\t') test_uids.append(items[0]) test_data.append(str(items[1])) test_label.append(int(items[2])) ###Output _____no_output_____ ###Markdown Train-Val-Test data splits ###Code train_uids, val_uids, train_data, val_data, train_label, val_label = train_test_split(uids, processed_sentences, labels, test_size=0.2) len(train_data), len(val_data), len(test_data) ###Output _____no_output_____ ###Markdown Tokenizer ###Code MAX_LEN = 150 MODEL_NAME = "xlm-roberta-base" tokenizer = XLMRobertaTokenizer.from_pretrained(MODEL_NAME) ###Output _____no_output_____ ###Markdown Dataset Wrapper class ###Code class SentiMixDataSet(Dataset): def __init__(self, inputs, labels, tokenizer, max_len): self.sentences = inputs self.labels = labels self.tokenizer = tokenizer self.max_len = max_len def __len__(self): return len(self.labels) def __getitem__(self, item): sentence = self.sentences[item] sentiment = int(self.labels[item]) encoding = self.tokenizer.encode_plus( sentence, add_special_tokens=True, max_length=self.max_len, return_token_type_ids=False, pad_to_max_length=True, truncation=True, return_attention_mask=True, return_tensors='pt', ) return { "text": sentence, "input_ids": encoding['input_ids'].flatten(), "attention_mask": encoding['attention_mask'].flatten(), "label": torch.tensor(sentiment, dtype=torch.long) } train_dataset = SentiMixDataSet(train_data, train_label, tokenizer, MAX_LEN) val_dataset = SentiMixDataSet(val_data, val_label, tokenizer, MAX_LEN) test_dataset = SentiMixDataSet(test_data, test_label, tokenizer, MAX_LEN) ###Output _____no_output_____ ###Markdown DataLoaders ###Code BATCH_SIZE = 64 train_data_loader = DataLoader(train_dataset, shuffle=True, batch_size=BATCH_SIZE) valid_data_loader = DataLoader(val_dataset, batch_size=BATCH_SIZE) test_data_loader = DataLoader(test_dataset, batch_size=BATCH_SIZE) # sample sample = next(iter(train_data_loader)) sample["input_ids"].shape, sample["attention_mask"].shape, sample["label"].shape ###Output _____no_output_____ ###Markdown XLM-RoBERTa with CNN Model ###Code class XLMCNNModel(nn.Module): def __init__(self, output_dim, n_filters, filter_sizes, dropout=0.3): super().__init__() self.bert = XLMRobertaModel.from_pretrained(MODEL_NAME) embedding_size = self.bert.config.to_dict()['hidden_size'] self.conv_0 = nn.Conv2d( in_channels=1, out_channels=n_filters, kernel_size=(filter_sizes[0], embedding_size) ) self.conv_1 = nn.Conv2d( in_channels=1, out_channels=n_filters, kernel_size=(filter_sizes[1], embedding_size) ) self.conv_2 = nn.Conv2d( in_channels=1, out_channels=n_filters, kernel_size=(filter_sizes[2], embedding_size) ) self.out = nn.Linear(len(filter_sizes) * n_filters, output_dim) self.dropout = nn.Dropout(dropout) def forward(self, input_ids, attention_mask): # input_ids => [batch_size, seq_len] # attention_mask => [batch_size, seq_len] embeddings, _ = self.bert( input_ids=input_ids, attention_mask=attention_mask ) embeddings = self.dropout(embeddings) # embeddings => [batch_size, seq_len, emb_dim] embedded = embeddings.unsqueeze(1) # embedded => [batch_size, 1, seq_len, emb_dim] conved_0 = F.relu(self.conv_0(embedded).squeeze(3)) conved_1 = 
F.relu(self.conv_1(embedded).squeeze(3)) conved_2 = F.relu(self.conv_2(embedded).squeeze(3)) # conved_n => [batch_size, n_filters, seq_len - filter_size[n] + 1] pooled_0 = F.max_pool1d(conved_0, conved_0.shape[2]).squeeze(2) pooled_1 = F.max_pool1d(conved_1, conved_1.shape[2]).squeeze(2) pooled_2 = F.max_pool1d(conved_2, conved_2.shape[2]).squeeze(2) # pooled_n => [batch_size, n_filters] combined = torch.cat((pooled_0, pooled_1, pooled_2), dim=1) combined = self.dropout(combined) # combined => [batch_size, len(filter_sizes) * n_filters] logits = self.out(combined) # logits => [batch_size, output_dim] return logits n_filters = 100 filter_sizes = [3,4,5] output_dim = 3 model = XLMCNNModel(output_dim, n_filters, filter_sizes) model = model.to(device) torch.cuda.empty_cache() def count_parameters(model): return sum(p.numel() for p in model.parameters() if p.requires_grad) print(f'The model has {count_parameters(model):,} trainable parameters') ###Output The model has 278,966,451 trainable parameters ###Markdown Loss & Optimizer ###Code EPOCHS = 5 no_decay = ['bias', 'LayerNorm.weight'] optimizer_grouped_parameters = [ {'params': [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)], 'weight_decay': 0.01}, {'params': [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], 'weight_decay': 0.0} ] optimizer = AdamW(optimizer_grouped_parameters, lr=1e-5) loss_fn = nn.CrossEntropyLoss().to(device) ###Output _____no_output_____ ###Markdown Training Method ###Code def train(model, iterator, clip=2.0): epoch_loss = 0 model.train() for batch in iterator: input_ids = batch["input_ids"].to(device) attention_mask = batch["attention_mask"].to(device) targets = batch["label"].to(device) predictions = model( input_ids=input_ids, attention_mask=attention_mask ) optimizer.zero_grad() loss = loss_fn(predictions, targets) loss.backward() torch.nn.utils.clip_grad_norm_(model.parameters(), clip) optimizer.step() epoch_loss += loss.item() return epoch_loss / len(iterator) ###Output _____no_output_____ ###Markdown Evaluation Method ###Code def simple_accuracy(preds, labels): """Takes in two lists of predicted labels and actual labels and returns the accuracy in the form of a float. 
""" return np.equal(preds, labels).mean() def evaluate(model, iterator): model.eval() epoch_loss = 0 preds = [] trgs = [] with torch.no_grad(): for batch in iterator: input_ids = batch["input_ids"].to(device) attention_mask = batch["attention_mask"].to(device) targets = batch["label"].to(device) predictions = model( input_ids=input_ids, attention_mask=attention_mask ) loss = loss_fn(predictions, targets) epoch_loss += loss.item() trgs.extend(targets.detach().cpu().numpy().tolist()) _, predicted = torch.max(predictions, 1) preds.extend(predicted.detach().cpu().numpy().tolist()) return epoch_loss / len(iterator), simple_accuracy(preds, trgs) def epoch_time(start_time, end_time): elapsed_time = end_time - start_time elapsed_mins = int(elapsed_time / 60) elapsed_secs = int(elapsed_time - (elapsed_mins * 60)) return elapsed_mins, elapsed_secs ###Output _____no_output_____ ###Markdown Training Loop ###Code best_valid_loss = float('inf') for epoch in range(EPOCHS): start_time = time.time() train_loss = train(model, train_data_loader) val_loss, val_acc = evaluate(model, valid_data_loader) end_time = time.time() # scheduler.step(val_loss) epoch_mins, epoch_secs = epoch_time(start_time, end_time) print(f"Epoch: {epoch + 1:02} | Time: {epoch_mins}m {epoch_secs:.2f}s") print(f"\tTrain Loss: {train_loss:.3f} | Val Loss: {val_loss:.3f} | Val Acc: {val_acc:.3f}") if val_loss < best_valid_loss: best_valid_loss = val_loss torch.save(model.state_dict(), 'xlm_roberta.pt') ###Output Epoch: 01 | Time: 5m 50.00s Train Loss: 1.016 | Val Loss: 0.889 | Val Acc: 0.583 Epoch: 02 | Time: 5m 54.00s Train Loss: 0.859 | Val Loss: 0.867 | Val Acc: 0.632 Epoch: 03 | Time: 5m 54.00s Train Loss: 0.803 | Val Loss: 0.837 | Val Acc: 0.633 Epoch: 04 | Time: 5m 54.00s Train Loss: 0.750 | Val Loss: 0.833 | Val Acc: 0.614 Epoch: 05 | Time: 5m 54.00s Train Loss: 0.705 | Val Loss: 0.818 | Val Acc: 0.641 ###Markdown Test Data results ###Code model.load_state_dict(torch.load('xlm_roberta.pt')) with torch.no_grad(): model.eval() preds = [] trgs = [] for batch in test_data_loader: input_ids = batch["input_ids"].to(device) attention_mask = batch["attention_mask"].to(device) targets = batch["label"].to(device) outputs = model( input_ids=input_ids, attention_mask=attention_mask ) # get the predicted labels _, predicted = torch.max(outputs, 1) # Add data to lists preds.extend(predicted.detach().cpu().numpy().tolist()) trgs.extend(targets.detach().cpu().numpy().tolist()) print(metrics.classification_report(trgs, preds)) ###Output _____no_output_____
Python_Stock/Apply_Mathematics_Trading_Investment/Statistics_Anscombe_Quartet_Stock.ipynb
###Markdown Anscombe's Quartet Stock Data ###Code import numpy as np import matplotlib.pyplot as plt import pandas as pd import warnings warnings.filterwarnings("ignore") # yfinance is used to fetch data import yfinance as yf yf.pdr_override() # input symbol = 'AMD' start = '2019-12-01' end = '2020-01-01' # Read data df = yf.download(symbol,start,end) # View Columns df.head() df = df.astype('float64') df.head() df = df[['Open', 'High', 'Low', 'Adj Close']] df.head() df.shape for i in df.values: print(np.array(i)) quartets = np.array([df['Open'], df['High'], df['Low'], df['Adj Close']]) quartets quartets[0] quartets.shape[0] for quartet in range(quartets.shape[0]): x = np.array(quartet) print(x) for names in df.columns: print(names) for name in df.columns: print(name) for name in df.columns: print("Next") print("Adj Close vs ", name) roman = ['I', 'II', 'III', 'IV'] %matplotlib inline fig = plt.figure(figsize=(16,12)) fig.suptitle("Anscombe's Quartets", fontsize=14) axes = fig.subplots(2, 2, sharex= True, sharey = True) n = len(df.index) for name, quartet in zip(df.columns, range(quartets.shape[0])): x = quartets[quartet] y = np.array(df['Adj Close']) coef = np.polyfit(x, y, 1) reg_line = np.poly1d(coef) ax = axes[quartet // 2, quartet % 2] ax.plot(x, y, 'ro', x, reg_line(x), '--k') ax.set_title(roman[quartet]) print("Quartet:", roman[quartet]) print("Adj Close vs", name) print("Mean X:", x.mean()) print("Variance X:", x.var()) print("Mean Y:", y.mean()) print("Variance Y:", y.var()) print("Pearson's correlation coef.:", round(np.corrcoef(x, y)[0][1], 2)) print('-'*40) plt.show() ###Output Quartet: I Adj Close vs Open Mean X: 42.13190460205078 Variance X: 8.22944332000805 Mean Y: 42.32095264253162 Variance Y: 7.766971667924468 Pearson's correlation coef.: 0.95 ---------------------------------------- Quartet: II Adj Close vs High Mean X: 42.7980953398205 Variance X: 7.692720974234099 Mean Y: 42.32095264253162 Variance Y: 7.766971667924468 Pearson's correlation coef.: 0.99 ---------------------------------------- Quartet: III Adj Close vs Low Mean X: 41.631904783703035 Variance X: 7.489308791873262 Mean Y: 42.32095264253162 Variance Y: 7.766971667924468 Pearson's correlation coef.: 0.97 ---------------------------------------- Quartet: IV Adj Close vs Adj Close Mean X: 42.32095264253162 Variance X: 7.766971667924468 Mean Y: 42.32095264253162 Variance Y: 7.766971667924468 Pearson's correlation coef.: 1.0 ----------------------------------------
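###Markdown As a cross-check, pandas can reproduce the per-column statistics printed above in one shot. This is a small sketch over the same df; note that DataFrame.var() uses the unbiased (n-1) estimator by default, so ddof=0 is passed to match the np.var convention used above. ###Code
# Means and population variances for every column at once
print(df.mean())
print(df.var(ddof=0))

# Pearson correlation of each column with 'Adj Close'
print(df.corr()['Adj Close'].round(2))
###Output _____no_output_____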
build/_downloads/fd5844c59252caa18f0f4975441c985d/advanced_tutorial.ipynb
###Markdown Advanced: Making Dynamic Decisions and the Bi-LSTM CRF
======================================================

Dynamic versus Static Deep Learning Toolkits
--------------------------------------------

Pytorch is a *dynamic* neural network toolkit. Another example of a dynamic toolkit is `Dynet `__ (I mention this because Pytorch and Dynet are similar; if you see an example in Dynet, it may help you implement it in Pytorch). The opposite are *static* toolkits, which include Theano, Keras, TensorFlow, etc. The core differences are:

* In a static toolkit, you define the computation graph once, compile it, and then stream instances through it.
* In a dynamic toolkit, you define a computation graph *for each instance*. It is never compiled; it is executed on the fly.

Without a lot of experience, it is difficult to appreciate the difference. One example: suppose we want to build a deep constituent parser. Suppose our model roughly involves the following steps:

* We build the tree bottom-up
* Tag the root nodes (the words of the sentence)
* From there, use a neural network and the embeddings of the words to find combinations that form constituents. Whenever you form a new constituent, use some technique to get an embedding of that constituent.

In this case, our network architecture will depend entirely on the input sentence. In the sentence "The green cat scratched the wall", at some point in the model we want to combine the span $(i,j,r) = (1, 3, \text{NP})$ (that is, an NP constituent spanning word 1 to word 3, in this case "The green cat"). However, another sentence might be "Somewhere, the big fat cat scratched the wall". In that sentence, we want to form the constituent $(2, 4, NP)$ at some point. The constituents we want to form depend on the instance. If we compile the computation graph only once, as in a static toolkit, programming the logic of this deep constituent parser would be very difficult or impossible. In a dynamic toolkit, however, there is not just one predefined computation graph. Each instance can have a new computation graph, so this is not a problem at all.

Dynamic toolkits also have the advantage of being easier to debug, and of code that looks more like the host language (by which I mean that, compared to Keras or Theano, Pytorch and Dynet look more like real Python code).

Discussion of Bi-LSTM Conditional Random Fields (CRFs)
-------------------------------------------

For this section, we will see a full, complicated example: a Bi-LSTM conditional random field for named-entity recognition. The LSTM tagger above is typically sufficient for part-of-speech tagging, but a sequence model like a CRF is really essential for strong performance on NER. Assume that you are familiar with conditional random fields (CRFs). Although the name sounds scary, the whole model is just a CRF, with the LSTM providing the features for that CRF. This is, however, an advanced model, far more complicated than any earlier model in this tutorial. If you want to skip it, that is fine. To see whether you're ready, check if you can:

- Write the recurrence for the viterbi variable at step i for tag k.
- Modify the above recurrence to compute the forward variables instead.
- Modify again the above recurrence to compute the forward variables in log-space (hint: log-sum-exp)

If you can do these three things, you should be able to understand the code below. Recall that a CRF computes a conditional probability. Let $y$ be a tag sequence and $x$ an input sequence of words. Then we compute

\begin{align}P(y|x) = \frac{\exp{(\text{Score}(x, y)})}{\sum_{y'} \exp{(\text{Score}(x, y')})}\end{align}

where the score is determined by defining some log potentials $\log \psi_i(x,y)$ such that

\begin{align}\text{Score}(x,y) = \sum_i \log \psi_i(x,y)\end{align}

To make the partition function tractable, the potentials must look only at local features. In the Bi-LSTM CRF, we define two kinds of potentials: emission and transition. The emission potential for the word at index $i$ comes from the hidden state of the Bi-LSTM at timestep $i$. The transition scores are stored in a $|T| \times |T|$ matrix $\textbf{P}$, where $T$ is the tag set. In my implementation, $\textbf{P}_{j,k}$ is the score of transitioning to tag $j$ from tag $k$. So:

\begin{align}\text{Score}(x,y) = \sum_i \log \psi_\text{EMIT}(y_i \rightarrow x_i) + \log \psi_\text{TRANS}(y_{i-1} \rightarrow y_i)\end{align}

\begin{align}= \sum_i h_i[y_i] + \textbf{P}_{y_i, y_{i-1}}\end{align}

where in the second expression we think of the tags as being assigned unique non-negative indices. If the discussion above was too brief, you can check out `this `__ write-up from Michael Collins on CRFs.

Implementation Notes
--------------------

The example below implements the forward algorithm in log space to compute the partition function, and the Viterbi algorithm to decode. Backpropagation will compute the gradients for us automatically; we don't have to do anything by hand. The implementation here is not optimized. If you understand what is going on, you will probably quickly see that iterating over the next tag in the forward algorithm could instead be done in one big operation. I wanted the code to be more readable, so I did not put it in a big operation. If you want to make the relevant changes, you could use this tagger for real tasks. ###Code
# Author: Robert Guthrie

import torch
import torch.autograd as autograd
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(1)
###Output _____no_output_____ ###Markdown Helper functions to make the code more readable. ###Code
def argmax(vec):
    # return the argmax as a python int
    _, idx = torch.max(vec, 1)
    return idx.item()

def prepare_sequence(seq, to_ix):
    idxs = [to_ix[w] for w in seq]
    return torch.tensor(idxs, dtype=torch.long)

# Compute log sum exp in a numerically stable way for the forward algorithm
def log_sum_exp(vec):
    max_score = vec[0, argmax(vec)]
    max_score_broadcast = max_score.view(1, -1).expand(1, vec.size()[1])
    return max_score + \
        torch.log(torch.sum(torch.exp(vec - max_score_broadcast)))
###Output _____no_output_____ ###Markdown Create the model ###Code
class BiLSTM_CRF(nn.Module):

    def __init__(self, vocab_size, tag_to_ix, embedding_dim, hidden_dim):
        super(BiLSTM_CRF, self).__init__()
        self.embedding_dim = embedding_dim
        self.hidden_dim = hidden_dim
        self.vocab_size = vocab_size
        self.tag_to_ix = tag_to_ix
        self.tagset_size = len(tag_to_ix)

        self.word_embeds = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim // 2,
                            num_layers=1, bidirectional=True)

        # Maps the output of the LSTM into tag space.
        self.hidden2tag = nn.Linear(hidden_dim, self.tagset_size)

        # Matrix of transition parameters. Entry i,j is the score of
        # transitioning *to* i *from* j.
        self.transitions = nn.Parameter(
            torch.randn(self.tagset_size, self.tagset_size))

        # These two statements enforce the constraint that we never transfer
        # to the start tag and we never transfer from the stop tag
        self.transitions.data[tag_to_ix[START_TAG], :] = -10000
        self.transitions.data[:, tag_to_ix[STOP_TAG]] = -10000

        self.hidden = self.init_hidden()

    def init_hidden(self):
        return (torch.randn(2, 1, self.hidden_dim // 2),
                torch.randn(2, 1, self.hidden_dim // 2))

    def _forward_alg(self, feats):
        # Do the forward algorithm to compute the partition function
        init_alphas = torch.full((1, self.tagset_size), -10000.)
        # START_TAG has all of the score.
        init_alphas[0][self.tag_to_ix[START_TAG]] = 0.

        # Wrap in a variable so that we will get automatic backprop
        forward_var = init_alphas

        # Iterate through the sentence
        for feat in feats:
            alphas_t = []  # The forward tensors at this timestep
            for next_tag in range(self.tagset_size):
                # broadcast the emission score: it is the same regardless of
                # the previous tag
                emit_score = feat[next_tag].view(
                    1, -1).expand(1, self.tagset_size)
                # the ith entry of trans_score is the score of transitioning to
                # next_tag from i
                trans_score = self.transitions[next_tag].view(1, -1)
                # The ith entry of next_tag_var is the value for the
                # edge (i -> next_tag) before we do log-sum-exp
                next_tag_var = forward_var + trans_score + emit_score
                # The forward variable for this tag is log-sum-exp of all the
                # scores.
                alphas_t.append(log_sum_exp(next_tag_var).view(1))
            forward_var = torch.cat(alphas_t).view(1, -1)
        terminal_var = forward_var + self.transitions[self.tag_to_ix[STOP_TAG]]
        alpha = log_sum_exp(terminal_var)
        return alpha

    def _get_lstm_features(self, sentence):
        self.hidden = self.init_hidden()
        embeds = self.word_embeds(sentence).view(len(sentence), 1, -1)
        lstm_out, self.hidden = self.lstm(embeds, self.hidden)
        lstm_out = lstm_out.view(len(sentence), self.hidden_dim)
        lstm_feats = self.hidden2tag(lstm_out)
        return lstm_feats

    def _score_sentence(self, feats, tags):
        # Gives the score of a provided tag sequence
        score = torch.zeros(1)
        tags = torch.cat([torch.tensor([self.tag_to_ix[START_TAG]], dtype=torch.long), tags])
        for i, feat in enumerate(feats):
            score = score + \
                self.transitions[tags[i + 1], tags[i]] + feat[tags[i + 1]]
        score = score + self.transitions[self.tag_to_ix[STOP_TAG], tags[-1]]
        return score

    def _viterbi_decode(self, feats):
        backpointers = []

        # Initialize the viterbi variables in log space
        init_vvars = torch.full((1, self.tagset_size), -10000.)
        init_vvars[0][self.tag_to_ix[START_TAG]] = 0

        # forward_var at step i holds the viterbi variables for step i-1
        forward_var = init_vvars
        for feat in feats:
            bptrs_t = []  # holds the backpointers for this step
            viterbivars_t = []  # holds the viterbi variables for this step

            for next_tag in range(self.tagset_size):
                # next_tag_var[i] holds the viterbi variable for tag i at the
                # previous step, plus the score of transitioning
                # from tag i to next_tag.
                # We don't include the emission scores here because the max
                # does not depend on them (we add them in below)
                next_tag_var = forward_var + self.transitions[next_tag]
                best_tag_id = argmax(next_tag_var)
                bptrs_t.append(best_tag_id)
                viterbivars_t.append(next_tag_var[0][best_tag_id].view(1))
            # Now add in the emission scores, and assign forward_var to the set
            # of viterbi variables we just computed
            forward_var = (torch.cat(viterbivars_t) + feat).view(1, -1)
            backpointers.append(bptrs_t)

        # Transition to STOP_TAG
        terminal_var = forward_var + self.transitions[self.tag_to_ix[STOP_TAG]]
        best_tag_id = argmax(terminal_var)
        path_score = terminal_var[0][best_tag_id]

        # Follow the back pointers to decode the best path.
        best_path = [best_tag_id]
        for bptrs_t in reversed(backpointers):
            best_tag_id = bptrs_t[best_tag_id]
            best_path.append(best_tag_id)
        # Pop off the start tag (we don't want to return that to the caller)
        start = best_path.pop()
        assert start == self.tag_to_ix[START_TAG]  # Sanity check
        best_path.reverse()
        return path_score, best_path

    def neg_log_likelihood(self, sentence, tags):
        feats = self._get_lstm_features(sentence)
        forward_score = self._forward_alg(feats)
        gold_score = self._score_sentence(feats, tags)
        return forward_score - gold_score

    def forward(self, sentence):  # don't confuse this with _forward_alg above.
        # Get the emission scores from the BiLSTM
        lstm_feats = self._get_lstm_features(sentence)

        # Find the best path, given the features.
        score, tag_seq = self._viterbi_decode(lstm_feats)
        return score, tag_seq
###Output _____no_output_____ ###Markdown Run training ###Code
START_TAG = "<START>"
STOP_TAG = "<STOP>"
EMBEDDING_DIM = 5
HIDDEN_DIM = 4

# Make up some training data
training_data = [(
    "the wall street journal reported today that apple corporation made money".split(),
    "B I I I O O O B I O O".split()
), (
    "georgia tech is a university in georgia".split(),
    "B I O O O O B".split()
)]

word_to_ix = {}
for sentence, tags in training_data:
    for word in sentence:
        if word not in word_to_ix:
            word_to_ix[word] = len(word_to_ix)

tag_to_ix = {"B": 0, "I": 1, "O": 2, START_TAG: 3, STOP_TAG: 4}

model = BiLSTM_CRF(len(word_to_ix), tag_to_ix, EMBEDDING_DIM, HIDDEN_DIM)
optimizer = optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)

# Check predictions before training
with torch.no_grad():
    precheck_sent = prepare_sequence(training_data[0][0], word_to_ix)
    precheck_tags = torch.tensor([tag_to_ix[t] for t in training_data[0][1]], dtype=torch.long)
    print(model(precheck_sent))

# Make sure prepare_sequence from earlier in the LSTM section is loaded
for epoch in range(300):  # again, normally you would NOT do 300 epochs, it is toy data
    for sentence, tags in training_data:
        # Step 1. Remember that Pytorch accumulates gradients.
        # We need to clear them out before each instance
        model.zero_grad()

        # Step 2. Get our inputs ready for the network, that is,
        # turn them into Tensors of word indices.
        sentence_in = prepare_sequence(sentence, word_to_ix)
        targets = torch.tensor([tag_to_ix[t] for t in tags], dtype=torch.long)

        # Step 3. Run our forward pass.
        loss = model.neg_log_likelihood(sentence_in, targets)

        # Step 4. Compute the loss, gradients, and update the parameters by
        # calling optimizer.step()
        loss.backward()
        optimizer.step()

# Check predictions after training
with torch.no_grad():
    precheck_sent = prepare_sequence(training_data[0][0], word_to_ix)
    print(model(precheck_sent))
# We got it!
###Output _____no_output_____
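###Markdown The decoder returns tag *indices*. A small convenience sketch (the inverse map `ix_to_tag` is our own addition) pairs each input word with its decoded tag string: ###Code
ix_to_tag = {ix: tag for tag, ix in tag_to_ix.items()}

with torch.no_grad():
    precheck_sent = prepare_sequence(training_data[0][0], word_to_ix)
    score, tag_seq = model(precheck_sent)
    # Pair each word with the decoded B/I/O tag
    print(list(zip(training_data[0][0], [ix_to_tag[ix] for ix in tag_seq])))
###Output _____no_output_____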
S71_drone_state.ipynb
###Markdown ###Code
%pip install -q -U gtbook
import math
import numpy as np
import gtsam
###Output _____no_output_____ ###Markdown Moving in Three Dimensions

> Motion in 3D has 6 DOF. ###Code
from gtbook.display import randomImages
from IPython.display import display
display(randomImages(7, 1, "steampunk", 1))
###Output _____no_output_____ ###Markdown In Section 6.1, we introduced $SE(2)$, and showed how matrices can be used to represent both rotation and translation in two dimensions. At that time, we promised that this could be easily generalized to the case of rotation and translation in three dimensions. In this section, we deliver on that promise, and introduce $SO(3)$ and $SE(3)$.

In addition to representing the pose of an object in 3D, for the case of drones we are also interested in representing velocity. This includes both the linear velocity of the drone, which corresponds to the velocity of the origin of the body-attached frame, and the rate of change in the orientation of the body-attached frame.

Rotations in 3D aka SO(3)

In Section 6.1, we constructed the rotation matrix $R^0_1 \in SO(2)$ by projecting the axes of Frame 1 onto Frame 0. The extension to 3D is indeed straightforward. Again, we merely project the axes of Frame 1 onto Frame 0, but in this case each frame is equipped with a $z$-axis:

$$R_{1}^{0}=\begin{bmatrix}x_1 \cdot x_0 & y_1 \cdot x_0 & z_1 \cdot x_0 \\x_1 \cdot y_0 & y_1 \cdot y_0 & z_1 \cdot y_0 \\x_1 \cdot z_0 & y_1 \cdot z_0 & z_1 \cdot z_0 \\\end{bmatrix}$$

The expression for rotational coordinate transformations is now exactly the same as in Chapter 6. Given a point $p$ with coordinates expressed relative to the body-attached frame, we compute its coordinates in frame $s$:

$$p^s = R^s_b p^b$$

Extending the semantics of $SO(2)$ to $SO(3)$, the columns of $R_{b}^{s}$ represent the axes of frame $B$ in the $S$ coordinate frame:

$$R_{b}^{s}=\left[\begin{array}{ccc}\hat{x}_{b}^{s} & \hat{y}_{b}^{s} & \hat{z}_{b}^{s}\end{array}\right]$$

The 3D rotations together with composition constitute the **special orthogonal group** $SO(3)$. It is made up of all $3\times3$ orthonormal matrices with determinant $+1$, with matrix multiplication implementing composition. However, *3D rotations do not commute*, i.e., in general $R_{2}R_{1}\neq R_{1}R_{2}$.

It is often useful to refer to rotations about one of the coordinate axes. These basic rotation matrices are given by

$$R_{x,\phi}=\begin{bmatrix}1 & 0 & 0 \\0 & \cos \phi & - \sin \phi \\0 & \sin \phi & \cos \phi \end{bmatrix}~~~R_{y,\theta}=\begin{bmatrix}\cos \theta & 0 & \sin \theta \\0 & 1 & 0 \\-\sin \theta & 0 & \cos \theta\end{bmatrix}~~~R_{z,\psi}=\begin{bmatrix}\cos \psi &-\sin \psi & 0 \\\sin \psi &\cos \psi & 0 \\0 & 0 & 1\end{bmatrix}$$

Note, in particular, the form of $R_{z,\psi}$, the rotation about the $z$-axis. The upper-left $2\times 2$ block of this matrix is exactly the rotation matrix $R \in SO(2)$ as introduced in Section 6.1. This can be understood by realizing that rotation in the $xy$ plane is actually a rotation about an implicit $z$-axis that points "out of the page."
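GTSAM exposes these basic rotations directly: `Rot3.Rx`, `Rot3.Ry`, and `Rot3.Rz` construct rotations about the coordinate axes from an angle in radians. As a quick sanity-check sketch (our own addition, not part of the text), we can verify the $SO(3)$ properties numerically: ###Code
Rz = gtsam.Rot3.Rz(math.radians(30)).matrix()  # basic rotation about the z-axis by 30 degrees
print(Rz)
# SO(3) membership: orthonormal (R^T R = I) with determinant +1
print(np.allclose(Rz.T @ Rz, np.eye(3)), np.isclose(np.linalg.det(Rz), 1.0))
###Output _____no_output_____ ###Markdown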
3D Rigid transforms aka SE(3)

In a similar way, we can immediately extend $SE(2)$ to the case of rotation and translation in 3D. Suppose the point $p$ is rigidly attached to frame $b$, and we wish to determine its coordinates with respect to the reference frame $s$. If the coordinates of $p$ with respect to frame $b$ are given by $p^b$, then

$$p^{s}=R_{b}^{s}p^{b}+t_{b}^{s}$$

in which $R^s_b \in SO(3)$ is the rotation matrix that specifies the orientation of frame $b$ w.r.t. frame $s$, and $t^s_b \in \mathbb{R}^3$ gives the location of the origin of frame $b$ w.r.t. frame $s$ (specified in the coordinates of frame $s$). We denote this transform by $T_{b}^{s}\doteq\left(R_{b}^{s},\,t_{b}^{s}\right)$. These transforms make up the **special Euclidean group** $SE(3)$, with the group operation defined similarly as in the previous chapter. Moreover, the group $SE(3)$ can be viewed as a subgroup of the general linear group $GL(4)$ of degree 4, by embedding the rotation and translation into a $4\times4$ invertible matrix defined as

$$T_{b}^{s}=\begin{bmatrix}R_{b}^{s} & t_{b}^{s}\\0 & 1\end{bmatrix}$$

Again, by embedding 3D points in a four-vector, a 3D rigid transform acting on a point can be implemented by matrix-vector multiplication:

$$\begin{bmatrix}R_{b}^{s} & t_{b}^{s}\\0 & 1\end{bmatrix}\begin{bmatrix}p^{b}\\1\end{bmatrix}=\begin{bmatrix}R_{b}^{s}p^{b}+t_{b}^{s}\\1\end{bmatrix}$$ ###Markdown The Group $SE(3)$ in GTSAM

> `Pose3` is a non-commutative group which acts on `Point3`.

$SE(3)$ is also a group, as the following properties all hold:

1. **Closure**: For all transforms $T, T' \in SE(3)$, their product is also in $SE(3)$, i.e., $T T' \in SE(3)$.
2. **Identity Element**: The $4\times 4$ identity matrix $I$ is included in the group, and for all $T \in SE(3)$ we have $T I = I T = T$.
3. **Inverse**: For every $T \in SE(3)$ there exists $T^{-1} \in SE(3)$ such that $T^{-1}T = T T^{-1} = I$.
4. **Associativity**: For all $T_1, T_2, T_3 \in SE(3)$, $(T_1 T_2) T_3 = T_1 (T_2 T_3)$.

3D rigid transforms do *not* commute. The inverse $T^{-1}$ is given by:

$$T^{-1} = \begin{bmatrix}R & d\\0_2 & 1\end{bmatrix}^{-1} = \begin{bmatrix}R^T & -R^T d\\0_2 & 1\end{bmatrix}$$

Again, all of this is built into GTSAM, where both 3D poses and 3D rigid transforms are represented by the type `Pose3`, with rotation matrices represented by `Rot3`: ###Code
R = gtsam.Rot3.Yaw(math.radians(20)) # rotation around Z-axis by 20 degrees; Yaw expects radians
T = gtsam.Pose3(R, [1,2,3])
print(f"3D Pose {T} corresponding to transformation matrix:\n{T.matrix()}")
###Output 3D Pose R: [ 0.939693, -0.34202, 0; 0.34202, 0.939693, 0; 0, 0, 1 ] t: 1 2 3 corresponding to transformation matrix: [[ 0.93969262 -0.34202014 0. 1. ] [ 0.34202014 0.93969262 0. 2. ] [ 0. 0. 1. 3. ] [ 0. 0. 0. 1. ]] ###Markdown 3D transforms form a *non-commutative* group, as demonstrated here in code: ###Code
print(isinstance(T * T, gtsam.Pose3)) # closure
I = gtsam.Pose3.identity()
print(T.equals(T * I, 1e-9)) # identity
print((T * T.inverse()).equals(I, 1e-9)) # inverse
T1, T2, T3 = T, gtsam.Pose3(gtsam.Rot3.Roll(0.1), [1,2,3]), gtsam.Pose3(gtsam.Rot3.Roll(0.2), [1,2,3])
print(((T1 * T2) * T3).equals(T1 * (T2 * T3), 1e-9)) # associative
print((T1 * T2).equals(T2 * T1, 1e-9)) # NOT commutative
###Output True True True True False ###Markdown Finally, 3D transforms can act on 3D points, which we can do using matrix multiplication, or using the `Pose3.transformFrom` method: ###Code
P1 = gtsam.Point3(2, 4, 3)
print(f"P0 = {T.matrix() @ [2, 4, 3, 1]}") # need to make P1 homogeneous
print(f"P0 = {T.transformFrom(P1)}")
###Output P0 = [1.51130467 6.44281077 6. 1. ] P0 = [1.51130467 6.44281077 6. ]
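###Markdown Going the other way, `Pose3.transformTo` applies the inverse transform, mapping coordinates from frame $s$ back to frame $b$. A small sketch reusing `T` and `P1` from above (the round-trip check is our own addition): ###Code
P0 = T.transformFrom(P1)
print(T.transformTo(P0)) # should recover P1 = [2, 4, 3]
print(T.inverse().transformFrom(P0)) # equivalent, via the group inverse
###Output _____no_output_____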
Python Fundamentals for Data Science/S1/Additional Notebooks S1/1_Python - Input Statements.ipynb
###Markdown Python - Input Statements Python Inputs The input function is a simple way to get data from the user. Here is an example: ###Code
input('What is your name?')
###Output What is your name? ParameshwaranP ###Markdown The string specified in the input function serves as a message to the user about what needs to be keyed in. Whatever is entered by the user is treated as a string. Even if a number is entered, it will still be considered a string; for further processing it needs to be type cast to the appropriate data type. ###Code
input('What is your age?')
###Output What is your age? 22 ###Markdown The user input can be saved in a variable: variable_name = input('message to the user') ###Code
name = input('What is your name?')
print(name)
name = input('What is your name?')
print("Welcome", name)
###Output What is your name? ParameswananP ###Markdown Check the type of the input variable ###Code
type(name)
age = input('What is your age?')
print("You have entered age as ", age)
print("Type : " , type(age))
###Output What is your age? 22 ###Markdown Convert the type of the input variable ###Code
age = input('What is your age?')
modified_age = int(age)
print("You have entered age as ", modified_age)
print("Type : " , type(modified_age))
salary = input('What is your salary?')
modified_salary = float(salary)
print("You have entered salary as ", modified_salary)
print("Type : " , type(modified_salary))
###Output What is your salary? 143.2 ###Markdown Using eval The eval function converts the text entered by the user into a number without explicit typecasting. ###Code
my_age = eval(input('What is your age?\n'))
print("You have entered age as ", my_age)
print("Type : " , type(my_age))
my_salary = eval(input('What is your salary?\n'))
print("You have entered salary as ", my_salary)
print("Type : " , type(my_salary))
my_total_marks = eval(input('Enter your marks\n')) #Enter an expression like 23 + 34 + 45
print("You have entered marks as ", my_total_marks)
###Output Enter your marks 11+11+12 ###Markdown Exercise Q1. Ask the user to enter a number and then print its square and cube. ###Code
#Try it here
number = eval(input("Enter the number to be squared and cubed"))
squ = number ** 2
cub = number ** 3
print("Square = ", squ)
print("Cube = ", cub)
###Output Enter the number to be squared and cubed 10 ###Markdown Q2. Write a program that asks the user to input three numbers with separate input statements. Use a variable named total to store the sum of those numbers and avg to store the average of those numbers. Then output the numbers along with the total and average. ###Code
#Try it here
num1 = eval(input("Enter the 1st number"))
num2 = eval(input("Enter the 2nd number"))
num3 = eval(input("Enter the 3rd number"))
total = num1 + num2 + num3
avg = total/3
print("Input numbers are", num1, num2, num3, "Their total = ", total, "Their Average = ", avg)
# List Method:
# try_num_lis = eval(input("Enter list of num")) # input needs to be given as a list, e.g. [10, 20, 30]
# total = sum(try_num_lis)
# avg = sum(try_num_lis)/len(try_num_lis)
# print("Input numbers are", try_num_lis, "Their total = ", total, "Their Average = ", avg)
###Output Enter the 1st number 10 Enter the 2nd number 20 Enter the 3rd number 30 ###Markdown Q3. Create a salary estimator. Ask the user for the basic salary; using it, compute HRA as 10% of the basic salary and add a conveyance allowance of Rs 3500. Output all the salary components on the console.
###Code
#Try it here
basic_salary = eval(input("Enter the salary : "))
hra = basic_salary * 0.1
allowance = 3500
total_salary = basic_salary + hra + allowance
print("Total Salary = ", total_salary, "\nBasic Salary = ", basic_salary, "\nHRA = ", hra, "\nAllowance = ", allowance)
###Output Enter the salary : 5000
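###Markdown A note on eval: eval will execute *any* expression the user types, so for plain numbers a safer habit is to convert with int or float and handle bad input explicitly. A small sketch (the variable names are our own): ###Code
raw = input("Enter your age : ")
try:
    age = int(raw)  # use float(raw) if decimal values are expected
    print("You have entered age as", age)
except ValueError:
    print("Please enter digits only, you entered:", raw)
###Output _____no_output_____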
DCGAN_intro_for_MNIST.ipynb
###Markdown ###Code
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
import time
from skimage.io import imshow
from IPython.display import display
tf.__version__
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()
train_images.dtype, train_images.shape
imshow(train_images[0])
def img_to_float(img):
    return (np.float32(img)[..., None] - 127.5) / 127.5
def img_to_uint8(img):
    # clip in float space *before* casting, so out-of-range values cannot wrap around
    return np.uint8((img * 127.5 + 127.5).clip(0, 255))[..., 0]
train_img_f32 = img_to_float(train_images)
imshow(img_to_uint8(train_img_f32[0]))
BUFFER_SIZE = train_img_f32.shape[0]
BATCH_SIZE = 32
train_dataset = tf.data.Dataset.from_tensor_slices(train_img_f32).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
from tensorflow.keras.layers import Dense, BatchNormalization, LeakyReLU, Reshape, Conv2DTranspose
latent_dim = 100
generator = tf.keras.Sequential([
    Dense(7*7*256, use_bias=False, input_shape=(latent_dim,)),
    BatchNormalization(),
    LeakyReLU(),
    Reshape((7, 7, 256)),
    Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False),
    BatchNormalization(),
    LeakyReLU(),
    Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False),
    BatchNormalization(),
    LeakyReLU(),
    Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh')
])
from tensorflow.keras.layers import Conv2D, Dropout, Flatten
discriminator = tf.keras.Sequential([
    Conv2D(64, (5, 5), strides=(2, 2), padding='same', input_shape=(28, 28, 1)),
    BatchNormalization(),
    LeakyReLU(),
    Dropout(0.3),
    Conv2D(128, (5, 5), strides=(2, 2), padding='same'),
    BatchNormalization(),
    LeakyReLU(),
    Dropout(0.3),
    Flatten(),
    Dense(1)
])
loss_fn = tf.keras.losses.BinaryCrossentropy(from_logits=True)
def generator_loss(generated_output):
    return loss_fn(tf.ones_like(generated_output), generated_output)
def discriminator_loss(real_output, generated_output):
    # [1,1,...,1] for real output, since it is real and we want our generated examples to look like it
    real_loss = loss_fn(tf.ones_like(real_output), real_output)
    # [0,0,...,0] for generated images, since they are fake
    generated_loss = loss_fn(tf.zeros_like(generated_output), generated_output)
    total_loss = real_loss + generated_loss
    return total_loss
generator_optimizer = tf.keras.optimizers.Adam(1e-4)
discriminator_optimizer = tf.keras.optimizers.Adam(1e-4)
EPOCHS = 15
num_examples_to_generate = 16
# We'll re-use this random vector used to seed the generator so
# it will be easier to see the improvement over time.
random_vector_for_generation = tf.random.normal([num_examples_to_generate, latent_dim])

@tf.function
def train_step(images):
    # generating noise from a normal distribution
    noise = tf.random.normal([BATCH_SIZE, latent_dim])

    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        generated_images = generator(noise, training=True)

        real_logits = discriminator(images, training=True)
        generated_logits = discriminator(generated_images, training=True)

        gen_loss = generator_loss(generated_logits)
        disc_loss = discriminator_loss(real_logits, generated_logits)

    gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables)
    gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables)

    generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))
    discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables))

for epoch in range(EPOCHS):
    start_time = time.time()
    for images in train_dataset:
        train_step(images)
    fake = generator(random_vector_for_generation, training=False)
    fake_concat = np.transpose(img_to_uint8(fake), [1,0,2]).reshape((28,-1))
    print(epoch, time.time()-start_time)
    display(PIL.Image.fromarray(fake_concat))
###Output 0 11.480594396591187
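###Markdown After training, the generator can be sampled with fresh noise rather than the fixed seed vector. A small sketch (the grid-tiling helper and the weights filename are our own choices, not part of the original notebook): ###Code
n_rows, n_cols = 4, 4
noise = tf.random.normal([n_rows * n_cols, latent_dim])
samples = img_to_uint8(generator(noise, training=False))  # uint8 array of shape (16, 28, 28)

# Tile the 16 digits into a 4x4 grid for display
grid = samples.reshape(n_rows, n_cols, 28, 28).transpose(0, 2, 1, 3).reshape(n_rows * 28, n_cols * 28)
display(PIL.Image.fromarray(grid))

# Persist the generator so samples can be reproduced later
generator.save_weights('dcgan_generator.h5')
###Output _____no_output_____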
technical_indicators.ipynb
###Markdown Technical Indicators - [pandas-ta](https://pypi.org/project/pandas-ta/) Python library ###Code
# Import yfinance
import yfinance as yf

# Specify asset ticker
ticker = 'BTC-USD'

# Download asset price history for the specific time interval
asset_history = yf.download(tickers=f"{ticker}", start="2021-01-01", end="2022-02-04", progress=False)
asset_history.tail()
###Output _____no_output_____ ###Markdown Technical Indicators - for the list of available indicators see the [pandas-ta documentation](https://technical-analysis-library-in-python.readthedocs.io/en/latest/ta.html?highlight=obv#ta.volume.OnBalanceVolumeIndicator.on_balance_volume) ###Code
# Import libraries
import pandas as pd
import pandas_ta as pta
import matplotlib.pyplot as plt
%matplotlib inline
###Output _____no_output_____ ###Markdown Relative Strength Index - RSI ###Code
rsi_btc = pta.rsi(asset_history['Adj Close'], length=14).dropna()
rsi_btc.tail()
rsi_btc.plot(title='RSI', rot=45, figsize=(16,8), ylabel='RSI index')
plt.axhline(70, color='r', label='70% RSI')
plt.axhline(30, color='g', label='30% RSI')
plt.axhline(rsi_btc.iloc[-1], color='b', label='current RSI')
plt.legend(loc='best')
###Output _____no_output_____ ###Markdown Stochastic Oscillator - STOCH ###Code
stoch_btc = pta.stoch(asset_history['High'], asset_history['Low'], asset_history['Adj Close'], length=14)
stoch_btc.tail()
stoch_btc.plot(figsize=(16,8), title='STOCH', legend=True)
###Output _____no_output_____ ###Markdown On-Balance Volume - OBV ###Code
obv_btc = pta.obv(asset_history['Adj Close'], asset_history['Volume'])
obv_btc.tail()
obv_btc.plot(figsize=(16,8), title='OBV', legend=True)
###Output _____no_output_____ ###Markdown Average Directional Movement - ADX ###Code
adx_btc = pta.adx(asset_history['High'], asset_history['Low'], asset_history['Adj Close'], length=14)
adx_btc.tail()
adx_btc.plot(figsize=(16,8), title='ADX', legend=True) # DMP (positive), DMN (negative)
###Output _____no_output_____ ###Markdown Moving Average Convergence Divergence - MACD ###Code
macd_btc = pta.macd(asset_history['Adj Close'], fast=12, slow=26, signal=9)
macd_btc.tail()
macd_btc.plot(figsize=(16,8), title='MACD')
###Output _____no_output_____ ###Markdown Simple Moving Average - SMA ###Code
sma_btc = pta.sma(asset_history['Adj Close'], length=20) # change window size
sma_btc.tail()
sma_btc.plot(figsize=(16,8), title='SMA', legend=True)
###Output _____no_output_____ ###Markdown Bollinger Bands - BBANDS ###Code
bbands_btc = pta.bbands(asset_history['Adj Close'], length=20, std=2) # change window size (length) and std dev (std)
bbands_btc.tail()
bbands_btc.plot(figsize=(16,8), title='BBANDS', legend=True)
###Output _____no_output_____ ###Markdown Accumulation/Distribution Index - ADI ###Code
adi_btc = pta.ad(asset_history['High'], asset_history['Low'], asset_history['Adj Close'], asset_history['Volume'])
adi_btc.tail()
adi_btc.plot(figsize=(16,8), title='ADI', legend=True)
###Output _____no_output_____
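###Markdown Combining indicators: the individual series above can be joined back onto the price history for side-by-side analysis. A small sketch (column names follow pandas-ta defaults; the crossover flag is our own illustration, not investment advice): ###Code
# Attach two moving averages and flag a simple crossover condition
combined = asset_history[['Adj Close']].copy()
combined['SMA_20'] = pta.sma(asset_history['Adj Close'], length=20)
combined['SMA_50'] = pta.sma(asset_history['Adj Close'], length=50)

# True while the short average sits above the long one
combined['short_above_long'] = combined['SMA_20'] > combined['SMA_50']
combined.dropna().tail()
###Output _____no_output_____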
notebooks/results_regression.ipynb
###Markdown Only vanilla RNN and MLP ###Code fig, axs = plt.subplots(2, ncoef, figsize=(25,10)) val_names = ['Intercept', 'Ground Truth E','Warped E'] for coef in range(ncoef): if coef==0: fig.delaxes(axs[0][coef]) fig.delaxes(axs[1][coef]) continue val_name = val_names[coef] # MLP val = param_mlp[:,:,coef] df = pd.DataFrame(val, columns= np.arange(checkpoints)) df.insert(0, 'runs', np.arange(runs)) df2 = pd.melt(df, id_vars=['runs'],var_name='steps', value_name=val_name) # plot ax = axs[0][coef] ax = sns.boxplot(x='steps', y=val_name, data=df2, ax=ax) ax = sns.stripplot(x='steps', y=val_name, data=df2, ax=ax) for run in range(runs): for i, p in enumerate(p_val_mlp[run,:,coef]): s = '*' if p<0.05 else ' ' ax.annotate(s, (i, param_mlp[run, i, coef]), color='r') ax.set_title('MLP', fontweight='bold') # RNN val = param_rnn[:,:,coef] df = pd.DataFrame(val, columns= np.arange(checkpoints)) df.insert(0, 'runs', np.arange(runs)) df2 = pd.melt(df, id_vars=['runs'],var_name='steps', value_name=val_name) # plot ax = axs[1][coef] ax = sns.boxplot(x='steps', y=val_name, data=df2, ax=ax) ax = sns.stripplot(x='steps', y=val_name, data=df2, ax=ax) for run in range(runs): for i, p in enumerate(p_val_rnn[run,:,coef]): s = '*' if p<0.05 else ' ' ax.annotate(s, (i, param_rnn[run, i, coef]), color='r') ax.set_title('RNN - ax %s' %(ctx_order), fontweight='bold') for ax in axs.flatten(): ax.set_ylim([mi, mx]) ax.axhline(y=0, color='r', linewidth=2) fig_str = '%s_reg_results_both_models_hidds' %(ctx_order_str) fig.suptitle('Regression Results', fontweight='bold') plt.tight_layout() fig.savefig(('../../figures/' + fig_str + '.pdf'), bbox_inches = 'tight', pad_inches = 0) fig.savefig(('../../figures/' + fig_str + '.png'), bbox_inches = 'tight', pad_inches = 0) ###Output _____no_output_____ ###Markdown Including Stepwise MLP ###Code fig, axs = plt.subplots(4, ncoef, figsize=(45,20)) val_names = ['Intercept', 'Ground Truth E','Warped E'] for coef in range(ncoef): if coef == 0: fig.delaxes(axs[0][coef]) fig.delaxes(axs[1][coef]) fig.delaxes(axs[2][coef]) fig.delaxes(axs[3][coef]) continue val_name = val_names[coef] # MLP val = param_mlp[:,:,coef] df = pd.DataFrame(val, columns= np.arange(checkpoints)) df.insert(0, 'runs', np.arange(runs)) df2 = pd.melt(df, id_vars=['runs'],var_name='steps', value_name=val_name) # plot ax = axs[0][coef] ax = sns.boxplot(x='steps', y=val_name, data=df2, ax=ax) ax = sns.stripplot(x='steps', y=val_name, data=df2, ax=ax) for run in range(runs): for i, p in enumerate(p_val_mlp[run,:,coef]): s = '*' if p<0.05 else ' ' ax.annotate(s, (i, param_mlp[run, i, coef]), color='r') ax.set_title('MLP', fontweight='bold') # RNN val = param_rnn[:,:,coef] df = pd.DataFrame(val, columns= np.arange(checkpoints)) df.insert(0, 'runs', np.arange(runs)) df2 = pd.melt(df, id_vars=['runs'],var_name='steps', value_name=val_name) # plot ax = axs[1][coef] ax = sns.boxplot(x='steps', y=val_name, data=df2, ax=ax) ax = sns.stripplot(x='steps', y=val_name, data=df2, ax=ax) for run in range(runs): for i, p in enumerate(p_val_rnn[run,:,coef]): s = '*' if p<0.05 else ' ' ax.annotate(s, (i, param_rnn[run, i, coef]), color='r') ax.set_title('RNN - ax %s' %(ctx_order), fontweight='bold') # Stepwise MLP - Hidden 1 val = param_swmlp[:,:,0,coef] df = pd.DataFrame(val, columns= np.arange(checkpoints)) df.insert(0, 'runs', np.arange(runs)) df2 = pd.melt(df, id_vars=['runs'],var_name='steps', value_name=val_name) # plot ax = axs[2][coef] ax = sns.boxplot(x='steps', y=val_name, data=df2, ax=ax) ax = sns.stripplot(x='steps', 
y=val_name, data=df2, ax=ax) for run in range(runs): for i, p in enumerate(p_val_swmlp[run,:,0,coef]): s = '*' if p<0.05 else ' ' ax.annotate(s, (i, param_swmlp[run,i,0,coef]), color='r') ax.set_title('StepwiseMLP - Hidden 1', fontweight='bold') # Stepwise MLP - Hidden 2 val = param_swmlp[:,:,1,coef] df = pd.DataFrame(val, columns= np.arange(checkpoints)) df.insert(0, 'runs', np.arange(runs)) df2 = pd.melt(df, id_vars=['runs'],var_name='steps', value_name=val_name) # plot ax = axs[3][coef] ax = sns.boxplot(x='steps', y=val_name, data=df2, ax=ax) ax = sns.stripplot(x='steps', y=val_name, data=df2, ax=ax) for run in range(runs): for i, p in enumerate(p_val_swmlp[run,:,1,coef]): s = '*' if p<0.05 else ' ' ax.annotate(s, (i, param_swmlp[run,i,1,coef]), color='r') ax.set_title('StepwiseMLP - Hidden 2', fontweight='bold') for ax in axs.flatten(): ax.set_ylim([mi, mx]) ax.axhline(y=0, color='r', linewidth=2) fig_str = '%s_reg_results_three_models_hidds' %(ctx_order_str) fig.suptitle('Regression Results', fontweight='bold') plt.tight_layout() # plt.gca().xaxis.set_major_locator(plt.NullLocator()) # plt.gca().yaxis.set_major_locator(plt.NullLocator()) plt.show() fig.savefig(('../../figures/' + fig_str + '.pdf'), bbox_inches = 'tight', pad_inches = 0) fig.savefig(('../../figures/' + fig_str + '.png'), bbox_inches = 'tight', pad_inches = 0) ###Output _____no_output_____ ###Markdown Truncated backprop - RNNCell ###Code fig, axs = plt.subplots(2, ncoef, figsize=(25,10)) val_names = ['Intercept', 'Ground Truth E','Warped E'] for coef in range(ncoef): if coef==0: fig.delaxes(axs[0][coef]) fig.delaxes(axs[1][coef]) continue val_name = val_names[coef] # RNN (note: the boxplot data here must come from param_rnn so that it matches the annotations and panel title) val = param_rnn[:,:,coef] df = pd.DataFrame(val, columns= np.arange(checkpoints)) df.insert(0, 'runs', np.arange(runs)) df2 = pd.melt(df, id_vars=['runs'],var_name='steps', value_name=val_name) # plot ax = axs[0][coef] ax = sns.boxplot(x='steps', y=val_name, data=df2, ax=ax) ax = sns.stripplot(x='steps', y=val_name, data=df2, ax=ax) for run in range(runs): for i, p in enumerate(p_val_rnn[run,:,coef]): s = '*' if p<0.05 else ' ' ax.annotate(s, (i, param_rnn[run, i, coef]), color='r') ax.set_title('RNN - Ax %s' %(ctx_order), fontweight='bold') # RNN Cell (likewise, this panel plots param_rnncell to match its annotations) val = param_rnncell[:,:,coef] df = pd.DataFrame(val, columns= np.arange(checkpoints)) df.insert(0, 'runs', np.arange(runs)) df2 = pd.melt(df, id_vars=['runs'],var_name='steps', value_name=val_name) # plot ax = axs[1][coef] ax = sns.boxplot(x='steps', y=val_name, data=df2, ax=ax) ax = sns.stripplot(x='steps', y=val_name, data=df2, ax=ax) for run in range(runs): for i, p in enumerate(p_val_rnncell[run,:,coef]): s = '*' if p<0.05 else ' ' ax.annotate(s, (i, param_rnncell[run, i, coef]), color='r') ax.set_title('RNNCell - Ax %s' %(ctx_order), fontweight='bold') for ax in axs.flatten(): ax.set_ylim([mi, mx]) ax.axhline(y=0, color='r', linewidth=2) fig_str = '%s_reg_results_both_rnnmodels_hidds' %(ctx_order_str) fig.suptitle('Regression Results', fontweight='bold') plt.tight_layout() fig.savefig(('../../figures/' + fig_str + '.pdf'), bbox_inches = 'tight', pad_inches = 0) fig.savefig(('../../figures/' + fig_str + '.png'), bbox_inches = 'tight', pad_inches = 0) ###Output _____no_output_____
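###Markdown The '*' markers above use an uncorrected p<0.05 threshold at every checkpoint, so some correction for multiple comparisons may be warranted. A hedged sketch using statsmodels (assumed installed), applied to the warped-E p-values of the RNN as an example: ###Code
from statsmodels.stats.multitest import multipletests

# Benjamini-Hochberg FDR over all runs and checkpoints for coefficient 2 (Warped E)
flat_p = p_val_rnn[:, :, 2].ravel()
rejected, p_adj, _, _ = multipletests(flat_p, alpha=0.05, method='fdr_bh')
print("significant before correction:", (flat_p < 0.05).sum(), "| after FDR:", rejected.sum())
###Output _____no_output_____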
SCRIPTS/Q5.tweet_count (1).ipynb
###Markdown An equivalent SQL formulation: register the DataFrame as a temporary view with data1.createOrReplaceTempView('data') and then run spark.sql("select * from data order by tweet_count"). ###Code
data1=spark.read.format("com.databricks.spark.csv").options(inferSchema="true",header='true',escape='"').load("C:/Users/HP/Downloads/REVATURE/PROJECT2/data-society-twitter-user-data/gender-classifier-DFE-791531.csv")
data1.show()
data1.select('name','tweet_count').distinct().where(data1.tweet_count>100000).sort(data1.tweet_count.desc()).show()
###Output +---------------+-----------+ | name|tweet_count| +---------------+-----------+ | TaibaDXB| 992101| | AmNewsWatch| 983165| |AllTheNewsIsNow| 982075| | ukworldnews| 963707| | yumronidua| 958325| | sexysleepwear| 950858| | BTCare| 937954| | xxolh| 890423| | Koran_Inggris| 873619| | KangenGaa| 849415| | krs21da| 836529| | myvotefactor| 793162| | subredditsbot| 790012| | TMobileHelp| 768078| | ReadersGazette| 766629| |high_on_glitter| 752344| | SkyHelpTeam| 729779| | BaltNetRadio| 726980| | commonpatriot| 726477| | uzsanjarbek| 692281| +---------------+-----------+ only showing top 20 rows
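###Markdown The same query can also be run through Spark SQL, as sketched in the note above (the view name 'data' is our choice): ###Code
# Register the DataFrame as a temporary view and query it with SQL
data1.createOrReplaceTempView('data')
spark.sql("select distinct name, tweet_count from data where tweet_count > 100000 order by tweet_count desc").show()
###Output _____no_output_____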