Dataset columns: desc (string lengths 3 to 26.7k), decl (string lengths 11 to 7.89k), bodies (string lengths 8 to 553k).
'Compute scalar loss tensors with respect to provided groundtruth. Calling this function requires that groundtruth tensors have been provided via the provide_groundtruth function. Args: prediction_dict: a dictionary holding prediction tensors with 1) box_encodings: 4-D float tensor of shape [batch_size, num_anchors, box_code_dimension] containing predicted boxes. 2) class_predictions_with_background: 2-D float tensor of shape [batch_size, num_anchors, num_classes+1] containing class predictions (logits) for each of the anchors. Note that this tensor *includes* background class predictions. scope: Optional scope name. Returns: a dictionary mapping loss keys (`localization_loss` and `classification_loss`) to scalar tensors representing corresponding loss values.'
def loss(self, prediction_dict, scope=None):
    with tf.name_scope(scope, 'Loss', prediction_dict.values()):
        (batch_cls_targets, batch_cls_weights, batch_reg_targets,
         batch_reg_weights, match_list) = self._assign_targets(
             self.groundtruth_lists(fields.BoxListFields.boxes),
             self.groundtruth_lists(fields.BoxListFields.classes))
        if self._add_summaries:
            self._summarize_input(
                self.groundtruth_lists(fields.BoxListFields.boxes), match_list)
        num_matches = tf.stack(
            [match.num_matched_columns() for match in match_list])
        location_losses = self._localization_loss(
            prediction_dict['box_encodings'],
            batch_reg_targets,
            weights=batch_reg_weights)
        cls_losses = self._classification_loss(
            prediction_dict['class_predictions_with_background'],
            batch_cls_targets,
            weights=batch_cls_weights)
        localization_loss = tf.reduce_sum(location_losses)
        classification_loss = tf.reduce_sum(cls_losses)
        if self._hard_example_miner:
            (localization_loss, classification_loss) = self._apply_hard_mining(
                location_losses, cls_losses, prediction_dict, match_list)
            if self._add_summaries:
                self._hard_example_miner.summarize()
        normalizer = tf.constant(1.0, dtype=tf.float32)
        if self._normalize_loss_by_num_matches:
            normalizer = tf.maximum(tf.to_float(tf.reduce_sum(num_matches)), 1.0)
        loss_dict = {
            'localization_loss': (self._localization_loss_weight / normalizer) * localization_loss,
            'classification_loss': (self._classification_loss_weight / normalizer) * classification_loss,
        }
    return loss_dict
'Assign groundtruth targets. Adds a background class to each one-hot encoding of groundtruth classes and uses target assigner to obtain regression and classification targets. Args: groundtruth_boxes_list: a list of 2-D tensors of shape [num_boxes, 4] containing coordinates of the groundtruth boxes. Groundtruth boxes are provided in [y_min, x_min, y_max, x_max] format and assumed to be normalized and clipped relative to the image window with y_min <= y_max and x_min <= x_max. groundtruth_classes_list: a list of 2-D one-hot (or k-hot) tensors of shape [num_boxes, num_classes] containing the class targets with the 0th index assumed to map to the first non-background class. Returns: batch_cls_targets: a tensor with shape [batch_size, num_anchors, num_classes], batch_cls_weights: a tensor with shape [batch_size, num_anchors], batch_reg_targets: a tensor with shape [batch_size, num_anchors, box_code_dimension] batch_reg_weights: a tensor with shape [batch_size, num_anchors], match_list: a list of matcher.Match objects encoding the match between anchors and groundtruth boxes for each image of the batch, with rows of the Match objects corresponding to groundtruth boxes and columns corresponding to anchors.'
def _assign_targets(self, groundtruth_boxes_list, groundtruth_classes_list):
    groundtruth_boxlists = [
        box_list.BoxList(boxes) for boxes in groundtruth_boxes_list]
    groundtruth_classes_with_background_list = [
        tf.pad(one_hot_encoding, [[0, 0], [1, 0]], mode='CONSTANT')
        for one_hot_encoding in groundtruth_classes_list]
    return target_assigner.batch_assign_targets(
        self._target_assigner, self.anchors, groundtruth_boxlists,
        groundtruth_classes_with_background_list)
'Creates tensorflow summaries for the input boxes and anchors. This function creates four summaries corresponding to the average number (over images in a batch) of (1) groundtruth boxes, (2) anchors marked as positive, (3) anchors marked as negative, and (4) anchors marked as ignored. Args: groundtruth_boxes_list: a list of 2-D tensors of shape [num_boxes, 4] containing corners of the groundtruth boxes. match_list: a list of matcher.Match objects encoding the match between anchors and groundtruth boxes for each image of the batch, with rows of the Match objects corresponding to groundtruth boxes and columns corresponding to anchors.'
def _summarize_input(self, groundtruth_boxes_list, match_list):
    num_boxes_per_image = tf.stack(
        [tf.shape(x)[0] for x in groundtruth_boxes_list])
    pos_anchors_per_image = tf.stack(
        [match.num_matched_columns() for match in match_list])
    neg_anchors_per_image = tf.stack(
        [match.num_unmatched_columns() for match in match_list])
    ignored_anchors_per_image = tf.stack(
        [match.num_ignored_columns() for match in match_list])
    tf.summary.scalar('Input/AvgNumGroundtruthBoxesPerImage',
                      tf.reduce_mean(tf.to_float(num_boxes_per_image)))
    tf.summary.scalar('Input/AvgNumPositiveAnchorsPerImage',
                      tf.reduce_mean(tf.to_float(pos_anchors_per_image)))
    tf.summary.scalar('Input/AvgNumNegativeAnchorsPerImage',
                      tf.reduce_mean(tf.to_float(neg_anchors_per_image)))
    tf.summary.scalar('Input/AvgNumIgnoredAnchorsPerImage',
                      tf.reduce_mean(tf.to_float(ignored_anchors_per_image)))
'Applies hard mining to anchorwise losses. Args: location_losses: Float tensor of shape [batch_size, num_anchors] representing anchorwise location losses. cls_losses: Float tensor of shape [batch_size, num_anchors] representing anchorwise classification losses. prediction_dict: a dictionary holding prediction tensors with 1) box_encodings: 4-D float tensor of shape [batch_size, num_anchors, box_code_dimension] containing predicted boxes. 2) class_predictions_with_background: 2-D float tensor of shape [batch_size, num_anchors, num_classes+1] containing class predictions (logits) for each of the anchors. Note that this tensor *includes* background class predictions. match_list: a list of matcher.Match objects encoding the match between anchors and groundtruth boxes for each image of the batch, with rows of the Match objects corresponding to groundtruth boxes and columns corresponding to anchors. Returns: mined_location_loss: a float scalar with sum of localization losses from selected hard examples. mined_cls_loss: a float scalar with sum of classification losses from selected hard examples.'
def _apply_hard_mining(self, location_losses, cls_losses, prediction_dict, match_list):
    class_pred_shape = [-1, self.anchors.num_boxes_static(), self.num_classes]
    class_predictions = tf.reshape(
        tf.slice(prediction_dict['class_predictions_with_background'],
                 [0, 0, 1], class_pred_shape),
        class_pred_shape)
    decoded_boxes = bcoder.batch_decode(prediction_dict['box_encodings'],
                                        self._box_coder, self.anchors)
    decoded_box_tensors_list = tf.unstack(decoded_boxes)
    class_prediction_list = tf.unstack(class_predictions)
    decoded_boxlist_list = []
    for box_location, box_score in zip(decoded_box_tensors_list,
                                       class_prediction_list):
        decoded_boxlist = box_list.BoxList(box_location)
        decoded_boxlist.add_field('scores', box_score)
        decoded_boxlist_list.append(decoded_boxlist)
    return self._hard_example_miner(
        location_losses=location_losses,
        cls_losses=cls_losses,
        decoded_boxlist_list=decoded_boxlist_list,
        match_list=match_list)
'Return callable for loading a checkpoint into the tensorflow graph. Args: checkpoint_path: path to checkpoint to restore. from_detection_checkpoint: whether to restore from a full detection checkpoint (with compatible variable names) or to restore from a classification checkpoint for initialization prior to training. Returns: a callable which takes a tf.Session as input and loads a checkpoint when run.'
def restore_fn(self, checkpoint_path, from_detection_checkpoint=True):
    variables_to_restore = {}
    for variable in tf.all_variables():
        if variable.op.name.startswith(self._extract_features_scope):
            var_name = variable.op.name
            if not from_detection_checkpoint:
                var_name = re.split('^' + self._extract_features_scope + '/',
                                    var_name)[-1]
            variables_to_restore[var_name] = variable
    variables_to_restore = variables_helper.get_variables_available_in_checkpoint(
        variables_to_restore, checkpoint_path)
    saver = tf.train.Saver(variables_to_restore)

    def restore(sess):
        saver.restore(sess, checkpoint_path)
    return restore
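A small, hedged usage sketch of the returned callable (assumes a constructed detection model exposing this restore_fn; the checkpoint path is illustrative):

# Hypothetical usage sketch: load feature-extractor weights from a
# classification checkpoint before training (the path is a placeholder).
init_fn = detection_model.restore_fn('/path/to/classification.ckpt',
                                     from_detection_checkpoint=False)
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    init_fn(sess)  # runs saver.restore for the feature-extractor variables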
'RFCNMetaArch Constructor. Args: is_training: A boolean indicating whether the training version of the computation graph should be constructed. num_classes: Number of classes. Note that num_classes *does not* include the background category, so if groundtruth labels take values in {0, 1, .., K-1}, num_classes=K (and not K+1, even though the assigned classification targets can range from {0,... K}). image_resizer_fn: A callable for image resizing. This callable always takes a rank-3 image tensor (corresponding to a single image) and returns a rank-3 image tensor, possibly with new spatial dimensions. See builders/image_resizer_builder.py. feature_extractor: A FasterRCNNFeatureExtractor object. first_stage_only: Whether to construct only the Region Proposal Network (RPN) part of the model. first_stage_anchor_generator: An anchor_generator.AnchorGenerator object (note that currently we only support grid_anchor_generator.GridAnchorGenerator objects) first_stage_atrous_rate: A single integer indicating the atrous rate for the single convolution op which is applied to the `rpn_features_to_crop` tensor to obtain a tensor to be used for box prediction. Some feature extractors optionally allow for producing feature maps computed at denser resolutions. The atrous rate is used to compensate for the denser feature maps by using an effectively larger receptive field. (This should typically be set to 1). first_stage_box_predictor_arg_scope: Slim arg_scope for conv2d, separable_conv2d and fully_connected ops for the RPN box predictor. first_stage_box_predictor_kernel_size: Kernel size to use for the convolution op just prior to RPN box predictions. first_stage_box_predictor_depth: Output depth for the convolution op just prior to RPN box predictions. first_stage_minibatch_size: The "batch size" to use for computing the objectness and location loss of the region proposal network. This "batch size" refers to the number of anchors selected as contributing to the loss function for any given image within the image batch and is only called "batch_size" due to terminology from the Faster R-CNN paper. first_stage_positive_balance_fraction: Fraction of positive examples per image for the RPN. The recommended value for Faster RCNN is 0.5. first_stage_nms_score_threshold: Score threshold for non max suppression for the Region Proposal Network (RPN). This value is expected to be in [0, 1] as it is applied directly after a softmax transformation. The recommended value for Faster R-CNN is 0. first_stage_nms_iou_threshold: The Intersection Over Union (IOU) threshold for performing Non-Max Suppression (NMS) on the boxes predicted by the Region Proposal Network (RPN). first_stage_max_proposals: Maximum number of boxes to retain after performing Non-Max Suppression (NMS) on the boxes predicted by the Region Proposal Network (RPN). first_stage_localization_loss_weight: A float first_stage_objectness_loss_weight: A float second_stage_rfcn_box_predictor: RFCN box predictor to use for second stage. second_stage_batch_size: The batch size used for computing the classification and refined location loss of the box classifier. This "batch size" refers to the number of proposals selected as contributing to the loss function for any given image within the image batch and is only called "batch_size" due to terminology from the Faster R-CNN paper. second_stage_balance_fraction: Fraction of positive examples to use per image for the box classifier. The recommended value for Faster RCNN is 0.25. 
second_stage_non_max_suppression_fn: batch_multiclass_non_max_suppression callable that takes `boxes`, `scores`, optional `clip_window` and optional (kwarg) `mask` inputs (with all other inputs already set) and returns a dictionary containing tensors with keys: `detection_boxes`, `detection_scores`, `detection_classes`, `num_detections`, and (optionally) `detection_masks`. See `post_processing.batch_multiclass_non_max_suppression` for the type and shape of these tensors. second_stage_score_conversion_fn: Callable elementwise nonlinearity (that takes tensors as inputs and returns tensors). This is usually used to convert logits to probabilities. second_stage_localization_loss_weight: A float second_stage_classification_loss_weight: A float hard_example_miner: A losses.HardExampleMiner object (can be None). parallel_iterations: (Optional) The number of iterations allowed to run in parallel for calls to tf.map_fn. Raises: ValueError: If `second_stage_batch_size` > `first_stage_max_proposals` ValueError: If first_stage_anchor_generator is not of type grid_anchor_generator.GridAnchorGenerator.'
def __init__(self, is_training, num_classes, image_resizer_fn, feature_extractor, first_stage_only, first_stage_anchor_generator, first_stage_atrous_rate, first_stage_box_predictor_arg_scope, first_stage_box_predictor_kernel_size, first_stage_box_predictor_depth, first_stage_minibatch_size, first_stage_positive_balance_fraction, first_stage_nms_score_threshold, first_stage_nms_iou_threshold, first_stage_max_proposals, first_stage_localization_loss_weight, first_stage_objectness_loss_weight, second_stage_rfcn_box_predictor, second_stage_batch_size, second_stage_balance_fraction, second_stage_non_max_suppression_fn, second_stage_score_conversion_fn, second_stage_localization_loss_weight, second_stage_classification_loss_weight, hard_example_miner, parallel_iterations=16):
    super(RFCNMetaArch, self).__init__(
        is_training,
        num_classes,
        image_resizer_fn,
        feature_extractor,
        first_stage_only,
        first_stage_anchor_generator,
        first_stage_atrous_rate,
        first_stage_box_predictor_arg_scope,
        first_stage_box_predictor_kernel_size,
        first_stage_box_predictor_depth,
        first_stage_minibatch_size,
        first_stage_positive_balance_fraction,
        first_stage_nms_score_threshold,
        first_stage_nms_iou_threshold,
        first_stage_max_proposals,
        first_stage_localization_loss_weight,
        first_stage_objectness_loss_weight,
        None, None, None, None,
        second_stage_batch_size,
        second_stage_balance_fraction,
        second_stage_non_max_suppression_fn,
        second_stage_score_conversion_fn,
        second_stage_localization_loss_weight,
        second_stage_classification_loss_weight,
        hard_example_miner,
        parallel_iterations)
    self._rfcn_box_predictor = second_stage_rfcn_box_predictor
'Predicts the output tensors from 2nd stage of FasterRCNN. Args: rpn_box_encodings: 4-D float tensor of shape [batch_size, num_valid_anchors, self._box_coder.code_size] containing predicted boxes. rpn_objectness_predictions_with_background: 2-D float tensor of shape [batch_size, num_valid_anchors, 2] containing class predictions (logits) for each of the anchors. Note that this tensor *includes* background class predictions (at class index 0). rpn_features: A 4-D float32 tensor with shape [batch_size, height, width, depth] representing image features from the RPN. anchors: 2-D float tensor of shape [num_anchors, self._box_coder.code_size]. image_shape: A 1D int32 tensors of size [4] containing the image shape. Returns: prediction_dict: a dictionary holding "raw" prediction tensors: 1) refined_box_encodings: a 3-D tensor with shape [total_num_proposals, num_classes, 4] representing predicted (final) refined box encodings, where total_num_proposals=batch_size*self._max_num_proposals 2) class_predictions_with_background: a 3-D tensor with shape [total_num_proposals, num_classes + 1] containing class predictions (logits) for each of the anchors, where total_num_proposals=batch_size*self._max_num_proposals. Note that this tensor *includes* background class predictions (at class index 0). 3) num_proposals: An int32 tensor of shape [batch_size] representing the number of proposals generated by the RPN. `num_proposals` allows us to keep track of which entries are to be treated as zero paddings and which are not since we always pad the number of proposals to be `self.max_num_proposals` for each image. 4) proposal_boxes: A float32 tensor of shape [batch_size, self.max_num_proposals, 4] representing decoded proposal bounding boxes (in absolute coordinates).'
def _predict_second_stage(self, rpn_box_encodings, rpn_objectness_predictions_with_background, rpn_features, anchors, image_shape):
    (proposal_boxes_normalized, _, num_proposals) = self._postprocess_rpn(
        rpn_box_encodings, rpn_objectness_predictions_with_background,
        anchors, image_shape)
    box_classifier_features = self._feature_extractor.extract_box_classifier_features(
        rpn_features, scope=self.second_stage_feature_extractor_scope)
    box_predictions = self._rfcn_box_predictor.predict(
        box_classifier_features,
        num_predictions_per_location=1,
        scope=self.second_stage_box_predictor_scope,
        proposal_boxes=proposal_boxes_normalized)
    refined_box_encodings = tf.squeeze(
        box_predictions[box_predictor.BOX_ENCODINGS], axis=1)
    class_predictions_with_background = tf.squeeze(
        box_predictions[box_predictor.CLASS_PREDICTIONS_WITH_BACKGROUND], axis=1)
    absolute_proposal_boxes = ops.normalized_to_image_coordinates(
        proposal_boxes_normalized, image_shape,
        parallel_iterations=self._parallel_iterations)
    prediction_dict = {
        'refined_box_encodings': refined_box_encodings,
        'class_predictions_with_background': class_predictions_with_background,
        'num_proposals': num_proposals,
        'proposal_boxes': absolute_proposal_boxes,
    }
    return prediction_dict
'Set up mock SSD model. Here we set up a simple mock SSD model that will always predict 4 detections that happen to always be exactly the anchors that are set up in the above MockAnchorGenerator. Because we let max_detections=5, we will also always end up with an extra padded row in the detection results.'
def setUp(self):
    is_training = False
    self._num_classes = 1
    mock_anchor_generator = MockAnchorGenerator2x2()
    mock_box_predictor = test_utils.MockBoxPredictor(is_training,
                                                     self._num_classes)
    mock_box_coder = test_utils.MockBoxCoder()
    fake_feature_extractor = FakeSSDFeatureExtractor()
    mock_matcher = test_utils.MockMatcher()
    region_similarity_calculator = sim_calc.IouSimilarity()

    def image_resizer_fn(image):
        return tf.identity(image)

    classification_loss = losses.WeightedSigmoidClassificationLoss(
        anchorwise_output=True)
    localization_loss = losses.WeightedSmoothL1LocalizationLoss(
        anchorwise_output=True)
    non_max_suppression_fn = functools.partial(
        post_processing.batch_multiclass_non_max_suppression,
        score_thresh=-20.0,
        iou_thresh=1.0,
        max_size_per_class=5,
        max_total_size=5)
    classification_loss_weight = 1.0
    localization_loss_weight = 1.0
    normalize_loss_by_num_matches = False
    hard_example_miner = losses.HardExampleMiner(num_hard_examples=None,
                                                 iou_threshold=1.0)
    self._num_anchors = 4
    self._code_size = 4
    self._model = ssd_meta_arch.SSDMetaArch(
        is_training, mock_anchor_generator, mock_box_predictor, mock_box_coder,
        fake_feature_extractor, mock_matcher, region_similarity_calculator,
        image_resizer_fn, non_max_suppression_fn, tf.identity,
        classification_loss, localization_loss, classification_loss_weight,
        localization_loss_weight, normalize_loss_by_num_matches,
        hard_example_miner)
'Support batch or single str.'
def encode(self, text, depth=0):
    if isinstance(text, str):
        text = [self.dict[char.lower()] for char in text]
        length = [len(text)]
    elif isinstance(text, collections.Iterable):
        length = [len(s) for s in text]
        text = ''.join(text)
        (text, _) = self.encode(text)
    if depth:
        return (text, len(text))
    return (torch.IntTensor(text), torch.IntTensor(length))
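A hedged usage sketch (this encode is assumed to live on a label-converter class built from an alphabet, as in crnn.pytorch's strLabelConverter; the class name is an assumption here, not taken from this snippet):

# Hypothetical usage sketch for CTC-style label encoding.
converter = strLabelConverter('abcdefghijklmnopqrstuvwxyz')
text, lengths = converter.encode(['hello', 'world'])
# `text` is a flat IntTensor of character indices for the whole batch and
# `lengths` holds each string's length, which is the shape CTC loss expects.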
'Immutable; update is not allowed. :param serializer: :return:'
def perform_update(self, serializer):
raise NotImplementedError
'Immutable; update is not allowed. :param serializer: :return:'
def perform_update(self, serializer):
raise NotImplementedError
'Immutable; update is not allowed. :param serializer: :return:'
def perform_update(self, serializer):
raise NotImplementedError
'Generate a Unique version value from the git information :return:'
@property def version(self):
    git_rev = len(os.popen('git rev-list HEAD').readlines())
    if git_rev != 0:
        self.version_list[-1] = '%d' % git_rev
    version = '.'.join(self.version_list)
    return version
'Get the current git branch :return:'
@property def branch(self):
return os.popen('git rev-parse --abbrev-ref HEAD').read().strip()
'Return the git hash for the current build :return:'
@property def hash(self):
return os.popen('git rev-parse HEAD').read().strip()
'Return the fetch url for the git origin :return:'
@property def origin(self):
    for item in os.popen('git remote -v'):
        split_item = item.strip().split()
        if split_item[0] == 'origin' and split_item[-1] == '(push)':
            return split_item[1]
'Add raw data into the search index. :param ndarray data: an ndarray with data points on the rows :param ndarray ids: an optional array of ids for each data point; defaults to the index of the data point if not provided :param int num_procs: an integer specifying the number of processes to use to compute codes for the data'
def add_data(self, data, ids=None, num_procs=1):
    codes = compute_codes_parallel(data, self.model, num_procs)
    self.add_codes(codes, ids)
'Given a query vector and result quota, retrieve as many cells as necessary to fill the quota. :param ndarray x: a query vector :param int quota: the desired number of items to retrieve :returns list retrieved: a list of index items :returns int visited: the number of multi-index cells visited'
def get_result_quota(self, x, quota=10):
    retrieved = []
    visited = 0
    for _, cell in multisequence(x, self.model.Cs):
        retrieved += self.get_cell(cell)
        visited += 1
        if len(retrieved) >= quota:
            break
    return (retrieved, visited)
'Given a query and a list of index items, compute the approximate distance of the query to each item and return a list of tuples that contain the distance and the item. Memoize subquantizer distances per coarse cluster to save work. :param ndarray x: a query vector :param list items: a list of items from the index :returns list: a list of items with distance'
def compute_distances(self, x, items):
    memoized_subquant_dists = [{}, {}]

    def get_subquantizer_distances(x, coarse):
        (d0, d1) = memoized_subquant_dists
        (c0, c1) = coarse
        if c0 not in d0:
            d0[c0] = self.model.get_subquantizer_distances(x, coarse, coarse_split=0)
        if c1 not in d1:
            d1[c1] = self.model.get_subquantizer_distances(x, coarse, coarse_split=1)
        return d0[c0] + d1[c1]

    results = []
    for item in items:
        codes = item[1]
        (coarse, fine) = codes
        subquantizer_distances = get_subquantizer_distances(x, coarse)
        dist = sum([subquantizer_distances[i][fc] for i, fc in enumerate(fine)])
        results.append((dist, item))
    return results
'Return euclidean distance ranked results, along with the number of cells traversed to fill the quota. :param ndarray x: a query vector :param int quota: the number of desired results to rank :param int limit: the number of desired results to return - defaults to quota :param bool with_dists: boolean indicating whether result items should be returned with their distance :returns list results: the list of ranked results :returns int visited: the number of cells visited in the query'
def search(self, x, quota=10, limit=None, with_dists=False):
    (retrieved, visited) = self.get_result_quota(x, quota)
    results = self.compute_distances(x, retrieved)
    results = sorted(results, key=lambda d: d[0])
    if limit is None:
        limit = quota
    results = results[:limit]
    if with_dists:
        Result = namedtuple('Result', ['id', 'code', 'dist'])
        results = map(lambda d: Result(d[1][0], d[1][1], d[0]), results)
    else:
        Result = namedtuple('Result', ['id', 'code'])
        results = map(lambda d: Result(d[1][0], d[1]), results)
    return (results, visited)
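For orientation, a hedged end-to-end sketch of how these searcher methods are typically driven (assumes the lopq package exposes LOPQModel and LOPQSearcher at the top level; the data is random and purely illustrative):

# Hypothetical end-to-end sketch: fit a model, index data, run a query.
import numpy as np
from lopq import LOPQModel, LOPQSearcher

data = np.random.randn(10000, 128)
model = LOPQModel(V=8, M=4)
model.fit(data)

searcher = LOPQSearcher(model)
searcher.add_data(data)

query = np.random.randn(128)
results, visited = searcher.search(query, quota=10, with_dists=True)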
'Add LOPQ codes into the search index. :param iterable codes: an iterable of LOPQ code tuples :param iterable ids: an optional iterable of ids for each code; defaults to the index of the code tuple if not provided'
def add_codes(self, codes, ids=None):
raise NotImplementedError()
'Retrieve a cell bucket from the index. :param tuple cell: a cell tuple :returns list: the list of index items in this cell bucket'
def get_cell(self, cell):
raise NotImplementedError()
'Create an LOPQSearcher instance that encapsulates retrieving and ranking with LOPQ. Requires an LOPQModel instance. This class uses a Python dict to implement the index. :param LOPQModel model: the model for indexing and ranking'
def __init__(self, model):
    self.model = model
    self.index = defaultdict(list)
'Add LOPQ codes into the search index. :param iterable codes: an iterable of LOPQ code tuples :param iterable ids: an optional iterable of ids for each code; defaults to the index of the code tuple if not provided'
def add_codes(self, codes, ids=None):
    if ids is None:
        ids = count()
    for item_id, code in zip(ids, codes):
        cell = code[0]
        self.index[cell].append((item_id, code))
'Retrieve a cell bucket from the index. :param tuple cell: a cell tuple :returns list: the list of index items in this cell bucket'
def get_cell(self, cell):
return self.index[cell]
'Create an LOPQSearcher instance that encapsulates retrieving and ranking with LOPQ. Requires an LOPQModel instance. This class uses an lmdb database to implement the index. :param LOPQModel model: the model for indexing and ranking :param str lmdb_path: path for the lmdb database; if it does not exist it is created :param callable id_lambda: a lambda function to reconstruct item ids from their string representation (computed by calling `bytes`) during retrieval'
def __init__(self, model, lmdb_path, id_lambda=int):
    import lmdb
    self.model = model
    self.lmdb_path = lmdb_path
    self.id_lambda = id_lambda
    self.env = lmdb.open(self.lmdb_path,
                         map_size=(1024 * 2000000 * 2),
                         writemap=False,
                         map_async=True,
                         max_dbs=1)
    self.index_db = self.env.open_db('index')
'Add LOPQ codes into the search index. :param iterable codes: an iterable of LOPQ code tuples :param iterable ids: an optional iterable of ids for each code; defaults to the index of the code tuple if not provided'
def add_codes(self, codes, ids=None):
    if ids is None:
        ids = count()
    with self.env.begin(db=self.index_db, write=True) as txn:
        for item_id, code in zip(ids, codes):
            key_prefix = self.encode_cell(code[0])
            key_suffix = bytes(item_id)
            key = key_prefix + key_suffix
            val = self.encode_fine_codes(code[1])
            txn.put(key, val)
    self.env.sync()
'Retrieve a cell bucket from the index. :param tuple cell: a cell tuple :returns list: the list of index items in this cell bucket'
def get_cell(self, cell):
    prefix = self.encode_cell(cell)
    items = []
    with self.env.begin(db=self.index_db) as txn:
        cursor = txn.cursor()
        cursor.set_range(prefix)
        for key, value in cursor:
            if not key.startswith(prefix):
                break
            else:
                item_id = self.id_lambda(key[4:])
                cell = self.decode_cell(key[:4])
                fine = self.decode_fine_codes(value)
                code = (cell, fine)
                items.append((item_id, code))
        cursor.close()
    return items
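A brief usage sketch of the LMDB-backed variant (assumes the class is exposed as lopq.search.LOPQSearcherLMDB, a fitted `model`, and `data`/`query` arrays as in the earlier sketch; the path is illustrative):

# Hypothetical sketch: persistent index backed by LMDB.
from lopq.search import LOPQSearcherLMDB

searcher = LOPQSearcherLMDB(model, '/tmp/lopq_index', id_lambda=int)
searcher.add_data(data, ids=range(len(data)))
results, visited = searcher.search(query, quota=10)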
'Create an LOPQModel instance that encapsulates a complete LOPQ model with parameters and hyperparameters. :param int V: the number of clusters per coarse split :param int M: the total number of subvectors (equivalent to the total number of subquantizers) :param int subquantizer_clusters: the number of clusters for each subquantizer :param tuple parameters: a tuple of parameters - missing parameters are allowed to be None; the tuple will look like the following ((C1, C2), (Rs1, Rs2), (mu1, mu2), (subquantizers1, subquantizers2)) where each element is itself a pair with one split of parameters for each of the coarse splits. The parameters have the following data types (V and M have the meaning described above, D is the total dimension of the data, and S is the number of subquantizer clusters): C: VxD/2 ndarray of coarse centroids R: VxD/2xD/2 ndarray of fitted rotation matrices for each coarse cluster mu: VxD/2 ndarray of mean residuals for each coarse cluster subquantizer: length M/2 list of SxD/M ndarrays of cluster centroids for each subvector'
def __init__(self, V=8, M=4, subquantizer_clusters=256, parameters=None):
    (self.Cs, self.Rs, self.mus, self.subquantizers) = (
        parameters if parameters is not None else (None, None, None, None))
    if self.Cs is not None:
        self.V = self.Cs[0].shape[0]
        self.num_coarse_splits = len(self.Cs)
    else:
        self.V = V
        self.num_coarse_splits = 2
    if self.subquantizers is not None:
        self.num_fine_splits = len(self.subquantizers[0])
        self.M = self.num_fine_splits * self.num_coarse_splits
        self.subquantizer_clusters = self.subquantizers[0][0].shape[0]
    else:
        self.num_fine_splits = M / 2
        self.M = M
        self.subquantizer_clusters = subquantizer_clusters
'Fit a model with the current model parameters. This method will use existing parameters and only train missing parameters. :param int kmeans_coarse_iters: the number of kmeans iterations for coarse quantizer training :param int kmeans_local_iters: the number of kmeans iterations for subquantizer training :param int n_init: the number of independent kmeans runs for all kmeans when training - set low for faster training :param float subquantizer_sample_ratio: the proportion of the training data to sample for training subquantizers - since the number of subquantizer clusters is much smaller than the number of coarse clusters, less data is needed :param int random_state: a random seed used in all random operations during training if provided :param bool verbose: a bool enabling verbose output during training'
def fit(self, data, kmeans_coarse_iters=10, kmeans_local_iters=20, n_init=10, subquantizer_sample_ratio=1.0, random_state=None, verbose=False):
    existing_parameters = (self.Cs, self.Rs, self.mus, self.subquantizers)
    parameters = train(data, self.V, self.M, self.subquantizer_clusters,
                       existing_parameters, kmeans_coarse_iters,
                       kmeans_local_iters, n_init, subquantizer_sample_ratio,
                       random_state, verbose)
    (self.Cs, self.Rs, self.mus, self.subquantizers) = parameters
'A helper to return parameters for a given coarse split. :params int split: the coarse split :returns ndarray: a matrix of centroids for the coarse model :returns list: a list of residual means for each cluster :returns list: a list of rotation matrices for each cluster :returns list: a list of centroid matrices for each subquantizer in this coarse split'
def get_split_parameters(self, split):
return ((self.Cs[split] if (self.Cs is not None) else None), (self.Rs[split] if (self.Rs is not None) else None), (self.mus[split] if (self.mus is not None) else None), (self.subquantizers[split] if (self.subquantizers is not None) else None))
'Compute both coarse and fine codes for a datapoint. :param ndarray x: the point to code :returns tuple: a tuple of coarse codes :returns tuple: a tuple of fine codes'
def predict(self, x):
    coarse_codes = self.predict_coarse(x)
    fine_codes = self.predict_fine(x, coarse_codes)
    return LOPQCode(coarse_codes, fine_codes)
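A small sketch of coding and reconstructing a vector with a fitted model (assumes `model` has been fit, `x` matches the training dimensionality, and numpy is imported as np):

# Hypothetical sketch: quantize a vector and measure the quantization error.
code = model.predict(x)          # LOPQCode with coarse and fine code tuples
x_hat = model.reconstruct(code)  # approximate reconstruction of x
err = np.linalg.norm(x - x_hat)  # residual quantization error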
'Compute the coarse codes for a datapoint. :param ndarray x: the point to code :returns tuple: a tuple of coarse codes'
def predict_coarse(self, x):
return tuple([predict_cluster(cx, self.Cs[split]) for (cx, split) in iterate_splits(x, self.num_coarse_splits)])
'Compute the fine codes for a datapoint. :param ndarray x: the point to code :param ndarray coarse_codes: the coarse codes for the point if they are already computed :returns tuple: a tuple of fine codes'
def predict_fine(self, x, coarse_codes=None):
    if coarse_codes is None:
        coarse_codes = self.predict_coarse(x)
    px = self.project(x, coarse_codes)
    fine_codes = []
    for cx, split in iterate_splits(px, self.num_coarse_splits):
        (_, _, _, subC) = self.get_split_parameters(split)
        fine_codes += [predict_cluster(fx, subC[sub_split])
                       for fx, sub_split in iterate_splits(cx, self.num_fine_splits)]
    return tuple(fine_codes)
'Project this vector to its local residual space defined by the coarse codes. :param ndarray x: the point to project :param ndarray coarse_codes: the coarse codes defining the local space :param int coarse_split: index of the coarse split to get distances for - if None then all splits are computed :returns ndarray: the projected vector'
def project(self, x, coarse_codes, coarse_split=None):
    px = []
    if coarse_split is None:
        split_iter = iterate_splits(x, self.num_coarse_splits)
    else:
        split_iter = [(np.split(x, self.num_coarse_splits)[coarse_split], coarse_split)]
    for cx, split in split_iter:
        (C, R, mu, _) = self.get_split_parameters(split)
        cluster = coarse_codes[split]
        r = cx - C[cluster]
        pr = np.dot(R[cluster], r - mu[cluster])
        px.append(pr)
    return np.concatenate(px)
'Given a code tuple, reconstruct an approximate vector. :param tuple codes: a code tuple as returned from the predict method :returns ndarray: a reconstructed vector'
def reconstruct(self, codes):
    (coarse_codes, fine_codes) = codes
    x = []
    for fc, split in iterate_splits(fine_codes, self.num_coarse_splits):
        (C, R, mu, subC) = self.get_split_parameters(split)
        sx = reduce(lambda acc, c: np.concatenate((acc, subC[c[0]][c[1]])),
                    enumerate(fc), [])
        cluster = coarse_codes[split]
        r = np.dot(R[cluster].transpose(), sx) + mu[cluster]
        x = np.concatenate((x, r + C[cluster]))
    return x
'Project a given query vector to the local space of the given coarse codes and compute the distances of each subvector to the corresponding subquantizer clusters. :param ndarray x: a query vector :param tuple coarse_codes: the coarse codes defining which local space to project to :param int coarse_split: index of the coarse split to get distances for - if None then all splits are computed :returns list: a list of distances to each subquantizer cluster for each subquantizer'
def get_subquantizer_distances(self, x, coarse_codes, coarse_split=None):
    px = self.project(x, coarse_codes)
    subquantizer_dists = []
    if coarse_split is None:
        split_iter = iterate_splits(px, self.num_coarse_splits)
    else:
        split_iter = [(np.split(px, self.num_coarse_splits)[coarse_split], coarse_split)]
    for cx, split in split_iter:
        (_, _, _, subC) = self.get_split_parameters(split)
        subquantizer_dists += [((fx - subC[sub_split]) ** 2).sum(axis=1)
                               for fx, sub_split in iterate_splits(cx, self.num_fine_splits)]
    return subquantizer_dists
'Export model parameters in .mat file format. Splits in the parameters (coarse splits and fine splits) are concatenated together in the resulting arrays. For example, the Cs parameters become a 2 x V x D array where the first dimension indexes the split. The subquantizer centroids are encoded similarly as a 2 x (M/2) x 256 x (D/M) array.'
def export_mat(self, filename):
    from scipy.io import savemat
    from .utils import concat_new_first

    Cs = concat_new_first(self.Cs)
    Rs = concat_new_first(self.Rs)
    mus = concat_new_first(self.mus)
    subs = concat_new_first(map(concat_new_first, self.subquantizers))
    savemat(filename, {'Cs': Cs, 'Rs': Rs, 'mus': mus, 'subs': subs,
                       'V': self.V, 'M': self.M})
'Reconstitute an LOPQModel in the format exported by the `export_mat` method above.'
@staticmethod def load_mat(filename):
    from scipy.io import loadmat

    d = loadmat(filename)
    M = d['M'][0][0]
    Cs = tuple(map(np.squeeze, np.split(d['Cs'], 2, axis=0)))
    Rs = tuple(map(np.squeeze, np.split(d['Rs'], 2, axis=0)))
    mus = tuple(map(np.squeeze, np.split(d['mus'], 2, axis=0)))
    subs = tuple([map(np.squeeze, np.split(half, M / 2, axis=0))
                  for half in map(np.squeeze, np.split(d['subs'], 2, axis=0))])
    return LOPQModel(parameters=(Cs, Rs, mus, subs))
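A quick round-trip sketch of the two .mat helpers above (assumes a fitted model; the filename is illustrative):

# Hypothetical round-trip sketch.
model.export_mat('lopq_params.mat')
restored = LOPQModel.load_mat('lopq_params.mat')
assert restored.V == model.V and restored.M == model.M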
'Export model parameters in protobuf format.'
def export_proto(self, f):
    from .lopq_model_pb2 import LOPQModelParams
    from itertools import chain

    lopq_params = LOPQModelParams()
    lopq_params.D = 2 * self.Cs[0].shape[1]
    lopq_params.V = self.V
    lopq_params.M = self.M
    lopq_params.num_subquantizers = self.subquantizer_clusters

    def matrix_from_ndarray(m, a):
        m.values.extend(map(float, np.nditer(a, order='C')))
        m.shape.extend(a.shape)
        return m

    def vector_from_ndarray(m, a):
        m.values.extend(map(float, np.nditer(a, order='C')))
        return m

    if self.Cs is not None:
        for C in self.Cs:
            matrix_from_ndarray(lopq_params.Cs.add(), C)
    if self.Rs is not None:
        for R in chain(*self.Rs):
            matrix_from_ndarray(lopq_params.Rs.add(), R)
    if self.mus is not None:
        for mu in chain(*self.mus):
            vector_from_ndarray(lopq_params.mus.add(), mu)
    if self.subquantizers is not None:
        for sub in chain(*self.subquantizers):
            matrix_from_ndarray(lopq_params.subs.add(), sub)

    if type(f) is str:
        f = open(f, 'wb')
    f.write(lopq_params.SerializeToString())
    f.close()
'Reconstitute a model from parameters stored in protobuf format.'
@staticmethod def load_proto(filename):
    from .lopq_model_pb2 import LOPQModelParams
    from .utils import concat_new_first

    def halves(arr):
        return [arr[:len(arr) / 2], arr[len(arr) / 2:]]

    lopq_params = LOPQModelParams()
    try:
        f = open(filename)
        lopq_params.ParseFromString(f.read())
        f.close()

        Cs = Rs = mus = subs = None
        if len(lopq_params.Cs) != 0:
            Cs = [np.reshape(C.values, C.shape) for C in lopq_params.Cs]
        if len(lopq_params.Rs) != 0:
            Rs = map(concat_new_first,
                     halves([np.reshape(R.values, R.shape) for R in lopq_params.Rs]))
        if len(lopq_params.mus) != 0:
            mus = map(concat_new_first,
                      halves([np.array(mu.values) for mu in lopq_params.mus]))
        if len(lopq_params.subs) != 0:
            subs = halves([np.reshape(sub.values, sub.shape)
                           for sub in lopq_params.subs])
        return LOPQModel(parameters=(Cs, Rs, mus, subs))
    except IOError:
        print(filename + ': Could not open file.')
        return None
'Create the model. Args: source_vocab_size: size of the source vocabulary. target_vocab_size: size of the target vocabulary. buckets: a list of pairs (I, O), where I specifies maximum input length that will be processed in that bucket, and O specifies maximum output length. Training instances that have inputs longer than I or outputs longer than O will be pushed to the next bucket and padded accordingly. We assume that the list is sorted, e.g., [(2, 4), (8, 16)]. size: number of units in each layer of the model. num_layers: number of layers in the model. max_gradient_norm: gradients will be clipped to maximally this norm. batch_size: the size of the batches used during training; the model construction is independent of batch_size, so it can be changed after initialization if this is convenient, e.g., for decoding. learning_rate: learning rate to start with. learning_rate_decay_factor: decay learning rate by this much when needed. use_lstm: if true, we use LSTM cells instead of GRU cells. num_samples: number of samples for sampled softmax. forward_only: if set, we do not construct the backward pass in the model.'
def __init__(self, source_vocab_size, target_vocab_size, buckets, size, num_layers, max_gradient_norm, batch_size, learning_rate, learning_rate_decay_factor, use_lstm=False, num_samples=512, forward_only=False):
    self.source_vocab_size = source_vocab_size
    self.target_vocab_size = target_vocab_size
    self.buckets = buckets
    self.batch_size = batch_size
    self.learning_rate = tf.Variable(float(learning_rate), trainable=False)
    self.learning_rate_decay_op = self.learning_rate.assign(
        self.learning_rate * learning_rate_decay_factor)
    self.global_step = tf.Variable(0, trainable=False)

    # If we use sampled softmax, we need an output projection.
    output_projection = None
    softmax_loss_function = None
    if num_samples > 0 and num_samples < self.target_vocab_size:
        w = tf.get_variable('proj_w', [size, self.target_vocab_size])
        w_t = tf.transpose(w)
        b = tf.get_variable('proj_b', [self.target_vocab_size])
        output_projection = (w, b)

        def sampled_loss(inputs, labels):
            labels = tf.reshape(labels, [-1, 1])
            return tf.nn.sampled_softmax_loss(w_t, b, inputs, labels,
                                              num_samples, self.target_vocab_size)
        softmax_loss_function = sampled_loss

    # Create the internal (possibly multi-layer) cell for the RNN.
    single_cell = tf.nn.rnn_cell.GRUCell(size)
    if use_lstm:
        single_cell = tf.nn.rnn_cell.BasicLSTMCell(size)
    cell = single_cell
    if num_layers > 1:
        cell = tf.nn.rnn_cell.MultiRNNCell([single_cell] * num_layers)

    # The seq2seq function: embedding inputs with attention.
    def seq2seq_f(encoder_inputs, decoder_inputs, do_decode):
        return tf.nn.seq2seq.embedding_attention_seq2seq(
            encoder_inputs, decoder_inputs, cell,
            num_encoder_symbols=source_vocab_size,
            num_decoder_symbols=target_vocab_size,
            embedding_size=size,
            output_projection=output_projection,
            feed_previous=do_decode)

    # Feeds for inputs.
    self.encoder_inputs = []
    self.decoder_inputs = []
    self.target_weights = []
    for i in xrange(buckets[-1][0]):
        self.encoder_inputs.append(
            tf.placeholder(tf.int32, shape=[None], name='encoder{0}'.format(i)))
    for i in xrange(buckets[-1][1] + 1):
        self.decoder_inputs.append(
            tf.placeholder(tf.int32, shape=[None], name='decoder{0}'.format(i)))
        self.target_weights.append(
            tf.placeholder(tf.float32, shape=[None], name='weight{0}'.format(i)))

    # Targets are decoder inputs shifted by one.
    targets = [self.decoder_inputs[i + 1]
               for i in xrange(len(self.decoder_inputs) - 1)]

    # Training outputs and losses.
    if forward_only:
        (self.outputs, self.losses) = tf.nn.seq2seq.model_with_buckets(
            self.encoder_inputs, self.decoder_inputs, targets,
            self.target_weights, buckets,
            lambda x, y: seq2seq_f(x, y, True),
            softmax_loss_function=softmax_loss_function)
        # If an output projection is used, project outputs for decoding.
        if output_projection is not None:
            for b in xrange(len(buckets)):
                self.outputs[b] = [
                    tf.matmul(output, output_projection[0]) + output_projection[1]
                    for output in self.outputs[b]]
    else:
        (self.outputs, self.losses) = tf.nn.seq2seq.model_with_buckets(
            self.encoder_inputs, self.decoder_inputs, targets,
            self.target_weights, buckets,
            lambda x, y: seq2seq_f(x, y, False),
            softmax_loss_function=softmax_loss_function)

    # Gradients and SGD update operation for training the model.
    params = tf.trainable_variables()
    if not forward_only:
        self.gradient_norms = []
        self.updates = []
        opt = tf.train.GradientDescentOptimizer(self.learning_rate)
        for b in xrange(len(buckets)):
            gradients = tf.gradients(self.losses[b], params)
            (clipped_gradients, norm) = tf.clip_by_global_norm(
                gradients, max_gradient_norm)
            self.gradient_norms.append(norm)
            self.updates.append(opt.apply_gradients(
                zip(clipped_gradients, params), global_step=self.global_step))

    self.saver = tf.train.Saver(tf.all_variables())
'Run a step of the model feeding the given inputs. Args: session: tensorflow session to use. encoder_inputs: list of numpy int vectors to feed as encoder inputs. decoder_inputs: list of numpy int vectors to feed as decoder inputs. target_weights: list of numpy float vectors to feed as target weights. bucket_id: which bucket of the model to use. forward_only: whether to do the backward step or only forward. Returns: A triple consisting of gradient norm (or None if we did not do backward), average perplexity, and the outputs. Raises: ValueError: if length of encoder_inputs, decoder_inputs, or target_weights disagrees with bucket size for the specified bucket_id.'
def step(self, session, encoder_inputs, decoder_inputs, target_weights, bucket_id, forward_only):
    (encoder_size, decoder_size) = self.buckets[bucket_id]
    if len(encoder_inputs) != encoder_size:
        raise ValueError('Encoder length must be equal to the one in bucket, %d != %d.'
                         % (len(encoder_inputs), encoder_size))
    if len(decoder_inputs) != decoder_size:
        raise ValueError('Decoder length must be equal to the one in bucket, %d != %d.'
                         % (len(decoder_inputs), decoder_size))
    if len(target_weights) != decoder_size:
        raise ValueError('Weights length must be equal to the one in bucket, %d != %d.'
                         % (len(target_weights), decoder_size))

    input_feed = {}
    for l in xrange(encoder_size):
        input_feed[self.encoder_inputs[l].name] = encoder_inputs[l]
    for l in xrange(decoder_size):
        input_feed[self.decoder_inputs[l].name] = decoder_inputs[l]
        input_feed[self.target_weights[l].name] = target_weights[l]
    last_target = self.decoder_inputs[decoder_size].name
    input_feed[last_target] = np.zeros([self.batch_size], dtype=np.int32)

    if not forward_only:
        output_feed = [self.updates[bucket_id],
                       self.gradient_norms[bucket_id],
                       self.losses[bucket_id]]
    else:
        output_feed = [self.losses[bucket_id]]
        for l in xrange(decoder_size):
            output_feed.append(self.outputs[bucket_id][l])

    outputs = session.run(output_feed, input_feed)
    if not forward_only:
        return (outputs[1], outputs[2], None)
    else:
        return (None, outputs[0], outputs[1:])
'Get a random batch of data from the specified bucket, prepare for step. To feed data in step(..) it must be a list of batch-major vectors, while data here contains single length-major cases. So the main logic of this function is to re-index data cases to be in the proper format for feeding. Args: data: a tuple of size len(self.buckets) in which each element contains lists of pairs of input and output data that we use to create a batch. bucket_id: integer, which bucket to get the batch for. Returns: The triple (encoder_inputs, decoder_inputs, target_weights) for the constructed batch that has the proper format to call step(...) later.'
def get_batch(self, data, bucket_id):
    (encoder_size, decoder_size) = self.buckets[bucket_id]
    (encoder_inputs, decoder_inputs) = ([], [])

    for _ in xrange(self.batch_size):
        (encoder_input, decoder_input) = random.choice(data[bucket_id])
        encoder_pad = [data_utils.PAD_ID] * (encoder_size - len(encoder_input))
        encoder_inputs.append(list(reversed(encoder_input + encoder_pad)))
        decoder_pad_size = decoder_size - len(decoder_input) - 1
        decoder_inputs.append([data_utils.GO_ID] + decoder_input +
                              [data_utils.PAD_ID] * decoder_pad_size)

    (batch_encoder_inputs, batch_decoder_inputs, batch_weights) = ([], [], [])
    for length_idx in xrange(encoder_size):
        batch_encoder_inputs.append(
            np.array([encoder_inputs[batch_idx][length_idx]
                      for batch_idx in xrange(self.batch_size)], dtype=np.int32))
    for length_idx in xrange(decoder_size):
        batch_decoder_inputs.append(
            np.array([decoder_inputs[batch_idx][length_idx]
                      for batch_idx in xrange(self.batch_size)], dtype=np.int32))
        batch_weight = np.ones(self.batch_size, dtype=np.float32)
        for batch_idx in xrange(self.batch_size):
            if length_idx < decoder_size - 1:
                target = decoder_inputs[batch_idx][length_idx + 1]
            if length_idx == decoder_size - 1 or target == data_utils.PAD_ID:
                batch_weight[batch_idx] = 0.0
        batch_weights.append(batch_weight)
    return (batch_encoder_inputs, batch_decoder_inputs, batch_weights)
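A minimal training-step sketch tying get_batch and step together (assumes a constructed model, an open tf.Session `sess`, bucketed training data `train_set`, and a chosen `bucket_id`; all names are illustrative):

# Hypothetical training-step sketch.
encoder_inputs, decoder_inputs, target_weights = model.get_batch(train_set, bucket_id)
gradient_norm, loss, _ = model.step(sess, encoder_inputs, decoder_inputs,
                                    target_weights, bucket_id, forward_only=False)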
'Close the cursor. No further queries will be possible.'
def close(self):
    if not self.connection:
        return
    while self.nextset():
        pass
    self.connection = None
'Advance to the next result set. Returns None if there are no more result sets.'
def nextset(self):
    if self._executed:
        self.fetchall()
    del self.messages[:]

    db = self._get_db()
    nr = db.next_result()
    if nr == -1:
        return None
    self._do_get_result()
    self._post_get_result()
    self._warning_check()
    return 1
'Execute a query. query -- string, query to execute on server args -- optional sequence or mapping, parameters to use with query. Note: If args is a sequence, then %s must be used as the parameter placeholder in the query. If a mapping is used, %(key)s must be used as the placeholder. Returns long integer rows affected, if any'
def execute(self, query, args=None):
    del self.messages[:]
    db = self._get_db()
    if isinstance(query, unicode):
        query = query.encode(db.unicode_literal.charset)
    if args is not None:
        if isinstance(args, dict):
            query = query % dict((key, db.literal(item))
                                 for (key, item) in args.iteritems())
        else:
            query = query % tuple([db.literal(item) for item in args])
    try:
        r = None
        r = self._query(query)
    except TypeError as m:
        if m.args[0] in ('not enough arguments for format string',
                         'not all arguments converted'):
            self.messages.append((ProgrammingError, m.args[0]))
            self.errorhandler(self, ProgrammingError, m.args[0])
        else:
            self.messages.append((TypeError, m))
            self.errorhandler(self, TypeError, m)
    except (SystemExit, KeyboardInterrupt):
        raise
    except:
        (exc, value, tb) = sys.exc_info()
        del tb
        self.messages.append((exc, value))
        self.errorhandler(self, exc, value)
    self._executed = query
    if not self._defer_warnings:
        self._warning_check()
    return r
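A short usage sketch of the two placeholder styles this docstring describes (cursor, table, and column names are illustrative):

# Hypothetical usage sketch: positional and named placeholders.
cur.execute("SELECT * FROM users WHERE id = %s AND status = %s", (42, 'active'))
cur.execute("SELECT * FROM users WHERE id = %(uid)s", {'uid': 42})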
'Execute a multi-row query. query -- string, query to execute on server args Sequence of sequences or mappings, parameters to use with query. Returns long integer rows affected, if any. This method improves performance on multiple-row INSERT and REPLACE. Otherwise it is equivalent to looping over args with execute().'
def executemany(self, query, args):
    del self.messages[:]
    db = self._get_db()
    if not args:
        return
    if isinstance(query, unicode):
        query = query.encode(db.unicode_literal.charset)
    m = insert_values.search(query)
    if not m:
        r = 0
        for a in args:
            r = r + self.execute(query, a)
        return r
    p = m.start(1)
    e = m.end(1)
    qv = m.group(1)
    try:
        q = []
        for a in args:
            if isinstance(a, dict):
                q.append(qv % dict((key, db.literal(item))
                                   for (key, item) in a.iteritems()))
            else:
                q.append(qv % tuple([db.literal(item) for item in a]))
    except TypeError as msg:
        if msg.args[0] in ('not enough arguments for format string',
                           'not all arguments converted'):
            self.errorhandler(self, ProgrammingError, msg.args[0])
        else:
            self.errorhandler(self, TypeError, msg)
    except (SystemExit, KeyboardInterrupt):
        raise
    except:
        (exc, value, tb) = sys.exc_info()
        del tb
        self.errorhandler(self, exc, value)
    r = self._query('\n'.join([query[:p], ',\n'.join(q), query[e:]]))
    if not self._defer_warnings:
        self._warning_check()
    return r
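A hedged sketch of the multi-row form this method optimizes (table and column names are illustrative):

# Hypothetical usage sketch: multi-row INSERT through executemany.
cur.executemany("INSERT INTO users (name, age) VALUES (%s, %s)",
                [('alice', 30), ('bob', 25), ('carol', 41)])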
'Execute stored procedure procname with args procname -- string, name of procedure to execute on server args -- Sequence of parameters to use with procedure Returns the original args. Compatibility warning: PEP-249 specifies that any modified parameters must be returned. This is currently impossible as they are only available by storing them in a server variable and then retrieved by a query. Since stored procedures return zero or more result sets, there is no reliable way to get at OUT or INOUT parameters via callproc. The server variables are named @_procname_n, where procname is the parameter above and n is the position of the parameter (from zero). Once all result sets generated by the procedure have been fetched, you can issue a SELECT @_procname_0, ... query using .execute() to get any OUT or INOUT values. Compatibility warning: The act of calling a stored procedure itself creates an empty result set. This appears after any result sets generated by the procedure. This is non-standard behavior with respect to the DB-API. Be sure to use nextset() to advance through all result sets; otherwise you may get disconnected.'
def callproc(self, procname, args=()):
    db = self._get_db()
    for index, arg in enumerate(args):
        q = 'SET @_%s_%d=%s' % (procname, index, db.literal(arg))
        if isinstance(q, unicode):
            q = q.encode(db.unicode_literal.charset)
        self._query(q)
        self.nextset()
    q = 'CALL %s(%s)' % (procname,
                         ','.join(['@_%s_%d' % (procname, i)
                                   for i in range(len(args))]))
    if type(q) is UnicodeType:
        q = q.encode(db.unicode_literal.charset)
    self._query(q)
    self._executed = q
    if not self._defer_warnings:
        self._warning_check()
    return args
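A hedged sketch of fetching OUT/INOUT values the way the callproc docstring describes, via the @_procname_n server variables (procedure name and argument are illustrative):

# Hypothetical sketch: call a procedure, drain its result sets, then read
# the server variable that holds parameter 0.
cur.callproc('my_proc', (10,))
while cur.nextset() is not None:
    pass
cur.execute('SELECT @_my_proc_0')
(out_value,) = cur.fetchone()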
'Fetches a single row from the cursor. None indicates that no more rows are available.'
def fetchone(self):
    self._check_executed()
    if self.rownumber >= len(self._rows):
        return None
    result = self._rows[self.rownumber]
    self.rownumber = self.rownumber + 1
    return result
'Fetch up to size rows from the cursor. Result set may be smaller than size. If size is not defined, cursor.arraysize is used.'
def fetchmany(self, size=None):
    self._check_executed()
    end = self.rownumber + (size or self.arraysize)
    result = self._rows[self.rownumber:end]
    self.rownumber = min(end, len(self._rows))
    return result
'Fetches all available rows from the cursor.'
def fetchall(self):
    self._check_executed()
    if self.rownumber:
        result = self._rows[self.rownumber:]
    else:
        result = self._rows
    self.rownumber = len(self._rows)
    return result
'Scroll the cursor in the result set to a new position according to mode. If mode is \'relative\' (default), value is taken as offset to the current position in the result set, if set to \'absolute\', value states an absolute target position.'
def scroll(self, value, mode='relative'):
    self._check_executed()
    if mode == 'relative':
        r = self.rownumber + value
    elif mode == 'absolute':
        r = value
    else:
        self.errorhandler(self, ProgrammingError,
                          'unknown scroll mode %s' % repr(mode))
    if r < 0 or r >= len(self._rows):
        self.errorhandler(self, IndexError, 'out of range')
    self.rownumber = r
'Fetches a single row from the cursor.'
def fetchone(self):
    self._check_executed()
    r = self._fetch_row(1)
    if not r:
        self._warning_check()
        return None
    self.rownumber = self.rownumber + 1
    return r[0]
'Fetch up to size rows from the cursor. Result set may be smaller than size. If size is not defined, cursor.arraysize is used.'
def fetchmany(self, size=None):
    self._check_executed()
    r = self._fetch_row(size or self.arraysize)
    self.rownumber = self.rownumber + len(r)
    if not r:
        self._warning_check()
    return r
'Fetches all available rows from the cursor.'
def fetchall(self):
    self._check_executed()
    r = self._fetch_row(0)
    self.rownumber = self.rownumber + len(r)
    self._warning_check()
    return r
'Fetch a single row as a dictionary. Deprecated: Use fetchone() instead. Will be removed in 1.3.'
def fetchoneDict(self):
    from warnings import warn
    warn('fetchoneDict() is non-standard and will be removed in 1.3',
         DeprecationWarning, 2)
    return self.fetchone()
'Fetch several rows as a list of dictionaries. Deprecated: Use fetchmany() instead. Will be removed in 1.3.'
def fetchmanyDict(self, size=None):
    from warnings import warn
    warn('fetchmanyDict() is non-standard and will be removed in 1.3',
         DeprecationWarning, 2)
    return self.fetchmany(size)
'Fetch all available rows as a list of dictionaries. Deprecated: Use fetchall() instead. Will be removed in 1.3.'
def fetchallDict(self):
    from warnings import warn
    warn('fetchallDict() is non-standard and will be removed in 1.3',
         DeprecationWarning, 2)
    return self.fetchall()
'Create a connection to the database. It is strongly recommended that you only use keyword parameters. Consult the MySQL C API documentation for more information. host string, host to connect user string, user to connect as passwd string, password to use db string, database to use port integer, TCP/IP port to connect to unix_socket string, location of unix_socket to use conv conversion dictionary, see MySQLdb.converters connect_timeout number of seconds to wait before the connection attempt fails. compress if set, compression is enabled named_pipe if set, a named pipe is used to connect (Windows only) init_command command which is run once the connection is created read_default_file file from which default client values are read read_default_group configuration group to use from the default file cursorclass class object, used to create cursors (keyword only) use_unicode If True, text-like columns are returned as unicode objects using the connection\'s character set. Otherwise, text-like columns are returned as normal strings. Unicode objects will always be encoded to the connection\'s character set regardless of this setting. charset If supplied, the connection character set will be changed to this character set (MySQL-4.1 and newer). This implies use_unicode=True. sql_mode If supplied, the session SQL mode will be changed to this setting (MySQL-4.1 and newer). For more details and legal values, see the MySQL documentation. client_flag integer, flags to use or 0 (see MySQL docs or constants/CLIENTS.py) ssl dictionary or mapping, contains SSL connection parameters; see the MySQL documentation for more details (mysql_ssl_set()). If this is set, and the client does not support SSL, NotSupportedError will be raised. local_infile integer, non-zero enables LOAD LOCAL INFILE; zero disables autocommit If False (default), autocommit is disabled. If True, autocommit is enabled. If None, autocommit isn\'t set and server default is used. There are a number of undocumented, non-standard methods. See the documentation for the MySQL C API for some hints on what they do.'
def __init__(self, *args, **kwargs):
    from MySQLdb.constants import CLIENT, FIELD_TYPE
    from MySQLdb.converters import conversions
    from weakref import proxy

    # Copy the conversion dictionary so per-connection changes do not leak.
    kwargs2 = kwargs.copy()
    if 'conv' in kwargs:
        conv = kwargs['conv']
    else:
        conv = conversions
    conv2 = {}
    for k, v in conv.items():
        if isinstance(k, int) and isinstance(v, list):
            conv2[k] = v[:]
        else:
            conv2[k] = v
    kwargs2['conv'] = conv2

    cursorclass = kwargs2.pop('cursorclass', self.default_cursor)
    charset = kwargs2.pop('charset', '')
    if charset:
        use_unicode = True
    else:
        use_unicode = False
    use_unicode = kwargs2.pop('use_unicode', use_unicode)
    sql_mode = kwargs2.pop('sql_mode', '')

    client_flag = kwargs.get('client_flag', 0)
    client_version = tuple(
        [numeric_part(n) for n in _mysql.get_client_info().split('.')[:2]])
    if client_version >= (4, 1):
        client_flag |= CLIENT.MULTI_STATEMENTS
    if client_version >= (5, 0):
        client_flag |= CLIENT.MULTI_RESULTS
    kwargs2['client_flag'] = client_flag

    autocommit = kwargs2.pop('autocommit', False)

    super(Connection, self).__init__(*args, **kwargs2)
    self.cursorclass = cursorclass
    self.encoders = dict([(k, v) for (k, v) in conv.items()
                          if type(k) is not int])
    self._server_version = tuple(
        [numeric_part(n) for n in self.get_server_info().split('.')[:2]])

    db = proxy(self)

    def _get_string_literal():
        def string_literal(obj, dummy=None):
            return db.string_literal(obj)
        return string_literal

    def _get_unicode_literal():
        def unicode_literal(u, dummy=None):
            return db.literal(u.encode(unicode_literal.charset))
        return unicode_literal

    def _get_string_decoder():
        def string_decoder(s):
            return s.decode(string_decoder.charset)
        return string_decoder

    string_literal = _get_string_literal()
    self.unicode_literal = unicode_literal = _get_unicode_literal()
    self.string_decoder = string_decoder = _get_string_decoder()
    if not charset:
        charset = self.character_set_name()
    self.set_character_set(charset)

    if sql_mode:
        self.set_sql_mode(sql_mode)

    if use_unicode:
        self.converter[FIELD_TYPE.STRING].append((None, string_decoder))
        self.converter[FIELD_TYPE.VAR_STRING].append((None, string_decoder))
        self.converter[FIELD_TYPE.VARCHAR].append((None, string_decoder))
        self.converter[FIELD_TYPE.BLOB].append((None, string_decoder))

    self.encoders[types.StringType] = string_literal
    self.encoders[types.UnicodeType] = unicode_literal
    self._transactional = self.server_capabilities & CLIENT.TRANSACTIONS
    if self._transactional:
        if autocommit is not None:
            self.autocommit(autocommit)
    self.messages = []
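A minimal connection sketch using the keyword parameters documented above (host, credentials, and database name are illustrative; assumes a reachable MySQL server):

# Hypothetical usage sketch.
import MySQLdb

conn = MySQLdb.connect(host='localhost', user='me', passwd='secret',
                       db='test', charset='utf8', use_unicode=True)
cur = conn.cursor()
cur.execute('SELECT VERSION()')
print(cur.fetchone())
conn.close()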
'Create a cursor on which queries may be performed. The optional cursorclass parameter is used to create the Cursor. By default, self.cursorclass=cursors.Cursor is used.'
def cursor(self, cursorclass=None):
return (cursorclass or self.cursorclass)(self)
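A sketch of the per-call cursorclass override described above, reusing the hypothetical conn from the earlier example; the query is illustrative:

import MySQLdb.cursors

cur = conn.cursor()                                   # uses conn.cursorclass
dict_cur = conn.cursor(MySQLdb.cursors.DictCursor)    # one-off override
dict_cur.execute('SELECT VERSION() AS version')
print(dict_cur.fetchone())                            # e.g. {'version': '...'}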
'If o is a single object, returns an SQL literal as a string. If o is a non-string sequence, the items of the sequence are converted and returned as a sequence. Non-standard. For internal use; do not use this in your applications.'
def literal(self, o):
return self.escape(o, self.encoders)
'Explicitly begin a connection. Non-standard. DEPRECATED: Will be removed in 1.3. Use an SQL BEGIN statement instead.'
def begin(self):
from warnings import warn warn('begin() is non-standard and will be removed in 1.3', DeprecationWarning, 2) self.query('BEGIN')
'Set the connection character set to charset. The character set can only be changed in MySQL-4.1 and newer. If you try to change the character set from the current value in an older version, NotSupportedError will be raised.'
def set_character_set(self, charset):
if (charset == 'utf8mb4'): py_charset = 'utf8' else: py_charset = charset if (self.character_set_name() != charset): try: super(Connection, self).set_character_set(charset) except AttributeError: if (self._server_version < (4, 1)): raise NotSupportedError('server is too old to set charset') self.query(('SET NAMES %s' % charset)) self.store_result() self.string_decoder.charset = py_charset self.unicode_literal.charset = py_charset
'Set the connection sql_mode. See MySQL documentation for legal values.'
def set_sql_mode(self, sql_mode):
if (self._server_version < (4, 1)): raise NotSupportedError('server is too old to set sql_mode') self.query(("SET SESSION sql_mode='%s'" % sql_mode)) self.store_result()
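A short sketch of both setters, assuming a MySQL-4.1+ server and the hypothetical conn from the earlier example; the mode value is illustrative:

conn.set_character_set('utf8')               # no-op if the charset is already utf8
conn.set_sql_mode('STRICT_TRANS_TABLES')     # raises NotSupportedError on pre-4.1 servers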
'Return detailed information about warnings as a sequence of tuples of (Level, Code, Message). This is only supported in MySQL-4.1 and up. If your server is an earlier version, an empty sequence is returned.'
def show_warnings(self):
if (self._server_version < (4, 1)): return () self.query('SHOW WARNINGS') r = self.store_result() warnings = r.fetch_row(0) return warnings
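A sketch of inspecting warnings after a statement that truncates data, again using the hypothetical conn; the table name is illustrative, and in strict SQL modes the INSERT would raise an error instead of warning:

cur = conn.cursor()
cur.execute('CREATE TEMPORARY TABLE warn_demo (c CHAR(1))')
cur.execute("INSERT INTO warn_demo VALUES ('too long')")   # value gets truncated
for level, code, message in conn.show_warnings():
    print(level, code, message)    # e.g. a 'Data truncated' warning tuple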
'Should have a NULL constant.'
def test_NULL(self):
self.assertEqual(_mysql.NULL, 'NULL')
'Version information sanity.'
def test_version(self):
self.assertTrue(isinstance(_mysql.__version__, str)) self.assertTrue(isinstance(_mysql.version_info, tuple)) self.assertEqual(len(_mysql.version_info), 5)
'Should create a procedure called deleteme that returns two result sets, first the number of rows in booze then "name from booze"'
def help_nextset_setUp(self, cur):
sql = ('\n create procedure deleteme()\n begin\n select count(*) from %(tp)sbooze;\n select name from %(tp)sbooze;\n end\n ' % dict(tp=self.table_prefix)) cur.execute(sql)
'If cleaning up is needed after nextSetTest'
def help_nextset_tearDown(self, cur):
cur.execute('drop procedure deleteme')
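A sketch of how the two result sets produced by deleteme() would be consumed with nextset(); cur is a cursor on the test connection and the table prefix is assumed to be empty here:

cur.callproc('deleteme')
(count,) = cur.fetchone()      # first set: count(*) from booze
cur.nextset()
names = cur.fetchall()         # second set: one row per name in booze
cur.nextset()                  # skip the procedure's trailing status result, if any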
'Create a table using a list of column definitions given in columndefs. generator must be a function taking arguments (row_number, col_number) returning a suitable data object for insertion into the table.'
def create_table(self, columndefs):
self.table = self.new_table_name() self.cursor.execute(('CREATE TABLE %s (%s) %s' % (self.table, ',\n'.join(columndefs), self.create_table_extra)))
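A hypothetical call site inside a test method; the column definitions are illustrative only:

def test_create_example(self):
    # Builds: CREATE TABLE <name> (id INT NOT NULL,\nname VARCHAR(20)) <extra>
    self.create_table([
        'id INT NOT NULL',
        'name VARCHAR(20)',
    ])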
'self.drivers should override this method to perform required setup if any is necessary, such as creating the database.'
def setUp(self):
pass
'self.drivers should override this method to perform required cleanup if any is necessary, such as deleting the test database. The default drops the tables that may be created.'
def tearDown(self):
con = self._connect() try: cur = con.cursor() for ddl in (self.xddl1, self.xddl2): try: cur.execute(ddl) con.commit() except self.driver.Error: pass finally: con.close()
'Return a list of sql commands to setup the DB for the fetch tests.'
def _populate(self):
populate = [("insert into %sbooze values ('%s')" % (self.table_prefix, s)) for s in self.samples] return populate
'Should create a procedure called deleteme that returns two result sets, first the number of rows in booze then "name from booze"'
def help_nextset_setUp(self, cur):
raise NotImplementedError('Helper not implemented')
'If cleaning up is needed after nextSetTest'
def help_nextset_tearDown(self, cur):
raise NotImplementedError('Helper not implemented')
'Return the number of seconds until this token expires.'
def get_expire_delta(self, reference=None):
if (reference is None): reference = now() expiration = self.expires if timezone: if (timezone.is_aware(reference) and timezone.is_naive(expiration)): expiration = timezone.make_aware(expiration, timezone.utc) elif (timezone.is_naive(reference) and timezone.is_aware(expiration)): reference = timezone.make_aware(reference, timezone.utc) timedelta = (expiration - reference) return ((timedelta.days * 86400) + timedelta.seconds)
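A worked sketch of the arithmetic at the end of the method: a token expiring one hour after the reference instant yields 3600 seconds (the timestamps are illustrative, naive values):

import datetime

reference = datetime.datetime(2014, 1, 1, 12, 0, 0)      # illustrative reference time
expires = reference + datetime.timedelta(hours=1)
delta = expires - reference
print(delta.days * 86400 + delta.seconds)                 # 3600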
'Override this method to implement your own authentication backend. Return a client or ``None`` in case of failure.'
def authenticate(self, request=None):
pass
'Validates that the input is a list or tuple.'
def validate(self, value):
if (self.required and (not value)): raise OAuthValidationError({'error': 'invalid_request'}) for val in value: if (not self.valid_value(val)): raise OAuthValidationError({'error': 'invalid_request', 'error_description': (_("'%s' is not a valid scope.") % val)})
'The scope is assembled by combining all the set flags into a single integer value which we can later check again for set bits. If *no* scope is set, we return the default scope which is the first defined scope in :attr:`provider.constants.SCOPES`.'
def clean_scope(self):
default = SCOPES[0][0] flags = self.cleaned_data.get('scope', []) return scope.to_int(default=default, *flags)
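A standalone sketch of the bit-flag combination described above; the READ/WRITE values and the to_int helper are illustrative stand-ins for provider.scope, not its actual implementation:

READ, WRITE = 1 << 1, 1 << 2          # illustrative flag values

def to_int(*flags, **kwargs):
    # Combine the set flags; fall back to the default scope when none are set.
    default = kwargs.get('default', 0)
    combined = 0
    for flag in flags:
        combined |= flag
    return combined or default

print(to_int(READ, WRITE))            # 6: both bits set
print(to_int(default=READ))           # 2: empty selection -> default scope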
':rfc:`3.1.1` Lists of values are space delimited.'
def clean_response_type(self):
response_type = self.cleaned_data.get('response_type') if (not response_type): raise OAuthValidationError({'error': 'invalid_request', 'error_description': "No 'response_type' supplied."}) types = response_type.split(' ') for type in types: if (type not in RESPONSE_TYPE_CHOICES): raise OAuthValidationError({'error': 'unsupported_response_type', 'error_description': (u"'%s' is not a supported response type." % type)}) return response_type
':rfc:`3.1.2` The redirect value has to match what was saved on the authorization server.'
def clean_redirect_uri(self):
redirect_uri = self.cleaned_data.get('redirect_uri') if redirect_uri: if (not (redirect_uri == self.client.redirect_uri)): raise OAuthValidationError({'error': 'invalid_request', 'error_description': _("The requested redirect didn't match the client settings.")}) return redirect_uri
'Make sure that the scope is less or equal to the previous scope!'
def clean(self):
data = self.cleaned_data want_scope = (data.get('scope') or 0) refresh_token = data.get('refresh_token') access_token = (getattr(refresh_token, 'access_token', None) if refresh_token else None) has_scope = (access_token.scope if access_token else 0) if ((want_scope != 0) and (not scope.check(want_scope, has_scope))): raise OAuthValidationError({'error': 'invalid_scope'}) return data
'Make sure that the scope is less or equal to the scope allowed on the grant!'
def clean(self):
data = self.cleaned_data want_scope = (data.get('scope') or 0) grant = data.get('grant') has_scope = (grant.scope if grant else 0) if ((want_scope != 0) and (not scope.check(want_scope, has_scope))): raise OAuthValidationError({'error': 'invalid_scope'}) return data
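Both clean() methods above lean on scope.check(); a minimal sketch of the subset test it performs, assuming the same bit-flag representation (illustrative, not the library's code):

def check(wanted, granted):
    # Every requested bit must already be present in the granted scope.
    return wanted & granted == wanted

print(check(0b010, 0b110))   # True: requested scope is narrower or equal
print(check(0b101, 0b100))   # False: asks for a bit that was never granted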
'Overriding the default cleaning behaviour to exit early on errors instead of validating each field.'
def _clean_fields(self):
try: super(OAuthForm, self)._clean_fields() except OAuthValidationError as e: self._errors.update(e.args[0])
'Overriding the default cleaning behaviour for a shallow error dict.'
def _clean_form(self):
try: super(OAuthForm, self)._clean_form() except OAuthValidationError as e: self._errors.update(e.args[0])
'Return stored data from the session store. :param key: `str` The key under which the data was stored.'
def get_data(self, request, key='params'):
return request.session.get(('%s:%s' % (constants.SESSION_KEY, key)))
'Cache data in the session store. :param request: :attr:`django.http.HttpRequest` :param data: Arbitrary data to store. :param key: `str` The key under which to store the data.'
def cache_data(self, request, data, key='params'):
request.session[('%s:%s' % (constants.SESSION_KEY, key))] = data
'Clear all OAuth related data from the session store.'
def clear_data(self, request):
for key in request.session.keys(): if key.startswith(constants.SESSION_KEY): del request.session[key]
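A sketch of the round trip through the three session helpers above; mixin stands for any view instance using this mixin, request is a hypothetical django.http.HttpRequest with a session, and the payload is illustrative:

mixin.cache_data(request, {'client_id': 'abc'}, key='params')
params = mixin.get_data(request, key='params')    # -> {'client_id': 'abc'}
mixin.clear_data(request)                         # drops every SESSION_KEY-prefixed entry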
'Authenticate a client against all the backends configured in :attr:`authentication`.'
def authenticate(self, request):
for backend in self.authentication: client = backend().authenticate(request) if (client is not None): return client return None
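A sketch of a custom backend that could sit in the authentication tuple consumed by the loop above; the lookup helper is a hypothetical stand-in, not part of the library:

def lookup_client_or_none(client_id):
    # Hypothetical stand-in for a real Client lookup (e.g. an ORM query).
    return None

class QueryStringClientBackend(object):
    def authenticate(self, request=None):
        # Illustrative only: authenticate purely from a client_id query parameter.
        client_id = request.GET.get('client_id') if request is not None else None
        if not client_id:
            return None                 # lets the view try the next backend
        return lookup_client_or_none(client_id)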
'Return a redirect to a URL where the resource owner (see :rfc:`1`) authorizes the client (also :rfc:`1`). :return: :class:`django.http.HttpResponseRedirect`'
def get_redirect_url(self, request):
raise NotImplementedError
':return: ``str`` - The client URL to display in the template after authorization succeeded or failed.'
def get_redirect_url(self, request):
raise NotImplementedError
'Return a form that is capable of validating the request data captured by the :class:`Capture` view. The form must accept a keyword argument ``client``.'
def get_request_form(self, client, data):
raise NotImplementedError
'Return a form that is capable of authorizing the client to the resource owner. :return: :attr:`django.forms.Form`'
def get_authorization_form(self, request, client, data, client_data):
raise NotImplementedError
'Return a client object from a given client identifier. Return ``None`` if no client is found. An error will be displayed to the resource owner and presented to the client upon the final redirect.'
def get_client(self, client_id):
raise NotImplementedError
'Save the authorization that the user granted to the client, involving the creation of a time limited authorization code as outlined in :rfc:`4.1.2`. Should return ``None`` in case authorization is not granted. Should return a string representing the authorization code grant. :return: ``None``, ``str``'
def save_authorization(self, request, client, form, client_data):
raise NotImplementedError
':return: ``tuple`` - ``(client or False, data or error)``'
def _validate_client(self, request, data):
client = self.get_client(data.get('client_id')) if (client is None): raise OAuthError({'error': 'unauthorized_client', 'error_description': _('An unauthorized client tried to access your resources.')}) form = self.get_request_form(client, data) if (not form.is_valid()): raise OAuthError(form.errors) return (client, form.cleaned_data)
'Return an error to be displayed to the resource owner if anything goes awry. Errors can include invalid clients, authorization denials and other edge cases such as a wrong ``redirect_uri`` in the authorization request. :param request: :attr:`django.http.HttpRequest` :param error: ``dict`` The different types of errors are outlined in :rfc:`4.2.2.1`'
def error_response(self, request, error, **kwargs):
ctx = {} ctx.update(error) if (error['error'] in ['redirect_uri', 'unauthorized_client']): ctx.update(next='/') return self.render_to_response(ctx, **kwargs) ctx.update(next=self.get_redirect_url(request)) return self.render_to_response(ctx, **kwargs)
'Return an error response to the client with default status code of *400* stating the error as outlined in :rfc:`5.2`.'
def error_response(self, error, mimetype='application/json', status=400, **kwargs):
return HttpResponse(json.dumps(error), mimetype=mimetype, status=status, **kwargs)
'Return the grant associated with this request or an error dict. :return: ``tuple`` - ``(True or False, grant or error_dict)``'
def get_authorization_code_grant(self, request, data, client):
raise NotImplementedError