repository_name | func_path_in_repository | func_name | whole_func_string | language | func_code_url
---|---|---|---|---|---|
mitodl/PyLmod | pylmod/gradebook.py | GradeBook.get_assignments | def get_assignments(
self,
gradebook_id='',
simple=False,
max_points=True,
avg_stats=False,
grading_stats=False
):
"""Get assignments for a gradebook.
Return list of assignments for a given gradebook,
specified by a py:attribute::gradebook_id. You can control
if additional parameters are returned, but the response
time with py:attribute::avg_stats and py:attribute::grading_stats
enabled is significantly longer.
Args:
gradebook_id (str): unique identifier for gradebook, i.e. ``2314``
simple (bool): return just assignment names, default= ``False``
max_points (bool):
Max points is a property of the grading scheme for the
assignment rather than a property of the assignment itself,
default= ``True``
avg_stats (bool): return average grade, default= ``False``
grading_stats (bool):
return grading statistics, i.e. number of approved grades,
unapproved grades, etc., default= ``False``
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
list: list of assignment dictionaries
An example return value is:
.. code-block:: python
[
{
u'assignmentId': 2431240,
u'categoryId': 1293820,
u'description': u'',
u'dueDate': 1372392000000,
u'dueDateString': u'06-28-2013',
u'gradebookId': 1293808,
u'graderVisible': True,
u'gradingSchemeId': 2431243,
u'gradingSchemeType': u'NUMERIC',
u'isComposite': False,
u'isHomework': False,
u'maxPointsTotal': 10.0,
u'name': u'Homework 1',
u'shortName': u'HW1',
u'userDeleted': False,
u'weight': 1.0
},
{
u'assignmentId': 16708850,
u'categoryId': 1293820,
u'description': u'',
u'dueDate': 1383541200000,
u'dueDateString': u'11-04-2013',
u'gradebookId': 1293808,
u'graderVisible': False,
u'gradingSchemeId': 16708851,
u'gradingSchemeType': u'NUMERIC',
u'isComposite': False,
u'isHomework': False,
u'maxPointsTotal': 100.0,
u'name': u'midterm1',
u'shortName': u'mid1',
u'userDeleted': False,
u'weight': 1.0
},
]
"""
# These are parameters required for the remote API call, so
# there aren't too many arguments
# pylint: disable=too-many-arguments
params = dict(
includeMaxPoints=json.dumps(max_points),
includeAvgStats=json.dumps(avg_stats),
includeGradingStats=json.dumps(grading_stats)
)
assignments = self.get(
'assignments/{gradebookId}'.format(
gradebookId=gradebook_id or self.gradebook_id
),
params=params,
)
if simple:
return [{'AssignmentName': x['name']}
for x in assignments['data']]
return assignments['data'] | python | https://github.com/mitodl/PyLmod/blob/b798b86c33d1eb615e7cd4f3457b5c15da1d86e0/pylmod/gradebook.py#L183-L280 |
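A minimal usage sketch for ``get_assignments``. The ``GradeBook`` constructor arguments below are an assumption based on the PyLmod README rather than anything in this excerpt, and the certificate path and gradebook UUID are placeholders.

.. code-block:: python

    from pylmod import GradeBook

    # Placeholder certificate path and gradebook UUID -- substitute real values.
    gb = GradeBook('/path/to/ixe-cert.pem', gbuuid='STELLAR:/project/mitxdemosite')

    # Full assignment dictionaries; max points are included by default.
    assignments = gb.get_assignments()

    # Names only; avg_stats and grading_stats are left off because they slow the call.
    names = gb.get_assignments(simple=True)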
mitodl/PyLmod | pylmod/gradebook.py | GradeBook.get_assignment_by_name | def get_assignment_by_name(self, assignment_name, assignments=None):
"""Get assignment by name.
Get an assignment by name. It works by retrieving all assignments
and returning the first assignment with a matching name. If the
optional parameter ``assignments`` is provided, it uses this
collection rather than retrieving all assignments from the service.
Args:
assignment_name (str): name of assignment
assignments (list): assignments to search, default: None
When ``assignments`` is unspecified, all assignments
are retrieved from the service.
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
tuple: tuple of assignment id and assignment dictionary
.. code-block:: python
(
16708850,
{
u'assignmentId': 16708850,
u'categoryId': 1293820,
u'description': u'',
u'dueDate': 1383541200000,
u'dueDateString': u'11-04-2013',
u'gradebookId': 1293808,
u'graderVisible': False,
u'gradingSchemeId': 16708851,
u'gradingSchemeType': u'NUMERIC',
u'isComposite': False,
u'isHomework': False,
u'maxPointsTotal': 100.0,
u'name': u'midterm1',
u'shortName': u'mid1',
u'userDeleted': False,
u'weight': 1.0
}
)
"""
if assignments is None:
assignments = self.get_assignments()
for assignment in assignments:
if assignment['name'] == assignment_name:
return assignment['assignmentId'], assignment
return None, None | python | https://github.com/mitodl/PyLmod/blob/b798b86c33d1eb615e7cd4f3457b5c15da1d86e0/pylmod/gradebook.py#L282-L333 |
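A short sketch of ``get_assignment_by_name``, assuming ``gb`` and ``assignments`` from the previous sketch; the assignment name is illustrative.

.. code-block:: python

    # Passing the cached list avoids a second round trip to the service.
    assignment_id, assignment = gb.get_assignment_by_name(
        'midterm1', assignments=assignments
    )
    if assignment_id is None:
        print('no assignment named midterm1')  # the method returns (None, None) on a miss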
mitodl/PyLmod | pylmod/gradebook.py | GradeBook.create_assignment | def create_assignment( # pylint: disable=too-many-arguments
self,
name,
short_name,
weight,
max_points,
due_date_str,
gradebook_id='',
**kwargs
):
"""Create a new assignment.
Create a new assignment. By default, assignments are created
under the `Uncategorized` category.
Args:
name (str): descriptive assignment name,
i.e. ``new NUMERIC SIMPLE ASSIGNMENT``
short_name (str): short name of assignment, one word of
no more than 5 characters, i.e. ``SAnew``
weight (str): floating point value for weight, i.e. ``1.0``
max_points (str): floating point value for maximum point
total, i.e. ``100.0``
due_date_str (str): due date as string in ``mm-dd-yyyy``
format, i.e. ``08-21-2011``
gradebook_id (str): unique identifier for gradebook, i.e. ``2314``
kwargs (dict): dictionary containing additional parameters,
i.e. ``graderVisible``, ``totalAverage``, and ``categoryId``.
For example:
.. code-block:: python
{
u'graderVisible': True,
u'totalAverage': None
u'categoryId': 1007964,
}
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
dict: dictionary containing ``data``, ``status`` and ``message``
for example:
.. code-block:: python
{
u'data':
{
u'assignmentId': 18490492,
u'categoryId': 1293820,
u'description': u'',
u'dueDate': 1312171200000,
u'dueDateString': u'08-01-2011',
u'gradebookId': 1293808,
u'graderVisible': False,
u'gradingSchemeId': 18490493,
u'gradingSchemeType': u'NUMERIC',
u'isComposite': False,
u'isHomework': False,
u'maxPointsTotal': 100.0,
u'name': u'new NUMERIC SIMPLE ASSIGNMENT',
u'numStudentGradesToBeApproved': 0,
u'numStudentsToBeGraded': 614,
u'shortName': u'SAnew',
u'userDeleted': False,
u'weight': 1.0
},
u'message': u'assignment is created successfully',
u'status': 1
}
"""
data = {
'name': name,
'shortName': short_name,
'weight': weight,
'graderVisible': False,
'gradingSchemeType': 'NUMERIC',
'gradebookId': gradebook_id or self.gradebook_id,
'maxPointsTotal': max_points,
'dueDateString': due_date_str
}
data.update(kwargs)
log.info("Creating assignment %s", name)
response = self.post('assignment', data)
log.debug('Received response data: %s', response)
return response | python | https://github.com/mitodl/PyLmod/blob/b798b86c33d1eb615e7cd4f3457b5c15da1d86e0/pylmod/gradebook.py#L335-L425 |
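A hedged sketch of ``create_assignment``, reusing the ``gb`` client assumed above; every literal value is illustrative and mirrors the docstring examples.

.. code-block:: python

    response = gb.create_assignment(
        'new NUMERIC SIMPLE ASSIGNMENT',  # name
        'SAnew',                          # short_name, one word of up to 5 characters
        1.0,                              # weight
        100.0,                            # max_points
        '08-21-2011',                     # due_date_str in mm-dd-yyyy format
        graderVisible=True,               # extra keyword args are passed through to the API
    )
    new_assignment_id = response['data']['assignmentId']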
mitodl/PyLmod | pylmod/gradebook.py | GradeBook.set_grade | def set_grade(
self,
assignment_id,
student_id,
grade_value,
gradebook_id='',
**kwargs
):
"""Set numerical grade for student and assignment.
Set a numerical grade for a student and assignment. Additional options
for grade ``mode`` are: OVERALL_GRADE = ``1``, REGULAR_GRADE = ``2``
To set 'excused' as the grade, enter ``None`` for letter and
numeric grade values,
and pass ``x`` as the ``specialGradeValue``.
``ReturnAffectedValues`` flag determines whether or not to return
student cumulative points and
impacted assignment category grades (average and student grade).
Args:
assignment_id (str): numerical ID for assignment
student_id (str): numerical ID for student
grade_value (str): numerical grade value
gradebook_id (str): unique identifier for gradebook, i.e. ``2314``
kwargs (dict): dictionary of additional parameters
.. code-block:: python
{
u'letterGradeValue':None,
u'booleanGradeValue':None,
u'specialGradeValue':None,
u'mode':2,
u'isGradeApproved':False,
u'comment':None,
u'returnAffectedValues': True,
}
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
dict: dictionary containing response ``status`` and ``message``
.. code-block:: python
{
u'message': u'grade saved successfully',
u'status': 1
}
"""
# pylint: disable=too-many-arguments
# numericGradeValue stringified because 'x' is a possible
# value for excused grades.
grade_info = {
'studentId': student_id,
'assignmentId': assignment_id,
'mode': 2,
'comment': 'from MITx {0}'.format(time.ctime(time.time())),
'numericGradeValue': str(grade_value),
'isGradeApproved': False
}
grade_info.update(kwargs)
log.info(
"student %s set_grade=%s for assignment %s",
student_id,
grade_value,
assignment_id)
return self.post(
'grades/{gradebookId}'.format(
gradebookId=gradebook_id or self.gradebook_id
),
data=grade_info,
) | python | https://github.com/mitodl/PyLmod/blob/b798b86c33d1eb615e7cd4f3457b5c15da1d86e0/pylmod/gradebook.py#L454-L531 |
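A sketch of setting one grade with ``set_grade``; the IDs and values are placeholders, assumed to come from the lookup helpers elsewhere in this module.

.. code-block:: python

    result = gb.set_grade(
        assignment_id=new_assignment_id,
        student_id=1145,              # placeholder; see get_student_by_email()
        grade_value=95.0,
        isGradeApproved=True,         # optional override of the default False
    )
    print(result['message'])          # e.g. 'grade saved successfully'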
mitodl/PyLmod | pylmod/gradebook.py | GradeBook.multi_grade | def multi_grade(self, grade_array, gradebook_id=''):
"""Set multiple grades for students.
Set multiple student grades for a gradebook. The grades are passed
as a list of dictionaries.
Each grade dictionary in ``grade_array`` must contain a
``studentId`` and an ``assignmentId``.
Options for grade mode are: OVERALL_GRADE = ``1``,
REGULAR_GRADE = ``2``
To set 'excused' as the grade, enter ``None`` for
``letterGradeValue`` and ``numericGradeValue``,
and pass ``x`` as the ``specialGradeValue``.
The ``ReturnAffectedValues`` flag determines whether to return
student cumulative points and impacted assignment category
grades (average and student grade)
.. code-block:: python
[
{
u'comment': None,
u'booleanGradeValue': None,
u'studentId': 1135,
u'assignmentId': 4522,
u'specialGradeValue': None,
u'returnAffectedValues': True,
u'letterGradeValue': None,
u'mode': 2,
u'numericGradeValue': 50,
u'isGradeApproved': False
},
{
u'comment': None,
u'booleanGradeValue': None,
u'studentId': 1135,
u'assignmentId': 4522,
u'specialGradeValue': u'x',
u'returnAffectedValues': True,
u'letterGradeValue': None,
u'mode': 2,
u'numericGradeValue': None,
u'isGradeApproved': False
},
{
u'comment': None,
u'booleanGradeValue': None,
u'studentId': 1135,
u'assignmentId': None,
u'specialGradeValue': None,
u'returnAffectedValues': True,
u'letterGradeValue': u'A',
u'mode': 1,
u'numericGradeValue': None,
u'isGradeApproved': False
}
]
Args:
grade_array (dict): an array of grades to save
gradebook_id (str): unique identifier for gradebook, i.e. ``2314``
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
dict: dictionary containing response ``status`` and ``message``
"""
log.info('Sending grades: %r', grade_array)
return self.post(
'multiGrades/{gradebookId}'.format(
gradebookId=gradebook_id or self.gradebook_id
),
data=grade_array,
) | python | https://github.com/mitodl/PyLmod/blob/b798b86c33d1eb615e7cd4f3457b5c15da1d86e0/pylmod/gradebook.py#L533-L609 |
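A sketch of a bulk update with ``multi_grade``; the student and assignment IDs are placeholders taken from the docstring above, and the second entry shows the 'excused' convention described there.

.. code-block:: python

    grades = [
        {'studentId': 1135, 'assignmentId': 4522, 'mode': 2,
         'numericGradeValue': 50, 'isGradeApproved': False,
         'returnAffectedValues': True},
        # Excused grade: the numeric value is None and specialGradeValue is 'x'.
        {'studentId': 1135, 'assignmentId': 4522, 'mode': 2,
         'numericGradeValue': None, 'specialGradeValue': 'x',
         'isGradeApproved': False, 'returnAffectedValues': True},
    ]
    result = gb.multi_grade(grades)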
mitodl/PyLmod | pylmod/gradebook.py | GradeBook.get_sections | def get_sections(self, gradebook_id='', simple=False):
"""Get the sections for a gradebook.
Return a dictionary of types of sections containing a list of that
type for a given gradebook. Specified by a gradebookid.
If simple=True, a list of dictionaries is provided for each
section regardless of type. The dictionary only contains one
key ``SectionName``.
Args:
gradebook_id (str): unique identifier for gradebook, i.e. ``2314``
simple (bool): return a list of section names only
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
dict: Dictionary of section types where each type has a
list of sections
An example return value is:
.. code-block:: python
{
u'recitation':
[
{
u'editable': False,
u'groupId': 1293925,
u'groupingScheme': u'Recitation',
u'members': None,
u'name': u'Unassigned',
u'shortName': u'DefaultGroupNoCollisionPlease1234',
u'staffs': None
},
{
u'editable': True,
u'groupId': 1327565,
u'groupingScheme': u'Recitation',
u'members': None,
u'name': u'r01',
u'shortName': u'r01',
u'staffs': None},
{u'editable': True,
u'groupId': 1327555,
u'groupingScheme': u'Recitation',
u'members': None,
u'name': u'r02',
u'shortName': u'r02',
u'staffs': None
}
]
}
"""
params = dict(includeMembers='false')
section_data = self.get(
'sections/{gradebookId}'.format(
gradebookId=gradebook_id or self.gradebook_id
),
params=params
)
if simple:
sections = self.unravel_sections(section_data['data'])
return [{'SectionName': x['name']} for x in sections]
return section_data['data'] | python | https://github.com/mitodl/PyLmod/blob/b798b86c33d1eb615e7cd4f3457b5c15da1d86e0/pylmod/gradebook.py#L611-L681 |
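A brief sketch of ``get_sections``, again assuming the ``gb`` client from the first example.

.. code-block:: python

    sections_by_type = gb.get_sections()           # e.g. {'recitation': [...], ...}
    section_names = gb.get_sections(simple=True)   # e.g. [{'SectionName': 'r01'}, ...]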
mitodl/PyLmod | pylmod/gradebook.py | GradeBook.get_section_by_name | def get_section_by_name(self, section_name):
"""Get a section by its name.
Get a list of sections for a given gradebook,
specified by a gradebookid.
Args:
section_name (str): The section's name.
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
tuple: tuple of group id, and section dictionary
An example return value is:
.. code-block:: python
(
1327565,
{
u'editable': True,
u'groupId': 1327565,
u'groupingScheme': u'Recitation',
u'members': None,
u'name': u'r01',
u'shortName': u'r01',
u'staffs': None
}
)
"""
sections = self.unravel_sections(self.get_sections())
for section in sections:
if section['name'] == section_name:
return section['groupId'], section
return None, None | python | https://github.com/mitodl/PyLmod/blob/b798b86c33d1eb615e7cd4f3457b5c15da1d86e0/pylmod/gradebook.py#L683-L721 |
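A sketch of ``get_section_by_name``; the section name is illustrative.

.. code-block:: python

    group_id, section = gb.get_section_by_name('r01')
    if group_id is None:
        print('no such section')   # (None, None) is returned when the name is not found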
mitodl/PyLmod | pylmod/gradebook.py | GradeBook.get_students | def get_students(
self,
gradebook_id='',
simple=False,
section_name='',
include_photo=False,
include_grade_info=False,
include_grade_history=False,
include_makeup_grades=False
):
"""Get students for a gradebook.
Get a list of students for a given gradebook,
specified by a gradebook id. Does not include grade data.
Args:
gradebook_id (str): unique identifier for gradebook, i.e. ``2314``
simple (bool):
if ``True``, just return dictionary with keys ``email``,
``name``, ``section``, default = ``False``
section_name (str): section name
include_photo (bool): include student photo, default= ``False``
include_grade_info (bool):
include student's grade info, default= ``False``
include_grade_history (bool):
include student's grade history, default= ``False``
include_makeup_grades (bool):
include student's makeup grades, default= ``False``
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
list: list of student dictionaries
.. code-block:: python
[{
u'accountEmail': u'[email protected]',
u'displayName': u'Molly Parker',
u'photoUrl': None,
u'middleName': None,
u'section': u'Unassigned',
u'sectionId': 1293925,
u'editable': False,
u'overallGradeInformation': None,
u'studentId': 1145,
u'studentAssignmentInfo': None,
u'sortableName': u'Parker, Molly',
u'surname': u'Parker',
u'givenName': u'Molly',
u'nickName': u'Molly',
u'email': u'[email protected]'
},]
"""
# These are parameters required for the remote API call, so
# there aren't too many arguments, or too many variables
# pylint: disable=too-many-arguments,too-many-locals
# Set params by arguments
params = dict(
includePhoto=json.dumps(include_photo),
includeGradeInfo=json.dumps(include_grade_info),
includeGradeHistory=json.dumps(include_grade_history),
includeMakeupGrades=json.dumps(include_makeup_grades),
)
url = 'students/{gradebookId}'
if section_name:
group_id, _ = self.get_section_by_name(section_name)
if group_id is None:
failure_message = (
'in get_students -- Error: '
'No such section %s' % section_name
)
log.critical(failure_message)
raise PyLmodNoSuchSection(failure_message)
url += '/section/{0}'.format(group_id)
student_data = self.get(
url.format(
gradebookId=gradebook_id or self.gradebook_id
),
params=params,
)
if simple:
# just return dict with keys email, name, section
student_map = dict(
accountEmail='email',
displayName='name',
section='section'
)
def remap(students):
"""Convert mit.edu domain to upper-case for student emails.
The mit.edu domain for user email must be upper-case,
i.e. MIT.EDU.
Args:
students (list): list of students
Returns:
dict: dictionary of updated student email domains
"""
newx = dict((student_map[k], students[k]) for k in student_map)
# match certs
newx['email'] = newx['email'].replace('@mit.edu', '@MIT.EDU')
return newx
return [remap(x) for x in student_data['data']]
return student_data['data'] | python | https://github.com/mitodl/PyLmod/blob/b798b86c33d1eb615e7cd4f3457b5c15da1d86e0/pylmod/gradebook.py#L723-L838 |
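A sketch of ``get_students``, assuming the section name exists in the gradebook.

.. code-block:: python

    # Full student records for the whole gradebook.
    students = gb.get_students()

    # Trimmed records (email/name/section) for one section; raises
    # PyLmodNoSuchSection if the section name is unknown.
    r01_roster = gb.get_students(simple=True, section_name='r01')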
mitodl/PyLmod | pylmod/gradebook.py | GradeBook.get_student_by_email | def get_student_by_email(self, email, students=None):
"""Get a student based on an email address.
Calls ``self.get_students()`` to get list of all students,
if not passed as the ``students`` parameter.
Args:
email (str): student email
students (list): dictionary of students to search, default: None
When ``students`` is unspecified, all students in gradebook
are retrieved.
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
tuple: tuple of student id and student dictionary.
"""
if students is None:
students = self.get_students()
email = email.lower()
for student in students:
if student['accountEmail'].lower() == email:
return student['studentId'], student
return None, None | python | https://github.com/mitodl/PyLmod/blob/b798b86c33d1eb615e7cd4f3457b5c15da1d86e0/pylmod/gradebook.py#L840-L866 |
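A sketch of ``get_student_by_email``, reusing the full ``students`` list from the previous sketch; the address is a placeholder.

.. code-block:: python

    student_id, student = gb.get_student_by_email(
        'jdoe@mit.edu',        # placeholder address; the comparison is case-insensitive
        students=students,
    )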
mitodl/PyLmod | pylmod/gradebook.py | GradeBook._spreadsheet2gradebook_multi | def _spreadsheet2gradebook_multi(
self,
csv_reader,
email_field,
non_assignment_fields,
approve_grades=False,
use_max_points_column=False,
max_points_column=None,
normalize_column=None
):
"""Transfer grades from spreadsheet to array.
Helper function that transfers grades from a spreadsheet using
``multi_grade()`` (multiple students at a time). We do this by
creating a large array containing all grades to transfer, then
make one call to the Gradebook API.
Args:
csv_reader (csv.DictReader): list of rows in CSV file
email_field (str): The name of the email field
non_assignment_fields (list): list of column names in CSV file
which should not be treated as assignment names
approve_grades (bool): Should grades be auto approved?
use_max_points_column (bool): If true, read the max points
and normalize values from the CSV and use the max points value
in place of the default if normalized is False.
max_points_column (str): The name of the max_pts column. All
rows contain the same number, the max points for
the assignment.
normalize_column (str): The name of the normalize column which
indicates whether to use the max points value.
Raises:
PyLmodFailedAssignmentCreation: Failed to create assignment
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
tuple: tuple of dictionary containing response ``status``
and ``message``, and duration of operation
"""
# pylint: disable=too-many-locals
if use_max_points_column:
if max_points_column is None:
raise ValueError(
"max_points_column must be set "
"if use_max_points_column is set"
)
if normalize_column is None:
raise ValueError(
"normalize_column must be set "
"if use_max_points_column is set"
)
assignments = self.get_assignments()
students = self.get_students()
assignment2id = {}
grade_array = []
for row in csv_reader:
email = row[email_field]
sid, _ = self.get_student_by_email(email, students)
if sid is None:
log.warning(
'Error in spreadsheet2gradebook: cannot find '
'student id for email="%s"\n', email
)
continue
for field in row.keys():
if field in non_assignment_fields:
continue
if field not in assignment2id:
assignment_id, _ = self.get_assignment_by_name(
field, assignments=assignments
)
# If no assignment found, try creating it.
if assignment_id is None:
name = field
shortname = field[0:3] + field[-2:]
log.info('calling create_assignment from multi')
# If the max_pts and normalize columns are present,
# and use_max_points_column is True,
# replace the default value for max points.
max_points = DEFAULT_MAX_POINTS
if use_max_points_column:
# This value means it was already normalized, and
# we should use the default max points
# instead of the one in the CSV.
normalize_value = True
normalize_value_str = row.get(normalize_column)
if normalize_value_str is not None:
try:
normalize_value = bool(
int(normalize_value_str)
)
except ValueError as ex:
# Value is already normalized
log.warning(
'Bool conversion error '
' in normalize column for '
'value: %s, exception: %s',
normalize_value_str,
ex
)
if not normalize_value:
max_points_str = row.get(max_points_column)
if max_points_str is not None:
try:
max_points = float(max_points_str)
except ValueError as ex:
log.warning(
'Floating point conversion error '
'in max points column for '
'value: %s, exception: %s',
max_points_str,
ex
)
response = self.create_assignment(
name, shortname, 1.0, max_points, '12-15-2013'
)
if (
not response.get('data', '') or
'assignmentId' not in response.get('data')
):
failure_message = (
"Error! Failed to create assignment {0}"
", got {1}".format(
name, response
)
)
log.critical(failure_message)
raise PyLmodFailedAssignmentCreation(
failure_message
)
assignment_id = response['data']['assignmentId']
log.info("Assignment %s has Id=%s", field, assignment_id)
assignment2id[field] = assignment_id
assignment_id = assignment2id[field]
successful = True
try:
# Try to convert to numeric, but grade the
# rest anyway if any particular grade isn't a
# number
gradeval = float(row[field]) * 1.0
log.debug(
'Received grade value %s(converted to %s) for '
'student %s on assignment %s',
row[field],
gradeval,
sid,
assignment_id,
)
except ValueError as err:
log.exception(
"Failed in converting grade for student %s"
", row=%s, err=%s", sid, row, err
)
successful = False
if successful:
grade_array.append({
"studentId": sid,
"assignmentId": assignment_id,
"numericGradeValue": gradeval,
"mode": 2,
"isGradeApproved": approve_grades
})
# Everything is setup to post, do the post and track the time
# it takes.
log.info(
'Data read from file, doing multiGrades API '
'call (%d grades)', len(grade_array)
)
tstart = time.time()
response = self.multi_grade(grade_array)
duration = time.time() - tstart
log.info(
'multiGrades API call done (%d bytes returned) '
'dt=%6.2f seconds.', len(json.dumps(response)), duration
)
return response, duration | python | https://github.com/mitodl/PyLmod/blob/b798b86c33d1eb615e7cd4f3457b5c15da1d86e0/pylmod/gradebook.py#L868-L1051 |
mitodl/PyLmod | pylmod/gradebook.py | GradeBook.spreadsheet2gradebook | def spreadsheet2gradebook(
self,
csv_file,
email_field=None,
approve_grades=False,
use_max_points_column=False,
max_points_column=None,
normalize_column=None
):
"""Upload grade spreadsheet to gradebook.
Upload grades from CSV format spreadsheet file into the
Learning Modules gradebook. The spreadsheet must have a column
named ``External email`` which is used as the student's email
address (for looking up and matching studentId).
These columns are disregarded: ``ID``, ``Username``,
``Full Name``, ``edX email``, ``External email``,
as well as the strings passed in ``max_points_column``
and ``normalize_column``, if any.
All other columns are taken as assignments.
If ``email_field`` is specified, then that field name is taken as
the student's email.
.. code-block:: none
External email,AB Assignment 01,AB Assignment 02
[email protected],1.0,0.9
[email protected],0.2,0.4
[email protected],0.93,0.77
Args:
csv_file (str): filename of csv data, or readable file object
email_field (str): student's email
approve_grades (bool): Should grades be auto approved?
use_max_points_column (bool):
If ``True``, read the max points and normalize values
from the CSV and use the max points value in place of
the default if normalized is ``False``.
max_points_column (str): The name of the max_pts column. All
rows contain the same number, the max points for
the assignment.
normalize_column (str): The name of the normalize column which
indicates whether to use the max points value.
Raises:
PyLmodFailedAssignmentCreation: Failed to create assignment
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
tuple: tuple of dictionary containing response ``status``
and ``message``, and duration of operation
"""
non_assignment_fields = [
'ID', 'Username', 'Full Name', 'edX email', 'External email'
]
if max_points_column is not None:
non_assignment_fields.append(max_points_column)
if normalize_column is not None:
non_assignment_fields.append(normalize_column)
if email_field is not None:
non_assignment_fields.append(email_field)
else:
email_field = 'External email'
if not hasattr(csv_file, 'read'):
file_pointer = open(csv_file)
else:
file_pointer = csv_file
csv_reader = csv.DictReader(file_pointer, dialect='excel')
response = self._spreadsheet2gradebook_multi(
csv_reader,
email_field,
non_assignment_fields,
approve_grades=approve_grades,
use_max_points_column=use_max_points_column,
max_points_column=max_points_column,
normalize_column=normalize_column
)
return response | python | def spreadsheet2gradebook(
self,
csv_file,
email_field=None,
approve_grades=False,
use_max_points_column=False,
max_points_column=None,
normalize_column=None
):
"""Upload grade spreadsheet to gradebook.
Upload grades from CSV format spreadsheet file into the
Learning Modules gradebook. The spreadsheet must have a column
named ``External email`` which is used as the student's email
address (for looking up and matching studentId).
These columns are disregarded: ``ID``, ``Username``,
``Full Name``, ``edX email``, ``External email``,
as well as the strings passed in ``max_points_column``
and ``normalize_column``, if any.
All other columns are taken as assignments.
If ``email_field`` is specified, then that field name is taken as
the student's email.
.. code-block:: none
External email,AB Assignment 01,AB Assignment 02
[email protected],1.0,0.9
[email protected],0.2,0.4
[email protected],0.93,0.77
Args:
csv_file (str): filename of csv data, or readable file object
email_field (str): student's email
approve_grades (bool): Should grades be auto approved?
use_max_points_column (bool):
If ``True``, read the max points and normalize values
from the CSV and use the max points value in place of
the default if normalized is ``False``.
max_points_column (str): The name of the max_pts column. All
rows contain the same number, the max points for
the assignment.
normalize_column (str): The name of the normalize column which
indicates whether to use the max points value.
Raises:
PyLmodFailedAssignmentCreation: Failed to create assignment
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
tuple: tuple of dictionary containing response ``status``
and ``message``, and duration of operation
"""
non_assignment_fields = [
'ID', 'Username', 'Full Name', 'edX email', 'External email'
]
if max_points_column is not None:
non_assignment_fields.append(max_points_column)
if normalize_column is not None:
non_assignment_fields.append(normalize_column)
if email_field is not None:
non_assignment_fields.append(email_field)
else:
email_field = 'External email'
if not hasattr(csv_file, 'read'):
file_pointer = open(csv_file)
else:
file_pointer = csv_file
csv_reader = csv.DictReader(file_pointer, dialect='excel')
response = self._spreadsheet2gradebook_multi(
csv_reader,
email_field,
non_assignment_fields,
approve_grades=approve_grades,
use_max_points_column=use_max_points_column,
max_points_column=max_points_column,
normalize_column=normalize_column
)
return response | Upload grade spreadsheet to gradebook.
Upload grades from CSV format spreadsheet file into the
Learning Modules gradebook. The spreadsheet must have a column
named ``External email`` which is used as the student's email
address (for looking up and matching studentId).
These columns are disregarded: ``ID``, ``Username``,
``Full Name``, ``edX email``, ``External email``,
as well as the strings passed in ``max_points_column``
and ``normalize_column``, if any.
All other columns are taken as assignments.
If ``email_field`` is specified, then that field name is taken as
the student's email.
.. code-block:: none
External email,AB Assignment 01,AB Assignment 02
[email protected],1.0,0.9
[email protected],0.2,0.4
[email protected],0.93,0.77
Args:
csv_file (str): filename of csv data, or readable file object
email_field (str): student's email
approve_grades (bool): Should grades be auto approved?
use_max_points_column (bool):
If ``True``, read the max points and normalize values
from the CSV and use the max points value in place of
the default if normalized is ``False``.
max_points_column (str): The name of the max_pts column. All
rows contain the same number, the max points for
the assignment.
normalize_column (str): The name of the normalize column which
indicates whether to use the max points value.
Raises:
PyLmodFailedAssignmentCreation: Failed to create assignment
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
tuple: tuple of dictionary containing response ``status``
and ``message``, and duration of operation | https://github.com/mitodl/PyLmod/blob/b798b86c33d1eb615e7cd4f3457b5c15da1d86e0/pylmod/gradebook.py#L1053-L1137 |
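A minimal usage sketch for spreadsheet2gradebook; the certificate path, service URL, gradebook uuid, and CSV filename are placeholders, and the constructor arguments follow the project README rather than anything shown in this excerpt.

from pylmod import GradeBook

gb = GradeBook(
    'mycert.pem',                                     # placeholder certificate
    urlbase='https://learning-modules.mit.edu:8443/',
    gbuuid='STELLAR:/project/mitxdemosite',           # placeholder gradebook uuid
)
response, duration = gb.spreadsheet2gradebook('grades.csv', approve_grades=True)
print('upload took %.2f seconds' % duration)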
mitodl/PyLmod | pylmod/gradebook.py | GradeBook.get_staff | def get_staff(self, gradebook_id, simple=False):
"""Get staff list for gradebook.
Get staff list for the gradebook specified. Optionally, return
a less detailed list by specifying ``simple = True``.
If simple=True, return a list of dictionaries, one dictionary
for each member. The dictionary contains a member's ``accountEmail``,
``displayName``, and ``role``. Members with multiple roles will
appear in the list once for each role.
Args:
gradebook_id (str): unique identifier for gradebook, i.e. ``2314``
simple (bool): Return a staff list with less detail. Default
is ``False``.
Returns:
An example return value is:
.. code-block:: python
{
u'data': {
u'COURSE_ADMIN': [
{
u'accountEmail': u'[email protected]',
u'displayName': u'Benjamin Franklin',
u'editable': False,
u'email': u'[email protected]',
u'givenName': u'Benjamin',
u'middleName': None,
u'mitId': u'921344431',
u'nickName': u'Benjamin',
u'personId': 10710616,
u'sortableName': u'Franklin, Benjamin',
u'surname': u'Franklin',
u'year': None
},
],
u'COURSE_PROF': [
{
u'accountEmail': u'[email protected]',
u'displayName': u'Donald Duck',
u'editable': False,
u'email': u'[email protected]',
u'givenName': u'Donald',
u'middleName': None,
u'mitId': u'916144889',
u'nickName': u'Donald',
u'personId': 8117160,
u'sortableName': u'Duck, Donald',
u'surname': u'Duck',
u'year': None
},
],
u'COURSE_TA': [
{
u'accountEmail': u'[email protected]',
u'displayName': u'Huey Duck',
u'editable': False,
u'email': u'[email protected]',
u'givenName': u'Huey',
u'middleName': None,
u'mitId': u'920445024',
u'nickName': u'Huey',
u'personId': 1299059,
u'sortableName': u'Duck, Huey',
u'surname': u'Duck',
u'year': None
},
]
},
}
"""
staff_data = self.get(
'staff/{gradebookId}'.format(
gradebookId=gradebook_id or self.gradebook_id
),
params=None,
)
if simple:
simple_list = []
unraveled_list = self.unravel_staff(staff_data)
for member in unraveled_list.__iter__():
simple_list.append({
'accountEmail': member['accountEmail'],
'displayName': member['displayName'],
'role': member['role'],
})
return simple_list
return staff_data['data'] | python | def get_staff(self, gradebook_id, simple=False):
"""Get staff list for gradebook.
Get staff list for the gradebook specified. Optionally, return
a less detailed list by specifying ``simple = True``.
If simple=True, return a list of dictionaries, one dictionary
for each member. The dictionary contains a member's ``accountEmail``,
``displayName``, and ``role``. Members with multiple roles will
appear in the list once for each role.
Args:
gradebook_id (str): unique identifier for gradebook, i.e. ``2314``
simple (bool): Return a staff list with less detail. Default
is ``False``.
Returns:
An example return value is:
.. code-block:: python
{
u'data': {
u'COURSE_ADMIN': [
{
u'accountEmail': u'[email protected]',
u'displayName': u'Benjamin Franklin',
u'editable': False,
u'email': u'[email protected]',
u'givenName': u'Benjamin',
u'middleName': None,
u'mitId': u'921344431',
u'nickName': u'Benjamin',
u'personId': 10710616,
u'sortableName': u'Franklin, Benjamin',
u'surname': u'Franklin',
u'year': None
},
],
u'COURSE_PROF': [
{
u'accountEmail': u'[email protected]',
u'displayName': u'Donald Duck',
u'editable': False,
u'email': u'[email protected]',
u'givenName': u'Donald',
u'middleName': None,
u'mitId': u'916144889',
u'nickName': u'Donald',
u'personId': 8117160,
u'sortableName': u'Duck, Donald',
u'surname': u'Duck',
u'year': None
},
],
u'COURSE_TA': [
{
u'accountEmail': u'[email protected]',
u'displayName': u'Huey Duck',
u'editable': False,
u'email': u'[email protected]',
u'givenName': u'Huey',
u'middleName': None,
u'mitId': u'920445024',
u'nickName': u'Huey',
u'personId': 1299059,
u'sortableName': u'Duck, Huey',
u'surname': u'Duck',
u'year': None
},
]
},
}
"""
staff_data = self.get(
'staff/{gradebookId}'.format(
gradebookId=gradebook_id or self.gradebook_id
),
params=None,
)
if simple:
simple_list = []
unraveled_list = self.unravel_staff(staff_data)
for member in unraveled_list.__iter__():
simple_list.append({
'accountEmail': member['accountEmail'],
'displayName': member['displayName'],
'role': member['role'],
})
return simple_list
return staff_data['data'] | Get staff list for gradebook.
Get staff list for the gradebook specified. Optionally, return
a less detailed list by specifying ``simple = True``.
If simple=True, return a list of dictionaries, one dictionary
for each member. The dictionary contains a member's ``accountEmail``,
``displayName``, and ``role``. Members with multiple roles will
appear in the list once for each role.
Args:
gradebook_id (str): unique identifier for gradebook, i.e. ``2314``
simple (bool): Return a staff list with less detail. Default
is ``False``.
Returns:
An example return value is:
.. code-block:: python
{
u'data': {
u'COURSE_ADMIN': [
{
u'accountEmail': u'[email protected]',
u'displayName': u'Benjamin Franklin',
u'editable': False,
u'email': u'[email protected]',
u'givenName': u'Benjamin',
u'middleName': None,
u'mitId': u'921344431',
u'nickName': u'Benjamin',
u'personId': 10710616,
u'sortableName': u'Franklin, Benjamin',
u'surname': u'Franklin',
u'year': None
},
],
u'COURSE_PROF': [
{
u'accountEmail': u'[email protected]',
u'displayName': u'Donald Duck',
u'editable': False,
u'email': u'[email protected]',
u'givenName': u'Donald',
u'middleName': None,
u'mitId': u'916144889',
u'nickName': u'Donald',
u'personId': 8117160,
u'sortableName': u'Duck, Donald',
u'surname': u'Duck',
u'year': None
},
],
u'COURSE_TA': [
{
u'accountEmail': u'[email protected]',
u'displayName': u'Huey Duck',
u'editable': False,
u'email': u'[email protected]',
u'givenName': u'Huey',
u'middleName': None,
u'mitId': u'920445024',
u'nickName': u'Huey',
u'personId': 1299059,
u'sortableName': u'Duck, Huey',
u'surname': u'Duck',
u'year': None
},
]
},
} | https://github.com/mitodl/PyLmod/blob/b798b86c33d1eb615e7cd4f3457b5c15da1d86e0/pylmod/gradebook.py#L1139-L1232 |
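A short sketch of the simple staff listing; it assumes gb is an already authenticated GradeBook instance, and 1293808 is a placeholder gradebook id.

for member in gb.get_staff(1293808, simple=True):
    print(member['accountEmail'], member['displayName'], member['role'])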
mitodl/PyLmod | pylmod/membership.py | Membership.get_group | def get_group(self, uuid=None):
"""Get group data based on uuid.
Args:
uuid (str): optional uuid. defaults to self.uuid
Raises:
PyLmodUnexpectedData: No data was returned.
requests.RequestException: Exception connection error
Returns:
dict: group json
"""
if uuid is None:
uuid = self.uuid
group_data = self.get('group', params={'uuid': uuid})
return group_data | python | def get_group(self, uuid=None):
"""Get group data based on uuid.
Args:
uuid (str): optional uuid. defaults to self.uuid
Raises:
PyLmodUnexpectedData: No data was returned.
requests.RequestException: Exception connection error
Returns:
dict: group json
"""
if uuid is None:
uuid = self.uuid
group_data = self.get('group', params={'uuid': uuid})
return group_data | Get group data based on uuid.
Args:
uuid (str): optional uuid. defaults to self.uuid
Raises:
PyLmodUnexpectedData: No data was returned.
requests.RequestException: Exception connection error
Returns:
dict: group json | https://github.com/mitodl/PyLmod/blob/b798b86c33d1eb615e7cd4f3457b5c15da1d86e0/pylmod/membership.py#L32-L49 |
mitodl/PyLmod | pylmod/membership.py | Membership.get_group_id | def get_group_id(self, uuid=None):
"""Get group id based on uuid.
Args:
uuid (str): optional uuid. defaults to self.uuid
Raises:
PyLmodUnexpectedData: No group data was returned.
requests.RequestException: Exception connection error
Returns:
int: numeric group id
"""
group_data = self.get_group(uuid)
try:
return group_data['response']['docs'][0]['id']
except (KeyError, IndexError):
failure_message = ('Error in get_group response data - '
'got {0}'.format(group_data))
log.exception(failure_message)
raise PyLmodUnexpectedData(failure_message) | python | def get_group_id(self, uuid=None):
"""Get group id based on uuid.
Args:
uuid (str): optional uuid. defaults to self.uuid
Raises:
PyLmodUnexpectedData: No group data was returned.
requests.RequestException: Exception connection error
Returns:
int: numeric group id
"""
group_data = self.get_group(uuid)
try:
return group_data['response']['docs'][0]['id']
except (KeyError, IndexError):
failure_message = ('Error in get_group response data - '
'got {0}'.format(group_data))
log.exception(failure_message)
raise PyLmodUnexpectedData(failure_message) | Get group id based on uuid.
Args:
uuid (str): optional uuid. defaults to self.uuid
Raises:
PyLmodUnexpectedData: No group data was returned.
requests.RequestException: Exception connection error
Returns:
int: numeric group id | https://github.com/mitodl/PyLmod/blob/b798b86c33d1eb615e7cd4f3457b5c15da1d86e0/pylmod/membership.py#L51-L72 |
mitodl/PyLmod | pylmod/membership.py | Membership.get_membership | def get_membership(self, uuid=None):
"""Get membership data based on uuid.
Args:
uuid (str): optional uuid. defaults to self.uuid
Raises:
PyLmodUnexpectedData: No data was returned.
requests.RequestException: Exception connection error
Returns:
dict: membership json
"""
group_id = self.get_group_id(uuid=uuid)
uri = 'group/{group_id}/member'
mbr_data = self.get(uri.format(group_id=group_id), params=None)
return mbr_data | python | def get_membership(self, uuid=None):
"""Get membership data based on uuid.
Args:
uuid (str): optional uuid. defaults to self.uuid
Raises:
PyLmodUnexpectedData: No data was returned.
requests.RequestException: Exception connection error
Returns:
dict: membership json
"""
group_id = self.get_group_id(uuid=uuid)
uri = 'group/{group_id}/member'
mbr_data = self.get(uri.format(group_id=group_id), params=None)
return mbr_data | Get membership data based on uuid.
Args:
uuid (str): optional uuid. defaults to self.uuid
Raises:
PyLmodUnexpectedData: No data was returned.
requests.RequestException: Exception connection error
Returns:
dict: membership json | https://github.com/mitodl/PyLmod/blob/b798b86c33d1eb615e7cd4f3457b5c15da1d86e0/pylmod/membership.py#L74-L91 |
mitodl/PyLmod | pylmod/membership.py | Membership.email_has_role | def email_has_role(self, email, role_name, uuid=None):
"""Determine if an email is associated with a role.
Args:
email (str): user email
role_name (str): user role
uuid (str): optional uuid. defaults to self.uuid
Raises:
PyLmodUnexpectedData: Unexpected data was returned.
requests.RequestException: Exception connection error
Returns:
bool: True or False if email has role_name
"""
mbr_data = self.get_membership(uuid=uuid)
docs = []
try:
docs = mbr_data['response']['docs']
except KeyError:
failure_message = ('KeyError in membership data - '
'got {0}'.format(mbr_data))
log.exception(failure_message)
raise PyLmodUnexpectedData(failure_message)
if len(docs) == 0:
return False
has_role = any(
(x.get('email') == email and x.get('roleType') == role_name)
for x in docs
)
if has_role:
return True
return False | python | def email_has_role(self, email, role_name, uuid=None):
"""Determine if an email is associated with a role.
Args:
email (str): user email
role_name (str): user role
uuid (str): optional uuid. defaults to self.uuid
Raises:
PyLmodUnexpectedData: Unexpected data was returned.
requests.RequestException: Exception connection error
Returns:
bool: True or False if email has role_name
"""
mbr_data = self.get_membership(uuid=uuid)
docs = []
try:
docs = mbr_data['response']['docs']
except KeyError:
failure_message = ('KeyError in membership data - '
'got {0}'.format(mbr_data))
log.exception(failure_message)
raise PyLmodUnexpectedData(failure_message)
if len(docs) == 0:
return False
has_role = any(
(x.get('email') == email and x.get('roleType') == role_name)
for x in docs
)
if has_role:
return True
return False | Determine if an email is associated with a role.
Args:
email (str): user email
role_name (str): user role
uuid (str): optional uuid. defaults to self.uuid
Raises:
PyLmodUnexpectedData: Unexpected data was returned.
requests.RequestException: Exception connection error
Returns:
bool: True or False if email has role_name | https://github.com/mitodl/PyLmod/blob/b798b86c33d1eb615e7cd4f3457b5c15da1d86e0/pylmod/membership.py#L93-L126 |
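A short sketch of a role check; it assumes mbr is an already constructed Membership client, and the email, role name, and uuid are placeholders.

if mbr.email_has_role('[email protected]', 'TA', uuid='/project/mitxdemosite'):
    print('benfranklin holds the TA role')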
mitodl/PyLmod | pylmod/membership.py | Membership.get_course_id | def get_course_id(self, course_uuid):
"""Get course id based on uuid.
Args:
course_uuid (str): course uuid, i.e. /project/mitxdemosite
Raises:
PyLmodUnexpectedData: No course data was returned.
requests.RequestException: Exception connection error
Returns:
int: numeric course id
"""
course_data = self.get(
'courseguide/course?uuid={uuid}'.format(
uuid=course_uuid or self.course_id
),
params=None
)
try:
return course_data['response']['docs'][0]['id']
except KeyError:
failure_message = ('KeyError in get_course_id - '
'got {0}'.format(course_data))
log.exception(failure_message)
raise PyLmodUnexpectedData(failure_message)
except TypeError:
failure_message = ('TypeError in get_course_id - '
'got {0}'.format(course_data))
log.exception(failure_message)
raise PyLmodUnexpectedData(failure_message) | python | def get_course_id(self, course_uuid):
"""Get course id based on uuid.
Args:
course_uuid (str): course uuid, i.e. /project/mitxdemosite
Raises:
PyLmodUnexpectedData: No course data was returned.
requests.RequestException: Exception connection error
Returns:
int: numeric course id
"""
course_data = self.get(
'courseguide/course?uuid={uuid}'.format(
uuid=course_uuid or self.course_id
),
params=None
)
try:
return course_data['response']['docs'][0]['id']
except KeyError:
failure_message = ('KeyError in get_course_id - '
'got {0}'.format(course_data))
log.exception(failure_message)
raise PyLmodUnexpectedData(failure_message)
except TypeError:
failure_message = ('TypeError in get_course_id - '
'got {0}'.format(course_data))
log.exception(failure_message)
raise PyLmodUnexpectedData(failure_message) | Get course id based on uuid.
Args:
course_uuid (str): course uuid, i.e. /project/mitxdemosite
Raises:
PyLmodUnexpectedData: No course data was returned.
requests.RequestException: Exception connection error
Returns:
int: numeric course id | https://github.com/mitodl/PyLmod/blob/b798b86c33d1eb615e7cd4f3457b5c15da1d86e0/pylmod/membership.py#L128-L159 |
mitodl/PyLmod | pylmod/membership.py | Membership.get_course_guide_staff | def get_course_guide_staff(self, course_id=''):
"""Get the staff roster for a course.
Get a list of staff members for a given course,
specified by a course id.
Args:
course_id (int): unique identifier for course, i.e. ``2314``
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
list: list of dictionaries containing staff data
An example return value is:
.. code-block:: python
[
{
u'displayName': u'Huey Duck',
u'role': u'TA',
u'sortableDisplayName': u'Duck, Huey'
},
{
u'displayName': u'Louie Duck',
u'role': u'CourseAdmin',
u'sortableDisplayName': u'Duck, Louie'
},
{
u'displayName': u'Benjamin Franklin',
u'role': u'CourseAdmin',
u'sortableDisplayName': u'Franklin, Benjamin'
},
{
u'displayName': u'George Washington',
u'role': u'Instructor',
u'sortableDisplayName': u'Washington, George'
},
]
"""
staff_data = self.get(
'courseguide/course/{courseId}/staff'.format(
courseId=course_id or self.course_id
),
params=None
)
return staff_data['response']['docs'] | python | def get_course_guide_staff(self, course_id=''):
"""Get the staff roster for a course.
Get a list of staff members for a given course,
specified by a course id.
Args:
course_id (int): unique identifier for course, i.e. ``2314``
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
list: list of dictionaries containing staff data
An example return value is:
.. code-block:: python
[
{
u'displayName': u'Huey Duck',
u'role': u'TA',
u'sortableDisplayName': u'Duck, Huey'
},
{
u'displayName': u'Louie Duck',
u'role': u'CourseAdmin',
u'sortableDisplayName': u'Duck, Louie'
},
{
u'displayName': u'Benjamin Franklin',
u'role': u'CourseAdmin',
u'sortableDisplayName': u'Franklin, Benjamin'
},
{
u'displayName': u'George Washington',
u'role': u'Instructor',
u'sortableDisplayName': u'Washington, George'
},
]
"""
staff_data = self.get(
'courseguide/course/{courseId}/staff'.format(
courseId=course_id or self.course_id
),
params=None
)
return staff_data['response']['docs'] | Get the staff roster for a course.
Get a list of staff members for a given course,
specified by a course id.
Args:
course_id (int): unique identifier for course, i.e. ``2314``
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
list: list of dictionaries containing staff data
An example return value is:
.. code-block:: python
[
{
u'displayName': u'Huey Duck',
u'role': u'TA',
u'sortableDisplayName': u'Duck, Huey'
},
{
u'displayName': u'Louie Duck',
u'role': u'CourseAdmin',
u'sortableDisplayName': u'Duck, Louie'
},
{
u'displayName': u'Benjamin Franklin',
u'role': u'CourseAdmin',
u'sortableDisplayName': u'Franklin, Benjamin'
},
{
u'displayName': u'George Washington',
u'role': u'Instructor',
u'sortableDisplayName': u'Washington, George'
},
] | https://github.com/mitodl/PyLmod/blob/b798b86c33d1eb615e7cd4f3457b5c15da1d86e0/pylmod/membership.py#L161-L210 |
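A short sketch that chains get_course_id and get_course_guide_staff; it assumes mbr is an already constructed Membership client, and the course uuid is a placeholder.

course_id = mbr.get_course_id('/project/mitxdemosite')
for person in mbr.get_course_guide_staff(course_id=course_id):
    print(person['sortableDisplayName'], '-', person['role'])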
mitodl/PyLmod | pylmod/base.py | Base._data_to_json | def _data_to_json(data):
"""Convert to json if it isn't already a string.
Args:
data (str): data to convert to json
"""
if type(data) not in [str, unicode]:
data = json.dumps(data)
return data | python | def _data_to_json(data):
"""Convert to json if it isn't already a string.
Args:
data (str): data to convert to json
"""
if type(data) not in [str, unicode]:
data = json.dumps(data)
return data | Convert to json if it isn't already a string.
Args:
data (str): data to convert to json | https://github.com/mitodl/PyLmod/blob/b798b86c33d1eb615e7cd4f3457b5c15da1d86e0/pylmod/base.py#L69-L77 |
mitodl/PyLmod | pylmod/base.py | Base._url_format | def _url_format(self, service):
"""Generate URL from urlbase and service.
Args:
service (str): The endpoint service to use, i.e. gradebook
Returns:
str: URL to where the request should be made
"""
base_service_url = '{base}{service}'.format(
base=self.urlbase,
service=service
)
return base_service_url | python | def _url_format(self, service):
"""Generate URL from urlbase and service.
Args:
service (str): The endpoint service to use, i.e. gradebook
Returns:
str: URL to where the request should be made
"""
base_service_url = '{base}{service}'.format(
base=self.urlbase,
service=service
)
return base_service_url | Generate URL from urlbase and service.
Args:
service (str): The endpoint service to use, i.e. gradebook
Returns:
str: URL to where the request should be made | https://github.com/mitodl/PyLmod/blob/b798b86c33d1eb615e7cd4f3457b5c15da1d86e0/pylmod/base.py#L79-L91 |
mitodl/PyLmod | pylmod/base.py | Base.rest_action | def rest_action(self, func, url, **kwargs):
"""Routine to do low-level REST operation, with retry.
Args:
func (callable): API function to call
url (str): service URL endpoint
kwargs (dict): additional parameters
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
list: the json-encoded content of the response
"""
try:
response = func(url, timeout=self.TIMEOUT, **kwargs)
except requests.RequestException, err:
log.exception(
"[PyLmod] Error - connection error in "
"rest_action, err=%s", err
)
raise err
try:
return response.json()
except ValueError, err:
log.exception('Unable to decode %s', response.content)
raise err | python | def rest_action(self, func, url, **kwargs):
"""Routine to do low-level REST operation, with retry.
Args:
func (callable): API function to call
url (str): service URL endpoint
kwargs (dict): additional parameters
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
list: the json-encoded content of the response
"""
try:
response = func(url, timeout=self.TIMEOUT, **kwargs)
except requests.RequestException, err:
log.exception(
"[PyLmod] Error - connection error in "
"rest_action, err=%s", err
)
raise err
try:
return response.json()
except ValueError, err:
log.exception('Unable to decode %s', response.content)
raise err | Routine to do low-level REST operation, with retry.
Args:
func (callable): API function to call
url (str): service URL endpoint
kwargs (dict): additional parameters
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
list: the json-encoded content of the response | https://github.com/mitodl/PyLmod/blob/b798b86c33d1eb615e7cd4f3457b5c15da1d86e0/pylmod/base.py#L93-L120 |
mitodl/PyLmod | pylmod/base.py | Base.get | def get(self, service, params=None):
"""Generic GET operation for retrieving data from Learning Modules API.
.. code-block:: python
gbk.get('students/{gradebookId}', params=params, gradebookId=gbid)
Args:
service (str): The endpoint service to use, i.e. gradebook
params (dict): additional parameters to add to the call
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
list: the json-encoded content of the response
"""
url = self._url_format(service)
if params is None:
params = {}
return self.rest_action(self._session.get, url, params=params) | python | def get(self, service, params=None):
"""Generic GET operation for retrieving data from Learning Modules API.
.. code-block:: python
gbk.get('students/{gradebookId}'.format(gradebookId=gbid), params=params)
Args:
service (str): The endpoint service to use, i.e. gradebook
params (dict): additional parameters to add to the call
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
list: the json-encoded content of the response
"""
url = self._url_format(service)
if params is None:
params = {}
return self.rest_action(self._session.get, url, params=params) | Generic GET operation for retrieving data from Learning Modules API.
.. code-block:: python
gbk.get('students/{gradebookId}'.format(gradebookId=gbid), params=params)
Args:
service (str): The endpoint service to use, i.e. gradebook
params (dict): additional parameters to add to the call
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
list: the json-encoded content of the response | https://github.com/mitodl/PyLmod/blob/b798b86c33d1eb615e7cd4f3457b5c15da1d86e0/pylmod/base.py#L122-L143 |
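A sketch of how higher-level calls funnel through this generic get; client is assumed to be an authenticated instance of a Base subclass such as GradeBook, and the service path, id, and query parameters are placeholders.

gradebook_id = 1293808                        # placeholder id
data = client.get(
    'students/{gradebookId}'.format(gradebookId=gradebook_id),
    params={'includePhoto': 'false'},         # placeholder query parameters
)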
mitodl/PyLmod | pylmod/base.py | Base.post | def post(self, service, data):
"""Generic POST operation for sending data to Learning Modules API.
Data should be a JSON string or a dict. If it is not a string,
it is turned into a JSON string for the POST body.
Args:
service (str): The endpoint service to use, i.e. gradebook
data (json or dict): the data payload
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
list: the json-encoded content of the response
"""
url = self._url_format(service)
data = Base._data_to_json(data)
# Add content-type for body in POST.
headers = {'content-type': 'application/json'}
return self.rest_action(self._session.post, url,
data=data, headers=headers) | python | def post(self, service, data):
"""Generic POST operation for sending data to Learning Modules API.
Data should be a JSON string or a dict. If it is not a string,
it is turned into a JSON string for the POST body.
Args:
service (str): The endpoint service to use, i.e. gradebook
data (json or dict): the data payload
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
list: the json-encoded content of the response
"""
url = self._url_format(service)
data = Base._data_to_json(data)
# Add content-type for body in POST.
headers = {'content-type': 'application/json'}
return self.rest_action(self._session.post, url,
data=data, headers=headers) | Generic POST operation for sending data to Learning Modules API.
Data should be a JSON string or a dict. If it is not a string,
it is turned into a JSON string for the POST body.
Args:
service (str): The endpoint service to use, i.e. gradebook
data (json or dict): the data payload
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
list: the json-encoded content of the response | https://github.com/mitodl/PyLmod/blob/b798b86c33d1eb615e7cd4f3457b5c15da1d86e0/pylmod/base.py#L145-L167 |
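Because post serializes a dict automatically, callers can pass plain Python data; the endpoint and payload below are placeholders rather than documented Gradebook routes, and client is assumed to be an authenticated Base subclass instance.

payload = {'name': 'midterm1', 'shortName': 'mid1', 'maxPointsTotal': 100.0}
# _data_to_json turns the dict into a JSON string before the request is sent.
result = client.post('assignment/{gradebookId}'.format(gradebookId=1293808), payload)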
mitodl/PyLmod | pylmod/base.py | Base.delete | def delete(self, service):
"""Generic DELETE operation for Learning Modules API.
Args:
service (str): The endpoint service to use, i.e. gradebook
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
list: the json-encoded content of the response
"""
url = self._url_format(service)
return self.rest_action(
self._session.delete, url
) | python | def delete(self, service):
"""Generic DELETE operation for Learning Modules API.
Args:
service (str): The endpoint service to use, i.e. gradebook
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
list: the json-encoded content of the response
"""
url = self._url_format(service)
return self.rest_action(
self._session.delete, url
) | Generic DELETE operation for Learning Modules API.
Args:
service (str): The endpoint service to use, i.e. gradebook
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
list: the json-encoded content of the response | https://github.com/mitodl/PyLmod/blob/b798b86c33d1eb615e7cd4f3457b5c15da1d86e0/pylmod/base.py#L169-L185 |
googlearchive/firebase-token-generator-python | firebase_token_generator.py | create_token | def create_token(secret, data, options=None):
"""
Generates a secure authentication token.
Our token format follows the JSON Web Token (JWT) standard:
header.claims.signature
Where:
1) "header" is a stringified, base64-encoded JSON object containing version and algorithm information.
2) "claims" is a stringified, base64-encoded JSON object containing a set of claims:
Library-generated claims:
"iat" -> The issued at time in seconds since the epoch as a number
"d" -> The arbitrary JSON object supplied by the user.
User-supplied claims (these are all optional):
"exp" (optional) -> The expiration time of this token, as a number of seconds since the epoch.
"nbf" (optional) -> The "not before" time before which the token should be rejected (seconds since the epoch)
"admin" (optional) -> If set to true, this client will bypass all security rules (use this to authenticate servers)
"debug" (optional) -> "set to true to make this client receive debug information about security rule execution.
"simulate" (optional, internal-only for now) -> Set to true to neuter all API operations (listens / puts
will run security rules but not actually write or return data).
3) A signature that proves the validity of this token (see: http://tools.ietf.org/html/draft-ietf-jose-json-web-signature-07)
For base64-encoding we use URL-safe base64 encoding. This ensures that the entire token is URL-safe
and could, for instance, be placed as a query argument without any encoding (and this is what the JWT spec requires).
Args:
secret - the Firebase Application secret
data - a json serializable object of data to be included in the token
options - An optional dictionary of additional claims for the token. Possible keys include:
a) "expires" -- A datetime or timestamp (as a number of seconds since the epoch) denoting a time after
which this token should no longer be valid.
b) "notBefore" -- A datetime or timestamp (as a number of seconds since the epoch) denoting a time before
which this token should be rejected by the server.
c) "admin" -- Set to true to bypass all security rules (use this for your trusted servers).
d) "debug" -- Set to true to enable debug mode (so you can see the results of Rules API operations)
e) "simulate" -- (internal-only for now) Set to true to neuter all API operations (listens / puts
will run security rules but not actually write or return data)
Returns:
A signed Firebase Authentication Token
Raises:
ValueError: if an invalid key is specified in options
"""
if not isinstance(secret, basestring):
raise ValueError("firebase_token_generator.create_token: secret must be a string.")
if not options and not data:
raise ValueError("firebase_token_generator.create_token: data is empty and no options are set. This token will have no effect on Firebase.");
if not options:
options = {}
is_admin_token = ('admin' in options and options['admin'] == True)
_validate_data(data, is_admin_token)
claims = _create_options_claims(options)
claims['v'] = TOKEN_VERSION
claims['iat'] = int(time.time())
claims['d'] = data
token = _encode_token(secret, claims)
if len(token) > 1024:
raise RuntimeError("firebase_token_generator.create_token: generated token is too long.")
return token | python | def create_token(secret, data, options=None):
"""
Generates a secure authentication token.
Our token format follows the JSON Web Token (JWT) standard:
header.claims.signature
Where:
1) "header" is a stringified, base64-encoded JSON object containing version and algorithm information.
2) "claims" is a stringified, base64-encoded JSON object containing a set of claims:
Library-generated claims:
"iat" -> The issued at time in seconds since the epoch as a number
"d" -> The arbitrary JSON object supplied by the user.
User-supplied claims (these are all optional):
"exp" (optional) -> The expiration time of this token, as a number of seconds since the epoch.
"nbf" (optional) -> The "not before" time before which the token should be rejected (seconds since the epoch)
"admin" (optional) -> If set to true, this client will bypass all security rules (use this to authenticate servers)
"debug" (optional) -> "set to true to make this client receive debug information about security rule execution.
"simulate" (optional, internal-only for now) -> Set to true to neuter all API operations (listens / puts
will run security rules but not actually write or return data).
3) A signature that proves the validity of this token (see: http://tools.ietf.org/html/draft-ietf-jose-json-web-signature-07)
For base64-encoding we use URL-safe base64 encoding. This ensures that the entire token is URL-safe
and could, for instance, be placed as a query argument without any encoding (and this is what the JWT spec requires).
Args:
secret - the Firebase Application secret
data - a json serializable object of data to be included in the token
options - An optional dictionary of additional claims for the token. Possible keys include:
a) "expires" -- A datetime or timestamp (as a number of seconds since the epoch) denoting a time after
which this token should no longer be valid.
b) "notBefore" -- A datetime or timestamp (as a number of seconds since the epoch) denoting a time before
which this token should be rejected by the server.
c) "admin" -- Set to true to bypass all security rules (use this for your trusted servers).
d) "debug" -- Set to true to enable debug mode (so you can see the results of Rules API operations)
e) "simulate" -- (internal-only for now) Set to true to neuter all API operations (listens / puts
will run security rules but not actually write or return data)
Returns:
A signed Firebase Authentication Token
Raises:
ValueError: if an invalid key is specified in options
"""
if not isinstance(secret, basestring):
raise ValueError("firebase_token_generator.create_token: secret must be a string.")
if not options and not data:
raise ValueError("firebase_token_generator.create_token: data is empty and no options are set. This token will have no effect on Firebase.");
if not options:
options = {}
is_admin_token = ('admin' in options and options['admin'] == True)
_validate_data(data, is_admin_token)
claims = _create_options_claims(options)
claims['v'] = TOKEN_VERSION
claims['iat'] = int(time.time())
claims['d'] = data
token = _encode_token(secret, claims)
if len(token) > 1024:
raise RuntimeError("firebase_token_generator.create_token: generated token is too long.")
return token | Generates a secure authentication token.
Our token format follows the JSON Web Token (JWT) standard:
header.claims.signature
Where:
1) "header" is a stringified, base64-encoded JSON object containing version and algorithm information.
2) "claims" is a stringified, base64-encoded JSON object containing a set of claims:
Library-generated claims:
"iat" -> The issued at time in seconds since the epoch as a number
"d" -> The arbitrary JSON object supplied by the user.
User-supplied claims (these are all optional):
"exp" (optional) -> The expiration time of this token, as a number of seconds since the epoch.
"nbf" (optional) -> The "not before" time before which the token should be rejected (seconds since the epoch)
"admin" (optional) -> If set to true, this client will bypass all security rules (use this to authenticate servers)
"debug" (optional) -> "set to true to make this client receive debug information about security rule execution.
"simulate" (optional, internal-only for now) -> Set to true to neuter all API operations (listens / puts
will run security rules but not actually write or return data).
3) A signature that proves the validity of this token (see: http://tools.ietf.org/html/draft-ietf-jose-json-web-signature-07)
For base64-encoding we use URL-safe base64 encoding. This ensures that the entire token is URL-safe
and could, for instance, be placed as a query argument without any encoding (and this is what the JWT spec requires).
Args:
secret - the Firebase Application secret
data - a json serializable object of data to be included in the token
options - An optional dictionary of additional claims for the token. Possible keys include:
a) "expires" -- A datetime or timestamp (as a number of seconds since the epoch) denoting a time after
which this token should no longer be valid.
b) "notBefore" -- A datetime or timestamp (as a number of seconds since the epoch) denoting a time before
which this token should be rejected by the server.
c) "admin" -- Set to true to bypass all security rules (use this for your trusted servers).
d) "debug" -- Set to true to enable debug mode (so you can see the results of Rules API operations)
e) "simulate" -- (internal-only for now) Set to true to neuter all API operations (listens / puts
will run security rules but not actually write or return data)
Returns:
A signed Firebase Authentication Token
Raises:
ValueError: if an invalid key is specified in options | https://github.com/googlearchive/firebase-token-generator-python/blob/cb8a67d25f4a464cd4f37f076046d17912621c09/firebase_token_generator.py#L31-L90 |
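A minimal sketch of generating a token; the secret, payload, and expiry are placeholders (a 'uid' entry is the conventional identifier key in Firebase auth payloads).

from firebase_token_generator import create_token

auth_payload = {'uid': '1', 'displayName': 'Ben'}
options = {'admin': False, 'expires': 1893456000}    # expiry as seconds since the epoch
token = create_token('<YOUR_FIREBASE_SECRET>', auth_payload, options)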
datalib/libextract | libextract/core.py | parse_html | def parse_html(fileobj, encoding):
"""
Given a file object *fileobj*, get an ElementTree instance.
The *encoding* is assumed to be utf8.
"""
parser = HTMLParser(encoding=encoding, remove_blank_text=True)
return parse(fileobj, parser) | python | def parse_html(fileobj, encoding):
"""
Given a file object *fileobj*, get an ElementTree instance.
The *encoding* is assumed to be utf8.
"""
parser = HTMLParser(encoding=encoding, remove_blank_text=True)
return parse(fileobj, parser) | Given a file object *fileobj*, get an ElementTree instance.
The *encoding* is assumed to be utf8. | https://github.com/datalib/libextract/blob/9cf9d55c7f8cd622eab0a50f009385f0a39b1200/libextract/core.py#L20-L26 |
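A minimal sketch: wrap raw HTML bytes in a file-like object and parse them; the import path is inferred from this file's location in the repository.

from io import BytesIO
from libextract.core import parse_html

etree = parse_html(BytesIO(b'<html><body><p>hi</p></body></html>'), encoding='utf-8')
print(etree.getroot().tag)    # 'html'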
biocommons/biocommons.seqrepo | biocommons/seqrepo/fastadir/fastadir.py | FastaDir.fetch | def fetch(self, seq_id, start=None, end=None):
"""fetch sequence by seq_id, optionally with start, end bounds
"""
rec = self._db.execute("""select * from seqinfo where seq_id = ? order by added desc""", [seq_id]).fetchone()
if rec is None:
raise KeyError(seq_id)
if self._writing and self._writing["relpath"] == rec["relpath"]:
logger.warning("""Fetching from file opened for writing;
closing first ({})""".format(rec["relpath"]))
self.commit()
path = os.path.join(self._root_dir, rec["relpath"])
fabgz = self._open_for_reading(path)
return fabgz.fetch(seq_id, start, end) | python | def fetch(self, seq_id, start=None, end=None):
"""fetch sequence by seq_id, optionally with start, end bounds
"""
rec = self._db.execute("""select * from seqinfo where seq_id = ? order by added desc""", [seq_id]).fetchone()
if rec is None:
raise KeyError(seq_id)
if self._writing and self._writing["relpath"] == rec["relpath"]:
logger.warning("""Fetching from file opened for writing;
closing first ({})""".format(rec["relpath"]))
self.commit()
path = os.path.join(self._root_dir, rec["relpath"])
fabgz = self._open_for_reading(path)
return fabgz.fetch(seq_id, start, end) | fetch sequence by seq_id, optionally with start, end bounds | https://github.com/biocommons/biocommons.seqrepo/blob/fb6d88682cb73ee6971cfa47d4dcd90a9c649167/biocommons/seqrepo/fastadir/fastadir.py#L102-L118 |
biocommons/biocommons.seqrepo | biocommons/seqrepo/fastadir/fastadir.py | FastaDir.store | def store(self, seq_id, seq):
"""store a sequence with key seq_id. The sequence itself is stored in
a fasta file and a reference to it in the sqlite3 database.
"""
if not self._writeable:
raise RuntimeError("Cannot write -- opened read-only")
# open a file for writing if necessary
# path: <root_dir>/<reldir>/<basename>
# <---- relpath ---->
# <------ dir_ ----->
# <----------- path ----------->
if self._writing is None:
reldir = datetime.datetime.utcnow().strftime("%Y/%m%d/%H%M")
basename = str(time.time()) + ".fa.bgz"
relpath = os.path.join(reldir, basename)
dir_ = os.path.join(self._root_dir, reldir)
path = os.path.join(self._root_dir, reldir, basename)
makedirs(dir_, exist_ok=True)
fabgz = FabgzWriter(path)
self._writing = {"relpath": relpath, "fabgz": fabgz}
logger.info("Opened for writing: " + path)
self._writing["fabgz"].store(seq_id, seq)
alpha = "".join(sorted(set(seq)))
self._db.execute("""insert into seqinfo (seq_id, len, alpha, relpath)
values (?, ?, ?,?)""", (seq_id, len(seq), alpha, self._writing["relpath"]))
return seq_id | python | def store(self, seq_id, seq):
"""store a sequence with key seq_id. The sequence itself is stored in
a fasta file and a reference to it in the sqlite3 database.
"""
if not self._writeable:
raise RuntimeError("Cannot write -- opened read-only")
# open a file for writing if necessary
# path: <root_dir>/<reldir>/<basename>
# <---- relpath ---->
# <------ dir_ ----->
# <----------- path ----------->
if self._writing is None:
reldir = datetime.datetime.utcnow().strftime("%Y/%m%d/%H%M")
basename = str(time.time()) + ".fa.bgz"
relpath = os.path.join(reldir, basename)
dir_ = os.path.join(self._root_dir, reldir)
path = os.path.join(self._root_dir, reldir, basename)
makedirs(dir_, exist_ok=True)
fabgz = FabgzWriter(path)
self._writing = {"relpath": relpath, "fabgz": fabgz}
logger.info("Opened for writing: " + path)
self._writing["fabgz"].store(seq_id, seq)
alpha = "".join(sorted(set(seq)))
self._db.execute("""insert into seqinfo (seq_id, len, alpha, relpath)
values (?, ?, ?,?)""", (seq_id, len(seq), alpha, self._writing["relpath"]))
return seq_id | store a sequence with key seq_id. The sequence itself is stored in
a fasta file and a reference to it in the sqlite3 database. | https://github.com/biocommons/biocommons.seqrepo/blob/fb6d88682cb73ee6971cfa47d4dcd90a9c649167/biocommons/seqrepo/fastadir/fastadir.py#L135-L165 |
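A round-trip sketch for store and fetch; the constructor arguments (a root directory plus a writeable flag) are assumptions, since the class constructor is not part of this excerpt, and the id and sequence are placeholders.

fd = FastaDir('/tmp/seqrepo-demo', writeable=True)    # assumed constructor signature
fd.store('demo-seq-1', 'ACGTACGTACGT')
fd.commit()                                           # close the bgzip file opened for writing
print(fd.fetch('demo-seq-1', start=0, end=4))         # ACGT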
jmcarp/sqlalchemy-postgres-copy | postgres_copy/__init__.py | copy_to | def copy_to(source, dest, engine_or_conn, **flags):
"""Export a query or select to a file. For flags, see the PostgreSQL
documentation at http://www.postgresql.org/docs/9.5/static/sql-copy.html.
Examples: ::
select = MyTable.select()
with open('/path/to/file.tsv', 'w') as fp:
copy_to(select, fp, conn)
query = session.query(MyModel)
with open('/path/to/file/csv', 'w') as fp:
copy_to(query, fp, engine, format='csv', null='.')
:param source: SQLAlchemy query or select
:param dest: Destination file pointer, in write mode
:param engine_or_conn: SQLAlchemy engine, connection, or raw_connection
:param **flags: Options passed through to COPY
If an existing connection is passed to `engine_or_conn`, it is the caller's
responsibility to commit and close.
"""
dialect = postgresql.dialect()
statement = getattr(source, 'statement', source)
compiled = statement.compile(dialect=dialect)
conn, autoclose = raw_connection_from(engine_or_conn)
cursor = conn.cursor()
query = cursor.mogrify(compiled.string, compiled.params).decode()
formatted_flags = '({})'.format(format_flags(flags)) if flags else ''
copy = 'COPY ({}) TO STDOUT {}'.format(query, formatted_flags)
cursor.copy_expert(copy, dest)
if autoclose:
conn.close() | python | def copy_to(source, dest, engine_or_conn, **flags):
"""Export a query or select to a file. For flags, see the PostgreSQL
documentation at http://www.postgresql.org/docs/9.5/static/sql-copy.html.
Examples: ::
select = MyTable.select()
with open('/path/to/file.tsv', 'w') as fp:
copy_to(select, fp, conn)
query = session.query(MyModel)
with open('/path/to/file/csv', 'w') as fp:
copy_to(query, fp, engine, format='csv', null='.')
:param source: SQLAlchemy query or select
:param dest: Destination file pointer, in write mode
:param engine_or_conn: SQLAlchemy engine, connection, or raw_connection
:param **flags: Options passed through to COPY
If an existing connection is passed to `engine_or_conn`, it is the caller's
responsibility to commit and close.
"""
dialect = postgresql.dialect()
statement = getattr(source, 'statement', source)
compiled = statement.compile(dialect=dialect)
conn, autoclose = raw_connection_from(engine_or_conn)
cursor = conn.cursor()
query = cursor.mogrify(compiled.string, compiled.params).decode()
formatted_flags = '({})'.format(format_flags(flags)) if flags else ''
copy = 'COPY ({}) TO STDOUT {}'.format(query, formatted_flags)
cursor.copy_expert(copy, dest)
if autoclose:
conn.close() | Export a query or select to a file. For flags, see the PostgreSQL
documentation at http://www.postgresql.org/docs/9.5/static/sql-copy.html.
Examples: ::
select = MyTable.select()
with open('/path/to/file.tsv', 'w') as fp:
copy_to(select, fp, conn)
query = session.query(MyModel)
with open('/path/to/file/csv', 'w') as fp:
copy_to(query, fp, engine, format='csv', null='.')
:param source: SQLAlchemy query or select
:param dest: Destination file pointer, in write mode
:param engine_or_conn: SQLAlchemy engine, connection, or raw_connection
:param **flags: Options passed through to COPY
If an existing connection is passed to `engine_or_conn`, it is the caller's
responsibility to commit and close. | https://github.com/jmcarp/sqlalchemy-postgres-copy/blob/01ef522e8e46a6961e227069d465b0cb93e42383/postgres_copy/__init__.py#L10-L41 |
jmcarp/sqlalchemy-postgres-copy | postgres_copy/__init__.py | copy_from | def copy_from(source, dest, engine_or_conn, columns=(), **flags):
"""Import a table from a file. For flags, see the PostgreSQL documentation
at http://www.postgresql.org/docs/9.5/static/sql-copy.html.
Examples: ::
with open('/path/to/file.tsv') as fp:
copy_from(fp, MyTable, conn)
with open('/path/to/file.csv') as fp:
copy_from(fp, MyModel, engine, format='csv')
:param source: Source file pointer, in read mode
:param dest: SQLAlchemy model or table
:param engine_or_conn: SQLAlchemy engine, connection, or raw_connection
:param columns: Optional tuple of columns
:param **flags: Options passed through to COPY
If an existing connection is passed to `engine_or_conn`, it is the caller's
responsibility to commit and close.
The `columns` flag can be set to a tuple of strings to specify the column
order. Passing `header` alone will not handle out of order columns, it simply tells
postgres to ignore the first line of `source`.
"""
tbl = dest.__table__ if is_model(dest) else dest
conn, autoclose = raw_connection_from(engine_or_conn)
cursor = conn.cursor()
relation = '.'.join('"{}"'.format(part) for part in (tbl.schema, tbl.name) if part)
formatted_columns = '({})'.format(','.join(columns)) if columns else ''
formatted_flags = '({})'.format(format_flags(flags)) if flags else ''
copy = 'COPY {} {} FROM STDIN {}'.format(
relation,
formatted_columns,
formatted_flags,
)
cursor.copy_expert(copy, source)
if autoclose:
conn.commit()
conn.close() | python | def copy_from(source, dest, engine_or_conn, columns=(), **flags):
"""Import a table from a file. For flags, see the PostgreSQL documentation
at http://www.postgresql.org/docs/9.5/static/sql-copy.html.
Examples: ::
with open('/path/to/file.tsv') as fp:
copy_from(fp, MyTable, conn)
with open('/path/to/file.csv') as fp:
copy_from(fp, MyModel, engine, format='csv')
:param source: Source file pointer, in read mode
:param dest: SQLAlchemy model or table
:param engine_or_conn: SQLAlchemy engine, connection, or raw_connection
:param columns: Optional tuple of columns
:param **flags: Options passed through to COPY
If an existing connection is passed to `engine_or_conn`, it is the caller's
responsibility to commit and close.
The `columns` flag can be set to a tuple of strings to specify the column
order. Passing `header` alone will not handle out of order columns, it simply tells
postgres to ignore the first line of `source`.
"""
tbl = dest.__table__ if is_model(dest) else dest
conn, autoclose = raw_connection_from(engine_or_conn)
cursor = conn.cursor()
relation = '.'.join('"{}"'.format(part) for part in (tbl.schema, tbl.name) if part)
formatted_columns = '({})'.format(','.join(columns)) if columns else ''
formatted_flags = '({})'.format(format_flags(flags)) if flags else ''
copy = 'COPY {} {} FROM STDIN {}'.format(
relation,
formatted_columns,
formatted_flags,
)
cursor.copy_expert(copy, source)
if autoclose:
conn.commit()
conn.close() | Import a table from a file. For flags, see the PostgreSQL documentation
at http://www.postgresql.org/docs/9.5/static/sql-copy.html.
Examples: ::
with open('/path/to/file.tsv') as fp:
copy_from(fp, MyTable, conn)
with open('/path/to/file.csv') as fp:
copy_from(fp, MyModel, engine, format='csv')
:param source: Source file pointer, in read mode
:param dest: SQLAlchemy model or table
:param engine_or_conn: SQLAlchemy engine, connection, or raw_connection
:param columns: Optional tuple of columns
:param **flags: Options passed through to COPY
If an existing connection is passed to `engine_or_conn`, it is the caller's
responsibility to commit and close.
The `columns` flag can be set to a tuple of strings to specify the column
order. Passing `header` alone will not handle out of order columns, it simply tells
postgres to ignore the first line of `source`. | https://github.com/jmcarp/sqlalchemy-postgres-copy/blob/01ef522e8e46a6961e227069d465b0cb93e42383/postgres_copy/__init__.py#L43-L81 |
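A sketch of an out-of-order CSV import using the columns tuple; User (a mapped model or Table), engine, the file path, and the column names are placeholders.

with open('/path/to/users.csv') as fp:
    copy_from(
        fp, User, engine,
        columns=('email', 'name'),    # CSV columns arrive in this order
        format='csv', header=True,
    )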
jmcarp/sqlalchemy-postgres-copy | postgres_copy/__init__.py | raw_connection_from | def raw_connection_from(engine_or_conn):
"""Extract a raw_connection and determine if it should be automatically closed.
Only connections opened by this package will be closed automatically.
"""
if hasattr(engine_or_conn, 'cursor'):
return engine_or_conn, False
if hasattr(engine_or_conn, 'connection'):
return engine_or_conn.connection, False
return engine_or_conn.raw_connection(), True | python | def raw_connection_from(engine_or_conn):
"""Extract a raw_connection and determine if it should be automatically closed.
Only connections opened by this package will be closed automatically.
"""
if hasattr(engine_or_conn, 'cursor'):
return engine_or_conn, False
if hasattr(engine_or_conn, 'connection'):
return engine_or_conn.connection, False
return engine_or_conn.raw_connection(), True | Extract a raw_connection and determine if it should be automatically closed.
Only connections opened by this package will be closed automatically. | https://github.com/jmcarp/sqlalchemy-postgres-copy/blob/01ef522e8e46a6961e227069d465b0cb93e42383/postgres_copy/__init__.py#L83-L92 |
biocommons/biocommons.seqrepo | biocommons/seqrepo/py2compat/_makedirs.py | makedirs | def makedirs(name, mode=0o777, exist_ok=False):
"""cheapo replacement for py3 makedirs with support for exist_ok
"""
if os.path.exists(name):
if not exist_ok:
raise FileExistsError("File exists: " + name)
else:
os.makedirs(name, mode) | python | def makedirs(name, mode=0o777, exist_ok=False):
"""cheapo replacement for py3 makedirs with support for exist_ok
"""
if os.path.exists(name):
if not exist_ok:
raise FileExistsError("File exists: " + name)
else:
os.makedirs(name, mode) | cheapo replacement for py3 makedirs with support for exist_ok | https://github.com/biocommons/biocommons.seqrepo/blob/fb6d88682cb73ee6971cfa47d4dcd90a9c649167/biocommons/seqrepo/py2compat/_makedirs.py#L10-L19 |
biocommons/biocommons.seqrepo | misc/threading-verification.py | fetch_in_thread | def fetch_in_thread(sr, nsa):
"""fetch a sequence in a thread
"""
def fetch_seq(q, nsa):
pid, ppid = os.getpid(), os.getppid()
q.put((pid, ppid, sr[nsa]))
q = Queue()
p = Process(target=fetch_seq, args=(q, nsa))
p.start()
pid, ppid, seq = q.get()
p.join()
assert pid != ppid, "sequence was not fetched from thread"
return pid, ppid, seq | python | def fetch_in_thread(sr, nsa):
"""fetch a sequence in a thread
"""
def fetch_seq(q, nsa):
pid, ppid = os.getpid(), os.getppid()
q.put((pid, ppid, sr[nsa]))
q = Queue()
p = Process(target=fetch_seq, args=(q, nsa))
p.start()
pid, ppid, seq = q.get()
p.join()
assert pid != ppid, "sequence was not fetched from thread"
return pid, ppid, seq | fetch a sequence in a separate process | https://github.com/biocommons/biocommons.seqrepo/blob/fb6d88682cb73ee6971cfa47d4dcd90a9c649167/misc/threading-verification.py#L32-L48
junaruga/rpm-py-installer | install.py | Application.run | def run(self):
"""Run install process."""
try:
self.linux.verify_system_status()
except InstallSkipError:
Log.info('Install skipped.')
return
work_dir = tempfile.mkdtemp(suffix='-rpm-py-installer')
Log.info("Created working directory '{0}'".format(work_dir))
with Cmd.pushd(work_dir):
self.rpm_py.download_and_install()
if not self.python.is_python_binding_installed():
message = (
'RPM Python binding failed to install '
'with unknown reason.'
)
raise InstallError(message)
# TODO: Print installed module name and version as INFO.
if self.is_work_dir_removed:
shutil.rmtree(work_dir)
Log.info("Removed working directory '{0}'".format(work_dir))
else:
Log.info("Saved working directory '{0}'".format(work_dir)) | python | def run(self):
"""Run install process."""
try:
self.linux.verify_system_status()
except InstallSkipError:
Log.info('Install skipped.')
return
work_dir = tempfile.mkdtemp(suffix='-rpm-py-installer')
Log.info("Created working directory '{0}'".format(work_dir))
with Cmd.pushd(work_dir):
self.rpm_py.download_and_install()
if not self.python.is_python_binding_installed():
message = (
'RPM Python binding failed to install '
'with unknown reason.'
)
raise InstallError(message)
# TODO: Print installed module name and version as INFO.
if self.is_work_dir_removed:
shutil.rmtree(work_dir)
Log.info("Removed working directory '{0}'".format(work_dir))
else:
Log.info("Saved working directory '{0}'".format(work_dir)) | Run install process. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L28-L54 |
junaruga/rpm-py-installer | install.py | RpmPy.download_and_install | def download_and_install(self):
"""Download and install RPM Python binding."""
if self.is_installed_from_bin:
try:
self.installer.install_from_rpm_py_package()
return
except RpmPyPackageNotFoundError as e:
Log.warn('RPM Py Package not found. reason: {0}'.format(e))
# Pass to try to install from the source.
pass
# Download and install from the source.
top_dir_name = self.downloader.download_and_expand()
rpm_py_dir = os.path.join(top_dir_name, 'python')
setup_py_in_found = False
with Cmd.pushd(rpm_py_dir):
if self.installer.setup_py.exists_in_path():
setup_py_in_found = True
self.installer.run()
if not setup_py_in_found:
self.installer.install_from_rpm_py_package() | python | def download_and_install(self):
"""Download and install RPM Python binding."""
if self.is_installed_from_bin:
try:
self.installer.install_from_rpm_py_package()
return
except RpmPyPackageNotFoundError as e:
Log.warn('RPM Py Package not found. reason: {0}'.format(e))
# Pass to try to install from the source.
pass
# Download and install from the source.
top_dir_name = self.downloader.download_and_expand()
rpm_py_dir = os.path.join(top_dir_name, 'python')
setup_py_in_found = False
with Cmd.pushd(rpm_py_dir):
if self.installer.setup_py.exists_in_path():
setup_py_in_found = True
self.installer.run()
if not setup_py_in_found:
self.installer.install_from_rpm_py_package() | Download and install RPM Python binding. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L161-L184 |
junaruga/rpm-py-installer | install.py | RpmPyVersion.git_branch | def git_branch(self):
"""Git branch name."""
info = self.info
return 'rpm-{major}.{minor}.x'.format(
major=info[0], minor=info[1]) | python | def git_branch(self):
"""Git branch name."""
info = self.info
return 'rpm-{major}.{minor}.x'.format(
major=info[0], minor=info[1]) | Git branch name. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L215-L219 |
junaruga/rpm-py-installer | install.py | SetupPy.add_patchs_to_build_without_pkg_config | def add_patchs_to_build_without_pkg_config(self, lib_dir, include_dir):
"""Add patches to remove pkg-config command and rpm.pc part.
Replace with given library_path: lib_dir and include_path: include_dir
without rpm.pc file.
"""
additional_patches = [
{
'src': r"pkgconfig\('--libs-only-L'\)",
'dest': "['{0}']".format(lib_dir),
},
# Considering -libs-only-l and -libs-only-L
# https://github.com/rpm-software-management/rpm/pull/327
{
'src': r"pkgconfig\('--libs(-only-l)?'\)",
'dest': "['rpm', 'rpmio']",
'required': True,
},
{
'src': r"pkgconfig\('--cflags'\)",
'dest': "['{0}']".format(include_dir),
'required': True,
},
]
self.patches.extend(additional_patches) | python | def add_patchs_to_build_without_pkg_config(self, lib_dir, include_dir):
"""Add patches to remove pkg-config command and rpm.pc part.
Replace with given library_path: lib_dir and include_path: include_dir
without rpm.pc file.
"""
additional_patches = [
{
'src': r"pkgconfig\('--libs-only-L'\)",
'dest': "['{0}']".format(lib_dir),
},
# Considering -libs-only-l and -libs-only-L
# https://github.com/rpm-software-management/rpm/pull/327
{
'src': r"pkgconfig\('--libs(-only-l)?'\)",
'dest': "['rpm', 'rpmio']",
'required': True,
},
{
'src': r"pkgconfig\('--cflags'\)",
'dest': "['{0}']".format(include_dir),
'required': True,
},
]
self.patches.extend(additional_patches) | Add patches to remove pkg-config command and rpm.pc part.
Replace with given library_path: lib_dir and include_path: include_dir
without rpm.pc file. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L323-L347 |
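The patch entries above are plain regex substitutions that apply_and_save later runs over setup.py.in; a standalone illustration of what the --cflags patch does to one line (the sample line and the include path are assumptions):

```python
import re

# Assumed include directory and a made-up setup.py.in line, for illustration only.
include_dir = '/usr/include/rpm'
patch = {'src': r"pkgconfig\('--cflags'\)",
         'dest': "['{0}']".format(include_dir),
         'required': True}

line = "include_dirs = pkgconfig('--cflags')"
new_line, n = re.subn(re.compile(patch['src'], re.MULTILINE), patch['dest'], line)
print(new_line)  # include_dirs = ['/usr/include/rpm']
print(n)         # 1 -> the 'required' patch counts as applied
```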
junaruga/rpm-py-installer | install.py | SetupPy.apply_and_save | def apply_and_save(self):
"""Apply replaced words and patches, and save setup.py file."""
patches = self.patches
content = None
with open(self.IN_PATH) as f_in:
# As setup.py.in file size is 2.4 KByte.
# it's fine to read entire content.
content = f_in.read()
# Replace words.
for key in self.replaced_word_dict:
content = content.replace(key, self.replaced_word_dict[key])
# Apply patches.
out_patches = []
for patch in patches:
pattern = re.compile(patch['src'], re.MULTILINE)
(content, subs_num) = re.subn(pattern, patch['dest'],
content)
if subs_num > 0:
patch['applied'] = True
out_patches.append(patch)
for patch in out_patches:
if patch.get('required') and not patch.get('applied'):
Log.warn('Patch not applied {0}'.format(patch['src']))
with open(self.OUT_PATH, 'w') as f_out:
f_out.write(content)
        self.patches = out_patches
# Release content data to make it released by GC quickly.
content = None | python | def apply_and_save(self):
"""Apply replaced words and patches, and save setup.py file."""
patches = self.patches
content = None
with open(self.IN_PATH) as f_in:
# As setup.py.in file size is 2.4 KByte.
# it's fine to read entire content.
content = f_in.read()
# Replace words.
for key in self.replaced_word_dict:
content = content.replace(key, self.replaced_word_dict[key])
# Apply patches.
out_patches = []
for patch in patches:
pattern = re.compile(patch['src'], re.MULTILINE)
(content, subs_num) = re.subn(pattern, patch['dest'],
content)
if subs_num > 0:
patch['applied'] = True
out_patches.append(patch)
for patch in out_patches:
if patch.get('required') and not patch.get('applied'):
Log.warn('Patch not applied {0}'.format(patch['src']))
with open(self.OUT_PATH, 'w') as f_out:
f_out.write(content)
        self.patches = out_patches
# Release content data to make it released by GC quickly.
content = None | Apply replaced words and patches, and save setup.py file. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L349-L382 |
junaruga/rpm-py-installer | install.py | Downloader.download_and_expand | def download_and_expand(self):
"""Download and expand RPM Python binding."""
top_dir_name = None
if self.git_branch:
# Download a source by git clone.
top_dir_name = self._download_and_expand_by_git()
else:
            # Download a source from the archive URL.
# Downloading the compressed archive is better than "git clone",
# because it is faster.
# If download failed due to URL not found, try "git clone".
try:
top_dir_name = self._download_and_expand_from_archive_url()
except RemoteFileNotFoundError:
Log.info('Try to download by git clone.')
top_dir_name = self._download_and_expand_by_git()
return top_dir_name | python | def download_and_expand(self):
"""Download and expand RPM Python binding."""
top_dir_name = None
if self.git_branch:
# Download a source by git clone.
top_dir_name = self._download_and_expand_by_git()
else:
            # Download a source from the archive URL.
# Downloading the compressed archive is better than "git clone",
# because it is faster.
# If download failed due to URL not found, try "git clone".
try:
top_dir_name = self._download_and_expand_from_archive_url()
except RemoteFileNotFoundError:
Log.info('Try to download by git clone.')
top_dir_name = self._download_and_expand_by_git()
return top_dir_name | Download and expand RPM Python binding. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L412-L428 |
junaruga/rpm-py-installer | install.py | Installer.run | def run(self):
"""Run install main logic."""
self._make_lib_file_symbolic_links()
self._copy_each_include_files_to_include_dir()
self._make_dep_lib_file_sym_links_and_copy_include_files()
self.setup_py.add_patchs_to_build_without_pkg_config(
self.rpm.lib_dir, self.rpm.include_dir
)
self.setup_py.apply_and_save()
self._build_and_install() | python | def run(self):
"""Run install main logic."""
self._make_lib_file_symbolic_links()
self._copy_each_include_files_to_include_dir()
self._make_dep_lib_file_sym_links_and_copy_include_files()
self.setup_py.add_patchs_to_build_without_pkg_config(
self.rpm.lib_dir, self.rpm.include_dir
)
self.setup_py.apply_and_save()
self._build_and_install() | Run install main logic. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L610-L619 |
junaruga/rpm-py-installer | install.py | Installer._make_lib_file_symbolic_links | def _make_lib_file_symbolic_links(self):
"""Make symbolic links for lib files.
Make symbolic links from system library files or downloaded lib files
to downloaded source library files.
For example, case: Fedora x86_64
Make symbolic links
from
a. /usr/lib64/librpmio.so* (one of them)
b. /usr/lib64/librpm.so* (one of them)
c. If rpm-build-libs package is installed,
/usr/lib64/librpmbuild.so* (one of them)
otherwise, downloaded and extracted rpm-build-libs.
./usr/lib64/librpmbuild.so* (one of them)
    d. If rpm-build-libs package is installed,
/usr/lib64/librpmsign.so* (one of them)
otherwise, downloaded and extracted rpm-build-libs.
./usr/lib64/librpmsign.so* (one of them)
to
a. rpm/rpmio/.libs/librpmio.so
b. rpm/lib/.libs/librpm.so
c. rpm/build/.libs/librpmbuild.so
d. rpm/sign/.libs/librpmsign.so
.
This is a status after running "make" on actual rpm build process.
"""
so_file_dict = {
'rpmio': {
'sym_src_dir': self.rpm.lib_dir,
'sym_dst_dir': 'rpmio/.libs',
'require': True,
},
'rpm': {
'sym_src_dir': self.rpm.lib_dir,
'sym_dst_dir': 'lib/.libs',
'require': True,
},
'rpmbuild': {
'sym_src_dir': self.rpm.lib_dir,
'sym_dst_dir': 'build/.libs',
'require': True,
},
'rpmsign': {
'sym_src_dir': self.rpm.lib_dir,
'sym_dst_dir': 'sign/.libs',
},
}
self._update_sym_src_dirs_conditionally(so_file_dict)
for name in so_file_dict:
so_dict = so_file_dict[name]
pattern = 'lib{0}.so*'.format(name)
so_files = Cmd.find(so_dict['sym_src_dir'], pattern)
if not so_files:
is_required = so_dict.get('require', False)
if not is_required:
message_format = (
"Skip creating symbolic link of "
"not existing so file '{0}'"
)
Log.debug(message_format.format(name))
continue
message = 'so file pattern {0} not found at {1}'.format(
pattern, so_dict['sym_src_dir']
)
raise InstallError(message)
sym_dst_dir = os.path.abspath('../{0}'.format(
so_dict['sym_dst_dir']))
if not os.path.isdir(sym_dst_dir):
Cmd.mkdir_p(sym_dst_dir)
cmd = 'ln -sf {0} {1}/lib{2}.so'.format(so_files[0],
sym_dst_dir,
name)
Cmd.sh_e(cmd) | python | def _make_lib_file_symbolic_links(self):
"""Make symbolic links for lib files.
Make symbolic links from system library files or downloaded lib files
to downloaded source library files.
For example, case: Fedora x86_64
Make symbolic links
from
a. /usr/lib64/librpmio.so* (one of them)
b. /usr/lib64/librpm.so* (one of them)
c. If rpm-build-libs package is installed,
/usr/lib64/librpmbuild.so* (one of them)
otherwise, downloaded and extracted rpm-build-libs.
./usr/lib64/librpmbuild.so* (one of them)
    d. If rpm-build-libs package is installed,
/usr/lib64/librpmsign.so* (one of them)
otherwise, downloaded and extracted rpm-build-libs.
./usr/lib64/librpmsign.so* (one of them)
to
a. rpm/rpmio/.libs/librpmio.so
b. rpm/lib/.libs/librpm.so
c. rpm/build/.libs/librpmbuild.so
d. rpm/sign/.libs/librpmsign.so
.
This is a status after running "make" on actual rpm build process.
"""
so_file_dict = {
'rpmio': {
'sym_src_dir': self.rpm.lib_dir,
'sym_dst_dir': 'rpmio/.libs',
'require': True,
},
'rpm': {
'sym_src_dir': self.rpm.lib_dir,
'sym_dst_dir': 'lib/.libs',
'require': True,
},
'rpmbuild': {
'sym_src_dir': self.rpm.lib_dir,
'sym_dst_dir': 'build/.libs',
'require': True,
},
'rpmsign': {
'sym_src_dir': self.rpm.lib_dir,
'sym_dst_dir': 'sign/.libs',
},
}
self._update_sym_src_dirs_conditionally(so_file_dict)
for name in so_file_dict:
so_dict = so_file_dict[name]
pattern = 'lib{0}.so*'.format(name)
so_files = Cmd.find(so_dict['sym_src_dir'], pattern)
if not so_files:
is_required = so_dict.get('require', False)
if not is_required:
message_format = (
"Skip creating symbolic link of "
"not existing so file '{0}'"
)
Log.debug(message_format.format(name))
continue
message = 'so file pattern {0} not found at {1}'.format(
pattern, so_dict['sym_src_dir']
)
raise InstallError(message)
sym_dst_dir = os.path.abspath('../{0}'.format(
so_dict['sym_dst_dir']))
if not os.path.isdir(sym_dst_dir):
Cmd.mkdir_p(sym_dst_dir)
cmd = 'ln -sf {0} {1}/lib{2}.so'.format(so_files[0],
sym_dst_dir,
name)
Cmd.sh_e(cmd) | Make symbolic links for lib files.
Make symbolic links from system library files or downloaded lib files
to downloaded source library files.
For example, case: Fedora x86_64
Make symbolic links
from
a. /usr/lib64/librpmio.so* (one of them)
b. /usr/lib64/librpm.so* (one of them)
c. If rpm-build-libs package is installed,
/usr/lib64/librpmbuild.so* (one of them)
otherwise, downloaded and extracted rpm-build-libs.
./usr/lib64/librpmbuild.so* (one of them)
d. If rpm-build-libs package is installed,
/usr/lib64/librpmsign.so* (one of them)
otherwise, downloaded and extracted rpm-build-libs.
./usr/lib64/librpmsign.so* (one of them)
to
a. rpm/rpmio/.libs/librpmio.so
b. rpm/lib/.libs/librpm.so
c. rpm/build/.libs/librpmbuild.so
d. rpm/sign/.libs/librpmsign.so
.
This is a status after running "make" on actual rpm build process. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L629-L706 |
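A minimal sketch of the per-library step described in the docstring above, done in plain Python instead of the installer's `ln -sf` shell-out; the library directory and destination path are assumptions for a Fedora x86_64 layout:

```python
import fnmatch
import os

lib_dir = '/usr/lib64'                         # assumed system lib dir (self.rpm.lib_dir)
matches = []
for root, _, files in os.walk(lib_dir, followlinks=False):
    for f in fnmatch.filter(files, 'librpmio.so*'):
        path = os.path.join(root, f)
        if not os.path.islink(path):           # keep real files only, like Cmd.find
            matches.append(path)
matches.sort()
if not matches:
    raise RuntimeError('librpmio.so* not found under ' + lib_dir)

dst_dir = os.path.abspath('../rpmio/.libs')    # mirrors rpm/rpmio/.libs in the source tree
if not os.path.isdir(dst_dir):
    os.makedirs(dst_dir)
link = os.path.join(dst_dir, 'librpmio.so')
if os.path.lexists(link):
    os.remove(link)
os.symlink(matches[0], link)                   # "ln -sf" equivalent
```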
junaruga/rpm-py-installer | install.py | Installer._copy_each_include_files_to_include_dir | def _copy_each_include_files_to_include_dir(self):
"""Copy include header files for each directory to include directory.
Copy include header files
from
rpm/
rpmio/*.h
lib/*.h
build/*.h
sign/*.h
to
rpm/
include/
rpm/*.h
.
This is a status after running "make" on actual rpm build process.
"""
src_header_dirs = [
'rpmio',
'lib',
'build',
'sign',
]
with Cmd.pushd('..'):
src_include_dir = os.path.abspath('./include')
for header_dir in src_header_dirs:
if not os.path.isdir(header_dir):
message_format = "Skip not existing header directory '{0}'"
Log.debug(message_format.format(header_dir))
continue
header_files = Cmd.find(header_dir, '*.h')
for header_file in header_files:
pattern = '^{0}/'.format(header_dir)
(dst_header_file, subs_num) = re.subn(pattern,
'', header_file)
if subs_num == 0:
message = 'Failed to replace header_file: {0}'.format(
header_file)
raise ValueError(message)
dst_header_file = os.path.abspath(
os.path.join(src_include_dir, 'rpm', dst_header_file)
)
dst_dir = os.path.dirname(dst_header_file)
if not os.path.isdir(dst_dir):
Cmd.mkdir_p(dst_dir)
shutil.copyfile(header_file, dst_header_file) | python | def _copy_each_include_files_to_include_dir(self):
"""Copy include header files for each directory to include directory.
Copy include header files
from
rpm/
rpmio/*.h
lib/*.h
build/*.h
sign/*.h
to
rpm/
include/
rpm/*.h
.
This is a status after running "make" on actual rpm build process.
"""
src_header_dirs = [
'rpmio',
'lib',
'build',
'sign',
]
with Cmd.pushd('..'):
src_include_dir = os.path.abspath('./include')
for header_dir in src_header_dirs:
if not os.path.isdir(header_dir):
message_format = "Skip not existing header directory '{0}'"
Log.debug(message_format.format(header_dir))
continue
header_files = Cmd.find(header_dir, '*.h')
for header_file in header_files:
pattern = '^{0}/'.format(header_dir)
(dst_header_file, subs_num) = re.subn(pattern,
'', header_file)
if subs_num == 0:
message = 'Failed to replace header_file: {0}'.format(
header_file)
raise ValueError(message)
dst_header_file = os.path.abspath(
os.path.join(src_include_dir, 'rpm', dst_header_file)
)
dst_dir = os.path.dirname(dst_header_file)
if not os.path.isdir(dst_dir):
Cmd.mkdir_p(dst_dir)
shutil.copyfile(header_file, dst_header_file) | Copy include header files for each directory to include directory.
Copy include header files
from
rpm/
rpmio/*.h
lib/*.h
build/*.h
sign/*.h
to
rpm/
include/
rpm/*.h
.
This is a status after running "make" on actual rpm build process. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L711-L756 |
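A tiny illustration of the path rewrite in the loop above: the source subdirectory prefix is stripped and the header is re-rooted under include/rpm/ (the example file name is an assumption):

```python
import os
import re

header_dir = 'rpmio'
header_file = 'rpmio/rpmmacro.h'                       # assumed example header
dst, n = re.subn('^{0}/'.format(header_dir), '', header_file)
assert n == 1
dst = os.path.join('include', 'rpm', dst)
print(dst)                                             # include/rpm/rpmmacro.h
```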
junaruga/rpm-py-installer | install.py | Installer._make_dep_lib_file_sym_links_and_copy_include_files | def _make_dep_lib_file_sym_links_and_copy_include_files(self):
"""Make symbolick links for lib files and copy include files.
Do below steps for a dependency packages.
Dependency packages
- popt-devel
Steps
1. Make symbolic links from system library files or downloaded lib
files to downloaded source library files.
2. Copy include header files to include directory.
"""
if not self._rpm_py_has_popt_devel_dep():
message = (
'The RPM Python binding does not have popt-devel dependency'
)
Log.debug(message)
return
if self._is_popt_devel_installed():
message = '{0} package is installed.'.format(
self.pacakge_popt_devel_name)
Log.debug(message)
return
if not self._is_package_downloadable():
message = '''
Install a {0} download plugin or
install the {0} package [{1}].
'''.format(self.package_sys_name, self.pacakge_popt_devel_name)
raise InstallError(message)
if not self._is_popt_installed():
message = '''
Required {0} not installed: [{1}],
Install the {0} package.
'''.format(self.package_sys_name, self.pacakge_popt_name)
raise InstallError(message)
self._download_and_extract_popt_devel()
# Copy libpopt.so to rpm_root/lib/.libs/.
popt_lib_dirs = [
self.rpm.lib_dir,
# /lib64/libpopt.so* installed at popt-1.13-7.el6.x86_64.
'/lib64',
# /lib/*/libpopt.so* installed at libpopt0-1.16-8ubuntu1
'/lib',
]
pattern = 'libpopt.so*'
popt_so_file = None
for popt_lib_dir in popt_lib_dirs:
so_files = Cmd.find(popt_lib_dir, pattern)
if so_files:
popt_so_file = so_files[0]
break
if not popt_so_file:
message = 'so file pattern {0} not found at {1}'.format(
pattern, str(popt_lib_dirs)
)
raise InstallError(message)
cmd = 'ln -sf {0} ../lib/.libs/libpopt.so'.format(
popt_so_file)
Cmd.sh_e(cmd)
# Copy popt.h to rpm_root/include
shutil.copy('./usr/include/popt.h', '../include') | python | def _make_dep_lib_file_sym_links_and_copy_include_files(self):
"""Make symbolick links for lib files and copy include files.
Do below steps for a dependency packages.
Dependency packages
- popt-devel
Steps
1. Make symbolic links from system library files or downloaded lib
files to downloaded source library files.
2. Copy include header files to include directory.
"""
if not self._rpm_py_has_popt_devel_dep():
message = (
'The RPM Python binding does not have popt-devel dependency'
)
Log.debug(message)
return
if self._is_popt_devel_installed():
message = '{0} package is installed.'.format(
self.pacakge_popt_devel_name)
Log.debug(message)
return
if not self._is_package_downloadable():
message = '''
Install a {0} download plugin or
install the {0} package [{1}].
'''.format(self.package_sys_name, self.pacakge_popt_devel_name)
raise InstallError(message)
if not self._is_popt_installed():
message = '''
Required {0} not installed: [{1}],
Install the {0} package.
'''.format(self.package_sys_name, self.pacakge_popt_name)
raise InstallError(message)
self._download_and_extract_popt_devel()
# Copy libpopt.so to rpm_root/lib/.libs/.
popt_lib_dirs = [
self.rpm.lib_dir,
# /lib64/libpopt.so* installed at popt-1.13-7.el6.x86_64.
'/lib64',
# /lib/*/libpopt.so* installed at libpopt0-1.16-8ubuntu1
'/lib',
]
pattern = 'libpopt.so*'
popt_so_file = None
for popt_lib_dir in popt_lib_dirs:
so_files = Cmd.find(popt_lib_dir, pattern)
if so_files:
popt_so_file = so_files[0]
break
if not popt_so_file:
message = 'so file pattern {0} not found at {1}'.format(
pattern, str(popt_lib_dirs)
)
raise InstallError(message)
cmd = 'ln -sf {0} ../lib/.libs/libpopt.so'.format(
popt_so_file)
Cmd.sh_e(cmd)
# Copy popt.h to rpm_root/include
        shutil.copy('./usr/include/popt.h', '../include') | Make symbolic links for lib files and copy include files.
Do below steps for a dependency packages.
Dependency packages
- popt-devel
Steps
1. Make symbolic links from system library files or downloaded lib
files to downloaded source library files.
2. Copy include header files to include directory. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L758-L824 |
junaruga/rpm-py-installer | install.py | Installer._rpm_py_has_popt_devel_dep | def _rpm_py_has_popt_devel_dep(self):
"""Check if the RPM Python binding has a depndency to popt-devel.
Search include header files in the source code to check it.
"""
found = False
with open('../include/rpm/rpmlib.h') as f_in:
for line in f_in:
if re.match(r'^#include .*popt.h.*$', line):
found = True
break
return found | python | def _rpm_py_has_popt_devel_dep(self):
"""Check if the RPM Python binding has a depndency to popt-devel.
Search include header files in the source code to check it.
"""
found = False
with open('../include/rpm/rpmlib.h') as f_in:
for line in f_in:
if re.match(r'^#include .*popt.h.*$', line):
found = True
break
        return found | Check if the RPM Python binding has a dependency on popt-devel.
Search include header files in the source code to check it. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L833-L844 |
junaruga/rpm-py-installer | install.py | FedoraInstaller.run | def run(self):
"""Run install main logic."""
try:
if not self._is_rpm_all_lib_include_files_installed():
self._make_lib_file_symbolic_links()
self._copy_each_include_files_to_include_dir()
self._make_dep_lib_file_sym_links_and_copy_include_files()
self.setup_py.add_patchs_to_build_without_pkg_config(
self.rpm.lib_dir, self.rpm.include_dir
)
self.setup_py.apply_and_save()
self._build_and_install()
except InstallError as e:
if not self._is_rpm_all_lib_include_files_installed():
org_message = str(e)
message = '''
Install failed without the rpm-devel package for the reason below.
Can you install the RPM package and run this installer again?
'''
message += org_message
raise InstallError(message)
else:
raise e | python | def run(self):
"""Run install main logic."""
try:
if not self._is_rpm_all_lib_include_files_installed():
self._make_lib_file_symbolic_links()
self._copy_each_include_files_to_include_dir()
self._make_dep_lib_file_sym_links_and_copy_include_files()
self.setup_py.add_patchs_to_build_without_pkg_config(
self.rpm.lib_dir, self.rpm.include_dir
)
self.setup_py.apply_and_save()
self._build_and_install()
except InstallError as e:
if not self._is_rpm_all_lib_include_files_installed():
org_message = str(e)
message = '''
Install failed without rpm-devel package by below reason.
Can you install the RPM package, and run this installer again?
'''
message += org_message
raise InstallError(message)
else:
raise e | Run install main logic. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L874-L896 |
junaruga/rpm-py-installer | install.py | FedoraInstaller.install_from_rpm_py_package | def install_from_rpm_py_package(self):
"""Run install from RPM Python binding RPM package."""
self._download_and_extract_rpm_py_package()
# Find ./usr/lib64/pythonN.N/site-packages/rpm directory.
        # A binary built for the same Python version as the running Python is
        # the target, to keep the installation safe.
if self.rpm.has_set_up_py_in():
# If RPM has setup.py.in, this strict check is okay.
# Because we can still install from the source.
py_dir_name = 'python{0}.{1}'.format(
sys.version_info[0], sys.version_info[1])
else:
# If RPM does not have setup.py.in such as CentOS6,
# Only way to install is by different Python's RPM package.
py_dir_name = '*'
python_lib_dir_pattern = os.path.join(
'usr', '*', py_dir_name, 'site-packages')
rpm_dir_pattern = os.path.join(python_lib_dir_pattern, 'rpm')
downloaded_rpm_dirs = glob.glob(rpm_dir_pattern)
if not downloaded_rpm_dirs:
message = 'Directory with a pattern: {0} not found.'.format(
rpm_dir_pattern)
raise RpmPyPackageNotFoundError(message)
src_rpm_dir = downloaded_rpm_dirs[0]
# Remove rpm directory for the possible installed directories.
for rpm_dir in self.python.python_lib_rpm_dirs:
if os.path.isdir(rpm_dir):
Log.debug("Remove existing rpm directory {0}".format(rpm_dir))
shutil.rmtree(rpm_dir)
dst_rpm_dir = self.python.python_lib_rpm_dir
Log.debug("Copy directory from '{0}' to '{1}'".format(
src_rpm_dir, dst_rpm_dir))
shutil.copytree(src_rpm_dir, dst_rpm_dir)
file_name_pattern = 'rpm-*.egg-info'
rpm_egg_info_pattern = os.path.join(
python_lib_dir_pattern, file_name_pattern)
downloaded_rpm_egg_infos = glob.glob(rpm_egg_info_pattern)
if downloaded_rpm_egg_infos:
existing_rpm_egg_info_pattern = os.path.join(
self.python.python_lib_dir, file_name_pattern)
existing_rpm_egg_infos = glob.glob(existing_rpm_egg_info_pattern)
for existing_rpm_egg_info in existing_rpm_egg_infos:
Log.debug("Remove existing rpm egg info file '{0}'".format(
existing_rpm_egg_info))
os.remove(existing_rpm_egg_info)
Log.debug("Copy file from '{0}' to '{1}'".format(
downloaded_rpm_egg_infos[0], self.python.python_lib_dir))
shutil.copy2(downloaded_rpm_egg_infos[0],
self.python.python_lib_dir) | python | def install_from_rpm_py_package(self):
"""Run install from RPM Python binding RPM package."""
self._download_and_extract_rpm_py_package()
# Find ./usr/lib64/pythonN.N/site-packages/rpm directory.
        # A binary built for the same Python version as the running Python is
        # the target, to keep the installation safe.
if self.rpm.has_set_up_py_in():
# If RPM has setup.py.in, this strict check is okay.
# Because we can still install from the source.
py_dir_name = 'python{0}.{1}'.format(
sys.version_info[0], sys.version_info[1])
else:
# If RPM does not have setup.py.in such as CentOS6,
# Only way to install is by different Python's RPM package.
py_dir_name = '*'
python_lib_dir_pattern = os.path.join(
'usr', '*', py_dir_name, 'site-packages')
rpm_dir_pattern = os.path.join(python_lib_dir_pattern, 'rpm')
downloaded_rpm_dirs = glob.glob(rpm_dir_pattern)
if not downloaded_rpm_dirs:
message = 'Directory with a pattern: {0} not found.'.format(
rpm_dir_pattern)
raise RpmPyPackageNotFoundError(message)
src_rpm_dir = downloaded_rpm_dirs[0]
# Remove rpm directory for the possible installed directories.
for rpm_dir in self.python.python_lib_rpm_dirs:
if os.path.isdir(rpm_dir):
Log.debug("Remove existing rpm directory {0}".format(rpm_dir))
shutil.rmtree(rpm_dir)
dst_rpm_dir = self.python.python_lib_rpm_dir
Log.debug("Copy directory from '{0}' to '{1}'".format(
src_rpm_dir, dst_rpm_dir))
shutil.copytree(src_rpm_dir, dst_rpm_dir)
file_name_pattern = 'rpm-*.egg-info'
rpm_egg_info_pattern = os.path.join(
python_lib_dir_pattern, file_name_pattern)
downloaded_rpm_egg_infos = glob.glob(rpm_egg_info_pattern)
if downloaded_rpm_egg_infos:
existing_rpm_egg_info_pattern = os.path.join(
self.python.python_lib_dir, file_name_pattern)
existing_rpm_egg_infos = glob.glob(existing_rpm_egg_info_pattern)
for existing_rpm_egg_info in existing_rpm_egg_infos:
Log.debug("Remove existing rpm egg info file '{0}'".format(
existing_rpm_egg_info))
os.remove(existing_rpm_egg_info)
Log.debug("Copy file from '{0}' to '{1}'".format(
downloaded_rpm_egg_infos[0], self.python.python_lib_dir))
shutil.copy2(downloaded_rpm_egg_infos[0],
self.python.python_lib_dir) | Run install from RPM Python binding RPM package. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L898-L953 |
junaruga/rpm-py-installer | install.py | Linux.get_instance | def get_instance(cls, python, rpm_path, **kwargs):
"""Get OS object."""
linux = None
if Cmd.which('apt-get'):
linux = DebianLinux(python, rpm_path, **kwargs)
else:
linux = FedoraLinux(python, rpm_path, **kwargs)
return linux | python | def get_instance(cls, python, rpm_path, **kwargs):
"""Get OS object."""
linux = None
if Cmd.which('apt-get'):
linux = DebianLinux(python, rpm_path, **kwargs)
else:
linux = FedoraLinux(python, rpm_path, **kwargs)
return linux | Get OS object. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L1175-L1182 |
junaruga/rpm-py-installer | install.py | Linux.verify_system_status | def verify_system_status(self):
"""Verify system status."""
if not sys.platform.startswith('linux'):
raise InstallError('Supported platform is Linux only.')
if self.python.is_system_python():
if self.python.is_python_binding_installed():
message = '''
RPM Python binding already installed on system Python.
Nothing to do.
'''
Log.info(message)
raise InstallSkipError(message)
elif self.sys_installed:
pass
else:
message = '''
RPM Python binding on system Python should be installed manually.
Install the proper RPM package of python{,2,3}-rpm,
or set an environment variable RPM_PY_SYS=true
'''
raise InstallError(message)
if self.rpm.is_system_rpm():
self.verify_package_status() | python | def verify_system_status(self):
"""Verify system status."""
if not sys.platform.startswith('linux'):
raise InstallError('Supported platform is Linux only.')
if self.python.is_system_python():
if self.python.is_python_binding_installed():
message = '''
RPM Python binding already installed on system Python.
Nothing to do.
'''
Log.info(message)
raise InstallSkipError(message)
elif self.sys_installed:
pass
else:
message = '''
RPM Python binding on system Python should be installed manually.
Install the proper RPM package of python{,2,3}-rpm,
or set an environment variable RPM_PY_SYS=true
'''
raise InstallError(message)
if self.rpm.is_system_rpm():
self.verify_package_status() | Verify system status. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L1192-L1216 |
junaruga/rpm-py-installer | install.py | FedoraLinux.verify_package_status | def verify_package_status(self):
"""Verify dependency RPM package status."""
# rpm-libs is required for /usr/lib64/librpm*.so
self.rpm.verify_packages_installed(['rpm-libs'])
# Check RPM so files to build the Python binding.
message_format = '''
RPM: {0} or
RPM download tool (dnf-plugins-core (dnf) or yum-utils (yum)) required.
Install any of those.
'''
if self.rpm.has_composed_rpm_bulid_libs():
if (not self.rpm.is_package_installed('rpm-build-libs')
and not self.rpm.is_downloadable()):
raise InstallError(message_format.format('rpm-build-libs'))
else:
# All the needed so files are included in rpm-libs package.
pass | python | def verify_package_status(self):
"""Verify dependency RPM package status."""
# rpm-libs is required for /usr/lib64/librpm*.so
self.rpm.verify_packages_installed(['rpm-libs'])
# Check RPM so files to build the Python binding.
message_format = '''
RPM: {0} or
RPM download tool (dnf-plugins-core (dnf) or yum-utils (yum)) required.
Install any of those.
'''
if self.rpm.has_composed_rpm_bulid_libs():
if (not self.rpm.is_package_installed('rpm-build-libs')
and not self.rpm.is_downloadable()):
raise InstallError(message_format.format('rpm-build-libs'))
else:
# All the needed so files are included in rpm-libs package.
pass | Verify dependency RPM package status. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L1233-L1250 |
junaruga/rpm-py-installer | install.py | FedoraLinux.create_installer | def create_installer(self, rpm_py_version, **kwargs):
"""Create Installer object."""
return FedoraInstaller(rpm_py_version, self.python, self.rpm, **kwargs) | python | def create_installer(self, rpm_py_version, **kwargs):
"""Create Installer object."""
return FedoraInstaller(rpm_py_version, self.python, self.rpm, **kwargs) | Create Installer object. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L1256-L1258 |
junaruga/rpm-py-installer | install.py | DebianLinux.create_installer | def create_installer(self, rpm_py_version, **kwargs):
"""Create Installer object."""
return DebianInstaller(rpm_py_version, self.python, self.rpm, **kwargs) | python | def create_installer(self, rpm_py_version, **kwargs):
"""Create Installer object."""
return DebianInstaller(rpm_py_version, self.python, self.rpm, **kwargs) | Create Installer object. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L1282-L1284 |
junaruga/rpm-py-installer | install.py | Python.python_lib_rpm_dirs | def python_lib_rpm_dirs(self):
"""Both arch and non-arch site-packages directories."""
libs = [self.python_lib_arch_dir, self.python_lib_non_arch_dir]
def append_rpm(path):
return os.path.join(path, 'rpm')
return map(append_rpm, libs) | python | def python_lib_rpm_dirs(self):
"""Both arch and non-arch site-packages directories."""
libs = [self.python_lib_arch_dir, self.python_lib_non_arch_dir]
def append_rpm(path):
return os.path.join(path, 'rpm')
return map(append_rpm, libs) | Both arch and non-arch site-packages directories. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L1325-L1332 |
junaruga/rpm-py-installer | install.py | Python.is_python_binding_installed | def is_python_binding_installed(self):
"""Check if the Python binding has already installed.
Consider below cases.
- pip command is not installed.
- The installed RPM Python binding does not have information
showed as a result of pip list.
"""
is_installed = False
is_install_error = False
try:
is_installed = self.is_python_binding_installed_on_pip()
except InstallError:
# Consider a case of pip is not installed in old Python (<= 2.6).
is_install_error = True
if not is_installed or is_install_error:
for rpm_dir in self.python_lib_rpm_dirs:
init_py = os.path.join(rpm_dir, '__init__.py')
if os.path.isfile(init_py):
is_installed = True
break
return is_installed | python | def is_python_binding_installed(self):
"""Check if the Python binding has already installed.
Consider below cases.
- pip command is not installed.
- The installed RPM Python binding does not have information
showed as a result of pip list.
"""
is_installed = False
is_install_error = False
try:
is_installed = self.is_python_binding_installed_on_pip()
except InstallError:
# Consider a case of pip is not installed in old Python (<= 2.6).
is_install_error = True
if not is_installed or is_install_error:
for rpm_dir in self.python_lib_rpm_dirs:
init_py = os.path.join(rpm_dir, '__init__.py')
if os.path.isfile(init_py):
is_installed = True
break
        return is_installed | Check if the Python binding has already been installed.
Consider the cases below.
- pip command is not installed.
- The installed RPM Python binding does not have information
shown in the output of pip list. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L1334-L1357 |
junaruga/rpm-py-installer | install.py | Python.is_python_binding_installed_on_pip | def is_python_binding_installed_on_pip(self):
"""Check if the Python binding has already installed."""
pip_version = self._get_pip_version()
Log.debug('Pip version: {0}'.format(pip_version))
pip_major_version = int(pip_version.split('.')[0])
installed = False
# --format is from pip v9.0.0
# https://pip.pypa.io/en/stable/news/
if pip_major_version >= 9:
json_obj = self._get_pip_list_json_obj()
for package in json_obj:
Log.debug('pip list: {0}'.format(package))
if package['name'] in ('rpm-python', 'rpm'):
installed = True
Log.debug('Package installed: {0}, {1}'.format(
package['name'], package['version']))
break
else:
# Implementation for pip old version.
# It will be removed in the future.
lines = self._get_pip_list_lines()
for line in lines:
if re.match('^rpm(-python)? ', line):
installed = True
Log.debug('Package installed.')
break
return installed | python | def is_python_binding_installed_on_pip(self):
"""Check if the Python binding has already installed."""
pip_version = self._get_pip_version()
Log.debug('Pip version: {0}'.format(pip_version))
pip_major_version = int(pip_version.split('.')[0])
installed = False
# --format is from pip v9.0.0
# https://pip.pypa.io/en/stable/news/
if pip_major_version >= 9:
json_obj = self._get_pip_list_json_obj()
for package in json_obj:
Log.debug('pip list: {0}'.format(package))
if package['name'] in ('rpm-python', 'rpm'):
installed = True
Log.debug('Package installed: {0}, {1}'.format(
package['name'], package['version']))
break
else:
# Implementation for pip old version.
# It will be removed in the future.
lines = self._get_pip_list_lines()
for line in lines:
if re.match('^rpm(-python)? ', line):
installed = True
Log.debug('Package installed.')
break
        return installed | Check if the Python binding has already been installed. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L1359-L1388 |
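The pip >= 9 branch above relies on `pip list --format json`; a hedged standalone version of the same check (the exact command built by the private _get_pip_list_json_obj helper is not shown here, so this invocation is an assumption):

```python
import json
import subprocess
import sys

# Assumed invocation; pip >= 9 supports "--format json".
out = subprocess.check_output([sys.executable, '-m', 'pip', 'list', '--format', 'json'])
packages = json.loads(out.decode('utf-8'))
installed = any(p['name'] in ('rpm-python', 'rpm') for p in packages)
print(installed)
```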
junaruga/rpm-py-installer | install.py | Rpm.version | def version(self):
"""RPM vesion string."""
stdout = Cmd.sh_e_out('{0} --version'.format(self.rpm_path))
rpm_version = stdout.split()[2]
return rpm_version | python | def version(self):
"""RPM vesion string."""
stdout = Cmd.sh_e_out('{0} --version'.format(self.rpm_path))
rpm_version = stdout.split()[2]
        return rpm_version | RPM version string. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L1435-L1439 |
junaruga/rpm-py-installer | install.py | Rpm.is_system_rpm | def is_system_rpm(self):
"""Check if the RPM is system RPM."""
sys_rpm_paths = [
'/usr/bin/rpm',
# On CentOS6, system RPM is installed in this directory.
'/bin/rpm',
]
matched = False
for sys_rpm_path in sys_rpm_paths:
if self.rpm_path.startswith(sys_rpm_path):
matched = True
break
return matched | python | def is_system_rpm(self):
"""Check if the RPM is system RPM."""
sys_rpm_paths = [
'/usr/bin/rpm',
# On CentOS6, system RPM is installed in this directory.
'/bin/rpm',
]
matched = False
for sys_rpm_path in sys_rpm_paths:
if self.rpm_path.startswith(sys_rpm_path):
matched = True
break
return matched | Check if the RPM is system RPM. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L1447-L1459 |
junaruga/rpm-py-installer | install.py | Rpm.is_package_installed | def is_package_installed(self, package_name):
"""Check if the RPM package is installed."""
if not package_name:
raise ValueError('package_name required.')
installed = True
try:
Cmd.sh_e('{0} --query {1} --quiet'.format(self.rpm_path,
package_name))
except InstallError:
installed = False
return installed | python | def is_package_installed(self, package_name):
"""Check if the RPM package is installed."""
if not package_name:
raise ValueError('package_name required.')
installed = True
try:
Cmd.sh_e('{0} --query {1} --quiet'.format(self.rpm_path,
package_name))
except InstallError:
installed = False
return installed | Check if the RPM package is installed. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L1465-L1476 |
junaruga/rpm-py-installer | install.py | Rpm.verify_packages_installed | def verify_packages_installed(self, package_names):
"""Check if the RPM packages are installed.
Raise InstallError if any of the packages is not installed.
"""
if not package_names:
raise ValueError('package_names required.')
missing_packages = []
for package_name in package_names:
if not self.is_package_installed(package_name):
missing_packages.append(package_name)
if missing_packages:
comma_packages = ', '.join(missing_packages)
message = '''
Required RPM not installed: [{0}].
Install the RPM package.
'''.format(comma_packages)
raise InstallError(message) | python | def verify_packages_installed(self, package_names):
"""Check if the RPM packages are installed.
Raise InstallError if any of the packages is not installed.
"""
if not package_names:
raise ValueError('package_names required.')
missing_packages = []
for package_name in package_names:
if not self.is_package_installed(package_name):
missing_packages.append(package_name)
if missing_packages:
comma_packages = ', '.join(missing_packages)
message = '''
Required RPM not installed: [{0}].
Install the RPM package.
'''.format(comma_packages)
raise InstallError(message) | Check if the RPM packages are installed.
Raise InstallError if any of the packages is not installed. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L1478-L1497 |
junaruga/rpm-py-installer | install.py | FedoraRpm.lib_dir | def lib_dir(self):
"""Return standard library directory path used by RPM libs.
TODO: Support non-system RPM.
"""
if not self._lib_dir:
rpm_lib_dir = None
cmd = '{0} -ql rpm-libs'.format(self.rpm_path)
out = Cmd.sh_e_out(cmd)
lines = out.split('\n')
for line in lines:
if 'librpm.so' in line:
rpm_lib_dir = os.path.dirname(line)
break
self._lib_dir = rpm_lib_dir
return self._lib_dir | python | def lib_dir(self):
"""Return standard library directory path used by RPM libs.
TODO: Support non-system RPM.
"""
if not self._lib_dir:
rpm_lib_dir = None
cmd = '{0} -ql rpm-libs'.format(self.rpm_path)
out = Cmd.sh_e_out(cmd)
lines = out.split('\n')
for line in lines:
if 'librpm.so' in line:
rpm_lib_dir = os.path.dirname(line)
break
self._lib_dir = rpm_lib_dir
return self._lib_dir | Return standard library directory path used by RPM libs.
TODO: Support non-system RPM. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L1527-L1542 |
junaruga/rpm-py-installer | install.py | FedoraRpm.is_downloadable | def is_downloadable(self):
"""Return if rpm is downloadable by the package command.
Check if dnf or yum plugin package exists.
"""
        is_plugin_available = False
if self.is_dnf:
            is_plugin_available = self.is_package_installed(
'dnf-plugins-core')
else:
""" yum environment.
Make sure
# yum -y --downloadonly --downloaddir=. install package_name
is only available for root user.
yumdownloader in yum-utils is available for normal user.
https://access.redhat.com/solutions/10154
"""
            is_plugin_available = self.is_package_installed(
'yum-utils')
        return is_plugin_available | python | def is_downloadable(self):
"""Return if rpm is downloadable by the package command.
Check if dnf or yum plugin package exists.
"""
        is_plugin_available = False
if self.is_dnf:
            is_plugin_available = self.is_package_installed(
'dnf-plugins-core')
else:
""" yum environment.
Make sure
# yum -y --downloadonly --downloaddir=. install package_name
is only available for root user.
yumdownloader in yum-utils is available for normal user.
https://access.redhat.com/solutions/10154
"""
            is_plugin_available = self.is_package_installed(
'yum-utils')
        return is_plugin_available | Return if rpm is downloadable by the package command.
Check if dnf or yum plugin package exists. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L1561-L1581 |
junaruga/rpm-py-installer | install.py | FedoraRpm.download | def download(self, package_name):
"""Download given package."""
if not package_name:
            raise ValueError('package_name required.')
if self.is_dnf:
cmd = 'dnf download {0}.{1}'.format(package_name, self.arch)
else:
cmd = 'yumdownloader {0}.{1}'.format(package_name, self.arch)
try:
Cmd.sh_e(cmd, stdout=subprocess.PIPE)
except CmdError as e:
for out in (e.stdout, e.stderr):
for line in out.split('\n'):
if re.match(r'^No package [^ ]+ available', line) or \
re.match(r'^No Match for argument', line):
raise RemoteFileNotFoundError(
'Package {0} not found on remote'.format(
package_name
)
)
raise e | python | def download(self, package_name):
"""Download given package."""
if not package_name:
            raise ValueError('package_name required.')
if self.is_dnf:
cmd = 'dnf download {0}.{1}'.format(package_name, self.arch)
else:
cmd = 'yumdownloader {0}.{1}'.format(package_name, self.arch)
try:
Cmd.sh_e(cmd, stdout=subprocess.PIPE)
except CmdError as e:
for out in (e.stdout, e.stderr):
for line in out.split('\n'):
if re.match(r'^No package [^ ]+ available', line) or \
re.match(r'^No Match for argument', line):
raise RemoteFileNotFoundError(
'Package {0} not found on remote'.format(
package_name
)
)
raise e | Download given package. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L1588-L1608 |
junaruga/rpm-py-installer | install.py | FedoraRpm.extract | def extract(self, package_name):
"""Extract given package."""
for cmd in ['rpm2cpio', 'cpio']:
if not Cmd.which(cmd):
message = '{0} command not found. Install {0}.'.format(cmd)
raise InstallError(message)
pattern = '{0}*{1}.rpm'.format(package_name, self.arch)
rpm_files = Cmd.find('.', pattern)
if not rpm_files:
            raise InstallError('RPM file not found.')
cmd = 'rpm2cpio {0} | cpio -idmv'.format(rpm_files[0])
Cmd.sh_e(cmd) | python | def extract(self, package_name):
"""Extract given package."""
for cmd in ['rpm2cpio', 'cpio']:
if not Cmd.which(cmd):
message = '{0} command not found. Install {0}.'.format(cmd)
raise InstallError(message)
pattern = '{0}*{1}.rpm'.format(package_name, self.arch)
rpm_files = Cmd.find('.', pattern)
if not rpm_files:
raise InstallError('PRM file not found.')
cmd = 'rpm2cpio {0} | cpio -idmv'.format(rpm_files[0])
Cmd.sh_e(cmd) | Extract given package. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L1610-L1622 |
junaruga/rpm-py-installer | install.py | DebianRpm.lib_dir | def lib_dir(self):
"""Return standard library directory path used by RPM libs."""
if not self._lib_dir:
lib_files = glob.glob("/usr/lib/*/librpm.so*")
if not lib_files:
raise InstallError("Can not find lib directory.")
self._lib_dir = os.path.dirname(lib_files[0])
return self._lib_dir | python | def lib_dir(self):
"""Return standard library directory path used by RPM libs."""
if not self._lib_dir:
lib_files = glob.glob("/usr/lib/*/librpm.so*")
if not lib_files:
raise InstallError("Can not find lib directory.")
self._lib_dir = os.path.dirname(lib_files[0])
return self._lib_dir | Return standard library directory path used by RPM libs. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L1636-L1643 |
junaruga/rpm-py-installer | install.py | Cmd.sh_e | def sh_e(cls, cmd, **kwargs):
"""Run the command. It behaves like "sh -e".
It raises InstallError if the command failed.
"""
Log.debug('CMD: {0}'.format(cmd))
cmd_kwargs = {
'shell': True,
}
cmd_kwargs.update(kwargs)
env = os.environ.copy()
# Better to parse English output
env['LC_ALL'] = 'en_US.utf-8'
if 'env' in kwargs:
env.update(kwargs['env'])
cmd_kwargs['env'] = env
# Capture stderr to show it on error message.
cmd_kwargs['stderr'] = subprocess.PIPE
proc = None
try:
proc = subprocess.Popen(cmd, **cmd_kwargs)
stdout, stderr = proc.communicate()
returncode = proc.returncode
message_format = (
'CMD Return Code: [{0}], Stdout: [{1}], Stderr: [{2}]'
)
Log.debug(message_format.format(returncode, stdout, stderr))
if stdout is not None:
stdout = stdout.decode('utf-8')
if stderr is not None:
stderr = stderr.decode('utf-8')
if returncode != 0:
message = 'CMD: [{0}], Return Code: [{1}] at [{2}]'.format(
cmd, returncode, os.getcwd())
if stderr is not None:
message += ' Stderr: [{0}]'.format(stderr)
ie = CmdError(message)
ie.stdout = stdout
ie.stderr = stderr
raise ie
return (stdout, stderr)
except Exception as e:
try:
proc.kill()
except Exception:
pass
raise e | python | def sh_e(cls, cmd, **kwargs):
"""Run the command. It behaves like "sh -e".
It raises InstallError if the command failed.
"""
Log.debug('CMD: {0}'.format(cmd))
cmd_kwargs = {
'shell': True,
}
cmd_kwargs.update(kwargs)
env = os.environ.copy()
# Better to parse English output
env['LC_ALL'] = 'en_US.utf-8'
if 'env' in kwargs:
env.update(kwargs['env'])
cmd_kwargs['env'] = env
# Capture stderr to show it on error message.
cmd_kwargs['stderr'] = subprocess.PIPE
proc = None
try:
proc = subprocess.Popen(cmd, **cmd_kwargs)
stdout, stderr = proc.communicate()
returncode = proc.returncode
message_format = (
'CMD Return Code: [{0}], Stdout: [{1}], Stderr: [{2}]'
)
Log.debug(message_format.format(returncode, stdout, stderr))
if stdout is not None:
stdout = stdout.decode('utf-8')
if stderr is not None:
stderr = stderr.decode('utf-8')
if returncode != 0:
message = 'CMD: [{0}], Return Code: [{1}] at [{2}]'.format(
cmd, returncode, os.getcwd())
if stderr is not None:
message += ' Stderr: [{0}]'.format(stderr)
ie = CmdError(message)
ie.stdout = stdout
ie.stderr = stderr
raise ie
return (stdout, stderr)
except Exception as e:
try:
proc.kill()
except Exception:
pass
raise e | Run the command. It behaves like "sh -e".
It raises InstallError if the command failed. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L1707-L1758 |
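A usage sketch for the wrapper above, assuming install.py's Cmd and CmdError are in scope; the commands are examples, not ones the installer necessarily runs:

```python
import subprocess

# Capture stdout explicitly; sh_e only captures stderr by default.
stdout, stderr = Cmd.sh_e('rpm --version', stdout=subprocess.PIPE)
print(stdout.strip())   # e.g. "RPM version 4.14.2"

# A command with a non-zero exit status raises CmdError with stdout/stderr attached.
try:
    Cmd.sh_e('false')
except CmdError as e:
    print('failed:', e)
```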
junaruga/rpm-py-installer | install.py | Cmd.sh_e_out | def sh_e_out(cls, cmd, **kwargs):
"""Run the command. and returns the stdout."""
cmd_kwargs = {
'stdout': subprocess.PIPE,
}
cmd_kwargs.update(kwargs)
return cls.sh_e(cmd, **cmd_kwargs)[0] | python | def sh_e_out(cls, cmd, **kwargs):
"""Run the command. and returns the stdout."""
cmd_kwargs = {
'stdout': subprocess.PIPE,
}
cmd_kwargs.update(kwargs)
return cls.sh_e(cmd, **cmd_kwargs)[0] | Run the command. and returns the stdout. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L1761-L1767 |
junaruga/rpm-py-installer | install.py | Cmd.cd | def cd(cls, directory):
"""Change directory. It behaves like "cd directory"."""
Log.debug('CMD: cd {0}'.format(directory))
os.chdir(directory) | python | def cd(cls, directory):
"""Change directory. It behaves like "cd directory"."""
Log.debug('CMD: cd {0}'.format(directory))
os.chdir(directory) | Change directory. It behaves like "cd directory". | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L1770-L1773 |
junaruga/rpm-py-installer | install.py | Cmd.pushd | def pushd(cls, new_dir):
"""Change directory, and back to previous directory.
It behaves like "pushd directory; something; popd".
"""
previous_dir = os.getcwd()
try:
new_ab_dir = None
if os.path.isabs(new_dir):
new_ab_dir = new_dir
else:
new_ab_dir = os.path.join(previous_dir, new_dir)
# Use absolute path to show it on FileNotFoundError message.
cls.cd(new_ab_dir)
yield
finally:
cls.cd(previous_dir) | python | def pushd(cls, new_dir):
"""Change directory, and back to previous directory.
It behaves like "pushd directory; something; popd".
"""
previous_dir = os.getcwd()
try:
new_ab_dir = None
if os.path.isabs(new_dir):
new_ab_dir = new_dir
else:
new_ab_dir = os.path.join(previous_dir, new_dir)
# Use absolute path to show it on FileNotFoundError message.
cls.cd(new_ab_dir)
yield
finally:
cls.cd(previous_dir) | Change directory, and back to previous directory.
It behaves like "pushd directory; something; popd". | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L1777-L1793 |
junaruga/rpm-py-installer | install.py | Cmd.which | def which(cls, cmd):
"""Return an absolute path of the command.
It behaves like "which command".
"""
abs_path_cmd = None
if sys.version_info >= (3, 3):
abs_path_cmd = shutil.which(cmd)
else:
abs_path_cmd = find_executable(cmd)
return abs_path_cmd | python | def which(cls, cmd):
"""Return an absolute path of the command.
It behaves like "which command".
"""
abs_path_cmd = None
if sys.version_info >= (3, 3):
abs_path_cmd = shutil.which(cmd)
else:
abs_path_cmd = find_executable(cmd)
return abs_path_cmd | Return an absolute path of the command.
It behaves like "which command". | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L1796-L1806 |
junaruga/rpm-py-installer | install.py | Cmd.curl_remote_name | def curl_remote_name(cls, file_url):
"""Download file_url, and save as a file name of the URL.
It behaves like "curl -O or --remote-name".
It raises HTTPError if the file_url not found.
"""
tar_gz_file_name = file_url.split('/')[-1]
if sys.version_info >= (3, 2):
from urllib.request import urlopen
from urllib.error import HTTPError
else:
from urllib2 import urlopen
from urllib2 import HTTPError
response = None
try:
response = urlopen(file_url)
except HTTPError as e:
message = 'Download failed: URL: {0}, reason: {1}'.format(
file_url, e)
if 'HTTP Error 404' in str(e):
raise RemoteFileNotFoundError(message)
else:
raise InstallError(message)
tar_gz_file_obj = io.BytesIO(response.read())
with open(tar_gz_file_name, 'wb') as f_out:
f_out.write(tar_gz_file_obj.read())
return tar_gz_file_name | python | def curl_remote_name(cls, file_url):
"""Download file_url, and save as a file name of the URL.
It behaves like "curl -O or --remote-name".
It raises HTTPError if the file_url not found.
"""
tar_gz_file_name = file_url.split('/')[-1]
if sys.version_info >= (3, 2):
from urllib.request import urlopen
from urllib.error import HTTPError
else:
from urllib2 import urlopen
from urllib2 import HTTPError
response = None
try:
response = urlopen(file_url)
except HTTPError as e:
message = 'Download failed: URL: {0}, reason: {1}'.format(
file_url, e)
if 'HTTP Error 404' in str(e):
raise RemoteFileNotFoundError(message)
else:
raise InstallError(message)
tar_gz_file_obj = io.BytesIO(response.read())
with open(tar_gz_file_name, 'wb') as f_out:
f_out.write(tar_gz_file_obj.read())
return tar_gz_file_name | Download file_url, and save as a file name of the URL.
It behaves like "curl -O or --remote-name".
It raises HTTPError if the file_url not found. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L1809-L1838 |
junaruga/rpm-py-installer | install.py | Cmd.tar_extract | def tar_extract(cls, tar_comp_file_path):
"""Extract tar.gz or tar bz2 file.
It behaves like
- tar xzf tar_gz_file_path
- tar xjf tar_bz2_file_path
It raises tarfile.ReadError if the file is broken.
"""
try:
with contextlib.closing(tarfile.open(tar_comp_file_path)) as tar:
tar.extractall()
except tarfile.ReadError as e:
message_format = (
'Extract failed: '
'tar_comp_file_path: {0}, reason: {1}'
)
raise InstallError(message_format.format(tar_comp_file_path, e)) | python | def tar_extract(cls, tar_comp_file_path):
"""Extract tar.gz or tar bz2 file.
It behaves like
- tar xzf tar_gz_file_path
- tar xjf tar_bz2_file_path
It raises tarfile.ReadError if the file is broken.
"""
try:
with contextlib.closing(tarfile.open(tar_comp_file_path)) as tar:
tar.extractall()
except tarfile.ReadError as e:
message_format = (
'Extract failed: '
'tar_comp_file_path: {0}, reason: {1}'
)
raise InstallError(message_format.format(tar_comp_file_path, e)) | Extract tar.gz or tar bz2 file.
It behaves like
- tar xzf tar_gz_file_path
- tar xjf tar_bz2_file_path
It raises tarfile.ReadError if the file is broken. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L1841-L1857 |
junaruga/rpm-py-installer | install.py | Cmd.find | def find(cls, searched_dir, pattern):
"""Find matched files.
        It does not include symbolic links in the result.
"""
Log.debug('find {0} with pattern: {1}'.format(searched_dir, pattern))
matched_files = []
for root_dir, dir_names, file_names in os.walk(searched_dir,
followlinks=False):
for file_name in file_names:
if fnmatch.fnmatch(file_name, pattern):
file_path = os.path.join(root_dir, file_name)
if not os.path.islink(file_path):
matched_files.append(file_path)
matched_files.sort()
return matched_files | python | def find(cls, searched_dir, pattern):
"""Find matched files.
        It does not include symbolic links in the result.
"""
Log.debug('find {0} with pattern: {1}'.format(searched_dir, pattern))
matched_files = []
for root_dir, dir_names, file_names in os.walk(searched_dir,
followlinks=False):
for file_name in file_names:
if fnmatch.fnmatch(file_name, pattern):
file_path = os.path.join(root_dir, file_name)
if not os.path.islink(file_path):
matched_files.append(file_path)
matched_files.sort()
return matched_files | Find matched files.
    It does not include symbolic links in the result. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L1860-L1875 |
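A usage sketch matching how the installer calls the helper above; the directory is an assumed Fedora lib dir and the printed path is only an example:

```python
# Assumes install.py's Cmd is in scope.
so_files = Cmd.find('/usr/lib64', 'librpm.so*')
if so_files:
    print(so_files[0])   # e.g. /usr/lib64/librpm.so.8.0.1 (symlinks are excluded)
```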
junaruga/rpm-py-installer | install.py | Utils.version_str2tuple | def version_str2tuple(cls, version_str):
"""Version info.
tuple object. ex. ('4', '14', '0', 'rc1')
"""
if not isinstance(version_str, str):
            raise ValueError('version_str invalid instance.')
version_info_list = re.findall(r'[0-9a-zA-Z]+', version_str)
def convert_to_int(string):
value = None
if re.match(r'^\d+$', string):
value = int(string)
else:
value = string
return value
version_info_list = [convert_to_int(s) for s in version_info_list]
return tuple(version_info_list) | python | def version_str2tuple(cls, version_str):
"""Version info.
tuple object. ex. ('4', '14', '0', 'rc1')
"""
if not isinstance(version_str, str):
            raise ValueError('version_str invalid instance.')
version_info_list = re.findall(r'[0-9a-zA-Z]+', version_str)
def convert_to_int(string):
value = None
if re.match(r'^\d+$', string):
value = int(string)
else:
value = string
return value
version_info_list = [convert_to_int(s) for s in version_info_list]
        return tuple(version_info_list) | Convert a version string to a version info tuple.
Numeric parts become ints. ex. '4.14.0-rc1' -> (4, 14, 0, 'rc1') | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/install.py#L1890-L1909 |
biocommons/biocommons.seqrepo | biocommons/seqrepo/fastaiter/fastaiter.py | FastaIter | def FastaIter(handle):
"""generator that returns (header, sequence) tuples from an open FASTA file handle
Lines before the start of the first record are ignored.
"""
header = None
for line in handle:
if line.startswith(">"):
if header is not None: # not the first record
yield header, "".join(seq_lines)
seq_lines = list()
header = line[1:].rstrip()
else:
if header is not None: # not the first record
seq_lines.append(line.strip())
if header is not None:
yield header, "".join(seq_lines)
else: # no FASTA records in file
return | python | def FastaIter(handle):
"""generator that returns (header, sequence) tuples from an open FASTA file handle
Lines before the start of the first record are ignored.
"""
header = None
for line in handle:
if line.startswith(">"):
if header is not None: # not the first record
yield header, "".join(seq_lines)
seq_lines = list()
header = line[1:].rstrip()
else:
if header is not None: # not the first record
seq_lines.append(line.strip())
if header is not None:
yield header, "".join(seq_lines)
else: # no FASTA records in file
return | generator that returns (header, sequence) tuples from an open FASTA file handle
Lines before the start of the first record are ignored. | https://github.com/biocommons/biocommons.seqrepo/blob/fb6d88682cb73ee6971cfa47d4dcd90a9c649167/biocommons/seqrepo/fastaiter/fastaiter.py#L1-L21 |
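A minimal usage sketch, assuming FastaIter from the listing above is importable (the import path is inferred from the file layout and may differ):

import io
from biocommons.seqrepo.fastaiter import FastaIter

fasta_text = ">seq1 first record\nACGT\nacgt\n>seq2\nTTTT\n"
for header, seq in FastaIter(io.StringIO(fasta_text)):
    print(header, seq)
# seq1 first record ACGTacgt
# seq2 TTTT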
biocommons/biocommons.seqrepo | biocommons/seqrepo/py2compat/_which.py | which | def which(file):
"""
>>> which("sh") is not None
True
>>> which("bogus-executable-that-doesn't-exist") is None
True
"""
for path in os.environ["PATH"].split(os.pathsep):
if os.path.exists(os.path.join(path, file)):
return os.path.join(path, file)
return None | python | def which(file):
"""
>>> which("sh") is not None
True
>>> which("bogus-executable-that-doesn't-exist") is None
True
"""
for path in os.environ["PATH"].split(os.pathsep):
if os.path.exists(os.path.join(path, file)):
return os.path.join(path, file)
return None | >>> which("sh") is not None
True
>>> which("bogus-executable-that-doesn't-exist") is None
True | https://github.com/biocommons/biocommons.seqrepo/blob/fb6d88682cb73ee6971cfa47d4dcd90a9c649167/biocommons/seqrepo/py2compat/_which.py#L3-L15 |
junaruga/rpm-py-installer | setup.py | install_rpm_py | def install_rpm_py():
"""Install RPM Python binding."""
python_path = sys.executable
cmd = '{0} install.py'.format(python_path)
exit_status = os.system(cmd)
if exit_status != 0:
raise Exception('Command failed: {0}'.format(cmd)) | python | def install_rpm_py():
"""Install RPM Python binding."""
python_path = sys.executable
cmd = '{0} install.py'.format(python_path)
exit_status = os.system(cmd)
if exit_status != 0:
raise Exception('Command failed: {0}'.format(cmd)) | Install RPM Python binding. | https://github.com/junaruga/rpm-py-installer/blob/12f45feb0ba533dec8d0d16ef1e9b7fb8cfbd4ed/setup.py#L14-L20 |
biocommons/biocommons.seqrepo | biocommons/seqrepo/fastadir/fabgz.py | _get_bgzip_version | def _get_bgzip_version(exe):
"""return bgzip version as string"""
p = subprocess.Popen([exe, "-h"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT, universal_newlines=True)
output = p.communicate()
version_line = output[0].splitlines()[1]
version = re.match(r"(?:Version:|bgzip \(htslib\))\s+(\d+\.\d+(\.\d+)?)", version_line).group(1)
return version | python | def _get_bgzip_version(exe):
"""return bgzip version as string"""
p = subprocess.Popen([exe, "-h"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT, universal_newlines=True)
output = p.communicate()
version_line = output[0].splitlines()[1]
version = re.match(r"(?:Version:|bgzip \(htslib\))\s+(\d+\.\d+(\.\d+)?)", version_line).group(1)
return version | return bgzip version as string | https://github.com/biocommons/biocommons.seqrepo/blob/fb6d88682cb73ee6971cfa47d4dcd90a9c649167/biocommons/seqrepo/fastadir/fabgz.py#L32-L38 |
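The regex accepts both the older "Version: x.y.z" banner and the newer "bgzip (htslib) x.y" banner. A sketch of just the parsing step, with made-up banner lines:

import re

version_re = re.compile(r"(?:Version:|bgzip \(htslib\))\s+(\d+\.\d+(\.\d+)?)")
for banner in ("Version: 1.2.1", "bgzip (htslib) 1.16"):
    print(version_re.match(banner).group(1))
# 1.2.1
# 1.16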
biocommons/biocommons.seqrepo | biocommons/seqrepo/fastadir/fabgz.py | _find_bgzip | def _find_bgzip():
"""return path to bgzip if found and meets version requirements, else exception"""
missing_file_exception = OSError if six.PY2 else FileNotFoundError
min_bgzip_version = ".".join(map(str, min_bgzip_version_info))
exe = os.environ.get("SEQREPO_BGZIP_PATH", which("bgzip") or "/usr/bin/bgzip")
try:
bgzip_version = _get_bgzip_version(exe)
except AttributeError:
raise RuntimeError("Didn't find version string in bgzip executable ({exe})".format(exe=exe))
except missing_file_exception:
raise RuntimeError("{exe} doesn't exist; you need to install htslib (See https://github.com/biocommons/biocommons.seqrepo#requirements)".format(exe=exe))
except Exception:
raise RuntimeError("Unknown error while executing {exe}".format(exe=exe))
bgzip_version_info = tuple(map(int, bgzip_version.split(".")))
if bgzip_version_info < min_bgzip_version_info:
raise RuntimeError("bgzip ({exe}) {ev} is too old; >= {rv} is required; please upgrade".format(
exe=exe, ev=bgzip_version, rv=min_bgzip_version))
logger.info("Using bgzip {ev} ({exe})".format(ev=bgzip_version, exe=exe))
return exe | python | def _find_bgzip():
"""return path to bgzip if found and meets version requirements, else exception"""
missing_file_exception = OSError if six.PY2 else FileNotFoundError
min_bgzip_version = ".".join(map(str, min_bgzip_version_info))
exe = os.environ.get("SEQREPO_BGZIP_PATH", which("bgzip") or "/usr/bin/bgzip")
try:
bgzip_version = _get_bgzip_version(exe)
except AttributeError:
raise RuntimeError("Didn't find version string in bgzip executable ({exe})".format(exe=exe))
except missing_file_exception:
raise RuntimeError("{exe} doesn't exist; you need to install htslib (See https://github.com/biocommons/biocommons.seqrepo#requirements)".format(exe=exe))
except Exception:
raise RuntimeError("Unknown error while executing {exe}".format(exe=exe))
bgzip_version_info = tuple(map(int, bgzip_version.split(".")))
if bgzip_version_info < min_bgzip_version_info:
raise RuntimeError("bgzip ({exe}) {ev} is too old; >= {rv} is required; please upgrade".format(
exe=exe, ev=bgzip_version, rv=min_bgzip_version))
logger.info("Using bgzip {ev} ({exe})".format(ev=bgzip_version, exe=exe))
return exe | return path to bgzip if found and meets version requirements, else exception | https://github.com/biocommons/biocommons.seqrepo/blob/fb6d88682cb73ee6971cfa47d4dcd90a9c649167/biocommons/seqrepo/fastadir/fabgz.py#L41-L60 |
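The minimum-version check compares integer tuples rather than strings, which avoids the usual lexicographic trap. A small illustration; the (1, 2, 2) minimum is an assumption for the example, since min_bgzip_version_info is defined elsewhere in the module:

min_version = (1, 2, 2)  # assumed minimum, for illustration only
for v in ("1.2.1", "1.10", "1.16"):
    info = tuple(map(int, v.split(".")))
    print(v, "ok" if info >= min_version else "too old")
# 1.2.1 too old
# 1.10 ok   (naive string comparison would wrongly rank "1.10" below "1.2.1")
# 1.16 ok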
biocommons/biocommons.seqrepo | biocommons/seqrepo/seqrepo.py | SeqRepo.fetch_uri | def fetch_uri(self, uri, start=None, end=None):
"""fetch sequence for URI/CURIE of the form namespace:alias, such as
NCBI:NM_000059.3.
"""
namespace, alias = uri_re.match(uri).groups()
return self.fetch(alias=alias, namespace=namespace, start=start, end=end) | python | def fetch_uri(self, uri, start=None, end=None):
"""fetch sequence for URI/CURIE of the form namespace:alias, such as
NCBI:NM_000059.3.
"""
namespace, alias = uri_re.match(uri).groups()
return self.fetch(alias=alias, namespace=namespace, start=start, end=end) | fetch sequence for URI/CURIE of the form namespace:alias, such as
NCBI:NM_000059.3. | https://github.com/biocommons/biocommons.seqrepo/blob/fb6d88682cb73ee6971cfa47d4dcd90a9c649167/biocommons/seqrepo/seqrepo.py#L102-L109 |
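uri_re is defined elsewhere in the module; a plausible equivalent and a usage sketch are shown below. Both the regex and the call are illustrative assumptions, not the module's actual definitions.

import re

uri_re = re.compile(r"([^:]+):(.+)")  # assumed shape: "namespace:alias"
namespace, alias = uri_re.match("NCBI:NM_000059.3").groups()
print(namespace, alias)  # NCBI NM_000059.3

# With an open SeqRepo instance sr, the docstring's example would then be:
# sr.fetch_uri("NCBI:NM_000059.3", start=0, end=10)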
biocommons/biocommons.seqrepo | biocommons/seqrepo/seqrepo.py | SeqRepo.store | def store(self, seq, nsaliases):
"""nsaliases is a list of dicts, like:
[{"namespace": "en", "alias": "rose"},
{"namespace": "fr", "alias": "rose"},
{"namespace": "es", "alias": "rosa"}]
"""
if not self._writeable:
raise RuntimeError("Cannot write -- opened read-only")
if self._upcase:
seq = seq.upper()
try:
seqhash = bioutils.digests.seq_seqhash(seq)
except Exception as e:
import pprint
_logger.critical("Exception raised for " + pprint.pformat(nsaliases))
raise
seq_id = seqhash
# add sequence if not present
n_seqs_added = n_aliases_added = 0
msg = "sh{nsa_sep}{seq_id:.10s}... ({l} residues; {na} aliases {aliases})".format(
seq_id=seq_id,
l=len(seq),
na=len(nsaliases),
nsa_sep=nsa_sep,
aliases=", ".join("{nsa[namespace]}:{nsa[alias]}".format(nsa=nsa) for nsa in nsaliases))
if seq_id not in self.sequences:
_logger.info("Storing " + msg)
if len(seq) > ct_n_residues: # pragma: no cover
_logger.debug("Precommit for large sequence")
self.commit()
self.sequences.store(seq_id, seq)
n_seqs_added += 1
self._pending_sequences += 1
self._pending_sequences_len += len(seq)
self._pending_aliases += self._update_digest_aliases(seq_id, seq)
else:
_logger.debug("Sequence exists: " + msg)
# add/update external aliases for new and existing sequences
# updating is optimized to load only new <seq_id,ns,alias> tuples
existing_aliases = self.aliases.fetch_aliases(seq_id)
ea_tuples = [(r["seq_id"], r["namespace"], r["alias"]) for r in existing_aliases]
new_tuples = [(seq_id, r["namespace"], r["alias"]) for r in nsaliases]
upd_tuples = set(new_tuples) - set(ea_tuples)
if upd_tuples:
_logger.info("{} new aliases for {}".format(len(upd_tuples), msg))
for _, namespace, alias in upd_tuples:
self.aliases.store_alias(seq_id=seq_id, namespace=namespace, alias=alias)
self._pending_aliases += len(upd_tuples)
n_aliases_added += len(upd_tuples)
if (self._pending_sequences > ct_n_seqs or self._pending_aliases > ct_n_aliases
or self._pending_sequences_len > ct_n_residues): # pragma: no cover
_logger.info("Hit commit thresholds ({self._pending_sequences} sequences, "
"{self._pending_aliases} aliases, {self._pending_sequences_len} residues)".format(self=self))
self.commit()
return n_seqs_added, n_aliases_added | python | def store(self, seq, nsaliases):
"""nsaliases is a list of dicts, like:
[{"namespace": "en", "alias": "rose"},
{"namespace": "fr", "alias": "rose"},
{"namespace": "es", "alias": "rosa"}]
"""
if not self._writeable:
raise RuntimeError("Cannot write -- opened read-only")
if self._upcase:
seq = seq.upper()
try:
seqhash = bioutils.digests.seq_seqhash(seq)
except Exception as e:
import pprint
_logger.critical("Exception raised for " + pprint.pformat(nsaliases))
raise
seq_id = seqhash
# add sequence if not present
n_seqs_added = n_aliases_added = 0
msg = "sh{nsa_sep}{seq_id:.10s}... ({l} residues; {na} aliases {aliases})".format(
seq_id=seq_id,
l=len(seq),
na=len(nsaliases),
nsa_sep=nsa_sep,
aliases=", ".join("{nsa[namespace]}:{nsa[alias]}".format(nsa=nsa) for nsa in nsaliases))
if seq_id not in self.sequences:
_logger.info("Storing " + msg)
if len(seq) > ct_n_residues: # pragma: no cover
_logger.debug("Precommit for large sequence")
self.commit()
self.sequences.store(seq_id, seq)
n_seqs_added += 1
self._pending_sequences += 1
self._pending_sequences_len += len(seq)
self._pending_aliases += self._update_digest_aliases(seq_id, seq)
else:
_logger.debug("Sequence exists: " + msg)
# add/update external aliases for new and existing sequences
# updating is optimized to load only new <seq_id,ns,alias> tuples
existing_aliases = self.aliases.fetch_aliases(seq_id)
ea_tuples = [(r["seq_id"], r["namespace"], r["alias"]) for r in existing_aliases]
new_tuples = [(seq_id, r["namespace"], r["alias"]) for r in nsaliases]
upd_tuples = set(new_tuples) - set(ea_tuples)
if upd_tuples:
_logger.info("{} new aliases for {}".format(len(upd_tuples), msg))
for _, namespace, alias in upd_tuples:
self.aliases.store_alias(seq_id=seq_id, namespace=namespace, alias=alias)
self._pending_aliases += len(upd_tuples)
n_aliases_added += len(upd_tuples)
if (self._pending_sequences > ct_n_seqs or self._pending_aliases > ct_n_aliases
or self._pending_sequences_len > ct_n_residues): # pragma: no cover
_logger.info("Hit commit thresholds ({self._pending_sequences} sequences, "
"{self._pending_aliases} aliases, {self._pending_sequences_len} residues)".format(self=self))
self.commit()
return n_seqs_added, n_aliases_added | nsaliases is a list of dicts, like:
[{"namespace": "en", "alias": "rose"},
{"namespace": "fr", "alias": "rose"},
{"namespace": "es", "alias": "rosa"}] | https://github.com/biocommons/biocommons.seqrepo/blob/fb6d88682cb73ee6971cfa47d4dcd90a9c649167/biocommons/seqrepo/seqrepo.py#L112-L172 |
biocommons/biocommons.seqrepo | biocommons/seqrepo/seqrepo.py | SeqRepo.translate_alias | def translate_alias(self, alias, namespace=None, target_namespaces=None, translate_ncbi_namespace=None):
"""given an alias and optional namespace, return a list of all other
aliases for same sequence
"""
if translate_ncbi_namespace is None:
translate_ncbi_namespace = self.translate_ncbi_namespace
seq_id = self._get_unique_seqid(alias=alias, namespace=namespace)
aliases = self.aliases.fetch_aliases(seq_id=seq_id,
translate_ncbi_namespace=translate_ncbi_namespace)
if target_namespaces:
aliases = [a for a in aliases if a["namespace"] in target_namespaces]
return aliases | python | def translate_alias(self, alias, namespace=None, target_namespaces=None, translate_ncbi_namespace=None):
"""given an alias and optional namespace, return a list of all other
aliases for same sequence
"""
if translate_ncbi_namespace is None:
translate_ncbi_namespace = self.translate_ncbi_namespace
seq_id = self._get_unique_seqid(alias=alias, namespace=namespace)
aliases = self.aliases.fetch_aliases(seq_id=seq_id,
translate_ncbi_namespace=translate_ncbi_namespace)
if target_namespaces:
aliases = [a for a in aliases if a["namespace"] in target_namespaces]
return aliases | given an alias and optional namespace, return a list of all other
aliases for same sequence | https://github.com/biocommons/biocommons.seqrepo/blob/fb6d88682cb73ee6971cfa47d4dcd90a9c649167/biocommons/seqrepo/seqrepo.py#L175-L188 |
biocommons/biocommons.seqrepo | biocommons/seqrepo/seqrepo.py | SeqRepo.translate_identifier | def translate_identifier(self, identifier, target_namespaces=None, translate_ncbi_namespace=None):
"""Given a string identifier, return a list of aliases (as
identifiers) that refer to the same sequence.
"""
namespace, alias = identifier.split(nsa_sep) if nsa_sep in identifier else (None, identifier)
aliases = self.translate_alias(alias=alias,
namespace=namespace,
target_namespaces=target_namespaces,
translate_ncbi_namespace=translate_ncbi_namespace)
return [nsa_sep.join((a["namespace"], a["alias"])) for a in aliases] | python | def translate_identifier(self, identifier, target_namespaces=None, translate_ncbi_namespace=None):
"""Given a string identifier, return a list of aliases (as
identifiers) that refer to the same sequence.
"""
namespace, alias = identifier.split(nsa_sep) if nsa_sep in identifier else (None, identifier)
aliases = self.translate_alias(alias=alias,
namespace=namespace,
target_namespaces=target_namespaces,
translate_ncbi_namespace=translate_ncbi_namespace)
return [nsa_sep.join((a["namespace"], a["alias"])) for a in aliases] | Given a string identifier, return a list of aliases (as
identifiers) that refer to the same sequence. | https://github.com/biocommons/biocommons.seqrepo/blob/fb6d88682cb73ee6971cfa47d4dcd90a9c649167/biocommons/seqrepo/seqrepo.py#L191-L201 |
biocommons/biocommons.seqrepo | biocommons/seqrepo/seqrepo.py | SeqRepo._get_unique_seqid | def _get_unique_seqid(self, alias, namespace):
"""given alias and namespace, return seq_id if exactly one distinct
sequence id is found, raise KeyError if there's no match, or
raise KeyError if there's more than one match.
"""
recs = self.aliases.find_aliases(alias=alias, namespace=namespace)
seq_ids = set(r["seq_id"] for r in recs)
if len(seq_ids) == 0:
raise KeyError("Alias {} (namespace: {})".format(alias, namespace))
if len(seq_ids) > 1:
# This should only happen when namespace is None
raise KeyError("Alias {} (namespace: {}): not unique".format(alias, namespace))
return seq_ids.pop() | python | def _get_unique_seqid(self, alias, namespace):
"""given alias and namespace, return seq_id if exactly one distinct
sequence id is found, raise KeyError if there's no match, or
raise KeyError if there's more than one match.
"""
recs = self.aliases.find_aliases(alias=alias, namespace=namespace)
seq_ids = set(r["seq_id"] for r in recs)
if len(seq_ids) == 0:
raise KeyError("Alias {} (namespace: {})".format(alias, namespace))
if len(seq_ids) > 1:
# This should only happen when namespace is None
raise KeyError("Alias {} (namespace: {}): not unique".format(alias, namespace))
return seq_ids.pop() | given alias and namespace, return seq_id if exactly one distinct
sequence id is found, raise KeyError if there's no match, or
raise KeyError if there's more than one match. | https://github.com/biocommons/biocommons.seqrepo/blob/fb6d88682cb73ee6971cfa47d4dcd90a9c649167/biocommons/seqrepo/seqrepo.py#L207-L221
biocommons/biocommons.seqrepo | biocommons/seqrepo/seqrepo.py | SeqRepo._update_digest_aliases | def _update_digest_aliases(self, seq_id, seq):
"""compute digest aliases for seq and update; returns number of digest
aliases (some of which may have already existed)
For the moment, sha512 is computed for seq_id separately from
the sha512 here. We should fix that.
"""
ir = bioutils.digests.seq_vmc_identifier(seq)
seq_aliases = [
{
"namespace": ir["namespace"],
"alias": ir["accession"],
},
{
"namespace": "SHA1",
"alias": bioutils.digests.seq_sha1(seq)
},
{
"namespace": "MD5",
"alias": bioutils.digests.seq_md5(seq)
},
{
"namespace": "SEGUID",
"alias": bioutils.digests.seq_seguid(seq)
},
]
for sa in seq_aliases:
self.aliases.store_alias(seq_id=seq_id, **sa)
return len(seq_aliases) | python | def _update_digest_aliases(self, seq_id, seq):
"""compute digest aliases for seq and update; returns number of digest
aliases (some of which may have already existed)
For the moment, sha512 is computed for seq_id separately from
the sha512 here. We should fix that.
"""
ir = bioutils.digests.seq_vmc_identifier(seq)
seq_aliases = [
{
"namespace": ir["namespace"],
"alias": ir["accession"],
},
{
"namespace": "SHA1",
"alias": bioutils.digests.seq_sha1(seq)
},
{
"namespace": "MD5",
"alias": bioutils.digests.seq_md5(seq)
},
{
"namespace": "SEGUID",
"alias": bioutils.digests.seq_seguid(seq)
},
]
for sa in seq_aliases:
self.aliases.store_alias(seq_id=seq_id, **sa)
return len(seq_aliases) | compute digest aliases for seq and update; returns number of digest
aliases (some of which may have already existed)
For the moment, sha512 is computed for seq_id separately from
the sha512 here. We should fix that. | https://github.com/biocommons/biocommons.seqrepo/blob/fb6d88682cb73ee6971cfa47d4dcd90a9c649167/biocommons/seqrepo/seqrepo.py#L224-L255 |
biocommons/biocommons.seqrepo | biocommons/seqrepo/seqaliasdb/seqaliasdb.py | SeqAliasDB.fetch_aliases | def fetch_aliases(self, seq_id, current_only=True, translate_ncbi_namespace=None):
"""return list of alias annotation records (dicts) for a given seq_id"""
return [dict(r) for r in self.find_aliases(seq_id=seq_id,
current_only=current_only,
translate_ncbi_namespace=translate_ncbi_namespace)] | python | def fetch_aliases(self, seq_id, current_only=True, translate_ncbi_namespace=None):
"""return list of alias annotation records (dicts) for a given seq_id"""
return [dict(r) for r in self.find_aliases(seq_id=seq_id,
current_only=current_only,
translate_ncbi_namespace=translate_ncbi_namespace)] | return list of alias annotation records (dicts) for a given seq_id | https://github.com/biocommons/biocommons.seqrepo/blob/fb6d88682cb73ee6971cfa47d4dcd90a9c649167/biocommons/seqrepo/seqaliasdb/seqaliasdb.py#L58-L62 |
biocommons/biocommons.seqrepo | biocommons/seqrepo/seqaliasdb/seqaliasdb.py | SeqAliasDB.find_aliases | def find_aliases(self, seq_id=None, namespace=None, alias=None, current_only=True, translate_ncbi_namespace=None):
"""returns iterator over alias annotation records that match criteria
The arguments, all optional, restrict the records that are
returned. Without arguments, all aliases are returned.
If arguments contain %, the `like` comparison operator is
used. Otherwise arguments must match exactly.
"""
clauses = []
params = []
def eq_or_like(s):
return "like" if "%" in s else "="
if translate_ncbi_namespace is None:
translate_ncbi_namespace = self.translate_ncbi_namespace
if alias is not None:
clauses += ["alias {} ?".format(eq_or_like(alias))]
params += [alias]
if namespace is not None:
# Switch to using RefSeq for RefSeq accessions
# issue #38: translate "RefSeq" to "NCBI" to enable RefSeq lookups
# issue #31: later breaking change, translate database
if namespace == "RefSeq":
namespace = "NCBI"
clauses += ["namespace {} ?".format(eq_or_like(namespace))]
params += [namespace]
if seq_id is not None:
clauses += ["seq_id {} ?".format(eq_or_like(seq_id))]
params += [seq_id]
if current_only:
clauses += ["is_current = 1"]
cols = ["seqalias_id", "seq_id", "alias", "added", "is_current"]
if translate_ncbi_namespace:
cols += ["case namespace when 'NCBI' then 'RefSeq' else namespace end as namespace"]
else:
cols += ["namespace"]
sql = "select {cols} from seqalias".format(cols=", ".join(cols))
if clauses:
sql += " where " + " and ".join("(" + c + ")" for c in clauses)
sql += " order by seq_id, namespace, alias"
_logger.debug("Executing: " + sql)
return self._db.execute(sql, params) | python | def find_aliases(self, seq_id=None, namespace=None, alias=None, current_only=True, translate_ncbi_namespace=None):
"""returns iterator over alias annotation records that match criteria
The arguments, all optional, restrict the records that are
returned. Without arguments, all aliases are returned.
If arguments contain %, the `like` comparison operator is
used. Otherwise arguments must match exactly.
"""
clauses = []
params = []
def eq_or_like(s):
return "like" if "%" in s else "="
if translate_ncbi_namespace is None:
translate_ncbi_namespace = self.translate_ncbi_namespace
if alias is not None:
clauses += ["alias {} ?".format(eq_or_like(alias))]
params += [alias]
if namespace is not None:
# Switch to using RefSeq for RefSeq accessions
# issue #38: translate "RefSeq" to "NCBI" to enable RefSeq lookups
# issue #31: later breaking change, translate database
if namespace == "RefSeq":
namespace = "NCBI"
clauses += ["namespace {} ?".format(eq_or_like(namespace))]
params += [namespace]
if seq_id is not None:
clauses += ["seq_id {} ?".format(eq_or_like(seq_id))]
params += [seq_id]
if current_only:
clauses += ["is_current = 1"]
cols = ["seqalias_id", "seq_id", "alias", "added", "is_current"]
if translate_ncbi_namespace:
cols += ["case namespace when 'NCBI' then 'RefSeq' else namespace end as namespace"]
else:
cols += ["namespace"]
sql = "select {cols} from seqalias".format(cols=", ".join(cols))
if clauses:
sql += " where " + " and ".join("(" + c + ")" for c in clauses)
sql += " order by seq_id, namespace, alias"
_logger.debug("Executing: " + sql)
return self._db.execute(sql, params) | returns iterator over alias annotation records that match criteria
The arguments, all optional, restrict the records that are
returned. Without arguments, all aliases are returned.
If arguments contain %, the `like` comparison operator is
used. Otherwise arguments must match exactly. | https://github.com/biocommons/biocommons.seqrepo/blob/fb6d88682cb73ee6971cfa47d4dcd90a9c649167/biocommons/seqrepo/seqaliasdb/seqaliasdb.py#L64-L110 |
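The WHERE clause is assembled piecewise, switching between = and like depending on whether the value contains %. A standalone sketch of that assembly (column and table names copied from the listing; the parameter values are made up):

def eq_or_like(s):
    return "like" if "%" in s else "="

criteria = [("namespace", "NCBI"), ("alias", "NM_000059.%")]
clauses, params = [], []
for col, val in criteria:
    clauses.append("{} {} ?".format(col, eq_or_like(val)))
    params.append(val)

sql = "select * from seqalias where " + " and ".join("(" + c + ")" for c in clauses)
print(sql)     # select * from seqalias where (namespace = ?) and (alias like ?)
print(params)  # ['NCBI', 'NM_000059.%']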
biocommons/biocommons.seqrepo | biocommons/seqrepo/seqaliasdb/seqaliasdb.py | SeqAliasDB.store_alias | def store_alias(self, seq_id, namespace, alias):
"""associate a namespaced alias with a sequence
Alias association with sequences is idempotent: duplicate
associations are discarded silently.
"""
if not self._writeable:
raise RuntimeError("Cannot write -- opened read-only")
log_pfx = "store({q},{n},{a})".format(n=namespace, a=alias, q=seq_id)
try:
c = self._db.execute("insert into seqalias (seq_id, namespace, alias) values (?, ?, ?)", (seq_id, namespace,
alias))
# success => new record
return c.lastrowid
except sqlite3.IntegrityError:
pass
# IntegrityError fall-through
# existing record is guaranteed to exist uniquely; fetchone() should always succeed
current_rec = self.find_aliases(namespace=namespace, alias=alias).fetchone()
# if seq_id matches current record, it's a duplicate (seq_id, namespace, alias) tuple
# and we return current record
if current_rec["seq_id"] == seq_id:
_logger.debug(log_pfx + ": duplicate record")
return current_rec["seqalias_id"]
# otherwise, we're reassigning; deprecate old record, then retry
_logger.debug(log_pfx + ": collision; deprecating {s1}".format(s1=current_rec["seq_id"]))
self._db.execute("update seqalias set is_current = 0 where seqalias_id = ?", [current_rec["seqalias_id"]])
return self.store_alias(seq_id, namespace, alias) | python | def store_alias(self, seq_id, namespace, alias):
"""associate a namespaced alias with a sequence
Alias association with sequences is idempotent: duplicate
associations are discarded silently.
"""
if not self._writeable:
raise RuntimeError("Cannot write -- opened read-only")
log_pfx = "store({q},{n},{a})".format(n=namespace, a=alias, q=seq_id)
try:
c = self._db.execute("insert into seqalias (seq_id, namespace, alias) values (?, ?, ?)", (seq_id, namespace,
alias))
# success => new record
return c.lastrowid
except sqlite3.IntegrityError:
pass
# IntegrityError fall-through
# existing record is guaranteed to exist uniquely; fetchone() should always succeed
current_rec = self.find_aliases(namespace=namespace, alias=alias).fetchone()
# if seq_id matches current record, it's a duplicate (seq_id, namespace, alias) tuple
# and we return current record
if current_rec["seq_id"] == seq_id:
_logger.debug(log_pfx + ": duplicate record")
return current_rec["seqalias_id"]
# otherwise, we're reassigning; deprecate old record, then retry
_logger.debug(log_pfx + ": collision; deprecating {s1}".format(s1=current_rec["seq_id"]))
self._db.execute("update seqalias set is_current = 0 where seqalias_id = ?", [current_rec["seqalias_id"]])
return self.store_alias(seq_id, namespace, alias) | associate a namespaced alias with a sequence
Alias association with sequences is idempotent: duplicate
associations are discarded silently. | https://github.com/biocommons/biocommons.seqrepo/blob/fb6d88682cb73ee6971cfa47d4dcd90a9c649167/biocommons/seqrepo/seqaliasdb/seqaliasdb.py#L123-L157 |
biocommons/biocommons.seqrepo | biocommons/seqrepo/seqaliasdb/seqaliasdb.py | SeqAliasDB._upgrade_db | def _upgrade_db(self):
"""upgrade db using scripts for specified (current) schema version"""
migration_path = "_data/migrations"
sqlite3.connect(self._db_path).close() # ensure that it exists
db_url = "sqlite:///" + self._db_path
backend = yoyo.get_backend(db_url)
migration_dir = pkg_resources.resource_filename(__package__, migration_path)
migrations = yoyo.read_migrations(migration_dir)
assert len(migrations) > 0, "no migration scripts found -- wrong migration path for " + __package__
migrations_to_apply = backend.to_apply(migrations)
backend.apply_migrations(migrations_to_apply) | python | def _upgrade_db(self):
"""upgrade db using scripts for specified (current) schema version"""
migration_path = "_data/migrations"
sqlite3.connect(self._db_path).close() # ensure that it exists
db_url = "sqlite:///" + self._db_path
backend = yoyo.get_backend(db_url)
migration_dir = pkg_resources.resource_filename(__package__, migration_path)
migrations = yoyo.read_migrations(migration_dir)
assert len(migrations) > 0, "no migration scripts found -- wrong migration path for " + __package__
migrations_to_apply = backend.to_apply(migrations)
backend.apply_migrations(migrations_to_apply) | upgrade db using scripts for specified (current) schema version | https://github.com/biocommons/biocommons.seqrepo/blob/fb6d88682cb73ee6971cfa47d4dcd90a9c649167/biocommons/seqrepo/seqaliasdb/seqaliasdb.py#L173-L183 |
biocommons/biocommons.seqrepo | biocommons/seqrepo/py2compat/_commonpath.py | commonpath | def commonpath(paths):
"""py2 compatible version of py3's os.path.commonpath
>>> commonpath([""])
''
>>> commonpath(["/"])
'/'
>>> commonpath(["/a"])
'/a'
>>> commonpath(["/a//"])
'/a'
>>> commonpath(["/a", "/a"])
'/a'
>>> commonpath(["/a/b", "/a"])
'/a'
>>> commonpath(["/a/b", "/a/b"])
'/a/b'
>>> commonpath(["/a/b/c", "/a/b/d"])
'/a/b'
>>> commonpath(["/a/b/c", "/a/b/d", "//a//b//e//"])
'/a/b'
>>> commonpath([])
Traceback (most recent call last):
...
ValueError: commonpath() arg is an empty sequence
>>> commonpath(["/absolute/path", "relative/path"])
Traceback (most recent call last):
...
ValueError: Can't mix absolute and relative paths
"""
assert os.sep == "/", "tested only on slash-delimited paths"
split_re = re.compile(os.sep + "+")
if len(paths) == 0:
raise ValueError("commonpath() arg is an empty sequence")
spaths = [p.rstrip(os.sep) for p in paths]
splitpaths = [split_re.split(p) for p in spaths]
if all(p.startswith(os.sep) for p in paths):
abs_paths = True
splitpaths = [p[1:] for p in splitpaths]
elif all(not p.startswith(os.sep) for p in paths):
abs_paths = False
else:
raise ValueError("Can't mix absolute and relative paths")
splitpaths0 = splitpaths[0]
splitpaths1n = splitpaths[1:]
min_length = min(len(p) for p in splitpaths)
equal = [i for i in range(min_length) if all(splitpaths0[i] == sp[i] for sp in splitpaths1n)]
max_equal = max(equal or [-1])
commonelems = splitpaths0[:max_equal + 1]
commonpath = os.sep.join(commonelems)
return (os.sep if abs_paths else '') + commonpath | python | def commonpath(paths):
"""py2 compatible version of py3's os.path.commonpath
>>> commonpath([""])
''
>>> commonpath(["/"])
'/'
>>> commonpath(["/a"])
'/a'
>>> commonpath(["/a//"])
'/a'
>>> commonpath(["/a", "/a"])
'/a'
>>> commonpath(["/a/b", "/a"])
'/a'
>>> commonpath(["/a/b", "/a/b"])
'/a/b'
>>> commonpath(["/a/b/c", "/a/b/d"])
'/a/b'
>>> commonpath(["/a/b/c", "/a/b/d", "//a//b//e//"])
'/a/b'
>>> commonpath([])
Traceback (most recent call last):
...
ValueError: commonpath() arg is an empty sequence
>>> commonpath(["/absolute/path", "relative/path"])
Traceback (most recent call last):
...
ValueError: Can't mix absolute and relative paths
"""
assert os.sep == "/", "tested only on slash-delimited paths"
split_re = re.compile(os.sep + "+")
if len(paths) == 0:
raise ValueError("commonpath() arg is an empty sequence")
spaths = [p.rstrip(os.sep) for p in paths]
splitpaths = [split_re.split(p) for p in spaths]
if all(p.startswith(os.sep) for p in paths):
abs_paths = True
splitpaths = [p[1:] for p in splitpaths]
elif all(not p.startswith(os.sep) for p in paths):
abs_paths = False
else:
raise ValueError("Can't mix absolute and relative paths")
splitpaths0 = splitpaths[0]
splitpaths1n = splitpaths[1:]
min_length = min(len(p) for p in splitpaths)
equal = [i for i in range(min_length) if all(splitpaths0[i] == sp[i] for sp in splitpaths1n)]
max_equal = max(equal or [-1])
commonelems = splitpaths0[:max_equal + 1]
commonpath = os.sep.join(commonelems)
return (os.sep if abs_paths else '') + commonpath | py2 compatible version of py3's os.path.commonpath
>>> commonpath([""])
''
>>> commonpath(["/"])
'/'
>>> commonpath(["/a"])
'/a'
>>> commonpath(["/a//"])
'/a'
>>> commonpath(["/a", "/a"])
'/a'
>>> commonpath(["/a/b", "/a"])
'/a'
>>> commonpath(["/a/b", "/a/b"])
'/a/b'
>>> commonpath(["/a/b/c", "/a/b/d"])
'/a/b'
>>> commonpath(["/a/b/c", "/a/b/d", "//a//b//e//"])
'/a/b'
>>> commonpath([])
Traceback (most recent call last):
...
ValueError: commonpath() arg is an empty sequence
>>> commonpath(["/absolute/path", "relative/path"])
Traceback (most recent call last):
...
ValueError: Can't mix absolute and relative paths | https://github.com/biocommons/biocommons.seqrepo/blob/fb6d88682cb73ee6971cfa47d4dcd90a9c649167/biocommons/seqrepo/py2compat/_commonpath.py#L5-L58
biocommons/biocommons.seqrepo | biocommons/seqrepo/cli.py | add_assembly_names | def add_assembly_names(opts):
"""add assembly names as aliases to existing sequences
Specifically, associate aliases like GRCh37.p9:1 with existing
refseq accessions
```
[{'aliases': ['chr19'],
'assembly_unit': 'Primary Assembly',
'genbank_ac': 'CM000681.2',
'length': 58617616,
'name': '19',
'refseq_ac': 'NC_000019.10',
'relationship': '=',
'sequence_role': 'assembled-molecule'}]
```
For the above sample record, this function adds the following aliases:
* genbank:CM000681.2
* GRCh38:19
* GRCh38:chr19
to the sequence referred to by refseq:NC_000019.10.
"""
seqrepo_dir = os.path.join(opts.root_directory, opts.instance_name)
sr = SeqRepo(seqrepo_dir, writeable=True)
assemblies = bioutils.assemblies.get_assemblies()
if opts.reload_all:
assemblies_to_load = sorted(assemblies)
else:
namespaces = [r["namespace"] for r in sr.aliases._db.execute("select distinct namespace from seqalias")]
assemblies_to_load = sorted(k for k in assemblies if k not in namespaces)
_logger.info("{} assemblies to load".format(len(assemblies_to_load)))
ncbi_alias_map = {r["alias"]: r["seq_id"] for r in sr.aliases.find_aliases(namespace="NCBI", current_only=False)}
for assy_name in tqdm.tqdm(assemblies_to_load, unit="assembly"):
_logger.debug("loading " + assy_name)
sequences = assemblies[assy_name]["sequences"]
eq_sequences = [s for s in sequences if s["relationship"] in ("=", "<>")]
if not eq_sequences:
_logger.info("No '=' sequences to load for {an}; skipping".format(an=assy_name))
continue
# all assembled-molecules (1..22, X, Y, MT) have ncbi aliases in seqrepo
not_in_seqrepo = [s["refseq_ac"] for s in eq_sequences if s["refseq_ac"] not in ncbi_alias_map]
if not_in_seqrepo:
_logger.warning("Assembly {an} references {n} accessions not in SeqRepo instance {opts.instance_name} (e.g., {acs})".format(
an=assy_name, n=len(not_in_seqrepo), opts=opts, acs=", ".join(not_in_seqrepo[:5]+["..."]), seqrepo_dir=seqrepo_dir))
if not opts.partial_load:
_logger.warning("Skipping {an} (-p to enable partial loading)".format(an=assy_name))
continue
eq_sequences = [es for es in eq_sequences if es["refseq_ac"] in ncbi_alias_map]
_logger.info("Loading {n} new accessions for assembly {an}".format(an=assy_name, n=len(eq_sequences)))
for s in eq_sequences:
seq_id = ncbi_alias_map[s["refseq_ac"]]
aliases = [{"namespace": assy_name, "alias": a} for a in [s["name"]] + s["aliases"]]
if "genbank_ac" in s and s["genbank_ac"]:
aliases += [{"namespace": "genbank", "alias": s["genbank_ac"]}]
for alias in aliases:
sr.aliases.store_alias(seq_id=seq_id, **alias)
_logger.debug("Added assembly alias {a[namespace]}:{a[alias]} for {seq_id}".format(a=alias, seq_id=seq_id))
sr.commit() | python | def add_assembly_names(opts):
"""add assembly names as aliases to existing sequences
Specifically, associate aliases like GRCh37.p9:1 with existing
refseq accessions
```
[{'aliases': ['chr19'],
'assembly_unit': 'Primary Assembly',
'genbank_ac': 'CM000681.2',
'length': 58617616,
'name': '19',
'refseq_ac': 'NC_000019.10',
'relationship': '=',
'sequence_role': 'assembled-molecule'}]
```
For the above sample record, this function adds the following aliases:
* genbank:CM000681.2
* GRCh38:19
* GRCh38:chr19
to the sequence referred to by refseq:NC_000019.10.
"""
seqrepo_dir = os.path.join(opts.root_directory, opts.instance_name)
sr = SeqRepo(seqrepo_dir, writeable=True)
assemblies = bioutils.assemblies.get_assemblies()
if opts.reload_all:
assemblies_to_load = sorted(assemblies)
else:
namespaces = [r["namespace"] for r in sr.aliases._db.execute("select distinct namespace from seqalias")]
assemblies_to_load = sorted(k for k in assemblies if k not in namespaces)
_logger.info("{} assemblies to load".format(len(assemblies_to_load)))
ncbi_alias_map = {r["alias"]: r["seq_id"] for r in sr.aliases.find_aliases(namespace="NCBI", current_only=False)}
for assy_name in tqdm.tqdm(assemblies_to_load, unit="assembly"):
_logger.debug("loading " + assy_name)
sequences = assemblies[assy_name]["sequences"]
eq_sequences = [s for s in sequences if s["relationship"] in ("=", "<>")]
if not eq_sequences:
_logger.info("No '=' sequences to load for {an}; skipping".format(an=assy_name))
continue
# all assembled-molecules (1..22, X, Y, MT) have ncbi aliases in seqrepo
not_in_seqrepo = [s["refseq_ac"] for s in eq_sequences if s["refseq_ac"] not in ncbi_alias_map]
if not_in_seqrepo:
_logger.warning("Assembly {an} references {n} accessions not in SeqRepo instance {opts.instance_name} (e.g., {acs})".format(
an=assy_name, n=len(not_in_seqrepo), opts=opts, acs=", ".join(not_in_seqrepo[:5]+["..."]), seqrepo_dir=seqrepo_dir))
if not opts.partial_load:
_logger.warning("Skipping {an} (-p to enable partial loading)".format(an=assy_name))
continue
eq_sequences = [es for es in eq_sequences if es["refseq_ac"] in ncbi_alias_map]
_logger.info("Loading {n} new accessions for assembly {an}".format(an=assy_name, n=len(eq_sequences)))
for s in eq_sequences:
seq_id = ncbi_alias_map[s["refseq_ac"]]
aliases = [{"namespace": assy_name, "alias": a} for a in [s["name"]] + s["aliases"]]
if "genbank_ac" in s and s["genbank_ac"]:
aliases += [{"namespace": "genbank", "alias": s["genbank_ac"]}]
for alias in aliases:
sr.aliases.store_alias(seq_id=seq_id, **alias)
_logger.debug("Added assembly alias {a[namespace]}:{a[alias]} for {seq_id}".format(a=alias, seq_id=seq_id))
sr.commit() | add assembly names as aliases to existing sequences
Specifically, associate aliases like GRCh37.p9:1 with existing
refseq accessions
```
[{'aliases': ['chr19'],
'assembly_unit': 'Primary Assembly',
'genbank_ac': 'CM000681.2',
'length': 58617616,
'name': '19',
'refseq_ac': 'NC_000019.10',
'relationship': '=',
'sequence_role': 'assembled-molecule'}]
```
For the above sample record, this function adds the following aliases:
* genbank:CM000681.2
* GRCh38:19
* GRCh38:chr19
to the sequence referred to by refseq:NC_000019.10. | https://github.com/biocommons/biocommons.seqrepo/blob/fb6d88682cb73ee6971cfa47d4dcd90a9c649167/biocommons/seqrepo/cli.py#L200-L265 |
biocommons/biocommons.seqrepo | biocommons/seqrepo/cli.py | snapshot | def snapshot(opts):
"""snapshot a seqrepo data directory by hardlinking sequence files,
copying sqlite databases, and removing write permissions from directories
"""
seqrepo_dir = os.path.join(opts.root_directory, opts.instance_name)
dst_dir = opts.destination_name
if not dst_dir.startswith("/"):
# interpret dst_dir as relative to parent dir of seqrepo_dir
dst_dir = os.path.join(opts.root_directory, dst_dir)
src_dir = os.path.realpath(seqrepo_dir)
dst_dir = os.path.realpath(dst_dir)
if commonpath([src_dir, dst_dir]).startswith(src_dir):
raise RuntimeError("Cannot nest seqrepo directories " "({} is within {})".format(dst_dir, src_dir))
if os.path.exists(dst_dir):
raise IOError(dst_dir + ": File exists")
tmp_dir = tempfile.mkdtemp(prefix=dst_dir + ".")
_logger.debug("src_dir = " + src_dir)
_logger.debug("dst_dir = " + dst_dir)
_logger.debug("tmp_dir = " + tmp_dir)
# TODO: cleanup of tmpdir on failure
makedirs(tmp_dir, exist_ok=True)
wd = os.getcwd()
os.chdir(src_dir)
# make destination directories (walk is top-down)
for rp in (os.path.join(dirpath, dirname) for dirpath, dirnames, _ in os.walk(".") for dirname in dirnames):
dp = os.path.join(tmp_dir, rp)
os.mkdir(dp)
# hard link sequence files
for rp in (os.path.join(dirpath, filename) for dirpath, _, filenames in os.walk(".") for filename in filenames
if ".bgz" in filename):
dp = os.path.join(tmp_dir, rp)
os.link(rp, dp)
# copy sqlite databases
for rp in ["aliases.sqlite3", "sequences/db.sqlite3"]:
dp = os.path.join(tmp_dir, rp)
shutil.copyfile(rp, dp)
# recursively drop write perms on snapshot
mode_aw = stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH
def _drop_write(p):
mode = os.lstat(p).st_mode
new_mode = mode & ~mode_aw
os.chmod(p, new_mode)
for dp in (os.path.join(dirpath, dirent)
for dirpath, dirnames, filenames in os.walk(tmp_dir) for dirent in dirnames + filenames):
_drop_write(dp)
_drop_write(tmp_dir)
os.rename(tmp_dir, dst_dir)
_logger.info("snapshot created in " + dst_dir)
os.chdir(wd) | python | def snapshot(opts):
"""snapshot a seqrepo data directory by hardlinking sequence files,
copying sqlite databases, and removing write permissions from directories
"""
seqrepo_dir = os.path.join(opts.root_directory, opts.instance_name)
dst_dir = opts.destination_name
if not dst_dir.startswith("/"):
# interpret dst_dir as relative to parent dir of seqrepo_dir
dst_dir = os.path.join(opts.root_directory, dst_dir)
src_dir = os.path.realpath(seqrepo_dir)
dst_dir = os.path.realpath(dst_dir)
if commonpath([src_dir, dst_dir]).startswith(src_dir):
raise RuntimeError("Cannot nest seqrepo directories " "({} is within {})".format(dst_dir, src_dir))
if os.path.exists(dst_dir):
raise IOError(dst_dir + ": File exists")
tmp_dir = tempfile.mkdtemp(prefix=dst_dir + ".")
_logger.debug("src_dir = " + src_dir)
_logger.debug("dst_dir = " + dst_dir)
_logger.debug("tmp_dir = " + tmp_dir)
# TODO: cleanup of tmpdir on failure
makedirs(tmp_dir, exist_ok=True)
wd = os.getcwd()
os.chdir(src_dir)
# make destination directories (walk is top-down)
for rp in (os.path.join(dirpath, dirname) for dirpath, dirnames, _ in os.walk(".") for dirname in dirnames):
dp = os.path.join(tmp_dir, rp)
os.mkdir(dp)
# hard link sequence files
for rp in (os.path.join(dirpath, filename) for dirpath, _, filenames in os.walk(".") for filename in filenames
if ".bgz" in filename):
dp = os.path.join(tmp_dir, rp)
os.link(rp, dp)
# copy sqlite databases
for rp in ["aliases.sqlite3", "sequences/db.sqlite3"]:
dp = os.path.join(tmp_dir, rp)
shutil.copyfile(rp, dp)
# recursively drop write perms on snapshot
mode_aw = stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH
def _drop_write(p):
mode = os.lstat(p).st_mode
new_mode = mode & ~mode_aw
os.chmod(p, new_mode)
for dp in (os.path.join(dirpath, dirent)
for dirpath, dirnames, filenames in os.walk(tmp_dir) for dirent in dirnames + filenames):
_drop_write(dp)
_drop_write(tmp_dir)
os.rename(tmp_dir, dst_dir)
_logger.info("snapshot created in " + dst_dir)
os.chdir(wd) | snapshot a seqrepo data directory by hardlinking sequence files,
copying sqlite databases, and removing write permissions from directories | https://github.com/biocommons/biocommons.seqrepo/blob/fb6d88682cb73ee6971cfa47d4dcd90a9c649167/biocommons/seqrepo/cli.py#L423-L486
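The write-permission drop is a plain bit mask over st_mode. A self-contained sketch of just that step on a throwaway file (the printed modes are typical values, not guaranteed):

import os
import stat
import tempfile

mode_aw = stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH

fd, path = tempfile.mkstemp()
os.close(fd)
before = stat.S_IMODE(os.lstat(path).st_mode)
os.chmod(path, before & ~mode_aw)      # same masking as _drop_write()
after = stat.S_IMODE(os.lstat(path).st_mode)
print(oct(before), oct(after))         # e.g. 0o600 0o400
os.chmod(path, before)                 # restore before cleanup
os.remove(path)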
learningequality/iceqube | src/iceqube/storage/backends/default.py | BaseBackend.wait_for_job_update | def wait_for_job_update(self, job_id, timeout=None):
"""
Blocks until a job given by job_id has updated its state (canceled, completed, progress updated, etc.)
if timeout is given and elapses before any update, this function raises iceqube.exceptions.TimeoutError.
:param job_id: the job's job_id to monitor for changes.
:param timeout: if None, wait forever for a job update. If given, wait until timeout seconds, and then raise
iceqube.exceptions.TimeoutError.
:return: the Job object corresponding to job_id.
"""
# internally, we register an Event object for each entry in this function.
# when self.notify_of_job_update() is called, we call Event.set() on all events
# registered for that job, thereby releasing any threads waiting for that specific job.
event = JOB_EVENT_MAPPING[job_id]
event.clear()
result = event.wait(timeout=timeout)
job = self.get_job(job_id)
if result:
return job
else:
raise TimeoutError("Job {} has not received any updates.".format(job_id)) | python | def wait_for_job_update(self, job_id, timeout=None):
"""
Blocks until a job given by job_id has updated its state (canceled, completed, progress updated, etc.)
if timeout is given and elapses before any update, this function raises iceqube.exceptions.TimeoutError.
:param job_id: the job's job_id to monitor for changes.
:param timeout: if None, wait forever for a job update. If given, wait until timeout seconds, and then raise
iceqube.exceptions.TimeoutError.
:return: the Job object corresponding to job_id.
"""
# internally, we register an Event object for each entry in this function.
# when self.notify_of_job_update() is called, we call Event.set() on all events
# registered for that job, thereby releasing any threads waiting for that specific job.
event = JOB_EVENT_MAPPING[job_id]
event.clear()
result = event.wait(timeout=timeout)
job = self.get_job(job_id)
if result:
return job
else:
raise TimeoutError("Job {} has not received any updates.".format(job_id)) | Blocks until a job given by job_id has updated its state (canceled, completed, progress updated, etc.)
if timeout is given and elapses before any update, this function raises iceqube.exceptions.TimeoutError.
:param job_id: the job's job_id to monitor for changes.
:param timeout: if None, wait forever for a job update. If given, wait until timeout seconds, and then raise
iceqube.exceptions.TimeoutError.
:return: the Job object corresponding to job_id. | https://github.com/learningequality/iceqube/blob/97ac9e0f65bfedb0efa9f94638bcb57c7926dea2/src/iceqube/storage/backends/default.py#L45-L67 |
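The blocking wait is built on one threading.Event per job id: a waiter blocks on Event.wait(timeout) and the storage backend calls Event.set() when the job row changes. A minimal standalone sketch of that handshake (the dict name and job id are illustrative, not iceqube's actual objects):

import threading
from collections import defaultdict

job_events = defaultdict(threading.Event)  # stands in for JOB_EVENT_MAPPING

def notify(job_id):
    # Simulates the backend noticing a state change for job_id.
    job_events[job_id].set()

event = job_events["job-42"]
event.clear()
threading.Timer(0.1, notify, args=["job-42"]).start()
print(event.wait(timeout=2.0))  # True: the update arrived before the timeout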
learningequality/iceqube | src/iceqube/common/utils.py | import_stringified_func | def import_stringified_func(funcstring):
"""
Import a string that represents a module and function, e.g. {module}.{funcname}.
Given a function f, import_stringified_func(stringify_func(f)) will return the same function.
:param funcstring: String to try to import
:return: callable
"""
assert isinstance(funcstring, str)
modulestring, funcname = funcstring.rsplit('.', 1)
mod = importlib.import_module(modulestring)
func = getattr(mod, funcname)
return func | python | def import_stringified_func(funcstring):
"""
Import a string that represents a module and function, e.g. {module}.{funcname}.
Given a function f, import_stringified_func(stringify_func(f)) will return the same function.
:param funcstring: String to try to import
:return: callable
"""
assert isinstance(funcstring, str)
modulestring, funcname = funcstring.rsplit('.', 1)
mod = importlib.import_module(modulestring)
func = getattr(mod, funcname)
return func | Import a string that represents a module and function, e.g. {module}.{funcname}.
Given a function f, import_stringified_func(stringify_func(f)) will return the same function.
:param funcstring: String to try to import
:return: callable | https://github.com/learningequality/iceqube/blob/97ac9e0f65bfedb0efa9f94638bcb57c7926dea2/src/iceqube/common/utils.py#L18-L33 |
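The round trip is just rsplit plus importlib; a self-contained sketch using a standard-library function as the target:

import importlib

funcstring = "os.path.join"  # any "{module}.{funcname}" string
modulestring, funcname = funcstring.rsplit(".", 1)
func = getattr(importlib.import_module(modulestring), funcname)
print(func("a", "b"))  # a/b (a\b on Windows)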
learningequality/iceqube | src/iceqube/common/utils.py | EventWaitingThread.main_loop | def main_loop(self, timeout=None):
"""
Check if self.trigger_event is set. If it is, then run our function. If not, return early.
:param timeout: How long to wait for a trigger event. Defaults to 0.
:return:
"""
if self.trigger_event.wait(timeout):
try:
self.func()
except Exception as e:
self.logger.warning("Got an exception running {func}: {e}".format(func=self.func, e=str(e)))
finally:
self.trigger_event.clear() | python | def main_loop(self, timeout=None):
"""
Check if self.trigger_event is set. If it is, then run our function. If not, return early.
:param timeout: How long to wait for a trigger event. Defaults to 0.
:return:
"""
if self.trigger_event.wait(timeout):
try:
self.func()
except Exception as e:
self.logger.warning("Got an exception running {func}: {e}".format(func=self.func, e=str(e)))
finally:
self.trigger_event.clear() | Check if self.trigger_event is set. If it is, then run our function. If not, return early.
:param timeout: How long to wait for a trigger event. Defaults to 0.
:return: | https://github.com/learningequality/iceqube/blob/97ac9e0f65bfedb0efa9f94638bcb57c7926dea2/src/iceqube/common/utils.py#L134-L146 |
learningequality/iceqube | src/iceqube/storage/backends/inmem.py | StorageBackend.set_sqlite_pragmas | def set_sqlite_pragmas(self):
"""
Sets the connection PRAGMAs for the sqlalchemy engine stored in self.engine.
It currently sets:
- journal_mode to WAL
:return: None
"""
def _pragmas_on_connect(dbapi_con, con_record):
dbapi_con.execute("PRAGMA journal_mode = WAL;")
event.listen(self.engine, "connect", _pragmas_on_connect) | python | def set_sqlite_pragmas(self):
"""
Sets the connection PRAGMAs for the sqlalchemy engine stored in self.engine.
It currently sets:
- journal_mode to WAL
:return: None
"""
def _pragmas_on_connect(dbapi_con, con_record):
dbapi_con.execute("PRAGMA journal_mode = WAL;")
event.listen(self.engine, "connect", _pragmas_on_connect) | Sets the connection PRAGMAs for the sqlalchemy engine stored in self.engine.
It currently sets:
- journal_mode to WAL
:return: None | https://github.com/learningequality/iceqube/blob/97ac9e0f65bfedb0efa9f94638bcb57c7926dea2/src/iceqube/storage/backends/inmem.py#L78-L91 |
learningequality/iceqube | src/iceqube/storage/backends/inmem.py | StorageBackend.schedule_job | def schedule_job(self, j):
"""
Add the job given by j to the job queue.
Note: Does not actually run the job.
"""
job_id = uuid.uuid4().hex
j.job_id = job_id
session = self.sessionmaker()
orm_job = ORMJob(
id=job_id,
state=j.state,
app=self.app,
namespace=self.namespace,
obj=j)
session.add(orm_job)
try:
session.commit()
except Exception as e:
logging.error(
"Got an error running session.commit(): {}".format(e))
return job_id | python | def schedule_job(self, j):
"""
Add the job given by j to the job queue.
Note: Does not actually run the job.
"""
job_id = uuid.uuid4().hex
j.job_id = job_id
session = self.sessionmaker()
orm_job = ORMJob(
id=job_id,
state=j.state,
app=self.app,
namespace=self.namespace,
obj=j)
session.add(orm_job)
try:
session.commit()
except Exception as e:
logging.error(
"Got an error running session.commit(): {}".format(e))
return job_id | Add the job given by j to the job queue.
Note: Does not actually run the job. | https://github.com/learningequality/iceqube/blob/97ac9e0f65bfedb0efa9f94638bcb57c7926dea2/src/iceqube/storage/backends/inmem.py#L93-L116 |
learningequality/iceqube | src/iceqube/storage/backends/inmem.py | StorageBackend.mark_job_as_canceling | def mark_job_as_canceling(self, job_id):
"""
Mark the job as requested for canceling. Does not actually try to cancel a running job.
:param job_id: the job to be marked as canceling.
:return: the job object
"""
job, _ = self._update_job_state(job_id, State.CANCELING)
return job | python | def mark_job_as_canceling(self, job_id):
"""
Mark the job as requested for canceling. Does not actually try to cancel a running job.
:param job_id: the job to be marked as canceling.
:return: the job object
"""
job, _ = self._update_job_state(job_id, State.CANCELING)
return job | Mark the job as requested for canceling. Does not actually try to cancel a running job.
:param job_id: the job to be marked as canceling.
:return: the job object | https://github.com/learningequality/iceqube/blob/97ac9e0f65bfedb0efa9f94638bcb57c7926dea2/src/iceqube/storage/backends/inmem.py#L127-L136 |
learningequality/iceqube | src/iceqube/storage/backends/inmem.py | StorageBackend.clear | def clear(self, job_id=None, force=False):
"""
Clear the queue and the job data. If job_id is not given, clear out all
jobs marked COMPLETED. If job_id is given, clear out the given job's
data. This function won't do anything if the job's state is not COMPLETED or FAILED.
:type job_id: NoneType or str
:param job_id: the job_id to clear. If None, clear all jobs.
:type force: bool
:param force: If True, clear the job (or jobs), even if it hasn't completed or failed.
"""
s = self.sessionmaker()
q = self._ns_query(s)
if job_id:
q = q.filter_by(id=job_id)
# filter only by the finished jobs, if we are not specified to force
if not force:
q = q.filter(
or_(ORMJob.state == State.COMPLETED, ORMJob.state == State.FAILED))
q.delete(synchronize_session=False)
s.commit()
s.close() | python | def clear(self, job_id=None, force=False):
"""
Clear the queue and the job data. If job_id is not given, clear out all
jobs marked COMPLETED. If job_id is given, clear out the given job's
data. This function won't do anything if the job's state is not COMPLETED or FAILED.
:type job_id: NoneType or str
:param job_id: the job_id to clear. If None, clear all jobs.
:type force: bool
:param force: If True, clear the job (or jobs), even if it hasn't completed or failed.
"""
s = self.sessionmaker()
q = self._ns_query(s)
if job_id:
q = q.filter_by(id=job_id)
# filter only by the finished jobs, if we are not specified to force
if not force:
q = q.filter(
or_(ORMJob.state == State.COMPLETED, ORMJob.state == State.FAILED))
q.delete(synchronize_session=False)
s.commit()
s.close() | Clear the queue and the job data. If job_id is not given, clear out all
jobs marked COMPLETED. If job_id is given, clear out the given job's
data. This function won't do anything if the job's state is not COMPLETED or FAILED.
:type job_id: NoneType or str
:param job_id: the job_id to clear. If None, clear all jobs.
:type force: bool
:param force: If True, clear the job (or jobs), even if it hasn't completed or failed. | https://github.com/learningequality/iceqube/blob/97ac9e0f65bfedb0efa9f94638bcb57c7926dea2/src/iceqube/storage/backends/inmem.py#L166-L188 |
learningequality/iceqube | src/iceqube/storage/backends/inmem.py | StorageBackend.update_job_progress | def update_job_progress(self, job_id, progress, total_progress):
"""
Update the job given by job_id's progress info.
:type total_progress: int
:type progress: int
:type job_id: str
:param job_id: The id of the job to update
:param progress: The current progress achieved by the job
:param total_progress: The total progress achievable by the job.
:return: the job_id
"""
session = self.sessionmaker()
job, orm_job = self._update_job_state(
job_id, state=State.RUNNING, session=session)
# Note (aron): looks like SQLAlchemy doesn't automatically
# save any pickletype fields even if we re-set (orm_job.obj = job) that
# field. My hunch is that it's tracking the id of the object,
# and if that doesn't change, then SQLAlchemy doesn't repickle the object
# and save to the DB.
# Our hack here is to just copy the job object, and then set the specific
# field we want to edit, in this case the job.state. That forces
# SQLAlchemy to re-pickle the object, thus setting it to the correct state.
job = copy(job)
job.progress = progress
job.total_progress = total_progress
orm_job.obj = job
session.add(orm_job)
session.commit()
session.close()
return job_id | python | def update_job_progress(self, job_id, progress, total_progress):
"""
Update the job given by job_id's progress info.
:type total_progress: int
:type progress: int
:type job_id: str
:param job_id: The id of the job to update
:param progress: The current progress achieved by the job
:param total_progress: The total progress achievable by the job.
:return: the job_id
"""
session = self.sessionmaker()
job, orm_job = self._update_job_state(
job_id, state=State.RUNNING, session=session)
# Note (aron): looks like SQLAlchemy doesn't automatically
# save any pickletype fields even if we re-set (orm_job.obj = job) that
# field. My hunch is that it's tracking the id of the object,
# and if that doesn't change, then SQLAlchemy doesn't repickle the object
# and save to the DB.
# Our hack here is to just copy the job object, and then set the specific
# field we want to edit, in this case the job.state. That forces
# SQLAlchemy to re-pickle the object, thus setting it to the correct state.
job = copy(job)
job.progress = progress
job.total_progress = total_progress
orm_job.obj = job
session.add(orm_job)
session.commit()
session.close()
return job_id | Update the job given by job_id's progress info.
:type total_progress: int
:type progress: int
:type job_id: str
:param job_id: The id of the job to update
:param progress: The current progress achieved by the job
:param total_progress: The total progress achievable by the job.
:return: the job_id | https://github.com/learningequality/iceqube/blob/97ac9e0f65bfedb0efa9f94638bcb57c7926dea2/src/iceqube/storage/backends/inmem.py#L190-L224 |
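The copy() workaround exists because SQLAlchemy's PickleType cannot see in-place mutation of an object it has already loaded; replacing the attribute with a new object forces a re-pickle. An alternative (not what the code above does) is to mutate in place and mark the column dirty with flag_modified. A self-contained sketch, assuming SQLAlchemy 1.4 or newer:

from sqlalchemy import Column, Integer, PickleType, create_engine
from sqlalchemy.orm import Session, declarative_base
from sqlalchemy.orm.attributes import flag_modified

Base = declarative_base()

class Row(Base):
    __tablename__ = "rows"
    id = Column(Integer, primary_key=True)
    obj = Column(PickleType)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Row(id=1, obj={"progress": 0}))
    session.commit()

    row = session.get(Row, 1)
    row.obj["progress"] = 5    # in-place change: not detected on its own
    flag_modified(row, "obj")  # tell SQLAlchemy to re-pickle the column
    session.commit()

    session.expire_all()
    print(session.get(Row, 1).obj)  # {'progress': 5}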
learningequality/iceqube | src/iceqube/storage/backends/inmem.py | StorageBackend.mark_job_as_failed | def mark_job_as_failed(self, job_id, exception, traceback):
"""
Mark the job as failed, and record the traceback and exception.
Args:
job_id: The job_id of the job that failed.
exception: The exception object thrown by the job.
traceback: The traceback, if any. Note (aron): Not implemented yet. We need to find a way
for the concurrent.futures workers to throw back the error to us.
Returns: None
"""
session = self.sessionmaker()
job, orm_job = self._update_job_state(
job_id, State.FAILED, session=session)
# Note (aron): looks like SQLAlchemy doesn't automatically
# save any pickletype fields even if we re-set (orm_job.obj = job) that
# field. My hunch is that it's tracking the id of the object,
# and if that doesn't change, then SQLAlchemy doesn't repickle the object
# and save to the DB.
# Our hack here is to just copy the job object, and then set thespecific
# field we want to edit, in this case the job.state. That forces
# SQLAlchemy to re-pickle the object, thus setting it to the correct state.
job = copy(job)
job.exception = exception
job.traceback = traceback
orm_job.obj = job
session.add(orm_job)
session.commit()
session.close() | python | def mark_job_as_failed(self, job_id, exception, traceback):
"""
Mark the job as failed, and record the traceback and exception.
Args:
job_id: The job_id of the job that failed.
exception: The exception object thrown by the job.
traceback: The traceback, if any. Note (aron): Not implemented yet. We need to find a way
for the concurrent.futures workers to throw back the error to us.
Returns: None
"""
session = self.sessionmaker()
job, orm_job = self._update_job_state(
job_id, State.FAILED, session=session)
# Note (aron): looks like SQLAlchemy doesn't automatically
# save any pickletype fields even if we re-set (orm_job.obj = job) that
# field. My hunch is that it's tracking the id of the object,
# and if that doesn't change, then SQLAlchemy doesn't repickle the object
# and save to the DB.
# Our hack here is to just copy the job object, and then set the specific
# field we want to edit, in this case the job.state. That forces
# SQLAlchemy to re-pickle the object, thus setting it to the correct state.
job = copy(job)
job.exception = exception
job.traceback = traceback
orm_job.obj = job
session.add(orm_job)
session.commit()
session.close() | Mark the job as failed, and record the traceback and exception.
Args:
job_id: The job_id of the job that failed.
exception: The exception object thrown by the job.
traceback: The traceback, if any. Note (aron): Not implemented yet. We need to find a way
for the concurrent.futures workers to throw back the error to us.
Returns: None | https://github.com/learningequality/iceqube/blob/97ac9e0f65bfedb0efa9f94638bcb57c7926dea2/src/iceqube/storage/backends/inmem.py#L235-L267 |
learningequality/iceqube | src/iceqube/storage/backends/inmem.py | StorageBackend._ns_query | def _ns_query(self, session):
"""
Return a SQLAlchemy query that is already namespaced by the app and namespace given to this backend
during initialization.
Returns: a SQLAlchemy query object
"""
return session.query(ORMJob).filter(ORMJob.app == self.app,
ORMJob.namespace == self.namespace) | python | def _ns_query(self, session):
"""
Return a SQLAlchemy query that is already namespaced by the app and namespace given to this backend
during initialization.
Returns: a SQLAlchemy query object
"""
return session.query(ORMJob).filter(ORMJob.app == self.app,
ORMJob.namespace == self.namespace) | Return a SQLAlchemy query that is already namespaced by the app and namespace given to this backend
during initialization.
Returns: a SQLAlchemy query object | https://github.com/learningequality/iceqube/blob/97ac9e0f65bfedb0efa9f94638bcb57c7926dea2/src/iceqube/storage/backends/inmem.py#L304-L312 |