repo_name
stringclasses 1
value | pr_number
int64 4.12k
11.2k
| pr_title
stringlengths 9
107
| pr_description
stringlengths 107
5.48k
| author
stringlengths 4
18
| date_created
unknown | date_merged
unknown | previous_commit
stringlengths 40
40
| pr_commit
stringlengths 40
40
| query
stringlengths 118
5.52k
| before_content
stringlengths 0
7.93M
| after_content
stringlengths 0
7.93M
| label
int64 -1
1
|
---|---|---|---|---|---|---|---|---|---|---|---|---|
TheAlgorithms/Python | 5,782 | [mypy] Fix type annotations for maths directory | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Rohanrbharadwaj | "2021-11-06T13:38:09Z" | "2021-11-07T15:13:59Z" | db5aa1d18890439e4108fa416679dbab5859f30c | a98465230f21e6ece76332eeca1558613788c387 | [mypy] Fix type annotations for maths directory. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 5,782 | [mypy] Fix type annotations for maths directory | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Rohanrbharadwaj | "2021-11-06T13:38:09Z" | "2021-11-07T15:13:59Z" | db5aa1d18890439e4108fa416679dbab5859f30c | a98465230f21e6ece76332eeca1558613788c387 | [mypy] Fix type annotations for maths directory. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| class FenwickTree:
def __init__(self, SIZE): # create fenwick tree with size SIZE
self.Size = SIZE
self.ft = [0 for i in range(0, SIZE)]
def update(self, i, val): # update data (adding) in index i in O(lg N)
while i < self.Size:
self.ft[i] += val
i += i & (-i)
def query(self, i): # query cumulative data from index 0 to i in O(lg N)
ret = 0
while i > 0:
ret += self.ft[i]
i -= i & (-i)
return ret
if __name__ == "__main__":
f = FenwickTree(100)
f.update(1, 20)
f.update(4, 4)
print(f.query(1))
print(f.query(3))
print(f.query(4))
f.update(2, -5)
print(f.query(1))
print(f.query(3))
| class FenwickTree:
def __init__(self, SIZE): # create fenwick tree with size SIZE
self.Size = SIZE
self.ft = [0 for i in range(0, SIZE)]
def update(self, i, val): # update data (adding) in index i in O(lg N)
while i < self.Size:
self.ft[i] += val
i += i & (-i)
def query(self, i): # query cumulative data from index 0 to i in O(lg N)
ret = 0
while i > 0:
ret += self.ft[i]
i -= i & (-i)
return ret
if __name__ == "__main__":
f = FenwickTree(100)
f.update(1, 20)
f.update(4, 4)
print(f.query(1))
print(f.query(3))
print(f.query(4))
f.update(2, -5)
print(f.query(1))
print(f.query(3))
| -1 |
TheAlgorithms/Python | 5,782 | [mypy] Fix type annotations for maths directory | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Rohanrbharadwaj | "2021-11-06T13:38:09Z" | "2021-11-07T15:13:59Z" | db5aa1d18890439e4108fa416679dbab5859f30c | a98465230f21e6ece76332eeca1558613788c387 | [mypy] Fix type annotations for maths directory. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 5,782 | [mypy] Fix type annotations for maths directory | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Rohanrbharadwaj | "2021-11-06T13:38:09Z" | "2021-11-07T15:13:59Z" | db5aa1d18890439e4108fa416679dbab5859f30c | a98465230f21e6ece76332eeca1558613788c387 | [mypy] Fix type annotations for maths directory. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 58:https://projecteuler.net/problem=58
Starting with 1 and spiralling anticlockwise in the following way,
a square spiral with side length 7 is formed.
37 36 35 34 33 32 31
38 17 16 15 14 13 30
39 18 5 4 3 12 29
40 19 6 1 2 11 28
41 20 7 8 9 10 27
42 21 22 23 24 25 26
43 44 45 46 47 48 49
It is interesting to note that the odd squares lie along the bottom right
diagonal ,but what is more interesting is that 8 out of the 13 numbers
lying along both diagonals are prime; that is, a ratio of 8/13 ≈ 62%.
If one complete new layer is wrapped around the spiral above,
a square spiral with side length 9 will be formed.
If this process is continued,
what is the side length of the square spiral for which
the ratio of primes along both diagonals first falls below 10%?
Solution: We have to find an odd length side for which square falls below
10%. With every layer we add 4 elements are being added to the diagonals
,lets say we have a square spiral of odd length with side length j,
then if we move from j to j+2, we are adding j*j+j+1,j*j+2*(j+1),j*j+3*(j+1)
j*j+4*(j+1). Out of these 4 only the first three can become prime
because last one reduces to (j+2)*(j+2).
So we check individually each one of these before incrementing our
count of current primes.
"""
from math import isqrt
def isprime(number: int) -> int:
"""
returns whether the given number is prime or not
>>> isprime(1)
0
>>> isprime(17)
1
>>> isprime(10000)
0
"""
if number == 1:
return 0
if number % 2 == 0 and number > 2:
return 0
for i in range(3, isqrt(number) + 1, 2):
if number % i == 0:
return 0
return 1
def solution(ratio: float = 0.1) -> int:
"""
returns the side length of the square spiral of odd length greater
than 1 for which the ratio of primes along both diagonals
first falls below the given ratio.
>>> solution(.5)
11
>>> solution(.2)
309
>>> solution(.111)
11317
"""
j = 3
primes = 3
while primes / (2 * j - 1) >= ratio:
for i in range(j * j + j + 1, (j + 2) * (j + 2), j + 1):
primes = primes + isprime(i)
j = j + 2
return j
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Project Euler Problem 58:https://projecteuler.net/problem=58
Starting with 1 and spiralling anticlockwise in the following way,
a square spiral with side length 7 is formed.
37 36 35 34 33 32 31
38 17 16 15 14 13 30
39 18 5 4 3 12 29
40 19 6 1 2 11 28
41 20 7 8 9 10 27
42 21 22 23 24 25 26
43 44 45 46 47 48 49
It is interesting to note that the odd squares lie along the bottom right
diagonal ,but what is more interesting is that 8 out of the 13 numbers
lying along both diagonals are prime; that is, a ratio of 8/13 ≈ 62%.
If one complete new layer is wrapped around the spiral above,
a square spiral with side length 9 will be formed.
If this process is continued,
what is the side length of the square spiral for which
the ratio of primes along both diagonals first falls below 10%?
Solution: We have to find an odd length side for which square falls below
10%. With every layer we add 4 elements are being added to the diagonals
,lets say we have a square spiral of odd length with side length j,
then if we move from j to j+2, we are adding j*j+j+1,j*j+2*(j+1),j*j+3*(j+1)
j*j+4*(j+1). Out of these 4 only the first three can become prime
because last one reduces to (j+2)*(j+2).
So we check individually each one of these before incrementing our
count of current primes.
"""
from math import isqrt
def isprime(number: int) -> int:
"""
returns whether the given number is prime or not
>>> isprime(1)
0
>>> isprime(17)
1
>>> isprime(10000)
0
"""
if number == 1:
return 0
if number % 2 == 0 and number > 2:
return 0
for i in range(3, isqrt(number) + 1, 2):
if number % i == 0:
return 0
return 1
def solution(ratio: float = 0.1) -> int:
"""
returns the side length of the square spiral of odd length greater
than 1 for which the ratio of primes along both diagonals
first falls below the given ratio.
>>> solution(.5)
11
>>> solution(.2)
309
>>> solution(.111)
11317
"""
j = 3
primes = 3
while primes / (2 * j - 1) >= ratio:
for i in range(j * j + j + 1, (j + 2) * (j + 2), j + 1):
primes = primes + isprime(i)
j = j + 2
return j
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 5,782 | [mypy] Fix type annotations for maths directory | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Rohanrbharadwaj | "2021-11-06T13:38:09Z" | "2021-11-07T15:13:59Z" | db5aa1d18890439e4108fa416679dbab5859f30c | a98465230f21e6ece76332eeca1558613788c387 | [mypy] Fix type annotations for maths directory. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 5,782 | [mypy] Fix type annotations for maths directory | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Rohanrbharadwaj | "2021-11-06T13:38:09Z" | "2021-11-07T15:13:59Z" | db5aa1d18890439e4108fa416679dbab5859f30c | a98465230f21e6ece76332eeca1558613788c387 | [mypy] Fix type annotations for maths directory. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 5,782 | [mypy] Fix type annotations for maths directory | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Rohanrbharadwaj | "2021-11-06T13:38:09Z" | "2021-11-07T15:13:59Z" | db5aa1d18890439e4108fa416679dbab5859f30c | a98465230f21e6ece76332eeca1558613788c387 | [mypy] Fix type annotations for maths directory. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| #!/usr/bin/python
"""Author Anurag Kumar | [email protected] | git/anuragkumarak95
Simple example of Fractal generation using recursive function.
What is Sierpinski Triangle?
>>The Sierpinski triangle (also with the original orthography Sierpinski), also called
the Sierpinski gasket or the Sierpinski Sieve, is a fractal and attractive fixed set
with the overall shape of an equilateral triangle, subdivided recursively into smaller
equilateral triangles. Originally constructed as a curve, this is one of the basic
examples of self-similar sets, i.e., it is a mathematically generated pattern that can
be reproducible at any magnification or reduction. It is named after the Polish
mathematician Wacław Sierpinski, but appeared as a decorative pattern many centuries
prior to the work of Sierpinski.
Requirements(pip):
- turtle
Python:
- 2.6
Usage:
- $python sierpinski_triangle.py <int:depth_for_fractal>
Credits: This code was written by editing the code from
http://www.riannetrujillo.com/blog/python-fractal/
"""
import sys
import turtle
PROGNAME = "Sierpinski Triangle"
points = [[-175, -125], [0, 175], [175, -125]] # size of triangle
def getMid(p1, p2):
return ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2) # find midpoint
def triangle(points, depth):
myPen.up()
myPen.goto(points[0][0], points[0][1])
myPen.down()
myPen.goto(points[1][0], points[1][1])
myPen.goto(points[2][0], points[2][1])
myPen.goto(points[0][0], points[0][1])
if depth > 0:
triangle(
[points[0], getMid(points[0], points[1]), getMid(points[0], points[2])],
depth - 1,
)
triangle(
[points[1], getMid(points[0], points[1]), getMid(points[1], points[2])],
depth - 1,
)
triangle(
[points[2], getMid(points[2], points[1]), getMid(points[0], points[2])],
depth - 1,
)
if __name__ == "__main__":
if len(sys.argv) != 2:
raise ValueError(
"right format for using this script: "
"$python fractals.py <int:depth_for_fractal>"
)
myPen = turtle.Turtle()
myPen.ht()
myPen.speed(5)
myPen.pencolor("red")
triangle(points, int(sys.argv[1]))
| #!/usr/bin/python
"""Author Anurag Kumar | [email protected] | git/anuragkumarak95
Simple example of Fractal generation using recursive function.
What is Sierpinski Triangle?
>>The Sierpinski triangle (also with the original orthography Sierpinski), also called
the Sierpinski gasket or the Sierpinski Sieve, is a fractal and attractive fixed set
with the overall shape of an equilateral triangle, subdivided recursively into smaller
equilateral triangles. Originally constructed as a curve, this is one of the basic
examples of self-similar sets, i.e., it is a mathematically generated pattern that can
be reproducible at any magnification or reduction. It is named after the Polish
mathematician Wacław Sierpinski, but appeared as a decorative pattern many centuries
prior to the work of Sierpinski.
Requirements(pip):
- turtle
Python:
- 2.6
Usage:
- $python sierpinski_triangle.py <int:depth_for_fractal>
Credits: This code was written by editing the code from
http://www.riannetrujillo.com/blog/python-fractal/
"""
import sys
import turtle
PROGNAME = "Sierpinski Triangle"
points = [[-175, -125], [0, 175], [175, -125]] # size of triangle
def getMid(p1, p2):
return ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2) # find midpoint
def triangle(points, depth):
myPen.up()
myPen.goto(points[0][0], points[0][1])
myPen.down()
myPen.goto(points[1][0], points[1][1])
myPen.goto(points[2][0], points[2][1])
myPen.goto(points[0][0], points[0][1])
if depth > 0:
triangle(
[points[0], getMid(points[0], points[1]), getMid(points[0], points[2])],
depth - 1,
)
triangle(
[points[1], getMid(points[0], points[1]), getMid(points[1], points[2])],
depth - 1,
)
triangle(
[points[2], getMid(points[2], points[1]), getMid(points[0], points[2])],
depth - 1,
)
if __name__ == "__main__":
if len(sys.argv) != 2:
raise ValueError(
"right format for using this script: "
"$python fractals.py <int:depth_for_fractal>"
)
myPen = turtle.Turtle()
myPen.ht()
myPen.speed(5)
myPen.pencolor("red")
triangle(points, int(sys.argv[1]))
| -1 |
TheAlgorithms/Python | 5,782 | [mypy] Fix type annotations for maths directory | ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| Rohanrbharadwaj | "2021-11-06T13:38:09Z" | "2021-11-07T15:13:59Z" | db5aa1d18890439e4108fa416679dbab5859f30c | a98465230f21e6ece76332eeca1558613788c387 | [mypy] Fix type annotations for maths directory. ### Describe your change:
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """Topological Sort."""
# a
# / \
# b c
# / \
# d e
edges = {"a": ["c", "b"], "b": ["d", "e"], "c": [], "d": [], "e": []}
vertices = ["a", "b", "c", "d", "e"]
def topological_sort(start, visited, sort):
"""Perform topological sort on a directed acyclic graph."""
current = start
# add current to visited
visited.append(current)
neighbors = edges[current]
for neighbor in neighbors:
# if neighbor not in visited, visit
if neighbor not in visited:
sort = topological_sort(neighbor, visited, sort)
# if all neighbors visited add current to sort
sort.append(current)
# if all vertices haven't been visited select a new one to visit
if len(visited) != len(vertices):
for vertice in vertices:
if vertice not in visited:
sort = topological_sort(vertice, visited, sort)
# return sort
return sort
if __name__ == "__main__":
sort = topological_sort("a", [], [])
print(sort)
| """Topological Sort."""
# a
# / \
# b c
# / \
# d e
edges = {"a": ["c", "b"], "b": ["d", "e"], "c": [], "d": [], "e": []}
vertices = ["a", "b", "c", "d", "e"]
def topological_sort(start, visited, sort):
"""Perform topological sort on a directed acyclic graph."""
current = start
# add current to visited
visited.append(current)
neighbors = edges[current]
for neighbor in neighbors:
# if neighbor not in visited, visit
if neighbor not in visited:
sort = topological_sort(neighbor, visited, sort)
# if all neighbors visited add current to sort
sort.append(current)
# if all vertices haven't been visited select a new one to visit
if len(visited) != len(vertices):
for vertice in vertices:
if vertice not in visited:
sort = topological_sort(vertice, visited, sort)
# return sort
return sort
if __name__ == "__main__":
sort = topological_sort("a", [], [])
print(sort)
| -1 |
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| [mypy]
ignore_missing_imports = True
install_types = True
non_interactive = True
exclude = (other/least_recently_used.py|other/lfu_cache.py|other/lru_cache.py)
| [mypy]
ignore_missing_imports = True
install_types = True
non_interactive = True
exclude = (other/least_recently_used.py)
| 1 |
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from __future__ import annotations
from typing import Callable
class DoubleLinkedListNode:
"""
Double Linked List Node built specifically for LFU Cache
"""
def __init__(self, key: int, val: int):
self.key = key
self.val = val
self.freq = 0
self.next = None
self.prev = None
class DoubleLinkedList:
"""
Double Linked List built specifically for LFU Cache
"""
def __init__(self):
self.head = DoubleLinkedListNode(None, None)
self.rear = DoubleLinkedListNode(None, None)
self.head.next, self.rear.prev = self.rear, self.head
def add(self, node: DoubleLinkedListNode) -> None:
"""
Adds the given node at the head of the list and shifting it to proper position
"""
temp = self.rear.prev
self.rear.prev, node.next = node, self.rear
temp.next, node.prev = node, temp
node.freq += 1
self._position_node(node)
def _position_node(self, node: DoubleLinkedListNode) -> None:
while node.prev.key and node.prev.freq > node.freq:
node1, node2 = node, node.prev
node1.prev, node2.next = node2.prev, node1.prev
node1.next, node2.prev = node2, node1
def remove(self, node: DoubleLinkedListNode) -> DoubleLinkedListNode:
"""
Removes and returns the given node from the list
"""
temp_last, temp_next = node.prev, node.next
node.prev, node.next = None, None
temp_last.next, temp_next.prev = temp_next, temp_last
return node
class LFUCache:
"""
LFU Cache to store a given capacity of data. Can be used as a stand-alone object
or as a function decorator.
>>> cache = LFUCache(2)
>>> cache.set(1, 1)
>>> cache.set(2, 2)
>>> cache.get(1)
1
>>> cache.set(3, 3)
>>> cache.get(2) # None is returned
>>> cache.set(4, 4)
>>> cache.get(1) # None is returned
>>> cache.get(3)
3
>>> cache.get(4)
4
>>> cache
CacheInfo(hits=3, misses=2, capacity=2, current_size=2)
>>> @LFUCache.decorator(100)
... def fib(num):
... if num in (1, 2):
... return 1
... return fib(num - 1) + fib(num - 2)
>>> for i in range(1, 101):
... res = fib(i)
>>> fib.cache_info()
CacheInfo(hits=196, misses=100, capacity=100, current_size=100)
"""
# class variable to map the decorator functions to their respective instance
decorator_function_to_instance_map = {}
def __init__(self, capacity: int):
self.list = DoubleLinkedList()
self.capacity = capacity
self.num_keys = 0
self.hits = 0
self.miss = 0
self.cache = {}
def __repr__(self) -> str:
"""
Return the details for the cache instance
[hits, misses, capacity, current_size]
"""
return (
f"CacheInfo(hits={self.hits}, misses={self.miss}, "
f"capacity={self.capacity}, current_size={self.num_keys})"
)
def __contains__(self, key: int) -> bool:
"""
>>> cache = LFUCache(1)
>>> 1 in cache
False
>>> cache.set(1, 1)
>>> 1 in cache
True
"""
return key in self.cache
def get(self, key: int) -> int | None:
"""
Returns the value for the input key and updates the Double Linked List. Returns
None if key is not present in cache
"""
if key in self.cache:
self.hits += 1
self.list.add(self.list.remove(self.cache[key]))
return self.cache[key].val
self.miss += 1
return None
def set(self, key: int, value: int) -> None:
"""
Sets the value for the input key and updates the Double Linked List
"""
if key not in self.cache:
if self.num_keys >= self.capacity:
key_to_delete = self.list.head.next.key
self.list.remove(self.cache[key_to_delete])
del self.cache[key_to_delete]
self.num_keys -= 1
self.cache[key] = DoubleLinkedListNode(key, value)
self.list.add(self.cache[key])
self.num_keys += 1
else:
node = self.list.remove(self.cache[key])
node.val = value
self.list.add(node)
@staticmethod
def decorator(size: int = 128):
"""
Decorator version of LFU Cache
"""
def cache_decorator_inner(func: Callable):
def cache_decorator_wrapper(*args, **kwargs):
if func not in LFUCache.decorator_function_to_instance_map:
LFUCache.decorator_function_to_instance_map[func] = LFUCache(size)
result = LFUCache.decorator_function_to_instance_map[func].get(args[0])
if result is None:
result = func(*args, **kwargs)
LFUCache.decorator_function_to_instance_map[func].set(
args[0], result
)
return result
def cache_info():
return LFUCache.decorator_function_to_instance_map[func]
cache_decorator_wrapper.cache_info = cache_info
return cache_decorator_wrapper
return cache_decorator_inner
if __name__ == "__main__":
import doctest
doctest.testmod()
| from __future__ import annotations
from typing import Callable, Generic, TypeVar
T = TypeVar("T")
U = TypeVar("U")
class DoubleLinkedListNode(Generic[T, U]):
"""
Double Linked List Node built specifically for LFU Cache
>>> node = DoubleLinkedListNode(1,1)
>>> node
Node: key: 1, val: 1, freq: 0, has next: False, has prev: False
"""
def __init__(self, key: T | None, val: U | None):
self.key = key
self.val = val
self.freq: int = 0
self.next: DoubleLinkedListNode[T, U] | None = None
self.prev: DoubleLinkedListNode[T, U] | None = None
def __repr__(self) -> str:
return "Node: key: {}, val: {}, freq: {}, has next: {}, has prev: {}".format(
self.key, self.val, self.freq, self.next is not None, self.prev is not None
)
class DoubleLinkedList(Generic[T, U]):
"""
Double Linked List built specifically for LFU Cache
>>> dll: DoubleLinkedList = DoubleLinkedList()
>>> dll
DoubleLinkedList,
Node: key: None, val: None, freq: 0, has next: True, has prev: False,
Node: key: None, val: None, freq: 0, has next: False, has prev: True
>>> first_node = DoubleLinkedListNode(1,10)
>>> first_node
Node: key: 1, val: 10, freq: 0, has next: False, has prev: False
>>> dll.add(first_node)
>>> dll
DoubleLinkedList,
Node: key: None, val: None, freq: 0, has next: True, has prev: False,
Node: key: 1, val: 10, freq: 1, has next: True, has prev: True,
Node: key: None, val: None, freq: 0, has next: False, has prev: True
>>> # node is mutated
>>> first_node
Node: key: 1, val: 10, freq: 1, has next: True, has prev: True
>>> second_node = DoubleLinkedListNode(2,20)
>>> second_node
Node: key: 2, val: 20, freq: 0, has next: False, has prev: False
>>> dll.add(second_node)
>>> dll
DoubleLinkedList,
Node: key: None, val: None, freq: 0, has next: True, has prev: False,
Node: key: 1, val: 10, freq: 1, has next: True, has prev: True,
Node: key: 2, val: 20, freq: 1, has next: True, has prev: True,
Node: key: None, val: None, freq: 0, has next: False, has prev: True
>>> removed_node = dll.remove(first_node)
>>> assert removed_node == first_node
>>> dll
DoubleLinkedList,
Node: key: None, val: None, freq: 0, has next: True, has prev: False,
Node: key: 2, val: 20, freq: 1, has next: True, has prev: True,
Node: key: None, val: None, freq: 0, has next: False, has prev: True
>>> # Attempt to remove node not on list
>>> removed_node = dll.remove(first_node)
>>> removed_node is None
True
>>> # Attempt to remove head or rear
>>> dll.head
Node: key: None, val: None, freq: 0, has next: True, has prev: False
>>> dll.remove(dll.head) is None
True
>>> # Attempt to remove head or rear
>>> dll.rear
Node: key: None, val: None, freq: 0, has next: False, has prev: True
>>> dll.remove(dll.rear) is None
True
"""
def __init__(self) -> None:
self.head: DoubleLinkedListNode[T, U] = DoubleLinkedListNode(None, None)
self.rear: DoubleLinkedListNode[T, U] = DoubleLinkedListNode(None, None)
self.head.next, self.rear.prev = self.rear, self.head
def __repr__(self) -> str:
rep = ["DoubleLinkedList"]
node = self.head
while node.next is not None:
rep.append(str(node))
node = node.next
rep.append(str(self.rear))
return ",\n ".join(rep)
def add(self, node: DoubleLinkedListNode[T, U]) -> None:
"""
Adds the given node at the tail of the list and shifting it to proper position
"""
previous = self.rear.prev
# All nodes other than self.head are guaranteed to have non-None previous
assert previous is not None
previous.next = node
node.prev = previous
self.rear.prev = node
node.next = self.rear
node.freq += 1
self._position_node(node)
def _position_node(self, node: DoubleLinkedListNode[T, U]) -> None:
"""
Moves node forward to maintain invariant of sort by freq value
"""
while node.prev is not None and node.prev.freq > node.freq:
# swap node with previous node
previous_node = node.prev
node.prev = previous_node.prev
previous_node.next = node.prev
node.next = previous_node
previous_node.prev = node
def remove(
self, node: DoubleLinkedListNode[T, U]
) -> DoubleLinkedListNode[T, U] | None:
"""
Removes and returns the given node from the list
Returns None if node.prev or node.next is None
"""
if node.prev is None or node.next is None:
return None
node.prev.next = node.next
node.next.prev = node.prev
node.prev = None
node.next = None
return node
class LFUCache(Generic[T, U]):
"""
LFU Cache to store a given capacity of data. Can be used as a stand-alone object
or as a function decorator.
>>> cache = LFUCache(2)
>>> cache.set(1, 1)
>>> cache.set(2, 2)
>>> cache.get(1)
1
>>> cache.set(3, 3)
>>> cache.get(2) is None
True
>>> cache.set(4, 4)
>>> cache.get(1) is None
True
>>> cache.get(3)
3
>>> cache.get(4)
4
>>> cache
CacheInfo(hits=3, misses=2, capacity=2, current_size=2)
>>> @LFUCache.decorator(100)
... def fib(num):
... if num in (1, 2):
... return 1
... return fib(num - 1) + fib(num - 2)
>>> for i in range(1, 101):
... res = fib(i)
>>> fib.cache_info()
CacheInfo(hits=196, misses=100, capacity=100, current_size=100)
"""
# class variable to map the decorator functions to their respective instance
decorator_function_to_instance_map: dict[Callable[[T], U], LFUCache[T, U]] = {}
def __init__(self, capacity: int):
self.list: DoubleLinkedList[T, U] = DoubleLinkedList()
self.capacity = capacity
self.num_keys = 0
self.hits = 0
self.miss = 0
self.cache: dict[T, DoubleLinkedListNode[T, U]] = {}
def __repr__(self) -> str:
"""
Return the details for the cache instance
[hits, misses, capacity, current_size]
"""
return (
f"CacheInfo(hits={self.hits}, misses={self.miss}, "
f"capacity={self.capacity}, current_size={self.num_keys})"
)
def __contains__(self, key: T) -> bool:
"""
>>> cache = LFUCache(1)
>>> 1 in cache
False
>>> cache.set(1, 1)
>>> 1 in cache
True
"""
return key in self.cache
def get(self, key: T) -> U | None:
"""
Returns the value for the input key and updates the Double Linked List. Returns
Returns None if key is not present in cache
"""
if key in self.cache:
self.hits += 1
value_node: DoubleLinkedListNode[T, U] = self.cache[key]
node = self.list.remove(self.cache[key])
assert node == value_node
# node is guaranteed not None because it is in self.cache
assert node is not None
self.list.add(node)
return node.val
self.miss += 1
return None
def set(self, key: T, value: U) -> None:
"""
Sets the value for the input key and updates the Double Linked List
"""
if key not in self.cache:
if self.num_keys >= self.capacity:
# delete first node when over capacity
first_node = self.list.head.next
# guaranteed to have a non-None first node when num_keys > 0
# explain to type checker via assertions
assert first_node is not None
assert first_node.key is not None
assert self.list.remove(first_node) is not None
# first_node guaranteed to be in list
del self.cache[first_node.key]
self.num_keys -= 1
self.cache[key] = DoubleLinkedListNode(key, value)
self.list.add(self.cache[key])
self.num_keys += 1
else:
node = self.list.remove(self.cache[key])
assert node is not None # node guaranteed to be in list
node.val = value
self.list.add(node)
@classmethod
def decorator(
cls: type[LFUCache[T, U]], size: int = 128
) -> Callable[[Callable[[T], U]], Callable[..., U]]:
"""
Decorator version of LFU Cache
Decorated function must be function of T -> U
"""
def cache_decorator_inner(func: Callable[[T], U]) -> Callable[..., U]:
def cache_decorator_wrapper(*args: T) -> U:
if func not in cls.decorator_function_to_instance_map:
cls.decorator_function_to_instance_map[func] = LFUCache(size)
result = cls.decorator_function_to_instance_map[func].get(args[0])
if result is None:
result = func(*args)
cls.decorator_function_to_instance_map[func].set(args[0], result)
return result
def cache_info() -> LFUCache[T, U]:
return cls.decorator_function_to_instance_map[func]
setattr(cache_decorator_wrapper, "cache_info", cache_info)
return cache_decorator_wrapper
return cache_decorator_inner
if __name__ == "__main__":
import doctest
doctest.testmod()
| 1 |
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from __future__ import annotations
from typing import Callable
class DoubleLinkedListNode:
"""
Double Linked List Node built specifically for LRU Cache
"""
def __init__(self, key: int, val: int):
self.key = key
self.val = val
self.next = None
self.prev = None
class DoubleLinkedList:
"""
Double Linked List built specifically for LRU Cache
"""
def __init__(self):
self.head = DoubleLinkedListNode(None, None)
self.rear = DoubleLinkedListNode(None, None)
self.head.next, self.rear.prev = self.rear, self.head
def add(self, node: DoubleLinkedListNode) -> None:
"""
Adds the given node to the end of the list (before rear)
"""
temp = self.rear.prev
temp.next, node.prev = node, temp
self.rear.prev, node.next = node, self.rear
def remove(self, node: DoubleLinkedListNode) -> DoubleLinkedListNode:
"""
Removes and returns the given node from the list
"""
temp_last, temp_next = node.prev, node.next
node.prev, node.next = None, None
temp_last.next, temp_next.prev = temp_next, temp_last
return node
class LRUCache:
"""
LRU Cache to store a given capacity of data. Can be used as a stand-alone object
or as a function decorator.
>>> cache = LRUCache(2)
>>> cache.set(1, 1)
>>> cache.set(2, 2)
>>> cache.get(1)
1
>>> cache.set(3, 3)
>>> cache.get(2) # None returned
>>> cache.set(4, 4)
>>> cache.get(1) # None returned
>>> cache.get(3)
3
>>> cache.get(4)
4
>>> cache
CacheInfo(hits=3, misses=2, capacity=2, current size=2)
>>> @LRUCache.decorator(100)
... def fib(num):
... if num in (1, 2):
... return 1
... return fib(num - 1) + fib(num - 2)
>>> for i in range(1, 100):
... res = fib(i)
>>> fib.cache_info()
CacheInfo(hits=194, misses=99, capacity=100, current size=99)
"""
# class variable to map the decorator functions to their respective instance
decorator_function_to_instance_map = {}
def __init__(self, capacity: int):
self.list = DoubleLinkedList()
self.capacity = capacity
self.num_keys = 0
self.hits = 0
self.miss = 0
self.cache = {}
def __repr__(self) -> str:
"""
Return the details for the cache instance
[hits, misses, capacity, current_size]
"""
return (
f"CacheInfo(hits={self.hits}, misses={self.miss}, "
f"capacity={self.capacity}, current size={self.num_keys})"
)
def __contains__(self, key: int) -> bool:
"""
>>> cache = LRUCache(1)
>>> 1 in cache
False
>>> cache.set(1, 1)
>>> 1 in cache
True
"""
return key in self.cache
def get(self, key: int) -> int | None:
"""
Returns the value for the input key and updates the Double Linked List. Returns
None if key is not present in cache
"""
if key in self.cache:
self.hits += 1
self.list.add(self.list.remove(self.cache[key]))
return self.cache[key].val
self.miss += 1
return None
def set(self, key: int, value: int) -> None:
"""
Sets the value for the input key and updates the Double Linked List
"""
if key not in self.cache:
if self.num_keys >= self.capacity:
key_to_delete = self.list.head.next.key
self.list.remove(self.cache[key_to_delete])
del self.cache[key_to_delete]
self.num_keys -= 1
self.cache[key] = DoubleLinkedListNode(key, value)
self.list.add(self.cache[key])
self.num_keys += 1
else:
node = self.list.remove(self.cache[key])
node.val = value
self.list.add(node)
@staticmethod
def decorator(size: int = 128):
"""
Decorator version of LRU Cache
"""
def cache_decorator_inner(func: Callable):
def cache_decorator_wrapper(*args, **kwargs):
if func not in LRUCache.decorator_function_to_instance_map:
LRUCache.decorator_function_to_instance_map[func] = LRUCache(size)
result = LRUCache.decorator_function_to_instance_map[func].get(args[0])
if result is None:
result = func(*args, **kwargs)
LRUCache.decorator_function_to_instance_map[func].set(
args[0], result
)
return result
def cache_info():
return LRUCache.decorator_function_to_instance_map[func]
cache_decorator_wrapper.cache_info = cache_info
return cache_decorator_wrapper
return cache_decorator_inner
if __name__ == "__main__":
import doctest
doctest.testmod()
| from __future__ import annotations
from typing import Callable, Generic, TypeVar
T = TypeVar("T")
U = TypeVar("U")
class DoubleLinkedListNode(Generic[T, U]):
"""
Double Linked List Node built specifically for LRU Cache
>>> DoubleLinkedListNode(1,1)
Node: key: 1, val: 1, has next: False, has prev: False
"""
def __init__(self, key: T | None, val: U | None):
self.key = key
self.val = val
self.next: DoubleLinkedListNode[T, U] | None = None
self.prev: DoubleLinkedListNode[T, U] | None = None
def __repr__(self) -> str:
return "Node: key: {}, val: {}, has next: {}, has prev: {}".format(
self.key, self.val, self.next is not None, self.prev is not None
)
class DoubleLinkedList(Generic[T, U]):
"""
Double Linked List built specifically for LRU Cache
>>> dll: DoubleLinkedList = DoubleLinkedList()
>>> dll
DoubleLinkedList,
Node: key: None, val: None, has next: True, has prev: False,
Node: key: None, val: None, has next: False, has prev: True
>>> first_node = DoubleLinkedListNode(1,10)
>>> first_node
Node: key: 1, val: 10, has next: False, has prev: False
>>> dll.add(first_node)
>>> dll
DoubleLinkedList,
Node: key: None, val: None, has next: True, has prev: False,
Node: key: 1, val: 10, has next: True, has prev: True,
Node: key: None, val: None, has next: False, has prev: True
>>> # node is mutated
>>> first_node
Node: key: 1, val: 10, has next: True, has prev: True
>>> second_node = DoubleLinkedListNode(2,20)
>>> second_node
Node: key: 2, val: 20, has next: False, has prev: False
>>> dll.add(second_node)
>>> dll
DoubleLinkedList,
Node: key: None, val: None, has next: True, has prev: False,
Node: key: 1, val: 10, has next: True, has prev: True,
Node: key: 2, val: 20, has next: True, has prev: True,
Node: key: None, val: None, has next: False, has prev: True
>>> removed_node = dll.remove(first_node)
>>> assert removed_node == first_node
>>> dll
DoubleLinkedList,
Node: key: None, val: None, has next: True, has prev: False,
Node: key: 2, val: 20, has next: True, has prev: True,
Node: key: None, val: None, has next: False, has prev: True
>>> # Attempt to remove node not on list
>>> removed_node = dll.remove(first_node)
>>> removed_node is None
True
>>> # Attempt to remove head or rear
>>> dll.head
Node: key: None, val: None, has next: True, has prev: False
>>> dll.remove(dll.head) is None
True
>>> # Attempt to remove head or rear
>>> dll.rear
Node: key: None, val: None, has next: False, has prev: True
>>> dll.remove(dll.rear) is None
True
"""
def __init__(self) -> None:
self.head: DoubleLinkedListNode[T, U] = DoubleLinkedListNode(None, None)
self.rear: DoubleLinkedListNode[T, U] = DoubleLinkedListNode(None, None)
self.head.next, self.rear.prev = self.rear, self.head
def __repr__(self) -> str:
rep = ["DoubleLinkedList"]
node = self.head
while node.next is not None:
rep.append(str(node))
node = node.next
rep.append(str(self.rear))
return ",\n ".join(rep)
def add(self, node: DoubleLinkedListNode[T, U]) -> None:
"""
Adds the given node to the end of the list (before rear)
"""
previous = self.rear.prev
# All nodes other than self.head are guaranteed to have non-None previous
assert previous is not None
previous.next = node
node.prev = previous
self.rear.prev = node
node.next = self.rear
def remove(
self, node: DoubleLinkedListNode[T, U]
) -> DoubleLinkedListNode[T, U] | None:
"""
Removes and returns the given node from the list
Returns None if node.prev or node.next is None
"""
if node.prev is None or node.next is None:
return None
node.prev.next = node.next
node.next.prev = node.prev
node.prev = None
node.next = None
return node
class LRUCache(Generic[T, U]):
"""
LRU Cache to store a given capacity of data. Can be used as a stand-alone object
or as a function decorator.
>>> cache = LRUCache(2)
>>> cache.set(1, 1)
>>> cache.set(2, 2)
>>> cache.get(1)
1
>>> cache.list
DoubleLinkedList,
Node: key: None, val: None, has next: True, has prev: False,
Node: key: 2, val: 2, has next: True, has prev: True,
Node: key: 1, val: 1, has next: True, has prev: True,
Node: key: None, val: None, has next: False, has prev: True
>>> cache.cache # doctest: +NORMALIZE_WHITESPACE
{1: Node: key: 1, val: 1, has next: True, has prev: True, \
2: Node: key: 2, val: 2, has next: True, has prev: True}
>>> cache.set(3, 3)
>>> cache.list
DoubleLinkedList,
Node: key: None, val: None, has next: True, has prev: False,
Node: key: 1, val: 1, has next: True, has prev: True,
Node: key: 3, val: 3, has next: True, has prev: True,
Node: key: None, val: None, has next: False, has prev: True
>>> cache.cache # doctest: +NORMALIZE_WHITESPACE
{1: Node: key: 1, val: 1, has next: True, has prev: True, \
3: Node: key: 3, val: 3, has next: True, has prev: True}
>>> cache.get(2) is None
True
>>> cache.set(4, 4)
>>> cache.get(1) is None
True
>>> cache.get(3)
3
>>> cache.get(4)
4
>>> cache
CacheInfo(hits=3, misses=2, capacity=2, current size=2)
>>> @LRUCache.decorator(100)
... def fib(num):
... if num in (1, 2):
... return 1
... return fib(num - 1) + fib(num - 2)
>>> for i in range(1, 100):
... res = fib(i)
>>> fib.cache_info()
CacheInfo(hits=194, misses=99, capacity=100, current size=99)
"""
# class variable to map the decorator functions to their respective instance
decorator_function_to_instance_map: dict[Callable[[T], U], LRUCache[T, U]] = {}
def __init__(self, capacity: int):
self.list: DoubleLinkedList[T, U] = DoubleLinkedList()
self.capacity = capacity
self.num_keys = 0
self.hits = 0
self.miss = 0
self.cache: dict[T, DoubleLinkedListNode[T, U]] = {}
def __repr__(self) -> str:
"""
Return the details for the cache instance
[hits, misses, capacity, current_size]
"""
return (
f"CacheInfo(hits={self.hits}, misses={self.miss}, "
f"capacity={self.capacity}, current size={self.num_keys})"
)
def __contains__(self, key: T) -> bool:
"""
>>> cache = LRUCache(1)
>>> 1 in cache
False
>>> cache.set(1, 1)
>>> 1 in cache
True
"""
return key in self.cache
def get(self, key: T) -> U | None:
"""
Returns the value for the input key and updates the Double Linked List.
Returns None if key is not present in cache
"""
# Note: pythonic interface would throw KeyError rather than return None
if key in self.cache:
self.hits += 1
value_node: DoubleLinkedListNode[T, U] = self.cache[key]
node = self.list.remove(self.cache[key])
assert node == value_node
# node is guaranteed not None because it is in self.cache
assert node is not None
self.list.add(node)
return node.val
self.miss += 1
return None
def set(self, key: T, value: U) -> None:
"""
Sets the value for the input key and updates the Double Linked List
"""
if key not in self.cache:
if self.num_keys >= self.capacity:
# delete first node (oldest) when over capacity
first_node = self.list.head.next
# guaranteed to have a non-None first node when num_keys > 0
# explain to type checker via assertions
assert first_node is not None
assert first_node.key is not None
assert (
self.list.remove(first_node) is not None
) # node guaranteed to be in list assert node.key is not None
del self.cache[first_node.key]
self.num_keys -= 1
self.cache[key] = DoubleLinkedListNode(key, value)
self.list.add(self.cache[key])
self.num_keys += 1
else:
# bump node to the end of the list, update value
node = self.list.remove(self.cache[key])
assert node is not None # node guaranteed to be in list
node.val = value
self.list.add(node)
@classmethod
def decorator(
cls, size: int = 128
) -> Callable[[Callable[[T], U]], Callable[..., U]]:
"""
Decorator version of LRU Cache
Decorated function must be function of T -> U
"""
def cache_decorator_inner(func: Callable[[T], U]) -> Callable[..., U]:
def cache_decorator_wrapper(*args: T) -> U:
if func not in cls.decorator_function_to_instance_map:
cls.decorator_function_to_instance_map[func] = LRUCache(size)
result = cls.decorator_function_to_instance_map[func].get(args[0])
if result is None:
result = func(*args)
cls.decorator_function_to_instance_map[func].set(args[0], result)
return result
def cache_info() -> LRUCache[T, U]:
return cls.decorator_function_to_instance_map[func]
setattr(cache_decorator_wrapper, "cache_info", cache_info)
return cache_decorator_wrapper
return cache_decorator_inner
if __name__ == "__main__":
import doctest
doctest.testmod()
| 1 |
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """Matrix Exponentiation"""
import timeit
"""
Matrix Exponentiation is a technique to solve linear recurrences in logarithmic time.
You read more about it here:
http://zobayer.blogspot.com/2010/11/matrix-exponentiation.html
https://www.hackerearth.com/practice/notes/matrix-exponentiation-1/
"""
class Matrix:
def __init__(self, arg):
if isinstance(arg, list): # Initializes a matrix identical to the one provided.
self.t = arg
self.n = len(arg)
else: # Initializes a square matrix of the given size and set values to zero.
self.n = arg
self.t = [[0 for _ in range(self.n)] for _ in range(self.n)]
def __mul__(self, b):
matrix = Matrix(self.n)
for i in range(self.n):
for j in range(self.n):
for k in range(self.n):
matrix.t[i][j] += self.t[i][k] * b.t[k][j]
return matrix
def modular_exponentiation(a, b):
matrix = Matrix([[1, 0], [0, 1]])
while b > 0:
if b & 1:
matrix *= a
a *= a
b >>= 1
return matrix
def fibonacci_with_matrix_exponentiation(n, f1, f2):
# Trivial Cases
if n == 1:
return f1
elif n == 2:
return f2
matrix = Matrix([[1, 1], [1, 0]])
matrix = modular_exponentiation(matrix, n - 2)
return f2 * matrix.t[0][0] + f1 * matrix.t[0][1]
def simple_fibonacci(n, f1, f2):
# Trivial Cases
if n == 1:
return f1
elif n == 2:
return f2
fn_1 = f1
fn_2 = f2
n -= 2
while n > 0:
fn_1, fn_2 = fn_1 + fn_2, fn_1
n -= 1
return fn_1
def matrix_exponentiation_time():
setup = """
from random import randint
from __main__ import fibonacci_with_matrix_exponentiation
"""
code = "fibonacci_with_matrix_exponentiation(randint(1,70000), 1, 1)"
exec_time = timeit.timeit(setup=setup, stmt=code, number=100)
print("With matrix exponentiation the average execution time is ", exec_time / 100)
return exec_time
def simple_fibonacci_time():
setup = """
from random import randint
from __main__ import simple_fibonacci
"""
code = "simple_fibonacci(randint(1,70000), 1, 1)"
exec_time = timeit.timeit(setup=setup, stmt=code, number=100)
print(
"Without matrix exponentiation the average execution time is ", exec_time / 100
)
return exec_time
def main():
matrix_exponentiation_time()
simple_fibonacci_time()
if __name__ == "__main__":
main()
| """Matrix Exponentiation"""
import timeit
"""
Matrix Exponentiation is a technique to solve linear recurrences in logarithmic time.
You read more about it here:
http://zobayer.blogspot.com/2010/11/matrix-exponentiation.html
https://www.hackerearth.com/practice/notes/matrix-exponentiation-1/
"""
class Matrix:
def __init__(self, arg):
if isinstance(arg, list): # Initializes a matrix identical to the one provided.
self.t = arg
self.n = len(arg)
else: # Initializes a square matrix of the given size and set values to zero.
self.n = arg
self.t = [[0 for _ in range(self.n)] for _ in range(self.n)]
def __mul__(self, b):
matrix = Matrix(self.n)
for i in range(self.n):
for j in range(self.n):
for k in range(self.n):
matrix.t[i][j] += self.t[i][k] * b.t[k][j]
return matrix
def modular_exponentiation(a, b):
matrix = Matrix([[1, 0], [0, 1]])
while b > 0:
if b & 1:
matrix *= a
a *= a
b >>= 1
return matrix
def fibonacci_with_matrix_exponentiation(n, f1, f2):
# Trivial Cases
if n == 1:
return f1
elif n == 2:
return f2
matrix = Matrix([[1, 1], [1, 0]])
matrix = modular_exponentiation(matrix, n - 2)
return f2 * matrix.t[0][0] + f1 * matrix.t[0][1]
def simple_fibonacci(n, f1, f2):
# Trivial Cases
if n == 1:
return f1
elif n == 2:
return f2
fn_1 = f1
fn_2 = f2
n -= 2
while n > 0:
fn_1, fn_2 = fn_1 + fn_2, fn_1
n -= 1
return fn_1
def matrix_exponentiation_time():
setup = """
from random import randint
from __main__ import fibonacci_with_matrix_exponentiation
"""
code = "fibonacci_with_matrix_exponentiation(randint(1,70000), 1, 1)"
exec_time = timeit.timeit(setup=setup, stmt=code, number=100)
print("With matrix exponentiation the average execution time is ", exec_time / 100)
return exec_time
def simple_fibonacci_time():
setup = """
from random import randint
from __main__ import simple_fibonacci
"""
code = "simple_fibonacci(randint(1,70000), 1, 1)"
exec_time = timeit.timeit(setup=setup, stmt=code, number=100)
print(
"Without matrix exponentiation the average execution time is ", exec_time / 100
)
return exec_time
def main():
matrix_exponentiation_time()
simple_fibonacci_time()
if __name__ == "__main__":
main()
| -1 |
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| def mixed_keyword(key: str = "college", pt: str = "UNIVERSITY") -> str:
"""
For key:hello
H E L O
A B C D
F G I J
K M N P
Q R S T
U V W X
Y Z
and map vertically
>>> mixed_keyword("college", "UNIVERSITY") # doctest: +NORMALIZE_WHITESPACE
{'A': 'C', 'B': 'A', 'C': 'I', 'D': 'P', 'E': 'U', 'F': 'Z', 'G': 'O', 'H': 'B',
'I': 'J', 'J': 'Q', 'K': 'V', 'L': 'L', 'M': 'D', 'N': 'K', 'O': 'R', 'P': 'W',
'Q': 'E', 'R': 'F', 'S': 'M', 'T': 'S', 'U': 'X', 'V': 'G', 'W': 'H', 'X': 'N',
'Y': 'T', 'Z': 'Y'}
'XKJGUFMJST'
"""
key = key.upper()
pt = pt.upper()
temp = []
for i in key:
if i not in temp:
temp.append(i)
len_temp = len(temp)
# print(temp)
alpha = []
modalpha = []
for j in range(65, 91):
t = chr(j)
alpha.append(t)
if t not in temp:
temp.append(t)
# print(temp)
r = int(26 / 4)
# print(r)
k = 0
for _ in range(r):
s = []
for j in range(len_temp):
s.append(temp[k])
if not (k < 25):
break
k += 1
modalpha.append(s)
# print(modalpha)
d = {}
j = 0
k = 0
for j in range(len_temp):
for m in modalpha:
if not (len(m) - 1 >= j):
break
d[alpha[k]] = m[j]
if not k < 25:
break
k += 1
print(d)
cypher = ""
for i in pt:
cypher += d[i]
return cypher
print(mixed_keyword("college", "UNIVERSITY"))
| def mixed_keyword(key: str = "college", pt: str = "UNIVERSITY") -> str:
"""
For key:hello
H E L O
A B C D
F G I J
K M N P
Q R S T
U V W X
Y Z
and map vertically
>>> mixed_keyword("college", "UNIVERSITY") # doctest: +NORMALIZE_WHITESPACE
{'A': 'C', 'B': 'A', 'C': 'I', 'D': 'P', 'E': 'U', 'F': 'Z', 'G': 'O', 'H': 'B',
'I': 'J', 'J': 'Q', 'K': 'V', 'L': 'L', 'M': 'D', 'N': 'K', 'O': 'R', 'P': 'W',
'Q': 'E', 'R': 'F', 'S': 'M', 'T': 'S', 'U': 'X', 'V': 'G', 'W': 'H', 'X': 'N',
'Y': 'T', 'Z': 'Y'}
'XKJGUFMJST'
"""
key = key.upper()
pt = pt.upper()
temp = []
for i in key:
if i not in temp:
temp.append(i)
len_temp = len(temp)
# print(temp)
alpha = []
modalpha = []
for j in range(65, 91):
t = chr(j)
alpha.append(t)
if t not in temp:
temp.append(t)
# print(temp)
r = int(26 / 4)
# print(r)
k = 0
for _ in range(r):
s = []
for j in range(len_temp):
s.append(temp[k])
if not (k < 25):
break
k += 1
modalpha.append(s)
# print(modalpha)
d = {}
j = 0
k = 0
for j in range(len_temp):
for m in modalpha:
if not (len(m) - 1 >= j):
break
d[alpha[k]] = m[j]
if not k < 25:
break
k += 1
print(d)
cypher = ""
for i in pt:
cypher += d[i]
return cypher
print(mixed_keyword("college", "UNIVERSITY"))
| -1 |
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Amicable Numbers
Problem 21
Let d(n) be defined as the sum of proper divisors of n (numbers less than n
which divide evenly into n).
If d(a) = b and d(b) = a, where a ≠ b, then a and b are an amicable pair and
each of a and b are called amicable numbers.
For example, the proper divisors of 220 are 1, 2, 4, 5, 10, 11, 20, 22, 44, 55
and 110; therefore d(220) = 284. The proper divisors of 284 are 1, 2, 4, 71 and
142; so d(284) = 220.
Evaluate the sum of all the amicable numbers under 10000.
"""
from math import sqrt
def sum_of_divisors(n: int) -> int:
total = 0
for i in range(1, int(sqrt(n) + 1)):
if n % i == 0 and i != sqrt(n):
total += i + n // i
elif i == sqrt(n):
total += i
return total - n
def solution(n: int = 10000) -> int:
"""Returns the sum of all the amicable numbers under n.
>>> solution(10000)
31626
>>> solution(5000)
8442
>>> solution(1000)
504
>>> solution(100)
0
>>> solution(50)
0
"""
total = sum(
i
for i in range(1, n)
if sum_of_divisors(sum_of_divisors(i)) == i and sum_of_divisors(i) != i
)
return total
if __name__ == "__main__":
print(solution(int(str(input()).strip())))
| """
Amicable Numbers
Problem 21
Let d(n) be defined as the sum of proper divisors of n (numbers less than n
which divide evenly into n).
If d(a) = b and d(b) = a, where a ≠ b, then a and b are an amicable pair and
each of a and b are called amicable numbers.
For example, the proper divisors of 220 are 1, 2, 4, 5, 10, 11, 20, 22, 44, 55
and 110; therefore d(220) = 284. The proper divisors of 284 are 1, 2, 4, 71 and
142; so d(284) = 220.
Evaluate the sum of all the amicable numbers under 10000.
"""
from math import sqrt
def sum_of_divisors(n: int) -> int:
total = 0
for i in range(1, int(sqrt(n) + 1)):
if n % i == 0 and i != sqrt(n):
total += i + n // i
elif i == sqrt(n):
total += i
return total - n
def solution(n: int = 10000) -> int:
"""Returns the sum of all the amicable numbers under n.
>>> solution(10000)
31626
>>> solution(5000)
8442
>>> solution(1000)
504
>>> solution(100)
0
>>> solution(50)
0
"""
total = sum(
i
for i in range(1, n)
if sum_of_divisors(sum_of_divisors(i)) == i and sum_of_divisors(i) != i
)
return total
if __name__ == "__main__":
print(solution(int(str(input()).strip())))
| -1 |
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from __future__ import annotations
from collections import deque
class Automaton:
def __init__(self, keywords: list[str]):
self.adlist: list[dict] = list()
self.adlist.append(
{"value": "", "next_states": [], "fail_state": 0, "output": []}
)
for keyword in keywords:
self.add_keyword(keyword)
self.set_fail_transitions()
def find_next_state(self, current_state: int, char: str) -> int | None:
for state in self.adlist[current_state]["next_states"]:
if char == self.adlist[state]["value"]:
return state
return None
def add_keyword(self, keyword: str) -> None:
current_state = 0
for character in keyword:
next_state = self.find_next_state(current_state, character)
if next_state is None:
self.adlist.append(
{
"value": character,
"next_states": [],
"fail_state": 0,
"output": [],
}
)
self.adlist[current_state]["next_states"].append(len(self.adlist) - 1)
current_state = len(self.adlist) - 1
else:
current_state = next_state
self.adlist[current_state]["output"].append(keyword)
def set_fail_transitions(self) -> None:
q: deque = deque()
for node in self.adlist[0]["next_states"]:
q.append(node)
self.adlist[node]["fail_state"] = 0
while q:
r = q.popleft()
for child in self.adlist[r]["next_states"]:
q.append(child)
state = self.adlist[r]["fail_state"]
while (
self.find_next_state(state, self.adlist[child]["value"]) is None
and state != 0
):
state = self.adlist[state]["fail_state"]
self.adlist[child]["fail_state"] = self.find_next_state(
state, self.adlist[child]["value"]
)
if self.adlist[child]["fail_state"] is None:
self.adlist[child]["fail_state"] = 0
self.adlist[child]["output"] = (
self.adlist[child]["output"]
+ self.adlist[self.adlist[child]["fail_state"]]["output"]
)
def search_in(self, string: str) -> dict[str, list[int]]:
"""
>>> A = Automaton(["what", "hat", "ver", "er"])
>>> A.search_in("whatever, err ... , wherever")
{'what': [0], 'hat': [1], 'ver': [5, 25], 'er': [6, 10, 22, 26]}
"""
result: dict = (
dict()
) # returns a dict with keywords and list of its occurrences
current_state = 0
for i in range(len(string)):
while (
self.find_next_state(current_state, string[i]) is None
and current_state != 0
):
current_state = self.adlist[current_state]["fail_state"]
next_state = self.find_next_state(current_state, string[i])
if next_state is None:
current_state = 0
else:
current_state = next_state
for key in self.adlist[current_state]["output"]:
if not (key in result):
result[key] = []
result[key].append(i - len(key) + 1)
return result
if __name__ == "__main__":
import doctest
doctest.testmod()
| from __future__ import annotations
from collections import deque
class Automaton:
def __init__(self, keywords: list[str]):
self.adlist: list[dict] = list()
self.adlist.append(
{"value": "", "next_states": [], "fail_state": 0, "output": []}
)
for keyword in keywords:
self.add_keyword(keyword)
self.set_fail_transitions()
def find_next_state(self, current_state: int, char: str) -> int | None:
for state in self.adlist[current_state]["next_states"]:
if char == self.adlist[state]["value"]:
return state
return None
def add_keyword(self, keyword: str) -> None:
current_state = 0
for character in keyword:
next_state = self.find_next_state(current_state, character)
if next_state is None:
self.adlist.append(
{
"value": character,
"next_states": [],
"fail_state": 0,
"output": [],
}
)
self.adlist[current_state]["next_states"].append(len(self.adlist) - 1)
current_state = len(self.adlist) - 1
else:
current_state = next_state
self.adlist[current_state]["output"].append(keyword)
def set_fail_transitions(self) -> None:
q: deque = deque()
for node in self.adlist[0]["next_states"]:
q.append(node)
self.adlist[node]["fail_state"] = 0
while q:
r = q.popleft()
for child in self.adlist[r]["next_states"]:
q.append(child)
state = self.adlist[r]["fail_state"]
while (
self.find_next_state(state, self.adlist[child]["value"]) is None
and state != 0
):
state = self.adlist[state]["fail_state"]
self.adlist[child]["fail_state"] = self.find_next_state(
state, self.adlist[child]["value"]
)
if self.adlist[child]["fail_state"] is None:
self.adlist[child]["fail_state"] = 0
self.adlist[child]["output"] = (
self.adlist[child]["output"]
+ self.adlist[self.adlist[child]["fail_state"]]["output"]
)
def search_in(self, string: str) -> dict[str, list[int]]:
"""
>>> A = Automaton(["what", "hat", "ver", "er"])
>>> A.search_in("whatever, err ... , wherever")
{'what': [0], 'hat': [1], 'ver': [5, 25], 'er': [6, 10, 22, 26]}
"""
result: dict = (
dict()
) # returns a dict with keywords and list of its occurrences
current_state = 0
for i in range(len(string)):
while (
self.find_next_state(current_state, string[i]) is None
and current_state != 0
):
current_state = self.adlist[current_state]["fail_state"]
next_state = self.find_next_state(current_state, string[i])
if next_state is None:
current_state = 0
else:
current_state = next_state
for key in self.adlist[current_state]["output"]:
if not (key in result):
result[key] = []
result[key].append(i - len(key) + 1)
return result
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| def binary_recursive(decimal: int) -> str:
"""
Take a positive integer value and return its binary equivalent.
>>> binary_recursive(1000)
'1111101000'
>>> binary_recursive("72")
'1001000'
>>> binary_recursive("number")
Traceback (most recent call last):
...
ValueError: invalid literal for int() with base 10: 'number'
"""
decimal = int(decimal)
if decimal in (0, 1): # Exit cases for the recursion
return str(decimal)
div, mod = divmod(decimal, 2)
return binary_recursive(div) + str(mod)
def main(number: str) -> str:
"""
Take an integer value and raise ValueError for wrong inputs,
call the function above and return the output with prefix "0b" & "-0b"
for positive and negative integers respectively.
>>> main(0)
'0b0'
>>> main(40)
'0b101000'
>>> main(-40)
'-0b101000'
>>> main(40.8)
Traceback (most recent call last):
...
ValueError: Input value is not an integer
>>> main("forty")
Traceback (most recent call last):
...
ValueError: Input value is not an integer
"""
number = str(number).strip()
if not number:
raise ValueError("No input value was provided")
negative = "-" if number.startswith("-") else ""
number = number.lstrip("-")
if not number.isnumeric():
raise ValueError("Input value is not an integer")
return f"{negative}0b{binary_recursive(int(number))}"
if __name__ == "__main__":
from doctest import testmod
testmod()
| def binary_recursive(decimal: int) -> str:
"""
Take a positive integer value and return its binary equivalent.
>>> binary_recursive(1000)
'1111101000'
>>> binary_recursive("72")
'1001000'
>>> binary_recursive("number")
Traceback (most recent call last):
...
ValueError: invalid literal for int() with base 10: 'number'
"""
decimal = int(decimal)
if decimal in (0, 1): # Exit cases for the recursion
return str(decimal)
div, mod = divmod(decimal, 2)
return binary_recursive(div) + str(mod)
def main(number: str) -> str:
"""
Take an integer value and raise ValueError for wrong inputs,
call the function above and return the output with prefix "0b" & "-0b"
for positive and negative integers respectively.
>>> main(0)
'0b0'
>>> main(40)
'0b101000'
>>> main(-40)
'-0b101000'
>>> main(40.8)
Traceback (most recent call last):
...
ValueError: Input value is not an integer
>>> main("forty")
Traceback (most recent call last):
...
ValueError: Input value is not an integer
"""
number = str(number).strip()
if not number:
raise ValueError("No input value was provided")
negative = "-" if number.startswith("-") else ""
number = number.lstrip("-")
if not number.isnumeric():
raise ValueError("Input value is not an integer")
return f"{negative}0b{binary_recursive(int(number))}"
if __name__ == "__main__":
from doctest import testmod
testmod()
| -1 |
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Is IP v4 address valid?
A valid IP address must be four octets in the form of A.B.C.D,
where A,B,C and D are numbers from 0-254
for example: 192.168.23.1, 172.254.254.254 are valid IP address
192.168.255.0, 255.192.3.121 are invalid IP address
"""
def is_ip_v4_address_valid(ip_v4_address: str) -> bool:
"""
print "Valid IP address" If IP is valid.
or
print "Invalid IP address" If IP is invalid.
>>> is_ip_v4_address_valid("192.168.0.23")
True
>>> is_ip_v4_address_valid("192.255.15.8")
False
>>> is_ip_v4_address_valid("172.100.0.8")
True
>>> is_ip_v4_address_valid("254.255.0.255")
False
>>> is_ip_v4_address_valid("1.2.33333333.4")
False
>>> is_ip_v4_address_valid("1.2.-3.4")
False
>>> is_ip_v4_address_valid("1.2.3")
False
>>> is_ip_v4_address_valid("1.2.3.4.5")
False
>>> is_ip_v4_address_valid("1.2.A.4")
False
>>> is_ip_v4_address_valid("0.0.0.0")
True
>>> is_ip_v4_address_valid("1.2.3.")
False
"""
octets = [int(i) for i in ip_v4_address.split(".") if i.isdigit()]
return len(octets) == 4 and all(0 <= int(octet) <= 254 for octet in octets)
if __name__ == "__main__":
ip = input().strip()
valid_or_invalid = "valid" if is_ip_v4_address_valid(ip) else "invalid"
print(f"{ip} is a {valid_or_invalid} IP v4 address.")
| """
Is IP v4 address valid?
A valid IP address must be four octets in the form of A.B.C.D,
where A,B,C and D are numbers from 0-254
for example: 192.168.23.1, 172.254.254.254 are valid IP address
192.168.255.0, 255.192.3.121 are invalid IP address
"""
def is_ip_v4_address_valid(ip_v4_address: str) -> bool:
"""
print "Valid IP address" If IP is valid.
or
print "Invalid IP address" If IP is invalid.
>>> is_ip_v4_address_valid("192.168.0.23")
True
>>> is_ip_v4_address_valid("192.255.15.8")
False
>>> is_ip_v4_address_valid("172.100.0.8")
True
>>> is_ip_v4_address_valid("254.255.0.255")
False
>>> is_ip_v4_address_valid("1.2.33333333.4")
False
>>> is_ip_v4_address_valid("1.2.-3.4")
False
>>> is_ip_v4_address_valid("1.2.3")
False
>>> is_ip_v4_address_valid("1.2.3.4.5")
False
>>> is_ip_v4_address_valid("1.2.A.4")
False
>>> is_ip_v4_address_valid("0.0.0.0")
True
>>> is_ip_v4_address_valid("1.2.3.")
False
"""
octets = [int(i) for i in ip_v4_address.split(".") if i.isdigit()]
return len(octets) == 4 and all(0 <= int(octet) <= 254 for octet in octets)
if __name__ == "__main__":
ip = input().strip()
valid_or_invalid = "valid" if is_ip_v4_address_valid(ip) else "invalid"
print(f"{ip} is a {valid_or_invalid} IP v4 address.")
| -1 |
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| class Graph:
"""
Data structure to store graphs (based on adjacency lists)
"""
def __init__(self):
self.num_vertices = 0
self.num_edges = 0
self.adjacency = {}
def add_vertex(self, vertex):
"""
Adds a vertex to the graph
"""
if vertex not in self.adjacency:
self.adjacency[vertex] = {}
self.num_vertices += 1
def add_edge(self, head, tail, weight):
"""
Adds an edge to the graph
"""
self.add_vertex(head)
self.add_vertex(tail)
if head == tail:
return
self.adjacency[head][tail] = weight
self.adjacency[tail][head] = weight
def distinct_weight(self):
"""
For Boruvks's algorithm the weights should be distinct
Converts the weights to be distinct
"""
edges = self.get_edges()
for edge in edges:
head, tail, weight = edge
edges.remove((tail, head, weight))
for i in range(len(edges)):
edges[i] = list(edges[i])
edges.sort(key=lambda e: e[2])
for i in range(len(edges) - 1):
if edges[i][2] >= edges[i + 1][2]:
edges[i + 1][2] = edges[i][2] + 1
for edge in edges:
head, tail, weight = edge
self.adjacency[head][tail] = weight
self.adjacency[tail][head] = weight
def __str__(self):
"""
Returns string representation of the graph
"""
string = ""
for tail in self.adjacency:
for head in self.adjacency[tail]:
weight = self.adjacency[head][tail]
string += "%d -> %d == %d\n" % (head, tail, weight)
return string.rstrip("\n")
def get_edges(self):
"""
Returna all edges in the graph
"""
output = []
for tail in self.adjacency:
for head in self.adjacency[tail]:
output.append((tail, head, self.adjacency[head][tail]))
return output
def get_vertices(self):
"""
Returns all vertices in the graph
"""
return self.adjacency.keys()
@staticmethod
def build(vertices=None, edges=None):
"""
Builds a graph from the given set of vertices and edges
"""
g = Graph()
if vertices is None:
vertices = []
if edges is None:
edge = []
for vertex in vertices:
g.add_vertex(vertex)
for edge in edges:
g.add_edge(*edge)
return g
class UnionFind:
"""
Disjoint set Union and Find for Boruvka's algorithm
"""
def __init__(self):
self.parent = {}
self.rank = {}
def __len__(self):
return len(self.parent)
def make_set(self, item):
if item in self.parent:
return self.find(item)
self.parent[item] = item
self.rank[item] = 0
return item
def find(self, item):
if item not in self.parent:
return self.make_set(item)
if item != self.parent[item]:
self.parent[item] = self.find(self.parent[item])
return self.parent[item]
def union(self, item1, item2):
root1 = self.find(item1)
root2 = self.find(item2)
if root1 == root2:
return root1
if self.rank[root1] > self.rank[root2]:
self.parent[root2] = root1
return root1
if self.rank[root1] < self.rank[root2]:
self.parent[root1] = root2
return root2
if self.rank[root1] == self.rank[root2]:
self.rank[root1] += 1
self.parent[root2] = root1
return root1
@staticmethod
def boruvka_mst(graph):
"""
Implementation of Boruvka's algorithm
>>> g = Graph()
>>> g = Graph.build([0, 1, 2, 3], [[0, 1, 1], [0, 2, 1],[2, 3, 1]])
>>> g.distinct_weight()
>>> bg = Graph.boruvka_mst(g)
>>> print(bg)
1 -> 0 == 1
2 -> 0 == 2
0 -> 1 == 1
0 -> 2 == 2
3 -> 2 == 3
2 -> 3 == 3
"""
num_components = graph.num_vertices
union_find = Graph.UnionFind()
mst_edges = []
while num_components > 1:
cheap_edge = {}
for vertex in graph.get_vertices():
cheap_edge[vertex] = -1
edges = graph.get_edges()
for edge in edges:
head, tail, weight = edge
edges.remove((tail, head, weight))
for edge in edges:
head, tail, weight = edge
set1 = union_find.find(head)
set2 = union_find.find(tail)
if set1 != set2:
if cheap_edge[set1] == -1 or cheap_edge[set1][2] > weight:
cheap_edge[set1] = [head, tail, weight]
if cheap_edge[set2] == -1 or cheap_edge[set2][2] > weight:
cheap_edge[set2] = [head, tail, weight]
for vertex in cheap_edge:
if cheap_edge[vertex] != -1:
head, tail, weight = cheap_edge[vertex]
if union_find.find(head) != union_find.find(tail):
union_find.union(head, tail)
mst_edges.append(cheap_edge[vertex])
num_components = num_components - 1
mst = Graph.build(edges=mst_edges)
return mst
| class Graph:
"""
Data structure to store graphs (based on adjacency lists)
"""
def __init__(self):
self.num_vertices = 0
self.num_edges = 0
self.adjacency = {}
def add_vertex(self, vertex):
"""
Adds a vertex to the graph
"""
if vertex not in self.adjacency:
self.adjacency[vertex] = {}
self.num_vertices += 1
def add_edge(self, head, tail, weight):
"""
Adds an edge to the graph
"""
self.add_vertex(head)
self.add_vertex(tail)
if head == tail:
return
self.adjacency[head][tail] = weight
self.adjacency[tail][head] = weight
def distinct_weight(self):
"""
For Boruvks's algorithm the weights should be distinct
Converts the weights to be distinct
"""
edges = self.get_edges()
for edge in edges:
head, tail, weight = edge
edges.remove((tail, head, weight))
for i in range(len(edges)):
edges[i] = list(edges[i])
edges.sort(key=lambda e: e[2])
for i in range(len(edges) - 1):
if edges[i][2] >= edges[i + 1][2]:
edges[i + 1][2] = edges[i][2] + 1
for edge in edges:
head, tail, weight = edge
self.adjacency[head][tail] = weight
self.adjacency[tail][head] = weight
def __str__(self):
"""
Returns string representation of the graph
"""
string = ""
for tail in self.adjacency:
for head in self.adjacency[tail]:
weight = self.adjacency[head][tail]
string += "%d -> %d == %d\n" % (head, tail, weight)
return string.rstrip("\n")
def get_edges(self):
"""
Returna all edges in the graph
"""
output = []
for tail in self.adjacency:
for head in self.adjacency[tail]:
output.append((tail, head, self.adjacency[head][tail]))
return output
def get_vertices(self):
"""
Returns all vertices in the graph
"""
return self.adjacency.keys()
@staticmethod
def build(vertices=None, edges=None):
"""
Builds a graph from the given set of vertices and edges
"""
g = Graph()
if vertices is None:
vertices = []
if edges is None:
edge = []
for vertex in vertices:
g.add_vertex(vertex)
for edge in edges:
g.add_edge(*edge)
return g
class UnionFind:
"""
Disjoint set Union and Find for Boruvka's algorithm
"""
def __init__(self):
self.parent = {}
self.rank = {}
def __len__(self):
return len(self.parent)
def make_set(self, item):
if item in self.parent:
return self.find(item)
self.parent[item] = item
self.rank[item] = 0
return item
def find(self, item):
if item not in self.parent:
return self.make_set(item)
if item != self.parent[item]:
self.parent[item] = self.find(self.parent[item])
return self.parent[item]
def union(self, item1, item2):
root1 = self.find(item1)
root2 = self.find(item2)
if root1 == root2:
return root1
if self.rank[root1] > self.rank[root2]:
self.parent[root2] = root1
return root1
if self.rank[root1] < self.rank[root2]:
self.parent[root1] = root2
return root2
if self.rank[root1] == self.rank[root2]:
self.rank[root1] += 1
self.parent[root2] = root1
return root1
@staticmethod
def boruvka_mst(graph):
"""
Implementation of Boruvka's algorithm
>>> g = Graph()
>>> g = Graph.build([0, 1, 2, 3], [[0, 1, 1], [0, 2, 1],[2, 3, 1]])
>>> g.distinct_weight()
>>> bg = Graph.boruvka_mst(g)
>>> print(bg)
1 -> 0 == 1
2 -> 0 == 2
0 -> 1 == 1
0 -> 2 == 2
3 -> 2 == 3
2 -> 3 == 3
"""
num_components = graph.num_vertices
union_find = Graph.UnionFind()
mst_edges = []
while num_components > 1:
cheap_edge = {}
for vertex in graph.get_vertices():
cheap_edge[vertex] = -1
edges = graph.get_edges()
for edge in edges:
head, tail, weight = edge
edges.remove((tail, head, weight))
for edge in edges:
head, tail, weight = edge
set1 = union_find.find(head)
set2 = union_find.find(tail)
if set1 != set2:
if cheap_edge[set1] == -1 or cheap_edge[set1][2] > weight:
cheap_edge[set1] = [head, tail, weight]
if cheap_edge[set2] == -1 or cheap_edge[set2][2] > weight:
cheap_edge[set2] = [head, tail, weight]
for vertex in cheap_edge:
if cheap_edge[vertex] != -1:
head, tail, weight = cheap_edge[vertex]
if union_find.find(head) != union_find.find(tail):
union_find.union(head, tail)
mst_edges.append(cheap_edge[vertex])
num_components = num_components - 1
mst = Graph.build(edges=mst_edges)
return mst
| -1 |
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| alphabet = {
"A": ("ABCDEFGHIJKLM", "NOPQRSTUVWXYZ"),
"B": ("ABCDEFGHIJKLM", "NOPQRSTUVWXYZ"),
"C": ("ABCDEFGHIJKLM", "ZNOPQRSTUVWXY"),
"D": ("ABCDEFGHIJKLM", "ZNOPQRSTUVWXY"),
"E": ("ABCDEFGHIJKLM", "YZNOPQRSTUVWX"),
"F": ("ABCDEFGHIJKLM", "YZNOPQRSTUVWX"),
"G": ("ABCDEFGHIJKLM", "XYZNOPQRSTUVW"),
"H": ("ABCDEFGHIJKLM", "XYZNOPQRSTUVW"),
"I": ("ABCDEFGHIJKLM", "WXYZNOPQRSTUV"),
"J": ("ABCDEFGHIJKLM", "WXYZNOPQRSTUV"),
"K": ("ABCDEFGHIJKLM", "VWXYZNOPQRSTU"),
"L": ("ABCDEFGHIJKLM", "VWXYZNOPQRSTU"),
"M": ("ABCDEFGHIJKLM", "UVWXYZNOPQRST"),
"N": ("ABCDEFGHIJKLM", "UVWXYZNOPQRST"),
"O": ("ABCDEFGHIJKLM", "TUVWXYZNOPQRS"),
"P": ("ABCDEFGHIJKLM", "TUVWXYZNOPQRS"),
"Q": ("ABCDEFGHIJKLM", "STUVWXYZNOPQR"),
"R": ("ABCDEFGHIJKLM", "STUVWXYZNOPQR"),
"S": ("ABCDEFGHIJKLM", "RSTUVWXYZNOPQ"),
"T": ("ABCDEFGHIJKLM", "RSTUVWXYZNOPQ"),
"U": ("ABCDEFGHIJKLM", "QRSTUVWXYZNOP"),
"V": ("ABCDEFGHIJKLM", "QRSTUVWXYZNOP"),
"W": ("ABCDEFGHIJKLM", "PQRSTUVWXYZNO"),
"X": ("ABCDEFGHIJKLM", "PQRSTUVWXYZNO"),
"Y": ("ABCDEFGHIJKLM", "OPQRSTUVWXYZN"),
"Z": ("ABCDEFGHIJKLM", "OPQRSTUVWXYZN"),
}
def generate_table(key: str) -> list[tuple[str, str]]:
"""
>>> generate_table('marvin') # doctest: +NORMALIZE_WHITESPACE
[('ABCDEFGHIJKLM', 'UVWXYZNOPQRST'), ('ABCDEFGHIJKLM', 'NOPQRSTUVWXYZ'),
('ABCDEFGHIJKLM', 'STUVWXYZNOPQR'), ('ABCDEFGHIJKLM', 'QRSTUVWXYZNOP'),
('ABCDEFGHIJKLM', 'WXYZNOPQRSTUV'), ('ABCDEFGHIJKLM', 'UVWXYZNOPQRST')]
"""
return [alphabet[char] for char in key.upper()]
def encrypt(key: str, words: str) -> str:
"""
>>> encrypt('marvin', 'jessica')
'QRACRWU'
"""
cipher = ""
count = 0
table = generate_table(key)
for char in words.upper():
cipher += get_opponent(table[count], char)
count = (count + 1) % len(table)
return cipher
def decrypt(key: str, words: str) -> str:
"""
>>> decrypt('marvin', 'QRACRWU')
'JESSICA'
"""
return encrypt(key, words)
def get_position(table: tuple[str, str], char: str) -> tuple[int, int]:
"""
>>> get_position(generate_table('marvin')[0], 'M')
(0, 12)
"""
# `char` is either in the 0th row or the 1st row
row = 0 if char in table[0] else 1
col = table[row].index(char)
return row, col
def get_opponent(table: tuple[str, str], char: str) -> str:
"""
>>> get_opponent(generate_table('marvin')[0], 'M')
'T'
"""
row, col = get_position(table, char.upper())
if row == 1:
return table[0][col]
else:
return table[1][col] if row == 0 else char
if __name__ == "__main__":
import doctest
doctest.testmod() # Fist ensure that all our tests are passing...
"""
Demo:
Enter key: marvin
Enter text to encrypt: jessica
Encrypted: QRACRWU
Decrypted with key: JESSICA
"""
key = input("Enter key: ").strip()
text = input("Enter text to encrypt: ").strip()
cipher_text = encrypt(key, text)
print(f"Encrypted: {cipher_text}")
print(f"Decrypted with key: {decrypt(key, cipher_text)}")
| alphabet = {
"A": ("ABCDEFGHIJKLM", "NOPQRSTUVWXYZ"),
"B": ("ABCDEFGHIJKLM", "NOPQRSTUVWXYZ"),
"C": ("ABCDEFGHIJKLM", "ZNOPQRSTUVWXY"),
"D": ("ABCDEFGHIJKLM", "ZNOPQRSTUVWXY"),
"E": ("ABCDEFGHIJKLM", "YZNOPQRSTUVWX"),
"F": ("ABCDEFGHIJKLM", "YZNOPQRSTUVWX"),
"G": ("ABCDEFGHIJKLM", "XYZNOPQRSTUVW"),
"H": ("ABCDEFGHIJKLM", "XYZNOPQRSTUVW"),
"I": ("ABCDEFGHIJKLM", "WXYZNOPQRSTUV"),
"J": ("ABCDEFGHIJKLM", "WXYZNOPQRSTUV"),
"K": ("ABCDEFGHIJKLM", "VWXYZNOPQRSTU"),
"L": ("ABCDEFGHIJKLM", "VWXYZNOPQRSTU"),
"M": ("ABCDEFGHIJKLM", "UVWXYZNOPQRST"),
"N": ("ABCDEFGHIJKLM", "UVWXYZNOPQRST"),
"O": ("ABCDEFGHIJKLM", "TUVWXYZNOPQRS"),
"P": ("ABCDEFGHIJKLM", "TUVWXYZNOPQRS"),
"Q": ("ABCDEFGHIJKLM", "STUVWXYZNOPQR"),
"R": ("ABCDEFGHIJKLM", "STUVWXYZNOPQR"),
"S": ("ABCDEFGHIJKLM", "RSTUVWXYZNOPQ"),
"T": ("ABCDEFGHIJKLM", "RSTUVWXYZNOPQ"),
"U": ("ABCDEFGHIJKLM", "QRSTUVWXYZNOP"),
"V": ("ABCDEFGHIJKLM", "QRSTUVWXYZNOP"),
"W": ("ABCDEFGHIJKLM", "PQRSTUVWXYZNO"),
"X": ("ABCDEFGHIJKLM", "PQRSTUVWXYZNO"),
"Y": ("ABCDEFGHIJKLM", "OPQRSTUVWXYZN"),
"Z": ("ABCDEFGHIJKLM", "OPQRSTUVWXYZN"),
}
def generate_table(key: str) -> list[tuple[str, str]]:
"""
>>> generate_table('marvin') # doctest: +NORMALIZE_WHITESPACE
[('ABCDEFGHIJKLM', 'UVWXYZNOPQRST'), ('ABCDEFGHIJKLM', 'NOPQRSTUVWXYZ'),
('ABCDEFGHIJKLM', 'STUVWXYZNOPQR'), ('ABCDEFGHIJKLM', 'QRSTUVWXYZNOP'),
('ABCDEFGHIJKLM', 'WXYZNOPQRSTUV'), ('ABCDEFGHIJKLM', 'UVWXYZNOPQRST')]
"""
return [alphabet[char] for char in key.upper()]
def encrypt(key: str, words: str) -> str:
"""
>>> encrypt('marvin', 'jessica')
'QRACRWU'
"""
cipher = ""
count = 0
table = generate_table(key)
for char in words.upper():
cipher += get_opponent(table[count], char)
count = (count + 1) % len(table)
return cipher
def decrypt(key: str, words: str) -> str:
"""
>>> decrypt('marvin', 'QRACRWU')
'JESSICA'
"""
return encrypt(key, words)
def get_position(table: tuple[str, str], char: str) -> tuple[int, int]:
"""
>>> get_position(generate_table('marvin')[0], 'M')
(0, 12)
"""
# `char` is either in the 0th row or the 1st row
row = 0 if char in table[0] else 1
col = table[row].index(char)
return row, col
def get_opponent(table: tuple[str, str], char: str) -> str:
"""
>>> get_opponent(generate_table('marvin')[0], 'M')
'T'
"""
row, col = get_position(table, char.upper())
if row == 1:
return table[0][col]
else:
return table[1][col] if row == 0 else char
if __name__ == "__main__":
import doctest
doctest.testmod() # Fist ensure that all our tests are passing...
"""
Demo:
Enter key: marvin
Enter text to encrypt: jessica
Encrypted: QRACRWU
Decrypted with key: JESSICA
"""
key = input("Enter key: ").strip()
text = input("Enter text to encrypt: ").strip()
cipher_text = encrypt(key, text)
print(f"Encrypted: {cipher_text}")
print(f"Decrypted with key: {decrypt(key, cipher_text)}")
| -1 |
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """ Multiple image resizing techniques """
import numpy as np
from cv2 import destroyAllWindows, imread, imshow, waitKey
class NearestNeighbour:
"""
Simplest and fastest version of image resizing.
Source: https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation
"""
def __init__(self, img, dst_width: int, dst_height: int):
if dst_width < 0 or dst_height < 0:
raise ValueError("Destination width/height should be > 0")
self.img = img
self.src_w = img.shape[1]
self.src_h = img.shape[0]
self.dst_w = dst_width
self.dst_h = dst_height
self.ratio_x = self.src_w / self.dst_w
self.ratio_y = self.src_h / self.dst_h
self.output = self.output_img = (
np.ones((self.dst_h, self.dst_w, 3), np.uint8) * 255
)
def process(self):
for i in range(self.dst_h):
for j in range(self.dst_w):
self.output[i][j] = self.img[self.get_y(i)][self.get_x(j)]
def get_x(self, x: int) -> int:
"""
Get parent X coordinate for destination X
:param x: Destination X coordinate
:return: Parent X coordinate based on `x ratio`
>>> nn = NearestNeighbour(imread("digital_image_processing/image_data/lena.jpg",
... 1), 100, 100)
>>> nn.ratio_x = 0.5
>>> nn.get_x(4)
2
"""
return int(self.ratio_x * x)
def get_y(self, y: int) -> int:
"""
Get parent Y coordinate for destination Y
:param y: Destination X coordinate
:return: Parent X coordinate based on `y ratio`
>>> nn = NearestNeighbour(imread("digital_image_processing/image_data/lena.jpg",
... 1), 100, 100)
>>> nn.ratio_y = 0.5
>>> nn.get_y(4)
2
"""
return int(self.ratio_y * y)
if __name__ == "__main__":
dst_w, dst_h = 800, 600
im = imread("image_data/lena.jpg", 1)
n = NearestNeighbour(im, dst_w, dst_h)
n.process()
imshow(
f"Image resized from: {im.shape[1]}x{im.shape[0]} to {dst_w}x{dst_h}", n.output
)
waitKey(0)
destroyAllWindows()
| """ Multiple image resizing techniques """
import numpy as np
from cv2 import destroyAllWindows, imread, imshow, waitKey
class NearestNeighbour:
"""
Simplest and fastest version of image resizing.
Source: https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation
"""
def __init__(self, img, dst_width: int, dst_height: int):
if dst_width < 0 or dst_height < 0:
raise ValueError("Destination width/height should be > 0")
self.img = img
self.src_w = img.shape[1]
self.src_h = img.shape[0]
self.dst_w = dst_width
self.dst_h = dst_height
self.ratio_x = self.src_w / self.dst_w
self.ratio_y = self.src_h / self.dst_h
self.output = self.output_img = (
np.ones((self.dst_h, self.dst_w, 3), np.uint8) * 255
)
def process(self):
for i in range(self.dst_h):
for j in range(self.dst_w):
self.output[i][j] = self.img[self.get_y(i)][self.get_x(j)]
def get_x(self, x: int) -> int:
"""
Get parent X coordinate for destination X
:param x: Destination X coordinate
:return: Parent X coordinate based on `x ratio`
>>> nn = NearestNeighbour(imread("digital_image_processing/image_data/lena.jpg",
... 1), 100, 100)
>>> nn.ratio_x = 0.5
>>> nn.get_x(4)
2
"""
return int(self.ratio_x * x)
def get_y(self, y: int) -> int:
"""
Get parent Y coordinate for destination Y
:param y: Destination X coordinate
:return: Parent X coordinate based on `y ratio`
>>> nn = NearestNeighbour(imread("digital_image_processing/image_data/lena.jpg",
... 1), 100, 100)
>>> nn.ratio_y = 0.5
>>> nn.get_y(4)
2
"""
return int(self.ratio_y * y)
if __name__ == "__main__":
dst_w, dst_h = 800, 600
im = imread("image_data/lena.jpg", 1)
n = NearestNeighbour(im, dst_w, dst_h)
n.process()
imshow(
f"Image resized from: {im.shape[1]}x{im.shape[0]} to {dst_w}x{dst_h}", n.output
)
waitKey(0)
destroyAllWindows()
| -1 |
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
An edge is a bridge if, after removing it count of connected components in graph will
be increased by one. Bridges represent vulnerabilities in a connected network and are
useful for designing reliable networks. For example, in a wired computer network, an
articulation point indicates the critical computers and a bridge indicates the critical
wires or connections.
For more details, refer this article:
https://www.geeksforgeeks.org/bridge-in-a-graph/
"""
def __get_demo_graph(index):
return [
{
0: [1, 2],
1: [0, 2],
2: [0, 1, 3, 5],
3: [2, 4],
4: [3],
5: [2, 6, 8],
6: [5, 7],
7: [6, 8],
8: [5, 7],
},
{
0: [6],
1: [9],
2: [4, 5],
3: [4],
4: [2, 3],
5: [2],
6: [0, 7],
7: [6],
8: [],
9: [1],
},
{
0: [4],
1: [6],
2: [],
3: [5, 6, 7],
4: [0, 6],
5: [3, 8, 9],
6: [1, 3, 4, 7],
7: [3, 6, 8, 9],
8: [5, 7],
9: [5, 7],
},
{
0: [1, 3],
1: [0, 2, 4],
2: [1, 3, 4],
3: [0, 2, 4],
4: [1, 2, 3],
},
][index]
def compute_bridges(graph: dict[int, list[int]]) -> list[tuple[int, int]]:
"""
Return the list of undirected graph bridges [(a1, b1), ..., (ak, bk)]; ai <= bi
>>> compute_bridges(__get_demo_graph(0))
[(3, 4), (2, 3), (2, 5)]
>>> compute_bridges(__get_demo_graph(1))
[(6, 7), (0, 6), (1, 9), (3, 4), (2, 4), (2, 5)]
>>> compute_bridges(__get_demo_graph(2))
[(1, 6), (4, 6), (0, 4)]
>>> compute_bridges(__get_demo_graph(3))
[]
>>> compute_bridges({})
[]
"""
id = 0
n = len(graph) # No of vertices in graph
low = [0] * n
visited = [False] * n
def dfs(at, parent, bridges, id):
visited[at] = True
low[at] = id
id += 1
for to in graph[at]:
if to == parent:
pass
elif not visited[to]:
dfs(to, at, bridges, id)
low[at] = min(low[at], low[to])
if id <= low[to]:
bridges.append((at, to) if at < to else (to, at))
else:
# This edge is a back edge and cannot be a bridge
low[at] = min(low[at], low[to])
bridges: list[tuple[int, int]] = []
for i in range(n):
if not visited[i]:
dfs(i, -1, bridges, id)
return bridges
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
An edge is a bridge if, after removing it count of connected components in graph will
be increased by one. Bridges represent vulnerabilities in a connected network and are
useful for designing reliable networks. For example, in a wired computer network, an
articulation point indicates the critical computers and a bridge indicates the critical
wires or connections.
For more details, refer this article:
https://www.geeksforgeeks.org/bridge-in-a-graph/
"""
def __get_demo_graph(index):
return [
{
0: [1, 2],
1: [0, 2],
2: [0, 1, 3, 5],
3: [2, 4],
4: [3],
5: [2, 6, 8],
6: [5, 7],
7: [6, 8],
8: [5, 7],
},
{
0: [6],
1: [9],
2: [4, 5],
3: [4],
4: [2, 3],
5: [2],
6: [0, 7],
7: [6],
8: [],
9: [1],
},
{
0: [4],
1: [6],
2: [],
3: [5, 6, 7],
4: [0, 6],
5: [3, 8, 9],
6: [1, 3, 4, 7],
7: [3, 6, 8, 9],
8: [5, 7],
9: [5, 7],
},
{
0: [1, 3],
1: [0, 2, 4],
2: [1, 3, 4],
3: [0, 2, 4],
4: [1, 2, 3],
},
][index]
def compute_bridges(graph: dict[int, list[int]]) -> list[tuple[int, int]]:
"""
Return the list of undirected graph bridges [(a1, b1), ..., (ak, bk)]; ai <= bi
>>> compute_bridges(__get_demo_graph(0))
[(3, 4), (2, 3), (2, 5)]
>>> compute_bridges(__get_demo_graph(1))
[(6, 7), (0, 6), (1, 9), (3, 4), (2, 4), (2, 5)]
>>> compute_bridges(__get_demo_graph(2))
[(1, 6), (4, 6), (0, 4)]
>>> compute_bridges(__get_demo_graph(3))
[]
>>> compute_bridges({})
[]
"""
id = 0
n = len(graph) # No of vertices in graph
low = [0] * n
visited = [False] * n
def dfs(at, parent, bridges, id):
visited[at] = True
low[at] = id
id += 1
for to in graph[at]:
if to == parent:
pass
elif not visited[to]:
dfs(to, at, bridges, id)
low[at] = min(low[at], low[to])
if id <= low[to]:
bridges.append((at, to) if at < to else (to, at))
else:
# This edge is a back edge and cannot be a bridge
low[at] = min(low[at], low[to])
bridges: list[tuple[int, int]] = []
for i in range(n):
if not visited[i]:
dfs(i, -1, bridges, id)
return bridges
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| def apply_table(inp, table):
"""
>>> apply_table("0123456789", list(range(10)))
'9012345678'
>>> apply_table("0123456789", list(range(9, -1, -1)))
'8765432109'
"""
res = ""
for i in table:
res += inp[i - 1]
return res
def left_shift(data):
"""
>>> left_shift("0123456789")
'1234567890'
"""
return data[1:] + data[0]
def XOR(a, b):
"""
>>> XOR("01010101", "00001111")
'01011010'
"""
res = ""
for i in range(len(a)):
if a[i] == b[i]:
res += "0"
else:
res += "1"
return res
def apply_sbox(s, data):
row = int("0b" + data[0] + data[-1], 2)
col = int("0b" + data[1:3], 2)
return bin(s[row][col])[2:]
def function(expansion, s0, s1, key, message):
left = message[:4]
right = message[4:]
temp = apply_table(right, expansion)
temp = XOR(temp, key)
l = apply_sbox(s0, temp[:4]) # noqa: E741
r = apply_sbox(s1, temp[4:])
l = "0" * (2 - len(l)) + l # noqa: E741
r = "0" * (2 - len(r)) + r
temp = apply_table(l + r, p4_table)
temp = XOR(left, temp)
return temp + right
if __name__ == "__main__":
key = input("Enter 10 bit key: ")
message = input("Enter 8 bit message: ")
p8_table = [6, 3, 7, 4, 8, 5, 10, 9]
p10_table = [3, 5, 2, 7, 4, 10, 1, 9, 8, 6]
p4_table = [2, 4, 3, 1]
IP = [2, 6, 3, 1, 4, 8, 5, 7]
IP_inv = [4, 1, 3, 5, 7, 2, 8, 6]
expansion = [4, 1, 2, 3, 2, 3, 4, 1]
s0 = [[1, 0, 3, 2], [3, 2, 1, 0], [0, 2, 1, 3], [3, 1, 3, 2]]
s1 = [[0, 1, 2, 3], [2, 0, 1, 3], [3, 0, 1, 0], [2, 1, 0, 3]]
# key generation
temp = apply_table(key, p10_table)
left = temp[:5]
right = temp[5:]
left = left_shift(left)
right = left_shift(right)
key1 = apply_table(left + right, p8_table)
left = left_shift(left)
right = left_shift(right)
left = left_shift(left)
right = left_shift(right)
key2 = apply_table(left + right, p8_table)
# encryption
temp = apply_table(message, IP)
temp = function(expansion, s0, s1, key1, temp)
temp = temp[4:] + temp[:4]
temp = function(expansion, s0, s1, key2, temp)
CT = apply_table(temp, IP_inv)
print("Cipher text is:", CT)
# decryption
temp = apply_table(CT, IP)
temp = function(expansion, s0, s1, key2, temp)
temp = temp[4:] + temp[:4]
temp = function(expansion, s0, s1, key1, temp)
PT = apply_table(temp, IP_inv)
print("Plain text after decypting is:", PT)
| def apply_table(inp, table):
"""
>>> apply_table("0123456789", list(range(10)))
'9012345678'
>>> apply_table("0123456789", list(range(9, -1, -1)))
'8765432109'
"""
res = ""
for i in table:
res += inp[i - 1]
return res
def left_shift(data):
"""
>>> left_shift("0123456789")
'1234567890'
"""
return data[1:] + data[0]
def XOR(a, b):
"""
>>> XOR("01010101", "00001111")
'01011010'
"""
res = ""
for i in range(len(a)):
if a[i] == b[i]:
res += "0"
else:
res += "1"
return res
def apply_sbox(s, data):
row = int("0b" + data[0] + data[-1], 2)
col = int("0b" + data[1:3], 2)
return bin(s[row][col])[2:]
def function(expansion, s0, s1, key, message):
left = message[:4]
right = message[4:]
temp = apply_table(right, expansion)
temp = XOR(temp, key)
l = apply_sbox(s0, temp[:4]) # noqa: E741
r = apply_sbox(s1, temp[4:])
l = "0" * (2 - len(l)) + l # noqa: E741
r = "0" * (2 - len(r)) + r
temp = apply_table(l + r, p4_table)
temp = XOR(left, temp)
return temp + right
if __name__ == "__main__":
key = input("Enter 10 bit key: ")
message = input("Enter 8 bit message: ")
p8_table = [6, 3, 7, 4, 8, 5, 10, 9]
p10_table = [3, 5, 2, 7, 4, 10, 1, 9, 8, 6]
p4_table = [2, 4, 3, 1]
IP = [2, 6, 3, 1, 4, 8, 5, 7]
IP_inv = [4, 1, 3, 5, 7, 2, 8, 6]
expansion = [4, 1, 2, 3, 2, 3, 4, 1]
s0 = [[1, 0, 3, 2], [3, 2, 1, 0], [0, 2, 1, 3], [3, 1, 3, 2]]
s1 = [[0, 1, 2, 3], [2, 0, 1, 3], [3, 0, 1, 0], [2, 1, 0, 3]]
# key generation
temp = apply_table(key, p10_table)
left = temp[:5]
right = temp[5:]
left = left_shift(left)
right = left_shift(right)
key1 = apply_table(left + right, p8_table)
left = left_shift(left)
right = left_shift(right)
left = left_shift(left)
right = left_shift(right)
key2 = apply_table(left + right, p8_table)
# encryption
temp = apply_table(message, IP)
temp = function(expansion, s0, s1, key1, temp)
temp = temp[4:] + temp[:4]
temp = function(expansion, s0, s1, key2, temp)
CT = apply_table(temp, IP_inv)
print("Cipher text is:", CT)
# decryption
temp = apply_table(CT, IP)
temp = function(expansion, s0, s1, key2, temp)
temp = temp[4:] + temp[:4]
temp = function(expansion, s0, s1, key1, temp)
PT = apply_table(temp, IP_inv)
print("Plain text after decypting is:", PT)
| -1 |
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Created by sarathkaul on 12/11/19
import requests
_NEWS_API = "https://newsapi.org/v1/articles?source=bbc-news&sortBy=top&apiKey="
def fetch_bbc_news(bbc_news_api_key: str) -> None:
# fetching a list of articles in json format
bbc_news_page = requests.get(_NEWS_API + bbc_news_api_key).json()
# each article in the list is a dict
for i, article in enumerate(bbc_news_page["articles"], 1):
print(f"{i}.) {article['title']}")
if __name__ == "__main__":
fetch_bbc_news(bbc_news_api_key="<Your BBC News API key goes here>")
| # Created by sarathkaul on 12/11/19
import requests
_NEWS_API = "https://newsapi.org/v1/articles?source=bbc-news&sortBy=top&apiKey="
def fetch_bbc_news(bbc_news_api_key: str) -> None:
# fetching a list of articles in json format
bbc_news_page = requests.get(_NEWS_API + bbc_news_api_key).json()
# each article in the list is a dict
for i, article in enumerate(bbc_news_page["articles"], 1):
print(f"{i}.) {article['title']}")
if __name__ == "__main__":
fetch_bbc_news(bbc_news_api_key="<Your BBC News API key goes here>")
| -1 |
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Implementation of gaussian filter algorithm
"""
from itertools import product
from cv2 import COLOR_BGR2GRAY, cvtColor, imread, imshow, waitKey
from numpy import dot, exp, mgrid, pi, ravel, square, uint8, zeros
def gen_gaussian_kernel(k_size, sigma):
center = k_size // 2
x, y = mgrid[0 - center : k_size - center, 0 - center : k_size - center]
g = 1 / (2 * pi * sigma) * exp(-(square(x) + square(y)) / (2 * square(sigma)))
return g
def gaussian_filter(image, k_size, sigma):
height, width = image.shape[0], image.shape[1]
# dst image height and width
dst_height = height - k_size + 1
dst_width = width - k_size + 1
# im2col, turn the k_size*k_size pixels into a row and np.vstack all rows
image_array = zeros((dst_height * dst_width, k_size * k_size))
row = 0
for i, j in product(range(dst_height), range(dst_width)):
window = ravel(image[i : i + k_size, j : j + k_size])
image_array[row, :] = window
row += 1
# turn the kernel into shape(k*k, 1)
gaussian_kernel = gen_gaussian_kernel(k_size, sigma)
filter_array = ravel(gaussian_kernel)
# reshape and get the dst image
dst = dot(image_array, filter_array).reshape(dst_height, dst_width).astype(uint8)
return dst
if __name__ == "__main__":
# read original image
img = imread(r"../image_data/lena.jpg")
# turn image in gray scale value
gray = cvtColor(img, COLOR_BGR2GRAY)
# get values with two different mask size
gaussian3x3 = gaussian_filter(gray, 3, sigma=1)
gaussian5x5 = gaussian_filter(gray, 5, sigma=0.8)
# show result images
imshow("gaussian filter with 3x3 mask", gaussian3x3)
imshow("gaussian filter with 5x5 mask", gaussian5x5)
waitKey()
| """
Implementation of gaussian filter algorithm
"""
from itertools import product
from cv2 import COLOR_BGR2GRAY, cvtColor, imread, imshow, waitKey
from numpy import dot, exp, mgrid, pi, ravel, square, uint8, zeros
def gen_gaussian_kernel(k_size, sigma):
center = k_size // 2
x, y = mgrid[0 - center : k_size - center, 0 - center : k_size - center]
g = 1 / (2 * pi * sigma) * exp(-(square(x) + square(y)) / (2 * square(sigma)))
return g
def gaussian_filter(image, k_size, sigma):
height, width = image.shape[0], image.shape[1]
# dst image height and width
dst_height = height - k_size + 1
dst_width = width - k_size + 1
# im2col, turn the k_size*k_size pixels into a row and np.vstack all rows
image_array = zeros((dst_height * dst_width, k_size * k_size))
row = 0
for i, j in product(range(dst_height), range(dst_width)):
window = ravel(image[i : i + k_size, j : j + k_size])
image_array[row, :] = window
row += 1
# turn the kernel into shape(k*k, 1)
gaussian_kernel = gen_gaussian_kernel(k_size, sigma)
filter_array = ravel(gaussian_kernel)
# reshape and get the dst image
dst = dot(image_array, filter_array).reshape(dst_height, dst_width).astype(uint8)
return dst
if __name__ == "__main__":
# read original image
img = imread(r"../image_data/lena.jpg")
# turn image in gray scale value
gray = cvtColor(img, COLOR_BGR2GRAY)
# get values with two different mask size
gaussian3x3 = gaussian_filter(gray, 3, sigma=1)
gaussian5x5 = gaussian_filter(gray, 5, sigma=0.8)
# show result images
imshow("gaussian filter with 3x3 mask", gaussian3x3)
imshow("gaussian filter with 5x5 mask", gaussian5x5)
waitKey()
| -1 |
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from __future__ import annotations
def kmp(pattern: str, text: str) -> bool:
"""
The Knuth-Morris-Pratt Algorithm for finding a pattern within a piece of text
with complexity O(n + m)
1) Preprocess pattern to identify any suffixes that are identical to prefixes
This tells us where to continue from if we get a mismatch between a character
in our pattern and the text.
2) Step through the text one character at a time and compare it to a character in
the pattern updating our location within the pattern if necessary
"""
# 1) Construct the failure array
failure = get_failure_array(pattern)
# 2) Step through text searching for pattern
i, j = 0, 0 # index into text, pattern
while i < len(text):
if pattern[j] == text[i]:
if j == (len(pattern) - 1):
return True
j += 1
# if this is a prefix in our pattern
# just go back far enough to continue
elif j > 0:
j = failure[j - 1]
continue
i += 1
return False
def get_failure_array(pattern: str) -> list[int]:
"""
Calculates the new index we should go to if we fail a comparison
:param pattern:
:return:
"""
failure = [0]
i = 0
j = 1
while j < len(pattern):
if pattern[i] == pattern[j]:
i += 1
elif i > 0:
i = failure[i - 1]
continue
j += 1
failure.append(i)
return failure
if __name__ == "__main__":
# Test 1)
pattern = "abc1abc12"
text1 = "alskfjaldsabc1abc1abc12k23adsfabcabc"
text2 = "alskfjaldsk23adsfabcabc"
assert kmp(pattern, text1) and not kmp(pattern, text2)
# Test 2)
pattern = "ABABX"
text = "ABABZABABYABABX"
assert kmp(pattern, text)
# Test 3)
pattern = "AAAB"
text = "ABAAAAAB"
assert kmp(pattern, text)
# Test 4)
pattern = "abcdabcy"
text = "abcxabcdabxabcdabcdabcy"
assert kmp(pattern, text)
# Test 5)
pattern = "aabaabaaa"
assert get_failure_array(pattern) == [0, 1, 0, 1, 2, 3, 4, 5, 2]
| from __future__ import annotations
def kmp(pattern: str, text: str) -> bool:
"""
The Knuth-Morris-Pratt Algorithm for finding a pattern within a piece of text
with complexity O(n + m)
1) Preprocess pattern to identify any suffixes that are identical to prefixes
This tells us where to continue from if we get a mismatch between a character
in our pattern and the text.
2) Step through the text one character at a time and compare it to a character in
the pattern updating our location within the pattern if necessary
"""
# 1) Construct the failure array
failure = get_failure_array(pattern)
# 2) Step through text searching for pattern
i, j = 0, 0 # index into text, pattern
while i < len(text):
if pattern[j] == text[i]:
if j == (len(pattern) - 1):
return True
j += 1
# if this is a prefix in our pattern
# just go back far enough to continue
elif j > 0:
j = failure[j - 1]
continue
i += 1
return False
def get_failure_array(pattern: str) -> list[int]:
"""
Calculates the new index we should go to if we fail a comparison
:param pattern:
:return:
"""
failure = [0]
i = 0
j = 1
while j < len(pattern):
if pattern[i] == pattern[j]:
i += 1
elif i > 0:
i = failure[i - 1]
continue
j += 1
failure.append(i)
return failure
if __name__ == "__main__":
# Test 1)
pattern = "abc1abc12"
text1 = "alskfjaldsabc1abc1abc12k23adsfabcabc"
text2 = "alskfjaldsk23adsfabcabc"
assert kmp(pattern, text1) and not kmp(pattern, text2)
# Test 2)
pattern = "ABABX"
text = "ABABZABABYABABX"
assert kmp(pattern, text)
# Test 3)
pattern = "AAAB"
text = "ABAAAAAB"
assert kmp(pattern, text)
# Test 4)
pattern = "abcdabcy"
text = "abcxabcdabxabcdabcdabcy"
assert kmp(pattern, text)
# Test 5)
pattern = "aabaabaaa"
assert get_failure_array(pattern) == [0, 1, 0, 1, 2, 3, 4, 5, 2]
| -1 |
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # floyd_warshall.py
"""
The problem is to find the shortest distance between all pairs of vertices in a
weighted directed graph that can have negative edge weights.
"""
def _print_dist(dist, v):
print("\nThe shortest path matrix using Floyd Warshall algorithm\n")
for i in range(v):
for j in range(v):
if dist[i][j] != float("inf"):
print(int(dist[i][j]), end="\t")
else:
print("INF", end="\t")
print()
def floyd_warshall(graph, v):
"""
:param graph: 2D array calculated from weight[edge[i, j]]
:type graph: List[List[float]]
:param v: number of vertices
:type v: int
:return: shortest distance between all vertex pairs
distance[u][v] will contain the shortest distance from vertex u to v.
1. For all edges from v to n, distance[i][j] = weight(edge(i, j)).
3. The algorithm then performs distance[i][j] = min(distance[i][j], distance[i][k] +
distance[k][j]) for each possible pair i, j of vertices.
4. The above is repeated for each vertex k in the graph.
5. Whenever distance[i][j] is given a new minimum value, next vertex[i][j] is
updated to the next vertex[i][k].
"""
dist = [[float("inf") for _ in range(v)] for _ in range(v)]
for i in range(v):
for j in range(v):
dist[i][j] = graph[i][j]
# check vertex k against all other vertices (i, j)
for k in range(v):
# looping through rows of graph array
for i in range(v):
# looping through columns of graph array
for j in range(v):
if (
dist[i][k] != float("inf")
and dist[k][j] != float("inf")
and dist[i][k] + dist[k][j] < dist[i][j]
):
dist[i][j] = dist[i][k] + dist[k][j]
_print_dist(dist, v)
return dist, v
if __name__ == "__main__":
v = int(input("Enter number of vertices: "))
e = int(input("Enter number of edges: "))
graph = [[float("inf") for i in range(v)] for j in range(v)]
for i in range(v):
graph[i][i] = 0.0
# src and dst are indices that must be within the array size graph[e][v]
# failure to follow this will result in an error
for i in range(e):
print("\nEdge ", i + 1)
src = int(input("Enter source:"))
dst = int(input("Enter destination:"))
weight = float(input("Enter weight:"))
graph[src][dst] = weight
floyd_warshall(graph, v)
# Example Input
# Enter number of vertices: 3
# Enter number of edges: 2
# # generated graph from vertex and edge inputs
# [[inf, inf, inf], [inf, inf, inf], [inf, inf, inf]]
# [[0.0, inf, inf], [inf, 0.0, inf], [inf, inf, 0.0]]
# specify source, destination and weight for edge #1
# Edge 1
# Enter source:1
# Enter destination:2
# Enter weight:2
# specify source, destination and weight for edge #2
# Edge 2
# Enter source:2
# Enter destination:1
# Enter weight:1
# # Expected Output from the vertice, edge and src, dst, weight inputs!!
# 0 INF INF
# INF 0 2
# INF 1 0
| # floyd_warshall.py
"""
The problem is to find the shortest distance between all pairs of vertices in a
weighted directed graph that can have negative edge weights.
"""
def _print_dist(dist, v):
print("\nThe shortest path matrix using Floyd Warshall algorithm\n")
for i in range(v):
for j in range(v):
if dist[i][j] != float("inf"):
print(int(dist[i][j]), end="\t")
else:
print("INF", end="\t")
print()
def floyd_warshall(graph, v):
"""
:param graph: 2D array calculated from weight[edge[i, j]]
:type graph: List[List[float]]
:param v: number of vertices
:type v: int
:return: shortest distance between all vertex pairs
distance[u][v] will contain the shortest distance from vertex u to v.
1. For all edges from v to n, distance[i][j] = weight(edge(i, j)).
3. The algorithm then performs distance[i][j] = min(distance[i][j], distance[i][k] +
distance[k][j]) for each possible pair i, j of vertices.
4. The above is repeated for each vertex k in the graph.
5. Whenever distance[i][j] is given a new minimum value, next vertex[i][j] is
updated to the next vertex[i][k].
"""
dist = [[float("inf") for _ in range(v)] for _ in range(v)]
for i in range(v):
for j in range(v):
dist[i][j] = graph[i][j]
# check vertex k against all other vertices (i, j)
for k in range(v):
# looping through rows of graph array
for i in range(v):
# looping through columns of graph array
for j in range(v):
if (
dist[i][k] != float("inf")
and dist[k][j] != float("inf")
and dist[i][k] + dist[k][j] < dist[i][j]
):
dist[i][j] = dist[i][k] + dist[k][j]
_print_dist(dist, v)
return dist, v
if __name__ == "__main__":
v = int(input("Enter number of vertices: "))
e = int(input("Enter number of edges: "))
graph = [[float("inf") for i in range(v)] for j in range(v)]
for i in range(v):
graph[i][i] = 0.0
# src and dst are indices that must be within the array size graph[e][v]
# failure to follow this will result in an error
for i in range(e):
print("\nEdge ", i + 1)
src = int(input("Enter source:"))
dst = int(input("Enter destination:"))
weight = float(input("Enter weight:"))
graph[src][dst] = weight
floyd_warshall(graph, v)
# Example Input
# Enter number of vertices: 3
# Enter number of edges: 2
# # generated graph from vertex and edge inputs
# [[inf, inf, inf], [inf, inf, inf], [inf, inf, inf]]
# [[0.0, inf, inf], [inf, 0.0, inf], [inf, inf, 0.0]]
# specify source, destination and weight for edge #1
# Edge 1
# Enter source:1
# Enter destination:2
# Enter weight:2
# specify source, destination and weight for edge #2
# Edge 2
# Enter source:2
# Enter destination:1
# Enter weight:1
# # Expected Output from the vertice, edge and src, dst, weight inputs!!
# 0 INF INF
# INF 0 2
# INF 1 0
| -1 |
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 10: https://projecteuler.net/problem=10
Summation of primes
The sum of the primes below 10 is 2 + 3 + 5 + 7 = 17.
Find the sum of all the primes below two million.
References:
- https://en.wikipedia.org/wiki/Prime_number
"""
from math import sqrt
def is_prime(n: int) -> bool:
"""
Returns boolean representing primality of given number num.
>>> is_prime(2)
True
>>> is_prime(3)
True
>>> is_prime(27)
False
>>> is_prime(2999)
True
"""
if 1 < n < 4:
return True
elif n < 2 or not n % 2:
return False
return not any(not n % i for i in range(3, int(sqrt(n) + 1), 2))
def solution(n: int = 2000000) -> int:
"""
Returns the sum of all the primes below n.
>>> solution(1000)
76127
>>> solution(5000)
1548136
>>> solution(10000)
5736396
>>> solution(7)
10
"""
return sum(num for num in range(3, n, 2) if is_prime(num)) + 2 if n > 2 else 0
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 10: https://projecteuler.net/problem=10
Summation of primes
The sum of the primes below 10 is 2 + 3 + 5 + 7 = 17.
Find the sum of all the primes below two million.
References:
- https://en.wikipedia.org/wiki/Prime_number
"""
from math import sqrt
def is_prime(n: int) -> bool:
"""
Returns boolean representing primality of given number num.
>>> is_prime(2)
True
>>> is_prime(3)
True
>>> is_prime(27)
False
>>> is_prime(2999)
True
"""
if 1 < n < 4:
return True
elif n < 2 or not n % 2:
return False
return not any(not n % i for i in range(3, int(sqrt(n) + 1), 2))
def solution(n: int = 2000000) -> int:
"""
Returns the sum of all the primes below n.
>>> solution(1000)
76127
>>> solution(5000)
1548136
>>> solution(10000)
5736396
>>> solution(7)
10
"""
return sum(num for num in range(3, n, 2) if is_prime(num)) + 2 if n > 2 else 0
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Created by sarathkaul on 17/11/19
# Modified by Arkadip Bhattacharya(@darkmatter18) on 20/04/2020
from collections import defaultdict
def word_occurence(sentence: str) -> dict:
"""
>>> from collections import Counter
>>> SENTENCE = "a b A b c b d b d e f e g e h e i e j e 0"
>>> occurence_dict = word_occurence(SENTENCE)
>>> all(occurence_dict[word] == count for word, count
... in Counter(SENTENCE.split()).items())
True
>>> dict(word_occurence("Two spaces"))
{'Two': 1, 'spaces': 1}
"""
occurrence: dict = defaultdict(int)
# Creating a dictionary containing count of each word
for word in sentence.split():
occurrence[word] += 1
return occurrence
if __name__ == "__main__":
for word, count in word_occurence("INPUT STRING").items():
print(f"{word}: {count}")
| # Created by sarathkaul on 17/11/19
# Modified by Arkadip Bhattacharya(@darkmatter18) on 20/04/2020
from collections import defaultdict
def word_occurence(sentence: str) -> dict:
"""
>>> from collections import Counter
>>> SENTENCE = "a b A b c b d b d e f e g e h e i e j e 0"
>>> occurence_dict = word_occurence(SENTENCE)
>>> all(occurence_dict[word] == count for word, count
... in Counter(SENTENCE.split()).items())
True
>>> dict(word_occurence("Two spaces"))
{'Two': 1, 'spaces': 1}
"""
occurrence: dict = defaultdict(int)
# Creating a dictionary containing count of each word
for word in sentence.split():
occurrence[word] += 1
return occurrence
if __name__ == "__main__":
for word, count in word_occurence("INPUT STRING").items():
print(f"{word}: {count}")
| -1 |
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| import random
import sys
from . import cryptomath_module as cryptomath
SYMBOLS = (
r""" !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`"""
r"""abcdefghijklmnopqrstuvwxyz{|}~"""
)
def check_keys(keyA: int, keyB: int, mode: str) -> None:
if mode == "encrypt":
if keyA == 1:
sys.exit(
"The affine cipher becomes weak when key "
"A is set to 1. Choose different key"
)
if keyB == 0:
sys.exit(
"The affine cipher becomes weak when key "
"B is set to 0. Choose different key"
)
if keyA < 0 or keyB < 0 or keyB > len(SYMBOLS) - 1:
sys.exit(
"Key A must be greater than 0 and key B must "
f"be between 0 and {len(SYMBOLS) - 1}."
)
if cryptomath.gcd(keyA, len(SYMBOLS)) != 1:
sys.exit(
f"Key A {keyA} and the symbol set size {len(SYMBOLS)} "
"are not relatively prime. Choose a different key."
)
def encrypt_message(key: int, message: str) -> str:
"""
>>> encrypt_message(4545, 'The affine cipher is a type of monoalphabetic '
... 'substitution cipher.')
'VL}p MM{I}p~{HL}Gp{vp pFsH}pxMpyxIx JHL O}F{~pvuOvF{FuF{xIp~{HL}Gi'
"""
keyA, keyB = divmod(key, len(SYMBOLS))
check_keys(keyA, keyB, "encrypt")
cipherText = ""
for symbol in message:
if symbol in SYMBOLS:
symIndex = SYMBOLS.find(symbol)
cipherText += SYMBOLS[(symIndex * keyA + keyB) % len(SYMBOLS)]
else:
cipherText += symbol
return cipherText
def decrypt_message(key: int, message: str) -> str:
"""
>>> decrypt_message(4545, 'VL}p MM{I}p~{HL}Gp{vp pFsH}pxMpyxIx JHL O}F{~pvuOvF{FuF'
... '{xIp~{HL}Gi')
'The affine cipher is a type of monoalphabetic substitution cipher.'
"""
keyA, keyB = divmod(key, len(SYMBOLS))
check_keys(keyA, keyB, "decrypt")
plainText = ""
modInverseOfkeyA = cryptomath.find_mod_inverse(keyA, len(SYMBOLS))
for symbol in message:
if symbol in SYMBOLS:
symIndex = SYMBOLS.find(symbol)
plainText += SYMBOLS[(symIndex - keyB) * modInverseOfkeyA % len(SYMBOLS)]
else:
plainText += symbol
return plainText
def get_random_key() -> int:
while True:
keyA = random.randint(2, len(SYMBOLS))
keyB = random.randint(2, len(SYMBOLS))
if cryptomath.gcd(keyA, len(SYMBOLS)) == 1 and keyB % len(SYMBOLS) != 0:
return keyA * len(SYMBOLS) + keyB
def main() -> None:
"""
>>> key = get_random_key()
>>> msg = "This is a test!"
>>> decrypt_message(key, encrypt_message(key, msg)) == msg
True
"""
message = input("Enter message: ").strip()
key = int(input("Enter key [2000 - 9000]: ").strip())
mode = input("Encrypt/Decrypt [E/D]: ").strip().lower()
if mode.startswith("e"):
mode = "encrypt"
translated = encrypt_message(key, message)
elif mode.startswith("d"):
mode = "decrypt"
translated = decrypt_message(key, message)
print(f"\n{mode.title()}ed text: \n{translated}")
if __name__ == "__main__":
import doctest
doctest.testmod()
# main()
| import random
import sys
from . import cryptomath_module as cryptomath
SYMBOLS = (
r""" !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`"""
r"""abcdefghijklmnopqrstuvwxyz{|}~"""
)
def check_keys(keyA: int, keyB: int, mode: str) -> None:
if mode == "encrypt":
if keyA == 1:
sys.exit(
"The affine cipher becomes weak when key "
"A is set to 1. Choose different key"
)
if keyB == 0:
sys.exit(
"The affine cipher becomes weak when key "
"B is set to 0. Choose different key"
)
if keyA < 0 or keyB < 0 or keyB > len(SYMBOLS) - 1:
sys.exit(
"Key A must be greater than 0 and key B must "
f"be between 0 and {len(SYMBOLS) - 1}."
)
if cryptomath.gcd(keyA, len(SYMBOLS)) != 1:
sys.exit(
f"Key A {keyA} and the symbol set size {len(SYMBOLS)} "
"are not relatively prime. Choose a different key."
)
def encrypt_message(key: int, message: str) -> str:
"""
>>> encrypt_message(4545, 'The affine cipher is a type of monoalphabetic '
... 'substitution cipher.')
'VL}p MM{I}p~{HL}Gp{vp pFsH}pxMpyxIx JHL O}F{~pvuOvF{FuF{xIp~{HL}Gi'
"""
keyA, keyB = divmod(key, len(SYMBOLS))
check_keys(keyA, keyB, "encrypt")
cipherText = ""
for symbol in message:
if symbol in SYMBOLS:
symIndex = SYMBOLS.find(symbol)
cipherText += SYMBOLS[(symIndex * keyA + keyB) % len(SYMBOLS)]
else:
cipherText += symbol
return cipherText
def decrypt_message(key: int, message: str) -> str:
"""
>>> decrypt_message(4545, 'VL}p MM{I}p~{HL}Gp{vp pFsH}pxMpyxIx JHL O}F{~pvuOvF{FuF'
... '{xIp~{HL}Gi')
'The affine cipher is a type of monoalphabetic substitution cipher.'
"""
keyA, keyB = divmod(key, len(SYMBOLS))
check_keys(keyA, keyB, "decrypt")
plainText = ""
modInverseOfkeyA = cryptomath.find_mod_inverse(keyA, len(SYMBOLS))
for symbol in message:
if symbol in SYMBOLS:
symIndex = SYMBOLS.find(symbol)
plainText += SYMBOLS[(symIndex - keyB) * modInverseOfkeyA % len(SYMBOLS)]
else:
plainText += symbol
return plainText
def get_random_key() -> int:
while True:
keyA = random.randint(2, len(SYMBOLS))
keyB = random.randint(2, len(SYMBOLS))
if cryptomath.gcd(keyA, len(SYMBOLS)) == 1 and keyB % len(SYMBOLS) != 0:
return keyA * len(SYMBOLS) + keyB
def main() -> None:
"""
>>> key = get_random_key()
>>> msg = "This is a test!"
>>> decrypt_message(key, encrypt_message(key, msg)) == msg
True
"""
message = input("Enter message: ").strip()
key = int(input("Enter key [2000 - 9000]: ").strip())
mode = input("Encrypt/Decrypt [E/D]: ").strip().lower()
if mode.startswith("e"):
mode = "encrypt"
translated = encrypt_message(key, message)
elif mode.startswith("d"):
mode = "decrypt"
translated = decrypt_message(key, message)
print(f"\n{mode.title()}ed text: \n{translated}")
if __name__ == "__main__":
import doctest
doctest.testmod()
# main()
| -1 |
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # https://www.tutorialspoint.com/python3/bitwise_operators_example.htm
def binary_and(a: int, b: int) -> str:
"""
Take in 2 integers, convert them to binary,
return a binary number that is the
result of a binary and operation on the integers provided.
>>> binary_and(25, 32)
'0b000000'
>>> binary_and(37, 50)
'0b100000'
>>> binary_and(21, 30)
'0b10100'
>>> binary_and(58, 73)
'0b0001000'
>>> binary_and(0, 255)
'0b00000000'
>>> binary_and(256, 256)
'0b100000000'
>>> binary_and(0, -1)
Traceback (most recent call last):
...
ValueError: the value of both inputs must be positive
>>> binary_and(0, 1.1)
Traceback (most recent call last):
...
TypeError: 'float' object cannot be interpreted as an integer
>>> binary_and("0", "1")
Traceback (most recent call last):
...
TypeError: '<' not supported between instances of 'str' and 'int'
"""
if a < 0 or b < 0:
raise ValueError("the value of both inputs must be positive")
a_binary = str(bin(a))[2:] # remove the leading "0b"
b_binary = str(bin(b))[2:] # remove the leading "0b"
max_len = max(len(a_binary), len(b_binary))
return "0b" + "".join(
str(int(char_a == "1" and char_b == "1"))
for char_a, char_b in zip(a_binary.zfill(max_len), b_binary.zfill(max_len))
)
if __name__ == "__main__":
import doctest
doctest.testmod()
| # https://www.tutorialspoint.com/python3/bitwise_operators_example.htm
def binary_and(a: int, b: int) -> str:
"""
Take in 2 integers, convert them to binary,
return a binary number that is the
result of a binary and operation on the integers provided.
>>> binary_and(25, 32)
'0b000000'
>>> binary_and(37, 50)
'0b100000'
>>> binary_and(21, 30)
'0b10100'
>>> binary_and(58, 73)
'0b0001000'
>>> binary_and(0, 255)
'0b00000000'
>>> binary_and(256, 256)
'0b100000000'
>>> binary_and(0, -1)
Traceback (most recent call last):
...
ValueError: the value of both inputs must be positive
>>> binary_and(0, 1.1)
Traceback (most recent call last):
...
TypeError: 'float' object cannot be interpreted as an integer
>>> binary_and("0", "1")
Traceback (most recent call last):
...
TypeError: '<' not supported between instances of 'str' and 'int'
"""
if a < 0 or b < 0:
raise ValueError("the value of both inputs must be positive")
a_binary = str(bin(a))[2:] # remove the leading "0b"
b_binary = str(bin(b))[2:] # remove the leading "0b"
max_len = max(len(a_binary), len(b_binary))
return "0b" + "".join(
str(int(char_a == "1" and char_b == "1"))
for char_a, char_b in zip(a_binary.zfill(max_len), b_binary.zfill(max_len))
)
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Recaptcha is a free captcha service offered by Google in order to secure websites and
forms. At https://www.google.com/recaptcha/admin/create you can create new recaptcha
keys and see the keys that your have already created.
* Keep in mind that recaptcha doesn't work with localhost
When you create a recaptcha key, your will get two separate keys: ClientKey & SecretKey.
ClientKey should be kept in your site's front end
SecretKey should be kept in your site's back end
# An example HTML login form with recaptcha tag is shown below
<form action="" method="post">
<h2 class="text-center">Log in</h2>
{% csrf_token %}
<div class="form-group">
<input type="text" name="username" required="required">
</div>
<div class="form-group">
<input type="password" name="password" required="required">
</div>
<div class="form-group">
<button type="submit">Log in</button>
</div>
<!-- Below is the recaptcha tag of html -->
<div class="g-recaptcha" data-sitekey="ClientKey"></div>
</form>
<!-- Below is the recaptcha script to be kept inside html tag -->
<script src="https://www.google.com/recaptcha/api.js" async defer></script>
Below a Django function for the views.py file contains a login form for demonstrating
recaptcha verification.
"""
import requests
try:
from django.contrib.auth import authenticate, login
from django.shortcuts import redirect, render
except ImportError:
authenticate = login = render = redirect = print
def login_using_recaptcha(request):
# Enter your recaptcha secret key here
secret_key = "secretKey"
url = "https://www.google.com/recaptcha/api/siteverify"
# when method is not POST, direct user to login page
if request.method != "POST":
return render(request, "login.html")
# from the frontend, get username, password, and client_key
username = request.POST.get("username")
password = request.POST.get("password")
client_key = request.POST.get("g-recaptcha-response")
# post recaptcha response to Google's recaptcha api
response = requests.post(url, data={"secret": secret_key, "response": client_key})
# if the recaptcha api verified our keys
if response.json().get("success", False):
# authenticate the user
user_in_database = authenticate(request, username=username, password=password)
if user_in_database:
login(request, user_in_database)
return redirect("/your-webpage")
return render(request, "login.html")
| """
Recaptcha is a free captcha service offered by Google in order to secure websites and
forms. At https://www.google.com/recaptcha/admin/create you can create new recaptcha
keys and see the keys that your have already created.
* Keep in mind that recaptcha doesn't work with localhost
When you create a recaptcha key, your will get two separate keys: ClientKey & SecretKey.
ClientKey should be kept in your site's front end
SecretKey should be kept in your site's back end
# An example HTML login form with recaptcha tag is shown below
<form action="" method="post">
<h2 class="text-center">Log in</h2>
{% csrf_token %}
<div class="form-group">
<input type="text" name="username" required="required">
</div>
<div class="form-group">
<input type="password" name="password" required="required">
</div>
<div class="form-group">
<button type="submit">Log in</button>
</div>
<!-- Below is the recaptcha tag of html -->
<div class="g-recaptcha" data-sitekey="ClientKey"></div>
</form>
<!-- Below is the recaptcha script to be kept inside html tag -->
<script src="https://www.google.com/recaptcha/api.js" async defer></script>
Below a Django function for the views.py file contains a login form for demonstrating
recaptcha verification.
"""
import requests
try:
from django.contrib.auth import authenticate, login
from django.shortcuts import redirect, render
except ImportError:
authenticate = login = render = redirect = print
def login_using_recaptcha(request):
# Enter your recaptcha secret key here
secret_key = "secretKey"
url = "https://www.google.com/recaptcha/api/siteverify"
# when method is not POST, direct user to login page
if request.method != "POST":
return render(request, "login.html")
# from the frontend, get username, password, and client_key
username = request.POST.get("username")
password = request.POST.get("password")
client_key = request.POST.get("g-recaptcha-response")
# post recaptcha response to Google's recaptcha api
response = requests.post(url, data={"secret": secret_key, "response": client_key})
# if the recaptcha api verified our keys
if response.json().get("success", False):
# authenticate the user
user_in_database = authenticate(request, username=username, password=password)
if user_in_database:
login(request, user_in_database)
return redirect("/your-webpage")
return render(request, "login.html")
| -1 |
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| class BinaryHeap:
"""
A max-heap implementation in Python
>>> binary_heap = BinaryHeap()
>>> binary_heap.insert(6)
>>> binary_heap.insert(10)
>>> binary_heap.insert(15)
>>> binary_heap.insert(12)
>>> binary_heap.pop()
15
>>> binary_heap.pop()
12
>>> binary_heap.get_list
[10, 6]
>>> len(binary_heap)
2
"""
def __init__(self):
self.__heap = [0]
self.__size = 0
def __swap_up(self, i: int) -> None:
"""Swap the element up"""
temporary = self.__heap[i]
while i // 2 > 0:
if self.__heap[i] > self.__heap[i // 2]:
self.__heap[i] = self.__heap[i // 2]
self.__heap[i // 2] = temporary
i //= 2
def insert(self, value: int) -> None:
"""Insert new element"""
self.__heap.append(value)
self.__size += 1
self.__swap_up(self.__size)
def __swap_down(self, i: int) -> None:
"""Swap the element down"""
while self.__size >= 2 * i:
if 2 * i + 1 > self.__size:
bigger_child = 2 * i
else:
if self.__heap[2 * i] > self.__heap[2 * i + 1]:
bigger_child = 2 * i
else:
bigger_child = 2 * i + 1
temporary = self.__heap[i]
if self.__heap[i] < self.__heap[bigger_child]:
self.__heap[i] = self.__heap[bigger_child]
self.__heap[bigger_child] = temporary
i = bigger_child
def pop(self) -> int:
"""Pop the root element"""
max_value = self.__heap[1]
self.__heap[1] = self.__heap[self.__size]
self.__size -= 1
self.__heap.pop()
self.__swap_down(1)
return max_value
@property
def get_list(self):
return self.__heap[1:]
def __len__(self):
"""Length of the array"""
return self.__size
if __name__ == "__main__":
import doctest
doctest.testmod()
# create an instance of BinaryHeap
binary_heap = BinaryHeap()
binary_heap.insert(6)
binary_heap.insert(10)
binary_heap.insert(15)
binary_heap.insert(12)
# pop root(max-values because it is max heap)
print(binary_heap.pop()) # 15
print(binary_heap.pop()) # 12
# get the list and size after operations
print(binary_heap.get_list)
print(len(binary_heap))
| class BinaryHeap:
"""
A max-heap implementation in Python
>>> binary_heap = BinaryHeap()
>>> binary_heap.insert(6)
>>> binary_heap.insert(10)
>>> binary_heap.insert(15)
>>> binary_heap.insert(12)
>>> binary_heap.pop()
15
>>> binary_heap.pop()
12
>>> binary_heap.get_list
[10, 6]
>>> len(binary_heap)
2
"""
def __init__(self):
self.__heap = [0]
self.__size = 0
def __swap_up(self, i: int) -> None:
"""Swap the element up"""
temporary = self.__heap[i]
while i // 2 > 0:
if self.__heap[i] > self.__heap[i // 2]:
self.__heap[i] = self.__heap[i // 2]
self.__heap[i // 2] = temporary
i //= 2
def insert(self, value: int) -> None:
"""Insert new element"""
self.__heap.append(value)
self.__size += 1
self.__swap_up(self.__size)
def __swap_down(self, i: int) -> None:
"""Swap the element down"""
while self.__size >= 2 * i:
if 2 * i + 1 > self.__size:
bigger_child = 2 * i
else:
if self.__heap[2 * i] > self.__heap[2 * i + 1]:
bigger_child = 2 * i
else:
bigger_child = 2 * i + 1
temporary = self.__heap[i]
if self.__heap[i] < self.__heap[bigger_child]:
self.__heap[i] = self.__heap[bigger_child]
self.__heap[bigger_child] = temporary
i = bigger_child
def pop(self) -> int:
"""Pop the root element"""
max_value = self.__heap[1]
self.__heap[1] = self.__heap[self.__size]
self.__size -= 1
self.__heap.pop()
self.__swap_down(1)
return max_value
@property
def get_list(self):
return self.__heap[1:]
def __len__(self):
"""Length of the array"""
return self.__size
if __name__ == "__main__":
import doctest
doctest.testmod()
# create an instance of BinaryHeap
binary_heap = BinaryHeap()
binary_heap.insert(6)
binary_heap.insert(10)
binary_heap.insert(15)
binary_heap.insert(12)
# pop root(max-values because it is max heap)
print(binary_heap.pop()) # 15
print(binary_heap.pop()) # 12
# get the list and size after operations
print(binary_heap.get_list)
print(len(binary_heap))
| -1 |
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| #!/usr/bin/env python3
from __future__ import annotations
import json
import requests
from bs4 import BeautifulSoup
from fake_useragent import UserAgent
headers = {"UserAgent": UserAgent().random}
def extract_user_profile(script) -> dict:
"""
May raise json.decoder.JSONDecodeError
"""
data = script.contents[0]
info = json.loads(data[data.find('{"config"') : -1])
return info["entry_data"]["ProfilePage"][0]["graphql"]["user"]
class InstagramUser:
"""
Class Instagram crawl instagram user information
Usage: (doctest failing on GitHub Actions)
# >>> instagram_user = InstagramUser("github")
# >>> instagram_user.is_verified
True
# >>> instagram_user.biography
'Built for developers.'
"""
def __init__(self, username):
self.url = f"https://www.instagram.com/{username}/"
self.user_data = self.get_json()
def get_json(self) -> dict:
"""
Return a dict of user information
"""
html = requests.get(self.url, headers=headers).text
scripts = BeautifulSoup(html, "html.parser").find_all("script")
try:
return extract_user_profile(scripts[4])
except (json.decoder.JSONDecodeError, KeyError):
return extract_user_profile(scripts[3])
def __repr__(self) -> str:
return f"{self.__class__.__name__}('{self.username}')"
def __str__(self) -> str:
return f"{self.fullname} ({self.username}) is {self.biography}"
@property
def username(self) -> str:
return self.user_data["username"]
@property
def fullname(self) -> str:
return self.user_data["full_name"]
@property
def biography(self) -> str:
return self.user_data["biography"]
@property
def email(self) -> str:
return self.user_data["business_email"]
@property
def website(self) -> str:
return self.user_data["external_url"]
@property
def number_of_followers(self) -> int:
return self.user_data["edge_followed_by"]["count"]
@property
def number_of_followings(self) -> int:
return self.user_data["edge_follow"]["count"]
@property
def number_of_posts(self) -> int:
return self.user_data["edge_owner_to_timeline_media"]["count"]
@property
def profile_picture_url(self) -> str:
return self.user_data["profile_pic_url_hd"]
@property
def is_verified(self) -> bool:
return self.user_data["is_verified"]
@property
def is_private(self) -> bool:
return self.user_data["is_private"]
def test_instagram_user(username: str = "github") -> None:
"""
A self running doctest
>>> test_instagram_user()
"""
import os
if os.environ.get("CI"):
return None # test failing on GitHub Actions
instagram_user = InstagramUser(username)
assert instagram_user.user_data
assert isinstance(instagram_user.user_data, dict)
assert instagram_user.username == username
if username != "github":
return
assert instagram_user.fullname == "GitHub"
assert instagram_user.biography == "Built for developers."
assert instagram_user.number_of_posts > 150
assert instagram_user.number_of_followers > 120000
assert instagram_user.number_of_followings > 15
assert instagram_user.email == "[email protected]"
assert instagram_user.website == "https://github.com/readme"
assert instagram_user.profile_picture_url.startswith("https://instagram.")
assert instagram_user.is_verified is True
assert instagram_user.is_private is False
if __name__ == "__main__":
import doctest
doctest.testmod()
instagram_user = InstagramUser("github")
print(instagram_user)
print(f"{instagram_user.number_of_posts = }")
print(f"{instagram_user.number_of_followers = }")
print(f"{instagram_user.number_of_followings = }")
print(f"{instagram_user.email = }")
print(f"{instagram_user.website = }")
print(f"{instagram_user.profile_picture_url = }")
print(f"{instagram_user.is_verified = }")
print(f"{instagram_user.is_private = }")
| #!/usr/bin/env python3
from __future__ import annotations
import json
import requests
from bs4 import BeautifulSoup
from fake_useragent import UserAgent
headers = {"UserAgent": UserAgent().random}
def extract_user_profile(script) -> dict:
"""
May raise json.decoder.JSONDecodeError
"""
data = script.contents[0]
info = json.loads(data[data.find('{"config"') : -1])
return info["entry_data"]["ProfilePage"][0]["graphql"]["user"]
class InstagramUser:
"""
Class Instagram crawl instagram user information
Usage: (doctest failing on GitHub Actions)
# >>> instagram_user = InstagramUser("github")
# >>> instagram_user.is_verified
True
# >>> instagram_user.biography
'Built for developers.'
"""
def __init__(self, username):
self.url = f"https://www.instagram.com/{username}/"
self.user_data = self.get_json()
def get_json(self) -> dict:
"""
Return a dict of user information
"""
html = requests.get(self.url, headers=headers).text
scripts = BeautifulSoup(html, "html.parser").find_all("script")
try:
return extract_user_profile(scripts[4])
except (json.decoder.JSONDecodeError, KeyError):
return extract_user_profile(scripts[3])
def __repr__(self) -> str:
return f"{self.__class__.__name__}('{self.username}')"
def __str__(self) -> str:
return f"{self.fullname} ({self.username}) is {self.biography}"
@property
def username(self) -> str:
return self.user_data["username"]
@property
def fullname(self) -> str:
return self.user_data["full_name"]
@property
def biography(self) -> str:
return self.user_data["biography"]
@property
def email(self) -> str:
return self.user_data["business_email"]
@property
def website(self) -> str:
return self.user_data["external_url"]
@property
def number_of_followers(self) -> int:
return self.user_data["edge_followed_by"]["count"]
@property
def number_of_followings(self) -> int:
return self.user_data["edge_follow"]["count"]
@property
def number_of_posts(self) -> int:
return self.user_data["edge_owner_to_timeline_media"]["count"]
@property
def profile_picture_url(self) -> str:
return self.user_data["profile_pic_url_hd"]
@property
def is_verified(self) -> bool:
return self.user_data["is_verified"]
@property
def is_private(self) -> bool:
return self.user_data["is_private"]
def test_instagram_user(username: str = "github") -> None:
"""
A self running doctest
>>> test_instagram_user()
"""
import os
if os.environ.get("CI"):
return None # test failing on GitHub Actions
instagram_user = InstagramUser(username)
assert instagram_user.user_data
assert isinstance(instagram_user.user_data, dict)
assert instagram_user.username == username
if username != "github":
return
assert instagram_user.fullname == "GitHub"
assert instagram_user.biography == "Built for developers."
assert instagram_user.number_of_posts > 150
assert instagram_user.number_of_followers > 120000
assert instagram_user.number_of_followings > 15
assert instagram_user.email == "[email protected]"
assert instagram_user.website == "https://github.com/readme"
assert instagram_user.profile_picture_url.startswith("https://instagram.")
assert instagram_user.is_verified is True
assert instagram_user.is_private is False
if __name__ == "__main__":
import doctest
doctest.testmod()
instagram_user = InstagramUser("github")
print(instagram_user)
print(f"{instagram_user.number_of_posts = }")
print(f"{instagram_user.number_of_followers = }")
print(f"{instagram_user.number_of_followings = }")
print(f"{instagram_user.email = }")
print(f"{instagram_user.website = }")
print(f"{instagram_user.profile_picture_url = }")
print(f"{instagram_user.is_verified = }")
print(f"{instagram_user.is_private = }")
| -1 |
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Coin sums
Problem 31: https://projecteuler.net/problem=31
In England the currency is made up of pound, £, and pence, p, and there are
eight coins in general circulation:
1p, 2p, 5p, 10p, 20p, 50p, £1 (100p) and £2 (200p).
It is possible to make £2 in the following way:
1×£1 + 1×50p + 2×20p + 1×5p + 1×2p + 3×1p
How many different ways can £2 be made using any number of coins?
"""
def one_pence() -> int:
return 1
def two_pence(x: int) -> int:
return 0 if x < 0 else two_pence(x - 2) + one_pence()
def five_pence(x: int) -> int:
return 0 if x < 0 else five_pence(x - 5) + two_pence(x)
def ten_pence(x: int) -> int:
return 0 if x < 0 else ten_pence(x - 10) + five_pence(x)
def twenty_pence(x: int) -> int:
return 0 if x < 0 else twenty_pence(x - 20) + ten_pence(x)
def fifty_pence(x: int) -> int:
return 0 if x < 0 else fifty_pence(x - 50) + twenty_pence(x)
def one_pound(x: int) -> int:
return 0 if x < 0 else one_pound(x - 100) + fifty_pence(x)
def two_pound(x: int) -> int:
return 0 if x < 0 else two_pound(x - 200) + one_pound(x)
def solution(n: int = 200) -> int:
"""Returns the number of different ways can n pence be made using any number of
coins?
>>> solution(500)
6295434
>>> solution(200)
73682
>>> solution(50)
451
>>> solution(10)
11
"""
return two_pound(n)
if __name__ == "__main__":
print(solution(int(input().strip())))
| """
Coin sums
Problem 31: https://projecteuler.net/problem=31
In England the currency is made up of pound, £, and pence, p, and there are
eight coins in general circulation:
1p, 2p, 5p, 10p, 20p, 50p, £1 (100p) and £2 (200p).
It is possible to make £2 in the following way:
1×£1 + 1×50p + 2×20p + 1×5p + 1×2p + 3×1p
How many different ways can £2 be made using any number of coins?
"""
def one_pence() -> int:
return 1
def two_pence(x: int) -> int:
return 0 if x < 0 else two_pence(x - 2) + one_pence()
def five_pence(x: int) -> int:
return 0 if x < 0 else five_pence(x - 5) + two_pence(x)
def ten_pence(x: int) -> int:
return 0 if x < 0 else ten_pence(x - 10) + five_pence(x)
def twenty_pence(x: int) -> int:
return 0 if x < 0 else twenty_pence(x - 20) + ten_pence(x)
def fifty_pence(x: int) -> int:
return 0 if x < 0 else fifty_pence(x - 50) + twenty_pence(x)
def one_pound(x: int) -> int:
return 0 if x < 0 else one_pound(x - 100) + fifty_pence(x)
def two_pound(x: int) -> int:
return 0 if x < 0 else two_pound(x - 200) + one_pound(x)
def solution(n: int = 200) -> int:
"""Returns the number of different ways can n pence be made using any number of
coins?
>>> solution(500)
6295434
>>> solution(200)
73682
>>> solution(50)
451
>>> solution(10)
11
"""
return two_pound(n)
if __name__ == "__main__":
print(solution(int(input().strip())))
| -1 |
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Ugly numbers are numbers whose only prime factors are 2, 3 or 5. The sequence
1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, … shows the first 11 ugly numbers. By convention,
1 is included.
Given an integer n, we have to find the nth ugly number.
For more details, refer this article
https://www.geeksforgeeks.org/ugly-numbers/
"""
def ugly_numbers(n: int) -> int:
"""
Returns the nth ugly number.
>>> ugly_numbers(100)
1536
>>> ugly_numbers(0)
1
>>> ugly_numbers(20)
36
>>> ugly_numbers(-5)
1
>>> ugly_numbers(-5.5)
Traceback (most recent call last):
...
TypeError: 'float' object cannot be interpreted as an integer
"""
ugly_nums = [1]
i2, i3, i5 = 0, 0, 0
next_2 = ugly_nums[i2] * 2
next_3 = ugly_nums[i3] * 3
next_5 = ugly_nums[i5] * 5
for i in range(1, n):
next_num = min(next_2, next_3, next_5)
ugly_nums.append(next_num)
if next_num == next_2:
i2 += 1
next_2 = ugly_nums[i2] * 2
if next_num == next_3:
i3 += 1
next_3 = ugly_nums[i3] * 3
if next_num == next_5:
i5 += 1
next_5 = ugly_nums[i5] * 5
return ugly_nums[-1]
if __name__ == "__main__":
from doctest import testmod
testmod(verbose=True)
print(f"{ugly_numbers(200) = }")
| """
Ugly numbers are numbers whose only prime factors are 2, 3 or 5. The sequence
1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, … shows the first 11 ugly numbers. By convention,
1 is included.
Given an integer n, we have to find the nth ugly number.
For more details, refer this article
https://www.geeksforgeeks.org/ugly-numbers/
"""
def ugly_numbers(n: int) -> int:
"""
Returns the nth ugly number.
>>> ugly_numbers(100)
1536
>>> ugly_numbers(0)
1
>>> ugly_numbers(20)
36
>>> ugly_numbers(-5)
1
>>> ugly_numbers(-5.5)
Traceback (most recent call last):
...
TypeError: 'float' object cannot be interpreted as an integer
"""
ugly_nums = [1]
i2, i3, i5 = 0, 0, 0
next_2 = ugly_nums[i2] * 2
next_3 = ugly_nums[i3] * 3
next_5 = ugly_nums[i5] * 5
for i in range(1, n):
next_num = min(next_2, next_3, next_5)
ugly_nums.append(next_num)
if next_num == next_2:
i2 += 1
next_2 = ugly_nums[i2] * 2
if next_num == next_3:
i3 += 1
next_3 = ugly_nums[i3] * 3
if next_num == next_5:
i5 += 1
next_5 = ugly_nums[i5] * 5
return ugly_nums[-1]
if __name__ == "__main__":
from doctest import testmod
testmod(verbose=True)
print(f"{ugly_numbers(200) = }")
| -1 |
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from datetime import datetime
import requests
from bs4 import BeautifulSoup
if __name__ == "__main__":
url = input("Enter image url: ").strip()
print(f"Downloading image from {url} ...")
soup = BeautifulSoup(requests.get(url).content, "html.parser")
# The image URL is in the content field of the first meta tag with property og:image
image_url = soup.find("meta", {"property": "og:image"})["content"]
image_data = requests.get(image_url).content
file_name = f"{datetime.now():%Y-%m-%d_%H:%M:%S}.jpg"
with open(file_name, "wb") as fp:
fp.write(image_data)
print(f"Done. Image saved to disk as {file_name}.")
| from datetime import datetime
import requests
from bs4 import BeautifulSoup
if __name__ == "__main__":
url = input("Enter image url: ").strip()
print(f"Downloading image from {url} ...")
soup = BeautifulSoup(requests.get(url).content, "html.parser")
# The image URL is in the content field of the first meta tag with property og:image
image_url = soup.find("meta", {"property": "og:image"})["content"]
image_data = requests.get(image_url).content
file_name = f"{datetime.now():%Y-%m-%d_%H:%M:%S}.jpg"
with open(file_name, "wb") as fp:
fp.write(image_data)
print(f"Done. Image saved to disk as {file_name}.")
| -1 |
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| #!/usr/bin/env python3
import hashlib
import importlib.util
import json
import os
import pathlib
from types import ModuleType
import pytest
import requests
PROJECT_EULER_DIR_PATH = pathlib.Path.cwd().joinpath("project_euler")
PROJECT_EULER_ANSWERS_PATH = pathlib.Path.cwd().joinpath(
"scripts", "project_euler_answers.json"
)
with open(PROJECT_EULER_ANSWERS_PATH) as file_handle:
PROBLEM_ANSWERS: dict[str, str] = json.load(file_handle)
def convert_path_to_module(file_path: pathlib.Path) -> ModuleType:
"""Converts a file path to a Python module"""
spec = importlib.util.spec_from_file_location(file_path.name, str(file_path))
module = importlib.util.module_from_spec(spec) # type: ignore
spec.loader.exec_module(module) # type: ignore
return module
def all_solution_file_paths() -> list[pathlib.Path]:
"""Collects all the solution file path in the Project Euler directory"""
solution_file_paths = []
for problem_dir_path in PROJECT_EULER_DIR_PATH.iterdir():
if problem_dir_path.is_file() or problem_dir_path.name.startswith("_"):
continue
for file_path in problem_dir_path.iterdir():
if file_path.suffix != ".py" or file_path.name.startswith(("_", "test")):
continue
solution_file_paths.append(file_path)
return solution_file_paths
def get_files_url() -> str:
"""Return the pull request number which triggered this action."""
with open(os.environ["GITHUB_EVENT_PATH"]) as file:
event = json.load(file)
return event["pull_request"]["url"] + "/files"
def added_solution_file_path() -> list[pathlib.Path]:
"""Collects only the solution file path which got added in the current
pull request.
This will only be triggered if the script is ran from GitHub Actions.
"""
solution_file_paths = []
headers = {
"Accept": "application/vnd.github.v3+json",
"Authorization": "token " + os.environ["GITHUB_TOKEN"],
}
files = requests.get(get_files_url(), headers=headers).json()
for file in files:
filepath = pathlib.Path.cwd().joinpath(file["filename"])
if (
filepath.suffix != ".py"
or filepath.name.startswith(("_", "test"))
or not filepath.name.startswith("sol")
):
continue
solution_file_paths.append(filepath)
return solution_file_paths
def collect_solution_file_paths() -> list[pathlib.Path]:
if os.environ.get("CI") and os.environ.get("GITHUB_EVENT_NAME") == "pull_request":
# Return only if there are any, otherwise default to all solutions
if filepaths := added_solution_file_path():
return filepaths
return all_solution_file_paths()
@pytest.mark.parametrize(
"solution_path",
collect_solution_file_paths(),
ids=lambda path: f"{path.parent.name}/{path.name}",
)
def test_project_euler(solution_path: pathlib.Path) -> None:
"""Testing for all Project Euler solutions"""
# problem_[extract this part] and pad it with zeroes for width 3
problem_number: str = solution_path.parent.name[8:].zfill(3)
expected: str = PROBLEM_ANSWERS[problem_number]
solution_module = convert_path_to_module(solution_path)
answer = str(solution_module.solution()) # type: ignore
answer = hashlib.sha256(answer.encode()).hexdigest()
assert (
answer == expected
), f"Expected solution to {problem_number} to have hash {expected}, got {answer}"
| #!/usr/bin/env python3
import hashlib
import importlib.util
import json
import os
import pathlib
from types import ModuleType
import pytest
import requests
PROJECT_EULER_DIR_PATH = pathlib.Path.cwd().joinpath("project_euler")
PROJECT_EULER_ANSWERS_PATH = pathlib.Path.cwd().joinpath(
"scripts", "project_euler_answers.json"
)
with open(PROJECT_EULER_ANSWERS_PATH) as file_handle:
PROBLEM_ANSWERS: dict[str, str] = json.load(file_handle)
def convert_path_to_module(file_path: pathlib.Path) -> ModuleType:
"""Converts a file path to a Python module"""
spec = importlib.util.spec_from_file_location(file_path.name, str(file_path))
module = importlib.util.module_from_spec(spec) # type: ignore
spec.loader.exec_module(module) # type: ignore
return module
def all_solution_file_paths() -> list[pathlib.Path]:
"""Collects all the solution file path in the Project Euler directory"""
solution_file_paths = []
for problem_dir_path in PROJECT_EULER_DIR_PATH.iterdir():
if problem_dir_path.is_file() or problem_dir_path.name.startswith("_"):
continue
for file_path in problem_dir_path.iterdir():
if file_path.suffix != ".py" or file_path.name.startswith(("_", "test")):
continue
solution_file_paths.append(file_path)
return solution_file_paths
def get_files_url() -> str:
"""Return the pull request number which triggered this action."""
with open(os.environ["GITHUB_EVENT_PATH"]) as file:
event = json.load(file)
return event["pull_request"]["url"] + "/files"
def added_solution_file_path() -> list[pathlib.Path]:
"""Collects only the solution file path which got added in the current
pull request.
This will only be triggered if the script is ran from GitHub Actions.
"""
solution_file_paths = []
headers = {
"Accept": "application/vnd.github.v3+json",
"Authorization": "token " + os.environ["GITHUB_TOKEN"],
}
files = requests.get(get_files_url(), headers=headers).json()
for file in files:
filepath = pathlib.Path.cwd().joinpath(file["filename"])
if (
filepath.suffix != ".py"
or filepath.name.startswith(("_", "test"))
or not filepath.name.startswith("sol")
):
continue
solution_file_paths.append(filepath)
return solution_file_paths
def collect_solution_file_paths() -> list[pathlib.Path]:
if os.environ.get("CI") and os.environ.get("GITHUB_EVENT_NAME") == "pull_request":
# Return only if there are any, otherwise default to all solutions
if filepaths := added_solution_file_path():
return filepaths
return all_solution_file_paths()
@pytest.mark.parametrize(
"solution_path",
collect_solution_file_paths(),
ids=lambda path: f"{path.parent.name}/{path.name}",
)
def test_project_euler(solution_path: pathlib.Path) -> None:
"""Testing for all Project Euler solutions"""
# problem_[extract this part] and pad it with zeroes for width 3
problem_number: str = solution_path.parent.name[8:].zfill(3)
expected: str = PROBLEM_ANSWERS[problem_number]
solution_module = convert_path_to_module(solution_path)
answer = str(solution_module.solution()) # type: ignore
answer = hashlib.sha256(answer.encode()).hexdigest()
assert (
answer == expected
), f"Expected solution to {problem_number} to have hash {expected}, got {answer}"
| -1 |
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Factorial of a number using memoization
from functools import lru_cache
@lru_cache
def factorial(num: int) -> int:
"""
>>> factorial(7)
5040
>>> factorial(-1)
Traceback (most recent call last):
...
ValueError: Number should not be negative.
>>> [factorial(i) for i in range(10)]
[1, 1, 2, 6, 24, 120, 720, 5040, 40320, 362880]
"""
if num < 0:
raise ValueError("Number should not be negative.")
return 1 if num in (0, 1) else num * factorial(num - 1)
if __name__ == "__main__":
import doctest
doctest.testmod()
| # Factorial of a number using memoization
from functools import lru_cache
@lru_cache
def factorial(num: int) -> int:
"""
>>> factorial(7)
5040
>>> factorial(-1)
Traceback (most recent call last):
...
ValueError: Number should not be negative.
>>> [factorial(i) for i in range(10)]
[1, 1, 2, 6, 24, 120, 720, 5040, 40320, 362880]
"""
if num < 0:
raise ValueError("Number should not be negative.")
return 1 if num in (0, 1) else num * factorial(num - 1)
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 5,755 | [mypy] Annotate other/lru_cache and other/lfu_cache | ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| spazm | "2021-11-02T21:46:59Z" | "2021-11-10T22:21:16Z" | 7e81551d7b54f121458bd8e6a67b7ca86156815c | f36ee034f1f5c65cc89ed1fadea29a28e744a297 | [mypy] Annotate other/lru_cache and other/lfu_cache. ### Describe your change:
Annotation and improvements for lru_cache and lfu_cache.
These contain a lot of duplicated code so I have put them into a single PR.
+ Annotates existing lru_cache algorithm
+ Adds tests for the internal double linked list
+ Expands algorithm annotations to be generic of key and value types
+ Added __repr__ to enable clean doctests and clarity in the repl
+ mypy required using `setattr` rather than creating an attr through assignment.
+ Makes analogous changes to lfu_cache.
PR is broken into multiple commits each handling a logical chunk and is meant to be clear when viewing each commit individually.
#### Before - lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py :(-(master)-~/src/github/TheAlogorithms/Python/other
lru_cache.py:44: error: "None" has no attribute "next"
lru_cache.py:44: error: "None" has no attribute "prev"
lru_cache.py:94: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lru_cache.py:102: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lru_cache.py:185: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 5 errors in 1 file (checked 1 source file)
```
#### After -- lru_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lru_cache.py
Success: no issues found in 1 source file
% mypy --strict lru_cache.py
Success: no issues found in 1 source file
```
#### Before -- lfu_cache
```
% git checkout master
Switched to branch 'master'
Your branch is up to date with 'origin/master'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
lfu_cache.py:42: error: "None" has no attribute "key"
lfu_cache.py:42: error: "None" has no attribute "freq"
lfu_cache.py:44: error: "None" has no attribute "next"
lfu_cache.py:44: error: "None" has no attribute "prev"
lfu_cache.py:45: error: "None" has no attribute "prev"
lfu_cache.py:54: error: "None" has no attribute "next"
lfu_cache.py:54: error: "None" has no attribute "prev"
lfu_cache.py:92: error: Need type annotation for "decorator_function_to_instance_map" (hint: "decorator_function_to_instance_map: Dict[<type>, <type>] = ...")
lfu_cache.py:100: error: Need type annotation for "cache" (hint: "cache: Dict[<type>, <type>] = ...")
lfu_cache.py:179: error: "Callable[[VarArg(Any), KwArg(Any)], Any]" has no attribute "cache_info"
Found 10 errors in 1 file (checked 1 source file)
```
#### After -- lfu_cache
```
% git checkout -
Switched to branch 'mypy-fix-other-lru_cache'
Your branch is up to date with 'origin/mypy-fix-other-lru_cache'.
% mypy --ignore-missing-imports --install-types lfu_cache.py
Success: no issues found in 1 source file
% mypy --strict lfu_cache.py
Success: no issues found in 1 source file
```
Related: #4058
* [ ] Add an algorithm?
* [X] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes ~~one~~ two algorithm files. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """ A naive recursive implementation of 0-1 Knapsack Problem
https://en.wikipedia.org/wiki/Knapsack_problem
"""
from __future__ import annotations
def knapsack(capacity: int, weights: list[int], values: list[int], counter: int) -> int:
"""
Returns the maximum value that can be put in a knapsack of a capacity cap,
whereby each weight w has a specific value val.
>>> cap = 50
>>> val = [60, 100, 120]
>>> w = [10, 20, 30]
>>> c = len(val)
>>> knapsack(cap, w, val, c)
220
The result is 220 cause the values of 100 and 120 got the weight of 50
which is the limit of the capacity.
"""
# Base Case
if counter == 0 or capacity == 0:
return 0
# If weight of the nth item is more than Knapsack of capacity,
# then this item cannot be included in the optimal solution,
# else return the maximum of two cases:
# (1) nth item included
# (2) not included
if weights[counter - 1] > capacity:
return knapsack(capacity, weights, values, counter - 1)
else:
left_capacity = capacity - weights[counter - 1]
new_value_included = values[counter - 1] + knapsack(
left_capacity, weights, values, counter - 1
)
without_new_value = knapsack(capacity, weights, values, counter - 1)
return max(new_value_included, without_new_value)
if __name__ == "__main__":
import doctest
doctest.testmod()
| """ A naive recursive implementation of 0-1 Knapsack Problem
https://en.wikipedia.org/wiki/Knapsack_problem
"""
from __future__ import annotations
def knapsack(capacity: int, weights: list[int], values: list[int], counter: int) -> int:
"""
Returns the maximum value that can be put in a knapsack of a capacity cap,
whereby each weight w has a specific value val.
>>> cap = 50
>>> val = [60, 100, 120]
>>> w = [10, 20, 30]
>>> c = len(val)
>>> knapsack(cap, w, val, c)
220
The result is 220 cause the values of 100 and 120 got the weight of 50
which is the limit of the capacity.
"""
# Base Case
if counter == 0 or capacity == 0:
return 0
# If weight of the nth item is more than Knapsack of capacity,
# then this item cannot be included in the optimal solution,
# else return the maximum of two cases:
# (1) nth item included
# (2) not included
if weights[counter - 1] > capacity:
return knapsack(capacity, weights, values, counter - 1)
else:
left_capacity = capacity - weights[counter - 1]
new_value_included = values[counter - 1] + knapsack(
left_capacity, weights, values, counter - 1
)
without_new_value = knapsack(capacity, weights, values, counter - 1)
return max(new_value_included, without_new_value)
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Contributing guidelines
## Before contributing
Welcome to [TheAlgorithms/Python](https://github.com/TheAlgorithms/Python)! Before sending your pull requests, make sure that you __read the whole guidelines__. If you have any doubt on the contributing guide, please feel free to [state it clearly in an issue](https://github.com/TheAlgorithms/Python/issues/new) or ask the community in [Gitter](https://gitter.im/TheAlgorithms).
## Contributing
### Contributor
We are very happy that you consider implementing algorithms and data structures for others! This repository is referenced and used by learners from all over the globe. Being one of our contributors, you agree and confirm that:
- You did your work - no plagiarism allowed
- Any plagiarized work will not be merged.
- Your work will be distributed under [MIT License](LICENSE.md) once your pull request is merged
- Your submitted work fulfils or mostly fulfils our styles and standards
__New implementation__ is welcome! For example, new solutions for a problem, different representations for a graph data structure or algorithm designs with different complexity but __identical implementation__ of an existing implementation is not allowed. Please check whether the solution is already implemented or not before submitting your pull request.
__Improving comments__ and __writing proper tests__ are also highly welcome.
### Contribution
We appreciate any contribution, from fixing a grammar mistake in a comment to implementing complex algorithms. Please read this section if you are contributing your work.
Your contribution will be tested by our [automated testing on Travis CI](https://travis-ci.org/TheAlgorithms/Python/pull_requests) to save time and mental energy. After you have submitted your pull request, you should see the Travis tests start to run at the bottom of your submission page. If those tests fail, then click on the ___details___ button try to read through the Travis output to understand the failure. If you do not understand, please leave a comment on your submission page and a community member will try to help.
Please help us keep our issue list small by adding fixes: #{$ISSUE_NO} to the commit message of pull requests that resolve open issues. GitHub will use this tag to auto-close the issue when the PR is merged.
#### What is an Algorithm?
An Algorithm is one or more functions (or classes) that:
* take one or more inputs,
* perform some internal calculations or data manipulations,
* return one or more outputs,
* have minimal side effects (Ex. `print()`, `plot()`, `read()`, `write()`).
Algorithms should be packaged in a way that would make it easy for readers to put them into larger programs.
Algorithms should:
* have intuitive class and function names that make their purpose clear to readers
* use Python naming conventions and intuitive variable names to ease comprehension
* be flexible to take different input values
* have Python type hints for their input parameters and return values
* raise Python exceptions (`ValueError`, etc.) on erroneous input values
* have docstrings with clear explanations and/or URLs to source materials
* contain doctests that test both valid and erroneous input values
* return all calculation results instead of printing or plotting them
Algorithms in this repo should not be how-to examples for existing Python packages. Instead, they should perform internal calculations or manipulations to convert input values into different output values. Those calculations or manipulations can use data types, classes, or functions of existing Python packages but each algorithm in this repo should add unique value.
#### Pre-commit plugin
Use [pre-commit](https://pre-commit.com/#installation) to automatically format your code to match our coding style:
```bash
python3 -m pip install pre-commit # only required the first time
pre-commit install
```
That's it! The plugin will run every time you commit any changes. If there are any errors found during the run, fix them and commit those changes. You can even run the plugin manually on all files:
```bash
pre-commit run --all-files --show-diff-on-failure
```
#### Coding Style
We want your work to be readable by others; therefore, we encourage you to note the following:
- Please write in Python 3.9+. For instance: `print()` is a function in Python 3 so `print "Hello"` will *not* work but `print("Hello")` will.
- Please focus hard on the naming of functions, classes, and variables. Help your reader by using __descriptive names__ that can help you to remove redundant comments.
- Single letter variable names are *old school* so please avoid them unless their life only spans a few lines.
- Expand acronyms because `gcd()` is hard to understand but `greatest_common_divisor()` is not.
- Please follow the [Python Naming Conventions](https://pep8.org/#prescriptive-naming-conventions) so variable_names and function_names should be lower_case, CONSTANTS in UPPERCASE, ClassNames should be CamelCase, etc.
- We encourage the use of Python [f-strings](https://realpython.com/python-f-strings/#f-strings-a-new-and-improved-way-to-format-strings-in-python) where they make the code easier to read.
- Please consider running [__psf/black__](https://github.com/python/black) on your Python file(s) before submitting your pull request. This is not yet a requirement but it does make your code more readable and automatically aligns it with much of [PEP 8](https://www.python.org/dev/peps/pep-0008/). There are other code formatters (autopep8, yapf) but the __black__ formatter is now hosted by the Python Software Foundation. To use it,
```bash
python3 -m pip install black # only required the first time
black .
```
- All submissions will need to pass the test `flake8 . --ignore=E203,W503 --max-line-length=88` before they will be accepted so if possible, try this test locally on your Python file(s) before submitting your pull request.
```bash
python3 -m pip install flake8 # only required the first time
flake8 . --ignore=E203,W503 --max-line-length=88 --show-source
```
- Original code submission require docstrings or comments to describe your work.
- More on docstrings and comments:
If you used a Wikipedia article or some other source material to create your algorithm, please add the URL in a docstring or comment to help your reader.
The following are considered to be bad and may be requested to be improved:
```python
x = x + 2 # increased by 2
```
This is too trivial. Comments are expected to be explanatory. For comments, you can write them above, on or below a line of code, as long as you are consistent within the same piece of code.
We encourage you to put docstrings inside your functions but please pay attention to the indentation of docstrings. The following is a good example:
```python
def sum_ab(a, b):
"""
Return the sum of two integers a and b.
"""
return a + b
```
- Write tests (especially [__doctests__](https://docs.python.org/3/library/doctest.html)) to illustrate and verify your work. We highly encourage the use of _doctests on all functions_.
```python
def sum_ab(a, b):
"""
Return the sum of two integers a and b
>>> sum_ab(2, 2)
4
>>> sum_ab(-2, 3)
1
>>> sum_ab(4.9, 5.1)
10.0
"""
return a + b
```
These doctests will be run by pytest as part of our automated testing so please try to run your doctests locally and make sure that they are found and pass:
```bash
python3 -m doctest -v my_submission.py
```
The use of the Python builtin `input()` function is __not__ encouraged:
```python
input('Enter your input:')
# Or even worse...
input = eval(input("Enter your input: "))
```
However, if your code uses `input()` then we encourage you to gracefully deal with leading and trailing whitespace in user input by adding `.strip()` as in:
```python
starting_value = int(input("Please enter a starting value: ").strip())
```
The use of [Python type hints](https://docs.python.org/3/library/typing.html) is encouraged for function parameters and return values. Our automated testing will run [mypy](http://mypy-lang.org) so run that locally before making your submission.
```python
def sum_ab(a: int, b: int) -> int:
return a + b
```
Instructions on how to install mypy can be found [here](https://github.com/python/mypy). Please use the command `mypy --ignore-missing-imports .` to test all files or `mypy --ignore-missing-imports path/to/file.py` to test a specific file.
- [__List comprehensions and generators__](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions) are preferred over the use of `lambda`, `map`, `filter`, `reduce` but the important thing is to demonstrate the power of Python in code that is easy to read and maintain.
- Avoid importing external libraries for basic algorithms. Only use those libraries for complicated algorithms.
- If you need a third-party module that is not in the file __requirements.txt__, please add it to that file as part of your submission.
#### Other Requirements for Submissions
- If you are submitting code in the `project_euler/` directory, please also read [the dedicated Guideline](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md) before contributing to our Project Euler library.
- The file extension for code files should be `.py`. Jupyter Notebooks should be submitted to [TheAlgorithms/Jupyter](https://github.com/TheAlgorithms/Jupyter).
- Strictly use snake_case (underscore_separated) in your file_name, as it will be easy to parse in future using scripts.
- Please avoid creating new directories if at all possible. Try to fit your work into the existing directory structure.
- If possible, follow the standard *within* the folder you are submitting to.
- If you have modified/added code work, make sure the code compiles before submitting.
- If you have modified/added documentation work, ensure your language is concise and contains no grammar errors.
- Do not update the README.md or DIRECTORY.md file which will be periodically autogenerated by our Travis CI processes.
- Add a corresponding explanation to [Algorithms-Explanation](https://github.com/TheAlgorithms/Algorithms-Explanation) (Optional but recommended).
- All submissions will be tested with [__mypy__](http://www.mypy-lang.org) so we encourage you to add [__Python type hints__](https://docs.python.org/3/library/typing.html) where it makes sense to do so.
- Most importantly,
- __Be consistent in the use of these guidelines when submitting.__
- __Join__ [Gitter](https://gitter.im/TheAlgorithms) __now!__
- Happy coding!
Writer [@poyea](https://github.com/poyea), Jun 2019.
| # Contributing guidelines
## Before contributing
Welcome to [TheAlgorithms/Python](https://github.com/TheAlgorithms/Python)! Before sending your pull requests, make sure that you __read the whole guidelines__. If you have any doubt on the contributing guide, please feel free to [state it clearly in an issue](https://github.com/TheAlgorithms/Python/issues/new) or ask the community in [Gitter](https://gitter.im/TheAlgorithms).
## Contributing
### Contributor
We are very happy that you consider implementing algorithms and data structures for others! This repository is referenced and used by learners from all over the globe. Being one of our contributors, you agree and confirm that:
- You did your work - no plagiarism allowed
- Any plagiarized work will not be merged.
- Your work will be distributed under [MIT License](LICENSE.md) once your pull request is merged
- Your submitted work fulfils or mostly fulfils our styles and standards
__New implementation__ is welcome! For example, new solutions for a problem, different representations for a graph data structure or algorithm designs with different complexity but __identical implementation__ of an existing implementation is not allowed. Please check whether the solution is already implemented or not before submitting your pull request.
__Improving comments__ and __writing proper tests__ are also highly welcome.
### Contribution
We appreciate any contribution, from fixing a grammar mistake in a comment to implementing complex algorithms. Please read this section if you are contributing your work.
Your contribution will be tested by our [automated testing on GitHub Actions](https://github.com/TheAlgorithms/Python/actions) to save time and mental energy. After you have submitted your pull request, you should see the GitHub Actions tests start to run at the bottom of your submission page. If those tests fail, then click on the ___details___ button try to read through the GitHub Actions output to understand the failure. If you do not understand, please leave a comment on your submission page and a community member will try to help.
Please help us keep our issue list small by adding fixes: #{$ISSUE_NO} to the commit message of pull requests that resolve open issues. GitHub will use this tag to auto-close the issue when the PR is merged.
#### What is an Algorithm?
An Algorithm is one or more functions (or classes) that:
* take one or more inputs,
* perform some internal calculations or data manipulations,
* return one or more outputs,
* have minimal side effects (Ex. `print()`, `plot()`, `read()`, `write()`).
Algorithms should be packaged in a way that would make it easy for readers to put them into larger programs.
Algorithms should:
* have intuitive class and function names that make their purpose clear to readers
* use Python naming conventions and intuitive variable names to ease comprehension
* be flexible to take different input values
* have Python type hints for their input parameters and return values
* raise Python exceptions (`ValueError`, etc.) on erroneous input values
* have docstrings with clear explanations and/or URLs to source materials
* contain doctests that test both valid and erroneous input values
* return all calculation results instead of printing or plotting them
Algorithms in this repo should not be how-to examples for existing Python packages. Instead, they should perform internal calculations or manipulations to convert input values into different output values. Those calculations or manipulations can use data types, classes, or functions of existing Python packages but each algorithm in this repo should add unique value.
#### Pre-commit plugin
Use [pre-commit](https://pre-commit.com/#installation) to automatically format your code to match our coding style:
```bash
python3 -m pip install pre-commit # only required the first time
pre-commit install
```
That's it! The plugin will run every time you commit any changes. If there are any errors found during the run, fix them and commit those changes. You can even run the plugin manually on all files:
```bash
pre-commit run --all-files --show-diff-on-failure
```
#### Coding Style
We want your work to be readable by others; therefore, we encourage you to note the following:
- Please write in Python 3.9+. For instance: `print()` is a function in Python 3 so `print "Hello"` will *not* work but `print("Hello")` will.
- Please focus hard on the naming of functions, classes, and variables. Help your reader by using __descriptive names__ that can help you to remove redundant comments.
- Single letter variable names are *old school* so please avoid them unless their life only spans a few lines.
- Expand acronyms because `gcd()` is hard to understand but `greatest_common_divisor()` is not.
- Please follow the [Python Naming Conventions](https://pep8.org/#prescriptive-naming-conventions) so variable_names and function_names should be lower_case, CONSTANTS in UPPERCASE, ClassNames should be CamelCase, etc.
- We encourage the use of Python [f-strings](https://realpython.com/python-f-strings/#f-strings-a-new-and-improved-way-to-format-strings-in-python) where they make the code easier to read.
- Please consider running [__psf/black__](https://github.com/python/black) on your Python file(s) before submitting your pull request. This is not yet a requirement but it does make your code more readable and automatically aligns it with much of [PEP 8](https://www.python.org/dev/peps/pep-0008/). There are other code formatters (autopep8, yapf) but the __black__ formatter is now hosted by the Python Software Foundation. To use it,
```bash
python3 -m pip install black # only required the first time
black .
```
- All submissions will need to pass the test `flake8 . --ignore=E203,W503 --max-line-length=88` before they will be accepted so if possible, try this test locally on your Python file(s) before submitting your pull request.
```bash
python3 -m pip install flake8 # only required the first time
flake8 . --ignore=E203,W503 --max-line-length=88 --show-source
```
- Original code submission require docstrings or comments to describe your work.
- More on docstrings and comments:
If you used a Wikipedia article or some other source material to create your algorithm, please add the URL in a docstring or comment to help your reader.
The following are considered to be bad and may be requested to be improved:
```python
x = x + 2 # increased by 2
```
This is too trivial. Comments are expected to be explanatory. For comments, you can write them above, on or below a line of code, as long as you are consistent within the same piece of code.
We encourage you to put docstrings inside your functions but please pay attention to the indentation of docstrings. The following is a good example:
```python
def sum_ab(a, b):
"""
Return the sum of two integers a and b.
"""
return a + b
```
- Write tests (especially [__doctests__](https://docs.python.org/3/library/doctest.html)) to illustrate and verify your work. We highly encourage the use of _doctests on all functions_.
```python
def sum_ab(a, b):
"""
Return the sum of two integers a and b
>>> sum_ab(2, 2)
4
>>> sum_ab(-2, 3)
1
>>> sum_ab(4.9, 5.1)
10.0
"""
return a + b
```
These doctests will be run by pytest as part of our automated testing so please try to run your doctests locally and make sure that they are found and pass:
```bash
python3 -m doctest -v my_submission.py
```
The use of the Python builtin `input()` function is __not__ encouraged:
```python
input('Enter your input:')
# Or even worse...
input = eval(input("Enter your input: "))
```
However, if your code uses `input()` then we encourage you to gracefully deal with leading and trailing whitespace in user input by adding `.strip()` as in:
```python
starting_value = int(input("Please enter a starting value: ").strip())
```
The use of [Python type hints](https://docs.python.org/3/library/typing.html) is encouraged for function parameters and return values. Our automated testing will run [mypy](http://mypy-lang.org) so run that locally before making your submission.
```python
def sum_ab(a: int, b: int) -> int:
return a + b
```
Instructions on how to install mypy can be found [here](https://github.com/python/mypy). Please use the command `mypy --ignore-missing-imports .` to test all files or `mypy --ignore-missing-imports path/to/file.py` to test a specific file.
- [__List comprehensions and generators__](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions) are preferred over the use of `lambda`, `map`, `filter`, `reduce` but the important thing is to demonstrate the power of Python in code that is easy to read and maintain.
- Avoid importing external libraries for basic algorithms. Only use those libraries for complicated algorithms.
- If you need a third-party module that is not in the file __requirements.txt__, please add it to that file as part of your submission.
#### Other Requirements for Submissions
- If you are submitting code in the `project_euler/` directory, please also read [the dedicated Guideline](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md) before contributing to our Project Euler library.
- The file extension for code files should be `.py`. Jupyter Notebooks should be submitted to [TheAlgorithms/Jupyter](https://github.com/TheAlgorithms/Jupyter).
- Strictly use snake_case (underscore_separated) in your file_name, as it will be easy to parse in future using scripts.
- Please avoid creating new directories if at all possible. Try to fit your work into the existing directory structure.
- If possible, follow the standard *within* the folder you are submitting to.
- If you have modified/added code work, make sure the code compiles before submitting.
- If you have modified/added documentation work, ensure your language is concise and contains no grammar errors.
- Do not update the README.md or DIRECTORY.md file which will be periodically autogenerated by our GitHub Actions processes.
- Add a corresponding explanation to [Algorithms-Explanation](https://github.com/TheAlgorithms/Algorithms-Explanation) (Optional but recommended).
- All submissions will be tested with [__mypy__](http://www.mypy-lang.org) so we encourage you to add [__Python type hints__](https://docs.python.org/3/library/typing.html) where it makes sense to do so.
- Most importantly,
- __Be consistent in the use of these guidelines when submitting.__
- __Join__ [Gitter](https://gitter.im/TheAlgorithms) __now!__
- Happy coding!
Writer [@poyea](https://github.com/poyea), Jun 2019.
| 1 |
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Project Euler
Problems are taken from https://projecteuler.net/, the Project Euler. [Problems are licensed under CC BY-NC-SA 4.0](https://projecteuler.net/copyright).
Project Euler is a series of challenging mathematical/computer programming problems that require more than just mathematical
insights to solve. Project Euler is ideal for mathematicians who are learning to code.
The solutions will be checked by our [automated testing on Travis CI](https://travis-ci.com/github/TheAlgorithms/Python/pull_requests) with the help of [this script](https://github.com/TheAlgorithms/Python/blob/master/scripts/validate_solutions.py). The efficiency of your code is also checked. You can view the top 10 slowest solutions on Travis CI logs (under `slowest 10 durations`) and open a pull request to improve those solutions.
## Solution Guidelines
Welcome to [TheAlgorithms/Python](https://github.com/TheAlgorithms/Python)! Before reading the solution guidelines, make sure you read the whole [Contributing Guidelines](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md) as it won't be repeated in here. If you have any doubt on the guidelines, please feel free to [state it clearly in an issue](https://github.com/TheAlgorithms/Python/issues/new) or ask the community in [Gitter](https://gitter.im/TheAlgorithms). You can use the [template](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md#solution-template) we have provided below as your starting point but be sure to read the [Coding Style](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md#coding-style) part first.
### Coding Style
* Please maintain consistency in project directory and solution file names. Keep the following points in mind:
* Create a new directory only for the problems which do not exist yet.
* If you create a new directory, please create an empty `__init__.py` file inside it as well.
* Please name the project **directory** as `problem_<problem_number>` where `problem_number` should be filled with 0s so as to occupy 3 digits. Example: `problem_001`, `problem_002`, `problem_067`, `problem_145`, and so on.
* Please provide a link to the problem and other references, if used, in the **module-level docstring**.
* All imports should come ***after*** the module-level docstring.
* You can have as many helper functions as you want but there should be one main function called `solution` which should satisfy the conditions as stated below:
* It should contain positional argument(s) whose default value is the question input. Example: Please take a look at [Problem 1](https://projecteuler.net/problem=1) where the question is to *Find the sum of all the multiples of 3 or 5 below 1000.* In this case the main solution function will be `solution(limit: int = 1000)`.
* When the `solution` function is called without any arguments like so: `solution()`, it should return the answer to the problem.
* Every function, which includes all the helper functions, if any, and the main solution function, should have `doctest` in the function docstring along with a brief statement mentioning what the function is about.
* There should not be a `doctest` for testing the answer as that is done by our Travis CI build using this [script](https://github.com/TheAlgorithms/Python/blob/master/scripts/validate_solutions.py). Keeping in mind the above example of [Problem 1](https://projecteuler.net/problem=1):
```python
def solution(limit: int = 1000):
"""
A brief statement mentioning what the function is about.
You can have a detailed explanation about the solution method in the
module-level docstring.
>>> solution(1)
...
>>> solution(16)
...
>>> solution(100)
...
"""
```
### Solution Template
You can use the below template as your starting point but please read the [Coding Style](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md#coding-style) first to understand how the template works.
Please change the name of the helper functions accordingly, change the parameter names with a descriptive one, replace the content within `[square brackets]` (including the brackets) with the appropriate content.
```python
"""
Project Euler Problem [problem number]: [link to the original problem]
... [Entire problem statement] ...
... [Solution explanation - Optional] ...
References [Optional]:
- [Wikipedia link to the topic]
- [Stackoverflow link]
...
"""
import module1
import module2
...
def helper1(arg1: [type hint], arg2: [type hint], ...) -> [Return type hint]:
"""
A brief statement explaining what the function is about.
... A more elaborate description ... [Optional]
...
[Doctest]
...
"""
...
# calculations
...
return
# You can have multiple helper functions but the solution function should be
# after all the helper functions ...
def solution(arg1: [type hint], arg2: [type hint], ...) -> [Return type hint]:
"""
A brief statement mentioning what the function is about.
You can have a detailed explanation about the solution in the
module-level docstring.
...
[Doctest as mentioned above]
...
"""
...
# calculations
...
return answer
if __name__ == "__main__":
print(f"{solution() = }")
```
| # Project Euler
Problems are taken from https://projecteuler.net/, the Project Euler. [Problems are licensed under CC BY-NC-SA 4.0](https://projecteuler.net/copyright).
Project Euler is a series of challenging mathematical/computer programming problems that require more than just mathematical
insights to solve. Project Euler is ideal for mathematicians who are learning to code.
The solutions will be checked by our [automated testing on GitHub Actions](https://github.com/TheAlgorithms/Python/actions) with the help of [this script](https://github.com/TheAlgorithms/Python/blob/master/scripts/validate_solutions.py). The efficiency of your code is also checked. You can view the top 10 slowest solutions on GitHub Actions logs (under `slowest 10 durations`) and open a pull request to improve those solutions.
## Solution Guidelines
Welcome to [TheAlgorithms/Python](https://github.com/TheAlgorithms/Python)! Before reading the solution guidelines, make sure you read the whole [Contributing Guidelines](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md) as it won't be repeated in here. If you have any doubt on the guidelines, please feel free to [state it clearly in an issue](https://github.com/TheAlgorithms/Python/issues/new) or ask the community in [Gitter](https://gitter.im/TheAlgorithms). You can use the [template](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md#solution-template) we have provided below as your starting point but be sure to read the [Coding Style](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md#coding-style) part first.
### Coding Style
* Please maintain consistency in project directory and solution file names. Keep the following points in mind:
* Create a new directory only for the problems which do not exist yet.
* If you create a new directory, please create an empty `__init__.py` file inside it as well.
* Please name the project **directory** as `problem_<problem_number>` where `problem_number` should be filled with 0s so as to occupy 3 digits. Example: `problem_001`, `problem_002`, `problem_067`, `problem_145`, and so on.
* Please provide a link to the problem and other references, if used, in the **module-level docstring**.
* All imports should come ***after*** the module-level docstring.
* You can have as many helper functions as you want but there should be one main function called `solution` which should satisfy the conditions as stated below:
* It should contain positional argument(s) whose default value is the question input. Example: Please take a look at [Problem 1](https://projecteuler.net/problem=1) where the question is to *Find the sum of all the multiples of 3 or 5 below 1000.* In this case the main solution function will be `solution(limit: int = 1000)`.
* When the `solution` function is called without any arguments like so: `solution()`, it should return the answer to the problem.
* Every function, which includes all the helper functions, if any, and the main solution function, should have `doctest` in the function docstring along with a brief statement mentioning what the function is about.
* There should not be a `doctest` for testing the answer as that is done by our GitHub Actions build using this [script](https://github.com/TheAlgorithms/Python/blob/master/scripts/validate_solutions.py). Keeping in mind the above example of [Problem 1](https://projecteuler.net/problem=1):
```python
def solution(limit: int = 1000):
"""
A brief statement mentioning what the function is about.
You can have a detailed explanation about the solution method in the
module-level docstring.
>>> solution(1)
...
>>> solution(16)
...
>>> solution(100)
...
"""
```
### Solution Template
You can use the below template as your starting point but please read the [Coding Style](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md#coding-style) first to understand how the template works.
Please change the name of the helper functions accordingly, change the parameter names with a descriptive one, replace the content within `[square brackets]` (including the brackets) with the appropriate content.
```python
"""
Project Euler Problem [problem number]: [link to the original problem]
... [Entire problem statement] ...
... [Solution explanation - Optional] ...
References [Optional]:
- [Wikipedia link to the topic]
- [Stackoverflow link]
...
"""
import module1
import module2
...
def helper1(arg1: [type hint], arg2: [type hint], ...) -> [Return type hint]:
"""
A brief statement explaining what the function is about.
... A more elaborate description ... [Optional]
...
[Doctest]
...
"""
...
# calculations
...
return
# You can have multiple helper functions but the solution function should be
# after all the helper functions ...
def solution(arg1: [type hint], arg2: [type hint], ...) -> [Return type hint]:
"""
A brief statement mentioning what the function is about.
You can have a detailed explanation about the solution in the
module-level docstring.
...
[Doctest as mentioned above]
...
"""
...
# calculations
...
return answer
if __name__ == "__main__":
print(f"{solution() = }")
```
| 1 |
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Highly divisible triangular numbers
Problem 12
The sequence of triangle numbers is generated by adding the natural numbers. So
the 7th triangle number would be 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28. The first ten
terms would be:
1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ...
Let us list the factors of the first seven triangle numbers:
1: 1
3: 1,3
6: 1,2,3,6
10: 1,2,5,10
15: 1,3,5,15
21: 1,3,7,21
28: 1,2,4,7,14,28
We can see that 28 is the first triangle number to have over five divisors.
What is the value of the first triangle number to have over five hundred
divisors?
"""
def triangle_number_generator():
for n in range(1, 1000000):
yield n * (n + 1) // 2
def count_divisors(n):
return sum(2 for i in range(1, int(n ** 0.5) + 1) if n % i == 0 and i * i != n)
def solution():
"""Returns the value of the first triangle number to have over five hundred
divisors.
# The code below has been commented due to slow execution affecting Travis.
# >>> solution()
# 76576500
"""
return next(i for i in triangle_number_generator() if count_divisors(i) > 500)
if __name__ == "__main__":
print(solution())
| """
Highly divisible triangular numbers
Problem 12
The sequence of triangle numbers is generated by adding the natural numbers. So
the 7th triangle number would be 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28. The first ten
terms would be:
1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ...
Let us list the factors of the first seven triangle numbers:
1: 1
3: 1,3
6: 1,2,3,6
10: 1,2,5,10
15: 1,3,5,15
21: 1,3,7,21
28: 1,2,4,7,14,28
We can see that 28 is the first triangle number to have over five divisors.
What is the value of the first triangle number to have over five hundred
divisors?
"""
def triangle_number_generator():
for n in range(1, 1000000):
yield n * (n + 1) // 2
def count_divisors(n):
return sum(2 for i in range(1, int(n ** 0.5) + 1) if n % i == 0 and i * i != n)
def solution():
"""Returns the value of the first triangle number to have over five hundred
divisors.
>>> solution()
76576500
"""
return next(i for i in triangle_number_generator() if count_divisors(i) > 500)
if __name__ == "__main__":
print(solution())
| 1 |
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # A complete working Python program to demonstrate all
# stack operations using a doubly linked list
from __future__ import annotations
from typing import Generic, TypeVar
T = TypeVar("T")
class Node(Generic[T]):
def __init__(self, data: T):
self.data = data # Assign data
self.next: Node[T] | None = None # Initialize next as null
self.prev: Node[T] | None = None # Initialize prev as null
class Stack(Generic[T]):
"""
>>> stack = Stack()
>>> stack.is_empty()
True
>>> stack.print_stack()
stack elements are:
>>> for i in range(4):
... stack.push(i)
...
>>> stack.is_empty()
False
>>> stack.print_stack()
stack elements are:
3->2->1->0->
>>> stack.top()
3
>>> len(stack)
4
>>> stack.pop()
3
>>> stack.print_stack()
stack elements are:
2->1->0->
"""
def __init__(self) -> None:
self.head: Node[T] | None = None
def push(self, data: T) -> None:
"""add a Node to the stack"""
if self.head is None:
self.head = Node(data)
else:
new_node = Node(data)
self.head.prev = new_node
new_node.next = self.head
new_node.prev = None
self.head = new_node
def pop(self) -> T | None:
"""pop the top element off the stack"""
if self.head is None:
return None
else:
assert self.head is not None
temp = self.head.data
self.head = self.head.next
if self.head is not None:
self.head.prev = None
return temp
def top(self) -> T | None:
"""return the top element of the stack"""
return self.head.data if self.head is not None else None
def __len__(self) -> int:
temp = self.head
count = 0
while temp is not None:
count += 1
temp = temp.next
return count
def is_empty(self) -> bool:
return self.head is None
def print_stack(self) -> None:
print("stack elements are:")
temp = self.head
while temp is not None:
print(temp.data, end="->")
temp = temp.next
# Code execution starts here
if __name__ == "__main__":
# Start with the empty stack
stack: Stack[int] = Stack()
# Insert 4 at the beginning. So stack becomes 4->None
print("Stack operations using Doubly LinkedList")
stack.push(4)
# Insert 5 at the beginning. So stack becomes 4->5->None
stack.push(5)
# Insert 6 at the beginning. So stack becomes 4->5->6->None
stack.push(6)
# Insert 7 at the beginning. So stack becomes 4->5->6->7->None
stack.push(7)
# Print the stack
stack.print_stack()
# Print the top element
print("\nTop element is ", stack.top())
# Print the stack size
print("Size of the stack is ", len(stack))
# pop the top element
stack.pop()
# pop the top element
stack.pop()
# two elements have now been popped off
stack.print_stack()
# Print True if the stack is empty else False
print("\nstack is empty:", stack.is_empty())
| # A complete working Python program to demonstrate all
# stack operations using a doubly linked list
from __future__ import annotations
from typing import Generic, TypeVar
T = TypeVar("T")
class Node(Generic[T]):
def __init__(self, data: T):
self.data = data # Assign data
self.next: Node[T] | None = None # Initialize next as null
self.prev: Node[T] | None = None # Initialize prev as null
class Stack(Generic[T]):
"""
>>> stack = Stack()
>>> stack.is_empty()
True
>>> stack.print_stack()
stack elements are:
>>> for i in range(4):
... stack.push(i)
...
>>> stack.is_empty()
False
>>> stack.print_stack()
stack elements are:
3->2->1->0->
>>> stack.top()
3
>>> len(stack)
4
>>> stack.pop()
3
>>> stack.print_stack()
stack elements are:
2->1->0->
"""
def __init__(self) -> None:
self.head: Node[T] | None = None
def push(self, data: T) -> None:
"""add a Node to the stack"""
if self.head is None:
self.head = Node(data)
else:
new_node = Node(data)
self.head.prev = new_node
new_node.next = self.head
new_node.prev = None
self.head = new_node
def pop(self) -> T | None:
"""pop the top element off the stack"""
if self.head is None:
return None
else:
assert self.head is not None
temp = self.head.data
self.head = self.head.next
if self.head is not None:
self.head.prev = None
return temp
def top(self) -> T | None:
"""return the top element of the stack"""
return self.head.data if self.head is not None else None
def __len__(self) -> int:
temp = self.head
count = 0
while temp is not None:
count += 1
temp = temp.next
return count
def is_empty(self) -> bool:
return self.head is None
def print_stack(self) -> None:
print("stack elements are:")
temp = self.head
while temp is not None:
print(temp.data, end="->")
temp = temp.next
# Code execution starts here
if __name__ == "__main__":
# Start with the empty stack
stack: Stack[int] = Stack()
# Insert 4 at the beginning. So stack becomes 4->None
print("Stack operations using Doubly LinkedList")
stack.push(4)
# Insert 5 at the beginning. So stack becomes 4->5->None
stack.push(5)
# Insert 6 at the beginning. So stack becomes 4->5->6->None
stack.push(6)
# Insert 7 at the beginning. So stack becomes 4->5->6->7->None
stack.push(7)
# Print the stack
stack.print_stack()
# Print the top element
print("\nTop element is ", stack.top())
# Print the stack size
print("Size of the stack is ", len(stack))
# pop the top element
stack.pop()
# pop the top element
stack.pop()
# two elements have now been popped off
stack.print_stack()
# Print True if the stack is empty else False
print("\nstack is empty:", stack.is_empty())
| -1 |
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 10: https://projecteuler.net/problem=10
Summation of primes
The sum of the primes below 10 is 2 + 3 + 5 + 7 = 17.
Find the sum of all the primes below two million.
References:
- https://en.wikipedia.org/wiki/Prime_number
- https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes
"""
def solution(n: int = 2000000) -> int:
"""
Returns the sum of all the primes below n using Sieve of Eratosthenes:
The sieve of Eratosthenes is one of the most efficient ways to find all primes
smaller than n when n is smaller than 10 million. Only for positive numbers.
>>> solution(1000)
76127
>>> solution(5000)
1548136
>>> solution(10000)
5736396
>>> solution(7)
10
>>> solution(7.1) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
TypeError: 'float' object cannot be interpreted as an integer
>>> solution(-7) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
IndexError: list assignment index out of range
>>> solution("seven") # doctest: +ELLIPSIS
Traceback (most recent call last):
...
TypeError: can only concatenate str (not "int") to str
"""
primality_list = [0 for i in range(n + 1)]
primality_list[0] = 1
primality_list[1] = 1
for i in range(2, int(n ** 0.5) + 1):
if primality_list[i] == 0:
for j in range(i * i, n + 1, i):
primality_list[j] = 1
sum_of_primes = 0
for i in range(n):
if primality_list[i] == 0:
sum_of_primes += i
return sum_of_primes
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 10: https://projecteuler.net/problem=10
Summation of primes
The sum of the primes below 10 is 2 + 3 + 5 + 7 = 17.
Find the sum of all the primes below two million.
References:
- https://en.wikipedia.org/wiki/Prime_number
- https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes
"""
def solution(n: int = 2000000) -> int:
"""
Returns the sum of all the primes below n using Sieve of Eratosthenes:
The sieve of Eratosthenes is one of the most efficient ways to find all primes
smaller than n when n is smaller than 10 million. Only for positive numbers.
>>> solution(1000)
76127
>>> solution(5000)
1548136
>>> solution(10000)
5736396
>>> solution(7)
10
>>> solution(7.1) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
TypeError: 'float' object cannot be interpreted as an integer
>>> solution(-7) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
IndexError: list assignment index out of range
>>> solution("seven") # doctest: +ELLIPSIS
Traceback (most recent call last):
...
TypeError: can only concatenate str (not "int") to str
"""
primality_list = [0 for i in range(n + 1)]
primality_list[0] = 1
primality_list[1] = 1
for i in range(2, int(n ** 0.5) + 1):
if primality_list[i] == 0:
for j in range(i * i, n + 1, i):
primality_list[j] = 1
sum_of_primes = 0
for i in range(n):
if primality_list[i] == 0:
sum_of_primes += i
return sum_of_primes
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 207: https://projecteuler.net/problem=207
Problem Statement:
For some positive integers k, there exists an integer partition of the form
4**t = 2**t + k, where 4**t, 2**t, and k are all positive integers and t is a real
number. The first two such partitions are 4**1 = 2**1 + 2 and
4**1.5849625... = 2**1.5849625... + 6.
Partitions where t is also an integer are called perfect.
For any m ≥ 1 let P(m) be the proportion of such partitions that are perfect with
k ≤ m.
Thus P(6) = 1/2.
In the following table are listed some values of P(m)
P(5) = 1/1
P(10) = 1/2
P(15) = 2/3
P(20) = 1/2
P(25) = 1/2
P(30) = 2/5
...
P(180) = 1/4
P(185) = 3/13
Find the smallest m for which P(m) < 1/12345
Solution:
Equation 4**t = 2**t + k solved for t gives:
t = log2(sqrt(4*k+1)/2 + 1/2)
For t to be real valued, sqrt(4*k+1) must be an integer which is implemented in
function check_t_real(k). For a perfect partition t must be an integer.
To speed up significantly the search for partitions, instead of incrementing k by one
per iteration, the next valid k is found by k = (i**2 - 1) / 4 with an integer i and
k has to be a positive integer. If this is the case a partition is found. The partition
is perfect if t os an integer. The integer i is increased with increment 1 until the
proportion perfect partitions / total partitions drops under the given value.
"""
import math
def check_partition_perfect(positive_integer: int) -> bool:
"""
Check if t = f(positive_integer) = log2(sqrt(4*positive_integer+1)/2 + 1/2) is a
real number.
>>> check_partition_perfect(2)
True
>>> check_partition_perfect(6)
False
"""
exponent = math.log2(math.sqrt(4 * positive_integer + 1) / 2 + 1 / 2)
return exponent == int(exponent)
def solution(max_proportion: float = 1 / 12345) -> int:
"""
Find m for which the proportion of perfect partitions to total partitions is lower
than max_proportion
>>> solution(1) > 5
True
>>> solution(1/2) > 10
True
>>> solution(3 / 13) > 185
True
"""
total_partitions = 0
perfect_partitions = 0
integer = 3
while True:
partition_candidate = (integer ** 2 - 1) / 4
# if candidate is an integer, then there is a partition for k
if partition_candidate == int(partition_candidate):
partition_candidate = int(partition_candidate)
total_partitions += 1
if check_partition_perfect(partition_candidate):
perfect_partitions += 1
if perfect_partitions > 0:
if perfect_partitions / total_partitions < max_proportion:
return int(partition_candidate)
integer += 1
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 207: https://projecteuler.net/problem=207
Problem Statement:
For some positive integers k, there exists an integer partition of the form
4**t = 2**t + k, where 4**t, 2**t, and k are all positive integers and t is a real
number. The first two such partitions are 4**1 = 2**1 + 2 and
4**1.5849625... = 2**1.5849625... + 6.
Partitions where t is also an integer are called perfect.
For any m ≥ 1 let P(m) be the proportion of such partitions that are perfect with
k ≤ m.
Thus P(6) = 1/2.
In the following table are listed some values of P(m)
P(5) = 1/1
P(10) = 1/2
P(15) = 2/3
P(20) = 1/2
P(25) = 1/2
P(30) = 2/5
...
P(180) = 1/4
P(185) = 3/13
Find the smallest m for which P(m) < 1/12345
Solution:
Equation 4**t = 2**t + k solved for t gives:
t = log2(sqrt(4*k+1)/2 + 1/2)
For t to be real valued, sqrt(4*k+1) must be an integer which is implemented in
function check_t_real(k). For a perfect partition t must be an integer.
To speed up significantly the search for partitions, instead of incrementing k by one
per iteration, the next valid k is found by k = (i**2 - 1) / 4 with an integer i and
k has to be a positive integer. If this is the case a partition is found. The partition
is perfect if t os an integer. The integer i is increased with increment 1 until the
proportion perfect partitions / total partitions drops under the given value.
"""
import math
def check_partition_perfect(positive_integer: int) -> bool:
"""
Check if t = f(positive_integer) = log2(sqrt(4*positive_integer+1)/2 + 1/2) is a
real number.
>>> check_partition_perfect(2)
True
>>> check_partition_perfect(6)
False
"""
exponent = math.log2(math.sqrt(4 * positive_integer + 1) / 2 + 1 / 2)
return exponent == int(exponent)
def solution(max_proportion: float = 1 / 12345) -> int:
"""
Find m for which the proportion of perfect partitions to total partitions is lower
than max_proportion
>>> solution(1) > 5
True
>>> solution(1/2) > 10
True
>>> solution(3 / 13) > 185
True
"""
total_partitions = 0
perfect_partitions = 0
integer = 3
while True:
partition_candidate = (integer ** 2 - 1) / 4
# if candidate is an integer, then there is a partition for k
if partition_candidate == int(partition_candidate):
partition_candidate = int(partition_candidate)
total_partitions += 1
if check_partition_perfect(partition_candidate):
perfect_partitions += 1
if perfect_partitions > 0:
if perfect_partitions / total_partitions < max_proportion:
return int(partition_candidate)
integer += 1
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from __future__ import annotations
from string import ascii_letters
def encrypt(input_string: str, key: int, alphabet: str | None = None) -> str:
"""
encrypt
=======
Encodes a given string with the caesar cipher and returns the encoded
message
Parameters:
-----------
* input_string: the plain-text that needs to be encoded
* key: the number of letters to shift the message by
Optional:
* alphabet (None): the alphabet used to encode the cipher, if not
specified, the standard english alphabet with upper and lowercase
letters is used
Returns:
* A string containing the encoded cipher-text
More on the caesar cipher
=========================
The caesar cipher is named after Julius Caesar who used it when sending
secret military messages to his troops. This is a simple substitution cipher
where very character in the plain-text is shifted by a certain number known
as the "key" or "shift".
Example:
Say we have the following message:
"Hello, captain"
And our alphabet is made up of lower and uppercase letters:
"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
And our shift is "2"
We can then encode the message, one letter at a time. "H" would become "J",
since "J" is two letters away, and so on. If the shift is ever two large, or
our letter is at the end of the alphabet, we just start at the beginning
("Z" would shift to "a" then "b" and so on).
Our final message would be "Jgnnq, ecrvckp"
Further reading
===============
* https://en.m.wikipedia.org/wiki/Caesar_cipher
Doctests
========
>>> encrypt('The quick brown fox jumps over the lazy dog', 8)
'bpm yCqks jzwEv nwF rCuxA wDmz Bpm tiHG lwo'
>>> encrypt('A very large key', 8000)
's nWjq dSjYW cWq'
>>> encrypt('a lowercase alphabet', 5, 'abcdefghijklmnopqrstuvwxyz')
'f qtbjwhfxj fqumfgjy'
"""
# Set default alphabet to lower and upper case english chars
alpha = alphabet or ascii_letters
# The final result string
result = ""
for character in input_string:
if character not in alpha:
# Append without encryption if character is not in the alphabet
result += character
else:
# Get the index of the new key and make sure it isn't too large
new_key = (alpha.index(character) + key) % len(alpha)
# Append the encoded character to the alphabet
result += alpha[new_key]
return result
def decrypt(input_string: str, key: int, alphabet: str | None = None) -> str:
"""
decrypt
=======
Decodes a given string of cipher-text and returns the decoded plain-text
Parameters:
-----------
* input_string: the cipher-text that needs to be decoded
* key: the number of letters to shift the message backwards by to decode
Optional:
* alphabet (None): the alphabet used to decode the cipher, if not
specified, the standard english alphabet with upper and lowercase
letters is used
Returns:
* A string containing the decoded plain-text
More on the caesar cipher
=========================
The caesar cipher is named after Julius Caesar who used it when sending
secret military messages to his troops. This is a simple substitution cipher
where very character in the plain-text is shifted by a certain number known
as the "key" or "shift". Please keep in mind, here we will be focused on
decryption.
Example:
Say we have the following cipher-text:
"Jgnnq, ecrvckp"
And our alphabet is made up of lower and uppercase letters:
"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
And our shift is "2"
To decode the message, we would do the same thing as encoding, but in
reverse. The first letter, "J" would become "H" (remember: we are decoding)
because "H" is two letters in reverse (to the left) of "J". We would
continue doing this. A letter like "a" would shift back to the end of
the alphabet, and would become "Z" or "Y" and so on.
Our final message would be "Hello, captain"
Further reading
===============
* https://en.m.wikipedia.org/wiki/Caesar_cipher
Doctests
========
>>> decrypt('bpm yCqks jzwEv nwF rCuxA wDmz Bpm tiHG lwo', 8)
'The quick brown fox jumps over the lazy dog'
>>> decrypt('s nWjq dSjYW cWq', 8000)
'A very large key'
>>> decrypt('f qtbjwhfxj fqumfgjy', 5, 'abcdefghijklmnopqrstuvwxyz')
'a lowercase alphabet'
"""
# Turn on decode mode by making the key negative
key *= -1
return encrypt(input_string, key, alphabet)
def brute_force(input_string: str, alphabet: str | None = None) -> dict[int, str]:
"""
brute_force
===========
Returns all the possible combinations of keys and the decoded strings in the
form of a dictionary
Parameters:
-----------
* input_string: the cipher-text that needs to be used during brute-force
Optional:
* alphabet: (None): the alphabet used to decode the cipher, if not
specified, the standard english alphabet with upper and lowercase
letters is used
More about brute force
======================
Brute force is when a person intercepts a message or password, not knowing
the key and tries every single combination. This is easy with the caesar
cipher since there are only all the letters in the alphabet. The more
complex the cipher, the larger amount of time it will take to do brute force
Ex:
Say we have a 5 letter alphabet (abcde), for simplicity and we intercepted the
following message:
"dbc"
we could then just write out every combination:
ecd... and so on, until we reach a combination that makes sense:
"cab"
Further reading
===============
* https://en.wikipedia.org/wiki/Brute_force
Doctests
========
>>> brute_force("jFyuMy xIH'N vLONy zILwy Gy!")[20]
"Please don't brute force me!"
>>> brute_force(1)
Traceback (most recent call last):
TypeError: 'int' object is not iterable
"""
# Set default alphabet to lower and upper case english chars
alpha = alphabet or ascii_letters
# To store data on all the combinations
brute_force_data = {}
# Cycle through each combination
for key in range(1, len(alpha) + 1):
# Decrypt the message and store the result in the data
brute_force_data[key] = decrypt(input_string, key, alpha)
return brute_force_data
if __name__ == "__main__":
while True:
print(f'\n{"-" * 10}\n Menu\n{"-" * 10}')
print(*["1.Encrypt", "2.Decrypt", "3.BruteForce", "4.Quit"], sep="\n")
# get user input
choice = input("\nWhat would you like to do?: ").strip() or "4"
# run functions based on what the user chose
if choice not in ("1", "2", "3", "4"):
print("Invalid choice, please enter a valid choice")
elif choice == "1":
input_string = input("Please enter the string to be encrypted: ")
key = int(input("Please enter off-set: ").strip())
print(encrypt(input_string, key))
elif choice == "2":
input_string = input("Please enter the string to be decrypted: ")
key = int(input("Please enter off-set: ").strip())
print(decrypt(input_string, key))
elif choice == "3":
input_string = input("Please enter the string to be decrypted: ")
brute_force_data = brute_force(input_string)
for key, value in brute_force_data.items():
print(f"Key: {key} | Message: {value}")
elif choice == "4":
print("Goodbye.")
break
| from __future__ import annotations
from string import ascii_letters
def encrypt(input_string: str, key: int, alphabet: str | None = None) -> str:
"""
encrypt
=======
Encodes a given string with the caesar cipher and returns the encoded
message
Parameters:
-----------
* input_string: the plain-text that needs to be encoded
* key: the number of letters to shift the message by
Optional:
* alphabet (None): the alphabet used to encode the cipher, if not
specified, the standard english alphabet with upper and lowercase
letters is used
Returns:
* A string containing the encoded cipher-text
More on the caesar cipher
=========================
The caesar cipher is named after Julius Caesar who used it when sending
secret military messages to his troops. This is a simple substitution cipher
where very character in the plain-text is shifted by a certain number known
as the "key" or "shift".
Example:
Say we have the following message:
"Hello, captain"
And our alphabet is made up of lower and uppercase letters:
"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
And our shift is "2"
We can then encode the message, one letter at a time. "H" would become "J",
since "J" is two letters away, and so on. If the shift is ever two large, or
our letter is at the end of the alphabet, we just start at the beginning
("Z" would shift to "a" then "b" and so on).
Our final message would be "Jgnnq, ecrvckp"
Further reading
===============
* https://en.m.wikipedia.org/wiki/Caesar_cipher
Doctests
========
>>> encrypt('The quick brown fox jumps over the lazy dog', 8)
'bpm yCqks jzwEv nwF rCuxA wDmz Bpm tiHG lwo'
>>> encrypt('A very large key', 8000)
's nWjq dSjYW cWq'
>>> encrypt('a lowercase alphabet', 5, 'abcdefghijklmnopqrstuvwxyz')
'f qtbjwhfxj fqumfgjy'
"""
# Set default alphabet to lower and upper case english chars
alpha = alphabet or ascii_letters
# The final result string
result = ""
for character in input_string:
if character not in alpha:
# Append without encryption if character is not in the alphabet
result += character
else:
# Get the index of the new key and make sure it isn't too large
new_key = (alpha.index(character) + key) % len(alpha)
# Append the encoded character to the alphabet
result += alpha[new_key]
return result
def decrypt(input_string: str, key: int, alphabet: str | None = None) -> str:
"""
decrypt
=======
Decodes a given string of cipher-text and returns the decoded plain-text
Parameters:
-----------
* input_string: the cipher-text that needs to be decoded
* key: the number of letters to shift the message backwards by to decode
Optional:
* alphabet (None): the alphabet used to decode the cipher, if not
specified, the standard english alphabet with upper and lowercase
letters is used
Returns:
* A string containing the decoded plain-text
More on the caesar cipher
=========================
The caesar cipher is named after Julius Caesar who used it when sending
secret military messages to his troops. This is a simple substitution cipher
where very character in the plain-text is shifted by a certain number known
as the "key" or "shift". Please keep in mind, here we will be focused on
decryption.
Example:
Say we have the following cipher-text:
"Jgnnq, ecrvckp"
And our alphabet is made up of lower and uppercase letters:
"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
And our shift is "2"
To decode the message, we would do the same thing as encoding, but in
reverse. The first letter, "J" would become "H" (remember: we are decoding)
because "H" is two letters in reverse (to the left) of "J". We would
continue doing this. A letter like "a" would shift back to the end of
the alphabet, and would become "Z" or "Y" and so on.
Our final message would be "Hello, captain"
Further reading
===============
* https://en.m.wikipedia.org/wiki/Caesar_cipher
Doctests
========
>>> decrypt('bpm yCqks jzwEv nwF rCuxA wDmz Bpm tiHG lwo', 8)
'The quick brown fox jumps over the lazy dog'
>>> decrypt('s nWjq dSjYW cWq', 8000)
'A very large key'
>>> decrypt('f qtbjwhfxj fqumfgjy', 5, 'abcdefghijklmnopqrstuvwxyz')
'a lowercase alphabet'
"""
# Turn on decode mode by making the key negative
key *= -1
return encrypt(input_string, key, alphabet)
def brute_force(input_string: str, alphabet: str | None = None) -> dict[int, str]:
"""
brute_force
===========
Returns all the possible combinations of keys and the decoded strings in the
form of a dictionary
Parameters:
-----------
* input_string: the cipher-text that needs to be used during brute-force
Optional:
* alphabet: (None): the alphabet used to decode the cipher, if not
specified, the standard english alphabet with upper and lowercase
letters is used
More about brute force
======================
Brute force is when a person intercepts a message or password, not knowing
the key and tries every single combination. This is easy with the caesar
cipher since there are only all the letters in the alphabet. The more
complex the cipher, the larger amount of time it will take to do brute force
Ex:
Say we have a 5 letter alphabet (abcde), for simplicity and we intercepted the
following message:
"dbc"
we could then just write out every combination:
ecd... and so on, until we reach a combination that makes sense:
"cab"
Further reading
===============
* https://en.wikipedia.org/wiki/Brute_force
Doctests
========
>>> brute_force("jFyuMy xIH'N vLONy zILwy Gy!")[20]
"Please don't brute force me!"
>>> brute_force(1)
Traceback (most recent call last):
TypeError: 'int' object is not iterable
"""
# Set default alphabet to lower and upper case english chars
alpha = alphabet or ascii_letters
# To store data on all the combinations
brute_force_data = {}
# Cycle through each combination
for key in range(1, len(alpha) + 1):
# Decrypt the message and store the result in the data
brute_force_data[key] = decrypt(input_string, key, alpha)
return brute_force_data
if __name__ == "__main__":
while True:
print(f'\n{"-" * 10}\n Menu\n{"-" * 10}')
print(*["1.Encrypt", "2.Decrypt", "3.BruteForce", "4.Quit"], sep="\n")
# get user input
choice = input("\nWhat would you like to do?: ").strip() or "4"
# run functions based on what the user chose
if choice not in ("1", "2", "3", "4"):
print("Invalid choice, please enter a valid choice")
elif choice == "1":
input_string = input("Please enter the string to be encrypted: ")
key = int(input("Please enter off-set: ").strip())
print(encrypt(input_string, key))
elif choice == "2":
input_string = input("Please enter the string to be decrypted: ")
key = int(input("Please enter off-set: ").strip())
print(decrypt(input_string, key))
elif choice == "3":
input_string = input("Please enter the string to be decrypted: ")
brute_force_data = brute_force(input_string)
for key, value in brute_force_data.items():
print(f"Key: {key} | Message: {value}")
elif choice == "4":
print("Goodbye.")
break
| -1 |
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Created on Fri Oct 16 09:31:07 2020
@author: Dr. Tobias Schröder
@license: MIT-license
This file contains the test-suite for the knapsack problem.
"""
import unittest
from knapsack import knapsack as k
class Test(unittest.TestCase):
def test_base_case(self):
"""
test for the base case
"""
cap = 0
val = [0]
w = [0]
c = len(val)
self.assertEqual(k.knapsack(cap, w, val, c), 0)
val = [60]
w = [10]
c = len(val)
self.assertEqual(k.knapsack(cap, w, val, c), 0)
def test_easy_case(self):
"""
test for the base case
"""
cap = 3
val = [1, 2, 3]
w = [3, 2, 1]
c = len(val)
self.assertEqual(k.knapsack(cap, w, val, c), 5)
def test_knapsack(self):
"""
test for the knapsack
"""
cap = 50
val = [60, 100, 120]
w = [10, 20, 30]
c = len(val)
self.assertEqual(k.knapsack(cap, w, val, c), 220)
if __name__ == "__main__":
unittest.main()
| """
Created on Fri Oct 16 09:31:07 2020
@author: Dr. Tobias Schröder
@license: MIT-license
This file contains the test-suite for the knapsack problem.
"""
import unittest
from knapsack import knapsack as k
class Test(unittest.TestCase):
def test_base_case(self):
"""
test for the base case
"""
cap = 0
val = [0]
w = [0]
c = len(val)
self.assertEqual(k.knapsack(cap, w, val, c), 0)
val = [60]
w = [10]
c = len(val)
self.assertEqual(k.knapsack(cap, w, val, c), 0)
def test_easy_case(self):
"""
test for the base case
"""
cap = 3
val = [1, 2, 3]
w = [3, 2, 1]
c = len(val)
self.assertEqual(k.knapsack(cap, w, val, c), 5)
def test_knapsack(self):
"""
test for the knapsack
"""
cap = 50
val = [60, 100, 120]
w = [10, 20, 30]
c = len(val)
self.assertEqual(k.knapsack(cap, w, val, c), 220)
if __name__ == "__main__":
unittest.main()
| -1 |
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Functions useful for doing molecular chemistry:
* molarity_to_normality
* moles_to_pressure
* moles_to_volume
* pressure_and_volume_to_temperature
"""
def molarity_to_normality(nfactor: int, moles: float, volume: float) -> float:
"""
Convert molarity to normality.
Volume is taken in litres.
Wikipedia reference: https://en.wikipedia.org/wiki/Equivalent_concentration
Wikipedia reference: https://en.wikipedia.org/wiki/Molar_concentration
>>> molarity_to_normality(2, 3.1, 0.31)
20
>>> molarity_to_normality(4, 11.4, 5.7)
8
"""
return round(float(moles / volume) * nfactor)
def moles_to_pressure(volume: float, moles: float, temperature: float) -> float:
"""
Convert moles to pressure.
Ideal gas laws are used.
Temperature is taken in kelvin.
Volume is taken in litres.
Pressure has atm as SI unit.
Wikipedia reference: https://en.wikipedia.org/wiki/Gas_laws
Wikipedia reference: https://en.wikipedia.org/wiki/Pressure
Wikipedia reference: https://en.wikipedia.org/wiki/Temperature
>>> moles_to_pressure(0.82, 3, 300)
90
>>> moles_to_pressure(8.2, 5, 200)
10
"""
return round(float((moles * 0.0821 * temperature) / (volume)))
def moles_to_volume(pressure: float, moles: float, temperature: float) -> float:
"""
Convert moles to volume.
Ideal gas laws are used.
Temperature is taken in kelvin.
Volume is taken in litres.
Pressure has atm as SI unit.
Wikipedia reference: https://en.wikipedia.org/wiki/Gas_laws
Wikipedia reference: https://en.wikipedia.org/wiki/Pressure
Wikipedia reference: https://en.wikipedia.org/wiki/Temperature
>>> moles_to_volume(0.82, 3, 300)
90
>>> moles_to_volume(8.2, 5, 200)
10
"""
return round(float((moles * 0.0821 * temperature) / (pressure)))
def pressure_and_volume_to_temperature(
pressure: float, moles: float, volume: float
) -> float:
"""
Convert pressure and volume to temperature.
Ideal gas laws are used.
Temperature is taken in kelvin.
Volume is taken in litres.
Pressure has atm as SI unit.
Wikipedia reference: https://en.wikipedia.org/wiki/Gas_laws
Wikipedia reference: https://en.wikipedia.org/wiki/Pressure
Wikipedia reference: https://en.wikipedia.org/wiki/Temperature
>>> pressure_and_volume_to_temperature(0.82, 1, 2)
20
>>> pressure_and_volume_to_temperature(8.2, 5, 3)
60
"""
return round(float((pressure * volume) / (0.0821 * moles)))
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Functions useful for doing molecular chemistry:
* molarity_to_normality
* moles_to_pressure
* moles_to_volume
* pressure_and_volume_to_temperature
"""
def molarity_to_normality(nfactor: int, moles: float, volume: float) -> float:
"""
Convert molarity to normality.
Volume is taken in litres.
Wikipedia reference: https://en.wikipedia.org/wiki/Equivalent_concentration
Wikipedia reference: https://en.wikipedia.org/wiki/Molar_concentration
>>> molarity_to_normality(2, 3.1, 0.31)
20
>>> molarity_to_normality(4, 11.4, 5.7)
8
"""
return round(float(moles / volume) * nfactor)
def moles_to_pressure(volume: float, moles: float, temperature: float) -> float:
"""
Convert moles to pressure.
Ideal gas laws are used.
Temperature is taken in kelvin.
Volume is taken in litres.
Pressure has atm as SI unit.
Wikipedia reference: https://en.wikipedia.org/wiki/Gas_laws
Wikipedia reference: https://en.wikipedia.org/wiki/Pressure
Wikipedia reference: https://en.wikipedia.org/wiki/Temperature
>>> moles_to_pressure(0.82, 3, 300)
90
>>> moles_to_pressure(8.2, 5, 200)
10
"""
return round(float((moles * 0.0821 * temperature) / (volume)))
def moles_to_volume(pressure: float, moles: float, temperature: float) -> float:
"""
Convert moles to volume.
Ideal gas laws are used.
Temperature is taken in kelvin.
Volume is taken in litres.
Pressure has atm as SI unit.
Wikipedia reference: https://en.wikipedia.org/wiki/Gas_laws
Wikipedia reference: https://en.wikipedia.org/wiki/Pressure
Wikipedia reference: https://en.wikipedia.org/wiki/Temperature
>>> moles_to_volume(0.82, 3, 300)
90
>>> moles_to_volume(8.2, 5, 200)
10
"""
return round(float((moles * 0.0821 * temperature) / (pressure)))
def pressure_and_volume_to_temperature(
pressure: float, moles: float, volume: float
) -> float:
"""
Convert pressure and volume to temperature.
Ideal gas laws are used.
Temperature is taken in kelvin.
Volume is taken in litres.
Pressure has atm as SI unit.
Wikipedia reference: https://en.wikipedia.org/wiki/Gas_laws
Wikipedia reference: https://en.wikipedia.org/wiki/Pressure
Wikipedia reference: https://en.wikipedia.org/wiki/Temperature
>>> pressure_and_volume_to_temperature(0.82, 1, 2)
20
>>> pressure_and_volume_to_temperature(8.2, 5, 3)
60
"""
return round(float((pressure * volume) / (0.0821 * moles)))
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # https://www.tutorialspoint.com/python3/bitwise_operators_example.htm
def binary_and(a: int, b: int) -> str:
"""
Take in 2 integers, convert them to binary,
return a binary number that is the
result of a binary and operation on the integers provided.
>>> binary_and(25, 32)
'0b000000'
>>> binary_and(37, 50)
'0b100000'
>>> binary_and(21, 30)
'0b10100'
>>> binary_and(58, 73)
'0b0001000'
>>> binary_and(0, 255)
'0b00000000'
>>> binary_and(256, 256)
'0b100000000'
>>> binary_and(0, -1)
Traceback (most recent call last):
...
ValueError: the value of both inputs must be positive
>>> binary_and(0, 1.1)
Traceback (most recent call last):
...
TypeError: 'float' object cannot be interpreted as an integer
>>> binary_and("0", "1")
Traceback (most recent call last):
...
TypeError: '<' not supported between instances of 'str' and 'int'
"""
if a < 0 or b < 0:
raise ValueError("the value of both inputs must be positive")
a_binary = str(bin(a))[2:] # remove the leading "0b"
b_binary = str(bin(b))[2:] # remove the leading "0b"
max_len = max(len(a_binary), len(b_binary))
return "0b" + "".join(
str(int(char_a == "1" and char_b == "1"))
for char_a, char_b in zip(a_binary.zfill(max_len), b_binary.zfill(max_len))
)
if __name__ == "__main__":
import doctest
doctest.testmod()
| # https://www.tutorialspoint.com/python3/bitwise_operators_example.htm
def binary_and(a: int, b: int) -> str:
"""
Take in 2 integers, convert them to binary,
return a binary number that is the
result of a binary and operation on the integers provided.
>>> binary_and(25, 32)
'0b000000'
>>> binary_and(37, 50)
'0b100000'
>>> binary_and(21, 30)
'0b10100'
>>> binary_and(58, 73)
'0b0001000'
>>> binary_and(0, 255)
'0b00000000'
>>> binary_and(256, 256)
'0b100000000'
>>> binary_and(0, -1)
Traceback (most recent call last):
...
ValueError: the value of both inputs must be positive
>>> binary_and(0, 1.1)
Traceback (most recent call last):
...
TypeError: 'float' object cannot be interpreted as an integer
>>> binary_and("0", "1")
Traceback (most recent call last):
...
TypeError: '<' not supported between instances of 'str' and 'int'
"""
if a < 0 or b < 0:
raise ValueError("the value of both inputs must be positive")
a_binary = str(bin(a))[2:] # remove the leading "0b"
b_binary = str(bin(b))[2:] # remove the leading "0b"
max_len = max(len(a_binary), len(b_binary))
return "0b" + "".join(
str(int(char_a == "1" and char_b == "1"))
for char_a, char_b in zip(a_binary.zfill(max_len), b_binary.zfill(max_len))
)
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Problem 15: https://projecteuler.net/problem=15
Starting in the top left corner of a 2×2 grid, and only being able to move to
the right and down, there are exactly 6 routes to the bottom right corner.
How many such routes are there through a 20×20 grid?
"""
from math import factorial
def solution(n: int = 20) -> int:
"""
Returns the number of paths possible in a n x n grid starting at top left
corner going to bottom right corner and being able to move right and down
only.
>>> solution(25)
126410606437752
>>> solution(23)
8233430727600
>>> solution(20)
137846528820
>>> solution(15)
155117520
>>> solution(1)
2
"""
n = 2 * n # middle entry of odd rows starting at row 3 is the solution for n = 1,
# 2, 3,...
k = n // 2
return int(factorial(n) / (factorial(k) * factorial(n - k)))
if __name__ == "__main__":
import sys
if len(sys.argv) == 1:
print(solution(20))
else:
try:
n = int(sys.argv[1])
print(solution(n))
except ValueError:
print("Invalid entry - please enter a number.")
| """
Problem 15: https://projecteuler.net/problem=15
Starting in the top left corner of a 2×2 grid, and only being able to move to
the right and down, there are exactly 6 routes to the bottom right corner.
How many such routes are there through a 20×20 grid?
"""
from math import factorial
def solution(n: int = 20) -> int:
"""
Returns the number of paths possible in a n x n grid starting at top left
corner going to bottom right corner and being able to move right and down
only.
>>> solution(25)
126410606437752
>>> solution(23)
8233430727600
>>> solution(20)
137846528820
>>> solution(15)
155117520
>>> solution(1)
2
"""
n = 2 * n # middle entry of odd rows starting at row 3 is the solution for n = 1,
# 2, 3,...
k = n // 2
return int(factorial(n) / (factorial(k) * factorial(n - k)))
if __name__ == "__main__":
import sys
if len(sys.argv) == 1:
print(solution(20))
else:
try:
n = int(sys.argv[1])
print(solution(n))
except ValueError:
print("Invalid entry - please enter a number.")
| -1 |
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
render 3d points for 2d surfaces.
"""
from __future__ import annotations
import math
__version__ = "2020.9.26"
__author__ = "xcodz-dot, cclaus, dhruvmanila"
def convert_to_2d(
x: float, y: float, z: float, scale: float, distance: float
) -> tuple[float, float]:
"""
Converts 3d point to a 2d drawable point
>>> convert_to_2d(1.0, 2.0, 3.0, 10.0, 10.0)
(7.6923076923076925, 15.384615384615385)
>>> convert_to_2d(1, 2, 3, 10, 10)
(7.6923076923076925, 15.384615384615385)
>>> convert_to_2d("1", 2, 3, 10, 10) # '1' is str
Traceback (most recent call last):
...
TypeError: Input values must either be float or int: ['1', 2, 3, 10, 10]
"""
if not all(isinstance(val, (float, int)) for val in locals().values()):
raise TypeError(
"Input values must either be float or int: " f"{list(locals().values())}"
)
projected_x = ((x * distance) / (z + distance)) * scale
projected_y = ((y * distance) / (z + distance)) * scale
return projected_x, projected_y
def rotate(
x: float, y: float, z: float, axis: str, angle: float
) -> tuple[float, float, float]:
"""
rotate a point around a certain axis with a certain angle
angle can be any integer between 1, 360 and axis can be any one of
'x', 'y', 'z'
>>> rotate(1.0, 2.0, 3.0, 'y', 90.0)
(3.130524675073759, 2.0, 0.4470070007889556)
>>> rotate(1, 2, 3, "z", 180)
(0.999736015495891, -2.0001319704760485, 3)
>>> rotate('1', 2, 3, "z", 90.0) # '1' is str
Traceback (most recent call last):
...
TypeError: Input values except axis must either be float or int: ['1', 2, 3, 90.0]
>>> rotate(1, 2, 3, "n", 90) # 'n' is not a valid axis
Traceback (most recent call last):
...
ValueError: not a valid axis, choose one of 'x', 'y', 'z'
>>> rotate(1, 2, 3, "x", -90)
(1, -2.5049096187183877, -2.5933429780983657)
>>> rotate(1, 2, 3, "x", 450) # 450 wrap around to 90
(1, 3.5776792428178217, -0.44744970165427644)
"""
if not isinstance(axis, str):
raise TypeError("Axis must be a str")
input_variables = locals()
del input_variables["axis"]
if not all(isinstance(val, (float, int)) for val in input_variables.values()):
raise TypeError(
"Input values except axis must either be float or int: "
f"{list(input_variables.values())}"
)
angle = (angle % 360) / 450 * 180 / math.pi
if axis == "z":
new_x = x * math.cos(angle) - y * math.sin(angle)
new_y = y * math.cos(angle) + x * math.sin(angle)
new_z = z
elif axis == "x":
new_y = y * math.cos(angle) - z * math.sin(angle)
new_z = z * math.cos(angle) + y * math.sin(angle)
new_x = x
elif axis == "y":
new_x = x * math.cos(angle) - z * math.sin(angle)
new_z = z * math.cos(angle) + x * math.sin(angle)
new_y = y
else:
raise ValueError("not a valid axis, choose one of 'x', 'y', 'z'")
return new_x, new_y, new_z
if __name__ == "__main__":
import doctest
doctest.testmod()
print(f"{convert_to_2d(1.0, 2.0, 3.0, 10.0, 10.0) = }")
print(f"{rotate(1.0, 2.0, 3.0, 'y', 90.0) = }")
| """
render 3d points for 2d surfaces.
"""
from __future__ import annotations
import math
__version__ = "2020.9.26"
__author__ = "xcodz-dot, cclaus, dhruvmanila"
def convert_to_2d(
x: float, y: float, z: float, scale: float, distance: float
) -> tuple[float, float]:
"""
Converts 3d point to a 2d drawable point
>>> convert_to_2d(1.0, 2.0, 3.0, 10.0, 10.0)
(7.6923076923076925, 15.384615384615385)
>>> convert_to_2d(1, 2, 3, 10, 10)
(7.6923076923076925, 15.384615384615385)
>>> convert_to_2d("1", 2, 3, 10, 10) # '1' is str
Traceback (most recent call last):
...
TypeError: Input values must either be float or int: ['1', 2, 3, 10, 10]
"""
if not all(isinstance(val, (float, int)) for val in locals().values()):
raise TypeError(
"Input values must either be float or int: " f"{list(locals().values())}"
)
projected_x = ((x * distance) / (z + distance)) * scale
projected_y = ((y * distance) / (z + distance)) * scale
return projected_x, projected_y
def rotate(
x: float, y: float, z: float, axis: str, angle: float
) -> tuple[float, float, float]:
"""
rotate a point around a certain axis with a certain angle
angle can be any integer between 1, 360 and axis can be any one of
'x', 'y', 'z'
>>> rotate(1.0, 2.0, 3.0, 'y', 90.0)
(3.130524675073759, 2.0, 0.4470070007889556)
>>> rotate(1, 2, 3, "z", 180)
(0.999736015495891, -2.0001319704760485, 3)
>>> rotate('1', 2, 3, "z", 90.0) # '1' is str
Traceback (most recent call last):
...
TypeError: Input values except axis must either be float or int: ['1', 2, 3, 90.0]
>>> rotate(1, 2, 3, "n", 90) # 'n' is not a valid axis
Traceback (most recent call last):
...
ValueError: not a valid axis, choose one of 'x', 'y', 'z'
>>> rotate(1, 2, 3, "x", -90)
(1, -2.5049096187183877, -2.5933429780983657)
>>> rotate(1, 2, 3, "x", 450) # 450 wrap around to 90
(1, 3.5776792428178217, -0.44744970165427644)
"""
if not isinstance(axis, str):
raise TypeError("Axis must be a str")
input_variables = locals()
del input_variables["axis"]
if not all(isinstance(val, (float, int)) for val in input_variables.values()):
raise TypeError(
"Input values except axis must either be float or int: "
f"{list(input_variables.values())}"
)
angle = (angle % 360) / 450 * 180 / math.pi
if axis == "z":
new_x = x * math.cos(angle) - y * math.sin(angle)
new_y = y * math.cos(angle) + x * math.sin(angle)
new_z = z
elif axis == "x":
new_y = y * math.cos(angle) - z * math.sin(angle)
new_z = z * math.cos(angle) + y * math.sin(angle)
new_x = x
elif axis == "y":
new_x = x * math.cos(angle) - z * math.sin(angle)
new_z = z * math.cos(angle) + x * math.sin(angle)
new_y = y
else:
raise ValueError("not a valid axis, choose one of 'x', 'y', 'z'")
return new_x, new_y, new_z
if __name__ == "__main__":
import doctest
doctest.testmod()
print(f"{convert_to_2d(1.0, 2.0, 3.0, 10.0, 10.0) = }")
print(f"{rotate(1.0, 2.0, 3.0, 'y', 90.0) = }")
| -1 |
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from __future__ import annotations
def kmp(pattern: str, text: str) -> bool:
"""
The Knuth-Morris-Pratt Algorithm for finding a pattern within a piece of text
with complexity O(n + m)
1) Preprocess pattern to identify any suffixes that are identical to prefixes
This tells us where to continue from if we get a mismatch between a character
in our pattern and the text.
2) Step through the text one character at a time and compare it to a character in
the pattern updating our location within the pattern if necessary
"""
# 1) Construct the failure array
failure = get_failure_array(pattern)
# 2) Step through text searching for pattern
i, j = 0, 0 # index into text, pattern
while i < len(text):
if pattern[j] == text[i]:
if j == (len(pattern) - 1):
return True
j += 1
# if this is a prefix in our pattern
# just go back far enough to continue
elif j > 0:
j = failure[j - 1]
continue
i += 1
return False
def get_failure_array(pattern: str) -> list[int]:
"""
Calculates the new index we should go to if we fail a comparison
:param pattern:
:return:
"""
failure = [0]
i = 0
j = 1
while j < len(pattern):
if pattern[i] == pattern[j]:
i += 1
elif i > 0:
i = failure[i - 1]
continue
j += 1
failure.append(i)
return failure
if __name__ == "__main__":
# Test 1)
pattern = "abc1abc12"
text1 = "alskfjaldsabc1abc1abc12k23adsfabcabc"
text2 = "alskfjaldsk23adsfabcabc"
assert kmp(pattern, text1) and not kmp(pattern, text2)
# Test 2)
pattern = "ABABX"
text = "ABABZABABYABABX"
assert kmp(pattern, text)
# Test 3)
pattern = "AAAB"
text = "ABAAAAAB"
assert kmp(pattern, text)
# Test 4)
pattern = "abcdabcy"
text = "abcxabcdabxabcdabcdabcy"
assert kmp(pattern, text)
# Test 5)
pattern = "aabaabaaa"
assert get_failure_array(pattern) == [0, 1, 0, 1, 2, 3, 4, 5, 2]
| from __future__ import annotations
def kmp(pattern: str, text: str) -> bool:
"""
The Knuth-Morris-Pratt Algorithm for finding a pattern within a piece of text
with complexity O(n + m)
1) Preprocess pattern to identify any suffixes that are identical to prefixes
This tells us where to continue from if we get a mismatch between a character
in our pattern and the text.
2) Step through the text one character at a time and compare it to a character in
the pattern updating our location within the pattern if necessary
"""
# 1) Construct the failure array
failure = get_failure_array(pattern)
# 2) Step through text searching for pattern
i, j = 0, 0 # index into text, pattern
while i < len(text):
if pattern[j] == text[i]:
if j == (len(pattern) - 1):
return True
j += 1
# if this is a prefix in our pattern
# just go back far enough to continue
elif j > 0:
j = failure[j - 1]
continue
i += 1
return False
def get_failure_array(pattern: str) -> list[int]:
"""
Calculates the new index we should go to if we fail a comparison
:param pattern:
:return:
"""
failure = [0]
i = 0
j = 1
while j < len(pattern):
if pattern[i] == pattern[j]:
i += 1
elif i > 0:
i = failure[i - 1]
continue
j += 1
failure.append(i)
return failure
if __name__ == "__main__":
# Test 1)
pattern = "abc1abc12"
text1 = "alskfjaldsabc1abc1abc12k23adsfabcabc"
text2 = "alskfjaldsk23adsfabcabc"
assert kmp(pattern, text1) and not kmp(pattern, text2)
# Test 2)
pattern = "ABABX"
text = "ABABZABABYABABX"
assert kmp(pattern, text)
# Test 3)
pattern = "AAAB"
text = "ABAAAAAB"
assert kmp(pattern, text)
# Test 4)
pattern = "abcdabcy"
text = "abcxabcdabxabcdabcdabcy"
assert kmp(pattern, text)
# Test 5)
pattern = "aabaabaaa"
assert get_failure_array(pattern) == [0, 1, 0, 1, 2, 3, 4, 5, 2]
| -1 |
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Pandigital prime
Problem 41: https://projecteuler.net/problem=41
We shall say that an n-digit number is pandigital if it makes use of all the digits
1 to n exactly once. For example, 2143 is a 4-digit pandigital and is also prime.
What is the largest n-digit pandigital prime that exists?
All pandigital numbers except for 1, 4 ,7 pandigital numbers are divisible by 3.
So we will check only 7 digit pandigital numbers to obtain the largest possible
pandigital prime.
"""
from __future__ import annotations
from itertools import permutations
from math import sqrt
def is_prime(n: int) -> bool:
"""
Returns True if n is prime,
False otherwise.
>>> is_prime(67483)
False
>>> is_prime(563)
True
>>> is_prime(87)
False
"""
if n % 2 == 0:
return False
for i in range(3, int(sqrt(n) + 1), 2):
if n % i == 0:
return False
return True
def solution(n: int = 7) -> int:
"""
Returns the maximum pandigital prime number of length n.
If there are none, then it will return 0.
>>> solution(2)
0
>>> solution(4)
4231
>>> solution(7)
7652413
"""
pandigital_str = "".join(str(i) for i in range(1, n + 1))
perm_list = [int("".join(i)) for i in permutations(pandigital_str, n)]
pandigitals = [num for num in perm_list if is_prime(num)]
return max(pandigitals) if pandigitals else 0
if __name__ == "__main__":
print(f"{solution() = }")
| """
Pandigital prime
Problem 41: https://projecteuler.net/problem=41
We shall say that an n-digit number is pandigital if it makes use of all the digits
1 to n exactly once. For example, 2143 is a 4-digit pandigital and is also prime.
What is the largest n-digit pandigital prime that exists?
All pandigital numbers except for 1, 4 ,7 pandigital numbers are divisible by 3.
So we will check only 7 digit pandigital numbers to obtain the largest possible
pandigital prime.
"""
from __future__ import annotations
from itertools import permutations
from math import sqrt
def is_prime(n: int) -> bool:
"""
Returns True if n is prime,
False otherwise.
>>> is_prime(67483)
False
>>> is_prime(563)
True
>>> is_prime(87)
False
"""
if n % 2 == 0:
return False
for i in range(3, int(sqrt(n) + 1), 2):
if n % i == 0:
return False
return True
def solution(n: int = 7) -> int:
"""
Returns the maximum pandigital prime number of length n.
If there are none, then it will return 0.
>>> solution(2)
0
>>> solution(4)
4231
>>> solution(7)
7652413
"""
pandigital_str = "".join(str(i) for i in range(1, n + 1))
perm_list = [int("".join(i)) for i in permutations(pandigital_str, n)]
pandigitals = [num for num in perm_list if is_prime(num)]
return max(pandigitals) if pandigitals else 0
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Sum of digits sequence
Problem 551
Let a(0), a(1),... be an integer sequence defined by:
a(0) = 1
for n >= 1, a(n) is the sum of the digits of all preceding terms
The sequence starts with 1, 1, 2, 4, 8, ...
You are given a(10^6) = 31054319.
Find a(10^15)
"""
ks = [k for k in range(2, 20 + 1)]
base = [10 ** k for k in range(ks[-1] + 1)]
memo: dict[int, dict[int, list[list[int]]]] = {}
def next_term(a_i, k, i, n):
"""
Calculates and updates a_i in-place to either the n-th term or the
smallest term for which c > 10^k when the terms are written in the form:
a(i) = b * 10^k + c
For any a(i), if digitsum(b) and c have the same value, the difference
between subsequent terms will be the same until c >= 10^k. This difference
is cached to greatly speed up the computation.
Arguments:
a_i -- array of digits starting from the one's place that represent
the i-th term in the sequence
k -- k when terms are written in the from a(i) = b*10^k + c.
Term are calulcated until c > 10^k or the n-th term is reached.
i -- position along the sequence
n -- term to calculate up to if k is large enough
Return: a tuple of difference between ending term and starting term, and
the number of terms calculated. ex. if starting term is a_0=1, and
ending term is a_10=62, then (61, 9) is returned.
"""
# ds_b - digitsum(b)
ds_b = sum(a_i[j] for j in range(k, len(a_i)))
c = sum(a_i[j] * base[j] for j in range(min(len(a_i), k)))
diff, dn = 0, 0
max_dn = n - i
sub_memo = memo.get(ds_b)
if sub_memo is not None:
jumps = sub_memo.get(c)
if jumps is not None and len(jumps) > 0:
# find and make the largest jump without going over
max_jump = -1
for _k in range(len(jumps) - 1, -1, -1):
if jumps[_k][2] <= k and jumps[_k][1] <= max_dn:
max_jump = _k
break
if max_jump >= 0:
diff, dn, _kk = jumps[max_jump]
# since the difference between jumps is cached, add c
new_c = diff + c
for j in range(min(k, len(a_i))):
new_c, a_i[j] = divmod(new_c, 10)
if new_c > 0:
add(a_i, k, new_c)
else:
sub_memo[c] = []
else:
sub_memo = {c: []}
memo[ds_b] = sub_memo
if dn >= max_dn or c + diff >= base[k]:
return diff, dn
if k > ks[0]:
while True:
# keep doing smaller jumps
_diff, terms_jumped = next_term(a_i, k - 1, i + dn, n)
diff += _diff
dn += terms_jumped
if dn >= max_dn or c + diff >= base[k]:
break
else:
# would be too small a jump, just compute sequential terms instead
_diff, terms_jumped = compute(a_i, k, i + dn, n)
diff += _diff
dn += terms_jumped
jumps = sub_memo[c]
# keep jumps sorted by # of terms skipped
j = 0
while j < len(jumps):
if jumps[j][1] > dn:
break
j += 1
# cache the jump for this value digitsum(b) and c
sub_memo[c].insert(j, (diff, dn, k))
return (diff, dn)
def compute(a_i, k, i, n):
"""
same as next_term(a_i, k, i, n) but computes terms without memoizing results.
"""
if i >= n:
return 0, i
if k > len(a_i):
a_i.extend([0 for _ in range(k - len(a_i))])
# note: a_i -> b * 10^k + c
# ds_b -> digitsum(b)
# ds_c -> digitsum(c)
start_i = i
ds_b, ds_c, diff = 0, 0, 0
for j in range(len(a_i)):
if j >= k:
ds_b += a_i[j]
else:
ds_c += a_i[j]
while i < n:
i += 1
addend = ds_c + ds_b
diff += addend
ds_c = 0
for j in range(k):
s = a_i[j] + addend
addend, a_i[j] = divmod(s, 10)
ds_c += a_i[j]
if addend > 0:
break
if addend > 0:
add(a_i, k, addend)
return diff, i - start_i
def add(digits, k, addend):
"""
adds addend to digit array given in digits
starting at index k
"""
for j in range(k, len(digits)):
s = digits[j] + addend
if s >= 10:
quotient, digits[j] = divmod(s, 10)
addend = addend // 10 + quotient
else:
digits[j] = s
addend = addend // 10
if addend == 0:
break
while addend > 0:
addend, digit = divmod(addend, 10)
digits.append(digit)
def solution(n: int = 10 ** 15) -> int:
"""
returns n-th term of sequence
>>> solution(10)
62
>>> solution(10**6)
31054319
>>> solution(10**15)
73597483551591773
"""
digits = [1]
i = 1
dn = 0
while True:
diff, terms_jumped = next_term(digits, 20, i + dn, n)
dn += terms_jumped
if dn == n - i:
break
a_n = 0
for j in range(len(digits)):
a_n += digits[j] * 10 ** j
return a_n
if __name__ == "__main__":
print(f"{solution() = }")
| """
Sum of digits sequence
Problem 551
Let a(0), a(1),... be an integer sequence defined by:
a(0) = 1
for n >= 1, a(n) is the sum of the digits of all preceding terms
The sequence starts with 1, 1, 2, 4, 8, ...
You are given a(10^6) = 31054319.
Find a(10^15)
"""
ks = [k for k in range(2, 20 + 1)]
base = [10 ** k for k in range(ks[-1] + 1)]
memo: dict[int, dict[int, list[list[int]]]] = {}
def next_term(a_i, k, i, n):
"""
Calculates and updates a_i in-place to either the n-th term or the
smallest term for which c > 10^k when the terms are written in the form:
a(i) = b * 10^k + c
For any a(i), if digitsum(b) and c have the same value, the difference
between subsequent terms will be the same until c >= 10^k. This difference
is cached to greatly speed up the computation.
Arguments:
a_i -- array of digits starting from the one's place that represent
the i-th term in the sequence
k -- k when terms are written in the from a(i) = b*10^k + c.
Term are calulcated until c > 10^k or the n-th term is reached.
i -- position along the sequence
n -- term to calculate up to if k is large enough
Return: a tuple of difference between ending term and starting term, and
the number of terms calculated. ex. if starting term is a_0=1, and
ending term is a_10=62, then (61, 9) is returned.
"""
# ds_b - digitsum(b)
ds_b = sum(a_i[j] for j in range(k, len(a_i)))
c = sum(a_i[j] * base[j] for j in range(min(len(a_i), k)))
diff, dn = 0, 0
max_dn = n - i
sub_memo = memo.get(ds_b)
if sub_memo is not None:
jumps = sub_memo.get(c)
if jumps is not None and len(jumps) > 0:
# find and make the largest jump without going over
max_jump = -1
for _k in range(len(jumps) - 1, -1, -1):
if jumps[_k][2] <= k and jumps[_k][1] <= max_dn:
max_jump = _k
break
if max_jump >= 0:
diff, dn, _kk = jumps[max_jump]
# since the difference between jumps is cached, add c
new_c = diff + c
for j in range(min(k, len(a_i))):
new_c, a_i[j] = divmod(new_c, 10)
if new_c > 0:
add(a_i, k, new_c)
else:
sub_memo[c] = []
else:
sub_memo = {c: []}
memo[ds_b] = sub_memo
if dn >= max_dn or c + diff >= base[k]:
return diff, dn
if k > ks[0]:
while True:
# keep doing smaller jumps
_diff, terms_jumped = next_term(a_i, k - 1, i + dn, n)
diff += _diff
dn += terms_jumped
if dn >= max_dn or c + diff >= base[k]:
break
else:
# would be too small a jump, just compute sequential terms instead
_diff, terms_jumped = compute(a_i, k, i + dn, n)
diff += _diff
dn += terms_jumped
jumps = sub_memo[c]
# keep jumps sorted by # of terms skipped
j = 0
while j < len(jumps):
if jumps[j][1] > dn:
break
j += 1
# cache the jump for this value digitsum(b) and c
sub_memo[c].insert(j, (diff, dn, k))
return (diff, dn)
def compute(a_i, k, i, n):
"""
same as next_term(a_i, k, i, n) but computes terms without memoizing results.
"""
if i >= n:
return 0, i
if k > len(a_i):
a_i.extend([0 for _ in range(k - len(a_i))])
# note: a_i -> b * 10^k + c
# ds_b -> digitsum(b)
# ds_c -> digitsum(c)
start_i = i
ds_b, ds_c, diff = 0, 0, 0
for j in range(len(a_i)):
if j >= k:
ds_b += a_i[j]
else:
ds_c += a_i[j]
while i < n:
i += 1
addend = ds_c + ds_b
diff += addend
ds_c = 0
for j in range(k):
s = a_i[j] + addend
addend, a_i[j] = divmod(s, 10)
ds_c += a_i[j]
if addend > 0:
break
if addend > 0:
add(a_i, k, addend)
return diff, i - start_i
def add(digits, k, addend):
"""
adds addend to digit array given in digits
starting at index k
"""
for j in range(k, len(digits)):
s = digits[j] + addend
if s >= 10:
quotient, digits[j] = divmod(s, 10)
addend = addend // 10 + quotient
else:
digits[j] = s
addend = addend // 10
if addend == 0:
break
while addend > 0:
addend, digit = divmod(addend, 10)
digits.append(digit)
def solution(n: int = 10 ** 15) -> int:
"""
returns n-th term of sequence
>>> solution(10)
62
>>> solution(10**6)
31054319
>>> solution(10**15)
73597483551591773
"""
digits = [1]
i = 1
dn = 0
while True:
diff, terms_jumped = next_term(digits, 20, i + dn, n)
dn += terms_jumped
if dn == n - i:
break
a_n = 0
for j in range(len(digits)):
a_n += digits[j] * 10 ** j
return a_n
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| if __name__ == "__main__":
import socket # Import socket module
sock = socket.socket() # Create a socket object
host = socket.gethostname() # Get local machine name
port = 12312
sock.connect((host, port))
sock.send(b"Hello server!")
with open("Received_file", "wb") as out_file:
print("File opened")
print("Receiving data...")
while True:
data = sock.recv(1024)
print(f"{data = }")
if not data:
break
out_file.write(data) # Write data to a file
print("Successfully got the file")
sock.close()
print("Connection closed")
| if __name__ == "__main__":
import socket # Import socket module
sock = socket.socket() # Create a socket object
host = socket.gethostname() # Get local machine name
port = 12312
sock.connect((host, port))
sock.send(b"Hello server!")
with open("Received_file", "wb") as out_file:
print("File opened")
print("Receiving data...")
while True:
data = sock.recv(1024)
print(f"{data = }")
if not data:
break
out_file.write(data) # Write data to a file
print("Successfully got the file")
sock.close()
print("Connection closed")
| -1 |
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Author: João Gustavo A. Amorim & Gabriel Kunz
# Author email: [email protected] and [email protected]
# Coding date: apr 2019
# Black: True
"""
* This code implement the Hamming code:
https://en.wikipedia.org/wiki/Hamming_code - In telecommunication,
Hamming codes are a family of linear error-correcting codes. Hamming
codes can detect up to two-bit errors or correct one-bit errors
without detection of uncorrected errors. By contrast, the simple
parity code cannot correct errors, and can detect only an odd number
of bits in error. Hamming codes are perfect codes, that is, they
achieve the highest possible rate for codes with their block length
and minimum distance of three.
* the implemented code consists of:
* a function responsible for encoding the message (emitterConverter)
* return the encoded message
* a function responsible for decoding the message (receptorConverter)
* return the decoded message and a ack of data integrity
* how to use:
to be used you must declare how many parity bits (sizePari)
you want to include in the message.
it is desired (for test purposes) to select a bit to be set
as an error. This serves to check whether the code is working correctly.
Lastly, the variable of the message/word that must be desired to be
encoded (text).
* how this work:
declaration of variables (sizePari, be, text)
converts the message/word (text) to binary using the
text_to_bits function
encodes the message using the rules of hamming encoding
decodes the message using the rules of hamming encoding
print the original message, the encoded message and the
decoded message
forces an error in the coded text variable
decodes the message that was forced the error
print the original message, the encoded message, the bit changed
message and the decoded message
"""
# Imports
import numpy as np
# Functions of binary conversion--------------------------------------
def text_to_bits(text, encoding="utf-8", errors="surrogatepass"):
"""
>>> text_to_bits("msg")
'011011010111001101100111'
"""
bits = bin(int.from_bytes(text.encode(encoding, errors), "big"))[2:]
return bits.zfill(8 * ((len(bits) + 7) // 8))
def text_from_bits(bits, encoding="utf-8", errors="surrogatepass"):
"""
>>> text_from_bits('011011010111001101100111')
'msg'
"""
n = int(bits, 2)
return n.to_bytes((n.bit_length() + 7) // 8, "big").decode(encoding, errors) or "\0"
# Functions of hamming code-------------------------------------------
def emitterConverter(sizePar, data):
"""
:param sizePar: how many parity bits the message must have
:param data: information bits
:return: message to be transmitted by unreliable medium
- bits of information merged with parity bits
>>> emitterConverter(4, "101010111111")
['1', '1', '1', '1', '0', '1', '0', '0', '1', '0', '1', '1', '1', '1', '1', '1']
"""
if sizePar + len(data) <= 2 ** sizePar - (len(data) - 1):
print("ERROR - size of parity don't match with size of data")
exit(0)
dataOut = []
parity = []
binPos = [bin(x)[2:] for x in range(1, sizePar + len(data) + 1)]
# sorted information data for the size of the output data
dataOrd = []
# data position template + parity
dataOutGab = []
# parity bit counter
qtdBP = 0
# counter position of data bits
contData = 0
for x in range(1, sizePar + len(data) + 1):
# Performs a template of bit positions - who should be given,
# and who should be parity
if qtdBP < sizePar:
if (np.log(x) / np.log(2)).is_integer():
dataOutGab.append("P")
qtdBP = qtdBP + 1
else:
dataOutGab.append("D")
else:
dataOutGab.append("D")
# Sorts the data to the new output size
if dataOutGab[-1] == "D":
dataOrd.append(data[contData])
contData += 1
else:
dataOrd.append(None)
# Calculates parity
qtdBP = 0 # parity bit counter
for bp in range(1, sizePar + 1):
# Bit counter one for a given parity
contBO = 0
# counter to control the loop reading
contLoop = 0
for x in dataOrd:
if x is not None:
try:
aux = (binPos[contLoop])[-1 * (bp)]
except IndexError:
aux = "0"
if aux == "1":
if x == "1":
contBO += 1
contLoop += 1
parity.append(contBO % 2)
qtdBP += 1
# Mount the message
ContBP = 0 # parity bit counter
for x in range(0, sizePar + len(data)):
if dataOrd[x] is None:
dataOut.append(str(parity[ContBP]))
ContBP += 1
else:
dataOut.append(dataOrd[x])
return dataOut
def receptorConverter(sizePar, data):
"""
>>> receptorConverter(4, "1111010010111111")
(['1', '0', '1', '0', '1', '0', '1', '1', '1', '1', '1', '1'], True)
"""
# data position template + parity
dataOutGab = []
# Parity bit counter
qtdBP = 0
# Counter p data bit reading
contData = 0
# list of parity received
parityReceived = []
dataOutput = []
for x in range(1, len(data) + 1):
# Performs a template of bit positions - who should be given,
# and who should be parity
if qtdBP < sizePar and (np.log(x) / np.log(2)).is_integer():
dataOutGab.append("P")
qtdBP = qtdBP + 1
else:
dataOutGab.append("D")
# Sorts the data to the new output size
if dataOutGab[-1] == "D":
dataOutput.append(data[contData])
else:
parityReceived.append(data[contData])
contData += 1
# -----------calculates the parity with the data
dataOut = []
parity = []
binPos = [bin(x)[2:] for x in range(1, sizePar + len(dataOutput) + 1)]
# sorted information data for the size of the output data
dataOrd = []
# Data position feedback + parity
dataOutGab = []
# Parity bit counter
qtdBP = 0
# Counter p data bit reading
contData = 0
for x in range(1, sizePar + len(dataOutput) + 1):
# Performs a template position of bits - who should be given,
# and who should be parity
if qtdBP < sizePar and (np.log(x) / np.log(2)).is_integer():
dataOutGab.append("P")
qtdBP = qtdBP + 1
else:
dataOutGab.append("D")
# Sorts the data to the new output size
if dataOutGab[-1] == "D":
dataOrd.append(dataOutput[contData])
contData += 1
else:
dataOrd.append(None)
# Calculates parity
qtdBP = 0 # parity bit counter
for bp in range(1, sizePar + 1):
# Bit counter one for a certain parity
contBO = 0
# Counter to control loop reading
contLoop = 0
for x in dataOrd:
if x is not None:
try:
aux = (binPos[contLoop])[-1 * (bp)]
except IndexError:
aux = "0"
if aux == "1" and x == "1":
contBO += 1
contLoop += 1
parity.append(str(contBO % 2))
qtdBP += 1
# Mount the message
ContBP = 0 # Parity bit counter
for x in range(0, sizePar + len(dataOutput)):
if dataOrd[x] is None:
dataOut.append(str(parity[ContBP]))
ContBP += 1
else:
dataOut.append(dataOrd[x])
ack = parityReceived == parity
return dataOutput, ack
# ---------------------------------------------------------------------
"""
# Example how to use
# number of parity bits
sizePari = 4
# location of the bit that will be forced an error
be = 2
# Message/word to be encoded and decoded with hamming
# text = input("Enter the word to be read: ")
text = "Message01"
# Convert the message to binary
binaryText = text_to_bits(text)
# Prints the binary of the string
print("Text input in binary is '" + binaryText + "'")
# total transmitted bits
totalBits = len(binaryText) + sizePari
print("Size of data is " + str(totalBits))
print("\n --Message exchange--")
print("Data to send ------------> " + binaryText)
dataOut = emitterConverter(sizePari, binaryText)
print("Data converted ----------> " + "".join(dataOut))
dataReceiv, ack = receptorConverter(sizePari, dataOut)
print(
"Data receive ------------> "
+ "".join(dataReceiv)
+ "\t\t -- Data integrity: "
+ str(ack)
)
print("\n --Force error--")
print("Data to send ------------> " + binaryText)
dataOut = emitterConverter(sizePari, binaryText)
print("Data converted ----------> " + "".join(dataOut))
# forces error
dataOut[-be] = "1" * (dataOut[-be] == "0") + "0" * (dataOut[-be] == "1")
print("Data after transmission -> " + "".join(dataOut))
dataReceiv, ack = receptorConverter(sizePari, dataOut)
print(
"Data receive ------------> "
+ "".join(dataReceiv)
+ "\t\t -- Data integrity: "
+ str(ack)
)
"""
| # Author: João Gustavo A. Amorim & Gabriel Kunz
# Author email: [email protected] and [email protected]
# Coding date: apr 2019
# Black: True
"""
* This code implement the Hamming code:
https://en.wikipedia.org/wiki/Hamming_code - In telecommunication,
Hamming codes are a family of linear error-correcting codes. Hamming
codes can detect up to two-bit errors or correct one-bit errors
without detection of uncorrected errors. By contrast, the simple
parity code cannot correct errors, and can detect only an odd number
of bits in error. Hamming codes are perfect codes, that is, they
achieve the highest possible rate for codes with their block length
and minimum distance of three.
* the implemented code consists of:
* a function responsible for encoding the message (emitterConverter)
* return the encoded message
* a function responsible for decoding the message (receptorConverter)
* return the decoded message and a ack of data integrity
* how to use:
to be used you must declare how many parity bits (sizePari)
you want to include in the message.
it is desired (for test purposes) to select a bit to be set
as an error. This serves to check whether the code is working correctly.
Lastly, the variable of the message/word that must be desired to be
encoded (text).
* how this work:
declaration of variables (sizePari, be, text)
converts the message/word (text) to binary using the
text_to_bits function
encodes the message using the rules of hamming encoding
decodes the message using the rules of hamming encoding
print the original message, the encoded message and the
decoded message
forces an error in the coded text variable
decodes the message that was forced the error
print the original message, the encoded message, the bit changed
message and the decoded message
"""
# Imports
import numpy as np
# Functions of binary conversion--------------------------------------
def text_to_bits(text, encoding="utf-8", errors="surrogatepass"):
"""
>>> text_to_bits("msg")
'011011010111001101100111'
"""
bits = bin(int.from_bytes(text.encode(encoding, errors), "big"))[2:]
return bits.zfill(8 * ((len(bits) + 7) // 8))
def text_from_bits(bits, encoding="utf-8", errors="surrogatepass"):
"""
>>> text_from_bits('011011010111001101100111')
'msg'
"""
n = int(bits, 2)
return n.to_bytes((n.bit_length() + 7) // 8, "big").decode(encoding, errors) or "\0"
# Functions of hamming code-------------------------------------------
def emitterConverter(sizePar, data):
"""
:param sizePar: how many parity bits the message must have
:param data: information bits
:return: message to be transmitted by unreliable medium
- bits of information merged with parity bits
>>> emitterConverter(4, "101010111111")
['1', '1', '1', '1', '0', '1', '0', '0', '1', '0', '1', '1', '1', '1', '1', '1']
"""
if sizePar + len(data) <= 2 ** sizePar - (len(data) - 1):
print("ERROR - size of parity don't match with size of data")
exit(0)
dataOut = []
parity = []
binPos = [bin(x)[2:] for x in range(1, sizePar + len(data) + 1)]
# sorted information data for the size of the output data
dataOrd = []
# data position template + parity
dataOutGab = []
# parity bit counter
qtdBP = 0
# counter position of data bits
contData = 0
for x in range(1, sizePar + len(data) + 1):
# Performs a template of bit positions - who should be given,
# and who should be parity
if qtdBP < sizePar:
if (np.log(x) / np.log(2)).is_integer():
dataOutGab.append("P")
qtdBP = qtdBP + 1
else:
dataOutGab.append("D")
else:
dataOutGab.append("D")
# Sorts the data to the new output size
if dataOutGab[-1] == "D":
dataOrd.append(data[contData])
contData += 1
else:
dataOrd.append(None)
# Calculates parity
qtdBP = 0 # parity bit counter
for bp in range(1, sizePar + 1):
# Bit counter one for a given parity
contBO = 0
# counter to control the loop reading
contLoop = 0
for x in dataOrd:
if x is not None:
try:
aux = (binPos[contLoop])[-1 * (bp)]
except IndexError:
aux = "0"
if aux == "1":
if x == "1":
contBO += 1
contLoop += 1
parity.append(contBO % 2)
qtdBP += 1
# Mount the message
ContBP = 0 # parity bit counter
for x in range(0, sizePar + len(data)):
if dataOrd[x] is None:
dataOut.append(str(parity[ContBP]))
ContBP += 1
else:
dataOut.append(dataOrd[x])
return dataOut
def receptorConverter(sizePar, data):
"""
>>> receptorConverter(4, "1111010010111111")
(['1', '0', '1', '0', '1', '0', '1', '1', '1', '1', '1', '1'], True)
"""
# data position template + parity
dataOutGab = []
# Parity bit counter
qtdBP = 0
# Counter p data bit reading
contData = 0
# list of parity received
parityReceived = []
dataOutput = []
for x in range(1, len(data) + 1):
# Performs a template of bit positions - who should be given,
# and who should be parity
if qtdBP < sizePar and (np.log(x) / np.log(2)).is_integer():
dataOutGab.append("P")
qtdBP = qtdBP + 1
else:
dataOutGab.append("D")
# Sorts the data to the new output size
if dataOutGab[-1] == "D":
dataOutput.append(data[contData])
else:
parityReceived.append(data[contData])
contData += 1
# -----------calculates the parity with the data
dataOut = []
parity = []
binPos = [bin(x)[2:] for x in range(1, sizePar + len(dataOutput) + 1)]
# sorted information data for the size of the output data
dataOrd = []
# Data position feedback + parity
dataOutGab = []
# Parity bit counter
qtdBP = 0
# Counter p data bit reading
contData = 0
for x in range(1, sizePar + len(dataOutput) + 1):
# Performs a template position of bits - who should be given,
# and who should be parity
if qtdBP < sizePar and (np.log(x) / np.log(2)).is_integer():
dataOutGab.append("P")
qtdBP = qtdBP + 1
else:
dataOutGab.append("D")
# Sorts the data to the new output size
if dataOutGab[-1] == "D":
dataOrd.append(dataOutput[contData])
contData += 1
else:
dataOrd.append(None)
# Calculates parity
qtdBP = 0 # parity bit counter
for bp in range(1, sizePar + 1):
# Bit counter one for a certain parity
contBO = 0
# Counter to control loop reading
contLoop = 0
for x in dataOrd:
if x is not None:
try:
aux = (binPos[contLoop])[-1 * (bp)]
except IndexError:
aux = "0"
if aux == "1" and x == "1":
contBO += 1
contLoop += 1
parity.append(str(contBO % 2))
qtdBP += 1
# Mount the message
ContBP = 0 # Parity bit counter
for x in range(0, sizePar + len(dataOutput)):
if dataOrd[x] is None:
dataOut.append(str(parity[ContBP]))
ContBP += 1
else:
dataOut.append(dataOrd[x])
ack = parityReceived == parity
return dataOutput, ack
# ---------------------------------------------------------------------
"""
# Example how to use
# number of parity bits
sizePari = 4
# location of the bit that will be forced an error
be = 2
# Message/word to be encoded and decoded with hamming
# text = input("Enter the word to be read: ")
text = "Message01"
# Convert the message to binary
binaryText = text_to_bits(text)
# Prints the binary of the string
print("Text input in binary is '" + binaryText + "'")
# total transmitted bits
totalBits = len(binaryText) + sizePari
print("Size of data is " + str(totalBits))
print("\n --Message exchange--")
print("Data to send ------------> " + binaryText)
dataOut = emitterConverter(sizePari, binaryText)
print("Data converted ----------> " + "".join(dataOut))
dataReceiv, ack = receptorConverter(sizePari, dataOut)
print(
"Data receive ------------> "
+ "".join(dataReceiv)
+ "\t\t -- Data integrity: "
+ str(ack)
)
print("\n --Force error--")
print("Data to send ------------> " + binaryText)
dataOut = emitterConverter(sizePari, binaryText)
print("Data converted ----------> " + "".join(dataOut))
# forces error
dataOut[-be] = "1" * (dataOut[-be] == "0") + "0" * (dataOut[-be] == "1")
print("Data after transmission -> " + "".join(dataOut))
dataReceiv, ack = receptorConverter(sizePari, dataOut)
print(
"Data receive ------------> "
+ "".join(dataReceiv)
+ "\t\t -- Data integrity: "
+ str(ack)
)
"""
| -1 |
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 5: https://projecteuler.net/problem=5
Smallest multiple
2520 is the smallest number that can be divided by each of the numbers
from 1 to 10 without any remainder.
What is the smallest positive number that is _evenly divisible_ by all
of the numbers from 1 to 20?
References:
- https://en.wiktionary.org/wiki/evenly_divisible
"""
def solution(n: int = 20) -> int:
"""
Returns the smallest positive number that is evenly divisible (divisible
with no remainder) by all of the numbers from 1 to n.
>>> solution(10)
2520
>>> solution(15)
360360
>>> solution(22)
232792560
>>> solution(3.4)
6
>>> solution(0)
Traceback (most recent call last):
...
ValueError: Parameter n must be greater than or equal to one.
>>> solution(-17)
Traceback (most recent call last):
...
ValueError: Parameter n must be greater than or equal to one.
>>> solution([])
Traceback (most recent call last):
...
TypeError: Parameter n must be int or castable to int.
>>> solution("asd")
Traceback (most recent call last):
...
TypeError: Parameter n must be int or castable to int.
"""
try:
n = int(n)
except (TypeError, ValueError):
raise TypeError("Parameter n must be int or castable to int.")
if n <= 0:
raise ValueError("Parameter n must be greater than or equal to one.")
i = 0
while 1:
i += n * (n - 1)
nfound = 0
for j in range(2, n):
if i % j != 0:
nfound = 1
break
if nfound == 0:
if i == 0:
i = 1
return i
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 5: https://projecteuler.net/problem=5
Smallest multiple
2520 is the smallest number that can be divided by each of the numbers
from 1 to 10 without any remainder.
What is the smallest positive number that is _evenly divisible_ by all
of the numbers from 1 to 20?
References:
- https://en.wiktionary.org/wiki/evenly_divisible
"""
def solution(n: int = 20) -> int:
"""
Returns the smallest positive number that is evenly divisible (divisible
with no remainder) by all of the numbers from 1 to n.
>>> solution(10)
2520
>>> solution(15)
360360
>>> solution(22)
232792560
>>> solution(3.4)
6
>>> solution(0)
Traceback (most recent call last):
...
ValueError: Parameter n must be greater than or equal to one.
>>> solution(-17)
Traceback (most recent call last):
...
ValueError: Parameter n must be greater than or equal to one.
>>> solution([])
Traceback (most recent call last):
...
TypeError: Parameter n must be int or castable to int.
>>> solution("asd")
Traceback (most recent call last):
...
TypeError: Parameter n must be int or castable to int.
"""
try:
n = int(n)
except (TypeError, ValueError):
raise TypeError("Parameter n must be int or castable to int.")
if n <= 0:
raise ValueError("Parameter n must be greater than or equal to one.")
i = 0
while 1:
i += n * (n - 1)
nfound = 0
for j in range(2, n):
if i % j != 0:
nfound = 1
break
if nfound == 0:
if i == 0:
i = 1
return i
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Introspective Sort is hybrid sort (Quick Sort + Heap Sort + Insertion Sort)
if the size of the list is under 16, use insertion sort
https://en.wikipedia.org/wiki/Introsort
"""
import math
def insertion_sort(array: list, start: int = 0, end: int = 0) -> list:
"""
>>> array = [4, 2, 6, 8, 1, 7, 8, 22, 14, 56, 27, 79, 23, 45, 14, 12]
>>> insertion_sort(array, 0, len(array))
[1, 2, 4, 6, 7, 8, 8, 12, 14, 14, 22, 23, 27, 45, 56, 79]
"""
end = end or len(array)
for i in range(start, end):
temp_index = i
temp_index_value = array[i]
while temp_index != start and temp_index_value < array[temp_index - 1]:
array[temp_index] = array[temp_index - 1]
temp_index -= 1
array[temp_index] = temp_index_value
return array
def heapify(array: list, index: int, heap_size: int) -> None: # Max Heap
"""
>>> array = [4, 2, 6, 8, 1, 7, 8, 22, 14, 56, 27, 79, 23, 45, 14, 12]
>>> heapify(array, len(array) // 2 ,len(array))
"""
largest = index
left_index = 2 * index + 1 # Left Node
right_index = 2 * index + 2 # Right Node
if left_index < heap_size and array[largest] < array[left_index]:
largest = left_index
if right_index < heap_size and array[largest] < array[right_index]:
largest = right_index
if largest != index:
array[index], array[largest] = array[largest], array[index]
heapify(array, largest, heap_size)
def heap_sort(array: list) -> list:
"""
>>> array = [4, 2, 6, 8, 1, 7, 8, 22, 14, 56, 27, 79, 23, 45, 14, 12]
>>> heap_sort(array)
[1, 2, 4, 6, 7, 8, 8, 12, 14, 14, 22, 23, 27, 45, 56, 79]
"""
n = len(array)
for i in range(n // 2, -1, -1):
heapify(array, i, n)
for i in range(n - 1, 0, -1):
array[i], array[0] = array[0], array[i]
heapify(array, 0, i)
return array
def median_of_3(
array: list, first_index: int, middle_index: int, last_index: int
) -> int:
"""
>>> array = [4, 2, 6, 8, 1, 7, 8, 22, 14, 56, 27, 79, 23, 45, 14, 12]
>>> median_of_3(array, 0, 0 + ((len(array) - 0) // 2) + 1, len(array) - 1)
12
"""
if (array[first_index] > array[middle_index]) != (
array[first_index] > array[last_index]
):
return array[first_index]
elif (array[middle_index] > array[first_index]) != (
array[middle_index] > array[last_index]
):
return array[middle_index]
else:
return array[last_index]
def partition(array: list, low: int, high: int, pivot: int) -> int:
"""
>>> array = [4, 2, 6, 8, 1, 7, 8, 22, 14, 56, 27, 79, 23, 45, 14, 12]
>>> partition(array, 0, len(array), 12)
8
"""
i = low
j = high
while True:
while array[i] < pivot:
i += 1
j -= 1
while pivot < array[j]:
j -= 1
if i >= j:
return i
array[i], array[j] = array[j], array[i]
i += 1
def sort(array: list) -> list:
"""
:param collection: some mutable ordered collection with heterogeneous
comparable items inside
:return: the same collection ordered by ascending
Examples:
>>> sort([4, 2, 6, 8, 1, 7, 8, 22, 14, 56, 27, 79, 23, 45, 14, 12])
[1, 2, 4, 6, 7, 8, 8, 12, 14, 14, 22, 23, 27, 45, 56, 79]
>>> sort([-1, -5, -3, -13, -44])
[-44, -13, -5, -3, -1]
>>> sort([])
[]
>>> sort([5])
[5]
>>> sort([-3, 0, -7, 6, 23, -34])
[-34, -7, -3, 0, 6, 23]
>>> sort([1.7, 1.0, 3.3, 2.1, 0.3 ])
[0.3, 1.0, 1.7, 2.1, 3.3]
>>> sort(['d', 'a', 'b', 'e', 'c'])
['a', 'b', 'c', 'd', 'e']
"""
if len(array) == 0:
return array
max_depth = 2 * math.ceil(math.log2(len(array)))
size_threshold = 16
return intro_sort(array, 0, len(array), size_threshold, max_depth)
def intro_sort(
array: list, start: int, end: int, size_threshold: int, max_depth: int
) -> list:
"""
>>> array = [4, 2, 6, 8, 1, 7, 8, 22, 14, 56, 27, 79, 23, 45, 14, 12]
>>> max_depth = 2 * math.ceil(math.log2(len(array)))
>>> intro_sort(array, 0, len(array), 16, max_depth)
[1, 2, 4, 6, 7, 8, 8, 12, 14, 14, 22, 23, 27, 45, 56, 79]
"""
while end - start > size_threshold:
if max_depth == 0:
return heap_sort(array)
max_depth -= 1
pivot = median_of_3(array, start, start + ((end - start) // 2) + 1, end - 1)
p = partition(array, start, end, pivot)
intro_sort(array, p, end, size_threshold, max_depth)
end = p
return insertion_sort(array, start, end)
if __name__ == "__main__":
import doctest
doctest.testmod()
user_input = input("Enter numbers separated by a comma : ").strip()
unsorted = [float(item) for item in user_input.split(",")]
print(sort(unsorted))
| """
Introspective Sort is hybrid sort (Quick Sort + Heap Sort + Insertion Sort)
if the size of the list is under 16, use insertion sort
https://en.wikipedia.org/wiki/Introsort
"""
import math
def insertion_sort(array: list, start: int = 0, end: int = 0) -> list:
"""
>>> array = [4, 2, 6, 8, 1, 7, 8, 22, 14, 56, 27, 79, 23, 45, 14, 12]
>>> insertion_sort(array, 0, len(array))
[1, 2, 4, 6, 7, 8, 8, 12, 14, 14, 22, 23, 27, 45, 56, 79]
"""
end = end or len(array)
for i in range(start, end):
temp_index = i
temp_index_value = array[i]
while temp_index != start and temp_index_value < array[temp_index - 1]:
array[temp_index] = array[temp_index - 1]
temp_index -= 1
array[temp_index] = temp_index_value
return array
def heapify(array: list, index: int, heap_size: int) -> None: # Max Heap
"""
>>> array = [4, 2, 6, 8, 1, 7, 8, 22, 14, 56, 27, 79, 23, 45, 14, 12]
>>> heapify(array, len(array) // 2 ,len(array))
"""
largest = index
left_index = 2 * index + 1 # Left Node
right_index = 2 * index + 2 # Right Node
if left_index < heap_size and array[largest] < array[left_index]:
largest = left_index
if right_index < heap_size and array[largest] < array[right_index]:
largest = right_index
if largest != index:
array[index], array[largest] = array[largest], array[index]
heapify(array, largest, heap_size)
def heap_sort(array: list) -> list:
"""
>>> array = [4, 2, 6, 8, 1, 7, 8, 22, 14, 56, 27, 79, 23, 45, 14, 12]
>>> heap_sort(array)
[1, 2, 4, 6, 7, 8, 8, 12, 14, 14, 22, 23, 27, 45, 56, 79]
"""
n = len(array)
for i in range(n // 2, -1, -1):
heapify(array, i, n)
for i in range(n - 1, 0, -1):
array[i], array[0] = array[0], array[i]
heapify(array, 0, i)
return array
def median_of_3(
array: list, first_index: int, middle_index: int, last_index: int
) -> int:
"""
>>> array = [4, 2, 6, 8, 1, 7, 8, 22, 14, 56, 27, 79, 23, 45, 14, 12]
>>> median_of_3(array, 0, 0 + ((len(array) - 0) // 2) + 1, len(array) - 1)
12
"""
if (array[first_index] > array[middle_index]) != (
array[first_index] > array[last_index]
):
return array[first_index]
elif (array[middle_index] > array[first_index]) != (
array[middle_index] > array[last_index]
):
return array[middle_index]
else:
return array[last_index]
def partition(array: list, low: int, high: int, pivot: int) -> int:
"""
>>> array = [4, 2, 6, 8, 1, 7, 8, 22, 14, 56, 27, 79, 23, 45, 14, 12]
>>> partition(array, 0, len(array), 12)
8
"""
i = low
j = high
while True:
while array[i] < pivot:
i += 1
j -= 1
while pivot < array[j]:
j -= 1
if i >= j:
return i
array[i], array[j] = array[j], array[i]
i += 1
def sort(array: list) -> list:
"""
:param collection: some mutable ordered collection with heterogeneous
comparable items inside
:return: the same collection ordered by ascending
Examples:
>>> sort([4, 2, 6, 8, 1, 7, 8, 22, 14, 56, 27, 79, 23, 45, 14, 12])
[1, 2, 4, 6, 7, 8, 8, 12, 14, 14, 22, 23, 27, 45, 56, 79]
>>> sort([-1, -5, -3, -13, -44])
[-44, -13, -5, -3, -1]
>>> sort([])
[]
>>> sort([5])
[5]
>>> sort([-3, 0, -7, 6, 23, -34])
[-34, -7, -3, 0, 6, 23]
>>> sort([1.7, 1.0, 3.3, 2.1, 0.3 ])
[0.3, 1.0, 1.7, 2.1, 3.3]
>>> sort(['d', 'a', 'b', 'e', 'c'])
['a', 'b', 'c', 'd', 'e']
"""
if len(array) == 0:
return array
max_depth = 2 * math.ceil(math.log2(len(array)))
size_threshold = 16
return intro_sort(array, 0, len(array), size_threshold, max_depth)
def intro_sort(
array: list, start: int, end: int, size_threshold: int, max_depth: int
) -> list:
"""
>>> array = [4, 2, 6, 8, 1, 7, 8, 22, 14, 56, 27, 79, 23, 45, 14, 12]
>>> max_depth = 2 * math.ceil(math.log2(len(array)))
>>> intro_sort(array, 0, len(array), 16, max_depth)
[1, 2, 4, 6, 7, 8, 8, 12, 14, 14, 22, 23, 27, 45, 56, 79]
"""
while end - start > size_threshold:
if max_depth == 0:
return heap_sort(array)
max_depth -= 1
pivot = median_of_3(array, start, start + ((end - start) // 2) + 1, end - 1)
p = partition(array, start, end, pivot)
intro_sort(array, p, end, size_threshold, max_depth)
end = p
return insertion_sort(array, start, end)
if __name__ == "__main__":
import doctest
doctest.testmod()
user_input = input("Enter numbers separated by a comma : ").strip()
unsorted = [float(item) for item in user_input.split(",")]
print(sort(unsorted))
| -1 |
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| from __future__ import annotations
class Node:
"""
A Node has data variable and pointers to Nodes to its left and right.
"""
def __init__(self, data: int) -> None:
self.data = data
self.left: Node | None = None
self.right: Node | None = None
def display(tree: Node | None) -> None: # In Order traversal of the tree
"""
>>> root = Node(1)
>>> root.left = Node(0)
>>> root.right = Node(2)
>>> display(root)
0
1
2
>>> display(root.right)
2
"""
if tree:
display(tree.left)
print(tree.data)
display(tree.right)
def depth_of_tree(tree: Node | None) -> int:
"""
Recursive function that returns the depth of a binary tree.
>>> root = Node(0)
>>> depth_of_tree(root)
1
>>> root.left = Node(0)
>>> depth_of_tree(root)
2
>>> root.right = Node(0)
>>> depth_of_tree(root)
2
>>> root.left.right = Node(0)
>>> depth_of_tree(root)
3
>>> depth_of_tree(root.left)
2
"""
return 1 + max(depth_of_tree(tree.left), depth_of_tree(tree.right)) if tree else 0
def is_full_binary_tree(tree: Node) -> bool:
"""
Returns True if this is a full binary tree
>>> root = Node(0)
>>> is_full_binary_tree(root)
True
>>> root.left = Node(0)
>>> is_full_binary_tree(root)
False
>>> root.right = Node(0)
>>> is_full_binary_tree(root)
True
>>> root.left.left = Node(0)
>>> is_full_binary_tree(root)
False
>>> root.right.right = Node(0)
>>> is_full_binary_tree(root)
False
"""
if not tree:
return True
if tree.left and tree.right:
return is_full_binary_tree(tree.left) and is_full_binary_tree(tree.right)
else:
return not tree.left and not tree.right
def main() -> None: # Main function for testing.
tree = Node(1)
tree.left = Node(2)
tree.right = Node(3)
tree.left.left = Node(4)
tree.left.right = Node(5)
tree.left.right.left = Node(6)
tree.right.left = Node(7)
tree.right.left.left = Node(8)
tree.right.left.left.right = Node(9)
print(is_full_binary_tree(tree))
print(depth_of_tree(tree))
print("Tree is: ")
display(tree)
if __name__ == "__main__":
main()
| from __future__ import annotations
class Node:
"""
A Node has data variable and pointers to Nodes to its left and right.
"""
def __init__(self, data: int) -> None:
self.data = data
self.left: Node | None = None
self.right: Node | None = None
def display(tree: Node | None) -> None: # In Order traversal of the tree
"""
>>> root = Node(1)
>>> root.left = Node(0)
>>> root.right = Node(2)
>>> display(root)
0
1
2
>>> display(root.right)
2
"""
if tree:
display(tree.left)
print(tree.data)
display(tree.right)
def depth_of_tree(tree: Node | None) -> int:
"""
Recursive function that returns the depth of a binary tree.
>>> root = Node(0)
>>> depth_of_tree(root)
1
>>> root.left = Node(0)
>>> depth_of_tree(root)
2
>>> root.right = Node(0)
>>> depth_of_tree(root)
2
>>> root.left.right = Node(0)
>>> depth_of_tree(root)
3
>>> depth_of_tree(root.left)
2
"""
return 1 + max(depth_of_tree(tree.left), depth_of_tree(tree.right)) if tree else 0
def is_full_binary_tree(tree: Node) -> bool:
"""
Returns True if this is a full binary tree
>>> root = Node(0)
>>> is_full_binary_tree(root)
True
>>> root.left = Node(0)
>>> is_full_binary_tree(root)
False
>>> root.right = Node(0)
>>> is_full_binary_tree(root)
True
>>> root.left.left = Node(0)
>>> is_full_binary_tree(root)
False
>>> root.right.right = Node(0)
>>> is_full_binary_tree(root)
False
"""
if not tree:
return True
if tree.left and tree.right:
return is_full_binary_tree(tree.left) and is_full_binary_tree(tree.right)
else:
return not tree.left and not tree.right
def main() -> None: # Main function for testing.
tree = Node(1)
tree.left = Node(2)
tree.right = Node(3)
tree.left.left = Node(4)
tree.left.right = Node(5)
tree.left.right.left = Node(6)
tree.right.left = Node(7)
tree.right.left.left = Node(8)
tree.right.left.left.right = Node(9)
print(is_full_binary_tree(tree))
print(depth_of_tree(tree))
print("Tree is: ")
display(tree)
if __name__ == "__main__":
main()
| -1 |
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
This is a pure Python implementation of the merge sort algorithm
For doctests run following command:
python -m doctest -v merge_sort.py
or
python3 -m doctest -v merge_sort.py
For manual testing run:
python merge_sort.py
"""
def merge_sort(collection: list) -> list:
"""Pure implementation of the merge sort algorithm in Python
:param collection: some mutable ordered collection with heterogeneous
comparable items inside
:return: the same collection ordered by ascending
Examples:
>>> merge_sort([0, 5, 3, 2, 2])
[0, 2, 2, 3, 5]
>>> merge_sort([])
[]
>>> merge_sort([-2, -5, -45])
[-45, -5, -2]
"""
def merge(left: list, right: list) -> list:
"""merge left and right
:param left: left collection
:param right: right collection
:return: merge result
"""
def _merge():
while left and right:
yield (left if left[0] <= right[0] else right).pop(0)
yield from left
yield from right
return list(_merge())
if len(collection) <= 1:
return collection
mid = len(collection) // 2
return merge(merge_sort(collection[:mid]), merge_sort(collection[mid:]))
if __name__ == "__main__":
import doctest
doctest.testmod()
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
print(*merge_sort(unsorted), sep=",")
| """
This is a pure Python implementation of the merge sort algorithm
For doctests run following command:
python -m doctest -v merge_sort.py
or
python3 -m doctest -v merge_sort.py
For manual testing run:
python merge_sort.py
"""
def merge_sort(collection: list) -> list:
"""Pure implementation of the merge sort algorithm in Python
:param collection: some mutable ordered collection with heterogeneous
comparable items inside
:return: the same collection ordered by ascending
Examples:
>>> merge_sort([0, 5, 3, 2, 2])
[0, 2, 2, 3, 5]
>>> merge_sort([])
[]
>>> merge_sort([-2, -5, -45])
[-45, -5, -2]
"""
def merge(left: list, right: list) -> list:
"""merge left and right
:param left: left collection
:param right: right collection
:return: merge result
"""
def _merge():
while left and right:
yield (left if left[0] <= right[0] else right).pop(0)
yield from left
yield from right
return list(_merge())
if len(collection) <= 1:
return collection
mid = len(collection) // 2
return merge(merge_sort(collection[:mid]), merge_sort(collection[mid:]))
if __name__ == "__main__":
import doctest
doctest.testmod()
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
print(*merge_sort(unsorted), sep=",")
| -1 |
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| #
| #
| -1 |
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
This is pure Python implementation of comb sort algorithm.
Comb sort is a relatively simple sorting algorithm originally designed by Wlodzimierz
Dobosiewicz in 1980. It was rediscovered by Stephen Lacey and Richard Box in 1991.
Comb sort improves on bubble sort algorithm.
In bubble sort, distance (or gap) between two compared elements is always one.
Comb sort improvement is that gap can be much more than 1, in order to prevent slowing
down by small values
at the end of a list.
More info on: https://en.wikipedia.org/wiki/Comb_sort
For doctests run following command:
python -m doctest -v comb_sort.py
or
python3 -m doctest -v comb_sort.py
For manual testing run:
python comb_sort.py
"""
def comb_sort(data: list) -> list:
"""Pure implementation of comb sort algorithm in Python
:param data: mutable collection with comparable items
:return: the same collection in ascending order
Examples:
>>> comb_sort([0, 5, 3, 2, 2])
[0, 2, 2, 3, 5]
>>> comb_sort([])
[]
>>> comb_sort([99, 45, -7, 8, 2, 0, -15, 3])
[-15, -7, 0, 2, 3, 8, 45, 99]
"""
shrink_factor = 1.3
gap = len(data)
completed = False
while not completed:
# Update the gap value for a next comb
gap = int(gap / shrink_factor)
if gap <= 1:
completed = True
index = 0
while index + gap < len(data):
if data[index] > data[index + gap]:
# Swap values
data[index], data[index + gap] = data[index + gap], data[index]
completed = False
index += 1
return data
if __name__ == "__main__":
import doctest
doctest.testmod()
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
print(comb_sort(unsorted))
| """
This is pure Python implementation of comb sort algorithm.
Comb sort is a relatively simple sorting algorithm originally designed by Wlodzimierz
Dobosiewicz in 1980. It was rediscovered by Stephen Lacey and Richard Box in 1991.
Comb sort improves on bubble sort algorithm.
In bubble sort, distance (or gap) between two compared elements is always one.
Comb sort improvement is that gap can be much more than 1, in order to prevent slowing
down by small values
at the end of a list.
More info on: https://en.wikipedia.org/wiki/Comb_sort
For doctests run following command:
python -m doctest -v comb_sort.py
or
python3 -m doctest -v comb_sort.py
For manual testing run:
python comb_sort.py
"""
def comb_sort(data: list) -> list:
"""Pure implementation of comb sort algorithm in Python
:param data: mutable collection with comparable items
:return: the same collection in ascending order
Examples:
>>> comb_sort([0, 5, 3, 2, 2])
[0, 2, 2, 3, 5]
>>> comb_sort([])
[]
>>> comb_sort([99, 45, -7, 8, 2, 0, -15, 3])
[-15, -7, 0, 2, 3, 8, 45, 99]
"""
shrink_factor = 1.3
gap = len(data)
completed = False
while not completed:
# Update the gap value for a next comb
gap = int(gap / shrink_factor)
if gap <= 1:
completed = True
index = 0
while index + gap < len(data):
if data[index] > data[index + gap]:
# Swap values
data[index], data[index + gap] = data[index + gap], data[index]
completed = False
index += 1
return data
if __name__ == "__main__":
import doctest
doctest.testmod()
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
print(comb_sort(unsorted))
| -1 |
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # fibonacci.py
"""
Calculates the Fibonacci sequence using iteration, recursion, and a simplified
form of Binet's formula
NOTE 1: the iterative and recursive functions are more accurate than the Binet's
formula function because the iterative function doesn't use floats
NOTE 2: the Binet's formula function is much more limited in the size of inputs
that it can handle due to the size limitations of Python floats
"""
from math import sqrt
from time import time
def time_func(func, *args, **kwargs):
"""
Times the execution of a function with parameters
"""
start = time()
output = func(*args, **kwargs)
end = time()
if int(end - start) > 0:
print(f"{func.__name__} runtime: {(end - start):0.4f} s")
else:
print(f"{func.__name__} runtime: {(end - start) * 1000:0.4f} ms")
return output
def fib_iterative(n: int) -> list[int]:
"""
Calculates the first n (0-indexed) Fibonacci numbers using iteration
>>> fib_iterative(0)
[0]
>>> fib_iterative(1)
[0, 1]
>>> fib_iterative(5)
[0, 1, 1, 2, 3, 5]
>>> fib_iterative(10)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> fib_iterative(-1)
Traceback (most recent call last):
...
Exception: n is negative
"""
if n < 0:
raise Exception("n is negative")
if n == 0:
return [0]
fib = [0, 1]
for _ in range(n - 1):
fib.append(fib[-1] + fib[-2])
return fib
def fib_recursive(n: int) -> list[int]:
"""
Calculates the first n (0-indexed) Fibonacci numbers using recursion
>>> fib_iterative(0)
[0]
>>> fib_iterative(1)
[0, 1]
>>> fib_iterative(5)
[0, 1, 1, 2, 3, 5]
>>> fib_iterative(10)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> fib_iterative(-1)
Traceback (most recent call last):
...
Exception: n is negative
"""
def fib_recursive_term(i: int) -> int:
"""
Calculates the i-th (0-indexed) Fibonacci number using recursion
"""
if i < 0:
raise Exception("n is negative")
if i < 2:
return i
return fib_recursive_term(i - 1) + fib_recursive_term(i - 2)
if n < 0:
raise Exception("n is negative")
return [fib_recursive_term(i) for i in range(n + 1)]
def fib_binet(n: int) -> list[int]:
"""
Calculates the first n (0-indexed) Fibonacci numbers using a simplified form
of Binet's formula:
https://en.m.wikipedia.org/wiki/Fibonacci_number#Computation_by_rounding
NOTE 1: this function diverges from fib_iterative at around n = 71, likely
due to compounding floating-point arithmetic errors
NOTE 2: this function doesn't accept n >= 1475 because it overflows
thereafter due to the size limitations of Python floats
>>> fib_binet(0)
[0]
>>> fib_binet(1)
[0, 1]
>>> fib_binet(5)
[0, 1, 1, 2, 3, 5]
>>> fib_binet(10)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> fib_binet(-1)
Traceback (most recent call last):
...
Exception: n is negative
>>> fib_binet(1475)
Traceback (most recent call last):
...
Exception: n is too large
"""
if n < 0:
raise Exception("n is negative")
if n >= 1475:
raise Exception("n is too large")
sqrt_5 = sqrt(5)
phi = (1 + sqrt_5) / 2
return [round(phi ** i / sqrt_5) for i in range(n + 1)]
if __name__ == "__main__":
num = 20
time_func(fib_iterative, num)
time_func(fib_recursive, num)
time_func(fib_binet, num)
| # fibonacci.py
"""
Calculates the Fibonacci sequence using iteration, recursion, and a simplified
form of Binet's formula
NOTE 1: the iterative and recursive functions are more accurate than the Binet's
formula function because the iterative function doesn't use floats
NOTE 2: the Binet's formula function is much more limited in the size of inputs
that it can handle due to the size limitations of Python floats
"""
from math import sqrt
from time import time
def time_func(func, *args, **kwargs):
"""
Times the execution of a function with parameters
"""
start = time()
output = func(*args, **kwargs)
end = time()
if int(end - start) > 0:
print(f"{func.__name__} runtime: {(end - start):0.4f} s")
else:
print(f"{func.__name__} runtime: {(end - start) * 1000:0.4f} ms")
return output
def fib_iterative(n: int) -> list[int]:
"""
Calculates the first n (0-indexed) Fibonacci numbers using iteration
>>> fib_iterative(0)
[0]
>>> fib_iterative(1)
[0, 1]
>>> fib_iterative(5)
[0, 1, 1, 2, 3, 5]
>>> fib_iterative(10)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> fib_iterative(-1)
Traceback (most recent call last):
...
Exception: n is negative
"""
if n < 0:
raise Exception("n is negative")
if n == 0:
return [0]
fib = [0, 1]
for _ in range(n - 1):
fib.append(fib[-1] + fib[-2])
return fib
def fib_recursive(n: int) -> list[int]:
"""
Calculates the first n (0-indexed) Fibonacci numbers using recursion
>>> fib_iterative(0)
[0]
>>> fib_iterative(1)
[0, 1]
>>> fib_iterative(5)
[0, 1, 1, 2, 3, 5]
>>> fib_iterative(10)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> fib_iterative(-1)
Traceback (most recent call last):
...
Exception: n is negative
"""
def fib_recursive_term(i: int) -> int:
"""
Calculates the i-th (0-indexed) Fibonacci number using recursion
"""
if i < 0:
raise Exception("n is negative")
if i < 2:
return i
return fib_recursive_term(i - 1) + fib_recursive_term(i - 2)
if n < 0:
raise Exception("n is negative")
return [fib_recursive_term(i) for i in range(n + 1)]
def fib_binet(n: int) -> list[int]:
"""
Calculates the first n (0-indexed) Fibonacci numbers using a simplified form
of Binet's formula:
https://en.m.wikipedia.org/wiki/Fibonacci_number#Computation_by_rounding
NOTE 1: this function diverges from fib_iterative at around n = 71, likely
due to compounding floating-point arithmetic errors
NOTE 2: this function doesn't accept n >= 1475 because it overflows
thereafter due to the size limitations of Python floats
>>> fib_binet(0)
[0]
>>> fib_binet(1)
[0, 1]
>>> fib_binet(5)
[0, 1, 1, 2, 3, 5]
>>> fib_binet(10)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> fib_binet(-1)
Traceback (most recent call last):
...
Exception: n is negative
>>> fib_binet(1475)
Traceback (most recent call last):
...
Exception: n is too large
"""
if n < 0:
raise Exception("n is negative")
if n >= 1475:
raise Exception("n is too large")
sqrt_5 = sqrt(5)
phi = (1 + sqrt_5) / 2
return [round(phi ** i / sqrt_5) for i in range(n + 1)]
if __name__ == "__main__":
num = 20
time_func(fib_iterative, num)
time_func(fib_recursive, num)
time_func(fib_binet, num)
| -1 |
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| 37107287533902102798797998220837590246510135740250
46376937677490009712648124896970078050417018260538
74324986199524741059474233309513058123726617309629
91942213363574161572522430563301811072406154908250
23067588207539346171171980310421047513778063246676
89261670696623633820136378418383684178734361726757
28112879812849979408065481931592621691275889832738
44274228917432520321923589422876796487670272189318
47451445736001306439091167216856844588711603153276
70386486105843025439939619828917593665686757934951
62176457141856560629502157223196586755079324193331
64906352462741904929101432445813822663347944758178
92575867718337217661963751590579239728245598838407
58203565325359399008402633568948830189458628227828
80181199384826282014278194139940567587151170094390
35398664372827112653829987240784473053190104293586
86515506006295864861532075273371959191420517255829
71693888707715466499115593487603532921714970056938
54370070576826684624621495650076471787294438377604
53282654108756828443191190634694037855217779295145
36123272525000296071075082563815656710885258350721
45876576172410976447339110607218265236877223636045
17423706905851860660448207621209813287860733969412
81142660418086830619328460811191061556940512689692
51934325451728388641918047049293215058642563049483
62467221648435076201727918039944693004732956340691
15732444386908125794514089057706229429197107928209
55037687525678773091862540744969844508330393682126
18336384825330154686196124348767681297534375946515
80386287592878490201521685554828717201219257766954
78182833757993103614740356856449095527097864797581
16726320100436897842553539920931837441497806860984
48403098129077791799088218795327364475675590848030
87086987551392711854517078544161852424320693150332
59959406895756536782107074926966537676326235447210
69793950679652694742597709739166693763042633987085
41052684708299085211399427365734116182760315001271
65378607361501080857009149939512557028198746004375
35829035317434717326932123578154982629742552737307
94953759765105305946966067683156574377167401875275
88902802571733229619176668713819931811048770190271
25267680276078003013678680992525463401061632866526
36270218540497705585629946580636237993140746255962
24074486908231174977792365466257246923322810917141
91430288197103288597806669760892938638285025333403
34413065578016127815921815005561868836468420090470
23053081172816430487623791969842487255036638784583
11487696932154902810424020138335124462181441773470
63783299490636259666498587618221225225512486764533
67720186971698544312419572409913959008952310058822
95548255300263520781532296796249481641953868218774
76085327132285723110424803456124867697064507995236
37774242535411291684276865538926205024910326572967
23701913275725675285653248258265463092207058596522
29798860272258331913126375147341994889534765745501
18495701454879288984856827726077713721403798879715
38298203783031473527721580348144513491373226651381
34829543829199918180278916522431027392251122869539
40957953066405232632538044100059654939159879593635
29746152185502371307642255121183693803580388584903
41698116222072977186158236678424689157993532961922
62467957194401269043877107275048102390895523597457
23189706772547915061505504953922979530901129967519
86188088225875314529584099251203829009407770775672
11306739708304724483816533873502340845647058077308
82959174767140363198008187129011875491310547126581
97623331044818386269515456334926366572897563400500
42846280183517070527831839425882145521227251250327
55121603546981200581762165212827652751691296897789
32238195734329339946437501907836945765883352399886
75506164965184775180738168837861091527357929701337
62177842752192623401942399639168044983993173312731
32924185707147349566916674687634660915035914677504
99518671430235219628894890102423325116913619626622
73267460800591547471830798392868535206946944540724
76841822524674417161514036427982273348055556214818
97142617910342598647204516893989422179826088076852
87783646182799346313767754307809363333018982642090
10848802521674670883215120185883543223812876952786
71329612474782464538636993009049310363619763878039
62184073572399794223406235393808339651327408011116
66627891981488087797941876876144230030984490851411
60661826293682836764744779239180335110989069790714
85786944089552990653640447425576083659976645795096
66024396409905389607120198219976047599490197230297
64913982680032973156037120041377903785566085089252
16730939319872750275468906903707539413042652315011
94809377245048795150954100921645863754710598436791
78639167021187492431995700641917969777599028300699
15368713711936614952811305876380278410754449733078
40789923115535562561142322423255033685442488917353
44889911501440648020369068063960672322193204149535
41503128880339536053299340368006977710650566631954
81234880673210146739058568557934581403627822703280
82616570773948327592232845941706525094512325230608
22918802058777319719839450180888072429661980811197
77158542502016545090413245809786882778948721859617
72107838435069186155435662884062257473692284509516
20849603980134001723930671666823555245252804609722
53503534226472524250874054075591789781264330331690
| 37107287533902102798797998220837590246510135740250
46376937677490009712648124896970078050417018260538
74324986199524741059474233309513058123726617309629
91942213363574161572522430563301811072406154908250
23067588207539346171171980310421047513778063246676
89261670696623633820136378418383684178734361726757
28112879812849979408065481931592621691275889832738
44274228917432520321923589422876796487670272189318
47451445736001306439091167216856844588711603153276
70386486105843025439939619828917593665686757934951
62176457141856560629502157223196586755079324193331
64906352462741904929101432445813822663347944758178
92575867718337217661963751590579239728245598838407
58203565325359399008402633568948830189458628227828
80181199384826282014278194139940567587151170094390
35398664372827112653829987240784473053190104293586
86515506006295864861532075273371959191420517255829
71693888707715466499115593487603532921714970056938
54370070576826684624621495650076471787294438377604
53282654108756828443191190634694037855217779295145
36123272525000296071075082563815656710885258350721
45876576172410976447339110607218265236877223636045
17423706905851860660448207621209813287860733969412
81142660418086830619328460811191061556940512689692
51934325451728388641918047049293215058642563049483
62467221648435076201727918039944693004732956340691
15732444386908125794514089057706229429197107928209
55037687525678773091862540744969844508330393682126
18336384825330154686196124348767681297534375946515
80386287592878490201521685554828717201219257766954
78182833757993103614740356856449095527097864797581
16726320100436897842553539920931837441497806860984
48403098129077791799088218795327364475675590848030
87086987551392711854517078544161852424320693150332
59959406895756536782107074926966537676326235447210
69793950679652694742597709739166693763042633987085
41052684708299085211399427365734116182760315001271
65378607361501080857009149939512557028198746004375
35829035317434717326932123578154982629742552737307
94953759765105305946966067683156574377167401875275
88902802571733229619176668713819931811048770190271
25267680276078003013678680992525463401061632866526
36270218540497705585629946580636237993140746255962
24074486908231174977792365466257246923322810917141
91430288197103288597806669760892938638285025333403
34413065578016127815921815005561868836468420090470
23053081172816430487623791969842487255036638784583
11487696932154902810424020138335124462181441773470
63783299490636259666498587618221225225512486764533
67720186971698544312419572409913959008952310058822
95548255300263520781532296796249481641953868218774
76085327132285723110424803456124867697064507995236
37774242535411291684276865538926205024910326572967
23701913275725675285653248258265463092207058596522
29798860272258331913126375147341994889534765745501
18495701454879288984856827726077713721403798879715
38298203783031473527721580348144513491373226651381
34829543829199918180278916522431027392251122869539
40957953066405232632538044100059654939159879593635
29746152185502371307642255121183693803580388584903
41698116222072977186158236678424689157993532961922
62467957194401269043877107275048102390895523597457
23189706772547915061505504953922979530901129967519
86188088225875314529584099251203829009407770775672
11306739708304724483816533873502340845647058077308
82959174767140363198008187129011875491310547126581
97623331044818386269515456334926366572897563400500
42846280183517070527831839425882145521227251250327
55121603546981200581762165212827652751691296897789
32238195734329339946437501907836945765883352399886
75506164965184775180738168837861091527357929701337
62177842752192623401942399639168044983993173312731
32924185707147349566916674687634660915035914677504
99518671430235219628894890102423325116913619626622
73267460800591547471830798392868535206946944540724
76841822524674417161514036427982273348055556214818
97142617910342598647204516893989422179826088076852
87783646182799346313767754307809363333018982642090
10848802521674670883215120185883543223812876952786
71329612474782464538636993009049310363619763878039
62184073572399794223406235393808339651327408011116
66627891981488087797941876876144230030984490851411
60661826293682836764744779239180335110989069790714
85786944089552990653640447425576083659976645795096
66024396409905389607120198219976047599490197230297
64913982680032973156037120041377903785566085089252
16730939319872750275468906903707539413042652315011
94809377245048795150954100921645863754710598436791
78639167021187492431995700641917969777599028300699
15368713711936614952811305876380278410754449733078
40789923115535562561142322423255033685442488917353
44889911501440648020369068063960672322193204149535
41503128880339536053299340368006977710650566631954
81234880673210146739058568557934581403627822703280
82616570773948327592232845941706525094512325230608
22918802058777319719839450180888072429661980811197
77158542502016545090413245809786882778948721859617
72107838435069186155435662884062257473692284509516
20849603980134001723930671666823555245252804609722
53503534226472524250874054075591789781264330331690
| -1 |
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
- A linked list is similar to an array, it holds values. However, links in a linked
list do not have indexes.
- This is an example of a double ended, doubly linked list.
- Each link references the next link and the previous one.
- A Doubly Linked List (DLL) contains an extra pointer, typically called previous
pointer, together with next pointer and data which are there in singly linked list.
- Advantages over SLL - It can be traversed in both forward and backward direction.
Delete operation is more efficient
"""
class Node:
def __init__(self, data: int, previous=None, next_node=None):
self.data = data
self.previous = previous
self.next = next_node
def __str__(self) -> str:
return f"{self.data}"
def get_data(self) -> int:
return self.data
def get_next(self):
return self.next
def get_previous(self):
return self.previous
class LinkedListIterator:
def __init__(self, head):
self.current = head
def __iter__(self):
return self
def __next__(self):
if not self.current:
raise StopIteration
else:
value = self.current.get_data()
self.current = self.current.get_next()
return value
class LinkedList:
def __init__(self):
self.head = None # First node in list
self.tail = None # Last node in list
def __str__(self):
current = self.head
nodes = []
while current is not None:
nodes.append(current.get_data())
current = current.get_next()
return " ".join(str(node) for node in nodes)
def __contains__(self, value: int):
current = self.head
while current:
if current.get_data() == value:
return True
current = current.get_next()
return False
def __iter__(self):
return LinkedListIterator(self.head)
def get_head_data(self):
if self.head:
return self.head.get_data()
return None
def get_tail_data(self):
if self.tail:
return self.tail.get_data()
return None
def set_head(self, node: Node) -> None:
if self.head is None:
self.head = node
self.tail = node
else:
self.insert_before_node(self.head, node)
def set_tail(self, node: Node) -> None:
if self.head is None:
self.set_head(node)
else:
self.insert_after_node(self.tail, node)
def insert(self, value: int) -> None:
node = Node(value)
if self.head is None:
self.set_head(node)
else:
self.set_tail(node)
def insert_before_node(self, node: Node, node_to_insert: Node) -> None:
node_to_insert.next = node
node_to_insert.previous = node.previous
if node.get_previous() is None:
self.head = node_to_insert
else:
node.previous.next = node_to_insert
node.previous = node_to_insert
def insert_after_node(self, node: Node, node_to_insert: Node) -> None:
node_to_insert.previous = node
node_to_insert.next = node.next
if node.get_next() is None:
self.tail = node_to_insert
else:
node.next.previous = node_to_insert
node.next = node_to_insert
def insert_at_position(self, position: int, value: int) -> None:
current_position = 1
new_node = Node(value)
node = self.head
while node:
if current_position == position:
self.insert_before_node(node, new_node)
return None
current_position += 1
node = node.next
self.insert_after_node(self.tail, new_node)
def get_node(self, item: int) -> Node:
node = self.head
while node:
if node.get_data() == item:
return node
node = node.get_next()
raise Exception("Node not found")
def delete_value(self, value):
node = self.get_node(value)
if node is not None:
if node == self.head:
self.head = self.head.get_next()
if node == self.tail:
self.tail = self.tail.get_previous()
self.remove_node_pointers(node)
@staticmethod
def remove_node_pointers(node: Node) -> None:
if node.get_next():
node.next.previous = node.previous
if node.get_previous():
node.previous.next = node.next
node.next = None
node.previous = None
def is_empty(self):
return self.head is None
def create_linked_list() -> None:
"""
>>> new_linked_list = LinkedList()
>>> new_linked_list.get_head_data() is None
True
>>> new_linked_list.get_tail_data() is None
True
>>> new_linked_list.is_empty()
True
>>> new_linked_list.insert(10)
>>> new_linked_list.get_head_data()
10
>>> new_linked_list.get_tail_data()
10
>>> new_linked_list.insert_at_position(position=3, value=20)
>>> new_linked_list.get_head_data()
10
>>> new_linked_list.get_tail_data()
20
>>> new_linked_list.set_head(Node(1000))
>>> new_linked_list.get_head_data()
1000
>>> new_linked_list.get_tail_data()
20
>>> new_linked_list.set_tail(Node(2000))
>>> new_linked_list.get_head_data()
1000
>>> new_linked_list.get_tail_data()
2000
>>> for value in new_linked_list:
... print(value)
1000
10
20
2000
>>> new_linked_list.is_empty()
False
>>> for value in new_linked_list:
... print(value)
1000
10
20
2000
>>> 10 in new_linked_list
True
>>> new_linked_list.delete_value(value=10)
>>> 10 in new_linked_list
False
>>> new_linked_list.delete_value(value=2000)
>>> new_linked_list.get_tail_data()
20
>>> new_linked_list.delete_value(value=1000)
>>> new_linked_list.get_tail_data()
20
>>> new_linked_list.get_head_data()
20
>>> for value in new_linked_list:
... print(value)
20
>>> new_linked_list.delete_value(value=20)
>>> for value in new_linked_list:
... print(value)
>>> for value in range(1,10):
... new_linked_list.insert(value=value)
>>> for value in new_linked_list:
... print(value)
1
2
3
4
5
6
7
8
9
"""
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
- A linked list is similar to an array, it holds values. However, links in a linked
list do not have indexes.
- This is an example of a double ended, doubly linked list.
- Each link references the next link and the previous one.
- A Doubly Linked List (DLL) contains an extra pointer, typically called previous
pointer, together with next pointer and data which are there in singly linked list.
- Advantages over SLL - It can be traversed in both forward and backward direction.
Delete operation is more efficient
"""
class Node:
def __init__(self, data: int, previous=None, next_node=None):
self.data = data
self.previous = previous
self.next = next_node
def __str__(self) -> str:
return f"{self.data}"
def get_data(self) -> int:
return self.data
def get_next(self):
return self.next
def get_previous(self):
return self.previous
class LinkedListIterator:
def __init__(self, head):
self.current = head
def __iter__(self):
return self
def __next__(self):
if not self.current:
raise StopIteration
else:
value = self.current.get_data()
self.current = self.current.get_next()
return value
class LinkedList:
def __init__(self):
self.head = None # First node in list
self.tail = None # Last node in list
def __str__(self):
current = self.head
nodes = []
while current is not None:
nodes.append(current.get_data())
current = current.get_next()
return " ".join(str(node) for node in nodes)
def __contains__(self, value: int):
current = self.head
while current:
if current.get_data() == value:
return True
current = current.get_next()
return False
def __iter__(self):
return LinkedListIterator(self.head)
def get_head_data(self):
if self.head:
return self.head.get_data()
return None
def get_tail_data(self):
if self.tail:
return self.tail.get_data()
return None
def set_head(self, node: Node) -> None:
if self.head is None:
self.head = node
self.tail = node
else:
self.insert_before_node(self.head, node)
def set_tail(self, node: Node) -> None:
if self.head is None:
self.set_head(node)
else:
self.insert_after_node(self.tail, node)
def insert(self, value: int) -> None:
node = Node(value)
if self.head is None:
self.set_head(node)
else:
self.set_tail(node)
def insert_before_node(self, node: Node, node_to_insert: Node) -> None:
node_to_insert.next = node
node_to_insert.previous = node.previous
if node.get_previous() is None:
self.head = node_to_insert
else:
node.previous.next = node_to_insert
node.previous = node_to_insert
def insert_after_node(self, node: Node, node_to_insert: Node) -> None:
node_to_insert.previous = node
node_to_insert.next = node.next
if node.get_next() is None:
self.tail = node_to_insert
else:
node.next.previous = node_to_insert
node.next = node_to_insert
def insert_at_position(self, position: int, value: int) -> None:
current_position = 1
new_node = Node(value)
node = self.head
while node:
if current_position == position:
self.insert_before_node(node, new_node)
return None
current_position += 1
node = node.next
self.insert_after_node(self.tail, new_node)
def get_node(self, item: int) -> Node:
node = self.head
while node:
if node.get_data() == item:
return node
node = node.get_next()
raise Exception("Node not found")
def delete_value(self, value):
node = self.get_node(value)
if node is not None:
if node == self.head:
self.head = self.head.get_next()
if node == self.tail:
self.tail = self.tail.get_previous()
self.remove_node_pointers(node)
@staticmethod
def remove_node_pointers(node: Node) -> None:
if node.get_next():
node.next.previous = node.previous
if node.get_previous():
node.previous.next = node.next
node.next = None
node.previous = None
def is_empty(self):
return self.head is None
def create_linked_list() -> None:
"""
>>> new_linked_list = LinkedList()
>>> new_linked_list.get_head_data() is None
True
>>> new_linked_list.get_tail_data() is None
True
>>> new_linked_list.is_empty()
True
>>> new_linked_list.insert(10)
>>> new_linked_list.get_head_data()
10
>>> new_linked_list.get_tail_data()
10
>>> new_linked_list.insert_at_position(position=3, value=20)
>>> new_linked_list.get_head_data()
10
>>> new_linked_list.get_tail_data()
20
>>> new_linked_list.set_head(Node(1000))
>>> new_linked_list.get_head_data()
1000
>>> new_linked_list.get_tail_data()
20
>>> new_linked_list.set_tail(Node(2000))
>>> new_linked_list.get_head_data()
1000
>>> new_linked_list.get_tail_data()
2000
>>> for value in new_linked_list:
... print(value)
1000
10
20
2000
>>> new_linked_list.is_empty()
False
>>> for value in new_linked_list:
... print(value)
1000
10
20
2000
>>> 10 in new_linked_list
True
>>> new_linked_list.delete_value(value=10)
>>> 10 in new_linked_list
False
>>> new_linked_list.delete_value(value=2000)
>>> new_linked_list.get_tail_data()
20
>>> new_linked_list.delete_value(value=1000)
>>> new_linked_list.get_tail_data()
20
>>> new_linked_list.get_head_data()
20
>>> for value in new_linked_list:
... print(value)
20
>>> new_linked_list.delete_value(value=20)
>>> for value in new_linked_list:
... print(value)
>>> for value in range(1,10):
... new_linked_list.insert(value=value)
>>> for value in new_linked_list:
... print(value)
1
2
3
4
5
6
7
8
9
"""
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
https://en.wikipedia.org/wiki/Burrows%E2%80%93Wheeler_transform
The Burrows–Wheeler transform (BWT, also called block-sorting compression)
rearranges a character string into runs of similar characters. This is useful
for compression, since it tends to be easy to compress a string that has runs
of repeated characters by techniques such as move-to-front transform and
run-length encoding. More importantly, the transformation is reversible,
without needing to store any additional data except the position of the first
original character. The BWT is thus a "free" method of improving the efficiency
of text compression algorithms, costing only some extra computation.
"""
from __future__ import annotations
from typing import TypedDict
class BWTTransformDict(TypedDict):
bwt_string: str
idx_original_string: int
def all_rotations(s: str) -> list[str]:
"""
:param s: The string that will be rotated len(s) times.
:return: A list with the rotations.
:raises TypeError: If s is not an instance of str.
Examples:
>>> all_rotations("^BANANA|") # doctest: +NORMALIZE_WHITESPACE
['^BANANA|', 'BANANA|^', 'ANANA|^B', 'NANA|^BA', 'ANA|^BAN', 'NA|^BANA',
'A|^BANAN', '|^BANANA']
>>> all_rotations("a_asa_da_casa") # doctest: +NORMALIZE_WHITESPACE
['a_asa_da_casa', '_asa_da_casaa', 'asa_da_casaa_', 'sa_da_casaa_a',
'a_da_casaa_as', '_da_casaa_asa', 'da_casaa_asa_', 'a_casaa_asa_d',
'_casaa_asa_da', 'casaa_asa_da_', 'asaa_asa_da_c', 'saa_asa_da_ca',
'aa_asa_da_cas']
>>> all_rotations("panamabanana") # doctest: +NORMALIZE_WHITESPACE
['panamabanana', 'anamabananap', 'namabananapa', 'amabananapan',
'mabananapana', 'abananapanam', 'bananapanama', 'ananapanamab',
'nanapanamaba', 'anapanamaban', 'napanamabana', 'apanamabanan']
>>> all_rotations(5)
Traceback (most recent call last):
...
TypeError: The parameter s type must be str.
"""
if not isinstance(s, str):
raise TypeError("The parameter s type must be str.")
return [s[i:] + s[:i] for i in range(len(s))]
def bwt_transform(s: str) -> BWTTransformDict:
"""
:param s: The string that will be used at bwt algorithm
:return: the string composed of the last char of each row of the ordered
rotations and the index of the original string at ordered rotations list
:raises TypeError: If the s parameter type is not str
:raises ValueError: If the s parameter is empty
Examples:
>>> bwt_transform("^BANANA")
{'bwt_string': 'BNN^AAA', 'idx_original_string': 6}
>>> bwt_transform("a_asa_da_casa")
{'bwt_string': 'aaaadss_c__aa', 'idx_original_string': 3}
>>> bwt_transform("panamabanana")
{'bwt_string': 'mnpbnnaaaaaa', 'idx_original_string': 11}
>>> bwt_transform(4)
Traceback (most recent call last):
...
TypeError: The parameter s type must be str.
>>> bwt_transform('')
Traceback (most recent call last):
...
ValueError: The parameter s must not be empty.
"""
if not isinstance(s, str):
raise TypeError("The parameter s type must be str.")
if not s:
raise ValueError("The parameter s must not be empty.")
rotations = all_rotations(s)
rotations.sort() # sort the list of rotations in alphabetically order
# make a string composed of the last char of each rotation
response: BWTTransformDict = {
"bwt_string": "".join([word[-1] for word in rotations]),
"idx_original_string": rotations.index(s),
}
return response
def reverse_bwt(bwt_string: str, idx_original_string: int) -> str:
"""
:param bwt_string: The string returned from bwt algorithm execution
:param idx_original_string: A 0-based index of the string that was used to
generate bwt_string at ordered rotations list
:return: The string used to generate bwt_string when bwt was executed
:raises TypeError: If the bwt_string parameter type is not str
:raises ValueError: If the bwt_string parameter is empty
:raises TypeError: If the idx_original_string type is not int or if not
possible to cast it to int
:raises ValueError: If the idx_original_string value is lower than 0 or
greater than len(bwt_string) - 1
>>> reverse_bwt("BNN^AAA", 6)
'^BANANA'
>>> reverse_bwt("aaaadss_c__aa", 3)
'a_asa_da_casa'
>>> reverse_bwt("mnpbnnaaaaaa", 11)
'panamabanana'
>>> reverse_bwt(4, 11)
Traceback (most recent call last):
...
TypeError: The parameter bwt_string type must be str.
>>> reverse_bwt("", 11)
Traceback (most recent call last):
...
ValueError: The parameter bwt_string must not be empty.
>>> reverse_bwt("mnpbnnaaaaaa", "asd") # doctest: +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
TypeError: The parameter idx_original_string type must be int or passive
of cast to int.
>>> reverse_bwt("mnpbnnaaaaaa", -1)
Traceback (most recent call last):
...
ValueError: The parameter idx_original_string must not be lower than 0.
>>> reverse_bwt("mnpbnnaaaaaa", 12) # doctest: +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
ValueError: The parameter idx_original_string must be lower than
len(bwt_string).
>>> reverse_bwt("mnpbnnaaaaaa", 11.0)
'panamabanana'
>>> reverse_bwt("mnpbnnaaaaaa", 11.4)
'panamabanana'
"""
if not isinstance(bwt_string, str):
raise TypeError("The parameter bwt_string type must be str.")
if not bwt_string:
raise ValueError("The parameter bwt_string must not be empty.")
try:
idx_original_string = int(idx_original_string)
except ValueError:
raise TypeError(
"The parameter idx_original_string type must be int or passive"
" of cast to int."
)
if idx_original_string < 0:
raise ValueError("The parameter idx_original_string must not be lower than 0.")
if idx_original_string >= len(bwt_string):
raise ValueError(
"The parameter idx_original_string must be lower than" " len(bwt_string)."
)
ordered_rotations = [""] * len(bwt_string)
for x in range(len(bwt_string)):
for i in range(len(bwt_string)):
ordered_rotations[i] = bwt_string[i] + ordered_rotations[i]
ordered_rotations.sort()
return ordered_rotations[idx_original_string]
if __name__ == "__main__":
entry_msg = "Provide a string that I will generate its BWT transform: "
s = input(entry_msg).strip()
result = bwt_transform(s)
print(
f"Burrows Wheeler transform for string '{s}' results "
f"in '{result['bwt_string']}'"
)
original_string = reverse_bwt(result["bwt_string"], result["idx_original_string"])
print(
f"Reversing Burrows Wheeler transform for entry '{result['bwt_string']}' "
f"we get original string '{original_string}'"
)
| """
https://en.wikipedia.org/wiki/Burrows%E2%80%93Wheeler_transform
The Burrows–Wheeler transform (BWT, also called block-sorting compression)
rearranges a character string into runs of similar characters. This is useful
for compression, since it tends to be easy to compress a string that has runs
of repeated characters by techniques such as move-to-front transform and
run-length encoding. More importantly, the transformation is reversible,
without needing to store any additional data except the position of the first
original character. The BWT is thus a "free" method of improving the efficiency
of text compression algorithms, costing only some extra computation.
"""
from __future__ import annotations
from typing import TypedDict
class BWTTransformDict(TypedDict):
bwt_string: str
idx_original_string: int
def all_rotations(s: str) -> list[str]:
"""
:param s: The string that will be rotated len(s) times.
:return: A list with the rotations.
:raises TypeError: If s is not an instance of str.
Examples:
>>> all_rotations("^BANANA|") # doctest: +NORMALIZE_WHITESPACE
['^BANANA|', 'BANANA|^', 'ANANA|^B', 'NANA|^BA', 'ANA|^BAN', 'NA|^BANA',
'A|^BANAN', '|^BANANA']
>>> all_rotations("a_asa_da_casa") # doctest: +NORMALIZE_WHITESPACE
['a_asa_da_casa', '_asa_da_casaa', 'asa_da_casaa_', 'sa_da_casaa_a',
'a_da_casaa_as', '_da_casaa_asa', 'da_casaa_asa_', 'a_casaa_asa_d',
'_casaa_asa_da', 'casaa_asa_da_', 'asaa_asa_da_c', 'saa_asa_da_ca',
'aa_asa_da_cas']
>>> all_rotations("panamabanana") # doctest: +NORMALIZE_WHITESPACE
['panamabanana', 'anamabananap', 'namabananapa', 'amabananapan',
'mabananapana', 'abananapanam', 'bananapanama', 'ananapanamab',
'nanapanamaba', 'anapanamaban', 'napanamabana', 'apanamabanan']
>>> all_rotations(5)
Traceback (most recent call last):
...
TypeError: The parameter s type must be str.
"""
if not isinstance(s, str):
raise TypeError("The parameter s type must be str.")
return [s[i:] + s[:i] for i in range(len(s))]
def bwt_transform(s: str) -> BWTTransformDict:
"""
:param s: The string that will be used at bwt algorithm
:return: the string composed of the last char of each row of the ordered
rotations and the index of the original string at ordered rotations list
:raises TypeError: If the s parameter type is not str
:raises ValueError: If the s parameter is empty
Examples:
>>> bwt_transform("^BANANA")
{'bwt_string': 'BNN^AAA', 'idx_original_string': 6}
>>> bwt_transform("a_asa_da_casa")
{'bwt_string': 'aaaadss_c__aa', 'idx_original_string': 3}
>>> bwt_transform("panamabanana")
{'bwt_string': 'mnpbnnaaaaaa', 'idx_original_string': 11}
>>> bwt_transform(4)
Traceback (most recent call last):
...
TypeError: The parameter s type must be str.
>>> bwt_transform('')
Traceback (most recent call last):
...
ValueError: The parameter s must not be empty.
"""
if not isinstance(s, str):
raise TypeError("The parameter s type must be str.")
if not s:
raise ValueError("The parameter s must not be empty.")
rotations = all_rotations(s)
rotations.sort() # sort the list of rotations in alphabetically order
# make a string composed of the last char of each rotation
response: BWTTransformDict = {
"bwt_string": "".join([word[-1] for word in rotations]),
"idx_original_string": rotations.index(s),
}
return response
def reverse_bwt(bwt_string: str, idx_original_string: int) -> str:
"""
:param bwt_string: The string returned from bwt algorithm execution
:param idx_original_string: A 0-based index of the string that was used to
generate bwt_string at ordered rotations list
:return: The string used to generate bwt_string when bwt was executed
:raises TypeError: If the bwt_string parameter type is not str
:raises ValueError: If the bwt_string parameter is empty
:raises TypeError: If the idx_original_string type is not int or if not
possible to cast it to int
:raises ValueError: If the idx_original_string value is lower than 0 or
greater than len(bwt_string) - 1
>>> reverse_bwt("BNN^AAA", 6)
'^BANANA'
>>> reverse_bwt("aaaadss_c__aa", 3)
'a_asa_da_casa'
>>> reverse_bwt("mnpbnnaaaaaa", 11)
'panamabanana'
>>> reverse_bwt(4, 11)
Traceback (most recent call last):
...
TypeError: The parameter bwt_string type must be str.
>>> reverse_bwt("", 11)
Traceback (most recent call last):
...
ValueError: The parameter bwt_string must not be empty.
>>> reverse_bwt("mnpbnnaaaaaa", "asd") # doctest: +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
TypeError: The parameter idx_original_string type must be int or passive
of cast to int.
>>> reverse_bwt("mnpbnnaaaaaa", -1)
Traceback (most recent call last):
...
ValueError: The parameter idx_original_string must not be lower than 0.
>>> reverse_bwt("mnpbnnaaaaaa", 12) # doctest: +NORMALIZE_WHITESPACE
Traceback (most recent call last):
...
ValueError: The parameter idx_original_string must be lower than
len(bwt_string).
>>> reverse_bwt("mnpbnnaaaaaa", 11.0)
'panamabanana'
>>> reverse_bwt("mnpbnnaaaaaa", 11.4)
'panamabanana'
"""
if not isinstance(bwt_string, str):
raise TypeError("The parameter bwt_string type must be str.")
if not bwt_string:
raise ValueError("The parameter bwt_string must not be empty.")
try:
idx_original_string = int(idx_original_string)
except ValueError:
raise TypeError(
"The parameter idx_original_string type must be int or passive"
" of cast to int."
)
if idx_original_string < 0:
raise ValueError("The parameter idx_original_string must not be lower than 0.")
if idx_original_string >= len(bwt_string):
raise ValueError(
"The parameter idx_original_string must be lower than" " len(bwt_string)."
)
ordered_rotations = [""] * len(bwt_string)
for x in range(len(bwt_string)):
for i in range(len(bwt_string)):
ordered_rotations[i] = bwt_string[i] + ordered_rotations[i]
ordered_rotations.sort()
return ordered_rotations[idx_original_string]
if __name__ == "__main__":
entry_msg = "Provide a string that I will generate its BWT transform: "
s = input(entry_msg).strip()
result = bwt_transform(s)
print(
f"Burrows Wheeler transform for string '{s}' results "
f"in '{result['bwt_string']}'"
)
original_string = reverse_bwt(result["bwt_string"], result["idx_original_string"])
print(
f"Reversing Burrows Wheeler transform for entry '{result['bwt_string']}' "
f"we get original string '{original_string}'"
)
| -1 |
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 5,751 | Replace Travis CI mentions with GitHub actions | ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-02T13:12:16Z" | "2021-11-02T21:28:09Z" | 60ad32920d92a6095b28aa6952a759b40e5759c7 | 37bc6bdebf159d395b559dd7094934a337d59c8a | Replace Travis CI mentions with GitHub actions. ### Describe your change:
* Replace Travis CI mentions with GitHub actions
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Chinese Remainder Theorem:
GCD ( Greatest Common Divisor ) or HCF ( Highest Common Factor )
If GCD(a,b) = 1, then for any remainder ra modulo a and any remainder rb modulo b
there exists integer n, such that n = ra (mod a) and n = ra(mod b). If n1 and n2 are
two such integers, then n1=n2(mod ab)
Algorithm :
1. Use extended euclid algorithm to find x,y such that a*x + b*y = 1
2. Take n = ra*by + rb*ax
"""
from __future__ import annotations
# Extended Euclid
def extended_euclid(a: int, b: int) -> tuple[int, int]:
"""
>>> extended_euclid(10, 6)
(-1, 2)
>>> extended_euclid(7, 5)
(-2, 3)
"""
if b == 0:
return (1, 0)
(x, y) = extended_euclid(b, a % b)
k = a // b
return (y, x - k * y)
# Uses ExtendedEuclid to find inverses
def chinese_remainder_theorem(n1: int, r1: int, n2: int, r2: int) -> int:
"""
>>> chinese_remainder_theorem(5,1,7,3)
31
Explanation : 31 is the smallest number such that
(i) When we divide it by 5, we get remainder 1
(ii) When we divide it by 7, we get remainder 3
>>> chinese_remainder_theorem(6,1,4,3)
14
"""
(x, y) = extended_euclid(n1, n2)
m = n1 * n2
n = r2 * x * n1 + r1 * y * n2
return (n % m + m) % m
# ----------SAME SOLUTION USING InvertModulo instead ExtendedEuclid----------------
# This function find the inverses of a i.e., a^(-1)
def invert_modulo(a: int, n: int) -> int:
"""
>>> invert_modulo(2, 5)
3
>>> invert_modulo(8,7)
1
"""
(b, x) = extended_euclid(a, n)
if b < 0:
b = (b % n + n) % n
return b
# Same a above using InvertingModulo
def chinese_remainder_theorem2(n1: int, r1: int, n2: int, r2: int) -> int:
"""
>>> chinese_remainder_theorem2(5,1,7,3)
31
>>> chinese_remainder_theorem2(6,1,4,3)
14
"""
x, y = invert_modulo(n1, n2), invert_modulo(n2, n1)
m = n1 * n2
n = r2 * x * n1 + r1 * y * n2
return (n % m + m) % m
if __name__ == "__main__":
from doctest import testmod
testmod(name="chinese_remainder_theorem", verbose=True)
testmod(name="chinese_remainder_theorem2", verbose=True)
testmod(name="invert_modulo", verbose=True)
testmod(name="extended_euclid", verbose=True)
| """
Chinese Remainder Theorem:
GCD ( Greatest Common Divisor ) or HCF ( Highest Common Factor )
If GCD(a,b) = 1, then for any remainder ra modulo a and any remainder rb modulo b
there exists integer n, such that n = ra (mod a) and n = ra(mod b). If n1 and n2 are
two such integers, then n1=n2(mod ab)
Algorithm :
1. Use extended euclid algorithm to find x,y such that a*x + b*y = 1
2. Take n = ra*by + rb*ax
"""
from __future__ import annotations
# Extended Euclid
def extended_euclid(a: int, b: int) -> tuple[int, int]:
"""
>>> extended_euclid(10, 6)
(-1, 2)
>>> extended_euclid(7, 5)
(-2, 3)
"""
if b == 0:
return (1, 0)
(x, y) = extended_euclid(b, a % b)
k = a // b
return (y, x - k * y)
# Uses ExtendedEuclid to find inverses
def chinese_remainder_theorem(n1: int, r1: int, n2: int, r2: int) -> int:
"""
>>> chinese_remainder_theorem(5,1,7,3)
31
Explanation : 31 is the smallest number such that
(i) When we divide it by 5, we get remainder 1
(ii) When we divide it by 7, we get remainder 3
>>> chinese_remainder_theorem(6,1,4,3)
14
"""
(x, y) = extended_euclid(n1, n2)
m = n1 * n2
n = r2 * x * n1 + r1 * y * n2
return (n % m + m) % m
# ----------SAME SOLUTION USING InvertModulo instead ExtendedEuclid----------------
# This function find the inverses of a i.e., a^(-1)
def invert_modulo(a: int, n: int) -> int:
"""
>>> invert_modulo(2, 5)
3
>>> invert_modulo(8,7)
1
"""
(b, x) = extended_euclid(a, n)
if b < 0:
b = (b % n + n) % n
return b
# Same a above using InvertingModulo
def chinese_remainder_theorem2(n1: int, r1: int, n2: int, r2: int) -> int:
"""
>>> chinese_remainder_theorem2(5,1,7,3)
31
>>> chinese_remainder_theorem2(6,1,4,3)
14
"""
x, y = invert_modulo(n1, n2), invert_modulo(n2, n1)
m = n1 * n2
n = r2 * x * n1 + r1 * y * n2
return (n % m + m) % m
if __name__ == "__main__":
from doctest import testmod
testmod(name="chinese_remainder_theorem", verbose=True)
testmod(name="chinese_remainder_theorem2", verbose=True)
testmod(name="invert_modulo", verbose=True)
testmod(name="extended_euclid", verbose=True)
| -1 |
TheAlgorithms/Python | 5,744 | Improve Project Euler problem 014 solution 2 | ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-01T19:21:41Z" | "2021-11-04T16:01:22Z" | 7a605766fe7fe79a00ba1f30447877be4b77a6f2 | 729aaf64275c61b8bc864ef9138eed078dea9cb2 | Improve Project Euler problem 014 solution 2. ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """README, Author - Jigyasa Gandhi(mailto:[email protected])
Requirements:
- scikit-fuzzy
- numpy
- matplotlib
Python:
- 3.5
"""
import numpy as np
import skfuzzy as fuzz
if __name__ == "__main__":
# Create universe of discourse in Python using linspace ()
X = np.linspace(start=0, stop=75, num=75, endpoint=True, retstep=False)
# Create two fuzzy sets by defining any membership function
# (trapmf(), gbellmf(), gaussmf(), etc).
abc1 = [0, 25, 50]
abc2 = [25, 50, 75]
young = fuzz.membership.trimf(X, abc1)
middle_aged = fuzz.membership.trimf(X, abc2)
# Compute the different operations using inbuilt functions.
one = np.ones(75)
zero = np.zeros((75,))
# 1. Union = max(µA(x), µB(x))
union = fuzz.fuzzy_or(X, young, X, middle_aged)[1]
# 2. Intersection = min(µA(x), µB(x))
intersection = fuzz.fuzzy_and(X, young, X, middle_aged)[1]
# 3. Complement (A) = (1- min(µA(x))
complement_a = fuzz.fuzzy_not(young)
# 4. Difference (A/B) = min(µA(x),(1- µB(x)))
difference = fuzz.fuzzy_and(X, young, X, fuzz.fuzzy_not(middle_aged)[1])[1]
# 5. Algebraic Sum = [µA(x) + µB(x) – (µA(x) * µB(x))]
alg_sum = young + middle_aged - (young * middle_aged)
# 6. Algebraic Product = (µA(x) * µB(x))
alg_product = young * middle_aged
# 7. Bounded Sum = min[1,(µA(x), µB(x))]
bdd_sum = fuzz.fuzzy_and(X, one, X, young + middle_aged)[1]
# 8. Bounded difference = min[0,(µA(x), µB(x))]
bdd_difference = fuzz.fuzzy_or(X, zero, X, young - middle_aged)[1]
# max-min composition
# max-product composition
# Plot each set A, set B and each operation result using plot() and subplot().
from matplotlib import pyplot as plt
plt.figure()
plt.subplot(4, 3, 1)
plt.plot(X, young)
plt.title("Young")
plt.grid(True)
plt.subplot(4, 3, 2)
plt.plot(X, middle_aged)
plt.title("Middle aged")
plt.grid(True)
plt.subplot(4, 3, 3)
plt.plot(X, union)
plt.title("union")
plt.grid(True)
plt.subplot(4, 3, 4)
plt.plot(X, intersection)
plt.title("intersection")
plt.grid(True)
plt.subplot(4, 3, 5)
plt.plot(X, complement_a)
plt.title("complement_a")
plt.grid(True)
plt.subplot(4, 3, 6)
plt.plot(X, difference)
plt.title("difference a/b")
plt.grid(True)
plt.subplot(4, 3, 7)
plt.plot(X, alg_sum)
plt.title("alg_sum")
plt.grid(True)
plt.subplot(4, 3, 8)
plt.plot(X, alg_product)
plt.title("alg_product")
plt.grid(True)
plt.subplot(4, 3, 9)
plt.plot(X, bdd_sum)
plt.title("bdd_sum")
plt.grid(True)
plt.subplot(4, 3, 10)
plt.plot(X, bdd_difference)
plt.title("bdd_difference")
plt.grid(True)
plt.subplots_adjust(hspace=0.5)
plt.show()
| """
README, Author - Jigyasa Gandhi(mailto:[email protected])
Requirements:
- scikit-fuzzy
- numpy
- matplotlib
Python:
- 3.5
"""
import numpy as np
try:
import skfuzzy as fuzz
except ImportError:
fuzz = None
if __name__ == "__main__":
# Create universe of discourse in Python using linspace ()
X = np.linspace(start=0, stop=75, num=75, endpoint=True, retstep=False)
# Create two fuzzy sets by defining any membership function
# (trapmf(), gbellmf(), gaussmf(), etc).
abc1 = [0, 25, 50]
abc2 = [25, 50, 75]
young = fuzz.membership.trimf(X, abc1)
middle_aged = fuzz.membership.trimf(X, abc2)
# Compute the different operations using inbuilt functions.
one = np.ones(75)
zero = np.zeros((75,))
# 1. Union = max(µA(x), µB(x))
union = fuzz.fuzzy_or(X, young, X, middle_aged)[1]
# 2. Intersection = min(µA(x), µB(x))
intersection = fuzz.fuzzy_and(X, young, X, middle_aged)[1]
# 3. Complement (A) = (1- min(µA(x))
complement_a = fuzz.fuzzy_not(young)
# 4. Difference (A/B) = min(µA(x),(1- µB(x)))
difference = fuzz.fuzzy_and(X, young, X, fuzz.fuzzy_not(middle_aged)[1])[1]
# 5. Algebraic Sum = [µA(x) + µB(x) – (µA(x) * µB(x))]
alg_sum = young + middle_aged - (young * middle_aged)
# 6. Algebraic Product = (µA(x) * µB(x))
alg_product = young * middle_aged
# 7. Bounded Sum = min[1,(µA(x), µB(x))]
bdd_sum = fuzz.fuzzy_and(X, one, X, young + middle_aged)[1]
# 8. Bounded difference = min[0,(µA(x), µB(x))]
bdd_difference = fuzz.fuzzy_or(X, zero, X, young - middle_aged)[1]
# max-min composition
# max-product composition
# Plot each set A, set B and each operation result using plot() and subplot().
from matplotlib import pyplot as plt
plt.figure()
plt.subplot(4, 3, 1)
plt.plot(X, young)
plt.title("Young")
plt.grid(True)
plt.subplot(4, 3, 2)
plt.plot(X, middle_aged)
plt.title("Middle aged")
plt.grid(True)
plt.subplot(4, 3, 3)
plt.plot(X, union)
plt.title("union")
plt.grid(True)
plt.subplot(4, 3, 4)
plt.plot(X, intersection)
plt.title("intersection")
plt.grid(True)
plt.subplot(4, 3, 5)
plt.plot(X, complement_a)
plt.title("complement_a")
plt.grid(True)
plt.subplot(4, 3, 6)
plt.plot(X, difference)
plt.title("difference a/b")
plt.grid(True)
plt.subplot(4, 3, 7)
plt.plot(X, alg_sum)
plt.title("alg_sum")
plt.grid(True)
plt.subplot(4, 3, 8)
plt.plot(X, alg_product)
plt.title("alg_product")
plt.grid(True)
plt.subplot(4, 3, 9)
plt.plot(X, bdd_sum)
plt.title("bdd_sum")
plt.grid(True)
plt.subplot(4, 3, 10)
plt.plot(X, bdd_difference)
plt.title("bdd_difference")
plt.grid(True)
plt.subplots_adjust(hspace=0.5)
plt.show()
| 1 |
TheAlgorithms/Python | 5,744 | Improve Project Euler problem 014 solution 2 | ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-01T19:21:41Z" | "2021-11-04T16:01:22Z" | 7a605766fe7fe79a00ba1f30447877be4b77a6f2 | 729aaf64275c61b8bc864ef9138eed078dea9cb2 | Improve Project Euler problem 014 solution 2. ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Problem 14: https://projecteuler.net/problem=14
Collatz conjecture: start with any positive integer n. Next term obtained from
the previous term as follows:
If the previous term is even, the next term is one half the previous term.
If the previous term is odd, the next term is 3 times the previous term plus 1.
The conjecture states the sequence will always reach 1 regardless of starting
n.
Problem Statement:
The following iterative sequence is defined for the set of positive integers:
n → n/2 (n is even)
n → 3n + 1 (n is odd)
Using the rule above and starting with 13, we generate the following sequence:
13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1
It can be seen that this sequence (starting at 13 and finishing at 1) contains
10 terms. Although it has not been proved yet (Collatz Problem), it is thought
that all starting numbers finish at 1.
Which starting number, under one million, produces the longest chain?
"""
from __future__ import annotations
def collatz_sequence_length(n: int) -> int:
"""Returns the Collatz sequence length for n."""
sequence_length = 1
while n != 1:
if n % 2 == 0:
n //= 2
else:
n = 3 * n + 1
sequence_length += 1
return sequence_length
def solution(n: int = 1000000) -> int:
"""Returns the number under n that generates the longest Collatz sequence.
# The code below has been commented due to slow execution affecting Travis.
# >>> solution(1000000)
# 837799
>>> solution(200)
171
>>> solution(5000)
3711
>>> solution(15000)
13255
"""
result = max((collatz_sequence_length(i), i) for i in range(1, n))
return result[1]
if __name__ == "__main__":
print(solution(int(input().strip())))
| """
Problem 14: https://projecteuler.net/problem=14
Collatz conjecture: start with any positive integer n. Next term obtained from
the previous term as follows:
If the previous term is even, the next term is one half the previous term.
If the previous term is odd, the next term is 3 times the previous term plus 1.
The conjecture states the sequence will always reach 1 regardless of starting
n.
Problem Statement:
The following iterative sequence is defined for the set of positive integers:
n → n/2 (n is even)
n → 3n + 1 (n is odd)
Using the rule above and starting with 13, we generate the following sequence:
13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1
It can be seen that this sequence (starting at 13 and finishing at 1) contains
10 terms. Although it has not been proved yet (Collatz Problem), it is thought
that all starting numbers finish at 1.
Which starting number, under one million, produces the longest chain?
"""
from __future__ import annotations
COLLATZ_SEQUENCE_LENGTHS = {1: 1}
def collatz_sequence_length(n: int) -> int:
"""Returns the Collatz sequence length for n."""
if n in COLLATZ_SEQUENCE_LENGTHS:
return COLLATZ_SEQUENCE_LENGTHS[n]
if n % 2 == 0:
next_n = n // 2
else:
next_n = 3 * n + 1
sequence_length = collatz_sequence_length(next_n) + 1
COLLATZ_SEQUENCE_LENGTHS[n] = sequence_length
return sequence_length
def solution(n: int = 1000000) -> int:
"""Returns the number under n that generates the longest Collatz sequence.
>>> solution(1000000)
837799
>>> solution(200)
171
>>> solution(5000)
3711
>>> solution(15000)
13255
"""
result = max((collatz_sequence_length(i), i) for i in range(1, n))
return result[1]
if __name__ == "__main__":
print(solution(int(input().strip())))
| 1 |
TheAlgorithms/Python | 5,744 | Improve Project Euler problem 014 solution 2 | ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-01T19:21:41Z" | "2021-11-04T16:01:22Z" | 7a605766fe7fe79a00ba1f30447877be4b77a6f2 | 729aaf64275c61b8bc864ef9138eed078dea9cb2 | Improve Project Euler problem 014 solution 2. ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| beautifulsoup4
fake_useragent
keras<2.7.0
lxml
matplotlib
numpy
opencv-python
pandas
pillow
qiskit
requests
scikit-fuzzy
sklearn
statsmodels
sympy
tensorflow
texttable
tweepy
types-requests
xgboost
| beautifulsoup4
fake_useragent
keras<2.7.0
lxml
matplotlib
numpy
opencv-python
pandas
pillow
qiskit
requests
# scikit-fuzzy # Causing broken builds
sklearn
statsmodels
sympy
tensorflow
texttable
tweepy
types-requests
xgboost
| 1 |
TheAlgorithms/Python | 5,744 | Improve Project Euler problem 014 solution 2 | ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-01T19:21:41Z" | "2021-11-04T16:01:22Z" | 7a605766fe7fe79a00ba1f30447877be4b77a6f2 | 729aaf64275c61b8bc864ef9138eed078dea9cb2 | Improve Project Euler problem 014 solution 2. ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 5,744 | Improve Project Euler problem 014 solution 2 | ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-01T19:21:41Z" | "2021-11-04T16:01:22Z" | 7a605766fe7fe79a00ba1f30447877be4b77a6f2 | 729aaf64275c61b8bc864ef9138eed078dea9cb2 | Improve Project Euler problem 014 solution 2. ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 173: https://projecteuler.net/problem=173
We shall define a square lamina to be a square outline with a square "hole" so that
the shape possesses vertical and horizontal symmetry. For example, using exactly
thirty-two square tiles we can form two different square laminae:
With one-hundred tiles, and not necessarily using all of the tiles at one time, it is
possible to form forty-one different square laminae.
Using up to one million tiles how many different square laminae can be formed?
"""
from math import ceil, sqrt
def solution(limit: int = 1000000) -> int:
"""
Return the number of different square laminae that can be formed using up to
one million tiles.
>>> solution(100)
41
"""
answer = 0
for outer_width in range(3, (limit // 4) + 2):
if outer_width ** 2 > limit:
hole_width_lower_bound = max(ceil(sqrt(outer_width ** 2 - limit)), 1)
else:
hole_width_lower_bound = 1
if (outer_width - hole_width_lower_bound) % 2:
hole_width_lower_bound += 1
answer += (outer_width - hole_width_lower_bound - 2) // 2 + 1
return answer
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 173: https://projecteuler.net/problem=173
We shall define a square lamina to be a square outline with a square "hole" so that
the shape possesses vertical and horizontal symmetry. For example, using exactly
thirty-two square tiles we can form two different square laminae:
With one-hundred tiles, and not necessarily using all of the tiles at one time, it is
possible to form forty-one different square laminae.
Using up to one million tiles how many different square laminae can be formed?
"""
from math import ceil, sqrt
def solution(limit: int = 1000000) -> int:
"""
Return the number of different square laminae that can be formed using up to
one million tiles.
>>> solution(100)
41
"""
answer = 0
for outer_width in range(3, (limit // 4) + 2):
if outer_width ** 2 > limit:
hole_width_lower_bound = max(ceil(sqrt(outer_width ** 2 - limit)), 1)
else:
hole_width_lower_bound = 1
if (outer_width - hole_width_lower_bound) % 2:
hole_width_lower_bound += 1
answer += (outer_width - hole_width_lower_bound - 2) // 2 + 1
return answer
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 5,744 | Improve Project Euler problem 014 solution 2 | ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-01T19:21:41Z" | "2021-11-04T16:01:22Z" | 7a605766fe7fe79a00ba1f30447877be4b77a6f2 | 729aaf64275c61b8bc864ef9138eed078dea9cb2 | Improve Project Euler problem 014 solution 2. ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
This is to show simple COVID19 info fetching from worldometers site using lxml
* The main motivation to use lxml in place of bs4 is that it is faster and therefore
more convenient to use in Python web projects (e.g. Django or Flask-based)
"""
from collections import namedtuple
import requests
from lxml import html # type: ignore
covid_data = namedtuple("covid_data", "cases deaths recovered")
def covid_stats(url: str = "https://www.worldometers.info/coronavirus/") -> covid_data:
xpath_str = '//div[@class = "maincounter-number"]/span/text()'
return covid_data(*html.fromstring(requests.get(url).content).xpath(xpath_str))
fmt = """Total COVID-19 cases in the world: {}
Total deaths due to COVID-19 in the world: {}
Total COVID-19 patients recovered in the world: {}"""
print(fmt.format(*covid_stats()))
| """
This is to show simple COVID19 info fetching from worldometers site using lxml
* The main motivation to use lxml in place of bs4 is that it is faster and therefore
more convenient to use in Python web projects (e.g. Django or Flask-based)
"""
from collections import namedtuple
import requests
from lxml import html # type: ignore
covid_data = namedtuple("covid_data", "cases deaths recovered")
def covid_stats(url: str = "https://www.worldometers.info/coronavirus/") -> covid_data:
xpath_str = '//div[@class = "maincounter-number"]/span/text()'
return covid_data(*html.fromstring(requests.get(url).content).xpath(xpath_str))
fmt = """Total COVID-19 cases in the world: {}
Total deaths due to COVID-19 in the world: {}
Total COVID-19 patients recovered in the world: {}"""
print(fmt.format(*covid_stats()))
| -1 |
TheAlgorithms/Python | 5,744 | Improve Project Euler problem 014 solution 2 | ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-01T19:21:41Z" | "2021-11-04T16:01:22Z" | 7a605766fe7fe79a00ba1f30447877be4b77a6f2 | 729aaf64275c61b8bc864ef9138eed078dea9cb2 | Improve Project Euler problem 014 solution 2. ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
This is a pure Python implementation of the merge-insertion sort algorithm
Source: https://en.wikipedia.org/wiki/Merge-insertion_sort
For doctests run following command:
python3 -m doctest -v merge_insertion_sort.py
or
python -m doctest -v merge_insertion_sort.py
For manual testing run:
python3 merge_insertion_sort.py
"""
from __future__ import annotations
def merge_insertion_sort(collection: list[int]) -> list[int]:
"""Pure implementation of merge-insertion sort algorithm in Python
:param collection: some mutable ordered collection with heterogeneous
comparable items inside
:return: the same collection ordered by ascending
Examples:
>>> merge_insertion_sort([0, 5, 3, 2, 2])
[0, 2, 2, 3, 5]
>>> merge_insertion_sort([99])
[99]
>>> merge_insertion_sort([-2, -5, -45])
[-45, -5, -2]
"""
def binary_search_insertion(sorted_list, item):
left = 0
right = len(sorted_list) - 1
while left <= right:
middle = (left + right) // 2
if left == right:
if sorted_list[middle] < item:
left = middle + 1
break
elif sorted_list[middle] < item:
left = middle + 1
else:
right = middle - 1
sorted_list.insert(left, item)
return sorted_list
def sortlist_2d(list_2d):
def merge(left, right):
result = []
while left and right:
if left[0][0] < right[0][0]:
result.append(left.pop(0))
else:
result.append(right.pop(0))
return result + left + right
length = len(list_2d)
if length <= 1:
return list_2d
middle = length // 2
return merge(sortlist_2d(list_2d[:middle]), sortlist_2d(list_2d[middle:]))
if len(collection) <= 1:
return collection
"""
Group the items into two pairs, and leave one element if there is a last odd item.
Example: [999, 100, 75, 40, 10000]
-> [999, 100], [75, 40]. Leave 10000.
"""
two_paired_list = []
has_last_odd_item = False
for i in range(0, len(collection), 2):
if i == len(collection) - 1:
has_last_odd_item = True
else:
"""
Sort two-pairs in each groups.
Example: [999, 100], [75, 40]
-> [100, 999], [40, 75]
"""
if collection[i] < collection[i + 1]:
two_paired_list.append([collection[i], collection[i + 1]])
else:
two_paired_list.append([collection[i + 1], collection[i]])
"""
Sort two_paired_list.
Example: [100, 999], [40, 75]
-> [40, 75], [100, 999]
"""
sorted_list_2d = sortlist_2d(two_paired_list)
"""
40 < 100 is sure because it has already been sorted.
Generate the sorted_list of them so that you can avoid unnecessary comparison.
Example:
group0 group1
40 100
75 999
->
group0 group1
[40, 100]
75 999
"""
result = [i[0] for i in sorted_list_2d]
"""
100 < 999 is sure because it has already been sorted.
Put 999 in last of the sorted_list so that you can avoid unnecessary comparison.
Example:
group0 group1
[40, 100]
75 999
->
group0 group1
[40, 100, 999]
75
"""
result.append(sorted_list_2d[-1][1])
"""
Insert the last odd item left if there is.
Example:
group0 group1
[40, 100, 999]
75
->
group0 group1
[40, 100, 999, 10000]
75
"""
if has_last_odd_item:
pivot = collection[-1]
result = binary_search_insertion(result, pivot)
"""
Insert the remaining items.
In this case, 40 < 75 is sure because it has already been sorted.
Therefore, you only need to insert 75 into [100, 999, 10000],
so that you can avoid unnecessary comparison.
Example:
group0 group1
[40, 100, 999, 10000]
^ You don't need to compare with this as 40 < 75 is already sure.
75
->
[40, 75, 100, 999, 10000]
"""
is_last_odd_item_inserted_before_this_index = False
for i in range(len(sorted_list_2d) - 1):
if result[i] == collection[-i]:
is_last_odd_item_inserted_before_this_index = True
pivot = sorted_list_2d[i][1]
# If last_odd_item is inserted before the item's index,
# you should forward index one more.
if is_last_odd_item_inserted_before_this_index:
result = result[: i + 2] + binary_search_insertion(result[i + 2 :], pivot)
else:
result = result[: i + 1] + binary_search_insertion(result[i + 1 :], pivot)
return result
if __name__ == "__main__":
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
print(merge_insertion_sort(unsorted))
| """
This is a pure Python implementation of the merge-insertion sort algorithm
Source: https://en.wikipedia.org/wiki/Merge-insertion_sort
For doctests run following command:
python3 -m doctest -v merge_insertion_sort.py
or
python -m doctest -v merge_insertion_sort.py
For manual testing run:
python3 merge_insertion_sort.py
"""
from __future__ import annotations
def merge_insertion_sort(collection: list[int]) -> list[int]:
"""Pure implementation of merge-insertion sort algorithm in Python
:param collection: some mutable ordered collection with heterogeneous
comparable items inside
:return: the same collection ordered by ascending
Examples:
>>> merge_insertion_sort([0, 5, 3, 2, 2])
[0, 2, 2, 3, 5]
>>> merge_insertion_sort([99])
[99]
>>> merge_insertion_sort([-2, -5, -45])
[-45, -5, -2]
"""
def binary_search_insertion(sorted_list, item):
left = 0
right = len(sorted_list) - 1
while left <= right:
middle = (left + right) // 2
if left == right:
if sorted_list[middle] < item:
left = middle + 1
break
elif sorted_list[middle] < item:
left = middle + 1
else:
right = middle - 1
sorted_list.insert(left, item)
return sorted_list
def sortlist_2d(list_2d):
def merge(left, right):
result = []
while left and right:
if left[0][0] < right[0][0]:
result.append(left.pop(0))
else:
result.append(right.pop(0))
return result + left + right
length = len(list_2d)
if length <= 1:
return list_2d
middle = length // 2
return merge(sortlist_2d(list_2d[:middle]), sortlist_2d(list_2d[middle:]))
if len(collection) <= 1:
return collection
"""
Group the items into two pairs, and leave one element if there is a last odd item.
Example: [999, 100, 75, 40, 10000]
-> [999, 100], [75, 40]. Leave 10000.
"""
two_paired_list = []
has_last_odd_item = False
for i in range(0, len(collection), 2):
if i == len(collection) - 1:
has_last_odd_item = True
else:
"""
Sort two-pairs in each groups.
Example: [999, 100], [75, 40]
-> [100, 999], [40, 75]
"""
if collection[i] < collection[i + 1]:
two_paired_list.append([collection[i], collection[i + 1]])
else:
two_paired_list.append([collection[i + 1], collection[i]])
"""
Sort two_paired_list.
Example: [100, 999], [40, 75]
-> [40, 75], [100, 999]
"""
sorted_list_2d = sortlist_2d(two_paired_list)
"""
40 < 100 is sure because it has already been sorted.
Generate the sorted_list of them so that you can avoid unnecessary comparison.
Example:
group0 group1
40 100
75 999
->
group0 group1
[40, 100]
75 999
"""
result = [i[0] for i in sorted_list_2d]
"""
100 < 999 is sure because it has already been sorted.
Put 999 in last of the sorted_list so that you can avoid unnecessary comparison.
Example:
group0 group1
[40, 100]
75 999
->
group0 group1
[40, 100, 999]
75
"""
result.append(sorted_list_2d[-1][1])
"""
Insert the last odd item left if there is.
Example:
group0 group1
[40, 100, 999]
75
->
group0 group1
[40, 100, 999, 10000]
75
"""
if has_last_odd_item:
pivot = collection[-1]
result = binary_search_insertion(result, pivot)
"""
Insert the remaining items.
In this case, 40 < 75 is sure because it has already been sorted.
Therefore, you only need to insert 75 into [100, 999, 10000],
so that you can avoid unnecessary comparison.
Example:
group0 group1
[40, 100, 999, 10000]
^ You don't need to compare with this as 40 < 75 is already sure.
75
->
[40, 75, 100, 999, 10000]
"""
is_last_odd_item_inserted_before_this_index = False
for i in range(len(sorted_list_2d) - 1):
if result[i] == collection[-i]:
is_last_odd_item_inserted_before_this_index = True
pivot = sorted_list_2d[i][1]
# If last_odd_item is inserted before the item's index,
# you should forward index one more.
if is_last_odd_item_inserted_before_this_index:
result = result[: i + 2] + binary_search_insertion(result[i + 2 :], pivot)
else:
result = result[: i + 1] + binary_search_insertion(result[i + 1 :], pivot)
return result
if __name__ == "__main__":
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
print(merge_insertion_sort(unsorted))
| -1 |
TheAlgorithms/Python | 5,744 | Improve Project Euler problem 014 solution 2 | ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-01T19:21:41Z" | "2021-11-04T16:01:22Z" | 7a605766fe7fe79a00ba1f30447877be4b77a6f2 | 729aaf64275c61b8bc864ef9138eed078dea9cb2 | Improve Project Euler problem 014 solution 2. ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
An implementation of Karger's Algorithm for partitioning a graph.
"""
from __future__ import annotations
import random
# Adjacency list representation of this graph:
# https://en.wikipedia.org/wiki/File:Single_run_of_Karger%E2%80%99s_Mincut_algorithm.svg
TEST_GRAPH = {
"1": ["2", "3", "4", "5"],
"2": ["1", "3", "4", "5"],
"3": ["1", "2", "4", "5", "10"],
"4": ["1", "2", "3", "5", "6"],
"5": ["1", "2", "3", "4", "7"],
"6": ["7", "8", "9", "10", "4"],
"7": ["6", "8", "9", "10", "5"],
"8": ["6", "7", "9", "10"],
"9": ["6", "7", "8", "10"],
"10": ["6", "7", "8", "9", "3"],
}
def partition_graph(graph: dict[str, list[str]]) -> set[tuple[str, str]]:
"""
Partitions a graph using Karger's Algorithm. Implemented from
pseudocode found here:
https://en.wikipedia.org/wiki/Karger%27s_algorithm.
This function involves random choices, meaning it will not give
consistent outputs.
Args:
graph: A dictionary containing adacency lists for the graph.
Nodes must be strings.
Returns:
The cutset of the cut found by Karger's Algorithm.
>>> graph = {'0':['1'], '1':['0']}
>>> partition_graph(graph)
{('0', '1')}
"""
# Dict that maps contracted nodes to a list of all the nodes it "contains."
contracted_nodes = {node: {node} for node in graph}
graph_copy = {node: graph[node][:] for node in graph}
while len(graph_copy) > 2:
# Choose a random edge.
u = random.choice(list(graph_copy.keys()))
v = random.choice(graph_copy[u])
# Contract edge (u, v) to new node uv
uv = u + v
uv_neighbors = list(set(graph_copy[u] + graph_copy[v]))
uv_neighbors.remove(u)
uv_neighbors.remove(v)
graph_copy[uv] = uv_neighbors
for neighbor in uv_neighbors:
graph_copy[neighbor].append(uv)
contracted_nodes[uv] = set(contracted_nodes[u].union(contracted_nodes[v]))
# Remove nodes u and v.
del graph_copy[u]
del graph_copy[v]
for neighbor in uv_neighbors:
if u in graph_copy[neighbor]:
graph_copy[neighbor].remove(u)
if v in graph_copy[neighbor]:
graph_copy[neighbor].remove(v)
# Find cutset.
groups = [contracted_nodes[node] for node in graph_copy]
return {
(node, neighbor)
for node in groups[0]
for neighbor in graph[node]
if neighbor in groups[1]
}
if __name__ == "__main__":
print(partition_graph(TEST_GRAPH))
| """
An implementation of Karger's Algorithm for partitioning a graph.
"""
from __future__ import annotations
import random
# Adjacency list representation of this graph:
# https://en.wikipedia.org/wiki/File:Single_run_of_Karger%E2%80%99s_Mincut_algorithm.svg
TEST_GRAPH = {
"1": ["2", "3", "4", "5"],
"2": ["1", "3", "4", "5"],
"3": ["1", "2", "4", "5", "10"],
"4": ["1", "2", "3", "5", "6"],
"5": ["1", "2", "3", "4", "7"],
"6": ["7", "8", "9", "10", "4"],
"7": ["6", "8", "9", "10", "5"],
"8": ["6", "7", "9", "10"],
"9": ["6", "7", "8", "10"],
"10": ["6", "7", "8", "9", "3"],
}
def partition_graph(graph: dict[str, list[str]]) -> set[tuple[str, str]]:
"""
Partitions a graph using Karger's Algorithm. Implemented from
pseudocode found here:
https://en.wikipedia.org/wiki/Karger%27s_algorithm.
This function involves random choices, meaning it will not give
consistent outputs.
Args:
graph: A dictionary containing adacency lists for the graph.
Nodes must be strings.
Returns:
The cutset of the cut found by Karger's Algorithm.
>>> graph = {'0':['1'], '1':['0']}
>>> partition_graph(graph)
{('0', '1')}
"""
# Dict that maps contracted nodes to a list of all the nodes it "contains."
contracted_nodes = {node: {node} for node in graph}
graph_copy = {node: graph[node][:] for node in graph}
while len(graph_copy) > 2:
# Choose a random edge.
u = random.choice(list(graph_copy.keys()))
v = random.choice(graph_copy[u])
# Contract edge (u, v) to new node uv
uv = u + v
uv_neighbors = list(set(graph_copy[u] + graph_copy[v]))
uv_neighbors.remove(u)
uv_neighbors.remove(v)
graph_copy[uv] = uv_neighbors
for neighbor in uv_neighbors:
graph_copy[neighbor].append(uv)
contracted_nodes[uv] = set(contracted_nodes[u].union(contracted_nodes[v]))
# Remove nodes u and v.
del graph_copy[u]
del graph_copy[v]
for neighbor in uv_neighbors:
if u in graph_copy[neighbor]:
graph_copy[neighbor].remove(u)
if v in graph_copy[neighbor]:
graph_copy[neighbor].remove(v)
# Find cutset.
groups = [contracted_nodes[node] for node in graph_copy]
return {
(node, neighbor)
for node in groups[0]
for neighbor in graph[node]
if neighbor in groups[1]
}
if __name__ == "__main__":
print(partition_graph(TEST_GRAPH))
| -1 |
TheAlgorithms/Python | 5,744 | Improve Project Euler problem 014 solution 2 | ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-01T19:21:41Z" | "2021-11-04T16:01:22Z" | 7a605766fe7fe79a00ba1f30447877be4b77a6f2 | 729aaf64275c61b8bc864ef9138eed078dea9cb2 | Improve Project Euler problem 014 solution 2. ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| import math
def rearrange(bitString32):
"""[summary]
Regroups the given binary string.
Arguments:
bitString32 {[string]} -- [32 bit binary]
Raises:
ValueError -- [if the given string not are 32 bit binary string]
Returns:
[string] -- [32 bit binary string]
>>> rearrange('1234567890abcdfghijklmnopqrstuvw')
'pqrstuvwhijklmno90abcdfg12345678'
"""
if len(bitString32) != 32:
raise ValueError("Need length 32")
newString = ""
for i in [3, 2, 1, 0]:
newString += bitString32[8 * i : 8 * i + 8]
return newString
def reformatHex(i):
"""[summary]
Converts the given integer into 8-digit hex number.
Arguments:
i {[int]} -- [integer]
>>> reformatHex(666)
'9a020000'
"""
hexrep = format(i, "08x")
thing = ""
for i in [3, 2, 1, 0]:
thing += hexrep[2 * i : 2 * i + 2]
return thing
def pad(bitString):
"""[summary]
Fills up the binary string to a 512 bit binary string
Arguments:
bitString {[string]} -- [binary string]
Returns:
[string] -- [binary string]
"""
startLength = len(bitString)
bitString += "1"
while len(bitString) % 512 != 448:
bitString += "0"
lastPart = format(startLength, "064b")
bitString += rearrange(lastPart[32:]) + rearrange(lastPart[:32])
return bitString
def getBlock(bitString):
"""[summary]
Iterator:
Returns by each call a list of length 16 with the 32 bit
integer blocks.
Arguments:
bitString {[string]} -- [binary string >= 512]
"""
currPos = 0
while currPos < len(bitString):
currPart = bitString[currPos : currPos + 512]
mySplits = []
for i in range(16):
mySplits.append(int(rearrange(currPart[32 * i : 32 * i + 32]), 2))
yield mySplits
currPos += 512
def not32(i):
"""
>>> not32(34)
4294967261
"""
i_str = format(i, "032b")
new_str = ""
for c in i_str:
new_str += "1" if c == "0" else "0"
return int(new_str, 2)
def sum32(a, b):
return (a + b) % 2 ** 32
def leftrot32(i, s):
return (i << s) ^ (i >> (32 - s))
def md5me(testString):
"""[summary]
Returns a 32-bit hash code of the string 'testString'
Arguments:
testString {[string]} -- [message]
"""
bs = ""
for i in testString:
bs += format(ord(i), "08b")
bs = pad(bs)
tvals = [int(2 ** 32 * abs(math.sin(i + 1))) for i in range(64)]
a0 = 0x67452301
b0 = 0xEFCDAB89
c0 = 0x98BADCFE
d0 = 0x10325476
s = [
7,
12,
17,
22,
7,
12,
17,
22,
7,
12,
17,
22,
7,
12,
17,
22,
5,
9,
14,
20,
5,
9,
14,
20,
5,
9,
14,
20,
5,
9,
14,
20,
4,
11,
16,
23,
4,
11,
16,
23,
4,
11,
16,
23,
4,
11,
16,
23,
6,
10,
15,
21,
6,
10,
15,
21,
6,
10,
15,
21,
6,
10,
15,
21,
]
for m in getBlock(bs):
A = a0
B = b0
C = c0
D = d0
for i in range(64):
if i <= 15:
# f = (B & C) | (not32(B) & D)
f = D ^ (B & (C ^ D))
g = i
elif i <= 31:
# f = (D & B) | (not32(D) & C)
f = C ^ (D & (B ^ C))
g = (5 * i + 1) % 16
elif i <= 47:
f = B ^ C ^ D
g = (3 * i + 5) % 16
else:
f = C ^ (B | not32(D))
g = (7 * i) % 16
dtemp = D
D = C
C = B
B = sum32(B, leftrot32((A + f + tvals[i] + m[g]) % 2 ** 32, s[i]))
A = dtemp
a0 = sum32(a0, A)
b0 = sum32(b0, B)
c0 = sum32(c0, C)
d0 = sum32(d0, D)
digest = reformatHex(a0) + reformatHex(b0) + reformatHex(c0) + reformatHex(d0)
return digest
def test():
assert md5me("") == "d41d8cd98f00b204e9800998ecf8427e"
assert (
md5me("The quick brown fox jumps over the lazy dog")
== "9e107d9d372bb6826bd81d3542a419d6"
)
print("Success.")
if __name__ == "__main__":
test()
import doctest
doctest.testmod()
| import math
def rearrange(bitString32):
"""[summary]
Regroups the given binary string.
Arguments:
bitString32 {[string]} -- [32 bit binary]
Raises:
ValueError -- [if the given string not are 32 bit binary string]
Returns:
[string] -- [32 bit binary string]
>>> rearrange('1234567890abcdfghijklmnopqrstuvw')
'pqrstuvwhijklmno90abcdfg12345678'
"""
if len(bitString32) != 32:
raise ValueError("Need length 32")
newString = ""
for i in [3, 2, 1, 0]:
newString += bitString32[8 * i : 8 * i + 8]
return newString
def reformatHex(i):
"""[summary]
Converts the given integer into 8-digit hex number.
Arguments:
i {[int]} -- [integer]
>>> reformatHex(666)
'9a020000'
"""
hexrep = format(i, "08x")
thing = ""
for i in [3, 2, 1, 0]:
thing += hexrep[2 * i : 2 * i + 2]
return thing
def pad(bitString):
"""[summary]
Fills up the binary string to a 512 bit binary string
Arguments:
bitString {[string]} -- [binary string]
Returns:
[string] -- [binary string]
"""
startLength = len(bitString)
bitString += "1"
while len(bitString) % 512 != 448:
bitString += "0"
lastPart = format(startLength, "064b")
bitString += rearrange(lastPart[32:]) + rearrange(lastPart[:32])
return bitString
def getBlock(bitString):
"""[summary]
Iterator:
Returns by each call a list of length 16 with the 32 bit
integer blocks.
Arguments:
bitString {[string]} -- [binary string >= 512]
"""
currPos = 0
while currPos < len(bitString):
currPart = bitString[currPos : currPos + 512]
mySplits = []
for i in range(16):
mySplits.append(int(rearrange(currPart[32 * i : 32 * i + 32]), 2))
yield mySplits
currPos += 512
def not32(i):
"""
>>> not32(34)
4294967261
"""
i_str = format(i, "032b")
new_str = ""
for c in i_str:
new_str += "1" if c == "0" else "0"
return int(new_str, 2)
def sum32(a, b):
return (a + b) % 2 ** 32
def leftrot32(i, s):
return (i << s) ^ (i >> (32 - s))
def md5me(testString):
"""[summary]
Returns a 32-bit hash code of the string 'testString'
Arguments:
testString {[string]} -- [message]
"""
bs = ""
for i in testString:
bs += format(ord(i), "08b")
bs = pad(bs)
tvals = [int(2 ** 32 * abs(math.sin(i + 1))) for i in range(64)]
a0 = 0x67452301
b0 = 0xEFCDAB89
c0 = 0x98BADCFE
d0 = 0x10325476
s = [
7,
12,
17,
22,
7,
12,
17,
22,
7,
12,
17,
22,
7,
12,
17,
22,
5,
9,
14,
20,
5,
9,
14,
20,
5,
9,
14,
20,
5,
9,
14,
20,
4,
11,
16,
23,
4,
11,
16,
23,
4,
11,
16,
23,
4,
11,
16,
23,
6,
10,
15,
21,
6,
10,
15,
21,
6,
10,
15,
21,
6,
10,
15,
21,
]
for m in getBlock(bs):
A = a0
B = b0
C = c0
D = d0
for i in range(64):
if i <= 15:
# f = (B & C) | (not32(B) & D)
f = D ^ (B & (C ^ D))
g = i
elif i <= 31:
# f = (D & B) | (not32(D) & C)
f = C ^ (D & (B ^ C))
g = (5 * i + 1) % 16
elif i <= 47:
f = B ^ C ^ D
g = (3 * i + 5) % 16
else:
f = C ^ (B | not32(D))
g = (7 * i) % 16
dtemp = D
D = C
C = B
B = sum32(B, leftrot32((A + f + tvals[i] + m[g]) % 2 ** 32, s[i]))
A = dtemp
a0 = sum32(a0, A)
b0 = sum32(b0, B)
c0 = sum32(c0, C)
d0 = sum32(d0, D)
digest = reformatHex(a0) + reformatHex(b0) + reformatHex(c0) + reformatHex(d0)
return digest
def test():
assert md5me("") == "d41d8cd98f00b204e9800998ecf8427e"
assert (
md5me("The quick brown fox jumps over the lazy dog")
== "9e107d9d372bb6826bd81d3542a419d6"
)
print("Success.")
if __name__ == "__main__":
test()
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 5,744 | Improve Project Euler problem 014 solution 2 | ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-01T19:21:41Z" | "2021-11-04T16:01:22Z" | 7a605766fe7fe79a00ba1f30447877be4b77a6f2 | 729aaf64275c61b8bc864ef9138eed078dea9cb2 | Improve Project Euler problem 014 solution 2. ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| def get_word_pattern(word: str) -> str:
"""
>>> get_word_pattern("pattern")
'0.1.2.2.3.4.5'
>>> get_word_pattern("word pattern")
'0.1.2.3.4.5.6.7.7.8.2.9'
>>> get_word_pattern("get word pattern")
'0.1.2.3.4.5.6.7.3.8.9.2.2.1.6.10'
"""
word = word.upper()
next_num = 0
letter_nums = {}
word_pattern = []
for letter in word:
if letter not in letter_nums:
letter_nums[letter] = str(next_num)
next_num += 1
word_pattern.append(letter_nums[letter])
return ".".join(word_pattern)
if __name__ == "__main__":
import pprint
import time
start_time = time.time()
with open("dictionary.txt") as in_file:
wordList = in_file.read().splitlines()
all_patterns: dict = {}
for word in wordList:
pattern = get_word_pattern(word)
if pattern in all_patterns:
all_patterns[pattern].append(word)
else:
all_patterns[pattern] = [word]
with open("word_patterns.txt", "w") as out_file:
out_file.write(pprint.pformat(all_patterns))
totalTime = round(time.time() - start_time, 2)
print(f"Done! {len(all_patterns):,} word patterns found in {totalTime} seconds.")
# Done! 9,581 word patterns found in 0.58 seconds.
| def get_word_pattern(word: str) -> str:
"""
>>> get_word_pattern("pattern")
'0.1.2.2.3.4.5'
>>> get_word_pattern("word pattern")
'0.1.2.3.4.5.6.7.7.8.2.9'
>>> get_word_pattern("get word pattern")
'0.1.2.3.4.5.6.7.3.8.9.2.2.1.6.10'
"""
word = word.upper()
next_num = 0
letter_nums = {}
word_pattern = []
for letter in word:
if letter not in letter_nums:
letter_nums[letter] = str(next_num)
next_num += 1
word_pattern.append(letter_nums[letter])
return ".".join(word_pattern)
if __name__ == "__main__":
import pprint
import time
start_time = time.time()
with open("dictionary.txt") as in_file:
wordList = in_file.read().splitlines()
all_patterns: dict = {}
for word in wordList:
pattern = get_word_pattern(word)
if pattern in all_patterns:
all_patterns[pattern].append(word)
else:
all_patterns[pattern] = [word]
with open("word_patterns.txt", "w") as out_file:
out_file.write(pprint.pformat(all_patterns))
totalTime = round(time.time() - start_time, 2)
print(f"Done! {len(all_patterns):,} word patterns found in {totalTime} seconds.")
# Done! 9,581 word patterns found in 0.58 seconds.
| -1 |
TheAlgorithms/Python | 5,744 | Improve Project Euler problem 014 solution 2 | ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-01T19:21:41Z" | "2021-11-04T16:01:22Z" | 7a605766fe7fe79a00ba1f30447877be4b77a6f2 | 729aaf64275c61b8bc864ef9138eed078dea9cb2 | Improve Project Euler problem 014 solution 2. ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 1: https://projecteuler.net/problem=1
Multiples of 3 and 5
If we list all the natural numbers below 10 that are multiples of 3 or 5,
we get 3, 5, 6 and 9. The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000.
"""
def solution(n: int = 1000) -> int:
"""
Returns the sum of all the multiples of 3 or 5 below n.
>>> solution(3)
0
>>> solution(4)
3
>>> solution(10)
23
>>> solution(600)
83700
>>> solution(-7)
0
"""
return sum(e for e in range(3, n) if e % 3 == 0 or e % 5 == 0)
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 1: https://projecteuler.net/problem=1
Multiples of 3 and 5
If we list all the natural numbers below 10 that are multiples of 3 or 5,
we get 3, 5, 6 and 9. The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000.
"""
def solution(n: int = 1000) -> int:
"""
Returns the sum of all the multiples of 3 or 5 below n.
>>> solution(3)
0
>>> solution(4)
3
>>> solution(10)
23
>>> solution(600)
83700
>>> solution(-7)
0
"""
return sum(e for e in range(3, n) if e % 3 == 0 or e % 5 == 0)
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 5,744 | Improve Project Euler problem 014 solution 2 | ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-01T19:21:41Z" | "2021-11-04T16:01:22Z" | 7a605766fe7fe79a00ba1f30447877be4b77a6f2 | 729aaf64275c61b8bc864ef9138eed078dea9cb2 | Improve Project Euler problem 014 solution 2. ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| import math
from timeit import timeit
def num_digits(n: int) -> int:
"""
Find the number of digits in a number.
>>> num_digits(12345)
5
>>> num_digits(123)
3
>>> num_digits(0)
1
>>> num_digits(-1)
1
>>> num_digits(-123456)
6
"""
digits = 0
n = abs(n)
while True:
n = n // 10
digits += 1
if n == 0:
break
return digits
def num_digits_fast(n: int) -> int:
"""
Find the number of digits in a number.
abs() is used as logarithm for negative numbers is not defined.
>>> num_digits_fast(12345)
5
>>> num_digits_fast(123)
3
>>> num_digits_fast(0)
1
>>> num_digits_fast(-1)
1
>>> num_digits_fast(-123456)
6
"""
return 1 if n == 0 else math.floor(math.log(abs(n), 10) + 1)
def num_digits_faster(n: int) -> int:
"""
Find the number of digits in a number.
abs() is used for negative numbers
>>> num_digits_faster(12345)
5
>>> num_digits_faster(123)
3
>>> num_digits_faster(0)
1
>>> num_digits_faster(-1)
1
>>> num_digits_faster(-123456)
6
"""
return len(str(abs(n)))
def benchmark() -> None:
"""
Benchmark code for comparing 3 functions,
with 3 different length int values.
"""
print("\nFor small_num = ", small_num, ":")
print(
"> num_digits()",
"\t\tans =",
num_digits(small_num),
"\ttime =",
timeit("z.num_digits(z.small_num)", setup="import __main__ as z"),
"seconds",
)
print(
"> num_digits_fast()",
"\tans =",
num_digits_fast(small_num),
"\ttime =",
timeit("z.num_digits_fast(z.small_num)", setup="import __main__ as z"),
"seconds",
)
print(
"> num_digits_faster()",
"\tans =",
num_digits_faster(small_num),
"\ttime =",
timeit("z.num_digits_faster(z.small_num)", setup="import __main__ as z"),
"seconds",
)
print("\nFor medium_num = ", medium_num, ":")
print(
"> num_digits()",
"\t\tans =",
num_digits(medium_num),
"\ttime =",
timeit("z.num_digits(z.medium_num)", setup="import __main__ as z"),
"seconds",
)
print(
"> num_digits_fast()",
"\tans =",
num_digits_fast(medium_num),
"\ttime =",
timeit("z.num_digits_fast(z.medium_num)", setup="import __main__ as z"),
"seconds",
)
print(
"> num_digits_faster()",
"\tans =",
num_digits_faster(medium_num),
"\ttime =",
timeit("z.num_digits_faster(z.medium_num)", setup="import __main__ as z"),
"seconds",
)
print("\nFor large_num = ", large_num, ":")
print(
"> num_digits()",
"\t\tans =",
num_digits(large_num),
"\ttime =",
timeit("z.num_digits(z.large_num)", setup="import __main__ as z"),
"seconds",
)
print(
"> num_digits_fast()",
"\tans =",
num_digits_fast(large_num),
"\ttime =",
timeit("z.num_digits_fast(z.large_num)", setup="import __main__ as z"),
"seconds",
)
print(
"> num_digits_faster()",
"\tans =",
num_digits_faster(large_num),
"\ttime =",
timeit("z.num_digits_faster(z.large_num)", setup="import __main__ as z"),
"seconds",
)
if __name__ == "__main__":
small_num = 262144
medium_num = 1125899906842624
large_num = 1267650600228229401496703205376
benchmark()
import doctest
doctest.testmod()
| import math
from timeit import timeit
def num_digits(n: int) -> int:
"""
Find the number of digits in a number.
>>> num_digits(12345)
5
>>> num_digits(123)
3
>>> num_digits(0)
1
>>> num_digits(-1)
1
>>> num_digits(-123456)
6
"""
digits = 0
n = abs(n)
while True:
n = n // 10
digits += 1
if n == 0:
break
return digits
def num_digits_fast(n: int) -> int:
"""
Find the number of digits in a number.
abs() is used as logarithm for negative numbers is not defined.
>>> num_digits_fast(12345)
5
>>> num_digits_fast(123)
3
>>> num_digits_fast(0)
1
>>> num_digits_fast(-1)
1
>>> num_digits_fast(-123456)
6
"""
return 1 if n == 0 else math.floor(math.log(abs(n), 10) + 1)
def num_digits_faster(n: int) -> int:
"""
Find the number of digits in a number.
abs() is used for negative numbers
>>> num_digits_faster(12345)
5
>>> num_digits_faster(123)
3
>>> num_digits_faster(0)
1
>>> num_digits_faster(-1)
1
>>> num_digits_faster(-123456)
6
"""
return len(str(abs(n)))
def benchmark() -> None:
"""
Benchmark code for comparing 3 functions,
with 3 different length int values.
"""
print("\nFor small_num = ", small_num, ":")
print(
"> num_digits()",
"\t\tans =",
num_digits(small_num),
"\ttime =",
timeit("z.num_digits(z.small_num)", setup="import __main__ as z"),
"seconds",
)
print(
"> num_digits_fast()",
"\tans =",
num_digits_fast(small_num),
"\ttime =",
timeit("z.num_digits_fast(z.small_num)", setup="import __main__ as z"),
"seconds",
)
print(
"> num_digits_faster()",
"\tans =",
num_digits_faster(small_num),
"\ttime =",
timeit("z.num_digits_faster(z.small_num)", setup="import __main__ as z"),
"seconds",
)
print("\nFor medium_num = ", medium_num, ":")
print(
"> num_digits()",
"\t\tans =",
num_digits(medium_num),
"\ttime =",
timeit("z.num_digits(z.medium_num)", setup="import __main__ as z"),
"seconds",
)
print(
"> num_digits_fast()",
"\tans =",
num_digits_fast(medium_num),
"\ttime =",
timeit("z.num_digits_fast(z.medium_num)", setup="import __main__ as z"),
"seconds",
)
print(
"> num_digits_faster()",
"\tans =",
num_digits_faster(medium_num),
"\ttime =",
timeit("z.num_digits_faster(z.medium_num)", setup="import __main__ as z"),
"seconds",
)
print("\nFor large_num = ", large_num, ":")
print(
"> num_digits()",
"\t\tans =",
num_digits(large_num),
"\ttime =",
timeit("z.num_digits(z.large_num)", setup="import __main__ as z"),
"seconds",
)
print(
"> num_digits_fast()",
"\tans =",
num_digits_fast(large_num),
"\ttime =",
timeit("z.num_digits_fast(z.large_num)", setup="import __main__ as z"),
"seconds",
)
print(
"> num_digits_faster()",
"\tans =",
num_digits_faster(large_num),
"\ttime =",
timeit("z.num_digits_faster(z.large_num)", setup="import __main__ as z"),
"seconds",
)
if __name__ == "__main__":
small_num = 262144
medium_num = 1125899906842624
large_num = 1267650600228229401496703205376
benchmark()
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 5,744 | Improve Project Euler problem 014 solution 2 | ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-01T19:21:41Z" | "2021-11-04T16:01:22Z" | 7a605766fe7fe79a00ba1f30447877be4b77a6f2 | 729aaf64275c61b8bc864ef9138eed078dea9cb2 | Improve Project Euler problem 014 solution 2. ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # https://en.m.wikipedia.org/wiki/Electric_power
from __future__ import annotations
from collections import namedtuple
def electric_power(voltage: float, current: float, power: float) -> tuple:
"""
This function can calculate any one of the three (voltage, current, power),
fundamental value of electrical system.
examples are below:
>>> electric_power(voltage=0, current=2, power=5)
result(name='voltage', value=2.5)
>>> electric_power(voltage=2, current=2, power=0)
result(name='power', value=4.0)
>>> electric_power(voltage=-2, current=3, power=0)
result(name='power', value=6.0)
>>> electric_power(voltage=2, current=4, power=2)
Traceback (most recent call last):
File "<stdin>", line 15, in <module>
ValueError: Only one argument must be 0
>>> electric_power(voltage=0, current=0, power=2)
Traceback (most recent call last):
File "<stdin>", line 19, in <module>
ValueError: Only one argument must be 0
>>> electric_power(voltage=0, current=2, power=-4)
Traceback (most recent call last):
File "<stdin>", line 23, in <modulei
ValueError: Power cannot be negative in any electrical/electronics system
>>> electric_power(voltage=2.2, current=2.2, power=0)
result(name='power', value=4.84)
"""
result = namedtuple("result", "name value")
if (voltage, current, power).count(0) != 1:
raise ValueError("Only one argument must be 0")
elif power < 0:
raise ValueError(
"Power cannot be negative in any electrical/electronics system"
)
elif voltage == 0:
return result("voltage", power / current)
elif current == 0:
return result("current", power / voltage)
elif power == 0:
return result("power", float(round(abs(voltage * current), 2)))
else:
raise ValueError("Exactly one argument must be 0")
if __name__ == "__main__":
import doctest
doctest.testmod()
| # https://en.m.wikipedia.org/wiki/Electric_power
from __future__ import annotations
from collections import namedtuple
def electric_power(voltage: float, current: float, power: float) -> tuple:
"""
This function can calculate any one of the three (voltage, current, power),
fundamental value of electrical system.
examples are below:
>>> electric_power(voltage=0, current=2, power=5)
result(name='voltage', value=2.5)
>>> electric_power(voltage=2, current=2, power=0)
result(name='power', value=4.0)
>>> electric_power(voltage=-2, current=3, power=0)
result(name='power', value=6.0)
>>> electric_power(voltage=2, current=4, power=2)
Traceback (most recent call last):
File "<stdin>", line 15, in <module>
ValueError: Only one argument must be 0
>>> electric_power(voltage=0, current=0, power=2)
Traceback (most recent call last):
File "<stdin>", line 19, in <module>
ValueError: Only one argument must be 0
>>> electric_power(voltage=0, current=2, power=-4)
Traceback (most recent call last):
File "<stdin>", line 23, in <modulei
ValueError: Power cannot be negative in any electrical/electronics system
>>> electric_power(voltage=2.2, current=2.2, power=0)
result(name='power', value=4.84)
"""
result = namedtuple("result", "name value")
if (voltage, current, power).count(0) != 1:
raise ValueError("Only one argument must be 0")
elif power < 0:
raise ValueError(
"Power cannot be negative in any electrical/electronics system"
)
elif voltage == 0:
return result("voltage", power / current)
elif current == 0:
return result("current", power / voltage)
elif power == 0:
return result("power", float(round(abs(voltage * current), 2)))
else:
raise ValueError("Exactly one argument must be 0")
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 5,744 | Improve Project Euler problem 014 solution 2 | ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-01T19:21:41Z" | "2021-11-04T16:01:22Z" | 7a605766fe7fe79a00ba1f30447877be4b77a6f2 | 729aaf64275c61b8bc864ef9138eed078dea9cb2 | Improve Project Euler problem 014 solution 2. ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Recursive Prorgam to create a Linked List from a sequence and
# print a string representation of it.
class Node:
def __init__(self, data=None):
self.data = data
self.next = None
def __repr__(self):
"""Returns a visual representation of the node and all its following nodes."""
string_rep = ""
temp = self
while temp:
string_rep += f"<{temp.data}> ---> "
temp = temp.next
string_rep += "<END>"
return string_rep
def make_linked_list(elements_list):
"""Creates a Linked List from the elements of the given sequence
(list/tuple) and returns the head of the Linked List."""
# if elements_list is empty
if not elements_list:
raise Exception("The Elements List is empty")
# Set first element as Head
head = Node(elements_list[0])
current = head
# Loop through elements from position 1
for data in elements_list[1:]:
current.next = Node(data)
current = current.next
return head
list_data = [1, 3, 5, 32, 44, 12, 43]
print(f"List: {list_data}")
print("Creating Linked List from List.")
linked_list = make_linked_list(list_data)
print("Linked List:")
print(linked_list)
| # Recursive Prorgam to create a Linked List from a sequence and
# print a string representation of it.
class Node:
def __init__(self, data=None):
self.data = data
self.next = None
def __repr__(self):
"""Returns a visual representation of the node and all its following nodes."""
string_rep = ""
temp = self
while temp:
string_rep += f"<{temp.data}> ---> "
temp = temp.next
string_rep += "<END>"
return string_rep
def make_linked_list(elements_list):
"""Creates a Linked List from the elements of the given sequence
(list/tuple) and returns the head of the Linked List."""
# if elements_list is empty
if not elements_list:
raise Exception("The Elements List is empty")
# Set first element as Head
head = Node(elements_list[0])
current = head
# Loop through elements from position 1
for data in elements_list[1:]:
current.next = Node(data)
current = current.next
return head
list_data = [1, 3, 5, 32, 44, 12, 43]
print(f"List: {list_data}")
print("Creating Linked List from List.")
linked_list = make_linked_list(list_data)
print("Linked List:")
print(linked_list)
| -1 |
TheAlgorithms/Python | 5,744 | Improve Project Euler problem 014 solution 2 | ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-01T19:21:41Z" | "2021-11-04T16:01:22Z" | 7a605766fe7fe79a00ba1f30447877be4b77a6f2 | 729aaf64275c61b8bc864ef9138eed078dea9cb2 | Improve Project Euler problem 014 solution 2. ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Problem 125: https://projecteuler.net/problem=125
The palindromic number 595 is interesting because it can be written as the sum
of consecutive squares: 6^2 + 7^2 + 8^2 + 9^2 + 10^2 + 11^2 + 12^2.
There are exactly eleven palindromes below one-thousand that can be written as
consecutive square sums, and the sum of these palindromes is 4164. Note that
1 = 0^2 + 1^2 has not been included as this problem is concerned with the
squares of positive integers.
Find the sum of all the numbers less than 10^8 that are both palindromic and can
be written as the sum of consecutive squares.
"""
def is_palindrome(n: int) -> bool:
"""
Check if an integer is palindromic.
>>> is_palindrome(12521)
True
>>> is_palindrome(12522)
False
>>> is_palindrome(12210)
False
"""
if n % 10 == 0:
return False
s = str(n)
return s == s[::-1]
def solution() -> int:
"""
Returns the sum of all numbers less than 1e8 that are both palindromic and
can be written as the sum of consecutive squares.
"""
LIMIT = 10 ** 8
answer = set()
first_square = 1
sum_squares = 5
while sum_squares < LIMIT:
last_square = first_square + 1
while sum_squares < LIMIT:
if is_palindrome(sum_squares):
answer.add(sum_squares)
last_square += 1
sum_squares += last_square ** 2
first_square += 1
sum_squares = first_square ** 2 + (first_square + 1) ** 2
return sum(answer)
if __name__ == "__main__":
print(solution())
| """
Problem 125: https://projecteuler.net/problem=125
The palindromic number 595 is interesting because it can be written as the sum
of consecutive squares: 6^2 + 7^2 + 8^2 + 9^2 + 10^2 + 11^2 + 12^2.
There are exactly eleven palindromes below one-thousand that can be written as
consecutive square sums, and the sum of these palindromes is 4164. Note that
1 = 0^2 + 1^2 has not been included as this problem is concerned with the
squares of positive integers.
Find the sum of all the numbers less than 10^8 that are both palindromic and can
be written as the sum of consecutive squares.
"""
def is_palindrome(n: int) -> bool:
"""
Check if an integer is palindromic.
>>> is_palindrome(12521)
True
>>> is_palindrome(12522)
False
>>> is_palindrome(12210)
False
"""
if n % 10 == 0:
return False
s = str(n)
return s == s[::-1]
def solution() -> int:
"""
Returns the sum of all numbers less than 1e8 that are both palindromic and
can be written as the sum of consecutive squares.
"""
LIMIT = 10 ** 8
answer = set()
first_square = 1
sum_squares = 5
while sum_squares < LIMIT:
last_square = first_square + 1
while sum_squares < LIMIT:
if is_palindrome(sum_squares):
answer.add(sum_squares)
last_square += 1
sum_squares += last_square ** 2
first_square += 1
sum_squares = first_square ** 2 + (first_square + 1) ** 2
return sum(answer)
if __name__ == "__main__":
print(solution())
| -1 |
TheAlgorithms/Python | 5,744 | Improve Project Euler problem 014 solution 2 | ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-01T19:21:41Z" | "2021-11-04T16:01:22Z" | 7a605766fe7fe79a00ba1f30447877be4b77a6f2 | 729aaf64275c61b8bc864ef9138eed078dea9cb2 | Improve Project Euler problem 014 solution 2. ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
In the game of darts a player throws three darts at a target board which is
split into twenty equal sized sections numbered one to twenty.

The score of a dart is determined by the number of the region that the dart
lands in. A dart landing outside the red/green outer ring scores zero. The black
and cream regions inside this ring represent single scores. However, the red/green
outer ring and middle ring score double and treble scores respectively.
At the centre of the board are two concentric circles called the bull region, or
bulls-eye. The outer bull is worth 25 points and the inner bull is a double,
worth 50 points.
There are many variations of rules but in the most popular game the players will
begin with a score 301 or 501 and the first player to reduce their running total
to zero is a winner. However, it is normal to play a "doubles out" system, which
means that the player must land a double (including the double bulls-eye at the
centre of the board) on their final dart to win; any other dart that would reduce
their running total to one or lower means the score for that set of three darts
is "bust".
When a player is able to finish on their current score it is called a "checkout"
and the highest checkout is 170: T20 T20 D25 (two treble 20s and double bull).
There are exactly eleven distinct ways to checkout on a score of 6:
D3
D1 D2
S2 D2
D2 D1
S4 D1
S1 S1 D2
S1 T1 D1
S1 S3 D1
D1 D1 D1
D1 S2 D1
S2 S2 D1
Note that D1 D2 is considered different to D2 D1 as they finish on different
doubles. However, the combination S1 T1 D1 is considered the same as T1 S1 D1.
In addition we shall not include misses in considering combinations; for example,
D3 is the same as 0 D3 and 0 0 D3.
Incredibly there are 42336 distinct ways of checking out in total.
How many distinct ways can a player checkout with a score less than 100?
Solution:
We first construct a list of the possible dart values, separated by type.
We then iterate through the doubles, followed by the possible 2 following throws.
If the total of these three darts is less than the given limit, we increment
the counter.
"""
from itertools import combinations_with_replacement
def solution(limit: int = 100) -> int:
"""
Count the number of distinct ways a player can checkout with a score
less than limit.
>>> solution(171)
42336
>>> solution(50)
12577
"""
singles: list[int] = [x for x in range(1, 21)] + [25]
doubles: list[int] = [2 * x for x in range(1, 21)] + [50]
triples: list[int] = [3 * x for x in range(1, 21)]
all_values: list[int] = singles + doubles + triples + [0]
num_checkouts: int = 0
double: int
throw1: int
throw2: int
checkout_total: int
for double in doubles:
for throw1, throw2 in combinations_with_replacement(all_values, 2):
checkout_total = double + throw1 + throw2
if checkout_total < limit:
num_checkouts += 1
return num_checkouts
if __name__ == "__main__":
print(f"{solution() = }")
| """
In the game of darts a player throws three darts at a target board which is
split into twenty equal sized sections numbered one to twenty.

The score of a dart is determined by the number of the region that the dart
lands in. A dart landing outside the red/green outer ring scores zero. The black
and cream regions inside this ring represent single scores. However, the red/green
outer ring and middle ring score double and treble scores respectively.
At the centre of the board are two concentric circles called the bull region, or
bulls-eye. The outer bull is worth 25 points and the inner bull is a double,
worth 50 points.
There are many variations of rules but in the most popular game the players will
begin with a score 301 or 501 and the first player to reduce their running total
to zero is a winner. However, it is normal to play a "doubles out" system, which
means that the player must land a double (including the double bulls-eye at the
centre of the board) on their final dart to win; any other dart that would reduce
their running total to one or lower means the score for that set of three darts
is "bust".
When a player is able to finish on their current score it is called a "checkout"
and the highest checkout is 170: T20 T20 D25 (two treble 20s and double bull).
There are exactly eleven distinct ways to checkout on a score of 6:
D3
D1 D2
S2 D2
D2 D1
S4 D1
S1 S1 D2
S1 T1 D1
S1 S3 D1
D1 D1 D1
D1 S2 D1
S2 S2 D1
Note that D1 D2 is considered different to D2 D1 as they finish on different
doubles. However, the combination S1 T1 D1 is considered the same as T1 S1 D1.
In addition we shall not include misses in considering combinations; for example,
D3 is the same as 0 D3 and 0 0 D3.
Incredibly there are 42336 distinct ways of checking out in total.
How many distinct ways can a player checkout with a score less than 100?
Solution:
We first construct a list of the possible dart values, separated by type.
We then iterate through the doubles, followed by the possible 2 following throws.
If the total of these three darts is less than the given limit, we increment
the counter.
"""
from itertools import combinations_with_replacement
def solution(limit: int = 100) -> int:
"""
Count the number of distinct ways a player can checkout with a score
less than limit.
>>> solution(171)
42336
>>> solution(50)
12577
"""
singles: list[int] = [x for x in range(1, 21)] + [25]
doubles: list[int] = [2 * x for x in range(1, 21)] + [50]
triples: list[int] = [3 * x for x in range(1, 21)]
all_values: list[int] = singles + doubles + triples + [0]
num_checkouts: int = 0
double: int
throw1: int
throw2: int
checkout_total: int
for double in doubles:
for throw1, throw2 in combinations_with_replacement(all_values, 2):
checkout_total = double + throw1 + throw2
if checkout_total < limit:
num_checkouts += 1
return num_checkouts
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 5,744 | Improve Project Euler problem 014 solution 2 | ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-01T19:21:41Z" | "2021-11-04T16:01:22Z" | 7a605766fe7fe79a00ba1f30447877be4b77a6f2 | 729aaf64275c61b8bc864ef9138eed078dea9cb2 | Improve Project Euler problem 014 solution 2. ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
Project Euler Problem 65: https://projecteuler.net/problem=65
The square root of 2 can be written as an infinite continued fraction.
sqrt(2) = 1 + 1 / (2 + 1 / (2 + 1 / (2 + 1 / (2 + ...))))
The infinite continued fraction can be written, sqrt(2) = [1;(2)], (2)
indicates that 2 repeats ad infinitum. In a similar way, sqrt(23) =
[4;(1,3,1,8)].
It turns out that the sequence of partial values of continued
fractions for square roots provide the best rational approximations.
Let us consider the convergents for sqrt(2).
1 + 1 / 2 = 3/2
1 + 1 / (2 + 1 / 2) = 7/5
1 + 1 / (2 + 1 / (2 + 1 / 2)) = 17/12
1 + 1 / (2 + 1 / (2 + 1 / (2 + 1 / 2))) = 41/29
Hence the sequence of the first ten convergents for sqrt(2) are:
1, 3/2, 7/5, 17/12, 41/29, 99/70, 239/169, 577/408, 1393/985, 3363/2378, ...
What is most surprising is that the important mathematical constant,
e = [2;1,2,1,1,4,1,1,6,1,...,1,2k,1,...].
The first ten terms in the sequence of convergents for e are:
2, 3, 8/3, 11/4, 19/7, 87/32, 106/39, 193/71, 1264/465, 1457/536, ...
The sum of digits in the numerator of the 10th convergent is
1 + 4 + 5 + 7 = 17.
Find the sum of the digits in the numerator of the 100th convergent
of the continued fraction for e.
-----
The solution mostly comes down to finding an equation that will generate
the numerator of the continued fraction. For the i-th numerator, the
pattern is:
n_i = m_i * n_(i-1) + n_(i-2)
for m_i = the i-th index of the continued fraction representation of e,
n_0 = 1, and n_1 = 2 as the first 2 numbers of the representation.
For example:
n_9 = 6 * 193 + 106 = 1264
1 + 2 + 6 + 4 = 13
n_10 = 1 * 193 + 1264 = 1457
1 + 4 + 5 + 7 = 17
"""
def sum_digits(num: int) -> int:
"""
Returns the sum of every digit in num.
>>> sum_digits(1)
1
>>> sum_digits(12345)
15
>>> sum_digits(999001)
28
"""
digit_sum = 0
while num > 0:
digit_sum += num % 10
num //= 10
return digit_sum
def solution(max: int = 100) -> int:
"""
Returns the sum of the digits in the numerator of the max-th convergent of
the continued fraction for e.
>>> solution(9)
13
>>> solution(10)
17
>>> solution(50)
91
"""
pre_numerator = 1
cur_numerator = 2
for i in range(2, max + 1):
temp = pre_numerator
e_cont = 2 * i // 3 if i % 3 == 0 else 1
pre_numerator = cur_numerator
cur_numerator = e_cont * pre_numerator + temp
return sum_digits(cur_numerator)
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 65: https://projecteuler.net/problem=65
The square root of 2 can be written as an infinite continued fraction.
sqrt(2) = 1 + 1 / (2 + 1 / (2 + 1 / (2 + 1 / (2 + ...))))
The infinite continued fraction can be written, sqrt(2) = [1;(2)], (2)
indicates that 2 repeats ad infinitum. In a similar way, sqrt(23) =
[4;(1,3,1,8)].
It turns out that the sequence of partial values of continued
fractions for square roots provide the best rational approximations.
Let us consider the convergents for sqrt(2).
1 + 1 / 2 = 3/2
1 + 1 / (2 + 1 / 2) = 7/5
1 + 1 / (2 + 1 / (2 + 1 / 2)) = 17/12
1 + 1 / (2 + 1 / (2 + 1 / (2 + 1 / 2))) = 41/29
Hence the sequence of the first ten convergents for sqrt(2) are:
1, 3/2, 7/5, 17/12, 41/29, 99/70, 239/169, 577/408, 1393/985, 3363/2378, ...
What is most surprising is that the important mathematical constant,
e = [2;1,2,1,1,4,1,1,6,1,...,1,2k,1,...].
The first ten terms in the sequence of convergents for e are:
2, 3, 8/3, 11/4, 19/7, 87/32, 106/39, 193/71, 1264/465, 1457/536, ...
The sum of digits in the numerator of the 10th convergent is
1 + 4 + 5 + 7 = 17.
Find the sum of the digits in the numerator of the 100th convergent
of the continued fraction for e.
-----
The solution mostly comes down to finding an equation that will generate
the numerator of the continued fraction. For the i-th numerator, the
pattern is:
n_i = m_i * n_(i-1) + n_(i-2)
for m_i = the i-th index of the continued fraction representation of e,
n_0 = 1, and n_1 = 2 as the first 2 numbers of the representation.
For example:
n_9 = 6 * 193 + 106 = 1264
1 + 2 + 6 + 4 = 13
n_10 = 1 * 193 + 1264 = 1457
1 + 4 + 5 + 7 = 17
"""
def sum_digits(num: int) -> int:
"""
Returns the sum of every digit in num.
>>> sum_digits(1)
1
>>> sum_digits(12345)
15
>>> sum_digits(999001)
28
"""
digit_sum = 0
while num > 0:
digit_sum += num % 10
num //= 10
return digit_sum
def solution(max: int = 100) -> int:
"""
Returns the sum of the digits in the numerator of the max-th convergent of
the continued fraction for e.
>>> solution(9)
13
>>> solution(10)
17
>>> solution(50)
91
"""
pre_numerator = 1
cur_numerator = 2
for i in range(2, max + 1):
temp = pre_numerator
e_cont = 2 * i // 3 if i % 3 == 0 else 1
pre_numerator = cur_numerator
cur_numerator = e_cont * pre_numerator + temp
return sum_digits(cur_numerator)
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 5,744 | Improve Project Euler problem 014 solution 2 | ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-01T19:21:41Z" | "2021-11-04T16:01:22Z" | 7a605766fe7fe79a00ba1f30447877be4b77a6f2 | 729aaf64275c61b8bc864ef9138eed078dea9cb2 | Improve Project Euler problem 014 solution 2. ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 5,744 | Improve Project Euler problem 014 solution 2 | ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-01T19:21:41Z" | "2021-11-04T16:01:22Z" | 7a605766fe7fe79a00ba1f30447877be4b77a6f2 | 729aaf64275c61b8bc864ef9138eed078dea9cb2 | Improve Project Euler problem 014 solution 2. ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| def max_subarray_sum(nums: list) -> int:
"""
>>> max_subarray_sum([6 , 9, -1, 3, -7, -5, 10])
17
"""
if not nums:
return 0
n = len(nums)
res, s, s_pre = nums[0], nums[0], nums[0]
for i in range(1, n):
s = max(nums[i], s_pre + nums[i])
s_pre = s
res = max(res, s)
return res
if __name__ == "__main__":
nums = [6, 9, -1, 3, -7, -5, 10]
print(max_subarray_sum(nums))
| def max_subarray_sum(nums: list) -> int:
"""
>>> max_subarray_sum([6 , 9, -1, 3, -7, -5, 10])
17
"""
if not nums:
return 0
n = len(nums)
res, s, s_pre = nums[0], nums[0], nums[0]
for i in range(1, n):
s = max(nums[i], s_pre + nums[i])
s_pre = s
res = max(res, s)
return res
if __name__ == "__main__":
nums = [6, 9, -1, 3, -7, -5, 10]
print(max_subarray_sum(nums))
| -1 |
TheAlgorithms/Python | 5,744 | Improve Project Euler problem 014 solution 2 | ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-01T19:21:41Z" | "2021-11-04T16:01:22Z" | 7a605766fe7fe79a00ba1f30447877be4b77a6f2 | 729aaf64275c61b8bc864ef9138eed078dea9cb2 | Improve Project Euler problem 014 solution 2. ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Finding Bridges in Undirected Graph
def computeBridges(graph):
id = 0
n = len(graph) # No of vertices in graph
low = [0] * n
visited = [False] * n
def dfs(at, parent, bridges, id):
visited[at] = True
low[at] = id
id += 1
for to in graph[at]:
if to == parent:
pass
elif not visited[to]:
dfs(to, at, bridges, id)
low[at] = min(low[at], low[to])
if at < low[to]:
bridges.append([at, to])
else:
# This edge is a back edge and cannot be a bridge
low[at] = min(low[at], to)
bridges = []
for i in range(n):
if not visited[i]:
dfs(i, -1, bridges, id)
print(bridges)
graph = {
0: [1, 2],
1: [0, 2],
2: [0, 1, 3, 5],
3: [2, 4],
4: [3],
5: [2, 6, 8],
6: [5, 7],
7: [6, 8],
8: [5, 7],
}
computeBridges(graph)
| # Finding Bridges in Undirected Graph
def computeBridges(graph):
id = 0
n = len(graph) # No of vertices in graph
low = [0] * n
visited = [False] * n
def dfs(at, parent, bridges, id):
visited[at] = True
low[at] = id
id += 1
for to in graph[at]:
if to == parent:
pass
elif not visited[to]:
dfs(to, at, bridges, id)
low[at] = min(low[at], low[to])
if at < low[to]:
bridges.append([at, to])
else:
# This edge is a back edge and cannot be a bridge
low[at] = min(low[at], to)
bridges = []
for i in range(n):
if not visited[i]:
dfs(i, -1, bridges, id)
print(bridges)
graph = {
0: [1, 2],
1: [0, 2],
2: [0, 1, 3, 5],
3: [2, 4],
4: [3],
5: [2, 6, 8],
6: [5, 7],
7: [6, 8],
8: [5, 7],
}
computeBridges(graph)
| -1 |
TheAlgorithms/Python | 5,744 | Improve Project Euler problem 014 solution 2 | ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-01T19:21:41Z" | "2021-11-04T16:01:22Z" | 7a605766fe7fe79a00ba1f30447877be4b77a6f2 | 729aaf64275c61b8bc864ef9138eed078dea9cb2 | Improve Project Euler problem 014 solution 2. ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| import requests
from bs4 import BeautifulSoup
def horoscope(zodiac_sign: int, day: str) -> str:
url = (
"https://www.horoscope.com/us/horoscopes/general/"
f"horoscope-general-daily-{day}.aspx?sign={zodiac_sign}"
)
soup = BeautifulSoup(requests.get(url).content, "html.parser")
return soup.find("div", class_="main-horoscope").p.text
if __name__ == "__main__":
print("Daily Horoscope. \n")
print(
"enter your Zodiac sign number:\n",
"1. Aries\n",
"2. Taurus\n",
"3. Gemini\n",
"4. Cancer\n",
"5. Leo\n",
"6. Virgo\n",
"7. Libra\n",
"8. Scorpio\n",
"9. Sagittarius\n",
"10. Capricorn\n",
"11. Aquarius\n",
"12. Pisces\n",
)
zodiac_sign = int(input("number> ").strip())
print("choose some day:\n", "yesterday\n", "today\n", "tomorrow\n")
day = input("enter the day> ")
horoscope_text = horoscope(zodiac_sign, day)
print(horoscope_text)
| import requests
from bs4 import BeautifulSoup
def horoscope(zodiac_sign: int, day: str) -> str:
url = (
"https://www.horoscope.com/us/horoscopes/general/"
f"horoscope-general-daily-{day}.aspx?sign={zodiac_sign}"
)
soup = BeautifulSoup(requests.get(url).content, "html.parser")
return soup.find("div", class_="main-horoscope").p.text
if __name__ == "__main__":
print("Daily Horoscope. \n")
print(
"enter your Zodiac sign number:\n",
"1. Aries\n",
"2. Taurus\n",
"3. Gemini\n",
"4. Cancer\n",
"5. Leo\n",
"6. Virgo\n",
"7. Libra\n",
"8. Scorpio\n",
"9. Sagittarius\n",
"10. Capricorn\n",
"11. Aquarius\n",
"12. Pisces\n",
)
zodiac_sign = int(input("number> ").strip())
print("choose some day:\n", "yesterday\n", "today\n", "tomorrow\n")
day = input("enter the day> ")
horoscope_text = horoscope(zodiac_sign, day)
print(horoscope_text)
| -1 |
TheAlgorithms/Python | 5,744 | Improve Project Euler problem 014 solution 2 | ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-01T19:21:41Z" | "2021-11-04T16:01:22Z" | 7a605766fe7fe79a00ba1f30447877be4b77a6f2 | 729aaf64275c61b8bc864ef9138eed078dea9cb2 | Improve Project Euler problem 014 solution 2. ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| IIII
IV
IIIIIIIIII
X
VIIIII
| IIII
IV
IIIIIIIIII
X
VIIIII
| -1 |
TheAlgorithms/Python | 5,744 | Improve Project Euler problem 014 solution 2 | ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-01T19:21:41Z" | "2021-11-04T16:01:22Z" | 7a605766fe7fe79a00ba1f30447877be4b77a6f2 | 729aaf64275c61b8bc864ef9138eed078dea9cb2 | Improve Project Euler problem 014 solution 2. ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| """
This is a pure Python implementation of the merge sort algorithm
For doctests run following command:
python -m doctest -v merge_sort.py
or
python3 -m doctest -v merge_sort.py
For manual testing run:
python merge_sort.py
"""
def merge_sort(collection: list) -> list:
"""Pure implementation of the merge sort algorithm in Python
:param collection: some mutable ordered collection with heterogeneous
comparable items inside
:return: the same collection ordered by ascending
Examples:
>>> merge_sort([0, 5, 3, 2, 2])
[0, 2, 2, 3, 5]
>>> merge_sort([])
[]
>>> merge_sort([-2, -5, -45])
[-45, -5, -2]
"""
def merge(left: list, right: list) -> list:
"""merge left and right
:param left: left collection
:param right: right collection
:return: merge result
"""
def _merge():
while left and right:
yield (left if left[0] <= right[0] else right).pop(0)
yield from left
yield from right
return list(_merge())
if len(collection) <= 1:
return collection
mid = len(collection) // 2
return merge(merge_sort(collection[:mid]), merge_sort(collection[mid:]))
if __name__ == "__main__":
import doctest
doctest.testmod()
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
print(*merge_sort(unsorted), sep=",")
| """
This is a pure Python implementation of the merge sort algorithm
For doctests run following command:
python -m doctest -v merge_sort.py
or
python3 -m doctest -v merge_sort.py
For manual testing run:
python merge_sort.py
"""
def merge_sort(collection: list) -> list:
"""Pure implementation of the merge sort algorithm in Python
:param collection: some mutable ordered collection with heterogeneous
comparable items inside
:return: the same collection ordered by ascending
Examples:
>>> merge_sort([0, 5, 3, 2, 2])
[0, 2, 2, 3, 5]
>>> merge_sort([])
[]
>>> merge_sort([-2, -5, -45])
[-45, -5, -2]
"""
def merge(left: list, right: list) -> list:
"""merge left and right
:param left: left collection
:param right: right collection
:return: merge result
"""
def _merge():
while left and right:
yield (left if left[0] <= right[0] else right).pop(0)
yield from left
yield from right
return list(_merge())
if len(collection) <= 1:
return collection
mid = len(collection) // 2
return merge(merge_sort(collection[:mid]), merge_sort(collection[mid:]))
if __name__ == "__main__":
import doctest
doctest.testmod()
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
print(*merge_sort(unsorted), sep=",")
| -1 |
TheAlgorithms/Python | 5,744 | Improve Project Euler problem 014 solution 2 | ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-01T19:21:41Z" | "2021-11-04T16:01:22Z" | 7a605766fe7fe79a00ba1f30447877be4b77a6f2 | 729aaf64275c61b8bc864ef9138eed078dea9cb2 | Improve Project Euler problem 014 solution 2. ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Functions for downloading and reading MNIST data (deprecated).
This module and all its submodules are deprecated.
"""
import collections
import gzip
import os
import numpy
from six.moves import urllib
from six.moves import xrange # pylint: disable=redefined-builtin
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import random_seed
from tensorflow.python.platform import gfile
from tensorflow.python.util.deprecation import deprecated
_Datasets = collections.namedtuple("_Datasets", ["train", "validation", "test"])
# CVDF mirror of http://yann.lecun.com/exdb/mnist/
DEFAULT_SOURCE_URL = "https://storage.googleapis.com/cvdf-datasets/mnist/"
def _read32(bytestream):
dt = numpy.dtype(numpy.uint32).newbyteorder(">")
return numpy.frombuffer(bytestream.read(4), dtype=dt)[0]
@deprecated(None, "Please use tf.data to implement this functionality.")
def _extract_images(f):
"""Extract the images into a 4D uint8 numpy array [index, y, x, depth].
Args:
f: A file object that can be passed into a gzip reader.
Returns:
data: A 4D uint8 numpy array [index, y, x, depth].
Raises:
ValueError: If the bytestream does not start with 2051.
"""
print("Extracting", f.name)
with gzip.GzipFile(fileobj=f) as bytestream:
magic = _read32(bytestream)
if magic != 2051:
raise ValueError(
"Invalid magic number %d in MNIST image file: %s" % (magic, f.name)
)
num_images = _read32(bytestream)
rows = _read32(bytestream)
cols = _read32(bytestream)
buf = bytestream.read(rows * cols * num_images)
data = numpy.frombuffer(buf, dtype=numpy.uint8)
data = data.reshape(num_images, rows, cols, 1)
return data
@deprecated(None, "Please use tf.one_hot on tensors.")
def _dense_to_one_hot(labels_dense, num_classes):
"""Convert class labels from scalars to one-hot vectors."""
num_labels = labels_dense.shape[0]
index_offset = numpy.arange(num_labels) * num_classes
labels_one_hot = numpy.zeros((num_labels, num_classes))
labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
return labels_one_hot
@deprecated(None, "Please use tf.data to implement this functionality.")
def _extract_labels(f, one_hot=False, num_classes=10):
"""Extract the labels into a 1D uint8 numpy array [index].
Args:
f: A file object that can be passed into a gzip reader.
one_hot: Does one hot encoding for the result.
num_classes: Number of classes for the one hot encoding.
Returns:
labels: a 1D uint8 numpy array.
Raises:
ValueError: If the bystream doesn't start with 2049.
"""
print("Extracting", f.name)
with gzip.GzipFile(fileobj=f) as bytestream:
magic = _read32(bytestream)
if magic != 2049:
raise ValueError(
"Invalid magic number %d in MNIST label file: %s" % (magic, f.name)
)
num_items = _read32(bytestream)
buf = bytestream.read(num_items)
labels = numpy.frombuffer(buf, dtype=numpy.uint8)
if one_hot:
return _dense_to_one_hot(labels, num_classes)
return labels
class _DataSet:
"""Container class for a _DataSet (deprecated).
THIS CLASS IS DEPRECATED.
"""
@deprecated(
None,
"Please use alternatives such as official/mnist/_DataSet.py"
" from tensorflow/models.",
)
def __init__(
self,
images,
labels,
fake_data=False,
one_hot=False,
dtype=dtypes.float32,
reshape=True,
seed=None,
):
"""Construct a _DataSet.
one_hot arg is used only if fake_data is true. `dtype` can be either
`uint8` to leave the input as `[0, 255]`, or `float32` to rescale into
`[0, 1]`. Seed arg provides for convenient deterministic testing.
Args:
images: The images
labels: The labels
fake_data: Ignore inages and labels, use fake data.
one_hot: Bool, return the labels as one hot vectors (if True) or ints (if
False).
dtype: Output image dtype. One of [uint8, float32]. `uint8` output has
range [0,255]. float32 output has range [0,1].
reshape: Bool. If True returned images are returned flattened to vectors.
seed: The random seed to use.
"""
seed1, seed2 = random_seed.get_seed(seed)
# If op level seed is not set, use whatever graph level seed is returned
numpy.random.seed(seed1 if seed is None else seed2)
dtype = dtypes.as_dtype(dtype).base_dtype
if dtype not in (dtypes.uint8, dtypes.float32):
raise TypeError("Invalid image dtype %r, expected uint8 or float32" % dtype)
if fake_data:
self._num_examples = 10000
self.one_hot = one_hot
else:
assert (
images.shape[0] == labels.shape[0]
), f"images.shape: {images.shape} labels.shape: {labels.shape}"
self._num_examples = images.shape[0]
# Convert shape from [num examples, rows, columns, depth]
# to [num examples, rows*columns] (assuming depth == 1)
if reshape:
assert images.shape[3] == 1
images = images.reshape(
images.shape[0], images.shape[1] * images.shape[2]
)
if dtype == dtypes.float32:
# Convert from [0, 255] -> [0.0, 1.0].
images = images.astype(numpy.float32)
images = numpy.multiply(images, 1.0 / 255.0)
self._images = images
self._labels = labels
self._epochs_completed = 0
self._index_in_epoch = 0
@property
def images(self):
return self._images
@property
def labels(self):
return self._labels
@property
def num_examples(self):
return self._num_examples
@property
def epochs_completed(self):
return self._epochs_completed
def next_batch(self, batch_size, fake_data=False, shuffle=True):
"""Return the next `batch_size` examples from this data set."""
if fake_data:
fake_image = [1] * 784
if self.one_hot:
fake_label = [1] + [0] * 9
else:
fake_label = 0
return (
[fake_image for _ in xrange(batch_size)],
[fake_label for _ in xrange(batch_size)],
)
start = self._index_in_epoch
# Shuffle for the first epoch
if self._epochs_completed == 0 and start == 0 and shuffle:
perm0 = numpy.arange(self._num_examples)
numpy.random.shuffle(perm0)
self._images = self.images[perm0]
self._labels = self.labels[perm0]
# Go to the next epoch
if start + batch_size > self._num_examples:
# Finished epoch
self._epochs_completed += 1
# Get the rest examples in this epoch
rest_num_examples = self._num_examples - start
images_rest_part = self._images[start : self._num_examples]
labels_rest_part = self._labels[start : self._num_examples]
# Shuffle the data
if shuffle:
perm = numpy.arange(self._num_examples)
numpy.random.shuffle(perm)
self._images = self.images[perm]
self._labels = self.labels[perm]
# Start next epoch
start = 0
self._index_in_epoch = batch_size - rest_num_examples
end = self._index_in_epoch
images_new_part = self._images[start:end]
labels_new_part = self._labels[start:end]
return (
numpy.concatenate((images_rest_part, images_new_part), axis=0),
numpy.concatenate((labels_rest_part, labels_new_part), axis=0),
)
else:
self._index_in_epoch += batch_size
end = self._index_in_epoch
return self._images[start:end], self._labels[start:end]
@deprecated(None, "Please write your own downloading logic.")
def _maybe_download(filename, work_directory, source_url):
"""Download the data from source url, unless it's already here.
Args:
filename: string, name of the file in the directory.
work_directory: string, path to working directory.
source_url: url to download from if file doesn't exist.
Returns:
Path to resulting file.
"""
if not gfile.Exists(work_directory):
gfile.MakeDirs(work_directory)
filepath = os.path.join(work_directory, filename)
if not gfile.Exists(filepath):
urllib.request.urlretrieve(source_url, filepath)
with gfile.GFile(filepath) as f:
size = f.size()
print("Successfully downloaded", filename, size, "bytes.")
return filepath
@deprecated(
None, "Please use alternatives such as:" " tensorflow_datasets.load('mnist')"
)
def read_data_sets(
train_dir,
fake_data=False,
one_hot=False,
dtype=dtypes.float32,
reshape=True,
validation_size=5000,
seed=None,
source_url=DEFAULT_SOURCE_URL,
):
if fake_data:
def fake():
return _DataSet(
[], [], fake_data=True, one_hot=one_hot, dtype=dtype, seed=seed
)
train = fake()
validation = fake()
test = fake()
return _Datasets(train=train, validation=validation, test=test)
if not source_url: # empty string check
source_url = DEFAULT_SOURCE_URL
train_images_file = "train-images-idx3-ubyte.gz"
train_labels_file = "train-labels-idx1-ubyte.gz"
test_images_file = "t10k-images-idx3-ubyte.gz"
test_labels_file = "t10k-labels-idx1-ubyte.gz"
local_file = _maybe_download(
train_images_file, train_dir, source_url + train_images_file
)
with gfile.Open(local_file, "rb") as f:
train_images = _extract_images(f)
local_file = _maybe_download(
train_labels_file, train_dir, source_url + train_labels_file
)
with gfile.Open(local_file, "rb") as f:
train_labels = _extract_labels(f, one_hot=one_hot)
local_file = _maybe_download(
test_images_file, train_dir, source_url + test_images_file
)
with gfile.Open(local_file, "rb") as f:
test_images = _extract_images(f)
local_file = _maybe_download(
test_labels_file, train_dir, source_url + test_labels_file
)
with gfile.Open(local_file, "rb") as f:
test_labels = _extract_labels(f, one_hot=one_hot)
if not 0 <= validation_size <= len(train_images):
raise ValueError(
f"Validation size should be between 0 and {len(train_images)}. Received: {validation_size}."
)
validation_images = train_images[:validation_size]
validation_labels = train_labels[:validation_size]
train_images = train_images[validation_size:]
train_labels = train_labels[validation_size:]
options = dict(dtype=dtype, reshape=reshape, seed=seed)
train = _DataSet(train_images, train_labels, **options)
validation = _DataSet(validation_images, validation_labels, **options)
test = _DataSet(test_images, test_labels, **options)
return _Datasets(train=train, validation=validation, test=test)
| # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Functions for downloading and reading MNIST data (deprecated).
This module and all its submodules are deprecated.
"""
import collections
import gzip
import os
import numpy
from six.moves import urllib
from six.moves import xrange # pylint: disable=redefined-builtin
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import random_seed
from tensorflow.python.platform import gfile
from tensorflow.python.util.deprecation import deprecated
_Datasets = collections.namedtuple("_Datasets", ["train", "validation", "test"])
# CVDF mirror of http://yann.lecun.com/exdb/mnist/
DEFAULT_SOURCE_URL = "https://storage.googleapis.com/cvdf-datasets/mnist/"
def _read32(bytestream):
dt = numpy.dtype(numpy.uint32).newbyteorder(">")
return numpy.frombuffer(bytestream.read(4), dtype=dt)[0]
@deprecated(None, "Please use tf.data to implement this functionality.")
def _extract_images(f):
"""Extract the images into a 4D uint8 numpy array [index, y, x, depth].
Args:
f: A file object that can be passed into a gzip reader.
Returns:
data: A 4D uint8 numpy array [index, y, x, depth].
Raises:
ValueError: If the bytestream does not start with 2051.
"""
print("Extracting", f.name)
with gzip.GzipFile(fileobj=f) as bytestream:
magic = _read32(bytestream)
if magic != 2051:
raise ValueError(
"Invalid magic number %d in MNIST image file: %s" % (magic, f.name)
)
num_images = _read32(bytestream)
rows = _read32(bytestream)
cols = _read32(bytestream)
buf = bytestream.read(rows * cols * num_images)
data = numpy.frombuffer(buf, dtype=numpy.uint8)
data = data.reshape(num_images, rows, cols, 1)
return data
@deprecated(None, "Please use tf.one_hot on tensors.")
def _dense_to_one_hot(labels_dense, num_classes):
"""Convert class labels from scalars to one-hot vectors."""
num_labels = labels_dense.shape[0]
index_offset = numpy.arange(num_labels) * num_classes
labels_one_hot = numpy.zeros((num_labels, num_classes))
labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
return labels_one_hot
@deprecated(None, "Please use tf.data to implement this functionality.")
def _extract_labels(f, one_hot=False, num_classes=10):
"""Extract the labels into a 1D uint8 numpy array [index].
Args:
f: A file object that can be passed into a gzip reader.
one_hot: Does one hot encoding for the result.
num_classes: Number of classes for the one hot encoding.
Returns:
labels: a 1D uint8 numpy array.
Raises:
ValueError: If the bystream doesn't start with 2049.
"""
print("Extracting", f.name)
with gzip.GzipFile(fileobj=f) as bytestream:
magic = _read32(bytestream)
if magic != 2049:
raise ValueError(
"Invalid magic number %d in MNIST label file: %s" % (magic, f.name)
)
num_items = _read32(bytestream)
buf = bytestream.read(num_items)
labels = numpy.frombuffer(buf, dtype=numpy.uint8)
if one_hot:
return _dense_to_one_hot(labels, num_classes)
return labels
class _DataSet:
"""Container class for a _DataSet (deprecated).
THIS CLASS IS DEPRECATED.
"""
@deprecated(
None,
"Please use alternatives such as official/mnist/_DataSet.py"
" from tensorflow/models.",
)
def __init__(
self,
images,
labels,
fake_data=False,
one_hot=False,
dtype=dtypes.float32,
reshape=True,
seed=None,
):
"""Construct a _DataSet.
one_hot arg is used only if fake_data is true. `dtype` can be either
`uint8` to leave the input as `[0, 255]`, or `float32` to rescale into
`[0, 1]`. Seed arg provides for convenient deterministic testing.
Args:
images: The images
labels: The labels
fake_data: Ignore inages and labels, use fake data.
one_hot: Bool, return the labels as one hot vectors (if True) or ints (if
False).
dtype: Output image dtype. One of [uint8, float32]. `uint8` output has
range [0,255]. float32 output has range [0,1].
reshape: Bool. If True returned images are returned flattened to vectors.
seed: The random seed to use.
"""
seed1, seed2 = random_seed.get_seed(seed)
# If op level seed is not set, use whatever graph level seed is returned
numpy.random.seed(seed1 if seed is None else seed2)
dtype = dtypes.as_dtype(dtype).base_dtype
if dtype not in (dtypes.uint8, dtypes.float32):
raise TypeError("Invalid image dtype %r, expected uint8 or float32" % dtype)
if fake_data:
self._num_examples = 10000
self.one_hot = one_hot
else:
assert (
images.shape[0] == labels.shape[0]
), f"images.shape: {images.shape} labels.shape: {labels.shape}"
self._num_examples = images.shape[0]
# Convert shape from [num examples, rows, columns, depth]
# to [num examples, rows*columns] (assuming depth == 1)
if reshape:
assert images.shape[3] == 1
images = images.reshape(
images.shape[0], images.shape[1] * images.shape[2]
)
if dtype == dtypes.float32:
# Convert from [0, 255] -> [0.0, 1.0].
images = images.astype(numpy.float32)
images = numpy.multiply(images, 1.0 / 255.0)
self._images = images
self._labels = labels
self._epochs_completed = 0
self._index_in_epoch = 0
@property
def images(self):
return self._images
@property
def labels(self):
return self._labels
@property
def num_examples(self):
return self._num_examples
@property
def epochs_completed(self):
return self._epochs_completed
def next_batch(self, batch_size, fake_data=False, shuffle=True):
"""Return the next `batch_size` examples from this data set."""
if fake_data:
fake_image = [1] * 784
if self.one_hot:
fake_label = [1] + [0] * 9
else:
fake_label = 0
return (
[fake_image for _ in xrange(batch_size)],
[fake_label for _ in xrange(batch_size)],
)
start = self._index_in_epoch
# Shuffle for the first epoch
if self._epochs_completed == 0 and start == 0 and shuffle:
perm0 = numpy.arange(self._num_examples)
numpy.random.shuffle(perm0)
self._images = self.images[perm0]
self._labels = self.labels[perm0]
# Go to the next epoch
if start + batch_size > self._num_examples:
# Finished epoch
self._epochs_completed += 1
# Get the rest examples in this epoch
rest_num_examples = self._num_examples - start
images_rest_part = self._images[start : self._num_examples]
labels_rest_part = self._labels[start : self._num_examples]
# Shuffle the data
if shuffle:
perm = numpy.arange(self._num_examples)
numpy.random.shuffle(perm)
self._images = self.images[perm]
self._labels = self.labels[perm]
# Start next epoch
start = 0
self._index_in_epoch = batch_size - rest_num_examples
end = self._index_in_epoch
images_new_part = self._images[start:end]
labels_new_part = self._labels[start:end]
return (
numpy.concatenate((images_rest_part, images_new_part), axis=0),
numpy.concatenate((labels_rest_part, labels_new_part), axis=0),
)
else:
self._index_in_epoch += batch_size
end = self._index_in_epoch
return self._images[start:end], self._labels[start:end]
@deprecated(None, "Please write your own downloading logic.")
def _maybe_download(filename, work_directory, source_url):
"""Download the data from source url, unless it's already here.
Args:
filename: string, name of the file in the directory.
work_directory: string, path to working directory.
source_url: url to download from if file doesn't exist.
Returns:
Path to resulting file.
"""
if not gfile.Exists(work_directory):
gfile.MakeDirs(work_directory)
filepath = os.path.join(work_directory, filename)
if not gfile.Exists(filepath):
urllib.request.urlretrieve(source_url, filepath)
with gfile.GFile(filepath) as f:
size = f.size()
print("Successfully downloaded", filename, size, "bytes.")
return filepath
@deprecated(
None, "Please use alternatives such as:" " tensorflow_datasets.load('mnist')"
)
def read_data_sets(
train_dir,
fake_data=False,
one_hot=False,
dtype=dtypes.float32,
reshape=True,
validation_size=5000,
seed=None,
source_url=DEFAULT_SOURCE_URL,
):
if fake_data:
def fake():
return _DataSet(
[], [], fake_data=True, one_hot=one_hot, dtype=dtype, seed=seed
)
train = fake()
validation = fake()
test = fake()
return _Datasets(train=train, validation=validation, test=test)
if not source_url: # empty string check
source_url = DEFAULT_SOURCE_URL
train_images_file = "train-images-idx3-ubyte.gz"
train_labels_file = "train-labels-idx1-ubyte.gz"
test_images_file = "t10k-images-idx3-ubyte.gz"
test_labels_file = "t10k-labels-idx1-ubyte.gz"
local_file = _maybe_download(
train_images_file, train_dir, source_url + train_images_file
)
with gfile.Open(local_file, "rb") as f:
train_images = _extract_images(f)
local_file = _maybe_download(
train_labels_file, train_dir, source_url + train_labels_file
)
with gfile.Open(local_file, "rb") as f:
train_labels = _extract_labels(f, one_hot=one_hot)
local_file = _maybe_download(
test_images_file, train_dir, source_url + test_images_file
)
with gfile.Open(local_file, "rb") as f:
test_images = _extract_images(f)
local_file = _maybe_download(
test_labels_file, train_dir, source_url + test_labels_file
)
with gfile.Open(local_file, "rb") as f:
test_labels = _extract_labels(f, one_hot=one_hot)
if not 0 <= validation_size <= len(train_images):
raise ValueError(
f"Validation size should be between 0 and {len(train_images)}. Received: {validation_size}."
)
validation_images = train_images[:validation_size]
validation_labels = train_labels[:validation_size]
train_images = train_images[validation_size:]
train_labels = train_labels[validation_size:]
options = dict(dtype=dtype, reshape=reshape, seed=seed)
train = _DataSet(train_images, train_labels, **options)
validation = _DataSet(validation_images, validation_labels, **options)
test = _DataSet(test_images, test_labels, **options)
return _Datasets(train=train, validation=validation, test=test)
| -1 |
TheAlgorithms/Python | 5,744 | Improve Project Euler problem 014 solution 2 | ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-01T19:21:41Z" | "2021-11-04T16:01:22Z" | 7a605766fe7fe79a00ba1f30447877be4b77a6f2 | 729aaf64275c61b8bc864ef9138eed078dea9cb2 | Improve Project Euler problem 014 solution 2. ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| -1 |
||
TheAlgorithms/Python | 5,744 | Improve Project Euler problem 014 solution 2 | ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| MaximSmolskiy | "2021-11-01T19:21:41Z" | "2021-11-04T16:01:22Z" | 7a605766fe7fe79a00ba1f30447877be4b77a6f2 | 729aaf64275c61b8bc864ef9138eed078dea9cb2 | Improve Project Euler problem 014 solution 2. ### **Describe your change:**
Improve Project Euler problem 014 solution 2 - the top 1 slowest solution on Travis CI logs (under `slowest 10 durations`: `14.20s call scripts/validate_solutions.py::test_project_euler[problem_014/sol2.py]`):
* Improve solution (locally 10+ times - from 15+ seconds to ~1.5 seconds)
* Uncomment code that has been commented due to slow execution affecting Travis (now it should be quite fast execution and not affect Travis)
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### **Checklist:**
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
| def palindromic_string(input_string: str) -> str:
"""
>>> palindromic_string('abbbaba')
'abbba'
>>> palindromic_string('ababa')
'ababa'
Manacher’s algorithm which finds Longest palindromic Substring in linear time.
1. first this convert input_string("xyx") into new_string("x|y|x") where odd
positions are actual input characters.
2. for each character in new_string it find corresponding length and store the
length and l,r to store previously calculated info.(please look the explanation
for details)
3. return corresponding output_string by removing all "|"
"""
max_length = 0
# if input_string is "aba" than new_input_string become "a|b|a"
new_input_string = ""
output_string = ""
# append each character + "|" in new_string for range(0, length-1)
for i in input_string[: len(input_string) - 1]:
new_input_string += i + "|"
# append last character
new_input_string += input_string[-1]
# we will store the starting and ending of previous furthest ending palindromic
# substring
l, r = 0, 0
# length[i] shows the length of palindromic substring with center i
length = [1 for i in range(len(new_input_string))]
# for each character in new_string find corresponding palindromic string
start = 0
for j in range(len(new_input_string)):
k = 1 if j > r else min(length[l + r - j] // 2, r - j + 1)
while (
j - k >= 0
and j + k < len(new_input_string)
and new_input_string[k + j] == new_input_string[j - k]
):
k += 1
length[j] = 2 * k - 1
# does this string is ending after the previously explored end (that is r) ?
# if yes the update the new r to the last index of this
if j + k - 1 > r:
l = j - k + 1 # noqa: E741
r = j + k - 1
# update max_length and start position
if max_length < length[j]:
max_length = length[j]
start = j
# create that string
s = new_input_string[start - max_length // 2 : start + max_length // 2 + 1]
for i in s:
if i != "|":
output_string += i
return output_string
if __name__ == "__main__":
import doctest
doctest.testmod()
"""
...a0...a1...a2.....a3......a4...a5...a6....
consider the string for which we are calculating the longest palindromic substring is
shown above where ... are some characters in between and right now we are calculating
the length of palindromic substring with center at a5 with following conditions :
i) we have stored the length of palindromic substring which has center at a3 (starts at
l ends at r) and it is the furthest ending till now, and it has ending after a6
ii) a2 and a4 are equally distant from a3 so char(a2) == char(a4)
iii) a0 and a6 are equally distant from a3 so char(a0) == char(a6)
iv) a1 is corresponding equal character of a5 in palindrome with center a3 (remember
that in below derivation of a4==a6)
now for a5 we will calculate the length of palindromic substring with center as a5 but
can we use previously calculated information in some way?
Yes, look the above string we know that a5 is inside the palindrome with center a3 and
previously we have have calculated that
a0==a2 (palindrome of center a1)
a2==a4 (palindrome of center a3)
a0==a6 (palindrome of center a3)
so a4==a6
so we can say that palindrome at center a5 is at least as long as palindrome at center
a1 but this only holds if a0 and a6 are inside the limits of palindrome centered at a3
so finally ..
len_of_palindrome__at(a5) = min(len_of_palindrome_at(a1), r-a5)
where a3 lies from l to r and we have to keep updating that
and if the a5 lies outside of l,r boundary we calculate length of palindrome with
bruteforce and update l,r.
it gives the linear time complexity just like z-function
"""
| def palindromic_string(input_string: str) -> str:
"""
>>> palindromic_string('abbbaba')
'abbba'
>>> palindromic_string('ababa')
'ababa'
Manacher’s algorithm which finds Longest palindromic Substring in linear time.
1. first this convert input_string("xyx") into new_string("x|y|x") where odd
positions are actual input characters.
2. for each character in new_string it find corresponding length and store the
length and l,r to store previously calculated info.(please look the explanation
for details)
3. return corresponding output_string by removing all "|"
"""
max_length = 0
# if input_string is "aba" than new_input_string become "a|b|a"
new_input_string = ""
output_string = ""
# append each character + "|" in new_string for range(0, length-1)
for i in input_string[: len(input_string) - 1]:
new_input_string += i + "|"
# append last character
new_input_string += input_string[-1]
# we will store the starting and ending of previous furthest ending palindromic
# substring
l, r = 0, 0
# length[i] shows the length of palindromic substring with center i
length = [1 for i in range(len(new_input_string))]
# for each character in new_string find corresponding palindromic string
start = 0
for j in range(len(new_input_string)):
k = 1 if j > r else min(length[l + r - j] // 2, r - j + 1)
while (
j - k >= 0
and j + k < len(new_input_string)
and new_input_string[k + j] == new_input_string[j - k]
):
k += 1
length[j] = 2 * k - 1
# does this string is ending after the previously explored end (that is r) ?
# if yes the update the new r to the last index of this
if j + k - 1 > r:
l = j - k + 1 # noqa: E741
r = j + k - 1
# update max_length and start position
if max_length < length[j]:
max_length = length[j]
start = j
# create that string
s = new_input_string[start - max_length // 2 : start + max_length // 2 + 1]
for i in s:
if i != "|":
output_string += i
return output_string
if __name__ == "__main__":
import doctest
doctest.testmod()
"""
...a0...a1...a2.....a3......a4...a5...a6....
consider the string for which we are calculating the longest palindromic substring is
shown above where ... are some characters in between and right now we are calculating
the length of palindromic substring with center at a5 with following conditions :
i) we have stored the length of palindromic substring which has center at a3 (starts at
l ends at r) and it is the furthest ending till now, and it has ending after a6
ii) a2 and a4 are equally distant from a3 so char(a2) == char(a4)
iii) a0 and a6 are equally distant from a3 so char(a0) == char(a6)
iv) a1 is corresponding equal character of a5 in palindrome with center a3 (remember
that in below derivation of a4==a6)
now for a5 we will calculate the length of palindromic substring with center as a5 but
can we use previously calculated information in some way?
Yes, look the above string we know that a5 is inside the palindrome with center a3 and
previously we have have calculated that
a0==a2 (palindrome of center a1)
a2==a4 (palindrome of center a3)
a0==a6 (palindrome of center a3)
so a4==a6
so we can say that palindrome at center a5 is at least as long as palindrome at center
a1 but this only holds if a0 and a6 are inside the limits of palindrome centered at a3
so finally ..
len_of_palindrome__at(a5) = min(len_of_palindrome_at(a1), r-a5)
where a3 lies from l to r and we have to keep updating that
and if the a5 lies outside of l,r boundary we calculate length of palindrome with
bruteforce and update l,r.
it gives the linear time complexity just like z-function
"""
| -1 |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.